Quantum Surrogate Modeling for Chemical and Pharmaceutical Development
8 Jun 2023
Jonas Stein jonas.stein@ifi.lmu.de
Michael Poppel michael.poppel@campus.lmu.de
Philip Adamczyk ph.adamczyk@campus.lmu.de
Ramona Fabry ra.fabry@campus.lmu.de
Zixin Wu zixin.wu@campus.lmu.de
Michael Kölle michael.koelle@ifi.lmu.de
Jonas Nüßlein jonas.nuesslein@ifi.lmu.de
Daniëlle Schuman danielle.schuman@ifi.lmu.de
Philipp Altmann philipp.altmann@ifi.lmu.de
Thomas Ehmer thomas.ehmer@merckgroup.com
Vijay Narasimhan vijay.narasimhan@emdgroup.com
Claudia Linnhoff-Popien
LMU Munich, Munich, Germany
Merck KGaA, Darmstadt, Germany
EMD Electronics, San Jose, California
Abstract-A central problem of development in the chemical and pharmaceutical industries is modelling a cheap-to-evaluate surrogate function that approximates a given black box function sufficiently well. As state-of-the-art methods from classical machine learning struggle to solve this problem accurately for the typically scarce and noisy datasets in practical applications, investigating novel approaches is of great interest to chemical companies worldwide. We demonstrate that quantum neural networks (QNNs) offer a particularly promising approach to this issue and experimentally support recent theoretical findings indicating their potential to outperform classical equivalents in training on small datasets and noisy data. Our contribution displays the first application-centered exploration of using QNNs as surrogate models on higher dimensional, realistic data. In extensive experiments, our QNN significantly outperforms a minimalist classical artificial neural network on noisy and scarce data, displaying a possible advantage of quantum surrogate models empirically. Finally, we demonstrate the performance of current NISQ hardware experimentally and estimate the gate fidelities necessary to replicate our simulation results.
Index Terms-Quantum Computing, Surrogate Models, NISQ, QNN
I. INTRODUCTION
The development of new products in the chemical and pharmaceutical industries can take decades, with R&D costs of up to several billion dollars. At the center of such processes lies the problem of simulating the output of chemical experiments given a set of inputs, which allows searching for the compound that fits the requirements optimally. Having to account for highly complex interactions in the examined chemicals typically demands tedious laboratory experiments, as computational simulations either have very long runtimes or only approximate the actual outcomes crudely. As a result, only a small number of configurations can be tested in each product development iteration, often leading to suboptimal and misguided steps. Additionally, experiments suffer from aleatoric uncertainty, i.e., imprecisions in the experiment setup, read-out errors, or other types of noise.
A popular in silico approach for accelerating these simulations is the use of so-called surrogate models, which aim to closely approximate the simulation model while being much cheaper to evaluate. More recently, the usage of highly parameterized models like classical Artificial Neural Networks (ANNs) as surrogate models has gained increasing research interest, as they have shown promising performance in coping with the typically high dimensional solution space [1]-[3]. However, a substantial issue with using ANNs as surrogate models is overfitting, which is mainly caused by a combination of noisiness and small sample sizes of the available data points in practice [4]-[7]. In contrast to classical ANNs, Quantum Neural Networks (QNNs) have been shown to be quite robust with respect to noise and data scarcity [8], [9]. Furthermore, QNNs also natively allow constructing quantum surrogate models in higher dimensions than classically possible, displaying another possible quantum advantage.
To investigate the usage of QNNs as surrogate models in a realistic setting, we make the following core contributions:
• An exploration of the practical application of QNNs as surrogate functions beyond existing proofs of concept.
• An extensive evaluation of the designed QNNs against a minimalist classical ANN constructed to solve the given tasks accurately when given many noise-free data samples.
• An empirical analysis of NISQ hardware performance and an estimation of the needed hardware error improvements to replicate the simulator results.
Section II of this paper lays out the techniques described in the literature. Section III introduces which of these techniques we chose for our approach. Section IV presents an evaluation on two-dimensional synthetic benchmark functions and a real-world dataset. Section V ultimately concludes our findings.
II. BACKGROUND
A surrogate function g is typically used to approximate a given black box function f for which some data points $((x_1, f(x_1)), \ldots, (x_n, f(x_n))) \in (\mathbb{R} \times \mathbb{R})^n$ are given or can be obtained from a costly evaluation of f. The goal in surrogate modelling is finding a suitable white-box function g such that

$d(f(x_i), g(x_i)) \le \epsilon \quad (1)$
where d denotes a suitable distance metric and $\epsilon > 0$ is sufficiently small for all $x_i$. Commonly employed functions and techniques for representing g include polynomials, Gaussian processes, radial basis functions, and classical ANNs [10]-[12].
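To make this setting concrete, the following minimal Python sketch fits a Gaussian process surrogate to a handful of samples of a toy black-box function; the function, kernel, and hyperparameters here are illustrative and not taken from the paper.

```python
# Minimal sketch of classical surrogate modelling: fit a Gaussian process
# g to a few costly evaluations of a black-box function f.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):  # stand-in for the costly black-box function
    return np.sin(3 * x) + 0.5 * x

x_train = np.linspace(0, 2, 10).reshape(-1, 1)   # few, expensive samples
y_train = f(x_train).ravel()

g = GaussianProcessRegressor(kernel=RBF(length_scale=0.5))
g.fit(x_train, y_train)

print(g.predict(np.array([[0.7]])))  # cheap surrogate evaluation g(x) ~ f(x)
```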
In our approach, we use a QNN, which modifies the parameters θ of a Parameterized Quantum Circuit (PQC) to approximate the function f as described in [9], [13]. Each input data sample $x_i$ is initially encoded into a quantum state $|\psi_{\text{in}}(x_i)\rangle$ and then manipulated by a series of unitary quantum gates of a PQC $U(\theta)$:
$|\psi_{\text{out}}(x_i, \theta)\rangle = U(\theta)\,|\psi_{\text{in}}(x_i)\rangle \quad (2)$
Choosing a suitable measurement operator M (e.g., tensor products of Pauli $\sigma_z$ matrices), the expectation value is measured to obtain the predicted output data:
$g_{\text{QNN}}(x_i) = \langle\psi_{\text{out}}(x_i, \theta)|\,M\,|\psi_{\text{out}}(x_i, \theta)\rangle \quad (3)$
Aggregating the deviation of the generated output from the original output data then quantifies the quality of the prediction, which is finally used to update the parameters of the gates within the PQC in the next iteration using, e.g., the parameter shift rule [9]. This combination of quantum and classical computation already delivers promising results in the current NISQ era [14].
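The following minimal Qiskit sketch illustrates the pipeline of Eqs. (2) and (3) for a two-qubit toy circuit: angle-encode the input, apply a small parameterized ansatz, and read out the expectation value of a Pauli-Z observable. The circuit is a simplified stand-in, not the architecture from figure 1.

```python
# Minimal sketch of the QNN forward pass of Eqs. (2) and (3).
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, SparsePauliOp

def qnn_predict(x, theta):
    qc = QuantumCircuit(2)
    qc.rx(x[0], 0)      # feature map: encode x into |psi_in(x)>
    qc.rx(x[1], 1)
    qc.ry(theta[0], 0)  # parameterized ansatz U(theta)
    qc.ry(theta[1], 1)
    qc.cz(0, 1)         # entanglement
    state = Statevector.from_instruction(qc)  # |psi_out(x, theta)>
    # expectation value <psi_out| Z (x) Z |psi_out>, Eq. (3)
    return np.real(state.expectation_value(SparsePauliOp("ZZ")))

print(qnn_predict(x=[0.3, 0.8], theta=[0.1, -0.4]))
```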
To encode the provided input data samples, many possibilities such as basis encoding, angle encoding, QUAM encoding, QRAM encoding, and amplitude encoding have been explored in the literature [15]. While some of these encodings, typically called feature maps, exploit quantum benefits to very efficiently upload more than one classical data point into one qubit (e.g., amplitude encoding), others are less costly with respect to the number of state preparation gates required (e.g., angle encoding).
For the PQC, typically called ansatz, many architectures have been proposed; these generally consist of parameterized single-qubit rotation layers and entanglement layers. [16] provides an overview of various circuits composed of such gates, together with the expressibility and entangling capability of the presented circuits. In this context, expressibility describes the size of the subset of states that can be obtained from a given input state by changing the parameters of the ansatz. The more states can be reached, the more universal the quantum function can be; hence, one typically strives for circuits with high expressibility.
As shown in [17], quantum models can be written as partial Fourier series in the data, which can in turn represent universal function approximators for a rich enough frequency spectrum. Following this, techniques like parallel encoding and data reuploading constitute potent tools for modelling more expressive QNNs [17], [18]. More specifically, parallel encoding describes the usage of a quantum feature map in parallel, i.e., for multiple qubit registers at the same time, while data reuploading is nothing more than applying the feature map repeatedly throughout the circuit.
The approximation quality achieved by the QNN (i.e., $\epsilon$ from Eq. (1)) can be quantified by choosing a suitable distance function, such as the mean squared error. Employing a suitable parameter optimization method such as the parameter shift rule [9] then finally allows optimizing the QNN. Notably, all known gradient-based techniques for parameter optimization (such as the parameter shift rule) demand multiple circuit executions per parameter to calculate its partial derivative, and thus scale linearly in their runtime with respect to the number of parameters [9].
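As an illustration of this linear scaling, the following sketch evaluates the parameter shift rule for rotation-generated gates, $\partial\langle M\rangle/\partial\theta_j = (\langle M\rangle(\theta_j + \pi/2) - \langle M\rangle(\theta_j - \pi/2))/2$; here, `expectation` stands for any function mapping a parameter vector to a measured expectation value, e.g., `qnn_predict` from above with a fixed input.

```python
# Sketch of the parameter shift rule: exact gradients for Pauli-rotation
# gates at the cost of one pair of circuit executions per parameter.
import numpy as np

def parameter_shift_grad(expectation, theta):
    theta = np.asarray(theta, dtype=float)
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[j] = np.pi / 2
        # two circuit evaluations per parameter -> linear runtime scaling
        grad[j] = 0.5 * (expectation(theta + shift) - expectation(theta - shift))
    return grad
```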
III. METHODOLOGY
To explore the practical application of QNNs as surrogate functions, we propose the following straightforward procedure:
1) Identify suitable QNN architectures.
2) Select realistic benchmark datasets (i.e., functions and samples).
3) Choose a reasonable classical ANN as baseline.
A. Identifying suitable QNN architectures
Following the description of all QNN components in section II, we now (1) identify a suitable encoding, (2) select an efficient ansatz for the parameterized circuit, (3) combine both, e.g., by layering, and finally (4) choose an appropriate decoding of the measurement results to represent the prediction result.
Being limited by the hardware capabilities of classical quantum circuit simulators, we decide to use angle encoding and hence focus on small- to medium-dimensional datasets. In particular, angle encoding has the useful property of generating the desired expressibility while keeping the required number of gates and parameters low, resulting in shallower circuits compared to more space-efficient¹ encodings such as amplitude encoding. This leads to circuits that can be evaluated faster when executing calculations analytically, while also being more robust against imperfect gate fidelity when run on NISQ hardware [19], [20]. A possible implementation can be seen in the first two wires of figure 1, where two-dimensional input data $x = (x_0, x_1)$ is angle encoded using $R_x$ rotations.

¹The term space-efficient refers to the efficiency in the number of (qu-)bits used to encode information.

Fig. 1: The general QNN architecture used in this paper, exemplarily showing two layers, each comprised of a data encoding and a parameterized layer. It combines data reuploading by inserting a feature map ("FM") in each layer with parallel encoding by using two qubits per input data point dimension. For the parameterized part of the circuit, two different circuits from established literature have been combined: circuit 11 alternates with circuit 9 to create the required minimum number of parameters [16]. CNOT, Hadamard, and CZ gates create superposition and entanglement, while trainable parameters $\theta_i$ allow the approximation of the surrogate model. After the repeated layers, the standard measurement (denoted "MSMT") is applied to all qubits.
Notably, additional normalization of the entries $x_i$ to an interval inside $[0, 2\pi)$ has proven useful [21]. For selecting an efficient ansatz, we use established circuits proposed in the literature [16] that allow for a sufficient degree of expressibility and entanglement capacity. A natural choice is "circuit 9" as proposed in [16] (see ansatz 2 in figure 1 for its architecture), as it showed the best tradeoff between high expressibility and a low number of parameters. However, according to [17], a certain minimum number of parameters is necessary to fully exploit the spectrum of the nonlinear components that data reuploading and parallel encoding generate; in essence, at least twice the number of qubit encodings in the entire circuit are required as parameters. To fix this shortcoming of circuit 9 when applied to our data, we combine it with another expressive circuit, "circuit 11" from the same article [16]. For the full architecture, see figure 1.
To construct a coherent quantum circuit from the data encoding and ansatz components, we choose the popular approach of layering, alternating both iteratively. In addition, we employ parallel encoding and data reuploading to build a sufficiently powerful quantum function approximator. While both have a similar effect on expressibility [13], [18], [22], in preliminary experiments conducted for this paper data reuploading appeared to be more effective than parallel encoding, as it led to faster evaluations with higher R2 scores for our datasets. Additional empirical results led to the final circuit architecture displayed in figure 1, where only a small amount of parallel encoding was applied, i.e., once, while the focus is on data reuploading. In practice, empirical data shows that the number of necessary layers can vary substantially from dataset to dataset (in our cases, from 6 up to 42 layers); for details, see section IV.
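To illustrate the layering scheme, the following sketch builds a circuit that re-applies the angle-encoding feature map in every layer (data reuploading), followed by a parameterized rotation block and an entangling layer. The gate choices are simplified stand-ins for circuits 9 and 11; for parallel encoding, each input dimension would additionally be mapped to two qubits.

```python
# Sketch of layering with data reuploading: feature map + parameterized
# block + entanglement, repeated num_layers times.
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def build_layered_qnn(num_qubits, num_layers):
    theta = ParameterVector("theta", 2 * num_qubits * num_layers)
    x = ParameterVector("x", num_qubits)
    qc = QuantumCircuit(num_qubits)
    t = 0
    for _ in range(num_layers):
        for q in range(num_qubits):      # data reuploading: re-apply FM
            qc.rx(x[q], q)
        for q in range(num_qubits):      # parameterized rotations
            qc.ry(theta[t], q); t += 1
            qc.rz(theta[t], q); t += 1
        for q in range(num_qubits - 1):  # entangling layer
            qc.cz(q, q + 1)
    return qc, x, theta

qc, x, theta = build_layered_qnn(num_qubits=4, num_layers=6)
print(qc.num_parameters)  # 2 * 4 * 6 = 48 trainable parameters + inputs
```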
After the various feature map and parameterized layers in the QNN, the expectation value of the qubits must be calculated with some measurement operator in order to extract classical information from the quantum circuit. Having tried various possibilities (tensor products of $\sigma_z$ and $I$ matrices) in prestudies to this work, we selected the standard measurement operator $\sigma_z$ for all qubits involved. Note that this demands a suitable rescaling of the output values if the functions to be tested live outside the range $[-1, 1]$ (a range that can typically be estimated by domain experts in the given use case). The difference between the obtained output and the original data is calculated as the mean squared error, and the parameters θ for the next iteration are generated with the help of an optimizer. Following the recommendation of [23], COBYLA [24] was chosen for this purpose, as it minimizes the number of evaluations for noise-free objective functions and thus is particularly efficient.
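A minimal sketch of this hybrid optimization loop is given below, using SciPy's COBYLA to minimize the mean squared error of QNN predictions. It reuses the `qnn_predict` function sketched in section II together with toy training data, so all names beyond the SciPy API are assumptions.

```python
# Sketch of the classical optimization loop: gradient-free COBYLA
# minimizing the MSE between QNN predictions and training targets.
import numpy as np
from scipy.optimize import minimize

def mse_loss(theta, xs, ys, predict):
    preds = np.array([predict(x, theta) for x in xs])
    return float(np.mean((preds - ys) ** 2))

xs = np.random.uniform(0, 1, size=(20, 2))         # toy training inputs
ys = np.sin(3 * xs[:, 0]) * np.cos(3 * xs[:, 1])   # toy targets in [-1, 1]

theta0 = np.random.uniform(0, 2 * np.pi, size=2)
result = minimize(mse_loss, theta0, args=(xs, ys, qnn_predict),
                  method="COBYLA", options={"maxiter": 500})
print(result.fun)  # final MSE after optimization
```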
B. Selecting realistic benchmark datasets
In previous publications, [25] and [9] showed that QNNs can model very simple one-dimensional functions like $\sin(x)$, $x^2$, or $e^x$. We now expand the evaluation to the more complex, two-dimensional, standard benchmark functions, namely:
• Griewank [26], evaluated on the interval [−5, 5]
• Schwefel [27], evaluated on the interval [−50, 50]
• Styblinski-Tang [28], evaluated on the interval [−5, 5]

In addition, we also investigate the performance on the real-world dataset Color bob [29], which contains 241 six-dimensional data points from a chemical process related to colorimetry. The intervals of the functions are chosen to balance complexity and visual observability while keeping the number of required parameters in the PQC at a level that still allows the necessary iterative calculations within reasonable time (i.e., a couple of days). To facilitate an unambiguous angle encoding (for details, see [21]), we normalize the data to the range [0, 1] in all dimensions, as this is a requirement for the pre-built EstimatorQNN class in Qiskit [25], which we use for the implementation².
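For reference, the following sketch shows the data preparation for the two-dimensional Griewank function using its well-known definition, grid sampling on the stated interval, and normalization of the inputs to [0, 1]; the grid granularity is illustrative.

```python
# Data preparation sketch: two-dimensional Griewank benchmark samples
# on [-5, 5]^2, with inputs normalized to [0, 1] for angle encoding.
import numpy as np

def griewank(x):
    # f(x) = 1 + sum(x_i^2)/4000 - prod(cos(x_i / sqrt(i)))
    x = np.atleast_2d(x)
    i = np.arange(1, x.shape[1] + 1)
    return 1 + np.sum(x**2, axis=1) / 4000 \
             - np.prod(np.cos(x / np.sqrt(i)), axis=1)

lo, hi = -5.0, 5.0
grid = np.linspace(lo, hi, 20)                      # sample size 20
X = np.array([[a, b] for a in grid for b in grid])  # 400 grid points
y = griewank(X)
X_norm = (X - lo) / (hi - lo)                       # normalize to [0, 1]
```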
C. Choosing a reasonable classical ANN as baseline
Our goal in constructing the classical ANN baseline is to find a small model that accurately approximates the given benchmark functions when no noise is present and sufficiently many data samples are given, analogous to our way of constructing the QNNs. Notably, we deliberately forgo a search for possibly existing, more sophisticated models that offer a higher resilience towards data scarcity and noise, in favor of following the most similar process for identifying the needed network structure between the quantum and classical approaches.
The identified classical ANN is a sequential model with one input layer, two hidden layers, and one output layer. The first hidden layer contains ten neurons and uses a sigmoid activation function; the second layer is comprised of three neurons and employs a tanh activation function, as this configuration showed decent results. This results in a minimalist ANN using only 15 parameters. Analogous to the QNN, we employ the mean squared error loss and the ADAM optimizer [30]. Without any focus on hyperparameter tuning or regularization techniques, experimental results show that this model satisfies our demands.
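A sketch of how such a baseline might be defined in Keras is shown below; the layer sizes and activations follow the description above, while everything else (framework choice, training settings) is an assumption on our part.

```python
# Sketch of the minimalist classical baseline: two hidden layers
# (10 sigmoid, 3 tanh) trained with MSE loss and the ADAM optimizer.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),              # two-dimensional input
    tf.keras.layers.Dense(10, activation="sigmoid"),
    tf.keras.layers.Dense(3, activation="tanh"),
    tf.keras.layers.Dense(1),                       # scalar surrogate output
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_norm, y, ...)  # training settings not specified in the paper
```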
IV. EVALUATION
Having prepared a suitable QNN architecture, a classical ANN as baseline, and a variety of challenging benchmark functions, we can now evaluate the applicability of QNNs as surrogate functions in domains with small sample sizes and noisy data.
A. Baseline results for noise-free datasets
To quantify all results, we use the R2 score, a standard tool to measure the similarity of estimated function values to the original data point values, as well as visual inspection to assess how well the shape of the original function is approximated. The R2 score is defined such that it yields 1 if the model perfectly predicts the outcome, and any lower value down to 0, the less well it predicts the outcome:

$R^2(y, \hat{y}) = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2} \quad (4)$
Here, n is the number of given samples, $y_i$ is the original value, $\hat{y}_i$ is the predicted value, and $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$. For all two-dimensional benchmark functions mentioned in section III-B, we successfully achieve very good R2 scores, even obtaining values above 0.9. Exemplary results of the original surface and the surrogate model surface for the two-dimensional Schwefel function can be seen in figure 2. Interestingly, these results can be achieved with many different circuits as long as enough layers and optimization iterations are used, e.g.:

• 20 layers and 3000 iterations for the Griewank function
• 42 layers and 4000 iterations for the Schwefel and Styblinski-Tang functions

For the Color bob dataset, we obtain an R2 score close to 0.9 with 3 layers consisting only of the feature map and ansatz 1 from figure 1, with merely 500 optimization iterations. To account for the increased dimensionality of the input data, we used five qubits and skipped the parallel encoding, as the results turned out to be good enough without it. These evaluation results indicate that our QNN models can fit the given functions well.
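For completeness, Eq. (4) translates directly into a few lines of NumPy:

```python
# Direct implementation of the R2 score from Eq. (4).
import numpy as np

def r2_score(y, y_hat):
    y, y_hat = np.asarray(y), np.asarray(y_hat)
    ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total sum of squares
    return 1 - ss_res / ss_tot
```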
B. Introducing noise and sample scarcity
In real-world applications, the available data samples are usually scarce and noisy. In order to model this situation, we introduce different degrees of noise on varying sample sizes. For this rather time-intensive part of the evaluation, we focus on the Griewank function, as the number of layers and iterations (i.e., the computational effort) required to find a suitable quantum surrogate model for this function is much lower than for the others.
Given the discussed QNN (see figure 1) and classical ANN (see section III-C), we are now ready to evaluate them for 100, 400, 900, 1600, and 2500 data samples (selected analogously to selecting data points in grid search) while also introducing standard Gaussian noise factors [31] of 0.1, 0.2, 0.3, 0.4, and 0.5. More specifically, the noise is applied using the map

$(x_i, f(x_i)) \mapsto (x_i, f(x_i) + \delta\nu),$

where δ denotes the noise factor and ν is a random value sampled from the standard normal distribution $\varphi(z) = \exp(-z^2/2)/\sqrt{2\pi}$. Adding noise to the data can hereby be thought of as modelling imprecise measurements in chemical experiments. For simplicity, we use the term sample size in the following to denote the granularity of the grid in both dimensions, i.e., we investigate sample sizes of 10, 20, 30, 40, and 50.
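The noise injection can be sketched as follows; it reuses the grid samples (`X_norm`, `y`) from the data preparation sketch above, and the seed is arbitrary.

```python
# Sketch of the noise model: scale standard-normal draws by the noise
# factor delta and add them to the function values of the grid samples.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(X, y, delta):
    nu = rng.standard_normal(size=y.shape)
    return X, y + delta * nu  # (x_i, f(x_i)) -> (x_i, f(x_i) + delta*nu)

# e.g., sample size 20 per dimension (400 points), noise factor 0.5
X_noisy, y_noisy = add_noise(X_norm, y, delta=0.5)
```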
As we are mainly interested in the relative performance of the QNN and ANN against each other, we focus our evaluation on the difference of their R2 scores, $\Delta R^2 = R^2_{\text{QNN}}(y, \hat{y}) - R^2_{\text{ANN}}(y, \hat{y})$, as depicted in figure 3. The results show that our QNN tends to achieve better R2 scores than the classical ANN for higher noise levels. This trend is even more pronounced for smaller sample sizes. This indicates that QNNs can have better generalization ability when the dataset is relatively small and noisy. In order to also examine the results generated by the QNN and the classical ANN visually, we depict plots of the surrogate model for the Griewank function as well as the original Griewank function and the function resulting from overlaying noise with a factor of 0.5 in figure 4. The QNN seems to be more resilient than the classical ANN to increasing noise levels and clearly maintains the shape of the original function much better. Notably, for noise levels below 0.3, the classical ANN was able to achieve higher R2 scores than the QNN. The relative improvement of the QNN when reducing sample sizes and increasing noise levels indicates better generalization capabilities of the QNN.
C. NISQ Hardware Results
In all of our conducted circuit simulations, we have been able to optimize the parameters of the QNN by calculating the expectation value for all data inputs analytically. In our experience, this works reasonably fast for up to eight qubits (i.e., taking no more than a couple of hours). Today's HPC quantum circuit simulators have shown the capability to simulate small circuits with up to 48 qubits [32]. Taking this as an approximate bound for how many qubits a circuit can consist of in analytic calculations, one can see that classical simulations face a clear limit when assessing high-dimensional input data with qubit-intensive data encodings. Based on our experiments, two to three qubits per dimension showed the best results. For 29-dimensional input data as available in the related PharmKG project [33], this would already require 58 to 87 qubits, quickly exceeding what is technically possible with classical computers. Therefore, one needs to rely on real quantum hardware in order to examine scaling performance for higher-dimensional datasets, optimally using fault-tolerant qubits.
In order to explore such possibilities, we also test our quantum circuits on real quantum computers. For this, we chose the five-qubit quantum computer ibmq_belem³, as it offered sufficient resources for running our circuits, i.e., availability, number of qubits, and gate fidelities. Despite privileged access and using the Qiskit Runtime environment (which does not require re-queuing for each iteration), our multi-layer approach faced difficulties for higher dimensions. Due to current error rates, execution on the real hardware ultimately always failed for two-dimensional input data when using two qubits per dimension. After several attempts, we were able to obtain a fit for the one-dimensional Griewank function with three qubits, six layers, and 100 iterations, as depicted in figure 5. While the achieved R2 score of 0.54 is fairly moderate, we observe that in both evaluations (figures 4 and 5), the QNN is very accurate in determining the shape of the underlying function. This offers very valuable information to experimentalists, who are typically interested in finding areas of the input domain containing sufficiently good local minima.
D. Analysis of scaling behavior
Neglecting decoherence, there are three main types of errors on real hardware causing failures: single-qubit errors, two-qubit gate errors, and readout errors. For the ibmq_belem, the mean Pauli-X error is around 0.04%, the mean CNOT error roughly amounts to 1.08%, and the readout error is about 2.17%. Taking the mean Pauli-X error as proxy for single-qubit errors and the mean CNOT error as proxy for two-qubit gate errors, one can approximate the probability of a circuit remaining without error. Subtracting the error rate from one results in the "survival rate" per gate, which is then exponentiated with the number of respective gates in the circuit. Our three-qubit, six-layer circuit for ibmq_belem consists (on average⁴) of 10 single-qubit gates and two two-qubit gates per layer plus three readouts, resulting in a total survival rate of 80.13% for six layers. The survival rates for different circuit sizes with regard to the number of qubits and the number of layers are shown in table I.
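The survival-rate arithmetic can be reproduced with a few lines of Python; gate counts follow the text, and the result lands close to the reported 80.13% (small deviations stem from rounding of the stated error rates).

```python
# Survival-rate estimate for the three-qubit, six-layer ibmq_belem
# circuit: multiply (1 - error)^(gate count) over all gate types.
p_1q, p_2q, p_ro = 0.0004, 0.0108, 0.0217  # mean error rates from the text
layers, qubits = 6, 3
n_1q = 10 * layers  # ~10 single-qubit gates per layer (on average)
n_2q = 2 * layers   # 2 two-qubit gates per layer
survival = (1 - p_1q)**n_1q * (1 - p_2q)**n_2q * (1 - p_ro)**qubits
print(f"{survival:.2%}")  # ~80%, close to the reported 80.13%
```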
Taking this total survival rate of 80.13% as the minimum requirement for a successful real-hardware run, one can also approximate the required error rate improvement such that the four-qubit, 20-layer circuit we use for finding a surrogate model for the two-dimensional Griewank function could be run on real hardware. In order to have only one variable to solve for, we keep the single-qubit error constant at its current rate and assume that the ratio of readout error to two-qubit gate error remains at two. This results in a required two-qubit gate error of 0.15% and a readout error of 0.3%, a reduction to about 14% of current error levels. Looking towards more recent IBM QPUs like the Falcon r5.11, lower error rates than those of the ibmq_belem employed here are already available: 0.9% for CNOTs and 0.02% for Pauli-X gates. For this QPU, our calculations yield a survival rate of 52.76%, which displays significant improvement but still prevents modelling the two-dimensional Griewank function sufficiently well.
V. CONCLUSION
Surrogate models have provided enormous cost and time savings in development processes of the chemical and pharmaceutical industries. Nevertheless, dealing with small and noisy datasets still remains a challenge, mostly due to overfitting tendencies of the current state-of-the-art machine learning approaches applied. In this paper, we have demonstrated that quantum surrogate models based on QNNs can offer an advantage over classical ANNs in terms of prediction accuracy for substantially more difficult datasets than those used in the literature previously, when the sample size is scarce and substantial noise is present. For this, we constructed suitable QNNs while employing state-of-the-art ansatz design knowledge, namely: data preprocessing in form of scaling, data reuploading, parallel encoding, layering with a sufficient number of parameters, and using different ansätze in one circuit.

⁴We are mixing two different circuits in our ansatz, see figure 1.
In addition to that, our noise and scaling analyses on quantum surrogate models for higher-dimensional inputs, combined with the envisaged reduction of quantum error rates by quantum hardware manufacturers, show that our simulation results could be replicated on QPUs by 2025. A possible way to accelerate this process might be switching from a data-reuploading-heavy circuit to one focused on parallel encoding, as this would shorten the overall circuit, allowing for the use of more qubits.
Fig. 2: The two-dimensional Schwefel function can be approximated well by a QNN-based surrogate model with an R2 score of 0.94.
Fig. 3: Delta R2 score obtained by subtracting the classical ANN R2 score from the QNN R2 score for different noise levels (x-axis) and sample sizes (y-axis) for the two-dimensional Griewank function. Positive values indicate a performance advantage of the QNN (as can be seen for higher noise levels and smaller sample sizes), while negative values represent a disadvantage of the QNN.
Fig. 4: Surface plots of the Griewank function when introducing noise and sample scarcity (panels include the surface fitted by the ANN). The input to the QNN and classical ANN was modified by standard-normal noise multiplied with a factor of 0.5 on the 400 individual input data points (which corresponds to a sample size of 20).
Fig. 5: Original data points (sample size of 20) for a one-dimensional Griewank function and the quantum surrogate function obtained by running 100 iterations of 6 layers of our ansatz described in figure 1 on the ibmq_belem.
TABLE I: Estimated survival rates on noisy quantum hardware.
²Values between 0 and 2π would also have been feasible for angle encoding, but the interval [0, 1] sufficed in our experiments.
³Available on https://quantum-computing.ibm.com/services/resources.
ACKNOWLEDGMENT
This work was partially funded by the German BMWK projects QCHALLenge (01MQ22008A) and PlanQK (01MK20005I) and is part of the Munich Quantum Valley, which is supported by the Bavarian state government with funds from the Hightech Agenda Bayern Plus.
REFERENCES
[1] Gang Sun and Shuyue Wang. A review of the artificial neural network surrogate modeling in aerodynamic design. Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, 233(16):5863-5872, 2019.
[2] Feng Qin, Zhenghe Yan, Peng Yang, Shenglai Tang, and Hu Huang. Deep-learning-based surrogate model for fast and accurate simulation in pipeline transport. Frontiers in Energy Research, 10, 2022.
[3] Jose Vazquez-Canteli, Aysegul Dilsiz Demir, Julien Brown, and Zoltan Nagy. Deep neural networks as surrogate models for urban energy simulations. Journal of Physics: Conference Series, 1343(1):012002, Nov 2019.
[4] Jonathan M. Stokes, Kevin Yang, Kyle Swanson, Wengong Jin, Andres Cubillos-Ruiz, Nina M. Donghia, Craig R. MacNair, Shawn French, Lindsey A. Carfrae, Zohar Bloom-Ackermann, Victoria M. Tran, Anush Chiappino-Pepe, Ahmed H. Badran, Ian W. Andrews, Emma J. Chory, George M. Church, Eric D. Brown, Tommi S. Jaakkola, Regina Barzilay, and James J. Collins. A deep learning approach to antibiotic discovery. Cell, 180(4):688-702.e13, Feb 2020.
[5] Junshui Ma, Robert P. Sheridan, Andy Liaw, George E. Dahl, and Vladimir Svetnik. Deep neural nets as a method for quantitative structure-activity relationships. Journal of Chemical Information and Modeling, 55(2):263-274, 2015. PMID: 25635324.
[6] Erik Gawehn, Jan A. Hiss, and Gisbert Schneider. Deep learning in drug discovery. Molecular Informatics, 35(1):3-14, 2016.
[7] Ignacio Ponzoni, Víctor Sebastián-Pérez, Carlos Requena-Triguero, Carlos Roca, María J. Martínez, Fiorella Cravero, Mónica F. Díaz, Juan A. Páez, Ramón Gómez Arrayás, Javier Adrio, and Nuria E. Campillo. Hybridizing feature selection and feature learning approaches in QSAR modeling for drug discovery. Scientific Reports, 7(1):2403, May 2017.
[8] Evan Peters and Maria Schuld. Generalization despite overfitting in quantum machine learning models, 2022.
[9] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Physical Review A, 98(3), Sep 2018.
[10] Kevin McBride and Kai Sundmacher. Overview of surrogate modeling in chemical process engineering. Chemie Ingenieur Technik, 91(3):228-239, 2019.
[11] Nestor V. Queipo, Raphael T. Haftka, Wei Shyy, Tushar Goel, Rajkumar Vaidyanathan, and P. Kevin Tucker. Surrogate-based analysis and optimization. Progress in Aerospace Sciences, 41(1):1-28, 2005.
[12] André I. Khuri and Siuli Mukhopadhyay. Response surface methodology. WIREs Computational Statistics, 2(2):128-149, 2010.
[13] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Phys. Rev. A, 103:032430, Mar 2021.
[14] Gennaro De Luca. A survey of NISQ era hybrid quantum-classical machine learning research. Journal of Artificial Intelligence and Technology, 2(1):9-15, Dec 2021.
[15] Manuela Weigold, Johanna Barzen, Frank Leymann, and Marie Salm. Expanding data encoding patterns for quantum algorithms. In 2021 IEEE 18th International Conference on Software Architecture Companion (ICSA-C), pages 95-101, 2021.
[16] Sukin Sim, Peter D. Johnson, and Alán Aspuru-Guzik. Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum-classical algorithms. Advanced Quantum Technologies, 2(12):1900070, 2019.
[17] Maria Schuld, Ryan Sweke, and Johannes Jakob Meyer. Effect of data encoding on the expressive power of variational quantum-machine-learning models. Physical Review A, 103(3), Mar 2021.
[18] Adrián Pérez-Salinas, Alba Cervera-Lierta, Elies Gil-Fuster, and José I. Latorre. Data re-uploading for a universal quantum classifier. Quantum, 4:226, Feb 2020.
[19] Mateusz Ostaszewski, Edward Grant, and Marcello Benedetti. Structure optimization for parameterized quantum circuits. Quantum, 5:391, Jan 2021.
[20] Atul Mantri, Tommaso F. Demarie, and Joseph F. Fitzsimons. Universality of quantum computation with cluster states and (x, y)-plane measurements. Scientific Reports, 7(1):42861, Feb 2017.
[21] Michael Kölle, Alessandro Giovagnoli, Jonas Stein, Maximilian Mansky, Julian Hager, and Claudia Linnhoff-Popien. Improving convergence for quantum variational classifiers using weight re-mapping. In Proceedings of the 15th International Conference on Agents and Artificial Intelligence - Volume 2: ICAART, pages 251-258. INSTICC, SciTePress, 2023.
[22] M. Schuld and F. Petruccione. Machine Learning with Quantum Computers. Quantum Science and Technology. Springer International Publishing, 2021.
[23] The Qiskit Team. Simulating molecules using VQE, Nov 2022.
[24] Susana Gomez, Jean-Pierre Hennart, and M. J. D. Powell, editors. A Direct Search Optimization Method That Models the Objective and Constraint Functions by Linear Interpolation, pages 51-67. Springer Netherlands, Dordrecht, 1994.
[25] Neural Network Classifier and Regressor. Available: https://qiskit.org/documentation/machine-learning/tutorials/02_neural_network_classifier_and_regressor.html.
[26] A. O. Griewank. Generalized descent for global optimization. J. Optim. Theory Appl., 34(1):11-39, 1981.
[27] Hans-Paul Schwefel. Numerical optimization of computer models. John Wiley & Sons, Inc., 1981.
[28] M. A. Styblinski and T.-S. Tang. Experiments in nonconvex optimization: Stochastic approximation with function smoothing and simulated annealing. Neural Networks, 3(4):467-483, 1990.
[29] Florian Häse, Matteo Aldeghi, Riley J. Hickman, Loïc M. Roch, Melodie Christensen, Elena Liles, Jason E. Hein, and Alán Aspuru-Guzik. Olympus: a benchmarking framework for noisy optimization and experiment planning. Machine Learning: Science and Technology, 2(3):035021, Jul 2021.
[30] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2014.
[31] Barry Truax. Handbook for acoustic ecology. Leonardo, 13:83, 1980.
[32] Xin-Chuan Wu, Sheng Di, Emma Maitreyee Dasgupta, Franck Cappello, Hal Finkel, Yuri Alexeev, and Frederic T. Chong. Full-state quantum circuit simulation by using data compression. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC '19, New York, NY, USA, 2019. Association for Computing Machinery.
[33] Stephen Bonner, Ian P. Barrett, Cheng Ye, Rowan Swiers, Ola Engkvist, Andreas Bender, Charles Tapley Hoyt, and William L. Hamilton. A review of biomedical datasets relating to drug discovery: a knowledge graph perspective. Briefings in Bioinformatics, 23(6), Sep 2022. bbac404.
Efficient Multi-Task Scene Analysis with RGB-D Transformers
Söhnke Benedikt Fischedick (soehnke.fischedick@tu-ilmenau.de, ORCID: 0000-0001-8447-0584), Daniel Seichter, Robin Schmidt, Leonard Rabes, Horst-Michael Gross
Neuroinformatics and Cognitive Robotics Lab, Ilmenau University of Technology, 98684 Ilmenau, Germany
This work has been submitted to the IEEE for possible publication.
Abstract-Scene analysis is essential for enabling autonomous systems, such as mobile robots, to operate in real-world environments. However, obtaining a comprehensive understanding of the scene requires solving multiple tasks, such as panoptic segmentation, instance orientation estimation, and scene classification. Solving these tasks given limited computing and battery capabilities on mobile platforms is challenging. To address this challenge, we introduce an efficient multi-task scene analysis approach, called EMSAFormer, that uses an RGB-D Transformer-based encoder to simultaneously perform the aforementioned tasks. Our approach builds upon the previously published EMSANet. However, we show that the dual CNN-based encoder of EMSANet can be replaced with a single Transformer-based encoder. To achieve this, we investigate how information from both RGB and depth data can be effectively incorporated in a single encoder. To accelerate inference on robotic hardware, we provide a custom NVIDIA TensorRT extension enabling highly optimized inference for our EMSAFormer approach. Through extensive experiments on the commonly used indoor datasets NYUv2, SUNRGB-D, and ScanNet, we show that our approach achieves state-of-the-art performance while still enabling inference with up to 39.1 FPS on an NVIDIA Jetson AGX Orin 32 GB.
I. INTRODUCTION
A broad understanding of the scene is crucial for mobile agents to operate autonomously in indoor environments. Traditional approaches often only provide semantic or instance information, which is not sufficient for many real-world applications. For example, in our research projects CO-HUMANICS and MORPHIA [1], a mobile robot should autonomously operate in indoor environments and should also be controlled by an operator from a remote location. To make such remote control accessible to inexperienced operators, a more intuitive way of navigating is required. As shown in Fig. 1, the operator should be able to click onto a specific point or object visible in the camera image of the current surroundings, to which the robot should then automatically navigate. For example, the mobile robot should drive to a chair within a group of many chairs while respecting the chair's orientation so as not to block it. Furthermore, the operator should be able to select an entire room to which the robot should drive. To achieve this, a broader scene understanding is required. The robot needs to combine information from multiple tasks, i.e., semantic and instance segmentation (panoptic segmentation), instance orientation estimation, and scene classification. Due to the limited computing and battery capabilities of a mobile robot, a multi-task approach should be preferred.

This work has received funding from Carl-Zeiss-Stiftung to the project Co-Presence of Humans and Interactive Companions for Seniors (CO-HUMANICS).

Fig. 1. Application (bottom) of our proposed efficient multi-task scene analysis approach with an RGB-D Transformer encoder, called EMSAFormer, that simultaneously performs panoptic segmentation, instance orientation estimation, and scene classification (top). See Fig. 2 for prediction colors.
To gather information for all aforementioned tasks, we introduce an efficient multi-task scene analysis approach utilizing a Transformer-based encoder (EMSAFormer). It builds on top of our Efficient Multi-task Scene Analysis Network (EMSANet) [2] that already realizes such a system. In [2], we have shown the effectiveness of using RGB and depth as complementary data to improve the performance of individual tasks. For processing the different modalities, a dual ResNet34 encoder is used. The approach enables real-time application (in our application scenario ≥20 FPS) on an NVIDIA Jetson AGX Xavier, while still reaching state-of-the-art performance. With the release of the NVIDIA Jetson AGX Orin 32 GB, computing capabilities have increased, i.e., it is now possible to apply larger models in real time. Compared to the NVIDIA Jetson AGX Xavier, inference throughput has almost doubled with the same power consumption. However, the authors of EMSANet have shown that a larger encoder, e.g., utilizing two ResNet101 backbones, only slightly improves performance. Motivated by the upcoming usage of Transformer-based architectures for computer vision, we address this limitation by replacing the dual encoder based on convolutional neural networks (CNNs) with a single Swin Transformer for processing RGB-D data. We conduct detailed experiments to show the effectiveness of using a Swin Transformer for processing RGB and depth data and how it compares to traditional CNN-based encoders. Furthermore, we address differences and challenges when incorporating a Transformer-based encoder. Our experiments show that our EMSAFormer approach is able to outperform the state-of-the-art method EMSANet in most tasks. Finally, to enable inference in real time on the NVIDIA Jetson AGX Orin on our mobile robot, we provide a custom NVIDIA TensorRT extension that greatly accelerates inference. In summary, our main contributions include:
• A complementary RGB-D encoder approach that replaces the dual encoder in EMSANet with a novel single Swin Transformer encoder, while still effectively incorporating information from both RGB and depth data due to a modified Transformer design.
• A custom NVIDIA TensorRT extension for inference acceleration that enables using Swin Transformers as a general-purpose backbone for downstream tasks with import from Open Neural Network Exchange (ONNX) format and arbitrary input resolution.
• Detailed quantitative and qualitative experiments on the common indoor datasets NYUv2 [3], SUNRGB-D [4], and ScanNet [5] demonstrating the applicability and state-of-the-art performance of our approach.

Our code as well as the network weights are publicly available at: https://github.com/TUI-NICR/EMSAFormer.
II. RELATED WORK
In the following, we summarize related work for scene analysis with focus on Transformer-based encoders and RGB-D processing. We give a brief introduction how the individual tasks can be accomplished in an encoder-decoder approach.
A. Transformer-based Encoders
In recent years, convolutional neural networks (CNNs) have been the dominant approach for various computer vision tasks, such as image classification, object detection, or semantic segmentation. Various backbones, such as ResNet [6], EfficientNet [7], or ConvNeXt [8], have been proposed and achieve state-of-the-art performance on a variety of benchmarks. Inspired by the success of Transformers for natural language processing (NLP) [9]-[11], the work of [12] introduces the Vision Transformer (ViT), which is able to reach similar performance to CNNs for image classification. While the proposed architecture achieves impressive results, it still requires extensive pretraining for comparable performance on the ImageNet benchmark [13] and does not realize a pyramid structure, making it less suitable as a general-purpose backbone [14], [15]. Motivated by the success of ViT, various Transformer-based architectures, such as the Pyramid Vision Transformer [16], Swin Transformers [14], [17], and SegFormer [15], have been introduced, improving data efficiency and enabling usage as general-purpose backbones due to their pyramid-like structure. These architectures achieve state-of-the-art performance on a variety of benchmarks; however, they have been proposed for processing RGB data and have not yet been extended to processing RGB-D data, which is the focus of this work.
B. RGB-D Encoders
Combining RGB and depth data can improve performance in various computer vision tasks, as both provide complementary features [18]-[20]. RGB data provide information about semantic features, such as object color and texture, while depth data provide geometric information about the scene. Handling multiple modalities can be challenging as they contain different features with deviating statistics and characteristics. Thus, multiple approaches have emerged.
The majority of approaches [2], [18], [19], [21]-[26] handles both modalities in two separate encoders and fuses features using a dedicated fusion mechanism. The fusion is mostly done after each resolution stage [2], [18], [19], [26], [27] of the encoders, at the end of the encoders [28], or into a third encoder handling joint features [20], [23], [29], [30]. Moreover, most approaches additionally fuse features from the encoder into the decoder [2], [19], [26], similar to DeepLabV3+ [31]. While enabling independent processing of both modalities, this method implies a crucial design decision, i.e., it introduces the difficulty of deciding where the fusion should happen and how the features are fused. Additionally, depending on the backbones used, a dual-encoder design often leads to increased computational cost, making these approaches less suitable for embedded hardware.
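As an illustration (not the exact mechanism of any of the cited works), a dual-encoder design with fusion after each resolution stage can be sketched in PyTorch as follows; the stage modules and the element-wise addition used for fusion are placeholders.

```python
# Sketch of a dual-encoder RGB-D design with stage-wise fusion: depth
# features are merged into the RGB branch after each resolution stage.
import torch
import torch.nn as nn

class DualEncoder(nn.Module):
    def __init__(self, rgb_stages, depth_stages):
        super().__init__()
        self.rgb_stages = nn.ModuleList(rgb_stages)
        self.depth_stages = nn.ModuleList(depth_stages)

    def forward(self, rgb, depth):
        skips = []
        for rgb_stage, depth_stage in zip(self.rgb_stages, self.depth_stages):
            rgb = rgb_stage(rgb)
            depth = depth_stage(depth)
            rgb = rgb + depth   # fuse depth into RGB branch after each stage
            skips.append(rgb)   # skip connections toward the decoder
        return skips

# toy stages standing in for real backbone blocks
stage = lambda c_in, c_out: nn.Sequential(
    nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU())
enc = DualEncoder([stage(3, 64), stage(64, 128)],
                  [stage(1, 64), stage(64, 128)])
skips = enc(torch.rand(1, 3, 480, 640), torch.rand(1, 1, 480, 640))
```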
Other approaches try to handle both modalities in a single encoder [32]- [36]. This is done by using specially tailored convolutions for incorporating depth data. However, such approaches often lack optimization and, thus, are less suitable for embedded hardware.
Motivated by the performance achieved by Swin Transformers [14], [17], the authors of OMNIVORE [37] propose a method for handling multiple modalities in a single encoder. However, as the approach mainly focuses on handling many different modalities and large backbones, it again lacks optimization and efficiency. Moreover, OMNIVORE requires heavy pretraining to achieve good performance.
In this paper, we follow the recent trend of using Transformer-based architectures. However, in contrast to the aforementioned approaches, we focus on modifying the Swin Transformer architecture to efficiently incorporate depth information while still relying on a single encoder.
C. Task Decoders
The design of the decoder depends on the task to solve. Scene classification, i.e., assigning a scene label, such as living room or office, to the entire input, is similar to other classification tasks. It only requires a classification layer at the end of the encoder. By contrast, pixel-wise dense prediction tasks require more sophisticated decoders. As the encoder typically lowers the spatial resolution, dense-prediction decoders often incorporate multiple modules to gradually restore the resolution. Most approaches use a CNN-based decoder [26], [31], [38]-[40] of varying complexity. With the rise of Transformer-based architectures, SegFormer [15] proposes a more lightweight MLP-based decoder: features from different stages are embedded using a fully-connected layer, upsampled to the same spatial resolution, concatenated, and passed through two additional fully-connected layers to encode the final prediction.
For panoptic segmentation, at least one dense-prediction decoder is required. Panoptic segmentation [41] combines semantic and instance segmentation and aims at assigning a semantic class to each pixel of the input image as well as a unique instance ID to each pixel belonging to a distinguishable instance. Panoptic segmentation can be done in a top-down, bottom-up, or end-to-end manner. Top-down methods are typically based on existing approaches for instance segmentation and extend them with an additional decoder for semantic segmentation [42], [43]. While achieving state-of-the-art performance, these architectures typically feature sophisticated network designs and require further logic to resolve overlapping instance masks. Bottom-up methods [2], [44], [45], on the other hand, are often based on encoder-decoder architectures for semantic segmentation and extend them with an additional decoder for instances. This additional decoder predicts individual instances by grouping corresponding pixels into clusters. As there are already efficient architectures for semantic segmentation [26], [40], bottom-up approaches are often more efficient than top-down approaches [2]. However, both approaches require an additional post-processing step for combining instance and semantic segmentation. By contrast, end-to-end approaches, such as MaX-DeepLab [46], directly predict the panoptic segmentation without additional post-processing. While already achieving great performance, these methods are not established yet and also require complex architectures that currently do not focus on efficiency.
Instance orientation estimation can also be done in multiple ways. For extracting the orientation, patch-based methods, such as [47]- [49], can be used. Another way for estimating the orientation of an instance is to estimate the pose of its 3D bounding box as shown in [50]- [52]. In contrast to the aforementioned approaches, EMSANet [2] follows the bottom-up idea and incorporates orientation estimation into the instance decoder in a dense-prediction manner. This way, the computational overhead for orientation estimation is limited and averaging multiple predictions for an instance is enabled.
In this paper, we follow the bottom-up design of EMSANet for tackling the scene analysis tasks. However, to further speed up inference, we examine the lightweight MLP-based decoder of SegFormer.
III. EFFICIENT MULTI-TASK SCENE ANALYSIS WITH RGB-D TRANSFORMERS
Our Transformer-based approach for scene analysis (EMSAFormer) is shown in Fig. 2. It builds on top of EMSANet [2]. Both architectures share a similar encoder-decoder design and the same task encoding.

Fig. 2. Architecture of our proposed efficient multi-task scene analysis approach with a single RGB-D Transformer encoder (EMSAFormer) that simultaneously performs panoptic segmentation, instance orientation estimation, and scene classification. For further details and explanations, see Sec. III. Semantic colors are chosen as in [2] and are the default colors for NYUv2 [3]. Panoptic segmentation is visualized by small color differences.

The encoder extracts semantically rich features from the input and performs downsampling up to a factor of 1/32 of the input resolution to reduce computational effort. However, instead of using a fused dual encoder, our EMSAFormer features only a single encoder with a modified SwinV2 Transformer backbone to incorporate RGB and depth data. We address these modifications in Sec. III-A. After the encoder, a context module (CM) similar to the pyramid pooling module in PSPNet [38] is attached. Although Transformers already enable a larger receptive field [15], we observe a performance boost due to the additional context module in our experiments (see results later in Fig. 4). Due to the large downsampling at the end of the encoder, the computational effort of the context module is almost negligible. While all tasks share the same encoder (often referred to as hard-parameter sharing [53]), we use three independent decoders, not sharing any network parameters, to handle the tasks for scene analysis. We introduce the task-specific decoders in Sec. III-B. Similar to EMSANet, the entire network architecture is tailored to enable fast inference. However, as Transformer-based architectures are relatively new, inference optimization is crucial and currently rare. We address this key aspect with an additional NVIDIA TensorRT extension introduced in Sec. III-C.
A. Encoder
The encoder of our EMSAFormer is derived from the SwinV2 Transformer [17] architecture. We build on top of the tiny model SwinV2-T as it is the only version that currently enables real-time inference on our target hardware. The next larger SwinV2-S and SwinV2-B increase inference time by a factor of 1.5 and 1.9, respectively, and, thus, are out of our scope. The architecture of SwinV2-T is shown in Fig. 3 (a). Each RGB input is processed in four stages. The first stage embeds the input using a 4×4 convolution with stride 4 to 96 feature maps. As this convolution processes non-overlapping patches of 4×4 pixels, this step is called patch embedding. Subsequent to the patch embedding, the first two SwinV2 blocks are attached within the same stage. Each SwinV2 block comprises a multi-head self-attention module (MSA) and a subsequent 2-layer multilayer perceptron (MLP). Both the MSA and the MLP are followed by a layer normalization and further feature a skip connection, as shown in Fig. 3. The design of the SwinV2 block follows the original Transformer block introduced in [9]. However, in contrast to [9], [14], [15], the attention is computed using a scaled cosine similarity instead of the scaled dot product. Moreover, as computing the self-attention between all elements requires quadratic complexity, the authors adapt the MSA to a window multi-head self-attention (W-MSA) that divides the input into non-overlapping windows of size 8×8 in order to reduce complexity. However, this approach lacks connections across windows and, thus, prevents the model from building context features. To overcome this limitation, the authors further introduce a shifted-window multi-head self-attention (SW-MSA). By alternating both modules, the network is able to exchange information between adjacent windows. For further details on the exact architecture, we refer to [14], [17]. The remaining stages follow the same design. However, the initial patch embedding is replaced with a patch merging operation, and the number of repeated SwinV2 blocks differs in stage 3. The patch merging aims at reducing complexity while creating hierarchical features as the network gets deeper, similar to CNN-based architectures [6]-[8]. To achieve this, the spatial resolution gets downsampled by a factor of 2, while the number of feature maps is doubled.
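To make the block structure concrete, the following is a minimal PyTorch sketch of a SwinV2-style block as described above. It operates on the tokens of a single window and omits window partitioning, shifting, and the relative position bias; all class and variable names are ours for illustration and are not taken from the official implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineWindowAttention(nn.Module):
    """Simplified multi-head self-attention over one window using the
    scaled-cosine attention of SwinV2; each head sees a 32-channel subset."""
    def __init__(self, dim, head_dim=32):
        super().__init__()
        assert dim % head_dim == 0
        self.num_heads = dim // head_dim
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        # learnable per-head temperature, as in SwinV2
        self.logit_scale = nn.Parameter(torch.log(10 * torch.ones(self.num_heads, 1, 1)))

    def forward(self, x):                         # x: (B, N, C) tokens of one window
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]          # each: (B, heads, N, head_dim)
        # cosine similarity of queries and keys instead of scaled dot product
        attn = F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)
        attn = attn * torch.clamp(self.logit_scale, max=4.605).exp()  # cap at ~log(100)
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

class SwinV2Block(nn.Module):
    """SwinV2 block: attention and a 2-layer MLP, each followed by a
    LayerNorm inside the residual branch (res-post-norm)."""
    def __init__(self, dim, mlp_ratio=4):
        super().__init__()
        self.attn = CosineWindowAttention(dim)
        self.norm1 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, mlp_ratio * dim), nn.GELU(),
                                 nn.Linear(mlp_ratio * dim, dim))
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x):
        x = x + self.norm1(self.attn(x))   # skip connection around W-MSA/SW-MSA
        x = x + self.norm2(self.mlp(x))    # skip connection around MLP
        return x
```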
To incorporate depth data, we examine the modifications depicted in Fig. 3 (bottom). The most straightforward way, shown in Fig. 3 (b), integrates depth as an additional input channel of the patch embedding. The missing weights can be derived either by reusing the existing weights (D=R+G+B) or by performing an additional pretraining step. We refer to these modifications as SwinV2-T and SwinV2-T-Pre, respectively. Unfortunately, the ImageNet dataset [13] used for pretraining does not feature depth data. Therefore, we use a grayscale image instead for pretraining (D=gray). This way, the backbone already learns to handle four input channels during pretraining.

Fig. 3. Original SwinV2-T architecture [17] (top) and our modifications (bottom) to efficiently incorporate depth information in a single encoder backbone. Legend: LN: Layer Normalization, AH: Attention Head, MSA: Multi-head Self-Attention, W-MSA: Window MSA, SW-MSA: Shifted-Window MSA.

However, a major drawback of this approach is that both modalities are mixed right at the beginning of the network in the patch-embedding step. As already shown in [18], such an early mixing of both modalities introduces issues due to deviating statistics and characteristics and eliminates any benefits. Luckily, the Swin Transformer architecture makes it possible to prevent mixing features up to the first patch merging. Similar to most other Transformer-based architectures [9], [11], [12], the attention is not computed across all channels but instead only on a subset of the channels. In Swin Transformers, each attention head only processes a subset of 32 channels of the input to a SwinV2 block (see MSA box in Fig. 3). To take advantage of this property, only the patch embedding needs to be adapted. As shown in Fig. 3 (c), we propose to split the 4×4 convolution into two convolutions, the first embedding RGB to 64 channels and the second embedding depth to the remaining 32 channels. We refer to this modification as SwinV2-T-Multi. Note that the MLPs subsequent to the multi-head self-attention blocks (red in Fig. 3) still combine channels. However, there is a skip connection, which means that the network is able to learn whether to combine features or not in an adjustable way.
(Fig. 3, RGB-D modification panels: (b) SwinV2-T / SwinV2-T-Pre, (c) SwinV2-T-Multi, (d) SwinV2-T-128-Multi / *-Aug.)
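A minimal sketch of the split patch embedding from Fig. 3 (c)/(d) could look as follows; the module name and the default channel widths (96+32, corresponding to the 128-channel variant) are our illustration, not the released code.

```python
import torch
import torch.nn as nn

class SplitRGBDPatchEmbed(nn.Module):
    """Patch embedding that keeps RGB and depth separated across channels:
    RGB patches are embedded to the first 96 (or 64) channels, depth patches
    to the remaining 32 channels, so that the 32-channel attention heads of
    the first stage process each modality independently."""
    def __init__(self, rgb_dim=96, depth_dim=32, patch_size=4):
        super().__init__()
        self.rgb_embed = nn.Conv2d(3, rgb_dim, kernel_size=patch_size, stride=patch_size)
        self.depth_embed = nn.Conv2d(1, depth_dim, kernel_size=patch_size, stride=patch_size)
        self.norm = nn.LayerNorm(rgb_dim + depth_dim)

    def forward(self, rgb, depth):               # rgb: (B,3,H,W), depth: (B,1,H,W)
        x = torch.cat([self.rgb_embed(rgb), self.depth_embed(depth)], dim=1)
        B, C, H, W = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C) token sequence
        return self.norm(tokens), (H, W)
```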
While this approach focuses on independent processing of depth, both modalities are embedded into the 96 channels of the original SwinV2-T model. Embedding more information with the same number of channels may lead to a bottleneck. To overcome this issue, we further propose to enlarge the width of the entire model and to use 128, i.e., 96+32, channels in the initial embedding. This way, the RGB embedding retains its original representation capabilities, and depth is embedded into 32 additional channels. Note that, due to the subset property of the attention heads, depth is still processed in independent attention heads. We refer to this modification as SwinV2-T-128-Multi in Fig. 3 (d). This modification leads to a width similar to the larger SwinV2-B; however, the number of blocks and, thus, the depth remains that of SwinV2-T. To further strengthen the independent processing of depth in its subset of channels, we add an additional augmentation step to the pretraining pipeline that randomly masks out either the whole RGB or the whole grayscale image. This way, the network is forced to learn to extract information from both images and cannot rely solely on the more informative RGB input. We refer to this modification as SwinV2-T-128-Multi-Aug.
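The modality-masking augmentation described above can be sketched as follows; the masking probabilities are assumptions for illustration only.

```python
import torch

def random_modality_masking(rgb, gray, p=0.5):
    """Pretraining augmentation (sketch): with probability p, zero out either
    the whole RGB image or the whole grayscale (depth stand-in) image of a
    sample, forcing the network to extract features from both modalities."""
    batch_size = rgb.shape[0]
    mask_sample = torch.rand(batch_size) < p     # whether to mask this sample
    drop_rgb = torch.rand(batch_size) < 0.5      # which modality to drop
    for i in range(batch_size):
        if mask_sample[i]:
            if drop_rgb[i]:
                rgb[i].zero_()
            else:
                gray[i].zero_()
    return rgb, gray
```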
In Sec. IV-D, we examine the suitability of the aforementioned modifications. We further investigated modifications to the patch merging in order to extend our design principle, i.e., processing depth in an independent subset of channels with an adjustable fusion mechanism in the MLPs, to the entire architecture. However, we could not observe any significant performance improvement when adapting subsequent stages.
B. Decoders
The decoders for our multi-task architecture (see Fig. 2) are designed to suit the specific needs of each task. To obtain the final prediction for scene classification, a single fully-connected layer is attached to the global-average-pooling branch of the context module. For panoptic segmentation, two dense decoders are used to perform semantic and instance segmentation. Each decoder is followed by a task head that projects to the required number of channels for the corresponding task and, finally, restores the input resolution. For semantic segmentation, the task head projects to the number of semantic classes. For instance segmentation, we follow the bottom-up approach of Panoptic-DeepLab [45] and EMSANet [2]. As shown in Fig. 2, instances are represented by their center of mass encoded within a heatmap predicted by the first instance head. To assign pixels to instance centers, a second instance head further predicts offset vectors pointing towards the corresponding instance center. To ignore pixels belonging to stuff classes, e.g., wall or floor, a foreground mask derived from the semantic head is applied before assigning any pixel; a sketch of this grouping step is given below. As shown in [2], instance orientation estimation can also be handled in a dense manner with an additional head on top of the instance decoder. For each pixel, the orientation is predicted as a continuous angle around the axis perpendicular to the ground plane. To obtain the orientation of an instance, all predictions assigned to this instance are averaged.
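A simplified sketch of this bottom-up grouping step, in the spirit of Panoptic-DeepLab [45], is shown below; the threshold and top-k limit are placeholder values, and the exact post-processing in our implementation may differ.

```python
import torch
import torch.nn.functional as F

def group_instances(center_heatmap, offsets, foreground, top_k=64, thresh=0.1):
    """Pick instance centers as local maxima of the predicted heatmap and
    assign every foreground pixel to the center its offset vector points to.
    center_heatmap: (H, W), offsets: (2, H, W) as (dy, dx), foreground: (H, W) bool."""
    H, W = center_heatmap.shape
    # non-maximum suppression via 3x3 max pooling, then keep the top_k peaks
    pooled = F.max_pool2d(center_heatmap[None, None], 3, stride=1, padding=1)[0, 0]
    peaks = (center_heatmap == pooled) & (center_heatmap > thresh)
    ys, xs = torch.nonzero(peaks, as_tuple=True)
    if ys.numel() == 0:
        return torch.zeros(H, W, dtype=torch.long)
    scores = center_heatmap[peaks]
    keep = scores.topk(min(top_k, scores.numel())).indices
    centers = torch.stack([ys[keep], xs[keep]], dim=1).float()       # (K, 2)

    # every pixel votes for the location its offset vector points to
    yy, xx = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    voted = torch.stack([yy + offsets[0], xx + offsets[1]], dim=-1)  # (H, W, 2)
    dist = torch.cdist(voted.reshape(-1, 2), centers)                # (H*W, K)
    ids = dist.argmin(dim=1).reshape(H, W) + 1                       # IDs start at 1
    ids[~foreground] = 0                                             # stuff pixels: ID 0
    return ids
```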
For the dense decoders, we consider both the CNN-based decoder from EMSANet (see red block in the instance branch in Fig. 2) as well as the SegFormer MLP decoder (see green block in the semantic branch in Fig. 2). The original SegFormer MLP decoder in [15] uses four branches with an equal number of channels, i.e., [128, 128, 128, 128]. We propose to follow the pyramid-like structure with increasing resolution of the EMSANet decoder and use [256, 128, 64, 64] channels instead. This way, both inference throughput and performance are increased. We refer to this decoder as the modified SegFormer decoder. In our experiments, we examine and compare the performance of both decoder types, the EMSANet decoder and the modified SegFormer decoder.
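For illustration, a minimal version of the modified SegFormer decoder with the pyramid-like channel configuration could be sketched as follows. The input dimensions correspond to the stage widths of a SwinV2-T-128-style backbone, and all names are ours; the per-pixel fully-connected embeddings are written as 1×1 convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModifiedSegFormerDecoder(nn.Module):
    """MLP decoder sketch: each encoder stage is embedded by a 1x1 projection
    ([64, 64, 128, 256] channels from high to low resolution instead of equal
    widths), upsampled to the resolution of the largest stage, concatenated,
    and fused by two further projections into the final prediction."""
    def __init__(self, in_dims=(128, 256, 512, 1024), embed_dims=(64, 64, 128, 256),
                 fused_dim=256, num_classes=40):
        super().__init__()
        self.embeds = nn.ModuleList(nn.Conv2d(i, e, 1) for i, e in zip(in_dims, embed_dims))
        self.fuse = nn.Sequential(nn.Conv2d(sum(embed_dims), fused_dim, 1), nn.ReLU(),
                                  nn.Conv2d(fused_dim, num_classes, 1))

    def forward(self, feats):                    # feats: stages from high to low res
        size = feats[0].shape[-2:]
        x = [F.interpolate(emb(f), size=size, mode="bilinear", align_corners=False)
             for emb, f in zip(self.embeds, feats)]
        return self.fuse(torch.cat(x, dim=1))    # logits at 1/4 input resolution
```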
C. Optimization
For efficient and fast inference on embedded hardware, optimized inference frameworks, such as NVIDIA TensorRT, are crucial. Compared to CNN-based architectures, Transformer-based architectures are relatively new and, thus, currently lack the same kind of highly optimized inference engines. However, there is already ongoing effort to optimize inference throughput. FasterTransformer [54] provides a first extension to NVIDIA TensorRT with focus on Transformers. However, while already achieving a significant speedup, it currently only focuses on optimizing architectures for image classification with inputs of fixed and square resolution. Moreover, the whole encoder is optimized as a single block, which makes it impossible to access intermediate features for skip connections or to incorporate any modification to the encoder architecture, such as a modified patch embedding or merging, a deviating number of channels, or another kind of normalization layer. To overcome these limitations, we propose a custom NVIDIA TensorRT extension. It is based on the existing FasterTransformer implementation but splits the encoder into smaller blocks to enable more flexibility in downstream tasks. We further added support for bottom-right padding to the CUDA kernels if the input size is not a multiple of the window size used in the shifted-window attention. This is of great importance to enable inference with inputs of arbitrary resolution. The modular design is complemented by the ability to import models from Open Neural Network Exchange (ONNX) format. The proposed extension enables us to examine the architecture modifications described in Sec. III-A.
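As an illustration of the ONNX-based workflow, a model can be exported with dynamic spatial axes so that engines can later be built for arbitrary input resolutions; the two-input signature and the names below are assumptions for the sketch, not our actual export script.

```python
import torch

def export_to_onnx(model, path="emsaformer.onnx"):
    """Sketch: export a model to ONNX with dynamic batch and spatial axes."""
    model.eval()
    rgb = torch.randn(1, 3, 480, 640)     # dummy inputs at the training resolution
    depth = torch.randn(1, 1, 480, 640)
    torch.onnx.export(
        model, (rgb, depth), path,
        input_names=["rgb", "depth"], output_names=["logits"],
        dynamic_axes={"rgb": {0: "batch", 2: "height", 3: "width"},
                      "depth": {0: "batch", 2: "height", 3: "width"},
                      "logits": {0: "batch", 2: "height", 3: "width"}},
        opset_version=17,
    )
```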
IV. EXPERIMENTS
We evaluate the performance of our proposed multi-task approach on the common indoor RGB-D datasets NYUv2 [3], SUNRGB-D [4], and ScanNet [5]. We start with a single-task setting on the smaller NYUv2 dataset to assess the performance of the SwinV2-based encoder and our proposed modifications across the individual tasks. The goal is to derive a suitable encoder that is capable of handling all tasks in the most efficient way. Moreover, we investigate the performance of our proposed encoder with both dense decoder types, the EMSANet decoder and the modified SegFormer decoder. With the results of these experiments at hand, we combine all tasks in a multi-task approach. The goal is to solve all four tasks, i.e., semantic segmentation, instance segmentation, instance orientation estimation, and scene classification, simultaneously using a single neural network. Finally, we demonstrate the suitability of our approach on the larger SUNRGB-D and ScanNet datasets and compare to other state-of-the-art approaches.
A. Implementation Details
Our implementation is built using PyTorch [55] and is based on the EMSANet implementation [2]. As commonly done in downstream tasks, we use weights pretrained on ImageNet [13] to initialize the encoders. However, as already stated in Sec. III-A, any modification except for SwinV2-T (D=R+G+B) requires an additional pretraining step. Pretraining Transformers from scratch is very time consuming and requires large batch sizes and, thus, heavy memory resources. We used 8× NVIDIA A100 40 GB GPUs to accomplish these pretrainings. To enable other applications, we share the pretrained weights with the research community on GitHub. Note that pretraining larger models, such as SwinV2-S or SwinV2-B, implies even higher memory requirements and, thus, is out of our scope. Subsequent training of the EMSAFormer architecture requires fewer resources and can be done on any GPU with at least 25 GB of VRAM (all tasks simultaneously). We stick to the training pipeline of EMSANet [2] for data processing and augmentation. We use a fixed input resolution of 640×480 pixels and a batch size of 8. Each network is trained for 500 epochs using SGD with a momentum of 0.9 and a small weight decay of 0.0001. The learning rate is varied in {0.00125, 0.0025, 0.005, 0.01, 0.02, 0.03, 0.04, 0.06} depending on the actual dataset and tasks. We further use a one-cycle learning rate scheduler, similar to the cosine-annealing learning rate scheduler commonly used for Transformers, to adjust the learning rate during training. For further details and other hyperparameters, we refer to our implementation available on GitHub.
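The optimization setup translates directly into PyTorch; a minimal sketch (steps_per_epoch is a placeholder that depends on dataset size and batch size) is:

```python
import torch

def build_optimization(model, max_lr=0.04, epochs=500, steps_per_epoch=100):
    """SGD with momentum 0.9 and weight decay 1e-4, combined with a
    one-cycle learning-rate schedule stepped once per training batch."""
    optimizer = torch.optim.SGD(model.parameters(), lr=max_lr,
                                momentum=0.9, weight_decay=1e-4)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=max_lr, epochs=epochs, steps_per_epoch=steps_per_epoch)
    return optimizer, scheduler
```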
B. Datasets
Selecting datasets suitable for evaluating our multi-task approach is challenging, as it requires RGB and depth data as well as annotations for the individual tasks. In the following, we give a brief overview of the RGB-D datasets used.
NYUv2 [3]: The NYUv2 dataset comprises 795 training samples and 654 test samples. It provides annotations for semantic and instance segmentation and scene classification. We use the semantic annotations with 40 classes. As the original dataset does not include annotations for instance orientation, we use the manually annotated ones from [2].
SUNRGB-D [4]: The SUNRGB-D dataset features 5,285 training and 5,050 test samples from multiple RGB-D cameras. The dataset provides annotations for the first 37 NYUv2 semantic classes and for scene classification. However, it lacks dense annotations for instance segmentation. We stick to the reconstructed instance annotations from 3D bounding boxes proposed in [2], which also provide orientation annotations.
ScanNet [5]: The ScanNet dataset comprises 1.89M training, 0.53M validation, and 0.21M test images. It provides annotations for semantic and instance segmentation as well as for scene classification. We use the semantic class mapping to the 40 NYUv2 classes. As the dataset is created from video sequences and, thus, contains many similar images, we follow the official recommendation [5] and use a subsampling factor of 100 for the validation and test splits. To reduce training time, we use a subsampling factor of 50 for training and further limit the number of samples to a random subset of 25% for each epoch. As the dataset lacks instance orientation annotations, we cannot train this task on ScanNet. However, given its size and the quality of its annotations, it is still an important dataset.
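The per-epoch 25% subset can be realized, for example, with a sampler that draws a fresh random subset each epoch (a sketch; num_samples with replacement=False requires a recent PyTorch, and the function name is ours):

```python
from torch.utils.data import DataLoader, RandomSampler

def make_scannet_loader(dataset, batch_size=8, fraction=0.25):
    """Draw a fresh random 25% subset of the (already frame-subsampled)
    training data for every epoch."""
    sampler = RandomSampler(dataset, replacement=False,
                            num_samples=int(fraction * len(dataset)))
    return DataLoader(dataset, batch_size=batch_size, sampler=sampler,
                      num_workers=4, drop_last=True)
```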
For creating panoptic annotations, we combine the dense annotations of the datasets and treat floor, wall, and ceiling as stuff classes. For scene classification, we use the unified class spectrum for the most relevant indoor classes presented in [2]. As the ScanNet dataset was not in the scope of [2], we created a similar scene class mapping for this dataset.
C. Metrics
We follow the evaluation protocol of [2] and report the mean intersection over union (mIoU) for semantic segmentation; the panoptic quality (PQ), segmentation quality (SQ), and recognition quality (RQ) for panoptic segmentation; as well as the mean absolute angular error (MAAE) for instance orientation estimation. To enable experiments in a single-task setting, PQ is also reported for instance segmentation. Note that PQ tracks closely to the average precision (AP) [41] and, thus, also evaluates instance segmentation in a meaningful way. However, the instance decoder performs class-agnostic instance segmentation. Therefore, we use the ground-truth semantic segmentation for creating a foreground mask and for assigning semantic classes. The reported PQ thus corresponds to perfect semantic segmentation. Similarly, for single-task instance orientation estimation, the MAAE is computed assuming perfect instances. For scene classification, the balanced accuracy (bAcc) is used to account for the imbalanced class distribution. As we aim at fast inference, we do not apply any evaluation tricks, such as test-time augmentation. Furthermore, to enable fair comparison, we always upsample dense predictions to the full input resolution before determining any metric.
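For reference, PQ as defined in [41] factorizes into SQ and RQ; a minimal sketch of its computation from matched segments is:

```python
def panoptic_quality(matched_ious, num_fp, num_fn):
    """PQ from matched segment pairs [41] (sketch): matched_ious is the list
    of IoU values of all true-positive (prediction, ground truth) pairs with
    IoU > 0.5; PQ factorizes into segmentation and recognition quality."""
    tp = len(matched_ious)
    sq = sum(matched_ious) / tp if tp else 0.0                 # mean IoU of matches
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    rq = tp / denom if denom else 0.0                          # F1-like recognition term
    return sq * rq, sq, rq                                     # PQ = SQ * RQ
```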
D. Single-task Performance
We start by evaluating the proposed SwinV2-Transformer-based encoder and its modifications in a single-task setting.
Semantic Segmentation (Sem): Fig. 4 (a) visualizes the results for semantic segmentation and compares to the dual-encoder approaches of EMSANet. It becomes obvious that SwinV2-T can also be used as backbone in a dual-encoder design, leading to an mIoU similar to that with ResNet101 backbones, except for processing depth solely (blue in Fig. 4). This highlights that SwinV2 is tailored to processing RGB inputs. However, the dual-encoder design results in a significant drop in inference throughput, making such a design not suitable for our application scenario. Changing the decoder to the smaller modified SegFormer decoder leads to similar performance but cannot alleviate the gap in inference throughput. By contrast, relying on a single encoder greatly improves inference throughput. However, the results (red in Fig. 4) also highlight the challenge of processing both modalities in a single encoder. The performance of SwinV2-T drops to an mIoU of ~48%. Additional pretraining on RGB-Gray inputs (SwinV2-T-Pre) can only halve the gap to the dual-encoder counterpart (green in Fig. 4). Further splitting the patch embedding to emphasize processing both modalities independently, as done in SwinV2-T-Multi, does not close the remaining gap in performance. This shows that the model is not capable of handling both modalities in the original embedding with 96 channels. The wider SwinV2-T-128, which uses 128 instead of 96 channels in the patch embedding, benefits much more from splitting the patch embedding (SwinV2-T-128-Multi). Adapting data augmentation during pretraining (SwinV2-T-128-Multi-Aug) to further strengthen the independent processing of depth later in the downstream tasks results in another improvement. Our single encoder with SwinV2-T-128-Multi-Aug backbone leads to slightly better performance than the dual-encoder design with ResNet101 backbones at almost the same inference speed.
Instance Segmentation (Ins): For instance segmentation, a similar trend emerges. However, the results in Fig. 4 (b) show two new aspects. First, there is a gap of ~6% in PQ between CNN-based and Transformer-based encoders, independently of the modality or the encoder design. As discussed later in Sec. IV-E, this gap highlights optimization issues in the Transformer-based encoder due to the small number of training samples along with a more challenging task. Second, the modified SegFormer MLP decoder consistently leads to even worse results. Therefore, we stick to the EMSANet decoder for instance segmentation for the remaining experiments.
Orientation Estimation (Or): The results in Tab. I (top) show that the gap experienced for instance segmentation is not present for instance orientation estimation. This could be due to the fact that this task is easier to accomplish in general and of more similar complexity to semantic segmentation. Our SwinV2-T-128-Multi-Aug encoder achieves comparable performance to a dual encoder with ResNet101 backbones, while outperforming all other dual-encoder approaches.

Scene Classification (Sce): The results in Tab. I (bottom) again highlight the strength of Transformer-based architectures for image classification. The Transformer-based single-encoder model processing RGB solely already outperforms all CNN-based approaches with ResNet backbones. Furthermore, each single RGB-D encoder model outperforms all dual-encoder approaches. Our proposed SwinV2-T-128-Multi-Aug achieves the best result.

The results of this set of experiments show that our proposed SwinV2-T-128-Multi-Aug backbone is capable of handling all tasks. Issues for instance segmentation are addressed below.
E. Multi-task Performance
Learning multiple tasks using a single neural network is challenging as the tasks may influence each other. Thus, balancing the losses against each other and selecting the best epoch are crucial. We put less focus on orientation estimation as the results are already close to annotation quality [2]. As we focus on the Transformer-based encoder in this publication, we refer to our implementation for the actual task balancing. The best epoch is chosen based on the task most relevant for our application, i.e., the PQ for panoptic segmentation. However, to get a better impression of the performance of the individual tasks when trained simultaneously in a multi-task setting, independent of selecting a specific checkpoint, we also report the best result for each metric within the same run.
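The loss balancing itself reduces to a weighted sum over the task losses; a generic sketch (the actual weights are documented in our published implementation) is:

```python
def multi_task_loss(losses, weights):
    """Weighted sum of per-task losses; losses and weights are dicts keyed by
    task, e.g. 'semantic', 'instance', 'orientation', and 'scene'."""
    return sum(weights[task] * losses[task] for task in losses)

# example usage with illustrative placeholder weights:
# total = multi_task_loss({"semantic": l_sem, "instance": l_ins,
#                          "orientation": l_or, "scene": l_sce},
#                         {"semantic": 1.0, "instance": 1.0,
#                          "orientation": 0.5, "scene": 0.25})
```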
Tab. II shows the multi-task results for NYUv2. It becomes obvious that all tasks can be solved using a single neural network. The results for the individual tasks are close to single-task performance. The PQ for instance segmentation increases noticeably and partially closes the gap to CNN-based dual encoders. This indicates that the encoder features learned in the multi-task setting are more beneficial for instance segmentation. Surprisingly, exchanging the modified SegFormer decoder (denoted by Sem(SegFormer) in Tab. II) with the EMSANet decoder (denoted by Sem in Tab. II) for semantic segmentation improves almost the entire multi-task performance. This suggests that, at least for the smaller NYUv2 dataset, two identical dense decoders lead to better encoder features. Except for instance segmentation, the multi-task results are close to EMSANet with ResNet101 backbones.
The results for SUNRGB-D and ScanNet in Tab. III show a different picture. Performing semantic segmentation with the modified SegFormer decoder (denoted by Sem(SegFormer) in Tab. III) consistently leads to better multi-task results. The gap to the EMSANet decoder further grows as the dataset size increases, i.e., for ScanNet. We deduce that the NYUv2 dataset is too small to fit the parameters of our model with the modified SegFormer decoder. The proposed EMSAFormer configuration (see Fig. 2) outperforms both EMSANet approaches for semantic and panoptic segmentation as well as scene classification.
Tab. II further reports the inference throughput for all approaches and task settings. Note that the values also apply to SUNRGB-D and ScanNet as they feature fewer or the same number of semantic classes. Even with a lower power profile (measured power consumption of 30 W), our proposed multi-task EMSAFormer approach meets our real-time requirement of at least 20 FPS. Fig. 5 presents qualitative results. For all indoor datasets, our approach is able to analyze the scenes thoroughly. The obtained predictions are well suited for enabling a mobile robot to operate autonomously in indoor environments.
F. Comparison to State of the Art
Comparing our proposed EMSAFormer to other approaches is challenging, as they mainly focus on single-task semantic segmentation and rarely account for efficiency. Moreover, most approaches use test-time augmentation, which is not applicable on a mobile robot with limited computational resources. The only approach that fits our multi-task setting is EMSANet [2], which we have already compared to above. However, Tab. IV shows additional comparisons to RGB-D approaches on all three common indoor datasets for semantic segmentation. The results on NYUv2 and SUNRGB-D reveal that our approach outperforms other CNN-based approaches. For NYUv2, the results are close to the OMNIVORE approach with the larger Swin-B backbone. We also report results for the official ScanNet benchmark (hidden test split). Our approach outperforms EMSANet with a dual ResNet101 encoder on this split as well.
V. CONCLUSION AND FUTURE WORK
We have presented a Transformer-based RGB-D approach for multi-task scene analysis, called EMSAFormer, that simultaneously performs panoptic segmentation, instance orientation estimation, and scene classification. We have shown that the CNN-based dual encoder of EMSANet [2] can be replaced with a single Transformer-based encoder. Our extensive experiments on the three common indoor datasets NYUv2, SUNRGB-D, and ScanNet highlight the strong performance of our proposed EMSAFormer. We have further revealed limitations of Transformer-based approaches in a single-task setting on smaller datasets, such as NYUv2. However, we have demonstrated that these issues can be addressed using a multi-task approach. Due to the proposed NVIDIA TensorRT extension, our EMSAFormer approach can be applied in real time with 39.1 FPS on an NVIDIA Jetson AGX Orin 32 GB, demonstrating its suitability for deployment on mobile robots. In future work, we intend to explore the benefits of additional training on large-scale RGB-D datasets such as Hypersim [56].
Fig. 5. Qualitative results: RGB images overlaid with the predicted panoptic segmentation, the predicted scene class, and the estimated orientations, if available.
TABLE I
Results on the NYUv2 test split when performing instance orientation estimation (top) and scene classification (bottom) in a single-task setting with various encoder configurations. See Sec. IV-C for metric abbreviations.

Orientation Estimation (MAAE ↓):
                                    RGB     Depth   RGB-D (Fused Dual)  RGB-D (Single)
[2]   ResNet34-NBt1d                22.24   18.36   17.91               -
      ResNet50                      23.09   18.81   18.41               -
      ResNet101                     22.06   18.02   17.50               -
      SwinV2-T                      24.08   19.13   19.02               19.52
Ours  SwinV2-T-Pre                  -       -       -                   18.91
      SwinV2-T-128                  -       -       -                   18.89
      SwinV2-T-128-Multi-Aug        -       -       -                   17.85

Scene Classification (bAcc ↑):
                                    RGB     Depth   RGB-D (Fused Dual)  RGB-D (Single)
[2]   ResNet34-NBt1d                74.40   67.26   72.40               -
      ResNet50                      74.19   69.92   74.91               -
      ResNet101                     74.95   70.53   75.86               -
      SwinV2-T                      76.84   66.72   73.39               76.52
TABLE II
Results on the NYUv2 test split when training our multi-task EMSAFormer, in comparison to EMSANet [2]. See Sec. IV-C for the reported metrics. Panoptic results (mIoU, PQ, RQ, SQ, MAAE) are obtained after merging semantic and instance predictions. For each multi-task configuration, the first row gives the results at the checkpoint selected based on the panoptic PQ, the second row the best result for each metric within the same run. FPS 50W / FPS 30W: frames per second on an NVIDIA Jetson AGX Orin 32 GB (Jetpack 5.1.1, TensorRT 8.5.2, Float16) at the given measured power consumption.

EMSAFormer (ours), SwinV2-T-128-Multi-Aug backbone:
  Task(s)                           mIoU   PQ     MAAE   bAcc   | panoptic: mIoU  PQ     RQ     SQ     MAAE  | FPS 50W  FPS 30W
  Semantic Segmentation (Sem)       50.53  -      -      -      | -                                          | 47.1     30.5
  Instance Segmentation (Ins)       -      56.44  -      -      | -                                          | 48.4     33.5
  Orientation Estimation (Or)       -      -      17.85  -      | -                                          | 47.7     32.9
  Scene Classification (Sce)        -      -      -      78.66  | -                                          | 58.9     40.7
  Sem(SegFormer)+Sce+Ins+Or         50.23  58.75  20.95  77.70  | 51.34  43.41  52.53  81.75  18.94          | 39.1     27.3
                                    50.51  59.25  20.95  80.02  | 51.34  43.41  52.53  81.79  18.94          |
  Sem+Sce+Ins+Or                    51.06  59.06  20.01  78.80  | 51.76  43.28  52.48  81.43  18.26          | 36.5     25.6
                                    51.26  59.27  18.09  78.80  | 51.76  43.28  52.48  81.51  18.09          |

EMSANet [2]:
  2x ResNet101, Sem+Sce+Ins+Or      50.83  62.64  17.87  77.41  | 50.67  45.12  54.02  82.49  15.33          | 42.9     30.1
                                    51.01  62.81  17.82  78.43  | 51.23  45.12  54.02  82.99  14.73          |
  2x ResNet34-NBt1D, Sem+Sce+Ins+Or 50.97  61.33  18.37  76.46  | 50.61  43.59  52.23  82.48  16.39          | 70.5     49.9
                                    51.15  61.53  18.37  78.18  | 51.31  43.59  52.27  82.70  15.76          |
TABLE III
Results on the SUNRGB-D test split and the ScanNet validation split when training our multi-task EMSAFormer, in comparison to EMSANet [2]. See Sec. IV-C for details on the reported metrics. For each configuration, the first row gives the results at the checkpoint selected based on the panoptic PQ, the second row the best result for each metric within the same run.

SUNRGB-D:
  Model                                                 mIoU   PQ     MAAE   bAcc   | panoptic: mIoU  PQ     RQ     SQ     MAAE
  EMSAFormer, SwinV2-T-128-Multi-Aug                    48.52  61.14  16.99  62.01  | 45.12  50.08  59.08  84.68  15.32
                                                        48.67  61.60  16.99  64.96  | 45.27  50.08  59.08  84.83  14.93
  EMSAFormer, SwinV2-T-128-Multi-Aug (Sem(SegFormer))   48.61  61.20  15.91  61.97  | 45.79  51.70  60.12  84.65  14.00
                                                        48.82  61.78  15.91  64.50  | 45.94  51.70  60.15  84.65  13.90
  EMSANet, 2x ResNet101                                 47.99  62.07  15.17  59.40  | 43.22  51.06  58.88  85.53  13.34
                                                        47.99  62.96  15.17  61.21  | 44.19  51.75  59.74  85.64  13.00
  EMSANet, 2x ResNet34-NBt1D                            48.39  60.62  16.28  61.76  | 45.53  49.88  57.79  84.91  14.23
                                                        48.39  61.48  14.83  62.66  | 45.66  50.53  58.66  85.20  14.15

ScanNet:
  EMSAFormer, SwinV2-T-128-Multi-Aug                    63.78  66.69  -      48.82  | 61.93  49.70  59.15  83.31  -
                                                        63.78  66.71  -      49.70  | 61.93  49.70  59.15  83.36  -
  EMSAFormer, SwinV2-T-128-Multi-Aug (Sem(SegFormer))   64.75  67.71  -      49.69  | 62.66  51.18  61.01  83.20  -
                                                        64.75  67.84  -      49.73  | 62.66  51.18  61.01  83.38  -
  EMSANet, 2x ResNet101                                 63.63  66.36  -      44.63  | 61.92  50.35  59.82  83.51  -
                                                        64.11  66.64  -      46.32  | 61.96  50.35  59.82  83.80  -
  EMSANet, 2x ResNet34-NBt1D                            61.25  65.57  -      45.47  | 58.32  47.76  56.85  83.39  -
                                                        61.25  65.57  -      46.35  | 58.32  47.76  56.85  83.47  -
TABLE IV
Comparison to other state-of-the-art approaches on the NYUv2 test split, the SUNRGB-D test split, and the ScanNet test (benchmark) split. * indicates additional test-time augmentation.

NYUv2 (test):
  Approach                          Backbone                                    mIoU ↑
  OMNIVORE [37]                     Swin-T                                      47.9
                                    Swin-B                                      51.1
  ShapeConv [36]                    ResNet50                                    47.3
                                    ResNext101                                  50.2
  SA-Gate [25]                      2x ResNet50                                 50.4
  EMSAFormer (ours)                 SwinV2-T-128-Multi-Aug                      51.26

SUNRGB-D (test):
  ShapeConv [36]                    ResNet101                                   47.6
  AC-Net [20]                       3x ResNet50                                 48.1
  2.5D Conv [33]                    ResNet101                                   48.2
  ESANet [26]                       2x ResNet50                                 48.31
  EMSAFormer (ours)                 SwinV2-T-128-Multi-Aug (Sem(SegFormer))     48.82

ScanNet (test):
  FuseNet (from [23])               2x VGG16                                    53.5
  SSMA [23]                         2x mod. ResNet50                            57.7*
  EMSANet [2]                       2x ResNet101                                54.0
  EMSAFormer (ours)                 SwinV2-T-128-Multi-Aug (Sem(SegFormer))     56.4
REFERENCES

[1] T. Wengefeld, B. Schuetz, G. Girdziunaite, A. Scheidig, and H.-M. Gross, "The MORPHIA Project: First Results of a Long-Term User Study in an Elderly Care Scenario from Robotic Point of View," in Proc. of ISR, 2022.
[2] D. Seichter, S. Fischedick, M. Köhler, and H.-M. Gross, "Efficient Multi-Task RGB-D Scene Analysis for Indoor Environments," in Proc. of IJCNN, 2022, pp. 1-10.
[3] N. Silberman, D. Hoiem, P. Kohli, and R. Fergus, "Indoor Segmentation and Support Inference from RGBD Images," in Proc. of ECCV, 2012.
[4] S. Song, S. P. Lichtenberg, and J. Xiao, "SUN RGB-D: A RGB-D Scene Understanding Benchmark Suite," in Proc. of CVPR, 2015, pp. 567-576.
[5] A. Dai, A. X. Chang, M. Savva et al., "ScanNet: Richly-annotated 3D Reconstructions of Indoor Scenes," in Proc. of CVPR, 2017.
[6] K. He, X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," in Proc. of CVPR, 2016, pp. 770-778.
[7] M. Tan and Q. Le, "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks," in Proc. of ICML, 2019, pp. 6105-6114.
[8] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, and S. Xie, "A ConvNet for the 2020s," in Proc. of CVPR, 2022, pp. 770-778.
[9] A. Vaswani, N. Shazeer, N. Parmar et al., "Attention Is All You Need," in Proc. of NeurIPS, vol. 30, 2017.
[10] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding," arXiv preprint arXiv:1810.04805, 2018.
[11] T. Brown, B. Mann, N. Ryder et al., "Language Models Are Few-Shot Learners," in Proc. of NeurIPS, vol. 33, 2020.
[12] A. Dosovitskiy, L. Beyer, A. Kolesnikov et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale," in Proc. of ICLR, 2021.
[13] O. Russakovsky, W. Dong, R. Socher et al., "ImageNet Large Scale Visual Recognition Challenge," IJCV, 2015, pp. 211-252.
[14] Z. Liu, Y. Lin, Y. Cao et al., "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows," in Proc. of ICCV, 2021.
[15] E. Xie, W. Wang et al., "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers," in Proc. of NeurIPS, 2021.
[16] W. Wang, E. Xie, X. Li et al., "Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction Without Convolutions," in Proc. of ICCV, 2021, pp. 568-578.
[17] Z. Liu, H. Hu, Y. Lin et al., "Swin Transformer V2: Scaling Up Capacity and Resolution," in Proc. of CVPR, 2022.
[18] C. Hazirbas, L. Ma, C. Domokos, and D. Cremers, "FuseNet: Incorporating Depth into Semantic Segmentation via Fusion-based CNN Architecture," in Proc. of ACCV, 2016, pp. 213-228.
[19] J. Jiang, L. Zheng, F. Luo, and Z. Zhang, "RedNet: Residual Encoder-Decoder Network for Indoor RGB-D Semantic Segmentation," arXiv preprint arXiv:1806.01054, 2018.
[20] X. Hu, K. Yang, L. Fei, and K. Wang, "ACNet: Attention Based Network to Exploit Complementary Features for RGBD Semantic Segmentation," in Proc. of ICIP, 2019.
[21] S. Lee, S. J. Park, and K. S. Hong, "RDFNet: RGB-D Multi-Level Residual Feature Fusion for Indoor Semantic Segmentation," in Proc. of ICCV, 2017, pp. 4990-4999.
[22] Y. Xing, J. Wang, X. Chen, and G. Zeng, "Coupling Two-Stream RGB-D Semantic Segmentation Network by Idempotent Mappings," in Proc. of ICIP, 2019, pp. 1850-1854.
[23] A. Valada, R. Mohan, and W. Burgard, "Self-supervised Model Adaptation for Multimodal Semantic Segmentation," IJCV, 2019.
[24] F. Fooladgar and S. Kasaei, "Multi-Modal Attention-based Fusion Model for Semantic Segmentation of RGB-Depth Images," arXiv preprint arXiv:1912.11691, 2019.
[25] X. Chen, K.-Y. Lin, J. Wang et al., "Bi-directional Cross-Modality Feature Propagation with Separation-and-Aggregation Gate for RGB-D Semantic Segmentation," in Proc. of ECCV, 2020, pp. 561-577.
[26] D. Seichter, M. Köhler, B. Lewandowski, T. Wengefeld, and H.-M. Gross, "Efficient RGB-D Semantic Segmentation for Indoor Scene Analysis," in Proc. of ICRA, 2021, pp. 13525-13531.
[27] S.-W. Hung, S.-Y. Lo, and H.-M. Hang, "Incorporating Luminance, Depth and Color Information by a Fusion-Based Network for Semantic Segmentation," in Proc. of ICIP, 2019, pp. 2374-2378.
[28] Z. Li, Y. Gan, X. Liang, Y. Yu, H. Cheng, and L. Lin, "LSTM-CF: Unifying Context Modeling and Fusion with LSTMs for RGB-D Scene Labeling," in Proc. of ECCV. Springer, 2016, pp. 541-557.
[29] Q. Ha, K. Watanabe, T. Karasawa, Y. Ushiku, and T. Harada, "MFNet: Towards Real-Time Semantic Segmentation for Autonomous Vehicles with Multi-Spectral Scenes," in Proc. of IROS, 2017, pp. 5108-5115.
[30] Y. Li, J. Zhang, Y. Cheng, K. Huang, and T. Tan, "Semantics-guided Multi-Level RGB-D Feature Fusion for Indoor Semantic Segmentation," in Proc. of ICIP, 2017, pp. 1262-1266.
[31] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, and H. Adam, "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation," in Proc. of ECCV, 2018, pp. 801-818.
[32] W. Wang and U. Neumann, "Depth-Aware CNN for RGB-D Segmentation," in Proc. of ECCV, 2018, pp. 144-161.
[33] Y. Xing, J. Wang, X. Chen, and G. Zeng, "2.5D Convolution for RGB-D Semantic Segmentation," in Proc. of ICIP, 2019, pp. 1410-1414.
[34] Y. Xing, J. Wang, and G. Zeng, "Malleable 2.5D Convolution: Learning Receptive Fields along the Depth-axis for RGB-D Scene Parsing," in Proc. of ECCV, 2020, pp. 1-17.
[35] L.-Z. Chen, Z. Lin, Z. Wang, Y.-L. Yang, and M.-M. Cheng, "Spatial Information Guided Convolution for Real-Time RGBD Semantic Segmentation," arXiv preprint arXiv:2004.04534v1, 2020.
[36] M. Cao, H. Leng, D. Lischinski, D. Cohen-Or, C. Tu, and Y. Li, "ShapeConv: Shape-aware Convolutional Layer for Indoor RGB-D Semantic Segmentation," in Proc. of ICCV, 2021.
[37] R. Girdhar, M. Singh, N. Ravi et al., "Omnivore: A Single Model for Many Visual Modalities," in Proc. of CVPR, 2022.
[38] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid Scene Parsing Network," in Proc. of CVPR, 2017, pp. 2881-2890.
[39] L.-C. Chen, G. Papandreou, F. Schroff et al., "Rethinking Atrous Convolution for Semantic Image Segmentation," arXiv preprint arXiv:1706.05587, 2017.
[40] M. Oršić, I. Krešo, P. Bevandić, and S. Šegvić, "In Defense of Pre-trained ImageNet Architectures for Real-time Semantic Segmentation of Road-driving Images," in Proc. of CVPR, 2019, pp. 12607-12616.
[41] A. Kirillov, K. He, R. Girshick, C. Rother, and P. Dollár, "Panoptic Segmentation," in Proc. of CVPR, 2019, pp. 9404-9413.
[42] A. Kirillov, R. Girshick, K. He, and P. Dollár, "Panoptic Feature Pyramid Networks," in Proc. of CVPR, 2019, pp. 6399-6408.
[43] Y. Xiong, R. Liao, H. Zhao et al., "UPSNet: A Unified Panoptic Segmentation Network," in Proc. of CVPR, 2019, pp. 8818-8826.
[44] T.-J. Yang, M. D. Collins, Y. Zhu et al., "DeeperLab: Single-shot Image Parser," arXiv preprint arXiv:1902.05093, 2019.
[45] B. Cheng, M. D. Collins, Y. Zhu et al., "Panoptic-DeepLab: A Simple, Strong, and Fast Baseline for Bottom-Up Panoptic Segmentation," in Proc. of CVPR, 2020, pp. 12475-12485.
[46] H. Wang, Y. Zhu, H. Adam, A. Yuille, and L.-C. Chen, "MaX-DeepLab: End-to-End Panoptic Segmentation with Mask Transformers," in Proc. of CVPR, 2021, pp. 5463-5474.
[47] L. Beyer, A. Hermans, and B. Leibe, "Biternion Nets: Continuous Head Pose Regression from Discrete Training Labels," in Proc. of GCPR. Springer, 2015, pp. 157-168.
[48] B. Lewandowski, D. Seichter, T. Wengefeld et al., "Deep Orientation: Fast and Robust Upper Body Orientation Estimation for Mobile Robotic Applications," in Proc. of IROS, 2019, pp. 441-448.
[49] D. Seichter, B. Lewandowski, D. Höchemer, T. Wengefeld, and H.-M. Gross, "Multi-Task Deep Learning for Depth-based Person Perception in Mobile Robotics," in Proc. of IROS. IEEE, 2020, pp. 10497-10504.
[50] A. Mousavian, D. Anguelov, J. Flynn, and J. Kosecka, "3D Bounding Box Estimation Using Deep Learning and Geometry," in Proc. of CVPR, 2017, pp. 7074-7082.
[51] L. Liu, J. Lu, C. Xu, Q. Tian, and J. Zhou, "Deep Fitting Degree Scoring Network for Monocular 3D Object Detection," in Proc. of CVPR, 2019, pp. 1057-1066.
[52] Y. Zhang, J. Lu, and J. Zhou, "Objects Are Different: Flexible Monocular 3D Object Detection," in Proc. of CVPR, 2021, pp. 3289-3298.
[53] S. Vandenhende, S. Georgoulis, W. Van Gansbeke et al., "Multi-Task Learning for Dense Prediction Tasks: A Survey," TPAMI, 2021.
[54] NVIDIA, "FasterTransformer," github.com/NVIDIA/FasterTransformer, retrieved 04.02.2023, 2022.
[55] A. Paszke, S. Gross, F. Massa et al., "PyTorch: An Imperative Style, High-Performance Deep Learning Library," in Proc. of NeurIPS. Curran Associates, Inc., 2019, pp. 8024-8035.
[56] M. Roberts, J. Ramapuram, A. Ranjan et al., "Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding," in Proc. of ICCV, 2021.
| [
"https://github.com/TUI-NICR/EMSAFormer"
] |
[
"A 4d non-BPS NS-NS microstate"
] | [
"Soumangsu Chakraborty soumangsuchakraborty@gmail.com \nInstitut de Physique Théorique\nUniversité Paris-Saclay\nCNRS\nCEA, Orme des Merisiers\n91191 Gif-sur-Yvette, France\n\nInstitute for Theoretical Physics\nUniversity of Amsterdam, 1090 GL Amsterdam\nThe Netherlands",
"Shaun D. Hampton shaun.hampton@ipht.fr \nInstitut de Physique Théorique\nUniversité Paris-Saclay\nCNRS\nCEA, Orme des Merisiers\n91191 Gif-sur-Yvette, France"
] | [
"Institut de Physique Théorique\nUniversité Paris-Saclay\nCNRS\nCEA, Orme des Merisiers\n91191 Gif-sur-Yvette, France",
"Institute for Theoretical Physics\nUniversity of Amsterdam, 1090 GL Amsterdam\nThe Netherlands"
] | [] | We construct a two-parameter four-dimensional non-BPS NS-NS smooth microstate solution that asymptotes to flat spacetime with a linear dilaton in type II superstring theory. From the microscopic point of view, the background is made out of a certain number of decoupled (i.e. $g_s \to 0$) NS5 branes wrapping $T^3 \times S^1 \times S^1$ with fundamental strings wrapping non-contractable cycles of $S^1 \times S^1$ with integer momentum modes along them. We show that the perturbative worldsheet theory in this background is given by a null-gauged WZW model. We also show that the consistency of the worldsheet theory imposes non-trivial constraints on the supergravity background. | null | [
"https://export.arxiv.org/pdf/2306.03167v1.pdf"
] | 259,088,552 | 2306.03167 | afabcefe0d99e81a85a483c1f6314ab7c44e2950 |
A 4d non-BPS NS-NS microstate

5 Jun 2023

Soumangsu Chakraborty (soumangsuchakraborty@gmail.com)
Institut de Physique Théorique, Université Paris-Saclay, CNRS, CEA, Orme des Merisiers, 91191 Gif-sur-Yvette, France
Institute for Theoretical Physics, University of Amsterdam, 1090 GL Amsterdam, The Netherlands

Shaun D. Hampton (shaun.hampton@ipht.fr)
Institut de Physique Théorique, Université Paris-Saclay, CNRS, CEA, Orme des Merisiers, 91191 Gif-sur-Yvette, France

We construct a two-parameter four-dimensional non-BPS NS-NS smooth microstate solution that asymptotes to flat spacetime with a linear dilaton in type II superstring theory. From the microscopic point of view, the background is made out of a certain number of decoupled (i.e. $g_s \to 0$) NS5 branes wrapping $T^3 \times S^1 \times S^1$ with fundamental strings wrapping non-contractable cycles of $S^1 \times S^1$ with integer momentum modes along them. We show that the perturbative worldsheet theory in this background is given by a null-gauged WZW model. We also show that the consistency of the worldsheet theory imposes non-trivial constraints on the supergravity background.

... where the gauged $U(1)_L \times U(1)_R$ is null with embedding ... where $J^3$, $\bar{J}^3$ are the holomorphic and anti-holomorphic timelike currents of $SL(2,\mathbb{R})$, and $J^{x,t,y}$, $\bar{J}^{x,t,y}$ are the holomorphic and anti-holomorphic currents of $U(1)_x$, $\mathbb{R}_t$, $U(1)_y$ of the upstairs theory. Let $n_5$ be the level of the $SL(2,\mathbb{R})$ supersymmetric WZW model. The coordinates $x$, $y$ that parametrize $U(1)_{x,y}$, respectively, are compact with periodicities
$$x \sim x + 2\pi R_x, \qquad y \sim y + 2\pi R_y. \tag{2.3}$$
The currents
$$J^{3,x,t,y} \equiv i J^{3,x,t,y}, \qquad \bar{J}^{3,x,t,y} \equiv i \bar{J}^{3,x,t,y}, \tag{2.4}$$
are normalized such that they satisfy the following OPEs:
$$J^3(z) J^3(0) \sim \frac{-n_5/2}{z^2}, \qquad J^x(z) J^x(0) \sim \frac{1/2}{z^2}, \qquad J^t(z) J^t(0) \sim \frac{-1/2}{z^2},$$
Introduction
String theory provides a rich framework to address many questions about the microscopic nature of black holes [1][2][3][4][5][6]. One of its most significant achievements is to count the number of BPS states of a certain D1-D5 system and show agreement with the Bekenstein-Hawking entropy of the corresponding black hole in the supergravity theory [6,7]. This strongly suggests that there should exist a realization of black hole microstates within gravity. Since then, much work has been done to understand this in a concrete way.
The central idea in this direction is the fuzzball proposal [8-10], which says that generic black hole microstates are horizonless configurations of fundamental objects in string theory that have the same mass and charge as the corresponding black hole¹. The first smooth solutions to be constructed, [8,9] and [13], were two-charge configurations. The geometries in [8] were derived in the F1-P system in the M^{4,1} × S^1 × T^4 setup, with an F1 string carrying momentum P placed along S^1 and given a nontrivial profile in the noncompact space M^{4,1}. Through dualities, this configuration can be mapped to the D1-D5 system, with D1 branes wrapping S^1 and D5 branes wrapping S^1 × T^4, where the branes now have a nontrivial profile in M^{4,1} [9,13]. A further refinement of the fuzzball proposal delineates i) generic and possibly non-geometric configurations, ii) singular configurations, and iii) smooth geometries. The third category has enjoyed a very fruitful exploration: these are called microstate geometries, configurations that are horizonless and smooth, admitting a nice realization in supergravity. They have been studied in a variety of scenarios, e.g. [14-22]. Typically, three-charge microstate geometries are constructed in the D1-D5-P system, where the momentum charge P is a wave traveling along the common D1-D5 direction S^1. In [23,24] the authors used a consistent truncation from 10d down to 3d to describe a certain class of these microstate geometries. Similarly, from this 3d perspective, smooth, purely NS-NS microstate geometries were constructed in [25], and in [26] a set of NS-NS microstate geometries was derived in 10d. Though numerous examples have been constructed, the entropy coming from these configurations is subleading in powers of the charges [27,28]. This is because the torus T^4, on which the remaining directions of the D5 brane are wrapped, is largely a spectator in the full theory, containing no nontrivial dynamics. The reason is that, in the construction of such solutions, one typically smears over the torus directions, resulting, upon reduction, in a theory described by 6d supergravity. The only effect along the T^4 is typically an overall breathing mode, in which its volume oscillates according to the motion of the momentum wave along the common D1-D5 circle. In [29] the authors addressed a possibly singular limit of the three-charge solution in which a horizon might have formed. To resolve this, they constructed a geometry in the NS5-F1 frame with a nontrivial density profile along the common circle carrying the momentum P, and with spherical symmetry in the spacetime M^{4,1}. This density profile, coming from a bound state of D0-D4 branes, averts the formation of a macroscopic horizon by causing the common circle to pinch off beforehand. Instead, the horizon area was shown to be of zero size, so that near this point the globally three-charge solution looked locally two-charge. This configuration belongs to the second class of solutions mentioned previously, namely those which are singular and even degenerate, corresponding to brane sources.

¹ See [11,12] and references therein for a recent review of the subject.
This work extended the investigation of black hole microstates into the regime in which the T^4 is no longer treated democratically but is allowed to have nontrivial behavior. Recent analysis [30,31] suggests that, in order to obtain the full entropy, at least parametrically in the charges, one must consider nontrivial dynamics along at least one direction of T^4. A step in this direction was made in [30], where supersymmetry projectors were analyzed for three-charge microstates in which one of the torus directions was taken to be nontrivial while maintaining spherical symmetry in the noncompact directions. Two conditions were imposed: 1) that locally the object be half-BPS, a criterion that, so far, indicates that an object is horizonless, and 2) that the global symmetries be consistent with the presence of three global charges. These are statements coming purely from branes and supersymmetry and so should be valid irrespective of a geometrical description. This system of branes is argued to capture, parametrically, the three-charge entropy. In [31] a similar procedure was performed, but for a more general scenario where the underlying constituents had nontrivial behavior along the common circle S^1, along one spacetime direction in M^{4,1}, and along one direction in T^4. See [32] for more recent work on locally enhanced supersymmetry in the context of three-charge black hole microstates.
BPS configurations have been the focus of many studies of black hole microstates, because supersymmetry provides a reliable identification between gravitational physics and the dual holographic field theory; for works in this direction see [33-38]. In order to understand configurations whose properties are closer to the black holes we observe in nature, it is instructive to also study non-BPS objects. From the perspective of black hole microstructure this has traditionally been a challenging task; however, progress has been made in this direction. The construction of a non-BPS microstate was originally carried out in [39], in which the configuration was highly rotating. More recently, non-BPS solutions were computed in [40], again using a consistent truncation from the full 10d theory to a 3d one. For work on the holography of non-BPS solutions see [37]. A set of smooth non-extremal microstate geometries was constructed in [41]. Sets of smooth non-BPS geometries were constructed in a variety of scenarios using nontrivial topological structures in [42-45], with a smooth Schwarzschild-like geometry constructed in [46]. While microstate geometries have yielded significant results in recent years, one can ask what happens beyond the regime of supergravity. This is where string worldsheet methods play a significant role. Due to the difficulty of constructing the full worldsheet theory with Ramond-Ramond flux, worldsheet methods are most easily explored in NS-NS backgrounds. In [47] the authors constructed the worldsheet string theory corresponding to two-charge BPS microstates of a certain NS5-F1 configuration, often termed supertube geometries [48,49]. Later, the construction of [47] was generalized to three-charge non-BPS microstates in [38,50-54]. The worldsheet description of the above-mentioned microstates is given by a certain class of null cosets.
In addition to the special class of null cosets discussed in the previous paragraph in the context of microstate geometries, worldsheet sigma models obtained by various gaugings of SL(2,R) WZW models have always played a very important role in understanding black hole physics [55-58], solvable non-AdS holographic models [58-60], the resolution of cosmological singularities [61-63], and condensed matter applications [64]. In this paper, we take an approach similar to that of [47] to construct a 4d, non-BPS, NS-NS microstate solution that asymptotes to flat spacetime with a dilaton field linear in the radial direction. As in [47], perturbative string theory in our microstate background is described by a certain null coset (2.1). We show that the consistency of the worldsheet theory ensures that the geometry is smooth (i.e. has no finite-area horizon) and is labeled by four integers related to the background charges. Changing the background charges (in a way allowed by the consistency of the worldsheet theory) yields a different microstate solution. A closer inspection of the background charges reveals that the geometry is sourced by a certain number (greater than one) of coincident NS5 branes wrapping T^3 × S^1 × S^1 and F1 strings wrapping S^1 × S^1, with winding and momentum charges along the two cycles. The smoothness of the geometry further imposes an algebraic constraint on the F1 winding and momentum charges.
As discussed earlier, most NS5-F1 systems studied in the literature (or equivalently D1-D5 systems in the S-dual frame) involve wrapping the fivebranes on T^4 × S^1 and the F1 strings on S^1. In all such constructions, the T^4 is a mere spectator. From the supergravity point of view, exciting one or more moduli of the T^4 is technically challenging. One of the novel features of our worldsheet construction is that it bypasses this technical difficulty and puts winding and momentum charges along one of the circles of T^4. In Appendix A, we generalize our construction even further, exciting all the moduli of T^4 and putting momentum in the space transverse to the fivebranes (i.e. separating the fivebranes along a transverse circle and giving them some rotational angular momentum). One can think of these more general solutions as the fully backreacted supergravity backgrounds discussed in [65] with pure NS-NS flux.
The paper is organized as follows. In section 2, we construct the null gauged sigma model and read off the spacetime metric, the B-field, and the dilaton. In section 3, we take an algebraic approach to construct the string theory spectrum and derive the various worldsheet constraints to be imposed on the supergravity solution derived in section 2. In section 4, we compute the various background charges and impose the worldsheet constraints on the supergravity solution and investigate its consequences. In section 5, we interpret the supergravity solution as an RG flow and discuss its AdS decoupling limit. Finally, in section 6, we discuss our results and point out various directions for future research.
Null gauged WZW model
In this section, we would like to study type II superstrings in the coset background
    [SL(2,R) × U(1)_x × R_t × U(1)_y] / [U(1)_L × U(1)_R] × S^3 × T^3 ,    (2.1)

where the gauged U(1)_L × U(1)_R is null, with embedding

    J = l_1 J^3 + l_2 J^x + l_3 J^t + l_4 J^y ,    J̄ = r_1 J̄^3 + r_2 J̄^x + r_3 J̄^t + r_4 J̄^y ,    (2.2)

where J^3, J̄^3 are the holomorphic and anti-holomorphic timelike currents of SL(2,R), and J^{x,t,y}, J̄^{x,t,y} are the holomorphic and anti-holomorphic currents of U(1)_x, R_t, U(1)_y of the upstairs theory. Let n_5 be the level of the SL(2,R) supersymmetric WZW model. The coordinates x, y that parametrize U(1)_{x,y} are compact, with periodicities

    x ∼ x + 2πR_x ,    y ∼ y + 2πR_y .    (2.3)

The currents

    J^{3,x,t,y} ≡ i 𝒥^{3,x,t,y} ,    J̄^{3,x,t,y} ≡ i 𝒥̄^{3,x,t,y} ,    (2.4)

where 𝒥, 𝒥̄ denote the natural WZW currents, are normalized such that they satisfy the following OPEs²:

    J^3(z) J^3(0) ∼ −(n_5/2)/z² ,    J^x(z) J^x(0) ∼ (1/2)/z² ,
    J^t(z) J^t(0) ∼ −(1/2)/z² ,      J^y(z) J^y(0) ∼ (1/2)/z² ,    (2.5)

and similarly for the anti-holomorphic components J̄^{3,x,t,y}. The detailed derivation of the gauged WZW model can be found in Appendix A, with further generalizations. Since the gauge currents J, J̄ are null, the JJ and J̄J̄ OPEs are regular. This implies

    −n_5 l_1² + l_2² − l_3² + l_4² = 0 ,    −n_5 r_1² + r_2² − r_3² + r_4² = 0 .    (2.6)

Without loss of generality, one can set l_1 = r_1 = 1. This implies

    l_2² − l_3² + l_4² = n_5 ,    r_2² − r_3² + r_4² = n_5 .    (2.7)

² Unless otherwise stated, we work in the convention α′ = 1.
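As a quick check of (2.6), one can work out the singular part of the J(z)J(0) OPE directly from (2.2) and (2.5); this is a one-line computation which we spell out for the reader's convenience:

    J(z) J(0) = [l_1 J^3 + l_2 J^x + l_3 J^t + l_4 J^y](z) [l_1 J^3 + l_2 J^x + l_3 J^t + l_4 J^y](0)
              ∼ [ l_1²(−n_5/2) + l_2²(1/2) + l_3²(−1/2) + l_4²(1/2) ] / z² ,

so demanding that the 1/z² pole vanish gives −n_5 l_1² + l_2² − l_3² + l_4² = 0, and the same computation with l → r reproduces the second condition in (2.6).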
The null gauge currents then take the form

    J = J^3 + l_2 J^x + l_3 J^t + l_4 J^y ,    J̄ = J̄^3 + r_2 J̄^x + r_3 J̄^t + r_4 J̄^y .    (2.8)
In order to compute the gauged sigma model, let's consider g ∈ SL(2,R) ≃ SU(1,1) to be parametrized as

    g = e^{(i/2)(τ−σ)σ_3} e^{ρσ_1} e^{(i/2)(τ+σ)σ_3} .    (2.9)
The gauged WZW action on

    [SL(2,R)_{n_5} × U(1)_x × R_t × U(1)_y] / [U(1)_L × U(1)_R]    (2.10)

is given by [66]

    S = S[g] + S[x] + S[t] + S[y] + (1/2π) ∫ d²z ( A J̄ + Ā J − M A Ā ) ,    (2.11)

where A, Ā are the gauge fields and

    M = 2( n_5 cosh 2ρ − l_2 r_2 + l_3 r_3 − l_4 r_4 ) .    (2.12)
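The gauge fields enter (2.11) quadratically and without derivatives, so they can be integrated out through their equations of motion; the following short computation (a sketch of this standard step, in the conventions above) shows where the 1/M of the resulting sigma model comes from:

    δS/δĀ = 0 ⇒ A = J/M ,    δS/δA = 0 ⇒ Ā = J̄/M ,

and substituting back,

    A J̄ + Ā J − M A Ā = J J̄ / M ,

so the gauged action becomes S[g] + S[x] + S[t] + S[y] + (1/2π) ∫ d²z J J̄/M. At one loop, the Gaussian integration over A, Ā also produces the dilaton profile e^{2Φ} = e^{2Φ_0}/Δ with Δ = M/2, which appears in (2.18) below.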
Integrating out the gauge fields in (2.11) and fixing the gauge τ = σ = 0 (see Appendix A for details), the gauged action reduces to

    S = (1/2π) ∫ d²z [ −(1 − 2l_3 r_3/Δ) ∂t ∂̄t + (1 + 2l_2 r_2/Δ) ∂x ∂̄x + (1 + 2l_4 r_4/Δ) ∂y ∂̄y + n_5 ∂ρ ∂̄ρ
        + (2l_3 r_2/Δ) ∂t ∂̄x + (2l_2 r_3/Δ) ∂x ∂̄t + (2l_3 r_4/Δ) ∂t ∂̄y + (2l_4 r_3/Δ) ∂y ∂̄t
        + (2l_2 r_4/Δ) ∂x ∂̄y + (2l_4 r_2/Δ) ∂y ∂̄x ] .    (2.17)
Using standard worldsheet techniques, one can easily read off the metric, the B-field, and the dilaton (after lifting to 10 dimensions and restoring α′ = l_s²) as

    ds²/α′ = −(1 − 2l_3 r_3/Δ) dt² + n_5 dρ² + (1 + 2l_2 r_2/Δ) dx² + (1 + 2l_4 r_4/Δ) dy²
        + [2(l_3 r_2 + l_2 r_3)/Δ] dt dx + [2(l_4 r_3 + l_3 r_4)/Δ] dt dy + [2(l_4 r_2 + l_2 r_4)/Δ] dx dy
        + ds²_{S³}/α′ + ds²_{T³}/α′ ,

    B/α′ = [(l_3 r_2 − l_2 r_3)/Δ] dt ∧ dx + [(l_3 r_4 − l_4 r_3)/Δ] dt ∧ dy + [(l_2 r_4 − l_4 r_2)/Δ] dx ∧ dy
        + n_5 cos²θ dφ ∧ dψ ,

    e^{2Φ} = e^{2Φ_0}/Δ ,    (2.18)

where Φ_0 is the background value of the dilaton and

    Δ = n_5 cosh 2ρ − l_2 r_2 + l_3 r_3 − l_4 r_4 .    (2.19)
The metric on S³ is given by

    ds²_{S³}/α′ = n_5 ( dθ² + cos²θ dψ² + sin²θ dφ² ) ,    (2.20)

and ds²_{T³} denotes the standard flat metric on T³. One can explicitly check that the supergravity background (2.18) satisfies the 10d type II supergravity equations of motion for all values of l_{2,3,4}, r_{2,3,4} subject to the null-gauge constraints (2.7). In the next section, we will show that the consistency of the worldsheet theory in the coset background (2.1) imposes further constraints on the gauge parameters l_{2,3,4}, r_{2,3,4}.
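For readers less familiar with the step from (2.17) to (2.18), recall that in a sigma model S = (1/2π) ∫ d²z E_{μν} ∂X^μ ∂̄X^ν, the symmetric part of E_{μν} is the metric and the antisymmetric part is the B-field. As an illustrative example (in the conventions used here), the tx sector of (2.17) has E_{tx} = 2l_3 r_2/Δ and E_{xt} = 2l_2 r_3/Δ, so

    G_{tx} = (1/2)(E_{tx} + E_{xt}) = (l_3 r_2 + l_2 r_3)/Δ ,
    B_{tx} = (1/2)(E_{tx} − E_{xt}) = (l_3 r_2 − l_2 r_3)/Δ ,

reproducing the dt dx term of the metric (which appears as 2G_{tx} dt dx) and the dt ∧ dx term of the B-field in (2.18).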
Worldsheet constraints on supergravity
In this section, we will take an algebraic approach to construct the spectrum of physical operators of type II string theory in (3.1) with pure NS-NS flux and derive the constraints on the geometry imposed by the consistency of the worldsheet theory. To begin with, let's consider type II string theory in
    [SL(2,R)_{k_sl} × U(1)_x × R_t × U(1)_y] / [U(1)_L × U(1)_R] × SU(2)_{k_su} × T³ ,    (3.1)

where k_sl and k_su are respectively the (supersymmetric) WZW levels of SL(2,R) and SU(2). Criticality of the worldsheet theory (i.e. the requirement that the total worldsheet central charge of the matter sector be 15) requires

    k_sl = k_su ≡ n_5 .    (3.2)
The null U(1)_L × U(1)_R gauge currents are generated by

    J = J^3 + l_2 J^x + l_3 J^t + l_4 J^y ,    J̄ = J̄^3 + r_2 J̄^x + r_3 J̄^t + r_4 J̄^y .    (3.3)

The normalizations of the currents J^{3,x,t,y} are such that they satisfy the OPE algebra (2.5); switching the chiralities, one equivalently obtains the OPEs of the anti-holomorphic currents J̄^{3,x,t,y}. Regularity of the JJ and J̄J̄ OPEs implies

    l_2² − l_3² + l_4² = r_2² − r_3² + r_4² = n_5 .    (3.4)
Here, as before, we set l_1 = r_1 = 1 without loss of generality. The vertex operators of the coset sigma model (3.1) in the (−1,−1) picture are given by

    V(z, z̄) = e^{−ϕ} e^{−ϕ̃} V^{j;w}_{m,m̄} e^{i(P_L x_L + P_R x_R)} e^{iEt} e^{i(Q_L y_L + Q_R y_R)} V_M ,    (3.5)

where ϕ, ϕ̃ are the worldsheet superconformal fields coming from the bosonization of the βγ and β̃γ̃ systems, V^{j;w}_{m,m̄} is a spectrally flowed (with integer w) SL(2,R) vertex operator from the discrete or continuous series representation, e^{iEt}, e^{i(P_L x_L + P_R x_R)}, e^{i(Q_L y_L + Q_R y_R)} are respectively the plane-wave vertex operators in R_t, U(1)_x, U(1)_y, and V_M is a vertex operator of the SU(2)_{n_5} × U(1)³ WZW model with dimensions (Δ^M_L, Δ^M_R). The charges P_{L,R}, Q_{L,R} are given by

    P_{L,R} = n_x/R_x ± w_x R_x ,    Q_{L,R} = n_y/R_y ± w_y R_y ,    (3.6)

where n_{x,y} and w_{x,y} are integers and E ∈ R.
Using the parafermionic decomposition [67] of SL(2,R)_{n_5} ≃ SL(2,R)_{n_5}/U(1) × U(1)_Y, one can bosonize the timelike SL(2,R)_{n_5} currents J³, J̄³ as

    J³ = −√(n_5/2) ∂Y_L ,    J̄³ = −√(n_5/2) ∂̄Y_R ,    (3.7)

where Y = Y_L + Y_R is a free field normalized such that

    Y_L(z) Y_L(0) ∼ −log z ,    Y_R(z̄) Y_R(0) ∼ −log z̄ .    (3.8)

The spectrally flowed SL(2,R)_{n_5} vertex operator V^{j;w}_{m,m̄}, under the parafermionic decomposition, can be expressed as [67]

    V^{j;w}_{m,m̄} = Φ^j_{m,m̄} e^{√(2/n_5) [ (m + (n_5/2)w) Y_L + (m̄ + (n_5/2)w) Y_R ]} ,    (3.9)

where Φ^j_{m,m̄} is the parafermionic part of the SL(2,R) vertex operator, whose OPEs with J³, J̄³ are regular. Using (3.7), (3.8), and (3.9), it is easy to show that

    J³(z) V^{j;w}_{m,m̄}(0) ∼ [ (m + n_5 w/2)/z ] V^{j;w}_{m,m̄}(0) .    (3.10)
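Let us make this last step explicit, since the same manipulation recurs below. With α ≡ √(2/n_5)(m + n_5 w/2) the Y_L-charge of (3.9), the free-field OPE ∂Y_L(z) e^{αY_L}(0) ∼ −(α/z) e^{αY_L}(0) together with (3.7) gives

    J³(z) V^{j;w}_{m,m̄}(0) = −√(n_5/2) ∂Y_L(z) · Φ^j_{m,m̄} e^{αY_L + …}(0)
        ∼ √(n_5/2) · √(2/n_5) (m + n_5 w/2) (1/z) V^{j;w}_{m,m̄}(0)
        = [ (m + n_5 w/2)/z ] V^{j;w}_{m,m̄}(0) ,

which is (3.10); the parafermion Φ^j_{m,m̄} is a spectator since its OPE with J³ is regular.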
Similarly, the OPEs of the currents J^{x,t,y} with the plane-wave vertex operators are given by

    J^x(z) e^{i(P_L x_L + P_R x_R)}(0) ∼ [ (P_L/2)/z ] e^{i(P_L x_L + P_R x_R)}(0) ,
    J^t(z) e^{iEt}(0) ∼ [ (−E/2)/z ] e^{iEt}(0) ,
    J^y(z) e^{i(Q_L y_L + Q_R y_R)}(0) ∼ [ (Q_L/2)/z ] e^{i(Q_L y_L + Q_R y_R)}(0) .    (3.11)

Switching the chiralities, one obtains the equivalent OPEs of the anti-holomorphic currents with the vertex operators.
Physical vertex operators of the form (3.5) must be gauge invariant. This can be imposed by demanding that string states described by the vertex operators (3.5) be annihilated by the null gauge charges

    Q_B = ∮ dz J ,    Q̄_B = ∮ dz̄ J̄ .    (3.12)

An equivalent way of saying this is that the OPEs of the null currents (3.3) with the vertex operators (3.5) are regular. This imposes the null-gauge constraints

    m + n_5 w/2 + l_2 P_L/2 − l_3 E/2 + l_4 Q_L/2 = 0 ,
    m̄ + n_5 w/2 + r_2 P_R/2 − r_3 E/2 + r_4 Q_R/2 = 0 .    (3.13)
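The first line of (3.13) is simply the sum of the simple poles collected above: acting with J = J³ + l_2 J^x + l_3 J^t + l_4 J^y on (3.5), the SL(2,R) factor contributes m + n_5 w/2 by (3.10), while (3.11) contributes l_2 P_L/2, l_3 (−E/2), and l_4 Q_L/2, so regularity of the J·V OPE requires

    m + n_5 w/2 + l_2 P_L/2 − l_3 E/2 + l_4 Q_L/2 = 0 ,

and the anti-holomorphic computation, with (l_2, l_3, l_4, m, P_L, Q_L) → (r_2, r_3, r_4, m̄, P_R, Q_R), gives the second line.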
Note that a detailed analysis of the BRST quantization of the gauged WZW model (3.1) would give cQ_B, c̄Q̄_B, with Q_B, Q̄_B as in (3.12), as the BRST charges that square to zero, where c, c̄ are the holomorphic and anti-holomorphic fields of the bc and b̄c̄ systems. Thus the null-gauge constraints (3.13) can also be thought of as BRST constraints on the vertex operators. We would like to emphasize that cQ_B, c̄Q̄_B are BRST charges of the gauged WZW model and not of the worldsheet string theory; the worldsheet BRST charges are constructed in Appendix B.
We are interested in vertex operators (3.5) that live in the physical Hilbert space of string theory in (3.1). This means that, in addition to the null-gauge constraints (3.13), the vertex operators must satisfy the Virasoro constraints

    −j(j+1)/n_5 − m w − n_5 w²/4 + P_L²/4 − E²/4 + Q_L²/4 + N_L + Δ^M_L = 1/2 ,
    −j(j+1)/n_5 − m̄ w − n_5 w²/4 + P_R²/4 − E²/4 + Q_R²/4 + N_R + Δ^M_R = 1/2 ,    (3.14)
where j parametrizes the quadratic Casimir of SL(2,R) and N_{L,R} are the oscillator levels in AdS₃. Naively, one might think that vertex operators (3.5) satisfying the null-gauge constraints (3.13) and the Virasoro constraints (3.14) constitute all the states in the Hilbert space of the string theory. But, as we argue next, there are still some residual gauge redundancies that need to be fixed. To realize the residual gauge freedom, let's define the following fields:

    H_L = −i√(n_5/2) Y_L + l_2 x_L + l_3 t + l_4 y_L ,
    H_R = −i√(n_5/2) Y_R + r_2 x_R + r_3 t + r_4 y_R .    (3.15)

It is easy to check that the OPEs of H_{L,R} with the null gauge currents J, J̄, (3.3), are regular. This means that a physical vertex operator V, (3.5), and V·e^{iα(H_L + H_R)} are gauge equivalent for all α ∈ R. This implies the identifications

    (w, P_L, E, Q_L) ∼ (w, P_L, E, Q_L) + (α, α l_2, α l_3, α l_4) ,
    (w, P_R, E, Q_R) ∼ (w, P_R, E, Q_R) + (α, α r_2, α r_3, α r_4) .    (3.16)
The identifications (3.16) impose further constraints on α, l_{2,3,4}, r_{2,3,4}:

    α ∈ Z ,    α l_2 = P̃_L ,    α l_4 = Q̃_L ,    α r_2 = P̃_R ,    α r_4 = Q̃_R ,    l_3 = r_3 ,    (3.17)

where

    P̃_{L,R} = ñ_x/R_x ± w̃_x R_x ,    Q̃_{L,R} = ñ_y/R_y ± w̃_y R_y ,    (3.18)

with ñ_{x,y}, w̃_{x,y} ∈ Z. Without loss of generality, one can set α = 1. Solving (3.16)-(3.18) and (3.4), one obtains³

    l_2 = ñ_x/R_x + w̃_x R_x ,    l_4 = ñ_y/R_y + w̃_y R_y ,
    r_2 = ñ_x/R_x − w̃_x R_x ,    r_4 = ñ_y/R_y − w̃_y R_y ,
    l ≡ l_3 = r_3 = √(P̃_L² + Q̃_L² − n_5) = √(P̃_R² + Q̃_R² − n_5)
      = √( ñ_x²/R_x² + ñ_y²/R_y² + w̃_x² R_x² + w̃_y² R_y² − n_5 ) .    (3.19)

³ Note that the worldsheet constraints (3.19) on the gauge parameters l_{2,3,4}, r_{2,3,4} can also be obtained by directly analyzing the null-gauge constraints (3.13), the Virasoro constraints (3.14), and the periodicities of the vertex operators (3.5). See [50] for an elaborate discussion.
In (3.19) we have chosen the positive branch for l_3, r_3; the negative branch would simply correspond to t → −t. The condition l_3 = r_3 imposes the constraint

    P̃_L² − P̃_R² + Q̃_L² − Q̃_R² = 0 ,    (3.20)

implying that (ñ_{x,y}, w̃_{x,y}) lie on a null cone in Γ^{2,2}. Substituting (3.18) into (3.20), one obtains

    ñ_x w̃_x + ñ_y w̃_y = 0 .    (3.21)
The reality of the gauge angle l in (3.19) further imposes the constraint

    ñ_x²/R_x² + ñ_y²/R_y² + w̃_x² R_x² + w̃_y² R_y² ≥ n_5 ,    (3.22)

for all integer values of ñ_{x,y} and w̃_{x,y} satisfying (3.21). Thus, the allowed values of R_{x,y} depend heavily on the choice of the integers ñ_{x,y} and w̃_{x,y}.
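To illustrate (3.21) and (3.22) with a concrete (and, to be clear, purely illustrative) choice of charges, take (ñ_x, ñ_y, w̃_x, w̃_y) = (1, 1, 1, −1), for which ñ_x w̃_x + ñ_y w̃_y = 1 − 1 = 0, as required by (3.21). Then (3.22) becomes

    1/R_x² + 1/R_y² + R_x² + R_y² ≥ n_5 ,

which is automatically satisfied for any radii when n_5 ≤ 4 (since a² + 1/a² ≥ 2), while for larger n_5 it excludes a window of radii around R_x = R_y = 1.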
Supergravity background
In this section, we will study the supergravity background (2.18), calculate the conserved charges, and impose the worldsheet constraints derived in the previous section.
We would like to stress the fact that the solution (2.18) satisfies the type II supergravity equations of motion for all values of the gauge angles l 2,3,4 , r 2,3,4 subject to the constraint (3.4), but the consistency of the worldsheet theory imposes further constraints (3.19)-(3.22) on the gauge angles which are otherwise not manifest at the supergravity level.
To begin with, let's recall the supergravity background derived in section 2:

    ds²/α′ = −(1 − 2l_3 r_3/Δ) dt² + n_5 dρ² + (1 + 2l_2 r_2/Δ) dx² + (1 + 2l_4 r_4/Δ) dy²
        + [2(l_3 r_2 + l_2 r_3)/Δ] dt dx + [2(l_4 r_3 + l_3 r_4)/Δ] dt dy + [2(l_4 r_2 + l_2 r_4)/Δ] dx dy
        + ds²_{S³}/α′ + ds²_{T³}/α′ ,

    B/α′ = [(l_3 r_2 − l_2 r_3)/Δ] dt ∧ dx + [(l_3 r_4 − l_4 r_3)/Δ] dt ∧ dy + [(l_2 r_4 − l_4 r_2)/Δ] dx ∧ dy
        + n_5 cos²θ dφ ∧ dψ ,

    e^{2Φ} = e^{2Φ_0}/Δ ,    (4.1)

with

    Δ = n_5 cosh 2ρ − l_2 r_2 + l_3 r_3 − l_4 r_4 .    (4.2)

The background (4.1) has a boundary at ρ → ∞. Near the boundary, (4.1) asymptotes to two-dimensional flat spacetime times two circles with finite radii R_{x,y}, and a dilaton that grows linearly in ρ:

    ds²/α′ = −dt² + n_5 dρ² + dx² + dy² ,    B = 0 ,    Φ = Φ̃_0 − ρ ,    (4.3)
where Φ̃_0 is a constant. The IR limit of the solution (4.1) is obtained by taking the limit R_y → ∞ keeping R_x fixed⁴. This limit is a bit tricky and should be considered only after imposing all the worldsheet constraints on the supergravity solution; we postpone this discussion to section 5.
In the discussion that follows, we will argue that the supergravity background (4.1) is a microstate solution and derive the constraints under which the geometry is smooth (i.e. has no horizon). The absence of a horizon in microstate geometries is a result of topological changes within the geometry⁵. When branes wrap compact directions, their tension makes those directions tend to shrink. Placing momentum along the branes produces an opposing effect, making them want to expand. The balance between these two effects stabilizes the size of the compact directions, allowing a horizon to form. For microstate geometries with the same global charges, however, the compact directions are stabilized in a certain region of the geometry, but not all the way down to the horizon scale. Rather, the momentum dilutes in such a way as to allow the compact directions to shrink to zero precisely as one approaches the would-be horizon, allowing the space to end smoothly (up to an orbifold singularity at ρ = 0). Therefore, let us analyze our supergravity background and show that, under suitable constraints, it is a non-BPS microstate geometry: a solution that is non-BPS, smooth, and horizonless. The analysis below is similar to the one that appears in [53]. Let's begin by looking at the induced metric (in the string frame) on a constant ρ = ρ_0 surface at fixed time, and then consider the limit ρ_0 → 0. This is given by
    lim_{ρ_0→0} det [ 1 + 2l_2 r_2/Δ           (l_4 r_2 + l_2 r_4)/Δ
                      (l_4 r_2 + l_2 r_4)/Δ    1 + 2l_4 r_4/Δ        ]
        = −n_5 (l_3 − r_3)² / (n_5 − l_2 r_2 + l_3 r_3 − l_4 r_4)² .    (4.4)

⁴ Since there is complete democracy between R_x and R_y, one can also obtain an equivalent IR limit by taking R_x → ∞ keeping R_y fixed. However, choosing R_x or R_y as the parameter to flow to the IR breaks the democracy between x and y.

⁵ Another way to understand this is the following: the near-horizon region of a black hole geometry continues indefinitely due to the presence of a horizon. For a microstate geometry, however, the near-horizon region ends smoothly due to the presence of nontrivial microstructure at the would-be horizon scale. For more discussion see [9,68].
Note that this quantity is negative unless l_3 = r_3, and requiring the absence of a horizon imposes⁶

    l_3 = r_3 .    (4.5)

This is precisely the constraint obtained from the consistency of the worldsheet theory in (3.17). Let's define

    l ≡ l_3 = r_3 .    (4.6)

⁶ One can also independently arrive at (4.5) by demanding the absence of closed timelike curves.
The determinant of the 4d part of the metric (excluding S³ and T³) is given by

    −n_5³ sinh² 2ρ / Δ² ,    (4.7)

which also goes smoothly to zero as ρ → 0. One can also check that the Ricci scalar with l_3 = r_3 is smooth everywhere. Diagonalizing the induced metric at ρ = ρ_0 and fixed time, and then taking ρ_0 → 0, one can easily show that when l_3 = r_3 one of the circles shrinks to zero size, with a conical deficit. This is reflected in the fact that the determinant of the induced metric (4.4) vanishes at ρ_0 = 0. The size of the other circle, on the other hand, remains finite.
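Since (4.4) is the key smoothness criterion, it is worth verifying it independently. The following SymPy snippet (our own check, with the null constraints (2.7) imposed by eliminating l_2 and r_2) confirms the determinant identity exactly:

    import sympy as sp

    l3, l4, r3, r4, n5 = sp.symbols('l3 l4 r3 r4 n5', positive=True)
    # eliminate l2, r2 using the null constraints (2.7)
    l2 = sp.sqrt(n5 + l3**2 - l4**2)
    r2 = sp.sqrt(n5 + r3**2 - r4**2)
    Delta0 = n5 - l2*r2 + l3*r3 - l4*r4                  # Delta of (4.2) at rho = 0
    det = ((1 + 2*l2*r2/Delta0)*(1 + 2*l4*r4/Delta0)
           - ((l4*r2 + l2*r4)/Delta0)**2)                # determinant in (4.4)
    target = -n5*(l3 - r3)**2/Delta0**2                  # claimed right-hand side
    print(sp.simplify(det - target))                     # prints: 0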
The supergravity background (4.1) has a B-field with components along the tx, ty, xy, and φψ directions. This means that the background is sourced by NS5-branes wrapping T³ and the x- and y-circles, and by F1-strings wrapping the x- and y-circles. The metric has non-zero tx and ty components, which further implies that the F1-strings carry momentum modes along the x and y compact directions. In the following subsection, we compute the conserved fundamental charges, namely the NS5 and F1 charges, as well as the ADM mass and the angular momenta along x and y.
Background charges
As is standard in any gauge theory, the charges are computed from integrals of the field strength over various surfaces. In the discussion that follows, we define the NS5 and F1 charges, denoted respectively by Q_5 and Q_1, as topological charges counting respectively the number of NS5 branes and the number of F1 strings wrapping the x-circle and the y-circle. The NS5-brane charge is given by

    Q_5 = [ 1/((2π)² l_s²) ] ∫_{S³} H = n_5 ,    (4.8)

where H = dB. Similarly, the F1-string charge is given by
    Q_1 = [ 1/((2π)⁶ l_s⁶) ] ∫_{∂M_8} e^{−2Φ} ⋆_{10} H ,    (4.9)

where ∂M_8 is the 7-dimensional boundary of the 8-dimensional compact manifold at fixed time and ρ. The 7-form H̃ = ⋆_{10} H has the following non-vanishing components:

    H̃_{tρxyz_1z_2z_3} = 2 l_s⁶ n_5 sinh 2ρ / Δ ,
    H̃_{xθψφz_1z_2z_3} = l_s⁶ n_5 l (l_4 − r_4) sin 2θ / Δ ,
    H̃_{yθψφz_1z_2z_3} = l_s⁶ n_5 l (l_2 − r_2) sin 2θ / Δ ,    (4.10)
where z_{1,2,3} denote coordinates on T³. The volume form on the 7-cycle at ρ → ∞ can formally be expressed as

    dΩ_{7-cycle} = dx ∧ dΩ_{S³} ∧ dΩ_{T³} + dy ∧ dΩ_{S³} ∧ dΩ_{T³} .    (4.11)
Using this measure, the total F1 winding charge splits into two components,

    Q_1 = |Q_1^x + Q_1^y| ,    (4.12)

where

    Q_1^x = [ 1/((2π)⁶ l_s⁶) ] ∫ dy dΩ_{S³} dΩ_{T³} e^{−2Φ} H̃_{yθψφz_1z_2z_3} = n_5 l (l_2 − r_2) R_y v / e^{2Φ_0} ,
    Q_1^y = [ 1/((2π)⁶ l_s⁶) ] ∫ dx dΩ_{S³} dΩ_{T³} e^{−2Φ} H̃_{xθψφz_1z_2z_3} = n_5 l (l_4 − r_4) R_x v / e^{2Φ_0} ,    (4.13)
where v is defined such that the volume of T³ is

    V_{T³} = (2π)³ l_s³ v .    (4.14)
The charges Q_1^{x,y} can be thought of as the topological winding charges of the F1 strings along the x- and y-circles respectively.
In order to compute the ADM charges (mass and angular momenta), it is useful to cast the string-frame metric (4.1) in the form

    ds² = g_{tt} dt² + n_5 dρ² + g_{ab} (dx^a + R^a dt)(dx^b + R^b dt) ,    (4.15)

where (a, b) run over (x, y) and

    g_{tt} = −n_5 cosh²ρ / (l² + n_5 cosh²ρ) ,

    g_{ab} = [ 1 + 2l_2 r_2/Δ           (l_4 r_2 + l_2 r_4)/Δ
               (l_4 r_2 + l_2 r_4)/Δ    1 + 2l_4 r_4/Δ        ] ,

    R^x = l(l_2 + r_2) / [2(l² + n_5 cosh²ρ)] ,    R^y = l(l_4 + r_4) / [2(l² + n_5 cosh²ρ)] .    (4.16)

The geometry (4.15) with (4.16), as well as the B-field, has nonzero tx and ty components that do not fall off sufficiently fast. This means that the supergravity solution carries finite angular charges L^{(G,B)}_{x,y}.
Using standard techniques from general relativity, one can read off the angular velocities as

    Ω_x = −lim_{ρ→0} R^x = −l(l_2 + r_2) / [2(n_5 + l²)] ,
    Ω_y = −lim_{ρ→0} R^y = −l(l_4 + r_4) / [2(n_5 + l²)] .    (4.17)

The angular velocities (4.17) can further be decomposed into left- and right-movers⁷:

    Ω_x^L = −l l_2 / [2(n_5 + l²)] ,    Ω_x^R = −l r_2 / [2(n_5 + l²)] ,
    Ω_y^L = −l l_4 / [2(n_5 + l²)] ,    Ω_y^R = −l r_4 / [2(n_5 + l²)] .    (4.18)
As stated earlier, the solution (4.15) with (4.16) is invariant under constant shifts in t, x, y. The corresponding conserved charges can be evaluated using the covariant phase space formalism [60,69,70], which we do next.
The conserved angular charges can be derived from the 1-forms

    k^a_G = G_{aμ} dX^μ ,    k^a_B = B_{aμ} dX^μ ,    a = x, y ,    (4.19)

via the Komar formula⁸

    L^{(G,B)}_a = ( V_{S³} V_{T³} l_s / κ_0² ) ∮ ⋆( dk^a_{(G,B)} ) e^{−2Φ} ,    (4.20)

where G_{μν}, B_{μν} are the components of the metric and the B-field in (4.1), and V_{S³}, V_{T³}, V_x, V_y are respectively the volumes of S³, T³ and of the x- and y-circles at the boundary, given by

    V_{S³} = 2π² n_5^{3/2} l_s³ ,    V_{T³} = (2π)³ l_s³ v ,    V_x = 2πR_x l_s ,    V_y = 2πR_y l_s ,    (4.21)
and

    κ = κ_0 e^{Φ_0} ,    κ² = 8πG_N ,    (4.22)

where G_N is the 10-dimensional Newton constant.

⁷ From the supergravity point of view, the four angular velocities Ω^{L,R}_{x,y} can be understood as follows. Dimensional reduction along S¹_{x,y} gives rise to four U(1) gauge fields (two coming from the metric and two coming from the B-field). From the values of the gauge fields at ρ = 0, one can work out the chemical potentials conjugate to the four U(1) charges using standard methods. From the higher-dimensional point of view, the four chemical potentials can be interpreted as the angular velocities Ω^{L,R}_{x,y}. Similarly, the four U(1) charges of the compactified theory can be realized as the four angular charges of the higher-dimensional theory.

⁸ That the conserved charges can be expressed as Komar integrals follows directly from the covariant phase space formalism developed in [69] for spacetimes with arbitrary asymptotics.

One can also define the left- and right-moving components of the angular momenta as
    J^{L,R}_a = [ L^{(G)}_a ± L^{(B)}_a ] / 2 ,    a = x, y ,    (4.23)

which gives

    J^L_x = −[ 2 V_{S³} V_{T³} V_x V_y / (l_s² κ_0²) ] l l_2 ,    J^R_x = −[ 2 V_{S³} V_{T³} V_x V_y / (l_s² κ_0²) ] l r_2 ,
    J^L_y = −[ 2 V_{S³} V_{T³} V_x V_y / (l_s² κ_0²) ] l l_4 ,    J^R_y = −[ 2 V_{S³} V_{T³} V_x V_y / (l_s² κ_0²) ] l r_4 .    (4.24)
Similarly, one can use the analogous Komar integral for the energy/mass,

    E = −(1/κ_0²) V_{S³} V_{T³} l_s ∮ ⋆( dk^t_{(G)} ) e^{−2Φ} ,    (4.25)

where k^t_{(G)} = G_{tμ} dX^μ, which gives

    E = [ 2 V_{S³} V_{T³} V_x V_y / (l_s² κ_0²) ] l² .    (4.26)
Let us end this subsection with a quick consistency check of the various ADM charges (4.24), (4.26) and the angular potentials (4.18) derived above. The free energy of the supergravity solution (4.1) is given by

    F = E − T S − Ω·J = E − T S − Ω_x^L J^L_x − Ω_y^L J^L_y − Ω_x^R J^R_x − Ω_y^R J^R_y ,    (4.27)

where S is the entropy of the system and T is its temperature. We have shown that the solution (4.1) with (4.5) is smooth with no finite-area horizon, i.e. S = 0; this reflects the fact that the solution (4.1) with (4.5) describes a microstate. Thus, substituting (4.18), (4.24), and (4.26) together with S = 0 into (4.27), one obtains

    F = E − Ω_x^L J^L_x − Ω_y^L J^L_y − Ω_x^R J^R_x − Ω_y^R J^R_y = 0 .    (4.28)

This can also be cross-checked by an explicit computation of the Euclidean classical supergravity action. In fact, this observation agrees with the fact that, in the semi-classical approximation (at leading order), the free energy of a supergravity solution that asymptotes to a linear dilaton in the UV always vanishes [60,71,72].
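The cancellation in (4.28) can be seen in two lines. Writing C ≡ 2V_{S³}V_{T³}V_xV_y/(l_s²κ_0²), the charges read E = C l² and J^L_x = −C l l_2, etc., so

    Ω_x^L J^L_x + Ω_x^R J^R_x + Ω_y^L J^L_y + Ω_y^R J^R_y
        = C l² ( l_2² + r_2² + l_4² + r_4² ) / [2(n_5 + l²)]
        = C l² · 2(n_5 + l²) / [2(n_5 + l²)] = C l² = E ,

where in the second step we used the null constraints (3.4) with l_3 = r_3 = l, i.e. l_2² + l_4² = r_2² + r_4² = n_5 + l². Hence F = E − Ω·J = 0, as claimed in (4.28).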
Summary of the supergravity solution
Let us close this section with a brief summary of the full supergravity solution with all the worldsheet constraints imposed. As explained before, the background is sourced by n_5 NS5 branes and Q_1 F1 strings, cf. (4.12). This allows us to express the background dilaton Φ_0 in terms of Q_1. Using (4.12) and (4.13), one obtains

    e^{2Φ_0} = ( n_5 v l / Q_1 ) |(l_4 − r_4) R_x + (l_2 − r_2) R_y| = 2 n_5 l v R_x R_y |w̃_x + w̃_y| / Q_1 .    (4.29)
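The second equality in (4.29) is just the substitution of (3.19): from l_2 − r_2 = 2w̃_x R_x and l_4 − r_4 = 2w̃_y R_y one gets

    (l_4 − r_4) R_x + (l_2 − r_2) R_y = 2w̃_y R_y R_x + 2w̃_x R_x R_y = 2 R_x R_y (w̃_x + w̃_y) .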
Imposing the worldsheet constraints (3.19)-(3.22) on the supergravity solution (4.1) with (4.29), one obtains

    ds²/α′ = −(1 − 2l²/Σ) dt² + n_5 dρ² + [1 + 2(ñ_x² − R_x⁴ w̃_x²)/(Σ R_x²)] dx² + [1 + 2(ñ_y² − R_y⁴ w̃_y²)/(Σ R_y²)] dy²
        + [2 ñ_x l/(Σ R_x)] dt dx + [2 ñ_y l/(Σ R_y)] dt dy + (2/Σ)[ ñ_x ñ_y/(R_x R_y) − w̃_x w̃_y R_x R_y ] dx dy
        + ds²_{S³}/α′ + ds²_{T³}/α′ ,

    B/α′ = −[2 w̃_x R_x l/Σ] dt ∧ dx − [2 w̃_y R_y l/Σ] dt ∧ dy + (2/Σ)[ ñ_y w̃_x R_x/R_y − ñ_x w̃_y R_y/R_x ] dx ∧ dy
        + n_5 cos²θ dφ ∧ dψ ,

    e^{2Φ} = 2 n_5 R_x R_y v l |w̃_x + w̃_y| / (Q_1 Σ) ,    (4.30)

where

    l = √( ñ_x²/R_x² + ñ_y²/R_y² + w̃_x² R_x² + w̃_y² R_y² − n_5 ) ,
    Σ = n_5 cosh 2ρ + 2 w̃_x² R_x² + 2 w̃_y² R_y² − n_5 ,    (4.31)

with ñ_{x,y}, w̃_{x,y} ∈ Z subject to the constraint

    ñ_x w̃_x + ñ_y w̃_y = 0 .    (4.32)

A few comments about the supergravity solution (4.30)-(4.32) are in order:

1. The supergravity solution (4.30)-(4.32) is an NS-NS microstate solution of type II string theory. The geometry is smooth (up to an orbifold singularity at ρ = 0), with no finite-size horizon. Perturbative string theory in this background is described by the null coset (2.1). The solution (4.30)-(4.32) is non-BPS, because none of the spacetime supersymmetry generators are annihilated by the worldsheet BRST charge (see Appendix B for a detailed discussion). There is another way to see that the solution is non-BPS: in 6d supergravity, BPS solutions take a particular form which makes the supersymmetry manifest (see [73] for more details), but our solution, after reduction to 6d (e.g., on either the x- or the y-circle), cannot be rearranged into such a form, supporting the claim that the geometry is in fact non-BPS.
Further justification that the solution (4.30)-(4.32) is a microstate can be obtained by looking at the conserved charges. Our solution carries the NS5 charge n_5 and the F1 charges ñ_x, ñ_y (momentum) and w̃_x, w̃_y (winding), under the constraint ñ_x w̃_x + ñ_y w̃_y = 0. Changing any of these charges produces a different background as seen by an observer at infinity. This is in contrast with a system that has both global charges and dipole charges: dipole charges usually integrate to zero for an asymptotic observer, but as one zooms in towards the would-be horizon, they begin to distinguish and make manifest, locally, the microscopic structure of the different microstates.
2. The JMaRT solution [39] is a non-BPS three-charge configuration. However, this state is very atypical from the point of view of black hole microstates, because it is highly rotating along the S³. In a similar manner, the microstate constructed in this paper is also atypical, the momentum now being carried along S¹_x and S¹_y instead of S³. It is nonetheless interesting to derive a non-BPS supergravity background from worldsheet string theory, which is exact to all orders in α′.
3. The background (4.30)-(4.32) can be thought of as the fivebrane decoupling limit (i.e. g_s → 0) of a stack of n_5 NS5 branes wrapping T³ × S¹_x × S¹_y, with F1 strings wrapping the x- and y-circles with integer winding numbers w̃_{x,y}, and ñ_{x,y} units of momentum along the x- and y-circles respectively. Note that the integers w̃_{x,y} and ñ_{x,y} are not all independent: the smoothness of the geometry (i.e. the no-horizon condition) imposes the constraint (4.32) on them, so three out of the four integers w̃_{x,y} and ñ_{x,y} are independent.
RG flow interpretation
As stated in section 4, the background (4.30) asymptotes to 2d flat spacetime times two spacelike circles, with a dilaton field that is linear in the radial direction (4.3) and a vanishing B-field (the B-field is non-vanishing on the S³). The full background (4.30), however, can be thought of as a two-parameter family of solutions parametrized by R_x and R_y. In this section, we argue that there exist one-parameter families of solutions embedded in the two-dimensional (R_x, R_y) parameter space which can be interpreted as RG flows from linear dilaton spacetime in the UV to some (locally) AdS₃ in the IR. From the holographic point of view, the boundary field theory is a certain Little String Theory with two dimensionless parameters λ_{x,y} = α′/R²_{x,y} that flows to a CFT₂ in the IR. In the discussion that follows, we identify two one-parameter lines of theories that flow to some AdS₃ at large distances.
Let's start by considering the one-parameter subspace of the (R_x, R_y) parameter space obtained by varying R_y at fixed R_x. The AdS decoupling limit of such a one-parameter flow is obtained by taking R_y → ∞ keeping R_x fixed (see the red curve in figure 1). In terms of the coordinates

    t̃ = t/R_y ,    ỹ = y/R_y ,    (5.1)

which are well defined in the R_y → ∞ limit, the supergravity background (suppressing S³ × T³) takes the form

    ds²/α′ = n_5 [ −(cosh²ρ/w̃_y²) dt̃² + dρ² + (sinh²ρ/w̃_y²) dỹ² ] + [ dx + (ñ_x/(w̃_y R_x)) dt̃ − (w̃_x R_x/w̃_y) dỹ ]² ,

    H = (n_5 α′/w̃_y²) sinh 2ρ dρ ∧ dt̃ ∧ dỹ ,

    e^{2Φ} = (n_5 v R_x/Q_1)(1 + w̃_x/w̃_y) .    (5.2)
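For completeness, the limit works as follows (a short expansion of (4.30)-(4.31), tracking only the leading behavior as R_y → ∞ at fixed R_x, with w̃_{x,y} taken positive):

    l = √( w̃_y² R_y² + ñ_x²/R_x² + w̃_x² R_x² + ñ_y²/R_y² − n_5 ) → w̃_y R_y ,
    Σ = n_5 cosh 2ρ + 2w̃_x² R_x² + 2w̃_y² R_y² − n_5 → 2 w̃_y² R_y² ,

so, for example, the dilaton in (4.30) becomes

    e^{2Φ} = 2 n_5 R_x R_y v l |w̃_x + w̃_y| / (Q_1 Σ) → n_5 v R_x |w̃_x + w̃_y| / (Q_1 w̃_y) = (n_5 v R_x/Q_1)(1 + w̃_x/w̃_y) ,

reproducing the last line of (5.2).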
Note that for w̃_x = ñ_x = 0 and w̃_y = 1, the above background is precisely global AdS₃ × S¹, while for generic w̃_{x,y}, ñ_x the background (5.2) can be identified with (AdS₃ × S¹)/Z_{w̃_y}, with √(n_5 α′) the radius of AdS. Redefining the coordinate

    x′ = x + [ ñ_x/(w̃_y R_x) ] t̃ − (w̃_x R_x/w̃_y) ỹ ,    (5.3)

with

    x′ ∼ x′ + 2πR_x (1 − w̃_x/w̃_y) ,    (5.4)

the metric takes the simpler form

    ds²/α′ = n_5 [ −(cosh²ρ/w̃_y²) dt̃² + dρ² + (sinh²ρ/w̃_y²) dỹ² ] + dx′² .    (5.5)
This is known as a large gauge transformation because its effects are felt at the boundary. In the dual field theory, this is equivalent to a spectral flow operation on the SL(2,C)-invariant NS vacuum. It would be interesting to further explore the boundary interpretation of this background⁹. This feature is similar to the JMaRT solution (see [53] for details). Since the background (4.30) is democratic in R_x and R_y, one can equivalently obtain a second line of theories by varying R_x keeping R_y fixed. The AdS decoupling limit is then obtained by taking R_x → ∞ with R_y kept fixed (the blue curve in figure 1). The decoupled background in the IR, thus obtained, is (AdS₃ × S¹)/Z_{w̃_x}.

⁹ It's important to note that the large gauge transformation (5.3) is valid only in the AdS decoupling limit (5.2) and not in the full geometry (4.30)-(4.32).

[Figure 1. Diagrammatic representation of the two-parameter family of microstate solutions (4.30) in the (R_x, R_y) plane, flowing from the asymptotic R_t × R_φ × S¹_x × S¹_y boundary region. The red curve denotes the flow obtained by varying R_y keeping R_x fixed; its IR, reached as R_y → ∞, is well approximated by (AdS₃ × S¹)/Z_{w̃_y}. The blue curve denotes the flow obtained by varying R_x keeping R_y fixed; its IR geometry, reached as R_x → ∞, is (AdS₃ × S¹)/Z_{w̃_x}.]
It would be interesting to understand if there exists some other one-parameter family of solutions that would give rise to some (locally) AdS 3 in the IR.
Discussion
In this paper, we studied type II superstrings in the background (2.1) in the presence of pure NS-NS flux. We showed that the corresponding supergravity background (4.30)-(4.32) is a two-parameter (parametrized by R_{x,y}) family of smooth non-BPS microstate solutions satisfying the type II supergravity equations of motion. Microscopically, one can think of (4.30)-(4.32) as the fivebrane decoupling limit of a stack of n_5 NS5 branes wrapping T³ × S¹_x × S¹_y, with F1 strings wrapping the non-contractable cycles of S¹_x × S¹_y with winding charges w̃_{x,y} and ñ_{x,y} units of momentum along the x- and y-circles respectively. The background asymptotes to flat spacetime with a linear dilaton. The AdS decoupling limit is obtained by sending R_y → ∞ keeping R_x fixed (or equivalently R_x → ∞ keeping R_y fixed), where the background takes the form of a Z_{w̃} orbifold of AdS₃ × S¹.
The background derived in this paper is similar to the JMaRT solution, a highly rotating (atypical) three-charge non-BPS microstate. The main difference is that, instead of carrying angular momentum along the S³, our solution carries it along the S¹_x and S¹_y directions. The construction in this paper is intimately connected to the single-trace T T̄ deformation [58,74-77] of string theory in AdS₃/Z_N for some integer N. In particular, if one sets ñ_x and w̃_x to zero, then the coset describes the single-trace T T̄ deformation of the spacetime theory dual to string theory in global AdS₃ or its Z_N orbifolds.
Our construction paves the way for various follow-up problems. The first and foremost is to understand the structure of spatial entanglement using the Ryu-Takayanagi (RT) prescription. Microstate geometries are dual to coherent states of the spacetime theory, and the entanglement of a bipartite system in a coherent state has certain specific features [78]; it would be nice to revisit this issue from the behavior of the RT surface. Entanglement entropy in similar asymptotically linear dilaton backgrounds [79,80] exhibits certain non-local features, and it would be interesting to investigate such non-local effects in this setup as well.
One of the great advantages of having a coset description is that one can construct operators of the spacetime theory. In fact, correlation functions of operators can easily be computed using the techniques described in [58-60,81]. It would be interesting to set up a 4-point scattering problem in the background (4.30)-(4.32) and understand its analytic properties.
For a better understanding of the microstate solution constructed in this paper, it is instructive to study solutions of the Klein-Gordon equation in this background. From the holographic point of view, this would allow us to compute correlation functions of operators of the spacetime theory. Introducing a perturbative polynomial potential would allow us to evaluate Witten diagrams, which would eventually lead to the computation of S-matrices in this background. It would also be interesting to probe the geometry with a D1 brane whose endpoints are anchored at the conformal boundary. As discussed in [82] in the context of a similar non-AdS setup, the classical effective action of a probe D1 brane captures various non-perturbative aspects of the spacetime theory (e.g., various phases of the theory, including confinement-deconfinement phase transitions). We hope to report on this in the future.
Most of the previously constructed microstate geometries contain both Ramond-Ramond and NS-NS flux, making a worldsheet formulation challenging. Recently, in [25,26], purely NS-NS microstate geometries were constructed. It would be interesting to explore the worldsheet formulation of these geometries; we plan to investigate this in future work.

Acknowledgements

We thank I. Bena. This work has received funding from the "Horizon 2020" innovation programme under the Marie Skłodowska-Curie grant agreement n°945298. This work was supported in part by the FACCTS Program at the University of Chicago. SC would like to thank APCTP, Pohang for hospitality during part of this work. SDH would like to thank the University of Warsaw, Faculty of Physics for hospitality during the completion of this work. The work of SDH is supported by ERC Grant 787320 - QBH Structure.
A General null gauging
In this appendix, we would like to study a general null gauging

    G/H ≡ [ SL(2,R) × SU(2) × U(1)⁴ × R_t × U(1)_y ] / (U(1)_L × U(1)_R)_null ,    (A.1)
where we refer to the WZW theory on G as the upstairs theory. The four circles of U(1)⁴ are parametrized by x_i, i = 1, …, 4, with periodicities x_i ∼ x_i + 2πR_i. As usual, y parametrizes U(1)_y with radius R_y, and t parametrizes R_t. Let G = diag(g, g′, e), with g ∈ SL(2,R), g′ ∈ SU(2), be a diagonal block element of the upstairs theory (A.1). The (U(1)_L × U(1)_R)_null that we want to gauge is generated by exp T_L and exp T_R, where T_{L,R} are given by

    T_L = diag( i l_1 σ_3 , i l_2 σ_3 , i√(2/n_5) l_3 , i√(2/n_5) l_4 , i√(2/n_5) l_5 , i√(2/n_5) l_6 , √(2/n_5) l_7 , i√(2/n_5) l_8 ) ,
    T_R = diag( i r_1 σ_3 , i r_2 σ_3 , i√(2/n_5) r_3 , i√(2/n_5) r_4 , i√(2/n_5) r_5 , i√(2/n_5) r_6 , √(2/n_5) r_7 , i√(2/n_5) r_8 ) .    (A.3)

In other words, we would like to gauge the symmetry

    G → e^{T_L} G e^{T_R} .    (A.4)
The currents to be gauged must be null; here P is the projection operator that keeps track of the signature of the Killing form¹⁰. Substituting (A.3) and (A.6) into (A.5), one obtains the null conditions on the gauge parameters; without loss of generality, we will set l_1 = l_2 = 1. The gauge currents are given by

    J = n_5 tr( P T_L ∂G G^{-1} ) ,    J̄ = n_5 tr( P T_R G^{-1} ∂̄G ) .    (A.8)

To compute the gauged sigma model, let's choose the following parametrizations for g ∈ SL(2,R) and g′ ∈ SU(2):
    g = e^{(i/2)(τ−σ)σ_3} e^{ρσ_1} e^{(i/2)(τ+σ)σ_3} ,    g′ = e^{(i/2)(ψ−φ)σ_3} e^{iθσ_1} e^{(i/2)(ψ+φ)σ_3} .    (A.9)

¹⁰ The projection operator is chosen such that the signs of the kinetic terms of the timelike fields are negative and those of the spacelike fields are positive.

The gauged sigma model is given by
    S_g = S[G] + (1/2π) ∫ d²z ( A J̄ + Ā J − M A Ā ) ,    (A.10)

where A, Ā are the gauge fields and S[G] is the WZW action,

    S[G] = (n_5/4π) ∫ d²z tr( P G^{-1}∂G G^{-1}∂̄G ) + (i n_5/24π) ∫_B tr( P G^{-1}dG ∧ G^{-1}dG ∧ G^{-1}dG ) ,    (A.11)

with

    M = 2( n_5 cosh 2ρ − n_5 l_2 r_2 cos 2θ − Σ_{i=3}^{6} l_i r_i + l_7 r_7 − l_8 r_8 ) .    (A.12)

Integrating out the gauge fields in (A.10), one obtains

    S_g = S[G] + (1/2π) ∫ d²z ( J J̄ / M ) .    (A.13)

Substituting (A.8), (A.10), and (A.12) into (A.13) and using the parametrizations (A.9), one obtains the gauged WZW action. After fixing the gauge τ = σ = 0 and using standard worldsheet techniques, one can read off the metric, B-field, and dilaton as

    ds²/α′ = −(1 − 2l²/Δ) dt² + n_5 dρ² + G_{ij} dX^i dX^j + G̃_{ij} dY^i dY^j + A_ψ dψ + A_φ dφ + A_t dt ,

    B/α′ = (n_5 l cos²θ/Δ)(r_2 − l_2) dt ∧ dψ + (n_5 l sin²θ/Δ)(r_2 + l_2) dt ∧ dφ
        + dt ∧ B_t + dφ ∧ B_φ + dψ ∧ B_ψ + B dφ ∧ dψ + B̃_{ij} dY^i ∧ dY^j ,

    e^{2Φ} = e^{2Φ_0}/Δ ,    (A.14)

where l_7 = r_7 ≡ l (which follows from the smoothness of the geometry), Δ = M/2, and the vectors X^i, Y^i are defined as

    X^i = {ψ, θ, φ} ,    Y^i = {x_1, x_2, x_3, x_4, y} ,

with x_5 ≡ y. The one-forms A_{ψ,φ,t} are given by

    A_ψ = (2n_5 cos²θ/Δ) [ l(l_2 + r_2) dt + (l_3 r_2 + l_2 r_3) dx_1 + (l_4 r_2 + l_2 r_4) dx_2 + (l_5 r_2 + l_2 r_5) dx_3 + (l_6 r_2 + l_2 r_6) dx_4 + (l_8 r_2 + l_2 r_8) dy ] ,
    A_φ = (2n_5 sin²θ/Δ) [ l(r_2 − l_2) dt + (l_3 r_2 − l_2 r_3) dx_1 + (l_4 r_2 − l_2 r_4) dx_2 + (l_5 r_2 − l_2 r_5) dx_3 + (l_6 r_2 − l_2 r_6) dx_4 + (l_8 r_2 − l_2 r_8) dy ] ,
    A_t = (2l/Δ) [ (r_3 + l_3) dx_1 + (r_4 + l_4) dx_2 + (r_5 + l_5) dx_3 + (r_6 + l_6) dx_4 + (r_8 + l_8) dy ] ,

and the one-forms B_{ψ,φ,t} are given by

    B_ψ = (n_5 cos²θ/Δ) [ (l_2 r_3 − l_3 r_2) dx_1 + (l_2 r_4 − l_4 r_2) dx_2 + (l_2 r_5 − l_5 r_2) dx_3 + (l_2 r_6 − l_6 r_2) dx_4 + (l_2 r_8 − l_8 r_2) dx_5 ] ,
    B_φ = −(n_5 sin²θ/Δ) [ (l_3 r_2 + l_2 r_3) dx_1 + (l_4 r_2 + l_2 r_4) dx_2 + (l_5 r_2 + l_2 r_5) dx_3 + (l_6 r_2 + l_2 r_6) dx_4 + (l_8 r_2 + l_2 r_8) dx_5 ] ,
    B_t = (l/Δ) [ (r_3 − l_3) dx_1 + (r_4 − l_4) dx_2 + (r_5 − l_5) dx_3 + (r_6 − l_6) dx_4 + (r_8 − l_8) dx_5 ] .    (A.21)
B Spacetime supersymmetry
The supergravity background (4.30)-(4.32), obtained via the null coset (2.1), is non-BPS. Below we sketch the worldsheet argument proving this statement. The numerator theory (i.e. the theory before gauging) is 10+2 dimensional; such a theory has 12 worldsheet fermions. Let ψ^{±,3}_{sl} be the worldsheet fermions corresponding to SL(2,R), ψ^{x,t,y} the worldsheet fermionic superpartners of x, t, y respectively, ψ^{±,3}_{su} the worldsheet fermions corresponding to SU(2), and ψ^{7,8,9} the worldsheet fermionic superpartners of T³. Let's bosonize the fermions as follows:
    ψ^+_{sl} ψ^-_{sl} = i√2 ∂H_1 ,    ψ^+_{su} ψ^-_{su} = i√2 ∂H_2 ,    ψ^3_{sl} ψ^3_{su} = i√2 ∂H_3 ,
    ψ^t ψ^y = i√2 ∂H_4 ,    ψ^x ψ^7 = i√2 ∂H_5 ,    ψ^8 ψ^9 = i√2 ∂H_6 .    (B.1)
The bosonized fields H_a are normalized such that

    H_a(z) H_b(w) ∼ −δ_{ab} log(z − w) .    (B.2)

The spin fields are given by

    S_ε = e^{(i/√2) Σ_{a=1}^{6} ε_a H_a} ,    (B.3)
where ε_a = ± are the fermion polarizations. The worldsheet has the usual superconformal β, γ fields, as well as the β̃, γ̃ fields due to the null gauging. The fields β, γ, β̃, γ̃ are bosonized as

    βγ = −∂ϕ ,    γ = e^{ϕ} η ,    β = e^{−ϕ} ∂ξ ,
    β̃γ̃ = −∂ϕ̃ ,    γ̃ = e^{ϕ̃} η̃ ,    β̃ = e^{−ϕ̃} ∂ξ̃ .    (B.4)
The spacetime supersymmetry operators in the (−1/2) picture are given by

    Q_ε = ∮ dz e^{−(1/2)(ϕ − ϕ̃)} S_ε .    (B.5)
Switching the worldsheet chiralities, one can similarly define the anti-holomorphic spacetime supercharges Q̄_ε. The background is BPS or non-BPS depending on whether some Q_ε is annihilated by the BRST operator or not. The BRST operators are given by

    Q_brst = ∮ [ cT + γG + c̃J + γ̃λ + (bc, b̃c̃ and βγ, β̃γ̃ terms) ] ,
    Q̄_brst = ∮ [ c̄T̄ + γ̄Ḡ + c̃̄J̄ + γ̃̄λ̄ + (b̄c̄, b̃̄c̃̄ and β̄γ̄, β̃̄γ̃̄ terms) ] ,    (B.6)
where J, J̄ are the worldsheet null currents that we want to gauge, given by

    J = l_1 Ĵ³ + l_2 J^x + l_3 J^t + l_4 J^y ,    J̄ = r_1 J̄̂³ + r_2 J̄^x + r_3 J̄^t + r_4 J̄^y ,    (B.7)

with

    Ĵ³ = J³ + (2i/k) ψ^1_{sl} ψ^2_{sl} ,    J̄̂³ = J̄³ + (2i/k) ψ̄^1_{sl} ψ̄^2_{sl} ,    (B.8)

and λ, λ̄ are the superpartners of the gauge currents J, J̄, given by

    λ = l_1 ψ^3_{sl} + l_2 ψ^x + l_3 ψ^t + l_4 ψ^y ,    λ̄ = r_1 ψ̄^3_{sl} + r_2 ψ̄^x + r_3 ψ̄^t + r_4 ψ̄^y .    (B.9)
Here c, c̄ are the usual worldsheet fields of the bc and b̄c̄ systems, and c̃, c̃̄ are the Faddeev-Popov fields that one needs to introduce due to the null gauging. The c̃J and c̃̄J̄ terms in Q_brst and Q̄_brst acting on Q_ε and Q̄_ε give l_1 ε_1 and r_1 ε̄_1 respectively. Since we assumed l_1 ≠ 0, r_1 ≠ 0 to begin with (indeed l_1 = r_1 = 1), the background can under no circumstances be BPS.
References

[1] C. G. Callan and J. M. Maldacena, D-brane approach to black hole quantum mechanics, Nucl. Phys. B 472 (1996) 591 [hep-th/9602043].
[2] S. R. Das and S. D. Mathur, Comparing decay rates for black holes and D-branes, Nucl. Phys. B 478 (1996) 561 [hep-th/9606185].
[3] S. R. Das and S. D. Mathur, Excitations of D strings, entropy and duality, Phys. Lett. B 375 (1996) 103 [hep-th/9601152].
[4] J. M. Maldacena and A. Strominger, Black hole grey body factors and D-brane spectroscopy, Phys. Rev. D 55 (1997) 861 [hep-th/9609026].
[5] J. R. David, G. Mandal and S. R. Wadia, D1/D5 moduli in SCFT and gauge theory, and Hawking radiation, Nucl. Phys. B 564 (2000) 103 [hep-th/9907075].
[6] J. M. Maldacena, G. W. Moore and A. Strominger, Counting BPS black holes in toroidal Type II string theory, hep-th/9903163.
[7] A. Strominger and C. Vafa, Microscopic origin of the Bekenstein-Hawking entropy, Phys. Lett. B 379 (1996) 99 [hep-th/9601029].
[8] O. Lunin and S. D. Mathur, Metric of the multiply wound rotating string, Nucl. Phys. B 610 (2001) 49 [hep-th/0105136].
[9] O. Lunin and S. D. Mathur, AdS/CFT duality and the black hole information paradox, Nucl. Phys. B 623 (2002) 342 [hep-th/0109154].
[10] S. D. Mathur, The fuzzball proposal for black holes: an elementary review, Fortsch. Phys. 53 (2005) 793 [hep-th/0502050].
[11] I. Bena, E. J. Martinec, S. D. Mathur and N. P. Warner, Snowmass White Paper: Micro- and Macro-Structure of Black Holes, 2203.04981.
[12] I. Bena, E. J. Martinec, S. D. Mathur and N. P. Warner, Fuzzballs and Microstate Geometries: Black-Hole Structure in String Theory, 2204.13113.
[13] J. M. Maldacena and L. Maoz, Desingularization by rotation, JHEP 12 (2002) 055 [hep-th/0012025].
[14] S. Giusto, O. Lunin, S. D. Mathur and D. Turton, D1-D5-P microstates at the cap, JHEP 02 (2013) 050 [1211.0306].
[15] S. D. Mathur and D. Turton, Momentum-carrying waves on D1-D5 microstate geometries, Nucl. Phys. B 862 (2012) 764 [1202.6421].
[16] I. Bena, S. Giusto, R. Russo, M. Shigemori and N. P. Warner, Habemus Superstratum! A constructive proof of the existence of superstrata, JHEP 05 (2015) 110 [1503.01463].
[17] I. Bena, S. Giusto, E. J. Martinec, R. Russo, M. Shigemori, D. Turton et al., Smooth horizonless geometries deep inside the black-hole regime, Phys. Rev. Lett. 117 (2016) 201601 [1607.03908].
[18] I. Bena, S. Giusto, E. J. Martinec, R. Russo, M. Shigemori, D. Turton et al., Asymptotically-flat supergravity solutions deep inside the black-hole regime, JHEP 02 (2018) 014 [1711.10474].
[19] N. Čeplak, R. Russo and M. Shigemori, Supercharging Superstrata, JHEP 03 (2019) 095 [1812.08761].
[20] P. Heidmann and N. P. Warner, Superstratum Symbiosis, JHEP 09 (2019) 059 [1903.07631].
[21] N. P. Warner, Lectures on Microstate Geometries, 1912.13108.
[22] W. Govaerts and R. Walker, Asymptotically flat (1,m,n) superstrata: a farewell to AdS, 2301.10329.
[23] D. R. Mayerson, R. A. Walker and N. P. Warner, Microstate Geometries from Gauged Supergravity in Three Dimensions, JHEP 10 (2020) 030 [2004.13031].
[24] B. Ganchev, A. Houppe and N. P. Warner, New superstrata from three-dimensional supergravity, JHEP 04 (2022) 065 [2110.02961].
[25] B. Ganchev, A. Houppe and N. P. Warner, Elliptical and purely NS superstrata, JHEP 09 (2022) 067 [2207.04060].
[26] N. Čeplak, Vector Superstrata, 2212.06947.
[27] M. Shigemori, Counting Superstrata, JHEP 10 (2019) 017 [1907.03878].
[28] D. R. Mayerson and M. Shigemori, Counting D1-D5-P microstates in supergravity, SciPost Phys. 10 (2021) 018 [2010.04172].
[29] I. Bena, N. Čeplak, S. Hampton, Y. Li, D. Toulikas and N. P. Warner, Resolving black-hole microstructure with new momentum carriers, JHEP 10 (2022) 033 [2202.08844].
[30] I. Bena, S. D. Hampton, A. Houppe, Y. Li and D. Toulikas, The (amazing) super-maze, JHEP 03 (2023) 237 [2211.14326].
[31] I. Bena, N. Čeplak, S. D. Hampton, A. Houppe, D. Toulikas and N. P. Warner, Themelia: the irreducible microstructure of black holes, 2212.06158.
[32] Y. Li, Local supersymmetries and three-charge black holes, in CORFU2022: 22th Hellenic School and Workshops on Elementary Particle Physics and Gravity, 2023, 2305.03747.
[33] I. Kanitscheider, K. Skenderis and M. Taylor, Holographic anatomy of fuzzballs, JHEP 04 (2007) 023 [hep-th/0611171].
[34] I. Kanitscheider, K. Skenderis and M. Taylor, Fuzzballs with internal excitations, JHEP 06 (2007) 056 [0704.0690].
[35] S. Giusto, E. Moscato and R. Russo, AdS3 holography for 1/4 and 1/8 BPS geometries, JHEP 11 (2015) 004 [1507.00945].
[36] S. Giusto, S. Rawash and D. Turton, AdS3 holography at dimension two, JHEP 07 (2019) 171 [1904.12880].
[37] B. Ganchev, S. Giusto, A. Houppe and R. Russo, AdS3 holography for non-BPS geometries, 2112.03287.
[38] E. J. Martinec, S. Massai and D. Turton, On the BPS Sector in AdS3/CFT2 Holography, Fortsch. Phys. 71 (2023) 2300015 [2211.12476].
[39] V. Jejjala, O. Madden, S. F. Ross and G. Titchener, Non-supersymmetric smooth geometries and D1-D5-P bound states, Phys. Rev. D 71 (2005) 124030 [hep-th/0504181].
[40] B. Ganchev, A. Houppe and N. P. Warner, Q-balls meet fuzzballs: non-BPS microstate geometries, JHEP 11 (2021) 028 [2107.09677].
[41] A. Bombini and S. Giusto, Non-extremal superdescendants of the D1D5 CFT, JHEP 10 (2017) 023 [1706.09761].
[42] I. Bena, G. Bossard, S. Katmadas and D. Turton, Non-BPS multi-bubble microstate geometries, JHEP 02 (2016) 073 [1511.03669].
[43] I. Bah and P. Heidmann, Smooth bubbling geometries without supersymmetry, JHEP 09 (2021) 128 [2106.05118].
[44] I. Bah, P. Heidmann and P. Weck, Schwarzschild-like topological solitons, JHEP 08 (2022) 269 [2203.12625].
[45] I. Bah and P. Heidmann, Non-BPS bubbling geometries in AdS3, JHEP 02 (2023) 133 [2210.06483].
[46] I. Bah and P. Heidmann, Geometric Resolution of Schwarzschild Horizon, 2303.10186.
[47] E. J. Martinec and S. Massai, String Theory of Supertubes, JHEP 07 (2018) 163 [1705.10844].
[48] D. Mateos and P. K. Townsend, Supertubes, Phys. Rev. Lett. 87 (2001) 011602 [hep-th/0103030].
[49] R. Emparan, D. Mateos and P. K. Townsend, Supergravity supertubes, JHEP 07 (2001) 011 [hep-th/0106012].
[50] E. J. Martinec, S. Massai and D. Turton, String dynamics in NS5-F1-P geometries, JHEP 09 (2018) 031 [1803.08505].
[51] E. J. Martinec, S. Massai and D. Turton, Little Strings, Long Strings, and Fuzzballs, JHEP 11 (2019) 019 [1906.11473].
Stringy Structure at the BPS Bound. E J Martinec, S Massai, D Turton, 10.1007/JHEP12(2020)135JHEP. 12135E. J. Martinec, S. Massai and D. Turton, Stringy Structure at the BPS Bound, JHEP 12 (2020) 135 [2005.12344].
Black hole microstates from the worldsheet. D Bufalini, S Iguri, N Kovensky, D Turton, 10.1007/JHEP08(2021)0112105.02255JHEP. 0811D. Bufalini, S. Iguri, N. Kovensky and D. Turton, Black hole microstates from the worldsheet, JHEP 08 (2021) 011 [2105.02255].
Worldsheet computation of heavy-light correlators. D Bufalini, S Iguri, N Kovensky, D Turton, 10.1007/JHEP03(2023)066JHEP. 03662210.15313D. Bufalini, S. Iguri, N. Kovensky and D. Turton, Worldsheet computation of heavy-light correlators, JHEP 03 (2023) 066 [2210.15313].
On string theory and black holes. E Witten, 10.1103/PhysRevD.44.314Phys. Rev. D. 44314E. Witten, On string theory and black holes, Phys. Rev. D 44 (1991) 314.
String propagation in a black hole geometry. R Dijkgraaf, H L Verlinde, E P Verlinde, 10.1016/0550-3213(92)90237-6Nucl. Phys. B. 371269R. Dijkgraaf, H. L. Verlinde and E. P. Verlinde, String propagation in a black hole geometry, Nucl. Phys. B 371 (1992) 269.
Beyond the singularity of the 2-D charged black hole. A Giveon, E Rabinovici, A Sever, 10.1088/1126-6708/2003/07/055hep-th/0305140JHEP. 0755A. Giveon, E. Rabinovici and A. Sever, Beyond the singularity of the 2-D charged black hole, JHEP 07 (2003) 055 [hep-th/0305140].
SL(2,R)×U(1). S Chakraborty, S. Chakraborty, SL(2,R)×U(1)
U(1). U(1)
. Cft, 10.1007/JHEP03(2021)113JHEP. 031132012.03995CFT, NS5+F1 system and single trace T T , JHEP 03 (2021) 113 [2012.03995].
Holography Beyond AdS. M Asrat, A Giveon, N Itzhaki, D Kutasov, 10.1016/j.nuclphysb.2018.05.0051711.02690Nucl. Phys. B. 932241M. Asrat, A. Giveon, N. Itzhaki and D. Kutasov, Holography Beyond AdS, Nucl. Phys. B 932 (2018) 241 [1711.02690].
Solvable time-like cosets and holography beyond AdS. S Chakraborty, M Goykhman, 10.1007/JHEP08(2022)2442204.03024JHEP. 08244S. Chakraborty and M. Goykhman, Solvable time-like cosets and holography beyond AdS, JHEP 08 (2022) 244 [2204.03024].
A Closed, expanding universe in string theory. C R Nappi, E Witten, 10.1016/0370-2693(92)90888-Bhep-th/9206078Phys. Lett. B. 293309C. R. Nappi and E. Witten, A Closed, expanding universe in string theory, Phys. Lett. B 293 (1992) 309 [hep-th/9206078].
Removing singularities. S Elitzur, A Giveon, E Rabinovici, 10.1088/1126-6708/2003/01/017hep-th/0212242JHEP. 0117S. Elitzur, A. Giveon and E. Rabinovici, Removing singularities, JHEP 01 (2003) 017 [hep-th/0212242].
From big bang to big crunch and beyond. S Elitzur, A Giveon, D Kutasov, E Rabinovici, 10.1088/1126-6708/2002/06/017hep-th/0204189JHEP. 0617S. Elitzur, A. Giveon, D. Kutasov and E. Rabinovici, From big bang to big crunch and beyond, JHEP 06 (2002) 017 [hep-th/0204189].
Stringy holography at finite density. M Goykhman, A Parnachev, 10.1016/j.nuclphysb.2013.05.0111304.4496Nucl. Phys. B. 874115M. Goykhman and A. Parnachev, Stringy holography at finite density, Nucl. Phys. B 874 (2013) 115 [1304.4496].
U(1) charges and moduli in the D1 -D5 system. F Larsen, E J Martinec, 10.1088/1126-6708/1999/06/019hep-th/9905064JHEP. 0619F. Larsen and E. J. Martinec, U(1) charges and moduli in the D1 -D5 system, JHEP 06 (1999) 019 [hep-th/9905064].
Asymmetric cosets. T Quella, V Schomerus, 10.1088/1126-6708/2003/02/030hep-th/0212119JHEP. 0230T. Quella and V. Schomerus, Asymmetric cosets, JHEP 02 (2003) 030 [hep-th/0212119].
Superstrings on AdS(3) and symmetric products. R Argurio, A Giveon, A Shomer, 10.1088/1126-6708/2000/12/003hep-th/0009242JHEP. 123R. Argurio, A. Giveon and A. Shomer, Superstrings on AdS(3) and symmetric products, JHEP 12 (2000) 003 [hep-th/0009242].
The Slowly rotating near extremal D1 -D5 system as a 'hot tube. O Lunin, S D Mathur, 10.1016/S0550-3213(01)00428-Xhep-th/0107113Nucl. Phys. B. 615285O. Lunin and S. D. Mathur, The Slowly rotating near extremal D1 -D5 system as a 'hot tube', Nucl. Phys. B 615 (2001) 285 [hep-th/0107113].
Covariant theory of asymptotic symmetries, conservation laws and central charges. G Barnich, F Brandt, 10.1016/S0550-3213(02)00251-1hep-th/0111246Nucl. Phys. B. 6333G. Barnich and F. Brandt, Covariant theory of asymptotic symmetries, conservation laws and central charges, Nucl. Phys. B 633 (2002) 3 [hep-th/0111246].
Advanced Lectures on General Relativity. G Compère, A Fiorucci, 1801.07064G. Compère and A. Fiorucci, Advanced Lectures on General Relativity, 1801.07064.
Introduction to little string theory. D Kutasov, ICTP Lect. Notes Ser. 7165D. Kutasov, Introduction to little string theory, ICTP Lect. Notes Ser. 7 (2002) 165.
The Charged black hole/string transition. A Giveon, D Kutasov, 10.1088/1126-6708/2006/01/120hep-th/0510211JHEP. 01120A. Giveon and D. Kutasov, The Charged black hole/string transition, JHEP 01 (2006) 120 [hep-th/0510211].
6D microstate geometries from 10D structures. S Giusto, L Martucci, M Petrini, R Russo, 10.1016/j.nuclphysb.2013.08.018Nucl. Phys. B. 8765091306.1745S. Giusto, L. Martucci, M. Petrini and R. Russo, 6D microstate geometries from 10D structures, Nucl. Phys. B 876 (2013) 509 [1306.1745].
. A Giveon, N Itzhaki, D Kutasov, 10.1007/JHEP07(2017)1221701.05576JHEP. 07122A. Giveon, N. Itzhaki and D. Kutasov, TT and LST, JHEP 07 (2017) 122 [1701.05576].
black holes and negative strings. S Chakraborty, A Giveon, D Kutasov, T , 10.1007/JHEP09(2020)057JHEP. 0957S. Chakraborty, A. Giveon and D. Kutasov, T T , black holes and negative strings, JHEP 09 (2020) 057 [2006.13249].
S Chakraborty, A Giveon, D Kutasov, 2303.12422Comments on Single-Trace TT Holography. S. Chakraborty, A. Giveon and D. Kutasov, Comments on Single-Trace TT Holography, 2303.12422.
S Chakraborty, A Giveon, D Kutasov, 2304.09212Momentum in Single-trace TT Holography. S. Chakraborty, A. Giveon and D. Kutasov, Momentum in Single-trace TT Holography, 2304.09212.
Entanglement Entropy and D1-D5 geometries. S Giusto, R Russo, 10.1103/PhysRevD.90.066004Phys. Rev. D. 90660041405.6185S. Giusto and R. Russo, Entanglement Entropy and D1-D5 geometries, Phys. Rev. D 90 (2014) 066004 [1405.6185].
Entanglement beyond AdS. S Chakraborty, A Giveon, N Itzhaki, D Kutasov, 10.1016/j.nuclphysb.2018.08.0111805.06286Nucl. Phys. B. 935290S. Chakraborty, A. Giveon, N. Itzhaki and D. Kutasov, Entanglement beyond AdS, Nucl. Phys. B 935 (2018) 290 [1805.06286].
Entanglement entropy for TT, JT, TJ deformed holographic CFT. S Chakraborty, A Hashimoto, 10.1007/JHEP02(2021)096JHEP. 02962010.15759S. Chakraborty and A. Hashimoto, Entanglement entropy for TT, JT, TJ deformed holographic CFT, JHEP 02 (2021) 096 [2010.15759].
W Cui, H Shu, W Song, J Wang, 2304.04684Correlation Functions in the TsT/TT Correspondence. W. Cui, H. Shu, W. Song and J. Wang, Correlation Functions in the TsT/TT Correspondence, 2304.04684.
Wilson loop in a TT like deformed CFT 2. S Chakraborty, 10.1016/j.nuclphysb.2018.12.0031809.01915Nucl. Phys. B. 938605S. Chakraborty, Wilson loop in a TT like deformed CFT 2 , Nucl. Phys. B 938 (2019) 605 [1809.01915].
| [] |
[
"Equivariant localization and holography",
"Equivariant localization and holography"
] | [
"Dario Martelli \nDipartimento di Matematica \"Giuseppe Peano\"\nUniversità di Torino\nVia Carlo Alberto 1010123TorinoItaly\n\nINFN\nSezione di Torino\nVia Pietro Giuria 110125TorinoItaly\n",
"Alberto Zaffaroni \nDipartimento di Fisica\nUniversità di Milano\nBicocca Piazza della Scienza 320126MilanoItaly\n\nINFN\nsezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly\n"
] | [
"Dipartimento di Matematica \"Giuseppe Peano\"\nUniversità di Torino\nVia Carlo Alberto 1010123TorinoItaly",
"INFN\nSezione di Torino\nVia Pietro Giuria 110125TorinoItaly",
"Dipartimento di Fisica\nUniversità di Milano\nBicocca Piazza della Scienza 320126MilanoItaly",
"INFN\nsezione di Milano-Bicocca\nPiazza della Scienza 320126MilanoItaly"
] | [] | We discuss the theory of equivariant localization focussing on applications relevant for holography. We consider geometries comprising compact and non-compact toric orbifolds, as well as more general non-compact toric Calabi-Yau singularities. A key object in our constructions is the equivariant volume, for which we describe two methods of evaluation: the Berline-Vergne fixed-point formula and the Molien-Weyl formula, supplemented by the Jeffrey-Kirwan prescription. We present two applications in supersymmetric field theories. Firstly, we describe a method for integrating the anomaly polynomial of SCFTs on compact toric orbifolds. Secondly, we discuss equivariant orbifold indices that are expected to play a key role in the computation of supersymmetric partition functions. In the context of supergravity, we propose that the equivariant volume can be used to characterise universally the geometry of a large class of supersymmetric solutions. As an illustration, we employ equivariant localization to prove various gravitational block formulas, recovering previous results as well as obtaining generalizations. | null | [
"https://export.arxiv.org/pdf/2306.03891v1.pdf"
] | 259,088,609 | 2306.03891 | 8c3fa59960fd6c50e844b0b50d9162926d53d403 |
Equivariant localization and holography

Dario Martelli
Dipartimento di Matematica "Giuseppe Peano", Università di Torino, Via Carlo Alberto 10, 10123 Torino, Italy
INFN, Sezione di Torino, Via Pietro Giuria 1, 10125 Torino, Italy

Alberto Zaffaroni
Dipartimento di Fisica, Università di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy
INFN, Sezione di Milano-Bicocca, Piazza della Scienza 3, 20126 Milano, Italy

arXiv:2306.03891v1 [hep-th] 6 Jun 2023

June 7, 2023

Abstract: We discuss the theory of equivariant localization, focussing on applications relevant for holography. We consider geometries comprising compact and non-compact toric orbifolds, as well as more general non-compact toric Calabi-Yau singularities. A key object in our constructions is the equivariant volume, for which we describe two methods of evaluation: the Berline-Vergne fixed-point formula and the Molien-Weyl formula, supplemented by the Jeffrey-Kirwan prescription. We present two applications in supersymmetric field theories. Firstly, we describe a method for integrating the anomaly polynomial of SCFTs on compact toric orbifolds. Secondly, we discuss equivariant orbifold indices that are expected to play a key role in the computation of supersymmetric partition functions. In the context of supergravity, we propose that the equivariant volume can be used to characterise universally the geometry of a large class of supersymmetric solutions. As an illustration, we employ equivariant localization to prove various gravitational block formulas, recovering previous results as well as obtaining generalizations.
1 Introduction
The theory of equivariant co-homology is a rich mathematical subject, with numerous applications in diverse areas of geometry and mathematical physics. A remarkable consequence of this theory is that quite generally, on manifolds endowed with a Hamiltonian action of a Lie group, there exist localization formulas that express certain integrals in terms of contributions arising from the fixed point sets of the group action, thus simplifying enormously their evaluation. A prime example of this feature is the classic result of Duistermaat-Heckman [1]. More generally, such integrals involve equivariant characteristic classes and may arise in the evaluation of equivariant indices of transversally elliptic operators [2,3]. See for instance the review articles [4,5], or [6] for a discussion perhaps more accessible to physicists. In this paper we will need an extension of these results to the setup of orbifolds [7], including spindles and other orbifolds with singularities in co-dimension less than two, that are not so often considered in the physics literature but have recently received attention in the context of holography.
A very general framework in which the ideas of equivariant localization are realized is that of symplectic toric geometry. In this case one starts with a symplectic manifold (M_2m, ω) in dimension 2m, equipped with an effective Hamiltonian action of the real torus T^m. The image of the associated moment maps is a convex rational polytope P ⊂ R^m [8].
Together with the angular coordinates on the torus, φ_i ∼ φ_i + 2π, the moment maps y_i can be used as "symplectic coordinates", endowing the manifold with a natural coordinate system (y_i, φ_i) in which any metric can then be written in a canonical form in terms of combinatorial data of the polytope [9]. In the toric setting the fixed point sets of the T^m action consist of isolated points, so that the localization formulas take the form of sums over fixed points. The applications of the localization theorems in this context range from the quantization of symplectic manifolds to algorithms for computing the volumes of polytopes and counting integral points. See e.g. [10,11]. The extension to symplectic toric orbifolds was discussed in [12] and that to non-compact toric cones in [13].
In theoretical physics equivariant localization came to the fore in the work of Nekrasov as a technique for calculating partition functions counting instantons in supersymmetric field theories [14]. See e.g. [15] for a review. Applications of toric geometry motivated by the AdS/CFT correspondence were first discussed in [16] and further developed in [17], where the volume functional of toric Sasakian manifolds was shown to be extremized by Sasaki-Einstein metrics. The extension to the more general equivariant setting and the relation to fixed point theorems and the index-character of the associated Calabi-Yau cone singularities was explored in [18]. From the viewpoint of holography, these results can be used to compute the volume and other properties of Sasaki-Einstein manifolds, without explicit knowledge of the metric, from which in turn one can infer properties of the dual field theories. Subsequent developments in geometry include results about: toric Sasaki-Einstein metrics [19,20], Kähler-Einstein metrics [21], extremal Sasaki metrics [22,23], conformally Kähler Einstein-Maxwell metrics [24].
Following in the footsteps of [17,18], we take holography as a motivation for uncovering precise mathematical relationships between geometry and supersymmetric field theories. In particular, we provide new evidence that toric geometry and equivariant localization are well-suited mathematical frameworks for this purpose. In the context of supergravity our aim is to develop a universal approach to the study of the geometry underlying supersymmetric solutions, based on extremization problems analogous to those formulated in [17] and [25]. We will argue that the functionals to be extremized can be calculated in each case using the technique of equivariant localization, generalizing the results that appeared in [18,26-28]. We will consider, in particular, the novel supergravity constructions featuring compact spaces with conical singularities, like spindles and other orbifolds [29-47]. These solutions imply that we should work in the orbifold setting from the outset, and indeed from our results we will obtain a direct localization proof of the factorization in gravitational blocks [48] recently discussed in the literature [31,36,45,49].
In the context of supersymmetric field theory our motivation is that of extending to the orbifold realm some tools that are well-established for field theories compactified on smooth manifolds. Specifically, working with orbifold equivariant co-homology, we wish to put on a sounder mathematical footing the technique of integration of the anomaly polynomials of even-dimensional SCFTs compactified on orbifolds. Furthermore, following [50], where a new index for three-dimensional N = 2 theories was computed exploiting the spindle index-character, we present a discussion of equivariant orbifold indices, that we expect to be key building blocks for computing supersymmetric partition functions of SQFTs on orbifolds, extending Pestun's [51] approach to supersymmetric localization. When the underlying space is (the resolution of) a non-compact Calabi-Yau singularity the same objects have been employed previously to compute Hilbert series of the moduli spaces of supersymmetric field theories [52][53][54]. We expect that new insights can be gained from the study of equivariant indices and their relation to the equivariant volume. We now summarize the structure of this paper.
In section 2 we recall the symplectic geometry description of compact toric orbifolds, following [9,12,55,56]. We introduce the equivariant volume of symplectic toric orbifolds, that is our main object of interest. We discuss two alternative methods for evaluating this, namely the Berline-Vergne fixed point formula and the Molien-Weyl integral formula, that exploits the presentations of the toric orbifolds as symplectic quotients, based on Lerman and Tolman's generalization [12] of Delzant's construction [8]. Although the topics covered in this section are mainly not original, we consider an extension to non-compact toric orbifolds, which leads to localization formulas for odd-dimensional orbifolds, arising as the base of complex cones, thus generalizing the results of [18,57] for the Sasakian volume. Aspects of the equivariant volume of non-compact symplectic toric manifolds were recently studied in [58,59]. In this section we also discuss the equivariant orbifold index of the Dolbeault complex, twisted by a holomorphic line orbi-bundle. In the toric setting, the geometric interpretation of this object is that of counting integer lattice points inside a convex integral polytope (or polyhedral cone, in the non-compact case), corresponding to sections of the line bundle. It can then be thought of as the quantum (or K-theoretical) version of the equivariant volume, which is recovered in a limit in which the lattice spacing goes to zero. Besides this close relationship with the equivariant volume, equivariant orbifold indices are expected to provide fundamental building blocks for the construction of supersymmetric partition functions defined on orbifolds [50], with applications to black hole microstate counting. We therefore present some examples of these indices in appendix C.
In section 3 we discuss in detail a number of examples of toric orbifolds and their associated equivariant volumes. We start with the complex projective line WP^1_{[n_1,n_2]}, also known as the spindle. This is the most general compact toric orbifold in complex dimension one. It has a prominent role in several recent supergravity constructions corresponding to various wrapped branes [29-43]. Moving to complex dimension two, we consider generic compact toric orbifolds, described by a rational convex polygon with an arbitrary number of vertices. We then consider in more detail triangles, namely weighted projective spaces WP^2_{[n_1,n_2,n_3]} and their quotients by discrete groups, as well as quadrilaterals. Examples of supergravity solutions comprising Hermitian (non-Kähler) metrics on quadrilateral toric orbifolds are discussed in [44-47]. As a warm-up for section 5, we also describe the peculiarities of the non-compact case in a few explicit examples.
Section 4 concerns the application of equivariant localization to the calculation of the anomaly polynomial of 4d and 6d SCFTs compactified on various orbifolds. Using this approach we prove the localized form of the anomaly polynomial for theories compactified on the spindle [29,31-33], as well as on general four dimensional toric orbifolds, for which one example was considered in [44]. Our approach leads to a uniform derivation for SCFTs compactified on different orbifolds and explains the localized form of the integrated anomaly polynomials that was previously observed in examples.
Finally, in section 5 we discuss the application of equivariant localization in the context of supergravity. Firstly, we show that the results of [18] on the localized form of the Sasakian volume are immediately recovered from the equivariant volume of the associated non-compact Calabi-Yau singularity. Furthermore, we prove that also the master volume introduced in [60], in the context of GK geometry [61], can be extracted from the equivariant volume and as a corollary we prove the localized form of the gravitational block formulas for D3 and M5 branes wrapped on the spindle, reproducing the results of [49] in the toric case. We then propose analogous constructions for other branes wrapped on the spindle from which, in each case, we can extract the localized form of the gravitational blocks, which was anticipated in [36]. This leads us to propose that equivariant localization is the common thread of the geometry of supersymmetric supergravity solutions, at least in setups with a holographic interpretation. In particular, we believe that the equivariant volume should be a key object for setting up extremal problems characterising supersymmetric geometries, in different supergravity theories.
In section 6 we discuss our findings and indicate some directions for future work. The paper contains three appendices with some complementary material.
2 The equivariant volume

2.1 The symplectic geometry description of toric orbifolds
In this section we review the geometry of symplectic toric orbifolds following the general formalism developed in [9,12,55,56]. We emphasise that although it appears that we are relying on symplectic geometry, we will be interested in computing topological quantities that do not depend on the existence of an integrable symplectic structure, and it should be possible to reformulate our computations entirely in terms of complex geometry. In particular, our results are applicable also to situations in which one is interested in metrics that are not compatible with a symplectic structure, such as the Hermitian (non-Kähler) metrics on toric orbifolds constructed in [44-47].
We consider a symplectic toric orbifold (M_2m, ω) in dimension 2m equipped with an effective Hamiltonian action of the real torus T^m = R^m/2πZ^m. We introduce symplectic coordinates (y_i, φ_i) with i = 1, ..., m, where φ_i are angular coordinates on the torus, φ_i ∼ φ_i + 2π. In terms of these coordinates the symplectic form is given by^1

\omega = \mathrm{d}y_i \wedge \mathrm{d}\phi_i .   (2.1)

By a generalization of Delzant's theorem [8], compact symplectic toric orbifolds are classified by labelled polytopes which are the image of M_2m under the moment maps y_i associated with the toric action [12]. The image of M_2m is simply obtained by forgetting the angular coordinates φ_i and it is a rational simple convex polytope P in R^m. We can describe it by introducing a set of linear functions^2
l a (y) = y i v a i − λ a , a = 1, . . . , d , (2.2)
where v a are vectors in R m . The convex polytope is the subset of R m defined by
P = {y ∈ R m : l a (y) ≥ 0} a = 1, . . . , d . (2.3)
The linear equations l a (y) = 0 (2.4) define the facets F a of the polytope. We denote with d the number of facets of P.
The condition that P be rational is equivalent to the fact that the vectors v^a have integer entries. The condition that P be simple requires that each vertex p lies at the intersection of precisely m facets,

l_{a_1}(p) = l_{a_2}(p) = \cdots = l_{a_m}(p) = 0 ,   (2.5)

and the corresponding m vectors {v^{a_1}, ..., v^{a_m}} are a basis for R^m. The polytope comes equipped with a label for each facet, a positive integer n_a such that the structure group of every point in the inverse image of F_a is Z_{n_a}. This construction exhibits M_2m as a torus fibration over the polytope P and generalises the familiar construction for toric varieties. As in the latter case, the torus fibration is non-degenerate in the interior of P. A particular one-cycle in T^m, determined by the vector v^a, collapses at the facet F_a. Thus each facet F_a defines a symplectic subspace of M_2m of real codimension two. In the complex case this becomes a divisor,^3 which we will denote D_a. Similarly, at the intersection of q facets, more cycles in T^m degenerate and we have a symplectic subspace of M_2m of real codimension 2q. The intersection of m facets is a vertex of the polytope and it corresponds to a fixed point of the T^m action. We denote with n the number of such fixed points. In particular, we can give an alternative definition of the polytope P as the convex hull of the images of the fixed points of M_2m.
In the context of toric varieties, the vectors v^a define what is called the fan of M_2m. We will use the same terminology, but the reader should be aware that we are dealing with a generalization of the concept which allows for more general types of orbifold singularities. For toric varieties, the vectors v^a are primitive, while in the case of generic symplectic toric orbifolds they are not. We can always define for each v^a a primitive vector \hat v^a and a positive integer n_a such that v^a = n_a \hat v^a.^4 The integer n_a is precisely the label of the facet F_a defined above. In particular, each symplectic divisor D_a has a local Z_{n_a} singularity. This cannot happen for toric varieties, which are normal and have no singularities of complex co-dimension less than two. Notice also that the local singularity at the fixed point given by the intersection of the m facets F_{a_i}, with i = 1, ..., m, has order d = |det(v^{a_1}, ..., v^{a_m})|. M_2m is a smooth manifold if and only if all the labels n_a are one and for each vertex |det(v^{a_1}, ..., v^{a_m})| = 1.
We want to equip M 2m with a compatible complex structure. Any T m -invariant Kähler metric on M 2m is of the form [55]
ds^2 = G_{ij}(y)\, dy^i dy^j + G^{ij}(y)\, d\phi_i d\phi_j ,   (2.6)

where G_{ij} is determined by a symplectic potential G(y) as

G_{ij} = \frac{\partial^2 G}{\partial y^i \partial y^j} ,   (2.7)

and G^{ij} = (G^{-1})^{ij} is the inverse matrix. Holomorphic coordinates are given by

z^i = x^i + i\, \phi_i , \qquad x^i = \frac{\partial G}{\partial y^i} .   (2.8)
The existence of a symplectic potential is equivalent to the integrability of the complex structure.
The canonical metric on M_2m is given by [9]

G(y) = \frac{1}{2} \sum_{a=1}^{d} l_a \log l_a ,   (2.9)

so that

G_{ij} = \frac{\partial^2 G}{\partial y^i \partial y^j} = \frac{1}{2} \sum_{a=1}^{d} \frac{v^a_i v^a_j}{l_a} .   (2.10)
Notice that G_{ij} has poles at the facets but the metric is smooth up to orbifold singularities. The inverse matrix G^{ij} has rank m − 1 at the facets, indicating that a one-cycle in T^m is degenerate. With a change of coordinates we obtain a smooth orbifold metric on F_a. The structure of poles of (2.9) is compatible with the degeneration of T^m at the faces of P and it is precisely designed to obtain an orbifold metric on M_2m. The most general Kähler metric on M_2m is discussed in [56] and it is obtained by replacing the symplectic potential G(y) for the canonical metric with G(y) + h(y), where h(y) is smooth on the whole P. The topological quantities we will discuss in this paper do not depend on the metric and we can safely set h(y) = 0. In our applications to holography we will encounter metrics that are not Kähler and not even symplectic, but the underlying spaces are in fact symplectic toric orbifolds and we can therefore use the symplectic coordinates and the canonical metric to compute topological quantities that ultimately will not depend on the metric. Each facet F_a defines a T^m-invariant divisor D_a and an associated line bundle L_a. These objects will be important for our construction, so we will spend some time discussing their properties. The first Chern class of L_a has been explicitly computed in [9,56]:^5

c_1(L_a) = \Big[ -\frac{i}{2\pi}\, \partial\bar\partial \log l_a \Big] ,   (2.11)

where with [α] we denote the co-homology class of the differential form α. An explicit representative is given by^6

c_1(L_a) = \mathrm{d}\big( \mu^i_a\, \mathrm{d}\phi_i \big) ,   (2.13)

where

\mu^i_a = \mu^i_a(y) = -\frac{1}{4\pi}\, \frac{G^{ij} v^a_j}{l_a} .   (2.14)
Notice that \mu^i_a can be seen as a moment map for the torus action:

i_{\partial_{\phi_i}} c_1(L_a) = -\mathrm{d}\mu^i_a .   (2.15)

From G^{ij} G_{jk} = \delta^i_k we find

\sum_{a=1}^{d} \mu^i_a\, v^a_k = -\frac{\delta^i_k}{2\pi} .   (2.16)
We can relate the Chern classes of the divisors to the Kähler form ω as follows. From
G_{ij}\, y^j = \frac{1}{2} \sum_a \frac{v^a_i\, v^a_j y^j}{l_a} = \frac{1}{2} \sum_a \frac{v^a_i\, (l_a + \lambda_a)}{l_a} = \frac{1}{2} \sum_a \frac{\lambda_a v^a_i}{l_a} + \frac{1}{2} \sum_a v^a_i ,   (2.17)

we obtain

y^i = -2\pi \sum_a \lambda_a \mu^i_a + \frac{1}{2} \sum_a G^{ij} v^a_j ,   (2.18)

so that

\omega = \mathrm{d}y_i \wedge \mathrm{d}\phi_i = -2\pi \sum_a \lambda_a\, \mathrm{d}(\mu^i_a\, \mathrm{d}\phi_i) + \frac{1}{2}\, \mathrm{d}\Big( \sum_a G^{ij} v^a_j\, \mathrm{d}\phi_i \Big) .   (2.19)

Using the degeneration of G^{ij} at the facets, one can check that \sum_a G^{ij} v^a_j \mathrm{d}\phi_i is a well-defined one-form, so that the last term on the right hand side of (2.19) is exact. Therefore we obtain the important relation (see Theorem 6.3 in [9])

\frac{[\omega]}{2\pi} = -\sum_a \lambda_a\, c_1(L_a) .   (2.20)

^5 We believe that there is a minus sign error in equation (6.17) in [9] and we have therefore changed the sign here.

^6 Writing as in (2.8), z^i = x^i + i\phi_i where x^i = \partial G/\partial y^i, we have

\partial = \frac{1}{2} \sum_i (\mathrm{d}x^i + i\, \mathrm{d}\phi_i) \Big( \frac{\partial}{\partial x^i} - i\, \frac{\partial}{\partial \phi_i} \Big)

and, therefore, for a torus invariant function f that only depends on y,

\partial\bar\partial f = -\frac{i}{2}\, \frac{\partial^2 f}{\partial x^i \partial x^j}\, \mathrm{d}x^i \wedge \mathrm{d}\phi_j = -\frac{i}{2}\, \mathrm{d}\Big( \frac{\partial f}{\partial x^j}\, \mathrm{d}\phi_j \Big) = -\frac{i}{2}\, \mathrm{d}\Big( G^{kj}\, \frac{\partial f}{\partial y^k}\, \mathrm{d}\phi_j \Big) .   (2.12)

With some abuse of language, we will often denote with c_1(L) an explicit representative of the corresponding co-homology class.
Notice that this equation holds only in co-homology. We see that the parameters λ a defining the shape of the polytopes through (2.3) are parameterizing the Kähler moduli of the symplectic orbifold.
There are actually only d − m independent line bundles. Indeed, using (2.13) and (2.16), we find m relations among the Chern classes,

\sum_{a=1}^{d} v^a_i\, c_1(L_a) = 0 , \quad i = 1, \ldots, m .   (2.21)

In this paper we will also consider non-compact cases, in particular non-compact Calabi-Yau cones. In this case, the polytope is replaced by a rational convex polyhedral cone. Calabi-Yau cones typically have singularities worse than orbifold. This happens when more than m facets intersect at a vertex. To use the general formalism of this section we will perform a partial resolution to have only orbifold singularities. In applications to holography, we will also encounter polytopes and polyhedral cones that are not convex. We will obtain results by performing a suitable extrapolation from the convex case.
2.2 The definition of the equivariant volume
In this section we define the equivariant volume. In order to simplify the exposition, in this and the next two sections we assume that M 2m is compact. We will discuss the subtleties and necessary modifications for the non-compact case in section 2.5.
We want to work equivariantly with respect to the T^m action on M_2m, which is generated by the m vector fields ∂_{φ_i}. Correspondingly we introduce m equivariant parameters ǫ_i, with i = 1, ..., m, and the vector field ξ = ǫ_i ∂_{φ_i}. We can introduce a Hamiltonian H = ǫ_i y_i for this vector field,
i ξ ω = −dH ,(2.23)
and define an equivariant Kähler form
ω T = ω + 2πH = d(y i dφ i ) + 2πǫ i y i . (2.24)
We similarly introduce equivariant Chern classes for the line bundles L a
c T 1 (L a ) = c 1 (L a ) + 2πǫ i µ a i = d(µ a i dφ i ) + 2πǫ i µ a i ,(2.25)
using the moment maps µ a . All these forms are equivariantly closed
(\mathrm{d} + 2\pi i_\xi)\, \omega^T = 0 , \qquad (\mathrm{d} + 2\pi i_\xi)\, c^T_1(L_a) = 0 .   (2.26)

We define the equivariant volume as

V(\lambda_a, \epsilon_i) = \frac{1}{(2\pi)^m} \int_{M_{2m}} e^{-H}\, \frac{\omega^m}{m!} ,   (2.27)

which is sometimes referred to in the literature as "symplectic volume" or "equivariant symplectic volume". We can write the equivariant volume as

V(\lambda_a, \epsilon_i) = (-1)^m \int_{M_{2m}} e^{-H - \frac{\omega}{2\pi}} = (-1)^m \int_{M_{2m}} e^{-\epsilon_i y_i - \frac{\omega}{2\pi}} ,   (2.28)

or, equivalently,

V(\lambda_a, \epsilon_i) = (-1)^m \int_{M_{2m}} e^{-\frac{\omega^T}{2\pi}} = (-1)^m \int_{M_{2m}} e^{\sum_a \lambda_a\, c^T_1(L_a)} .   (2.29)
Notice also that the two integrands in (2.29) are not equal, but they differ by an equivariantly exact form by (2.20) and the integrals are therefore equal. Notice that the last equality strictly holds only if M_2m is compact. We will return to this point in section 2.5. A geometrical interpretation of the equivariant volume of compact orbifolds is that it is the generating functional for the integrals of the equivariant Chern classes:
V(\lambda_a, \epsilon_i) = (-1)^m \sum_{p} \frac{1}{p!} \sum_{a_1, \ldots, a_p = 1}^{d} \lambda_{a_1} \cdots \lambda_{a_p} \int_{M_{2m}} c^T_1(L_{a_1}) \cdots c^T_1(L_{a_p}) .   (2.30)
The equivariant intersection numbers
D_{a_1 \ldots a_p} = \int_{M_{2m}} c^T_1(L_{a_1}) \cdots c^T_1(L_{a_p}) = (-1)^m\, \frac{\partial^p V(\lambda_a, \epsilon_i)}{\partial\lambda_{a_1} \cdots \partial\lambda_{a_p}} \bigg|_{\lambda_a = 0}   (2.31)
are polynomials in ǫ i and they are topological in nature. Notice that the integrand in (2.31) is a formal linear combination of forms of various degree and the integral selects the piece of degree 2m. In particular, the equivariant intersection numbers are different from zero for all p ≥ m. The expression in (2.27) can be easily reduced to an integral over the polytope P by performing the angular integrations
V(\lambda_a, \epsilon_i) = \frac{1}{(2\pi)^m} \int_{M_{2m}} e^{-H}\, \frac{\omega^m}{m!} = \int_P e^{-H}\, \mathrm{d}y_1 \ldots \mathrm{d}y_m = \int_P e^{-\epsilon_i y_i}\, \mathrm{d}y_1 \ldots \mathrm{d}y_m .   (2.32)
Thus we have another interpretation of the equivariant volume as the volume of the polytope P with measure e −H . Integrals of this type and their relation to equivariant localization are discussed in [11]. The equivariant volume satisfies some interesting identities. From the expression (2.29)
V(\lambda_a, \epsilon_i) = (-1)^m \int_{M_{2m}} e^{\sum_a \lambda_a (c_1(L_a) + 2\pi \epsilon_i \mu^i_a)}   (2.33)

and the identities (2.21) and (2.16), we find that the identity

V(\lambda_a + \beta^j v^a_j, \epsilon_i) = e^{-\beta^j \epsilon_j}\, V(\lambda_a, \epsilon_i)   (2.34)
holds for arbitrary β ∈ R^m. This formula reflects the fact that only d − m parameters λ_a are geometrically independent. A closely related and useful identity can be obtained by taking derivatives of (2.33) and using again (2.21) and (2.16):
\sum_{a=1}^{d} v^a_i\, \frac{\partial V}{\partial\lambda_a} = -\epsilon_i\, V .   (2.35)

Similarly, we have

\sum_{a_1, \ldots, a_q = 1}^{d} v^{a_1}_{i_1} \cdots v^{a_q}_{i_q}\, \frac{\partial^q V}{\partial\lambda_{a_1} \cdots \partial\lambda_{a_q}} = (-1)^q\, \epsilon_{i_1} \cdots \epsilon_{i_q}\, V .   (2.36)
In the next sections we review two different methods to evaluate V(λ a , ǫ i ), the fixed point and the Molien-Weyl formula. They are both discussed in the literature. Here we adapt them to our notations and we discuss the relations among them.
2.3 The fixed point formula
The equivariant volume can be computed using a fixed point formula. This can be obtained from (2.27) using the Duistermaat-Heckman theorem [1], the localization formula for equivariant co-homology [62,63], or a direct evaluation of (2.32) [10,11]. Here we use the localization approach and, for completeness, in Appendix A we report an explicit proof for m = 2.
Consider a symplectic toric orbifold M 2m with the properties discussed in section 2.1. The torus action T m acts on M 2m with n isolated fixed points corresponding to the vertices of the polytope P. Consider an equivariantly closed form α T on M 2m . The equivariant localization theorem for orbifolds applied to our situation states that
\int_{M_{2m}} \alpha^T = \sum_{A=1}^{n} \frac{\alpha^T|_{y_A}}{d_A\, e_T|_{y_A}} ,   (2.37)
where the sum is over the fixed points y A of the T m action, e T is the equivariant Euler class of the tangent bundle at y A and d A are the orders of the orbifold singularity at the fixed point y A . In particular, applying this theorem to the computation of the equivariant volume (2.29) gives
V(\lambda_a, \epsilon_i) = (-1)^m \sum_{A=1}^{n} \frac{e^{-\frac{1}{2\pi}\omega^T|_{y_A}}}{d_A\, e_T|_{y_A}} = (-1)^m \sum_{A=1}^{n} \frac{e^{\sum_{a=1}^{d} \lambda_a\, c^T_1(L_a)|_{y_A}}}{d_A\, e_T|_{y_A}} .   (2.38)
To evaluate the localization formula we need to compute the restriction c T 1 (L a )| y A of the equivariant Chern classes of L a to the A-th fixed point. The fixed point y A is defined by m linear equations l a 1 = . . . = l am = 0 (2.39)
for a choice of m intersecting facets associated with the vectors {v a 1 , . . . , v am }. The order of the orbifold singularity is given by
d A = | det(v a 1 , . . . , v am )| . (2.40)
We will denote the A-th fixed point also with the m-tuple of indices A = (a_1, ..., a_m) defining the vertex. In the neighbourhood of the fixed point A = (a_1, ..., a_m),

G_{ij} = \frac{1}{2}\, \frac{v^{a_1}_i v^{a_1}_j}{l_{a_1}} + \frac{1}{2}\, \frac{v^{a_2}_i v^{a_2}_j}{l_{a_2}} + \ldots + \frac{1}{2}\, \frac{v^{a_m}_i v^{a_m}_j}{l_{a_m}} + \ldots ,   (2.41)

up to regular pieces. This can be inverted to give

G^{ij} = \frac{2}{d_A^2} \Big( (u^{a_1}_A)^i (u^{a_1}_A)^j\, l_{a_1} + (u^{a_2}_A)^i (u^{a_2}_A)^j\, l_{a_2} + \ldots + (u^{a_m}_A)^i (u^{a_m}_A)^j\, l_{a_m} \Big) + \ldots ,   (2.42)
where the vectors u^{a_i}_A are constructed by taking the wedge product of the m − 1 vectors v^{a_j} with j ≠ i. The sign ambiguity is determined by requiring u^{a_i}_A · v^{a_i} = d_A. The vectors u^{a_i}_A have integer entries and satisfy

u^{a_i}_A \cdot v^{a_j} = d_A\, \delta_{ij} ,   (2.43)

as well as

v^{a_1}_i (u^{a_1}_A)^j + v^{a_2}_i (u^{a_2}_A)^j + \ldots + v^{a_m}_i (u^{a_m}_A)^j = d_A\, \delta_i^j .   (2.44)

They have a dual geometrical interpretation as the inward normal vectors to the facets of the cone (v^{a_1}, ..., v^{a_m}) in the fan or, equivalently, as integer vectors along the m edges of the polytope P meeting at y_A. Notice that the m-tuple of vectors u^{a_i}_A depends on the choice of vertex A.
The restriction of the moment maps (2.14) to the fixed points then gives
\mu^i_a \big|_{y_A} = \begin{cases} -\dfrac{1}{2\pi}\, \dfrac{(u^a_A)^i}{d_A} & \text{if } a \in A \\ 0 & \text{if } a \notin A \end{cases}   (2.45)

and, therefore,

c^T_1(L_a) \big|_{y_A} = \begin{cases} -\dfrac{\epsilon \cdot u^a_A}{d_A} & \text{if } a \in A \\ 0 & \text{if } a \notin A \end{cases} .   (2.46)
Notice that the restriction of c T 1 (L a ) to a fixed point is not zero only if the point belongs to the facet F a .
Finally the tangent bundle at a fixed point splits as a direct sum of the m line bundles L a i associated with A = (a 1 , . . . , a m ) and so we have [18]
e_T\big|_{y_A} = \prod_{a_i \in A} c^T_1(L_{a_i})\big|_{y_A} = (-1)^m \prod_{i=1}^{m} \frac{\epsilon \cdot u^{a_i}_A}{d_A} .   (2.47)
Now the fixed point formula gives
V(\lambda_a, \epsilon_i) = \sum_{A=(a_1,\ldots,a_m)} \frac{e^{-\sum_{i=1}^{m} \lambda_{a_i} \frac{\epsilon \cdot u^{a_i}_A}{d_A}}}{d_A \prod_{i=1}^{m} \frac{\epsilon \cdot u^{a_i}_A}{d_A}} ,   (2.48)
where A runs over the n vertices of the polytope. This formula can also be obtained by computing the volume of the polytope P, as discussed, for example, in [11]. We note in passing that the identities (2.34), (2.35) and (2.36) can also be derived from the fixed point formula. For example,

V(\lambda_a + \sum_{j=1}^{m} \beta^j v^a_j, \epsilon_i) = \sum_{A=(a_1,\ldots,a_m)} e^{-\sum_{i,j,k=1}^{m} \beta^j v^{a_i}_j (u^{a_i}_A)^k \epsilon_k / d_A}\; \frac{e^{-\sum_{i=1}^{m} \lambda_{a_i} \frac{\epsilon \cdot u^{a_i}_A}{d_A}}}{d_A \prod_{i=1}^{m} \frac{\epsilon \cdot u^{a_i}_A}{d_A}} = e^{-\sum_{i=1}^{m} \beta^i \epsilon_i}\, V(\lambda_a, \epsilon_i) ,   (2.49)

where we used (2.44).
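As a concrete illustration — our own sketch, not part of the original text — the fixed point formula (2.48) is straightforward to implement symbolically. The helper equivariant_volume_2d below (a name we invented) evaluates (2.48) for a two-dimensional fan, using the vertex weights written as in (3.23) of section 3.2, and checks the identity (2.35) on the example of P^1 x P^1:

import sympy as sp

def equivariant_volume_2d(fan, lams, eps):
    # Sum over vertices A = (a, a+1) of exp(-lam_a e1 - lam_{a+1} e2)/(d_A e1 e2),
    # with e_i = eps.u^{a_i}_A/d_A the restricted weights of eq. (2.48);
    # fan must list the (possibly non-primitive) vectors in anticlockwise order.
    d = len(fan)
    V = 0
    for a in range(d):
        v, w = fan[a], fan[(a + 1) % d]
        dA = v[0]*w[1] - v[1]*w[0]             # det(v^a, v^{a+1}) > 0
        e1 = -(w[0]*eps[1] - w[1]*eps[0])/dA   # -det(v^{a+1}, eps)/d_A
        e2 = (v[0]*eps[1] - v[1]*eps[0])/dA    #  det(v^a, eps)/d_A
        V += sp.exp(-lams[a]*e1 - lams[(a + 1) % d]*e2)/(dA*e1*e2)
    return V

eps1, eps2 = sp.symbols('epsilon1 epsilon2')
lams = sp.symbols('lambda1:5')
fan = [(1, 0), (0, 1), (-1, 0), (0, -1)]       # P^1 x P^1
V = equivariant_volume_2d(fan, lams, (eps1, eps2))

# identity (2.35) for i = 1: sum_a v^a_1 dV/dlambda_a + epsilon_1 V = 0
check = sum(v[0]*sp.diff(V, la) for v, la in zip(fan, lams)) + eps1*V
print(sp.simplify(check))   # -> 0

For this fan the result factorizes into the product of two one-dimensional volumes, as expected for a product space.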
2.4 The Molien-Weyl formula
The equivariant volume can be also computed using a Molien-Weyl integral formula following [58,59].
The orbifold M_2m can be realized as a symplectic quotient. Consider the space C^d, where d is the number of vectors in the fan, and the subgroup K of T^d of elements of the form

(e^{2\pi i Q_1}, \ldots, e^{2\pi i Q_d}) \in T^d ,   (2.50)

where

\sum_{a=1}^{d} Q_a\, v^a_i \in Z , \quad i = 1, \ldots, m .   (2.51)

Then M_2m is the symplectic reduction [12,56]

M_{2m} = C^d // K ,   (2.52)

generalizing a familiar result in toric geometry. Notice that, in general, the group K has a continuous part U(1)^{d-m},

Q_a = \sum_{A=1}^{d-m} Q^A_a\, \alpha_A , \quad \alpha_A \in R ,   (2.53)

which can be expressed in terms of the GLSM charges Q^A_a ∈ Z, with A = 1, ..., d − m, satisfying

\sum_{a=1}^{d} Q^A_a\, v^a_i = 0 ,   (2.54)
as well as a discrete part of more complicated characterization. Consider first the case where there is no discrete part in the quotient and M 2m = C d //U(1) d−m . Using the results in [58,59], we can write a Molien-Weyl formula for the equivariant volume
V_{MW}(t_A, \bar\epsilon_a) = \int \prod_{A=1}^{d-m} \frac{\mathrm{d}\phi_A}{2\pi i}\; \frac{e^{\sum_A \phi_A t_A}}{\prod_{a=1}^{d} \big( \bar\epsilon_a + \sum_A \phi_A Q^A_a \big)} .   (2.55)
A particular contour of integration should be used, depending on the direction of the vector of Kähler parameters t_A. A prescription based on the Jeffrey-Kirwan (JK) residue [64] has been proposed in [58,59]. Formula (2.55) should also be divided by the order of the discrete group corresponding to the torsion part of K when this is present. We will discuss subtleties related to the choice of contour and discrete groups when illustrating examples. Notice that V_MW depends on d − m Kähler parameters t_A, which is the right number of geometrically inequivalent parameters for M_2m. It also depends on d equivariant parameters ǭ_a associated to the T^d action on the ambient space, which is larger than the m parameters that we expect for M_2m. However, d − m equivariant parameters can be eliminated by shifting the integration variables. Indeed, up to an exponential factor, the Molien-Weyl integral is invariant under the shift
V_{MW}(t_A, \bar\epsilon_a + \sum_A Q^A_a \eta_A) = e^{-\sum_A \eta_A t_A}\, V_{MW}(t_A, \bar\epsilon_a) ,   (2.56)
for η A ∈ R.
We thus have two expressions for the equivariant volume: V(ǫ_i, λ_a), depending on m equivariant parameters ǫ_i and d Kähler parameters λ_a but with the "gauge" invariance (2.34), and V_MW(ǭ_a, t_A), depending on d equivariant parameters ǭ_a and d − m Kähler parameters t_A but with the "gauge" invariance (2.56). The two sets of parameters can be related by comparing gauge invariant quantities. Based on examples, we find that
t_A = -\sum_a \lambda_a\, Q^A_a , \qquad \epsilon_i = \sum_a v^a_i\, \bar\epsilon_a ,   (2.57)
and that the two expressions for the equivariant volume agree up to a multiplicative factor when expressed in terms of the over-redundant variables (λ_a, ǭ_a):

V_{MW}(t_A = -\sum_a \lambda_a Q^A_a, \bar\epsilon_a) = e^{\sum_a \lambda_a \bar\epsilon_a}\, V(\lambda_a, \epsilon_i = \sum_a v^a_i \bar\epsilon_a) .   (2.58)

Consistency of this relation with the shift invariances (2.34) and (2.56) follows from the identity (2.35) and the identification (2.57).
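As an elementary consistency check of (2.55) and (2.58) — again our own sketch, with invented variable names — consider M_2 = P^1, i.e. d = 2, m = 1, GLSM charge Q = (1, 1), fan v^1 = 1, v^2 = −1, so that t = −(λ_1 + λ_2) and ǫ = ǭ_1 − ǭ_2. For this compact example the contour can be taken to enclose both poles of the integrand:

import sympy as sp

phi, t = sp.symbols('phi t')
eb1, eb2, l1, l2 = sp.symbols('ebar1 ebar2 lambda1 lambda2')

integrand = sp.exp(phi*t)/((eb1 + phi)*(eb2 + phi))
V_MW = sum(sp.residue(integrand, phi, p) for p in (-eb1, -eb2))
V_MW = V_MW.subs(t, -(l1 + l2))

eps = eb1 - eb2
V = (sp.exp(-eps*l1) - sp.exp(eps*l2))/eps              # eq. (3.15) with n_+ = n_- = 1
print(sp.simplify(V_MW - sp.exp(l1*eb1 + l2*eb2)*V))    # -> 0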
2.5 Remarks on the non-compact case
Most of the previous results hold also for non-compact orbifolds, but with some important differences. We still define the equivariant volume as
V(\lambda_a, \epsilon_i) = \frac{1}{(2\pi)^m} \int_{M_{2m}} e^{-H}\, \frac{\omega^m}{m!} = (-1)^m \int_{M_{2m}} e^{-\frac{\omega^T}{2\pi}} ,   (2.60)
which we can also write as an integral over a polyhedron P:

V(\lambda_a, \epsilon_i) = \int_P e^{-\epsilon_i y_i}\, \mathrm{d}y_1 \ldots \mathrm{d}y_m .   (2.61)
Since M 2m and P are non-compact, we need to assume that the integrals are convergent. This happens for the examples we will be interested in, where P is asymptotically a cone. In this case the Hamiltonian H = ǫ i y i acts as a convergence factor, at least if the vector ǫ lies inside the cone. The fixed point formula and the Molien formula hold under general conditions also for non-compact orbifolds, and we can even use them as an operative definition for the equivariant volume. For example, the fixed point formula only assumes that there are isolated fixed points in P and that there are no contributions from infinity. Notice that the identities (2.34), (2.35) and (2.36), which follow from the fixed point formula, are still valid.
However, the co-homological interpretation (2.30) fails. In particular, the two integrals in (2.29) are not equal in general. The second term in (2.19)
\omega = \mathrm{d}y_i \wedge \mathrm{d}\phi_i = -2\pi \sum_a \lambda_a\, c_1(L_a) + \frac{1}{2}\, \mathrm{d}\Big( \sum_a G^{ij} v^a_j\, \mathrm{d}\phi_i \Big) ,   (2.62)

is still exact, but it cannot be ignored in an integral over non-compact orbifolds.^7 As a result, the equivariant volume can still be formally expanded in power series of λ_a, but the coefficients cannot be straightforwardly interpreted as generalized intersection numbers as in (2.30). In particular, while in the compact case the power series in λ_a starts at order m and the coefficients are polynomials in ǫ_i, in the non-compact setting the power series has all coefficients different from zero and these are in general rational functions of ǫ_i, as follows from the fixed point formula. We will discuss explicit examples of non-compact orbifolds in sections 3.3 and 5.
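To make these remarks concrete, consider the simplest non-compact example M_4 = C^2 with λ_a = 0 (our own check, not from the text): the polyhedron is the first quadrant, the integral (2.61) converges for Re ǫ_i > 0, and the single fixed point at the origin (with d_A = 1 and weights ǫ_1, ǫ_2) contributes a rational function of ǫ_i rather than a polynomial:

import sympy as sp

y1, y2 = sp.symbols('y1 y2', positive=True)
e1, e2 = sp.symbols('epsilon1 epsilon2', positive=True)

# direct integral (2.61) over the cone y_i >= 0
V_direct = sp.integrate(sp.exp(-e1*y1 - e2*y2), (y1, 0, sp.oo), (y2, 0, sp.oo))

# fixed point formula (2.48): a single vertex at the origin, giving 1/(eps1*eps2)
V_fixed = 1/(e1*e2)
print(sp.simplify(V_direct - V_fixed))   # -> 0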
2.6 Relation to the equivariant orbifold index
In this section we briefly discuss how the equivariant volume arises as a limit of an index character, following the logic in [18,58,59]. Examples of calculations of characters are provided in appendix C, as this is not the main theme of this paper. The equivariant index for the twisted Dolbeault complex on M_2m is defined by
Z(q_i, \Lambda_a) = \sum_{p=0}^{m} (-1)^p\, \mathrm{Tr}\big\{ q \,\big|\, H^{(0,p)}\big(M_{2m}, O(\textstyle\sum_a \Lambda_a D_a)\big) \big\}   (2.63)
where q ∈ T^m is an element of the torus and Λ_a ∈ Z, a = 1, ..., d, are integers specifying a choice of line bundle. The trace is taken on the induced action on the co-homology. The index can be computed with the Hirzebruch-Riemann-Roch theorem. We start with the smooth case, which is considerably simpler, and we take all the labels to be one, the vectors v^a to be primitive and all the orders d_A = 1. The fixed point formula for the equivariant index is

Z(\Lambda_a, q_i) = \sum_{A=(a_1,\ldots,a_m)} \frac{q^{-\sum_{i=1}^{m} \Lambda_{a_i} u^{a_i}_A}}{\prod_{i=1}^{m} \big(1 - q^{u^{a_i}_A}\big)} ,   (2.64)

where the symbol q^n is an abbreviation for q_1^{n_1} \ldots q_m^{n_m}. The geometrical interpretation of the equivariant index is generically to count integer points,

Z(\Lambda_a, q_i) = \sum_{m \in \Delta(\Lambda_a)} q^m ,   (2.65)

in the polytope

\Delta(\Lambda_a) = \{ m \cdot v^a \geq -\Lambda_a \} , \quad m \in Z^m ,   (2.66)
as discussed, for example, in [11]. The equivariant volume can be obtained by setting q_i = e^{-\hbar \epsilon_i} and taking the limit \hbar \to 0. The limit of (2.64) is singular, but exhibits the equivariant volume as the coefficient of the leading pole,

Z(\Lambda_a, q_i) \underset{\hbar \to 0}{=} \frac{V(\lambda_a, \epsilon_i)}{\hbar^m} + \ldots ,   (2.67)

where we scale \Lambda_a = -\lambda_a/\hbar.
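The limit (2.67) is easy to verify for P^1 in the smooth case n_+ = n_− = 1 (a sketch of ours; we write the two fixed point contributions of (2.64) directly as exponentials, using Λ_a = −λ_a/ℏ and q = e^{−ℏǫ}, so that q^{−Λ_1} = e^{−ǫλ_1} and q^{Λ_2} = e^{ǫλ_2}):

import sympy as sp

hb, eps = sp.symbols('hbar epsilon', positive=True)
l1, l2 = sp.symbols('lambda1 lambda2')

# eq. (2.64) for P^1: fixed point weights u_1 = 1, u_2 = -1
Z = sp.exp(-eps*l1)/(1 - sp.exp(-hb*eps)) + sp.exp(eps*l2)/(1 - sp.exp(hb*eps))

V = (sp.exp(-eps*l1) - sp.exp(eps*l2))/eps       # eq. (3.15) with n_+ = n_- = 1
leading = sp.series(hb*Z, hb, 0, 1).removeO()    # coefficient of the 1/hbar pole
print(sp.simplify(leading - V))                  # -> 0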
Analogously, the Molien-Weyl formula reads [58,59]
Z_{MW}(T_A, \bar q_a) = (-1)^{d-m} \int \prod_{A=1}^{d-m} \frac{\mathrm{d}z_A}{2\pi i\, z_A}\; \frac{\prod_A z_A^{-T_A}}{\prod_{a=1}^{d} \big( 1 - \prod_A z_A^{Q^A_a}\, \bar q_a \big)} ,   (2.68)

where the contour is defined again through the JK prescription. Setting \bar q_a = e^{-\hbar \bar\epsilon_a}, T_A = t_A/\hbar \in Z and changing variables to z_A = e^{-\hbar \phi_A}, we find

Z_{MW}(T_A, \bar q_a) \underset{\hbar \to 0}{=} \frac{V_{MW}(t_A, \bar\epsilon_a)}{\hbar^m} + \ldots .   (2.69)
The Molien-Weyl formula (2.68) can be interpreted as an average over the complexified group U(1)^{d-m} acting on the coordinates of the ambient space C^d. The continuous average in (2.68) should be supplemented by a discrete average if the group K appearing in the symplectic reduction M_2m = C^d // K has a torsion part. The formulas (2.64) and (2.68) are familiar in the context of non-compact conical Calabi-Yau singularities. For Λ_a = T_A = 0 they have been used to compute Hilbert series for the mesonic moduli space of the dual N = 1 superconformal theories [52,53]. For Λ_a, T_A ≠ 0 they compute Hilbert series for the baryonic moduli space [54].
The relation between the two formulas is

Z_{MW}\big( T_A = \sum_a Q^A_a \Lambda_a, \bar q_a \big) = \prod_{a=1}^{d} \bar q_a^{\Lambda_a}\; Z\big( \Lambda_a, q_i = \prod_a \bar q_a^{v^a_i} \big) .   (2.70)
The corresponding fixed point formulas for orbifolds are more complicated [7,65]. Each fixed point contribution is replaced by a discrete Molien formula implementing the quotient Z_{d_A}:

Z(\Lambda_a, q_i) = \sum_{A=(a_1,\ldots,a_m)} \frac{1}{d_A} \sum_{k=0}^{d_A - 1} e^{-2\pi i k \sum_{i=1}^{m} J_i \Lambda_{a_i}}\; \frac{q^{-\sum_{i=1}^{m} \Lambda_{a_i} u^{a_i}_A / d_A}}{\prod_{i=1}^{m} \big( 1 - e^{2\pi i k J_i}\, q^{u^{a_i}_A / d_A} \big)} ,   (2.71)

where

(e^{2\pi i J_1}, \ldots, e^{2\pi i J_m}) \in T^m , \quad \sum_{i=1}^{m} v^{a_i} J_i \in Z^m ,   (2.72)

is a generator of the local orbifold singularity Z_{d_A}. In the limit \hbar \to 0 only the term with k = 0 contributes at the leading order and we recover the fixed point formula (2.48) as the coefficient of the most singular term (see (2.67)).
In the special case of non-trivial labels but no extra orbifold singularities we can use the formulas in Appendix C to resum (2.71) and we obtain
Z(\Lambda_a, q_i) = \sum_A \frac{\prod_{i=1}^{m} q^{-\hat u^{a_i}_A \lfloor \Lambda_{a_i}/n_{a_i} \rfloor}}{\prod_{i=1}^{m} \big( 1 - q^{\hat u^{a_i}_A} \big)} ,   (2.73)

where the floor function ⌊x⌋ denotes the integer part of x, v^a = n_a \hat v^a and u^a = n_a \hat u^a have been decomposed into primitive vectors \hat v^a, \hat u^a and the label n_a, and we assume that \hat d_{a,a+1} = \det(\hat v^a, \hat v^{a+1}) = 1. One can check with elementary methods that this formula indeed computes the number of integer points in the polytope (2.66).
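As a brute-force illustration of (2.73) — our own check, with sample values n_+ = 3, n_− = 5, Λ_1 = 7, Λ_2 = 11 of our choosing — for the spindle (m = 1, û^1 = 1, û^2 = −1) the formula should reproduce the sum of q^m over −⌊Λ_1/n_+⌋ ≤ m ≤ ⌊Λ_2/n_−⌋:

import sympy as sp

q = sp.symbols('q')
n_plus, n_minus, L1, L2 = 3, 5, 7, 11    # sample labels and fluxes

# eq. (2.73): two fixed points with primitive weights +1 and -1
Z = q**(-sp.floor(sp.Rational(L1, n_plus)))/(1 - q) \
    + q**(sp.floor(sp.Rational(L2, n_minus)))/(1 - 1/q)

# direct count of the integer points in the polytope (2.66)
count = sum(q**m for m in range(-(L1 // n_plus), L2 // n_minus + 1))
print(sp.simplify(Z - count))   # -> 0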
3 Examples from geometry and holography
3.1 The spindle
The first example that we consider is the weighted projective line WP^1_{[n_+,n_-]}, namely the spindle Σ, which is at the heart of the recent novel supergravity constructions. It can be defined as the set of pairs of complex numbers (z_1, z_2) under the identification
(z 1 , z 2 ) ∼ (λ n + z 1 , λ n − z 2 ) , λ ∈ C * . (3.1)
This is the most general toric symplectic orbifold in two real dimensions. It is the simplest example of toric orbifold that is not a toric variety, as its singular points occur in complex co-dimension one. In particular, the orbifold points, that are fixed under the U(1) action, are also the divisors and therefore d = n = 2. The image of Σ under the moment map is simply a segment P = I, exactly as for P 1 ≃ S 2 and the two facets F a are its end-points, defined by the linear equations
l a (y) = v a y − λ a = 0 , a = 1, 2 , (3.2)
where the one-dimensional non-primitive vectors are v^1 = n_1, v^2 ≡ −n_2, and we will also denote n_1 ≡ n_+ and n_2 ≡ n_−.^8 The two labels are n_+, n_− ∈ N. Thus Σ is a circle fibration over the segment I, with the circle collapsing at the end-points, where the two C/Z_{n_+}, C/Z_{n_−} singularities sit; see figure 1. In the following it will be convenient to define two rescaled Kähler parameters λ_1 = n_+ \bar\lambda_1, λ_2 = n_− \bar\lambda_2, so that

l_1(y) = n_+ (y - \bar\lambda_1) , \qquad l_2(y) = -n_- (y + \bar\lambda_2) .   (3.3)

[Figure 1: The spindle Σ as a circle fibration over a segment, with C/Z_{n_+} and C/Z_{n_−} orbifold singularities at the two end-points.]
We can then write explicitly the one-dimensional (canonical) metric (2.10),

G_{11} = \frac{1}{2} \Big( \frac{(v^1)^2}{l_1} + \frac{(v^2)^2}{l_2} \Big) = \frac{1}{2} \Big( \frac{n_+}{y - \bar\lambda_1} - \frac{n_-}{y + \bar\lambda_2} \Big) ,   (3.4)

and the related moment maps

\mu_1 = \frac{1}{2\pi}\, \frac{y + \bar\lambda_2}{n_-(y - \bar\lambda_1) - n_+(y + \bar\lambda_2)} , \qquad \mu_2 = \frac{1}{2\pi}\, \frac{y - \bar\lambda_1}{n_-(y - \bar\lambda_1) - n_+(y + \bar\lambda_2)} .   (3.5)
In the conventions of section 2.1, in which l_a(y) ≥ 0, we have that^9 y ∈ [y_1, y_2] = [\bar\lambda_1, -\bar\lambda_2], with the extrema of the interval corresponding to the north and south poles of Σ, respectively.
We can then write all the relations discussed in section 2.1 explicitly in terms of the moment maps (3.5). The explicit representatives of the first Chern classes c 1 (L a ) associated to the line bundles L a defined by the extrema of the interval (the two divisors) are
c 1 (L a ) = d(µ a dφ) ,(3.6)
where φ ∼ φ + 2π is the azimuthal coordinate, and denoting by ǫ the equivariant parameter, their equivariant versions are
c T 1 (L a ) = d(µ a dφ) + 2πǫµ a . (3.7)
The equivariant Kähler form is
ω T = ω + 2πH = dy ∧ dφ + 2πǫy ,(3.8)
where H = ǫy is the Hamiltonian for the vector field ξ = ǫ∂_φ. With this information, we can check various relations explicitly, without appealing to the fixed point theorem. For example, it is immediate to compute

\int_\Sigma c_1(L_a) = 2\pi \int_{y_1}^{y_2} \mathrm{d}\mu_a = 2\pi \big( \mu_a(-\bar\lambda_2) - \mu_a(\bar\lambda_1) \big) = \frac{1}{n_a} .   (3.9)
The co-homological relation (2.20) can also be verified explicitly, noting that
y = -2\pi \sum_a \lambda_a \mu_a + \Phi(y) ,   (3.10)

where

\Phi(y) = \frac{(n_+ - n_-)(y + \bar\lambda_2)(y - \bar\lambda_1)}{n_+(\bar\lambda_2 + y) + n_-(\bar\lambda_1 - y)}   (3.11)

is such that Φ(y_a) = 0, implying that Φ(y) dφ is an exact one-form on the spindle.

^9 Thus, in these conventions, the fibration exists for \bar\lambda_1 < -\bar\lambda_2.

Let us now turn to the fixed-point relations and the equivariant volume. From (3.5) it immediately follows that the restriction of the moment maps to the fixed points reads
\mu_a \big|_{y_b} = -\frac{1}{2\pi}\, \frac{u_a}{n_a}\, \delta_{ab} ,   (3.12)

where u_1 = 1, u_2 = −1, and therefore

c^T_1(L_a) \big|_{y_b} = -\frac{\epsilon\, u_a}{n_a}\, \delta_{ab} .   (3.13)
Using these and the fact that the equivariant Euler class of the tangent bundle at a fixed point y a is
e_T \big|_{y_a} = -\frac{\epsilon\, u_a}{n_a} ,   (3.14)

one can reproduce the integrals (3.9) from the fixed point theorem (2.37). The equivariant volume can be easily computed either directly,

V(\lambda_a, \epsilon) = -\int_\Sigma e^{-\epsilon y - \frac{\omega}{2\pi}} = \int_{y_1}^{y_2} e^{-\epsilon y}\, \mathrm{d}y = \frac{1}{\epsilon} \Big( e^{-\frac{\epsilon \lambda_1}{n_+}} - e^{\frac{\epsilon \lambda_2}{n_-}} \Big) ,   (3.15)

or using the fixed point formula,

V(\lambda_a, \epsilon) = -\sum_{a=1}^{2} \frac{e^{-\frac{\omega^T}{2\pi}}\big|_{y_a}}{n_a\, e_T|_{y_a}} = \frac{1}{\epsilon} \Big( e^{-\frac{\epsilon \lambda_1}{n_+}} - e^{\frac{\epsilon \lambda_2}{n_-}} \Big) ,   (3.16)
which of course gives the same result. From this it is immediate to compute the non-zero equivariant intersection numbers (2.31),
\int_\Sigma c^T_1(L_1)^s = \frac{(-\epsilon)^{s-1}}{n_+^s} , \qquad \int_\Sigma c^T_1(L_2)^s = \frac{\epsilon^{s-1}}{n_-^s} ,   (3.17)
generalizing (3.9) to s > 1.
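A quick symbolic check — ours, not the paper's — that the λ-expansion of (3.16) reproduces (3.17) via (2.31) with m = 1:

import sympy as sp

eps, l1, l2 = sp.symbols('epsilon lambda1 lambda2')
n1, n2 = sp.symbols('n1 n2', positive=True)   # the labels n_+ and n_-

V = (sp.exp(-eps*l1/n1) - sp.exp(eps*l2/n2))/eps   # eq. (3.16)

# (2.31): int_Sigma c_1^T(L_1)^s = (-1) d^s V/dlambda_1^s at lambda_a = 0
for s in range(1, 5):
    Is = -sp.diff(V, l1, s).subs({l1: 0, l2: 0})
    print(s, sp.simplify(Is - (-eps)**(s - 1)/n1**s))   # -> 0 for each s

Mixed derivatives in λ_1 and λ_2 vanish identically, consistently with the fact that the two fixed points do not intersect.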
We end this subsection noting that it is straightforward to check directly that the equivariant volume depends only on the co-homology class of [ω]. Specifically, if we use a different representative of [ω], such as
\omega' = \mathrm{d}y \wedge \mathrm{d}\phi - \mathrm{d}\big( \Phi(y)\, \mathrm{d}\phi \big) ,   (3.18)

then we have

V(\lambda_a, \epsilon) = -\int_\Sigma e^{-\frac{\omega'^T}{2\pi}} = \frac{1}{2\pi} \int_\Sigma e^{-\epsilon y + \epsilon \Phi(y)}\, \omega' = \int_{y_1}^{y_2} e^{-\epsilon y + \epsilon \Phi(y)}\, \mathrm{d}\big( y - \Phi(y) \big) = \int_{y_1}^{y_2} e^{-\epsilon y}\, \mathrm{d}y ,   (3.19)
where the last equality follows trivially from a change of variables and the fact that Φ(y a ) = 0.
3.2 Four dimensional toric orbifolds
We now move to compact four-dimensional orbifolds (m = 2), that have applications in holography [44-46], as well as in geometry [66-68]. The image of M_4 under the moment map is a rational simple convex polygon P in R^2 and its facets F_a, defined by the linear equations l_a(y) = 0, are the edges of the polygon, as shown in figure 2. Of course each vertex p_a of the polygon lies at the intersection of precisely 2 edges, so that the simplicity condition is automatic. Moreover, the number of facets/edges d of P coincides with the number of vertices n, that are also the fixed points of the T^2 action. The condition that P be rational is equivalent to the fact that the d normals to the edges, v^a, have integer entries; however, they do not need to be primitive. The highest common factor n_a of the two entries of each of the v^a is precisely the label associated to an edge/facet F_a.

[Figure 2: a vertex p_a of the polygon P, lying at the intersection of the edges with normals v^a and v^{a+1}, together with the adjacent vertices p_{a−1}, p_{a+1} and the edge vectors u^1_a, u^2_a.]

Each facet defines a symplectic subspace of M_4 of real dimension two, namely a divisor D_a of complex dimension one, with a local C/Z_{n_a} singularity. The vertices p_a of P correspond precisely to points where two divisors D_a, D_{a+1} intersect, with local singularity C^2/Z_{d_{a,a+1}}, where d_{a,a+1} = |det(v^a, v^{a+1})|. The linear dependence of the d vectors v^a in R^2 is expressed by the relations

\sum_{a=1}^{d} Q^A_a\, v^a = 0 ,   (3.20)

defining the GLSM charges Q^A_a ∈ Z, with A = 1, ..., d − 2, that are used to present M_4 as the symplectic quotient M_4 = C^d // U(1)^{d-2}, up to torsion factors. Correspondingly there are two linear relations among the divisors,

\sum_{a=1}^{d} v^a_i\, D_a = 0 , \quad i = 1, 2 .   (3.21)
The equivariant volume and equivariant intersection numbers can be conveniently written in terms of the quantities
\epsilon^a_i = \frac{\epsilon \cdot u^i_a}{d_{a,a+1}} , \quad i = 1, 2 ,   (3.22)
where u i a are the inward normals to the cones (v a , v a+1 ) as discussed in section 2.3 and pictured in figure 2. We can rewrite these as
ǫ a 1 = − det(v a+1 , ǫ) det(v a , v a+1 ) , ǫ a 2 = det(v a , ǫ) det(v a , v a+1 ) , (3.23)
where ǫ ≡ (ǫ 1 , ǫ 2 ). In writing these formulas, we have assumed that the vectors v a lie on the plane in anticlockwise order and we have identified cyclically v a+d ≡ v a . In particular, the equivariant Euler class of the tangent bundle at a fixed point y a reads
e T | ya = ǫ a 1 ǫ a 2 . (3.24)
The restriction to the fixed points of the equivariant Chern classes c T 1 (L a ) (2.46) can be written as
c T 1 (L a )| y b = −(δ a,b ǫ b 1 + δ a,b+1 ǫ b 2 ) , (3.25)
so that the general formula for the equivariant intersection numbers reads
M 4 c T 1 (L a 1 ) . . . c T 1 (L ap ) = a c T 1 (L a 1 )| ya . . . c T 1 (L ap )| ya d a,a+1 ǫ a 1 ǫ a 2 .
(3.26)
Below we report explicit formulas for intersection numbers up to p = 4. First of all, for p = 0, 1 the integrals vanish automatically, giving rise to identities that may also be verified with elementary algebra. Specifically, we have
0 = M 4 1 = a 1 d a,a+1 ǫ a 1 ǫ a 2 , (3.27) 0 = M 4 c T 1 (L a ) = − 1 d a,a+1 ǫ a 2 − 1 d a−1,a ǫ a−1 1 .
(3.28)
For the double intersection numbers we reproduce the intersections matrix of divisors (see e.g. [45]), which is independent of the equivariant parameters ǫ 1 , ǫ 2 :
D a · D b = D ab = M 4 c T 1 (L a )c T 1 (L b ) = 1 d a−1,a if b = a − 1 , 1 d a,a+1 if b = a + 1 , − d a−1,a+1 d a−1,a d a,a+1 if b = a , 0 otherwise ,(3.29)
where we used the identity
ǫ a 1 d a,a+1 ǫ a 2 + ǫ a−1 2 d a−1,a ǫ a−1 1 = − d a−1,a+1 d a−1,a d a,a+1 (3.30)
to deal with the terms occurring in the self-intersection D a ·D a . Similarly, we can compute the integral of three equivariant first Chern classes, that read:
D a 1 a 2 a 3 = M 4 c T 1 (L a 1 )c T 1 (L a 2 )c T 1 (L a 3 ) = − ǫ a−1 2 d a−1,a if a i = a j = a k + 1 ≡ a , ǫ a 1 d a,a+1 if a i = a j = a k − 1 ≡ a , (ǫ a 1 ) 2 d a,a+1 ǫ a 2 + (ǫ a−1 2 ) 2 d a−1,a ǫ a−1 1 if a 1 = a 2 = a 3 ≡ a , 0 otherwise ,(3.31)
where the diagonal term can be shown to be linear in ǫ i , using the identity
(ǫ a 1 ) 2 d a,a+1 ǫ a 2 + (ǫ a−1 2 ) 2 d a−1,a ǫ a−1 1 = − d a−1,a+1 d 2 a−1,a d a,a+1 (2d a−1,a ǫ a 1 + d a−1,a+1 ǫ a 2 ) . (3.32)
For the integrals of four equivariant first Chern classes we have
D a 1 a 2 a 3 a 4 = M 4 c T 1 (L a 1 )c T 1 (L a 2 )c T 1 (L a 3 )c T 1 (L a 4 ) = ǫ a−1 2 ǫ a−1 1 d a−1,a if a i = a j = a k + 1 = a l + 1 ≡ a , ǫ a 1 ǫ a 2 d a,a+1 if a i = a j = a k − 1 = a l − 1 ≡ a , (ǫ a−1 2 ) 2 d a−1,a if a i = a j = a k = a l + 1 ≡ a , (ǫ a 1 ) 2 d a,a+1 if a i = a j = a k = a l − 1 ≡ a , (ǫ a 1 ) 3 d a,a+1 ǫ a 2 + (ǫ a−1 2 ) 3 d a−1,a ǫ a−1 1 if a 1 = a 2 = a 3 = a 4 ≡ a , 0 otherwise , (3.33)
where the diagonal term can be shown to be quadratic in ǫ i , using the identity
(ǫ a 1 ) 3 d a,a+1 ǫ a 2 + (ǫ a−1 2 ) 3 d a−1,a ǫ a−1 1 = − d a−1,a+1 d 3 a−1,a d a,a+1 (3d 2 a−1,a (ǫ a 1 ) 2 + 3d a−1,a d a−1,a+1 ǫ a 1 ǫ a 2 + d 2 a−1,a+1 (ǫ a 2 ) 2 ) .V(λ a , ǫ i ) = d a=1 1 d a,a+1 ǫ a 1 ǫ a 2 e −λaǫ a 1 −λ a+1 ǫ a 2 ,(3.35)
which is easily evaluated in examples. On the other hand, starting from the definition (2.27), the equivariant volume can also be written as in integral over the polygon P as
V(λ a , ǫ i ) = P e −y i ǫ i dy 1 dy 2 ,(3.36)
that can be evaluated explicitely using Stokes' theorem. Notice that the dependence on the Kähler parameters λ a arises from the shape of P = P(λ a ). The details of this calculation are given in appendix A.
3.2.1 The weighted projective space WP 2 [N 1 ,N 2 ,N 3 ]
As a first example of equivariant volume for four-dimensional toric orbifolds we consider the weighted projective space WP 2 [N 1 ,N 2 ,N 3 ] defined as the set of triples of complex numbers (z 1 , z 2 , z 3 ) under the identification
(z 1 , z 2 , z 3 ) ∼ (λ N 1 z 1 , λ N 1 z 2 , λ N 1 z 3 ) , λ ∈ C * . (3.37)
There are various presentations in terms of labelled polytopes [56] and it is interesting to consider some of them in order to understand better the role of the labels.
Consider first the fan
v 1 = (n 3 , n 3 ) , v 2 = (−n 1 , 0) , v 3 = (0, −n 2 ) , (3.38)
where each of the three facets have non-trivial labels. We take the labels n a to be mutually coprime. The GLSM charges are given by 39) and the symplectic quotient description WP 2
Q ≡ (N 1 , N 2 , N 3 ) = (n 1 n 2 , n 2 n 3 , n 1 n 3 ) ,(3.[N 1 ,N 2 ,N 3 ] = C 3 //U(1),
where U(1) acts with charge Q, is just the definition (3.37) of the weighted projective space. Notice however that our choice of fan corresponds to "non-minimal" Q a = N a that are products of integers.
The equivariant volume can be computed with the fixed point formula (3.35)
V(λ a , ǫ i ) = e −ǫ 2 λ 1 /n 3 −(ǫ 2 −ǫ 1 )λ 2 /n 1 ǫ 2 (ǫ 2 − ǫ 1 ) + e ǫ 1 λ 2 /n 1 +ǫ 2 λ 3 /n 2 ǫ 1 ǫ 2 + e −ǫ 1 λ 1 /n 3 −(ǫ 1 −ǫ 2 )λ 3 /n 2 ǫ 1 (ǫ 1 − ǫ 2 ) . (3.40)
This expression is a rational function of ǫ i but when expanded in power series in λ a as
V(λ a , ǫ i ) = ∞ k=0 V (k) (λ a , ǫ i ) , (3.41)
where V (k) is the component of degree k in λ a , all singular terms cancel. The constant and linear terms vanish and the other V (k) (λ a , ǫ i ) are homogeneous polynomials of degree k − 2 in ǫ that encode the equivariant intersection numbers of the line bundles L a . In particular, V (2) (λ a , ǫ i ) coincides with the non-equivariant limit V(λ a , 0) = 1 2 n 1 n 2 λ 1 + n 2 n 3 λ 2 + n 1 n 3 λ 3 n 1 n 2 n 3 2 = 1 2
(N 1 λ 1 + N 2 λ 2 + N 3 λ 3 ) 2 N 1 N 2 N 3 , (3.42)
encoding the classical intersections numbers of the divisors D a . Since only one divisor is geometrically independent because of (2.22), this is a quadratic form of rank one. We can compare the result with the Molien-Weyl formula (2.55),
V M W (t,ǭ) = dφ 2π e tφ (ǭ 1 + n 1 n 2 φ)(ǭ 2 + n 3 n 2 φ)(ǭ 3 + n 1 n 3 φ) ,(3.43)
where we use the charges Q. The JK prescription for a single integration is very simple. 11 It prescribes to take all residues associated with φ with positive charge for t > 0, and minus the residues for φ with negative charge for t < 0. Taking t > 0 and performing the residue computation, we obtain with non-equivariant limit
V M W (t,ǭ) = e −ǭ 1 t/(n 1 n 2 ) (n 3ǭ1 − n 1ǭ2 )(n 3ǭ1 − n 2ǭ3 ) + e −ǭ 2 t/(n 2 n 3 ) (n 3ǭ1 − n 1ǭ2 )(n 2ǭ3 − n 1ǭ2 ) + e −ǭ 3 t/(n 1 n 3 ) (n 3ǭ1 − n 2ǭ3 )(n 1ǭ2 − n 2ǭ3 ) ,(3.1 2 t 2 (n 1 n 2 n 3 ) 2 = 1 2 t 2 N 1 N 2 N 3 . (3.45)
We see that each residue corresponds to a fixed point. It is easy to see that the two master volumes coincides under the identification (2.58)
V M W (t = − 3 a=1 N a λ a ,ǭ a ) = e 3 a=1 λaǭa V(λ a , ǫ 1 =ǭ 1 n 3 −ǭ 2 n 1 , ǫ 2 =ǭ 1 n 3 −ǭ 3 n 2 ) . (3.46)
It is interesting to consider also the fan
v 1 = (−n 3 , 0) , v 2 = (0, −n 3 ) , v 3 = (n 1 , n 2 ) , (3.47)
where now the GLSM charges are the "minimal" ones
Q ≡ (N 1 , N 2 , N 3 ) = (n 1 , n 2 , n 3 ) . (3.48)
However in this case there is also a torsion part in the symplectic quotient description. Indeed the symplectic action on C 3 is given by
(e 2πiQ 1 , e 2πiQ 2 , e 2πiQ 3 ) , (3.49) where a Q a v a i ∈ Z . (3.50)
For the minimal fan (3.47) with relatively prime n a , the relations
− n 3 Q 1 + n 1 Q 3 ∈ Z , −n 3 Q 2 + n 2 Q 3 ∈ Z ,(3.51)
have solution
Q 1 = n 1 α + k 1 n 3 , Q 2 = n 2 α + k 2 n 3 , Q 3 = n 3 α , k i = 0, . . . , n 3 − 1 , α ∈ R .
(3.52) One k i can be further gauged away by choosing α = integer/n 3 (assuming again that the n a are relatively prime). So we are left with a continuous U(1) action generated by α and an extra discrete group 12
Γ = Z N 3 × Z N 3 Z N 3 , (3.53) so that M 4 = WP 2 [N 1 ,N 2 ,N 3 ] /Γ . (3.54)
As an exercise, the interested reader can check that the relation
V M W (t = − a λ a N a ,ǭ a ) ≡ e a λaǭa V(λ a , ǫ i = a v a iǭ a ) ,(3.(N 1 λ 1 + N 2 λ 2 + N 3 λ 3 ) 2 N 1 N 2 N 2 3 = 1 2 t 2 N 1 N 2 N 2 3 , (3.56)
with an extra factor of N 3 . Finally, we can consider an example taken from [56] 13 . The fan is 57) and the GLSM charges are the "minimal" ones
v 1 = (−n 2 n 3 , 0) , v 2 = (0, −n 1 n 3 ) , v 3 = (n 1 n 2 , n 1 n 2 ) ,(3.Q ≡ (N 1 , N 2 , N 3 ) = (n 1 , n 2 , n 3 ) . (3.58) However, the equations a Q a v a i ∈ Z − n 3 n 2 Q 1 + n 1 n 2 Q 3 ∈ Z , −n 3 n 1 Q 2 + n 1 n 2 Q 3 ∈ Z ,(3.59)
have solution
Q 1 = n 1 α + k 1 n 2 n 3 , Q 2 = n 2 α + k 2 n 1 n 3 , Q 3 = n 3 α + k 3 n 1 n 2 , α ∈ R ,(3.60)
and one k i can be further gauged away by choosing α = integer/(n 1 n 2 n 3 ). So we are left with a continuous U(1) action generated by α and an extra discrete group so that
M 4 = WP 2 [N 1 ,N 2 ,N 3 ] /Γ , (3.61) where Γ = Z N 2 N 3 × Z N 1 N 3 × Z N 2 N 3 Z N 1 N 2 N 3 , (3.62)
is a discrete group of order N 1 N 2 N 3 . We still find
V M W (t = − a λ a N a ,ǭ a ) ≡ e a λaǭa V(λ a , ǫ i = a v a iǭ a ) ,(3.63)
where the Molien-Weyl integral should be further divided by N 1 N 2 N 3 . The non-equivariant volume is 1 2
(N 1 λ 1 + N 2 λ 2 + N 3 λ 3 ) 2 N 2 1 N 2 2 N 2 3 = 1 2 t 2 N 2 1 N 2 2 N 2 3 .
(3.64) 13 The reader should be aware that, in [56], the weighted projective space is denoted with WP 2 (N1,N2,N3) and the example we are discussing with WP 2 [N1,N2,N3] .
Quadrilaterals
Let us now discuss the most general four-dimensional toric orbifold with four fixed points, that we refer to as quadrilaterals, following [67]. A sub-class of this family of orbifolds has been discussed in [45] and below we will discuss how one can retrieve those results. It is convenient to parameterize the data of this orbifold in term of the six vector products
d a,b ≡ det(v a , v b ) ∈ Z , (3.65)
where v a , a = 1, . . . , 4 is the set of toric data, as in figure 3. Then a vector identity implies that these satisfy the following relation
d 1,2 d 3,4 − d 2,3 d 4,1 = d 1,3 d 2,4 ,(3.66)
showing that there are five independent integers characterising a quadrilateral. One can show that the charges of the U(1) 2 action on C 4 can be written in a particularly symmetric form using this parameterization, and read
Q 1 = (d 2,4 , d 4,1 , 0, d 1,2 ) , Q 2 = (d 2,3 , −d 1,3 , d 1,2 , 0) . (3.67)
Without loss of generality, one can use an SL(2, Z) transformation to set v 1 = (n − , 0), where n − ∈ Z. The remaining vectors solving the constraint
4 a=1 Q A a v a = 0 , (3.68)
are then easily determined and the full set reads
v 1 = (n − , 0) , v 2 = (a + d 2,4 , d 1,2 /n − ) , v 3 = (a − d 2,3 + a + d 3,4 , d 1,3 /n − ) , v 4 = (a − d 2,4 , −d 4,1 /n − ) ,(3.69)
where d 1,2 , d 1,3 , d 4,1 are integer multiples of n − and a ± ∈ Z such that
a − d 1,2 + a + d 4,1 = −n − ,(3.70)
which always exist by Bezout's lemma. The equivariant volume V(λ a , ǫ i ) is easily computed using the general fixed point formula (3.35), but we shall not write it explicitly as it does not have a compact expression. Expanding it in Taylor series in powers of λ a as in (3.41), one can check that V (0) (ǫ i ) = V (1) (λ a , ǫ i ) = 0, whereas the quadratic part is independent of ǫ i and reads
V (2) (λ a ) = 1 2 a,b λ a λ b D ab , (3.71)
where D ab is the intersection matrix of divisors (3.29), which reads
D ab = d 2,4 d 1,2 d 4,1 1 d 1,2 0 1 d 4,1 1 d 1,2 −d 1,3 d 1,2 d 2,3 1 d 2,3 0 0 1 d 2,3 − d 2,4 d 2,3 d 3,4 1 d 3,4 1 d 4,1 0 1 d 3,4 d 1,3 d 3,4 d 4,1 .
(3.72)
It may be useful to compare with the orbifold geometry studied in [45] (see also [44]), which may be viewed as spindle fibered over another spindle, and interpreted as a natural orbifold generalization of Hirzebruch surfaces. This is obtained setting
d 1,3 = 0 , d 2,4 = −t , d 1,2 = m − n − , d 2,3 = m − n + , d 3,4 = m + n + , d 4,1 = n − m + . (3.73)
and
r + = −ta + , r − = −ta − .
For these values the vectors of the fan reduce to those written in eq. (4.27) of [45] and the GLSM charges become
Q 1 = (−t, m + n − , 0, m − n − ) , Q 2 = (n + , 0, n − , 0) . (3.74)
We can then perform the U(1) 2 quotient of C 4 in two stages. The first quotient using weights Q 1 gives the line bundle O(−t) → Σ m + ,m − . The second quotient using weights Q 2 projectivize this bundle, giving a Σ n + ,n − → Σ m + ,m − bundle. Indeed for n + = n − = m + = m − = 1 this reduces exactly to the Hirzebruch surface F t , which is the most general family of toric manifolds with four fixed points. It is instructive to compare with the Molien-Weyl formula. We do it explicitly for the case of the Σ n + ,n − → Σ m + ,m − bundle. The Molien-Weyl integral reads
V M W = n − dφ 1 2π dφ 2 2π e t 1 φ 1 +t 2 φ 2 (ǭ 1 − tφ 1 + n + φ 2 )(ǭ 2 + n − m + φ 1 )(ǭ 3 + n − φ 2 )(ǭ 4 + m − n − φ 1 )
.
(3.75) The multiplicative factor n − takes into account that the U(1) 2 action is not effective since
g 1 = e 2πi n + n − Q 1 , g 2 = e −2πi t n − Q 2 ,(3.76)
acts in the same way on all points in C 4 . The Molien formula mods out twice by a discrete subgroup Z n − and we need to multiplicate by n − to obtain the right result. The poles in the integrand are associated with two-dimensional vectors Q a = (Q 1 a , Q 2 a ), a = 1, 2, 3, 4. The JK prescription instructs us to take the (simultaneous) residues at the poles Q a 1 and Q a 2 if the vector of Kähler parameters (t 1 , t 2 ) is contained in the cone (Q a 1 , Q a 2 ). When Figure 4: The JK prescription for the Σ n + ,n − → Σ m + ,m − bundle.
Q 4 = (m − n − , 0) , Q 2 = (m + n − , 0) Q 3 = (0, n − ) Q 1 = (−t, n + ) (t 1 , t 2 )
(t 1 , t 2 ) lies in the first quadrant, it is contained in four different cones (see figure 4). In the Kähler chamber t 1 > 0 and t 2 > 0 we then add four contributions. One can check that
V M W (t A = − a λ a Q A a ,ǭ a ) = e a λaǭa V(λ a , ǫ i = a v a iǭ a ) ,(3.77)
where the gauge invariant variables (2.57) are
t 1 = tλ 1 − m + n − λ 2 − m − n − λ 4 t 2 = −n + λ 1 − n − λ 3 ǫ 1 = n −ǭ1 + r +ǭ2 − n +ǭ3 + r −ǭ4 ǫ 2 = m −ǭ2 − m +ǭ4 . (3.78)
The non-equivariant volume is
V M W (t A , 0) = 2n + t 1 t 2 + m − r − t 2 1 + m + r + t 2 2 2m − m + n 2 − n 2 + .
(3.79)
Non-compact examples
We consider now some non-compact examples where the polytope P is asymptotically a polyhedral cone. We compute the volume using the fixed point formula, assuming that there is a finite number of fixed points and no contributions from infinity. As anticipated in section 2.5, the equivariant volume is a rational function of ǫ i . As we will see, two of the singular terms for the Calabi-Yau examples can be identified with the Sasakian volume of [17,18] and the master volume introduced in [60]. This identification will be discussed in more detail in section 5.
C 2 /Z p
Our first non-compact example is C 2 /Z p , which is a conical orbifold singularity. The most general toric action is obtained using as fan the vectors
v 1 = (1, 0) , v 2 = (q, p) . (3.80)
The corresponding Z p action on (z 1 , z 2 ) ∈ C 2 is given by
(z 1 , z 2 ) → (e 2πiQ 1 z 1 , e 2πiQ 2 z 2 ) ,(3.81)
where the GLSM charges are pure torsion, namely a Q a v a ∈ Z 2 , (3.82) whose solution is, for p and q coprime, given by
Q 1 = k q p , Q 2 = − k p , k = 0, . . . , p − 1 . (3.83)
This is then the C 2 /Z p quotient
(z 1 , z 2 ) → (ω q p z 1 , ω −1 p z 2 ) ,(3.84)
with ω p = e 2πi p , which for |z 1 | 2 + |z 2 | 2 = 1 gives the Lens space L(p, q). The equivariant volume is obtained from the general formula (3.35), and is entirely encoded by the the contribution of the single fixed point with Z p singularity, namely
V C 2 /Zp (λ a , ǫ i ) = p e 1 p (λ 1 (qǫ 2 −pǫ 1 )−λ 2 ǫ 2 ) (pǫ 1 − qǫ 2 )ǫ 2 . (3.85)
Expanding (3.85) in powers of λ a we obtain
V C 2 /Zp (λ a , ǫ i ) = p ǫ 2 (pǫ 1 − qǫ 2 ) − λ 1 ǫ 2 + λ 2 pǫ 1 − qǫ 2 + O(λ 2 a ) ,(3.86)
where the constant and linear terms are non zero, as a difference with the compact case. They are also singular functions of ǫ. The constant piece is actually the Sasakian volume of the Lens space L(p, q) and the linear term is the corresponding master volume introduced in [60]. In particular, for q = 1 that corresponds to the "supersymmetric" Lens space L(p, 1), the linear term coincides precisely with (B.6) of [70].
Since there are no continuous GLSM charges, the Molien-Weyl formula is trivial
V M W C 2 /Zp (ǭ a ) = 1 pǭ 1ǭ2 ,(3.87)
where there are no Kähler parameters t and we divide by p for the order of the discrete group Z p . Nevertheless, it is still true that
V M W C 2 /Zp (ǭ a ) = e a λaǭa V C 2 /Zp (λ a , ǫ i = a v a iǭ a ) ,(3.88)
where the gauge invariant combinations are
ǫ 1 =ǭ 1 + qǭ 2 , ǫ 2 = pǭ 2 . (3.89) 3.3.2 O(−p) → Σ
Let us now consider the asymptotically conical non-compact orbifold 14 O(−p) → Σ, where Σ = Σ n + ,n − is a spindle with Z n + , Z n − singularities at its poles. This can be thought of as a "blow-up" of the C 2 /Z p of the previous section, where the apex of the cone is replaced with the spindle Σ. In general, this is a partial resolution, but in the special case that n + = n − = 1, it is a complete resolution.
The GLSM charges are given by
Q = (n + , −p, n − ) , (3.90)
where notice that the space is a Calabi-Yau if and only if p = n + + n − . In particular, only for p = 2 we have a complete resolution of the CY singularity C 2 /Z 2 . The vectors of the fan solving the condition
3 a=1 Q a v a = 0 ,(3.91)
can be taken to be
v 1 = (1, 0) , v 2 = (t, n − ) , v 3 = (q, p) , (3.92) with t, q ∈ Z satisfying tp − qn − = n + ,(3.93)
which always exist by Bezout's lemma. Indeed, we see that the basis (3.92) is obtained from (3.80) by adding the vector (t, n − ), which is normal to the edge representing the spindle, as illustrated in figure 5. The equivariant volume is obtained from the general formula (3.35), and now it receives contributions from the two orbifold fixed points, with Z n − , Z n + singularities, namely
V O(−p)→Σ (λ a , ǫ i ) = n − e 1 n − (λ 1 (tǫ 2 −n − ǫ 1 )−ǫ 2 λ 2 ) ǫ 2 (n − ǫ 1 − tǫ 2 ) + n + e 1 n + (λ 2 (qǫ 2 −pǫ 1 )+λ 3 (n − ǫ 1 −tǫ 2 )) (qǫ 2 − pǫ 1 )(n − ǫ 1 − tǫ 2 ) ,(3.94)
and expanding this in λ a we get
V O(−p)→Σ (λ a , ǫ i ) = p ǫ 2 (pǫ 1 − qǫ 2 ) − λ 1 ǫ 2 + λ 3 pǫ 1 − qǫ 2 + O(λ 2 a ) ,(3.95)
which coincides with (3.86). Thus we see that up to linear order, the information about the (partial) resolution is washed out as there is no dependence on n − and n + , nor on the Kähler parameter λ 2 , corresponding to the compact divisor Σ. Specifically, the leading and sub-leading terms reproduce precisely the Sasakian and GMS master volume of the Lens space L(p, q). Notice that this behaviour is independent of the Calabi-Yau condition p = n + + n − . Another notable case is given by n + = n − = p = 1, which corresponds to the smooth non-compact space O(−1) → P 1 , namely the blow-up of C 2 at one point 15 . This is not a Calabi-Yau and its link is the round S 3 . We can also compare with the Molien-Weyl formula
V M W O(−p)→Σ (t,ǭ a ) = dφ 2π e tφ (ǭ 1 + n + φ)(ǭ 2 − pφ)(ǭ 3 + n − φ)
, (3.96) where, according to the JK prescirption, for t > 0 we take the residues for the terms with positive charge Q i , φ = −ǭ 1 /n + and φ = −ǭ 3 /n − . We find, as usual
V M W O(−p)→Σ (t = − a Q a λ a ,ǭ a ) = e a λaǭa V O(−p)→Σ (λ a , ǫ i = a v a iǭ a ) ,(3.97)
where the gauge invariant combinations are The corresponding polytope is a conical non-compact polyhedron with four facets all intersecting at the tip of the cone. Since the vectors v a lie on a plane, or, equivalently a Q a = 0, this defines a conical Calabi-Yau three-fold. In order to use the results of section 2 we need to resolve the conifold singularity. This can be done in a standard way by triangulating the fan as in figure 6. The fan is now the union of the two cones (v 1 , v 2 , v 4 ) and (v 2 , v 3 , v 4 ), each associated with a vertex of the polytope and therefore with a fixed point of the torus action. The original conical singularity has been replaced by a compact two cycle, which can be visualised as a circle fibration over the segment connecting the two vertices of the polytope. There are actually two small resolutions related by a flop. The second one is obtained by triangulating the fan in a different way, by adding the line v 1 − v 3 and considering the fan which is the union of the two cones (v 1 , v 2 , v 3 ) and (v 1 , v 3 , v 4 ). 15 In this case we have t − q = 1 and therefore we can take t = 1, q = 0, reproducing the obvious fan v 1 = (1, 0), v 2 = (1, 1), v 3 = (0, 1).
t = −n + λ 1 + pλ 2 − n − λ 3 , ǫ 1 =ǭ 1 + tǭ 2 + qǭ 3 , ǫ 2 = n −ǭ2 + pǭ 3 .V(λ a , ǫ i ) = A=(a 1 ,a 2 ,a 3 ) e −ǫ·(λa 1 u a 1 A +λa 2 u a 2 A +λa 3 u a 3 A )/da 1 ,a 2 ,a 3 d a 1 ,a 2 ,a 3 ( ǫ·u a 1 A da 1 ,a 2 ,a 3 )( ǫ·u a 2 A da 1 ,a 2 ,a 3 )( ǫ·u a 3 A da 1 ,a 2 ,a 3 ) , (3.101)
where A runs over the triangular cones of the resolution, d a 1 ,a 2 ,a 3 = | det(v a 1 , v a 2 , v a 3 )| is the order of the orbifold singularity at the fixed point and u a A are the inward normals to the faces of the cones as in figure 7.
V(λ a , ǫ i ) = e −(ǫ 1 −ǫ 2 −ǫ 3 )λ 1 −ǫ 2 λ 2 −ǫ 3 λ 4 ǫ 2 ǫ 3 (ǫ 1 − ǫ 2 − ǫ 3 ) + e −(ǫ 1 −ǫ 3 )λ 2 −(−ǫ 1 +ǫ 2 +ǫ 3 )λ 3 −(ǫ 1 −ǫ 2 )λ 4 (ǫ 1 − ǫ 2 )(ǫ 1 − ǫ 3 )(−ǫ 1 + ǫ 2 + ǫ 3 ) . (3.102)
At λ a = 0 we recover the Sasakian volume of the 5-dimensional base of the singular cone (cfr (7.30) in [18])
V (0) (ǫ i ) = V(λ a = 0, ǫ i ) = ǫ 1 ǫ 2 ǫ 3 (ǫ 1 − ǫ 2 )(ǫ 1 − ǫ 3 )
, (3.103) and the quadratic part is, up to a normalization, the master volume introduced in [60] 16
2V (2) (ǫ i ) = 4 a=1 λ a (λ a−1 det(v a , v a+1 , ǫ) − λ a det(v a−1 , v a+1 , ǫ) + λ a+1 det(v a−1 , v a , ǫ)) det(v a−1 , v a , ǫ) det(v a , v a+1 , ǫ) .
(3.104) Things work similarly for the second resolution (a 1 , a 2 , a 3 ) = (1, 2, 3) ,
u a 1 A = (1, −1, 0) , u a 2 A = (0, 1, −1) , u a 3 A = (0, 0, 1) , (a 1 , a 2 , a 3 ) = (1, 3, 4) , u a 1 A = (1, 0, −1) , u a 2 A = (0, 1, 0) , u a 3 A = (0, −1, 1) .
The equivariant volume now reads
V(λ a , ǫ i ) = e −(ǫ 1 −ǫ 2 )λ 1 −(ǫ 2 −ǫ 3 )λ 2 −ǫ 3 λ 3 ǫ 3 (ǫ 1 − ǫ 2 )(ǫ 2 − ǫ 3 ) + e −(ǫ 1 −ǫ 3 )λ 1 −ǫ 2 λ 3 −(ǫ 3 −ǫ 2 )λ 4 ǫ 2 (ǫ 1 − ǫ 3 )(ǫ 3 − ǫ 2 ) , (3.105)
which is different from (3.102). The difference between the equivariant volumes for the two resolutions starts at order three in the Kähler parameters
V res2 − V res1 = (λ 1 − λ 2 + λ 3 − λ 4 ) 3 6 + O(λ 4 a ) .
(3.106)
In particular, equations (3.103) and (3.104) are valid for both resolutions. We see that the first terms in the λ a expansion of V, which are singular in ǫ i , capture the geometry of the asymptotic singular cone and are therefore independent of the resolution. The difference (3.106) is instead a power series in λ a with coefficients which are regular polynomials in ǫ i and this reflects the fact that the different flops differ by a compact cycle. It is also interesting to compare with the result of the Molien-Weyl formula (2.55) which, using Q = (1, −1, 1, −1), reads
V M W (t,ǭ) = dφ 2πi e tφ (ǭ 1 + φ)(−ǭ 2 + φ)(ǭ 3 + φ)(−ǭ 4 + φ)
.
(3.107)
We need to specify a prescription for the contour and the residues to take, which depends on the sign of t and the choice of resolution. The JK prescription requires to take the residues where the charge Q is positive for t > 0 (first resolution) and minus the residues where the charge Q is negative for t < 0 (second resolution). For example, for t > 0 we take the two residues φ = −ǭ 1 and φ = −ǭ 3 , obtaining
V M W (t,ǭ) = e −tǭ 3 (ǭ 1 −ǭ 3 )(ǭ 2 +ǭ 3 )(ǭ 3 +ǭ 4 ) − e −tǭ 1 (ǭ 1 −ǭ 3 )(ǭ 1 +ǭ 2 )(ǭ 1 +ǭ 4 )
, (3.108) 16 In this formula we identify cyclically v 5 = v 1 .
while for t < 0 we take (minus) the residues associated with negative charge, φ =ǭ 2 and φ =ǭ 4 . It is easy to check that in both cases
V M W (t = − a λ a Q a ,ǭ a ) t>0 t<0 = e a λaǭa V(λ a , ǫ i = a v a iǭ a ) res1 res2 , (3.109)
where the gauge invariant variables explicitly read t = −λ 1 +λ 2 −λ 3 +λ 4 , ǫ 1 =ǭ 1 +ǭ 2 +ǭ 3 +ǭ 4 , ǫ 2 =ǭ 2 +ǭ 3 , ǫ 3 =ǭ 3 +ǭ 4 . (3.110)
Integrating the anomaly polynomial on orbifolds
In this section we will consider field theories associated to D3 and M5 branes and we will show how the formalism of equivariant integration can be employed to determine the anomaly polynomials of lower dimensional theories, obtained compactifying the original theories on orbifolds. For theories compactified on the spindle these results were obtained in [29,[31][32][33], but the equivariant formalism allows for uniform derivations for various branes wrapped on different orbifolds. In the case of theories compactified on manifolds the method of integration of the anomaly polynomial is well-known and is reviewed in [71], to which we refer for more details. In this reference is also discussed the extension in which background gauge fields for the isometries of the compactification manifolds are turned on, and the relation to equivariant integration is spelled out. In a SCFT in (even) dimension d, the anomaly polynomial can be thought of as a formal (d + 2)-form on an auxiliary (d + 2)-dimensional space Z d+2 that is the total space of a M p fibration over a (d + 2 − p)-dimensional base space B d+2−p ,
M p ֒→ Z d+2 → B d+2−p ,(4.1)
and integrating it on M p gives a (d + 2 − p)-form, that is the anomaly polynomial of a (d − p)-dimensional SCFT. In [29] an extension of this construction to the case that M 2 = Σ is a spindle has been proposed and succesfully matched to the dual supergravity solution. More generally, here we will assume that M p (and hence, generically, also Z d+2 ) is an orbifold. The anomaly polynomial is a linear combination of characteristic classes of vector bundles on Z d+2 which (using the splitting principle) can always be decomposed in terms of first Chern classes of holomorphic line (orbi-)bundles on Z d+2 . Specifically, for any U(1) symmetry acting on M p , the latter can be fibered over B d+2−p by gauging this U(1) with a connection on a line bundle J over B d+2−p , denoted A J . When M 2m is a toric orbifold this consists in replacing the occurrences of each term dφ i with dφ One can then use these c 1 (L a ) as a basis to parameterise the characteristic classes appearing in the anomaly polynomials.
i + A J i , where J i are auxiliary line bundles with c 1 (J i ) = [F J i ] /2π ∈ H 2 (B d+2−p , Z) and F J i = dA J i .
D3 branes on the spindle
For 4-dimensional SCFTs the anomaly polynomial is a 6-form defined on an six-dimensional orbifold Z 6 that we take to be the total space of a Σ fibration over an auxiliary space B 4 ,
Σ ֒→ Z 6 → B 4 . (4.3)
Neglecting terms that are sub-leading in the large N limit the six-form anomaly polynomial is given by
A 4d = 1 6 I,J,K c IJK c 1 (F I )c 1 (F J )c 1 (F K ) ,(4.4)
where c IJK are the cubic 't Hooft anomaly coefficients c IJK =Tr(F I F J F K ) and F I denote the generators of global U(1) symmetries. The c 1 (F I ) are formal first Chern classes associated to these U(1) I symmetries that can be decomposed as 17
c 1 (F I ) = ∆ I c 1 (F 2d R ) − p a I c 1 (L a ) , c 1 (L a ) = c 1 (L a ) + 2πµ a c 1 (J ) ,(4.5)
where p a I ∈ Z are "fluxes stuck at the fixed points", L a are the line bundles discussed in section 3.1 and c 1 (F 2d R ) ∈ H 2 (B 4 , Z) is the first Chern class of the 2d R-symmetry line bundle.
The anomaly polynomial of the two-dimensional theory is obtained integrating the anomaly polynomial of the four-dimensional theory on the spindle,
A 2d = Σ A 4d ,(4.6)
and is a four-form on B 4 . Substituting (4.5) in (4.4) it is immediate to see that this is exactly equivalent to the equivariant integral on the spindle
F (∆ I , ǫ) = 1 6 I,J,K c IJK Σ (∆ I − p a I c T 1 (L a ))(∆ J − p a J c T 1 (L a ))(∆ K − p a K c T 1 (L a )) ,(4.7)
where we have expressed the resulting four-form as a function of the variables ∆ I and the equivariant paremeter ǫ. This can also be understood as allowing the trial 2d R-symmetry to mix with the U(1) symmetry of the spindle, formally setting c 1 (J ) = ǫc 1 (F 2d R ) (thus "undoing" the replacement (4.2)), and extracting the coefficient of the four-form A 2d = F (∆ I , ǫ)c 1 (F 2d R ) 2 . Expanding the integrand we obtain 18
F (∆ I , ǫ) = 1 6 I,J,K c IJK p a I −3∆ J ∆ K D a + 3p b J ∆ K D ab − p b J p c K D abc ,(4.8)
with the equivariant intersection numbers
D a = Σ c T 1 (L a ) , D ab = Σ c T 1 (L a )c T 1 (L b ) , D abc = Σ c T 1 (L a )c T 1 (L b )c T 1 (L c ) ,(4.9)
17 To avoid clumsiness in the formulas, we sistematically use the Einstein's notation for the indices a, b, . . . in this section. 18 We use that c IJK are totally symmetric to write (4.8).
whose non-zero values are given in (3.17). Alternatively, the same result can be obtained by evaluating the equivariant integral (4.7) using the fixed point theorem (2.37), explaining the observation made in [31]. Specifically, we get
F (∆ I , ǫ) = F 1 (∆ I , ǫ) + F 2 (∆ I , ǫ) ,(4.10)
where, in terms of the trial 4d central charge
a 4d (∆ I ) = 9 32 I,J,K c IJK ∆ I ∆ J ∆ K , (4.11) we have F a (∆ I , ǫ) = (−1) a 16 27ǫ a 4d (∆ a I ) , ∆ a I ≡ ∆ I − (−1) a p a I ǫ n a .
(4.12)
Notice that the terms singular in ǫ (cubic in the ∆ I ) cancels out, as expected.
The expression above depends on the p a I , but it can be rewritten in terms of "physical fluxes" n I , defined as the integrals of the flavour line bundles [71]
c 1 (E I ) ≡ −p a I c 1 (L a ) ,(4.13)
namely
n I ≡ − Σ c 1 (E I ) = p 1 I n 1 + p 2 I n 2 .
(4.14)
Recall that in general supersymmetry can be preserved on the spindle by coupling with a background R-symmetry gauge field with first Chern class
c 1 (E R ) = −σ 1 c 1 (L 1 ) − σ 2 c 1 (L 2 ) ,(4.15)
where σ a = ±1, corresponding to either twist or anti-twist [37]. Since the supercharge should couple to 2c 1 (F 2d R ) + c 1 (E R ) we must have
I ∆ I c 1 (F 2d R ) + c 1 (E I ) = 2c 1 (F 2d R ) + c 1 (E R ) ,(4.16)
implying the constraints
I ∆ I = 2 , I p a I = σ a .(4.17)
As a consequence, the physical fluxes obey
I n I = σ 1 n 1 + σ 2 n 2 (4.18)
and we can introduce new variables
ϕ I ≡ ∆ I + 1 2 ( p 1 I n 1 − p 2 I n 2 )ǫ , I ϕ I − 1 2 σ 1 n 1 − σ 2 n 2 ǫ = 2 .(4.19)
In terms of these variables the gravitational blocks depend only on the physical fluxes, namely F a (ϕ I , ǫ) = (−1) a 16 27ǫ a 4d (ϕ I − (−1) a n I 2 ǫ) , (4.20)
yielding
F (ϕ I , ǫ) = − 1 24 I,J,K c IJK n I 2ϕ J ϕ K + n J n K ǫ 2 ,(4.21)
in agreement with [36]. Notice that (4.21) is much simpler than the analogous expression in eq. (5.9) of [31] and is therefore the most convenient form to be used in the extremization. The reason is that in the variables ϕ I the expression is manifestly independent of the redundant parameters r I 0 used in the construction in [31], as we now explain. In this reference one starts from the background gauge fields A I = ρ I (y)dφ inherited from the supergravity solution, where y, φ are coordinates on the spindle. This is lifted to Z 6 by the gauging procedure dϕ → dϕ + A J , which leads to the connection one-forms A I = ρ I (y)(dφ + A J ), where dA J = 2πc 1 (J ), so that
dA I = ρ ′ I (y)(dφ + A J ) + 2πρ I (y)c 1 (J ) . (4.22)
Comparing with (4.5) leads us to identify ρ I (y) precisely with our moment maps, namely
ρ I (y) = −2πp a I µ a .
(4.23)
In our notation 19 , the functions ρ I (y) satisfy [31] ρ I (y 1 ) = 1 2
n I − 1 4 r I 0 1 n 1 + 1 n 2 , ρ I (y 2 ) = − 1 2 n I − 1 4 r I 0 1 n 1 + 1 n 2 ,(4.24)
where I r I 0 = 2, while using (3.12) we have
2πp a I µ a | y b = (−1) b p b I n b ,(4.25)
implying that we must identify
r I 0 2 1 n 1 + 1 n 2 = − p 1 I n 1 + p 2 I n 2 . (4.26)
Summing over I we then get
− σ 1 n 1 + σ 2 n 2 = I − p 1 I n 1 + p 2 I n 2 = 1 n 1 + 1 n 2 ,(4.27)
consistently with σ 1 = −1, σ 2 = 1, corresponding to the anti-twist [31]. This shows that the constants r I 0 parameterise the redundancy in the relation between the physical fluxes n I and the p a I . However, notice that (4.26) implies that r I 0 ∈ Q.
M5 branes on the spindle
The anomaly polynomial of M5 brane 6-dimensional SCFTs is an eight-form defined on an eight-dimensional orbifold Z 8 that we take to be the total space of a Σ fibration over an auxiliary space B 6 , Σ ֒→ Z 8 → B 6 .
(4.28)
Neglecting terms that are sub-leading in the large N limit, the eight-form anomaly polynomial is given by
A 6d = N 3 24 p 2 (R) = N 3 24 c 1 (F 1 ) 2 c 1 (F 2 ) 2 ,(4.29)
where p 2 (R) is the second Pontryagin class of the SO(5) R normal bundle to the M5brane in the eleven-dimensional spacetime. The F I , I = 1, 2 are the generators of U(1) 1 × U(1) 2 ⊂ SO(5) R global symmetries that are preserved when the M5-brane is compactified on the spindle [33]. The c 1 (F I ) are the first Chern classes of the line bundles on Z 8 associated to these U(1) I symmetries and are decomposed as
c 1 (F I ) = ∆ I c 1 (F 4d R ) − p a I c 1 (L a ) . (4.30)
The anomaly polynomial of the four-dimensional theory is obtained integrating the anomaly polynomial the six-dimensional theory on the spindle,
A 4d = Σ A 6d ,(4.31)
and is a six-form on B 6 . We now substitute (4.30) in (4.29), where c 1 (L a ) is as in eq. (4.5) and c 1 (F 4d R ) ∈ H 2 (B 6 , Z) is the first Chern class of the 4d R-symmetry line bundle. Formally setting c 1 (J ) = ǫc 1 (F 4d R ) and extracting the coefficient of the six-form A 4d = F (∆ I , ǫ)c 1 (F 4d R ) 3 leads to the equivariant integral
F (∆ I , ǫ) = N 3 24 Σ (∆ 1 − p a 1 c T 1 (L a )) 2 (∆ 2 − p a 2 c T 1 (L a )) 2 ,(4.32)
which may be expanded as
F (∆ I , ǫ) = N 3 24 − 2∆ 1 ∆ 2 (∆ 1 p a 2 + ∆ 2 p a 1 )D a + (∆ 2 1 p a 2 p b 2 + ∆ 2 2 p a 1 p b 1 + 4∆ 1 ∆ 2 p a 1 p b 2 )D ab − 2p a 1 (∆ 1 p b 2 + ∆ 2 p b 1 )p c 2 D abc + p a 1 p b 1 p c 2 p d 2 D abcd ,(4.33)
where the non-zero equivariant intersection numbers are given in (3.17). Alternatively, the same result can be obtained evaluating the equivariant integral (4.32) using the fixed point theorem (2.37). Specifically, we get
F (∆ I , ǫ) = F 1 (∆ I , ǫ) + F 2 (∆ I , ǫ) , (4.34)
where, in terms of the "trial 6d central charge" 20
a 6d (∆ I ) ≡ N 3 24 ∆ 2 1 ∆ 2 2 , (4.35) we have F a (∆ I , ǫ) = (−1) a 1 ǫ a 6d (∆ a I ) , ∆ a I ≡ ∆ I − (−1) a p a I ǫ n a . (4.36)
Notice that the terms singular in ǫ (quartic in the ∆ I ) cancel out, as expected. The rest of the discussion proceeds exactly as for the D3 branes in the previous subsection. In particular, supersymmetry implies that the constraints (4.17) hold and the physical fluxes n I obey (4.18). In terms of the variables ϕ I (4.19) the fixed point contributions read F a (ϕ I , ǫ) = (−1) a 1 ǫ a 6d (ϕ I − (−1) a n I 2 ǫ) , (4.37) which give the very simple expression
F (ϕ I , ǫ) = − N 3 48 (ϕ 1 n 2 + ϕ 2 n 1 )(4ϕ 1 ϕ 2 + n 1 n 2 ǫ 2 ) , (4.38)
in agreement with [36]. Let us briefly compare the results above with the corresponding calculation in [33]. Again, the functions ρ I (y) used in [33] should be identified with our moment maps as ρ I (y) = −2πp a I µ a , and the resulting p a I are easily obtained from the expressions in eq. (A.2) of [33]. However, due to the supergravity coordinates and the specific gauge used in [33], these are quadratic irrational functions of the spindle parameters and the physical fluxes and do not depend on any free parameter. One can check that the constraint (4.17) on the p a I is respected, with σ 1 = −1 and σ 2 = −1, consistently with the fact that the discussion in [33] concerns the twist case 21 . Notice that the form of (4.38) is much simpler than the corresponding function computed [33]. In particular, in (4.38) there are no linear and cubic terms in ǫ.
M5 branes on 4d orbifolds
We now consider M5-branes compactified on a four-dimensional toric orbifold 22 as discussed in section 3.2. The eight-form anomaly polynomial (4.29) is defined on an eightdimensional orbifold Z 8 that we take to be the total space of a M 4 fibration over an auxiliary space B 4 , 39) and integrating it on M 4 gives a four-form, that is the anomaly polynomial of a 2d SCFT:
M 4 ֒→ Z 8 → B 4 ,(4.A 2d = M 4 A 6d . (4.40)
The c 1 (F I ) now decompose as
c 1 (F I ) = ∆ I c 1 (F 2d R ) − p a I c 1 (L a ) , c 1 (L a ) = c 1 (L a ) + 2πµ i a c 1 (J i ) , (4.41)
where J 1 , J 2 are the auxiliary line bundles with c 1 (J i ) ∈ H 2 (Z 4 , Z). The L a , a = 1, . . . , d are the line bundles discussed in section 3.2 and c 1 (F 2d R ) ∈ H 2 (Z 4 , Z) is the first Chern class of the 2d R-symmetry line bundle. 21 We identify their P i with our −n I . 22 The M5 brane anomaly polynomial in an example of this type of compactification was studied in [44].
Substituting (4.41) in (4.29) and setting c 1 (J i ) = ǫ i c 1 (F 2d R ), leads to the equivariant integral
F (∆ I , ǫ) = N 3 24 M 4 (∆ 1 − p a 1 c T 1 (L a )) 2 (∆ 2 − p a 2 c T 1 (L a )) 2 , (4.42)
wihch has exactly the same form of (4.32)! Indeed, expanding it, this gives again (4.33), where now D a = 0 and the non-zero equivariant intersection numbers can be read off from (3.29), (3.31), and (3.33), respectively. On the other hand, employing the fixed point theorem, we can write (4.42) as a sum of contributions over the fixed points, namely 23 (4.43) in agreement with the formula conjectured in [45]. In terms of the gravitational blocks we have However, recall that this relation cannot be inverted and therefore it is not manifest that (4.43) depends only on the physical fluxes. That this is true was proved in [45], as we now recall, translating the arguments to our current notation 24 where λ I ∈ R 2 are arbitrary two-dimensional constant vectors 25 , leaves the physical fluxes q a I invariant. In order to discuss how this transformation affects (4.43) we will assume 26 , 23 Notice that there is no summation on a in the expressions inside the parenthesis. 24 Recall that in this paper we denote the non-primitive, "long", vectors of the fan by v a , the corresponding divisors as D a , and their intersection matrix as D ab . While in [45] these were denoted byv a , D a , andD a,b , respectively. Similarly, (p a I ) here = (m a p a I ) there and (q a I ) here = (q a I /m a ) there . 25 Notice that these λ I have nothing to do with the Kähler parameters λ a appearing elsewhere. 26 This holds in all known examples of supergravity solution corresponding to M5 branes wrapped on four-dimensional toric orbifolds [44][45][46][47]. It should be possible to prove this, along the lines of the analysis in [37].
F (∆ I , ǫ) = N 3 24 n a=1 1 d a,a+1 ǫ a 1 ǫ a 2 ∆ 1 − p a 1 ǫ a 1 − p a+1 1 ǫ a 2 2 ∆ 2 − p a 2 ǫ a 1 − p a+1 2 ǫ a 2 2 ,F (∆ I , ǫ) = n a=1 F a (∆ I , ǫ) , F a (∆ I , ǫ) = 1 d a,a+1 ǫ a 1 ǫ a 2 a 6d (∆ a I ) , (4.44) with ∆ a I = ∆ 1 − p a I ǫ a 1 − p a+1 I ǫ a 2 .
following [45], that supersymmetry requires the background R-symmetry gauge field to have first Chern class (4.49) where σ a = ±1, generalizing the twist and anti-twist for the spindle [37]. This corresponds to the following constraints
c 1 (E R ) = − d a=1 σ a c 1 (L a ) ,∆ 1 + ∆ 2 = 2 + det(W, ǫ) , p a 1 + p a 2 = σ a + det(W, v a ) ,(4.50)
where ǫ = (ǫ 1 , ǫ 2 ) and W ∈ R 2 is a two-dimensional constant vector transforming as
W →W = W + λ 1 + λ 2 (4.51)
under (4.48). Performing this transformation in (4.43) implies that the variables ∆ a I change as
∆ a I → ∆ I −p a I ǫ a 1 −p a+1 I ǫ a 2 = ∆ I − p a I ǫ a 1 − p a+1 I ǫ a 2 − det(λ I , ǫ) , (4.52) thus F (∆ I , ǫ) → F (∆ I , ǫ) ,∆ I ≡ ∆ I − det(λ I , ǫ) , (4.53)
where∆ I obey the same constraint as the ∆ I , namelỹ
∆ 1 +∆ 2 = 2 + det(W , ǫ) − det(λ 1 , ǫ) − det(λ 2 , ǫ) = 2 + det(W, ǫ) . (4.54)
This is exactly the same function as the initial one, completing the proof.
Non-compact Calabi-Yau singularities
The geometry of many type II and M theory solutions arising from branes can be modelled on singular Calabi-Yau cones. In this section we consider the equivariant volume of partial resolutions of non-compact (asymptotically conical) Calabi-Yau singularities and provide many applications to holography. We assume that X is the (partial) resolution of a Calabi-Yau conical singularity, as in the examples in 3.3. This means that the convex rational polyhedron
P = {l a (y) = v a i y i − λ a ≥ 0 a = 1, . . . , d} , (5.3)
is asymptotically a cone. More precisely, we assume that the large y i approximation of the polyhedron
P ′ = {l a k (y) = v a k i y i ≥ 0 k = 1, . . . , d ′ } , (5.4)
is a non-compact cone with a single vertex in y = 0 and d ′ facets. Notice that only the non-compact facets of P are relevant for the asymptotic behaviour and, in general d ′ ≤ d.
The vectors v a k , k = 1, . . . , d ′ , define the fan of the singular cone X sing of which X is a resolution. The fan of X sing consists of a single cone and it corresponds to a singularity which is, in general, not of orbifold type. The resolution replaces the singularity of P ′ with compact cycles of real dimension 2m − 2, corresponding to the bounded facets of P, as well as sub-cycles of smaller dimensions. 27 We assume that P has only orbifold singularities. From the dual point of view, the fan of X is the union of m-dimensional cones, each corresponding to a vertex of P and therefore to a fixed point. Each cone is specified by a choice of m adjacent vectors (v a 1 , . . . , v am ). We use the notation A = (a 1 , . . . , a m ) to identify the set of such cones and we assume that there are n of them.
Using the results of section 2.3, the equivariant volume is given by
V(λ a , ǫ i ) = A=(a i ,
GK geometry and the GMS master volume
In this section we explore some general properties of the equivariant volume for noncompact Calabi-Yau singularities and its relation with other volumes appearing in the literature in similar contexts, like the the Sasakian volume of [17,18] and the master volume introduced in [60]. Consider the formal expansion
V(λ a , ǫ i ) = ∞ k=0 V (k) (λ a , ǫ i ) , (5.6)
where V (k) (λ a , ǫ i ) is homogeneous of degree k in λ a . We start by observing some general properties of the homogeneous quantities V (k) (λ a , ǫ i ). From (2.35), by matching degrees in λ a , we have
d a=1 v a i ∂V (k) ∂λ a = −ǫ i V (k−1) . (5.7)
By considering the case i = 1 and the fact that, for Calabi-Yaus, v a 1 = 1, we also have d a=1
∂V (k) ∂λ a = −ǫ 1 V (k−1) .(∂V (k) ∂λ a (ǫ 1 v a i − ǫ i ) = 0 . (5.9)
It follows from this equations that V (k) are invariant under the gauge transformation
λ a → λ a + m i=1 γ i (ǫ 1 v a i − ǫ i ) , (5.10)
which allows to eliminate m − 1 28 unphysical λ a that do not correspond to non-trivial co-homology classes.
In the compact case, the terms with k < m in the expansion (5.6) are identically zero. Indeed they can be written as in (2.31) and they are integrals of forms of degree less than 2m. As we already discussed in section 2.5, in the non-compact case this is not true and also the terms with k < m are not vanishing. They are rational functions of ǫ encoding some interesting geometrical information that we will now elucidate.
It is important to observe that V (k) (λ a , ǫ i ) with k < m only depends on the d ′ Kähler parameters λ a associated with the singular fan and the non-compact directions. The Kähler parameter λ a associated with the compact directions start contributing at order m in the expansion. This can be understood as follows. The compact Kähler parameters are associated with the bounded facets of the original polytope. We can always modify the polytope by adding new facets and making it compact. We can also assume that the bounded facets of the original polytope are not modified by this operation. The compact Kähler parameter enters in the fixed point formula only in the terms associated with the vertices of the bounded facets and, therefore, their contribution to V for the old and new polytope is the same. Since the new polytope is compact, this contribution must start at order m.
The term of zero degree is simply
V(0, ǫ i ) = 1 (2π) m X e −H ω m m! = P ′ e −ǫ i y i dy 1 . . . dy m (5.11)
and it computes the equivariant volume of the singular Calabi-Yau cone X sing , or equivalently, the regularized volume of the polyhedron P ′ . At λ a = 0, the metric of X sing becomes conical ds 2 (X) = dr 2 + r 2 ds 2 (Y ) .
(5.12)
A choice of Kähler metric exhibits X sing as a cone over a Sasakian manifold Y of real dimension 2m − 1. As shows in [17,18] there is family of Sasakian metrics parameterized by the Reeb vector ξ = ǫ i ∂ φ i . They correspond to different choices of the radial coordinate , (5.14) was derived indeed in [18]. The term of degree m − 1 is also interesting. It coincides, up to a numerical factor, with the master volume introduced in [60]. To define the master volume, one foliates the 2m − 1 base Y as ds 2 (Y ) = η 2 + ds 2 2m−2 ,
(5.15)
where η is the dual one form to ξ (i ξ η = 1) and the metric ds 2 2m−2 is conformally Kähler with Kähler form ω B . We turn on the d ′ Kähler parameters λ a associated with the noncompact facets of the polyhedron by letting ω B vary [60] [
ω B ] 2π = − a λ a c a ,(5.16)
where c a are the co-homology classes that uplift to c 1 (L a ) on the Calabi-Yau X sing . Notice that [dη] = 2π a c a [60]. The master volume is then defined as
V(λ a , ǫ i ) = Y η∧ ω m−1 B (m − 1)! = (−2π) m−1 (m − 1)! a 1 ,...,a m−1 λ a 1 . . . λ a m−1 Y ηc a 1 ∧. . .∧c a m−1 . (5.17)
For example, for m = 3, the master volume reads [60] V(λ a , ǫ i ) = (2π) 3
2! d ′ a=1 λ a (λ a−1 det(v a , v a+1 , ǫ) − λ a det(v a−1 , v a+1 , ǫ) + λ a+1 det(v a−1 , v a , ǫ)) det(v a−1 , v a , ǫ) det(v a , v a+1 , ǫ) ,(5.
18) where the vectors v a with a = 1, . . . , d ′ runs over the fan of the singular cone X sing .
Notice that, in the original approach of [60], the master volume is defined by considering conical non-Kähler metrics on X. In our approach instead, we give up the conical condition on X and we use a Kähler metric. Despite the difference of approaches, we can recover the master volume as the term of degree m − 1 in the equivariant volume
V(λ a , ǫ i ) = (2π) m V (m−1) (λ a , ǫ i ) ,(5.19)
where we stress that
V (m−1) = 1 (m − 1)! d ′ a 1 ,...,a m−1 =1 ∂ m−1 V ∂λ a 1 . . . ∂λ a m−1 λ=0 λ a 1 . . . λ a m−1 , (5.20)
is a function only of the Kähler parameters λ a , a = 1, . . . , d ′ , associated with the fan of the singular cone. The identity (5.19) can be checked by direct computation. The case m = 3 is explicitly worked out in section B. By differentiating m − 1 times equation (2.35)
for i = 1 d a 1 =1 ∂V ∂λ a 1 = −ǫ 1 V ,(5.21)
setting λ a = 0 and multiplying by λ a 2 . . . λ am , we can also rewrite (5.19) as
V(λ a , ǫ i ) = − (2π) m ǫ 1 (m − 1)! d ′ a 1 ,...,am=1 ∂ m V ∂λ a 1 . . . ∂λ am λ=0 λ a 1 . . . λ a m−1 , (5.22)
where the analogy with (5.17) is manifest. As observed in [60], the master volume reduces to the Sasakian volume when all the Kähler parameters are equal. We can understand this statement in our formalism as follows
V (0) (ǫ i ) = (−1) m−1 ǫ m−1 1 d ′ a 1 ,...,a m−1 =1 ∂ n V ∂λ a 1 . . . ∂λ a m−1 λ=0 = (m − 1)! (−ǫ 1 ) m−1 (2π) m V(λ a = 1, ǫ i ) , (5.23)
where we used (2.36) with all i k = 1 and λ a = 0 and equation (5.19).
The master volume plays an important role in supergravity solutions based on GK geometry [25,60]. In this context the so-called supersymmetric action
S SUSY = − d a=1 ∂V ∂λ a ,(5.24)
plays an even more important role. It is the object that needs to be extremized in order to find a solution of the equation of motions. Using (5.8) we find
S SUSY = ǫ 1 (2π) m V (m−2) (λ a , ǫ i ) . (5.25)
Gravitational blocks from the equivariant volume
In this section we use the equivariant volume to study type II and M theory branes compactified on a spindle. Supersymmetry is preserved either with a topological twist or an antitwist [37]. From the field theory point of view, we consider the case where a superconformal field theory compactified on a spindle flows in the IR to a superconformal quantum mechanics or to a two-dimensional superconformal field theory. The corresponding supergravity solutions describing the IR limit have a geometry AdS 2 × Z or AdS 3 × Z, respectively, and can be interpreted as the near-horizon geometry of black holes or black strings. The local geometry of such brane systems can be modelled in terms of CY m-folds. We can describe many D-branes and M-branes configurations in terms of a formal fibration
CY m ֒→ CY m+1 → Σ ,(5.26)
where the CY m encode the geometry and the information about the original higherdimensional CFT. It is often useful to define an off-shell free energy F (∆ I , ǫ), or extremal function 29 , depending on chemical potentials ∆ I for the continuous global symmetries of the higherdimensional CFT and an equivariant parameter ǫ for the rotation along the spindle, whose extremization gives the entropy of the black hole or the central charge of the two-dimensional CFT. Extremal functions of known black holes and black strings can be expressed in terms of gravitational blocks [48]. The general form of the extremal functions for branes compactified on a spindle in this context was proposed in [36]. The characteristic form of the off-shell free energy F is given by a gluing
F (∆ I , ǫ) = 1 ǫ F m (∆ + I ) ± F m (∆ − I ) (5.27)
where the block F m encodes some universal properties of the higher-dimensional SCFT and it is related to the geometry of CY m , and the gluing depends on the details of the fibration (5.26). For D3 and M5 branes, F m is related to the central charge of the higher-dimensional CFT. For other types of branes, F m is the sphere-free energy of the higher-dimensional CFT at large N [36,48].
In this section we focus on the gravitational interpretation of (5.27). In the case of M2 and D3 branes, the off-shell free energy can be expressed in terms of the supersymmetric action in the formalism of GK geometry and the decomposition (5.27) was explicitly proved in [49] from the gravitational point of view. We will show how to recover and reinterpret this results in terms of the equivariant volume. In the case of D2, D4 and M5 brane systems, we cannot apply the GK formalism but we will propose a possible and intriguing extension.
The geometry of CY m fibred over the spindle
As a preliminary, in this section we derive a general expression for the equivariant volume of the fibration (5.26). Consider a CY cone m-fold defined by m-dimensional vectors v a , with v a 1 = 1, and the (m + 1)-dimensional toric geometry specified by the fan [49] V a = (0, v a ) ,
V + = (n + , w + ) , V − = (−σn − , w − ) ,(5.28)
where σ = ±1 and we use capital letters for m + 1-dimensional vectors. This is a fibration over a spindle WP [n + ,n − ] specified by the vectors w ± . Notice that the m + 1-dimensional geometry is still a Calabi-Yau if the first component of the vectors w ± is one, w ±1 = 1, which we will assume. As shown in [49], this geometry explicitly appears in the gravity solutions of M2 and D3-branes compactified on a spindle with
w + = (1, −a + p) , w − = (1, −σa − p) ,(5.29)
where a − n + + a + n − = 1 and σ = ±1 and p is a (m − 1)-dimensional vector. In this context, supersymmetry is preserved with a twist (σ = 1) or an anti-twist (σ = −1). Notice also that, in the anti-twist case, the toric diagram is not convex and it does not strictly define a toric geometry. We will nevertheless proceed also in this case, considering it as an extrapolation from the twist case. The fixed point formula is
V CY m+1 (λ a , ǫ i ) = (a 1 ,...,a m+1 )∈A e −ǫ (m+1) ·( m+1 i=1 λa i U a i )/da 1 ,...,a m+1 d a 1 ,...,a m+1 m+1 i=1 ( ǫ (m+1) ·U a i da 1 ,...,a m+1 ) ,(5.30)
where A runs over the polyhedral cones of a resolution and ǫ (m+1) = (ǫ 0 , ǫ 1 , . . . , ǫ m ). For ease of notation, we drop the label A from the normal vectors U a . We can choose a resolution for the CY m by subdividing the m-dimensional fan. We then obtain a resolution of the CY m+1 by considering polyhedra where V + and V − are added to the m-dimensional cones (V a 1 , . . . , V am ). Let's assume also d a 1 ,...,am = 1 for the CY m . The inward normals to the tetrahedra are 1, 0, . . . , 0) , (5.31) where u a i are the inward normals to the CY m cones, we identified a m+1 = ± and σ = −1 is obtained by analytic continuation.
(V a 1 , . . . , V am , V + ) → U a i = (−u a i · w + , n + u a i ) , U + = (1, 0, . . . , 0) , (V a 1 , . . . , V am , V − ) → U a i = −(−u a i · w − , −σn − u a i ) , U − = (−
The contribution of the tetrahedra with vertex V + is
(a 1 ,...,a 3 )∈A e −(ǫ−ǫ 0 w + /n + )·( m i=1 λa i u a i )−ǫ 0 λ + /n + ǫ 0 m i=1 (ǫ − ǫ 0 w + n + ) · u a i = 1 ǫ 0 V CYm ǫ − ǫ 0 w + n + , λ a + ǫ 0 n + ǫ 1 − ǫ 0 λ + ,(5.32)
where ǫ = (ǫ 1 , . . . , ǫ m ), and we used d a 1 ,...,a m+1 = n + and the identity among normals i u a i = (1, 0, . . . , 0). 30 The contribution of the tetrahedra of V − is obtained by replacing w + with w − , n + with −σn − and λ 4 ≡ λ + with a new variable λ − . The signs in U a i are compensated by d a 1 ,a 2 ,a 3 ,a 4 = −(−σn − ) (we work for positive σ and analitically continue the result), which also brings an overall extra sign.
The final result is
V CY m+1 = 1 ǫ 0 V CYm ǫ + , λ + − 1 ǫ 0 V CYm ǫ − , λ − (5.33) where ǫ + = ǫ − ǫ 0 w + n + , λ + a = λ a + ǫ 0 n + ǫ 1 − ǫ 0 λ + , ǫ − = ǫ + ǫ 0 w − σn − , λ − a = λ a − ǫ 0 σn − ǫ 1 + ǫ 0 λ − . (5.34)
where we see that V CY m+1 can be obtained by gluing two copies of V CYm .
The case of D3 branes
The case of D3 and M2 branes can be described in terms of GK geometry [60]. We consider a system of branes sitting at the tip of a conical toric Calabi-Yau three-fold singularity (CY m ) with Sasaki-Einstein base Y 2m−1 and further compactified on a spindle. 30 Consider for simplicity m = 3. Assuming an order such that det
(v 1 , v 2 , v 3 ) = 1, i u ai = v 2 ∧ v 3 + v 3 ∧ v 1 + v 1 ∧ v 2 .
Let e i be the canonical basis in R 3 . Then i u ai · e 2,3 = 0. For example i u ai · e 2 = det(e 2 , v 2 , v 3 ) + det(e 2 , v 3 , v 1 ) + det(
e 2 , v 1 , v 2 ) = v 3 3 − v 2 3 + v 1 3 − v 3 3 + v 2 3 − v 1 3 = 0 where we used v a 1 = 1.
On the other hand, i u ai · e 1 = det(e 1 , v 2 , v 3 ) + det(e 1 , v 3 , v 1 ) + det(e 1 , v 1 , v 2 ) = 1 since it is the sum of the areas of three triangular cones obtained by triangulating (v 1 , v 2 , v 3 ) with the insertion of e 1 (it lies in the same plane). The sum of the three areas is the area of the original cone d 1,2,3 = 1.
The dual supergravity solution has an AdS 2 × Z 9 and AdS 3 × Z 7 near horizon geometry, for M2 and D3 branes respectively. The internal manifolds Z 2m+1 are obtained by fibering the Sasaki-Einstein base Y 2m−1 over the spindle. Supersymmetry requires that the cone C(Z 2m+1 ) is topologically a CY m+1 , although the supergravity metric is not Ricci-flat.
The supergravity solution can be described in terms of a family of backgrounds that depends on the equivariant parameters ǫ i and the Kähler parameters λ a of the CY m+1 [60]. The conditions for supersymmetry can be compactly written as
d i=1 ∂S SUSY ∂λ a = 0 , ν m M a = − ∂S SUSY ∂λ a ,(5.35)
where S SUSY is the supersymmetric action (5.24) of the CY m+1 Let us now specialize to the case of D3-branes where m = 3. The solution AdS 3 ×Z 7 is dual to a two-dimensional CFT. In this context, the Killing vector ξ = m i=1 ǫ i ∂ φ i is interpreted as the R-symmetry of the dual CFT and the extremization of the supersymmetric action is the geometrical dual of c-extremization [72] in two-dimensional CFTs [25,60].
S SUSY = ǫ 1 (2π) m+1 V (m−1) CY m+1 (λ a , ǫ i ) ,(5.
More explicitly, the on-shell value of the supersymmetric action, up to a normalization coefficient, is the exact central charge of the two-dimensional CFT, while the off-shell value of S SUSY as a function of ǫ i after imposing (5.35) equal the charge c as a function of a trial R-symmetry. This has been proved explicitly in [73] for the case of a compactification on S 2 , or on a Riemann surface, and in [49] for a spindle. More precisely, S SUSY as a function of ǫ i coincides with the trial c-function after the baryonic directions in the trial R-symmetry have been extremized. 31 It is also shown in [49] how to write the supersymmetric action as the gluing of two gravitational blocks [48]. We now recover this result in our formalism.
We can extract the supersymmetric action from the gluing formula (5.33)
V CY 4 = 1 ǫ 0 V CY 3 ǫ + , λ + − 1 ǫ 0 V CY 3 ǫ − , λ − ,(5.38)
where
ǫ + = ǫ − ǫ 0 w + n + , λ + a = λ a + ǫ 0 n + ǫ 1 − ǫ 0 λ + , ǫ − = ǫ + ǫ 0 w − σn − , λ − a = λ a − ǫ 0 σn − ǫ 1 + ǫ 0 λ − . (5.39)
Notice that the redefinitions are homogeneous in λ. This means that the previous identity can be easily truncated at a given order in λ. Taking the quadratic piece and using (5.19) and (5.25)
S SUSY | CY 4 = 2π ǫ 1 ǫ 0 V CY 3 (ǫ + , λ + ) − V CY 3 (ǫ − , λ − ) ,(5.40)
since the supersymmetric action is the quadratic part of V CY 4 and the 3d master volume is the quadratic part of V CY 3 . We thus recover the factorization in gravitational blocks derived in [49] (see for example (7.27) and (7.31) in that paper). We can be more explicit in the special case of S 5 and the dual N = 4 SYM. This example is discussed in [49] in a different gauge. The vectors are 41) and the equivariant and master volume read where the constraint (5.37) further requires
v 1 = (1, 0, 0) , v 2 = (1, 1, 0) , v 3 = (1, 0, 1) ,(5.V C 3 = e −(ǫ 1 −ǫ 2 −ǫ 3 )λ 1 −ǫ 2 λ 2 −ǫ 3 λ 3 (ǫ 1 − ǫ 2 − ǫ 3 )ǫ 2 ǫ 3 , V C 3 = (2π) 3 ((ǫ 1 − ǫ 2 − ǫ 3 )λ 1 + ǫ 2 λ 2 + ǫ 3 λ 3 ) 2 2(ǫ 1 − ǫ 2 − ǫ 3 )ǫ 2 ǫ 3 .3 i=a n a = − 1 n + − 1 σn − , n 2 = − w +2 n + − w −2 σn − , n 3 = − w +3 n + − w −3 σn − . (5.44)
We will be cavalier about the normalization of the fluxes in the following, but it is clear than N should be proportional to the number of colors of the dual theory. We can use the gauge freedom (5.10) to set λ 1 = λ 2 = λ 3 = 0. The master volume evaluated at the two poles in this gauge reads
V C 3 (ǫ ± , λ ± ) = (2π) 3 ǫ 2 0 λ 2 ± 2n 2 ± (ǫ ± 1 − ǫ ± 2 − ǫ ± 3 )ǫ ± 2 ǫ ± 3 ,(5.45)
and we can find λ ± by solving the a = ± components of equation (5.35) (recall that only two of them are independent)
− ν 3 N = (2π) 4 ǫ 1 ǫ 0 λ + n + (ǫ + 1 − ǫ + 2 − ǫ + 3 )ǫ + 2 ǫ + 3 , −ν 3 σN = − (2π) 4 ǫ 1 ǫ 0 λ − n − (ǫ − 1 − ǫ − 2 − ǫ − 3 )ǫ − 2 ǫ − 3 .
(5.46) We then see that ∆ i parameterize the R-charges of the three chiral fields of N = 4 SYM. The supersymmetric action (5.40) is then obtained by
Defining ǫ 0 = ǫ , ǫ 1 = ∆ 1 + ∆ 2 + ∆ 3 , ǫ 2 = ∆ 2 , ǫ 3 = ∆ 3 ,(5.S SUSY | CY 4 = F m (∆ + 1 , ∆ + 2 , ∆ + 3 ) ǫ − F m (∆ − 1 , ∆ − 2 , ∆ − 3 ) ǫ , (5.49) where F m (∆ 1 , ∆ 2 , ∆ 3 ) is proportional to the block for N = 4 SYM [48] F m (∆ 1 , ∆ 2 , ∆ 3 ) = ν 2 3 64π 4 N 2 ∆ 1 ∆ 2 ∆ 3 ,(5.50)
and
∆ + 1 = ∆ 1 − ǫ n + (1 − 3 i=2 w +i ) , ∆ + 2 = ∆ 2 − ǫ n + w +2 , ∆ + 3 = ∆ 3 − ǫ n + w +3 , ∆ − 1 = ∆ 1 + ǫ σn − (1 − 3 i=2 w −i ) , ∆ − 2 = ∆ 2 + ǫ σn − w −2 , ∆ − 3 = ∆ 3 + ǫ σn − w −3 . (5.51)
We can see that this result reproduces the anomaly polynomial computed in section 4. In doing so, we need to be careful that there is some ambiguity in identifying the chemical potentials associated with the R-symmetry in (5.47). Any redefinition ∆ i → ∆ i + δ i ǫ with 3 i=1 δ i = 0 would respect the constraint (5.48) and would lead to a potentially good choice of R-symmetry chemical potential. To avoid this ambiguity, we can compare the objects that are invariant under this redefinition
3 i=1 ∆ + i = 2 − ǫ n + , 3 i=1 ∆ − i = 2 + ǫ σn − , ∆ + i − ∆ − i = ǫ n i . (5.52)
It is then easy to see that the same relations hold for the anomaly as written in (4.10) with σ 1 = −1 and σ 2 = −σ.
Notice that the supersymmetric action scales like N 2 as expected for N = 4 SYM. This can be understood from the fact that S SUSY | CY 4 is quadratic in λ and, from (5.35), λ scales linearly with N.
The case of M2 branes
We consider now the case of M2 sitting at the tip of a conical toric CY 4 with Sasaki-Einstein base Y 7 and further compactified on a spindle. The dual M theory solutions correspond to 4d black holes and have an AdS 2 × Z 9 near horizon geometry. We can again describe the system in terms of GK geometry. The supersymmetric action provides an entropy functional for the black hole. The construction is the gravitational dual of Iextremization [75] and has been applied to the case of a compactification on S 2 in [76][77][78] 32 and in the case of a compactification on the spindle in [49,79].
From the gluing formula (5.33) we find
V CY 5 = 1 ǫ 0 V CY 4 ǫ + , λ + − 1 ǫ 0 V CY 4 ǫ − , λ − , (5.53) where ǫ + = ǫ − ǫ 0 w + n + , λ + a = λ a + ǫ 0 n + ǫ 1 − ǫ 0 λ + , ǫ − = ǫ + ǫ 0 w − σn − , λ − a = λ a − ǫ 0 σn − ǫ 1 + ǫ 0 λ − . (5.54)
Taking the piece cubic in λ we recover the result in [49] S SUSY | CY 5 = 2π
ǫ 1 ǫ 0 V CY 4 (ǫ + , λ + ) − V CY 4 (ǫ − , λ − ) . (5.55)
To simplify the discussion we restrict to the case Z 7 = S 7 which is dual to the ABJM theory with k = 1 compactified on the spindle. The vectors are where the constraint (5.37) further requires 59) and N will be related to the number of colors of the dual theory. We use again the gauge freedom (5.10) to set λ 1 = λ 2 = λ 3 = λ 4 = 0 and we determine λ ± solving (5.35). This time, the master volume evaluated at the two poles in this gauge reads
V C 4 = e −(ǫ 1 −ǫ 2 −ǫ 3 −ǫ 4 )λ 1 −ǫ 2 λ 2 −ǫ 3 λ 3 −ǫ 4 λ 4 (ǫ 1 − ǫ 2 − ǫ 3 − ǫ 4 )ǫ 2 ǫ 3 ǫ 4 .4 i=a n a = − 1 n + − 1 σn − , n i = − w +i n + − w −i σn − , i = 2, 3, 4 ,(5.V C 4 (ǫ + , λ + ) = (2π) 4 ǫ 3 0 (−λ + ) 3 6n 3 + (ǫ + 1 − ǫ + 2 − ǫ + 3 − ǫ + 4 )ǫ + 2 ǫ + 3 ǫ + 4 , V C 4 (ǫ − , λ − ) = (2π) 4 ǫ 3 0 (λ − ) 3 6(σn − ) 3 (ǫ − 1 − ǫ − 2 − ǫ − 3 − ǫ − 4 )ǫ − 2 ǫ − 3 ǫ − 4 ,(5.60)
and we can find λ ± by solving the a = ± components of equation (5.35) (recall that only two of them are independent)
ν 4 N = (2π) 5 ǫ 1 ǫ 2 0 λ 2 + 2n 2 + (ǫ + 1 − ǫ + 2 − ǫ + 3 − ǫ + 4 )ǫ + 2 ǫ + 3 ǫ + 4 , ν 4 N = (2π) 5 ǫ 1 ǫ 2 0 λ 2 − 2n 2 − (ǫ − 1 − ǫ − 2 − ǫ − 3 − ǫ − 4 )ǫ − 2 ǫ − 3 ǫ − 4 . (5.61)
There is a sign ambiguity in solving (5.61) that we fix to match the gravitational entropy function result and the explicit geometric analysis in [49]. 33 Define
ǫ 0 = ǫ 2 , ǫ 1 = ∆ 1 + ∆ 2 + ∆ 3 + ∆ 4 2 , ǫ 2 = ∆ 2 2 , ǫ 3 = ∆ 3 2 , ǫ 4 = ∆ 4 2 . (5.62)
The extremization in [60] must be done under the condition ǫ 1 = 2/(m − 3) = 1 which corresponds to
∆ 1 + ∆ 2 + ∆ 3 + ∆ 4 = 2 . (5.63)
We then see that ∆ i parameterize the R-charges of the four chiral fields of ABJM. The supersymmetric action (5.40) is then given by
S SUSY | CY 5 = F m (∆ + 1 , ∆ + 2 , ∆ + 3 , ∆ + 4 ) ǫ − σ F m (∆ − 1 , ∆ − 2 , ∆ − 3 , ∆ − 4 ) ǫ , (5.64) where F m (∆ 1 , ∆ 2 , ∆ 3 , ∆ 4 ) is proportional to the block for the ABJM theory [48] F m (∆ 1 , ∆ 2 , ∆ 3 , ∆ 4 ) = ν 3/2 4 24π 5/2 N 3/2 ∆ 1 ∆ 2 ∆ 3 ∆ 4 ,(5.65)
and
∆ + 1 = ∆ 1 − ǫ n + (1 − 4 i=2 w +i ) , ∆ + α = ∆ α − ǫ n + w +α , α = 2, 3, 4 , ∆ − 1 = ∆ 1 + ǫ σn − (1 − 4 i=2 w +i ) , ∆ − α = ∆ α + ǫ σn − w −α , α = 2, 3, 4 . (5.66) Notice that 4 i=1 ∆ + i = 2 − ǫ n + , 4 i=1 ∆ − i = 2 + ǫ σn − , ∆ + i − ∆ − i = ǫ n i , i = 1, 2, 3, 4 . (5.67)
This formula matches the entropy function found in [36,49]. 34 Notice that the entropy function scales like N 3/2 as expected for the ABJM theory. The reason is that S SUSY | CY 5 is cubic in λ and, from (5.35), λ scales as √ N. Notice also that the gluing (5.64) involves a different relative sign for the twist and anti-twist. This is typical of M2 and D4 branes and it is not present for D3 and M5 branes [36].
The case of M5 branes
The case of M5 branes compactified on a spindle has no known description in terms of GK geometry. However, a stack of M5 branes in flat space in M theory probes a transverse 33 We see from (5.61)
that (ǫ ± 1 − ǫ ± 2 − ǫ ± 3 − ǫ ± 4 )ǫ ± 2 ǫ ± 3 ǫ ± 4
should be positive. We then choose the solution λ + < 0, λ − > 0 which is consistent with [49] -for example, one can compare with (7.16) and (7.26) in that reference. 34 To compare with (5.22) in [36] one needs to identify n here i = −n there , n here ± = n there ∓ and 2ǫ there = −ǫ. The chemical potentials are related by the redefinition ϕ i = ∆ i − ǫ n+ w +i − ǫ 2 n i for i = 2, 3, 4 and
ϕ 1 = ∆ 1 − ǫ n+ (1 − w +2 − w +3 − w +4 ) − ǫ 2 n 1 .
geometry that is C 2 × R suggesting that the relevant CY 2 is simply C 2 . In the associated supergravity solution AdS 7 × S 4 , the base S 3 of the cone C 2 is fibred over a real direction to give the S 4 . Notice that there is a supersymmetric generalization where the M5 branes probes a C 2 /Z p × R and the CY 2 could be replaced by C 2 /Z p . We now consider the CY 3 consisting of C 2 fibered over the spindle with fan
V 1 = (0, 1, 0) , V 2 = (0, 1, 1) , V + = (n + , w + ) , V − = (−σn − , w − ) , w + = (1, p + ) , w − = (1, p − ) . (5.68)
From the gluing formula (5.33) we have
V CY 3 = 1 ǫ 0 V C 2 ǫ + , λ + − 1 ǫ 0 V C 2 ǫ − , λ − ,(5.69)
where 70) and the equivariant volume of C 2 can be read from (3.85) with p = q = 1
ǫ + = ǫ − ǫ 0 w + n + , λ + a = λ a + ǫ 0 n + ǫ 1 − ǫ 0 λ + ǫ − = ǫ + ǫ 0 w − σn − , λ − a = λ a − ǫ 0 σn − ǫ 1 + ǫ 0 λ − ,(5.V C 2 = e −λ 1 (ǫ 1 −ǫ 2 )−λ 2 ǫ 2 ǫ 2 (ǫ 1 − ǫ 2 ) . (5.71)
In analogy with S SUSY in the context of GK geometry, we expect to be able to extract an off-shell free energy F , for the spindly M5 branes AdS 5 solutions found in [33] from the equivariant volume of C 2 fibred over the spindle. We now show that this is the case by an appropriate generalization of the equations (5.35). A scaling argument for N suggests that the correct generalization is
ν M 5 M a = − ∂V (2) CY 3 ∂λ a , F = V (3) CY 3 , a M a v a i = 0 ,(5.72)
where the flux equation uses the quadratic piece but the extremal function is the cubic piece of the equivariant volume. In this way, λ is linear in N and the extremal function F cubic in N, as expected for M5 branes. Here and in the following sub-sections we will not be concerned with precise normalizations. Following the same logic as in previous sections, we solve the constraint on the fluxes
M a = N(n 1 , n 2 , 1 n + , 1 σn − ) ,(5.73)
with n 1 + n 2 = − 1 n + − 1 σn − and define
ǫ 0 = ǫ 2 , ǫ 1 = ∆ 1 + ∆ 2 2 , ǫ 2 = ∆ 2 2 ,(5.74)
with ∆ 1 + ∆ 2 = 2. The chemical potentials ∆ 1 and ∆ 2 can be associated with the Cartan subgroup of the SO(5) R-symmetry of the (2, 0) theory. We find that, after solving for λ ± in the gauge λ 1 = λ 2 = 0, the extremal function is given by
F = F m (∆ + 1 , ∆ + 2 ) ǫ − F m (∆ − 1 , ∆ − 2 ) ǫ ,(5.75)
where F m (∆ 1 , ∆ 2 ) is proportional to the block for the (2, 0) theory [48] F m (∆ 1 ,
∆ 2 ) = ν 3 M 5 48 N 3 (∆ 1 ∆ 2 ) 2 ,(5.76)
and
∆ + 1 = ∆ 1 − ǫ n + (1 − p + ) , ∆ + 2 = ∆ 2 − ǫ n + p + , ∆ − 1 = ∆ 1 + ǫ σn − (1 − p − ) , ∆ − 2 = ∆ 2 + ǫ σn − p − . (5.77) Notice that 2 i=1 ∆ + i = 2 − ǫ n + , 2 i=1 ∆ − i = 2 + ǫ σn − , ∆ + i − ∆ − i = ǫ n i . (5.78)
We see that we have reproduced the (2, 0) anomaly integrated on the spindle (4.34) with σ 1 = −1 and σ 2 = −σ.
The case of D4 branes
We now turn to the case of D4 branes compactified on a spinde. The massive type IIA supergravity solution associated with a system of D4 and D8 branes is topologically AdS 6 times an hemisphere S 4 [80] suggesting that the relevant CY 2 is again C 2 . Supergravity solutions with an AdS 4 factor corresponding to compactifications of the dual 5d SCFT on a spindle have been found in [36,38]. It si intriguing to observe that we can again extract an off-shell free energy for these spindly solutions from a generalization of the equations (5.35) valid for GK geometry. We consider as before the equivariant volume (5.69) of C 2 fibred over the spindle. This time we take
ν D4 M a = − ∂V (3) CY 3 ∂λ a , F = V (5) CY 3 , a M a v a i = 0 ,(5.79)
so that λ a ∼ N 2 and F ∼ N 5/2 as expected for D4 branes. We use the same definitions (5.73) and (5.74) of the previous section. After solving for λ ± in the gauge λ 1 = λ 2 = 0, the extremal function is given by 35
F = F m (∆ + 1 , ∆ + 2 ) ǫ − σ F m (∆ − 1 , ∆ − 2 ) ǫ ,(5.80)
where F m (∆ 1 , ∆ 2 ) is proportional to the block for the 5d SCFT [48] F m (∆ 1 , ∆ 2 ) = ν
5/2 D4 60 √ 2 N 3/2 (∆ 1 ∆ 2 ) 3/2 ,(5.81)
and
∆ + 1 = ∆ 1 − ǫ n + (1 − p + ) , ∆ + 2 = ∆ 2 − ǫ n + p + , ∆ − 1 = ∆ 1 + ǫ σn − (1 − p − ) , ∆ − 2 = ∆ 2 + ǫ σn − p − .
(5.82) 35 We made a choice of determination for the fractional power that is similar to the M2 brane case and correctly reproduce the supergravity result.
Notice that
2 i=1 ∆ + i = 2 − ǫ n + , 2 i=1 ∆ − i = 2 + ǫ σn − , ∆ + i − ∆ − i = ǫ n i . (5.83)
We have reproduced the entropy function in [36,38]. 36
The case of D2 branes
We finally consider the case of D2 branes compactified on a spinde. The massive type IIA supergravity solution associated with a system of D2-branes is topologically AdS 4 × X 6 , where the internal manifold is a Sasaki-Einstein five-manifold foliated over a segment [81,82]. Supergravity solutions with an AdS 2 factor corresponding to compactifications of the dual 3d SCFT on a spindle have been found in [41]. We expect that the corresponding extremal function is related to the equivariant volume of a CY 3 fibred over the spindle. A scaling argument suggests to use
ν D2 M a = − ∂V (4) CY 4 ∂λ a , F = V (5) CY 4 , a M a v a i = 0 ,(5.84)
so that λ a ∼ N 1/3 and F ∼ N 5/3 as expected for D2 branes. We can verify that this is the case for the solution associated with C 3 where the three-dimensional SCFT is a pure N = 2 Chern-Simons theory with the same matter content and superpotential of the maximal supersymmetric Yang-Mills theory [81] that we indicate as D2 k . We have
V CY 4 = 1 ǫ 0 V C 3 ǫ + , λ + − 1 ǫ 0 V C 3 ǫ − , λ − ,(5.85)
where V C 3 is given in (5.42). The fluxes can be written as in (5.43). Since the matter content of the theory is the same as for N = 4 SYM we can still use the parameterization
ǫ 0 = ǫ , ǫ 1 = ∆ 1 + ∆ 2 + ∆ 3 , ǫ 2 = ∆ 2 , ǫ 3 = ∆ 3 ,(5.86)
with ∆ 1 + ∆ 2 + ∆ 3 = 2, and interpret the ∆ i as the R-charges of the three chiral fields of the theory. We can use the gauge freedom (5.10) to set λ 1 = λ 2 = λ 3 = 0 and we can find λ ± by solving the first equation in (5.84). The extremal function (5.84) is then obtained by
F = F m (∆ + 1 , ∆ + 2 , ∆ + 3 ) ǫ − F m (∆ − 1 , ∆ − 2 , ∆ − 3 ) ǫ ,(5.87)
where F m (∆ 1 , ∆ 2 , ∆ 3 ) is proportional to the block for the D2 k theory and
F m (∆ 1 , ∆ 2 , ∆ 3 ) ∝ N 5/3 (∆ 1 ∆ 2 ∆ 3 ) 5/3 ,(5.∆ + 1 = ∆ 1 − ǫ n + (1 − 3 i=2 w +i ) , ∆ + 2 = ∆ 2 − ǫ n + w +2 , ∆ + 3 = ∆ 3 − ǫ n + w +3 , ∆ − 1 = ∆ 1 + ǫ σn − (1 − 3 i=2 w −i ) , ∆ − 2 = ∆ 2 + ǫ σn − w −2 , ∆ − 3 = ∆ 3 + ǫ σn − w −3 . (5.89) Notice that 3 i=1 ∆ + i = 2 − ǫ n + , 3 i=1 ∆ − i = 2 + ǫ σn − , ∆ + i − ∆ − i = ǫ n i . (5.90)
This is the natural expectation for the extremal function for a D2 brane theory. It would be interesting to have a more complete analysis in supergravity to compare with.
Discussion
In this paper we provided several applications of equivariant localization to quantum field theory and holography. In particular, we have shown that the equivariant volume, a basic and well-studied geometrical object in symplectic geometry, is at the heart of many constructions characterising supersymmetric geometries. It generalises at once the Sasakian volume of [17,18] and the master volume introduced in [60] that have been proven very useful in the study of superconformal field theories dual to branes sitting at Calabi-Yau singularities, and generalizations thereof. The corresponding quantum (or K-theoretical) version, the equivariant index-character, is analogously crucial for studying the Hilbert series of the moduli spaces of the corresponding SCFTs and we expect that it still has many surprises to unveil. More specifically, we proposed that the equivariant volume should be the key object for all the extremal problems characterising supersymmetric geometries with a holographic interpretation, in different supergravity theories. We showed in this paper that all the extremization problems associated with compactifications on the spindle of M2 and D3 brane at Calabi-Yau singularities as well as M5, D4 and D2 brane configurations in flat space, can be re-expressed in terms of the equivariant volume. In the case of M2 and D3 branes it was known before that the extremization problem can be expressed in terms of the master volume [60] and we showed how this object is encoded in the equivariant volume of the associated fibered Calabi-Yaus. In the M5, D4 and D2 branes there is not yet an analogous of the construction in [60], but we showed that the relevant extremal functions can be written in terms of the equivariant volume. We leave for future work the analysis of more complicated situations, like M5 and D4 branes compactified on a four-dimensional orbifolds M 4 or D2 branes in massive type IIA associated with a generic Sasaki-Einstein five-manifold foliated over a segment [82]. These examples are more complicated but we are confident that the corresponding extremal functions and extremization problems can be written and formulated solely in terms of the equivariant volume.
In this paper we discussed the factorization (5.27) of the extremal functions associated to spindly black objects in terms of gravitational blocks both from the gravitational and the field theory point of views. On the geometry side, in all cases the factorization is an immediate corollary of the fixed-point localization formula for the equivariant volume, applied to the relevant non-compact geometry -see eq. (5.33). For D3 and M5 branes, F and F m are related to the central charge of the lower and higher-dimensional SCFT, respectively, and the gluing (5.27) is nothing else that equivariant localization applied to the computation of the higher-dimensional anomaly polynomial that we discussed in section 4. For other types of branes, F m is the sphere-free energy of the higher-dimensional SCFT at large N [36,48], and (5.27) is typically related to the large N factorization properties of SCFT partition functions of theories with a holographic dual [83][84][85]. In this context, the factorization becomes visible after taking the large N limit of the SCFT free energy, which is usually expressed in terms of a matrix model. The partition function of three-dimensional N = 2 SQFTs compactified on a spindle times a circle was derived in [50], generalizing at once the superconformal index and the topologically twisted index. The large N limit of this spindle index is currently under investigation [86] and we expect to reproduce the factorization (5.27), thus closing the circle.
The derivation of partition functions of SQFTs compactified on orbifolds is another arena where the results of this paper could be useful. Indeed, quantum field theory localization relies on the use of the equivariant index theorem [87] and the building blocks of this construction will involve generalizations of the index-character discussed in section 2.6 and appendix C. More precisely, the index-character enters as a basic building block for partition functions on M × S 1 where supersymmetry is preserved with a topological twist on M, while it needs to be generalized in the case of general σ a as in (4.49). In particular, it would be interesting to generalize the five-dimensional indices of [84,88,89] to the case of a five-dimensional SCFT defined on M 4 × S 1 and apply the result to reproduce the entropy function for the supergravity solutions associated with D4 branes compactified on a four-dimensional orbifold [45], as well as recover the anomaly polynomial results for M5 branes compactified on a four-dimensional orbifold discussed in section 4.3. 37 which can be evaluated by elementary methods. Defining the one-form
ν ≡ e −y i ǫ i ǫ · ǫ (ǫ 2 dy 1 − ǫ 1 dy 2 ) (A.2)
and, using Stoke's theorem, we have
V(λ a , ǫ i ) = P dν = ∂P ν = a Fa ν , (A.3)
where a facet F a of the polytope is defined by the linear equation
F a = {l a ≡ v a i y i − λ a = 0} . (A.4)
We can introduce a coordinate 38 s ∈ [s min a , s max a ] on each facet F a , writing
y i | Fa =ṽ a i s + λ a v a · v a v a i , (A.5) whereṽ a i ≡ ε ij v a j .
The extrema of the interval can be determined by intersecting F a with F a−1 and F a+1 , respectively, and read s min
a = 1 v a−1 , v a λ a−1 − v a−1 · v a v a · v a λ a , s max a = 1 v a+1 , v a λ a+1 − v a+1 · v a v a · v a λ a , (A.6)
where v, w ≡ det(v, w). Plugging (A.5) into ν and integrating we have
Fa ν = v a · ǫ ǫ · ǫ v a , ǫ exp − ǫ · v a v a · v a λ a exp [ v a , ǫ s max a ] − exp v a , ǫ s min a .
(A.7)
Recalling that
ǫ a 1 = − v a+1 , ǫ v a , v a+1 , ǫ a 2 = v a , ǫ v a , v a+1 , (A.8)
and using the vector identities
v a+1 , v a ǫ · v a + v a , ǫ v a · v a+1 = − ǫ, v a+1 v a · v a , v a−1 , v a ǫ · v a + v a , ǫ v a−1 · v a = − ǫ, v a−1 v a · v a , (A.9)
we obtain the compact expression
Fa ν = v a · ǫ ǫ · ǫ v a , ǫ e −(ǫ a 1 λa+ǫ a 2 λ a+1) − e −(ǫ a−1 1 λ a−1 +ǫ a−1 2 λa) . (A.10)
Now, for a four-dimensional orbifold 38 With a slight abuse of notation we do not denote this with an index s a .
c T 1 (L a )| y b = − δ b,a ǫ b 1 + δ b,a−1 ǫ b 2 , (A.11)
so that
Fa ν = v a · ǫ ǫ · ǫ v a , ǫ e b λ b c T 1 (L b )|y a − e b λ b c T 1 (L b )|y a−1 . (A.12)
Rearranging the contributions of the vertices in the sum
V(λ a , ǫ i ) = a Fa ν = a v a · ǫ ǫ · ǫ v a , ǫ − v a+1 · ǫ ǫ · ǫ v a+1 , ǫ e b λ b c 1 (L b )|y a = a 1 d a,a+1 ǫ a 1 ǫ a 2 e b λ b c 1 (L b )|a , (A.13)
where we used another other standard vector identity v a+1 , ǫ ǫ · v a − v a , ǫ v a+1 · ǫ = − v a , v a+1 ǫ · ǫ , (A.14)
we obtain the localization formula (3.35).
B Direct proof of the master volume formula for CY 3 We consider a singular Calabi-Yau cone with fan specified by the vectors v a = (1, w a ), with a = 1, . . . , d ′ and w a ∈ Z 2 . The projection on the plane with first coordinate equal to one is a convex polytope called toric diagram in the physics literature. We label the vectors v a along the toric diagram in anticlockwise order and we identify vectors cyclically, v d ′ +1 ≡ v 1 . In order to apply the fixed point formula we need to resolve the singularity. This can be done by triangulating the toric diagram. We add new vectors v α = (1,w α ), α = d ′ + 1, . . . , d withw α lying inside the toric diagram until it becomes the union of triangles. The master volume V is a function of the λ a associated with the external vectors v a = (1, w a ) only. The equivariant volume V is a function of all λ and can be obtained from the fixed point formula (5.5), where the sum is extended to all triangles in which the toric diagram has been partitioned. The computation is fortunately local. Every particular external λ a enters in just two fixed point contributions, associated with the two triangles in figure 8, wherev is an internal point. Denoting v, w, u ≡ det(v, w, u), and taking into account the orientations of the inward normals u a A , we can write V = d 2 I e −(λ v a−1 ,v a ,ǫ +λ a−1 v a ,v,ǫ +λa v,v a−1 ,ǫ )/d I v a−1 , v a , ǫ v a ,v, ǫ v, v a−1 , ǫ + d 2 II e −(λa v a+1 ,v,ǫ +λ v a ,v a+1 ,ǫ +λ a+1 v,v a ,ǫ )/d II v a+1 ,v, ǫ v a , v a+1 , ǫ v, v a , ǫ + . . .
(B.1) whereλ is the Kälher parameter associated with the internal pointv, d I = | v a−1 , v a ,v | and d II = | v a+1 , v a ,v | are the orders of the local singularities and the dots refer to terms independent of λ a . We can then compute
∂ 2 V ∂λ 2 a λ=0 = v, v a−1 , ǫ v a−1 , v a , ǫ v a ,v, ǫ + v a+1 ,v, ǫ v a , v a+1 , ǫ v, v a , ǫ = − v a−1 , v a+1 , ǫ v a−1 , v a , ǫ v a , v a+1 , ǫ ∂ 2 V ∂λ a ∂λ a−1 λ=0 = 1 v a−1 , v a , ǫ ∂ 2 V ∂λ a ∂λ a+1 λ=0 = 1 v a , v a+1 , ǫ ,
(B.2) thus recovering the master volume expression (5.18).
C The equivariant index
In this appendix we present some simple examples of character indices. It is an interesting topic, with many applications for example to localization computations [50], but, since it is not the main theme of this paper we will be brief.
C.1 The equivariant index for the spindle
We consider the spindle Σ = WP 1 [n + ,n − ] . As in section 3.1 we take v 1 = n + , v 2 = −n − and GLSM charges Q = (n − , n + ). We take n + and n − relatively prime.
The character computes the equivariant index Z(q, Λ a ) = where q is the weight for the T action. We recall that not all the T-invariant divisors are inequivalent. In particular, for the spindle, (2.22) implies n + D + = n − D − . The fixed point formula (2.71) has contribution from two poles of the spindle. In each contribution we need to add an average over the local singularity Z d A Z(Λ a , q) = 1 n + n + −1 k=0 e −2πikΛ + /n + q −Λ + /n + 1 − e 2πik/n + q 1/n + + Setting q = e −ǫ and Λ ± = −λ 1,2 / and taking the limit → 0 we recover the equivariant volume (3.16) as the coefficient of the leading pole as in (2.67). Only the terms with k = 0 contribute to the limit. The index can been resummed using [50] 1 k where ω k = e 2πi k , u is a complex number, α is an integer number and the floor function denotes the integer part. This formula can be proved by expanding both sides in power series of u. The result is
Z(Λ a , q) = q − Λ + n + 1 − q + q Λ − n − 1 − q −1 = q − Λ + n + − q 1+ Λ − n − 1 − q . (C.4)
Notice that, although there were fractional powers of q in (C.2), the final formula contains only integer powers. We see that, for Λ ± > 0, Z(Λ a , q) = q This can be checked by computing explicitly the residue related toq i . One obtain two sums over z k − =q −1/n − 1 e 2πik − /n − and z k + =q −1/n + 2 e 2πik + /n + for k ± = 0, . . . n ± − 1 that match the fixed point formula. 39 On the other hand, we can also evaluate the integral in a simpler way. We define the grand-canonical partition function
∞ T =0 Z M W (T,q a )w T = − dz 2πi 1 (1 − z n −q 1 )(1 − z n +q 2 )(z − w) = 1 (1 − w n −q 1 )(1 − w n +q 2 ) ,
(C.11) where we evaluated the integral by deforming the contour circling z k ± into a contour around z = w, which is the only other singularity of the integrand. From this expression we see that Z M W (T,q a ) for T > 0 counts the monomials in (q 1 ,q 2 ) of charge T under the rescaling with weights Q = (n − , n + ). These are precisely the holomorphic sections F = T n + n − . 39 We assume that n + and n + are relatively prime. For simplicity, we only consider the case of the "non-minimal" fan v 1 = (n 3 , n 3 ) , v 2 = (−n 1 , 0) , v 3 = (0, −n 2 ) , (C. 13) and Q = (N 1 , N 2 , N 3 ) = (n 1 n 2 , n 2 n 3 , n 1 n 3 ). The orbifold is M 4 = WP 2 [N 1 ,N 2 ,N 3 ] if the n a are coprime. There are only labels and not extra orbifold singularities in the sense that the corresponding primitive vectorsv a , obtained by dividing v a by the label n a , satisfŷ d a,a+1 = det(v a ,v a+1 ) = 1.
The fixed point formula (2.71) requires averaging over the local orbifold singularity. For example, we can consider the contribution from the fixed point associated with the cone (v 2 , v 3 ). The local data are d 2,3 = n 1 n 2 and u 1 = (−n 2 , 0) and u 2 = (0, −n 1 ). The local orbifold action is determined by J 2 n 1 = s, J 3 n 2 = p where s, p are integers:
(e 2πiJ 2 , e 2πiJ 3 ) = (e 2πi s n 1 , e 2πi p n 2 ) .
(C.14)
The fixed point contribution is This can be proved with elementary methods. By expanding in power series (C.15) in 1/q 1 and 1/q 2 we obtain all the integer points in an infinite cone, as in figure 9. By similarly expanding the other power series in (C.16) for |q i | > 1 and |q 2 /q 1 | > 1, we find three competing contributions which cancel except for the points inside the polytope as depicted in figure 10.
The Molien-Weyl formula gives instead
Z M W (T,q a ) = − dz 2πiz 1+T 1 (1 − z N 1q 1 )(1 − z N 2q 2 )(1 − z N 3q 3 )
. Figure 9: The integer points counted by (C.15).
(C.19) ( Λ2 n1 , Λ3 n2 ))(1 − ω kn 3 n 2q3q −n 3 /n 2 1 ) =q Λ 1 1q Λ 2 2q Λ 3 3 (q n 1 2q −n 3 1 ) −⌊ Λ 2 n 1 ⌋ (q n 2 3q −n 3 1 ) −⌊ Λ 3 n 2 ⌋ (1 −q n 1 2q −n 3 1 )(1 −q n 2 3q −n 3 1 ) =q Λ 1 1q Λ 2 2q Λ 3 3 q ⌊ Λ 2 n 1 ⌋ 1 q ⌊ Λ 3 n 2 ⌋ 2 (1 − 1/q 1 )(1 − 1/q 2 ) , (C.22)
reproducing the contribution (C.15). In the sum we replaced k with n 3 k using the fact that the n i are coprime. We also used the identity 1 n 1 n 2
n 1 n 2 −1 k=0 ω −kΘ 1 n 1 ω −kΘ 2 n 2 (1 − ω k n 1 u 1 )(1 − ω k n 2 u 2 ) = u Θ 1 −n 1 ⌊ Θ 1 n 1 ⌋ 1 u Θ 2 −n 2 ⌊ Θ 2 n 2 ⌋ 2 (1 − u n 1 1 )(1 − u n 2 2 )
, (C. 23) which can be proved by expanding both sides in power series of u.
The grand-canonical partition function In the presence ofd a,a+1 = det(v a ,v a+1 ) = 1 we would need more general formulas for resumming the averages over the orbifold action and we will not discuss this case.
imply that, if d is the number of vectors in the fan, there are d − m independent (2m − 2)-cycles in homology and therefore only d − m independent Kähler parameters. The d parameters λ a are associated with the T m -invariant divisors and provide an overparameterization of the Kähler moduli.
64) 7
7Notice that this does not affect the argument for the fixed point formula. The contribution of ω T to the fixed point is still ω T = −2π a λ a c T 1 (L a ), since the extra term 1 2 a G ij v a j vanishes at the fixed point by(2.42).
Figure 2 :
2Fixed points are associated with vertices p a of the polytope. The vectors in the fan are orthogonal to the facets. For each vertex there is a corresponding cone (v a , v a+1 ) in the fan. The two inwards normals u i a , 1 = 1, 2 to the cone, which lie along the edges of the polytope, enter in the fixed point formula through the quantities ǫ a i = ǫ·u i a d a,a+1 .
point formula(2.48) for the equivariant volume specializes to the expression10
Figure 3 :
3The fan and the polytope for a generic quarilateral.
Figure 5 :
5The fan and the polytope for O(−p) → Σ. The resolution introduces a compact facet in the polytope that is orthogonal to v 2 . The corresponding segment represents the toric polytope of a spindle Σ.Alternatively, the basis (3.92) can be obtained from (3.69) of the compact example, by removing v 3 and setting n − = 1, d 4,1 = −p, d 1,2 = n − , d 2,4 = −n + and a + d 2,4 = t, a − d 2,4 = q. The condition (3.70) obeyed by a + , a − becoming the condition (3.93) for t, q.
We now consider the two small resolutions of the conifold singularity {z 1 z 2 = z 3 z 4 |z i ∈ C 4 }, corresponding to the asymptotically conical non-compact manifold O(−1)⊕O(−1) → P 1 , known in the physics literature as resolved conifold. The toric data consist in the fan
Figure 6 :
6One of the two small resolutions of the conifold singularity and the corresponding non-compact polytope projected on the plane where the vectors v a live. The fixed point formula (2.48) reads
Figure 7 :(a 1 , a 2 , a 3
7123(L a2 ) y = ǫ·u a 2 da 1 ,a 2 ,a 3 c 1 (L a3 ) y = ǫ·u a 3 da 1 ,a 2 ,a 3 c 1 (L a1 ) y = ǫ·u a 1 da 1 ,a 2 ,a 3 d a1,a2,a3 = | det(v a1 , v a2 , v a3 )| Thecontribution of a single vertex of the polytope to the fixed point formula for m = 3.Consider first the resolution infigure 6. The necessary data are
This recipe is formally equivalent to consider equivariant first Chern classes, as follows. Starting from the line bundles L a with equivariant first Chern classes c T 1 (L a ) discussed in section 2.1, one can make the replacement c T 1 (L a ) = c 1 (L a ) + 2πµ i a ǫ i → c 1 (L a ) = c 1 (L a ) + 2πµ i a c 1 (J i ) .
fluxes are now defined by integrating the first Chern classes of the flavour line bundles, c 1 (E I ) = −p a I c 1 (L a ), over the various divisors, namely q a I ≡ − Da c 1 (E I ) = p b I D ab . (4.46)
only d − 2 physical fluxes are linearly independent. In particular, the "gauge" transformation p a I →p a I = p a I + det(λ I , v a ) , (4.48)
Consider a non-compact toric Calabi-Yau X of complex dimension m defined by a fan with primitive vectors v a , a = 1, . . . , d. The Calabi-Yau condition requires the vectors v a to lie on a plane. We choose an SL(m; Z) basis where the first component of all the vectors v a is one v a = (1, w a ) w a ∈ Z m−1 .
...,am) e −ǫ·(λa 1 u a 1 A +...+λa m u am A )/da 1 ,...,am d a 1 ,...,am ( ǫ·u a 1 A da 1 ,...,am ) . . . ( ǫ·u am A da 1 ,...,am ) , (5.5) where A runs over the m-dimensional cones of the resolution, d a 1 ,...,am = | det(v a 1 , , . . . , v am )|, and u a A are the inward normal to the facets of A defined in section 2.3.
r 2 2
2= H, where H = ǫ i y i is the Hamiltonian for ξ. Expression (5.11) then reduces, up to a numerical factor, to the Sasakian volume of Y [17, 18] V(0, ǫ i ) = (m − 1)! 2π m vol[Y ](ǫ) . (5.13) 28 i = 1 is trivial.
36) and M a are integer fluxes, encoding the flux quantization conditions of the M-theory fourform or the type IIB RR five-form. ν m is a normalization constant that depends on the dimension. The fluxes M a contain the information about the number of branes N and the topological details of the Sasaki-Einstein fibration over the spindle. Notice that for consistency of (5.35) with (5.9) we must haved a=1 v a M a = 0 . (5.37) Given (5.9) and (5.37), only d − m + 1 equations in (5.35) are actually independent. Combining the equations (5.35) with the m − 1 gauge invariances (5.10), we can eliminate all λ a . We are left with a functional of the equivariant parameters ǫ i that needs to be extremized in order to find the solution of the equations of motion.
the fluxes M a as (M 1 , M 2 , M 3 , M + , M − ) ≡ (Nn 1 , Nn 2 , Nn 3
the fluxes M a as (M 1 , M 2 , M 3 , M 4 , M + , M − ) ≡ (Nn 1 , Nn 2 , Nn 3 , Nn 4
88) 36
36To compare with(5.33) in[36] one needs to identify n herei = −n there , n here ± = n there ∓ and 2ǫ there = −ǫ. The chemical potentials are related by the redefinition ϕ 1 = ∆ 1 − ǫ n+ (1 − p + ) − ǫ 2 n 1 and ϕ 2 = ∆ 2 − ǫ n+ p + − ǫ 2 n 2 .
Figure 8 :
8The triangles in the toric diagram contributing a λ a dependence in the fixed point formula for V.
p Tr{q|H (0,p) (Σ, O(Λ + D + + Λ − D − ))} , (C.1)
e
−2πikΛ − /n − q Λ − /n − 1 − e 2πik/n − q −1/n − . (C.2)
1
− u k , (C.3)
has only positive signs and counts the sections ofH 0 (WP 1 [n + ,n − ] , O(Λ + D + + Λ − D − )) , (C.6)refined with respect to the T action. Geometrically, we see that the exponents are the integers the integer points in the polytope (2.66)∆(Λ a ) = {m ∈ Z|v 1 m ≥ −Λ + , v 2 m ≥ −Λ − } . (C.8)The Molien-Weyl formula (2.68) readsZ M W (T,q a ) = − dz 2πiz 1+T 1 (1 − z n −q 1 )(1 − z n +q 2 ) . (C.9)The relation between the two formulas is as in(2.70) Z M W (T = n + Λ − + n − Λ + ,q a
H 0 (
0WP 1 [n + ,n − ] , O(T )) , (C.12)where the line bundle O(T ) corresponds to Chern class c 1 = F 2π with Chern number 1 2π
use the identity (C.3). The contributions of the other fixed points can be computed similarly. Combining the three contributions we find V(Λ a , q i with a T 2 weight) the integer points in the polytope ∆(Λ a ) = {m · v a ≥ −Λ a } , m ∈ Z 2 . (C.18)
Figure 10 :
10The power series contributing to (C.16). The yellow and orange power series enter with a plus sign, while the green with a minus sign. The result is just the set of integer points in the triangle.This can be explicitly evaluated by taking the residues at z Naq a n 1 n 2 Λ 1 + n 2 n 3 Λ 2 + n 3 n 1 Λ 3 .(C.21)One can check that the three set of residues at z Naq a = 1 precisely correspond to the three fixed point contributions. For example, the residue associated withq
Z
M W (T,q a )w T = − dz 2πi1 (1 − z N 1q 1 )(1 − z N 2q 2 )(1 − z N 3q 3 )(z − w) (C.24)can be also computed deforming the contour to encircle z = w which gives∞ T =0 Z M W (T,q a )w T = 1 (1 − w N 1q 1 )(1 − w N 2q 2 )(1 − w N 3q 3 ).(C.25) We see that Z M W (T,q a ) for T > 0 counts the monomials in (q 1 ,q 2 ,q 3 ) of charge T under the rescaling with weights Q =(N 1 , N 2 , N 3 ). These are precisely the holomorphic sectionsH 0 (WP 2 [N 1 ,N 2 ,N 3 ] , O(T )) , (C.26)of the line bundle O(T ). In the familiar case of WP 2 , where N i = 1, these are just the homogeneous polynomials of degree T .
26 )
26with respect to the T m action. Notice that these expressions are formal linear combinations of forms of different degrees.Our main object of interest is the equivariant volume of M 2m[1],
∂λa . Since the GLSM charges satify (2.54) and V M W is a function of t A = − a λ a Q A a , we obtain zero on the left hand side. On the right hand side we obtain) .
(2.58)
As a consistency check of this formula we can apply to both sides the operator
a v a
i
∂
e a λaǭa
a
v a
iǭ a V +
a
v a
i
∂V
∂λ a
,
(2.59)
44 )
4411 See[69] for a physics-oriented review.
55 )
55still holds. The right hand side can be computed as before with the fixed point formula(3.35) while the left hand side with the Molien-Weyl formula. However, since the Molien-Weyl integral is blind to the torsion part, the formula (2.55) should be further divided by the order N 3 of the discrete group Γ. For example, the non-equivariant volume is now1
2
C . 2
.The equivariant index for WP 2 [N 1 ,N 2 ,N 3 ]
We adopt the Einstein summation convention for the indices (two repeated indices imply the sum). However, for sake of clarity, in some formulas we will write explicitly the sums over the indices.2 We use intercheangably the notation y and y i to denote a point in R m .3 They are called ramification divisors, whileD a = n a D a are called branched divisors, with multiplicity (or ramification index) n a .
Notice that we are using opposite convections with respect to[45]!
We use this notation to facilitate comparison with the spindle literature in supergravity.
Notice that there is no summation on a in the exponent.
For the non-minimal case(3.38), we have −n 3 Q 1 + n 1 Q 2 , −n 3 Q 1 + n 2 Q 3 ∈ Z which implies Q 1 = n 1 n 2 α + k 1 /n 3 , Q 2 = n 3 n 2 α + k 2 /n 1 and Q 3 = n 1 n 3 α + k 3 /n 2 but all the k i can be absorbed by taking α = integer/n i if the n i are coprime and we are left with the U (1) action with charges (n 1 n 2 , n 2 n 3 , n 1 n 3 ).
The non-compact examples of this section are the total space of some vector bundles over a base orbifold, but we denote them using the more schematic notation "fibre → base".
We identify their p i with our −n I .
There is no notion of trial central charge in a 6d SCFT, however we adopt this conventional definition, in analogy with the 4d case.
The example of the conifold discussed in section 3.3.3 is special in that there are no compact fourcycles. This is due to the fact we just subdivide the fan without introducing new vectors.
Often also called entropy functions, even if the related physical observable is not the entropy of a black hole.
This is similar to what happens for the Sasakian volume and the trial a-charge for D3 branes sitting at the tip of a conical toric Calabi-Yau three-fold singularity[17,74].
Or, more generally, in the case of a compactification on a Riemann surface.
This can be done by considering the five-dimensional N = 2 SYM theory that decompactifies to the (2, 0) theory in the UV.
AcknowledgementsDM is supported in part by the INFN. AZ is partially supported by the INFN and the MIUR-PRIN contract 2017CC72MK003. We thank F. Faedo for comments on a draft of this paper. We gratefully acknowledge support from the Simons Center for Geometry and Physics, Stony Brook University, at which some of the research for this paper was performed.A Direct proof of the fixed point formula for M 4In this section we derive the fixed-point formula (3.35) for four-dimensional toric orbifolds. We use the notations introduced in section 3.2. We start by writing the equivariant volume as an integral over the polytope, as in (2.32)
On the variation in the cohomology of the symplectic form of the reduced phase space. J J Duistermaat, G J Heckman, Inventiones mathematicae. 692J. J. Duistermaat and G. J. Heckman, On the variation in the cohomology of the symplectic form of the reduced phase space, Inventiones mathematicae 69 (1982), no. 2 259-268.
Classes caractéristiqueséquivariantes. formule de localisation en cohomologieéquivariante. N Berline, M Vergne, CR Acad. Sci. Paris. 2952N. Berline and M. Vergne, Classes caractéristiqueséquivariantes. formule de localisation en cohomologieéquivariante, CR Acad. Sci. Paris 295 (1982), no. 2 539-541.
The moment map and equivariant cohomology. M Atiyah, R Bott, Topology. 231M. Atiyah and R. Bott, The moment map and equivariant cohomology, Topology 23 (1984), no. 1 1-28.
M Vergne, Cohomologieéquivariante et théorème de stokes, Séminaires & Congrès. rédigé par Sylvie Paycha7M. Vergne, Cohomologieéquivariante et théorème de stokes, Séminaires & Congrès 7 (2003) 1-43 (rédigé par Sylvie Paycha).
Applications of equivariant cohomology. M Vergne, math/0607389M. Vergne, Applications of equivariant cohomology, math/0607389.
Review of localization in geometry. V Pestun, arXiv:1608.02954J. Phys. A. 5044V. Pestun, Review of localization in geometry, J. Phys. A 50 (2017), no. 44 443002, [arXiv:1608.02954].
M Vergne, Equivariant index formulas for orbifolds. 82M. Vergne, Equivariant index formulas for orbifolds, Duke Mathematical Journal 82 (1996) 637-652.
Hamiltoniens périodiques et images convexes de l'application moment. T Delzant, Bulletin de la Société Mathématique de France. 1163T. Delzant, Hamiltoniens périodiques et images convexes de l'application moment, Bulletin de la Société Mathématique de France 116 (1988), no. 3 315-339.
Kaehler structures on toric varieties. V W Guillemin, Journal of Differential Geometry. 40V. W. Guillemin, Kaehler structures on toric varieties, Journal of Differential Geometry 40 (1994) 285-309.
Computing the volume, counting integral points, and exponential sums. A I Barvinok, Discrete & Computational Geometry. 102A. I. Barvinok, Computing the volume, counting integral points, and exponential sums, Discrete & Computational Geometry 10 (1993), no. 2 123-141.
Convex polytopes and quantization of symplectic manifolds. M Vergne, Proceedings of the National Academy of Sciences of the United States of America. 9325M. Vergne, Convex polytopes and quantization of symplectic manifolds, Proceedings of the National Academy of Sciences of the United States of America 93 (1996), no. 25 14238-14242.
Hamiltonian torus actions on symplectic orbifolds and toric varieties. E Lerman, S Tolman, dg-ga/9511008Trans. Amer. Math. Soc. 349E. Lerman and S. Tolman, Hamiltonian torus actions on symplectic orbifolds and toric varieties, Trans. Amer. Math. Soc. 349 (1997) 4201-4230, [dg-ga/9511008].
Moment maps on symplectic cones. S F B De Moraes, C Tomei, Pacific Journal of Mathematics. 181S. F. B. de Moraes and C. Tomei, Moment maps on symplectic cones, Pacific Journal of Mathematics 181 (1997) 357-375.
Seiberg-witten prepotential from instanton counting. N A Nekrasov, hep-th/0206161N. A. Nekrasov, Seiberg-witten prepotential from instanton counting, hep-th/0206161.
Y Tachikawa, arXiv:1412.7121A review on instanton counting and W-algebras. Y. Tachikawa, A review on instanton counting and W-algebras, arXiv:1412.7121.
Toric geometry, Sasaki-Einstein manifolds and a new infinite class of AdS/CFT duals. D Martelli, J Sparks, hep-th/0411238Commun. Math. Phys. 262D. Martelli and J. Sparks, Toric geometry, Sasaki-Einstein manifolds and a new infinite class of AdS/CFT duals, Commun. Math. Phys. 262 (2006) 51-89, [hep-th/0411238].
The Geometric dual of a-maximisation for Toric Sasaki-Einstein manifolds. D Martelli, J Sparks, S.-T Yau, hep-th/0503183Commun. Math. Phys. 268D. Martelli, J. Sparks, and S.-T. Yau, The Geometric dual of a-maximisation for Toric Sasaki-Einstein manifolds, Commun. Math. Phys. 268 (2006) 39-65, [hep-th/0503183].
Sasaki-Einstein manifolds and volume minimisation. D Martelli, J Sparks, S.-T Yau, hep-th/0603021Commun. Math. Phys. 280D. Martelli, J. Sparks, and S.-T. Yau, Sasaki-Einstein manifolds and volume minimisation, Commun. Math. Phys. 280 (2008) 611-673, [hep-th/0603021].
Obstructions to the existence of Sasaki-Einstein metrics. J P Gauntlett, D Martelli, J Sparks, S.-T Yau, hep-th/0607080Commun. Math. Phys. 273J. P. Gauntlett, D. Martelli, J. Sparks, and S.-T. Yau, Obstructions to the existence of Sasaki-Einstein metrics, Commun. Math. Phys. 273 (2007) 803-827, [hep-th/0607080].
Transverse Kahler geometry of Sasaki manifolds and toric Sasaki-Einstein manifolds. A Futaki, H Ono, G Wang, J. Diff. Geom. 833math/0607586A. Futaki, H. Ono, and G. Wang, Transverse Kahler geometry of Sasaki manifolds and toric Sasaki-Einstein manifolds, J. Diff. Geom. 83 (2009), no. 3 585-636, [math/0607586].
Kähler-Einstein metrics and volume minimization. C Li, Y Liu, arXiv:1602.05094Adv. Math. 341C. Li and Y. Liu, Kähler-Einstein metrics and volume minimization, Adv. Math. 341 (2019) 440-492, [arXiv:1602.05094].
Stability of Sasaki-extremal metrics under complex deformations. C Van Coevering, arXiv:1204.1630Int. Math. Res. Not. 2013C. van Coevering, Stability of Sasaki-extremal metrics under complex deformations, Int. Math. Res. Not. 2013 (2013), no. 24 5527-5570, [arXiv:1204.1630].
Relative K-stability and Extremal Sasaki metrics. C P Boyer, C Van Coevering, arXiv:1608.06184Math. Res. Lett. 25C. P. Boyer and C. van Coevering, Relative K-stability and Extremal Sasaki metrics, Math. Res. Lett. 25 (2018) 1-19, [arXiv:1608.06184].
A Futaki, H Ono, arXiv:1706.07953Volume minimization and conformally kähler, einstein-maxwell geometry. A. Futaki and H. Ono, Volume minimization and conformally kähler, einstein-maxwell geometry, arXiv:1706.07953.
A geometric dual of c-extremization. C Couzens, J P Gauntlett, D Martelli, J Sparks, arXiv:1810.11026JHEP. 21201C. Couzens, J. P. Gauntlett, D. Martelli, and J. Sparks, A geometric dual of c-extremization, JHEP 01 (2019) 212, [arXiv:1810.11026].
The Central charge of supersymmetric AdS(5) solutions of type IIB supergravity. M Gabella, J P Gauntlett, E Palti, J Sparks, D Waldram, arXiv:0906.3686Phys. Rev. Lett. 10351601M. Gabella, J. P. Gauntlett, E. Palti, J. Sparks, and D. Waldram, The Central charge of supersymmetric AdS(5) solutions of type IIB supergravity, Phys. Rev. Lett. 103 (2009) 051601, [arXiv:0906.3686].
The free energy of N = 2 supersymmetric AdS 4 solutions of M-theory. M Gabella, D Martelli, A Passias, J Sparks, arXiv:1107.5035JHEP. 1039M. Gabella, D. Martelli, A. Passias, and J. Sparks, The free energy of N = 2 supersymmetric AdS 4 solutions of M-theory, JHEP 10 (2011) 039, [arXiv:1107.5035].
Localization of the action in AdS/CFT. P Benetti Genolini, J M Perez Ipiña, J Sparks, arXiv:1906.11249JHEP. 25210P. Benetti Genolini, J. M. Perez Ipiña, and J. Sparks, Localization of the action in AdS/CFT, JHEP 10 (2019) 252, [arXiv:1906.11249].
D3-Branes Wrapped on a Spindle. P Ferrero, J P Gauntlett, J M Pérez Ipiña, D Martelli, J Sparks, arXiv:2011.10579Phys. Rev. Lett. 12611P. Ferrero, J. P. Gauntlett, J. M. Pérez Ipiña, D. Martelli, and J. Sparks, D3-Branes Wrapped on a Spindle, Phys. Rev. Lett. 126 (2021), no. 11 111601, [arXiv:2011.10579].
Accelerating black holes and spinning spindles. P Ferrero, J P Gauntlett, J M P Ipiña, D Martelli, J Sparks, arXiv:2012.08530Phys. Rev. D. 1044P. Ferrero, J. P. Gauntlett, J. M. P. Ipiña, D. Martelli, and J. Sparks, Accelerating black holes and spinning spindles, Phys. Rev. D 104 (2021), no. 4 046007, [arXiv:2012.08530].
Rotating multi-charge spindles and their microstates. S M Hosseini, K Hristov, A Zaffaroni, arXiv:2104.11249JHEP. 18207S. M. Hosseini, K. Hristov, and A. Zaffaroni, Rotating multi-charge spindles and their microstates, JHEP 07 (2021) 182, [arXiv:2104.11249].
Twisted D3-brane and M5-brane compactifications from multi-charge spindles. A Boido, J M P Ipiña, J Sparks, arXiv:2104.13287JHEP. 22207A. Boido, J. M. P. Ipiña, and J. Sparks, Twisted D3-brane and M5-brane compactifications from multi-charge spindles, JHEP 07 (2021) 222, [arXiv:2104.13287].
M5-branes wrapped on a spindle. P Ferrero, J P Gauntlett, D Martelli, J Sparks, arXiv:2105.13344JHEP. 112P. Ferrero, J. P. Gauntlett, D. Martelli, and J. Sparks, M5-branes wrapped on a spindle, JHEP 11 (2021) 002, [arXiv:2105.13344].
Multicharge accelerating black holes and spinning spindles. P Ferrero, M Inglese, D Martelli, J Sparks, arXiv:2109.14625Phys. Rev. D. 10512P. Ferrero, M. Inglese, D. Martelli, and J. Sparks, Multicharge accelerating black holes and spinning spindles, Phys. Rev. D 105 (2022), no. 12 126001, [arXiv:2109.14625].
M2-branes on discs and multi-charged spindles. C Couzens, K Stemerdink, D Van De Heisteeg, arXiv:2110.00571JHEP. 10704C. Couzens, K. Stemerdink, and D. van de Heisteeg, M2-branes on discs and multi-charged spindles, JHEP 04 (2022) 107, [arXiv:2110.00571].
D4-branes wrapped on a spindle. F Faedo, D Martelli, arXiv:2111.13660JHEP. 02F. Faedo and D. Martelli, D4-branes wrapped on a spindle, JHEP 02 (2022) 101, [arXiv:2111.13660].
Supersymmetric spindles. P Ferrero, J P Gauntlett, J Sparks, arXiv:2112.01543JHEP. 01P. Ferrero, J. P. Gauntlett, and J. Sparks, Supersymmetric spindles, JHEP 01 (2022) 102, [arXiv:2112.01543].
Black holes with spindles at the horizon. S Giri, arXiv:2112.04431JHEP. 06S. Giri, Black holes with spindles at the horizon, JHEP 06 (2022) 145, [arXiv:2112.04431].
A tale of (M)2 twists. C Couzens, arXiv:2112.04462JHEP. 07803C. Couzens, A tale of (M)2 twists, JHEP 03 (2022) 078, [arXiv:2112.04462].
Leigh-Strassler compactified on a spindle. I Arav, J P Gauntlett, M M Roberts, C Rosen, arXiv:2207.06427JHEP. 1067I. Arav, J. P. Gauntlett, M. M. Roberts, and C. Rosen, Leigh-Strassler compactified on a spindle, JHEP 10 (2022) 067, [arXiv:2207.06427].
C Couzens, K Stemerdink, arXiv:2207.06449Universal spindles: D2's on Σ and M5's on Σ × H 3. C. Couzens and K. Stemerdink, Universal spindles: D2's on Σ and M5's on Σ × H 3 , arXiv:2207.06449.
M Suh, arXiv:2304.03308Baryonic spindles from conifolds. M. Suh, Baryonic spindles from conifolds, arXiv:2304.03308.
A Amariti, N Petri, A Segati, arXiv:2304.03663T 1,1 truncation on the spindle. A. Amariti, N. Petri, and A. Segati, T 1,1 truncation on the spindle, arXiv:2304.03663.
K C M Cheung, J H T Fry, J P Gauntlett, J Sparks, arXiv:2204.02990M5-branes wrapped on four-dimensional orbifolds. 82K. C. M. Cheung, J. H. T. Fry, J. P. Gauntlett, and J. Sparks, M5-branes wrapped on four-dimensional orbifolds, JHEP 08 (2022) 082, [arXiv:2204.02990].
Branes wrapped on orbifolds and their gravitational blocks. F Faedo, A Fontanarossa, D Martelli, arXiv:2210.16128Lett. Math. Phys. 1133F. Faedo, A. Fontanarossa, and D. Martelli, Branes wrapped on orbifolds and their gravitational blocks, Lett. Math. Phys. 113 (2023), no. 3 51, [arXiv:2210.16128].
D4-branes wrapped on four-dimensional orbifolds through consistent truncation. C Couzens, H Kim, N Kim, Y Lee, M Suh, arXiv:2210.15695JHEP. 0225C. Couzens, H. Kim, N. Kim, Y. Lee, and M. Suh, D4-branes wrapped on four-dimensional orbifolds through consistent truncation, JHEP 02 (2023) 025, [arXiv:2210.15695].
F Faedo, A Fontanarossa, D Martelli, Branes wrapped on quadrilateral orbifolds. To appearF. Faedo, A. Fontanarossa, and D. Martelli, Branes wrapped on quadrilateral orbifolds, To appear.
Gluing gravitational blocks for AdS black holes. S M Hosseini, K Hristov, A Zaffaroni, arXiv:1909.10550JHEP. 12S. M. Hosseini, K. Hristov, and A. Zaffaroni, Gluing gravitational blocks for AdS black holes, JHEP 12 (2019) 168, [arXiv:1909.10550].
A Boido, J P Gauntlett, D Martelli, J Sparks, arXiv:2211.02662Gravitational Blocks, Spindles and GK Geometry. A. Boido, J. P. Gauntlett, D. Martelli, and J. Sparks, Gravitational Blocks, Spindles and GK Geometry, arXiv:2211.02662.
M Inglese, D Martelli, A Pittelli, arXiv:2303.14199The Spindle Index from Localization. M. Inglese, D. Martelli, and A. Pittelli, The Spindle Index from Localization, arXiv:2303.14199.
Localization of gauge theory on a four-sphere and supersymmetric Wilson loops. V Pestun, arXiv:0712.2824Commun. Math. Phys. 313V. Pestun, Localization of gauge theory on a four-sphere and supersymmetric Wilson loops, Commun. Math. Phys. 313 (2012) 71-129, [arXiv:0712.2824].
S Benvenuti, B Feng, A Hanany, Y.-H He, hep-th/0608050Counting BPS Operators in Gauge Theories: Quivers, Syzygies and Plethystics. 50S. Benvenuti, B. Feng, A. Hanany, and Y.-H. He, Counting BPS Operators in Gauge Theories: Quivers, Syzygies and Plethystics, JHEP 11 (2007) 050, [hep-th/0608050].
Dual Giant Gravitons in Sasaki-Einstein Backgrounds. D Martelli, J Sparks, hep-th/0608060Nucl. Phys. B. 759D. Martelli and J. Sparks, Dual Giant Gravitons in Sasaki-Einstein Backgrounds, Nucl. Phys. B 759 (2006) 292-319, [hep-th/0608060].
Counting BPS baryonic operators in CFTs with Sasaki-Einstein duals. A Butti, D Forcella, A Zaffaroni, hep-th/0611229JHEP. 0669A. Butti, D. Forcella, and A. Zaffaroni, Counting BPS baryonic operators in CFTs with Sasaki-Einstein duals, JHEP 06 (2007) 069, [hep-th/0611229].
Kahler geometry of toric manifolds in symplectic coordinates. M Abreu, math/0004122M. Abreu, Kahler geometry of toric manifolds in symplectic coordinates, math/0004122.
Kahler metrics on toric orbifolds. M Abreu, math/0105112M. Abreu, Kahler metrics on toric orbifolds, math/0105112.
An application of the Duistertmaat-Heckman Theorem and its extensions in Sasaki Geometry. C P Boyer, H Huang, E Legendre, arXiv:1708.03006Geom. Topol. 22C. P. Boyer, H. Huang, and E. Legendre, An application of the Duistertmaat-Heckman Theorem and its extensions in Sasaki Geometry, Geom. Topol. 22 (2018) 4205-4234, [arXiv:1708.03006].
Shifts of prepotentials (with an appendix by Michele Vergne). N Nekrasov, N Piazzalunga, M Zabzine, arXiv:2111.07663SciPost Phys. 125N. Nekrasov, N. Piazzalunga, and M. Zabzine, Shifts of prepotentials (with an appendix by Michele Vergne), SciPost Phys. 12 (2022), no. 5 177, [arXiv:2111.07663].
L Cassia, N Piazzalunga, M Zabzine, arXiv:2211.13269From equivariant volumes to equivariant periods. L. Cassia, N. Piazzalunga, and M. Zabzine, From equivariant volumes to equivariant periods, arXiv:2211.13269.
Toric geometry and the dual of c-extremization. J P Gauntlett, D Martelli, J Sparks, arXiv:1812.05597JHEP. 20401J. P. Gauntlett, D. Martelli, and J. Sparks, Toric geometry and the dual of c-extremization, JHEP 01 (2019) 204, [arXiv:1812.05597].
Geometries with Killing Spinors and Supersymmetric AdS Solutions. J P Gauntlett, N Kim, arXiv:0710.2590Commun. Math. Phys. 284J. P. Gauntlett and N. Kim, Geometries with Killing Spinors and Supersymmetric AdS Solutions, Commun. Math. Phys. 284 (2008) 897-918, [arXiv:0710.2590].
Heat kernels and Dirac operators. N Berline, E Getzler, M Vergne, Springer Science & Business MediaN. Berline, E. Getzler, and M. Vergne, Heat kernels and Dirac operators. Springer Science & Business Media, 2003.
Symplectic Surgery and the Spin-C Dirac operator. E Meinrenken, dg-ga/9504002E. Meinrenken, Symplectic Surgery and the Spin-C Dirac operator, dg-ga/9504002.
Localization for nonabelian group actions. L C Jeffrey, F C Kirwan, alg-geom/9307001Topology. 34L. C. Jeffrey and F. C. Kirwan, Localization for nonabelian group actions, Topology 34 (1995) 291-327, [alg-geom/9307001].
A Canas, Silva, Multiplicity formulas for orbifolds. Ph. D. ThesisA. Canas da Silva, Multiplicity formulas for orbifolds, Ph. D. Thesis (1996).
Kahler-Sasaki geometry of toric symplectic cones in action-angle coordinates. M Abreu, arXiv:0912.0492Port. Math. 672M. Abreu, Kahler-Sasaki geometry of toric symplectic cones in action-angle coordinates, Port. Math. 67 (2010), no. 2 121-153, [arXiv:0912.0492].
| [] |
[
"Machine Unlearning: A Survey",
"Machine Unlearning: A Survey"
] | [
"Heng Xu ",
"Tianqing Zhu ",
"Lefeng Zhang ",
"Philip S Yu ",
"Heng Xu ",
"Tianqing Zhu ",
"Lefeng Zhang ",
"Wanlei Zhou ",
"Philip S Yu ",
"\nUniversity of Technology Sydney\nAustralia\n",
"\nWANLEI ZHOU\nCity University of Macau\nChina\n",
"\nACM Reference Format\nUniversity of Illinois at Chicago\nUnited States\n"
] | [
"University of Technology Sydney\nAustralia",
"WANLEI ZHOU\nCity University of Macau\nChina",
"ACM Reference Format\nUniversity of Illinois at Chicago\nUnited States"
] | [
"J. ACM"
] | Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more. Yet a special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about some specific samples needs to be removed from a model, called machine unlearning. This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality. At the same time, this ambitious problem has led to numerous research efforts aimed at confronting its challenges. To the best of our knowledge, no study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios. Accordingly, with this survey, we aim to capture the key concepts of unlearning techniques. The existing solutions are classified and summarized based on their characteristics within an up-to-date and comprehensive review of each category's advantages and limitations. The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.CCS Concepts: • Security and privacy → Human and societal aspects of security and privacy. | 10.1145/3603620 | [
"https://export.arxiv.org/pdf/2306.03558v1.pdf"
] | 259,089,053 | 2306.03558 | 0813793c140adf6f55b5fcb95ddfb12b8a924121 |
Machine Unlearning: A Survey
2018. August 2018
Heng Xu
Tianqing Zhu
Lefeng Zhang
Philip S Yu
Heng Xu
Tianqing Zhu
Lefeng Zhang
Wanlei Zhou
Philip S Yu
University of Technology Sydney
Australia
WANLEI ZHOU
City University of Macau
China
ACM Reference Format
University of Illinois at Chicago
United States
Machine Unlearning: A Survey
J. ACM
37, 1, Article 111 (August 2018), 36 pages. Additional Key Words and Phrases: Machine learning, deep learning, machine unlearning, sample removal, data privacy, model usability
Machine learning has attracted widespread attention and evolved into an enabling technology for a wide range of highly successful applications, such as intelligent computer vision, speech recognition, medical diagnosis, and more. Yet a special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about some specific samples needs to be removed from a model, called machine unlearning. This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality. At the same time, this ambitious problem has led to numerous research efforts aimed at confronting its challenges. To the best of our knowledge, no study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios. Accordingly, with this survey, we aim to capture the key concepts of unlearning techniques. The existing solutions are classified and summarized based on their characteristics within an up-to-date and comprehensive review of each category's advantages and limitations. The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.CCS Concepts: • Security and privacy → Human and societal aspects of security and privacy.
INTRODUCTION
In recent years, machine learning has seen remarkable progress and wide exploration across every field of artificial intelligence (AI) [1]. However, as AI becomes increasingly data-dependent, more and more factors, such as privacy concerns, regulations and laws, are leading to a new type of request -to delete information. Specifically, concerned parties are requesting that particular samples be removed from a training dataset and that the impact of those samples be removed from an already-trained model [2][3][4]. This is because membership inference attacks [5] and model inversion attacks [6] can reveal information about the specific contents of a training dataset. More importantly, legislators around the world have wisely introduced laws that grant users the right to be forgotten [7,8]. These regulations, which include the European Union's General Data Protection Regulation (GDPR) [9], the California Consumer Privacy Act (CCPA) [10], the Act on the Protection of Personal Information (APPI) [11], and Canada's proposed Consumer Privacy Protection Act (CPPA) [12], compel the deletion of private information.
The Motivation of Machine Unlearning
Machine unlearning (a.k.a. selective forgetting, data deletion, or scrubbing) requires that the samples and their influence be completely and quickly removed from both a training dataset and a trained model [13][14][15]. Figure 1 illustrates an example of machine unlearning for a trained model.
Machine unlearning is not only motivated by regulations and laws. It also stems from the privacy and security concerns of the data provider, as well as the requirement of model owners themselves. In fact, removing the influence of outlier training samples from a model will lead to higher model performance and robustness [16]. There are existing data protection techniques that are similar to machine unlearning, but they differ in either objectives or rationales.
Here, we briefly discuss the main differences between current techniques and machine unlearning.
• Differential Privacy. Differential privacy [17,18] guarantees that by looking at a model output, one cannot tell whether a sample is in the training dataset or not. This technique ensures a subtle bound on the contribution of every sample to the final model [19,20], whereas machine unlearning targets the complete removal of specific users' training samples.
• Data Masking. Data masking [21] is designed to hide sensitive information in the original dataset. It transforms sensitive data to prevent them from being disclosed in unreliable environments [22]. In comparison, the objective of machine unlearning is to prevent a trained model from leaking sensitive information about its training samples.
• Online Learning. Online learning [23] adjusts models quickly according to the data in a feedback process, such that the model can reflect online changes in a timely manner. One major difference between online learning and machine unlearning is that the former requires a merge operation to incorporate updates, while machine unlearning is an inverse operation that eliminates those updates when an unlearning request is received [24].
• Catastrophic Forgetting. Catastrophic forgetting [25,26] refers to a significant drop in performance on previously learned tasks when a model is fine-tuned for a new task. Although catastrophic forgetting causes a deep network to lose accuracy, the information of the forgotten data may still be accessible by analyzing the weights [27]. Therefore, it does not satisfy the conditions required by machine unlearning.
When users revoke permissions over some training data, it is not sufficient to merely remove those data from the original training dataset, since attackers can still reveal user information from the trained models [28]. One straightforward approach to perfectly removing information from the model is to retrain it from scratch (the retraining process in Figure 1). However, many complex models have been built on an enormous set of samples, and retraining is generally a computationally expensive process [29,30]. Moreover, in some specific learning scenarios, such as federated learning [31,32], the training dataset may not be accessible, and thus retraining cannot be conducted at all. Therefore, to reduce the computational cost and make machine unlearning possible in all circumstances, new techniques should be proposed (the unlearning process in Figure 1).
[Table 1: comparison of existing surveys (Thanh et al. [35], Saurabh et al. [36], Anvith et al. [37]) and ours across the reviewed coverage dimensions (✓/×); only our survey covers all dimensions.]
Contributions of this survey
Machine unlearning has played an essential role in many applications [33,34]. However, its implementation and verification strategies are still not fully explored. There are various concepts and multiple verification schemes in this field, and the boundary between machine unlearning and other techniques is vague. These phenomena motivate us to compile a comprehensive survey that summarizes, analyzes, and categorizes machine unlearning techniques. In this survey, we aim to find a clear way to present the ideas and concepts in machine unlearning, showing their characteristics and highlighting their advantages. In addition, we propose a novel taxonomy for classifying the state-of-the-art literature. We hope this survey provides an in-depth overview to readers who wish to enter this field, and that it serves as a stepping-stone for advancing innovations and broadening research horizons. The main contributions of this paper are listed as follows:
• We propose a novel taxonomy of current machine unlearning techniques based on their rationale and unlearning strategy.
• We comprehensively summarize state-of-the-art unlearning methods based on the proposed taxonomy, showing their benefits and shortcomings.
• We summarize the verification methods of machine unlearning within the taxonomy and review their implementations together with the related unlearning techniques.
• We provide critical and in-depth discussions of the open issues in machine unlearning and point out possible directions for further research.
Comparison to existing surveys in machine unlearning
Some works have been conducted to summarize machine unlearning; however, few of them provide deep and comprehensive insight into current research. Here we introduce some relevant works for reference. Table 1 summarizes the comparison of those references.
• Thanh et al. [35] summarized the definitions of machine unlearning, the unlearning request types, and different designing requirements. They also provided a taxonomy of the existing unlearning schemes based on available models and data.
Table 2. Summary of the main notations:
X: the instance space
Y: the label space
D: the training dataset
D_r: the remaining dataset
D_u: the unlearning dataset
x_i: one sample in D
y_i: the label of sample x_i
n: the size of D
x_{i,j}: the j-th feature in x_i
d: the dimension of x_i
A(·): the learning process
U(·): the unlearning process
R(·): the retraining process
w: the parameters of the learned model
w_u: the parameters of the unlearned model
w_r: the parameters of the retrained model
P(·): the distribution function
K(·): the distribution measurement
I(·): the Shannon mutual information
H: the hypothesis space for w
• Saurabh et al. [36] analyzed the problem of privacy leakage in machine learning and briefly described how the "right-to-be-forgotten" can be implemented with the potential approaches.
• Anvith et al. [37] discussed the semantics behind unlearning and reviewed existing unlearning schemes based on logits, weights, and weight distributions. They also briefly described partial validation schemes for machine unlearning.
In addition to the differences shown in Table 1, this survey differs from the above references in several aspects. First, we provide a comprehensive analysis of each unlearning scheme together with its corresponding verification strategies, since verification is an important metric for future studies. This is a significant difference from the above references, which have only reviewed the unlearning schemes used in each work. Second, each unlearning scheme is reviewed and compared along several dimensions, such as whether the original training data are required, whether intermediate data need to be cached, and which classes and models are supported for unlearning requests. In addition, we also analyze the commonalities and problems within each category of our taxonomy, summarizing the trends, shortcomings, and potential solutions, which have not been fully discussed in the above works [35][36][37].
Our work also touches on multiple key areas of privacy preservation and optimization, covering topics such as differential privacy, data masking, and convex optimization. In contrast, existing surveys mainly focus on summarizing the methods employed in machine unlearning, ignoring the relationship between unlearning strategies and verification techniques. The most similar work to ours is [35]; however, it elaborates more on the unlearning framework and its application scenarios, while we particularly emphasize unlearning strategy and verification. Moreover, we explore possible trends in machine unlearning, summarize the latest research progress and the techniques likely to be involved, including universality and security, and suggest several specific research directions. These are also not provided in the above references [35][36][37].
PRELIMINARIES
Definition of Machine Unlearning
Vectors are denoted in bold lowercase, e.g., x, and spaces or sets in italic uppercase, e.g., X. A general definition of machine learning is given based on a supervised learning setting. The instance space is defined as X ⊆ R^d, with the label space defined as Y ⊆ R. D = {(x_i, y_i)}_{i=1}^n ⊆ R^d × R represents a training dataset, in which each sample x_i ∈ X is a d-dimensional vector (x_{i,j})_{j=1}^d, y_i ∈ Y is the corresponding label, and n is the size of D. Let d be the dimension of x_i and let x_{i,j} denote the j-th feature in the sample x_i.
The purpose of machine learning is to build a model with parameters w ∈ H based on a specific training algorithm A(·), where H is the hypothesis space for w. In machine unlearning, let D_u ⊂ D be a subset of the training dataset whose influence we want to remove from the trained model. Let its complement D_r = D_u^∁ = D \ D_u be the dataset that we want to retain, and let R(·) and U(·) represent the retraining process and the unlearning process, respectively. w_r and w_u denote the parameters of the models built from those two processes. P(·) represents the distribution of a variable, and K(·) represents a measurement of the similarity of two distributions. When K(·) is instantiated as the Kullback-Leibler (KL) divergence, it is defined by KL(P ∥ Q) := E_{x∼P}[log(P(x)/Q(x))]. Given two random variables A and B, the amount of Shannon mutual information that A has about B is denoted I(A; B). The main notations are summarized in Table 2. We now give the definition of machine unlearning.
Definition 2.1 (Machine Unlearning [29]). Consider a cluster of samples that we want to remove from the training dataset and the trained model, denoted as D_u. An unlearning process U(A(D), D, D_u) is defined as a function that maps a trained model A(D), a training dataset D, and an unlearning dataset D_u to a model w_u, ensuring that the unlearned model w_u performs as though it had never seen the unlearning dataset D_u. Figure 2 presents the typical concepts, unlearning targets, and desiderata associated with machine unlearning. The infrastructure techniques involved in machine unlearning cover several aspects, such as ensemble learning and convex optimization [38]. These technologies provide robust guarantees for different foundational unlearning requirements that consist of various types of models and unlearning requests, resulting in diverse unlearning scenarios and corresponding verification methods. Additionally, to ensure effectiveness, the unlearning process pursues different targets, such as exact unlearning or strong unlearning. Each unlearning target ensures a different degree of similarity between the parameter distribution of the unlearned model and that of the retrained model. Machine unlearning also involves several desiderata, including consistency, accuracy, and verifiability. Together with the target constraint, these desiderata guarantee the validity and feasibility of each unlearning scheme.
Targets of Machine Unlearning
The ultimate target of machine unlearning is to reproduce a model that (1) behaves as if it had never seen the unlearned data, and (2) consumes as little time as possible. The performance baseline of an unlearned model is that of the model retrained from scratch (a.k.a. native retraining). Definition 2.2 (Native retraining [29]). Suppose the learning process A(·) never sees the unlearning dataset D_u and instead performs a retraining process on the remaining dataset, denoted as D_r = D \ D_u. The retraining process is then defined as:
w_r = A(D \ D_u).    (1)
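To make Eq. (1) concrete, the sketch below shows the native retraining baseline for a generic scikit-learn-style estimator; the helper names (learn, unlearn_by_retraining) and the estimator choice are illustrative assumptions of this sketch, not taken from any specific library.

```python
# A minimal sketch of native retraining (Eq. (1)); the estimator choice
# and helper names are illustrative assumptions, not a fixed standard.
import numpy as np
from sklearn.linear_model import LogisticRegression

def learn(X, y):
    """The learning process A(.): fit a fresh model on the given data."""
    return LogisticRegression(max_iter=1000).fit(X, y)

def unlearn_by_retraining(X, y, forget_idx):
    """Compute w_r = A(D \\ D_u) by retraining on the remaining data."""
    keep = np.setdiff1d(np.arange(len(X)), forget_idx)
    return learn(X[keep], y[keep])

# Example: unlearn samples 0 and 7 from a toy dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] > 0).astype(int)
retrained = unlearn_by_retraining(X, y, forget_idx=[0, 7])
```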
The naive retraining naturally ensures that any information about the samples is unlearned from both the training dataset and the already-trained model. However, the computational and time overhead associated with the retraining process can be prohibitively expensive. Further, retraining is not always possible if the training dataset is inaccessible, as in federated learning [39]. Therefore, two alternative unlearning targets have been proposed: exact unlearning and approximate unlearning.
Exact unlearning guarantees that the distributions of an unlearned model and a retrained model are indistinguishable. In comparison, approximate unlearning relaxes this requirement to the model's internal parameters or its final activations, respectively. In practice, approximate unlearning further evolves into strong and weak unlearning strategies. Figure 3 illustrates the targets of machine unlearning and their relationship with a trained model. The different targets correspond, in essence, to different requirements on the unlearning result. Definition 2.3 (Exact unlearning [40]). Given a distribution measurement K(·), such as the KL-divergence, the unlearning process U(·) provides an exact unlearning target if K(P(U(A(D), D, D_u)), P(A(D \ D_u))) = 0,
where P(·) denotes the distribution of the weights.
Exact unlearning guarantees that the two output distributions are indistinguishable, thus preventing an observer (e.g., an attacker) from extracting any information about D_u. However, a less strict unlearning target is necessary because exact unlearning can only be achieved for simple and well-structured models [24]. As a result, approximate unlearning, which is suitable for complex machine learning models, has been proposed. Definition 2.4 (Approximate unlearning [37]). If K(P(U(A(D), D, D_u)), P(A(D \ D_u))) is limited within a tolerable threshold, the unlearning process U(·) is defined as approximate unlearning.
Approximate unlearning ensures that the distribution of the unlearned model and that of a retrained model are approximately indistinguishable. This approximation is usually guaranteed by differential privacy techniques, such as (ε, δ)-certified unlearning [41,42].
Depending on how the distribution is estimated, approximate unlearning can be further classified into strong unlearning and weak unlearning. Strong unlearning is established based on the similarity between the internal parameter distributions of the models, while weak unlearning is based on the distribution of the models' final activations [42,43]. Table 3 summarizes the main differences between the unlearning targets.
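As a rough illustration of how such distributional targets can be probed empirically, the snippet below estimates an average KL divergence between the predicted-probability distributions of an unlearned and a retrained model, in the spirit of weak unlearning; it is a heuristic check under assumed interfaces, not a formal certificate.

```python
# Heuristic check of a weak unlearning target: compare the final
# activations (predicted class probabilities) of an unlearned model
# and a retrained model with an averaged KL divergence.
import numpy as np
from scipy.stats import entropy  # entropy(p, q) computes KL(p || q)

def mean_kl(probs_unlearned, probs_retrained, eps=1e-12):
    """Average per-sample KL(P_unlearned || P_retrained)."""
    p = np.clip(probs_unlearned, eps, 1.0)
    q = np.clip(probs_retrained, eps, 1.0)
    return float(np.mean([entropy(pi, qi) for pi, qi in zip(p, q)]))

# probs_* would come from model.predict_proba(X_test); values close
# to 0 suggest approximately indistinguishable output distributions.
```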
Desiderata of Machine Unlearning
To fairly and accurately assess the efficiency and effectiveness of unlearning approaches, there are some mathematical properties that can be used for evaluation. Consistency denotes how similar the behavior of a retrained model and an unlearned model is. It represents whether the unlearning strategy can effectively remove all the information of the unlearning dataset D_u. If, for every sample, the unlearned model gives the same prediction result as the retrained model, then an attacker has no way to infer information about the unlearned data. Accuracy refers to the ability of the unlearned model to predict samples correctly. It reveals the usability of a model after the unlearning process, given that a model with low accuracy is useless in practice. Accuracy is a key component of any unlearning mechanism, as we claim the unlearning mechanism is ineffective if the process significantly undermines the original model's accuracy. Verifiability can be used to measure whether a model provider has successfully unlearned the requested unlearning dataset D_u. Taking the backdoor verification method of [44] as an example, if the backdoor pre-injected into an unlearned sample x_u can still be detected in the unlearned model, the unlearning process is judged to have failed; otherwise, verifiability is supported.
Figure 4 summarizes the general taxonomy of machine unlearning and its verification used in this paper. The taxonomy is inspired by the design details of the unlearning strategies. Unlearning approaches that concentrate on modifying the training data are classified as data reorganization, while methods that directly manipulate the weights of a trained model are denoted as model manipulation. As for verification methods, we initially categorize those schemes as either experimental or theoretical, and subsequently summarize them based on the metrics they use.
TAXONOMY OF UNLEARNING AND VERIFICATION MECHANISMS
Unlearning Taxonomy
3.1.1 Data Reorganization. Data reorganization refers to techniques in which a model provider unlearns data by reorganizing the training dataset. It mainly includes three different processing methods according to the data reorganization mode: obfuscation, pruning, and replacement [30,45]. Table 4 compares and summarizes the differences between these schemes.
• Data obfuscation: In data obfuscation, the model provider intentionally adds some choreographed data to the remaining dataset, that is, D' ← D_r ∪ D_c, where D' and D_c are the new training dataset and the choreographed data, respectively. The trained model is then fine-tuned on D' to unlearn some specific samples. Such methods are usually based on the idea of erasing the information about D_u by recombining the dataset with choreographed data. For example, Graves et al. [45] relabeled D_u with randomly selected incorrect labels and then fine-tuned the trained model for several iterations to unlearn those data.
• Data pruning: In data pruning, the model provider first segments the training dataset into several sub-datasets and trains one sub-model per sub-dataset. Those sub-models are then used to collaboratively aggregate a consensus prediction, that is, D → D_1 ∪ D_2 ∪ ... ∪ D_k, w_i = A(D_i), and F(x) = agg(f_{w_1}(x), ..., f_{w_k}(x)), where the D_i (0 < i ≤ k) are the sub-datasets with D_i ∩ D_j = ∅ for i ≠ j and ∪_i D_i = D, k is the number of sub-datasets, w_i is the i-th sub-model, and agg(·) is the aggregation function. After an unlearning request arrives, the model provider deletes the unlearned samples from the sub-datasets that contain them and then retrains only the affected sub-models. The flexibility of this methodology is that the influence of the unlearning dataset D_u is limited to the affected sub-datasets after segmentation rather than to the whole dataset. Taking the SISA scheme in [30] as an example, the SISA framework first randomly divides the training dataset into k shards. A series of models is then trained separately, one per shard. When a sample needs to be unlearned, it is first removed from the shards that contain it, and only the sub-models corresponding to those shards are retrained (a minimal code sketch of this workflow is given after this list).
• Data replacement: In data replacement, the model provider deliberately replaces the training dataset D with a transformed dataset, that is, D_t ← T(D), where T(·) is a transformation. The transformed dataset D_t is then used to train a model in a way that makes unlearning easy to implement once an unlearning request is received. For example, Cao et al. [29] replaced the training dataset with several efficiently computable transformations and used those transformations to complete the training of the model. These transformations can be updated much more quickly after removing any samples from the transformed dataset. Consequently, computational overheads are reduced, and unlearning operations are more efficient.
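To make the data pruning workflow concrete, the following is a compact SISA-style sketch under simplifying assumptions (uniform random shards, binary labels, majority voting, and no slicing); it illustrates the general idea and is not the reference implementation of [30].

```python
# Compact SISA-style sketch: one sub-model per shard, majority-vote
# aggregation (binary labels assumed), and shard-level retraining on
# unlearning. Simplified relative to [30]; no slicing or checkpoints.
import numpy as np
from sklearn.linear_model import LogisticRegression

class SisaSketch:
    def __init__(self, n_shards=5, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        idx = self.rng.permutation(len(X))
        self.shards = np.array_split(idx, self.n_shards)
        self.models = [LogisticRegression(max_iter=1000).fit(X[s], y[s])
                       for s in self.shards]
        return self

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        return (votes.mean(axis=0) >= 0.5).astype(int)  # majority vote

    def unlearn(self, X, y, sample_id):
        """Drop sample_id and retrain only the shard(s) containing it."""
        for k, s in enumerate(self.shards):
            if sample_id in s:
                self.shards[k] = s[s != sample_id]
                self.models[k] = LogisticRegression(max_iter=1000).fit(
                    X[self.shards[k]], y[self.shards[k]])
        return self
```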
3.1.2 Model Manipulation.
In model manipulation, the model provider aims to realize unlearning operations by adjusting the model's parameters. Model manipulation mainly includes the following three categories. Table 4 compares and summarizes the differences between these schemes.
• Model shifting: In model shifting, the model provider directly updates the model parameters to offset the impact of the unlearned samples on the model, that is, w_u = w + δ, where w are the parameters of the originally trained model and δ is the update value (a crude first-order sketch is given after this list). These methods are usually based on the idea of calculating the influence of samples on the model parameters and then updating the model parameters to remove that influence. It is usually extremely difficult to accurately calculate a sample's influence on a model's parameters, especially for complex deep neural models. Therefore, many model shifting-based unlearning schemes rest on specific assumptions. For example, the unlearning algorithms of Guo et al. [41] are designed for linear models with strongly convex regularization.
• Model replacement: In model replacement, the model provider directly replaces some parameters with pre-calculated ones, that is, w_u ← w_s ∪ w_p, where w_u are the parameters of the unlearned model, w_s are the unaffected static parameters, and w_p are the pre-calculated parameters. These methods usually depend on a specific model structure to predict and calculate the affected parameters in advance. They are only suitable for some special machine learning models, such as decision trees or random forests. Taking the method in [57] as an example, the affected intermediate decision nodes are replaced with pre-calculated decision nodes so as to generate an unlearned model.
• Model pruning: In model pruning, the model provider prunes some parameters from the trained model to unlearn the given samples, that is, w_u ← w \ w_d, where w_u are the parameters of the unlearned model, w are the parameters of the trained model, and w_d are the parameters that need to be removed. Such unlearning schemes are also usually based on specific model structures and are generally accompanied by a fine-tuning process to recover performance after the model is pruned. For example, Wang et al. [55] introduced term frequency-inverse document frequency (TF-IDF) to quantify the class discrimination of channels in a convolutional neural network, where channels with high TF-IDF scores are pruned.
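As a crude illustration of model shifting, the sketch below applies a single first-order shift w_u = w + τ·∇L(D_u; w) that increases the loss on the unlearned samples of a logistic model; real schemes such as [41,53] derive the shift far more carefully and attach certified guarantees.

```python
# A crude first-order model-shifting sketch: shift the weights in the
# direction that increases the loss on the unlearned samples. Purely
# illustrative; tau and the update rule are assumptions of this sketch.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_grad(w, X, y):
    """Gradient of the mean logistic loss (labels in {0, 1}) at w."""
    return X.T @ (sigmoid(X @ w) - y) / len(y)

def shift_unlearn(w, X_u, y_u, tau=0.1):
    """One ascent step on the unlearned samples' loss: w_u = w + tau*g."""
    return w + tau * logistic_grad(w, X_u, y_u)
```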
Verification Mechanisms
Verifying whether an unlearning method has the verifiability property is not an easy task. Model providers may claim externally that they remove those influences from their models while, in reality, this is not the case [48]. For data providers, proving that the model provider has completed the unlearning process may also be tricky, especially for complex deep models with huge training datasets. Removing a small portion of samples only causes a negligible effect on the model. Moreover, even if the unlearned samples have indeed been removed, the model still has a great chance of making a correct prediction, since other users may have provided similar samples. Therefore, providing a reasonable unlearning verification mechanism is a topic worthy of further research.
3.2.1 Empirical evaluation.
• Retraining-based verification: Retraining naturally provides the verifiability property, since the retraining dataset no longer contains the samples that need to be unlearned. This is the most intuitive and easy-to-understand solution.
• Attack-based verification: The essential purpose of an unlearning operation is to reduce leaks of sensitive information caused by model over-fitting. Hence, some attack methods can directly and effectively verify unlearning operations, for example, membership inference attacks [5] and model inversion attacks [4] (a toy membership inference sketch is given after this list). In addition, Sommer et al. [44] provided a novel backdoor verification mechanism from an individual user's perspective in the context of machine learning as a service (MLaaS) [61]. This approach can verify, with high confidence, whether the service provider complies with the user's right to have information unlearned.
• Relearning time-based verification: Relearning time can be used to measure the amount of information about the unlearned samples remaining in the model. If the model quickly recovers the performance of the original trained model with little retraining time, it is likely to still remember some information about the unlearned samples [27].
• Accuracy-based verification: A trained model usually has high prediction accuracy on the samples in the training dataset. This means the unlearning process can be verified by the accuracy of a model's output. For the data that need to be unlearned, the accuracy should ideally be the same as that of a model trained without seeing D_u [40]. In addition, if a model's accuracy after being attacked can be restored by unlearning the adversarial data, we can also claim that the unlearning is verified.
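For illustration, the snippet below sketches a loss-threshold membership inference test in the spirit of Yeom et al. [64]; the interfaces and the threshold rule are assumptions of this sketch, not a standard implementation.

```python
# Illustrative attack-based verification: a loss-threshold membership
# inference test. If unlearned samples are flagged as "members" no more
# often than held-out samples, the unlearning step is (empirically)
# supported. Interfaces and the threshold rule are sketch assumptions.
import numpy as np

def nll(probs, labels, eps=1e-12):
    """Per-sample negative log-likelihood of the true label."""
    p = np.clip(probs[np.arange(len(labels)), labels], eps, 1.0)
    return -np.log(p)

def membership_guess(model, X, y, threshold):
    """Guess 'member' whenever the sample's loss is below the threshold."""
    losses = nll(model.predict_proba(X), y)
    return losses < threshold

# A common threshold choice is the audited model's mean training loss;
# compare guesses on D_u before and after unlearning.
```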
3.2.2 Theoretical calculation.
• Theory-based verification: Some methods provide a certified unlearning definition [41,53], which ensures that the unlearned model cannot be distinguished from a model trained on the remaining dataset from scratch. This also provides a verification method that directly guarantees the proposed schemes can unlearn samples.
• Information bound-based verification: Golatkar et al. [40,43] devised a new metric for verifying the effectiveness of unlearning schemes, in which they measure the upper bound of the residual information about the samples that need to be unlearned. Less residual information represents a more effective unlearning operation.
Table 5 summarizes and compares each verification method's advantages and limitations.
DATA REORGANIZATION
In this section, we review how data reorganization methods support the unlearning process. Since proving the verifiability property of unlearning algorithms is also important and should be considered in machine unlearning research, we separately discuss it for each unlearning method.
Reorganization Based on Data Obfuscation
4.1.1 Unlearning Schemes Based on Data Obfuscation.
In general, the majority of model attack scenarios, such as membership inference attacks, arise from model overfitting and rely on observing shifts in the output caused by known shifts in the input [62]. That is, for the vast majority of attackers, it is easy to attack some trained models by observing the shifts of the output confidence vectors. One possible machine unlearning approach can thus be interpreted as confusing the model's understanding of the unlearned samples so that no correct information about them is retained within the model, which in turn confuses the confidence vector of the model's output [46]. As shown in Figure 5, when receiving an unlearning request, the model continues to be trained from w on the constructed obfuscation data D_c, giving rise to an updated model w_u. In this vein, Graves et al. [45] proposed a random relabel-and-retrain machine unlearning framework. Sensitive samples are relabeled with randomly selected incorrect labels, and the machine learning model is then fine-tuned on the modified dataset for several iterations to unlearn those specific samples. Similarly, Felps et al. [46] intentionally poisoned the labels of the unlearning dataset and then fine-tuned the model on the newly poisoned dataset. However, such unlearning schemes only confuse the relationship between the model outputs and the samples; the model parameters may still contain information about each sample.
A model is always trained by minimizing the loss for all classes. If one can learn a kind of noise that maximizes the loss only for some classes, those classes can be unlearned. Based on this idea, Tarun et al. [27] divided the unlearning process into two steps, impair and repair. In the first step, an error-maximizing noise matrix is learned that consists of highly influential samples corresponding to the unlearning class. The effect of the noise matrix is, in a sense, the opposite of the unlearning data and can destroy the information of the unlearned data so as to unlearn single or multiple classes. To repair the performance degradation caused by the impair step, the repair step further adjusts the model based on the remaining data.
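The following PyTorch-style sketch illustrates the impair idea under heavy simplification: it learns an error-maximizing noise batch for the class to forget and then trains the model on that noise; all function names, hyperparameters, and the exact update rule are assumptions of this sketch rather than the procedure of [27].

```python
# Simplified impair sketch: learn error-maximizing noise for the class
# to forget, then train the model on that noise (labels, shapes, and
# step counts are illustrative assumptions). Afterwards, a repair step
# would fine-tune the model on the remaining data D_r as usual.
import torch
import torch.nn.functional as F

def error_maximizing_noise(model, forget_class, shape, steps=100, lr=0.1):
    noise = torch.randn(shape, requires_grad=True)
    target = torch.full((shape[0],), forget_class, dtype=torch.long)
    opt = torch.optim.Adam([noise], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Minimizing the negative loss maximizes the forget-class loss.
        (-F.cross_entropy(model(noise), target)).backward()
        opt.step()
    return noise.detach()

def impair(model, noise, forget_class, steps=10, lr=1e-3):
    target = torch.full((noise.shape[0],), forget_class, dtype=torch.long)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(model(noise), target).backward()
        opt.step()
```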
Similarly, Zhang et al. [63] considered unlearning requests in the image retrieval field. Their approach creates noisy data using a generative method to adjust the weights of the retrieval model and achieve the unlearning purpose. They also proposed a new learning framework, which includes both static and dynamic learning branches, ensuring that the generated noisy data only affect the data to be forgotten without affecting the contribution of the other, remaining data. However, both of the above schemes consume additional time to generate noise, which reduces the efficiency of the unlearning process [27,63].
4.1.2 Verifiability of Schemes Based on Data Obfuscation.
To verify their unlearning process, Graves et al. [45] used two state-of-the-art attack methods, a model inversion attack and a membership inference attack, to evaluate how much information about specific samples was retained in the model parameters after the unlearning process; in other words, how much information might still be leaked. Their model inversion attack is a modified version of the standard model inversion attack proposed by Fredrikson et al. [6]. The three modifications include: applying the processing function only every n gradient descent steps; adding a small amount of noise to each feature before each inversion; and modifying the number of attack iterations performed. These adjustments allowed them to analyze complex models. For the membership inference attack, they used the method outlined by Yeom et al. in [64]. Felps et al.'s verifiability analysis is also based on the membership inference attack [46].
In comparison, Tarun et al. [27] evaluated verifiability through several measurements. They first assessed relearning time by measuring the number of epochs needed for the unlearned model to reach the same accuracy as the originally trained model. The distances between the original model, the model after the unlearning process, and the retrained model are then further evaluated.
Reorganization Based on Data Pruning
4.2.1 Unlearning Schemes Based on Data Pruning.
As shown in Figure 6, unlearning schemes based on data pruning are usually built on ensemble learning techniques. Bourtoule et al. [30] proposed a "sharded, isolated, sliced, and aggregated" (SISA) framework, similar to current distributed training strategies [65,66], as a method of machine unlearning. With this approach, the training dataset D is first partitioned into k disjoint shards D_1, D_2, ..., D_k. Then, sub-models M_1, M_2, ..., M_k are trained in isolation on each of these shards, which limits the influence of a sample to the sub-models trained on the shards containing it. At inference time, individual predictions from each sub-model are simply aggregated to provide a global prediction (e.g., with majority voting), similar to the case of machine learning ensembles [67]. When the model owner receives a request to unlearn a data sample, they just need to retrain the sub-models whose shards contain that sample.
As the amount of unlearning data increases, SISA causes a degradation in model performance, making it only suitable for small-scale scenarios. The cost of these unlearning schemes is the time required to retrain the affected sub-models, which directly relates to the size of the shards. The smaller the shard, the lower the cost of the unlearning operation. At the same time, each sub-model then sees less training data, which indirectly degrades the ensemble's accuracy. Bourtoule et al. [30] provided three key technologies to alleviate this problem: unlearning in the absence of isolation, data replication, and core-set selection.
In addition to this scheme, Chen et al. [33] introduced the method developed in [30] to recommendation systems and designed three novel data partition algorithms to divide the recommendation training data into balanced groups in order to ensure that collaborative information was retained. Wei et al. [68] focused on the unlearning problems in patient similarity learning and proposed PatEraser. To maintain the comparison information between patients, they developed a new data partition strategy that groups patients with similar characteristics into multiple shards. Additionally, they also proposed a novel aggregation strategy to improve the global model utility.
Yan et al. [69] designed an efficient architecture for exact machine unlearning called ARCANE, similar to the scheme of Bourtoule et al. [30]. Instead of dividing the dataset uniformly, they split it by class and utilized one-class classifiers to reduce the accuracy loss. Additionally, they preprocessed each sub-dataset to speed up model retraining, which involved representative data selection, model training state saving, and data sorting by erasure probability. Nevertheless, the above unlearning schemes [30,33,69] usually need to cache a large number of intermediate results to complete the unlearning process, which consumes considerable storage space.
SISA is designed to analyze data in Euclidean space, such as images and text, rather than non-Euclidean data such as graphs. Many important real-world datasets are represented in the form of graphs, such as social networks [70], financial networks [71], biological networks [72], and transportation networks [73]. To analyze the rich information in these graphs, graph neural networks (GNNs) have shown unprecedented advantages [74,75]. GNNs rely on the graph's structural information and neighboring node features. Naively applying the SISA scheme to GNNs for unlearning, i.e., randomly partitioning the training dataset into multiple sub-graphs, would destroy the training graph's structure and may severely damage the model's utility.
To allow efficient retraining while keeping the structural information of the graph dataset, Chen et al. [47] proposed GraphEraser, a novel machine unlearning scheme tailored to graph data. They first defined the two common machine unlearning requests in the graph scenario, node unlearning and edge unlearning, and proposed a general pipeline for graph unlearning composed of three main steps: graph partitioning, shard model training, and shard model aggregation. In the graph partitioning step, they introduced an improved balanced label propagation algorithm (LPA) [76] and a balanced embedding k-means [77] partitioning strategy to avoid highly unbalanced shard sizes. Given that different sub-models might contribute differently to the final prediction, they also proposed a learning-based aggregation method, OptAggr, which optimizes the importance score of each sub-model to ultimately improve global model utility.
Deterministic unlearning schemes, such as SISA [30] or GraphEraser [47], promise nothing about what can be learned about specific samples from the difference between a trained model and an unlearned model. This could exacerbate user privacy issues if an attacker has access to the model before and after the unlearning operation [78]. To avoid this situation, an effective approach is to hide the information about the unlearned model when performing the unlearning operation.
In practical applications, Neel et al. [50] proposed an update-based unlearning method that performs several gradient descent updates to build an unlearned model. The method is designed to handle arbitrarily long sequences of unlearning requests with stable run-time and steady-state error. In addition, to alleviate the above unlearning problem, they introduced the concept of a secret state: an unlearning operation is first performed on the trained model, and the unlearned model is then perturbed with Gaussian noise before publication. This ensures that an attacker cannot access the actual unlearned model after the unlearning operation, which effectively hides any sensitive information remaining in it. They also provided an (ε, δ)-certified unlearning guarantee and leveraged a distributed optimization algorithm and reservoir sampling to obtain improved accuracy/run-time tradeoffs for sufficiently high-dimensional data.
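A minimal sketch of the publication step is given below, assuming a PyTorch model: the secret unlearned model is kept internal, and only a Gaussian-perturbed copy is released; the noise scale sigma is an assumption of this sketch, whereas [50] calibrates it to obtain (ε, δ) guarantees.

```python
# Minimal "secret state" publication sketch: keep the unlearned model
# private and release only a Gaussian-perturbed copy. sigma is an
# illustrative constant; a certified scheme calibrates it explicitly.
import copy
import torch

def publish_with_noise(model, sigma=0.01):
    """Return a noisy copy of the (secret) unlearned model for release."""
    public = copy.deepcopy(model)
    with torch.no_grad():
        for p in public.parameters():
            p.add_(sigma * torch.randn_like(p))
    return public
```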
After the initial model deployment, data providers may make adaptive unlearning decisions. For example, when a security researcher releases a new model attack method that identifies a specific subset of the training dataset, the owners of that subset may rapidly increase the number of deletion requests. Gupta et al. [49] defined such unlearning requests as adaptive requests and proposed an adaptive sequential machine unlearning method using a variant of the SISA framework [30] together with a differentially private aggregation method [79]. They gave a general reduction from the unlearning guarantees for adaptive sequences to those for non-adaptive sequences using differential privacy and max-information theory [80]. A strong provable unlearning guarantee for adaptive unlearning sequences is also provided, combined with previous work on non-adaptive guarantees for sequential unlearning requests.
He et al. [48] developed an unlearning approach for deep learning models. They first introduced a process called detrended fluctuation analysis [81], which quantifies the influence of the unlearned data on the model parameters, termed temporal residual memory. They observed that this influence is subject to exponential decay and fades at an increasing rate over time. Based on these results, intermediate models are retained during the training process and divided into four areas, named unseen, deleted, affected, and unaffected. Unseen indicates that the unlearned sample has not yet arrived; deleted covers the unlearning dataset; and unaffected and affected indicate whether the temporal residual memory has lapsed or not. An unlearned model can be stitched together by reusing the unseen and unaffected models and retraining the affected areas. However, this scheme does not provide any theoretical verification that the information about the data to be unlearned is indeed removed from the model.
4.2.2 Verifiability of Schemes Based on Data Pruning.
The unlearning schemes proposed in [29,30,33,47,68,69] are essentially based on a retraining mechanism that naturally has the verifiability property. As discussed in Section 2.3, a straightforward way to give an unlearning scheme the verifiability property is to retrain the model from scratch after removing the samples that need to be unlearned from the training dataset. The above schemes introduce distributed and ensemble learning techniques, which train sub-models separately and independently by optimizing the loss function on each sub-dataset; the sub-models are then aggregated to make predictions. In the unlearning process, only the affected sub-models are retrained, which avoids a large computational and time overhead while still providing a verifiability guarantee. He et al. [48] used the backdoor verification method of [44] to verify their unlearning process. They designed a specially crafted trigger and implanted this "backdoor data" in the samples that need to be unlearned, with little effect on the model's accuracy. The validity of the unlearning process is then verified indirectly, based on whether the backdoor data can still attack the unlearned model with a high success rate. A low attack success rate indicates that the proposed unlearning method has indeed removed the unlearned data. The other studies [49,50] did not provide a method for verifying the unlearning process.
Reorganization Based on Data Replacement
4.3.1 Unlearning Schemes Based on Data Replacement.
As shown in Figure 7, when training a model in a data replacement scheme, the first step is usually to transform the training dataset into an easily unlearned form, producing a set of transformations T. These transformations are then used to train models separately. When an unlearning request arrives, only the transformations that contain the unlearned samples need to be updated and used to retrain the corresponding sub-models to complete the machine unlearning.
Inspired by previous work using MapReduce to accelerate machine learning algorithms [82], Cao et al. [29] proposed a machine unlearning method that transforms the training dataset into summation form. Each summation is the sum of some efficiently computable transformation. The learning algorithms depend only on the summations, not on the individual data, which breaks down the dependencies within the training dataset. To unlearn a data sample, the model provider only needs to update the summations affected by this sample and recompute the model. However, since the summation form comes from statistical query (SQ) learning, and only a few machine learning algorithms can be implemented as SQ learning, such as naive Bayes classifiers [83], support vector machines [84], and k-means clustering [85], this scheme has low applicability.
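The toy class below illustrates the summation-form idea on a categorical naive Bayes classifier: the model depends on the data only through counts, so unlearning a sample reduces to decrementing the affected counts; the class and its simplified Laplace smoothing are illustrative assumptions, not the system of [29].

```python
# Toy summation-form model: a categorical naive Bayes classifier that
# depends on the data only through count summations. Unlearning a
# sample decrements the affected counts; no full retraining is needed.
# The smoothing below is deliberately simplified for illustration.
import math
from collections import defaultdict

class CountingNB:
    def __init__(self, alpha=1.0):
        self.alpha = alpha
        self.class_counts = defaultdict(int)
        self.feat_counts = defaultdict(int)  # key: (class, feature, value)

    def _add(self, x, y, sign):
        self.class_counts[y] += sign
        for j, v in enumerate(x):
            self.feat_counts[(y, j, v)] += sign

    def fit(self, X, Y):
        for x, y in zip(X, Y):
            self._add(x, y, +1)
        return self

    def unlearn(self, x, y):
        self._add(x, y, -1)  # update only the affected summations

    def predict(self, x):
        n = sum(self.class_counts.values())
        a, best, best_lp = self.alpha, None, float("-inf")
        for c, nc in self.class_counts.items():
            lp = math.log((nc + a) / (n + a * len(self.class_counts)))
            for j, v in enumerate(x):
                lp += math.log((self.feat_counts[(c, j, v)] + a) / (nc + a))
            if lp > best_lp:
                best, best_lp = c, lp
        return best
```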
Takashi et al. [86] proposed a novel approach to lifelong learning named "Learning with Selective Forgetting", which updates a model for a new task by forgetting only specific classes from previous tasks while keeping the rest. To achieve this, the authors designed class-specific mnemonic codes, synthetic signals that are added to all the training samples of the corresponding classes. Exploiting the mechanism of catastrophic forgetting, these codes were then used to forget particular classes without requiring the original data. It is worth noting, however, that this scheme lacks any theoretical verification that the information of the unlearned data has been successfully removed from the model.
4.3.2 Verifiability of Schemes Based on Data Replacement.
Cao et al. [29] provided an accuracy-based verification method. Specifically, they attacked the LensKit model with the system inference attack method proposed by Calandrino et al. [87] and verified that the unlearning operations successfully prevented the attack from yielding any information. For the other three models, they first performed data pollution attacks to degrade those models' accuracy. They then analyzed whether the model's performance after the unlearning process was restored to the state before the pollution attacks. If the unlearned model was indeed restored to its pre-pollution accuracy, the unlearning operation was considered successful. Takashi et al. [86] provided a new metric, named the Learning with Selective Forgetting Measure (LSFM), which is based on the idea of accuracy.
Summary of Data Reorganization
In these last few subsections, we reviewed the studies that use data obfuscation, data pruning, and data replacement techniques as unlearning methods. A summary of the surveyed studies is shown in Table 6, where we present the key differences between each paper.
From those summaries, we can see that most unlearning algorithms retain intermediate parameters and make use of the original training dataset [30,47]. This is because those schemes usually segment the original training dataset and retrain the sub-models that were trained on the segments containing the unlearned samples. Consequently, the influence of specific samples is limited to only some of the sub-models, and, in turn, the time taken to actually unlearn the samples is reduced. However, segmentation reduces unlearning time at the cost of additional storage. It would therefore be well worth researching more efficient unlearning mechanisms that ensure the validity of the unlearning process without adding too much storage cost.
Moreover, these unlearning schemes usually support various unlearning requests and models, ranging from samples to classes or sequences and from support vector machines to complex deep neural models [29,47,50]. Unlearning schemes based on data reorganization rarely operate on the model directly. Instead, they achieve the unlearning purpose by modifying the distribution of the original training dataset and thereby indirectly changing the obtained model. The benefit is that such techniques can be applied to more complex machine learning models. In addition to their high applicability, most of them can provide a strong unlearning guarantee; that is, the distribution of the unlearned model is approximately indistinguishable from that obtained by retraining.
It is worth pointing out that unlearning methods based on data reorganization affect the consistency and accuracy of the model as the unlearning process continues [30,47,48]. This reduction in accuracy stems from the fact that each sub-model is trained on part of the dataset rather than on the entire training dataset, so the accuracy of the unlearned model is not guaranteed to match the result before segmentation. Potential solutions include unlearning in the absence of isolation and data replication [30].
Some of the studies mentioned verify the unlearning process indirectly using a retraining method [30,47], while others provide verifiability through attack-based or accuracy-based methods [27,45,46]. However, most unlearning schemes do not present further investigations at the theoretical level: the vast majority verify validity through experiments, with no theoretical support for the validity of the schemes. Theoretical validity would show, for example, how much sensitive information attackers can glean from an unlearned model after the unlearning process, or how similar the parameters of the unlearned model are to those of the retrained model. Further theoretical research into the validity of unlearning schemes is therefore required.
In summary, when faced with unlearning requests for complex models, unlearning schemes based on data obfuscation can seldom unlearn information completely, because it is difficult to fully offset the influence of the unlearning data. Data pruning schemes always affect the model's accuracy, since they train sub-models on partial training datasets. For data replacement schemes, it is rarely possible to find a new dataset that can replace all the information of the original dataset for training a model. Researchers should therefore aim to design unlearning schemes that strike a better balance between the effectiveness of the unlearning process and model usability.
MODEL MANIPULATION
The model training stage involves creating an effective model replicating the expected relationship between the inputs in the training dataset and the model's outputs. Thus, manipulating the model directly to remove specific relationships may be a good way to unlearn samples. In this section, we comprehensively review the state-of-the-art studies on unlearning through model manipulation. Again, the verification techniques are discussed separately for each category.
Manipulation Based on Model Shifting
5.1.1 Unlearning Schemes Based on Model Shifting.
As shown in Figure 8, model shifting methods usually eliminate the influence of the unlearning data by directly updating the model parameters. These methods mainly fall into one of two types, influence unlearning and Fisher unlearning, but there are a few other methods as well.
(1). Influence unlearning methods. Influence unlearning methods are usually based on influence theory [38]. Guo et al. [41] proposed a novel unlearning scheme called certified removal. Inspired by differential privacy [88], certified removal first limits the maximum difference between the unlearned and retrained models. Then, by applying a single step of Newton's method to the model parameters, a certified removal mechanism is provided for practical applications to L2-regularized linear models that are trained using a differentiable convex loss function. Additionally, the training loss is perturbed with a loss perturbation technique that hides the gradient residual, further preventing adversaries from extracting information from the unlearned model. It is worth noting, however, that this solution is only applicable to simple machine learning models, such as linear models, or only adjusts the linear decision-making layer of deep neural networks, which does not eliminate the information of the removed data samples, since the representations are still learned within the model.
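The sketch below illustrates the flavor of such a one-step Newton removal for an L2-regularized logistic model: at the trained optimum, removing the gradient contribution of D_u via the Hessian on D_r approximates the retrained weights; it omits the noise calibration and certification that [41] requires.

```python
# One-step Newton removal sketch for an L2-regularized logistic model:
# since the trained w satisfies grad(D) = 0, the gradient on D_r equals
# minus the contribution of D_u, and a Newton step on D_r gives
# w_u ~= w + H_r^{-1} g_u. Noise calibration from [41] is omitted.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def newton_removal(w, X_r, y_r, X_u, y_u, lam):
    # Gradient contribution of the unlearned samples at w.
    g_u = X_u.T @ (sigmoid(X_u @ w) - y_u)
    # Regularized Hessian of the loss on the remaining data D_r.
    s = sigmoid(X_r @ w)
    H = X_r.T @ (X_r * (s * (1 - s))[:, None]) + lam * np.eye(len(w))
    return w + np.linalg.solve(H, g_u)
```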
Izzo et al. [51] proposed an unlearning method based on a gradient update, called the projective residual update (PRU). The method focuses on linear regression and shows how to reduce the run-time of the algorithm given in [41] from quadratic to linear complexity. The unlearning intuition is as follows: if one can calculate the values ŷ_u predicted by the unlearned model on each of the unlearned samples in D_u without knowing w_u, and then minimize the loss of the already-trained model on the synthetic samples (D_u, ŷ_u), the parameters will move closer to w_u, since w_u achieves the minimum loss on (D_u, ŷ_u). To calculate the values ŷ_u without knowing w_u, they introduced a statistical technique that computes leave-one-out residuals. Similar to the above, this method only considers the unlearning process for simple models.
Information leaks may not only manifest in a single data sample but also in groups of features and labels [53]. For example, a user's private data, such as their telephone number and place of residence, are collected by data providers multiple times and generated as different samples of the training dataset. Therefore, unlearning operations should also focus on unlearning a group of features and corresponding labels.
To solve such problems, Warnecke et al. [53] proposed a certified unlearning scheme for unlearning features and labels. By reformulating the influence estimation of samples on the already-trained models as a form of unlearning, they derived a versatile approach that maps changes of the training dataset in retrospection to closed-form updates of the model parameters. They then proposed different unlearning methods based on first-order and second-order gradient updates for two different types of machine learning models. For the first-order update, the parameters were updated based on the difference between the gradient of the original and the perturbed samples. For the second-order update, they approximated an inverse Hessian matrix based on the scheme proposed in [89] and updated the model parameters based on this approximate matrix. Theoretical guarantees were also provided for feature and label unlearning by extending the concept of differential privacy [88] and certified unlearning [41]. However, this solution is only suitable for feature unlearning from tabular data and does not provide any effective solution for image features.
(2). Fisher unlearning methods
The second type of model shifting technique uses the Fisher information [90] of the remaining dataset to unlearn specific samples, with noise injected to optimize the shifting effect. Golatkar et al. [40] proposed a weight scrubbing method to unlearn information about a particular class as a whole or a subset of samples within a class. They first derive a computable upper bound on the amount of information retained about the unlearning dataset after applying the unlearning procedure, based on the Kullback-Leibler (KL) divergence and Shannon mutual information. Then, an optimal quadratic unlearning algorithm based on a Newton update and a more robust unlearning procedure based on a noisy Newton update were proposed. Both schemes can ensure that a cohort is unlearned while maintaining good accuracy on the remaining samples. However, this unlearning scheme relies on various assumptions, which limits its applicability.
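A loose sketch of the scrubbing idea (not Golatkar et al.'s exact procedure): estimate a diagonal Fisher information matrix on the remaining data and inject noise preferentially into directions that the remaining data does not constrain. The exponent and scale below are illustrative choices, and all names are our own.

```python
import numpy as np

def fisher_scrub(w, per_sample_grads, noise_scale=0.05, eps=1e-8, seed=0):
    """per_sample_grads: (n_remaining, n_params) gradients on the
    remaining dataset. Directions with small Fisher information matter
    little to the remaining samples, so they receive more noise, washing
    out what the weights memorized about the unlearned cohort."""
    fisher_diag = np.mean(per_sample_grads ** 2, axis=0)  # diagonal empirical Fisher
    sigma = noise_scale * (fisher_diag + eps) ** -0.25    # illustrative F^(-1/4) scaling
    rng = np.random.default_rng(seed)
    return w + sigma * rng.normal(size=w.shape)
```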
For deep learning models, bounding the extractable information from the perspective of the weights or the weight distribution is usually complex and may be too restrictive. Deep networks have a large number of equivalent solutions in the distribution space that provide the same activations on all test samples [43]. Therefore, many schemes have redirected unlearning operations from the weights to the final activations.
Unlike their previous work, Golatkar et al. [43] provide bounds on how much information can be extracted from the final activations. They first transformed the bound from a weight perspective to the final activations based on Shannon mutual information, and proposed a computable bound using the KL-divergence between the distributions of the final activations of an unlearned model and a retrained model. Inspired by the neural tangent kernel (NTK) [91,92], they observed that deep network activations can be approximated as a linear function of the weights. An optimal unlearning procedure is then provided based on a Fisher information matrix. However, due to the specific structure of deep neural networks, restricting the unlearning process to the final activation layer may not guarantee effective unlearning: once an attacker obtains all model parameters in a white-box scenario, they can still infer information from the middle layers.
Golatkar et al. [52] also proposed a mixed-privacy unlearning scheme based on a new mixed-privacy training process. This training process assumes the traditional training dataset can be divided into two parts: core data and user data. The model is first trained non-convexly on the core data; further training on the user data, based on a quadratic loss function, then adapts it to specific user tasks. Under this assumption, unlearning operations on the user data can be executed with existing quadratic unlearning schemes. Finally, they also derived bounds, based on mutual information, on the amount of information that an attacker can extract from the model weights. Nevertheless, the assumption that the training dataset is split in two and that the model is trained differently on each part restricts unlearning requests to the data that is easy to unlearn, making it difficult to unlearn the remaining data.
Liu et al. [93] transferred the unlearning method from a centralized environment to federated learning by proposing a distributed Newton-type model updating algorithm to approximate the loss function trained by the local optimizer on the remaining dataset. The method is built on the Quasi-Newton method and uses a first-order Taylor expansion. They also use a diagonal empirical Fisher Information Matrix (FIM) to efficiently and accurately approximate the inverse Hessian vector, rather than computing it directly, further reducing the cost of the retraining process. However, this solution results in a significant reduction in accuracy when dealing with complex models.
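The efficiency gain from the diagonal empirical FIM can be shown in a few lines: instead of forming and inverting a full Hessian, each Newton-type step divides the gradient elementwise by the diagonal Fisher estimate. This is our own minimal sketch, not the paper's distributed algorithm.

```python
import numpy as np

def fim_newton_step(w, per_sample_grads, lr=1.0, eps=1e-8):
    """Approximate Newton step that uses the diagonal empirical Fisher
    Information Matrix in place of the inverse Hessian.

    per_sample_grads: (n_samples, n_params) gradients computed on the
    remaining dataset after the unlearned client's data is removed."""
    grad = per_sample_grads.mean(axis=0)             # full-batch gradient
    fim_diag = (per_sample_grads ** 2).mean(axis=0)  # diagonal FIM estimate
    return w - lr * grad / (fim_diag + eps)          # elementwise "H^{-1} g"
```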
(3). Other shifting schemes
Schelter et al. [24] introduced the problem of making trained machine learning models unlearn data via decremental updates. They described three decremental update algorithms for different machine learning tasks: one based on item-based collaborative filtering, another based on ridge regression, and the last based on k-nearest neighbors. With each algorithm, the intermediate results are retained, and the model parameters are updated based on these intermediate results and the unlearning data, resulting in an unlearned model. However, this strategy can only be utilized with models whose parameters after the unlearning process can be straightforwardly computed, limiting the applicability of this scheme.
In addition, Graves et al. [45] proposed a laser-focused removal of sensitive data, called amnesiac unlearning. During training, the model provider retains a record of which samples appear in which batch, as well as the parameter updates for each batch. When a data unlearning request arrives, the model owner undoes the parameter updates from only the batches containing the sensitive data, that is, M' = M − Σ_{b∈B_s} Δ_b, where M is the already-trained model, Δ_b is the stored parameter update of batch b, and B_s is the set of batches containing the sensitive data. Because undoing some parameter updates can greatly reduce the performance of the model, the model provider can perform a small amount of fine-tuning after an unlearning operation to regain performance. This approach requires storing a substantial amount of intermediate data: as the storage interval decreases, the amount of cached data increases, with smaller intervals leading to more efficient model unlearning. A trade-off therefore exists between efficiency and effectiveness in this method.
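A minimal sketch of the bookkeeping behind this idea: log each batch's parameter update together with the sample ids it contains, then subtract the updates of affected batches on request. All names are illustrative, and a real implementation would follow the undo step with fine-tuning.

```python
import numpy as np

class AmnesiacTrainer:
    """Toy SGD trainer for linear regression that caches per-batch
    parameter updates so that the contribution of batches containing
    sensitive samples can later be undone."""

    def __init__(self, w):
        self.w = w
        self.batch_log = []  # list of (sample_ids, delta) pairs

    def train_batch(self, ids, X, y, lr=0.01):
        grad = X.T @ (X @ self.w - y) / len(y)  # linear-regression gradient
        delta = -lr * grad
        self.w = self.w + delta
        self.batch_log.append((set(ids), delta))

    def unlearn(self, sensitive_ids):
        """Undo the update of every batch that touched a sensitive id."""
        sensitive = set(sensitive_ids)
        for ids, delta in self.batch_log:
            if ids & sensitive:
                self.w = self.w - delta
```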
The above methods mainly focus on the core problem of empirical risk minimization, where the goal is to find approximate minimizers of the empirical loss on the remaining training dataset after unlearning samples [41,51]. Sekhari et al. [42] proposed a more general method that reduces the loss on unseen samples after an unlearning process. They produced an unlearned model by removing the contribution of some samples from an already-trained model using a perturbation update calculated from some cheap-to-store data statistics collected during training. In addition, they proposed an evaluation parameter to measure unlearning capacity. For convex loss functions, they also improved the achievable data unlearning capacity, obtaining a quadratic improvement in the dependence on the problem dimension d compared with what differential privacy provides.
Verifiability of Schemes Based on Model Shifting.
Izzo et al. [51] provided two metrics to measure effectiveness: the ℓ2 distance and the feature injection test. The ℓ2 distance measures the distance between the unlearned model and the retrained model. If the ℓ2 distance is small, the models are guaranteed to make similar predictions, which could reduce the impact of output-based attacks, such as membership inference attacks. The feature injection test can be thought of as a verification scheme based on a poisoning attack.
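Both metrics are simple to state in code; the sketch below is our own minimal rendering, with illustrative names and tolerance.

```python
import numpy as np

def l2_distance(w_unlearned, w_retrained):
    """Parameter-space distance; a small value suggests the two models
    make similar predictions."""
    return np.linalg.norm(w_unlearned - w_retrained)

def feature_injection_test(w_unlearned, injected_dim, tol=1e-3):
    """Plant a synthetic feature-label correlation in the samples to be
    unlearned during training; after unlearning, the weight on the
    injected feature should (approximately) vanish."""
    return abs(w_unlearned[injected_dim]) < tol
```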
Golatkar et al. [40,43,52] verify the effectiveness of their unlearning schemes based on accuracy and relearning time. They also developed two new verification metrics: model confidence and information bound [40]. Model confidence is formulated by measuring the distribution of the entropy of the output predictions on the remaining dataset, the unlearning dataset, and the test dataset. Then they evaluated the similarity of those distributions against the confidence of a trained model that has never seen the unlearning dataset. The higher the degree of similarity, the better the effect of the unlearning process. The information bound metric relies on KL-divergence to measure the information remaining about the unlearning dataset within the model after the unlearning process.
Different from their previous work, Golatkar et al. [43] also evaluate the information remaining within the weights and the activation. In their other work [52], they provided a new metric, activation distance, to analyze the distance between the final activations of an unlearned model and a retrained model. This is a similar metric to model confidence [40]. In addition, they also use attack-based methods for verification [43,52].
Guo et al. [41], Warnecke et al. [53], and Sekhari et al. [42] provide theoretical verification of the effectiveness of their proposed unlearning schemes. Based on the guarantee provided by certified unlearning, they bound the distributional similarity between the unlearned model and the retrained model. Warnecke et al. [53] also use the exposure metric [2] to measure the information remaining after unlearning. Liu et al. [93] analyzed the validity of their unlearning scheme through two metrics: the first, the Symmetric Absolute Percentage Error (SAPE), is built from accuracy; the second is the difference between the distribution of the model after the unlearning process and the distribution of the retrained model.
Manipulation Based on Model Pruning
Unlearning Schemes Based on Model Pruning.
As shown in Figure 9, methods based on model pruning usually prune a trained model to produce a model that can meet the requests of unlearning. This is typically applied in the scenario of federated learning, where a model provider can modify the model's historical parameters as an update. Federated learning is a distributed machine learning framework that can train a unified deep learning model across multiple decentralized nodes, where each node holds its own local data samples for training, and those samples never need to be exchanged with any other nodes [94]. There are three main types of federated learning: horizontal federated learning, vertical federated learning, and federated transfer learning [95].
Based on the idea of trading the central server's storage for the unlearned model's construction, Liu et al. [54] proposed an efficient federated unlearning methodology, FedEraser. Historical parameter updates from the clients are stored in the central server during the training process, and then the unlearning process unfolds in four steps: (1) calibration training, (2) update calibrating, (3) calibrated update aggregating, and (4) unlearned model updating, to achieve the unlearning purpose. In calibration training and update calibration steps, several rounds of a calibration retraining process are performed to approximate the unlearning updates without the target client. In the calibrated update aggregating and the unlearned model updating steps, standard federated learning aggregation operations are used to aggregate those unlearning updates and further update the global model. This eliminates the influence of the target data.
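One way to read the calibration step is that it keeps the magnitude of the stored historical update while borrowing the direction from the short calibration-training round; the sketch below reflects this reading of FedEraser, with our own names, and is not the authors' reference code.

```python
import numpy as np

def calibrate_update(stored_update, calibration_update, eps=1e-12):
    """Magnitude from the cached historical update, direction from the
    calibration update computed without the target client."""
    direction = calibration_update / (np.linalg.norm(calibration_update) + eps)
    return np.linalg.norm(stored_update) * direction

def aggregate(calibrated_updates):
    """FedAvg-style aggregation of the calibrated client updates."""
    return np.mean(calibrated_updates, axis=0)
```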
However, the effectiveness of this scheme decreases dramatically as the number of unlearning requests increases, because the gradients are cached during the training phase and the unlearning process does not update these cached gradients to serve subsequent unlearning requests [54]. In addition, this solution requires caching intermediate data, which costs extra storage.
Inspired by the observation that different channels make varying contributions to different classes in trained CNN models, Wang et al. [55] analyzed the problem of selectively unlearning classes in a federated learning setting. They introduced the concept of term frequency-inverse document frequency (TF-IDF) [96] to quantify the class discrimination of the channels (a minimal scoring sketch is given at the end of this subsection). Analogously to measuring how relevant a word is to a document within a set of documents, they regarded the output of a channel as a word and the feature map of a category as a document. Channels with high TF-IDF scores have more discriminatory power for the target categories and thus need to be pruned. An unlearning procedure via channel pruning [97] was also provided, followed by a fine-tuning process to recover the performance of the pruned model. In their unlearning scheme, however, while the parameters associated with the class to be unlearned are pruned, the parameters of the other classes also become incomplete, which affects model performance. The unlearned model is therefore only usable once the fine-tuning process is complete.

Baumhauer et al. [56] provided a machine unlearning scheme based on linear filtration. They first transformed existing logit-based classifiers into an integrated model that can be decomposed into a (potentially nonlinear) feature extraction followed by a multinomial logistic regression. They then focused the unlearning operation on the logistic regression layer, proposing a "black-box" unlearning definition. To unlearn the given samples, four different filtration methods are defined, namely naive unlearning, normalization, randomization, and zeroing; these effectively filter the outputs of the logistic regression layer. However, they only considered the unlearning process within the last layer, which carries the risk that an attacker with access to the parameters of the middle layers may still leak information about the unlearned data.
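Below is a loose adaptation of Wang et al.'s channels-as-words analogy; the activation summary, thresholding, and all names are our own simplifications rather than the paper's implementation.

```python
import numpy as np

def channel_tfidf(activations, labels, target_class):
    """Score each channel's discriminative power for `target_class`.

    activations: (n_samples, n_channels) mean activation per channel.
    Channels are treated as words and per-class activation profiles as
    documents; high-scoring channels are pruning candidates when the
    target class must be unlearned."""
    classes = np.unique(labels)
    # "Term frequency": mean activation of each channel within a class.
    tf = np.stack([activations[labels == c].mean(axis=0) for c in classes])
    # "Inverse document frequency": discount channels active in many classes.
    n_active = (tf > tf.mean(axis=0)).sum(axis=0)
    idf = np.log(len(classes) / (1.0 + n_active))
    target_row = int(np.where(classes == target_class)[0][0])
    return tf[target_row] * idf
```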
Verifiability of Schemes Based on Model Pruning.
Liu et al. [54] present an experimental verification method based on a membership inference attack. Two evaluation parameters are specified: attack precision and attack recall, where attack precision denotes the proportion of unlearned samples that are still inferred to have participated in the training process, and attack recall denotes the fraction of unlearned samples that can be correctly inferred as part of the training dataset. In addition, a prediction difference metric is provided, which measures the difference in prediction probabilities between the original global model and the unlearned model. Wang et al. [55] evaluate verifiability based on model accuracy.
Baumhauer et al. [56] defined a divergence measure based on the Bayes error rate for evaluating the similarity of the output distributions p(L_seen) and p(L_¬seen), where L_seen and L_¬seen are the pre-softmax outputs of the unlearned model and a retrained model, respectively. When the measure is close to 0, it indicates that p(L_seen) and p(L_¬seen) are similar and that the unlearning process has removed the sample's information from the model. In addition, they also use a model inversion attack to evaluate verifiability [6].
Manipulation Based on Model Replacement
Unlearning Schemes Based on Model Replacement.
As shown in Figure 10, model replacement-based methods usually calculate almost all possible sub-models in advance during the training process and store them together with the deployed model. Then, when an unlearning request arrives, only the sub-models affected by the unlearning operation need to be replaced with the pre-stored sub-models. This type of solution is usually suitable for certain machine learning models, such as tree-based models. A decision tree is a tree-based learning model in which each leaf node represents a prediction value and each internal node is a decision node associated with an attribute and a threshold. A random forest is an ensemble of decision trees that aims to improve prediction performance [98,99].
To improve the efficiency of the unlearning process for tree-based machine learning models, Schelter et al. [57] proposed HedgeCut, a classification model based on extremely randomized trees (ERTs) [100]. During the training process, the tree model's splits are divided into robust and non-robust splits based on a proposed robustness quantification factor. A robust split indicates that the subtree's structure will not change after unlearning a small number of samples, while for non-robust splits the structure may change. When unlearning a training sample, HedgeCut does not revise robust splits but only updates their leaf statistics. For non-robust splits, HedgeCut recomputes the split criterion of the maintained subtree variants, which were previously kept inactive, and selects a subtree variant as the new non-robust split of the current model.
For tree-based models, Brophy et al. [58] also proposed DaRE (Data Removal-Enabled) forests, a random forest variant that enables the efficient removal of training samples. DaRE is mainly based on the idea of retraining subtrees only as needed. Before the unlearning process, a set of randomly selected thresholds per attribute is computed, and intermediate statistics are stored within each node in advance. This information is sufficient to recompute the split criterion of each threshold without iterating through the data, which greatly reduces the cost of recalculation when unlearning samples. They also introduced random nodes at the top of each tree. Intuitively, the nodes near the top of the tree affect more samples than those near the bottom, which makes them more expensive to retrain when necessary. Random nodes depend only minimally on the statistics of the data, rather than being chosen greedily, and rarely need to be retrained; they therefore further improve the efficiency of unlearning.
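The gist of the cached node statistics can be illustrated with a toy counter for one candidate threshold; this is our own rendering for a binary-classification split, not DaRE's actual data structures.

```python
class ThresholdStats:
    """Toy cached statistics for one candidate threshold at a tree node:
    enough to rescore the split after deletions without rescanning data."""

    def __init__(self, n_left, pos_left, n_right, pos_right):
        self.n_left, self.pos_left = n_left, pos_left
        self.n_right, self.pos_right = n_right, pos_right

    def remove(self, goes_left, label):
        """Decrement the counters for one deleted sample (label in {0, 1})."""
        if goes_left:
            self.n_left -= 1
            self.pos_left -= label
        else:
            self.n_right -= 1
            self.pos_right -= label

    def weighted_gini(self):
        """Split impurity recomputed from the counters alone."""
        def gini(n, pos):
            if n == 0:
                return 0.0
            p = pos / n
            return 2.0 * p * (1.0 - p)
        n = self.n_left + self.n_right
        return (self.n_left * gini(self.n_left, self.pos_left)
                + self.n_right * gini(self.n_right, self.pos_right)) / max(n, 1)
```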
The above two schemes need to compute a large number of possible tree structures in advance, which consumes substantial storage resources [57,58]. Moreover, this replacement strategy is difficult to apply to other machine learning models, such as deep learning models, since it is infeasible to pre-compute partial model structures for the removal of each possible sample.
Chen et al. [59] proposed a machine unlearning scheme called WGAN unlearning, which removes information by reducing the model's output confidence on unlearned samples. Machine learning models usually exhibit different confidence levels over their outputs [101]. To reduce this confidence, WGAN unlearning first initializes a generator as the trained model that needs to unlearn data. The generator and a discriminator are then trained alternately until the discriminator cannot distinguish the model's outputs on the unlearning dataset from those on third-party data; at that point, the generator becomes the final unlearned model. However, because this method achieves unlearning through an alternating training process, it brings only a limited efficiency improvement over retraining from scratch.
Wu et al. [60] proposed an approximate unlearning method called DeltaGrad, based on intermediate parameters cached during the training phase, which can quickly unlearn information from machine learning models trained with gradient descent algorithms. They divided the retraining process into two parts. One part computes the full gradients exactly on the remaining training dataset. The other part uses the L-BFGS algorithm [102] and a set of updates from prior iterations to calculate Quasi-Hessians that approximate the true Hessian-vector products; these Quasi-Hessians are then used to approximate the updates in the remaining process. The two parts work together to generate the unlearned model. However, this approach reduces model performance after the unlearning process, since part of the model update is calculated with approximate methods. In addition, the number of iterations required for the model to converge increases, which reduces the efficiency of the unlearning process.
Verifiability of Schemes Based on Model Replacement.
Chen et al. [59] verified their proposed scheme with a membership inference attack and a metric based on false negative rates (FNRs) [103], defined as FNR = FN / (TP + FN), where TP counts the test samples that the membership inference attack considers to be part of the training dataset and FN counts those deemed to be non-training data. If the target model successfully unlearns the samples, the membership inference attack will treat the unlearned training samples as non-training data; thus FN will be large, TP will be small, and the corresponding FNR will be large. Indirectly, this reflects the effectiveness of the unlearning process.
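The metric is easy to compute from the attack's decisions on the unlearned samples; a minimal sketch in our own notation:

```python
def false_negative_rate(member_predictions):
    """member_predictions: membership-inference decisions on the
    unlearned samples (True = 'was in the training set').

    TP: unlearned samples still recognized as members.
    FN: unlearned samples judged to be non-members.
    A high FNR = FN / (TP + FN) suggests the unlearning took effect."""
    tp = sum(member_predictions)
    fn = len(member_predictions) - tp
    return fn / (tp + fn)
```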
Schelter et al. [57], Brophy et al. [58] and Wu et al. [60] only provide evaluations in terms of runtime and accuracy, and they do not provide reasonable experimental or theoretical verifiability guarantees of their unlearning processes.
Summary of Model Manipulation
In these last subsections, we reviewed studies that apply model shifting, model pruning, and model replacement techniques as unlearning processes. A summary of the surveyed studies is shown in Table 7, where we list the key differences between each paper.
Compared with the unlearning schemes based on data reorganization, few of the above papers make use of intermediate data for unlearning. This is because the basic idea of these schemes is to manipulate the model itself directly, rather than the training dataset: model manipulation methods calculate the influence of each sample and offset that influence using a range of techniques [38], while data reorganization schemes usually reorganize the training dataset to simplify the unlearning process. For this reason, model manipulation methods somewhat reduce the resources consumed by intermediate storage.
Second, most of the above schemes focus on relatively simple machine learning problems, such as linear logistic regression, or complex models with special assumptions [40,41,43,51]. Removing information from the weights of standard convolutional networks is still an open problem, and some preliminary results are only applicable to small-scale problems. One of the main challenges with unlearning processes for deep networks is how to estimate the impact of a given training sample on the model parameters. Also, the highly non-convex losses of CNNs make it very difficult to analyze those impacts on the optimization trajectory. Current research has focused on simpler convex learning problems, such as linear or logistic regression, for which theoretical analysis is feasible. Therefore, evaluating the impact of specific samples on deep learning models and further proposing unlearning schemes for those models are two urgent research problems.
In addition, most model manipulation-based methods affect the consistency or prediction accuracy of the original models, for two main reasons. First, owing to the complexity of calculating the impact of a specified sample on the model, manipulating a model's parameters based on unreliable impact estimates or assumptions leads to a decline in model accuracy. Second, the scheme of Wang et al. [55] prunes specific parameters in the original model, which also reduces accuracy because some prediction-relevant information is removed. Thus, more efficient unlearning mechanisms that simultaneously ensure the validity of the unlearning process and guarantee performance are worthy of research.
It is worth pointing out that most schemes provide a reasonable method with which to evaluate the effectiveness of the unlearning process. Significantly, model manipulation methods usually give a verifiability guarantee using theory-based and information bound-based methods [40,41,43]. Compared to the simple verification methods based on accuracy, relearning, or attacks, the methods based on theory or information bounds are more effective. This is because simple verification methods usually verify effectiveness based on output confidence. While the effects of the samples to be unlearned can be hidden from the output of the network, insights may still be gleaned by probing deep into its weights. Therefore, calculating and limiting the maximum amount of information that may be leaked at the theoretical level will be a more convincing method. Overall, however, more theory-based techniques for evaluating verifiability are needed.
In summary, the unlearning methods based on model shifting usually aim to offer higher efficiency by making certain assumptions about the training process, such as which training dataset or optimization techniques have been used. In addition, those mechanisms that are effective for simple models, such as linear regression models, become more complex when faced with advanced deep neural networks. Model pruning schemes require far-reaching modifications of the existing architecture of the model in the unlearning process [55,56], which could affect the performance of the unlearned models. It is worth noting that model replacement unlearning methods usually need to calculate all possible parameters and store them in advance, since they unlearn by quickly replacing the model parameters using these pre-calculated parameters. Thus, more effective unlearning schemes, that simultaneously consider model usability, storage costs, and the applicability of the unlearning process, are urgent research problems.
OPEN QUESTIONS AND FUTURE DIRECTIONS
In this section, we will analyze current and potential trends in machine unlearning, and summarize our findings. In addition, we identify several unanswered research directions that could be addressed to progress the foundation of machine unlearning and shape the future of AI.
Open Questions
As research continues to evolve, machine unlearning may expand further in the following areas, and this potential trend has already begun to take shape.

6.1.1 The universality of unlearning solutions.
Unlearning schemes with higher compatibility need to be explored. As development progresses, machine unlearning schemes supporting different models and unlearning data types have been proposed in various fields. For example, Zhang et al. [63] provided an unlearning scheme in image retrieval, while Chen et al. [47] considered graph unlearning problem. However, most of the current unlearning schemes are limited to a specific scenario. They are mostly designed to leverage the special characteristics of a particular learning process or training scheme [24,47,54]. Although it is feasible to design an appropriate unlearning scheme for every model, this is an inefficient approach that would require many manual interventions [104,105].
Therefore, universal unlearning schemes should be applicable not only to different model structures and training methods, but also to different types of training datasets, such as graphs, images, text, or audio data. The data pruning-based scheme is an existing and effective approach that can serve universal unlearning purposes based on ensemble learning techniques [30]. However, it breaks correlations among samples in some scenarios, which makes it unsuitable for models that require such correlation information to complete training.

6.1.2 The security of machine unlearning.
Unlearning schemes should ensure the security of any data, especially the unlearned dataset. Recently, existing research has shown that the unlearning operation not only does not reduce the risk of user privacy leakage but actually increases this risk [106,107]. These attack schemes mainly compare the models before and after the unlearning process. Thus, a membership inference attack or a poisoning attack would reveal a significant amount of detailed information about the unlearned samples [78,108]. In order to counteract such attacks, Neel et al. [50] have proposed a protection method based on Gaussian perturbation in their unlearning scheme.
In addition, many previous unlearning schemes rely on the remaining dataset or on intermediately cached model parameters. However, they do not consider the security of this intermediate information, or whether an attack could recover information about the unlearned samples from it [30,57]. Therefore, the design of further unlearning schemes needs to ensure that neither the before nor the after model exposes any information about the samples that need to be unlearned. Furthermore, the security of the data cached during the unlearning process also needs to be explored.
6.1.3 The verification of machine unlearning.
Verification methods should be easy to implement and applicable by users. Most current simple verification schemes, such as those based on attacks, relearning time, and accuracy [45,52], are derived from existing learning or attack metrics. These one-sided methods seldom provide strong verification of the unlearning process's effectiveness [44,109,110]. Meanwhile, unlearning methods with a theoretical guarantee are usually based on rich assumptions and can rarely be applied to complex models, since complex deep models usually invalidate those assumptions [41,53]. In addition, these verification schemes are neither user-friendly nor easy to implement.
Therefore, verification schemes should consider feasibility and acceptability; that is, users should be able to understand and verify whether their unlearning request has been completed based on some simple operations. There are already some relevant schemes, such as the backdoor-based verification mechanism in [44] and the encryption-based verification scheme in [111]. However, these schemes are still quite difficult for ordinary users. Therefore, an easy-to-implement and easy-to-understand verification scheme is a topic worthy of research.

6.1.4 The applications of machine unlearning.
While promoting individual data privacy, machine unlearning has also gradually emerged as a solution for other applications. Regulations and privacy issues have resulted in the need to allow a trained model to unlearn some of its training data. Beyond this, there are several other scenarios where efficient machine unlearning would be beneficial. For instance, it could be used to accelerate leave-one-out cross-validation, to remove adversarial or poisoned samples, and to identify significant and valuable data samples within a model [13]. Some relevant applications have already emerged [53,112]. For example, Warnecke et al. [53] proposed a feature unlearning scheme that could be used to address fairness issues.
At the same time, machine unlearning can also serve as an attack vector whose study strengthens the robustness of models. One potential attack scenario is as follows: the attacker first introduces pre-designed malicious samples into the dataset, which are subsequently used by the model provider to train the model. The attacker then issues unlearning requests to remove the information about those pre-designed samples from the model, which can affect the performance and fairness of the model or degrade unlearning efficiency [108]. Therefore, in addition to strengthening data protection, machine unlearning also has enormous potential in other areas.
Future Directions
Information synchronization: Similar to process synchronization in operating systems, machine unlearning may create information synchronization problems [113,114]. Since machine unlearning is usually computationally costly, the model provider may not be able to complete the unlearning process immediately. In the interim, how to handle incoming prediction requests deserves careful consideration. Consider that, if predictions continue to be returned prior to the model's update, unlearned data may be revealed. However, if all requests for prediction are rejected until the unlearning process is completed, model utility and service standards will surely suffer. Therefore, how to handle prediction requests within this interval needs comprehensive consideration.
Federated unlearning: Federated learning is a special kind of distributed learning characterized by various unstable users distributed in different places, each of whom has control over their own devices and data [115,116]. Imteaj et al. [95] show that model providers are more likely to receive requests to remove specific samples from a model trained in a federated learning setting. For example, when a user quits the collaborative training process, they may ask for their contribution to be removed from the collaborative model. Therefore, how to efficiently realize machine unlearning in a federated learning setting, given its limitations, such as inaccessible training data and unstable connections, is worthy of research [111].
Disturbance techniques: Problems with privacy leaks before and after machine unlearning are mainly caused by the differences between the two models. A feasible solution is to interfere with the training process or adjust the model parameters so that the model differs from what it would otherwise have been. Data disturbance techniques can interfere with specific data while ensuring overall data availability [117]. For example, Guo et al. [41] hide information about the unlearned samples using a loss perturbation technique [118] at training time, which perturbs the empirical risk through a random linear term. As such, a useful direction for future research may be to incorporate data disturbance into machine unlearning problems and to develop new mechanisms to support more sophisticated analyses.
Feature-based unlearning methods: Unlearning based on model shifting usually removes the impact of the unlearning dataset by calculating the influence on the model [40,43]. However, calculating the influence of the samples directly may be too complex [38]. Can we shift the calculation of influence from the original training samples to a group of specific features? When an unlearning request arrives, influence can be calculated based on the features instead of the original training samples. Technologies that may be relevant to this question include feature extraction [119], feature generation [120], and feature selection [121], which could be integrated into unlearning operations.
Game-theory-based balance: Game theory has been a booming field with several representative privacy-preserving techniques coming out in the past decade [122]. There are many schemes involving privacy-preserving solutions based on game theory that trade-off data privacy and utility issues [123,124]. For a model provider, machine unlearning is also a trade-off between model performance, and user privacy, where an over-unlearning strategy may lead to performance degradation, while insufficient protection may lead to privacy leaks. Can we formalize the unlearning problem as a game between two players: a model provider and a data provider? If so, we could provide a game model between these two entities and determine a set of strategies and utilities to figure out how to perform unlearning operations that maintain the model's performance to the maximum extent possible. Such an approach could also protect the user's sensitive data from being leaked. These are open issues that need to be explored further.
CONCLUSION
Machine learning methods have become a strong driving force in revolutionizing a wide range of applications. However, they are also bringing requests to delete training samples from models due to privacy, usability, or other entitlement requirements. Machine unlearning is a new technology that can cater to these requests for deletion, and many research studies have been carried out in this regard. In this survey, we provided a comprehensive overview of machine unlearning techniques with a particular focus on the two main types of unlearning processes: data reorganization and model manipulation. First, we provided the basic concept and different targets of machine unlearning. By analyzing typical approaches, we proposed a novel taxonomy and summarized their basic principles. We also reviewed many existing studies and discussed the strengths and limitations of those studies within each category. In addition, we emphasized the importance of verifying machine unlearning processes and reviewed the different ways in which machine unlearning can be verified. Finally, we discussed several issues that would merit future research and provided some feasible directions that need to be explored in the future. Our future work will focus on exploring the potential of machine unlearning in intriguing areas such as federated learning with a verifiability property.
Fig. 1. Illustration of Machine Unlearning.
Fig. 3. Targets of Machine Unlearning.
Definition 2.5 (Consistency). Assume there is a set of samples X with true labels Y: {y_1, y_2, ..., y_n}. Let Y^r: {y_1^r, y_2^r, ..., y_n^r} and Y^u: {y_1^u, y_2^u, ..., y_n^u} be the predicted labels produced by a retrained model and an unlearned model, respectively. If y_i^r = y_i^u for all 1 ≤ i ≤ n, the unlearning process U(A(D), D, D_u) is considered to provide the consistency property.
Definition 2.6 (Accuracy). Given a set of samples in the remaining dataset with true labels Y: {y_1, y_2, ..., y_n}, let Y^u: {y_1^u, y_2^u, ..., y_n^u} denote the predicted labels produced by the model after the unlearning process, w_u = U(A(D), D, D_u). The unlearning process is considered to provide the accuracy property if y_i^u = y_i for all 1 ≤ i ≤ n.
Fig. 4. Taxonomy of Unlearning and Verification Mechanisms.
Definition 2.7 (Verifiability). After the unlearning process, a verification function V(·) can make a distinguishable check, that is, V(A(D)) ≠ V(U(A(D), D, D_u)); the unlearning process U(A(D), D, D_u) then provides the verifiability property. For example, if an injected backdoor can be triggered in A(D) but not in U(A(D), D, D_u), that is, V(A(D)) = true and V(U(A(D), D, D_u)) = false, the unlearning method U(A(D), D, D_u) can be deemed to provide the verifiability property.
Fig. 9. Unlearning Schemes Based on Model Pruning.
Table 1. Comparison between Existing Machine Unlearning Surveys. (Column groups: Targets — Exact, Approximate, Strong, Weak; Desiderata — Consistency, Accuracy, Verifiability; Unlearning Request — Sample, Class, Feature, Sequence, Graph, Client; Verification Methods — Retraining-based, Attack-based, Accuracy-based, Relearn Time-based, Theory-based, Information bound-based; Open Questions — Universality, Security, Verification, Applications.)
Table 2. Notations.
Table 3. Summary and Comparison of Differences Between Targets.

Targets | Aims | Advantages | Limitations
Exact Unlearning | To make the distributions of a natively retrained model and an unlearned model indistinguishable | Ensures that attackers cannot recover any information from the unlearned model | Difficult to implement
Strong Unlearning | To ensure that the distributions of two models are approximately indistinguishable | Easier to implement than exact unlearning | Attackers can still recover some information from the unlearned model
Weak Unlearning | To only ensure that the distributions of two final activations are indistinguishable | The easiest target for machine unlearning | Cannot guarantee whether the internal parameters of the model are successfully unlearned
Table 4. Summary and Comparison of Differences Between Unlearning Schemes.

Category | Schemes | Basic Ideas | Advantages | Limitations
Data Reorganization | Data Obfuscation [27, 45, 46] | Intentionally adds some choreographed dataset to the training dataset and retrains the model | Can be applied to almost all types of models; not too much intermediate redundant data need to be retained | Not easy to completely unlearn information from models
Data Reorganization | Data Pruning [29, 30, 47-50] | Deletes the unlearned samples from the sub-datasets that contain those samples, then retrains only the sub-models affected by those samples | Easy to implement and understand; completes the unlearning process at a faster speed | Additional storage space is required; accuracy can decrease with an increase in the number of sub-datasets
Data Reorganization | Data Replacement [29] | Deliberately replaces the training dataset with some new transformed dataset | Supports completely unlearning information from models; easy to implement | Hard to retain all the information about the original dataset through replacement
Model Manipulation | Model Shifting [24, 40-43, 45, 51-53] | Directly updates model parameters to offset the impact of unlearned samples on the model | Does not require too much intermediate parameter storage; can provide theoretical verification | Not easy to find an appropriate offset value for complex models; calculating the offset value is usually complex
Model Manipulation | Model Pruning [54-56] | Prunes some parameters from already-trained models | Reduces the cost caused by intermediate storage; the unlearning process can be completed at a faster speed | Only applicable to partial models; not easy to implement and understand
Model Manipulation | Model Replacement [57-60] | Replaces partial parameters with pre-calculated parameters | Easy to completely unlearn information from models | Only applicable to partial machine learning models; original model structure is usually changed
Table 5. Summary and Comparison of Different Verification Methods.

Methods | Basic Ideas | Advantages | Limitations
Retraining-based | Removes unlearned samples and retrains models | Intuitive and easy to understand | Only applicable to special unlearning schemes
Attack-based | Based on membership inference attacks or model inversion attacks | Intuitively measures the defense effect against some attacks | Inadequate verification capability
Relearn time-based | Measures the time when the unlearned model regains performance on unlearned samples | Easy to understand and easy to implement | Inadequate verification capability
Accuracy-based | Same as a model trained without unlearned samples | Easy to understand and easy to implement | Inadequate verification capability
Theory-based | Ensures similarity between the unlearned model and the retrained model | Comprehensive and has theoretical support | Implementation is complex and only applies to some specified models
Information bound-based | Measures the upper bound of the residual information about the unlearned samples | Comprehensive and has theoretical support |
Fig. 5. Unlearning Schemes Based on Data Obfuscation.
Fig. 6. Unlearning Schemes Based on Data Pruning.
Table 6. The Surveyed Studies That Employed Data Reorganization Techniques for the Unlearning Process. (Columns: Papers; Unlearning Methods; Unlearning Target; Training Dataset; Intermediates; Unlearned Samples' Type; Target Models' Type; Consistency; Accuracy.)
Table 7. The Surveyed Studies That Employed Model Manipulation Techniques for the Unlearning Process.

Papers | Unlearning Methods | Unlearning Target | Training Dataset | Intermediates | Unlearned Samples' Type | Target Models' Type | Consistency | Accuracy | Verifiability
Guo et al. [41] | Model Shifting | Strong Unlearning | Yes | No | Samples | Linear Models with Strongly Convex Regularization | No | No | Theory-Based
Izzo et al. [51] | Model Shifting | Strong Unlearning | No | Yes | Batches | Linear and Logistic Regression Models | No | No | ℓ2 Distance and Attack-Based
Warnecke et al. [53] | Model Shifting | Strong Unlearning | Yes | No | Features and Labels | Convex or Non-Convex Models | No | No | Theory-Based and Method in [2]
Golatkar et al. [40] | Model Shifting | Strong Unlearning | Yes | No | Samples in One Class | DNN | No | No | Accuracy-Based, Relearn Time-Based, Model Confidence, and Information Bound-Based
Golatkar et al. [43] | Model Shifting | Strong Unlearning | Yes | No | Samples | DNN | No | No | Accuracy-Based, Relearn Time-Based, Attack-Based, and Information Bound-Based
Liu et al. [93] | Model Shifting | Strong Unlearning | Yes | Yes | Samples | DNN | No | No | Accuracy-Based
Golatkar et al. [52] | Model Shifting | Strong Unlearning | Yes | No | Samples | DNN | No | No | Accuracy-Based, Relearn Time-Based, Activation Distance, and Attack-Based
Schelter et al. [24] | Model Shifting | Exact Unlearning | No | Yes | Samples | Specified Model | Yes | Yes | -
Graves et al. [45] | Model Shifting | Strong Unlearning | No | Yes | Samples | DNN | No | No | Attack-Based
Sekhari et al. [42] | Model Shifting | Strong Unlearning | No | Yes | Samples | Convex Models | No | No | Theory-Based
Wang et al. [55] | Model Pruning | Strong Unlearning | Yes | No | Client Data | Federated Learning Model | No | No | -
Baumhauer et al. [56] | Model Pruning | Weak Unlearning | No | No | Classes | Logit-Based Classifiers | No | No | Attack-Based
Schelter et al. [57] | Model Replacement | Exact Unlearning | No | Yes | Samples | Extremely Randomized Trees | Yes | Yes | -
Brophy et al. [58] | Model Replacement | Exact Unlearning | Yes | Yes | Batches | Random Forests | Yes | Yes | -
Chen et al. [59] | Model Replacement | Strong Unlearning | No | No | Samples | Deep Classifier Models | No | No | Attack-Based
Wu et al. [60] | Model Replacement | Strong Unlearning | Yes | Yes | Samples | SGD-Based Models | No | No | -
ACKNOWLEDGMENTS

This paper is supported in part by the Australian Research Council Discovery DP200100946 and DP230100246, and NSF under grants III-1763325, III-1909323, III-2106758, and SaTC-1930941.
REFERENCES

M. I. Jordan and T. M. Mitchell. Machine learning: Trends, perspectives, and prospects. Science, 349(6245):255-260, 2015.

Nicholas Carlini, Chang Liu, Úlfar Erlingsson, Jernej Kos, and Dawn Song. The secret sharer: Evaluating and testing unintended memorization in neural networks. In Nadia Heninger and Patrick Traynor, editors, 28th USENIX Security Symposium, Santa Clara, CA, USA, August 14-16, 2019, pages 267-284. USENIX Association, 2019.

Minxing Zhang, Zhaochun Ren, Zihan Wang, Pengjie Ren, Zhumin Chen, Pengfei Hu, and Yang Zhang. Membership inference attacks against recommender systems. In Yongdae Kim, Jong Kim, Giovanni Vigna, and Elaine Shi, editors, CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, November 15-19, 2021, pages 864-879. ACM, 2021.

Mahdi Khosravy, Kazuaki Nakamura, Yuki Hirose, Naoko Nitta, and Noboru Babaguchi. Model inversion attack by integration of deep generative models: Privacy-sensitive face generation from a face recognition system. IEEE Trans. Inf. Forensics Secur., 17:357-372, 2022.

Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov. Membership inference attacks against machine learning models. In 2017 IEEE Symposium on Security and Privacy, SP 2017, San Jose, CA, USA, May 22-26, 2017, pages 3-18. IEEE Computer Society, 2017.

Matt Fredrikson, Somesh Jha, and Thomas Ristenpart. Model inversion attacks that exploit confidence information and basic countermeasures. In Indrajit Ray, Ninghui Li, and Christopher Kruegel, editors, Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, October 12-16, 2015, pages 1322-1333. ACM, 2015.

Eduard Fosch-Villaronga, Peter Kieseberg, and Tiffany Li. Humans forget, machines remember: Artificial intelligence and the right to be forgotten. Comput. Law Secur. Rev., 34(2):304-313, 2018.

Michael Veale, Reuben Binns, and Lilian Edwards. Algorithms that remember: Model inversion attacks and data protection law. CoRR, abs/1807.04644, 2018.

General Data Protection Regulation (GDPR), 2018. [Online; retrieved March 20, 2022 from https://data.stats.gov.cn].

California Consumer Privacy Act (CCPA), 2018. [Online; retrieved March 19, 2022 from https://oag.ca.gov/privacy/ccpa].

Japan - Data Protection Overview (JDPO), 2019. [Online; retrieved March 19, 2022 from https://www.dataguidance.com/notes/japan-data-protection-overview].

Antonio Ginart, Melody Y. Guan, Gregory Valiant, and James Zou. Making AI forget you: Data deletion in machine learning. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d'Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 3513-3526, 2019.

Quoc Phong Nguyen, Bryan Kian Hsiang Low, and Patrick Jaillet. Variational bayesian unlearning. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.

Sanjam Garg, Shafi Goldwasser, and Prashant Nalini Vasudevan. Formalizing data deletion in the context of the right to be forgotten. In Anne Canteaut and Yuval Ishai, editors, Advances in Cryptology - EUROCRYPT 2020 - 39th Annual International Conference on the Theory and Applications of Cryptographic Techniques, Zagreb, Croatia, May 10-14, 2020, Proceedings, Part II, volume 12106, pages 373-402. Springer, 2020.

Sanjay Krishnan and Eugene Wu. PALM: machine learning explanations for iterative debugging. In Carsten Binnig, Joseph M. Hellerstein, and Aditya G. Parameswaran, editors, Proceedings of the 2nd Workshop on Human-In-the-Loop Data Analytics, HILDA@SIGMOD 2017, Chicago, IL, USA, May 14, 2017, pages 4:1-4:6. ACM, 2017.

Tianqing Zhu, Dayong Ye, Wei Wang, Wanlei Zhou, and Philip S. Yu. More than privacy: Applying differential privacy in key areas of artificial intelligence. IEEE Transactions on Knowledge and Data Engineering, pages 1-1, 2020.

Kang Wei, Jun Li, Chuan Ma, Ming Ding, Cailian Chen, Shi Jin, Zhu Han, and H. Vincent Poor. Low-latency federated learning over wireless channels with differential privacy. IEEE J. Sel. Areas Commun., 40(1):290-307, 2022.

Tianqing Zhu, Gang Li, Wanlei Zhou, and Philip S. Yu. Differentially private data publishing and analysis: A survey. IEEE Trans. Knowl. Data Eng., 29(8):1619-1638, 2017.

Lefeng Zhang, Tianqing Zhu, Ping Xiong, Wanlei Zhou, and Philip S. Yu. More than privacy: Adopting differential privacy in game-theoretic mechanism design. ACM Comput. Surv., 54(7):136:1-136:37, 2022.

Nan Xiang, Xiongtao Zhang, Yajie Dou, Xiangqian Xu, Ke-Wei Yang, and Yuejin Tan. High-end equipment data desensitization method based on improved stackelberg GAN. Expert Syst. Appl., 180:114989, 2021.

Zhuo Wang, Kai Wei, Chunyu Jiang, Jiafeng Tian, Minjing Zhong, Yuan Liu, and Yanmei Liu. Research on productization and development trend of data desensitization technology. In 20th IEEE International Conference on Trust, Security and Privacy in Computing and Communications, TrustCom 2021, Shenyang, China, October 20-22, 2021, pages 1564-1569. IEEE, 2021.

Muhannad Al-Omari, Fangjun Li, David C. Hogg, and Anthony G. Cohn. Online perceptual learning and natural language acquisition for autonomous robots. Artif. Intell., 303:103637, 2022.

Sebastian Schelter. "amnesia" - towards machine learning models that can forget user data very fast. In 10th Conference on Innovative Data Systems Research, CIDR 2020, Amsterdam, The Netherlands, January 12-15, 2020, Online Proceedings. www.cidrdb.org, 2020.

Pei-Hung Chen, Wei Wei, Cho-Jui Hsieh, and Bo Dai. Overcoming catastrophic forgetting by bayesian generative regularization. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 1760-1770. PMLR, 2021.

Huihui Liu, Yiding Yang, and Xinchao Wang. Overcoming catastrophic forgetting in graph neural networks. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 8653-8661. AAAI Press, 2021.

Ayush K. Tarun, Vikram S. Chundawat, Murari Mandal, and Mohan S. Kankanhalli. Fast yet effective machine unlearning. CoRR, abs/2111.08947, 2021.

Enayat Ullah, Tung Mai, Anup Rao, Ryan A. Rossi, and Raman Arora. Machine unlearning via algorithmic stability. In Mikhail Belkin and Samory Kpotufe, editors, Conference on Learning Theory, COLT 2021, 15-19 August 2021, Boulder, Colorado, USA, volume 134 of Proceedings of Machine Learning Research, pages 4126-4142. PMLR, 2021.

Yinzhi Cao and Junfeng Yang. Towards making systems forget with machine unlearning. In 2015 IEEE Symposium on Security and Privacy, SP 2015, San Jose, CA, USA, May 17-21, 2015, pages 463-480. IEEE Computer Society, 2015.

Lucas Bourtoule, Varun Chandrasekaran, Christopher A. Choquette-Choo, Hengrui Jia, Adelin Travers, Baiwu Zhang, David Lie, and Nicolas Papernot. Machine unlearning. In 42nd IEEE Symposium on Security and Privacy, SP 2021, San Francisco, CA, USA, 24-27 May 2021, pages 141-159. IEEE, 2021.

Chen Wu, Sencun Zhu, and Prasenjit Mitra. Federated unlearning with knowledge distillation. 2022.

Qiang Yang, Yang Liu, Tianjian Chen, and Yongxin Tong. Federated machine learning: Concept and applications. ACM Trans. Intell. Syst. Technol., 10(2):12:1-12:19, 2019.

Chong Chen, Fei Sun, Min Zhang, and Bolin Ding. Recommendation unlearning. In Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides Gionis, Ivan Herman, and Lionel Médini, editors, WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25-29, 2022, pages 2768-2777. ACM, 2022.

Yuyuan Li, Xiaolin Zheng, Chaochao Chen, and Junlin Liu. Making recommender systems forget: Learning and unlearning for erasable recommendation. CoRR, abs/2203.11491, 2022.

Thanh Tam Nguyen, Thanh Trung Huynh, Phi Le Nguyen, Alan Wee-Chung Liew, Hongzhi Yin, and Quoc Viet Hung Nguyen. A survey of machine unlearning. CoRR, abs/2209.02299, 2022.

Saurabh Shintre, Kevin A. Roundy, and Jasjeet Dhaliwal. Making machine learning forget. In Maurizio Naldi, Giuseppe F. Italiano, Kai Rannenberg, Manel Medina, and Athena Bourka, editors, Privacy Technologies and Policy - 7th Annual Privacy Forum, APF 2019, Rome, Italy, June 13-14, 2019, Proceedings, volume 11498 of Lecture Notes in Computer Science, pages 72-83. Springer, 2019.
Unrolling SGD: understanding factors influencing machine unlearning. Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, Nicolas Papernot, 7th IEEE European Symposium on Security and Privacy. Genoa, ItalyIEEE2022Anvith Thudi, Gabriel Deza, Varun Chandrasekaran, and Nicolas Papernot. Unrolling SGD: understanding factors influencing machine unlearning. In 7th IEEE European Symposium on Security and Privacy, EuroS&P 2022, Genoa, Italy, June 6-10, 2022, pages 303-319. IEEE, 2022.
Understanding black-box predictions via influence functions. Wei Pang, Percy Koh, Liang, Proceedings of the 34th International Conference on Machine Learning. Doina Precup and Yee Whye Tehthe 34th International Conference on Machine LearningSydney, NSW, AustraliaPMLR70Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pages 1885-1894. PMLR, 2017.
Forget-svgd: Particle-based bayesian federated unlearning. CoRR, abs. Jinu Gong, Osvaldo Simeone, Rahif Kassab, Joonhyuk Kang, Jinu Gong, Osvaldo Simeone, Rahif Kassab, and Joonhyuk Kang. Forget-svgd: Particle-based bayesian federated unlearning. CoRR, abs/2111.12056, 2021.
Eternal sunshine of the spotless net: Selective forgetting in deep networks. Aditya Golatkar, Alessandro Achille, Stefano Soatto, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA20202020Computer Vision Foundation / IEEEAditya Golatkar, Alessandro Achille, and Stefano Soatto. Eternal sunshine of the spotless net: Selective forgetting in deep networks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 9301-9309. Computer Vision Foundation / IEEE, 2020.
Certified data removal from machine learning models. Chuan Guo, Tom Goldstein, Awni Y Hannun, Laurens Van Der Maaten, Proceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine LearningPMLR2020Virtual EventChuan Guo, Tom Goldstein, Awni Y. Hannun, and Laurens van der Maaten. Certified data removal from machine learning models. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 3832-3842. PMLR, 2020.
Remember what you want to forget: Algorithms for machine unlearning. Ayush Sekhari, Jayadev Acharya, Gautam Kamath, Ananda Theertha Suresh, abs/2103.03279CoRRAyush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. Remember what you want to forget: Algorithms for machine unlearning. CoRR, abs/2103.03279, 2021.
Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations. Aditya Golatkar, Alessandro Achille, Stefano Soatto, Computer Vision -ECCV 2020 -16th European Conference. Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael FrahmGlasgow, UKSpringer12374Proceedings, Part XXIXAditya Golatkar, Alessandro Achille, and Stefano Soatto. Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision -ECCV 2020 -16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXIX, volume 12374 of Lecture Notes in Computer Science, pages 383-398. Springer, 2020.
Towards probabilistic verification of machine unlearning. CoRR, abs. David Marco Sommer, Liwei Song, Sameer Wagh, Prateek Mittal, David Marco Sommer, Liwei Song, Sameer Wagh, and Prateek Mittal. Towards probabilistic verification of machine unlearning. CoRR, abs/2003.04247, 2020.
Amnesiac machine learning. Laura Graves, Vineel Nagisetty, Vijay Ganesh, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event. AAAI Press2021Laura Graves, Vineel Nagisetty, and Vijay Ganesh. Amnesiac machine learning. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 11516-11524. AAAI Press, 2021.
Class clown: Data redaction in machine unlearning at enterprise scale. Daniel L Felps, Amelia D Schwickerath, Joyce D Williams, Trung N Vuong, Alan Briggs, Matthew Hunt, Evan Sakmar, David D Saranchak, Tyler Shumaker, Proceedings of the 10th International Conference on Operations Research and Enterprise Systems, ICORES 2021, Online Streaming. Greg H. Parlier, Federico Liberatore, and Marc Demangethe 10th International Conference on Operations Research and Enterprise Systems, ICORES 2021, Online StreamingSCITEPRESSDaniel L. Felps, Amelia D. Schwickerath, Joyce D. Williams, Trung N. Vuong, Alan Briggs, Matthew Hunt, Evan Sakmar, David D. Saranchak, and Tyler Shumaker. Class clown: Data redaction in machine unlearning at enterprise scale. In Greg H. Parlier, Federico Liberatore, and Marc Demange, editors, Proceedings of the 10th International Conference on Operations Research and Enterprise Systems, ICORES 2021, Online Streaming, February 4-6, 2021, pages 7-14. SCITEPRESS, 2021.
Graph unlearning. Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang, Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security. Heng Yin, Angelos Stavrou, Cas Cremers, and Elaine Shithe 2022 ACM SIGSAC Conference on Computer and Communications SecurityLos Angeles, CA, USAACMMin Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. Graph unlearning. In Heng Yin, Angelos Stavrou, Cas Cremers, and Elaine Shi, editors, Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security, Los Angeles, CA, USA, November 7-11, 2022, pages 499-513. ACM, 2022.
Deepobliviate: A powerful charm for erasing data residual memory in deep neural networks. Yingzhe He, Guozhu Meng, Kai Chen, Jinwen He, Xingbo Hu, abs/2105.06209CoRRYingzhe He, Guozhu Meng, Kai Chen, Jinwen He, and Xingbo Hu. Deepobliviate: A powerful charm for erasing data residual memory in deep neural networks. CoRR, abs/2105.06209, 2021.
Adaptive machine unlearning. Varun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Chris Waites, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021. Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman VaughanVarun Gupta, Christopher Jung, Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, and Chris Waites. Adaptive machine unlearning. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 16319-16330, 2021.
Descent-to-delete: Gradient-based methods for machine unlearning. Seth Neel, Aaron Roth, Saeed Sharifi-Malvajerdi, Algorithmic Learning Theory. Vitaly Feldman, Katrina Ligett, and Sivan SabatoPMLR132Seth Neel, Aaron Roth, and Saeed Sharifi-Malvajerdi. Descent-to-delete: Gradient-based methods for machine unlearning. In Vitaly Feldman, Katrina Ligett, and Sivan Sabato, editors, Algorithmic Learning Theory, 16-19 March 2021,, volume 132 of Proceedings of Machine Learning Research, pages 931-962. PMLR, 2021.
Approximate data deletion from machine learning models. Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, James Zou, The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021. Arindam Banerjee and Kenji FukumizuPMLR130Zachary Izzo, Mary Anne Smart, Kamalika Chaudhuri, and James Zou. Approximate data deletion from machine learning models. In Arindam Banerjee and Kenji Fukumizu, editors, The 24th International Conference on Artificial Intelligence and Statistics, AISTATS 2021, April 13-15, 2021, Virtual Event, volume 130 of Proceedings of Machine Learning Research, pages 2008-2016. PMLR, 2021.
Mixed-privacy forgetting in deep networks. Aditya Golatkar, Alessandro Achille, Avinash Ravichandran, Marzia Polito, Stefano Soatto, IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual. 2021Computer Vision Foundation / IEEEAditya Golatkar, Alessandro Achille, Avinash Ravichandran, Marzia Polito, and Stefano Soatto. Mixed-privacy forgetting in deep networks. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, pages 792-801. Computer Vision Foundation / IEEE, 2021.
Machine unlearning of features and labels. Alexander Warnecke, Lukas Pirch, Christian Wressnegger, Konrad Rieck, abs/2108.11577CoRRAlexander Warnecke, Lukas Pirch, Christian Wressnegger, and Konrad Rieck. Machine unlearning of features and labels. CoRR, abs/2108.11577, 2021.
Federaser: Enabling efficient client-level data removal from federated learning models. Gaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, Jiangchuan Liu, 29th IEEE/ACM International Symposium on Quality of Service, IWQOS 2021. Tokyo, JapanIEEEGaoyang Liu, Xiaoqiang Ma, Yang Yang, Chen Wang, and Jiangchuan Liu. Federaser: Enabling efficient client-level data removal from federated learning models. In 29th IEEE/ACM International Symposium on Quality of Service, IWQOS 2021, Tokyo, Japan, June 25-28, 2021, pages 1-10. IEEE, 2021.
Federated unlearning via class-discriminative pruning. Junxiao Wang, Song Guo, Xin Xie, Heng Qi, WWW '22: The ACM Web Conference 2022, Virtual Event. Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides Gionis, Ivan Herman, and Lionel MédiniLyon, FranceACMJunxiao Wang, Song Guo, Xin Xie, and Heng Qi. Federated unlearning via class-discriminative pruning. In Frédérique Laforest, Raphaël Troncy, Elena Simperl, Deepak Agarwal, Aristides Gionis, Ivan Herman, and Lionel Médini, editors, WWW '22: The ACM Web Conference 2022, Virtual Event, Lyon, France, April 25 -29, 2022, pages 622-632. ACM, 2022.
Machine unlearning: Linear filtration for logit-based classifiers. CoRR, abs. Thomas Baumhauer, Pascal Schöttle, Matthias Zeppelzauer, Thomas Baumhauer, Pascal Schöttle, and Matthias Zeppelzauer. Machine unlearning: Linear filtration for logit-based classifiers. CoRR, abs/2002.02730, 2020.
Hedgecut: Maintaining randomised trees for low-latency machine unlearning. Sebastian Schelter, Stefan Grafberger, Ted Dunning, SIGMOD '21: International Conference on Management of Data, Virtual Event. Guoliang Li, Zhanhuai Li, Stratos Idreos, and Divesh SrivastavaChinaACMSebastian Schelter, Stefan Grafberger, and Ted Dunning. Hedgecut: Maintaining randomised trees for low-latency machine unlearning. In Guoliang Li, Zhanhuai Li, Stratos Idreos, and Divesh Srivastava, editors, SIGMOD '21: International Conference on Management of Data, Virtual Event, China, June 20-25, 2021, pages 1545-1557. ACM, 2021.
Machine unlearning for random forests. Jonathan Brophy, Daniel Lowd, Proceedings of the 38th International Conference on Machine Learning, ICML 2021. Marina Meila and Tong Zhangthe 38th International Conference on Machine Learning, ICML 2021PMLR139Jonathan Brophy and Daniel Lowd. Machine unlearning for random forests. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 1092-1104. PMLR, 2021.
Machine unlearning via GAN. CoRR, abs. Kongyang Chen, Yao Huang, Yiwen Wang, Kongyang Chen, Yao Huang, and Yiwen Wang. Machine unlearning via GAN. CoRR, abs/2111.11869, 2021.
Deltagrad: Rapid retraining of machine learning models. Yinjun Wu, Edgar Dobriban, Susan B Davidson, Proceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine LearningPMLR2020Yinjun Wu, Edgar Dobriban, and Susan B. Davidson. Deltagrad: Rapid retraining of machine learning models. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 10355-10366. PMLR, 2020.
Mlmodelci: An automatic cloud platform for efficient mlaas. Huaizheng Zhang, Yuanming Li, Yizheng Huang, Yonggang Wen, Jianxiong Yin, Kyle Guan, MM '20: The 28th ACM International Conference on Multimedia. Chang Wen Chen, Rita Cucchiara, Xian-Sheng Hua, Guo-Jun Qi, Elisa Ricci, Zhengyou Zhang, and Roger ZimmermannSeattle, WA, USAACMHuaizheng Zhang, Yuanming Li, Yizheng Huang, Yonggang Wen, Jianxiong Yin, and Kyle Guan. Mlmodelci: An automatic cloud platform for efficient mlaas. In Chang Wen Chen, Rita Cucchiara, Xian-Sheng Hua, Guo-Jun Qi, Elisa Ricci, Zhengyou Zhang, and Roger Zimmermann, editors, MM '20: The 28th ACM International Conference on Multimedia, Virtual Event / Seattle, WA, USA, October 12-16, 2020, pages 4453-4456. ACM, 2020.
Membership inference attacks against gans by leveraging over-representation regions. Hailong Hu, Jun Pang, CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event. Yongdae Kim, Jong Kim, Giovanni Vigna, and Elaine ShiRepublic of KoreaACMHailong Hu and Jun Pang. Membership inference attacks against gans by leveraging over-representation regions. In Yongdae Kim, Jong Kim, Giovanni Vigna, and Elaine Shi, editors, CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, November 15 -19, 2021, pages 2387-2389. ACM, 2021.
Machine unlearning for image retrieval: A generative scrubbing approach. Pengfei Zhang, Guangdong Bai, Zi Huang, Xin-Shun Xu, MM '22: The 30th ACM International Conference on Multimedia. João Magalhães, Alberto Del Bimbo, Shin'ichi Satoh, Nicu Sebe, Xavier Alameda-Pineda, Qin Jin, Vincent Oria, and Laura ToniLisboa, PortugalACMPengFei Zhang, Guangdong Bai, Zi Huang, and Xin-Shun Xu. Machine unlearning for image retrieval: A generative scrubbing approach. In João Magalhães, Alberto Del Bimbo, Shin'ichi Satoh, Nicu Sebe, Xavier Alameda-Pineda, Qin Jin, Vincent Oria, and Laura Toni, editors, MM '22: The 30th ACM International Conference on Multimedia, Lisboa, Portugal, October 10 -14, 2022, pages 237-245. ACM, 2022.
Privacy risk in machine learning: Analyzing the connection to overfitting. Samuel Yeom, Irene Giacomelli, Matt Fredrikson, Somesh Jha, 31st IEEE Computer Security Foundations Symposium, CSF 2018. Oxford, United KingdomIEEE Computer SocietySamuel Yeom, Irene Giacomelli, Matt Fredrikson, and Somesh Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 31st IEEE Computer Security Foundations Symposium, CSF 2018, Oxford, United Kingdom, July 9-12, 2018, pages 268-282. IEEE Computer Society, 2018.
Joint coding and scheduling optimization for distributed learning over wireless edge networks. Dinh Thai Nguyen Van Huynh, Hoang, N Diep, Eryk Nguyen, Dutkiewicz, IEEE J. Sel. Areas Commun. 402Nguyen Van Huynh, Dinh Thai Hoang, Diep N. Nguyen, and Eryk Dutkiewicz. Joint coding and scheduling optimization for distributed learning over wireless edge networks. IEEE J. Sel. Areas Commun., 40(2):484-498, 2022.
Privacy-preserved distributed learning with zeroth-order optimization. Cristiano Gratton, K D Naveen, Reza Venkategowda, Stefan Arablouei, Werner, IEEE Trans. Inf. Forensics Secur. 17Cristiano Gratton, Naveen K. D. Venkategowda, Reza Arablouei, and Stefan Werner. Privacy-preserved distributed learning with zeroth-order optimization. IEEE Trans. Inf. Forensics Secur., 17:265-279, 2022.
An evolutionary algorithm for automated machine learning focusing on classifier ensembles: An improved algorithm and extended results. João Carlos Xavier Junior, Alex Alves Freitas, Teresa Bernarda Ludermir, Antonino Feitosa Neto, Cephas A S Barreto, Theor. Comput. Sci. 805João Carlos Xavier Junior, Alex Alves Freitas, Teresa Bernarda Ludermir, Antonino Feitosa Neto, and Cephas A. S. Barreto. An evolutionary algorithm for automated machine learning focusing on classifier ensembles: An improved algorithm and extended results. Theor. Comput. Sci., 805:1-18, 2020.
Patient similarity learning with selective forgetting. Wei Qian, Chenxu Zhao, Huajie Shao, Minghan Chen, Fei Wang, Mengdi Huai, IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2022. Donald A. Adjeroh, Qi Long, Xinghua Mindy Shi, Fei Guo, Xiaohua Hu, Srinivas Aluru, Giri Narasimhan, Jianxin Wang, Mingon Kang, Ananda Mondal, and Jin LiuLas Vegas, NV, USAIEEEWei Qian, Chenxu Zhao, Huajie Shao, Minghan Chen, Fei Wang, and Mengdi Huai. Patient similarity learning with selective forgetting. In Donald A. Adjeroh, Qi Long, Xinghua Mindy Shi, Fei Guo, Xiaohua Hu, Srinivas Aluru, Giri Narasimhan, Jianxin Wang, Mingon Kang, Ananda Mondal, and Jin Liu, editors, IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2022, Las Vegas, NV, USA, December 6-8, 2022, pages 529-534. IEEE, 2022.
ARCANE: an efficient architecture for exact machine unlearning. Haonan Yan, Xiaoguang Li, Ziyao Guo, Hui Li, Fenghua Li, Xiaodong Lin, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022. Luc De Raedtthe Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022Vienna, Austria2022Haonan Yan, Xiaoguang Li, Ziyao Guo, Hui Li, Fenghua Li, and Xiaodong Lin. ARCANE: an efficient architecture for exact machine unlearning. In Luc De Raedt, editor, Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, pages 4006-4013. ijcai.org, 2022.
Knowledge-aware coupled graph neural network for social recommendation. Chao Huang, Huance Xu, Yong Xu, Peng Dai, Lianghao Xia, Mengyin Lu, Liefeng Bo, Hao Xing, Xiaoping Lai, Yanfang Ye, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event. AAAI Press2021Chao Huang, Huance Xu, Yong Xu, Peng Dai, Lianghao Xia, Mengyin Lu, Liefeng Bo, Hao Xing, Xiaoping Lai, and Yanfang Ye. Knowledge-aware coupled graph neural network for social recommendation. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 4115-4122. AAAI Press, 2021.
Financial time series forecasting with multi-modality graph neural network. Dawei Cheng, Fangzhou Yang, Sheng Xiang, Jin Liu, Pattern Recognit. 121108218Dawei Cheng, Fangzhou Yang, Sheng Xiang, and Jin Liu. Financial time series forecasting with multi-modality graph neural network. Pattern Recognit., 121:108218, 2022.
SUPREME: a cancer subtype prediction methodology integrating multiple biological datatypes using graph convolutional neural networks. Serdar Ziynet Nesibe Kesimoglu, Bozdag, BCB '21: 12th ACM International Conference on Bioinformatics. Hongmei Jiang, Xiuzhen Huang, and Jiajie ZhangGainesville, Florida, USAACMZiynet Nesibe Kesimoglu and Serdar Bozdag. SUPREME: a cancer subtype prediction methodology integrating multiple biological datatypes using graph convolutional neural networks. In Hongmei Jiang, Xiuzhen Huang, and Jiajie Zhang, editors, BCB '21: 12th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, Gainesville, Florida, USA, August 1-4, 2021, page 83:1. ACM, 2021.
Dynamic spatial-temporal graph convolutional neural networks for traffic forecasting. Zulong Diao, Xin Wang, Dafang Zhang, Yingru Liu, Kun Xie, Shaoyao He, The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019. Honolulu, Hawaii, USAAAAI PressZulong Diao, Xin Wang, Dafang Zhang, Yingru Liu, Kun Xie, and Shaoyao He. Dynamic spatial-temporal graph convolutional neural networks for traffic forecasting. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 -February 1, 2019, pages 890-897. AAAI Press, 2019.
Graph neural networks meet neural-symbolic computing: A survey and perspective. C Luís, Artur S Lamb, Marco Garcez, Marcelo O R Gori, Pedro H C Prates, Moshe Y Avelar, Vardi, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence. Christian Bessierethe Twenty-Ninth International Joint Conference on Artificial Intelligence20202020Luís C. Lamb, Artur S. d'Avila Garcez, Marco Gori, Marcelo O. R. Prates, Pedro H. C. Avelar, and Moshe Y. Vardi. Graph neural networks meet neural-symbolic computing: A survey and perspective. In Christian Bessiere, editor, Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 4877-4884. ijcai.org, 2020.
A comprehensive survey on graph neural networks. Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, Philip S Yu, IEEE Trans. Neural Networks Learn. Syst. 321Zonghan Wu, Shirui Pan, Fengwen Chen, Guodong Long, Chengqi Zhang, and Philip S. Yu. A comprehensive survey on graph neural networks. IEEE Trans. Neural Networks Learn. Syst., 32(1):4-24, 2021.
A network embedding-enhanced bayesian model for generalized community detection in complex networks. Dongxiao He, Youyou Wang, Jinxin Cao, Weiping Ding, Shizhan Chen, Zhiyong Feng, Bo Wang, Yuxiao Huang, Inf. Sci. 575Dongxiao He, Youyou Wang, Jinxin Cao, Weiping Ding, Shizhan Chen, Zhiyong Feng, Bo Wang, and Yuxiao Huang. A network embedding-enhanced bayesian model for generalized community detection in complex networks. Inf. Sci., 575:306-322, 2021.
Multiple kernel clustering with kernel k-means coupled graph tensor learning. Quansen Zhenwen Ren, Dong Sun, Wei, Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event. AAAI Press2021Zhenwen Ren, Quansen Sun, and Dong Wei. Multiple kernel clustering with kernel k-means coupled graph tensor learning. In Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innova- tive Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, pages 9411-9418. AAAI Press, 2021.
When machine unlearning jeopardizes privacy. Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang, CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security. Yongdae Kim, Jong Kim, Giovanni Vigna, and Elaine ShiACMVirtual Event, Republic of KoreaMin Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, and Yang Zhang. When machine unlearning jeopardizes privacy. In Yongdae Kim, Jong Kim, Giovanni Vigna, and Elaine Shi, editors, CCS '21: 2021 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, Republic of Korea, November 15 - 19, 2021, pages 896-911. ACM, 2021.
. Cynthia Dwork, Vitaly Feldman, Privacy-preserving prediction. CoRR, abs/1803.10266Cynthia Dwork and Vitaly Feldman. Privacy-preserving prediction. CoRR, abs/1803.10266, 2018.
When is memorization of irrelevant training data necessary for high-accuracy learning?. Gavin Brown, Mark Bun, Vitaly Feldman, Adam D Smith, Kunal Talwar, STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event. ItalyACMSamir Khuller and Virginia Vassilevska WilliamsGavin Brown, Mark Bun, Vitaly Feldman, Adam D. Smith, and Kunal Talwar. When is memorization of irrelevant training data necessary for high-accuracy learning? In Samir Khuller and Virginia Vassilevska Williams, editors, STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 123-132. ACM, 2021.
Mosaic organization of dna nucleotides. C.-K Peng, S V Buldyrev, S Havlin, M Simons, H E Stanley, A L Goldberger, Phys. Rev. E. 49C.-K. Peng, S. V. Buldyrev, S. Havlin, M. Simons, H. E. Stanley, and A. L. Goldberger. Mosaic organization of dna nucleotides. Phys. Rev. E, 49:1685-1689, Feb 1994.
Genseg and mr-genseg: A novel segmentation algorithm and its parallel mapreduce based approach for identifying genomic regions with copy number variations. Rituparna Sinha, Rajat Kumar Pal, Rajat K De, IEEE ACM Trans. Comput. Biol. Bioinform. 191Rituparna Sinha, Rajat Kumar Pal, and Rajat K. De. Genseg and mr-genseg: A novel segmentation algorithm and its parallel mapreduce based approach for identifying genomic regions with copy number variations. IEEE ACM Trans. Comput. Biol. Bioinform., 19(1):443-454, 2022.
Learning fair naive bayes classifiers by discovering and eliminating discrimination patterns. Yoojung Choi, Golnoosh Farnadi, Behrouz Babaki, Guy Van Den Broeck, The Thirty-Second Innovative Applications of Artificial Intelligence Conference. New York, NY, USAAAAI Press2020The Tenth AAAI Symposium on Educational Advances in Artificial IntelligenceYooJung Choi, Golnoosh Farnadi, Behrouz Babaki, and Guy Van den Broeck. Learning fair naive bayes classifiers by discovering and eliminating discrimination patterns. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 10077-10084. AAAI Press, 2020.
Near-tight margin-based generalization bounds for support vector machines. Allan Grønlund, Lior Kamma, Kasper Green Larsen, Proceedings of the 37th International Conference on Machine Learning. the 37th International Conference on Machine LearningPMLR2020Virtual EventAllan Grønlund, Lior Kamma, and Kasper Green Larsen. Near-tight margin-based generalization bounds for support vector machines. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 3779-3788. PMLR, 2020.
Ball $k$k-means: Fast adaptive clustering with no bounds. Shuyin Xia, Daowan Peng, Deyu Meng, Elisabeth Giem, Wei Wei, Zizhong Chen, IEEE Trans. Pattern Anal. Mach. Intell. 441Shuyin Xia, Daowan Peng, Deyu Meng, Elisabeth Giem, Wei Wei, and Zizhong Chen. Ball $k$k-means: Fast adaptive clustering with no bounds. IEEE Trans. Pattern Anal. Mach. Intell., 44(1):87-99, 2022.
Learning with selective forgetting. Takashi Shibata, Go Irie, Daiki Ikami, Yu Mitsuzumi, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event. Zhi-Hua Zhouthe Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual EventMontreal, Canada2021Takashi Shibata, Go Irie, Daiki Ikami, and Yu Mitsuzumi. Learning with selective forgetting. In Zhi-Hua Zhou, editor, Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, IJCAI 2021, Virtual Event / Montreal, Canada, 19-27 August 2021, pages 989-996. ijcai.org, 2021.
you might also like: " privacy risks of collaborative filtering. Joseph A Calandrino, Ann Kilzer, Arvind Narayanan, Edward W Felten, Vitaly Shmatikov, 32nd IEEE Symposium on Security and Privacy. Berkeley, California, USAIEEE Computer SocietyJoseph A. Calandrino, Ann Kilzer, Arvind Narayanan, Edward W. Felten, and Vitaly Shmatikov. "you might also like: " privacy risks of collaborative filtering. In 32nd IEEE Symposium on Security and Privacy, S&P 2011, 22-25 May 2011, Berkeley, California, USA, pages 231-246. IEEE Computer Society, 2011.
Empirical risk minimization in the non-interactive local model of differential privacy. Di Wang, Marco Gaboardi, Adam D Smith, Jinhui Xu, J. Mach. Learn. Res. 21Di Wang, Marco Gaboardi, Adam D. Smith, and Jinhui Xu. Empirical risk minimization in the non-interactive local model of differential privacy. J. Mach. Learn. Res., 21:200:1-200:39, 2020.
Second-order stochastic optimization for machine learning in linear time. Naman Agarwal, Brian Bullins, Elad Hazan, J. Mach. Learn. Res. 1840Naman Agarwal, Brian Bullins, and Elad Hazan. Second-order stochastic optimization for machine learning in linear time. J. Mach. Learn. Res., 18:116:1-116:40, 2017.
New insights and perspectives on the natural gradient method. James Martens, J. Mach. Learn. Res. 21James Martens. New insights and perspectives on the natural gradient method. J. Mach. Learn. Res., 21:146:1-146:76, 2020.
Neural tangent kernel: convergence and generalization in neural networks. Arthur Jacot, Franck Gabriel, Clément Hongler, STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event. Samir Khuller and Virginia Vassilevska WilliamsItalyACM6invited paperArthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: convergence and generalization in neural networks (invited paper). In Samir Khuller and Virginia Vassilevska Williams, editors, STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, page 6. ACM, 2021.
FL-NTK: A neural tangent kernel-based framework for federated learning analysis. Baihe Huang, Xiaoxiao Li, Zhao Song, Xin Yang, Proceedings of the 38th International Conference on Machine Learning, ICML 2021. Marina Meila and Tong Zhangthe 38th International Conference on Machine Learning, ICML 2021PMLR139Baihe Huang, Xiaoxiao Li, Zhao Song, and Xin Yang. FL-NTK: A neural tangent kernel-based framework for federated learning analysis. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 4423-4434. PMLR, 2021.
The right to be forgotten in federated learning: An efficient realization with rapid retraining. Yi Liu, Lei Xu, Xingliang Yuan, Cong Wang, Bo Li, IEEE INFOCOM 2022 -IEEE Conference on Computer Communications. London, United KingdomIEEEYi Liu, Lei Xu, Xingliang Yuan, Cong Wang, and Bo Li. The right to be forgotten in federated learning: An efficient realization with rapid retraining. In IEEE INFOCOM 2022 -IEEE Conference on Computer Communications, London, United Kingdom, May 2-5, 2022, pages 1749-1758. IEEE, 2022.
Federated learning in mobile edge networks: A comprehensive survey. Wei Yang Bryan Lim, Cong Nguyen, Dinh Thai Luong, Yutao Hoang, Ying-Chang Jiao, Qiang Liang, Dusit Yang, Chunyan Niyato, Miao, IEEE Commun. Surv. Tutorials. 223Wei Yang Bryan Lim, Nguyen Cong Luong, Dinh Thai Hoang, Yutao Jiao, Ying-Chang Liang, Qiang Yang, Dusit Niyato, and Chunyan Miao. Federated learning in mobile edge networks: A comprehensive survey. IEEE Commun. Surv. Tutorials, 22(3):2031-2063, 2020.
A survey on federated learning for resource-constrained iot devices. Ahmed Imteaj, Urmish Thakker, Shiqiang Wang, Jian Li, M Hadi Amini, IEEE Internet Things J. 91Ahmed Imteaj, Urmish Thakker, Shiqiang Wang, Jian Li, and M. Hadi Amini. A survey on federated learning for resource-constrained iot devices. IEEE Internet Things J., 9(1):1-24, 2022.
Predicting stock trend using an integrated term frequency-inverse document frequency-based feature weight matrix with neural networks. Ankit Thakkar, Kinjal Chaudhari, Appl. Soft Comput. 96106684Ankit Thakkar and Kinjal Chaudhari. Predicting stock trend using an integrated term frequency-inverse document frequency-based feature weight matrix with neural networks. Appl. Soft Comput., 96:106684, 2020.
DMCP: differentiable markov channel pruning for neural networks. Shaopeng Guo, Yujie Wang, Quanquan Li, Junjie Yan, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle, WA, USA20202020Computer Vision Foundation / IEEEShaopeng Guo, Yujie Wang, Quanquan Li, and Junjie Yan. DMCP: differentiable markov channel pruning for neural networks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pages 1536-1544. Computer Vision Foundation / IEEE, 2020.
Deep differentiable random forests for age estimation. Wei Shen, Yilu Guo, Yan Wang, Kai Zhao, Bo Wang, Alan L Yuille, IEEE Trans. Pattern Anal. Mach. Intell. 432Wei Shen, Yilu Guo, Yan Wang, Kai Zhao, Bo Wang, and Alan L. Yuille. Deep differentiable random forests for age estimation. IEEE Trans. Pattern Anal. Mach. Intell., 43(2):404-419, 2021.
Using pomdps for learning cost sensitive decision trees. Shlomi Maliah, Guy Shani, Artif. Intell. 292103400Shlomi Maliah and Guy Shani. Using pomdps for learning cost sensitive decision trees. Artif. Intell., 292:103400, 2021.
An intelligent scheme for big data recovery in internet of things based on multi-attribute assistance and extremely randomized trees. Hongju Cheng, Yushi Shi, Leihuo Wu, Yingya Guo, Naixue Xiong, Inf. Sci. 557Hongju Cheng, Yushi Shi, Leihuo Wu, Yingya Guo, and Naixue Xiong. An intelligent scheme for big data recovery in internet of things based on multi-attribute assistance and extremely randomized trees. Inf. Sci., 557:66-83, 2021.
Extracting training data from large language models. Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B Brown, Dawn Song, 30th USENIX Security Symposium, USENIX Security 2021. Michael Bailey and Rachel GreenstadtUSENIX Association2021Úlfar Erlingsson, Alina Oprea, and Colin RaffelNicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom B. Brown, Dawn Song, Úlfar Erlingsson, Alina Oprea, and Colin Raffel. Extracting training data from large language models. In Michael Bailey and Rachel Greenstadt, editors, 30th USENIX Security Symposium, USENIX Security 2021, August 11-13, 2021, pages 2633-2650. USENIX Association, 2021.
Global convergence of online limited memory BFGS. Aryan Mokhtari, Alejandro Ribeiro, J. Mach. Learn. Res. 16Aryan Mokhtari and Alejandro Ribeiro. Global convergence of online limited memory BFGS. J. Mach. Learn. Res., 16:3151-3181, 2015.
SNIPER: few-shot learning for anomaly detection to minimize false-negative rate with ensured true-positive rate. Yuma Koizumi, Shin Murata, Noboru Harada, Shoichiro Saito, Hisashi Uematsu, IEEE International Conference on Acoustics, Speech and Signal Processing. Brighton, United KingdomIEEEYuma Koizumi, Shin Murata, Noboru Harada, Shoichiro Saito, and Hisashi Uematsu. SNIPER: few-shot learning for anomaly detection to minimize false-negative rate with ensured true-positive rate. In IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2019, Brighton, United Kingdom, May 12-17, 2019, pages 915-919. IEEE, 2019.
PUMA: performance unchanged model augmentation for training data removal. Ga Wu, Masoud Hashemi, Christopher Srinivasa, Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event. AAAI Press2022Ga Wu, Masoud Hashemi, and Christopher Srinivasa. PUMA: performance unchanged model augmentation for training data removal. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 -March 1, 2022, pages 8675-8682. AAAI Press, 2022.
Zero-shot machine unlearning. S Vikram, Ayush K Chundawat, Murari Tarun, Mohan S Mandal, Kankanhalli, abs/2201.05629CoRRVikram S. Chundawat, Ayush K. Tarun, Murari Mandal, and Mohan S. Kankanhalli. Zero-shot machine unlearning. CoRR, abs/2201.05629, 2022.
Analyzing information leakage of updates to natural language models. Lukas Santiago Zanella Béguelin, Shruti Wutschitz, Victor Tople, Andrew Rühle, Olga Paverd, Boris Ohrimenko, Marc Köpf, Brockschmidt, CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security. Jay Ligatti, Xinming Ou, Jonathan Katz, and Giovanni VignaUSAACMSantiago Zanella Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, Olga Ohrimenko, Boris Köpf, and Marc Brockschmidt. Analyzing information leakage of updates to natural language models. In Jay Ligatti, Xinming Ou, Jonathan Katz, and Giovanni Vigna, editors, CCS '20: 2020 ACM SIGSAC Conference on Computer and Communications Security, Virtual Event, USA, November 9-13, 2020, pages 363-375. ACM, 2020.
Analyzing privacy loss in updates of natural language models. Shruti Tople, Marc Brockschmidt, Boris Köpf, Olga Ohrimenko, Santiago Zanella Béguelin, abs/1912.07942CoRRShruti Tople, Marc Brockschmidt, Boris Köpf, Olga Ohrimenko, and Santiago Zanella Béguelin. Analyzing privacy loss in updates of natural language models. CoRR, abs/1912.07942, 2019.
Hard to forget: Poisoning attacks on certified machine unlearning. Neil G Marchant, I P Benjamin, Scott Rubinstein, Alfeld, Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event. AAAI Press2022Neil G. Marchant, Benjamin I. P. Rubinstein, and Scott Alfeld. Hard to forget: Poisoning attacks on certified machine unlearning. In Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelveth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022 Virtual Event, February 22 -March 1, 2022, pages 7691-7700. AAAI Press, 2022.
Have you forgotten? A method to assess if machine learning models have forgotten data. Xiao Liu, Sotirios A Tsaftaris, Medical Image Computing and Computer Assisted Intervention -MICCAI 2020 -23rd International Conference. Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, and Leo JoskowiczLima, PeruSpringer12261Proceedings, Part IXiao Liu and Sotirios A. Tsaftaris. Have you forgotten? A method to assess if machine learning models have forgotten data. In Anne L. Martel, Purang Abolmaesumi, Danail Stoyanov, Diana Mateus, Maria A. Zuluaga, S. Kevin Zhou, Daniel Racoceanu, and Leo Joskowicz, editors, Medical Image Computing and Computer Assisted Intervention -MICCAI 2020 -23rd International Conference, Lima, Peru, October 4-8, 2020, Proceedings, Part I, volume 12261, pages 95-105. Springer, 2020.
On the necessity of auditable algorithmic definitions for machine unlearning. Anvith Thudi, Hengrui Jia, Ilia Shumailov, Nicolas Papernot, abs/2110.11891CoRRAnvith Thudi, Hengrui Jia, Ilia Shumailov, and Nicolas Papernot. On the necessity of auditable algorithmic definitions for machine unlearning. CoRR, abs/2110.11891, 2021.
Revfrf: Enabling cross-domain random forest training with revocable federated learning. Yang Liu, Zhuo Ma, Yilong Yang, Ximeng Liu, Jianfeng Ma, Kui Ren, IEEE Trans. Dependable Secur. Comput. 196Yang Liu, Zhuo Ma, Yilong Yang, Ximeng Liu, Jianfeng Ma, and Kui Ren. Revfrf: Enabling cross-domain random forest training with revocable federated learning. IEEE Trans. Dependable Secur. Comput., 19(6):3671-3685, 2022.
Backdoor defense with machine unlearning. Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma, IEEE INFOCOM 2022 -IEEE Conference on Computer Communications. London, United KingdomIEEEYang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, and Jianfeng Ma. Backdoor defense with machine unlearning. In IEEE INFOCOM 2022 -IEEE Conference on Computer Communications, London, United Kingdom, May 2-5, 2022, pages 280-289. IEEE, 2022.
Process synchronization in database systems. Gunter Schlageter, ACM Trans. Database Syst. 33Gunter Schlageter. Process synchronization in database systems. ACM Trans. Database Syst., 3(3):248-271, 1978.
Process synchronization: Design and performance evaluation of distributed algorithms. L Rajive, Bagrodia, IEEE Trans. Software Eng. 159Rajive L. Bagrodia. Process synchronization: Design and performance evaluation of distributed algorithms. IEEE Trans. Software Eng., 15(9):1053-1065, 1989.
Adaptive federated learning in resource constrained edge computing systems. Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K Leung, Christian Makaya, Ting He, Kevin Chan, IEEE J. Sel. Areas Commun. 376Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, and Kevin Chan. Adaptive federated learning in resource constrained edge computing systems. IEEE J. Sel. Areas Commun., 37(6):1205- 1221, 2019.
Ekram Hossain, and Choong Seon Hong. Federated learning for internet of things: Recent advances, taxonomy, and open challenges. U Latif, Walid Khan, Zhu Saad, Han, IEEE Commun. Surv. Tutorials. 233Latif U. Khan, Walid Saad, Zhu Han, Ekram Hossain, and Choong Seon Hong. Federated learning for internet of things: Recent advances, taxonomy, and open challenges. IEEE Commun. Surv. Tutorials, 23(3):1759-1799, 2021.
A security-and privacy-preserving approach based on data disturbance for collaborative edge computing in social iot systems. Peiying Zhang, Yaqi Wang, Neeraj Kumar, Chunxiao Jiang, Guowei Shi, IEEE Trans. Comput. Soc. Syst. 91Peiying Zhang, Yaqi Wang, Neeraj Kumar, Chunxiao Jiang, and Guowei Shi. A security-and privacy-preserving approach based on data disturbance for collaborative edge computing in social iot systems. IEEE Trans. Comput. Soc. Syst., 9(1):97-108, 2022.
Differentially private empirical risk minimization. Kamalika Chaudhuri, Claire Monteleoni, Anand D Sarwate, J. Mach. Learn. Res. 12Kamalika Chaudhuri, Claire Monteleoni, and Anand D. Sarwate. Differentially private empirical risk minimization. J. Mach. Learn. Res., 12:1069-1109, 2011.
Privacy-preserving feature extraction via adversarial training. Xiaofeng Ding, Hongbiao Fang, Zhilin Zhang, Kim-Kwang Raymond Choo, Hai Jin, IEEE Trans. Knowl. Data Eng. 344Xiaofeng Ding, Hongbiao Fang, Zhilin Zhang, Kim-Kwang Raymond Choo, and Hai Jin. Privacy-preserving feature extraction via adversarial training. IEEE Trans. Knowl. Data Eng., 34(4):1967-1979, 2022.
Sample-centric feature generation for semi-supervised few-shot learning. Bo Zhang, Hancheng Ye, Gang Yu, Bin Wang, Yike Wu, Jiayuan Fan, Tao Chen, IEEE Trans. Image Process. 31Bo Zhang, Hancheng Ye, Gang Yu, Bin Wang, Yike Wu, Jiayuan Fan, and Tao Chen. Sample-centric feature generation for semi-supervised few-shot learning. IEEE Trans. Image Process., 31:2309-2320, 2022.
DGDFS: dependence guided discriminative feature selection for predicting adverse drug-drug interaction. Jiajing Zhu, Yongguo Liu, Chuanbiao Wen, Xindong Wu, IEEE Trans. Knowl. Data Eng. 341Jiajing Zhu, Yongguo Liu, Chuanbiao Wen, and Xindong Wu. DGDFS: dependence guided discriminative feature selection for predicting adverse drug-drug interaction. IEEE Trans. Knowl. Data Eng., 34(1):271-285, 2022.
Exploration-exploitation in multi-agent learning: Catastrophe theory meets game theory. Stefanos Leonardos, Georgios Piliouras, Artif. Intell. 304103653Stefanos Leonardos and Georgios Piliouras. Exploration-exploitation in multi-agent learning: Catastrophe theory meets game theory. Artif. Intell., 304:103653, 2022.
Improving data utility through game theory in personalized differential privacy. Lei Cui, Youyang Qu, Mohammad Reza Nosouhi, Shui Yu, Jianwei Niu, Gang Xie, J. Comput. Sci. Technol. 342Lei Cui, Youyang Qu, Mohammad Reza Nosouhi, Shui Yu, Jianwei Niu, and Gang Xie. Improving data utility through game theory in personalized differential privacy. J. Comput. Sci. Technol., 34(2):272-286, 2019.
Investigating the adoption of hybrid encrypted cloud data deduplication with game theory. Xueqin Liang, Zheng Yan, Robert H Deng, Qinghua Zheng, IEEE Trans. Parallel Distributed Syst. 323Xueqin Liang, Zheng Yan, Robert H. Deng, and Qinghua Zheng. Investigating the adoption of hybrid encrypted cloud data deduplication with game theory. IEEE Trans. Parallel Distributed Syst., 32(3):587-600, 2021.
| [] |
[
"A Unified Algorithm for Stochastic Path Problems",
"A Unified Algorithm for Stochastic Path Problems"
] | [
"Christoph Dann \nCDANN@CDANN.NET\nUSC\n\n",
"Chen-Yu Google \nCDANN@CDANN.NET\nUSC\n\n",
"Wei \nCDANN@CDANN.NET\nUSC\n\n",
"Julian Zimmert \nCDANN@CDANN.NET\nUSC\n\n",
"Zimmert@google Com Google \nCDANN@CDANN.NET\nUSC\n\n"
] | [
"CDANN@CDANN.NET\nUSC\n",
"CDANN@CDANN.NET\nUSC\n",
"CDANN@CDANN.NET\nUSC\n",
"CDANN@CDANN.NET\nUSC\n",
"CDANN@CDANN.NET\nUSC\n"
] | [] | We study reinforcement learning in stochastic path (SP) problems. The goal in these problems is to maximize the expected sum of rewards until the agent reaches a terminal state. We provide the first regret guarantees in this general problem by analyzing a simple optimistic algorithm. Our regret bound matches the best known results for the well-studied special case of stochastic shortest path (SSP) with all non-positive rewards. For SSP, we present an adaptation procedure for the case when the scale of rewards $B_\star$ is unknown. We show that there is no price for adaptation, and our regret bound matches that with a known $B_\star$. We also provide a scale adaptation procedure for the special case of stochastic longest paths (SLP), where all rewards are non-negative. However, unlike in SSP, we show through a lower bound that there is an unavoidable price for adaptation. | 10.48550/arxiv.2210.09255 | [
"https://export.arxiv.org/pdf/2210.09255v1.pdf"
] | 252,918,536 | 2210.09255 | 32b60239f8d8bd55099232fe7a88b81b79c59670 |
A Unified Algorithm for Stochastic Path Problems
17 Oct 2022
Christoph Dann
CDANN@CDANN.NET
Google

Chen-Yu Wei
USC

Julian Zimmert
ZIMMERT@GOOGLE.COM
Google
We study reinforcement learning in stochastic path (SP) problems. The goal in these problems is to maximize the expected sum of rewards until the agent reaches a terminal state. We provide the first regret guarantees in this general problem by analyzing a simple optimistic algorithm. Our regret bound matches the best known results for the well-studied special case of stochastic shortest path (SSP) with all non-positive rewards. For SSP, we present an adaptation procedure for the case when the scale of rewards $B_\star$ is unknown. We show that there is no price for adaptation, and our regret bound matches that with a known $B_\star$. We also provide a scale adaptation procedure for the special case of stochastic longest paths (SLP), where all rewards are non-negative. However, unlike in SSP, we show through a lower bound that there is an unavoidable price for adaptation.

Specifically, in the lower bound construction of Theorem 3, $V_\star$ and $B_\star$ are of order $O(1)$ for all $u \le K/(SA)$, showing that $V_\star$ and $B_\star$ are insufficient to characterize the regret bound for general SP problems. As we will see in the following sections, this contrasts with the special cases SLP and SSP, where, except for logarithmic terms, the coefficients in the regret bound can be completely characterized by $V_\star$ and $B_\star$.

In the case when the learner has no access to an absolute upper bound for $V_\star$, we set $U = K^{1/\epsilon}$ for some parameter $\epsilon \in (0, 1)$. With $\zeta = K/(S^2 A \ln U)$ and $\zeta = K/(S^3 A \ln U)$, Algorithm 3 ensures, with probability at least $1 - O(\delta)$, […] respectively.

The proof can be found in Appendix B. In Theorem 8, we obtain two regret bounds for SLP without knowledge of […] in Theorem 6 for the case with a known $B_\star$. Is it possible to close the gap between the "known $B_\star$" and the "unknown $B_\star$" cases? In Section 4.2, we will show that this is impossible, by giving a regret lower bound for algorithms agnostic of $B_\star$. The lower bound can be strictly larger than the upper bound with knowledge of $B_\star$, thus formally identifying the price of information about $B_\star$ for SLP. Finally, we remark on the source of suboptimality in the bounds in Theorem 8. The bounds we get are $B$[…] The additional $S$ dependencies come from the $S^2$ in the lower-order term in Theorem 6. It is conjectured by previous work (Zhang et al., 2021) that this $S^2$ in the lower-order term can be improved to $S$. If this conjecture is true, then our bounds in Theorem 8 can be improved to $B_\star \sqrt{SAK}$ and $\sqrt{V_\star B_\star SAK} + \frac{B_\star^2}{V_\star} SA$. As we will see in Section 4.2, these bounds are unimprovable when $B_\star$ is unknown. […]$^{1/4} K^{3/4}$) in the hard instance mentioned in Theorem 9. Considering two classes of upper bounds, one in which we always scale with $\sqrt{K}$ without dominating lower-order terms, and one in which we allow a constant cost for adapting to an unknown $B_\star$, we directly derive the following results. Corollary 10: Any algorithm with an asymptotic upper bound of $O\big(B_\star^{\alpha} V_\star^{1-\alpha} \sqrt{SAK}\big) + o\big(B_\star^2\big)$ satisfies at least $\alpha \ge 1$, and any algorithm with an upper bound of $O\big(\sqrt{V_\star B_\star SAK} + (B_\star/V_\star)^2 \, \mathrm{poly}(V_\star, S, A)\big)$ […]
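The parameter choices quoted in the excerpt above are simple closed-form expressions. The helper below is a reading aid of ours with hypothetical names; it only evaluates the formulas $U = K^{1/\epsilon}$ and $\zeta = K/(S^2 A \ln U)$ or $K/(S^3 A \ln U)$, and does not reproduce the surrounding Algorithm 3.

```python
# Evaluates the proxy bound U and the two zeta parameters quoted in the
# excerpt. Names and the example values are ours, not from the paper.
import math

def slp_parameters(K: int, S: int, A: int, eps: float):
    assert 0.0 < eps < 1.0, "the excerpt assumes eps in (0, 1)"
    U = float(K) ** (1.0 / eps)   # used when no upper bound on V_star is known
    log_U = math.log(U)           # equals (1 / eps) * ln K
    zeta_s2 = K / (S**2 * A * log_U)
    zeta_s3 = K / (S**3 * A * log_U)
    return U, zeta_s2, zeta_s3

print(slp_parameters(K=100_000, S=10, A=5, eps=0.5))
```

One of the two regret bounds uses the $S^2$ variant of $\zeta$ and the other the $S^3$ variant, matching the "respectively" in the excerpt.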
Introduction
Imagine a web application with recurring user visits (epochs). During each visit, the app can choose from different content to present to the user (actions), which might lead to a desired interaction such as a purchase or click on an ad (reward). The user's behaviour depends on their internal state, which is influenced by the content provided to them. Inevitably, at some point the user will abandon the session.
It is natural to model this as an episodic reinforcement-learning problem. However, the length of each episode is random and depends on the agent's actions. To deal with the random episode length, and hence potentially unbounded cumulative reward in a single episode, one could either consider a fixed horizon problem by clipping the length of each episode, or consider discounted rewards. Both approaches introduce biases to the actual objective of the agent and we consider a third option: the stochastic longest path (SLP) setting, which is analogous to the stochastic shortest path (SSP) problem (e.g., Rosenberg et al., 2020;Cohen et al., 2021;Tarbouriech et al., 2021b;Chen et al., 2021a) but with positive rewards instead of costs.
More generally, we can consider a setting in which there are both negative rewards (costs) as well as positive rewards. For example, users without a subscription using the free part of an application might induce an overall negative reward due to the cost of infrastructure. However, the hope is that the user is convinced by the free service to upgrade to a subscription, which provides revenue. We call this setting the stochastic path (SP) problem. We make the following contributions:
1. We formalize the general SP problem and provide a simple unified algorithm for it, with SSP and SLP as special cases. 2. We present the first regret upper bounds for the SP and SLP problems and show through lower bounds that they are minimax-optimal up to log-factors and lower-order terms. Technically, our analysis gives the first near-optimal near-horizon-free regret bound for episodic MDPs when the reward can be positive or negative. In comparison, previous analyses in Zhang et al. (2021); Tarbouriech et al. (2021b); Chen et al. (2021a) can only get near-horizon-free bounds for all-non-negative or all-non-positive rewards (see Section 3 for more discussion).
3. For SSP, when the scale of the sum of rewards B ⋆ is unknown, we derive an improved procedure to adapt to B ⋆ . Unlike prior work (Tarbouriech et al., 2021b;Chen et al., 2021a), our adaptation procedure allows us to recover the regret bound achieved with a known B ⋆ .
4. For SLP, we also derive an algorithm that adapts to unknown B⋆. This adaptation is qualitatively different than in the SSP case. In fact, we show through a lower bound that adaptivity to an unknown B⋆ comes at an unavoidable price in SLP.
Contributions 3 and 4 above jointly formalize a distinction between SSP and SLP when the scale of cumulative rewards is unknown. An overview of the main regret bounds derived in this work and a comparison to existing results is available in Table 1.
Related work
The SP problem and its special cases SSP and SLP are episodic reinforcement learning settings. When the horizon, i.e., the length of each episode, is fixed and known, these problems have been extensively studied (Dann and Brunskill, 2015; Azar et al., 2017; Jin et al., 2018; Efroni et al., 2021; Zanette and Brunskill, 2019; Zhang et al., 2020). Among these works on finite-horizon tabular RL, the recent line of work on horizon-free algorithms (Wang et al., 2020; Zhang et al., 2020, 2022) is of particular interest. These works assume that rewards are non-negative and their cumulative sum is bounded by 1, and aim for regret that only incurs a logarithmic dependency on the horizon. Although many techniques developed there are useful for our setting as well, the SLP and general SP problems are more difficult since the reward sum is not bounded by 1, and there are potentially negative rewards. The SSP problem has seen a number of publications recently (Rosenberg et al., 2020; Cohen et al., 2021; Tarbouriech et al., 2020, 2021a; Vial et al., 2022; Chen et al., 2021a,b, 2022a; Chen and Luo, 2021, 2022; Jafarnia-Jahromi et al., 2021; Min et al., 2022), for which we refer to Tarbouriech et al. (2021b); Chen et al. (2021a) for a detailed comparison.
Preliminaries
We consider a stochastic path (SP) problem with a finite state space S, a finite action set A, an initial state s_init ∈ S, a terminal state g (for notational simplicity, we let g ∉ S), a transition kernel P : S × A → ∆_{S∪{g}}, and a reward function r : S × A → [−1, 1]. We define S = |S| and A = |A|. In an episode, the player starts from the initial state s_1 = s_init.¹ At the i-th step of an episode, the player sees the current state s_i ∈ S, takes an action a_i ∈ A, which leads to a reward value r(s_i, a_i) and generates the next state s_{i+1} according to s_{i+1} ∼ P_{s_i,a_i}(·). The episode terminates right after the player reaches state g (no action is taken on g). We assume that r is known to the learner, while P is not. We call the problem stochastic longest path (SLP) if r(s, a) ≥ 0 for all s, a, and call it stochastic shortest path (SSP) if r(s, a) ≤ 0 for all s, a.
A history-dependent deterministic policy π = (π_1, π_2, ...) is a mapping from state-action histories to actions, i.e., π_i : (S × A)^{i−1} × S → A; we use Π_HD to denote the set of all history-dependent deterministic policies. A stationary deterministic policy π is a mapping from states to actions, i.e., π : S → A; we use Π_SD to denote the set of all stationary deterministic policies. The state value function of a policy π ∈ Π_HD on state s ∈ S is defined as
\[
V^\pi(s) \triangleq \mathbb{E}^\pi\left[\sum_{i=1}^{\tau} r(s_i, a_i) \,\middle|\, s_1 = s\right],
\]
where τ = min{i : s_{i+1} = g}, i.e., the timestep right before reaching the terminal state g (or ∞ if g is never reached), and E^π denotes expectation under policy π. Naturally, V^π(g) ≜ 0.
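For intuition, here is a small worked example (ours, not from the paper): consider an SLP with a single non-terminal state s and a single action a with r(s, a) = 1, where P_{s,a}(g) = ε and P_{s,a}(s) = 1 − ε for some ε ∈ (0, 1). Then τ is geometric with mean 1/ε, and the fixed-point (Bellman) equation gives
\[
V^\pi(s) = 1 + (1-\epsilon)V^\pi(s) + \epsilon V^\pi(g) = 1 + (1-\epsilon)V^\pi(s) \;\implies\; V^\pi(s) = \frac{1}{\epsilon}.
\]
This ε-termination structure is exactly the one used in the lower-bound constructions of Appendix D.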
A policy π is called proper if g is reached with probability 1 under policy π starting from any state. In this paper, we make the following assumption:
Assumption 1 All policies in Π HD are proper.
Assumption 1 is stronger than those in previous works on SSP (Rosenberg et al., 2020; Cohen et al., 2021; Tarbouriech et al., 2021b; Chen et al., 2021a), which only require the existence of a proper policy. We note that the algorithmic trick they developed (adding a small amount of cost to every step) can also help us weaken Assumption 1. More details on this are available in Appendix G. By Theorem 7.1.9 of Puterman (2014), Assumption 1 implies that there is a stationary deterministic optimal policy π⋆ ∈ Π_SD such that V^{π⋆}(s) ≥ V^π(s) for any π ∈ Π_HD and any s. We let V⋆(·) ≜ V^{π⋆}(·) and define
\[
V_\star \triangleq |V^\star(s_{\mathrm{init}})|, \qquad B_\star \triangleq \max_s |V^\star(s)|.
\]
To establish our result, we also need the following definitions:
Definition 1 Define
\[
R \triangleq \sup_{\pi\in\Pi_{\mathrm{HD}}} \sqrt{\mathbb{E}^\pi\left[\Big(\sum_{i=1}^{\tau} r(s_i,a_i)\Big)^2 \,\middle|\, s_1 = s_{\mathrm{init}}\right]}, \qquad
R_{\max} \triangleq \max_s \sup_{\pi\in\Pi_{\mathrm{HD}}} \sqrt{\mathbb{E}^\pi\left[\Big(\sum_{i=1}^{\tau} r(s_i,a_i)\Big)^2 \,\middle|\, s_1 = s\right]},
\]
\[
T_{\max} \triangleq \max_s \sup_{\pi\in\Pi_{\mathrm{HD}}} \mathbb{E}^\pi\left[\tau \,\middle|\, s_1 = s\right],
\]
where τ = min{i : s i+1 = g}, i.e., the timestep right before reaching the terminal state g.
In words, T max is the maximum (over all policies and all states) expected time to reach the terminal state; R and R max are two quantities that represent the range of the total reward in an episode. Notice that in the definition of R, the starting state is fixed as s init , while in the definition of R max , a maximum is taken over all possible starting states. Under Assumption 1, V ⋆ , B ⋆ , R, R max , and T max are all bounded. For simplicity, we assume that they are all ≥ 1.
Learning Protocol
The learning procedure considered in this paper is the same as in previous works on SSP. We let the learner interact with the SP for K episodes, each started from s_init. We define the regret as the difference between KV⋆(s_init) (the expected total reward obtained by the optimal policy) and the total reward of the learner. We keep a time index t to track the number of steps executed by the learner, and let s_t denote the state the learner sees at time t. Episode k starts at time t_k, and thus s_{t_k} = s_init. At time t, the learner takes an action a_t and transitions to s′_t ∼ P_{s_t,a_t}(·). If s′_t ≠ g, we let s_{t+1} = s′_t; otherwise, we let t_{k+1} = t + 1 be the first step of episode k + 1. The reader can refer to Algorithm 1 to see how the time indices are updated. The regret can be written as
\[
\mathrm{Reg}_K = \sum_{k=1}^{K}\left(V^\star(s_{\mathrm{init}}) - \sum_{t=t_k}^{e_k} r(s_t,a_t)\right),
\]
with e_k = t_{k+1} − 1. We let T be the total number of steps during the K episodes; that is, T = e_K.
Notation
For x > 0, define ln_+(x) ≜ ln(1 + x). We write x = O(y) or x ≤ O(y) to mean that x ≤ cy for some universal constant c, and write x = Õ(y) or x ≤ Õ(y) if x ≤ cy for some c that only contains logarithmic factors. E_t[·] denotes expectation conditioned on the history before time t. [n] denotes the set {1, 2, ..., n}. We define ῑ_{T,B,δ} ≜ (ln(SA/δ) + ln ln(BT)) × ln T. For P ∈ ∆_S and V ∈ R^S, V(P, V) denotes the variance of V under P, i.e.,
\[
\mathbb{V}(P,V) \triangleq \sum_{i=1}^{S} P(i)V(i)^2 - \Big(\sum_{i=1}^{S} P(i)V(i)\Big)^2.
\]
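To make the variance functional concrete, here is a small Python illustration (our own sketch, not part of the paper; numpy is the only dependency):

import numpy as np

def variance(P, V):
    # V(P, V) = sum_i P(i) V(i)^2 - (sum_i P(i) V(i))^2
    mean = np.dot(P, V)
    return np.dot(P, np.square(V)) - mean ** 2

P = np.array([0.5, 0.5])      # a two-point distribution
V = np.array([1.0, -1.0])     # a value vector with mean 0 under P
print(variance(P, V))         # prints 1.0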
Algorithm 1 VI-SP
1: input: B ≥ 1, 0 < δ < 1, sufficiently large universal constants c₁, c₂ that satisfy 2c₁² ≤ c₂.
2: Initialize: t ← 0, s₁ ← s_init.
3: For all (s, a, s′) with s ≠ g, set n(s, a, s′) = n(s, a) ← 0, Q(s, a) ← B, V(s) ← B.
4: Set V(g) ← 0.
5: for k = 1, ..., K do
6:   while true do
7:     t ← t + 1
8:     /* Q_t(s, a), V_t(s) are defined as the Q(s, a), V(s) at this point. */
9:     Take action a_t = argmax_a Q(s_t, a), receive reward r(s_t, a_t), and transit to s′_t.
10:    Update counters: n_t ≜ n(s_t, a_t) ← n(s_t, a_t) + 1, n(s_t, a_t, s′_t) ← n(s_t, a_t, s′_t) + 1.
11:    Define P̂_t(s′) ≜ n(s_t, a_t, s′)/n_t for all s′.
12:    Define b_t ≜ max{c₁ √(V(P̂_t, V) ι_t / n_t), c₂ B ι_t / n_t}, where ι_t = ln(SA/δ) + ln ln(B n_t).
13:    Q(s_t, a_t) ← min{r(s_t, a_t) + P̂_t V + b_t, Q(s_t, a_t)}
14:    V(s_t) ← max_a Q(s_t, a).
15:    if s′_t ≠ g then s_{t+1} ← s′_t;
16:    else s_{t+1} ← s_init and break;
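For readers who prefer code, the following Python sketch mirrors one inner iteration of Algorithm 1 (lines 7-16). This is our own illustration, not reference code from the paper: the environment interface env.step and the constant choices are assumptions, and Q, V, n, n_sas are assumed to be initialized as in lines 2-4 (Q and V filled with B, V[g] = 0, counters at zero).

import numpy as np

def vi_sp_step(env, state, Q, V, n, n_sas, B, c1=2.0, c2=8.0, delta=0.1):
    # One inner iteration of Algorithm 1. Shapes: Q is (S, A); V is (S+1,)
    # with V[-1] = V(g) = 0; n is (S, A) visit counts; n_sas is (S, A, S+1).
    a = int(np.argmax(Q[state]))                       # line 9: greedy action
    reward, next_state = env.step(state, a)            # assumed simulator interface
    n[state, a] += 1                                   # line 10: update counters
    n_sas[state, a, next_state] += 1
    n_t = n[state, a]
    P_hat = n_sas[state, a] / n_t                      # line 11: empirical transitions
    S, A = Q.shape
    iota = np.log(S * A / delta) + np.log(np.log(max(B * n_t, 3.0)))
    var = max(float(P_hat @ V**2 - (P_hat @ V) ** 2), 0.0)  # V(P_hat, V); guard float error
    b = max(c1 * np.sqrt(var * iota / n_t), c2 * B * iota / n_t)   # line 12: bonus
    Q[state, a] = min(reward + P_hat @ V + b, Q[state, a])         # line 13: monotone update
    V[state] = Q[state].max()                          # line 14
    return next_state                                  # caller resets to s_init on termination

Note that c1 = 2 and c2 = 8 are placeholder choices satisfying 2c₁² ≤ c₂; the analysis only requires the constants to be sufficiently large.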
3. An Algorithm for General Stochastic Path (SP)
Our algorithm for general SP is Algorithm 1, which is simplified from the SVI-SSP algorithm by Chen et al. (2021a). The inputs are a parameter B that is supposed to be an upper bound on B⋆, and a confidence parameter δ. The algorithm maintains an optimistic estimator Q(s, a) of Q⋆(s, a) := r(s, a) + E_{s′∼P_{s,a}}[V⋆(s′)] (i.e., with high probability, Q(s, a) ≥ Q⋆(s, a) always holds). In every step t, the learner chooses action a_t = argmax_a Q(s_t, a) based on the "optimism in the face of uncertainty" principle (Line 9), and updates the entry Q(s_t, a_t) after receiving the reward and the next state, with an additional exploration bonus b_t that keeps Q(s_t, a_t) optimistic (Line 11-Line 13). Although this algorithm is similar to the one in Chen et al. (2021a), the existing analysis only applies to SSP and SLP, and it is unclear how it handles general SP. Our main contribution in this section is to provide a regret guarantee for this algorithm in general SP. The regret guarantee of Algorithm 1 is given by the following theorem.
Theorem 2 If Assumption 1 holds, then Algorithm 1 with B ≥ B ⋆ ensures that with probability at least 1 − O(δ), for all K ≥ 1, with T being the total number of steps in K episodes,
\[
\mathrm{Reg}_K = O\left(R\sqrt{SAK\,\bar\iota_{T,B,\delta}} + R_{\max} SA \ln\Big(\frac{R_{\max}K}{R\delta}\Big)\bar\iota_{T,B,\delta} + BS^2A\,\bar\iota_{T,B,\delta}\right),
\]
where ῑ_{T,B,δ} ≜ (ln(SA/δ) + ln ln(BT)) × ln T.²
2. Technically, the total number of steps T is a random quantity but can be replaced by K·T_max with high probability if desired.
The proof of Theorem 2 can be found in Appendix A. Theorem 2 generalizes previous works on near-optimal near-horizon-free regret bounds for RL (Zhang et al., 2021; Tarbouriech et al., 2021b; Chen et al., 2021a). Specifically, with a closer look into their analysis, one can find that their analysis leads to a regret bound that depends on the magnitude of Σ_{i∈episode} |r(s_i, a_i)|, which can be much larger than |Σ_{i∈episode} r(s_i, a_i)| if the rewards have mixed signs. To address this issue, we develop new analysis techniques to get a near-horizon-free regret bound, which only scales with |Σ_{i∈episode} r(s_i, a_i)|. Other than this, the rest of the proofs are similar to those in Chen et al. (2021a). The first step of the analysis is to upper bound the regret, up to deviation terms, by Σ_{t=1}^T (V⋆(s_t) − Q⋆(s_t, a_t)). This is standard based on the performance difference lemma (Kakade and Langford, 2002).
We use the fact that B ≥ B ⋆ and the bonus construction to show that the value estimator Q(s, a) always upper bounds Q ⋆ (s, a) with high probability (Lemma 15). This relies on the monotonic value propagation idea developed by Zhang et al. (2021). Then following the analysis of Zhang et al. (2021) and Chen et al. (2021a), we can show the following high probability bound (Lemma 16):
\[
\sum_{t=1}^{T}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) \le \tilde{O}\left(\sqrt{SA\sum_{t=1}^{T}\mathbb{V}(P_{s_t,a_t}, V^\star)} + BS^2A\right). \tag{1}
\]
The way we bound Σ_t V(P_{s_t,a_t}, V⋆) is the key to handling the case where the rewards have mixed signs. Specifically, we show the following (Lemma 19):
\[
\sum_{t=1}^{T}\mathbb{V}(P_{s_t,a_t}, V^\star) \le \tilde{O}\left(R_{\max}\sum_{t=1}^{T}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + R^2K + R_{\max}^2\right). \tag{2}
\]
Combining Eq. (1) and Eq. (2) and solving for Σ_{t=1}^T (V⋆(s_t) − Q⋆(s_t, a_t)), we get an upper bound for it, which in turn gives a high-probability regret bound.
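For completeness, we spell out this last solving step (a standard argument, added here for the reader). Abbreviating X = Σ_{t=1}^T (V⋆(s_t) − Q⋆(s_t, a_t)) and substituting Eq. (2) into Eq. (1) gives, up to logarithmic factors,
\[
X \le \sqrt{SA\,R_{\max}\,X} + R\sqrt{SAK} + R_{\max}\sqrt{SA} + BS^2A,
\]
and since a√X ≤ X/2 + a²/2 for any a ≥ 0, every solution satisfies X ≤ Õ(R√(SAK) + SAR_max + BS²A), which yields the dominant R√(SAK) term of Theorem 2.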
Lower bound
In this subsection, we show that the upper bound established in Theorem 2 is nearly tight. The proofs for this subsection can be found in Appendix D.
Theorem 3 For any u ≥ 2 and K ≥ Ω(SA), we can construct a set of SP instances such that R ≤ u for all instances, and there exists a distribution over these instances such that the expected regret of any algorithm is at least Ω(u√(SAK)).

Specifically, in the lower bound construction of Theorem 3, V⋆ and B⋆ are of order O(1) for all u ≤ √(K/(SA)), showing that V⋆ and B⋆ are insufficient to characterize the regret bound for general SP problems. As we will see in the following sections, this contrasts with the special cases SLP and SSP, where except for logarithmic terms, the coefficients in the regret bound can be completely characterized by V⋆ and B⋆.
The quantity R in the regret upper and lower bounds (Theorem 2 and Theorem 3) is undesirable because its definition involves a supremum over all policies, which might be very large. Is it possible to refine the upper bound so that it only depends on quantities that correspond to the optimal policy? Specifically, we define
\[
R_\star \triangleq \max_s \sqrt{\mathbb{E}^{\pi^\star}\left[\Big(\sum_{i=1}^{\tau} r(s_i,a_i)\Big)^2 \,\middle|\, s_1 = s\right]},
\]
and ask: can the regret bound depend only on R⋆? Notice that Theorem 3 is uninformative for this question because R ≈ R⋆ in its construction. The next theorem gives a negative answer when the learner is agnostic of the value of R⋆.
Theorem 4 Let u ≥ 2 be arbitrarily chosen, and let K ≥ Ω(SA). For any algorithm that obtains an expected regret bound of Õ(u√(SAK)) for all problem instances with R⋆ = R_max ≤ u, there exists a problem instance with R⋆ = O(1) and R_max ≤ u on which the expected regret is at least Ω̃(u√(SAK)).
Given Theorem 4, an open question is whether Õ(R⋆√(SAK)) is achievable when the learner has information about R⋆. Note that our algorithm, Algorithm 1, only requires knowledge of B⋆, which, in general, does not provide information about R⋆ (e.g., in the construction of Theorem 4, B⋆ = O(1) for all u ≤ √(K/(SA))). Therefore, an algorithm with such a refined guarantee would be quite different from our algorithm.
Stochastic Longest Path (SLP)
For the special case SLP where r(·,·) ≥ 0, we first demonstrate that our general result in Theorem 2 already gives a nearly tight bound. The following lemma connects the notions of R, R_max in general SP to V⋆, B⋆ in SLP.

Lemma 5 In SLP, R ≤ O(√(V⋆B⋆ ln_+(B⋆/V⋆))) and R_max ≤ O(B⋆).

Lemma 5 together with Theorem 2 immediately implies the regret guarantee for SLP. Specifically, assuming that B ≥ B⋆, combining Theorem 2 and Lemma 5 yields
\[
\mathrm{Reg}_K = O\left(\sqrt{V_\star B_\star SAK\ln_+(B_\star/V_\star)\,\bar\iota_{T,B,\delta}} + B_\star SA\ln\Big(\frac{B_\star K}{V_\star\delta}\Big)\bar\iota_{T,B,\delta} + BS^2A\,\bar\iota_{T,B,\delta}\right). \tag{3}
\]
The logarithmic terms in Eq. (3) can be slightly improved if we follow a different approach to bound the sum of variances Σ_t V(P_{s_t,a_t}, V⋆), that is, using Lemma 20 instead of Lemma 19. Note that the proof of Lemma 20 is similar to those of previous works (Zhang et al., 2021; Tarbouriech et al., 2021b; Chen et al., 2021a), which leads to a regret bound that depends on the magnitude of Σ_{i∈episode} |r(s_i, a_i)| instead of |Σ_{i∈episode} r(s_i, a_i)|. Therefore, while it does not work for general SP, we can use it for SLP. Comparing Eq. (3) and the bound in Theorem 6, we see that specializing our general result in Theorem 2 to SLP only leads to looseness in logarithmic factors.
Theorem 6 If Assumption 1 holds and r(·, ·) ≥ 0, then Algorithm 1 with B ≥ B ⋆ ensures that with probability at least 1 − δ, for all K ≥ 1, with T being the total number of steps in K episodes,
\[
\mathrm{Reg}_K = O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right). \tag{4}
\]
The proof of Theorem 6 is in Appendix B.
Algorithm 2 Procedure to estimate V⋆ in SLP
input: ζ ≥ 1, U > 1.
for i = 1, ..., ⌈log₂ U⌉ do
  Initiate an instance of Algorithm 1 with B = 2^i ζ and probability parameter δ′ = δ/⌈log₂ U⌉ (call this instance ALG).
  Run ALG until N ≥ 16c²ζS²A ῑ_{M,B,δ′}, where N is the number of episodes, M is the total number of steps, and c is the universal constant hidden in the O(·) notation in Eq. (4).
  Let r̄_i be the average reward of ALG in these N episodes (i.e., the total reward divided by N).
return V̂ ≜ 2 max_i {r̄_i}.
Algorithm 3 VI-SLP for unknown B⋆
input: ζ ≥ 1, U > 1.
Run Algorithm 2 with inputs ζ and U, and get output V̂.
Run Algorithm 1 with input B = V̂ζ in the rest of the episodes.
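As a compact illustration of the doubling scheme in Algorithms 2 and 3 (our own sketch, not reference code: run_vi_sp is an assumed wrapper that runs Algorithm 1 with a given B for a given number of episodes and returns the average episodic reward; for simplicity the sketch freezes the episode budget, whereas Algorithm 2 uses the adaptive threshold N ≥ 16c²ζS²A ῑ_{M,B,δ′}):

import math

def estimate_v_star(run_vi_sp, zeta, U, num_episodes):
    # Algorithm 2 sketch: for B = 2^i * zeta, i = 1, ..., ceil(log2 U),
    # record the average reward and return 2 * max_i r_bar_i.
    r_bars = []
    for i in range(1, math.ceil(math.log2(U)) + 1):
        B = (2 ** i) * zeta
        r_bars.append(run_vi_sp(B=B, episodes=num_episodes))
    return 2 * max(r_bars)

def vi_slp_unknown_b_star(run_vi_sp, zeta, U, num_episodes, remaining_episodes):
    # Algorithm 3 sketch: estimate V_star coarsely, then set B = V_hat * zeta.
    v_hat = estimate_v_star(run_vi_sp, zeta, U, num_episodes)
    return run_vi_sp(B=v_hat * zeta, episodes=remaining_episodes)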
Algorithm without knowledge of B ⋆
While Theorem 6 gives a near-optimal bound, it is unclear how to make the algorithm work if prior knowledge on B⋆, which we need in order to set the value of B, is unavailable. Here, we first present a passive way to set B that is simple but leads to a highly sub-optimal bound. Observe from Theorem 6 that B only appears in the "lower-order" term in the regret bound. Therefore, a simple idea is to set B to be something large (of order √(K/(S³A))) with the hope that B ≥ B⋆ will hold in a wide range of cases. With this choice, if B⋆ ≤ B indeed holds, then we enjoy a regret bound of Õ(√(V⋆B⋆SAK) + BS²A) = Õ(√(V⋆B⋆SAK) + √(SAK)) = Õ(√(V⋆B⋆SAK)); if B⋆ > B, then we simply bound the regret by V⋆K ≤ O(V⋆B⋆²S³A), where the last inequality is implied by B⋆ ≥ B = Θ(√(K/(S³A))). Overall, this simple approach gives a regret bound of
\[
O\left(\sqrt{V_\star B_\star SAK} + V_\star B_\star^2 S^3 A\right). \tag{5}
\]
While the dominant term is optimal, the lower-order term has cubic dependency (i.e., V⋆B⋆²) on the scale of the cumulative reward, which is unnatural and can easily overwhelm the dominant term: this happens when V⋆B⋆²S³A ≥ √(V⋆B⋆SAK), or equivalently B⋆ ≥ (K/(V⋆S⁵A))^{1/3}. Previous prior-knowledge-free algorithms for SSP (Cohen et al., 2021; Tarbouriech et al., 2021b; Chen et al., 2021a) also suffer from this issue and have at least cubic dependency on the scale of the cumulative reward.
In this subsection, we introduce a way to obtain a regret guarantee that only (nearly) linearly depends on the scale. Observe that in SLP, B⋆ corresponds to the maximum total expected reward the learner can get starting from any state. Clearly, the learner needs to have some knowledge about the optimal policy in order to estimate this quantity. Fortunately, it need not be estimated accurately; an estimate up to a constant factor suffices. Therefore, a reasonable plan is to coarsely estimate B⋆ up to a constant factor, and then use the estimate to set B. Notice that estimating B⋆ requires estimating V⋆(s) for all s since B⋆ = max_s V⋆(s).
While we could indeed make this idea work (details omitted), we find that an even more economical solution is to just estimate V⋆ = V⋆(s_init) and set B to be something large compared to this estimate. Below we explain this idea. Let us first assume that B⋆/V⋆ ≤ ζ for some fixed ζ (this will be relaxed later). Now consider running Algorithm 1 for N = Θ̃(ζS²A) episodes with parameter B = Vζ for some value V that we choose. If we happen to choose a V ∈ [V⋆, 2V⋆], then we have B = Vζ ≥ V⋆ζ ≥ B⋆, and thus the regret bound in Theorem 6 holds. Let r̄ be the average reward in these N episodes (i.e., total reward divided by N). Then Theorem 6 gives
\[
V_\star - \bar{r} \le \tilde{O}\left(\sqrt{\frac{V_\star B_\star SA}{N}} + \frac{BS^2A}{N}\right) = O\left(\sqrt{\frac{V_\star B_\star}{\zeta S}} + \frac{B}{\zeta}\right) = O\left(\frac{V_\star}{\sqrt{S}} + V\right) = O(V_\star), \tag{6}
\]
where in the first equality we use N = Θ̃(ζS²A), in the second equality we use B⋆/V⋆ ≤ ζ and B = Vζ, and in the third equality we use V ≤ 2V⋆. By setting N large enough, we can ensure that the O(V⋆) on the right-hand side is no more than (1/2)V⋆, which then gives (1/2)V⋆ ≤ r̄ ≤ (3/2)V⋆. On the other hand, if we choose some V that is not in the range [V⋆, 2V⋆] and set B = Vζ, we may not have a good guarantee like Eq. (6). However, the following reversed inequality must hold no matter how large V is:
\[
V_\star - \bar{r} \ge -\tilde{O}\left(\sqrt{\frac{V_\star B_\star}{N}} + \frac{B_\star}{N}\right) = -\tilde{O}\left(\sqrt{\frac{V_\star B_\star}{\zeta S^2A}} + \frac{B_\star}{\zeta S^2A}\right) = -\tilde{O}(V_\star), \tag{7}
\]
where the first inequality is by the fact that V⋆ ≥ E[r̄] (because V⋆ is the expected value of the optimal policy) and that we can use Freedman's inequality to lower bound V⋆ − r̄ (details given in the formal proof). This inequality gives r̄ ≤ O(V⋆) no matter what V we use. With the observations from Eq. (6) and Eq. (7), we have the following strategy to estimate V⋆ given that B⋆/V⋆ ≤ ζ holds: we perform the procedure described above for every V ∈ {1, 2, 4, 8, ...}. Let r̄_i denote the average reward when we use V = 2^i. By the argument in Eq. (6), at least one of the r̄_i's is of order Θ(V⋆); by the argument in Eq. (7), all r̄_i's are of order O(V⋆). Combining them, we have V⋆ = Θ(max_i{r̄_i}). This procedure to estimate V⋆ is formalized in Algorithm 2, with its guarantee given in the following lemma:
Lemma 7 Suppose that B⋆/V⋆ ≤ ζ and V⋆ ≤ U. Then Algorithm 2 with inputs ζ and U ensures that its output V̂ satisfies V⋆ ≤ V̂ ≤ 3V⋆.
With V̂ from Algorithm 2 being a coarse estimate of V⋆, we can use it to set the parameter B in Algorithm 1 as B = V̂ζ. Our overall algorithm is presented in Algorithm 3. However, notice that Lemma 7 only gives a meaningful guarantee when the two conditions B⋆/V⋆ ≤ ζ and V⋆ ≤ U hold; they do not hold for all instances. In the theorem below, we show that with appropriate choices of ζ and U, the additional regret due to their violation is well-bounded.
Theorem 8 In the case when the learner has access to an absolute upper bound for V⋆ (for example, T_max is an absolute upper bound for V⋆), by setting U to be that upper bound and setting ζ = √(K/(S²A ln U)), Algorithm 3 ensures with probability at least 1 − O(δ),
\[
\mathrm{Reg}_K = O\left(B_\star\sqrt{S^2AK\ln U}\,\bar\iota_{T,KU,\delta}\right).
\]
Alternatively, with ζ = √(K/(S³A ln U)), Algorithm 3 ensures with probability at least 1 − O(δ),
\[
\mathrm{Reg}_K = O\left(\sqrt{V_\star B_\star SAK\ln U}\,\bar\iota_{T,KU,\delta} + \frac{B_\star^2 S^3A\ln U}{V_\star}\,\bar\iota_{T,KU,\delta}\right).
\]
In the case when the learner has no access to an absolute upper bound for V⋆, we set U = K^{1/ε} for some parameter ε ∈ (0, 1). With ζ = √(K/(S²A ln U)) and ζ = √(K/(S³A ln U)), Algorithm 3 ensures with probability at least 1 − O(δ),
\[
\mathrm{Reg}_K = O\left(B_\star\sqrt{\epsilon^{-1}S^2AK\ln K}\,\bar\iota_{T,K,\delta\epsilon} + V_\star^{1+\epsilon}\right) \quad\text{and}\quad
\mathrm{Reg}_K = O\left(\sqrt{\epsilon^{-1}V_\star B_\star SAK\ln K}\,\bar\iota_{T,K,\delta\epsilon} + \epsilon^{-1}\frac{B_\star^2S^3A\ln K}{V_\star}\,\bar\iota_{T,K,\delta\epsilon} + V_\star^{1+\epsilon}\right),
\]
respectively.

The proof can be found in Appendix B. In Theorem 8, we obtain two regret bounds for SLP without knowledge of B⋆, trading the lower-order term against the dominant term; both should be compared to the bound in Theorem 6 for the case with a known B⋆. Is it possible to close the gap between the "known B⋆" and the "unknown B⋆" cases? In Section 4.2, we will show that this is impossible, by giving a regret lower bound for algorithms agnostic of B⋆. The lower bound can be strictly larger than the upper bound with knowledge of B⋆, thus formally identifying the price of information about B⋆ for SLP.

Finally, we remark on the source of suboptimality in the bounds in Theorem 8. The bounds we get are, up to logarithmic factors, B⋆√(S²AK) and √(V⋆B⋆SAK) + B⋆²S³A/V⋆. The additional S dependencies come from the S² in the lower-order term in Theorem 6. It is conjectured by previous work (Zhang et al., 2021) that this S² in the lower-order term can be improved to S. If this conjecture is true, then our bounds in Theorem 8 can be improved to B⋆√(SAK) and √(V⋆B⋆SAK) + (B⋆²/V⋆)SA. As we will see in Section 4.2, these bounds are unimprovable when B⋆ is unknown.
Lower bound for algorithms agnostic of B ⋆
If the magnitude of B⋆ is known, Theorem 6 shows that a regret bound of Õ(√(V⋆B⋆SAK) + B⋆S²A) is possible. In this section, we show that there is a price to pay for adaptivity.
Theorem 9 In SLP, for any algorithm agnostic to B⋆ that obtains a regret bound of Õ(ν√(SAK)) for any problem instance where B⋆ = V⋆ = ν and sufficiently large K, there exists a problem instance with V⋆ ≤ 1 + 2ν and B⋆ = Õ(ν√(K/(SA))) such that the regret is at least Ω(νK).
See Appendix E for the proof. This theorem implies that being agnostic to B⋆ is fundamentally harder than knowing an order-optimal bound on B⋆, since Theorem 6 obtains sub-linear regret of order Õ(ν(SA)^{1/4}K^{3/4}) in the hard instance mentioned in Theorem 9. Considering two classes of upper bounds, one in which we always scale with √K without dominating lower-order terms, and one in which we allow a constant cost for adapting to an unknown B⋆, we directly derive the following results.

Corollary 10 Any algorithm with an asymptotic upper bound of Õ(B⋆^α V⋆^{1−α} √(SAK) + o(B⋆²)) satisfies α ≥ 1, and any algorithm with an upper bound of Õ(√(V⋆B⋆SAK) + (B⋆/V⋆)² poly(V⋆, S, A)) requires the constant term to be at least Ω̃(B⋆²SA/V⋆).
Proof For the first part, note that for any α < 1, the regret bound for the bad case in Theorem 9 with ν = O(1) reads
\[
B_\star^\alpha V_\star^{1-\alpha}\sqrt{SAK} + o(B_\star^2) = O\big(K^{\frac{1}{2}(1+\alpha)}(SA)^{\frac{1}{2}(1-\alpha)}\big) + o(K),
\]
which is sublinear in K and hence constitutes a contradiction. Similarly, in the second case, we can make K large enough such that the constant term (i.e., (B⋆/V⋆)² poly(V⋆, S, A)) is absorbed by the dominant term (i.e., √(V⋆B⋆SAK)) in the V⋆ = B⋆ = ν environment, which means we can apply Theorem 9, and the poly(V⋆, S, A) term must be of order Ω(V⋆SA) to satisfy the Ω(νK) lower bound.
Stochastic Shortest Path (SSP)
SSP has been studied extensively recently. The works by Tarbouriech et al. (2021b) and Chen et al. (2021a) have achieved a near-optimal regret bound Õ(√(V⋆B⋆SAK) + B⋆S²A) when knowledge of B⋆ is available to the learner.³ When such knowledge is unavailable, they design a way to adjust B on the fly, and achieve a regret bound of Õ(√(V⋆B⋆SAK) + B⋆³S³A). In this section, we improve their results, showing that for SSP, a bound of Õ(√(V⋆B⋆SAK) + B⋆S²A) is possible even without prior knowledge of B⋆. This is a contrast with Theorem 9 and Corollary 10, which show that for SLP, without prior knowledge of B⋆, this bound is unachievable.
Our algorithm is Algorithm 4. It is almost identical to Algorithm 1 with three main differences. First, B is no longer an input parameter in Algorithm 4, but an internal variable updated on the fly. Second, the initial Q(s, a), V(s) values are initialized as 0 in Algorithm 4 instead of B, which is natural since Q⋆(s, a), V⋆(s) ≤ 0 for SSP. Third, in Line 12-Line 16, the algorithm tries to find a large enough B to set the bonus term b_t, so that the resulting |Q(s_t, a_t)| is upper bounded by B.
The operation in Line 12-Line 16 and the corresponding analysis are the keys to our improvement. Recall that we hope to always have 0 ≥ Q_t(s, a) ≥ Q⋆(s, a) to ensure optimism. Also, we want B to be not too much larger than B⋆ to avoid regret overhead. Let us assume 0 ≥ Q_t(s, a) ≥ Q⋆(s, a) for all s, a at time t. In Line 14, we attempt to calculate Q_{t+1}(s_t, a_t) (denoted as Q^tmp_{t+1}(s_t, a_t) below). If B ≥ B⋆ holds, then since r(s_t, a_t) + P̂_t V_t + b_t ≥ Q⋆(s_t, a_t) by the same argument as in the proof of Lemma 15, and Q_t(s_t, a_t) ≥ Q⋆(s_t, a_t) by assumption, we have 0 ≥ Q^tmp_{t+1}(s_t, a_t) ≥ Q⋆(s_t, a_t) ≥ −B⋆ ≥ −B by the definition of Q^tmp_{t+1}(s_t, a_t) in Line 14. Thus, B will not be increased in Line 16. In short, if optimism always holds (i.e., Q_t(s, a) ≥ Q⋆(s, a)), we only increase B in Line 16 when B < B⋆, and thus B < 2B⋆ at all times.
The question then is how to show that optimism holds along the way. Because we start from B = 1 and only increase B when we are sure that B < B⋆, one might suspect that the bonus term b_t defined through B is insufficient at the beginning, and the optimism might fail. Because of this, Tarbouriech et al. (2021b) and Chen et al. (2021a) bound the regret in the B < B⋆ regime by a term linear in K. However, one key observation in the analysis is that the original purpose of the bonus term is to compensate for the deviation of (P_{s_t,a_t} − P̂_t)V_t, where |V_t| ≤ B by our algorithm.

3. The upper bound reported in Tarbouriech et al. (2021b) and Chen et al. (2021a) is of order B⋆√(SAK) + B⋆S²A, which is larger than what we report here. This is simply because in their analysis, they upper bound V⋆ by B⋆. We redo their analysis and report their refined dependence on V⋆ here. Similarly, the lower bound obtained in Rosenberg et al. (2020) is B⋆√(SAK) because in their lower bound construction, they only consider instances where V⋆ and B⋆ are of the same order. Hence, the upper bound we obtain here does not violate their lower bound.
Algorithm 4 VI-SSP for unknown B⋆
1: input: 0 < δ < 1, sufficiently large universal constants c₁, c₂ that satisfy 2c₁² ≤ c₂.
2: Initialize: B ← 1, t ← 0, s₁ ← s_init.
3: For all (s, a, s′) with s ≠ g, set n(s, a, s′) = n(s, a) ← 0, Q(s, a) ← 0, V(s) ← 0.
4: Set V(g) ← 0.
5: for k = 1, ..., K do
6:   while true do
7:     t ← t + 1
8:     /* Q_t(s, a), V_t(s), B_t are defined as the Q(s, a), V(s), B at this point. */
9:     Take action a_t = argmax_a Q(s_t, a), receive reward r(s_t, a_t), and transit to s′_t.
10:    Update counters: n_t ≜ n(s_t, a_t) ← n(s_t, a_t) + 1, n(s_t, a_t, s′_t) ← n(s_t, a_t, s′_t) + 1.
11:    Define P̂_t(s′) ≜ n(s_t, a_t, s′)/n_t for all s′.
12:    while true do
13:      Define b_t ≜ max{c₁ √(V(P̂_t, V) ι_t / n_t), c₂ B ι_t / n_t}, where ι_t = ln(SA/δ) + ln ln(B n_t).
14:      Q^tmp(s_t, a_t) ← min{r(s_t, a_t) + P̂_t V + b_t, Q(s_t, a_t)}
15:      if |Q^tmp(s_t, a_t)| ≤ B then Q(s_t, a_t) ← Q^tmp(s_t, a_t) and break;
16:      B ← 2B.
17:    /* b_t and ι_t are defined as the b_t and ι_t at this point. */
18:    V(s_t) ← max_a Q(s_t, a).
19:    if s′_t ≠ g then s_{t+1} ← s′_t;
20:    else s_{t+1} ← s_init and break;
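As a concrete rendering of the new inner loop (Line 12-Line 16), the following Python sketch doubles B until the tentative update fits in [−B, 0]. This is our own illustration under the same assumptions as the earlier sketch (num_sa stands for SA, and the constants are placeholders satisfying 2c₁² ≤ c₂):

import numpy as np

def ssp_bonus_update(Q_sa, reward, P_hat, V, n_t, B, num_sa, c1=2.0, c2=8.0, delta=0.1):
    # Lines 12-16 of Algorithm 4: grow B until |Q_tmp| <= B, then commit.
    while True:
        iota = np.log(num_sa / delta) + np.log(np.log(max(B * n_t, 3.0)))
        var = max(float(P_hat @ V**2 - (P_hat @ V) ** 2), 0.0)  # V(P_hat, V)
        b = max(c1 * np.sqrt(var * iota / n_t), c2 * B * iota / n_t)
        Q_tmp = min(reward + P_hat @ V + b, Q_sa)
        if abs(Q_tmp) <= B:
            return Q_tmp, B     # commit the update; B unchanged
        B *= 2                  # scale estimate too small: double B and recompute the bonus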
Since V_t is history-dependent, a common trick in the analysis (Azar et al., 2017; Zhang et al., 2021) is to replace V_t by V⋆ and bound the deviation of (P_{s_t,a_t} − P̂_t)V⋆ using Freedman's inequality, for which a bonus term defined through B⋆ is required. To deal with our case, instead of replacing V_t by V⋆, we replace it by V⋆_[B] ≜ max{−B, V⋆} and use Freedman's inequality on (P_{s_t,a_t} − P̂_t)V⋆_[B], for which a bonus term defined through B suffices. We can further connect V⋆_[B] back to V⋆ using the property V⋆_[B] ≥ V⋆. The details are provided in Lemma 22, where we show that with high probability, Q_t(s, a) ≥ Q⋆(s, a) holds for all t, s, a, even if B is smaller than B⋆ along the learning process. The formal guarantee of our algorithm is given by the following theorem, with proof deferred to Appendix C.
Theorem 11 Algorithm 4 guarantees for SSP problems that with probability at least 1 − O(δ), Reg_K = Õ(√(V⋆B⋆SAK) + B⋆S²A).
Conclusions and Open Problems
In this work, we formulate the SP problem and give the first near-optimal regret bound for it. For special cases SLP and SSP, we further investigate the situation when the scale of the total reward B ⋆ is unknown. By improving previous adaptation results for SSP, and giving new lower bounds for SLP, we formally show a distinction between these two cases when B ⋆ is unknown.
In the general case, although our algorithm achieves near-worst-case-optimal bounds in terms of R, there is still the possibility of improving the bound using more refined quantities. We have ruled out the possibility of V⋆ and B⋆, and the possibility of R⋆ when its value is unknown, but perhaps there are other candidate quantities. Further, there is a discrepancy in the analysis between the general SP / SLP setting and the SSP setting: while our result for the general case recovers that for the SLP case (up to logarithmic factors), it does not imply that for the SSP case. This also hints that R does not always capture the true difficulty of every instance.
In general SP when B⋆ is unknown, can we achieve a bound of order Õ(R_max poly(S, A)√K), without any constant term that scales super-linearly with R_max? This would be analogous to the Õ(B⋆√(S²AK ln U)) bound we obtain for SLP in Theorem 8.
Appendix A. Upper bounds for General Stochastic Path
Definition 12 Let Q t , V t be the Q, V at the beginning of round t (see the comments in Algorithm 1).
Definition 13 Define ῑ_{T,B,δ} ≜ (ln(SA/δ) + ln ln(BT)) × ln T.
A.1. Optimism and regret decomposition
Lemma 14 Define
\[
f(P, V, n, \iota) = PV + \max\left\{c_1\sqrt{\frac{\mathbb{V}(P,V)\iota}{n}},\; c_2\frac{B\iota}{n}\right\}.
\]
If 2c₁² ≤ c₂ and −B ≤ V(·) ≤ B, then f(P, V, n, ι) is increasing in V.
Proof We compute the derivative of f(P, V, n, ι) with respect to V(s⋆):
\[
\begin{aligned}
\frac{\partial f(P,V,n,\iota)}{\partial V(s_\star)}
&= P(s_\star) + \mathbb{1}\left[c_1\sqrt{\tfrac{\mathbb{V}(P,V)\iota}{n}} > c_2\tfrac{B\iota}{n}\right]\times \frac{c_1 P(s_\star)\big(V(s_\star) - PV\big)\iota}{n\sqrt{\mathbb{V}(P,V)\iota/n}} \\
&\ge P(s_\star) + \mathbb{1}\left[c_1\sqrt{\tfrac{\mathbb{V}(P,V)\iota}{n}} > c_2\tfrac{B\iota}{n}\right]\times \frac{c_1^2 P(s_\star)\big(V(s_\star) - PV\big)}{c_2 B} \\
&\ge P(s_\star) - \frac{2c_1^2}{c_2}P(s_\star) \ge 0.
\end{aligned}
\]
Lemma 15 If B ≥ B ⋆ , then with probability at least 1 − O(δ), Q t (s, a) ≥ Q ⋆ (s, a) for all (s, a) and t.
Proof We use induction to prove this. When t = 1, Q 1 (s, a) = B ≥ B ⋆ ≥ Q ⋆ (s, a) for all s, a.
Suppose that Q_t(s, a) ≥ Q⋆(s, a) for all s, a (which implies V_t(s) ≥ V⋆(s) for all s). Since Q_{t+1} and Q_t only differ in the entry (s_t, a_t), we only need to check Q_{t+1}(s_t, a_t) ≥ Q⋆(s_t, a_t). This can be seen from the calculation below:
\[
\begin{aligned}
Q_{t+1}(s_t,a_t) &= r(s_t,a_t) + \hat{P}_t V_t + b_t = r(s_t,a_t) + \hat{P}_t V_t + \max\left\{c_1\sqrt{\frac{\mathbb{V}(\hat{P}_t, V_t)\iota_t}{n_t}},\; c_2\frac{B\iota_t}{n_t}\right\} \\
&\ge r(s_t,a_t) + \hat{P}_t V^\star + \max\left\{c_1\sqrt{\frac{\mathbb{V}(\hat{P}_t, V^\star)\iota_t}{n_t}},\; c_2\frac{B\iota_t}{n_t}\right\} \\
&\qquad \text{(by the induction hypothesis } V_t(\cdot)\ge V^\star(\cdot)\text{ and the monotone property, Lemma 14)} \\
&\ge r(s_t,a_t) + P_{s_t,a_t}V^\star \qquad \text{(by Lemma 28)} \\
&= Q^\star(s_t,a_t).
\end{aligned}
\]
Note that we can apply Lemma 28 only when n_t ≥ 4ι_t. If n_t < 4ι_t, then the bonus term itself is at least c₂B/4 ≥ B, which ensures optimism. This finishes the induction.
Lemma 16 Suppose that B ≥ B⋆. With probability at least 1 − O(δ),
\[
\sum_{t=1}^{T}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) \le O\left(\sqrt{SA\sum_{t=1}^{T}\mathbb{V}(P_t, V^\star)\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right),
\]
where ῑ_{T,B,δ} is the logarithmic term defined in Definition 13.
Proof Below, we denote P_t ≜ P_{s_t,a_t}.
\[
\begin{aligned}
\sum_{t=1}^T\big(Q_t(s_t,a_t) - Q^\star(s_t,a_t)\big)
&= \sum_{t=1}^T\big(Q_{t+1}(s_t,a_t) - Q^\star(s_t,a_t)\big) + \sum_{t=1}^T\big(Q_t(s_t,a_t) - Q_{t+1}(s_t,a_t)\big) \\
&\le \sum_{t=1}^T\big(\hat{P}_t V_t - P_t V^\star\big) + \sum_{t=1}^T b_t + \sum_{t=1}^T\sum_{s,a}\big(Q_t(s,a) - Q_{t+1}(s,a)\big) \\
&\le \underbrace{\sum_{t=1}^T \mathbb{1}_{s'_t}(V_t - V^\star)}_{\text{term}_1} + \underbrace{\sum_{t=1}^T (P_t - \mathbb{1}_{s'_t})(V_t - V^\star)}_{\text{term}_2} + \underbrace{\sum_{t=1}^T (\hat{P}_t - P_t)V^\star}_{\text{term}_3} \\
&\quad + \underbrace{\sum_{t=1}^T (\hat{P}_t - P_t)(V_t - V^\star)}_{\text{term}_4} + \underbrace{\sum_{t=1}^T b_t}_{\text{term}_5} + O(BSA),
\end{aligned}
\]
where the first inequality is because Q_{t+1}(s_t, a_t) ≤ r(s_t, a_t) + P̂_t V_t + b_t and Q⋆(s_t, a_t) = r(s_t, a_t) + P_t V⋆, and because Q_t(s, a) ≥ Q_{t+1}(s, a). Below, we bound the individual terms.
\[
\begin{aligned}
\text{term}_1 = \sum_{t=1}^T \mathbb{1}_{s'_t}(V_t - V^\star)
&\le \sum_{t=1}^T \mathbb{1}_{s_{t+1}}(V_t - V^\star) \qquad (V_t(g) - V^\star(g) = 0 \text{ and } V_t(s) - V^\star(s) \ge 0 \text{ by Lemma 15}) \\
&\le \sum_{t=1}^T \mathbb{1}_{s_t}(V_t - V^\star) + \sum_{t=1}^T\big(V_t(s_{t+1}) - V_t(s_t)\big) + \sum_{t=1}^T\big(V^\star(s_t) - V^\star(s_{t+1})\big) \\
&\le \sum_{t=1}^T \mathbb{1}_{s_t}(V_t - V^\star) + \sum_{t=2}^T\sum_s\big(V_{t-1}(s) - V_t(s)\big) + O(B) \qquad (V_t \text{ decreases with } t) \\
&\le \sum_{t=1}^T \mathbb{1}_{s_t}(V_t - V^\star) + O(BS). \tag{8}
\end{aligned}
\]
We have that (P_t − 1_{s′_t})(V_t − V⋆), conditioned on the history, is zero-mean; hence by Lemma 26, with probability at least 1 − O(δ),
\[
\text{term}_2 = \sum_{t=1}^T (P_t - \mathbb{1}_{s'_t})(V_t - V^\star) = O\left(\sqrt{\sum_{t=1}^T \mathbb{V}(P_t, V_t - V^\star)\,\iota_t} + B\iota_T\right).
\]
We have (P̂_t − P_t)V⋆ = (1/n_t) Σ_{r:(s_r,a_r)=(s_t,a_t)} (1_{s′_r} − P_r)V⋆, which again by Lemma 26 is bounded for a fixed state-action pair simultaneously over all time steps. Via a union bound over all states and actions, we have with probability 1 − O(δ),
\[
\text{term}_3 = \sum_{t=1}^T (\hat{P}_t - P_t)V^\star = O\left(\sum_{t=1}^T\left(\sqrt{\frac{\mathbb{V}(P_t, V^\star)\iota_t}{n_t}} + \frac{B_\star\iota_t}{n_t}\right)\right). \tag{9}
\]
For the next term, we use a union bound over all state-action pairs and apply Lemma 27, to obtain with probability 1 − O(δ),
\[
\text{term}_4 = \sum_{t=1}^T (\hat{P}_t - P_t)(V_t - V^\star) = O\left(\sum_{t=1}^T\left(\sqrt{\frac{\mathbb{V}(P_t, V_t - V^\star)\iota_t}{n_t}} + \frac{B\iota_t}{n_t}\right)\right). \tag{10}
\]
Next, using Lemma 27 again, we have with probability 1 − O(δ),
\[
\begin{aligned}
\mathbb{V}(\hat{P}_t, V_t) = \hat{P}_t(V_t - \hat{P}_t V_t)^2 &\le \hat{P}_t(V_t - P_t V_t)^2 = \mathbb{V}(P_t, V_t) + (\hat{P}_t - P_t)(V_t - P_t V_t)^2 \\
&\le \mathbb{V}(P_t, V_t) + 2B\big|(\hat{P}_t - P_t)(V_t - P_t V_t)\big| \\
&\le \mathbb{V}(P_t, V_t) + O\left(B\sqrt{\frac{S\,\mathbb{V}(P_t, V_t)\iota_t}{n_t}} + \frac{SB^2\iota_t}{n_t}\right)
\le O\left(\mathbb{V}(P_t, V_t) + \frac{SB^2\iota_t}{n_t}\right). \qquad \text{(AM-GM inequality)}
\end{aligned}
\]
By the definition of b_t, with probability at least 1 − O(δ),
\[
\begin{aligned}
\text{term}_5 = \sum_{t=1}^T b_t &= O\left(\sum_{t=1}^T\left(\sqrt{\frac{\mathbb{V}(\hat{P}_t, V_t)\iota_t}{n_t}} + \frac{B\iota_t}{n_t}\right)\right)
= O\left(\sum_{t=1}^T\left(\sqrt{\frac{\mathbb{V}(P_t, V_t)\iota_t}{n_t}} + \frac{B\sqrt{S}\iota_t}{n_t}\right)\right) \\
&= O\left(\sum_{t=1}^T\left(\sqrt{\frac{\mathbb{V}(P_t, V^\star)\iota_t}{n_t}} + \sqrt{\frac{\mathbb{V}(P_t, V_t - V^\star)\iota_t}{n_t}} + \frac{B\sqrt{S}\iota_t}{n_t}\right)\right).
\end{aligned}
\]
Collecting terms and using the Cauchy-Schwarz inequality, we get
\[
\sum_{t=1}^T\big(Q_t(s_t,a_t) - Q^\star(s_t,a_t)\big) \le \sum_{t=1}^T\big(V_t(s_t) - V^\star(s_t)\big) + O\left(\sqrt{SA\sum_{t=1}^T \mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B,\delta}} + \sqrt{S^2A\sum_{t=1}^T \mathbb{V}(P_t,V_t - V^\star)\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right).
\]
We further invoke Lemma 17 and bound the last expression by
\[
\sum_{t=1}^T\big(V_t(s_t) - V^\star(s_t)\big) + O\left(\sqrt{SA\sum_{t=1}^T \mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right).
\]
Finally, noticing that Q_t(s_t, a_t) = V_t(s_t) by the choice of a_t finishes the proof.
Lemma 17 With probability at least 1 − O(δ),
\[
\sum_{t=1}^T \mathbb{V}(P_t, V_t - V^\star) = O\left(\frac{1}{S}\sum_{t=1}^T \mathbb{V}(P_t, V^\star) + B^2S^2A\,\bar\iota_{T,B,\delta}\right).
\]
Proof Using Lemma 23 with X_t = V_t − V⋆, we get
\[
\begin{aligned}
\sum_{t=1}^T \mathbb{V}(P_t, V_t - V^\star)
&= O\left(B\sum_{t=1}^T\big|V_t(s_t) - V^\star(s_t) - P_t(V_t - V^\star)\big| + B\sum_{t=1}^T\sum_s\big|V_t(s) - V_{t+1}(s)\big| + B^2\ln(1/\delta)\right) \\
&= O\left(B\sum_{t=1}^T\big|(\hat{P}_t - P_t)V_t\big| + B\sum_{t=1}^T b_t + B^2S\ln(1/\delta)\right) \\
&\le O\left(B\sqrt{SA\sum_{t=1}^T \mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B,\delta}} + B\sqrt{S^2A\sum_{t=1}^T \mathbb{V}(P_t,V_t - V^\star)\,\bar\iota_{T,B,\delta}} + B^2S^2A\,\bar\iota_{T,B,\delta}\right)
\end{aligned}
\]
(by the same argument as in Eq. (9) and Eq. (10)). Solving for Σ_t V(P_t, V_t − V⋆), we get
\[
\sum_{t=1}^T \mathbb{V}(P_t, V_t - V^\star) = O\left(B\sqrt{SA\sum_{t=1}^T \mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B,\delta}} + B^2S^2A\,\bar\iota_{T,B,\delta}\right) = O\left(\frac{1}{S}\sum_{t=1}^T \mathbb{V}(P_t,V^\star) + B^2S^2A\,\bar\iota_{T,B,\delta}\right)
\]
(by AM-GM).
A.2. Bounding the sum of variance
In Appendix A.1, we have already shown that
\[
\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) = \tilde{O}\left(\sqrt{SA\sum_{t=1}^T \mathbb{V}(P_t, V^\star)} + BS^2A\right).
\]
In this subsection, we bound the sum of variances, showing that
\[
\sum_{t=1}^T \mathbb{V}(P_t, V^\star) = \tilde{O}\left(R^2K + R_{\max}\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + R_{\max}^2\right). \tag{11}
\]
We first establish some useful properties.
Lemma 18 Define the following notation:
\[
Y_k \triangleq \sum_{t=t_k}^{e_k} \mathbb{V}(P_t, V^\star), \qquad Z_k \triangleq \sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big), \tag{12}
\]
and recall that ln_+(x) ≜ ln(1 + x). We have
\[
\begin{aligned}
&\mathbb{E}_{t_k}[Z_k^2] \le O\left(R_{\max}\ln_+\Big(\tfrac{R_{\max}}{R}\Big)\,\mathbb{E}_{t_k}[Z_k] + R^2\right), &&(13)\\
&\mathbb{E}_{t_k}[Y_k] \le O\left(R_{\max}\ln_+\Big(\tfrac{R_{\max}}{R}\Big)\,\mathbb{E}_{t_k}[Z_k] + R^2\right), &&(14)\\
&\mathbb{E}_{t_k}[Y_k^2] \le O\left(R_{\max}^3\ln_+^2\Big(\tfrac{R_{\max}}{R}\Big)\,\mathbb{E}_{t_k}[Z_k] + R_{\max}^2R^2\ln_+\Big(\tfrac{R_{\max}}{R}\Big)\right), &&(15)\\
&Z_k \le R_{\max}\ln\tfrac{K}{\delta} \text{ for all } k\in[K] \text{ w.p. } \ge 1-\delta, &&(16)\\
&Y_k \le R_{\max}^2\ln\tfrac{K}{\delta} \text{ for all } k\in[K] \text{ w.p. } \ge 1-\delta. &&(17)
\end{aligned}
\]
Before proving Lemma 18, we point out that the key is to show Eq. (14), which, after summing over k, almost implies Eq. (11); the remaining claims are moment and high-probability bounds used to relate the sums to their conditional expectations.

Proof [Lemma 18] Proving Eq. (13) We apply Lemma 24 (b) with X_t = V⋆(s_t) − Q⋆(s_t, a_t). First, note that 0 ≤ V⋆(s_t) − Q⋆(s_t, a_t) ≤ 2B⋆ ≤ 2R_max.
Then note that for any t ′ in episode k,
\[
\begin{aligned}
\mathbb{E}_{t'}\left[\sum_{t=t'}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\right]
&= \mathbb{E}_{t'}\left[\sum_{t=t'}^{e_k}\big(V^\star(s_t) - r(s_t,a_t) - P_tV^\star\big)\right]
= \mathbb{E}_{t'}\left[\sum_{t=t'}^{e_k}\big(V^\star(s_t) - r(s_t,a_t) - V^\star(s'_t)\big)\right] \\
&= V^\star(s_{t'}) - \mathbb{E}_{t'}\left[\sum_{t=t'}^{e_k} r(s_t,a_t)\right] \le 2R_{\max}. \tag{18}
\end{aligned}
\]
Combining these two arguments and using Lemma 24 (b) (with c set to R), we get
\[
\mathbb{E}_{t_k}[Z_k^2] = \mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\Big)^2\right]
\le O\left(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\,\mathbb{E}_{t_k}\left[\sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\right] + R^2\right)
= O\left(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\,\mathbb{E}_{t_k}[Z_k] + R^2\right).
\]
Proving Eq. (14) Observe that
\[
\mathbb{E}_{t_k}\left[\sum_{t=t_k}^{e_k}\mathbb{V}(P_t, V^\star)\right] = \mathbb{E}_{t_k}\left[\sum_{t=t_k}^{e_k}\big(V^\star(s'_t) - P_tV^\star\big)^2\right] = \mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\big(V^\star(s'_t) - P_tV^\star\big)\Big)^2\right], \tag{19}
\]
where the last equality is because
\[
\mathbb{E}_{t_k}\big[(V^\star(s'_t) - P_tV^\star)(V^\star(s'_u) - P_uV^\star)\big] = 0
\]
for any u > t ≥ t_k. We continue with the following:
\[
\begin{aligned}
\mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\big(V^\star(s'_t) - P_tV^\star\big)\Big)^2\right]
&= \mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\big(V^\star(s_t) - P_tV^\star\big) - V^\star(s_{t_k})\Big)^2\right] \\
&= \mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t) + r(s_t,a_t)\big) - V^\star(s_{t_k})\Big)^2\right] \\
&\le 3\,\mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\Big)^2\right] + 3\,\mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k} r(s_t,a_t)\Big)^2\right] + 3V^\star(s_{t_k})^2 \\
&\le 3\,\mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\Big)^2\right] + 6R^2. \tag{20}
\end{aligned}
\]
Thus, combining the arguments above, we have proven
\[
\mathbb{E}_{t_k}[Y_k] \le O\big(\mathbb{E}_{t_k}[Z_k^2] + R^2\big).
\]
Further combining this with Eq. (13), we get Eq. (14).
Proving Eq. (15) For any t ′ in episode k, by the same calculation as in Eq. (19), Eq. (20) (but instead of starting time from t k , start from an arbitrary t ′ in episode k), we have
\[
\mathbb{E}_{t'}\left[\sum_{t=t'}^{e_k}\mathbb{V}(P_t, V^\star)\right] \le O\left(\mathbb{E}_{t'}\left[\Big(\sum_{t=t'}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\Big)^2\right] + R_{\max}^2\right) \le O(R_{\max}^2), \tag{21}
\]
where the last inequality is by Eq. (18) and Lemma 24 (c) with X t = V ⋆ (s t ) − Q ⋆ (s t , a t ). Thus,
\[
\begin{aligned}
\mathbb{E}_{t_k}[Y_k^2] = \mathbb{E}_{t_k}\left[\Big(\sum_{t=t_k}^{e_k}\mathbb{V}(P_t, V^\star)\Big)^2\right]
&\le O\left(R_{\max}^2\ln_+\Big(\frac{R_{\max}^2}{R^2}\Big)\,\mathbb{E}_{t_k}\left[\sum_{t=t_k}^{e_k}\mathbb{V}(P_t, V^\star)\right] + R^4\right)
\qquad\text{(by Eq. (21) and Lemma 24 (b) with } c = R^2\text{)} \\
&= O\left(R_{\max}^3\ln_+^2\Big(\frac{R_{\max}}{R}\Big)\,\mathbb{E}_{t_k}[Z_k] + R_{\max}^2R^2\ln_+\Big(\frac{R_{\max}}{R}\Big)\right). \qquad\text{(by Eq. (14))}
\end{aligned}
\]
Proving Eq. (16) This is directly by Eq. (18) and Lemma 24 (a).
Proving Eq. (17) This is directly by Eq. (21) and Lemma 24 (a).
Lemma 19 With probability at least 1 − O(δ),
\[
\sum_{t=1}^T \mathbb{V}(P_t, V^\star) \le O\left(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + R^2K + R_{\max}^2\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\ln\frac{\ln(R_{\max}K)}{\delta}\right).
\]
Proof Similar to Eq. (12), we define
\[
Y_k \triangleq \sum_{t=t_k}^{e_k}\mathbb{V}(P_t, V^\star), \qquad Z_k \triangleq \sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big).
\]
By Lemma 26, with probability at least 1 − O(δ),
\[
\begin{aligned}
\sum_{k=1}^K Y_k &\le \sum_{k=1}^K \mathbb{E}_{t_k}[Y_k] + O\left(\sqrt{\sum_{k=1}^K \mathbb{E}_{t_k}[Y_k^2]\,\ln\frac{\ln\sum_k \mathbb{E}_{t_k}[Y_k^2]}{\delta}} + \max_{k\in[K]}Y_k\,\ln\frac{\ln\max_{k\in[K]}Y_k}{\delta}\right) \\
&\le O\left(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\sum_{k=1}^K \mathbb{E}_{t_k}[Z_k] + R^2K\right) \qquad \text{(by Eq. (14))} \\
&\quad + O\left(\sqrt{\Big(R_{\max}^3\ln_+^2\Big(\frac{R_{\max}}{R}\Big)\sum_{k=1}^K \mathbb{E}_{t_k}[Z_k] + R_{\max}^2R^2\ln_+\Big(\frac{R_{\max}}{R}\Big)K\Big)\ln\frac{\ln\big(R_{\max}K\sum_k \mathbb{E}_{t_k}[Z_k]\big)}{\delta}}\right) \qquad \text{(by Eq. (15))} \\
&\quad + O\left(R_{\max}^2\ln\frac{K}{\delta}\,\ln\frac{\ln\big(R_{\max}^2\ln\frac{K}{\delta}\big)}{\delta}\right) \qquad \text{(by Eq. (17))} \\
&\le O\left(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\sum_{k=1}^K \mathbb{E}_{t_k}[Z_k] + R^2K + R_{\max}^2\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\ln\frac{\ln(R_{\max}K)}{\delta}\right). \qquad \text{(AM-GM and } \mathbb{E}_{t_k}[Z_k]\le R_{\max}\text{)} \tag{22}
\end{aligned}
\]
Next, we connect Σ_k E_{t_k}[Z_k] with Σ_k Z_k. By Lemma 26, with probability at least 1 − O(δ),
\[
\begin{aligned}
\sum_{k=1}^K \mathbb{E}_{t_k}[Z_k] &\le \sum_{k=1}^K Z_k + O\left(\sqrt{\sum_{k=1}^K \mathbb{E}_{t_k}[Z_k^2]\,\ln\frac{\ln\sum_k \mathbb{E}_{t_k}[Z_k^2]}{\delta}} + \max_{k\in[K]}Z_k\,\ln\frac{\ln\max_{k\in[K]}Z_k}{\delta}\right) \\
&\le \sum_{k=1}^K Z_k + O\left(\sqrt{R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\sum_{k=1}^K \mathbb{E}_{t_k}[Z_k]\,\ln\frac{\ln(R_{\max}K)}{\delta}}\right) \qquad \text{(by Eq. (13) and } \mathbb{E}_{t_k}[Z_k^2]\le R_{\max}^2\text{)} \\
&\quad + O\left(R_{\max}\ln\frac{K}{\delta}\,\ln\frac{\ln(R_{\max}\ln\frac{K}{\delta})}{\delta}\right) \qquad \text{(by Eq. (16))} \\
&\le \sum_{k=1}^K Z_k + \frac{1}{2}\sum_{k=1}^K \mathbb{E}_{t_k}[Z_k] + O\left(R_{\max}\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\ln\frac{\ln(R_{\max}K)}{\delta}\right). \qquad \text{(AM-GM)}
\end{aligned}
\]
Solving for Σ_k E_{t_k}[Z_k] and plugging it into Eq. (22), we get that with probability at least 1 − O(δ),
\[
\sum_{k=1}^K Y_k \le O\left(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\sum_{k=1}^K Z_k + R^2K + R_{\max}^2\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\ln\frac{\ln(R_{\max}K)}{\delta}\right).
\]
This finishes the proof.
A.3. Bounding the regret
Proof [Theorem 2] We use the following notation:
\[
Y_k \triangleq \sum_{t=t_k}^{e_k}\mathbb{V}(P_t, V^\star), \qquad Z_k \triangleq \sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big).
\]
\[
\begin{aligned}
\mathrm{Reg}_K &= \sum_{k=1}^K\left(V^\star(s_{t_k}) - \sum_{t=t_k}^{e_k} r(s_t,a_t)\right) = \sum_{t=1}^T\big(V^\star(s_t) - V^\star(s'_t) - r(s_t,a_t)\big) \\
&= \sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + \sum_{t=1}^T\big(P_tV^\star - V^\star(s'_t)\big) \\
&\le \sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + O\left(\sqrt{\sum_{t=1}^T\mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B,\delta}} + B_\star\bar\iota_{T,B,\delta}\right) \\
&\le O\left(\sum_{k=1}^K Z_k + \sqrt{\sum_{k=1}^K Y_k\,\bar\iota_{T,B,\delta}} + B_\star\bar\iota_{T,B,\delta}\right) \\
&\le O\left(\sum_{k=1}^K Z_k + \sqrt{\Big(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\sum_{k=1}^K Z_k + R^2K + R_{\max}^2\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\ln\frac{\ln(R_{\max}K)}{\delta}\Big)\bar\iota_{T,B,\delta}} + B_\star\bar\iota_{T,B,\delta}\right) \quad \text{(Lemma 19)} \\
&\le O\left(\sum_{k=1}^K Z_k + R\sqrt{K\,\bar\iota_{T,B,\delta}} + R_{\max}\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\bar\iota_{T,B,\delta}\right). \qquad \text{(AM-GM)} \tag{23}
\end{aligned}
\]
It remains to bound Σ_k Z_k. By Lemma 16, we have with probability at least 1 − O(δ),
\[
\sum_{k=1}^K Z_k \le O\left(\sqrt{SA\sum_{k=1}^K Y_k\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right). \tag{24}
\]
Further using Lemma 19 on the right-hand side,
\[
\sum_{k=1}^K Z_k \le O\left(\sqrt{SA\Big(R_{\max}\ln_+\Big(\frac{R_{\max}}{R}\Big)\sum_{k=1}^K Z_k + R^2K + R_{\max}^2\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\ln\frac{\ln(R_{\max}K)}{\delta}\Big)\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right).
\]
Solving for Σ_k Z_k, we get
\[
\sum_{k=1}^K Z_k \le O\left(R\sqrt{SAK\,\bar\iota_{T,B,\delta}} + R_{\max}SA\ln\Big(\frac{R_{\max}K}{R\delta}\Big)\bar\iota_{T,B,\delta} + BS^2A\,\bar\iota_{T,B,\delta}\right).
\]
Plugging this to Eq. (23) finishes the proof.
Appendix B. Upper Bound for Stochastic Longest Path
Proof [Lemma 5] Define
\[
B_\pi \triangleq \max\left\{\sup_s \mathbb{E}^\pi\left[\sum_{t=1}^{\infty} r(s_t,a_t)\,\middle|\,s_1 = s\right],\, 1\right\}, \qquad
V_\pi \triangleq \max\left\{\mathbb{E}^\pi\left[\sum_{t=1}^{\infty} r(s_t,a_t)\,\middle|\,s_1 = s_{\mathrm{init}}\right],\, 1\right\}.
\]
Since r(·,·) ≥ 0, by Lemma 24 (b) (with c ≜ min{B_π, V⋆}), we have
\[
R^2 \le O\left(\sup_\pi V_\pi B_\pi \ln_+\Big(\frac{B_\pi}{\min\{B_\pi, V_\star\}}\Big) + V_\star^2\right) \le O\left(V_\star B_\star\ln_+\Big(\frac{B_\star}{V_\star}\Big)\right).
\]
By Lemma 24 (c), we have
\[
R_{\max}^2 \le O\Big(\sup_\pi B_\pi^2\Big) \le O(B_\star^2).
\]
Lemma 20 With probability at least 1 − δ, for all T,
\[
\sum_{t=1}^T \mathbb{V}(P_t, V^\star) \le O\left(B_\star\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + B_\star\sum_{t=1}^T|r(s_t,a_t)| + B_\star^2\ln(T/\delta)\right).
\]
Proof
\[
\begin{aligned}
\sum_{t=1}^T \mathbb{V}(P_t, V^\star)
&= \sum_{t=1}^T\Big(\mathbb{E}_{s'\sim P_t}[V^\star(s')^2] - (P_tV^\star)^2\Big) \\
&= \sum_{t=1}^T\big(V^\star(s'_t)^2 - (P_tV^\star)^2\big) + \sum_{t=1}^T\Big(\mathbb{E}_{s'\sim P_t}[V^\star(s')^2] - V^\star(s'_t)^2\Big) \\
&\le \sum_{t=1}^T\big(V^\star(s_t)^2 - (P_tV^\star)^2\big) + B_\star^2 + \sum_{t=1}^T\Big(\mathbb{E}_{s'\sim P_t}[V^\star(s')^2] - V^\star(s'_t)^2\Big) \qquad \text{(because } V^\star(s'_t)^2 \le V^\star(s_{t+1})^2\text{)} \\
&= \sum_{t=1}^T\big(V^\star(s_t)^2 - Q^\star(s_t,a_t)^2\big) + \sum_{t=1}^T\big(Q^\star(s_t,a_t)^2 - (P_tV^\star)^2\big) + O\left(\sqrt{\sum_{t=1}^T\mathbb{V}(P_t, V^{\star2})\ln(T/\delta)} + B_\star^2\ln(T/\delta)\right) \\
&\le O\left(B_\star\sum_{t=1}^T\big|V^\star(s_t) - Q^\star(s_t,a_t)\big| + B_\star\sum_{t=1}^T\big|Q^\star(s_t,a_t) - P_tV^\star\big|\right) + O\left(B_\star\sqrt{\sum_{t=1}^T\mathbb{V}(P_t, V^\star)\ln(T/\delta)} + B_\star^2\ln(T/\delta)\right) \\
&\qquad \text{(}a^2 - b^2 \le |a+b||a-b|\text{ and Lemma 29)} \\
&\le O\left(B_\star\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + B_\star\sum_{t=1}^T|r(s_t,a_t)|\right) + \frac{1}{2}\sum_{t=1}^T\mathbb{V}(P_t, V^\star) + O\big(B_\star^2\ln(T/\delta)\big). \qquad \text{(AM-GM)}
\end{aligned}
\]
Solving for Σ_t V(P_t, V⋆) finishes the proof.
Proof [Theorem 6] By the same calculation as in Eq. (23), we have
\[
\mathrm{Reg}_K \le \sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + O\left(\sqrt{\sum_{t=1}^T\mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B,\delta}} + B_\star\bar\iota_{T,B,\delta}\right).
\]
Using Lemma 20, we get
\[
\begin{aligned}
\mathrm{Reg}_K &\le \sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + O\left(\sqrt{B_\star\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\bar\iota_{T,B,\delta}} + \sqrt{B_\star\sum_{t=1}^T r(s_t,a_t)\,\bar\iota_{T,B,\delta}} + B_\star\bar\iota_{T,B,\delta}\right) \\
&\le O\left(\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + \sqrt{B_\star\sum_{t=1}^T r(s_t,a_t)\,\bar\iota_{T,B,\delta}} + B_\star\bar\iota_{T,B,\delta}\right). \tag{25}
\end{aligned}
\]
By Lemma 16,
\[
\begin{aligned}
\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) &\le O\left(\sqrt{SA\sum_{t=1}^T\mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right) \\
&\le O\left(\sqrt{B_\star SA\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\bar\iota_{T,B,\delta}} + \sqrt{B_\star SA\sum_{t=1}^T r(s_t,a_t)\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right),
\end{aligned}
\]
where in the last inequality we again use Lemma 20. Solving for Σ_t (V⋆(s_t) − Q⋆(s_t, a_t)), we get
\[
\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) \le O\left(\sqrt{B_\star SA\sum_{t=1}^T r(s_t,a_t)\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right).
\]
Using this in Eq. (25), we get
\[
KV_\star - \sum_{t=1}^T r(s_t,a_t) \le O\left(\sqrt{B_\star SA\sum_{t=1}^T r(s_t,a_t)\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right).
\]
If Σ_t r(s_t, a_t) ≥ KV⋆, we have KV⋆ − Σ_t r(s_t, a_t) ≤ 0; if Σ_t r(s_t, a_t) ≤ KV⋆, we can further bound the Σ_t r(s_t, a_t) term on the right-hand side above by KV⋆. In both cases, we have
\[
KV_\star - \sum_{t=1}^T r(s_t,a_t) \le O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,B,\delta}} + BS^2A\,\bar\iota_{T,B,\delta}\right).
\]
Lemma 21 Let r(·,·) ≥ 0. With probability at least 1 − δ, for all K ≥ 1, with T being the total number of steps in K episodes,
\[
\mathrm{Reg}_K \ge -O\left(\sqrt{V_\star B_\star K\ln(T/\delta)} + B_\star\ln(T/\delta)\right).
\]
Proof
\[
\begin{aligned}
\mathrm{Reg}_K &= \sum_{k=1}^K\left(V^\star(s_{\mathrm{init}}) - \sum_{t=t_k}^{e_k} r(s_t,a_t)\right) = \sum_{t=1}^T\big(V^\star(s_t) - V^\star(s'_t) - r(s_t,a_t)\big) \\
&= \sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) + \sum_{t=1}^T\big(P_tV^\star - V^\star(s'_t)\big) \\
&\ge \sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) - O\left(\sqrt{\sum_{t=1}^T\mathbb{V}(P_t,V^\star)\ln(T/\delta)} + B_\star\ln(T/\delta)\right).
\end{aligned}
\]
By Lemma 20, we can further lower bound the above expression by
\[
\begin{aligned}
&\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) - O\left(\sqrt{B_\star\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\ln(T/\delta)} + \sqrt{B_\star\sum_{t=1}^T r(s_t,a_t)\ln(T/\delta)} + B_\star\ln(T/\delta)\right) \\
&\ge \frac{1}{2}\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) - O\left(\sqrt{B_\star\sum_{t=1}^T r(s_t,a_t)\ln(T/\delta)} + B_\star\ln(T/\delta)\right) \qquad \text{(AM-GM)} \\
&\ge -O\left(\sqrt{B_\star\sum_{t=1}^T r(s_t,a_t)\ln(T/\delta)} + B_\star\ln(T/\delta)\right). \qquad (V^\star(s_t) - Q^\star(s_t,a_t) \ge 0)
\end{aligned}
\]
Hence we have
\[
\mathrm{Reg}_K = KV_\star - \sum_{t=1}^T r(s_t,a_t) \ge -O\left(\sqrt{B_\star\sum_{t=1}^T r(s_t,a_t)\ln(T/\delta)} + B_\star\ln(T/\delta)\right).
\]
Solving for Σ_t r(s_t, a_t), we get
\[
KV_\star - \sum_{t=1}^T r(s_t,a_t) \ge -O\left(\sqrt{V_\star B_\star K\ln(T/\delta)} + B_\star\ln(T/\delta)\right).
\]
Proof [Lemma 7] We consider the particular i⋆ ∈ {1, ..., ⌈log₂ U⌉} such that 2^{i⋆−1} ≤ V⋆ < 2^{i⋆}. By the assumption that 1 ≤ V⋆ ≤ U, such an i⋆ always exists. Notice that in the i⋆-th for-loop, the input is B = 2^{i⋆}ζ ≥ V⋆ζ ≥ B⋆. Thus, according to Theorem 6, when the i⋆-th for-loop terminates, we have with probability at least 1 − δ′,
\[
\begin{aligned}
V_\star - \bar{r}_{i_\star} &\le \frac{1}{N}\times c\times\left(\sqrt{V_\star B_\star SAN\,\bar\iota_{M,B,\delta'}} + BS^2A\,\bar\iota_{M,B,\delta'}\right)
= c\times\left(\sqrt{\frac{V_\star B_\star SA\,\bar\iota_{M,B,\delta'}}{N}} + \frac{BS^2A\,\bar\iota_{M,B,\delta'}}{N}\right) \\
&\le c\times\left(\sqrt{\frac{V_\star B_\star}{16c^2\zeta S}} + \frac{2^{i_\star}\zeta}{16c^2\zeta}\right) \qquad \text{(because the termination condition is } N \ge 16c^2\zeta S^2A\,\bar\iota_{M,B,\delta'}\text{)} \\
&\le \frac{1}{4}V_\star + \frac{1}{8}V_\star, \qquad \text{(by the assumptions } B_\star \le \zeta V_\star \text{ and } 2^{i_\star-1} \le V_\star\text{)}
\end{aligned}
\]
which implies r̄_{i⋆} ≥ (1/2)V⋆. Next, we consider an arbitrary i ∈ {1, ..., ⌈log₂ U⌉}. Because V⋆ is the expected reward of the optimal policy, in every episode V⋆ is larger than the expected reward of the learner. By Lemma 21, we have with probability at least 1 − δ′, for all N,
\[
V_\star - \bar{r}_i \ge -\frac{1}{N}\times c\times\left(\sqrt{V_\star B_\star N\ln(M/\delta')} + B_\star S^2A\ln(M/\delta')\right) \ge -\frac{1}{2}V_\star
\]
(by the same calculation as above and noticing that ln(M/δ′) ≤ ῑ_{M,B,δ′}), which implies r̄_i ≤ (3/2)V⋆. With a union bound over i, the inequality holds for all i with probability at least 1 − δ. Combining the arguments, we conclude that with probability at least 1 − 2δ,
\[
\frac{1}{2}V_\star \le \max_i\{\bar{r}_i\} \le \frac{3}{2}V_\star.
\]
The lemma is proven by noticing that V̂ = 2 max_i {r̄_i}.
Proof [Theorem 8] We first consider the case when B⋆/V⋆ ≤ ζ and V⋆ ≤ U. Let {N_i}_{i=1,2,...,⌈log₂U⌉} be the number of episodes spent in the i-th for-loop of Algorithm 2. Thus, the total number of episodes the learner spends to compute V̂ is
\[
\sum_{i=1}^{\lceil\log_2 U\rceil} N_i = O\left(\sum_{i=1}^{\lceil\log_2 U\rceil}\zeta S^2A\,\bar\iota_{M_i,2^i\zeta,\delta'}\right),
\]
where M_i is the number of steps spent in the i-th for-loop of Algorithm 2. By definition, ῑ_{M_i,2^iζ,δ′} = O((ln(SA/δ′) + ln ln(2^iζM_i)) × ln M_i) = O((ln(SA/δ) + ln ln T + ln ln(ζU)) × ln T) = O(ῑ_{T,ζU,δ}), and thus
\[
\sum_{i=1}^{\lceil\log_2 U\rceil} N_i = O\left(\zeta S^2A\,\bar\iota_{T,\zeta U,\delta}\times\ln U\right).
\]
For these episodes, we simply bound the per-episode regret by V⋆. By Lemma 7, V̂ ∈ [V⋆, 3V⋆], and so the main algorithm (Algorithm 1) is run with B = ζV̂ ≥ ζV⋆ ≥ B⋆. The regret incurred when running Algorithm 1 with B = ζV̂, according to Theorem 6, is thus upper bounded by
\[
O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + \zeta\hat{V}S^2A\,\bar\iota_{T,\zeta U,\delta}\right) \le O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + \zeta V_\star S^2A\,\bar\iota_{T,\zeta U,\delta}\right).
\]
Overall, the regret (including the estimation part and the main-algorithm part) is upper bounded by
\[
O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + \zeta V_\star S^2A\,\bar\iota_{T,\zeta U,\delta}\ln U\right).
\]
We emphasize that this is the regret bound we can achieve when B⋆/V⋆ ≤ ζ and V⋆ ≤ U. If either of them does not hold, we simply bound the total regret by KV⋆. Hence, the overall regret without these two assumptions is upper bounded by
\[
O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + \zeta V_\star S^2A\,\bar\iota_{T,\zeta U,\delta}\ln U\right) + KV_\star\mathbb{I}\left[\frac{B_\star}{V_\star} > \zeta\right] + KV_\star\mathbb{I}\left[V_\star > U\right].
\]
Case 1. U is a known absolute upper bound for V⋆. In this case, I[V⋆ > U] = 0, and we have
\[
\mathrm{Reg}_K = O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + \zeta V_\star S^2A\,\bar\iota_{T,\zeta U,\delta}\ln U\right) + KV_\star\mathbb{I}\left[\frac{B_\star}{V_\star} > \zeta\right].
\]
Let α > 0 be a parameter and set ζ = √K/α. Then the last expression can be upper bounded by
\[
\begin{aligned}
&O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + \frac{V_\star S^2A(\ln U)\sqrt{K}\,\bar\iota_{T,\zeta U,\delta}}{\alpha}\right) + KV_\star\mathbb{I}\left[\frac{B_\star}{V_\star} > \frac{\sqrt{K}}{\alpha}\right] \\
&\le O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + \frac{V_\star S^2A(\ln U)\sqrt{K}\,\bar\iota_{T,\zeta U,\delta}}{\alpha}\right) + \min\left\{\alpha B_\star\sqrt{K},\ \frac{\alpha^2B_\star^2}{V_\star}\right\},
\end{aligned}
\]
where in the last inequality we use two different ways to bound KV⋆ under the inequality B⋆/V⋆ > √K/α:
\[
KV_\star \le K\times\frac{\alpha B_\star}{\sqrt{K}} = \alpha B_\star\sqrt{K}, \qquad\text{and}\qquad KV_\star \le \Big(\frac{\alpha B_\star}{V_\star}\Big)^2 V_\star = \frac{\alpha^2B_\star^2}{V_\star}.
\]
If we set α = √(S²A ln U) and pick the first term in min{·, ·}, then we get
\[
\mathrm{Reg}_K = O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + V_\star\sqrt{S^2AK\ln U}\,\bar\iota_{T,\zeta U,\delta} + B_\star\sqrt{S^2AK\ln U}\right) \le O\left(B_\star\sqrt{S^2AK\ln U}\,\bar\iota_{T,KU,\delta}\right).
\]
If we set α = √(S³A ln U) and pick the second term in min{·, ·}, then we get
\[
\mathrm{Reg}_K = O\left(\sqrt{V_\star B_\star SAK\,\bar\iota_{T,\zeta U,\delta}} + V_\star\sqrt{SAK\ln U}\,\bar\iota_{T,\zeta U,\delta} + \frac{B_\star^2S^3A\ln U}{V_\star}\,\bar\iota_{T,\zeta U,\delta}\right) \le O\left(\sqrt{V_\star B_\star SAK\ln U}\,\bar\iota_{T,KU,\delta} + \frac{B_\star^2S^3A\ln U}{V_\star}\,\bar\iota_{T,KU,\delta}\right).
\]
Case 2. Unknown range of V⋆, setting U = K^{1/ε}. By the same argument as above, if we set ζ = √(K/(S²A ln U)), then we have
\[
\begin{aligned}
\mathrm{Reg}_K &= O\left(B_\star\sqrt{S^2AK\ln U}\,\bar\iota_{T,KU,\delta}\right) + KV_\star\mathbb{I}\left[V_\star > U\right] \\
&\le O\left(B_\star\sqrt{\epsilon^{-1}S^2AK\ln K}\,\bar\iota_{T,K,\delta\epsilon}\right) + KV_\star\mathbb{I}\left[V_\star > K^{1/\epsilon}\right]
\le O\left(B_\star\sqrt{\epsilon^{-1}S^2AK\ln K}\,\bar\iota_{T,K,\delta\epsilon} + V_\star^{1+\epsilon}\right),
\end{aligned}
\]
and if we set ζ = √(K/(S³A ln U)), then
\[
\mathrm{Reg}_K = O\left(\sqrt{V_\star B_\star SAK\ln U}\,\bar\iota_{T,KU,\delta} + \frac{B_\star^2S^3A\ln U}{V_\star}\,\bar\iota_{T,KU,\delta}\right) + KV_\star\mathbb{I}\left[V_\star > U\right]
\le O\left(\sqrt{\epsilon^{-1}V_\star B_\star SAK\ln K}\,\bar\iota_{T,K,\delta\epsilon} + \epsilon^{-1}\frac{B_\star^2S^3A\ln K}{V_\star}\,\bar\iota_{T,K,\delta\epsilon} + V_\star^{1+\epsilon}\right).
\]

Appendix C. Upper Bound for Stochastic Shortest Path

Lemma 22 With probability at least 1 − O(δ), Algorithm 4 ensures that Q_t(s, a) ≥ Q⋆(s, a) for all t, s, and a.

Proof We use induction to prove the lemma. When t = 1, Q₁(s, a) = 0 ≥ Q⋆(s, a) for all s, a since we are in the cost setting. Suppose that 0 ≥ Q_t(s, a) ≥ Q⋆(s, a) for all s, a (which implies 0 ≥ V_t(s) ≥ V⋆(s) for all s). Since Q_{t+1} and Q_t only differ in the entry (s_t, a_t), we only need to check Q_{t+1}(s_t, a_t) ≥ Q⋆(s_t, a_t); since the second branch of the min in Line 14 is handled by the induction hypothesis, it suffices to lower bound the first branch. With the definition of b_t and ι_t specified in Line 17 of Algorithm 4, we have
\[
\begin{aligned}
r(s_t,a_t) + \hat{P}_tV_t + b_t &= r(s_t,a_t) + \hat{P}_tV_t + \max\left\{c_1\sqrt{\frac{\mathbb{V}(\hat{P}_t,V_t)\iota_t}{n_t}},\ c_2\frac{B_{t+1}\iota_t}{n_t}\right\} \\
&\ge r(s_t,a_t) + \hat{P}_tV^\star_{[B_{t+1}]} + \max\left\{c_1\sqrt{\frac{\mathbb{V}(\hat{P}_t,V^\star_{[B_{t+1}]})\iota_t}{n_t}},\ c_2\frac{B_{t+1}\iota_t}{n_t}\right\} \\
&\ge r(s_t,a_t) + P_{s_t,a_t}V^\star_{[B_{t+1}]} \ge r(s_t,a_t) + P_{s_t,a_t}V^\star = Q^\star(s_t,a_t),
\end{aligned}
\]
where the second inequality is because V_t(s) ≥ V⋆(s) by the induction hypothesis and V_t(s) ≥ −B_t ≥ −B_{t+1} by the algorithm, which jointly give V_t(s) ≥ V⋆_{[B_{t+1}]}(s), followed by the monotone property in Lemma 14. This finishes the induction.
We remark that to make the third inequality above hold for all possible B ∈ {1, 2, 4, 8, ...} with probability 1 − δ, we need a union bound over the values of B. Therefore, in the third inequality above, we actually apply Freedman's inequality for the B = 2^i case with a probability parameter
\[
\delta_i = \frac{\delta}{2i^2} = \frac{\delta}{2(\log_2 B)^2} \quad\text{so that}\quad \sum_{i=1}^{\infty}\delta_i < \delta.
\]
The additional 2(log₂ B)² factor in the log term is taken care of in the definition of ι_t.
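(For the reader, the union bound indeed closes:
\[
\sum_{i=1}^{\infty}\delta_i = \sum_{i=1}^{\infty}\frac{\delta}{2i^2} = \frac{\delta}{2}\cdot\frac{\pi^2}{6} = \frac{\pi^2}{12}\,\delta < \delta.)
\]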
Proof [Theorem 11] The proof is similar to that of Theorem 6. We require the combination of the bounds in Lemma 16 and Lemma 20. Notice that the proof of Lemma 16 requires the condition B ≥ B⋆, the purpose of which is to ensure optimism Q_t(·,·) ≥ Q⋆(·,·). For the SSP case, since optimism is ensured by Lemma 22 without requiring B ≥ B⋆, the conclusion of Lemma 16 still holds, with the B there replaced by B_{T+1} of Algorithm 4 (i.e., the maximum B used in Algorithm 4). Furthermore, with probability at least 1 − O(δ), B_{T+1} ≤ 2B⋆ according to the arguments in Section 5. Therefore, we have
\[
\sum_{t=1}^T\big(V^\star(s_t) - Q^\star(s_t,a_t)\big) \le O\left(\sqrt{SA\sum_{t=1}^T\mathbb{V}(P_t,V^\star)\,\bar\iota_{T,B_\star,\delta}} + B_\star S^2A\,\bar\iota_{T,B_\star,\delta}\right).
\]
The bound in Lemma 20 can be used directly. Combining them in a similar way as in the proof of Theorem 6, we get
\[
\mathrm{Reg}_K = KV^\star(s_{\mathrm{init}}) - \sum_{t=1}^T r(s_t,a_t) \le O\left(\sqrt{B_\star SA\sum_{t=1}^T|r(s_t,a_t)|\,\bar\iota_{T,B_\star,\delta}} + B_\star S^2A\,\bar\iota_{T,B_\star,\delta}\right). \tag{26}
\]
Recall that in SSP, V⋆(·) ≤ 0 and r(·,·) ≤ 0, so Eq. (26) is equivalent to
\[
\sum_{t=1}^T|r(s_t,a_t)| - KV_\star \le O\left(\sqrt{B_\star SA\sum_{t=1}^T|r(s_t,a_t)|\,\bar\iota_{T,B_\star,\delta}} + B_\star S^2A\,\bar\iota_{T,B_\star,\delta}\right).
\]
Solving for Σ_t |r(s_t, a_t)|, we get
\[
\sum_{t=1}^T|r(s_t,a_t)| \le O\left(KV_\star + B_\star S^2A\,\bar\iota_{T,B_\star,\delta}\right).
\]
Plugging this back to Eq. (26) finishes the proof.
Appendix D. Lower Bound for General Stochastic Path
Proof [Theorem 3] As mentioned in Footnote 1, we can convert between the cases of "deterministic initial state" and "random initial state" by just introducing one additional state. In our lower bound proof, the construction is based on random initial states, but converting it to a deterministic initial state is straightforward. In this proof, we use P(·|s, a) = P_{s,a}(·) to denote the transition probability. We first consider an MDP with two non-terminal states x, y and a terminal state g. The number of actions is A. Let the initial state distribution be uniform{x, y}. For every action a, let r(x, a) = 1 and r(y, a) = −1. Let ε, ∆ ∈ (0, 1/4] be quantities to be determined later. For every action a, let P(· | y, a) = (1 − ε)·uniform{x, y} + ε·1_g.
For all but one single good action a ⋆ , let
P (· | x, a) = (1 − ǫ)uniform{x, y} + ǫ1 g .
For the good action a ⋆ , let
\[
P(\cdot \mid x, a_\star) = \frac{1-\epsilon+\Delta}{2}\,\mathbb{1}_x + \frac{1-\epsilon-\Delta}{2}\,\mathbb{1}_y + \epsilon\,\mathbb{1}_g.
\]
First, we calculate the optimal value function of this MDP.
Claim 1 V⋆(x) = 1 + (∆/ε)·(1+ε)/(2−∆), and Q⋆(x, a) = 1 + (∆/ε)·(1−ε)/(2−∆) for a ≠ a⋆.
Proof [Claim 1] We first show that the optimal policy is to always choose a⋆ on state x. We only need to compare two deterministic policies: always choose a⋆, or always choose some other action a ≠ a⋆. For the policy with π(x) = a⋆, we have
\[
\begin{aligned}
V^\pi(x) &= r(x,a_\star) + P(x|x,a_\star)V^\pi(x) + P(y|x,a_\star)V^\pi(y) = 1 + \frac{1-\epsilon+\Delta}{2}V^\pi(x) + \frac{1-\epsilon-\Delta}{2}V^\pi(y), \\
V^\pi(y) &= r(y,\cdot) + P(x|y,\cdot)V^\pi(x) + P(y|y,\cdot)V^\pi(y) = -1 + \frac{1-\epsilon}{2}V^\pi(x) + \frac{1-\epsilon}{2}V^\pi(y).
\end{aligned}
\]
Solving the equations, we get V^π(x) = 1 + (∆/ε)·(1+ε)/(2−∆). On the other hand, if π(x) = a for some a ≠ a⋆, then by a similar calculation we get V^π(x) = 1, which is smaller. This shows that the optimal policy is to always choose a⋆ on x. Thus, V⋆(x) = Q⋆(x, a⋆) = 1 + (∆/ε)·(1+ε)/(2−∆). Plugging this into the other Bellman equation
\[
V^\star(y) = -1 + \frac{1-\epsilon}{2}V^\star(x) + \frac{1-\epsilon}{2}V^\star(y),
\]
we get V⋆(y) = −1 + (∆/ε)·(1−ε)/(2−∆), and thus for a ≠ a⋆,
\[
Q^\star(x,a) = 1 + \frac{1-\epsilon}{2}\big(V^\star(x) + V^\star(y)\big) = 1 + \frac{1-\epsilon}{2}\cdot\frac{\Delta}{\epsilon}\cdot\frac{2}{2-\Delta} = 1 + \frac{\Delta}{\epsilon}\cdot\frac{1-\epsilon}{2-\Delta}.
\]
Next, we follow the proof idea in Rosenberg et al. (2020) and consider a truncated process. First, we view the K episodes as a continuous process in which, once the learner reaches g, a new state is drawn from the initial distribution and the learner restarts from there. Then we cap the process to contain at most K/ε steps: if the learner has not finished all K episodes after K/ε steps, we stop the learner before it finishes all K episodes.
In the original process, we let Reg be the regret, T_x and T_y be the number of steps at which the learner visits x and y, respectively, T_{x,a} be the number of steps at which the learner visits x and chooses action a, and T = T_x + T_y. We define T⁻_x, T⁻_y, T⁻_{x,a}, T⁻ to be the corresponding quantities in the truncated process. We first show the following claim:
Claim 2 E[Reg] ≥ (2∆/(2−∆))·E[T⁻_x − T⁻_{x,a⋆}].

Proof [Claim 2]
We first focus on episode k in the original process. Let the episode start at t = t_k, and let the last step in the episode before reaching g be t = e_k. Then the expected regret is given by
\[
\mathbb{E}\left[V^\star(s_{t_k}) - \sum_{t=t_k}^{e_k} r(s_t,a_t)\right]
= \mathbb{E}\left[V^\star(s_{t_k}) - \sum_{t=t_k}^{e_k}\big(Q^\star(s_t,a_t) - P_{s_t,a_t}V^\star\big)\right]
= \mathbb{E}\left[\sum_{t=t_k}^{e_k}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\right].
\]
Notice that V⋆(s_t) − Q⋆(s_t, a_t) = 0 when s_t = y or (s_t, a_t) = (x, a⋆). When s_t = x and a_t ≠ a⋆, we have V⋆(s_t) − Q⋆(s_t, a_t) = (∆/ε)·(2ε/(2−∆)) = 2∆/(2−∆) according to Claim 1. Thus, the expected regret in episode k is
\[
\mathbb{E}[\mathrm{Reg}_k] = \mathbb{E}\left[\frac{2\Delta}{2-\Delta}\sum_{t=t_k}^{e_k}\mathbb{1}[s_t = x,\, a_t \ne a_\star]\right] = \mathbb{E}\left[\frac{2\Delta}{2-\Delta}\sum_{t=t_k}^{e_k}\big(\mathbb{1}[s_t = x] - \mathbb{1}[s_t = x,\, a_t = a_\star]\big)\right].
\]
Summing this over episodes, we get
\[
\mathbb{E}[\mathrm{Reg}] = \frac{2\Delta}{2-\Delta}\,\mathbb{E}[T_x - T_{x,a_\star}].
\]
Using the simple fact that T
x − T x,a ⋆ ≥ T − x − T − x,a ⋆ finishes the proof. Claim 3 E[T − x ] ≥ K 6ǫ . Proof [Claim 3]
Since on every state-action pair (s, a) we have P(x|s, a) ≥ P(y|s, a), and the initial distribution is uniform between x and y, it holds that E[T⁻_x] ≥ E[T⁻_y]. Thus it suffices to show that E[T⁻] ≥ K/(3ε). Note that
\[
\mathbb{E}[T^-] = \mathbb{E}\big[\min\{T, K/\epsilon\}\big] \ge \sum_{k=1}^K \mathbb{E}\big[\min\{L_k, 1/\epsilon\}\big],
\]
where L_k is the length of episode k. If we can show that L_k ≥ 1/ε with probability at least 1/3, then we have
\[
\sum_{k=1}^K \mathbb{E}\big[\min\{L_k, 1/\epsilon\}\big] \ge \frac{1}{3}\sum_{k=1}^K \frac{1}{\epsilon} = \frac{K}{3\epsilon}.
\]
By the definition of the transition kernel, we indeed have Pr[L_k ≥ 1/ε] ≥ (1 − ε)^{1/ε−1} ≥ 1/e ≥ 1/3. This finishes the proof.
The remainder of the proof follows the standard lower bound proof for multi-armed bandits (Auer et al., 2002). We create A different instances of MDPs, where in each of them the choice of the good action is different. We use P_a and E_a to denote the probability measure and the expectation under the instance where a is chosen as the good action. We further introduce an instance where every action behaves the same and there is no good action. We use P_unif and E_unif to denote the probability and expectation under this instance.
Claim 4 $\mathbb{E}_a[T^-_{x,a}] \le \mathbb{E}_{\mathrm{unif}}[T^-_{x,a}] + O\Big(\frac{K\Delta}{\epsilon}\sqrt{\mathbb{E}_{\mathrm{unif}}[T^-_{x,a}]}\Big)$.

Proof [Claim 4] With standard arguments from Auer et al. (2002), because $0 \le T^-_{x,a} \le K/\epsilon$, we have

$$\mathbb{E}_a[T^-_{x,a}] - \mathbb{E}_{\mathrm{unif}}[T^-_{x,a}] \le \frac{K}{\epsilon}\,\big\|\mathbb{P}_a - \mathbb{P}_{\mathrm{unif}}\big\|_1 \le \frac{K}{\epsilon}\sqrt{\frac{1}{2}\mathrm{KL}(\mathbb{P}_a, \mathbb{P}_{\mathrm{unif}})}$$
$$= \frac{K}{\epsilon}\sqrt{\frac{1}{2}\sum_{t=1}^{K/\epsilon-1}\mathbb{P}_{\mathrm{unif}}\big[(s_t,a_t)=(x,a)\big]\,\mathrm{KL}\Big(\mathrm{Multi}\big(\tfrac{1-\epsilon}{2},\tfrac{1-\epsilon}{2},\epsilon\big),\ \mathrm{Multi}\big(\tfrac{1-\epsilon+\Delta}{2},\tfrac{1-\epsilon-\Delta}{2},\epsilon\big)\Big)}$$

(by the chain rule of KL; we use $\mathrm{Multi}(a,b,c,\ldots)$ to denote a multinomial distribution)

$$\le O\Bigg(\frac{K}{\epsilon}\sqrt{\sum_{t=1}^{K/\epsilon}\mathbb{P}_{\mathrm{unif}}\big[(s_t,a_t)=(x,a)\big]\times\frac{\Delta^2}{1-\epsilon}}\Bigg) = O\bigg(\frac{K\Delta}{\epsilon}\sqrt{\mathbb{E}_{\mathrm{unif}}[T^-_{x,a}]}\bigg).$$
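As a quick numerical check of the KL step above (an illustration, not part of the proof), the divergence between the two per-step multinomials is indeed of order $\Delta^2/(1-\epsilon)$:

```python
# KL divergence between the two three-outcome step distributions used above.
import numpy as np

def kl(p, q):
    p, q = np.asarray(p), np.asarray(q)
    return float(np.sum(p * np.log(p / q)))

eps, delta = 0.05, 0.01
p = [(1 - eps) / 2, (1 - eps) / 2, eps]                    # P_unif step law
q = [(1 - eps + delta) / 2, (1 - eps - delta) / 2, eps]    # P_a step law
print(kl(p, q), delta**2 / (1 - eps))  # same order of magnitude
```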
Combining Claims 2-4, the expected regret under a uniformly random choice of $a^\star$ is

$$\frac{1}{A}\sum_a\mathbb{E}_a[\mathrm{Reg}] \ge \frac{\Delta}{A}\sum_a\mathbb{E}_a[T^-_x - T^-_{x,a}] \ge \frac{K\Delta}{6\epsilon} - \frac{\Delta}{A}\sum_a\mathbb{E}_a[T^-_{x,a}]$$
$$\ge \frac{K\Delta}{6\epsilon} - \frac{\Delta}{A}\sum_a\Big(\mathbb{E}_{\mathrm{unif}}[T^-_{x,a}] + O\Big(\frac{K\Delta}{\epsilon}\sqrt{\mathbb{E}_{\mathrm{unif}}[T^-_{x,a}]}\Big)\Big) \ge \frac{K\Delta}{6\epsilon} - \frac{K\Delta}{A\epsilon} - O\bigg(\frac{\Delta}{A}\times\frac{K\Delta}{\epsilon}\times\sqrt{A\sum_a\mathbb{E}_{\mathrm{unif}}[T^-_{x,a}]}\bigg)$$
$$\ge \frac{K\Delta}{6\epsilon} - \frac{K\Delta}{A\epsilon} - O\bigg(\frac{K\Delta^2}{\epsilon}\sqrt{\frac{K}{A\epsilon}}\bigg) = \frac{K\Delta}{\epsilon}\bigg(\frac{1}{6} - \frac{1}{A} - O\Big(\Delta\sqrt{\frac{K}{A\epsilon}}\Big)\bigg).$$

Picking $\Delta = \Theta\big(\sqrt{\frac{A\epsilon}{K}}\big)$, we get that $\mathbb{E}[\mathrm{Reg}] \ge \Omega\big(\sqrt{\frac{AK}{\epsilon}}\big)$.

Notice that for the MDP we constructed, we have

$$R^2 = \sup_\pi\mathbb{E}^\pi\bigg[\Big(\sum_{t=1}^\tau r(s_t,a_t)\Big)^2\bigg] = \sup_\pi\mathbb{E}^\pi\bigg[\Big(\sum_{t=1}^\tau\big(r(s_t,a_t)-\mathbb{E}_t[r(s_t,a_t)]\big)\Big)^2 + \Big(\sum_{t=1}^\tau\mathbb{E}_t[r(s_t,a_t)]\Big)^2\bigg] \le \sup_\pi\mathbb{E}^\pi\big[\tau + (\Delta\tau)^2\big] \le \frac{1}{\epsilon} + \frac{2\Delta^2}{\epsilon^2}.$$

We let $\epsilon = \frac{1}{u^2}$. With this choice,

$$R^2 \le \frac{1}{\epsilon} + \frac{2\Delta^2}{\epsilon^2} \le O\Big(\frac{1}{\epsilon} + \frac{A}{\epsilon K}\Big) \le O\Big(\frac{1}{\epsilon}\Big) = O(u^2)$$

if we assume $K \ge A$. Therefore, the condition in the lemma is satisfied, and the regret lower bound is given by

$$\Omega\bigg(\sqrt{\frac{AK}{\epsilon}}\bigg) = \Omega\big(u\sqrt{AK}\big).$$
To show the lower bound for general $S$, we make $S/2$ copies of the MDP we constructed, and let the initial state distribution be uniform over all states. When we run on this aggregated MDP for $K$ episodes, a constant portion of the $S/2$ copies will be visited $\Theta(\frac{K}{S})$ times. Using the lower bound we just established above (with $K$ replaced by $\frac{K}{S}$), we have that the regret is lower bounded by

$$\Omega\bigg(S\times u\sqrt{\frac{AK}{S}}\bigg) = \Omega\big(u\sqrt{SAK}\big).$$

Notice that the assumption we need becomes $\frac{K}{S} \ge A$, or $K \ge SA$.
Proof [Theorem 4] We first prove this theorem for an MDP with two non-terminal states $x, y$ and a terminal state $g$. The number of actions is $A$. The construction is similar to that in Theorem 3. On state $x$, there is potentially a good action $a^\star \in [A-1]$. The reward function and transition kernel are chosen as below:

state  action                        reward   $\to x$                        $\to y$                        $\to g$
$x$    $[A-1]\setminus\{a^\star\}$   $1$      $\frac{1-\epsilon-\Delta}{2}$   $\frac{1-\epsilon+\Delta}{2}$   $\epsilon$
$x$    $a^\star$                     $1$      $\frac{1-\epsilon+\Delta}{2}$   $\frac{1-\epsilon-\Delta}{2}$   $\epsilon$
$x$    $A$                           $1$      $0$                            $0$                            $1$
$y$    $[A]$                         $-1$     $\frac{1-\epsilon}{2}$          $\frac{1-\epsilon}{2}$          $\epsilon$

The choices of $\epsilon, \Delta$ satisfy $\epsilon, \Delta \in (0, \frac{1}{8}]$ and $\Delta^2 \le \epsilon$. We consider two cases: one with the good action $a^\star$, and the other without the good action. We first find the optimal policy in each case. If there is a good action on $x$, then for the policy that always chooses $a^\star$ on $x$, we have $V^\pi(x) = 1 + \frac{1-\epsilon+\Delta}{2}V^\pi(x) + \frac{1-\epsilon-\Delta}{2}V^\pi(y)$ and $V^\pi(y) = -1 + \frac{1-\epsilon}{2}V^\pi(x) + \frac{1-\epsilon}{2}V^\pi(y)$, which jointly give $V^\pi(x) = 1 + \frac{\Delta}{\epsilon}\cdot\frac{1+\epsilon}{2-\Delta}$; for a policy that always chooses actions in $[A-1]\setminus\{a^\star\}$ on $x$, we have (by a similar calculation) $V^\pi(x) = 1 - \frac{\Delta}{\epsilon}\cdot\frac{1+\epsilon}{2+\Delta}$; for the policy that always chooses $A$ on $x$, we have $V^\pi(x) = 1$. Clearly, the one that always chooses $a^\star$ on $x$ gives the highest expected total reward, so it is the optimal policy in this case. Therefore, when there is a good action,

$$V^\star(x) = Q^\star(x,a^\star) = 1 + \frac{\Delta}{\epsilon}\cdot\frac{1+\epsilon}{2-\Delta},\tag{27}$$
$$V^\star(y) = -1 + \frac{1-\epsilon}{2}V^\star(x) + \frac{1-\epsilon}{2}V^\star(y) = -1 + \frac{\Delta}{\epsilon}\cdot\frac{1-\epsilon}{2-\Delta},\tag{28}$$
$$V^\star(x) - Q^\star(x,a) = Q^\star(x,a^\star) - Q^\star(x,a) = \Delta\big(V^\star(x) - V^\star(y)\big) = \Delta\Big(2 + \frac{2\Delta}{2-\Delta}\Big) \ge 2\Delta \quad \text{for } a \neq a^\star, A,\tag{29}$$
$$V^\star(x) - Q^\star(x,A) = 1 + \frac{\Delta}{\epsilon}\cdot\frac{1+\epsilon}{2-\Delta} - 1 \ge \frac{\Delta}{2\epsilon}.\tag{30}$$

If there is no good action, then the optimal policy is to always choose action $A$ on state $x$. In this case, we have

$$V^\star(x) = Q^\star(x,A) = 1,\tag{31}$$
$$V^\star(y) = -1 + \frac{1-\epsilon}{2}V^\star(x) + \frac{1-\epsilon}{2}V^\star(y) = -1,\tag{32}$$
$$V^\star(x) - Q^\star(x,a) = 1 - \Big(1 + \frac{1-\epsilon-\Delta}{2}V^\star(x) + \frac{1-\epsilon+\Delta}{2}V^\star(y)\Big) = \Delta \quad \text{for } a \neq A.\tag{33}$$
We use P a and E a to denote the probability measure and expectation under the environment where a ∈ [A − 1] is chosen as the good action on x; we use P unif and E unif to denote the probability and expectation under the environment where there is no good action.
For both kinds of MDPs, we have

$$R^2 = R_{\max}^2 = \sup_\pi\mathbb{E}^\pi\bigg[\Big(\sum_{t=1}^\tau r(s_t,a_t)\Big)^2\bigg] \qquad (\tau\text{ is the last step before reaching }g)$$
$$= \Theta\bigg(\sup_\pi\mathbb{E}^\pi\bigg[\Big(\sum_{t=1}^\tau\big(r(s_t,a_t)-\mathbb{E}_t[r(s_t,a_t)]\big)\Big)^2 + \Big(\sum_{t=1}^\tau\mathbb{E}_t[r(s_t,a_t)]\Big)^2\bigg]\bigg) = \Theta\Big(\sup_\pi\mathbb{E}^\pi\big[\tau + (\Delta\tau)^2\big]\Big) = \Theta\Big(\frac{1}{\epsilon} + \frac{\Delta^2}{\epsilon^2}\Big) = \Theta\Big(\frac{1}{\epsilon}\Big),\tag{34}$$

where in the last equation we use the assumption $\Delta^2 \le \epsilon$. For the MDP with the good action, we have $R^\star = R$ since the optimal policy always chooses $a^\star$ on $x$; for the MDP without the good action, $R^\star = \Theta(1)$ since the optimal policy takes action $A$ on state $x$ and directly goes to $g$. Like in the proof of Theorem 3, we consider the truncated process where the total number of steps is truncated to $K/\epsilon$ (ignoring the regret incurred afterwards). In the truncated process, we let $T_x$ denote the number of times the learner visits $x$, and $T_{x,a}$ denote the number of times the learner visits $x$ and chooses action $a$.
With all the calculations above, we have the following:
$$\mathbb{E}_{\mathrm{unif}}[\mathrm{Reg}_K] \ge \mathbb{E}_{\mathrm{unif}}\bigg[\sum_{t=1}^{K/\epsilon}\big(V^\star(s_t) - Q^\star(s_t,a_t)\big)\bigg] \ge \mathbb{E}_{\mathrm{unif}}\big[(T_x - T_{x,A})\Delta\big],\tag{35}$$
where we use Eq. (33). On the other hand,
$$\mathbb{E}_a[\mathrm{Reg}_K] \ge \mathbb{E}_a\Big[T_{x,A}\times\frac{\Delta}{2\epsilon} + (T_x - T_{x,a} - T_{x,A})\times 2\Delta\Big] \ge \mathbb{E}_a\Big[T_{x,A}\times\frac{\Delta}{4\epsilon} + (T_x - T_{x,a})\times 2\Delta\Big],\tag{36}$$
where we use Eq. (29) and Eq. (30) and that $\epsilon \le \frac{1}{8}$. By the same arguments as in Claim 4 in the proof of Theorem 3, we have that for $a \in [A-1]$,
$$\mathbb{E}_a[T_x - T_{x,a}] - \mathbb{E}_{\mathrm{unif}}[T_x - T_{x,a}] \le O\bigg(\frac{K\Delta}{\epsilon}\sqrt{\mathbb{E}_{\mathrm{unif}}[T_{x,a}]}\bigg)\tag{37}$$

and

$$\mathbb{E}_{\mathrm{unif}}[T_{x,A}] - \mathbb{E}_a[T_{x,A}] \le O\Big(K\Delta\sqrt{\mathbb{E}_{\mathrm{unif}}[T_{x,a}]}\Big),\tag{38}$$

where we use that $T_x - T_{x,a} \in [0, K/\epsilon]$ and $T_{x,A} \in [0, K]$.
In an environment where a ⋆ is chosen randomly from [A − 1], the expected regret is
$$\frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_a[\mathrm{Reg}_K] \ge \frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_a\Big[\frac{T_{x,A}}{4\epsilon}\Delta + 2(T_x - T_{x,a})\Delta\Big] \qquad \text{(by Eq. (36))}$$
$$= \frac{\Delta}{4\epsilon}\cdot\frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_a[T_{x,A}] + 2\Delta\times\frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_a[T_x - T_{x,a}]$$
$$\ge \frac{\Delta}{4\epsilon}\cdot\frac{1}{A-1}\sum_{a=1}^{A-1}\Big(\mathbb{E}_{\mathrm{unif}}[T_{x,A}] - O\big(K\Delta\sqrt{\mathbb{E}_{\mathrm{unif}}[T_{x,a}]}\big)\Big) + 2\Delta\times\frac{1}{A-1}\sum_{a=1}^{A-1}\Big(\mathbb{E}_{\mathrm{unif}}[T_x - T_{x,a}] - O\Big(\frac{K\Delta}{\epsilon}\sqrt{\mathbb{E}_{\mathrm{unif}}[T_{x,a}]}\Big)\Big)$$

(by Eq. (37) and Eq. (38))

$$\ge \frac{\Delta}{4\epsilon}\mathbb{E}_{\mathrm{unif}}[T_{x,A}] + 2\Delta\,\mathbb{E}_{\mathrm{unif}}[T_x] - \frac{2\Delta}{A-1}\mathbb{E}_{\mathrm{unif}}[T_x - T_{x,A}] - O\bigg(\frac{K\Delta^2}{\epsilon}\sqrt{\frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_{\mathrm{unif}}[T_{x,a}]}\bigg) \qquad \text{(AM-GM)}$$
$$\ge \frac{\Delta}{4\epsilon}\mathbb{E}_{\mathrm{unif}}[T_{x,A}] + \Delta\,\mathbb{E}_{\mathrm{unif}}[T_x] - O\bigg(\frac{K\Delta^2}{\epsilon}\sqrt{\frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_{\mathrm{unif}}[T_{x,a}]}\bigg).$$
Before continuing, we prove a property:
Claim 1 $\mathbb{E}[T_x] \ge \frac{K - 2\mathbb{E}[T_{x,A}]}{24\epsilon}$.

Proof [Claim 1] We focus on a single episode. Let $N_x$ be the total number of steps the learner visits $x$ in an episode, and $N_{x,A}$ be the number of steps the learner visits $x$ and chooses $A$. Clearly, $N_{x,A} \le 1$, since once the learner chooses $A$, the episode ends.

We prove the following statement: if $\mathbb{E}[N_{x,A}] \le \frac{1}{2}$, then $\mathbb{E}[N_x] \ge \frac{1}{24\epsilon}$. This can be seen by the following: let $E_A$ be the event that in the episode, the learner ever chooses action $A$ when visiting $x$, and let $E'_A$ be its complement event (i.e., replacing ever by never). Then

$$\Pr\Big[N_x \ge \frac{1}{3\epsilon}\Big] \ge \Pr[E'_A]\times\Pr\Big[N_x \ge \frac{1}{3\epsilon}\,\Big|\,E'_A\Big].$$

Notice that $\Pr[E_A] = \mathbb{E}[N_{x,A}] \le \frac{1}{2}$, so $\Pr[E'_A] \ge \frac{1}{2}$. Then notice that

$$\Pr\Big[N_x \ge \frac{1}{3\epsilon}\,\Big|\,E'_A\Big] \ge (1-3\epsilon)^{\frac{1}{3\epsilon}} \ge \Big(1-\frac{1}{2}\Big)^{2} = \frac{1}{4},$$

because every time the learner selects an action in $[A-1]$ on $x$, with probability at least

$$\frac{1-\epsilon-\Delta}{2} + \frac{1-\epsilon+\Delta}{2}\bigg(\frac{1-\epsilon}{2} + \Big(\frac{1-\epsilon}{2}\Big)^2 + \cdots\bigg) = \frac{1-\epsilon-\epsilon\Delta}{1+\epsilon} \ge 1-3\epsilon$$

the learner will visit state $x$ again. Hence,

$$\mathbb{E}[N_x] \ge \frac{1}{3\epsilon}\Pr\Big[N_x \ge \frac{1}{3\epsilon}\Big] \ge \frac{1}{3\epsilon}\times\frac{1}{2}\times\frac{1}{4} = \frac{1}{24\epsilon}.$$
Now we consider all $K$ episodes, and let $N^{(k)}_x, N^{(k)}_{x,A}$ denote the visitation counts corresponding to episode $k$. By the discussion above, we have

$$\mathbb{E}[T_x] = \sum_{k=1}^K\mathbb{E}\big[N^{(k)}_x\big] \ge \sum_{k=1}^K\frac{1}{24\epsilon}\,\mathbb{I}\Big[\mathbb{E}\big[N^{(k)}_{x,A}\big] \le \frac{1}{2}\Big] = \frac{K}{24\epsilon} - \sum_{k=1}^K\frac{1}{24\epsilon}\,\mathbb{I}\Big[\mathbb{E}\big[N^{(k)}_{x,A}\big] > \frac{1}{2}\Big] \ge \frac{K}{24\epsilon} - \sum_{k=1}^K\frac{2}{24\epsilon}\,\mathbb{E}\big[N^{(k)}_{x,A}\big] = \frac{K}{24\epsilon} - \frac{1}{12\epsilon}\mathbb{E}[T_{x,A}].$$
With Claim 1, we continue with the previous calculation:
$$\frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_a[\mathrm{Reg}_K] \ge \frac{\Delta}{4\epsilon}\mathbb{E}_{\mathrm{unif}}[T_{x,A}] + \Delta\,\mathbb{E}_{\mathrm{unif}}[T_x] - O\bigg(\frac{K\Delta^2}{\epsilon}\sqrt{\frac{1}{A-1}\sum_{a=1}^{A-1}\mathbb{E}_{\mathrm{unif}}[T_{x,a}]}\bigg)$$
$$\ge \frac{\Delta}{12\epsilon}\mathbb{E}_{\mathrm{unif}}[T_{x,A}] + \Delta\,\mathbb{E}_{\mathrm{unif}}[T_x] - O\bigg(\frac{K\Delta^2}{\epsilon}\sqrt{\frac{1}{A}\mathbb{E}_{\mathrm{unif}}[T_x - T_{x,A}]}\bigg) \ge \frac{K\Delta}{24\epsilon} - O\bigg(\frac{K\Delta^2}{\epsilon}\sqrt{\frac{1}{A}\mathbb{E}_{\mathrm{unif}}[T_x - T_{x,A}]}\bigg).$$
Suppose that the algorithm can ensure $\mathbb{E}[\mathrm{Reg}_K] \le O(u\sqrt{AK})$ for all instances with $R_{\max} \le u$. By the bound in Eq. (34), there exist a universal constant $c_2 > 0$ and a term $c_1$ that only involves logarithmic factors such that

$$\frac{K\Delta}{24\epsilon} - c_2\frac{K\Delta^2}{\epsilon}\sqrt{\frac{1}{A}\mathbb{E}_{\mathrm{unif}}[T_x - T_{x,A}]} \le c_1\sqrt{\frac{AK}{\epsilon}}.$$

We pick $\Delta = 48c_1\sqrt{\frac{\epsilon A}{K}}$ (one can verify that this satisfies $\Delta^2 \le \epsilon$ as long as $K \ge \Omega(c_1^2A)$). Then the inequality above reads

$$2c_1\sqrt{\frac{AK}{\epsilon}} - 48^2c_1^2c_2\sqrt{A\,\mathbb{E}_{\mathrm{unif}}[T_x - T_{x,A}]} \le c_1\sqrt{\frac{AK}{\epsilon}},$$
which is equivalent to
$$\mathbb{E}_{\mathrm{unif}}[T_x - T_{x,A}] \ge \frac{K}{48^4c_1^2c_2^2\,\epsilon}.$$
This, together with Eq. (35), implies
$$\mathbb{E}_{\mathrm{unif}}[\mathrm{Reg}_K] \ge \frac{K\Delta}{48^4c_1^2c_2^2\,\epsilon} = \frac{1}{48^3c_1c_2^2}\sqrt{\frac{AK}{\epsilon}}.\tag{39}$$
Recall that unif specifies the environment where there is no good action, and in this case $R^\star = \Theta(1)$. However, the bound in Eq. (39) scales with $R = R_{\max} = \Theta(\sqrt{1/\epsilon}) \gg R^\star$. Now we generalize our construction to a general number of states $S$. We construct an MDP with $S$ non-terminating states that consists of $S/2$ copies of the two-state MDP we just constructed, and equip every state $x$ with two additional actions. To connect these $S/2$ copies, we create a balanced binary tree with $S/2$ nodes; each node of the tree is the $x$ of a two-state MDP. The two additional actions on every state $x$ lead to a reward of zero and deterministic transitions to the left and the right child of the node, respectively. Furthermore, we let at most one of the copies have the optimal action $a^\star$. The initial state for this tree-structured MDP is its root.
Below, we argue that this tree-structured MDP is at least as hard as the original two-state MDP with $\Theta(SA)$ actions (to create this two-state MDP, we let its actions on state $x$ be the union over the actions on all states $x$ in the tree-structured MDP; similarly for $y$). We can see that for any algorithm in the tree-structured MDP, there is a corresponding algorithm for the two-state MDP that achieves the same expected reward. Besides, the expected reward of the optimal policy is the same in the tree-structured MDP and the two-state MDP. Therefore, our lower bound for general $S$ can simply be obtained through the two-state construction with $SA$ actions. This finishes the proof.

Appendix E. Lower Bound for Stochastic Longest Path

Proof [Theorem 9] In this proof, we use $P(\cdot|s,a) = P_{s,a}(\cdot)$ to denote the transition probability. We first prove this theorem for $S = 2$ (excluding the goal state). We assume that the regret bound claimed by the algorithm for the $V^\star = B^\star \le v$ case is

$$\mathbb{E}[\mathrm{Reg}_K] \le cv\sqrt{AK}\tag{40}$$

for some $c$ that only involves logarithmic terms. We create an SLP with two non-terminal states $x, y$, a terminal state $g$, and $A$ actions $\{1, 2, \ldots, A\}$. The initial state is $x$. The reward function is a constant $1$ for all actions on all non-terminating states (i.e., the total reward is the total number of steps before reaching $g$). On state $x$, there is a special action $b$ such that $P(g|x,b) = 1 - \frac{\sqrt{A}}{c\sqrt{K}\ln^2(vK)}$ and $P(y|x,b) = \frac{\sqrt{A}}{c\sqrt{K}\ln^2(vK)}$; for all other actions $a \in [A]\setminus\{b\}$, $P(x|x,a) = 1 - \frac{1}{v}$ and $P(g|x,a) = \frac{1}{v}$. On state $y$, there is potentially a good action $a^\star$ such that $P(g|y,a^\star) = \frac{\sqrt{A}}{2cv\sqrt{K}\ln^2(vK)}$ and $P(y|y,a^\star) = 1 - \frac{\sqrt{A}}{2cv\sqrt{K}\ln^2(vK)}$; for other actions $a \in [A]\setminus\{a^\star\}$, $P(g|y,a) = P(y|y,a) = \frac{1}{2}$. The transition kernel is summarized in Table 2.

Table 2: Transition kernel for Theorem 9.

state  action                      $\to x$           $\to y$                                       $\to g$
$x$    $b$                         $0$               $\frac{\sqrt{A}}{c\sqrt{K}\ln^2(vK)}$          $1 - \frac{\sqrt{A}}{c\sqrt{K}\ln^2(vK)}$
$x$    $[A]\setminus\{b\}$         $1-\frac{1}{v}$   $0$                                           $\frac{1}{v}$
$y$    $a^\star$                   $0$               $1 - \frac{\sqrt{A}}{2cv\sqrt{K}\ln^2(vK)}$    $\frac{\sqrt{A}}{2cv\sqrt{K}\ln^2(vK)}$
$y$    $[A]\setminus\{a^\star\}$   $0$               $\frac{1}{2}$                                  $\frac{1}{2}$

Let $\mathbb{P}_a$ and $\mathbb{E}_a$ denote the probability measure and expectation in the instance where $a^\star = a$, and let $\mathbb{P}_{\mathrm{unif}}$ and $\mathbb{E}_{\mathrm{unif}}$ denote those in the instance where $a^\star$ does not exist.
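As a numerical sanity check of this construction (an illustration only), the self-loop structure makes the relevant values easy to compute in closed form: with a reward of 1 per step, each self-loop gives $V = 1/(\text{escape probability})$.

```python
# Closed-form value check for the SLP construction above (illustration only):
# a policy that avoids b earns V(x) = v, while playing b earns ~2v+1 if a*
# exists and ~1 if it does not.
import numpy as np

A, K, v, c = 4, 10_000, 50.0, 1.0
L2 = np.log(v * K) ** 2
p_xy = np.sqrt(A) / (c * np.sqrt(K) * L2)          # P(y | x, b)
p_yg = np.sqrt(A) / (2 * c * v * np.sqrt(K) * L2)  # P(g | y, a*)

V_x_avoid_b = v                 # V = 1 + (1 - 1/v) V  =>  V = v
V_y_good    = 1 / p_yg          # V = 1 + (1 - p_yg) V, with a* present
V_y_bad     = 2.0               # V = 1 + V / 2, no good action
print(1 + p_xy * V_y_good)      # = 1 + 2v: b is optimal when a* exists
print(1 + p_xy * V_y_bad)       # ~= 1:     b is suboptimal otherwise
```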
Let N a be the total number of times (in K episodes) the learner chooses action a on state y, N b be the total number of times the learner chooses action b on state x, and N y be the total number of times the learner visits state y.
In the environment where $a^\star$ does not exist, the optimal policy on state $x$ is to choose any action in $[A]\setminus\{b\}$. This is because for any policy such that $\pi(x) = b$, we have $V^\pi(x) = 1 + P(y|x,b)V^\pi(y) \le 3$, while for any policy such that $\pi(x) \in [A]\setminus\{b\}$, we have $V^\pi(x) = v$. Therefore,

$$\mathbb{E}_{\mathrm{unif}}[\mathrm{Reg}_K] = (v-3)\,\mathbb{E}_{\mathrm{unif}}[N_b] \ge \frac{v}{2}\,\mathbb{E}_{\mathrm{unif}}[N_b].$$

In this environment, $V^\star(x) = v$ and $V^\star(y) = 2$, and thus $V^\star = B^\star = v$. By the assumption on the algorithm, we have

$$\frac{v}{2}\,\mathbb{E}_{\mathrm{unif}}[N_b] \le \mathbb{E}_{\mathrm{unif}}[\mathrm{Reg}_K] \le cv\sqrt{AK}, \quad\text{or}\quad \mathbb{E}_{\mathrm{unif}}[N_b] \le 2c\sqrt{AK}.$$

On the other hand, in the environment where $a^\star = a$, the optimal policy is to choose action $b$ on state $x$ until transitioning to state $y$, and then choose $a^\star$ on $y$. This is because for this policy, $V^\pi(y) = \frac{2cv\sqrt{K}\ln^2(vK)}{\sqrt{A}}$, and hence $V^\pi(x) = 1 + P(y|x,b)\,V^\pi(y) = 1 + 2v > v$.
We consider the environment where $a^\star$ is chosen uniformly at random from $[A]$, and denote its probability measure and expectation as $\mathbb{P}$ and $\mathbb{E}$. Clearly, $\mathbb{P} = \frac{1}{A}\sum_{a=1}^A\mathbb{P}_a$ and $\mathbb{E} = \frac{1}{A}\sum_{a=1}^A\mathbb{E}_a$.

By the convexity of KL divergence, we have

$$\mathrm{KL}(\mathbb{P}, \mathbb{P}_{\mathrm{unif}}) \le \frac{1}{A}\sum_{a=1}^A\mathrm{KL}(\mathbb{P}_a, \mathbb{P}_{\mathrm{unif}}) = O\bigg(\frac{1}{\ln^2(vK)}\bigg).$$

By the same argument as in the proof of Claim 3, Theorem 3, we have

$$\mathbb{P}(N_b > K/2) \le \mathbb{P}_{\mathrm{unif}}(N_b > K/2) + O\Big(\sqrt{\mathrm{KL}(\mathbb{P}, \mathbb{P}_{\mathrm{unif}})}\Big) \le \frac{2}{K}\,\mathbb{E}_{\mathrm{unif}}[N_b] + O\bigg(\frac{1}{\ln(vK)}\bigg) \le O\bigg(\frac{c\sqrt{A}}{\sqrt{K}} + \frac{1}{\ln(vK)}\bigg).$$

Thus, for large enough $K$, we have $\mathbb{P}(N_b > K/2) \le \frac{1}{2}$, and hence the regret in this environment is lower bounded by $\Omega(vK)$.
To generalize the result to a general number of states, we leave the initial state $x$ unchanged, but replace the state $y$ by a binary tree with $S-1$ nodes, with root $y_1$ and leaves $y_{S/2}, \ldots, y_{S-1}$. The transition probability $P(y_1|x,b)$ takes the value of $P(y|x,b)$ specified in Table 2. For all non-leaf nodes, there are only two actions, which deterministically transition to the node's two children with zero reward. For all leaf nodes, the transition and reward are the same as for the $y$ node in Table 2. Among all leaves, there is at most one action on one node being the good action $a^\star$.

Similar to the proof of Theorem 4, it is not hard to see that this MDP is at least as hard as the original two-state MDP with $\Omega(SA)$ actions on $y$, by letting the action set on $y$ in the two-state MDP be the union of the actions on the leaves in the tree-structured MDP. This is because the optimal values in these two cases are the same, and every policy in the tree-structured MDP has a corresponding policy in the two-state MDP with the same expected reward. Hence our lower bound for general $S$ can be obtained from our original constant-$S$ construction with $\Theta(SA)$ actions.
Appendix F. Auxiliary Lemmas
Lemma 23 Let $X_t \in [-c, c]^S$ be in the filtration of $(s_1, a_1, \ldots, s_{t-1}, a_{t-1}, s_t)$ for some $c > 0$ with $X_t(g) = 0$. Then with probability at least $1-\delta$,

$$\sum_{t=1}^T\mathbb{V}(P_t, X_t) \le O\bigg(c\sum_{t=1}^T\big|X_t(s_t) - P_tX_t\big| + c\sum_{t=1}^T\|X_t - X_{t+1}\|_\infty + c^2\ln(1/\delta)\bigg).$$

Proof Denote $P_t := P_{s_t,a_t}$. Then

$$\sum_{t=1}^T\mathbb{V}(P_t, X_t) = \sum_{t=1}^T\big(P_tX_t^2 - (P_tX_t)^2\big) = \sum_{t=1}^T\big(P_tX_t^2 - X_t(s'_t)^2\big) + \sum_{t=1}^T\big(X_t(s'_t)^2 - X_t(s_t)^2\big) + \sum_{t=1}^T\big(X_t(s_t)^2 - (P_tX_t)^2\big)$$
$$\le O\bigg(\sqrt{\sum_{t=1}^T\mathbb{V}(P_t, X_t^2)\ln(1/\delta)} + c^2\ln(1/\delta)\bigg) + \sum_{t=1}^T\big(X_t(s_{t+1})^2 - X_t(s_t)^2\big) + \sum_{t=1}^T\big(X_t(s_t)^2 - (P_tX_t)^2\big) \qquad (\text{because } X_t(g) = 0)$$
$$\le O\bigg(c\sqrt{\sum_{t=1}^T\mathbb{V}(P_t, X_t)\ln(1/\delta)} + c^2\ln(1/\delta)\bigg) + 2c\sum_{t=1}^T\|X_t - X_{t+1}\|_\infty + 2c\sum_{t=1}^T\big|X_t(s_t) - P_tX_t\big|$$
$$\le \frac{1}{2}\sum_{t=1}^T\mathbb{V}(P_t, X_t) + 2c\sum_{t=1}^T\|X_t - X_{t+1}\|_\infty + 2c\sum_{t=1}^T\big|X_t(s_t) - P_tX_t\big| + O\big(c^2\ln(1/\delta)\big). \qquad \text{(AM-GM inequality)}$$

Solving the inequality, we get the desired result.
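For concreteness, the variance operator used throughout can be sketched as follows (a minimal illustration, assuming the standard definition $\mathbb{V}(P, X) = PX^2 - (PX)^2$):

```python
# V(P, X) is the variance of X(s') when the next state s' is drawn from P.
import numpy as np

rng = np.random.default_rng(1)
S = 5
P = rng.dirichlet(np.ones(S))       # a transition distribution P_{s,a}
X = rng.uniform(-1.0, 1.0, size=S)  # a bounded vector X in [-c, c]^S

V = P @ X**2 - (P @ X) ** 2         # V(P, X)
assert V >= 0
print(V)
```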
Lemma 24 Let $X_1, X_2, \ldots, X_\tau \subset [0,b]$ be a sequence with a random stopping time $\tau$, where $X_i$ is in the filtration of $\mathcal{F}_i = (X_1, \ldots, X_{i-1})$, for some $b \ge 1$. Suppose that for any $i$, $\mathbb{E}[\sum_{t=i}^\tau X_t \mid \mathcal{F}_i] \le B$. Then

(a) with probability at least $1-\delta$, $\sum_{t=1}^\tau X_t \le O((B+b)\ln(1/\delta))$;

(b) $\mathbb{E}\big[(\sum_{t=1}^\tau X_t)^2\big] \le O\big((B+b)\ln_+(\frac{B+b}{c})\,\mathbb{E}[\sum_{t=1}^\tau X_t] + c^2\big)$ for any $1 \le c \le B+b$;

(c) $\mathbb{E}\big[(\sum_{t=1}^\tau X_t)^2\big] \le O\big((B+b)^2\big)$.

Proof For a sequence $X_1, X_2, \ldots, X_\tau$, define $\tau_1 = \min\{n \le \tau : \sum_{t=1}^n X_t \ge 2B\}$ and, recursively, $\tau_{m+1} = \min\{n \le \tau : \sum_{t=\tau_m+1}^n X_t \ge 2B\}$; if such a $\tau_m$ does not exist, let $\tau_m = \infty$. Naturally, we define $\tau_0 = 0$. By the condition stated in the lemma and Markov's inequality, we have $\Pr[\tau_{m+1} < \infty \mid \tau_m < \infty] \le \frac{1}{2}$. Therefore, $\Pr[\tau_m < \infty] \le 2^{-m}$ and $\Pr[\tau_m = \infty] \ge 1 - 2^{-m}$. Also, notice that by the definition of $\tau_i$, we have $\sum_{t=\tau_{i-1}+1}^{\tau_i} X_t \le 2B + b$ for all $i$ (otherwise $\sum_{t=\tau_{i-1}+1}^{\tau_i - 1} X_t > 2B$, contradicting the definition of $\tau_i$). Thus, if $\tau_m = \infty$, then $\sum_{t=1}^\tau X_t \le (2B+b)(m-1) + 2B \le (2B+b)m$. Combining the arguments above, we have the following: for any $\delta < 0.5$ (letting $m = \lceil\log_2(1/\delta)\rceil$), with probability at least $1 - 2^{-m} \ge 1-\delta$,

$$\sum_{t=1}^\tau X_t \le (2B+b)m = (2B+b)\lceil\log_2(1/\delta)\rceil \le 8(B+b)\ln(1/\delta).$$
This proves (a). Below we bound the second moment:
$$\mathbb{E}\bigg[\Big(\sum_{t=1}^\tau X_t\Big)^2\bigg] \le \mathbb{E}\bigg[\Big(\sum_{t=1}^\tau X_t\Big)^2\,\bigg|\,\tau_M = \infty\bigg]\Pr[\tau_M = \infty] + \sum_{m=M+1}^\infty\mathbb{E}\bigg[\Big(\sum_{t=1}^\tau X_t\Big)^2\,\bigg|\,\tau_{m-1} < \infty, \tau_m = \infty\bigg]\Pr[\tau_{m-1} < \infty, \tau_m = \infty]$$
$$\le (2B+b)M\,\mathbb{E}\bigg[\sum_{t=1}^\tau X_t\,\bigg|\,\tau_M = \infty\bigg] + \sum_{m=M+1}^\infty\big((2B+b)m\big)^2\Pr[\tau_{m-1} < \infty] \le 2M(B+b)\,\mathbb{E}\bigg[\sum_{t=1}^\tau X_t\bigg] + (2B+b)^2\sum_{m=M+1}^\infty 2^{-m+1}m^2.$$

If we pick $M = \lceil 4\log_2(\frac{2B+b}{c})\rceil$, then $(2B+b)^2 2^{-\frac{M}{2}} \le c^2$, and the last expression can be further upper bounded by $O\big((B+b)\ln_+(\frac{B+b}{c})\,\mathbb{E}[\sum_{t=1}^\tau X_t] + c^2\big)$, which proves (b). (c) is an immediate result of (b) by picking $c = B+b$ and noticing that $\mathbb{E}[\sum_{t=1}^\tau X_t] \le B$.
Lemma 29 (Lemma 30, Chen et al. (2021a)) Let $\|X\|_\infty \le C$; then $\mathbb{V}(P, X^2) \le 4C\,\mathbb{V}(P, X)$ for any $P$.
Appendix G. Weakening the assumption on proper policies
Assumption 1 can be weakened to the following:
Assumption 2 There exists a policy $\pi^\star \in \Pi_{\mathrm{SD}}$ such that

• $V^{\pi^\star}(s) \ge V^\pi(s)$ for all $s \in \mathcal{S}$ and $\pi \in \Pi_{\mathrm{HD}}$;

• $T^\star \triangleq \max_s \mathbb{E}^{\pi^\star}[\tau \mid s_1 = s] < \infty$, where $\tau$ is the time index right before reaching $g$.
In words, Assumption 2 assumes that there exists an optimal policy that is stationary and proper. If such an optimal policy is not unique, we can take $T^\star$ to be the minimum over all such policies. Notice that the first part of Assumption 2 is sufficient for all our algorithms to work, though the regret bound has a $\ln T$ factor, which could be unbounded. That is why in the main text we introduced the stronger Assumption 1 and upper bounded $T$ by the order of $KT_{\max}$. Below we show that with the additional second part of Assumption 2 and the algorithmic trick introduced in Tarbouriech et al. (2021b), the $T_{\max}$ dependency can be replaced by $T^\star$. Assuming that an order-optimal bound on $T^\star$ is known, the agent modifies the MDP by setting all rewards to $\tilde{r}(s,a) = r(s,a) - \frac{1}{KT^\star}$. The value of the optimal policy in the modified MDP is smaller than that in the original MDP by at most $\frac{1}{K}$. It is then sufficient to bound the regret for the modified MDP, since the regret in the original MDP exceeds that in the modified MDP by at most $1$.

For the modified MDP, we can show that the total time horizon $T$ is bounded. By a trivial bound of $\mathbb{V}(P_t, V^\star) \le B_\star^2$, combined with Lemma 16, we get a high-probability bound on the cumulative modified reward. Notice that $K\tilde{V}^\star(s_{\mathrm{init}}) \ge KV^\star(s_{\mathrm{init}}) - 1$ and

$$\sum_{t=1}^T\tilde{r}(s_t,a_t) = -\frac{T}{KT^\star} + \sum_{t=1}^T r(s_t,a_t) \le -\frac{T}{KT^\star} + KV^\star(s_{\mathrm{init}}) + R\sqrt{K\ln(KR/\delta)} + R_{\max}\ln(KR_{\max}/\delta)$$

with probability at least $1 - O(\delta)$ by Lemma 26 (notice that $\mathbb{E}[\sum_{t=1}^T r(s_t,a_t)] \le KV^\star(s_{\mathrm{init}})$). Rearranging by $T$ leads to, with probability at least $1 - O(\delta)$,

$$T \le O\big(\mathrm{poly}(S, A, B, B_\star, R, R_{\max}, K, T^\star, \delta^{-1})\big).$$

This upper bound on $T$ helps us replace the $\ln T$ dependency in the regret by a log term that only involves algorithm-independent quantities.

If an order-optimal bound on $T^\star$ is not known, we can follow the arguments in Tarbouriech et al. (2021b), which set $\tilde{r}(s,a) = r(s,a) - \frac{1}{K^n}$ for some $n \gg 1$. By this, we can also remove the dependency on $T_{\max}$, at the price of an additional $\frac{T^\star}{K^{n-1}}$ regret.
Lemma 5 If $r(s,a) \ge 0$ for all $(s,a)$, then $R = O\big(\sqrt{V^\star B_\star\ln_+(B_\star/V^\star)}\big)$ and $R_{\max} = O(B_\star)$, where $\ln_+(x) \triangleq \ln(1+x)$.

Lemma 22 Algorithm 4 ensures $Q_t(s,a) \ge Q^\star(s,a)$ for all $(s,a)$ and $t$ with probability at least $1-\delta$. Proof For any $B > 0$, we define $V^\star_{[B]} \in [-B, 0]^S$ to be such that $V^\star_{[B]}(s) = \max\{-B, V^\star(s)\}$.
Table 1: Overview of regret bounds for stochastic path problems. See Section 2 for definitions.

Setting   Scale $B_\star$   $\mathrm{Reg}_K$ in $\tilde{O}(\cdot)$
SP        known             $R\sqrt{SAK} + R_{\max}SA + B_\star S^2A$   (Theorem 2)
we get for SLP, but our technique there does not lead to this desired bound. Finally, it remains an open question to prove or disprove Zhang et al. (2021)'s conjecture about the lower-order term, which has direct consequences for our adaptivity result in SLP.

References

Peter Auer, Nicolo Cesa-Bianchi, Yoav Freund, and Robert E Schapire. The nonstochastic multiarmed bandit problem. SIAM Journal on Computing, 32(1):48-77, 2002.
Mohammad Gheshlaghi Azar, Ian Osband, and Rémi Munos. Minimax regret bounds for reinforcement learning. In International Conference on Machine Learning, pages 263-272. PMLR, 2017.
Liyu Chen and Haipeng Luo. Finding the stochastic shortest path with low regret: The adversarial cost and unknown transition case. In International Conference on Machine Learning, pages 1651-1660. PMLR, 2021.
Liyu Chen and Haipeng Luo. Near-optimal goal-oriented reinforcement learning in non-stationary environments. Advances in Neural Information Processing Systems, 2022.
Liyu Chen, Mehdi Jafarnia-Jahromi, Rahul Jain, and Haipeng Luo. Implicit finite-horizon approximation and efficient optimal algorithms for stochastic shortest path. Advances in Neural Information Processing Systems, 34:10849-10861, 2021a.
Liyu Chen, Haipeng Luo, and Chen-Yu Wei. Minimax regret for stochastic shortest path with adversarial costs and known transition. In Conference on Learning Theory, pages 1180-1215. PMLR, 2021b.
Liyu Chen, Rahul Jain, and Haipeng Luo. Improved no-regret algorithms for stochastic shortest path with linear MDP. In International Conference on Machine Learning, pages 3204-3245. PMLR, 2022a.
Liyu Chen, Haipeng Luo, and Aviv Rosenberg. Policy optimization for stochastic shortest path. In Conference on Learning Theory, 2022b.
Alon Cohen, Yonathan Efroni, Yishay Mansour, and Aviv Rosenberg. Minimax regret for stochastic shortest path. Advances in Neural Information Processing Systems, 34:28350-28361, 2021.
Christoph Dann and Emma Brunskill. Sample complexity of episodic fixed-horizon reinforcement learning. In Advances in Neural Information Processing Systems, pages 2818-2826, 2015.
Yonathan Efroni, Nadav Merlis, and Shie Mannor. Reinforcement learning with trajectory feedback. In AAAI Conference on Artificial Intelligence, 2021.
Mehdi Jafarnia-Jahromi, Liyu Chen, Rahul Jain, and Haipeng Luo. Online learning for stochastic shortest path model via posterior sampling. arXiv preprint arXiv:2106.05335, 2021.
Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I Jordan. Is Q-learning provably efficient? In Conference on Neural Information Processing Systems, 2018.
Sham Kakade and John Langford. Approximately optimal approximate reinforcement learning. In Proc. 19th International Conference on Machine Learning, 2002.
Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020.
Yifei Min, Jiafan He, Tianhao Wang, and Quanquan Gu. Learning stochastic shortest path with linear function approximation. In International Conference on Machine Learning, pages 15584-15629. PMLR, 2022.
Martin L Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, 2014.
Aviv Rosenberg, Alon Cohen, Yishay Mansour, and Haim Kaplan. Near-optimal regret bounds for stochastic shortest path. In International Conference on Machine Learning, pages 8210-8219. PMLR, 2020.
Jean Tarbouriech, Evrard Garcelon, Michal Valko, Matteo Pirotta, and Alessandro Lazaric. No-regret exploration in goal-oriented reinforcement learning. In International Conference on Machine Learning, pages 9428-9437. PMLR, 2020.
Jean Tarbouriech, Matteo Pirotta, Michal Valko, and Alessandro Lazaric. Sample complexity bounds for stochastic shortest path with a generative model. In Algorithmic Learning Theory, pages 1157-1178. PMLR, 2021a.
Jean Tarbouriech, Runlong Zhou, Simon S Du, Matteo Pirotta, Michal Valko, and Alessandro Lazaric. Stochastic shortest path: Minimax, parameter-free and towards horizon-free regret. Advances in Neural Information Processing Systems, 34:6843-6855, 2021b.
Daniel Vial, Advait Parulekar, Sanjay Shakkottai, and R Srikant. Regret bounds for stochastic shortest path problems with linear function approximation. In International Conference on Machine Learning, pages 22203-22233. PMLR, 2022.
Ruosong Wang, Ruslan Salakhutdinov, and Lin F Yang. Reinforcement learning with general value function approximation: Provably efficient approach via bounded eluder dimension. In Conference on Neural Information Processing Systems, 2020.
Andrea Zanette and Emma Brunskill. Tighter problem-dependent regret bounds in reinforcement learning without domain knowledge using value function bounds. In International Conference on Machine Learning, 2019.
Zihan Zhang, Yuan Zhou, and Xiangyang Ji. Almost optimal model-free reinforcement learning via reference-advantage decomposition. Advances in Neural Information Processing Systems, 33:15198-15207, 2020.
Zihan Zhang, Xiangyang Ji, and Simon Du. Is reinforcement learning more difficult than bandits? A near-optimal algorithm escaping the curse of horizon. In Conference on Learning Theory, pages 4528-4531. PMLR, 2021.
Zihan Zhang, Xiangyang Ji, and Simon Du. Horizon-free reinforcement learning in polynomial time: the power of stationary policies. In Conference on Learning Theory, pages 3858-3904. PMLR, 2022.
This assumption is for simplicity and is without loss of generality: if the initial state is drawn from a fixed distribution $\rho \in \Delta_{\mathcal{S}}$, we can create a virtual initial state $s_{\mathrm{init}}$ on which every action leads to zero reward and next-state distribution $\rho$.
Lemma 25 (Exercise 5.15 in Lattimore and Szepesvári (2020)) Let $X_t$ be a real-valued random variable in the filtration of $\mathcal{F}_t = (X_1, \ldots, X_{t-1})$.

Proof The claim is stated for a fixed horizon $T$ in Lattimore and Szepesvári (2020). However, we can define a surrogate random variable $X'_i$ that is adapted to the filtration; the fixed-horizon bound then holds for $X'_i$ with probability $1-\delta$, which is equivalent to the anytime result for $X_i$.

Lemma 26 Let $X_t$ be a real-valued random variable in the filtration of $\mathcal{F}_t = (X_1, \ldots, X_{t-1})$.

Proof By Exercise 5.15 of Lattimore and Szepesvári (2020), with probability at least $1 - \delta/(2i^2)$, we have the corresponding bound for each scale $i$. By a union bound, this holds with probability $1-\delta$ uniformly over all $i \ge 1$. The reasoning for the second inequality is the following. We are computing $\min_{i \ge i_0} f(i) + g(i)$, where $f(i)$ is monotonically decreasing in $i$ and $g(i)$ is monotonically increasing. Let $i^* = \arg\min_{i\in\mathbb{Z}} f(i) + g(i)$. If $i^* \ge i_0$, the equation is obviously true. Otherwise, the optimal solution is $\min_{i \ge i_0} f(i) + g(i) = f(i_0) + g(i_0) \le f(i^*) + g(i_0)$ by the monotonicity.

Lemma 27 Let $X_1, \ldots \in \mathbb{R}^S$ be a sequence of i.i.d. random vectors with mean $\mu$ and variance $\Sigma$ such that $\|X_i\|_1 < c$ almost surely. Then with probability at least $1 - 2S\delta$, it holds for all $w \in \mathbb{R}^S$ such that $\|w\|_\infty < C$ and all $T > 0$ simultaneously:

$$\sum_{t=1}^T\langle X_t, w\rangle \le 4\sqrt{ST\,w^\top\Sigma w\,\big(\ln(\delta^{-1}) + 2\ln\ln(Tc^2)\big)} + eScC\ln\frac{2\ln^2(ecC)}{\delta}.$$

Proof The proof extends Lemma 26. Let $v_1, \ldots, v_S$ be the eigenvectors of $\Sigma$; then we have, with probability $1-2S\delta$, for all $v_i$ simultaneously,

$$\sum_{t=1}^T\langle X_t, v_i\rangle \le 4\sqrt{T\,v_i^\top\Sigma v_i\,\big(\ln(\delta^{-1}) + 2\ln\ln(Tc^2)\big)} + ec\ln\frac{2\ln^2(ec)}{\delta}.$$

Let $w = \sum_{i=1}^S a_iv_i$, where we know that $\|w\|_2 \le \sqrt{S}C$ and $\sum_{i=1}^S|a_i| \le SC$. This implies

$$\sum_{t=1}^T\langle X_t, w\rangle \le \sum_{i=1}^S\bigg(4\sqrt{T\,a_i^2\,v_i^\top\Sigma v_i\,\big(\ln(\delta^{-1}) + 2\ln\ln(Tc^2)\big)} + |a_i|\,ec\ln\frac{2\ln^2(ec)}{\delta}\bigg) \le 4\sqrt{ST\,w^\top\Sigma w\,\big(\ln(\delta^{-1}) + 2\ln\ln(Tc^2)\big)} + eScC\ln\frac{2\ln^2(ecC)}{\delta},$$

where the last step applies the Cauchy-Schwarz inequality to the first sum and uses $\sum_i|a_i| \le SC$ for the second.
[
"MULTIBAND PHOTOMETRY OF A PATROCLUS-MENOETIUS MUTUAL EVENT: CONSTRAINTS ON SURFACE HETEROGENEITY",
"MULTIBAND PHOTOMETRY OF A PATROCLUS-MENOETIUS MUTUAL EVENT: CONSTRAINTS ON SURFACE HETEROGENEITY"
] | [
"Ian Wong iwong@mit.edu \nDepartment of Earth, Atmospheric, and Planetary Sciences\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n\nDivision of Geological and Planetary Sciences\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n",
"Michael E Brown \nDivision of Geological and Planetary Sciences\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n"
] | [
"Department of Earth, Atmospheric, and Planetary Sciences\nMassachusetts Institute of Technology\n02139CambridgeMAUSA",
"Division of Geological and Planetary Sciences\nCalifornia Institute of Technology\n91125PasadenaCAUSA",
"Division of Geological and Planetary Sciences\nCalifornia Institute of Technology\n91125PasadenaCAUSA"
] | [] | We present the first complete multiband observations of a binary asteroid mutual event. We obtained high-cadence, high-signal-to-noise photometry of the UT 2018 April 9 inferior shadowing event in the Jupiter Trojan binary system Patroclus-Menoetius in four Sloan bands -g , r , i , and z . We use an eclipse lightcurve model to fit for a precise mid-eclipse time and estimate the minimum separation of the two eclipsing components during the event. Our best-fit mid-eclipse time of 2458217.80943 +0.00057 −0.00050 is 19 minutes later than the prediction of Grundy et al.(2018); the minimum separation between the center of Menoetius' shadow and the center of Patroclus is 72.5 ± 0.7 km -slightly larger than the predicted 69.5 km. Using the derived lightcurves, we find no evidence for significant albedo variations or large-scale topographic features on the Earth-facing hemisphere and limb of Patroclus. We also apply the technique of eclipse mapping to place an upper bound of ∼0.15 mag on wide-scale surface color variability across Patroclus. | 10.3847/1538-3881/ab18f4 | [
"https://arxiv.org/pdf/1904.06379v1.pdf"
] | 118,683,959 | 1904.06379 | 6eab7a5a4b4b9e2c9b3e50c2d3e34382aea4aba3 |
MULTIBAND PHOTOMETRY OF A PATROCLUS-MENOETIUS MUTUAL EVENT: CONSTRAINTS ON SURFACE HETEROGENEITY
April 16, 2019
Ian Wong iwong@mit.edu
Department of Earth, Atmospheric, and Planetary Sciences
Massachusetts Institute of Technology
02139CambridgeMAUSA
Division of Geological and Planetary Sciences
California Institute of Technology
91125PasadenaCAUSA
Michael E Brown
Division of Geological and Planetary Sciences
California Institute of Technology
91125PasadenaCAUSA
MULTIBAND PHOTOMETRY OF A PATROCLUS-MENOETIUS MUTUAL EVENT: CONSTRAINTS ON SURFACE HETEROGENEITY
April 16, 2019
Draft version. Preprint typeset using LaTeX style emulateapj v. 12/16/11.
† 51 Pegasi b Postdoctoral Fellow
Keywords: planets and satellites: surfaces - minor planets, asteroids: individual (Patroclus) - techniques: photometric
We present the first complete multiband observations of a binary asteroid mutual event. We obtained high-cadence, high-signal-to-noise photometry of the UT 2018 April 9 inferior shadowing event in the Jupiter Trojan binary system Patroclus-Menoetius in four Sloan bands: g', r', i', and z'. We use an eclipse lightcurve model to fit for a precise mid-eclipse time and estimate the minimum separation of the two eclipsing components during the event. Our best-fit mid-eclipse time of $T_c = 2458217.80943^{+0.00057}_{-0.00050}$ is 19 minutes later than the prediction of Grundy et al. (2018); the minimum separation between the center of Menoetius' shadow and the center of Patroclus is 72.5 ± 0.7 km, slightly larger than the predicted 69.5 km. Using the derived lightcurves, we find no evidence for significant albedo variations or large-scale topographic features on the Earth-facing hemisphere and limb of Patroclus. We also apply the technique of eclipse mapping to place an upper bound of ∼0.15 mag on wide-scale surface color variability across Patroclus.
INTRODUCTION
The origin and nature of Jupiter Trojans have remained an enigma for many decades. The central question remains whether these objects orbiting in 1:1 mean motion resonance with Jupiter formed in situ or were scattered inward from the outer Solar System and captured into resonance during a period of dynamical instability sometime after the end of planet formation (Morbidelli et al. 2005; Tsiganis et al. 2005). While recent numerical modeling has demonstrated the consistency of the latter scenario with current theories of late-stage giant planet migration (e.g., Roig & Nesvorný 2015), the definitive answer to the question of the Trojans' formation location will invariably come from obtaining a more detailed understanding of the physical properties and composition of these objects.
The discovery of Menoetius, the nearly equal-size binary companion of Patroclus (Merline et al. 2001), established the first multiple system in the Trojan population and provided the first estimate of a Trojan's bulk density. Subsequent analyses using resolved imaging (Marchis et al. 2006; Grundy et al. 2018), thermal spectroscopy during mutual events (Mueller et al. 2010), and stellar occultations (Buie et al. 2015) have refined the density estimate to the current value of 1.08 ± 0.33 g/cm³. This low density indicates that Patroclus-Menoetius's bulk composition is dominated by ices, with significant porosity, similar to density measurements of cometary nuclei. Such a compositional model points strongly to an outer solar system origin of Trojans.
Theories of binary asteroid formation center around two processes: capture or coeval formation. The former process involves stochastic close encounters between two bodies, with capture occurring either via dynamical friction from surrounding objects, energy exchange during gravitational scattering of a third body, or capture of fragments from a collision (e.g., Goldreich et al. 2002). Within the context of dynamical instability models of solar system evolution, Patroclus-Menoetius could have formed via capture early on during the planet formation stage, after the planet formation stage prior to the instability in the outer Solar System, or following the scattering of Trojans into their current orbits. The latter process of coeval formation forms binaries through the gravitational collapse of locally concentrated swarms of planetesimals (e.g., Nesvorný et al. 2010).
While coeval formation has a strong tendency to produce near-equal binary components, capture typically results in large size discrepancies between the two components. Therefore, the near-equal sizes of Patroclus and Menoetius point toward coeval formation. Furthermore, coeval formation always produces companions with identical compositions, while capture scenarios can yield heterogeneous pairs. Detailed study of Kuiper Belt binaries has revealed a preponderance of equal-color pairs, whereas the average system colors span the full range of colors seen in the overall population (Benecchi et al. 2009). If recent dynamical instability models are true, and the Trojans were scattered into their current orbits from the outer Solar System, then one would expect Patroclus-Menoetius to also have identical colors as a result of coeval formation in the early Solar System.
Comparisons of the properties of the two binary components provide a powerful empirical test of binary formation theories. In particular, the measurement of discrepant physical properties between Patroclus and Menoetius would immediately rule out coeval formation. It has been hypothesized for over a decade that the Trojans are comprised of two color sub-populations with distinct photometric and spectroscopic characteristics (e.g., Roig et al. 2008; Wong et al. 2014), and within the framework of dynamical instability models, these two sub-populations formed in different regions of the outer protoplanetary disk (Wong & Brown 2016). If Patroclus and Menoetius are found to belong to different sub-populations, then it means that the binary system formed via capture during or after the period of dynamical instability, when the two sub-populations first mixed.
The unique nature of the Patroclus-Menoetius system has made it a prime target for detailed study, and it is one of five Trojan asteroids that will be visited by the space probe Lucy. An extensive effort has begun to better characterize the Trojan targets in order to maximize the mission's scientific yield. In 2017-2019, Patroclus-Menoetius was in a mutual event season when eclipse and occultation events were visible from Earth. We obtained multiband photometric observations of an inferior shadowing event as Menoetius' shadow passed across Patroclus on UT 2018 April 9. In this paper, we present high-cadence, high-signal-to-noise lightcurves in four bands and fit the eclipse lightcurves to produce a precise mid-eclipse timing and estimate of the relative separation of the eclipsing components at mid-eclipse. We also use the technique of eclipse mapping, a first in the study of binary asteroids, to derive constraints on surface heterogeneity from the resultant color lightcurves.
OBSERVATIONS AND DATA ANALYSIS
We observed the UT 2018 April 9 Patroclus-Menoetius inferior eclipsing event using the then newly-installed Wafer-scale Imager for Prime (WaSP) instrument on the 200-inch Hale Telescope at Palomar Observatory. The science detector in WaSP is a 6144×6160 CCD with a pixel scale of 0.18". We chose a 2048×2048 sub-array to reduce readout time and increase the cadence of our observations. As the shadow of Menoetius passed across the surface of Patroclus, we imaged the system in four Sloan filters (g', r', i', and z') with individual exposure times of 30, 20, 20, and 45 s, respectively, which yielded a target signal-to-noise of at least 100 in all bands. Filters were cycled in the order g'-r'-i'-z', producing a uniform cadence of roughly 5.5 minutes in each band, after accounting for readout and filter changes. Bias frames and dome flats were acquired at the beginning of the night prior to science observations.

Observing conditions at Palomar ranged from average to poor throughout the night. The sky was mostly clear, with a few isolated bands of thin, high-altitude clouds passing through at various points during the night. The seeing was poorest at the beginning of the observations, prior to the start of the eclipse; before UT 5:00, the typical seeing exceeded 1.6", going as high as 2.1" at times. The remainder of the night saw significantly better seeing, averaging around 1.2-1.3", with the exception of a roughly 30-minute period around UT 8:00, when there was a spike in the seeing to over 1.6", likely associated with the passage of a few tenuous bands of high-altitude clouds across the vicinity of the observing field. There was also an increase in the seeing during the final 45 minutes of observation. These periods of relatively poor seeing can be identified by the corresponding notable increase in scatter in the lightcurves during those times.
Image processing and photometric calibration were
carried out using standard techniques. After the images were bias-subtracted and flat-fielded, the centroid positions and fluxes of bright sources in each image were obtained using SExtractor (Bertin & Arnouts 1996). These sources were then matched with stars in the Pan-STARRS DR1 catalog (Flewelling et al. 2016) to produce an astrometric solution and a photometric zeropoint. Our pipeline then automatically queried the JPL Horizons database for the position of Patroclus-Menoetius at the time of the exposure, identified the corresponding source on the image, and computed its apparent magnitude. Photometric extraction was carried out using a variety of fixed circular aperture sizes with diameters ranging from 8 to 24 pixels, choosing the optimal aperture for each exposure that minimizes the resultant photometric error. The median optimal aperture diameters in the four bands are 20, 11, 16, and 17 pixels, corresponding to radii of 1.80", 0.99", 1.44", and 1.53", respectively.
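A schematic sketch of this optimal-aperture selection step (hypothetical function and variable names, not the authors' actual pipeline; assumes a background-subtracted cutout centered on the target and a simple source-plus-sky noise model):

```python
# For each exposure, pick the circular aperture radius that minimizes the
# estimated photometric error (equivalently, maximizes the S/N).
import numpy as np

def best_aperture(cutout, sky_level, sky_rms, radii):
    """cutout: background-subtracted image centered on the target (ADU)."""
    yy, xx = np.indices(cutout.shape)
    cy, cx = (np.array(cutout.shape) - 1) / 2
    r = np.hypot(yy - cy, xx - cx)
    best = None
    for rad in radii:
        mask = r <= rad
        flux = cutout[mask].sum()
        npix = mask.sum()
        # Poisson noise from the source plus sky level and background RMS
        err = np.sqrt(max(flux, 0) + npix * (sky_level + sky_rms**2))
        snr = flux / err
        if best is None or snr > best[1]:
            best = (rad, snr)
    return best  # (optimal radius, its S/N)
```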
In Figure 1, the apparent magnitude lightcurves are plotted in each band; the individual 1σ uncertainties are a quadrature sum of the propagated photometric errors stemming from the measured fluxes and the zeropoint uncertainties. The eclipse produced a roughly 0.15 mag dimming of the total system brightness in each of the four bands. The median photometric uncertainties are 0.0079, 0.0085, 0.0067, and 0.0074 mag in g'-, r'-, i'-, and z'-band, respectively. A handful of outliers are discernible, for example, two in the r'-band lightcurve at around UT 8:00 and 10:20. Visual inspection of these images did not reveal cosmic rays or any obvious chip artifacts that could have affected these points. By changing the extraction aperture used for those exposures, we found that the saliency of these outliers showed notable variation, suggesting a non-astrophysical cause. We also note that all of the outlier exposures occurred during the periods of increased seeing mentioned previously. We have chosen to leave them in the lightcurves presented in this paper.

In the z'-band images, there was discernible residual fringing on the flux arrays, even after flat-fielding, particularly in the northeast corner. While the target mostly avoided the regions of the detector with the most severe residual fringing, there is still a noticeable effect in the z'-band lightcurve, as manifested by the larger scatter in the photometry on short timescales and larger than expected photometric zeropoint errors. We do not attempt to correct for fringing, and while we present the z'-band lightcurve in Figure 1, we do not utilize or discuss the z'-band photometry in the following analysis.
DISCUSSION
3.1. Eclipse lightcurve fit
To derive estimates of the mid-eclipse time and the extent of the eclipsed region, we use a custom transit model to fit the i'-band lightcurve, which has the smallest median photometric error. Since the eclipsed region of Patroclus is non-illuminated, we can equivalently model the eclipse event as an occultation.
The mutual orbit of the binary system is consistent with circular, so we fix the eccentricity to zero. We fix the orbital period and semimajor axis to the values reported and assumed in the mutual event predictions of Grundy et al. (2018): P = 4.282680 days, a = 688.5 km. Both components are significantly non-spherical, and modeling from occultation and rotational phase curves yields a triaxial radius ratio of α : β : γ = 1.3 : 1.21 : 1; the long dimension of each object lies along the line connecting the two objects, while the shortest dimension is aligned with the angular momentum vector of the binary system (Buie et al. 2015). During a mutual event, the sky-projected shapes of Patroclus (1) and Menoetius (2) are ellipses with semimajor axis values of β₁ = 117 km, γ₁ = 98 km and β₂ = 108 km, γ₂ = 90 km, respectively.
We fit for the center of eclipse time T_c and the apparent orbital inclination i, which is defined relative to the sky plane so that i = 90° is a perfectly edge-on occultation where the centers of the two objects align at mid-event. For each pair of T_c and i values in the Markov Chain Monte Carlo (MCMC) chain, we use the orbital shape and period to derive the relative separation vector between the two components at every point in the time series. To compute the amount of Patroclus blocked by Menoetius' shadow, we use a Python-based code (https://github.com/chraibi/EEOver) to calculate the overlapping area of the two ellipses, which is based on the algorithm described in Hughes & Chraibi (2012). We also fit for a constant multiplicative factor to normalize the out-of-eclipse lightcurve to unity.
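For readers without access to that code, the ellipse-overlap computation can be approximated with a simple Monte Carlo sketch (an illustration only; the fit itself uses the exact Hughes & Chraibi (2012) algorithm, and the offset below is a hypothetical example):

```python
# Monte Carlo estimate of the overlap area of two ellipses with semi-axes
# (b1, g1) and (b2, g2) whose centers are offset by (dx, dy), all in km.
import numpy as np

def ellipse_overlap_mc(b1, g1, b2, g2, dx, dy, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    # Sample uniformly over the bounding box of ellipse 1 (area 4*b1*g1).
    x = rng.uniform(-b1, b1, n)
    y = rng.uniform(-g1, g1, n)
    in1 = (x / b1) ** 2 + (y / g1) ** 2 <= 1
    in2 = ((x - dx) / b2) ** 2 + ((y - dy) / g2) ** 2 <= 1
    return 4 * b1 * g1 * np.mean(in1 & in2)

# Patroclus' disk vs. Menoetius' shadow at a hypothetical grazing offset:
print(ellipse_overlap_mc(117, 98, 108, 90, dx=0.0, dy=160.0))  # km^2
```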
We modify the transit model to account for the fact that Menoetius is illuminated, which dilutes the transit signal relative to the case where the secondary is dark. If the lightcurve of the eclipsed object Patroclus is modeled as λ(t), then the total lightcurve of the binary system is (λ(t) + f₂)/(1 + f₂), where f₂ is the brightness of the secondary Menoetius relative to Patroclus. If Patroclus and Menoetius were identical in albedo, then the brightness ratio would be equal to the ratio of sky-projected areas: f₂ = β₂γ₂/β₁γ₁. While it is reasonable to assume that the two components are largely identical in composition and therefore should have very similar albedos, given the likely formation mechanism of such near-equal mass binaries (see Section 1) and the markedly narrow albedo distribution of the Trojan asteroid population as a whole (e.g., Romanishin et al. 2018; Fernández et al. 2003), we nevertheless account for our uncertainty in the albedos of the individual components: we set a multiplicative scaling factor on f₂ and place a Gaussian prior on its value centered on unity with a standard deviation of 20%, consistent with the variance in the measured geometric albedos of large Trojans (e.g., Fernández et al. 2003; Romanishin et al. 2018).
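A minimal sketch of this dilution correction (assuming the equal-albedo brightness ratio and the ellipse dimensions quoted above):

```python
# If lam(t) is the normalized lightcurve of Patroclus alone and f2 the relative
# brightness of Menoetius, the combined lightcurve is (lam + f2) / (1 + f2).
import numpy as np

b1, g1, b2, g2 = 117.0, 98.0, 108.0, 90.0   # sky-projected semi-axes (km)
f2 = (b2 * g2) / (b1 * g1)                   # equal-albedo brightness ratio

def system_flux(eclipsed_area):
    A1 = np.pi * b1 * g1                     # Patroclus' sky-projected area
    lam = 1.0 - eclipsed_area / A1           # fraction of Patroclus still lit
    return (lam + f2) / (1.0 + f2)

print(system_flux(0.124 * np.pi * b1 * g1))  # ~0.93 for a 12.4% eclipsed disk
```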
The best-fit eclipse lightcurve is plotted in Figure 2. We have removed the fourth data point prior to the final fit, which is more than 3σ discrepant from the best-fit eclipse model. The lightcurve is normalized such that the combined out-of-eclipse brightness of Patroclus and Menoetius is unity. The scatter in the residuals is 0.0048, compared to a median relative flux uncertainty of 0.0030, indicating significant non-white noise in the lightcurve attributable to the periods of poorer observing conditions at the beginning and towards the end of the night. We measure a mid-eclipse time (in Julian days) of
$$T_c = 2458217.80943^{+0.00057}_{-0.00050},\tag{1}$$
which corresponds to UT 2018 April 9 7:25:35 with an uncertainty of 46 s. This is 19 minutes later than the predicted center of eclipse in Grundy et al. (2018). Meanwhile, we obtain a precise relative inclination estimate of i = 83.95 ± 0.06 deg. We can compute the sky-projected separation d_min between the center of Patroclus and the center of Menoetius' shadow at mid-eclipse:

$$d_{\min} = a\cos(i) = 72.5 \pm 0.7~\mathrm{km}.\tag{2}$$

Grundy et al. (2018) reported a predicted minimum separation between the centers of the two eclipsing bodies of 69.5 km. The greater separation derived from our fit indicates a more grazing shadowing event than predicted and points toward a slight inaccuracy in the orbital pole obliquity calculated in Grundy et al. (2018). We remind the reader that during this event, it is the shadow of Menoetius that occults Patroclus. The disk of Menoetius itself does not interact with the disk of the primary.
3.2. Surface properties

Various physical and compositional properties of the surface are expressed in the eclipse lightcurves. When looking in one photometric band, comparison between the observed lightcurve and the best-fit eclipse model provides constraints on albedo variations across the eclipsed region of the primary as well as the shapes of both binary components. Significant covariant deviations in the residuals from a flat line may indicate patches of enhanced or reduced reflectivity on the primary or significant deviations along the limb from that of a sky-projected ellipse. Examining the residuals from our best-fit eclipse model in Figure 2, we do not discern any statistically significant deviations indicating non-uniform reflectivity or non-ellipsoidal shapes for the primary disk and secondary shadow.
Leveraging photometric lightcurves at multiple wavelengths provides additional information about the level of color variation across and between the two binary components. As the shadow of Menoetius eclipses Patroclus, the contribution of the shadowed region to the average color of the system is removed. By examining the resultant color lightcurves, one can piece together the color distribution of the eclipsed region in a technique known as eclipse mapping. This powerful method allows one to potentially extract spatial information about the target from spatially unresolved images. For each pair of photometric lightcurves, we use linear interpolation between adjacent points in the second lightcurve's time series to calculate the magnitudes in the second filter at the time sampling of the first lightcurve's time series. We then subtract the resampled lightcurves from one another, adding the propagated uncertainties in quadrature. Figure 3 shows the three color lightcurves derived from the g'-, r'-, and i'-band lightcurves in Figure 1. We have omitted the color lightcurves involving z'-band due to the effect of residual fringing (see Section 2).
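A minimal sketch of this resampling-and-differencing step (hypothetical variable names; times in band 2 are assumed to be sorted):

```python
# Build a color lightcurve from two photometric lightcurves: interpolate band 2
# onto band 1's time stamps, difference the magnitudes, and combine errors in
# quadrature.
import numpy as np

def color_lightcurve(t1, m1, e1, t2, m2, e2):
    m2_on_t1 = np.interp(t1, t2, m2)   # linear interpolation in time
    e2_on_t1 = np.interp(t1, t2, e2)   # approximate propagated errors
    color = m1 - m2_on_t1              # e.g., g' - i'
    err = np.hypot(e1, e2_on_t1)       # quadrature sum of uncertainties
    return color, err
```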
The color lightcurves are generally very smooth, with no large deviations and almost all points lying well within 1.5σ of the average color across the observations. We note that the regions with increased short-term variation and the largest color deviations correspond precisely to the periods during our observations when seeing was poor and highly variable (see Section 2). Given the grazing nature of this eclipse event, we are only sensitive to very large color variations on small scales. The most stringent constraints on color variability can be derived from comparing the mid-eclipse color, when the eclipsed region is at its maximum, with the out-of-eclipse color. For all color lightcurves, the mid-eclipse color value is well within 1σ of the out-of-eclipse color, so we place 1σ upper bounds on the color variability using the median color uncertainty from the lightcurves, σ_c.

To quantify these constraints, we consider two cases. The first case seeks to constrain the difference between the average color of the eclipsed region on Patroclus, c*, and the average color c of the uneclipsed regions on both objects. The change in the measured color of the combined system between the out-of-eclipse baseline and mid-eclipse is weighted by the ratio of the maximum eclipsed area A* to the uneclipsed area A₁ + A₂ − A*, where A₁ = πβ₁γ₁ and A₂ = πβ₂γ₂ are the sky-projected areas of Patroclus and Menoetius, respectively. The maximum eclipsed area of Patroclus, as derived from our eclipse model fit in Section 3.1, was 12.4% of its sky-projected disk: A* = 1110 km². From here, the difference in color Δc₁ ≡ |c* − c| is given by
$$\Delta c_1 = \sigma_c\,\frac{A_1 + A_2 - A^*}{A^*}.\tag{3}$$
For the g' − i' color variability, for example, we have σ_{g'−i'} = 0.0092 and establish an upper limit of Δc₁ = 0.13 mag, with similar constraints for the other colors.

The second case assumes that the two components have different colors, c₁ and c₂, but are individually uniform in color. A similar derivation yields the following expression for Δc₂ ≡ |c₂ − c₁|:

$$\Delta c_2 = \sigma_c\,\frac{(A_1 + A_2 - A^*)(A_1 + A_2)}{A^*A_2}.\tag{4}$$

The constraints on Δc₂ are much looser. For g' − i', this upper limit is Δc₂ = 0.28 mag. Starting with the second constraint, we see that the small maximum shadow coverage of Patroclus prevents us from deriving particularly useful upper limits on the difference in color between the two components. For comparison, the two color sub-populations in the Trojans have mean g' − i' colors of 0.73 and 0.86 (Wong et al. 2014; Wong & Brown 2015), so a larger eclipsed area and/or more precise photometry would be needed to confidently rule out a binary comprised of components from two different sub-populations using lightcurves like these. Typical color differences between the components of KBO binary systems are also significantly smaller than our upper bound constraint (e.g., Benecchi et al. 2009). The first constraint reflects the level of large-scale surface inhomogeneities across Patroclus. This much more stringent constraint suggests that the surface of Patroclus is quite homogeneous. When comparing with other ice-rich asteroids and satellites that have well-mapped surface color distributions, we find that those larger bodies, such as Pluto, Europa, Ceres, and Triton, have significantly higher levels of color variability than Patroclus across physical scales comparable to the relative area probed by our eclipse measurements. In addition, those objects also display significant localized albedo variations across the surface, which we do not detect on Patroclus from our measurements.
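The arithmetic behind these limits can be reproduced directly (an illustration; here A* is computed from the quoted 12.4% maximum eclipsed fraction of Patroclus' disk):

```python
# Reproduce the upper limits of Eqs. (3)-(4) for the g' - i' color.
import numpy as np

A1 = np.pi * 117 * 98      # Patroclus' sky-projected area (km^2)
A2 = np.pi * 108 * 90      # Menoetius' sky-projected area (km^2)
Astar = 0.124 * A1         # maximum eclipsed area, from the quoted fraction
sigma = 0.0092             # median g' - i' color uncertainty (mag)

dc1 = sigma * (A1 + A2 - Astar) / Astar
dc2 = sigma * (A1 + A2 - Astar) * (A1 + A2) / (Astar * A2)
print(round(dc1, 2), round(dc2, 2))  # 0.13, 0.28 mag
```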
The relative homogeneity of Patroclus is consistent with theories regarding the formation and evolution of Trojans and similar objects. Whereas the larger bodies like the Galilean satellites and dwarf planets accreted sufficient material to gravitationally circularize, internally differentiate, and, in some cases, bind tenuous atmospheres, leading to secondary geological processes that continue to be active in the present day, smaller bodies like the Trojans would have formed as undifferentiated ice-rock agglomerations, similar to cometary nuclei, without sufficient gravity or internal heating to undergo further physical or compositional alterations (e.g., Wong & Brown 2016). These primitive objects would have a uniform composition throughout and develop a homogeneous irradiation mantle across their entire surfaces.
Such a formation scenario does not preclude occasional instances of surface inhomogeneities due to minor cratering events. Areas of pristine material excavated by impacts might have much higher albedo than the ∼5% typical of Trojans (e.g., Fernández et al. 2003). Likewise, these newly-exposed regions might have a distinct color from the rest of the radiation-reddened surface (Wong & Brown 2016). Both the reflectivity and color inhomogeneities would be detectable using high-precision multiband lightcurves of mutual events similar to the ones presented in this work.
SUMMARY
In this paper, we presented multiband photometric observations of the UT 2018 April 9 inferior shadowing event in the Patroclus-Menoetius system. Our short-cadence, high-signal-to-noise lightcurves provided a precise mid-eclipse timing measurement, $T_c = 2458217.80943^{+0.00057}_{-0.00050}$, which is later than the prediction from Grundy et al. (2018) by almost 20 minutes. Eclipse lightcurve modeling showed that the eclipse magnitude was slightly less than predicted, with a minimum separation distance of 72.5 ± 0.7 km between the centers of Patroclus and Menoetius' shadow at mid-eclipse. Through an analysis of the color trends derived from the photometric lightcurves, we placed a moderately tight upper bound on the level of surface variability across Patroclus, in agreement with the predictions from formation models of primitive icy bodies. Meanwhile, the grazing nature of the event prevented us from ruling out a mixed binary scenario with components from different color sub-populations. Nevertheless, our analysis demonstrated the applicability of the eclipse mapping technique to the study of binary asteroids. Future work combining the observations of Patroclus-Menoetius from the 2017-2019 mutual event season with previous measurements will greatly improve the orbital parameters of the system. New orbital fits and shape models will enable more detailed planning of the Lucy flyby encounter of the Patroclus-Menoetius system in 2033.
Figure 1. Apparent magnitude lightcurves of the Patroclus-Menoetius system prior to, during, and following the inferior eclipsing event in the Sloan g', r', i', and z' bands. The vertical axis denotes increasing brightness (decreasing magnitude). Periods of larger scatter correspond to times of poorer observing conditions and higher seeing. The overall increased scatter in the z'-band lightcurve is attributed to discernible residual fringing on the images.

Figure 2. Top panel: i'-band lightcurve of the mutual event (black points) along with the best-fit eclipse lightcurve model (blue line). The out-of-eclipse combined brightness of Patroclus and Menoetius is normalized to unity. Vertical black lines indicate the best-fit mid-eclipse time and uncertainties: $T_c = 2458217.80943^{+0.00057}_{-0.00050}$; the red line shows the mid-eclipse time predicted by Grundy et al. (2018): 2458217.7965. Bottom panel: corresponding residuals from the fit. The scatter in the residuals is 0.0048, while the median per-point flux uncertainty is 0.0030.

Figure 3. Color lightcurves derived from the photometric lightcurves in Figure 1, showing minimal variations during the shadowing event. The vertical solid and dashed lines indicate mid-eclipse and the beginning/end of the eclipse event, respectively. Almost all points in the color lightcurves are consistent with a flat line to within 1.5σ. The two notable outliers at around UT 8:00 and 10:20 in the g'−r' and r'−i' lightcurves stem from two outlier points in the r'-band lightcurve (see Figure 1).
I.W. is supported by a Heising-Simons Foundation 51 Pegasi b postdoctoral fellowship. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max Planck Institute for Astronomy, Heidelberg and the Max Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under Grant No. NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation Grant No. AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. This work made use of the JPL Solar System Dynamics high-precision ephemerides through the HORIZONS system.
Benecchi, S. D., Noll, K. S., Grundy, W. M., et al. 2009, Icar, 200, 292
Bertin, E., & Arnouts, S. 1996, A&AS, 317, 393
Buie, M. W., Olkin, C. B., Merline, W. J., et al. 2015, AJ, 149, 113
Fernández, Y. R., Sheppard, S. S., & Jewitt, D. C. 2003, AJ, 126, 1563
Flewelling, H. A., Magnier, E. A., Chambers, K. C., et al. 2016, arXiv:1612.05243
Goldreich, P., Lithwick, Y., & Sari, R. 2002, Natur, 420, 643
Gomes, R., Levison, H. F., Tsiganis, K., & Morbidelli, A. 2005, Natur, 435, 466
Grundy, W. M., Noll, K. S., Buie, M. W., & Levison, H. F. 2018, Icar, 308, 198
Hughes, G. B., & Chraibi, M. 2012, Computing and Visualization in Science, 15, 291
Marchis, F., Hestroffer, D., Descamps, P., et al. 2006, Natur, 439, 565
Merline, W. J., Close, L. M., Siegler, N., et al. 2001, IAUC, 7741, 2
Morbidelli, A., Levison, H. F., Tsiganis, K., & Gomes, R. 2005, Natur, 435, 462
Mueller, M., Marchis, F., Emery, J. P., et al. 2010, Icar, 205, 505
Nesvorný, D., Youdin, A. N., & Richardson, D. C. 2010, AJ, 140, 785
Roig, F., & Nesvorný, D. 2015, AJ, 150, 186
Roig, F., Ribeiro, A. O., & Gil-Hutton, R. 2008, A&A, 483, 911
Romanishin, W., & Tegler, S. C. 2018, AJ, 156, 19
Tsiganis, K., Gomes, R., Morbidelli, A., & Levison, H. F. 2005, Natur, 435, 459
Wong, I., & Brown, M. E. 2015, AJ, 150, 174
Wong, I., & Brown, M. E. 2016, AJ, 152, 90
Wong, I., Brown, M. E., & Emery, J. P. 2014, AJ, 148, 112
| [
"https://github.com/chraibi/EEOver"
] |
[
"Single-Phase High-Entropy Intermetallic Compounds (HEICs): Bridging High-Entropy Alloys and Ceramics",
"Single-Phase High-Entropy Intermetallic Compounds (HEICs): Bridging High-Entropy Alloys and Ceramics"
] | [
"Naixie Zhou \nDepartment of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA\n\nOerlikon Metco Inc\n92121San DiegoCAUSA\n",
"Sicong Jiang \nDepartment of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA\n",
"Timothy Huang \nDepartment of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA\n",
"Mingde Qin \nDepartment of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA\n",
"Tao Hu \nDepartment of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA\n\nState Key Laboratory of High Performance and Complex Manufacturing\nSchool of Materials Science and Engineering\nCentral South University\n410083ChangshaHunanChina\n",
"Jian Luo \nDepartment of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA\n"
] | [
"Department of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA",
"Oerlikon Metco Inc\n92121San DiegoCAUSA",
"Department of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA",
"Department of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA",
"Department of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA",
"Department of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA",
"State Key Laboratory of High Performance and Complex Manufacturing\nSchool of Materials Science and Engineering\nCentral South University\n410083ChangshaHunanChina",
"Department of Nanoengineering\nProgram of Materials Science and Engineering\nUniversity of California\nSan Diego, La Jolla92093-0448CAUSA"
] | [] | High-entropy intermetallic compounds (HEICs) were fabricated by mechanical alloying and spark plasma sintering to fill a knowledge gap between the traditional high-entropy alloys (HEAs) and emerging high-entropy ceramics (HECs). Notably, several four-or fivecomponent equimolar aluminides, such as the B2-phase (Fe1/5Co1/5Ni1/5Mn1/5Cu1/5)Al, have been made into single-phase HEICs for the first time. Thermodynamic modeling and a reversible, temperature-dependent, phase-stability experiment suggest that such B2-phase HEICs are entropy-stabilized phases. The structure of these HEICs resembles that of HECs with highentropy mixing of four or five elements of nearly equal fractions in one and only one sublattice, but with significant (~10%) anti-site defects (differing from typical HECs). A new phase stability rule for forming single B2-phase HEICs is proposed. Five additional HEICs of predominantly D022 phases have also been made. This study broadens the families of equimolar, single-phase, high-entropy materials that have been successfully fabricated. | 10.1016/j.scib.2019.05.007 | [
"https://arxiv.org/pdf/1902.10420v1.pdf"
] | 118,907,348 | 1902.10420 | c35010a603b6792576c8a989fc332383bbc2d344 |
Single-Phase High-Entropy Intermetallic Compounds (HEICs): Bridging High-Entropy Alloys and Ceramics
Naixie Zhou
Department of Nanoengineering
Program of Materials Science and Engineering
University of California
San Diego, La Jolla92093-0448CAUSA
Oerlikon Metco Inc
92121San DiegoCAUSA
Sicong Jiang
Department of Nanoengineering
Program of Materials Science and Engineering
University of California
San Diego, La Jolla92093-0448CAUSA
Timothy Huang
Department of Nanoengineering
Program of Materials Science and Engineering
University of California
San Diego, La Jolla92093-0448CAUSA
Mingde Qin
Department of Nanoengineering
Program of Materials Science and Engineering
University of California
San Diego, La Jolla92093-0448CAUSA
Tao Hu
Department of Nanoengineering
Program of Materials Science and Engineering
University of California
San Diego, La Jolla92093-0448CAUSA
State Key Laboratory of High Performance and Complex Manufacturing
School of Materials Science and Engineering
Central South University
410083ChangshaHunanChina
Jian Luo
Department of Nanoengineering
Program of Materials Science and Engineering
University of California
San Diego, La Jolla92093-0448CAUSA
Single-Phase High-Entropy Intermetallic Compounds (HEICs): Bridging High-Entropy Alloys and Ceramics
* Corresponding authors. E-mail addresses: jluo@alum.mit.edu (J.L.) and taohu1982@gmail.com (T.H.)
Keywords: high-entropy intermetallic compound; aluminide; phase stability; high-entropy alloy; high-entropy ceramic
High-entropy intermetallic compounds (HEICs) were fabricated by mechanical alloying and spark plasma sintering to fill a knowledge gap between the traditional high-entropy alloys (HEAs) and emerging high-entropy ceramics (HECs). Notably, several four-or fivecomponent equimolar aluminides, such as the B2-phase (Fe1/5Co1/5Ni1/5Mn1/5Cu1/5)Al, have been made into single-phase HEICs for the first time. Thermodynamic modeling and a reversible, temperature-dependent, phase-stability experiment suggest that such B2-phase HEICs are entropy-stabilized phases. The structure of these HEICs resembles that of HECs with highentropy mixing of four or five elements of nearly equal fractions in one and only one sublattice, but with significant (~10%) anti-site defects (differing from typical HECs). A new phase stability rule for forming single B2-phase HEICs is proposed. Five additional HEICs of predominantly D022 phases have also been made. This study broadens the families of equimolar, single-phase, high-entropy materials that have been successfully fabricated.
Introduction
High-entropy alloys (HEAs), consisting of at least four principal metallic elements with near-equimolar fractions and also known as "multi-principal element alloys (MPEAs)" or "complex concentrated alloys (CCAs)," have attracted significant research interest in the last 15 years [1-8].
Compared with traditional alloys with one primary element and several minor alloying dopants, HEAs explore new compositional spaces where no single component is dominant. Examples of HEAs include the famous FCC "Cantor alloy" CoCrFeMnNi [8] and refractory BCC HEAs (e.g., TaMoNbVW) [3,4]. HEAs can often display unique mechanical properties, e.g. high strength and excellent cryogenic or high-temperature performance [1-7].
On the one hand, prior studies of metallic HEAs focus on simple solid-solution phases in which the multiple principal elements randomly occupy the same type of lattice site of the FCC, BCC, or HCP structure. On the other hand, an increasing number of new high-entropy ceramics (HECs) have been made in the last four years [9-19], where multiple principal metal cations occupy one sublattice, with another ordered anion sublattice with little or no mixing. Examples of single-phase HECs that have been successfully fabricated in the last few years include rocksalt oxides (e.g., Mg0.2Co0.2Ni0.2Cu0.2Zn0.2O) [9,20], metal diborides (e.g., (Hf0.2Zr0.2Ta0.2Nb0.2Ti0.2)B2) [10,21], fluorite oxides (e.g., (Hf0.2Zr0.2Ce0.2Y0.2Gd0.2)O2−δ) [12], perovskites (e.g., (Ba0.5Sr0.5)(Zr0.2Sn0.2Ti0.2Hf0.2Nb0.2)O3) [11,13,14], carbides (e.g., (Hf0.2Zr0.2Ta0.2Nb0.2Ti0.2)C) [15-18], and silicides (e.g., (Mo0.2Nb0.2Ta0.2Ti0.2W0.2)Si2) [19]. These HECs also possess some unique or superior properties, such as low thermal conductivities [16,20] and increased hardness [15-18,21].
This study further fills a knowledge gap between the traditional (metallic) HEAs and the emerging (mostly ionic) HECs by fabricating a new class of single-phase high-entropy intermetallic compounds (HEICs), exemplified by equimolar high-entropy aluminides such as the B2-phase (Fe1/5Co1/5Ni1/5Mn1/5Cu1/5)Al. These new HEICs are mostly metallic (albeit with some mixed ionic-metallic bonds due to the different electronegativities) but have crystal structures resembling (mostly) ionic HECs: i.e., random mixing of four or five elements of equal molar fractions on one and only one sublattice, with another ordered sublattice with little mixing (albeit with ~10% anti-site defects, substantially more than in typical HECs). Thus, this discovery expands the families of single-phase, equimolar, high-entropy materials that have been successfully fabricated to date, and it bridges HEAs and HECs.
While the fabrication of single-phase HEICs has not been reported before, we recognize that multicomponent intermetallic compounds have been observed widely in complex alloys, including HEAs, as secondary phases [6,22-26]. Notably, Lu et al. reported a eutectic HEA, AlCoCrFeNi2.1, and attributed the two phases to FCC and (ordered BCC-based) B2 based on X-ray diffraction (XRD) [22]. A follow-up study on the same AlCoCrFeNi2.1 HEA by Nagase et al., however, suggested the formation of the ordered (FCC-based) L12 phase in the dendritic region, along with disordered FCC and BCC in the eutectic region, by using more sensitive electron diffraction [23]. Furthermore, four eutectic HEAs (CoCrFeNiM0.45, where M = Nb, Ta, Zr, or Hf) with FCC and Laves phases were designed and fabricated via casting [24]. These Laves phases are enriched in two or three elements (thereby being far away from equimolar compositions) [24]. CALPHAD modeling also suggested the existence of FCC-B2 two-phase regions in several HEAs [27]. Zhao et al. recently investigated three AlxCo0.2Cr0.2Ni0.2Ti0.4-x multi-phase HEAs with substantial amounts of multicomponent B2 phases that presumably have three principal elements (Co, Cr, and Ni) on one sublattice and another two (Al and Ti) primarily on the other sublattice [26]; this represents perhaps the one reported case that is closest to (but not yet truly) HEICs (and not single-phase). Moreover, none of these prior studies aimed at fabricating and investigating single-phase HEICs. Interestingly, two recent studies by Yang et al. [6] and He et al. [25] used multicomponent intermetallic nanoparticles (MCINPs) of the L12 [6] and D022 [25] structures, in the form of nanoscale precipitates in the continuous matrix of complex alloys [6] or HEAs [25], to strengthen the FCC-based alloys. Here, these MCINPs, e.g., (Ni35Co17Fe8)(Al7Ti11Co2) L12 (resembling Ni3Ti or Ni3Al) [6] and Ni65.2Nb24.1Co7.7Cr1.4Fe1.3 D022 (close to Ni3Nb) phases, are enriched in only one, or at most two, principal element(s) in each sublattice (so they are not multi-principal-element high-entropy phases), and they are not the primary phases. Thus, the current study makes the first research effort to fabricate and subsequently characterize and investigate single-phase HEICs.
To seek the existence of single-phase HEICs with random mixing of four or five elements of nominally equimolar fractions on one and only one sublattice (resembling HECs), we selected B2-phase high-entropy aluminides as our primary model systems. In addition, we further explored D022-phase high-entropy aluminides as a second structure to extend the generality of this study. Aluminides were chosen because they are important structural materials owing to their light weight, excellent thermal stability, outstanding oxidation resistance, and high strength [28]. The B2 phase is an ordered BCC-based structure, i.e. the CsCl-type structure, albeit with significant anti-site defects. Fig. 1(a) shows the schematic structure of a B2-phase high-entropy aluminide, in which four or five elements of equimolar fractions are randomly mixed on one and only one sublattice while Al atoms primarily occupy the other sublattice. In this study, we fabricated seven equimolar B2-phase high-entropy aluminides, including three with a single ordered B2 phase (and the others with a predominantly single B2 phase). We also showed that these B2-phase HEICs are likely entropy-stabilized phases, and we further proposed a new selection criterion for forming single-phase B2 high-entropy aluminides (as a new phase stability rule). Furthermore, we successfully fabricated five additional high-entropy aluminides of primarily D022 phases (with the structure illustrated in Fig. 1(b)) to extend the generality of this study and our discovery.
Materials and Methods
To prepare the B2-phase high-entropy aluminides, we designed seven equimolar aluminide compositions (as listed in Table 1). Each non-Al element has an equal molar fraction, and the total molar fraction of these elements is 50% to maintain the aluminide stoichiometry. The elements were selected based on the phase stability of the binary aluminides as well as their mutual solubility.
All selected specimens contain Fe, Co, and Ni because AlFe, AlCo, and AlNi are stable binary aluminides with pronounced mutual miscibility and close lattice constants [29,30]. Other transition metal elements, e.g. Mn, Cu, and Cr, were also added into the matrix. In addition, 50% of the Al atoms were replaced with Ti in HEALs #6 and #7 to further perturb the B2 structure and extend the work beyond pure aluminides.
High-purity powders of Al (99.9%), Ti (99.5%), Cr (99.95%), Fe (99.9%), Co (99.8%), Ni (99.9%), Cu (99.9%), and Mn (99.95%) purchased from Alfa Aesar were utilized as starting materials. Appropriate amounts of each powder according to the stoichiometry were used to fabricate the specimens. The seven compositions for forming B2-phase HEICs are listed in Table 1 and referred to as specimens #1 to #7 in the text. The raw powders were mechanically alloyed via high-energy ball milling (HEBM) using a SPEX 8000D miller (SpexCertPrep, NJ, USA) for 3 hours. To prevent overheating, the HEBM was stopped every 60 minutes to allow cooling for 5 minutes. The powders were then compacted into disks of 20-mm diameter and consolidated by spark plasma sintering (SPS, Thermal Technologies, CA, USA).
All the consolidated bulk samples were further homogenized at 1100 °C for 10 hours. The phase composition and lattice parameters were determined by XRD using a Rigaku diffractometer with Cu Kα radiation. The compositions of the specimens were characterized by scanning electron microscopy equipped with energy-dispersive X-ray spectroscopy (EDXS). Specifically, EDXS mapping was employed to image the size and distribution of the fine precipitates.
To test the reversible temperature-dependent phase stability of the B2 phase, specimen #2 (Fe1/4Co1/4Ni1/4Mn1/4)Al was chosen and annealed at 1000 °C, 1100 °C, and 1300 °C for 10 hours each, in sequence. Subsequently, the same specimen (after being equilibrated at the higher temperature of 1300 °C) was annealed again at the lower temperature of 1000 °C isothermally for 10 hours. After each annealing step, the specimen was quenched and an XRD measurement was performed to determine the reversible temperature dependence of the phase stability.
To extend the generality of this study, five additional compositions (with 75% Al and equal molar amounts of four other transition metal elements) were selected to fabricate D022-phase HEICs via the same mechanical alloying and SPS procedure, followed by annealing at 1300 °C.
Results and Discussion
The Formation of High-Entropy B2 Phases
The phase formation of the seven specimens equilibrated at 1100 °C was determined from the XRD patterns shown in Fig. 2(a). For comparison, the diffraction peaks for the pure BCC structure (Fe as an example) and the B2-structured aluminide (FeAl as an example) were also simulated. Note that the B2 phase (a BCC-based structure with CsCl ordering) has the BCC-like diffraction peaks (110), (200), and (211), which are indicated by dots in Fig. 2. However, the formation of the ordered B2 phase is characterized by the "superlattice" diffraction peaks (100), (111), and (210), indicated by stars, which are "forbidden reflections" in the disordered BCC structure due to its symmetry. The XRD patterns of specimens #1 to #5 match well with the B2 phase (with virtually no detectable secondary phases), indicating the successful synthesis of B2-phase high-entropy aluminides (as one type of HEICs).
For the two Ti-containing specimens, #6 (Fe1/4Co1/4Ni1/4Cu1/4)(Al1/2Ti1/2) and #7 (Fe1/4Co1/4Ni1/4Mn1/4)(Al1/2Ti1/2), the XRD patterns showed the formation of very small amounts of secondary precipitate phases.
Derived from the XRD patterns, the lattice parameter obtained for the B2 structure of specimens #1 to #5 is close to 2.89 Å (within 0.7% variation), as shown in Table 1. The measured lattice parameters for specimens #6 (Fe1/4Co1/4Ni1/4Cu1/4)(Al1/2Ti1/2) and #7 (Fe1/4Co1/4Ni1/4Mn1/4)(Al1/2Ti1/2), in which 50% of the Al atoms were replaced by Ti atoms (with a ~15% larger atomic radius), increase to around 2.95 Å.
Anti-Site Defects in HEALs
Anti-site defects are commonly present in intermetallic compounds at finite temperatures due to an entropic effect [31]. In a conventional B2 intermetallic compound AB, four possible types of point defects (anti-site A and B atoms, as well as two types of vacancies on the two sublattices) can exist [31], and there are even more compositional variables for the anti-site defects in HEICs.
Using the single-phase specimen #3 (Co1/4Fe1/4Ni1/4Cu1/4)Al as an example, we simulated a series of XRD patterns to estimate the anti-site occupation, assuming, for simplicity, equimolar transition metal anti-site defects in the Al sublattice and ignoring vacancies. The modeled relative intensity of the (100) superlattice peak (normalized to the strongest (110) peak) vs. the anti-site occupation fraction is plotted in Fig. 2(b). The anti-site defects were then estimated to be ~10% by comparing the measured relative intensities with the simulation results. The other specimens (#1, #2, #4, and #5) are estimated to have similar levels of anti-site defects at 1100 °C.
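The trend in Fig. 2(b) can be rationalized with a bare structure-factor estimate. For the B2 (CsCl-type) structure, F(hkl) = f1 + f2·exp[iπ(h+k+l)], so the (100) superlattice intensity scales as |f1 − f2|² while the fundamental (110) scales as |f1 + f2|². The sketch below is an illustrative approximation only, not the VESTA calculation used in the text: it treats the anti-site fraction x as swapping average scattering power between the two sublattices, uses atomic numbers as a crude stand-in for the form factors (27.5 is the mean Z of Fe, Co, Ni, and Cu), and neglects Lorentz-polarization, multiplicity, and Debye-Waller corrections:

```python
# Crude structure-factor trend for the (100)/(110) intensity ratio in a B2 aluminide.
# Assumptions (not from the paper): atomic number Z approximates the form factor f,
# anti-site exchange is symmetric between the two sublattices, and angle-dependent
# corrections (Lorentz-polarization, multiplicity, Debye-Waller) are neglected.

def intensity_ratio_100_over_110(x_antisite, f_tm=27.5, f_al=13.0):
    """Normalized (100)/(110) intensity vs. anti-site fraction x."""
    f_site1 = (1 - x_antisite) * f_tm + x_antisite * f_al  # TM sublattice average f
    f_site2 = (1 - x_antisite) * f_al + x_antisite * f_tm  # Al sublattice average f
    i_100 = (f_site1 - f_site2) ** 2  # superlattice reflection, h+k+l odd
    i_110 = (f_site1 + f_site2) ** 2  # fundamental reflection, h+k+l even
    return i_100 / i_110

for x in (0.0, 0.05, 0.10, 0.20):
    print(f"x = {x:.2f}: I(100)/I(110) ~ {intensity_ratio_100_over_110(x):.3f}")
```

Since f_site1 − f_site2 = (1 − 2x)(f_TM − f_Al), the superlattice intensity falls off as (1 − 2x)², which is why a ~10% anti-site fraction already visibly weakens the (100) peak.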
Compositional Homogeneity and Single-Phase HEICs
To examine the compositional homogeneity, SEM and EDXS were performed. The EDXS elemental mapping showed that the specimens #3 (Co1/4Fe1/4Ni1/4Cu1/4)Al and #5 (Co1/5Fe1/5Ni1/5Mn1/5Cu1/5)Al are compositionally homogeneous without any detectable secondary phase.
However, Cr-enriched regions in specimens #1 (Co1/4Fe1/4Ni1/4Cr1/4)Al and #4 (Co1/5Fe1/5Ni1/5Mn1/5Cr1/5)Al were revealed by EDXS mapping, even though both appear to be single phases in XRD. EDXS mapping also suggests that specimen #2 (Co1/4Fe1/4Ni1/4Mn1/4)Al has a very small amount of Mn-enriched precipitates. In addition, EDXS showed that specimen #6 (Co1/4Fe1/4Ni1/4Cu1/4)(Al1/2Ti1/2) had Cu-enriched precipitates, and specimen #7 (Fe1/4Co1/4Ni1/4Mn1/4)(Al1/2Ti1/2) contained a secondary phase enriched with Ti, Fe, and Mn.
In summary, the combination of XRD and EDXS compositional mapping (Fig. 2) showed that specimens #3 (Co1/4Fe1/4Ni1/4Cu1/4)Al and #5 (Co1/5Fe1/5Ni1/5Mn1/5Cu1/5)Al indeed exhibit single, high-entropy B2 phases that are compositionally homogeneous. The other five specimens consist of primarily single high-entropy B2 phases, while small amounts of secondary phases or inhomogeneous regions exist; specifically, specimens #1, #2, and #4 appear to be single B2-phase HEICs in the XRD patterns, while EDXS mapping revealed compositional inhomogeneity.
CALPHAD Modeling
We further conducted CALPHAD (calculation of phase diagrams) modeling to assess the predicted phase fraction vs. temperature curves from the ThermoCalc TCHEA database for these seven specimens (Fig. 3). While we recognize that none of the current databases have been validated for HEICs (as this is the first experimental study to synthesize single B2-phase high-entropy aluminides), some useful trends can still be obtained, albeit not completely accurate ones, from existing databases such as TCHEA based on extrapolation from the binary and ternary systems.
The CALPHAD results showed that the primary phase at high temperatures (around 1100 °C) is the ordered B2 phase for all samples. Among them, the B2 phase fractions of specimens #2, #3, and #5 are high (nearly 100%) in the temperature range of 1000 to 1400 K, which is consistent with the experiments. The CALPHAD results also indicated that both specimens #1 and #4 should contain a Cr-rich secondary BCC phase (with ~55% Cr, ~15% Fe, and ~25% Al) and that a Fe-Mn-Ti-enriched Laves phase should precipitate in specimen #7, all of which agree with the EDXS elemental maps (Fig. 2(c)).
However, the CALPHAD modeling failed to predict the precipitation of the secondary phase in specimen #6 that was observed in the experiment. Moreover, CALPHAD predicts a single B2 phase from 700 K to the solidus temperature of ~1600 K for specimen #2, but we observed reversible secondary-phase formation at 1100 °C or lower (Fig. 4). It is not surprising that the CALPHAD modeling is not completely accurate, since this study represents the first experimental effort to synthesize single B2-phase high-entropy aluminides; thus, all current databases are likely extrapolations from the partial data of binary and ternary aluminides without direct confirmation and calibration of the high-entropy B2-phase stability.
Entropy-Stabilized Phases
Although the CALPHAD predictions (presumably extrapolated from the data of binary and ternary aluminides) are not completely accurate, they are able to show an important (and valid) general trend (Fig. 3): i.e., the equimolar, single high-entropy B2 phases are generally stable at high temperatures (just below the solidus temperatures), while some secondary phases precipitate at lower temperatures. This general trend suggests that these B2-phase HEICs are entropy-stabilized.
To more critically assess this hypothesis of entropy stabilization, specimen #2 (Co1/4Fe1/4Ni1/4Mn1/4)Al was selected to study the reversible temperature stability of the high-entropy B2 phase. The XRD patterns in Fig. 4 showed that specimen #2 exhibited two phases, the primary B2 phase and a secondary phase indicated by solid triangles, when equilibrated at 1000 °C. The amount of the secondary phase was reduced after further annealing at 1100 °C, and the secondary phase vanished (dissolved completely into the B2 matrix) at 1300 °C to form a single high-entropy B2 phase. Interestingly, when the same sample was isothermally annealed again at 1000 °C after forming the single high-entropy B2 phase at 1300 °C, the secondary phase emerged again. The EDXS maps shown in Fig. 4(c) further confirmed the precipitation at 1000 °C and the dissolution of the precipitates at 1300 °C. Specifically, the precipitates are a Mn-rich phase with a composition close to 50% Mn, 25% Al, 15% Fe, 5% Co, and 5% Ni.
A similar reversible precipitation of a CuO-enriched secondary phase at low temperatures in (Mg0.2Co0.2Ni0.2Cu0.2Zn0.2)O was reported previously and was considered the main evidence for the formation of that entropy-stabilized oxide [9]. Thus, Fig. 4 also implies that the single-phase equimolar HEIC observed in this study is likely entropy-stabilized at high temperatures.
It is also interesting to note that, while specimen #5 (Co1/5Fe1/5Ni1/5Mn1/5Cu1/5)Al exhibits a single high-entropy B2 phase at 1100 °C, six out of the ten equimolar ternary aluminide subsystems, i.e., (Co1/2Ni1/2)Al, (Cu1/2Co1/2)Al, (Mn1/2Cu1/2)Al, (Fe1/2Cu1/2)Al, (Ni1/2Mn1/2)Al, and (Fe1/2Mn1/2)Al, do not form single B2 phases at the same temperature (based on CALPHAD modeling, which should be more accurate for ternary subsystems). This also suggests a high-entropy effect that stabilizes equimolar solid solutions in higher-component-dimensional systems.
Criterion for Forming Single High-Entropy B2 Phases
Two criteria are commonly used to assess the formation of single-phase HEAs: the atomic size polydispersity $\delta = 100\sqrt{\sum_{i=1}^{n} c_i (1 - r_i/\bar{r})^2}$ and the mixing enthalpy $\Delta H_{\mathrm{mix}} = \sum_{i=1, i \neq j}^{n} \Omega_{ij} c_i c_j$, where the average atomic radius is $\bar{r} = \sum_{i=1}^{n} c_i r_i$, $c_i$ and $r_i$ are the atomic percentage and atomic radius of the $i$-th element, respectively, $n$ is the number of alloying elements, and $\Omega_{ij}$ is the mixing enthalpy between elements $i$ and $j$ in the liquid phase [32]. It is proposed that the formation of a single-phase HEA solid solution is favored when $\delta \leq 6$ and $-15~\mathrm{kJ/mol} \leq \Delta H_{\mathrm{mix}} \leq 5~\mathrm{kJ/mol}$. An additional indicator, $\mathrm{VEC} = \sum_{i=1}^{n} c_i (\mathrm{VEC})_i$, where $(\mathrm{VEC})_i$ is the valence electron concentration (VEC) of the $i$-th element, was introduced to predict whether an HEA forms a BCC (VEC < 6.5) or FCC (VEC > 6.5) structure [33]. The corresponding values of $\delta$, $\Delta H_{\mathrm{mix}}$, and VEC for the seven specimens were calculated and are listed in Table 1. Here, the small VEC values (2.6-3.5) suggest that these compositions would favor a BCC-like structure, but the large $\delta$ values (> 7 for all cases) indicate that they should not form HEAs of the simple BCC structure. These predictions are consistent with the experimental observations that they form BCC-based, ordered B2 phases.

The atomic size polydispersity $\delta$ calculated with 50% Al should be used to judge whether an HEA solid solution on one lattice (simple FCC or BCC) can form. The ordered B2 HEICs are more like HECs, in which one element (Al in this case, vs. anions in HECs) primarily occupies one sublattice while the other four or five elements form a solid solution on the other sublattice. In HECs, the atomic size difference of the cations and the cation-anion bonding lengths are considered key parameters for predicting the formation of single HEC phases [10,11]. Here, we proposed to use the atomic size polydispersity of the non-Al elements, $\delta^{*} = 100\sqrt{\sum_{i \neq \mathrm{Al}} c_i (1 - r_i/\bar{r}_{\mathrm{non\text{-}Al}})^2}$, as a refined parameter for the formation of single high-entropy B2 phases. The calculated results are listed in Table 1. It is found that specimens #3 and #5 (single HEIC phases at 1100 °C), as well as specimen #2 (an almost single high-entropy B2 phase at 1100 °C and a true single high-entropy B2 phase at the higher temperature of 1300 °C), with $\delta^{*} < 1$ form single high-entropy B2 phases, thereby suggesting that $\delta^{*}$ could be a good indicator for forming single-phase HEICs.

A careful examination further shows that the calculated $\Delta H_{\mathrm{mix}}$ values are in the range of $-12$ to $-29$ kJ/mol. It is interesting to note that only specimens #3 and #5 have $\Delta H_{\mathrm{mix}} > -15$ kJ/mol, and these are the two that form single high-entropy B2 phases at 1100 °C. For ordered B2-phase HEICs, it is perhaps more accurate to use the sum of the average weighted bonding energies between the non-Al and Al atoms to define a modified $\Delta H_{\mathrm{mix}}^{*} = \sum_{i \neq \mathrm{Al}} \Omega_{i\mathrm{-Al}} c_i c_{\mathrm{Al}}$. The calculated results are listed in Table 1. Again, only specimens #3 and #5, the two that form single high-entropy B2 phases at 1100 °C, have the refined $\Delta H_{\mathrm{mix}}^{*} > -15$ kJ/mol and $\delta^{*} < 1$.
It is interesting to note that specimen #2 has the lowest $\delta^{*}$ of 0.66, but a more negative $\Delta H_{\mathrm{mix}}^{*}$ of $-17.75$ kJ/mol. Experimentally, only a trace amount of secondary phase was observed in specimen #2 at 1100 °C (Fig. 2); moreover, it formed a single high-entropy B2 phase at the higher temperature of 1300 °C, but a secondary phase reversibly precipitated out at the lower temperature of 1000 °C. Presumably, the negative $\Delta H_{\mathrm{mix}}^{*}$ effect was offset by the greater entropy stabilization at higher temperature, which further supports that this single high-entropy B2 phase is entropy-stabilized at high temperature.
Overall, we conclude that a small $\delta^{*}$ (< 1) appears to be a good indicator for promoting the formation of a single high-entropy B2 phase (and perhaps HEICs in general), while a less negative $\Delta H_{\mathrm{mix}}^{*}$ may help to stabilize HEICs down to lower temperatures. More critical testing and possible refinement of the proposed criterion should be carried out in future studies.
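Because $\delta$, $\Delta H_{\mathrm{mix}}$, VEC, $\delta^{*}$, and $\Delta H_{\mathrm{mix}}^{*}$ are all simple concentration-weighted sums, they are easy to script. The sketch below is illustrative only: the atomic radii shown are rough placeholder values (not the tabulated data behind Table 1), the binary enthalpies $\Omega_{ij}$ and per-element VECs are left as user-supplied dictionaries, and the renormalization of the non-Al concentrations in $\delta^{*}$ is our assumption about the convention used:

```python
import math

def delta(c, r):
    """Atomic size polydispersity in %: 100 * sqrt(sum_i c_i * (1 - r_i/r_bar)^2)."""
    r_bar = sum(c[e] * r[e] for e in c)
    return 100.0 * math.sqrt(sum(c[e] * (1.0 - r[e] / r_bar) ** 2 for e in c))

def h_mix(c, omega):
    """Mixing enthalpy (kJ/mol): sum over unordered pairs i != j of Omega_ij c_i c_j."""
    els = list(c)
    return sum(omega[frozenset((a, b))] * c[a] * c[b]
               for i, a in enumerate(els) for b in els[i + 1:])

def vec(c, vec_of):
    """Valence electron concentration: sum_i c_i * (VEC)_i."""
    return sum(c[e] * vec_of[e] for e in c)

def delta_star(c, r):
    """delta restricted to the non-Al sublattice (non-Al fractions renormalized)."""
    non_al = {e: x for e, x in c.items() if e != "Al"}
    norm = sum(non_al.values())
    return delta({e: x / norm for e, x in non_al.items()}, r)

def h_mix_star(c, omega):
    """Refined enthalpy: only the non-Al/Al bond terms, Omega_(i-Al) c_i c_Al."""
    return sum(omega[frozenset((e, "Al"))] * c[e] * c["Al"] for e in c if e != "Al")

# Illustrative placeholder inputs only (rough metallic radii in angstroms);
# a real evaluation would use the tabulated radii and Miedema-type Omega_ij values.
c3 = {"Fe": 0.125, "Co": 0.125, "Ni": 0.125, "Cu": 0.125, "Al": 0.5}  # specimen #3
r = {"Fe": 1.26, "Co": 1.25, "Ni": 1.24, "Cu": 1.28, "Al": 1.43}
print(f"delta  ~ {delta(c3, r):.2f} %")
print(f"delta* ~ {delta_star(c3, r):.2f} %")
```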
High-Entropy D022 Phases
To extend the generality of this study and our discovery of single-phase HEICs, we further examined the possible formation of D022-phase HEICs (see Fig. 1(b) for the schematic structure) fabricated via the same route. We selected candidate compositions based on the following principles (similar to those used for selecting possible B2-phase HEICs): (1) at least three of the transition metal elements are able to form equilibrium D022-phase binary aluminides, and (2) at least two of the D022-phase binary aluminides have 100% mutual solubility according to the ternary phase diagrams. Fig. 5 shows the XRD patterns of the five specimens of selected compositions: i.e., (Ti1/4Nb1/4V1/4Zr1/4)Al3, (Ti1/4Nb1/4Ta1/4Cr1/4)Al3, (Ti1/4Nb1/4Ta1/4Mn1/4)Al3, (Ti1/4Nb1/4Ta1/4Mo1/4)Al3, and (Ti1/4Nb1/4Ta1/4Zr1/4)Al3. All five compositions exhibited primary D022 phases after annealing at 1300 °C for 10 hours. Only very small amounts of secondary phases were detected by the XRD measurements. These findings further demonstrate that equimolar HEICs of mostly single high-entropy phases (albeit with small amounts of secondary phases) beyond the high-entropy B2 phases can be made.
Conclusions
In this study, we successfully fabricated, for the first time to our knowledge, several single-phase HEICs with an ordered B2 structure, in which four or five transition metal elements, e.g. Fe, Co, Ni, Mn, and Cu, of equimolar fractions occupy one sublattice, with Al on the other sublattice (albeit with ~10% anti-site defects). Specifically, (Co1/4Fe1/4Ni1/4Mn1/4)Al, (Co1/4Fe1/4Ni1/4Cu1/4)Al, and (Co1/5Fe1/5Ni1/5Mn1/5Cu1/5)Al can be made into single high-entropy B2 phases, while four other compositions exhibit predominantly single high-entropy B2 phases with small amounts of secondary phases. These high-entropy B2 phases are likely entropy-stabilized phases based on CALPHAD modeling and a model experiment. A new criterion for forming single high-entropy B2 phases is proposed as a new phase stability rule. Five additional HEICs of primarily D022 phases have been made to broaden the discovery.

The discovery of single-phase HEICs bridges the traditional metallic HEAs and the emerging non-metallic HECs; their structures are more like those of ionic HECs, with high-entropy mixing only on one sublattice. However, comparison of the experimental and calculated XRD patterns suggests the existence of ~10% anti-site defects in B2-phase HEICs (differing from most HECs, which have few anti-site defects). The single-phase HEICs reported in this study represent a new class of high-entropy materials, which opens a new platform to explore unique mechanical, thermal, and other functional properties.
Figure Legends

Fig. 1. Schematic illustrations of the high-entropy aluminides with the B2 and D022 structures.

Fig. 2. (a) XRD patterns for seven HEIC specimens that exhibit primarily or completely single high-entropy B2 phases after annealing at 1100 °C for 10 hours. Simulated XRD peaks for the BCC (using Fe as an example), perfectly-ordered B2 (using binary FeAl as an example), and B2 phase with 10% anti-site defects are also included. Peaks that can belong to either the BCC or B2 structure are indexed by dots; superlattice peaks belonging exclusively to the B2 structure are indexed by stars. Minor peaks are evident in several patterns, e.g., for specimens #6 and #7, which indicate the presence of secondary phases. (b) Simulated intensity of the (001) peak (normalized to the strongest (110) peak) in a B2-structured aluminide as a function of the fraction of anti-site Al defects. The calculation was performed using the VESTA software. (c) SEM micrographs and corresponding EDXS compositional maps for specimens after annealing at 1100 °C for 10 hours. HEIC specimens #3 and #5 appear to be completely homogeneous, while specimen #2 was almost homogeneous.

Fig. 3. Phase evolution (volume fraction of various phases vs. equilibrium temperature) predicted by ThermoCalc using the TCHEA database. CALPHAD can be used to forecast some general trends (to a first order of approximation), but the predictions are not all accurate when compared with Fig. 2 and Fig. 4; this is presumably because the database does not include all interactions in aluminides.

Fig. 4. (a) XRD patterns of the same #2 (Fe1/4Co1/4Ni1/4Mn1/4)Al specimen annealed at 1000 °C, 1100 °C, 1300 °C, and 1000 °C sequentially (each for 10 hours). (b) The enlarged XRD peaks showing the evolution of the secondary phase. (c) EDXS mapping of the same specimen annealed at 1300 °C and 1100 °C, respectively. The secondary phase that formed at 1000 °C and 1100 °C vanished after annealing at the high temperature of 1300 °C, but re-precipitated after subsequent reannealing at 1000 °C. This model experiment implies that the single B2 solid-solution phase is likely entropy-stabilized at high temperatures (and the CALPHAD prediction from the TCHEA database shown in Fig. 3(b) is not all accurate).

Fig. 5. XRD patterns of five HEIC specimens with primarily the high-entropy D022 phase after annealing at 1300 °C for 10 hours. The D022 phase is indexed; the unindexed peaks with low intensity correspond to secondary phases. The high-entropy D022 phase is dominant in all five cases (albeit with some minor secondary phases).
Table 1. Summary of the seven specimens that exhibit primarily or completely single high-entropy B2 phases after equilibration at 1100 °C for 10 hours.

#    Composition                          Single B2 phase at 1100 °C?   Lattice constant (Å)   δ      ΔHmix (kJ/mol)   VEC    δ*     ΔHmix* (kJ/mol)
#1   (Fe1/4Co1/4Ni1/4Cr1/4)Al             No                            2.887                  7.64   -16.44           2.83   1.30   -15.50
#2   (Fe1/4Co1/4Ni1/4Mn1/4)Al             Almost                        2.895                  7.78   -18.75           2.86   0.66   -17.75
#3   (Fe1/4Co1/4Ni1/4Cu1/4)Al             Yes                           2.918                  7.72   -12.00           3.34   0.97   -13.25
#4   (Fe1/5Co1/5Ni1/5Mn1/5Cr1/5)Al        No                            2.909                  7.57   -17.24           2.69   1.18   -16.20
#5   (Fe1/5Co1/5Ni1/5Mn1/5Cu1/5)Al        Yes                           2.916                  7.64   -13.96           3.16   0.92   -14.40
#6   (Fe1/4Co1/4Ni1/4Cu1/4)(Al1/2Ti1/2)   No                            2.948                  7.12   -24.00           3.12   /      /
#7   (Fe1/4Co1/4Ni1/4Mn1/4)(Al1/2Ti1/2)   No                            2.945                  7.20   -28.38           2.65   /      /
Acknowledgement

This work is partially supported by a Vannevar Bush Faculty Fellowship sponsored by the

Disclosure of potential conflict of interest: The authors declare no conflict of interest.
[1] Tsai M-H, Yeh J-W. High-entropy alloys: A critical review. Materials Research Letters, 2014, 2: 107-123
[2] Zhang Y, Zuo TT, Tang Z, et al. Microstructures and properties of high-entropy alloys. Progress in Materials Science, 2014, 61: 1-93
[3] Chou H-P, Chang Y-S, Chen S-K, et al. Microstructure, thermophysical and electrical properties in AlxCoCrFeNi (0≤x≤2) high-entropy alloys. Materials Science and Engineering: B, 2009, 163: 184-189
[4] Senkov ON, Wilks GB, Scott JM, et al. Mechanical properties of Nb25Mo25Ta25W25 and V20Nb20Mo20Ta20W20 refractory high entropy alloys. Intermetallics, 2011, 19: 698-706
[5] Gludovatz B, Hohenwarter A, Catoor D, et al. A fracture-resistant high-entropy alloy for cryogenic applications. Science, 2014, 345: 1153-1158
[6] Yang T, Zhao YL, Tong Y, et al. Multicomponent intermetallic nanoparticles and superb mechanical behaviors of complex alloys. Science, 2018, 362: 933-937
[7] Li Z, Pradeep KG, Deng Y, et al. Metastable high-entropy dual-phase alloys overcome the strength-ductility trade-off. Nature, 2016, 534: 227-230
[8] Cantor B, Chang I, Knight P, et al. Microstructural development in equiatomic multicomponent alloys. Materials Science and Engineering: A, 2004, 375: 213-218
[9] Rost CM, Sachet E, Borman T, et al. Entropy-stabilized oxides. Nature Communications, 2015, 6: 8485
[10] Gild J, Zhang Y, Harrington T, et al. High-entropy metal diborides: A new class of high-entropy materials and a new type of ultrahigh temperature ceramics. Scientific Reports, 2016, 6: 37946
[11] Jiang S, Hu T, Gild J, et al. A new class of high-entropy perovskite oxides. Scripta Materialia, 2018, 142: 116-120
[12] Gild J, Samiee M, Braun JL, et al. High-entropy fluorite oxides. Journal of the European Ceramic Society, 2018, 38: 3578-3584
[13] Sharma Y, Musico BL, Gao X, et al. Single-crystal high entropy perovskite oxide epitaxial films. Physical Review Materials, 2018, 2: 060404
[14] Sarkar A, Djenadic R, Wang D, et al. Rare earth and transition metal based entropy stabilised perovskite type oxides. Journal of the European Ceramic Society, 2018, 38: 2318-2327
[15] Harrington TJ, Gild J, Sarker P, et al. Phase stability and mechanical properties of novel high entropy transition metal carbides. Acta Materialia, 2019, 166: 271-280
[16] Yan X, Constantin L, Lu Y, et al. (Hf0.2Zr0.2Ta0.2Nb0.2Ti0.2)C high-entropy ceramics with low thermal conductivity. Journal of the American Ceramic Society, 2018, 101: 4486-4491
[17] Castle E, Csanádi T, Grasso S, et al. Processing and properties of high-entropy ultra-high temperature carbides. Scientific Reports, 2018, 8: 8609
[18] Ye B, Wen T, Huang K, et al. First-principles study, fabrication and characterization of (Hf0.2Zr0.2Ta0.2Nb0.2Ti0.2)C high-entropy ceramic. Journal of the American Ceramic Society, 2019
[19] Gild J, Braun J, Kaufmann K, et al. A high-entropy silicide: (Mo0.2Nb0.2Ta0.2Ti0.2W0.2)Si2. arXiv preprint arXiv:1902.01033, 2019
[20] Braun JL, Rost CM, Lim M, et al. Charge-induced disorder controls the thermal conductivity of entropy-stabilized oxides. Advanced Materials, 2018, 30: 1805004
[21] Zhang Y, Guo W-M, Jiang Z-B, et al. Dense high-entropy boride ceramics with ultra-high hardness. Scripta Materialia, 2019, 164: 135-139
[22] Lu Y, Dong Y, Guo S, et al. A promising new class of high-temperature alloys: Eutectic high-entropy alloys. Scientific Reports, 2014, 4: 6200
[23] Nagase T, Takemura M, Matsumuro M, et al. Solidification microstructure of AlCoCrFeNi2.1 eutectic high entropy alloy ingots. Materials Transactions, 2018, 59: 255-264
[24] Jiang H, Han K, Gao X, et al. A new strategy to design eutectic high-entropy alloys using simple mixture method. Materials & Design, 2018, 142: 101-105
[25] He F, Chen D, Han B, et al. Design of D022 superlattice with superior strengthening effect in high entropy alloys. Acta Materialia, 2019, 167: 275-286
[26] Zhao Y, Yang Y, Lee C-H, et al. Investigation on phase stability of AlxCo0.2Cr0.2Ni0.2Ti0.4-x high entropy alloys. Journal of Phase Equilibria and Diffusion, 2018, 39: 610-622
[27] Gao MC, Zhang C, Gao P, et al. Thermodynamics of concentrated solid solution alloys. Current Opinion in Solid State & Materials Science, 2017, 21: 238-251
[28] Liu CT, George EP, Maziasz PJ, et al. Recent advances in B2 iron aluminide alloys: Deformation, fracture and alloy design. Materials Science and Engineering: A, 1998, 258: 84-98
[29] Eleno L, Frisk K, Schneider A. Assessment of the Fe-Ni-Al system. Intermetallics, 2006, 14: 1276-1290
[30] Liu XL, Lindwall G, Gheno T, et al. Thermodynamic modeling of Al-Co-Cr, Al-Co-Ni, Co-Cr-Ni ternary systems towards a description for Al-Co-Cr-Ni. Calphad, 2016, 52: 125-142
[31] Pike LM, Chang YA, Liu CT. Point defect concentrations and hardening in binary B2 intermetallics. Acta Materialia, 1997, 45: 3709-3719
[32] Zhang Y, Zhou YJ, Lin JP, et al. Solid-solution phase formation rules for multi-component alloys. Advanced Engineering Materials, 2008, 10: 534-538
[33] Guo S, Ng C, Lu J, et al. Effect of valence electron concentration on stability of fcc or bcc phase in high entropy alloys. Journal of Applied Physics, 2011, 109: 103505
| [] |
[
"The gluon and charm content of the deuteron",
"The gluon and charm content of the deuteron"
] | [
"Stanley J Brodsky \nSLAC National Accelerator Laboratory\nStanford University\n94309StanfordCAUSA\n",
"Yu-Ju Kelly ",
"Chiu \nSLAC National Accelerator Laboratory\nStanford University\n94309StanfordCAUSA\n",
"Jean-Philippe Lansberg \nIPNO\nCNRS-IN2P3\nUniv. Paris-Sud\nUniversité Paris-Saclay\n91406Orsay CedexFrance\n",
"Nodoka Yamanaka \nIPNO\nCNRS-IN2P3\nUniv. Paris-Sud\nUniversité Paris-Saclay\n91406Orsay CedexFrance\n\niTHES Research Group\nRIKEN\n351-0198WakoSaitamaJapan\n"
] | [
"SLAC National Accelerator Laboratory\nStanford University\n94309StanfordCAUSA",
"SLAC National Accelerator Laboratory\nStanford University\n94309StanfordCAUSA",
"IPNO\nCNRS-IN2P3\nUniv. Paris-Sud\nUniversité Paris-Saclay\n91406Orsay CedexFrance",
"IPNO\nCNRS-IN2P3\nUniv. Paris-Sud\nUniversité Paris-Saclay\n91406Orsay CedexFrance",
"iTHES Research Group\nRIKEN\n351-0198WakoSaitamaJapan"
] | [] | We evaluate the frame-independent gluon and charm parton-distribution functions (PDFs) of the deuteron utilizing light-front quantization and the impulse approximation. We use a nuclear wave function obtained from solving the nonrelativistic Schrödinger equation with the realistic Argonne v18 nuclear force, which we fold with the proton PDF. The predicted gluon distribution in the deuteron (per nucleon) is a few percent smaller than that of the proton in the domain x_bj = Q^2/(2 p_N·q) ∼ 0.4, whereas it is strongly enhanced for x_bj larger than 0.6. We discuss the applicability of our analysis and comment on how to extend it to the kinematic limit x_bj → 2. We also analyze the charm distribution of the deuteron within the same approach by considering both the perturbatively and non-perturbatively generated (intrinsic) charm contributions. In particular, we note that the intrinsic-charm content in the deuteron will be enhanced due to 6-quark "hidden-color" QCD configurations. | 10.1016/j.physletb.2018.06.070 | [
"https://arxiv.org/pdf/1805.03173v1.pdf"
] | 119,097,174 | 1805.03173 | ead67e3c4ba8f00699e85323a0058efaef23d19f |
The gluon and charm content of the deuteron
8 May 2018
Stanley J Brodsky
SLAC National Accelerator Laboratory
Stanford University
94309StanfordCAUSA
Yu-Ju Kelly Chiu
SLAC National Accelerator Laboratory
Stanford University
94309StanfordCAUSA
Jean-Philippe Lansberg
IPNO
CNRS-IN2P3
Univ. Paris-Sud
Université Paris-Saclay
91406Orsay CedexFrance
Nodoka Yamanaka
IPNO
CNRS-IN2P3
Univ. Paris-Sud
Université Paris-Saclay
91406Orsay CedexFrance
iTHES Research Group
RIKEN
351-0198WakoSaitamaJapan
The gluon and charm content of the deuteron
8 May 2018
We evaluate the frame-independent gluon and charm parton-distribution functions (PDFs) of the deuteron utilizing light-front quantization and the impulse approximation. We use a nuclear wave function obtained from solving the nonrelativistic Schrödinger equation with the realistic Argonne v18 nuclear force, which we fold with the proton PDF. The predicted gluon distribution in the deuteron (per nucleon) is a few percent smaller than that of the proton in the domain x_bj = Q^2/(2 p_N·q) ∼ 0.4, whereas it is strongly enhanced for x_bj larger than 0.6. We discuss the applicability of our analysis and comment on how to extend it to the kinematic limit x_bj → 2. We also analyze the charm distribution of the deuteron within the same approach by considering both the perturbatively and non-perturbatively generated (intrinsic) charm contributions. In particular, we note that the intrinsic-charm content in the deuteron will be enhanced due to 6-quark "hidden-color" QCD configurations.
Introduction
A primary challenge in nuclear physics is to study the structure and dynamics of nuclei from first principles in terms of the fundamental quark and gluon degrees of freedom of quantum chromodynamics (QCD). The conventional description of nuclear many-body systems, where nucleons are treated as elementary particles with phenomenological potentials, can be justified in the nonrelativistic domain [1-6]. However, in the short-distance, high-momentum-transfer region, quark and gluon fields play an essential role in describing nuclear systems, and non-nucleonic phenomena, such as QCD "hidden-color" degrees of freedom [7-10], become relevant. For example, the six-quark Fock state of the deuteron has five different SU(3) color-singlet contributions, only one of which projects onto the standard proton and neutron three-quark clusters. The leading-twist shadowing [11-15] of nuclear parton distributions at small x_bj in the Gribov-Glauber theory is due to the destructive interference of two-step and one-step amplitudes, where the two-step amplitude depends on diffractive deep inelastic scattering (DDIS) ℓN → ℓ′N′X, leaving the struck nucleon intact. The study of the quark and gluon structure of nuclei thus illuminates the intersection between nuclear and particle physics.
The quark and gluon distributions of nuclei also play an important role in high-energy astrophysics [16,17]. An accurate knowledge of nuclear parton distributions is essential in many fields of physics [18]. For example, the gluonic content of light nuclei is important for understanding the production of antiprotons in interstellar reactions. The charm-quark distribution function in nuclei at high x_bj can significantly change the predictions for the spectrum of cosmic neutrinos and is thus important for interpreting the background of ultra-high-energy neutrinos which contribute to the IceCube experimental data [19,20] in the high-x_F domain [21-23]. Furthermore, the parton-distribution function (PDF) of nuclei is the initial condition controlling the dynamics of the possible formation and thermalization of the quark-gluon plasma (see e.g. [24]).
Collider experiments typically probe proton and nuclear PDFs in the region of small $x_{bj} = Q^2/(2 p_N \cdot q)$ (see [25-28] for recent works showing the relevance of LHC heavy-flavor data to determine the gluon content of the nuclei at small $x_{bj}$). In contrast, fixed-target experiments can unveil the PDF over the full range of $x_{bj}$ up to unity by taking advantage of the asymmetry of the experimental apparatus and the kinematics. New fixed-target experiments using the beams of the LHC are currently investigated (see the works of the AFTER@LHC study group [29-33]) following the very positive outcome of the data taking of the SMOG@LHCb system [34, 35]. In fixed-target experiments, one also has the advantage that the parton distributions of a large variety of nuclei, both polarized and unpolarized, can be measured. It is thus an important theoretical task to predict the gluon and heavy-quark distributions of nuclei.
We will focus on the deuteron, which is the simplest many-nucleon system and thus can be evaluated with high accuracy in nuclear physics. It is therefore an excellent system where nuclear effects [7, 9, 36-60] can be studied. In addition, a careful study of the structure of the deuteron may provide accurate information on the quark and gluon structure of the neutron [61-63]. In particular, the gluon PDF of the neutron is of interest. The PDF of the deuteron near the maximal fraction $x_{bj} = 2$ (we use this definition in this work) can be constrained by perturbative QCD, since it is the dual of the deuteron form factor at high momentum transfer $Q^2$ [64, 65]. In this work, we will mostly be interested in the region of $x_{bj} \sim 1$, a domain which AFTER@LHC can access.
As a first study, we have calculated the gluon PDF in the deuteron within the impulse approximation, which gives the leading contribution at $x_{bj} < 1$. To do so, we have solved the Schrödinger equation of the two-nucleon system with a phenomenological nuclear potential [1] using the Gaussian expansion method [66]. We have then derived the boost-invariant light-front wave function [67, 68] of the nucleus and convoluted it with the gluon distribution of the nucleon in order to obtain the gluon distribution of the deuteron. The complications of boosting an instant-form nucleon wave function to nonzero momentum are discussed in Ref. [69]. This paper is organized as follows. In the next section, we calculate the gluon PDF of the deuteron through the procedure mentioned above. In Section 3, we discuss the applicability of the impulse approximation and show our result. We also extend our discussion to illuminate the intrinsic heavy-quark contribution to the deuteron charm-quark distribution (Section 3.2). A summary is presented in the final section.
Derivation of the gluon PDF of the deuteron
Deuteron wave function
Let us now explain how we convolute the gluon PDF of the nucleon with the deuteron wave function in the impulse approximation [see Fig. 1 (a)]. The impulse approximation is the leading contribution in chiral effective field theory (χEFT) [48, 58]. We will show later that the two-nucleon contribution [Fig. 1 (b)] is subleading in the nucleon velocity expansion. These arguments lead us to consider a nonrelativistic framework. We first calculate the wave function of the deuteron, given by the bound-state solution of the nonrelativistic two-nucleon Schrödinger equation with the Argonne v18 potential [1] as the nuclear force. To solve the equation, we use the Gaussian expansion method [66], where an accurate solution is provided as a superposition of Gaussians with a geometric series of ranges. The Gaussian basis is given by
$$\Phi_{nlm}(\mathbf{r}) = N_{nl}\, r^l\, e^{-\nu_n r^2}\, Y_{lm}(\hat{\mathbf{r}}), \qquad (1)$$
where $N_{nl}$ is the normalization constant of the Gaussian basis, $\hat{\mathbf{r}}$ the unit vector of the relative coordinate $\mathbf{r}$, and $\nu_n = 1/r_n^2$ with $r_n = r_1 a^{n-1}$ ($n = 1, \cdots, n_{\rm max}$). We have taken $n_{\rm max} = 12$ Gaussians with $r_1 = 0.1$ fm and the common ratio $a$ chosen so that $r_{12} = 10$ fm. Note that the nuclear force has a strong tensor force which may change the orbital angular momentum by two units, so the S-wave and D-wave states are relevant. The deuteron state is thus given by
$$| {}^2{\rm H}, m_j \rangle = \sum_n c^{(s)}_n N_{n0}\, e^{-\nu_n r^2}\, Y_{00}(\hat{\mathbf{r}})\, \chi_{1,m_j} + \sum_{n'} c^{(d)}_{n'} N_{n'2}\, r^2 e^{-\nu_{n'} r^2} \sum_{m_l, m_s} f^{m_j}_{m_l m_s}\, Y_{2 m_l}(\hat{\mathbf{r}})\, \chi_{1,m_s}, \qquad (2)$$
where $\chi_{1,m_s} \equiv | s = 1, m_s \rangle$ and $f^{m_j}_{m_l m_s} \equiv \langle l = 2, m_l;\, s = 1, m_s\, |\, j = 1, m_j \rangle$.
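The geometric progression of Gaussian ranges used above is easy to reproduce numerically. The following Python lines are a minimal sketch of this construction; the variable names are ours, and only the values quoted in the text ($n_{\rm max} = 12$, $r_1 = 0.1$ fm, $r_{12} = 10$ fm) are taken from the paper.

```python
import numpy as np

# Geometric progression of Gaussian ranges: r_n = r_1 * a**(n-1),
# with n_max = 12, r_1 = 0.1 fm and r_12 = 10 fm as quoted in the text.
n_max, r1, r_last = 12, 0.1, 10.0
a = (r_last / r1) ** (1.0 / (n_max - 1))  # common ratio fixed by r_12 = 10 fm
r = r1 * a ** np.arange(n_max)            # ranges r_1, ..., r_12 in fm
nu = 1.0 / r**2                           # Gaussian parameters nu_n in fm^-2

print(round(a, 3))            # ~1.520
print(r[0], round(r[-1], 6))  # 0.1 and 10.0
```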
To solve the Schrödinger equation, we have to diagonalize the Hamiltonian matrix together with the nuclear norm matrix, which involves the information of the overlap between Gaussian basis functions. This is a generalized eigenvalue problem (for details, see Section 2.1 of Ref. [66]; a numerical sketch is given at the end of this subsection). By diagonalizing the Hamiltonian, we obtain the wave function shown in Fig. 2, which has a dominant S-wave component and a D-wave component representing 6% of the total probability. In our framework, the wave function is given as a superposition of Gaussians, so further transformations can be performed analytically. We then Fourier transform it and project the wave function onto the z-axis. After some manipulations, we obtain the following expression for the wave function of the unpolarized deuteron expressed in terms of the momentum along the z-axis, $p_z$:
$$P(p_z) = \sum_{n\, n'} c^{(s)}_n c^{(s)}_{n'} N_{n0} N_{n'0}\, e^{-\frac{1}{4}\,\cdots} \qquad (3)$$
The corresponding probability distribution is shown in Fig. 3. The distribution of the nucleon momentum is centered at $p_z = 0$, and the standard deviation is close to 50 MeV. This is due to the kinetic energy of the nucleon (about 20 MeV), which is the bound-state effect of the nuclear force. Figure 3 also displays the contribution from the S-wave, which is nearly identical to the total result. In Fig. 3, we also show the momentum distribution of the nucleon inside a typical heavy nucleus with the Fermi energy $\epsilon_F \equiv p_F^2/(2 m_N) = 33$ MeV.
The smearing of the momentum distribution is given by [43]
$$P(p_z) = \frac{1}{\sqrt{2\pi\gamma_F}} \exp\left(-\frac{p_z^2}{2\gamma_F}\right), \qquad (4)$$
where $\gamma_F = \frac{1}{5} p_F^2$. One sees that the momentum distribution of the deuteron is narrower than that of a typical heavy nucleus.
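The generalized eigenvalue problem mentioned above (Hamiltonian matrix together with the norm matrix of the non-orthogonal Gaussian basis) can be solved with standard routines. The sketch below is ours and uses a toy S-wave-only problem with a single attractive Gaussian potential, for which the overlap, kinetic and potential matrix elements are known analytically; it only illustrates the $Hc = ESc$ structure and is not the Argonne v18 calculation (the toy depth and range are invented, deuteron-like numbers for illustration only).

```python
import numpy as np
from scipy.linalg import eigh

hbar2_over_mu = 197.327**2 / (938.918 / 2.0)  # MeV fm^2, NN reduced mass

# Same geometric Gaussian ranges as in the previous sketch
n_max, r1 = 12, 0.1
a = (10.0 / r1) ** (1.0 / (n_max - 1))
nu = 1.0 / (r1 * a ** np.arange(n_max)) ** 2  # fm^-2

# Toy attractive Gaussian potential V(r) = -V0 exp(-kappa r^2);
# depth and range are invented for illustration only.
V0, kappa = 72.15, 1.0 / 1.484**2  # MeV, fm^-2

# Analytic matrix elements for unnormalized S-wave Gaussians exp(-nu_n r^2)
g = nu[:, None] + nu[None, :]
S = np.pi**1.5 * g**-1.5                                       # norm (overlap) matrix
T = 3.0 * hbar2_over_mu * (nu[:, None] * nu[None, :] / g) * S  # kinetic energy
V = -V0 * np.pi**1.5 * (g + kappa)**-1.5                       # potential
H = T + V

E, c = eigh(H, S)   # generalized eigenvalue problem H c = E S c
print(E[0])         # lowest eigenvalue in MeV; negative => bound state
```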
Light-front momentum fraction
We now calculate the light-front momentum distribution of the nucleon in the deuteron. Note that the procedure to obtain a wave function in the light-front frame from the instant form is not unique. In this work, we follow the recipe of Ref. [70] (see also [43, 71-73]), giving the wave function in the light-front frame as
$$\psi(\mathbf{p}_\perp, z) = \sqrt{\frac{\partial p_z(\mathbf{p}_\perp, z)}{\partial z}}\; \psi(\mathbf{p}_\perp, p_z), \qquad (5)$$
where $p_z = (z - 1) \sqrt{\frac{m_N^2 + \mathbf{p}_\perp^2}{z(2 - z)}}$. The momentum fraction of the nucleon in the deuteron, $z$, is defined in the interval $0 \leq z \leq 2$. This can consistently be derived using $z$ defined by
$$z \equiv A\, \frac{p_N^+}{p_A^+} = \frac{A}{m_A} \left( \sqrt{m_N^2 + p_z^2 + \mathbf{p}_\perp^2} + p_z \right), \qquad (6)$$
where $p_N^+$ and $p_A^+$ are the momenta of the nucleon and of the nucleus in the light-front frame, respectively, and $A$ the nucleon number of the nucleus ($A = 2$ for the deuteron). We then have $z \leq A$. The masses of the nucleon and of the nucleus are labeled by $m_N$ and $m_A$, respectively. By nonrelativistically reducing the nuclear binding effect, $(p_z^2 + \mathbf{p}_\perp^2)/m_N^2 \ll 1$ and $m_A \approx A m_N$, one obtains [43]
$$z - 1 \approx \frac{p_z}{m_N}. \qquad (7)$$
This can however be improved by considering the energy shift due to the motion of the nucleon inside the deuteron. The momentum fraction is then
$$z = A\, \frac{p_N^+}{p_A^+} \approx A\, \frac{E_N + p_z}{2 E_N} = 1 + \frac{p_z}{E_N} = 1 + \frac{p_z}{\sqrt{p_z^2 + m_N^2}}, \qquad (8)$$
where we still neglect $\mathbf{p}_\perp$. By solving the above equation in terms of $p_z$, the nucleon longitudinal momentum inside the deuteron satisfies
$$p_z = \frac{z - 1}{\sqrt{z(2 - z)}}\, m_N. \qquad (9)$$
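As a quick consistency check, Eq. (9) should be the exact inverse of Eq. (8) with $\mathbf{p}_\perp$ neglected. A minimal numerical sketch (the nucleon mass value is our choice):

```python
import numpy as np

m_N = 0.93892  # GeV; average nucleon mass (our choice of value)

z = np.linspace(0.05, 1.95, 9)                   # momentum fractions in (0, 2)
p_z = (z - 1.0) / np.sqrt(z * (2.0 - z)) * m_N   # Eq. (9)
z_back = 1.0 + p_z / np.sqrt(p_z**2 + m_N**2)    # Eq. (8) with p_perp = 0

assert np.allclose(z, z_back)  # Eq. (9) inverts Eq. (8) exactly
```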
We think this manipulation is more suitable for light-front dynamics than the approximation used in Ref. [43]. We then apply this variable change to the previously obtained z-axis momentum distribution $P(p_z) \equiv |\psi(p_z)|^2$. We have
$$N_{N/A}(z)\, dz = \frac{m_N}{\left[z(2 - z)\right]^{3/2}} \left| \psi\!\left( \frac{z - 1}{\sqrt{z(2 - z)}}\, m_N \right) \right|^2 dz. \qquad (10)$$
This relation agrees with the recipe of Ref. [70]. It yields the light-front distribution plotted in Fig. 4, where one sees that the momentum-fraction distribution of the nucleon is broader in the deuteron than in a typical heavy nucleus, which is expected from the importance of the Fermi motion.
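Numerically, Eq. (10) is just a change of variables with the Jacobian $\partial p_z/\partial z = m_N/[z(2-z)]^{3/2}$. The sketch below applies it to a Gaussian stand-in for $P(p_z)$ with the roughly 50 MeV width quoted in Section 2.1 (the stand-in is ours; in practice the full Eq. (3) result would be used) and checks that the total probability is preserved:

```python
import numpy as np

m_N, sigma = 938.9, 50.0  # MeV; Gaussian width as a stand-in for Eq. (3)

def P(pz):  # normalized toy z-axis momentum distribution
    return np.exp(-pz**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

z = np.linspace(1e-4, 2.0 - 1e-4, 20001)
pz = (z - 1.0) / np.sqrt(z * (2.0 - z)) * m_N   # Eq. (9)
jac = m_N / (z * (2.0 - z)) ** 1.5              # dp_z/dz
N_z = jac * P(pz)                               # Eq. (10)

print(np.trapz(N_z, z))  # ~1: normalization survives the change of variables
```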
Gluon distribution
Now that we have the light-front distribution of the nucleon in the deuteron, we can derive the gluon PDF in the deuteron using the impulse approximation, by folding the gluon PDF of the nucleon [74-81] with $N_{N/A}(z)$. Since we are interested in the high-$x$ behavior of the gluon PDF, we need a well-behaved gluon PDF up to $x = 1$. For this reason, we prefer to use GRV98 [82].
The gluon PDF in the deuteron is obtained by folding the gluon PDF of the proton $G_p(x)$ with the light-front distribution of the nucleon inside the deuteron $N_{N/A}(z)$:
$$G_d(x, \mu_F) = 2 \int dy\, dz\, N_{N/A}(z)\, G_p(y, \mu_F)\, \delta(yz - x) = 2 \int_x^A N_{N/A}(z)\, \frac{1}{z}\, G_p(x/z, \mu_F)\, dz, \qquad (11)$$
where $\mu_F$ is the factorization scale. We note that the effect of the scale evolution is contained in $G_p(x/z, \mu_F)$. This operation consists of calculating the contribution depicted by the diagram of Fig. 1 (a). In our computation, we of course assume that the proton and the neutron have the same gluon PDF, hence the factor of two in Eq. (11).
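The folding in Eq. (11) is a one-dimensional convolution that is straightforward to evaluate with standard quadrature. The sketch below is ours: it reuses the Gaussian toy $N_{N/A}(z)$ from the previous sketch and a simple $(1-x)^5/x$ stand-in for the GRV98 proton gluon PDF, purely to show the mechanics of the integral:

```python
import numpy as np
from scipy.integrate import quad

m_N, sigma, A = 938.9, 50.0, 2.0

def N_NA(z):  # toy light-front distribution, Eqs. (9)-(10) with a Gaussian P(p_z)
    pz = (z - 1.0) / np.sqrt(z * (2.0 - z)) * m_N
    jac = m_N / (z * (2.0 - z)) ** 1.5
    return jac * np.exp(-pz**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def G_p(x):  # toy proton gluon PDF, a stand-in for GRV98
    return 3.0 * (1.0 - x) ** 5 / x if 0.0 < x < 1.0 else 0.0

def G_d(x):  # Eq. (11): G_d(x) = 2 * int_x^A N_NA(z) G_p(x/z) dz / z
    val, _ = quad(lambda z: N_NA(z) * G_p(x / z) / z, x, A,
                  limit=200, points=(1.0,))
    return 2.0 * val

for x in (0.2, 0.4, 0.6, 0.8):
    print(x, G_d(x), G_d(x) / (2.0 * G_p(x)))  # G_d and its ratio to 2*G_p
```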
Results and discussion
Domain of applicability
Before plotting our results, let us discuss the domain of applicability of our calculation. Indeed, we assumed that the nucleon inside the deuteron is not modified from the on-shell one. The nucleons in the deuteron can be considered almost on-shell when the invariant mass of the nucleon pair $M_{pn}$ has a small virtuality compared to the binding of the deuteron:
$$M_{pn}^2 - m_d^2 < m_d\, \epsilon_d, \qquad (12)$$
where $m_d$ and $\epsilon_d$ are respectively the deuteron mass and binding energy. The above condition of virtuality can be converted to a constraint on the nucleon velocity, that is $v = p_z/m_N$, by using Eq. (9) [or Eq. (7)]. This gives $v < 0.004$, which is obviously nonrelativistic. From this inequality, we can then derive the corresponding region of the momentum fraction of the gluon in the deuteron, by computing the average $z$ as a function of $x$. This yields a conservative limit, $0 < x < 0.7$, outside which the off-shell correction may be relevant.
Let us now inspect what such off-shell effects may be. We start by discussing the two-nucleon effects [see Fig. 1 (b)]. The nth moment of the PDF can indeed be expanded in terms of the velocity of the nucleus $v_A$ as [48]
$$\langle x^n \rangle_{g|A} = v_{A,\mu_0} \cdots v_{A,\mu_n}\, \langle A | O_g^{\mu_0 \cdots \mu_n} | A \rangle, \qquad (13)$$
where $O_g^{\mu_0 \cdots \mu_n}$ is the gluon density operator. We note that the nuclear velocity is equal to the nucleon velocity $v$, up to small corrections due to the nuclear binding. On the other hand, $\langle x^n \rangle_{g|A}$ can be expressed in terms of the nonrelativistic nucleon operators as
$$\langle x^n \rangle_{g|A} = \langle x^n \rangle_g \left[ A + \langle A | \alpha_n (N^\dagger N)^2 | A \rangle \right], \qquad (14)$$
where $\langle x^n \rangle_g$ is the nth moment of the gluon PDF of the nucleon. The first term, $A$, is the nucleon number, obtained from the one-nucleon operator $\langle A | N^\dagger N | A \rangle = A$. The nuclear matrix element $\langle A | \alpha_n (N^\dagger N)^2 | A \rangle$ provides the nuclear modification effect, and depends on the renormalization scale but not on the momentum fraction. The coefficient $\alpha_n$ is proportional to the nth moment of the nuclear modification effect of the PDF, which is the residual piece of the nuclear PDF after subtracting the gluon PDF of free nucleons.
The zeroth moment $\alpha_0$ is zero due to charge conservation, and the first moment $\alpha_1$ is known to be small from experiment [83]. At the hadron level, the leading off-shell correction is the pion exchange current [48, 84, 85], but these contributions are N$^3$LO in χEFT, thus small. This means that the nuclear modification effect is expected to be small in the nonrelativistic regime. The first off-shell effect therefore starts from $v^2$, which means that the constraint discussed above, $v < 0.004$, is probably too conservative.
Let us now see the range of velocities in which our framework holds. In Fig. 5, we plot the averaged squared velocity of the nucleon as a function of the gluon momentum fraction $x$. We of course exclude the region $v^2 > 1$, which is unphysical. We note that $v^2$ is still small at $x = 1.1$, $v^2 \approx 0.3$, and we therefore consider the domain of applicability of our framework to be $0 < x < 1.1$, where the off-shell effects are likely small.

According to the above discussion, we show the result of our calculation of the gluon PDF in the deuteron up to $x \simeq 1.1$ in Fig. 6. The gluon PDF of the deuteron $G_d(x, \mu_F)$ shows a monotonic decrease. In the region $0 < x < 0.6$, $G_d(x, \mu_F) \approx 2 G_p(x, \mu_F)$ within 5%, as expected. It is also notable that the ratio $G_d/G_p$ is larger than unity for $0 < x < 0.2$, and that it shows a minimum near $x = 0.4$. Above $x \sim 0.6$, the ratio $G_d/G_p$ grows rapidly due to the falloff of the PDF of the proton. This is due to the Fermi motion, where the momentum of the nucleon in the deuteron is pushed to the high-momentum region, in a similar way as the quark PDF.
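The $\langle v^2 \rangle(x)$ curve of Fig. 5 can be mimicked by weighting $v^2 = (p_z/m_N)^2$ with the integrand of Eq. (11); this definition of the average is our reading of the text, and the toy inputs are those of the previous sketches:

```python
import numpy as np
from scipy.integrate import quad

m_N, sigma = 938.9, 50.0

def N_NA(z):  # toy light-front distribution (see the sketches in Section 2)
    pz = (z - 1.0) / np.sqrt(z * (2.0 - z)) * m_N
    jac = m_N / (z * (2.0 - z)) ** 1.5
    return jac * np.exp(-pz**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)

def G_p(x):  # toy stand-in for the proton gluon PDF
    return 3.0 * (1.0 - x) ** 5 / x if 0.0 < x < 1.0 else 0.0

def v2_mean(x):  # <v^2> at fixed x, weighted by the Eq. (11) integrand
    w = lambda z: N_NA(z) * G_p(x / z) / z
    v2 = lambda z: (z - 1.0) ** 2 / (z * (2.0 - z))  # v^2 = (p_z/m_N)^2, Eq. (9)
    num = quad(lambda z: w(z) * v2(z), x, 2.0, limit=200, points=(1.0,))[0]
    den = quad(w, x, 2.0, limit=200, points=(1.0,))[0]
    return num / den

for x in (0.2, 0.6, 0.9):
    print(x, v2_mean(x))
```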
Charm distribution of the deuteron
Another interesting point to discuss is the charm-quark distribution, which can be analyzed in the same way as that of the gluon. The charm-quark distribution of the deuteron can equally be calculated in the domain of applicability of our framework discussed in Sec. 3.1 (0 < x < 1.1).
The charm quarks in a nucleon are virtually created by gluon splitting (see Fig. 7) at leading order. The distribution of the charm quark generated by this subprocess inherits the gluon distribution, and decreases monotonically in $x$. We have calculated this contribution by using the charm PDF of CTEQ-JLAB 15 [86], which we fold with $N_{N/A}(z)$ as discussed in Section 2.3. The result of our calculation is shown in Fig. 8. The behavior of the charm PDF of the deuteron due to the gluon splitting is similar to that of the gluon. The ratio of the charm PDFs of the deuteron (per nucleon) to the proton is unity within 5% for $x < 0.4$, and it deviates from unity for $x > 0.4$ due to Fermi motion, as expected from the impulse approximation. The distribution of charm quarks in the nucleon however receives additional non-perturbative contributions from charm quark-antiquark pair creation which is multiconnected by two or more gluons coupling to different valence quarks (see Fig. 9). This intrinsic-charm contribution, although suppressed since it is higher order in $\alpha_s$, is favored by a higher probability due to the sharing of momenta from different valence quarks. This is in contrast to the gluon-splitting contributions, where the charm and anticharm quarks couple to a single valence quark. In the limit of heavy quarks ($Q$), the intrinsic heavy-quark distribution in a hadron is suppressed as $m_Q^{-2}$, as can be derived by the application of the operator product expansion [87-89]. A model for the charm distribution in the nucleon based on kinematical constraints is given in Refs. [90, 91]:
$$f^{\rm int}_{c/N}(x) = 1800\, N\, x^2 \left[ \frac{1}{3} (1 - x)(1 + 10x + x^2) + 2x(1 + x) \ln x \right]. \qquad (15)$$
The normalization $N$ is phenomenologically determined as $N \sim 0.01$ [91]. This distribution peaks at $x \sim 0.$ We plot in Fig. 8 the intrinsic-charm distribution of the deuteron calculated in our framework. As in the case of gluon splitting, the Fermi motion alters the ratio of the deuteron PDFs (per nucleon) to that of the proton from unity for $x > 0.6$. We also observe that this ratio, although consistent with unity within 5%, varies more than that of the gluon PDFs in the region $0 < x < 0.6$.
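Equation (15) can be checked directly: with the prefactor 1800, $x^2$ times the bracket integrates to exactly $1/1800$ over $0 < x < 1$, so $\int f^{\rm int}_{c/N}\, dx = N$, i.e. $N$ is the intrinsic-charm probability. A short sketch (the $x/2$ rescaling in the last lines is one simple reading of the endpoint rescaling to $x = 2$ discussed below):

```python
import numpy as np
from scipy.integrate import quad

N = 0.01  # phenomenological normalization of Ref. [91]

def f_int_c(x):  # Eq. (15): intrinsic charm of the nucleon
    return 1800.0 * N * x**2 * ((1.0 - x) * (1.0 + 10.0 * x + x**2) / 3.0
                                + 2.0 * x * (1.0 + x) * np.log(x))

print(quad(f_int_c, 0.0, 1.0)[0])  # = N: Eq. (15) integrates to the probability N

# One simple reading of the endpoint rescaling x = 1 -> x = 2 discussed below:
f_int_d = lambda x: 0.5 * f_int_c(x / 2.0)  # support stretched to 0 < x < 2
print(quad(f_int_d, 0.0, 2.0)[0])           # integral preserved (same N)
```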
We can also derive an intrinsic-charm distribution of the deuteron by considering a six-valence-parton configuration (see Fig. 10). It can be calculated by rescaling the endpoint of Eq. (15) from $x = 1$ to $x = 2$. The normalization of the intrinsic-charm contribution to the deuteron is currently not known (we plot it in Fig. 8 with $N = 10^{-4}$). There are however some arguments suggesting that this contribution is sizable. Indeed, besides the argument of the momentum-fraction sharing by several valence particles enhancing the intrinsic-charm content at high $x$, there is another enhancement from the combinatoric factors in the deuteron case. For the gluon splitting, we obviously have a factor of 6, whereas for the intrinsic charm generated by the radiation of two gluons from two distinct quarks, we have a factor of 15 [see Fig. 10 (a)]. The enhancement may even be larger for the intrinsic charm created by the three-gluon emission, although it is even higher order in $\alpha_s$, since we have a combinatoric factor of 20 [see Fig. 10 (b)]. Note that this combinatoric enhancement is absent in the case of the nucleon. The intrinsic-charm contribution generated off three-gluon emission may also be kinematically more advantageous than the two-gluon case, since the momenta of the valence quarks can stay closer to the valence configuration after the gluon radiation. It would thus be interesting to perform measurements sensitive to the charm content of the deuteron at $x \sim 1$. Fixed-target experiments at the LHC with the LHCb or ALICE detector provide an ideal setup for such measurements.
Summary
In this work, we have calculated the gluon and charm PDFs of the deuteron in light-front quantization. We used the impulse approximation, where the input nuclear wave function is obtained by solving the nonrelativistic Schrödinger equation with the phenomenological Argonne v18 nuclear potential as input. Although we only analyzed the nonrelativistic regime, the range of applicability of our computation is estimated to extend up to $x \sim 1.1$.
We have found that the gluon and charm PDFs of the deuteron (per nucleon) at low $x$ differ by only a few percent from those of the proton, as expected for nonrelativistic nucleons in the nucleus. However, as $x$ becomes close to unity, their distributions deviate significantly from those of the nucleon due to Fermi motion. This should be taken into account when extracting the gluon PDF of the neutron via this system.
We also discussed the charm PDF of the deuteron, which is potentially very interesting at $x \sim 1$ due to the intrinsic-charm contribution. The intrinsic charm of the deuteron is enhanced by the combinatoric factors characteristic of gluon emission and the sharing of the momentum by valence partons, although the overall normalization is somewhat uncertain. We expect the charm distribution in the deuteron to be studied in the region $0 < x < 1.1$ by future experiments, particularly fixed-target experiments using the LHC beams, in order to determine the normalization of the intrinsic-charm and hidden-color states.
In the limit of high momentum scale $Q^2 \to \infty$ for exclusive scatterings, other structures with the same quantum numbers as the $|NN\rangle$ state, such as the $\Delta\Delta$ states, or the hidden-color configurations [7-10], in which quarks are not arranged to form two color-singlet baryons, become relevant as Fock states. Indeed, in the short-distance limit, 80% of the deuteron will be composed of hidden-color states. This state should be continuously related to the almost maximal $|NN\rangle$ state at low resolution via the renormalization group equation. The composition at intermediate momentum scales also involves higher Fock states with a valence gluon [92], such as $|uuudddg\rangle$. We note that the composition at intermediate distances can only be calculated if the normalization of the Fock state at some scale is known, as is the case for the renormalization group equation analysis. As for now, the implication of these states for inclusive reactions at finite $x$ (away from 2 in the deuteron case), and thus the PDFs, remains to be studied, and is beyond the scope of our exploratory study.
At the endpoint ($x \sim 2$), where one gluon carries almost the entire momentum of the deuteron, the gluon PDF behavior is however related to the form factor of the system at short distances [36, 93, 94], and is known analytically. The counting rules indeed predict $G_d(x) \propto (2 - x)^{11}$ [36, 68, 94, 95]. Since the partons are maximally virtual in this limit, the deuteron has to be expressed in terms of quarks and gluons, and it is therefore not possible to treat this limit within our framework. Extending our nonrelativistic results to this limiting case is also left for future work, especially since it seems difficult to access experimentally in the near future.
Our framework could be extended to the case of the gluon and charm PDFs in heavier nuclei, such as $^4$He, which is one of the main ingredients of the interstellar matter, and $^{14}$N and $^{16}$O, which are the main components of the atmosphere. Such analyses would be important to reduce the theoretical uncertainty of the cross section of the reactions between primary cosmic rays and the interstellar matter, as well as to predict the ultra-high-energy neutrino background in terrestrial experiments such as IceCube [19-23]. A better knowledge of the gluon PDFs of light nuclei, e.g. $^3$He and $^4$He, is therefore crucial for high-energy astrophysics, and they could be measured in the near future in LHC fixed-target experiments.
Figure 1: Schematic representation of the PDF of the deuteron. The solid and double lines denote the nucleon and the deuteron, respectively, and the cross indicates the PDF operator. There are two distinct contributions: (a) one-nucleon operator, working in the impulse approximation, (b) two-nucleon operator, relevant in high-momentum exchange.
Figure 2: Radial component (spherical coordinate) of the deuteron wave function.
Figure 3: Momentum z-axis component of the deuteron wave function.
Figure 4: Momentum fraction of the nucleon in the deuteron. The data for a typical heavy nucleus with a Fermi energy $\epsilon_F = 33$ MeV are also shown for comparison (labeled as "Heavy nucleus").
Figure 5: The velocity distribution of the nucleon in the deuteron $v^2$ as a function of the gluon momentum fraction $x$ obtained in our framework. The region where $v^2 > 1$ is unphysical (grey band).
Figure 6: Gluon PDF in the deuteron and in the nucleon.
Figure 7: Diagrammatic representation of the charm-quark creation in a nucleon via gluon splitting.
Figure 8: Charm PDF in the deuteron and in the nucleon.
Figure 9: Diagrammatic representation of the intrinsic charm in a nucleon.
Figure 10: Diagrammatic representation of the intrinsic-charm generation in the deuteron: (a) from two-gluon fusion, (b) the $\alpha_s$-suppressed, but combinatorially enhanced, 3-gluon fusion.
Acknowledgements
We thank Jaume Carbonell
An Accurate nucleon-nucleon potential with charge independence breaking. R B Wiringa, V G J Stoks, R Schiavilla, 10.1103/PhysRevC.51.38arXiv:nucl-th/9408016Phys. Rev. 51nucl-thR. B. Wiringa, V. G. J. Stoks, and R. Schiavilla, "An Accurate nucleon-nucleon potential with charge independence breaking," Phys. Rev. C51 (1995) 38-51, arXiv:nucl-th/9408016 [nucl-th].
Few nucleon forces from chiral Lagrangians. U Van Kolck, Phys. Rev. 49U. van Kolck, "Few nucleon forces from chiral Lagrangians," Phys. Rev. C49 (1994) 2932-2941.
Realistic models of pion exchange three nucleon interactions. S C Pieper, V R Pandharipande, R B Wiringa, J Carlson, 10.1103/PhysRevC.64.014001arXiv:nucl-th/0102004Phys. Rev. 6414001nucl-thS. C. Pieper, V. R. Pandharipande, R. B. Wiringa, and J. Carlson, "Realistic models of pion exchange three nucleon interactions," Phys. Rev. C64 (2001) 014001, arXiv:nucl-th/0102004 [nucl-th].
Benchmark test calculation of a four nucleon bound state. H Kamada, 10.1103/PhysRevC.64.044001arXiv:nucl-th/0104057Phys. Rev. 6444001nucl-thH. Kamada et al., "Benchmark test calculation of a four nucleon bound state," Phys. Rev. C64 (2001) 044001, arXiv:nucl-th/0104057 [nucl-th].
Quantum Monte Carlo methods for nuclear physics. J Carlson, S Gandolfi, F Pederiva, S C Pieper, R Schiavilla, K E Schmidt, R B Wiringa, 10.1103/RevModPhys.87.1067arXiv:1412.3081Rev. Mod. Phys. 871067nucl-thJ. Carlson, S. Gandolfi, F. Pederiva, S. C. Pieper, R. Schiavilla, K. E. Schmidt, and R. B. Wiringa, "Quantum Monte Carlo methods for nuclear physics," Rev. Mod. Phys. 87 (2015) 1067, arXiv:1412.3081 [nucl-th].
Semilocal momentum-space regularized chiral two-nucleon potentials up to fifth order. P Reinert, H Krebs, E Epelbaum, arXiv:1711.08821nucl-thP. Reinert, H. Krebs, and E. Epelbaum, "Semilocal momentum-space regularized chiral two-nucleon potentials up to fifth order," arXiv:1711.08821 [nucl-th].
Quantum Chromodynamic Predictions for the Deuteron Form-Factor. S J Brodsky, C.-R Ji, G P Lepage, Phys. Rev. Lett. 5183S. J. Brodsky, C.-R. Ji, and G. P. Lepage, "Quantum Chromodynamic Predictions for the Deuteron Form-Factor," Phys. Rev. Lett. 51 (1983) 83.
Evolution of Relativistic Multi -Quark Systems. S J Brodsky, C.-R Ji, 10.1103/PhysRevD.33.1406Phys. Rev. 331406S. J. Brodsky and C.-R. Ji, "Evolution of Relativistic Multi -Quark Systems," Phys. Rev. D33 (1986) 1406.
Factorization Property of the Deuteron. S J Brodsky, C.-R Ji, 10.1103/PhysRevD.33.2653Phys. Rev. 332653S. J. Brodsky and C.-R. Ji, "Factorization Property of the Deuteron," Phys. Rev. D33 (1986) 2653.
Novel Six-Quark Hidden-Color Dibaryon States in QCD. M Bashkanov, S J Brodsky, H Clement, arXiv:1308.6404Phys. Lett. 727hep-phM. Bashkanov, S. J. Brodsky, and H. Clement, "Novel Six-Quark Hidden-Color Dibaryon States in QCD," Phys. Lett. B727 (2013) 438-442, arXiv:1308.6404 [hep-ph].
Gluon Recombination and Shadowing at Small Values of x. A H Mueller, J.-W Qiu, 10.1016/0550-3213(86)90164-1Nucl. Phys. 268A. H. Mueller and J.-w. Qiu, "Gluon Recombination and Shadowing at Small Values of x," Nucl. Phys. B268 (1986) 427-452.
Shadowing and Antishadowing of Nuclear Structure Functions. S J Brodsky, H J Lu, 10.1103/PhysRevLett.64.1342Phys. Rev. Lett. 641342S. J. Brodsky and H. J. Lu, "Shadowing and Antishadowing of Nuclear Structure Functions," Phys. Rev. Lett. 64 (1990) 1342.
Nuclear deep inelastic lepton scattering and coherence phenomena. G Piller, W Weise, 10.1016/S0370-1573(99)00107-6arXiv:hep-ph/9908230Phys. Rept. 330hep-phG. Piller and W. Weise, "Nuclear deep inelastic lepton scattering and coherence phenomena," Phys. Rept. 330 (2000) 1-94, arXiv:hep-ph/9908230 [hep-ph].
Nuclear shadowing. N Armesto, arXiv:hep-ph/0604108J. Phys. 32hep-phN. Armesto, "Nuclear shadowing," J. Phys. G32 (2006) R367-R394, arXiv:hep-ph/0604108 [hep-ph].
Dynamical model of antishadowing of the nuclear gluon distribution. L Frankfurt, V Guzey, M Strikman, arXiv:1612.08273Phys. Rev. C95. 555208hep-phL. Frankfurt, V. Guzey, and M. Strikman, "Dynamical model of antishadowing of the nuclear gluon distribution," Phys. Rev. C95 no. 5, (2017) 055208, arXiv:1612.08273 [hep-ph].
Prompt neutrino fluxes from atmospheric charm. R Enberg, M H Reno, I Sarcevic, 10.1103/PhysRevD.78.043005arXiv:0806.0418Phys. Rev. 7843005hep-phR. Enberg, M. H. Reno, and I. Sarcevic, "Prompt neutrino fluxes from atmospheric charm," Phys. Rev. D78 (2008) 043005, arXiv:0806.0418 [hep-ph].
Perturbative charm production and the prompt atmospheric neutrino flux in light of RHIC and LHC. A Bhattacharya, R Enberg, M H Reno, I Sarcevic, A Stasto, 10.1007/JHEP06(2015)110arXiv:1502.01076JHEP. 06110hep-phA. Bhattacharya, R. Enberg, M. H. Reno, I. Sarcevic, and A. Stasto, "Perturbative charm production and the prompt atmospheric neutrino flux in light of RHIC and LHC," JHEP 06 (2015) 110, arXiv:1502.01076 [hep-ph].
Nuclear effects in structure functions. M Arneodo, Phys. Rept. 240M. Arneodo, "Nuclear effects in structure functions," Phys. Rept. 240 (1994) 301-393.
Observation of High-Energy Astrophysical Neutrinos in Three Years of IceCube Data. M G Aartsen, IceCube Collaboration10.1103/PhysRevLett.113.101101arXiv:1405.5303Phys. Rev. Lett. 113101101astro-ph.HEIceCube Collaboration, M. G. Aartsen et al., "Observation of High-Energy Astrophysical Neutrinos in Three Years of IceCube Data," Phys. Rev. Lett. 113 (2014) 101101, arXiv:1405.5303 [astro-ph.HE].
Observation and Characterization of a Cosmic Muon Neutrino Flux from the Northern Hemisphere using six years of IceCube data. M G Aartsen, IceCube CollaborationarXiv:1607.08006Astrophys. J. 83313astro-ph.HEIceCube Collaboration, M. G. Aartsen et al., "Observation and Characterization of a Cosmic Muon Neutrino Flux from the Northern Hemisphere using six years of IceCube data," Astrophys. J. 833 no. 1, (2016) 3, arXiv:1607.08006 [astro-ph.HE].
Charm contribution to the atmospheric neutrino flux. F Halzen, L Wille, 10.1103/PhysRevD.94.014014arXiv:1605.01409Phys. Rev. D94. 114014hep-phF. Halzen and L. Wille, "Charm contribution to the atmospheric neutrino flux," Phys. Rev. D94 no. 1, (2016) 014014, arXiv:1605.01409 [hep-ph].
IceCube can constrain the intrinsic charm of the proton. R Laha, S J Brodsky, 10.1103/PhysRevD.96.123002arXiv:1607.08240Phys. Rev. 9612123002hep-phR. Laha and S. J. Brodsky, "IceCube can constrain the intrinsic charm of the proton," Phys. Rev. D96 no. 12, (2017) 123002, arXiv:1607.08240 [hep-ph].
On the intrinsic charm contribution to the prompt atmospheric neutrino flux. A V Giannini, V P Goncalves, F S Navarra, arXiv:1803.01728hep-phA. V. Giannini, V. P. Goncalves, and F. S. Navarra, "On the intrinsic charm contribution to the prompt atmospheric neutrino flux," arXiv:1803.01728 [hep-ph].
Heavy-flavour and quarkonium production in the LHC era: from proton-proton to heavy-ion collisions. A Andronic, arXiv:1506.03981Eur. Phys. J. C76. 3107nucl-exA. Andronic et al., "Heavy-flavour and quarkonium production in the LHC era: from proton-proton to heavy-ion collisions," Eur. Phys. J. C76 no. 3, (2016) 107, arXiv:1506.03981 [nucl-ex].
Impact of heavy-flavour production cross sections measured by the LHCb experiment on parton distribution functions at low x. O Zenaiev, PROSA CollaborationarXiv:1503.04581Eur. Phys. J. C75. 8396hep-phPROSA Collaboration, O. Zenaiev et al., "Impact of heavy-flavour production cross sections measured by the LHCb experiment on parton distribution functions at low x," Eur. Phys. J. C75 no. 8, (2015) 396, arXiv:1503.04581 [hep-ph].
Precision determination of the small-x gluon from charm production at LHCb. R Gauld, J Rojo, arXiv:1610.09373Phys. Rev. Lett. 118772001hep-phR. Gauld and J. Rojo, "Precision determination of the small-x gluon from charm production at LHCb," Phys. Rev. Lett. 118 no. 7, (2017) 072001, arXiv:1610.09373 [hep-ph].
Gluon shadowing and antishadowing in heavy-flavor production at the LHC. A Kusina, J.-P Lansberg, I Schienbein, H.-S Shao, arXiv:1712.07024hep-phA. Kusina, J.-P. Lansberg, I. Schienbein, and H.-S. Shao, "Gluon shadowing and antishadowing in heavy-flavor production at the LHC," arXiv:1712.07024 [hep-ph].
Towards an automated tool to evaluate the impact of the nuclear modification of the gluon density on quarkonium, D and B meson production in proton-nucleus collisions. J.-P Lansberg, H.-S Shao, 10.1140/epjc/s10052-016-4575-xarXiv:1610.05382Eur. Phys. J. C77. 11hep-phJ.-P. Lansberg and H.-S. Shao, "Towards an automated tool to evaluate the impact of the nuclear modification of the gluon density on quarkonium, D and B meson production in proton-nucleus collisions," Eur. Phys. J. C77 no. 1, (2017) 1, arXiv:1610.05382 [hep-ph].
Physics Opportunities of a Fixed-Target Experiment using the LHC Beams. S J Brodsky, F Fleuret, C Hadjidakis, J P Lansberg, 10.1016/j.physrep.2012.10.001arXiv:1202.6585Phys. Rept. 522hep-phS. J. Brodsky, F. Fleuret, C. Hadjidakis, and J. P. Lansberg, "Physics Opportunities of a Fixed-Target Experiment using the LHC Beams," Phys. Rept. 522 (2013) 239-255, arXiv:1202.6585 [hep-ph].
Heavy-ion Physics at a Fixed-Target Experiment Using the LHC Proton and Lead Beams (AFTER@LHC): Feasibility Studies for Quarkonium and Drell-Yan Production. B Trzeciak, C Silva, E G Ferreiro, C Hadjidakis, D Kikola, J P Lansberg, L Massacrier, J Seixas, A Uras, Z Yang, 10.1007/s00601-017-1308-0arXiv:1703.03726Few Body Syst. 585148nucl-exB. Trzeciak, C. Da Silva, E. G. Ferreiro, C. Hadjidakis, D. Kikola, J. P. Lansberg, L. Massacrier, J. Seixas, A. Uras, and Z. Yang, "Heavy-ion Physics at a Fixed-Target Experiment Using the LHC Proton and Lead Beams (AFTER@LHC): Feasibility Studies for Quarkonium and Drell-Yan Production," Few Body Syst. 58 no. 5, (2017) 148, arXiv:1703.03726 [nucl-ex].
Feasibility Studies for Single Transverse-Spin Asymmetry Measurements at a Fixed-Target Experiment Using the LHC Proton and Lead Beams (AFTER@LHC). D Kikola, M G Echevarria, C Hadjidakis, J.-P Lansberg, C Lorce, L Massacrier, C M Quintans, A Signori, B Trzeciak, arXiv:1702.01546Few Body Syst. 584139hep-exD. Kikola, M. G. Echevarria, C. Hadjidakis, J.-P. Lansberg, C. Lorce, L. Massacrier, C. M. Quintans, A. Signori, and B. Trzeciak, "Feasibility Studies for Single Transverse-Spin Asymmetry Measurements at a Fixed-Target Experiment Using the LHC Proton and Lead Beams (AFTER@LHC)," Few Body Syst. 58 no. 4, (2017) 139, arXiv:1702.01546 [hep-ex].
Feasibility studies for quarkonium production at a fixed-target experiment using the LHC proton and lead beams (AFTER@LHC). L Massacrier, B Trzeciak, F Fleuret, C Hadjidakis, D Kikola, J P Lansberg, H S Shao, 10.1155/2015/986348arXiv:1504.05145Adv. High Energy Phys. 2015986348hep-exL. Massacrier, B. Trzeciak, F. Fleuret, C. Hadjidakis, D. Kikola, J. P. Lansberg, and H. S. Shao, "Feasibility studies for quarkonium production at a fixed-target experiment using the LHC proton and lead beams (AFTER@LHC)," Adv. High Energy Phys. 2015 (2015) 986348, arXiv:1504.05145 [hep-ex].
Quarkonium Physics at a Fixed-Target Experiment using the LHC Beams. J P Lansberg, S J Brodsky, F Fleuret, C Hadjidakis, 10.1007/s00601-012-0445-8arXiv:1204.5793Few Body Syst. 53hep-phJ. P. Lansberg, S. J. Brodsky, F. Fleuret, and C. Hadjidakis, "Quarkonium Physics at a Fixed-Target Experiment using the LHC Beams," Few Body Syst. 53 (2012) 11-25, arXiv:1204.5793 [hep-ph].
Fixed-target physics at LHCb. E Maurice, LHCb Collaboration, in 5th Large Hadron Collider Physics Conference (LHCP 2017), Shanghai, China, May 15-20, 2017. arXiv:1708.05184 [hep-ex]. http://inspirehep.net/record/1616496/files/arXiv:1708.05184.pdf
Physics programme in fixed-target mode with the LHCb experiment at CERN. L Anderlini, LHCb CollaborationPoS. 2017152LHCb Collaboration, L. Anderlini, "Physics programme in fixed-target mode with the LHCb experiment at CERN," PoS EPS-HEP2017 (2017) 152.
The Deuteron Form-Factor and the Short Distance Behavior of the Nuclear Force. S J Brodsky, B T Chertok, Phys. Rev. Lett. 37 (1976) 269
A Quark-parton Model of Nuclear Production. G Berlad, A Dar, G Eilam, 10.1103/PhysRevD.22.1547Phys. Rev. 221547G. Berlad, A. Dar, and G. Eilam, "A Quark-parton Model of Nuclear Production," Phys. Rev. D22 (1980) 1547.
Fermi Motion Effects in Deep Inelastic Lepton Scattering from Nuclear Targets. A Bodek, J L Ritchie, Phys. Rev. 231070A. Bodek and J. L. Ritchie, "Fermi Motion Effects in Deep Inelastic Lepton Scattering from Nuclear Targets," Phys. Rev. D23 (1981) 1070.
Further Studies of Fermi Motion Effects in Lepton Scattering from Nuclear Targets. A Bodek, J L Ritchie, Phys. Rev. 241400A. Bodek and J. L. Ritchie, "Further Studies of Fermi Motion Effects in Lepton Scattering from Nuclear Targets," Phys. Rev. D24 (1981) 1400.
The ratio of the nucleon structure functions F2 n for iron and deuterium. J J Aubert, European Muon Collaboration10.1016/0370-2693(83)90437-9Phys. Lett. 123European Muon Collaboration, J. J. Aubert et al., "The ratio of the nucleon structure functions F2 n for iron and deuterium," Phys. Lett. 123B (1983) 275-278.
Hard Nuclear Processes and Microscopic Nuclear Structure. L L Frankfurt, M I Strikman, 10.1016/0370-1573(88)90179-2Phys. Rept. 160L. L. Frankfurt and M. I. Strikman, "Hard Nuclear Processes and Microscopic Nuclear Structure," Phys. Rept. 160 (1988) 235-427.
Precise measurements of the proton and deuteron structure functions from a global analysis of the SLAC deep inelastic electron scattering cross-sections. L W Whitlow, E M Riordan, S Dasu, S Rock, A Bodek, 10.1016/0370-2693(92)90672-QPhys. Lett. 282L. W. Whitlow, E. M. Riordan, S. Dasu, S. Rock, and A. Bodek, "Precise measurements of the proton and deuteron structure functions from a global analysis of the SLAC deep inelastic electron scattering cross-sections," Phys. Lett. B282 (1992) 475-482.
Modification of the gluon structure function and J / psi leptoproduction by nuclear Fermi motion. H Merabet, J F Mathiot, J Dolejsi, H J Pirner, 10.1016/0370-2693(93)90208-YPhys. Lett. 307H. Merabet, J. F. Mathiot, J. Dolejsi, and H. J. Pirner, "Modification of the gluon structure function and J / psi leptoproduction by nuclear Fermi motion," Phys. Lett. B307 (1993) 177-181.
Shadowing, binding and off-shell effects in nuclear deep inelastic scattering. S A Kulagin, G Piller, W Weise, arXiv:nucl-th/9402015Phys. Rev. 50nucl-thS. A. Kulagin, G. Piller, and W. Weise, "Shadowing, binding and off-shell effects in nuclear deep inelastic scattering," Phys. Rev. C50 (1994) 1154-1169, arXiv:nucl-th/9402015 [nucl-th].
Probing the origin of the EMC effect via tagged structure functions of the deuteron. W Melnitchouk, M Sargsian, M I Strikman, 10.1007/s002180050372arXiv:nucl-th/9609048Z. Phys. 359nucl-thW. Melnitchouk, M. Sargsian, and M. I. Strikman, "Probing the origin of the EMC effect via tagged structure functions of the deuteron," Z. Phys. A359 (1997) 99-109, arXiv:nucl-th/9609048 [nucl-th].
Determination of nuclear parton distributions. M Hirai, S Kumano, M Miyama, 10.1103/PhysRevD.64.034003arXiv:hep-ph/0103208Phys. Rev. 6434003hep-phM. Hirai, S. Kumano, and M. Miyama, "Determination of nuclear parton distributions," Phys. Rev. D64 (2001) 034003, arXiv:hep-ph/0103208 [hep-ph].
Hadrons in the nuclear medium. M M Sargsian, 10.1088/0954-3899/29/3/201arXiv:nucl-th/0210025J. Phys. 293nucl-thM. M. Sargsian et al., "Hadrons in the nuclear medium," J. Phys. G29 no. 3, (2003) R1-R45, arXiv:nucl-th/0210025 [nucl-th].
Universality of the EMC effect. J.-W Chen, W Detmold, 10.1016/j.physletb.2005.08.041arXiv:hep-ph/0412119Phys. Lett. 625hep-phJ.-W. Chen and W. Detmold, "Universality of the EMC effect," Phys. Lett. B625 (2005) 165-170, arXiv:hep-ph/0412119 [hep-ph].
Low Q scaling, duality, and the EMC effect. J Arrington, R Ent, C E Keppel, J Mammei, I Niculescu, arXiv:nucl-ex/0307012Phys. Rev. 7335205nucl-exJ. Arrington, R. Ent, C. E. Keppel, J. Mammei, and I. Niculescu, "Low Q scaling, duality, and the EMC effect," Phys. Rev. C73 (2006) 035205, arXiv:nucl-ex/0307012 [nucl-ex].
Global study of nuclear structure functions. S A Kulagin, R Petti, 10.1016/j.nuclphysa.2005.10.011arXiv:hep-ph/0412425Nucl. Phys. 765hep-phS. A. Kulagin and R. Petti, "Global study of nuclear structure functions," Nucl. Phys. A765 (2006) 126-187, arXiv:hep-ph/0412425 [hep-ph].
Determination of nuclear parton distribution functions and their uncertainties in next-to-leading order. M Hirai, S Kumano, T H Nagai, 10.1103/PhysRevC.76.065207arXiv:0709.3038Phys. Rev. 7665207hep-phM. Hirai, S. Kumano, and T. H. Nagai, "Determination of nuclear parton distribution functions and their uncertainties in next-to-leading order," Phys. Rev. C76 (2007) 065207, arXiv:0709.3038 [hep-ph].
New parton distributions from large-x and low-Q 2 data. A Accardi, M E Christy, C E Keppel, P Monaghan, W Melnitchouk, J G Morfin, J F Owens, arXiv:0911.2254Phys. Rev. 8134016hep-phA. Accardi, M. E. Christy, C. E. Keppel, P. Monaghan, W. Melnitchouk, J. G. Morfin, and J. F. Owens, "New parton distributions from large-x and low-Q 2 data," Phys. Rev. D81 (2010) 034016, arXiv:0911.2254 [hep-ph].
Short Range Correlations and the EMC Effect. L B Weinstein, E Piasetzky, D W Higinbotham, J Gomez, O Hen, R Shneor, 10.1103/PhysRevLett.106.052301arXiv:1009.5666Phys. Rev. Lett. 10652301hep-phL. B. Weinstein, E. Piasetzky, D. W. Higinbotham, J. Gomez, O. Hen, and R. Shneor, "Short Range Correlations and the EMC Effect," Phys. Rev. Lett. 106 (2011) 052301, arXiv:1009.5666 [hep-ph].
Uncertainties in determining parton distributions at large x. A Accardi, W Melnitchouk, J F Owens, M E Christy, C E Keppel, L Zhu, J G Morfin, 10.1103/PhysRevD.84.014008arXiv:1102.3686Phys. Rev. 8414008hep-phA. Accardi, W. Melnitchouk, J. F. Owens, M. E. Christy, C. E. Keppel, L. Zhu, and J. G. Morfin, "Uncertainties in determining parton distributions at large x," Phys. Rev. D84 (2011) 014008, arXiv:1102.3686 [hep-ph].
Comparative study of nuclear effects in polarized electron scattering from 3He. J J Ethier, W Melnitchouk, arXiv:1308.3723Phys. Rev. C88. 554001nucl-thJ. J. Ethier and W. Melnitchouk, "Comparative study of nuclear effects in polarized electron scattering from 3He," Phys. Rev. C88 no. 5, (2013) 054001, arXiv:1308.3723 [nucl-th].
Nuclear effects in the proton-deuteron Drell-Yan process. P J Ehlers, A Accardi, L T Brady, W Melnitchouk, arXiv:1405.2039Phys. Rev. D90. 114010hep-phP. J. Ehlers, A. Accardi, L. T. Brady, and W. Melnitchouk, "Nuclear effects in the proton-deuteron Drell-Yan process," Phys. Rev. D90 no. 1, (2014) 014010, arXiv:1405.2039 [hep-ph].
Nucleon-Nucleon Correlations, Short-lived Excitations, and the Quarks Within. O Hen, G A Miller, E Piasetzky, L B Weinstein, 10.1103/RevModPhys.89.045002arXiv:1611.09748Rev. Mod. Phys. 89445002nucl-exO. Hen, G. A. Miller, E. Piasetzky, and L. B. Weinstein, "Nucleon-Nucleon Correlations, Short-lived Excitations, and the Quarks Within," Rev. Mod. Phys. 89 no. 4, (2017) 045002, arXiv:1611.09748 [nucl-ex].
Short Range Correlations and the EMC Effect in Effective Field Theory. J.-W Chen, W Detmold, J E Lynn, A Schwenk, 10.1103/PhysRevLett.119.262502arXiv:1607.03065Phys. Rev. Lett. 11926262502hep-phJ.-W. Chen, W. Detmold, J. E. Lynn, and A. Schwenk, "Short Range Correlations and the EMC Effect in Effective Field Theory," Phys. Rev. Lett. 119 no. 26, (2017) 262502, arXiv:1607.03065 [hep-ph].
Nuclear Effects in the Deuteron and Constraints on the d/u Ratio. S I Alekhin, S A Kulagin, R Petti, 10.1103/PhysRevD.96.054005 arXiv:1704.00204 Phys. Rev. D96 no. 5, (2017) 054005 [nucl-th]
First lattice QCD study of the gluonic structure of light nuclei. F Winter, W Detmold, A S Gambhir, K Orginos, M J Savage, P E Shanahan, M L Wagman, arXiv:1709.00395Phys. Rev. D96. 994512hep-latF. Winter, W. Detmold, A. S. Gambhir, K. Orginos, M. J. Savage, P. E. Shanahan, and M. L. Wagman, "First lattice QCD study of the gluonic structure of light nuclei," Phys. Rev. D96 no. 9, (2017) 094512, arXiv:1709.00395 [hep-lat].
Neutron / proton structure function ratio at large x. W Melnitchouk, A W Thomas, 10.1016/0370-2693(96)00292-4arXiv:nucl-th/9602038Phys. Lett. 377nucl-thW. Melnitchouk and A. W. Thomas, "Neutron / proton structure function ratio at large x," Phys. Lett. B377 (1996) 11-17, arXiv:nucl-th/9602038 [nucl-th].
Neutron structure function and A = 3 mirror nuclei. I R Afnan, F R P Bissey, J Gomez, A T Katramatou, W Melnitchouk, G G Petratos, A W Thomas, arXiv:nucl-th/0006003Phys. Lett. 493nucl-thI. R. Afnan, F. R. P. Bissey, J. Gomez, A. T. Katramatou, W. Melnitchouk, G. G. Petratos, and A. W. Thomas, "Neutron structure function and A = 3 mirror nuclei," Phys. Lett. B493 (2000) 36-42, arXiv:nucl-th/0006003 [nucl-th].
Deep inelastic scattering from A = 3 nuclei and the neutron structure function. I R Afnan, F R P Bissey, J Gomez, A T Katramatou, S Liuti, W Melnitchouk, G G Petratos, A W Thomas, 10.1103/PhysRevC.68.035201arXiv:nucl-th/0306054Phys. Rev. 6835201nucl-thI. R. Afnan, F. R. P. Bissey, J. Gomez, A. T. Katramatou, S. Liuti, W. Melnitchouk, G. G. Petratos, and A. W. Thomas, "Deep inelastic scattering from A = 3 nuclei and the neutron structure function," Phys. Rev. C68 (2003) 035201, arXiv:nucl-th/0306054 [nucl-th].
Scaling Laws at Large Transverse Momentum. S J Brodsky, G R Farrar, 10.1103/PhysRevLett.31.1153Phys. Rev. Lett. 31S. J. Brodsky and G. R. Farrar, "Scaling Laws at Large Transverse Momentum," Phys. Rev. Lett. 31 (1973) 1153-1156.
Quark Counting Rules: Old and New Approaches. A Radyushkin, 10.1142/S0217751X10048792arXiv:0907.4585Int. J. Mod. Phys. 25hep-phA. Radyushkin, "Quark Counting Rules: Old and New Approaches," Int. J. Mod. Phys. A25 (2010) 502-512, arXiv:0907.4585 [hep-ph].
Gaussian expansion method for few-body systems. E Hiyama, Y Kino, M Kamimura, Prog. Part. Nucl. Phys. 51E. Hiyama, Y. Kino, and M. Kamimura, "Gaussian expansion method for few-body systems," Prog. Part. Nucl. Phys. 51 (2003) 223-307.
Quantum chromodynamics and other field theories on the light cone. S J Brodsky, H.-C Pauli, S S Pinsky, arXiv:hep-ph/9705477Phys. Rept. 301hep-phS. J. Brodsky, H.-C. Pauli, and S. S. Pinsky, "Quantum chromodynamics and other field theories on the light cone," Phys. Rept. 301 (1998) 299-486, arXiv:hep-ph/9705477 [hep-ph].
Exclusive Processes in Perturbative Quantum Chromodynamics. G P Lepage, S J Brodsky, 10.1103/PhysRevD.22.2157Phys. Rev. 222157G. P. Lepage and S. J. Brodsky, "Exclusive Processes in Perturbative Quantum Chromodynamics," Phys. Rev. D22 (1980) 2157.
The Electromagnetic Interactions of Composite Systems. S J Brodsky, J R Primack, 10.1016/0003-4916(69)90264-4Annals Phys. 52S. J. Brodsky and J. R. Primack, "The Electromagnetic Interactions of Composite Systems," Annals Phys. 52 (1969) 315-365.
On the Structure of Wave Functions of Mesons as Bound States of Relativistic Quarks. M V Terentev, Sov. J. Nucl. Phys. 24207Yad. Fiz.M. V. Terentev, "On the Structure of Wave Functions of Mesons as Bound States of Relativistic Quarks," Sov. J. Nucl. Phys. 24 (1976) 106. [Yad. Fiz.24,207(1976)].
J ′ /ψ ′ to J/ψ ratio in diffractive photoproduction. P Hoyer, S Peigne, 10.1103/PhysRevD.61.031501arXiv:hep-ph/9909519Phys. Rev. 6131501hep-phP. Hoyer and S. Peigne, "J ′ /ψ ′ to J/ψ ratio in diffractive photoproduction," Phys. Rev. D61 (2000) 031501, arXiv:hep-ph/9909519 [hep-ph].
Photoproduction of charmonia and total charmonium proton cross-sections. J Hufner, Yu P Ivanov, B Z Kopeliovich, A V Tarasov, 10.1103/PhysRevD.62.094022arXiv:hep-ph/0007111Phys. Rev. 6294022hep-phJ. Hufner, Yu. P. Ivanov, B. Z. Kopeliovich, and A. V. Tarasov, "Photoproduction of charmonia and total charmonium proton cross-sections," Phys. Rev. D62 (2000) 094022, arXiv:hep-ph/0007111 [hep-ph].
Coherence phenomena in charmonium production off nuclei at the energies of RHIC and LHC. B Kopeliovich, A Tarasov, J Hufner, 10.1016/S0375-9474(01)01220-9arXiv:hep-ph/0104256Nucl. Phys. 696hep-phB. Kopeliovich, A. Tarasov, and J. Hufner, "Coherence phenomena in charmonium production off nuclei at the energies of RHIC and LHC," Nucl. Phys. A696 (2001) 669-714, arXiv:hep-ph/0104256 [hep-ph].
A Critical Appraisal and Evaluation of Modern PDFs. A Accardi, 10.1140/epjc/s10052-016-4285-4arXiv:1603.08906Eur. Phys. J. C76. 8471hep-phA. Accardi et al., "A Critical Appraisal and Evaluation of Modern PDFs," Eur. Phys. J. C76 no. 8, (2016) 471, arXiv:1603.08906 [hep-ph].
New parton distribution functions from a global analysis of quantum chromodynamics. S Dulat, T.-J Hou, J Gao, M Guzzi, J Huston, P Nadolsky, J Pumplin, C Schmidt, D Stump, C P Yuan, 10.1103/PhysRevD.93.033006arXiv:1506.07443Phys. Rev. 93333006hep-phS. Dulat, T.-J. Hou, J. Gao, M. Guzzi, J. Huston, P. Nadolsky, J. Pumplin, C. Schmidt, D. Stump, and C. P. Yuan, "New parton distribution functions from a global analysis of quantum chromodynamics," Phys. Rev. D93 no. 3, (2016) 033006, arXiv:1506.07443 [hep-ph].
Parton distributions from high-precision collider data. R D Ball, NNPDF Collaboration10.1140/epjc/s10052-017-5199-5arXiv:1706.00428Eur. Phys. J. C77. 10663hep-phNNPDF Collaboration, R. D. Ball et al., "Parton distributions from high-precision collider data," Eur. Phys. J. C77 no. 10, (2017) 663, arXiv:1706.00428 [hep-ph].
Towards parton distribution functions with small-x resummation: HELL 2.0. M Bonvini, S Marzani, C Muselli, arXiv:1708.07510JHEP. 12117hep-phM. Bonvini, S. Marzani, and C. Muselli, "Towards parton distribution functions with small-x resummation: HELL 2.0," JHEP 12 (2017) 117, arXiv:1708.07510 [hep-ph].
The Structure of the Proton in the LHC Precision Era. J Gao, L Harland-Lang, J Rojo, arXiv:1709.04922hep-phJ. Gao, L. Harland-Lang, and J. Rojo, "The Structure of the Proton in the LHC Precision Era," arXiv:1709.04922 [hep-ph].
Parton distributions and lattice QCD calculations: a community white paper. H.-W Lin, arXiv:1711.07916Prog. Part. Nucl. Phys. 100hep-phH.-W. Lin et al., "Parton distributions and lattice QCD calculations: a community white paper," Prog. Part. Nucl. Phys. 100 (2018) 107-160, arXiv:1711.07916 [hep-ph].
Reconstruction of light-cone parton distribution functions from lattice QCD simulations at the physical point. C Alexandrou, K Cichy, M Constantinou, K Jansen, A Scapellato, F Steffens, arXiv:1803.02685hep-latC. Alexandrou, K. Cichy, M. Constantinou, K. Jansen, A. Scapellato, and F. Steffens, "Reconstruction of light-cone parton distribution functions from lattice QCD simulations at the physical point," arXiv:1803.02685 [hep-lat].
Nonperturbatively-renormalized glue momentum fraction at physical pion mass from Lattice QCD. Y.-B Yang, M Gong, J Liang, H.-W Lin, K.-F Liu, D Pefkou, P Shanahan, arXiv:1805.00531hep-latY.-B. Yang, M. Gong, J. Liang, H.-W. Lin, K.-F. Liu, D. Pefkou, and P. Shanahan, "Nonperturbatively-renormalized glue momentum fraction at physical pion mass from Lattice QCD," arXiv:1805.00531 [hep-lat].
Dynamical parton distributions revisited. M Glueck, E Reya, A Vogt, 10.1007/s100529800978,10.1007/s100520050289arXiv:hep-ph/9806404Eur. Phys. J. 5hep-phM. Glueck, E. Reya, and A. Vogt, "Dynamical parton distributions revisited," Eur. Phys. J. C5 (1998) 461-470, arXiv:hep-ph/9806404 [hep-ph].
On distribution functions for partons in nuclei. A S Rinat, M F Taragin, 10.1103/PhysRevC.72.065209arXiv:nucl-th/0501006Phys. Rev. 7265209nucl-thA. S. Rinat and M. F. Taragin, "On distribution functions for partons in nuclei," Phys. Rev. C72 (2005) 065209, arXiv:nucl-th/0501006 [nucl-th].
Relativistic deuteron wave function in the light front dynamics. J Carbonell, V A Karmanov, Nucl. Phys. 581J. Carbonell and V. A. Karmanov, "Relativistic deuteron wave function in the light front dynamics," Nucl. Phys. A581 (1995) 625-653.
Explicitly covariant light front dynamics and relativistic few body systems. J Carbonell, B Desplanques, V A Karmanov, J F Mathiot, 10.1016/S0370-1573(97)00090-2arXiv:nucl-th/9804029Phys. Rept. 300nucl-thJ. Carbonell, B. Desplanques, V. A. Karmanov, and J. F. Mathiot, "Explicitly covariant light front dynamics and relativistic few body systems," Phys. Rept. 300 (1998) 215-347, arXiv:nucl-th/9804029 [nucl-th].
Constraints on large-x parton distributions from new weak boson production and deep-inelastic scattering data. A Accardi, L T Brady, W Melnitchouk, J F Owens, N Sato, arXiv:1602.03154Phys. Rev. D93. 11114017hep-phA. Accardi, L. T. Brady, W. Melnitchouk, J. F. Owens, and N. Sato, "Constraints on large-x parton distributions from new weak boson production and deep-inelastic scattering data," Phys. Rev. D93 no. 11, (2016) 114017, arXiv:1602.03154 [hep-ph].
Intrinsic Chevrolets at the SSC. S J Brodsky, J C Collins, S D Ellis, J F Gunion, A H Mueller, in DESIGN AND UTILIZATION OF THE SUPERCONDUCTING SUPER COLLIDER. PROCEEDINGS, 1984 SUMMER STUDY, SNOWMASS, USA, JUNE 23 - JULY 13, 1984, p. 227. 1984. http://www-public.slac.stanford.edu/sciDoc/docMeta.aspx?slacPubNumber=SLAC-PUB-15471
The Intrinsic charm contribution to the proton spin. M V Polyakov, A Schafer, O V Teryaev, 10.1103/PhysRevD.60.051502arXiv:hep-ph/9812393Phys. Rev. 6051502hep-phM. V. Polyakov, A. Schafer, and O. V. Teryaev, "The Intrinsic charm contribution to the proton spin," Phys. Rev. D60 (1999) 051502, arXiv:hep-ph/9812393 [hep-ph].
Heavy quark mass expansion and intrinsic charm in light hadrons. M Franz, M V Polyakov, K Goeke, 10.1103/PhysRevD.62.074024arXiv:hep-ph/0002240Phys. Rev. 6274024hep-phM. Franz, M. V. Polyakov, and K. Goeke, "Heavy quark mass expansion and intrinsic charm in light hadrons," Phys. Rev. D62 (2000) 074024, arXiv:hep-ph/0002240 [hep-ph].
The Intrinsic Charm of the Proton. S J Brodsky, P Hoyer, C Peterson, N Sakai, 10.1016/0370-2693(80)90364-0Phys. Lett. 93S. J. Brodsky, P. Hoyer, C. Peterson, and N. Sakai, "The Intrinsic Charm of the Proton," Phys. Lett. 93B (1980) 451-455.
Intrinsic Heavy Quark States. S J Brodsky, C Peterson, N Sakai, 10.1103/PhysRevD.23.2745Phys. Rev. 232745S. J. Brodsky, C. Peterson, and N. Sakai, "Intrinsic Heavy Quark States," Phys. Rev. D23 (1981) 2745.
The Intrinsic gluon component of the nucleon. P Hoyer, D P Roy, 10.1016/S0370-2693(97)00893-9arXiv:hep-ph/9705273Phys. Lett. 410hep-phP. Hoyer and D. P. Roy, "The Intrinsic gluon component of the nucleon," Phys. Lett. B410 (1997) 63-66, arXiv:hep-ph/9705273 [hep-ph].
Unified Description of Inclusive and Exclusive Reactions at All Momentum Transfers. R Blankenbecler, S J Brodsky, 10.1103/PhysRevD.10.2973Phys. Rev. 102973R. Blankenbecler and S. J. Brodsky, "Unified Description of Inclusive and Exclusive Reactions at All Momentum Transfers," Phys. Rev. D10 (1974) 2973.
The Asymptotic Form-Factors of Hadrons and Nuclei and the Continuity of Particle and Nuclear Dynamics. S J Brodsky, B T Chertok, 10.1103/PhysRevD.14.3003Phys. Rev. 14S. J. Brodsky and B. T. Chertok, "The Asymptotic Form-Factors of Hadrons and Nuclei and the Continuity of Particle and Nuclear Dynamics," Phys. Rev. D14 (1976) 3003-3020.
Perturbative QCD constraints on the shape of polarized quark and gluon distributions. S J Brodsky, M Burkardt, I Schmidt, 10.1016/0550-3213(95)00009-HarXiv:hep-ph/9401328Nucl. Phys. 441hep-phS. J. Brodsky, M. Burkardt, and I. Schmidt, "Perturbative QCD constraints on the shape of polarized quark and gluon distributions," Nucl. Phys. B441 (1995) 197-214, arXiv:hep-ph/9401328 [hep-ph].
The bias of the submillimetre galaxy population: SMGs are poor tracers of the most massive structures in the z ∼ 2 Universe
2015
Tim B Miller
Department of Physics and Atmospheric Science
Dalhousie University
6310 Coburg Road, B3H 4R2, Halifax, NS, Canada
California Institute of Technology
1200 E. California Boulevard, 91125, Pasadena, CA, USA
Christopher C Hayward
Harvard-Smithsonian Center for Astrophysics
60 Garden Street, 02138, Cambridge, MA, USA
Scott C Chapman
Department of Physics and Atmospheric Science
Dalhousie University
6310 Coburg Road, B3H 4R2, Halifax, NS, Canada
California Institute of Technology
1200 E. California Boulevard, 91125, Pasadena, CA, USA
Peter S Behroozi
Space Telescope Science Institute
3700 San Martin Drive, 21218, Baltimore, MD, USA
The bias of the submillimetre galaxy population: SMGs are poor tracers of the most massive structures in the z ∼ 2 Universe
Mon. Not. R. Astron. Soc
000, 2015. Printed 20 January 2015. Submitted to MNRAS Letters (MN LaTeX style file v2.2). Key words: cosmology: theory - cosmology: large-scale structure of Universe - galaxies: clusters: general - galaxies: high-redshift - methods: numerical - submillimeter: galaxies
It is often claimed that overdensities of (or even individual bright) submillimetre-selected galaxies (SMGs) trace the assembly of the most-massive dark matter structures in the Universe. We test this claim by performing a counts-in-cells analysis of mock SMG catalogues derived from the Bolshoi cosmological simulation to investigate how well SMG associations trace the underlying dark matter structure. We find that SMGs exhibit a relatively complex bias: some regions of high SMG overdensity are underdense in terms of dark matter mass, and some regions of high dark matter overdensity contain no SMGs. Because of their rarity, Poisson noise causes scatter in the SMG overdensity at fixed dark matter overdensity. Consequently, rich associations of less-luminous, more-abundant galaxies (i.e. Lyman-break galaxy analogues) trace the highest dark matter overdensities much better than SMGs. Even on average, SMG associations are relatively poor tracers of the most significant dark matter overdensities because of 'downsizing': at z 2.5, the most-massive galaxies that reside in the highest dark matter overdensities have already had their star formation quenched and are thus no longer SMGs. Furthermore, because of Poisson noise and downsizing, some of the highest overdensities are not associated with any SMGs. Conversely, some bright SMGs are in underdense regions.
INTRODUCTION
Submillimetre-selected galaxies (SMGs; see Casey et al. 2014 for a recent review), with typical infrared (IR) luminosities of L_IR ≳ 5 × 10^12 L⊙, represent the rarest and most extreme examples of star-forming galaxies. The L_IR of an SMG implies an immense star formation rate, typically SFR ∼ 500-1000 M⊙ yr^−1, assuming that there is not a significant contribution to L_IR from deeply obscured AGN. SMGs allow us to probe the mechanisms behind the most intense star formation events in the Universe and can elucidate the highest SFRs sustainable in a galaxy. Because of their extreme nature, SMGs provide laboratories to test the limits of hydrodynamical simulations of galaxies (e.g. Narayanan et al. 2010; Hayward et al. 2011; Hayward 2013).
Massive starburst galaxies appear to grow in the most massive halos (Hickox et al. 2012), thus making them potential tracers for the highest-redshift proto-clusters (e.g. Capak et al. 2011; Walter et al. 2012). Thus, observations of bright SMGs should probe their environment and trace significant overdensities that can be interpreted in the context of large-scale structure simulations. Therefore, SMGs set critical constraints on cosmological models.
Interest in SMG associations has grown in recent years as increasing numbers of SMG associations have been detected (e.g. Blain et al. 2004; Geach et al. 2005; Daddi et al. 2009; Chapman et al. 2009; Dannerbauer et al. 2014; MacKenzie et al. 2014; Smail et al. 2014). Furthermore, Clements et al. (2014) have demonstrated that some Planck sources trace overdensities of dusty star-forming galaxies, and they suggested that such observations can be used to investigate the epoch of galaxy cluster formation. However, this claim relies on the assumption that overdensities of dusty star-forming galaxies correspond to galaxy clusters in the process of formation.
There is some observational evidence that calls this claim into question: in their study of the GOODS-N field, Chapman et al. (2009) found an association of 8 SMGs at z ≈ 1.99. The associated structure was only a typical overdense region, as indicated by the well-sampled optical spectroscopy in this region, that would not form a virialized cluster by z = 0. Moreover, Blain et al. (2004) found that the clustering length of SMGs is consistent with that of evolved 'extremely red objects' (EROs) at z ∼ 1 and z = 0 clusters, which would suggest that the descendants of SMGs would tend to be found in rich cluster environments; however, this interpretation implies a comoving space density of clusters that is at least an order of magnitude greater than that observed. These results suggest that perhaps associations of SMGs trace particularly active phases in relatively modest-mass overdensities rather than the highest overdensities and thus have a relatively complex clustering bias.
To test this possibility, we have performed a counts-in-cells analysis on the Hayward et al. (2013a, hereafter H13) simulated SMG catalogues to investigate the relationships of SMGs and more modestly star-forming galaxies to the underlying dark matter structure. We first investigate the clustering biases of SMGs and Lyman-break galaxy (LBG) analogues. We then study the properties of individual associations of SMGs and LBG analogues.
METHODS
To analyze the bias in the SMG population, we use the mock SMG catalogues of H13, which were generated by assigning galaxy properties to dark matter haloes from a cosmological collisionless dark matter simulation using subhalo abundance matching and then assigning submm flux densities using a fitting function derived from the results of performing radiative transfer on idealized hydrodynamical simulations. We will summarize the H13 methodology here, and we refer the reader to H13 for full details.
Using halo catalogues from the Bolshoi simulation (Klypin, Trujillo-Gomez & Primack 2011; Behroozi et al. 2013b,c), we constructed mock lightcones by starting at eight random locations within the simulation and selecting haloes along a randomly oriented sightline with an 84' x 84' (1.96 deg^2) field of view from z = 0.5 to z = 8. We calculated cosmological redshifts, including the effects of halo peculiar velocities. We then assigned stellar masses (M⋆) and SFRs using the redshift-dependent stellar mass-halo mass and SFR-halo mass relations of Behroozi, Wechsler & Conroy (2013a), which include scatter at fixed halo mass and redshift. We included a simple model for satellite quenching: satellite SFRs were reduced by a factor equal to their current subhalo mass divided by the peak mass in their subhalo's mass accretion history. We assigned dust masses (M_d) to the haloes using the empirically based method of Hayward et al. (2013b). Finally, we assigned 850-µm flux densities (S850) using the following fitting function, which was derived based on the results of performing dust radiative transfer on hydrodynamical simulations of idealized disc galaxies and mergers (Hayward et al. 2011, 2013b):
S850 = 0.81 mJy × (SFR / 100 M⊙ yr^−1)^0.43 × (M_d / 10^8 M⊙)^0.54,    (1)
where we incorporated the scatter in the relation of 0.13 dex (Hayward et al. 2011) when assigning S850. Note that because S850 scales sublinearly with both SFR and M_d, the predicted S850 values are relatively insensitive to the model details. Furthermore, the S850-M⋆ relation predicted in this manner agrees well with that observed (Davies et al. 2013).
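As a concrete illustration of how equation (1) is applied to a mock catalogue, the following Python sketch (our own minimal reimplementation, not code released with the paper; the function and argument names are illustrative) assigns S850 to galaxies from their SFR and dust mass, including the 0.13 dex lognormal scatter quoted above.

```python
import numpy as np

def s850_mjy(sfr_msun_yr, m_dust_msun, scatter_dex=0.13, rng=None):
    """Assign 850-um flux densities [mJy] via the fitting function (1):
    S850 = 0.81 mJy (SFR/100)^0.43 (M_d/1e8)^0.54, with 0.13 dex of
    lognormal scatter (Hayward et al. 2011)."""
    rng = np.random.default_rng() if rng is None else rng
    base = 0.81 * (np.asarray(sfr_msun_yr) / 100.0) ** 0.43 \
               * (np.asarray(m_dust_msun) / 1e8) ** 0.54
    return base * 10.0 ** (scatter_dex * rng.standard_normal(np.shape(base)))

# Example: SFR = 500 Msun/yr and M_d = 1e9 Msun give S850 of a few mJy
print(s850_mjy(500.0, 1e9))
```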
Throughout this work, we refer to mock galaxies with S850 > 3 mJy as SMGs (the median SFR for sources with S850 ∼ 3 mJy is ∼ 140 M⊙ yr^−1) and those with 0.1 < S850 < 1 mJy as LBGs (this range corresponds to median SFR values of ∼ 10-50 M⊙ yr^−1). We study the bias of SMGs and LBGs and identify SMG and LBG associations (or redshift spikes; e.g. Chapman et al. 2009) using a simple counts-in-cells analysis (e.g. Adelberger et al. 1998). Specifically, we divide each of the 8 mock catalogues into cells with angular dimensions 10 arcmin × 10 arcmin and depth dz = 0.05; the results are similar if we use cells with side lengths equal to twice these values. We use a subset of 10,000 of these cells for calculating the clustering bias and for making comparisons to the properties of SMG and LBG associations; we refer to these cells as 'random cells'. To identify associations, we start with the same cells. However, to ensure that we do not divide potential associations by using a fixed grid, we shift the cells by 1-arcmin intervals 10 times and define an SMG (LBG) association as the galaxies contained in the cell that contains the maximum number of SMGs (LBGs). We ensure that we do not count a single association multiple times. We calculate total dark matter masses for each cell by summing the dark matter masses of all haloes of mass > 10^10 M⊙ (because the Bolshoi simulation is incomplete below this halo mass) contained in the cell.

Figure 1. Overdensity δ_galaxy of SMGs (S850 > 3 mJy; red circles), LBGs (0.1 mJy < S850 < 1 mJy; black diamonds) and a random subset of LBGs selected to have number density equal to that of the SMGs (blue diamonds) vs. overdensity of dark matter, δ_mass, for the random cells. For the full LBG population, there is a tight correlation between δ_galaxy and δ_mass, which indicates that the clustering of the LBGs traces the clustering of the dark matter well. In contrast, the SMGs and random subset of LBGs exhibit a large scatter in δ_galaxy at a given δ_mass. This result indicates that because of the rarity of SMGs, Poisson noise causes SMG overdensities to be poor tracers of dark matter overdensities.
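A hedged sketch of the counts-in-cells association search described above (again our own illustration, not the authors' code; it assumes angular coordinates in arcminutes and, for brevity, shifts both angular axes together rather than independently):

```python
import numpy as np

def max_cell_count(ra_arcmin, dec_arcmin, z, cell_xy=10.0, cell_dz=0.05,
                   n_shifts=10):
    """Bin galaxies into 10' x 10' x dz = 0.05 cells, shifting the grid by
    1-arcmin intervals, and return the largest count found in any cell."""
    best = 0
    for s in range(n_shifts):
        ix = np.floor((np.asarray(ra_arcmin) - s) / cell_xy).astype(int)
        iy = np.floor((np.asarray(dec_arcmin) - s) / cell_xy).astype(int)
        iz = np.floor(np.asarray(z) / cell_dz).astype(int)
        # count galaxies per (ix, iy, iz) cell of this shifted grid
        _, counts = np.unique(np.stack([ix, iy, iz], axis=1),
                              axis=0, return_counts=True)
        best = max(best, int(counts.max()))
    return best
```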
As discussed in detail in H13, the H13 model does not include the effect of starbursts (i.e. the extended tail to high SFR at a given stellar mass and redshift). Because one would expect that interaction-induced starbursts would most affect the SFRs of SMGs in highly overdense regions and thus potentially alter our results, we have extended the H13 model by including a model for interaction-induced starbursts. For each galaxy, we check whether it has a neighboring galaxy with stellar mass within a factor of 3 of its own (such that the pair would constitute a 'major' merger) that is located within a physical distance of d_weak. If so, the SFRs of both galaxies are boosted by a factor of b_weak. If the separation is less than d_strong < d_weak, we instead boost the SFRs by a larger factor, b_strong > b_weak. We experimented with different reasonable parameter values, as judged based on the results of idealized hydrodynamical simulations of mergers (e.g. Cox et al. 2008; Torrey et al. 2012; Hayward et al. 2014), and found that the results were qualitatively unaffected even for the extreme scenario of d_weak = 15 kpc, b_weak = 10, d_strong = 5 kpc, and b_strong = 100. However, it is important to note that the catalogues are incomplete for mergers with small separations (Behroozi et al. 2013b), which would result in an underestimate of the number of interacting galaxies. Nevertheless, this incompleteness likely does not affect our results because although interactions could boost the submm fluxes of some galaxies and increase the clustering signal on ∼10 kpc scales, the clustering on larger scales should be unaffected. In all figures, we show the results for the original H13 model, but the corresponding plots for the boosted models are similar. Note that unlike H13, we have not incorporated the effects of blending of multiple galaxies into a single submm source in this work, although blending significantly affects the SMG population (Hayward et al. 2011, 2012, 2013b; H13). The reason is that the sizes of the associations are much greater than the beam sizes of single-dish submm telescopes (see below). Thus, although the detailed results could be affected by blending, our conclusions would be unchanged if blending were incorporated. Furthermore, we wish to analyze how well individual submm-bright galaxies, which would be resolved by e.g. the Atacama Large Millimeter Array, rather than blended submm sources (which depend on the beam size of the instrument used to detect them and are thus a less general population than resolved sources), trace dark matter structures.
RESULTS
Clustering biases of mock SMGs and LBGs
For each cell, we calculate the number overdensity of SMGs (δ_SMG) and LBGs (δ_LBG) using the following equation:

δ_galaxy = (N_galaxy − ⟨N_galaxy⟩) / ⟨N_galaxy⟩,    (2)
where N_galaxy is the number of galaxies in a cell and ⟨N_galaxy⟩ is the mean number of galaxies per cell. We also calculate the dark matter mass overdensity of each cell,

δ_mass = (M_DM − ⟨M_DM⟩) / ⟨M_DM⟩,    (3)

where M_DM is the mass of dark matter in a cell and ⟨M_DM⟩ is the mean dark matter mass per cell.

Fig. 1 shows the overdensity δ_galaxy of SMGs (S850 > 3 mJy; red circles), LBGs (0.1 mJy < S850 < 1 mJy; black diamonds) and a random subset of LBGs selected to have number density equal to that of the SMGs (blue diamonds) vs. overdensity of dark matter, δ_mass, for the random cells. For the total LBG population, δ_galaxy and δ_mass are tightly correlated, which indicates that LBG overdensities are good tracers of dark matter overdensities. The slope of the best-fitting linear relation (i.e. the bias, b ≡ δ_galaxy/δ_mass) is 0.98 ± 0.01, and the mean squared error (MSE) is 0.1. For the SMGs and the random subset of LBGs, there is a correlation between δ_galaxy and δ_mass, but it exhibits significant scatter. The bias values are 1.3 ± 0.1 and 1.0 ± 0.1 for the SMGs and LBGs, respectively, and the MSE values are 17 and 16. The fact that the SMGs and random LBGs exhibit similar scatter indicates that Poisson noise due to the rarity of SMGs is the reason for the complicated relationship between δ_galaxy and δ_mass for this population. Thus, although SMGs are slightly more biased than LBGs, SMG overdensities are poor tracers of the underlying dark matter overdensities. This effect may explain the results of Blain et al. (2004) discussed above.
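Equations (2) and (3), and the bias fit quoted above, are simple to reproduce; the following Python sketch (illustrative only, since the paper does not spell out its exact fitting procedure) computes the overdensities and a least-squares bias b ≡ δ_galaxy/δ_mass with its mean squared error.

```python
import numpy as np

def overdensity(x):
    """delta = (x - <x>) / <x>, as in equations (2) and (3)."""
    x = np.asarray(x, float)
    return (x - x.mean()) / x.mean()

def bias_and_mse(n_gal_per_cell, m_dm_per_cell):
    """Slope of the best-fitting line delta_galaxy = b * delta_mass
    (least squares through the origin) and the mean squared error."""
    dg = overdensity(n_gal_per_cell)
    dm = overdensity(m_dm_per_cell)
    b = np.sum(dg * dm) / np.sum(dm * dm)
    return b, np.mean((dg - b * dm) ** 2)
```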
SMG redshift associations
We now investigate the properties of individual SMG and LBG redshift associations in detail. Table 1 presents the fraction of SMGs in associations and the median separation of the SMGs in the different types of associations. Typical SMGs are not in associations, but a substantial minority (∼ 35 per cent) are. Only a few per cent of SMGs are located in rich associations of four or more SMGs. The richer associations exhibit lower median separations, which suggests that these associations correspond to higher overdensities (which will be confirmed below). Fig. 2 shows the spatial distributions of the SMGs and LBGs in two mock SMG associations. The associations each contain 6 SMGs. The spatial distributions of the galaxies exhibit clear filamentary structures, which reflect the underlying structure of the 'cosmic web'. Note that incorporating blending (using a typical beam of 30 arcsec) would tend to increase the submm fluxes of the LBGs and blend the SMGs with LBGs, but it would not blend any of the bright SMGs. Thus, a blended version would potentially contain additional bright SMGs and therefore be comparable to observed SMG associations (e.g. Chapman et al. 2009; Dannerbauer et al. 2014).
Our goal is to understand how well SMG associations trace the highest dark matter overdensities. To do so, it is instructive to compare the total dark matter mass in individual cells that are identified as SMG or LBG associations with the values for randomly selected cells. If SMG associations trace the most significant overdensities, these cells should contain more dark matter mass than other cells at a given redshift. The left panel of Fig. 3 shows the total dark matter in a given cell versus the redshift of the cell for cells that contain ≥ 5 SMGs (red circles), the 10 richest LBG associations (≥ 195 LBGs in a cell; black circles), cells that contain exactly 100 LBGs (green circles), and a subset of randomly selected cells (blue points). The redshift and dark matter mass distributions are shown next to the respective axes.
It is immediately clear that the SMG associations do not trace the most significant dark matter overdensities, although they do trace relatively high overdensities. Compared with the richest LBG associations, which should be considered analogous to observed associations (i.e. redshift spikes) of LBGs, the SMG associations tend to have lower dark matter masses (a median of 1.2 × 10^14 M⊙ for the SMG associations compared with 2.2 × 10^14 M⊙ for the ≥ 195-LBG associations) and are located at higher redshifts (the median values for the SMG and LBG associations are 2.4 and 2.0, respectively). Furthermore, there are many randomly selected cells that have dark matter masses that are comparable to or even greater than the values for the SMG associations, whereas the richest LBG associations more faithfully trace the cells with the largest dark matter masses.
For comparison, we show more-modest LBG associations that contain exactly 100 LBGs (green circles). As expected, these LBG associations trace less massive substructures than the richest LBG associations. The median dark matter mass of the 100-LBG associations is similar to that of the SMG associations, 9.1 × 10^13 M⊙, but the 100-LBG associations span a broader range of redshifts.
The right panel of Fig. 3 shows the total dark matter mass in a cell versus redshift of the cell for cells that contain one or more SMGs (coloured according to the number of SMGs). This figure demonstrates multiple interesting results: first, many of the highest dark matter overdensities at a given redshift contain no SMGs (the blue points with dark matter mass ≳ 2 × 10^14 M⊙). Consequently, finding dark matter overdensities using SMGs as signposts will cause one to miss many of the highest overdensities. Second, cells with lower numbers of SMGs tend to have less dark matter. Finally, the minimum mass necessary for a cell to host an SMG is ∼ 10^13 M⊙, which is consistent with the results of inferences from the clustering of real SMGs (Hickox et al. 2012).
SUMMARY AND DISCUSSION
We have used mock SMG catalogues to demonstrate that SMG associations do not necessarily trace the highest overdensities of dark matter. Instead, associations of many less-luminous star-forming galaxies are much better tracers of the most-massive dark matter structures. At higher redshifts (z ≳ 2.5), the richest SMG associations trace some of the highest overdensities because the most-massive galaxies in those regions are still forming stars rapidly. However, such associations are rare, and many of the highest overdensities do not host such associations. Consequently, SMG associations are poor tracers of the highest overdensities even at z ≳ 2.5. The situation is worse at z ≲ 2.5: many of the most-massive galaxies, which reside in the highest dark matter overdensities, have already had their star formation quenched. (Independently of redshift, the halos with the highest ratio of SFR to halo mass are those with halo masses of ∼ 10^12 M⊙ at that redshift; e.g. Behroozi et al. 2013a; Moster et al. 2013; Sparre et al. 2015.) Consequently, the z ≲ 2.5 dark matter overdensities are less likely to contain SMGs.
In our model, galaxy SFRs are assigned using a redshift-dependent SFR-halo mass relation and a model for satellite quenching. The parameters of the model are constrained by fitting to a wide range of observations. Consequently, the fact that some fraction of massive galaxies are quenched even at z ∼ 2 is not a prediction. The utility of our model is that it can be used to determine the consequences of quenching/downsizing for the clustering of the SMG population. Furthermore, because we determine the submm flux densities of our galaxies self-consistently using a fitting function derived from radiative transfer calculations, there is not a monotonic mapping between SFR and submm flux density (a galaxy with a relatively modest SFR can still be submm-bright if it has a sufficiently high dust mass). Thus, the results are specific to the SMG population rather than just the most rapidly star-forming galaxies (cf. Davé et al. 2010). Finally, our model explicitly accounts for the stochasticity that is inherent in the SMG selection because bright SMGs are an extreme population; thus, one may select a galaxy as an SMG because it is in a short-lived phase of elevated SFR (perhaps due to an interaction) or because it has an especially high submm flux density for its SFR and dust mass (because of e.g. an especially extended geometry). GN20 could be a real-Universe example of the latter. Consequently, Poisson noise contributes to the scatter in the value of δ_SMG at a given δ_mass and causes some of the most significant overdensities to contain few or no bright SMGs. Conversely, our model suggests that some of the brightest SMGs in the Universe may lie in relative voids, consistent with observational findings (Chapman et al., in preparation).
A few other theoretical works have investigated the clustering of the SMG population. Davé et al. (2010) studied the properties of the most rapidly star-forming galaxies, which they considered SMG analogues, in a cosmological hydrodynamical simulation. Because of the tight, monotonic SFR-stellar mass relation for star-forming galaxies in their simulation, they effectively selected the most-massive star-forming galaxies in their simulation. Consequently, they found that their simulated SMGs were highly clustered and biased, with a correlation length r_0 ≈ 10 h^−1 comoving Mpc and a bias of ∼ 6.
In the semi-analytical model of Almeida et al. (2011), SMGs are strongly clustered (r_0 = 5.6 h^−1 comoving Mpc). They found that the correlation length is tightly related to the stellar and halo mass but independent of the submm flux density, which is qualitatively consistent with our results (i.e. galaxies in high overdensities are not necessarily submm-bright). However, their results may have been affected by the use of a flat initial mass function (IMF) in starbursts, and a much less top-heavy IMF is used in the most recent incarnation of the model (Lacey et al., in preparation). Consequently, it will be worthwhile to revisit this issue.
Overall, our results urge caution when interpreting SMG associations in the context of large-scale structure. Because of their rarity, Poisson noise causes significant scatter in the SMG overdensity at fixed dark matter overdensity (i.e. at best, SMGs stochastically sample the highest overdensities). Consequently, although the highest-redshift SMG associations trace some of the highest dark matter overdensities at those redshifts, many of the most extreme overdensities do not host SMG associations. At lower redshifts (z ≲ 2), the situation is worse: the highest overdensities tend to contain only a few SMGs at most, and many do not contain any. Thus, if one wishes to identify protoclusters, the complicated bias of SMGs makes them less-than-ideal beacons, and it is preferable to search for associations of LBGs. If one wishes to use SMGs to trace dark matter structure, large sample sizes are required to overcome the Poisson noise. Current instruments are insufficient for this purpose, but proposed 30-m-class (sub)mm telescopes would be able to overcome this limitation.
Figure 2. Spatial distributions of galaxies near the 2 richest SMG associations (which each contain 6 SMGs; the cells are marked with dashed lines). The SMGs (LBGs) are denoted with red (blue) points, the sizes of which are proportional to S850. The spatial distributions of the galaxies reflect the filamentary structure of the dark matter distribution.
Figure 3. Left: total dark matter mass in a cell versus redshift of the cell for cells that contain ≥ 5 SMGs (red circles), cells with ≥ 195 LBGs (this number was selected to yield the 10 richest LBG associations; black circles), cells that contain exactly 100 LBGs (green circles) and randomly selected cells (blue points). The redshift and dark matter mass distributions are shown next to the respective axes. Compared with the richest LBG associations, the SMG associations trace less-massive, higher-redshift structures. Associations of 100 LBGs trace lower-mass dark matter substructures that span the full redshift range considered. Right: similar to the left panel, but with cells classified according to the number of SMGs that they contain. Cells with lower numbers of SMGs tend to include less dark matter. Notably, some of the most overdense regions contain no SMGs.
Table 1. Demographics of SMG associations.

N_SMG (a)    Percentage of population (b)    Median separation (c) (Mpc)
1            64                              -
2            23                              7.6
3            10                              6.1
4            2                               5.2
5            0.8                             4.3
6            0.2                             4.6

(a) Number of SMGs in a cell. (b) Percentage of SMGs in such associations. (c) Median pairwise separations of SMGs in such associations.
ACKNOWLEDGEMENTS

We thank Neal Katz for useful discussion and Phil Hopkins for comments on the manuscript. CCH is grateful to the Gordon and Betty Moore Foundation for financial support and acknowledges the hospitality of the Aspen Center for Physics, which is supported by the National Science Foundation Grant No. PHY-1066293. PSB was supported by a Giacconi Fellowship provided through the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA contract NAS5-26555.
REFERENCES

Adelberger K. L., Steidel C. C., Giavalisco M., Dickinson M., Pettini M., Kellogg M., 1998, ApJ, 505, 18
Almeida C., Baugh C. M., Lacey C. G., 2011, MNRAS, 1312
Behroozi P. S., Wechsler R. H., Conroy C., 2013a, ApJ, 770, 57
Behroozi P. S., Wechsler R. H., Wu H.-Y., 2013b, ApJ, 762, 109
Behroozi P. S., Wechsler R. H., Wu H.-Y., Busha M. T., Klypin A. A., Primack J. R., 2013c, ApJ, 763, 18
Blain A. W., Chapman S. C., Smail I., Ivison R., 2004, ApJ, 611, 725
Capak P. L. et al., 2011, Nature, 470, 233
Casey C. M., Narayanan D., Cooray A., 2014, Phys. Rep., 541, 45
Chapman S. C., Blain A., Ibata R., Ivison R. J., Smail I., Morrison G., 2009, ApJ, 691, 560
Clements D. L. et al., 2014, MNRAS, 439, 1193
Cox T. J., Jonsson P., Somerville R. S., Primack J. R., Dekel A., 2008, MNRAS, 384, 386
Daddi E. et al., 2009, ApJ, 694, 1517
Dannerbauer H. et al., 2014, A&A, 570, A55
Davé R., Finlator K., Oppenheimer B. D., Fardal M., Katz N., Kereš D., Weinberg D. H., 2010, MNRAS, 404, 1355
Davies L. J. M., Bremer M. N., Stanway E. R., Lehnert M. D., 2013, MNRAS, 433, 2588
Geach J. E. et al., 2005, MNRAS, 363, 1398
Hayward C. C., 2013, MNRAS, 432, L85
Hayward C. C., Behroozi P. S., Somerville R. S., Primack J. R., Moreno J., Wechsler R. H., 2013a, MNRAS, 434, 2572 (H13)
Hayward C. C., Jonsson P., Kereš D., Magnelli B., Hernquist L., Cox T. J., 2012, MNRAS, 424, 951
Hayward C. C., Kereš D., Jonsson P., Narayanan D., Cox T. J., Hernquist L., 2011, ApJ, 743, 159
Hayward C. C., Narayanan D., Kereš D., Jonsson P., Hopkins P. F., Cox T. J., Hernquist L., 2013b, MNRAS, 428, 2529
Hayward C. C., Torrey P., Springel V., Hernquist L., Vogelsberger M., 2014, MNRAS, 442, 1992
Hickox R. C. et al., 2012, MNRAS, 421, 284
Klypin A. A., Trujillo-Gomez S., Primack J., 2011, ApJ, 740, 102
MacKenzie T. P. et al., 2014, MNRAS, 445, 201
Moster B. P., Naab T., White S. D. M., 2013, MNRAS, 428, 3121
Narayanan D., Hayward C. C., Cox T. J., Hernquist L., Jonsson P., Younger J. D., Groves B., 2010, MNRAS, 401, 1613
Smail I. et al., 2014, ApJ, 782, 19
Sparre M. et al., 2015, MNRAS, in press, arXiv:1409.0009
Torrey P., Cox T. J., Kewley L., Hernquist L., 2012, ApJ, 746, 108
Walter F. et al., 2012, Nature, 486, 233
| [] |
[
"BART-based inference for Poisson processes",
"BART-based inference for Poisson processes"
] | [
"Stamatina Lamprinakou \nDepartment of Mathematics\nImperial College London London United Kingdom\n\n",
"Mauricio Barahona \nDepartment of Mathematics\nImperial College London London United Kingdom\n\n",
"Seth Flaxman \nDepartment of Mathematics\nImperial College London London United Kingdom\n\n",
"Sarah Filippi \nDepartment of Mathematics\nImperial College London London United Kingdom\n\n",
"Axel Gandy \nDepartment of Mathematics\nImperial College London London United Kingdom\n\n",
"Emma Mccoy \nDepartment of Mathematics\nImperial College London London United Kingdom\n\n"
] | [
"Department of Mathematics\nImperial College London London United Kingdom\n",
"Department of Mathematics\nImperial College London London United Kingdom\n",
"Department of Mathematics\nImperial College London London United Kingdom\n",
"Department of Mathematics\nImperial College London London United Kingdom\n",
"Department of Mathematics\nImperial College London London United Kingdom\n",
"Department of Mathematics\nImperial College London London United Kingdom\n"
] | [] | The effectiveness of Bayesian Additive Regression Trees (BART) has been demonstrated in a variety of contexts including non-parametric regression and classification. A BART scheme for estimating the intensity of inhomogeneous Poisson processes is introduced. Poisson intensity estimation is a vital task in various applications including medical imaging, astrophysics and network traffic analysis. The new approach enables full posterior inference of the intensity in a non-parametric regression setting. The performance of the novel scheme is demonstrated through simulation studies on synthetic and real datasets up to five dimensions, and the new scheme is compared with alternative approaches. | 10.1016/j.csda.2022.107658 | [
"https://export.arxiv.org/pdf/2005.07927v2.pdf"
] | 218,674,119 | 2005.07927 | 261bf124b6467fc16932b3990d983b10dd7d550b |
BART-based inference for Poisson processes
Stamatina Lamprinakou
Department of Mathematics
Imperial College London, London, United Kingdom
Mauricio Barahona
Department of Mathematics
Imperial College London, London, United Kingdom
Seth Flaxman
Department of Mathematics
Imperial College London, London, United Kingdom
Sarah Filippi
Department of Mathematics
Imperial College London, London, United Kingdom
Axel Gandy
Department of Mathematics
Imperial College London, London, United Kingdom
Emma McCoy
Department of Mathematics
Imperial College London, London, United Kingdom
BART-based inference for Poisson processes
The effectiveness of Bayesian Additive Regression Trees (BART) has been demonstrated in a variety of contexts including non-parametric regression and classification. A BART scheme for estimating the intensity of inhomogeneous Poisson processes is introduced. Poisson intensity estimation is a vital task in various applications including medical imaging, astrophysics and network traffic analysis. The new approach enables full posterior inference of the intensity in a non-parametric regression setting. The performance of the novel scheme is demonstrated through simulation studies on synthetic and real datasets up to five dimensions, and the new scheme is compared with alternative approaches.
Introduction
The Bayesian Additive Regression Trees (BART) model is a Bayesian framework which uses a sum of trees to predict the posterior distribution of a response y given a p-dimensional covariate X and priors on the function relating the covariates to the response. Chipman et al. (2010) proposed an inference procedure using Metropolis-Hastings within a Gibbs sampler, whereas Lakshminarayanan et al. (2015) used a particle Gibbs sampler to increase mixing when the true posterior consists of deep trees or when the dimensionality of the data is high. Several theoretical studies of BART models (Rockova and van der Pas, 2017; Rockova and Saha, 2018; Linero and Yang, 2018) have recently established optimal posterior convergence rates. The BART model has been applied in various contexts including non-parametric mean regression (Chipman et al., 2010), classification (Chipman et al., 2010; Zhang and Härdle, 2010; Kindo et al., 2016), variable selection (Chipman et al., 2010; Bleich et al., 2014; Linero, 2018), estimation of monotone functions (Chipman et al., 2021), causal inference (Hill, 2011), survival analysis (Sparapani et al., 2016), and heteroskedasticity (Bleich and Kapelner, 2014; Pratola et al., 2020). Linero and Yang (2018) illustrated how the BART model suffers from a lack of smoothness and the curse of dimensionality, and overcame both potential shortcomings by considering a sparsity assumption similar to (Linero, 2018) and treating decisions at branches probabilistically. The original BART model (Chipman et al., 2010) assumes that the response has a Gaussian distribution, and the majority of applications have used this framework. Murray (2017) adapted the BART model to count data and categorical data via a log-linear transformation, and provided an efficient MCMC sampler. Our focus is on extending this methodology to estimate the intensity function of inhomogeneous Poisson processes.
The question of estimating the intensity of Poisson processes has a long history, including both frequentist and Bayesian methods. Frequentist methods include fixed-bandwidth and adaptive bandwidth kernel estimators with edge correction (Diggle et al., 2003), and wavelet-based methods (e.g. Fryzlewicz and Nason, 2004;Patil et al., 2004). Bayesian methods include using a sigmoidal Gaussian Cox process model for intensity inference (Adams et al., 2009), a Markov random field (MRF) with Laplace prior (Sardy and Tseng, 2004), variational Bayesian intensity inference (Lloyd et al., 2015), and non-parametric Bayesian estimations of the intensity via piecewise functions with either random or fixed partitions of constant intensity (Arjas and Gasbarra, 1994;Heikkinen and Arjas, 1998;Gugushvili et al., 2018).
In this paper, we introduce an extension of the BART model (Chipman et al., 2010) for Poisson processes whose intensity at each point is estimated via a tiny ensemble of trees. Specifically, the logarithm of the intensity at each point is modelled via a sum of trees (and hence the intensity is a product of trees). This approach enables full posterior inference of the intensity in a non-parametric regression setting. Our main contribution is a novel BART scheme for estimating the intensity of an inhomogeneous Poisson process. The simulation studies demonstrate that our algorithm is competitive with the Haar-Fisz algorithm in one dimension and with kernel smoothing in two dimensions, and that it outperforms the kernel approach for multidimensional intensities. The simulation analysis also demonstrates that our proposed algorithm is competitive with inference via spatial log-Gaussian Cox processes. We also demonstrate its ability to track varying intensity in synthetic and real data.
The outline of the article is as follows. Section 2 introduces our approach for estimating the intensity of a Poisson process through the BART model, and Section 3 presents the proposed inference algorithm. Sections 4 and 5 present the application of the algorithm to synthetic data and real data sets, respectively. Section 6 provides our conclusions and plans for future work.
The BART Model for Poisson Processes
Consider an inhomogeneous Poisson process defined on a d-dimensional domain S ⊂ R^d, d ≥ 1, with intensity λ : S → R_+. For such a process, the number of points within a subregion B ⊂ S has a Poisson distribution with mean λ_B = ∫_B λ(s) ds, and the numbers of points in disjoint subregions are independent (Daley and Vere-Jones, 2003). The homogeneous Poisson process is a special case with constant intensity λ(s) = λ_0, ∀s ∈ S.
To estimate the intensity of the inhomogeneous Poisson process, we use m partitions of the domain S, each associated with a tree T_h, h = 1, ..., m. The partitions are denoted T_h = {Ω_ht}_{t=1}^{b_h}, where b_h is the number of terminal nodes in the corresponding tree T_h, and each leaf node t corresponds to one of the subregions Ω_ht of the partition T_h. Being a partition, every tree covers the full domain, i.e. S = ∪_{t=1}^{b_h} Ω_ht for every h. Each subregion Ω_ht has an associated parameter λ_ht, and hence each tree T_h has an associated vector of leaf intensities Λ_h = (λ_h1, λ_h2, ..., λ_hb_h).
We model the intensity of s ∈ S as:

log(λ(s)) = Σ_{h=1}^{m} Σ_{t=1}^{b_h} log(λ_ht) I(s ∈ Ω_ht)    (1)

T_h ∼ heterogeneous Galton-Watson process for a partition of S    (2)

λ_ht | T_h ∼ iid Gamma(α, β)    (3)

where I(·) denotes the indicator function. Equivalently, (1) can be expressed as

λ(s) = Π_{h=1}^{m} Π_{t=1}^{b_h} λ_ht^{I(s ∈ Ω_ht)}.    (4)
Given a fixed number of trees, m, the parameters of the model are thus the regression trees T = {T_h}_{h=1}^{m} and their corresponding intensities Λ = {Λ_h}_{h=1}^{m}. Following Chipman et al. (2010), we assume that the tree components (T_h, Λ_h) are independent of each other, and that the terminal node parameters of every tree are independent, so that the prior can be factorized as:

P(Λ, T) = Π_{h=1}^{m} P(Λ_h, T_h) = Π_{h=1}^{m} P(Λ_h | T_h) P(T_h) = Π_{h=1}^{m} [ Π_{t=1}^{b_h} P(λ_ht | T_h) ] P(T_h).    (5)
Prior on the trees. The trees T_h of the BART model are stochastic regression trees generated through a heterogeneous Galton-Watson (GW) process (Harris et al., 1963; Rockova and Saha, 2018). The GW process is the simplest branching process concerning the evolution of a population in discrete time. Individuals (tree nodes) of a generation (tree depth) give birth to a random number of individuals (tree nodes), called offspring, mutually independent and all with the same offspring distribution, which may vary from generation (depth) to generation (depth). In our case, we use the prior introduced by Chipman et al. (1998), that is, a GW process in which each node has either zero or two offspring and the probability of a node splitting depends on its depth in the tree. Specifically, a node η ∈ T_h splits into two offspring with probability

p_split(η) = γ / (1 + d(η))^δ,    (6)

where d(η) is the depth of node η in the tree, and γ ∈ (0, 1) and δ ≥ 0 are parameters of the model. Classic results from the theory of branching processes show that γ ≤ 0.5 guarantees that the expected depth of the tree is finite. In our construction, each tree T_h is associated with a partition of S. Namely, if node η splits, we select uniformly at random one of the d dimensions of the space of the Poisson process, followed by uniform selection from the available split values associated with that dimension, respecting the splitting rules higher in the tree.
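To make the tree prior concrete, the following Python sketch (a minimal illustration under the assumptions stated in the text: a unit-hypercube domain and a uniform grid of candidate split values; if a node draws a split but no admissible grid value remains inside its box, we simply make it a leaf) samples a random axis-aligned partition of [0, 1)^d from the GW process with p_split(η) = γ/(1 + d(η))^δ.

```python
import numpy as np

def sample_gw_partition(dim, gamma=0.98, delta=2.0, n_grid=100, rng=None):
    """Draw a random partition of [0,1)^dim from the Galton-Watson prior:
    a node at depth k splits with probability gamma / (1 + k)^delta,
    along a uniformly chosen dimension and a uniformly chosen grid split
    value inside the node's box. Returns a list of leaf boxes (lo, hi)."""
    rng = np.random.default_rng() if rng is None else rng
    grid = np.arange(1, n_grid) / n_grid          # candidate split values
    stack = [(np.zeros(dim), np.ones(dim), 0)]    # (lower, upper, depth)
    leaves = []
    while stack:
        lo, hi, depth = stack.pop()
        if rng.random() < gamma / (1.0 + depth) ** delta:
            j = rng.integers(dim)                          # split dimension
            cuts = grid[(grid > lo[j]) & (grid < hi[j])]   # admissible cuts
            if cuts.size:
                c = rng.choice(cuts)
                left_hi, right_lo = hi.copy(), lo.copy()
                left_hi[j] = right_lo[j] = c
                stack.append((lo, left_hi, depth + 1))
                stack.append((right_lo, hi, depth + 1))
                continue
        leaves.append((lo, hi))
    return leaves
```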
Prior on the leaf intensities. Our choice of a Gamma prior for the leaf parameters λ_ht builds upon previous work by Murray (2017), who used a mixture of Generalized Inverse Gaussian (GIG) distributions as the prior on leaf parameters in a BART model for count regression. Here we impose a Gamma prior (a special case of the GIG) on the leaf parameters, which simplifies the model and leads to a closed form of the conditional integrated likelihood below (see Section 3), as the Gamma distribution is the conjugate prior for the Poisson likelihood. We discuss the selection of its hyperparameters α and β in Section 3.1.
The Inference Algorithm
Given a finite realization of an inhomogeneous Poisson process with n sample points s = (s_1, ..., s_n) ∈ S ⊂ R^d, we seek to infer the parameters of the model (Λ, T) by sampling from the posterior P(Λ, T | s).
Before presenting the sampling algorithm we summarize a preliminary result. To simplify our notation, let T^(h) = {T_j}_{j≠h} and Λ^(h) = {Λ_j}_{j≠h} denote the ensemble without the h-th tree, let g(s; T_j, Λ_j) = Σ_{t=1}^{b_j} λ_jt I(s ∈ Ω_jt) denote the step function induced by tree j, and note that the trees in T^(h) jointly partition S into K(T^(h)) subregions {Ω_k^(h)}_{k=1}^{K(T^(h))}, on which the product of the remaining trees is constant (Rockova and van der Pas, 2017).
Then we have the following result.
Remark 1. (i) The conditional likelihood of the realization is given by

P(s | Λ, T) = c_h Π_{t=1}^{b_h} λ_ht^{n_ht} e^{−λ_ht c_ht},    (7)

with

c_h = Π_{i=1}^{n} Π_{j=1, j≠h}^{m} g(s_i; T_j, Λ_j),    c_ht = Σ_{k=1}^{K(T^(h))} λ_k^(h) |Ω_k^(h) ∩ Ω_ht|,

where λ_k^(h) is the constant value taken by the product of the trees in T^(h) on the subregion Ω_k^(h), and n_ht is the number of sample points falling in Ω_ht.

(ii) For a tree h, the conditional integrated likelihood obtained by integrating out Λ_h is

P(s | T_h, T^(h), Λ^(h)) = c_h (β^α / Γ(α))^{b_h} Π_{t=1}^{b_h} Γ(n_ht + α) / (c_ht + β)^{n_ht + α}.    (8)
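Equation (8) is used repeatedly inside the sampler below, so it is worth noting that it can be evaluated stably in log space; the following Python sketch (illustrative only; it returns the log of (8) up to the additive constant log c_h, which does not depend on T_h and therefore cancels in the Hastings ratio) is a direct transcription.

```python
import numpy as np
from scipy.special import gammaln

def log_integrated_likelihood(n_ht, c_ht, alpha, beta):
    """log P(s | T_h, T^(h), Lam^(h)) - log c_h, from equation (8),
    given the leaf counts n_ht and exposure terms c_ht of one tree."""
    n_ht = np.asarray(n_ht, float)
    c_ht = np.asarray(c_ht, float)
    b_h = n_ht.size
    return (b_h * (alpha * np.log(beta) - gammaln(alpha))
            + np.sum(gammaln(n_ht + alpha)
                     - (n_ht + alpha) * np.log(c_ht + beta)))
```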
A proof can be found in Appendix B and Appendix C. We now summarize our sampling algorithm. To sample from P (Λ, T |s), we implement a Metropolis-Hastings within block Gibbs sampler (Algorithm 1), which requires m successive draws
from (T_h, Λ_h) | T^(h), Λ^(h), s. Note that

P(T_h, Λ_h | T^(h), Λ^(h), s) = P(T_h | T^(h), Λ^(h), s) P(Λ_h | T_h, T^(h), Λ^(h), s)
∝ P(T_h | T^(h), Λ^(h), s) P(s | Λ, T) P(Λ_h | T_h)    (9)
= P(T_h | T^(h), Λ^(h), s) P(s | Λ, T) Π_{t=1}^{b_h} P(λ_ht | T_h)
= P(T_h | T^(h), Λ^(h), s) c_h Π_{t=1}^{b_h} λ_ht^{n_ht} e^{−λ_ht c_ht} Π_{t=1}^{b_h} (β^α/Γ(α)) λ_ht^{α−1} e^{−β λ_ht}
∝ P(T_h | T^(h), Λ^(h), s) Π_{t=1}^{b_h} λ_ht^{n_ht+α−1} e^{−(c_ht+β) λ_ht},    (10)
which follows directly from Bayes' rule and Eqs. (5) and (3).
From (10), it is clear that a draw from (T_h, Λ_h) | T^(h), Λ^(h), s can be achieved in (b_h + 1) successive steps consisting of:

• sampling T_h | T^(h), Λ^(h), s using Metropolis-Hastings (Algorithm 2);

• sampling λ_ht | T_h, T^(h), Λ^(h), s from a Gamma distribution with shape n_ht + α and rate c_ht + β, for t = 1, ..., b_h.
These steps are implemented through Metropolis-Hastings in Algorithm 1. Note also that
P(T_h | T^(h), Λ^(h), s) ∝ P(s | T_h, T^(h), Λ^(h)) P(T_h),
so that the conditional integrated likelihood (8) is required to compute the Hastings ratio.
Algorithm 1 Metropolis-Hastings within Gibbs sampler

for v = 1, 2, 3, ... do
  for h = 1 to m do
    Sample T_h^(v+1) | s, {T_j^(v+1)}_{j=1}^{h−1}, {T_j^(v)}_{j=h+1}^{m}, {Λ_j^(v+1)}_{j=1}^{h−1}, {Λ_j^(v)}_{j=h+1}^{m} using Algorithm 2
    for t = 1 to b_h do
      Sample λ_ht^(v+1) | s, {T_j^(v+1)}_{j=1}^{h}, {T_j^(v)}_{j=h+1}^{m}, {Λ_j^(v+1)}_{j=1}^{h−1}, {Λ_j^(v)}_{j=h+1}^{m} from Gamma(n_ht + α, c_ht + β)
    end for
  end for
end for

Algorithm 2 Metropolis-Hastings algorithm for sampling from the posterior P(T_j | s, T^(j), Λ^(j))

Generate a candidate value T_j* with probability q(T_j* | T_j^(v)).
Set T_j^(v+1) = T_j* with probability
  α(T_j^(v), T_j*) = min{ 1, [q(T_j^(v) | T_j*) / q(T_j* | T_j^(v))] × [P(s | T_j*, T^(j), Λ^(j)) / P(s | T_j^(v), T^(j), Λ^(j))] × [P(T_j*) / P(T_j^(v))] }.
Otherwise, set T_j^(v+1) = T_j^(v).
The transition kernel q in Algorithm 2 is chosen from the three proposals: GROW, PRUNE, CHANGE (Chipman et al., 2010;Kapelner and Bleich, 2013). The GROW proposal randomly picks a terminal node, splits the chosen terminal into two new nodes and assigns a decision rule to it. The PRUNE proposal randomly picks a parent of two terminal nodes and turns it into a terminal node by collapsing the nodes below it. The CHANGE proposal randomly picks an internal node and randomly reassigns to it a splitting rule. We describe the implementation of the proposals in Appendix A.
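The two inner steps of Algorithms 1 and 2 reduce to a conjugate Gamma draw and a standard log-space Metropolis-Hastings acceptance; the following Python sketch (our own illustration of those two steps in isolation, not the full sampler) makes the computations explicit. Note that numpy's Gamma sampler is parameterized by scale, so the rate c_ht + β must be inverted.

```python
import numpy as np

def gibbs_leaf_update(n_ht, c_ht, alpha, beta, rng):
    """Draw lambda_ht | ... ~ Gamma(shape = n_ht + alpha,
    rate = c_ht + beta) for all leaves of one tree at once."""
    return rng.gamma(shape=np.asarray(n_ht) + alpha,
                     scale=1.0 / (np.asarray(c_ht) + beta))

def mh_accept(log_lik_new, log_lik_old, log_prior_new, log_prior_old,
              log_q_fwd, log_q_rev, rng):
    """Accept/reject a tree proposal via the log Hastings ratio of
    Algorithm 2; log_q_fwd = log q(T* | T), log_q_rev = log q(T | T*)."""
    log_ratio = (log_q_rev - log_q_fwd + log_lik_new - log_lik_old
                 + log_prior_new - log_prior_old)
    return np.log(rng.random()) < min(0.0, log_ratio)
```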
For completeness, in the supplementary material, we present the full development of the algorithm for inference of the intensity of inhomogeneous Poisson processes via only one tree.
Fixing the hyperparameters of the model
Hyperparameters of the Gamma distribution for the leaf intensities. We use a simple data-informed approach to fix the hyperparameters α and β of the Gamma distribution (3). We discretize the domain into N_G subregions of equal volume (N_G = ⌈100^{1/d}⌉^d works well in practice up to 5 dimensions) and count the number of samples s_i per subregion. We thus obtain the empirical densities in each of the subregions: ξ_i, i = 1, ..., N_G. Given the form of the intensity (4) as a product of m trees, we consider the m-th roots Ξ = {ξ_i^{1/m}}_{i=1}^{N_G} as candidates for the intensity of each tree. Taking the sample mean μ̂_Ξ and sample variance σ̂²_Ξ, we choose the model hyperparameters α and β to correspond to those of a Gamma distribution with the same mean and variance, i.e., α = μ̂²_Ξ/σ̂²_Ξ and β = μ̂_Ξ/σ̂²_Ξ, although fixing β = 1 can also give good estimates of the intensity. Although setting N_G = ⌈100^{1/d}⌉^d leads to convergence and good estimates of the intensity in our simulation studies below, there are other possibilities. Alternatively, we can bin the data based on a criterion that takes into account the number of samples, n, and the number of dimensions, d. For example, the number of bins per dimension, n_b, can be computed as (Scott, 2008; Wand, 1997): (i) n_b = ⌈n^{1/(d+1)}⌉, (ii) n_b = ⌈n^{1/(d+2)}⌉, or (iii) n_b = max_{k∈{1,2,..,d}} ⌈DR_k · n^{1/(d+2)} / (2 · IQR({s_{i,k}}))⌉, where IQR denotes the interquartile range of the sample, DR_k is the range of the domain in dimension k (here we scale the initial domain to a unit hypercube, so that DR_k = 1, ∀k), and by extension N_G = n_b^d. In our simulation scenarios below, all these approaches lead to comparable convergence times and estimates of the intensity.
Hyperparameters of the stochastic ensemble of regression trees. The GW stochastic process that generates our tree ensemble has several hyperparameters. The parameters (γ, δ) control the shape of the trees. The parameter γ > 0 controls the probability that the root of a tree will split into two offspring, while the parameter δ > 0 penalizes against deep trees. As noted in (Chipman et al., 2010), for a sum-of-trees model we want to keep the depth of the trees small whilst ensuring nontrivial trees; hence, in our simulation study we fix γ = 0.98 and δ = 2. Second, each of the d dimensions has to be assigned a grid of split values, from which the subregions of the partition are randomly chosen, yet always respecting the consistency of the ancestors in the tree (that is, respecting the splitting rules higher in the tree). Here, we use a simple uniform grid for each of the d dimensions (Pratola et al., 2016): we normalize each dimension of the space to (0,1) and discretize each dimension into N_d segments. (N_d = 100 works well in practice and is used throughout our examples.) More sophisticated, data-informed grids are also possible, although using, e.g., the sample points as split values does not noticeably improve the performance in our examples. Finally, the number of trees m also needs to be fixed, as in Chipman et al. (2010). In our examples below, we have checked the performance of our algorithm with the number of trees m varying between 2 and 50. We find that good performance can be achieved with a moderate number of trees, m, between 3 and 10, depending on the particular example.
Simulation Study on Synthetic Data
We carried out a simulation study on synthetic data to illustrate the performance of Algorithm 1, first estimating the intensity of one- and two-dimensional inhomogeneous Poisson processes, and then the intensity of multidimensional Poisson processes.
We simulate realizations of Poisson processes on the domain [0, 1)^d for d ∈ {1, 2, 3, 4, 5} via thinning (Lewis and Shedler, 1979). The hyperparameters of the model (for the trees and the leaf intensities) are fixed as described in Section 3.1. We initially randomly generate m trees of zero depth. The probabilities of the proposals in Algorithm 2 are set to P(GROW) = P(PRUNE) = 0.4 and P(CHANGE) = 0.2. A set {z_i} is defined by uniformly sampling points in the domain [0, 1)^d.
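For reference, thinning is straightforward to implement; the following Python sketch (a standard textbook construction, not the authors' code) simulates a realization on the unit hypercube given any intensity bounded above by lam_max, and the example uses the three-dimensional intensity of Section 4.3.

```python
import numpy as np

def simulate_poisson_thinning(intensity, lam_max, dim, rng=None):
    """Simulate an inhomogeneous Poisson process on [0,1)^dim by thinning
    (Lewis and Shedler, 1979): draw a homogeneous process with rate
    lam_max >= sup(intensity) and keep each point s independently with
    probability intensity(s) / lam_max."""
    rng = np.random.default_rng() if rng is None else rng
    n = rng.poisson(lam_max)                 # domain volume is 1
    candidates = rng.random((n, dim))
    keep = rng.random(n) < intensity(candidates) / lam_max
    return candidates[keep]

# Example: lambda(x) = 500 exp(x^T x) on [0,1)^3, bounded by 500 e^3
pts = simulate_poisson_thinning(
    lambda x: 500.0 * np.exp(np.sum(x**2, axis=1)),
    lam_max=500.0 * np.e**3, dim=3)
```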
We run 3 parallel chains of the same length. We discard their first halves, treating the second halves as a sample from the target distribution. We assess chain convergence using the Gelman-Rubin convergence diagnostic (Gelman et al., 1992) applied to the estimated intensity at each point of the set {z_i}, as well as trace plots and autocorrelation plots for some points of the testing set.
At each state t of a simulated chain we estimate the intensity at each point z_i by a product of trees, denoted λ̂^(t)(z_i) = Π_{j=1}^{m} g(z_i; T_j^(t), Λ_j^(t)). The induced sequence {λ̂^(t)(·)}_{t=1}^{∞} for the sequence of draws {(T_1^(t), Λ_1^(t)), ..., (T_m^(t), Λ_m^(t))}_{t=1}^{∞} converges to P(λ̂ | s). We estimate the posterior mean E[λ̂(·) | s_1, ..., s_n], the posterior median of λ̂(·), and the highest density interval (hdi) using the function hdi provided by the R package bayestestR (Makowski et al., 2019). To assess the performance of our algorithm, we compute the Average Absolute Error (AAE) of the computed estimate:
AAE(λ̂) = (1/N_z) Σ_{i=1}^{N_z} |λ̂(z_i) − λ(z_i)|    (11)

and the Root Integrated Square Error (RISE):

RISE(λ̂) = [ (1/N_z) Σ_{i=1}^{N_z} (λ̂(z_i) − λ(z_i))^2 ]^{1/2}    (12)

where N_z is the number of test points.
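Both metrics are one-liners; a minimal Python sketch:

```python
import numpy as np

def aae(lam_hat, lam_true):
    """Average Absolute Error, equation (11)."""
    return np.mean(np.abs(np.asarray(lam_hat) - np.asarray(lam_true)))

def rise(lam_hat, lam_true):
    """Root Integrated Square Error, equation (12)."""
    return np.sqrt(np.mean((np.asarray(lam_hat) - np.asarray(lam_true)) ** 2))
```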
In the spirit of the Akaike information criterion (AIC) (Loader, 1999), we also introduce two diagnostics targeting the likelihood function to evaluate whether increasing the number of trees leads to better intensity estimation:

D_g = 2 (log P(s_1, ..., s_n) − k_g),   and   D_l = 2 (log P(s_1, ..., s_n) − k_l),

where k_g is the number of global cells, and k_l is the overall number of leaves in the ensemble. We estimate both diagnostics using the sequence of draws (T^(w), Λ^(w)) = (T_1^(w), Λ_1^(w), ..., T_m^(w), Λ_m^(w)) after the burn-in period as

D̂_g ≈ 2 (1/N_w) Σ_{w=1}^{N_w} [ log P(s_1, ..., s_n | T^(w), Λ^(w)) − k_g^(w) ],   and

D̂_l ≈ 2 (1/N_w) Σ_{w=1}^{N_w} [ log P(s_1, ..., s_n | T^(w), Λ^(w)) − k_l^(w) ],

where k_g^(w) and k_l^(w) are the number of global cells and the overall number of leaves in the ensemble associated with the w-th draw, respectively.
AIC has been shown to be asymptotically equal to leave-one-out cross-validation (LOO-CV) (Stone, 1977; Gelman et al., 2014). According to Leininger and Gelfand (2017), the computational burden required for leave-one-out cross-validation of point pattern data is impractical. We introduce a leave-partition-out (LPO) method, assuming that the initial process N(t) is obtained by combining independent processes {N_i(t)}_{i=1}^{N_p}, as follows:

D_LPO = Σ_{i=1}^{N_p} log P(N_i(t) | N(t) − {N_i(t)})    (13)

where P(N_i(t) | N(t) − {N_i(t)}) is the leave-partition-out predictive intensity given the process N(t) without the i-th partition, N_i(t). We can evaluate (13) as follows:

D̂_LPO = Σ_{i=1}^{N_p} log [ (1/N_w) Σ_{w=1}^{N_w} P(N_i(t) | T^(w,i), Λ^(w,i)) ]

where (T^(w,i), Λ^(w,i)) = (T_1^(w,i), Λ_1^(w,i), ..., T_m^(w,i), Λ_m^(w,i)) is the sequence of draws after the burn-in period leaving out the partition N_i(t). We assume that each event of N(t) comes from N_i(t) with probability p_i. The bias of the method is introduced by randomly splitting the process into individual processes. We can recover LOO-CV from LPO by defining the parameter N_p appropriately: the higher N_p is, the less biased the method is. In the simulation scenarios, we consider p_i = 0.1, i = 1, ..., N_p, and N_p = 10 for computational reasons. The diagnostics show that tiny ensembles of trees provide good estimates in our simulation scenarios.
To confirm the proposed diagnostics, we use p-thinning (Illian et al., 2008, Chapter 6) with p = 0.8 to create training and test datasets in two of the simulation scenarios. We employ the Root Standardized Mean Square Error (RSMSE) and the Rank Probability Score (RPS) on the test data set, comparing observed counts in disjoint equal-volume subregions {S_i}_{i=1}^{N_s}, as follows:

RSMSE(N̂) = [ (1/N_s) Σ_{i=1}^{N_s} (N̂(S_i) − N(S_i))^2 / N̂(S_i) ]^{1/2}    (14)

and

RPS(N(S_j)) = Σ_{u=0}^{N(S_j)−1} F(u)^2 + Σ_{u=N(S_j)}^{∞} (F(u) − 1)^2,    (15)

where F is the Poisson distribution with parameter m̂ = ∫_{S_j} λ̂(s) ds, N(S_i) is the actual number of test points in S_i, and N̂(S_i) is the estimated number of test points in S_i, given by

N̂(S_i) = ∫_{S_i} ((1 − p)/p) λ̂(s) ds ≈ ((1 − p)/p) (1/N_z^i) Σ_{z_j ∈ S_i} λ̂(z_j) |S_i|,    (16)

with N_z^i being the number of points {z_j} falling in S_i, and estimating the intensity at each point s, λ̂(s), via the posterior mean E[λ̂(·) | s_1, ..., s_n].
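A Python sketch of the test-count estimator (16) and of the RPS (15) for a Poisson predictive distribution (our own illustration; the infinite tail sum in (15) is truncated at a high quantile):

```python
import numpy as np
from scipy.stats import poisson

def expected_test_count(lam_hat_at_z, cell_volume, p=0.8):
    """Equation (16): Monte Carlo estimate of ((1-p)/p) * integral of
    lam_hat over the cell, from the estimated intensities at the test
    locations z_j that fall in the cell."""
    return (1.0 - p) / p * np.mean(lam_hat_at_z) * cell_volume

def rps_poisson(n_obs, mean, u_max=None):
    """Rank Probability Score (15), RPS = sum_u (F(u) - 1{u >= n_obs})^2,
    with F the Poisson CDF; the tail is truncated where F is ~1."""
    u_max = int(poisson.ppf(1.0 - 1e-9, mean)) if u_max is None else u_max
    u = np.arange(0, max(u_max, n_obs) + 1)
    F = poisson.cdf(u, mean)
    return np.sum((F - (u >= n_obs)) ** 2)
```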
For one-dimensional processes, we compare the results of Algorithm 1 to the Haar-Fisz algorithm (Fryzlewicz and Nason, 2004), a wavelet-based method for estimating the intensity of one-dimensional Poisson processes that outperforms well-known competitors. We apply the Haar-Fisz algorithm to the counts of points falling into 256 consecutive intervals using the R package haarfisz (Fryzlewicz, 2010). Our algorithm is competitive with the Haar-Fisz algorithm for smooth intensity functions and is not strongly outperformed by the Haar-Fisz algorithm when the underlying intensity is a stepwise function.
For two-dimensional processes, we compare the results of our algorithm with fixed-bandwidth estimators and log-Gaussian Cox processes (LGCP) with intensity λ(s) = exp(a + u(s)), where u is a Gaussian process with an exponential covariance function. We used a discretized version of the LGCP model defined on a regular grid over space, which we implemented using Stan code (Gelman et al., 2015). As noted in Davies and Baddeley (2018), the choice of the kernel is not of primary importance; we choose a Gaussian kernel for its wide applicability. In our tables of results, the smoothing bandwidth, sigma, was selected using likelihood cross-validation (Loader, 1999), denoted by (LCV), and we have also included other values of sigma to demonstrate the sensitivity to the bandwidth choice. The kernel estimators, and the bandwidth value given by likelihood cross-validation, were computed using the R package spatstat (Baddeley and Turner, 2005). Our algorithm outperforms the maximum likelihood approach using a linear conditional intensity, as expected. Our algorithm outperforms kernel smoothing and LGCP for stepwise functions and is competitive with them for a smooth intensity.
Finally, we examine the performance of our algorithm for multidimensional intensities by generating realizations of Poisson processes on the domain [0, 1)^d for d ∈ {3, 5} via thinning. Future work includes the study of intensities in higher dimensions (d > 5). We compare our intensity estimates with kernel smoothing estimators having isotropic standard deviation matrices with diagonal elements equal to h, and with the methodology for applying maximum likelihood to point process models with a linear conditional intensity (Peng, 2003). We select the bandwidth h using likelihood cross-validation (Loader, 1999), denoted by (LCV).
One-dimensional Poisson process with stepwise intensity
Our first example is a one-dimensional Poisson process with piecewise-constant intensity with several steps (Fig. 1). We run 3 parallel chains of the same length: 200000 iterations for 2-10 trees, 100000 for 12 trees, 50000 for 15 trees and 30000 for 20 trees.
Our algorithm detects the change points, provides good estimates of the intensity, and is competitive in terms of AAE with the Haar-Fisz algorithm, but does not perform as well in terms of RISE (see Fig. 1 and Tables 3-6). We computed the metrics and convergence diagnostics on a set of uniformly chosen points without excluding the points close to jumps. Because the intensity is inferred via a product of stepwise functions, the proposed algorithm is expected to provide estimates with higher variability close to jumps. Excluding the points close to jumps, the proposed algorithm outperforms the Haar-Fisz algorithm. Tables 4-5 show the metrics for various numbers of trees, excluding the points within a distance of ±0.02 from the jumps.
The diagnostics $D_g$, $D_l$ and $D_{LPO}$ attain their highest values for 7, 4 and 8 trees, respectively. The analysis demonstrates only small differences between log-likelihood values as the number of trees increases, supporting results found in previous BART studies that the method is robust to the choice of m. The average RSMSE and RPS on testing points over 7 different splits of the original data set (Tables 1-2) provide evidence that ensembles with more than seven trees do not improve the fit of the proposed algorithm.
Two-dimensional Poisson process with stepwise intensity function
To demonstrate the applicability of our algorithm in a two-dimensional setting, Figures 2-3 and Tables 7-9 show that it outperforms kernel smoothing and inference with spatial log-Gaussian Cox processes for stepwise intensity functions. We run 3 parallel chains of the same length for 100000 iterations for 3-6 trees. The convergence criteria indicate convergence of the simulated chains for the majority of points. As may be expected, the simulation study shows that points close to jumps are estimated with less reliability; the algorithm converges less well at these points, as demonstrated by the Gelman-Rubin diagnostic (see supplementary material). The diagnostics $D_g$, $D_l$ and $D_{LPO}$ all attain their highest values for three trees, indicating that small ensembles of trees can provide a good estimate of the intensity.
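Convergence throughout the paper is assessed with the Gelman-Rubin criterion (Gelman and Rubin, 1992). For reference, a minimal R implementation of the textbook potential scale reduction factor for one scalar quantity (here, the intensity at a single testing point) could look as follows; this is a generic sketch, not the paper's exact code.

# draws: an (n_iterations x n_chains) matrix of sampled intensities at one point
gelman_rubin <- function(draws) {
  n <- nrow(draws); m <- ncol(draws)
  B <- n * var(colMeans(draws))    # between-chain variance
  W <- mean(apply(draws, 2, var))  # within-chain variance
  V <- (n - 1) / n * W + B / n     # pooled variance estimate
  sqrt(V / W)                      # values close to 1 indicate convergence
}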
Inhomogeneous three-dimensional Poisson Process with Gaussian intensity
Our first example for multidimensional intensities is a three-dimensional Poisson process with intensity $\lambda(x) = 500\,e^{x^T x}$ for $x \in [0, 1)^3$. We generated a realization of 1616 points via thinning. We run 3 parallel chains of the same length for 100000 iterations for 3-10 trees and 30000 iterations for 12 trees. Tables 10 and 11 report the statistics of our algorithm and of kernel smoothing. The diagnostics $D_g$, $D_l$ and $D_{LPO}$ all attain their highest values with 4 trees. We observe that the diagnostic $D_l$ differs only slightly between 4 and 8 trees, and $D_g$ is similar between 4 and 5 trees. The estimate of the average logarithm of the Poisson process likelihood does not change significantly from 4 trees to 12 trees: its maximum, 10536.3, occurs at 12 trees, and its minimum, 10531.9, at 4 trees. In addition, the estimated average number of leaves in a tree of the ensemble is about 3 for 4-12 trees, which explains why we observe higher values of the diagnostics for a small number of trees. The metrics AAE and RISE are optimised with 12 trees; however, only small variations in the metrics are seen between 4 and 12 trees. The diagnostics provide evidence that increasing the number of trees does not improve the fit of the proposed model.
Inhomogeneous five dimensional Poisson Process with sparsity assumption
Here, we demonstrate the performance of our algorithm in detecting the dimensions that contribute most to the intensity at $s \in S$ in a noisy environment. Consider a five-dimensional inhomogeneous Poisson process whose intensity, as a function of $x = (x_1, x_2, x_3, x_4, x_5) \in [0, 1)^5$, depends on 3 of the 5 dimensions:
$$\lambda(x) = \big(2\,\mathbf{1}(x_1 < 0.2) + 10\,\mathbf{1}(x_1 \geq 0.2)\big)\,\big(3\,\mathbf{1}(x_2 < 0.5) + 15\,\mathbf{1}(x_2 \geq 0.5)\big)\,\big(3\,\mathbf{1}(x_3 < 0.8) + 30\,\mathbf{1}(x_3 \geq 0.8)\big)$$
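For concreteness, this intensity is straightforward to encode; the R snippet below is an illustrative rendering of the formula above (the names lambda5 and rpois_thin are ours), with the upper bound 10 × 15 × 30 = 4500 available for thinning.

lambda5 <- function(x) {
  (2 * (x[, 1] < 0.2) + 10 * (x[, 1] >= 0.2)) *
  (3 * (x[, 2] < 0.5) + 15 * (x[, 2] >= 0.5)) *
  (3 * (x[, 3] < 0.8) + 30 * (x[, 3] >= 0.8))  # x4 and x5 are pure noise dimensions
}
# pts <- rpois_thin(lambda5, lambda_max = 4500, d = 5)  # realization via thinning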
We generate a realization of 669 points via thinning. We run 3 parallel chains of the same length for 100000 iterations for 4-8 trees, 50000 iterations for 10 trees, 30000 iterations for 12 trees and 10000 iterations for 15 trees. The convergence criterion is smaller than 1.1 for the majority of testing points. Table 12 shows the metrics and the diagnostics $D_g$ and $D_l$ of the estimated intensity over various numbers of trees. The diagnostics $D_g$ and $D_l$ attain their highest values with 4 trees, and $D_l$ shows only small differences between 4 and 5 trees. We note that (i) the average number of leaves in a tree of the ensemble is about 2.2 for 4-5 trees, and (ii) the estimated logarithm of the Poisson process likelihood for 4 and 5 trees is 4271.5 and 4271.8, respectively. The diagnostic $D_{LPO}$ attains its highest value with 5 trees. The p-thinning approach confirms the diagnostics, and indicates that increasing the number of trees does not improve the fit of the proposed model to the data. Table 16 shows how frequently each dimension appears in the decision rules of a tree, and Table 15 shows how likely each dimension is to be involved in the root's decision rule. The results illustrate that the important covariates $x_1$, $x_2$ and $x_3$ are more likely to be involved in the decision rules of a tree than the noisy dimensions $x_4$ and $x_5$, indicating that the algorithm prioritizes the dimensions that contribute most to the intensity. Figure 6 shows that the means of the posterior marginal intensities are similar to the expected marginal intensities, given that $\{x_i\}_{i=1}^5$ are uniform independent covariates. Tables 12, 13 and 14 show that our algorithm outperforms kernel smoothing and the maximum likelihood approach with a linear conditional intensity, as expected. The ability of our method to identify important features demonstrates an important advantage over other procedures.
Intensity estimation for Real Data
In this section, we apply our algorithm to real data sets modelled as realizations of inhomogeneous Poisson processes in one and two dimensions. To assess the performance of our algorithm, we break the domain $[0, 1)^d$ into equal-volume subareas $\{S_i\}_{i=1}^{N_S}$ and consider a set $\{z_i\}$ obtained by uniformly sampling points in the domain $[0, 1)^d$. We compute the AAE of the estimated expected number of points falling into each of the subareas:
$$\mathrm{AAE}(\hat N) = \frac{1}{N_S} \sum_{i=1}^{N_S} \big|\hat N(S_i) - N(S_i)\big| \tag{17}$$
and Root Integrated Square Error (RISE):
$$\mathrm{RISE}(\hat N) = \left(\frac{1}{N_S} \sum_{i=1}^{N_S} \big(\hat N(S_i) - N(S_i)\big)^2\right)^{1/2}, \tag{18}$$
where $N(S_i)$ is the actual number of points in $S_i$ and
$$\hat N(S_i) = \int_{S_i} \hat\lambda(s)\,ds \approx \frac{1}{N_{S_i}} \sum_{z_j \in S_i} |S_i|\, \hat\lambda(z_j) \tag{19}$$
with $N_{S_i}$ being the number of testing points $\{z_j\}$ falling in $S_i$. We use the AAE and RISE metrics to compare our intensity estimates with those obtained from the Haar-Fisz algorithm for one-dimensional data, and from kernel estimators for two-dimensional data. We observe that our algorithm, the Haar-Fisz algorithm and kernel smoothing lead to similar results. As expected, our reconstructions of the intensity function are less smooth than those derived with kernel smoothing. The kernel estimator, as well as the bandwidth value given by likelihood cross-validation, were computed using the R package spatstat (Baddeley and Turner, 2005). We provide more simulation results in the supplementary material.
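As an illustration, the metrics (17)-(19) can be computed in a few lines of R in one dimension. The sketch below is our own and assumes observed points s, uniform testing points z with posterior-mean intensities lambda_hat_z on [0, 1), and that every subarea contains at least one testing point.

metrics <- function(s, z, lambda_hat_z, N_S = 50) {
  breaks <- seq(0, 1, length.out = N_S + 1)
  N_obs  <- tabulate(findInterval(s, breaks), nbins = N_S)  # actual counts N(S_i)
  bin    <- findInterval(z, breaks)
  # Eq. (19): hat N(S_i) = |S_i| * average of lambda_hat over testing points in S_i
  N_hat  <- sapply(seq_len(N_S), function(i) mean(lambda_hat_z[bin == i]) / N_S)
  c(AAE  = mean(abs(N_hat - N_obs)),                        # Eq. (17)
    RISE = sqrt(mean((N_hat - N_obs)^2)))                   # Eq. (18)
}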
Earthquakes Data
This data set is available online from the Earthquake Hazards Program and consists of the times of 1088 earthquakes from 2-3-2020 to 1-4-2020. We consider the period from 27-2-2020 to 5-4-2020 to avoid edge effects. We run 3 parallel chains of the same length for 100000 iterations for 3-10 trees. The convergence criteria included in the supplementary material indicate that the considered chains have converged. Figure 7 presents the posterior mean and the posterior median for 5 trees, as well as the intensity estimate of the Haar-Fisz algorithm applied to the counts in 128 consecutive intervals of equal length. The deterministic discretized intensity of the R package haarfisz is divided by the duration of an interval. The differences between the two algorithms are due to their different assumptions: the Haar-Fisz algorithm considers the counts aggregated into disjoint subintervals of the domain, while the proposed algorithm uses the times of individual events. The most noticeable difference is observed between 2020.212 and 2020.213 (the 69th interval), where the number of earthquakes jumps from 5 to 33 and back to 7. The Haar-Fisz algorithm detects that peak because we feed it with that information, while the proposed algorithm does not indicate a sharp rise in the intensity in that period, treating it as an outlier. The intensity estimate of the Haar-Fisz algorithm applied to 64 consecutive intervals is closer to the proposed algorithm (see Figure 8), as expected: similarly to coarser binning, the proposed algorithm is less prone to overfitting to spikes in the data, which get filtered out. The estimated AAE and RISE demonstrate good performance compared to the Haar-Fisz method. The simulation results illustrate that our algorithm can track the varying intensity of earthquakes.
The diagnostics $D_g$, $D_l$ and $D_{LPO}$ attain their highest values at 9, 3 and 8 trees, respectively. Since the AIC diagnostic values between 3 and 9 trees show only small variations, we choose 5 trees for the analysis, noting that the results do not vary significantly for other choices of m in this region.
Lansing Data
The lansing data set, included in the R package spatstat, describes the locations of different types of trees in the Lansing Woods forest. We restrict our attention to the locations of 514 maples, shown as dots in Figures 9-10. We run 3 parallel chains of the same length for 200000 iterations for 3-10 trees and 100000 iterations for 12 trees. The diagnostic criteria included in the supplementary material indicate that the considered chains have converged for the majority of testing points.
We compare our algorithm to a fixed-bandwidth estimator using a Gaussian kernel. Our algorithm and the kernel estimator are consistent in the overall structure; the differences are due to the different nature of the methods. Given the tree locations, our algorithm recovers the spatial pattern of trees as rectangular regions of different intensities (Fig. 9), whereas the kernel method produces a continuum with more localized peaks in space. As expected, the kernel estimator presented in Figure 10 consists of smoother subregions with various intensities. Tables 21-23 show that our algorithm is competitive with kernel smoothing with a fixed bandwidth chosen by likelihood cross-validation. In contrast to our method, kernel methods are highly sensitive to the choice of the bandwidth parameter.
The diagnostics D g and D l obtain their highest values at 4 and 10 trees, respectively.
Discussion and Future Work
In this article, we have studied how the Bayesian Additive Regression Trees (BART) model can be applied to estimating the intensity of Poisson processes. The BART framework provides a flexible non-parametric approach to capturing non-linear and additive effects in the underlying functional form of the intensity. Our numerical experiments show that our algorithm provides good approximations of the intensity with ensembles of fewer than 10 trees, and that it can detect the dimensions contributing most to the intensity. The ability of our method to identify important features demonstrates an important advantage over other procedures.
Our approach enables full posterior inference of the intensity in a non-parametric regression setting, and the method extends easily to higher dimensional settings. The simulation study on synthetic data sets shows that our algorithm can detect change points and provides good estimates of the intensity via either the posterior mean or the posterior median. Our algorithm is competitive with the Haar-Fisz algorithm and kernel methods in one and two dimensions, and with inference using spatial log-Gaussian Cox processes. The strength of our method is its performance in higher dimensions, where we demonstrate that it outperforms the kernel approach for multidimensional intensities. We also demonstrate that our inference for the intensity is consistent with the variability of the rate of events in real and synthetic data. The convergence criteria included in the supplementary material indicate good convergence of the considered chains. We ran each chain for at least 100000 iterations to increase our confidence in the results; however, our algorithm works well with considerably fewer iterations (around 10000). The BART model assumes independence of the underlying tree structures. The alternative method of Sardy and Tseng (2004) makes use of a locally dependent Markov Random Field, and one way of extending our model in this direction is to consider neighbouring intensities, following Chipman et al. (2021).
Our method has only considered the standard priors commonly used in BART procedures; an interesting avenue of future research would be to implement different prior assumptions. In addition, we have fixed the parameters of the Galton-Watson prior on the trees, and further work on sensitivity to hyperparameter selection and on alternative methods for inference of the hyperparameters is of interest. Currently, our model is limited to non-homogeneous Poisson processes, and we believe the flexibility of the BART approach could be extended to more general point processes.
Appendix A. Metropolis Hastings Proposals
We describe the proposals of Algorithm 2. The Hastings ratio can be expressed as the product of three terms (Kapelner and Bleich, 2016):
• Transition Ratio:
$$\mathrm{TR} = \frac{q(T_j^{(t)} \mid T_j^*)}{q(T_j^* \mid T_j^{(t)})}$$
• Likelihood Ratio:
$$\mathrm{LR} = \frac{P(s \mid T_j^*, T_{(j)}, \Lambda_{(j)})}{P(s \mid T_j^{(t)}, T_{(j)}, \Lambda_{(j)})}$$
• Tree Structure Ratio:
$$\mathrm{TSR} = \frac{P(T_j^*)}{P(T_j^{(t)})}$$

Appendix A.1. GROW Proposal
This proposal randomly picks a terminal node, splits the chosen terminal into two new nodes and assigns a decision rule to it.
Let η be the randomly picked terminal node in tree T (t) j . We denote the new nodes as η L and η R . We now derive the expressions for the transition ratio (TR), tree structure ratio (TSR) and likelihood ratio (LR).
Transition Ratio. It holds that:

(i) $q(T_j^* \mid T_j^{(t)})$ = P(GROW) × P(selecting a leaf η to grow from) × P(selecting an available dimension to split on) × P(selecting the splitting value given the chosen dimension to split on) = $P(\mathrm{GROW})\,\frac{1}{b_j}\,\frac{1}{\mathrm{card}(k_\eta)}\,\frac{1}{\mathrm{card}(\tau_\eta)}$, where $b_j$ is the number of terminal nodes in the tree $T_j^{(t)}$, $k_\eta$ the set of all available dimensions to split the node η, $\tau_\eta$ the set of all available splitting values given the chosen dimension for splitting the node η, and card(S) the cardinality of a set S.
(ii) $q(T_j^{(t)} \mid T_j^*)$ = P(PRUNE) × P(selecting a node η having two terminal nodes to prune from) = $P(\mathrm{PRUNE})\,\frac{1}{w^*}$, where $w^*$ is the number of internal nodes with two terminal nodes as children in the tree $T_j^*$. Hence the transition ratio is given by
$$\mathrm{TR} = \frac{P(\mathrm{PRUNE})\,\frac{1}{w^*}}{P(\mathrm{GROW})\,\frac{1}{b_j}\,\frac{1}{\mathrm{card}(k_\eta)}\,\frac{1}{\mathrm{card}(\tau_\eta)}}.$$
Tree Structure Ratio. The proposed tree $T_j^*$ differs from the tree $T_j^{(t)}$ only by the two offspring $\eta_L$ and $\eta_R$. Thus the tree structure ratio is:
$$\mathrm{TSR} = \frac{P(T_j^*)}{P(T_j^{(t)})} = \frac{\big(1 - p_{\mathrm{SPLIT}}(\eta_L)\big)\big(1 - p_{\mathrm{SPLIT}}(\eta_R)\big)\, p_{\mathrm{SPLIT}}(\eta)\, p_{\mathrm{RULE}}(\eta)}{1 - p_{\mathrm{SPLIT}}(\eta)} = \frac{\Big(1 - \frac{\gamma}{(1+d(\eta_L))^\delta}\Big)\Big(1 - \frac{\gamma}{(1+d(\eta_R))^\delta}\Big)\, \frac{\gamma}{(1+d(\eta))^\delta}\, \frac{1}{\mathrm{card}(k_\eta)}\, \frac{1}{\mathrm{card}(\tau_\eta)}}{1 - \frac{\gamma}{(1+d(\eta))^\delta}},$$
where $p_{\mathrm{SPLIT}}(\eta)$ is the splitting probability for a node η and $p_{\mathrm{RULE}}(\eta)$ the distribution of the decision rule associated with node η.
Likelihood Ratio. The likelihood ratio is obtained by applying equation 8 twice, once for the proposed tree $T_j^*$ (numerator) and once for the tree of the current iteration t, $T_j^{(t)}$ (denominator), which simplifies as follows:
$$\mathrm{LR} = \frac{\beta^\alpha}{\Gamma(\alpha)}\, \frac{\frac{\Gamma(n_{j\eta_L}+\alpha)}{(c_{j\eta_L}+\beta)^{n_{j\eta_L}+\alpha}}\, \frac{\Gamma(n_{j\eta_R}+\alpha)}{(c_{j\eta_R}+\beta)^{n_{j\eta_R}+\alpha}}}{\frac{\Gamma(n_{j\eta}+\alpha)}{(c_{j\eta}+\beta)^{n_{j\eta}+\alpha}}} = \frac{\beta^\alpha}{\Gamma(\alpha)}\, \frac{\Gamma(n_{j\eta_L}+\alpha)\,\Gamma(n_{j\eta_R}+\alpha)}{\Gamma(n_{j\eta}+\alpha)}\, \frac{(c_{j\eta}+\beta)^{n_{j\eta}+\alpha}}{(c_{j\eta_L}+\beta)^{n_{j\eta_L}+\alpha}\,(c_{j\eta_R}+\beta)^{n_{j\eta_R}+\alpha}}$$
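Numerically, this ratio is best evaluated on the log scale. A small R helper, with argument names of our own choosing (leaf counts n and constants c for the parent η and its proposed children), could be:

log_LR_grow <- function(n_eta, c_eta, n_L, c_L, n_R, c_R, alpha, beta) {
  # log of the GROW likelihood ratio above
  alpha * log(beta) - lgamma(alpha) +
    lgamma(n_L + alpha) + lgamma(n_R + alpha) - lgamma(n_eta + alpha) +
    (n_eta + alpha) * log(c_eta + beta) -
    (n_L + alpha) * log(c_L + beta) - (n_R + alpha) * log(c_R + beta)
}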
Appendix A.2. PRUNE Proposal

This proposal randomly picks a parent of two terminal nodes and turns it into a terminal node by collapsing the nodes below it.
Let η be the picked parent of two terminal nodes, and let y and c be the dimension and splitting value of the rule linked to the node η.
Transition Ratio. It holds that:

(i) $q(T_j^* \mid T_j^{(t)})$ = P(PRUNE) × P(selecting a parent of two terminal nodes to prune from) = $P(\mathrm{PRUNE})\,\frac{1}{w}$, where $w$ is the number of internal nodes with two terminal nodes as children in the tree $T_j^{(t)}$.

(ii) $q(T_j^{(t)} \mid T_j^*)$ = P(GROW) × P(selecting the node η to grow from) × P(selecting the dimension y) × P(selecting the splitting value c given the chosen dimension y) = $P(\mathrm{GROW})\,\frac{1}{w^*}\,\frac{1}{\mathrm{card}(k_\eta)}\,\frac{1}{\mathrm{card}(\tau_\eta)}$, where $w^*$ is the number of terminal nodes in the tree $T_j^*$, $k_\eta$ the set of all available dimensions to split the node η, and $\tau_\eta$ the set of all available splitting values given the chosen dimension y for splitting the node η.
Hence the transition ratio is given by
$$\mathrm{TR} = \frac{P(\mathrm{GROW})\,\frac{1}{w^*}\,\frac{1}{\mathrm{card}(k_\eta)}\,\frac{1}{\mathrm{card}(\tau_\eta)}}{P(\mathrm{PRUNE})\,\frac{1}{w}}.$$
Tree Structure Ratio. The proposed tree differs by not having the two children nodes $\eta_L$ and $\eta_R$. Thus the tree structure ratio is:
$$\mathrm{TSR} = \frac{P(T_j^*)}{P(T_j^{(t)})} = \frac{1 - p_{\mathrm{SPLIT}}(\eta)}{\big(1 - p_{\mathrm{SPLIT}}(\eta_L)\big)\big(1 - p_{\mathrm{SPLIT}}(\eta_R)\big)\, p_{\mathrm{SPLIT}}(\eta)\, p_{\mathrm{RULE}}(\eta)} = \frac{1 - \frac{\gamma}{(1+d(\eta))^\delta}}{\Big(1 - \frac{\gamma}{(1+d(\eta_L))^\delta}\Big)\Big(1 - \frac{\gamma}{(1+d(\eta_R))^\delta}\Big)\, \frac{\gamma}{(1+d(\eta))^\delta}\, \frac{1}{\mathrm{card}(k_\eta)}\, \frac{1}{\mathrm{card}(\tau_\eta)}}$$
Likelihood Ratio. Similarly to the GROW proposal, the likelihood ratio can be written as follows:
$$\mathrm{LR} = \left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right)^{-1} \frac{\frac{\Gamma(n_{j\eta}+\alpha)}{(c_{j\eta}+\beta)^{n_{j\eta}+\alpha}}}{\frac{\Gamma(n_{j\eta_L}+\alpha)}{(c_{j\eta_L}+\beta)^{n_{j\eta_L}+\alpha}}\, \frac{\Gamma(n_{j\eta_R}+\alpha)}{(c_{j\eta_R}+\beta)^{n_{j\eta_R}+\alpha}}} = \left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right)^{-1} \frac{\Gamma(n_{j\eta}+\alpha)}{\Gamma(n_{j\eta_L}+\alpha)\,\Gamma(n_{j\eta_R}+\alpha)}\, \frac{(c_{j\eta_L}+\beta)^{n_{j\eta_L}+\alpha}\,(c_{j\eta_R}+\beta)^{n_{j\eta_R}+\alpha}}{(c_{j\eta}+\beta)^{n_{j\eta}+\alpha}}$$
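Since PRUNE is the exact reverse of GROW, its likelihood ratio is the reciprocal of the GROW one, so on the log scale it can simply reuse the helper sketched above:

log_LR_prune <- function(n_eta, c_eta, n_L, c_L, n_R, c_R, alpha, beta) {
  -log_LR_grow(n_eta, c_eta, n_L, c_L, n_R, c_R, alpha, beta)
}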
Appendix A.3. CHANGE Proposal
This proposal randomly picks an internal node and randomly reassigns a splitting rule to it. Let η be the picked internal node, having rule y < c and children denoted $\eta_R$ and $\eta_L$. We assume that $y^* < c^*$ is its newly assigned rule in the proposed tree $T_j^*$. Following Kapelner and Bleich (2016), for simplicity we restrict to picking an internal node having two terminal nodes as children.
Transition Ratio. It holds that:

(i) $q(T_j^* \mid T_j^{(t)})$ = P(CHANGE) × P(selecting an internal node η to change) × P(selecting the new dimension $y^*$ to split on) × P(selecting the new splitting value $c^*$ given the chosen dimension $y^*$)

(ii) $q(T_j^{(t)} \mid T_j^*)$ = P(CHANGE) × P(selecting the node η to change) × P(selecting the dimension y to split on) × P(selecting the splitting value c given the chosen dimension y)
Thus the transition ratio is
$$\mathrm{TR} = \frac{P(\text{selecting } c \text{ to split on given the chosen dimension } y)}{P(\text{selecting } c^* \text{ to split on given the chosen dimension } y^*)}.$$
Tree Structure Ratio. The two trees differ only in the splitting rule at node η. Thus we have
$$\mathrm{TSR} = \frac{P(T_j^*)}{P(T_j^{(t)})} = \frac{p_{\mathrm{SPLIT}}(\eta)\, p_{\mathrm{RULE}}(\eta \mid T_j^*)}{p_{\mathrm{SPLIT}}(\eta)\, p_{\mathrm{RULE}}(\eta \mid T_j^{(t)})} = \frac{P(\text{selecting } y^*)\, P(\text{selecting } c^* \text{ given } y^*)}{P(\text{selecting } y)\, P(\text{selecting } c \text{ given } y)} = \frac{P(\text{selecting } c^* \text{ given } y^*)}{P(\text{selecting } c \text{ given } y)}.$$
It then follows that TR × TSR = 1, and hence only the likelihood ratio needs to be found to obtain the Hastings ratio.
Likelihood Ratio. Let $n^*_L = n^{(T_j^*)}_{j\eta_L}$, $n^*_R = n^{(T_j^*)}_{j\eta_R}$, $c^*_L = c^{(T_j^*)}_{j\eta_L}$, $c^*_R = c^{(T_j^*)}_{j\eta_R}$, $n^{(t)}_L = n^{(T_j^{(t)})}_{j\eta_L}$, $n^{(t)}_R = n^{(T_j^{(t)})}_{j\eta_R}$, $c^{(t)}_L = c^{(T_j^{(t)})}_{j\eta_L}$ and $c^{(t)}_R = c^{(T_j^{(t)})}_{j\eta_R}$, where the superscripts $(T_j^*)$ and $(T_j^{(t)})$ indicate that the corresponding quantities are computed on the trees $T_j^*$ and $T_j^{(t)}$, respectively. Following the previous proposals, the likelihood ratio is
$$\mathrm{LR} = \frac{\frac{\Gamma(n^*_L+\alpha)}{(c^*_L+\beta)^{n^*_L+\alpha}}\, \frac{\Gamma(n^*_R+\alpha)}{(c^*_R+\beta)^{n^*_R+\alpha}}}{\frac{\Gamma(n^{(t)}_L+\alpha)}{(c^{(t)}_L+\beta)^{n^{(t)}_L+\alpha}}\, \frac{\Gamma(n^{(t)}_R+\alpha)}{(c^{(t)}_R+\beta)^{n^{(t)}_R+\alpha}}} = \frac{(c^{(t)}_L + \beta)^{n^{(t)}_L+\alpha}\,(c^{(t)}_R + \beta)^{n^{(t)}_R+\alpha}}{(c^*_L + \beta)^{n^*_L+\alpha}\,(c^*_R + \beta)^{n^*_R+\alpha}}\, \frac{\Gamma(n^*_L + \alpha)\,\Gamma(n^*_R + \alpha)}{\Gamma(n^{(t)}_L + \alpha)\,\Gamma(n^{(t)}_R + \alpha)}.$$
Appendix B. The Poisson Process conditional likelihood
Let us consider a finite realization of an inhomogeneous Poisson process with n points s. Given the tree components $(T, \Lambda)$, and approximating the intensity at a point $s_i \in S$ by a product of m trees, $\lambda(s_i) = \prod_{j=1}^{m} g(s_i; T_j, \Lambda_j)$, the likelihood is:
$$P(s \mid \Lambda, T) = \prod_{i=1}^{n} \lambda(s_i)\, \exp\left(-\int_S \lambda(s)\,ds\right) = \prod_{i=1}^{n} \prod_{j=1}^{m} g(s_i; T_j, \Lambda_j)\, \exp\left(-\int_S \prod_{j=1}^{m} g(s; T_j, \Lambda_j)\,ds\right). \tag{B.1}$$
The first term of the above equation can be written as follows:
$$\prod_{i=1}^{n} \prod_{j=1}^{m} g(s_i; T_j, \Lambda_j) = \prod_{i=1}^{n} \prod_{\substack{j=1 \\ j\neq h}}^{m} g(s_i; T_j, \Lambda_j)\, g(s_i; T_h, \Lambda_h) = \left(\prod_{i=1}^{n} \prod_{\substack{j=1 \\ j\neq h}}^{m} g(s_i; T_j, \Lambda_j)\right) \prod_{i=1}^{n} g(s_i; T_h, \Lambda_h) = c_h \prod_{t=1}^{b_h} \lambda_{ht}^{n_{ht}},$$
where $c_h = \prod_{i=1}^{n} \prod_{j=1, j\neq h}^{m} g(s_i; T_j, \Lambda_j)$ and $n_{ht}$ is the cardinality of the set $\{i : s_i \in \Omega_{ht}\}$. The exponential term of (B.1) can be expressed as:
$$\exp\left(-\int_S \prod_{j=1}^{m} g(s; T_j, \Lambda_j)\,ds\right) = \exp\left(-\int_S \prod_{\substack{j=1 \\ j\neq h}}^{m} g(s; T_j, \Lambda_j)\, g(s; T_h, \Lambda_h)\,ds\right) = \exp\left(-\int_S \sum_{t=1}^{b_h} \lambda_{ht}\, I(s \in \Omega_{ht}) \prod_{\substack{j=1 \\ j\neq h}}^{m} g(s; T_j, \Lambda_j)\,ds\right)$$
Tonelli's theorem allows the change of order between summation and integral.
$$\exp\left(-\int_S \prod_{j=1}^{m} g(s; T_j, \Lambda_j)\,ds\right) = \exp\left(-\sum_{t=1}^{b_h} \lambda_{ht} \int_S \prod_{\substack{j=1 \\ j\neq h}}^{m} g(s; T_j, \Lambda_j)\, I(s \in \Omega_{ht})\,ds\right) = \exp\left(-\sum_{t=1}^{b_h} \lambda_{ht}\, c_{ht}\right),$$
where $c_{ht} = \int_S \prod_{j=1, j\neq h}^{m} g(s; T_j, \Lambda_j)\, I(s \in \Omega_{ht})\,ds$. Let $T^{(h)} = \{T_j\}_{j=1, j\neq h}^{m}$ be the ensemble of trees not including the tree $T_h$, which defines the global partition $\{\Omega^{(h)}_k\}_{k=1}^{K(T^{(h)})}$ obtained by merging all cuts in $\{T_j\}_{j=1, j\neq h}^{m}$. Giving
$$\prod_{\substack{j=1 \\ j\neq h}}^{m} g(s; T_j, \Lambda_j) = \sum_{k=1}^{K(T^{(h)})} \lambda^{(h)}_k\, I(s \in \Omega^{(h)}_k), \qquad \text{where } \lambda^{(h)}_k = \prod_{\substack{t=1 \\ t\neq h}}^{m} \prod_{l=1}^{b_t} \lambda_{tl}^{I(\Omega_{tl} \cap \Omega^{(h)}_k \neq \emptyset)},$$
leading to the following expression for $c_{ht}$:
$$c_{ht} = \int_S \prod_{\substack{j=1 \\ j\neq h}}^{m} g(s; T_j, \Lambda_j)\, I(s \in \Omega_{ht})\,ds = \int_S \sum_{k=1}^{K(T^{(h)})} \lambda^{(h)}_k\, I(s \in \Omega^{(h)}_k)\, I(s \in \Omega_{ht})\,ds = \sum_{k=1}^{K(T^{(h)})} \lambda^{(h)}_k \int_S I(s \in \Omega^{(h)}_k \cap \Omega_{ht})\,ds = \sum_{k=1}^{K(T^{(h)})} \lambda^{(h)}_k\, |\Omega^{(h)}_k \cap \Omega_{ht}|,$$
where $|\Omega^{(h)}_k \cap \Omega_{ht}|$ is the volume of the region $\Omega^{(h)}_k \cap \Omega_{ht}$.
Hence the conditional likelihood can be written as follows:
$$P(s \mid \Lambda, T) = c_h \prod_{t=1}^{b_h} \lambda_{ht}^{n_{ht}}\, e^{-\lambda_{ht} c_{ht}}.$$
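Because the fitted intensity is piecewise constant, the log of this conditional likelihood reduces to a sum over regions. A minimal R sketch with names of our own choosing, omitting the constant log c_h, is:

# counts[t]: number of points in region Omega_ht; rates[t]: lambda_ht;
# consts[t]: the constants c_ht defined above
loglik_pp <- function(counts, rates, consts) {
  sum(counts * log(rates) - rates * consts)
}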
Appendix C. The conditional integrated likelihood
The conditional integrated likelihood is given by underpinned by a tree-shaped partition T = {Ω k } b k=1 where b is the number of terminal nodes in the tree T . Each leaf node k associated to region Ω k is linked with a parameter λ k . All parameters λ k are collected in the vector Λ = (λ 1 , λ 2 , .., λ b ). The parameters of the model are 1. the regression tree T 2. the parameters Λ = (λ 1 , λ 2 , .., λ b ).
$$P(s \mid T_h, T^{(h)}, \Lambda^{(h)}) = \int_0^\infty P(s, \Lambda_h \mid T_h, T^{(h)}, \Lambda^{(h)})\,d\Lambda_h = \int_0^\infty P(s \mid \Lambda, T)\, P(\Lambda_h \mid T_h, T^{(h)}, \Lambda^{(h)})\,d\Lambda_h$$
$$= c_h \int_0^\infty \cdots \int_0^\infty \prod_{t=1}^{b_h} \lambda_{ht}^{n_{ht}}\, e^{-\lambda_{ht} c_{ht}} \prod_{t=1}^{b_h} \frac{\beta^\alpha}{\Gamma(\alpha)}\, e^{-\beta \lambda_{ht}}\, \lambda_{ht}^{\alpha-1}\, d\lambda_{h1} \cdots d\lambda_{h b_h}$$
$$= c_h \left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right)^{b_h} \prod_{t=1}^{b_h} \int_0^\infty \lambda_{ht}^{n_{ht}+\alpha-1}\, e^{-(c_{ht}+\beta)\lambda_{ht}}\, d\lambda_{ht} = c_h \left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right)^{b_h} \prod_{t=1}^{b_h} \frac{\Gamma(n_{ht}+\alpha)}{(c_{ht}+\beta)^{n_{ht}+\alpha}}$$
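For use inside the Metropolis-Hastings ratio, the logarithm of this integrated likelihood (again up to the constant log c_h) is convenient; a hedged R sketch:

log_int_lik <- function(n_ht, c_ht, alpha, beta) {
  length(n_ht) * (alpha * log(beta) - lgamma(alpha)) +
    sum(lgamma(n_ht + alpha) - (n_ht + alpha) * log(c_ht + beta))
}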
Appendix D. The model for the case of one tree

The proposed model in the case of a single tree can be written as follows:
$$\lambda(s_i) = g(s_i; T, \Lambda) = \sum_{k=1}^{b} \lambda_k\, I(s_i \in \Omega_k), \qquad T \sim \text{heterogeneous Galton-Watson process for a partition of } S, \qquad \lambda_k \mid T \sim \mathrm{Gamma}(\alpha, \beta),$$
underpinned by a tree-shaped partition $T = \{\Omega_k\}_{k=1}^{b}$, where b is the number of terminal nodes in the tree T. Each leaf node k, associated with the region $\Omega_k$, is linked with a parameter $\lambda_k$; all parameters $\lambda_k$ are collected in the vector $\Lambda = (\lambda_1, \lambda_2, \dots, \lambda_b)$. The parameters of the model are (1) the regression tree T and (2) the parameters $\Lambda = (\lambda_1, \lambda_2, \dots, \lambda_b)$. We assume that the leaf parameters are independent, i.e., $P(\Lambda \mid T) = \prod_{k=1}^{b} P(\lambda_k \mid T)$.
Appendix D.1. Poisson Process conditional likelihood
The conditional likelihood of a finite realization of an inhomogeneous Poisson process with n points $s_1, \dots, s_n$ is derived by describing $\lambda(s)$ using one tree $(T, \Lambda)$ as $\lambda(s) = g(s; T, \Lambda)$:
$$P(s_1, \dots, s_n \mid \Lambda, T) = \prod_{i=1}^{n} \lambda(s_i)\, \exp\left(-\int_S g(s; T, \Lambda)\,ds\right) = \prod_{k=1}^{b} \lambda_k^{n_k}\, \exp\left(-\int_S g(s; T, \Lambda)\,ds\right), \tag{D.1}$$
where $n_k$ is the cardinality of the set $\{i : s_i \in \Omega_k\}$. The exponential term of (D.1) can be expressed as follows:
$$\exp\left(-\int_S g(s; T, \Lambda)\,ds\right) = \exp\left(-\int_S \sum_{k=1}^{b} \lambda_k\, I(s \in \Omega_k)\,ds\right) = \exp\left(-\sum_{k=1}^{b} \lambda_k\, |\Omega_k|\right),$$
where $|\Omega_k|$ is the volume of the region $\Omega_k$. Hence the conditional likelihood can be written as
$$P(s_1, \dots, s_n \mid \Lambda, T) = \prod_{k=1}^{b} \lambda_k^{n_k}\, e^{-\lambda_k |\Omega_k|}.$$
Noting that $P(T \mid s_1, \dots, s_n) \propto P(s_1, \dots, s_n \mid T)\, P(T)$, the integrated likelihood (integrating out the parameters Λ) is:
$$P(s_1, \dots, s_n \mid T) = \int P(s_1, \dots, s_n, \Lambda \mid T)\,d\Lambda = \int P(s_1, \dots, s_n \mid \Lambda, T)\, P(\Lambda \mid T)\,d\Lambda = \left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right)^{b} \prod_{k=1}^{b} \int \lambda_k^{n_k+\alpha-1}\, e^{-(|\Omega_k|+\beta)\lambda_k}\,d\lambda_k = \left(\frac{\beta^\alpha}{\Gamma(\alpha)}\right)^{b} \prod_{k=1}^{b} \frac{\Gamma(n_k+\alpha)}{(\beta + |\Omega_k|)^{n_k+\alpha}}. \tag{D.3}$$
In the tree sampling Algorithm 4, the transition kernel q is chosen from the three proposals GROW, PRUNE and CHANGE (Chipman et al., 2010; Kapelner and Bleich, 2016), and Eq. (D.3) allows us to compute the Metropolis-Hastings ratio used to accept or reject the proposal.
Appendix D.2. Inference Algorithm

Inference on the model parameters $(\Lambda, T)$ requires sampling from the posterior $P(\Lambda, T \mid s_1, \dots, s_n)$; the Metropolis-Hastings within Gibbs sampler of Algorithm 3 is proposed for this purpose. Noting that $P(\Lambda, T \mid s_1, \dots, s_n) = P(\Lambda \mid T, s_1, \dots, s_n)\, P(T \mid s_1, \dots, s_n)$ and $P(\Lambda \mid T, s_1, \dots, s_n) \propto P(s_1, \dots, s_n \mid \Lambda, T)\, P(\Lambda \mid T) \propto \prod_{k=1}^{b} \lambda_k^{n_k + \alpha - 1}\, e^{-(|\Omega_k| + \beta)\lambda_k}$, a draw from $(T, \Lambda) \mid s_1, \dots, s_n$ can be achieved in (b+1) successive steps:

• sample $T \mid n, s_1, \dots, s_n$ through the Metropolis-Hastings algorithm summarized in Algorithm 4;
• sample $\lambda_k \mid T, n, s_1, \dots, s_n$ from a Gamma distribution with shape $n_k + \alpha$ and rate $|\Omega_k| + \beta$, for $k = 1, \dots, b$.

Algorithm 3 Proposed Algorithm: Metropolis-Hastings within Gibbs sampler
  for t = 1, 2, 3, ... do
    Sample T^(t+1) | s_1, ..., s_n
    for k = 1 to b do
      Sample λ_k^(t+1) | s_1, ..., s_n, T^(t+1)
    end for
  end for

Algorithm 4 Metropolis-Hastings algorithm for sampling from the posterior P(T | s_1, ..., s_n)
  Generate a candidate value T* with probability q(T* | T^(t)).
  Set T^(t+1) = T* with probability
  $$\alpha(T^{(t)}, T^*) = \min\left(1,\; \frac{q(T^{(t)} \mid T^*)}{q(T^* \mid T^{(t)})}\, \frac{P(s_1, \dots, s_n \mid T^*)}{P(s_1, \dots, s_n \mid T^{(t)})}\, \frac{P(T^*)}{P(T^{(t)})}\right).$$
  Otherwise, set T^(t+1) = T^(t).
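One sweep of Algorithm 3 then combines a Metropolis-Hastings tree update with conjugate Gamma draws for the leaf rates. The R sketch below assumes a tree object carrying its leaf counts n, leaf volumes vol and number of leaves b, and hypothetical helpers propose() and log_post_T() implementing Algorithm 4; none of these names come from the paper.

gibbs_sweep <- function(tree, alpha, beta) {
  # Metropolis-Hastings step for the tree structure (Algorithm 4)
  prop <- propose(tree)  # GROW / PRUNE / CHANGE; prop$log_q_ratio = log q(T_t|T*) - log q(T*|T_t)
  log_acc <- log_post_T(prop$tree) - log_post_T(tree) + prop$log_q_ratio
  if (log(runif(1)) < log_acc) tree <- prop$tree
  # Conjugate Gibbs update: lambda_k | T, s ~ Gamma(n_k + alpha, |Omega_k| + beta)
  tree$lambda <- rgamma(tree$b, shape = tree$n + alpha, rate = tree$vol + beta)
  tree
}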
Appendix E. Simulation results on synthetic data with various number of sampling iterations
In this appendix we show that our algorithm works equally well with 10000 iterations, by running three parallel chains, examining their convergence, and assessing the performance of our algorithm via the AAE and RMSE of the computed estimates over various numbers of iterations. We also check the convergence of the chains using the Gelman-Rubin criterion in all cases.

Appendix E.2. One dimensional Poisson Process with continuously varying intensity

Table E.25 shows that increasing the number of iterations does not essentially change the error for the synthetic data presented in Appendix G.1. The convergence criterion indicates that even for a small number of iterations the chains converge for 10 trees; for 5 trees they converge for the majority of the range (Figure E.12).

Appendix E.3. Two dimensional Poisson process with stepwise intensity function

Likewise, we do not observe significant improvement in AAE and RMSE beyond 10000 iterations (see Table E.26). Moreover, increasing the number of iterations does not fix the convergence issues at points close to jumps (see Figure E.13).
Appendix E.4. Inhomogeneous two dimensional Poisson Process with Gaussian intensity
Similarly to the above scenarios, the errors with 10000 iterations are already comparable to those obtained with a larger number of iterations (see Table E.27). Figure E.14 shows that the chains converge for 10 trees even when we consider a relatively small number of iterations; the same holds for the majority of testing points for 8 trees. The algorithm only provides less accurate estimates at testing points close to the upper end of the domain for 8 trees and a relatively small number of iterations.
Appendix G. Simulation Study on Synthetic Data
Appendix G.1. One dimensional Poisson Process with continuously varying intensity

We have applied our algorithm to samples of a one dimensional Poisson process with intensity $\lambda(x) = 20 e^{-x/5}(5 + 4\cos(x))$ for $x \in [0, 10]$. Figure G.18 and Tables G.28-G.29 show that the algorithm works well on a smoothly varying intensity with fewer sample points and outperforms the Haar-Fisz estimator over the majority of the range. The convergence criteria indicate convergence of the simulated chains for 10 trees and for most testing points for 5 trees (see supplementary material and Fig. G.18).

Appendix G.2. Inhomogeneous two-dimensional Poisson Process with Gaussian intensity

We also considered a two-dimensional Poisson process with intensity $\lambda(x, y) = 1000\, e^{x^2+y^2}$ for $x, y \in [0, 1)$. The outcomes of the algorithm, log-Gaussian Cox processes (LGCP) and kernel smoothing are illustrated in Figures G.19-G.20 and Tables G.30-G.32. The results demonstrate that the proposed algorithm performs well in this setting and is competitive with the kernel method and spatial log-Gaussian Cox processes. In this scenario, the hyperparameter β has been set equal to 1. The convergence criteria indicate convergence of the simulated chains (see also Appendix J).
Appendix G.3. Inhomogeneous five dimensional Poisson Process with Gaussian intensity
Our next example is a five dimensional Poisson process with intensity $\lambda(x) = 50 e^{x^T x}$ for $x \in [0, 1)^5$; the process generated via thinning consists of 343 points. The statistics are presented in Tables G.33 and G.34 for our algorithm and kernel smoothing, respectively. We have checked that the Gelman-Rubin criterion indicates convergence of the chains.
Eq. (4) becomes $\lambda(s_i) = \prod_{h=1}^{m} g(s_i; T_h, \Lambda_h)$. Let us choose any arbitrary tree $T_h$ in our ensemble T, and let us denote the set with the rest of the trees as $T^{(h)} = \{T_j\}_{j=1, j\neq h}^{m}$ and their leaf parameters as $\Lambda^{(h)} = \{\Lambda_j\}_{j=1, j\neq h}^{m}$. The intersection of all the partitions associated with the trees in $T^{(h)}$ gives us a global partition $\{\Omega^{(h)}_k\}_{k=1}^{K(T^{(h)})}$, with $\lambda^{(h)}_k = \prod_{t=1, t\neq h}^{m} \prod_{l=1}^{b_t} \lambda_{tl}^{I(\Omega_{tl} \cap \Omega^{(h)}_k \neq \emptyset)}$; $n_{ht}$ is the cardinality of the set $\{i : s_i \in \Omega_{ht}\}$, and $|\Omega^{(h)}_k \cap \Omega_{ht}|$ is the volume of the region $\Omega^{(h)}_k \cap \Omega_{ht}$.
Figure 1: The original intensity (blue curve), the posterior mean (red curve), the posterior median (black curve), the 95% hdi interval of the estimated intensity illustrated by the dotted green lines and the Haar-Fisz estimator (cyan curve). The rug plot on the bottom displays the 3590 event times.
Figure 2: Original intensity, posterior mean and posterior median for 4 trees.
Figure 3: Kernel estimator and inference with spatial log-Gaussian Cox processes.
Figures 4 and 5 show our estimators and the kernel estimator with h = 0.073 for 8 and 10 trees, with the third dimension (x[3]) fixed at 0.4 and 0.8, respectively.
Figure 4: Kernel estimator and posterior median for 8 and 10 trees with x[3] = 0.4.
Figure 5: Kernel estimator and posterior median for 8 and 10 trees with x[3] = 0.8.
Panels of Figure 6: the posterior mean of $\lambda(x_i)$ (red line; 95% CI, green line) and the true expected value of $\lambda(x_i)$ (black line), for $i = 1, \dots, 5$.
Figure 6: Posterior marginal intensities considering 4 trees.
Figure 7: Earthquakes Data: the posterior mean (red curve), the posterior median (black curve), the 95% hdi interval of the estimated intensity illustrated by the dotted green lines and the intensity estimator of the Haar-Fisz algorithm illustrated by the blue line. The rug plot on the bottom displays the event times.
Figure 8: Earthquakes Data: the posterior mean (red curve), the posterior median (black curve), the 95% hdi interval of the estimated intensity illustrated by the dotted green lines and the intensity estimator of the Haar-Fisz algorithm illustrated by the blue line. The rug plot on the bottom displays the event times.
Figure 9: Posterior.
Figure 10: Fixed-bandwidth chosen using likelihood cross-validation.
Bibliography
Adams, R. P., Murray, I. and MacKay, D. J. (2009) Tractable nonparametric bayesian inference in poisson processes with Gaussian process intensities. In Proceedings of the 26th Annual International Conference on Machine Learning, 9-16. ACM.
Stamatina Lamprinakou*, Mauricio Barahona, Seth Flaxman, Sarah Filippi, Axel Gandy, Emma McCoy
Department of Mathematics, Imperial College London, London, United Kingdom
* Corresponding author

Supplementary Material
Figure E.11: The Gelman-Rubin criterion for various numbers of iterations and trees.
Figure E.12: The Gelman-Rubin criterion for various numbers of iterations and trees.
Figure E.13: The Gelman-Rubin criterion for 4 trees and various numbers of iterations.
Figure E.14: The Gelman-Rubin criterion for various numbers of iterations and trees.

Appendix F. Intensity estimation for Real Data

Appendix F.1. Coal Data

The first real data set under consideration is composed of the dates of 191 explosions that caused at least 10 deaths, from March 15, 1851 until March 22, 1962. The data set is available in the R package boot (Canty and Ripley, 2019) as coal.
Figure F.15 illustrates the Posterior Mean and the Posterior Median for 8 and 10 Trees. We observe that our algorithm captures the fluctuations of the rate of accidents in the period under consideration. The diagnostic criteria included in the Supplementary Material indicate that the considered chains have converged. See Adams et al. (2009), Gugushvili et al. (2018) and Lloyd et al. (2015) for alternative analyses.
Figure F.15: Coal Data: the posterior mean (red curve), the posterior median (black curve) and the 95% hdi interval of the estimated intensity illustrated by the dotted green lines. The rug plot on the bottom displays the event times.

Appendix F.2. Redwoodfull Data

Finally, we use a data set available in the R package spatstat describing the locations of 195 trees in a square sampling region, shown with dots in the figures below. Adams et al. (2009) analyzed the redwoodfull data using their recommended algorithm. We present the posterior mean and the posterior median obtained with our algorithm for different numbers of trees and the result of kernel estimators. Intensity inference via the posterior mean (Figure F.16c) or the posterior median (Figure F.16d) for 10 trees is similar to the fixed-bandwidth kernel estimator with edge correction and bandwidth selected using likelihood cross-validation (Figure F.17a), and to the inference from Adams et al. (2009).

Figure F.17: Fixed-bandwidth chosen using likelihood cross-validation and adaptive-bandwidth kernel estimators.
Figure G.18: Scenario 2: the original intensity (blue curve), the posterior mean (red curve), the posterior median (black curve), the 95% hdi interval of the estimated intensity illustrated by the dotted green lines and the Haar-Fisz estimator (pink curve). The rug plot on the bottom displays the 440 event times.
Figure G.19: Posterior Mean and Posterior Median for 8, 10 and 15 Trees
Figure G.20: Kernel estimator and inference with spatial log-Gaussian Cox processes.
Appendix H. One dimensional Poisson Process with stepwise intensity

Appendix H.1. 5 Trees
[Figures: Gelman-Rubin criterion, trace plots, average number of leaves at trees; Figure H.26: Prior for 5 Trees.]

Appendix H.2. 7 Trees
We run 3 parallel chains each for 200000 iterations keeping every 100th sample.
[Figure H.27: The Gelman-Rubin criterion for 7 trees; average number of leaves at trees; prior for 7 trees.]

Appendix I. One dimensional Poisson Process with continuously varying intensity

Appendix I.1. 5 Trees
We run 3 parallel chains each for 100000 iterations keeping every 50th sample.
[Figure I.36: Density of the estimated intensity for 5 trees; density of the estimated intensity for 10 trees; prior for 10 trees.]

Appendix J. Inhomogeneous two-dimensional Poisson Process with Gaussian intensity

Appendix J.1. 8 Trees
We run 3 parallel chains each for 200000 iterations keeping every 100th sample.
[Trace plots of the intensity at (0.646, 0.04) and (0.989, 0.04); average number of leaves at trees; Figure J.48: Density of the estimated intensity for 8 trees; Figure J.54: Density of the estimated intensity for 10 trees; Figure J.55: Prior for 10 Trees.]

Appendix K. Two dimensional Poisson process with stepwise intensity function

Appendix K.1. 4 Trees
We run 3 parallel chains each for 100000 iterations keeping every 50th sample.
[Figure K.57: Trace plots for 4 trees (intensity at (0.646, 0.04) and (0.989, 0.04)); average number of leaves at trees; Figure K.60: Density of the estimated intensity for 4 trees.]

Appendix L. Coal Data

Appendix L.1. 8 Trees
We run 3 parallel chains each for 200000 iterations keeping every 100th sample.
[Figure L.66: Prior for 8 Trees.]

Appendix L.2. 10 Trees
We run 3 parallel chains each for 200000 iterations keeping every 100th sample.
[Figure L.72: Prior for 10 Trees.]

Appendix M. Earthquakes Data

Appendix M.1. 10 Trees
We run 3 parallel chains each for 100000 iterations keeping every 50th sample.
[Trace plots of the intensity; Figure M.78: Prior for 10 Trees.]

Appendix N. Maples

Appendix N.1. 5 Trees
We run 3 parallel chains each for 300000 iterations keeping every 150th sample.
[Figure N.84: Prior for 5 Trees.]

Appendix N.2. 10 Trees
We run 3 parallel chains each for 300000 iterations keeping every 150th sample.
[Figure N.86: Prior for 10 Trees.]

Appendix O. Redwood

Appendix O.1. 5 Trees
We run 3 parallel chains each for 300000 iterations keeping every 150th sample.
[Average number of leaves at trees; Figure O.92: Prior for 5 Trees.]

Appendix O.2. 10 Trees
We run 3 parallel chains each for 300000 iterations keeping every 150th sample.
[Figure O.94: Prior for 10 Trees.]
Table 1: The average RPS on testing points over 7 different splits of the original data set in Fig. 1.
Proposed BART Algorithm
Number of trees | N_s = 1 | N_s = 10 | N_s = 25 | N_s = 50 | N_s = 75 | N_s = 75
2 | 0.95 | 1.13 | 1.06 | 1.02 | 0.99 | 1.00
3 | 0.95 | 1.13 | 1.06 | 1.02 | 0.98 | 0.99
4 | 0.95 | 1.14 | 1.06 | 1.02 | 0.98 | 0.98
5 | 0.94 | 1.13 | 1.04 | 1.02 | 0.98 | 0.98
7 | 0.95 | 1.13 | 1.04 | 1.01 | 0.97 | 0.97
8 | 0.95 | 1.12 | 1.03 | 1.01 | 0.97 | 0.96
9 | 0.95 | 1.13 | 1.03 | 1.01 | 0.97 | 0.97
10 | 1.10 | 1.20 | 1.06 | 1.04 | 0.98 | 0.98
12 | 0.98 | 1.18 | 1.04 | 1.02 | 0.98 | 0.96
15 | 0.95 | 1.12 | 1.02 | 1.00 | 0.97 | 0.96
20 | 0.95 | 1.12 | 1.02 | 1.00 | 0.97 | 0.96
Table 2: The average RSMSE on testing points over 7 different splits of the original data set in Fig. 1.
Table 3: Average Absolute Error and Root Integrated Square Error for various numbers of trees for the data in Fig. 1.
Number of
trees
AAE
for
Posterior
Mean
AAE
for
Posterior
Median
RISE
for
Posterior
Mean
RISE
for
Posterior
Median
4
144.48
139.58
181.21
174.82
5
144.55
139.02
180.74
176.19
7
124.53
123.2
175.74
172.4
Table 4: Average Absolute Error and Root Integrated Square Error for the data in Fig. 1 without considering points close to steps.

Haar-Fisz Algorithm
AAE | RISE
141.95 | 192.6
Table 5: Average Absolute Error and Root Integrated Square Error for the Haar-Fisz estimator for the data in Fig. 1 without considering points close to steps.

Haar-Fisz Algorithm
AAE | RISE
272.3 | 476.9
Table 6: Average Absolute Error and Root Integrated Square Error for the Haar-Fisz estimator for the data in Fig. 1.
Table 7: Average Absolute Error, Root Integrated Square Error and diagnostics for various numbers of trees for the data in Figure 2.

Kernel Smoothing
Bandwidth (sigma) | AAE | RISE
0.027 | 763.8 | 1041.3
0.038 | 662.7 | 956.8
0.047 (LCV) | 636.7 | 960.6
0.067 | 672.8 | 1042.5
Table 8: Average Absolute Error and Root Integrated Square Error for fixed bandwidth estimators for the data in Figure 2.
Inference with spatial log-Gaussian Cox processes
grid | AAE | RISE
10 × 10 | 568 | 751
20 × 20 | 678 | 953

Table 9: Average Absolute Error and Root Integrated Square Error with LGCP for the data in Figure 2.
Table 10: Average Absolute Error, Root Integrated Square Error and diagnostics for various numbers of trees.

Kernel Smoothing
h | AAE | RISE
0.053 | 480.8 | 667.5
0.073 (LCV) | 415.86 | 645.16
0.08 | 417.7 | 661.4
0.085 | 423.2 | 676.2
0.1 | 450.3 | 727.6
0.3 | 890.4 | 1236
Table 11: Average Absolute Error and Root Integrated Square Error for various isotropic variance matrices.
Table 12: Average Absolute Error, Root Integrated Square Error and diagnostics for various numbers of trees in the case of the inhomogeneous five dimensional Poisson Process with sparsity assumption.

Kernel Smoothing
Bandwidth (sigma) | AAE | RISE
0.121 (LCV) | 407.1 | 888.1
Table 13: Average Absolute Error and Root Integrated Square Error for fixed bandwidth estimators in the case of the inhomogeneous five dimensional Poisson Process with sparsity assumption.

Linear conditional intensity
AAE | RISE
654.2 | 1076.5
Table 14: Average Absolute Error and Root Integrated Square Error for linear conditional intensity in the case of the inhomogeneous five dimensional Poisson Process with sparsity assumption.

Proposed BART Algorithm
Number of trees | x_1 | x_2 | x_3 | x_4 | x_5
4 | 0.31 | 0.29 | 0.34 | 0.03 | 0.03
5 | 0.35 | 0.29 | 0.26 | 0.05 | 0.06
Table 15: How likely each dimension is to be involved in the root's decision rule.

Proposed BART Algorithm
Number of trees | x_1 | x_2 | x_3 | x_4 | x_5
4 | 0.35 | 0.36 | 0.37 | 0.06 | 0.07
5 | 0.39 | 0.34 | 0.37 | 0.09 | 0.10
Table 16: The frequency with which each dimension appears in the decision rules of a tree.
Table 17: The average RPS on testing points over 7 different splits of the original data set in the case of the inhomogeneous five dimensional Poisson Process with sparsity assumption.

Proposed BART Algorithm
Number of trees | N_s = 1 | N_s = 32 | N_s = 243
4 | 0.64 | 0.95 | 1
5 | 0.64 | 0.95 | 1
6 | 0.65 | 0.95 | 1
8 | 0.65 | 0.96 | 1.01
10 | 0.65 | 0.96 | 1.01
15 | 0.65 | 0.96 | 1.01
Table 18: The average RSMSE on testing points over 7 different splits of the original data set in the case of the inhomogeneous five dimensional Poisson Process with sparsity assumption.
Table 19: Average Absolute Error, Root Integrated Square Error and diagnostics for the data in Fig. 7.

Haar-Fisz Algorithm
Subintervals | AAE | RMSE
128 | 94.1 | 107.8
64 | 94 | 107
Table 20: Average Absolute Error and Root Mean Square Error for the Haar-Fisz estimator for the data in Fig. 7.
Table 21: Average Absolute Error, Root Integrated Square Error with N_S = 225 and diagnostics for the data in Fig. 9.

Proposed BART Algorithm
Number of trees | AAE for Posterior Mean | AAE for Posterior Median | RMSE for Posterior Mean | RMSE for Posterior Median
3 | 0.9 | 0.9 | 1.3 | 1.3
4 | 0.9 | 0.9 | 1.2 | 1.3
5 | 0.9 | 0.9 | 1.2 | 1.3
7 | 0.9 | 0.9 | 1.2 | 1.2
8 | 0.9 | 0.9 | 1.2 | 1.2
9 | 0.9 | 0.9 | 1.2 | 1.2
10 | 0.9 | 0.9 | 1.2 | 1.2
12 | 0.9 | 0.9 | 1.2 | 1.2
Table 22: Average Absolute Error and Root Integrated Square Error with N_S = 400 for the data in Fig. 9.

Kernel Smoothing
Bandwidth (sigma) | AAE | RISE
0.05 (LCV) for N_S = 225 | 1.03 | 1.42
0.05 (LCV) for N_S = 400 | 0.82 | 1.13
Table 23: Average Absolute Error and Root Integrated Square Error for fixed bandwidth estimators for the data in Fig. 10.
Appendix E.1. One dimensional Poisson Process with stepwise intensity

Table E.24 shows that there is no significant difference in errors when increasing the number of iterations from 10000 to 200000. Figure E.11 reveals that the chains work less well at points close to jumps for small numbers of iterations.

Table E.24: Average Absolute Error and Root Mean Square Error for various numbers of iterations and trees.

Proposed Algorithm
Number of trees | Number of Iterations | AAE for Posterior Mean | AAE for Posterior Median | RMSE for Posterior Mean | RMSE for Posterior Median
5 | 10000 | 284.61 | 274.3 | 588.88 | 590.5
5 | 50000 | 289.11 | 284.56 | 575.11 | 579.17
5 | 200000 | 279.88 | 269.81 | 572.94 | 576.94
7 | 10000 | 265.22 | 257.49 | 572.33 | 576.58
7 | 50000 | 276.19 | 267.75 | 580.35 | 584.47
7 | 200000 | 278.37 | 269.78 | 582.82 | 584.1
Table E.25: Average Absolute Error and Root Mean Square Error for various numbers of iterations and trees.

Proposed Algorithm
Number of trees | Number of Iterations | AAE for Posterior Mean | AAE for Posterior Median | RMSE for Posterior Mean | RMSE for Posterior Median
5 | 10000 | 6.27 | 6.71 | 9.83 | 10.62
5 | 50000 | 6.16 | 6.51 | 9.63 | 10.42
5 | 100000 | 6.14 | 6.38 | 9.52 | 10.17
7 | 10000 | 5.99 | 6.03 | 9.54 | 9.95
7 | 50000 | 6.04 | 6.1 | 9.49 | 9.88
7 | 100000 | 5.95 | 6.01 | 9.39 | 9.8
Table E.26: Average Absolute Error and Root Mean Square Error for 4 trees and various numbers of iterations.

Proposed Algorithm
Number of trees | Number of Iterations | AAE for Posterior Mean | AAE for Posterior Median | RMSE for Posterior Mean | RMSE for Posterior Median
4 | 10000 | 241.82 | 240.1 | 464.99 | 489.93
4 | 50000 | 209.95 | 209.58 | 392.43 | 418.37
4 | 100000 | 208.74 | 213.04 | 410.19 | 447.86
Table E.27: Average Absolute Error and Root Mean Square Error for various numbers of iterations and trees.

Proposed Algorithm
Number of trees | Number of Iterations | AAE for Posterior Mean | AAE for Posterior Median | RMSE for Posterior Mean | RMSE for Posterior Median
8 | 10000 | 173.02 | 175.61 | 247.5 | 255.81
8 | 50000 | 169.54 | 170.5 | 242.03 | 250.74
8 | 200000 | 177.44 | 175.62 | 255.23 | 258.88
10 | 10000 | 168.91 | 168.78 | 242.62 | 249.38
10 | 50000 | 177.72 | 173.93 | 254.67 | 256.32
10 | 200000 | 176.52 | 174.02 | 253.14 | 255.92
[Figure: Gelman-Rubin diagnostic maps over the unit square. Panels: (a) 8 Trees and 10000 iterations; (b) 10 Trees and 10000 iterations; (c) 8 Trees and 50000 iterations; (d) 10 Trees and 50000 iterations; (e) 8 Trees and 200000 iterations; (f) 10 Trees and 200000 iterations.]
Table G.30: Average Absolute Error and Root Integrated Square Error with LGCP for the data in Figure G.19.

Proposed Algorithm
Number of trees | AAE for Posterior Mean | AAE for Posterior Median | RISE for Posterior Mean | RISE for Posterior Median
8 | 177.44 | 175.62 | 255.23 | 258.88
10 | 176.52 | 174.02 | 253.14 | 255.92
15 | 177.48 | 172.62 | 254.22 | 251.96
Table G.31: Average Absolute Error and Root Integrated Square Error for various numbers of trees for the data in Fig. G.19.

Kernel Smoothing
Bandwidth (sigma) | AAE | RISE
0.03 | 360.11 | 463.1
0.04 | 277.89 | 353.82
0.087 (LCV) | 167.74 | 227.85
0.095 | 166.51 | 230.27
Table G.32: Average Absolute Error and Root Integrated Square Error for fixed bandwidth estimators for the data in Fig. G.19.
Gelman, A., Lee, D. and Guo, J. (2015) Stan: A probabilistic programming language for bayesian inference and optimization. Journal of Educational and Behavioral Statistics, 40, 530-543.

Gelman, A. and Rubin, D. B. (1992) Inference from iterative simulation using multiple sequences. Statistical Science, 7, 457-472.

Gugushvili, S., van der Meulen, F., Schauer, M. and Spreij, P. (2018) Fast and scalable nonparametric bayesian inference for poisson point processes. arXiv preprint arXiv:1804.03616.

Harris, T. E. (1963) The theory of branching processes, vol. 6. Springer, Berlin.

Heikkinen, J. and Arjas, E. (1998) Non-parametric bayesian estimation of a spatial poisson intensity. Scandinavian Journal of Statistics, 25, 435-450.

Hill, J. L. (2011) Bayesian nonparametric modeling for causal inference. Journal of Computational and Graphical Statistics, 20, 217-240.

Illian, J., Penttinen, A., Stoyan, H. and Stoyan, D. (2008) Statistical analysis and modelling of spatial point patterns, vol. 70. John Wiley & Sons.

Kapelner, A. and Bleich, J. (2013) bartMachine: Machine learning with bayesian additive regression trees. arXiv preprint arXiv:1312.2171.

Kapelner, A. and Bleich, J. (2016) bartMachine: Machine learning with bayesian additive regression trees. Journal of Statistical Software, 70, 1-40.

Kindo, B. P., Wang, H. and Peña, E. A. (2016) Multinomial probit bayesian additive regression trees. Stat, 5, 119-131.

Lakshminarayanan, B., Roy, D. and Teh, Y. W. (2015) Particle gibbs for bayesian additive regression trees. In Artificial Intelligence and Statistics, 553-561.

Leininger, T. J. and Gelfand, A. E. (2017) Bayesian inference and model assessment for spatial point patterns using posterior predictive samples. Bayesian Analysis, 12, 1-30.

Lewis, P. W. and Shedler, G. S. (1979) Simulation of nonhomogeneous poisson processes by thinning. Naval Research Logistics Quarterly, 26, 403-413.

Linero, A. R. (2018) Bayesian regression trees for high-dimensional prediction and variable selection. Journal of the American Statistical Association, 113, 626-636.

Linero, A. R. and Yang, Y. (2018) Bayesian regression tree ensembles that adapt to smoothness and sparsity. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80, 1087-1110.

Lloyd, C., Gunter, T., Osborne, M. and Roberts, S. (2015) Variational inference for Gaussian process modulated Poisson processes. In International Conference on Machine Learning, 1814-1822.

Loader, C. (1999) Local Regression and Likelihood. Springer, New York.

Makowski, D., Ben-Shachar, M. and Lüdecke, D. (2019) bayestestR: Describing effects and their uncertainty, existence and significance within the bayesian framework. Journal of Open Source Software, 4, 1541.

Murray, J. S. (2017) Log-linear bayesian additive regression trees for categorical and count responses. arXiv preprint arXiv:1701.01503.

Patil, P. N. and Wood, A. T. (2004) Counting process intensity estimation by orthogonal wavelet methods. Bernoulli, 10, 1-24.

Peng, R. (2003) Multi-dimensional point process models in R. Journal of Statistical Software, 8, 1-27.

Pratola, M. T., Chipman, H. A., George, E. I. and McCulloch, R. E. (2020) Heteroscedastic BART via multiplicative regression trees. Journal of Computational and Graphical Statistics, 29, 405-417. URL https://doi.org/10.1080/10618600.2019.1677243.

Pratola, M. T. (2016) Efficient metropolis-hastings proposal mechanisms for bayesian regression tree models. Bayesian Analysis, 11, 885-911.

Rockova, V. and Saha, E. (2018) On theory for BART. arXiv preprint arXiv:1810.00787.

Rockova, V. and van der Pas, S. (2017) Posterior concentration for bayesian regression trees and their ensembles. arXiv preprint arXiv:1708.08734.

Sardy, S. and Tseng, P. (2004) On the statistical analysis of smoothing by maximizing dirty markov random field posterior distributions. Journal of the American Statistical Association, 99, 191-204. URL https://doi.org/10.1198/016214504000000188.

Scott, D. (2008) Histograms: Theory and Practice, 47-94.

Sparapani, R. A., Logan, B. R., McCulloch, R. E. and Laud, P. W. (2016) Nonparametric survival analysis using bayesian additive regression trees (BART). Statistics in Medicine, 35, 2741-2753.

Stone, M. (1977) An asymptotic equivalence of choice of model by cross-validation and akaike's criterion. Journal of the Royal Statistical Society: Series B (Methodological), 39, 44-47.

Wand, M. (1997) Data-based choice of histogram bin width. The American Statistician, 51, 59-64.

Zhang, J. L. and Härdle, W. K. (2010) The bayesian additive classification tree applied to credit risk modelling. Computational Statistics & Data Analysis, 54, 1197-1205.
[
"Probing Black Hole Magnetic Fields with QED",
"Probing Black Hole Magnetic Fields with QED"
] | [
"Ilaria Caiazzo \nDepartment of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada\n",
"Jeremy Heyl \nDepartment of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada\n"
] | [
"Department of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada",
"Department of Physics and Astronomy\nUniversity of British Columbia\n6224 Agricultural RoadV6T 1Z1VancouverBCCanada"
] | [] | The effect of vacuum birefringence is one of the first predictions of quantum electrodynamics (QED): the presence of a charged Dirac field makes the vacuum birefringent when threaded by magnetic fields. This effect, extremely weak for terrestrial magnetic fields, becomes important for highly magnetized astrophysical objects, such as accreting black holes. In the X-ray regime, the polarization of photons traveling in the magnetosphere of a black hole is not frozen at emission but is changed by the local magnetic field. We show that, for photons traveling along the plane of the disk, where the field is expected to be partially organized, this results in a depolarization of the X-ray radiation. Because the amount of depolarization depends on the strength of the magnetic field, this effect can provide a way to probe the magnetic field in black-hole accretion disks and to study the role of magnetic fields in astrophysical accretion in general. | 10.3390/galaxies6020057 | [
"https://arxiv.org/pdf/1805.11018v1.pdf"
] | 119,222,648 | 1805.11018 | 571f7e666aee7a9549eae319777abf31dde00dca |
Probing Black Hole Magnetic Fields with QED
Ilaria Caiazzo
Department of Physics and Astronomy
University of British Columbia
6224 Agricultural RoadV6T 1Z1VancouverBCCanada
Jeremy Heyl
Department of Physics and Astronomy
University of British Columbia
6224 Agricultural RoadV6T 1Z1VancouverBCCanada
Probing Black Hole Magnetic Fields with QED
10.3390/galaxies6020057. Received: 18 March 2018; Accepted: 20 May 2018; Published: 24 May 2018. Article. Keywords: black holes; X-ray polarization; quantum electrodynamics: radiative corrections; magnetic field
The effect of vacuum birefringence is one of the first predictions of quantum electrodynamics (QED): the presence of a charged Dirac field makes the vacuum birefringent when threaded by magnetic fields. This effect, extremely weak for terrestrial magnetic fields, becomes important for highly magnetized astrophysical objects, such as accreting black holes. In the X-ray regime, the polarization of photons traveling in the magnetosphere of a black hole is not frozen at emission but is changed by the local magnetic field. We show that, for photons traveling along the plane of the disk, where the field is expected to be partially organized, this results in a depolarization of the X-ray radiation. Because the amount of depolarization depends on the strength of the magnetic field, this effect can provide a way to probe the magnetic field in black-hole accretion disks and to study the role of magnetic fields in astrophysical accretion in general.
Introduction
In the theory of accretion disks around black holes and astrophysical accretion in general, magnetic fields play a crucial role. They are expected to be the main source of shear stresses, without which accretion cannot occur [1,2]. Moreover, magnetic fields in the inner regions of black-hole accretion disks are thought to lead to the formation of relativistic jets through the Penrose-Blandford-Znajek mechanism [3,4]. However, information on the strength and structure of magnetic fields around black holes is hard to obtain by direct observations. From the analysis of the spectra of two Galactic stellar-mass black holes, Miller et al. [5-7] showed that a wind is generated by magnetic processes as close as 850 GM/c^2 to the hole. They also obtained an estimate of the strength of the magnetic field when a certain magnetic process is assumed [7]. The only indication that we have on the magnetic field structure closer to the central engine comes from interferometry observations of the radio polarization from Sagittarius A*, the supermassive black hole at the center of the Milky Way, which shows evidence for a partially ordered magnetic field on scales of 12 GM/c^2 [8]. In this paper, we describe how X-ray polarization measurements from black-hole accretion disks could provide a way to probe, for the first time, the strength and structure of the magnetic field close to the event horizon.
If only classical electrodynamics is considered, at energies higher than 1-2 keV the polarization of a photon emitted by the accretion disk is not affected by the presence of a magnetic field. The linear polarization of X-ray photons stays the same as they travel through the magnetosphere of the hole all the way to the observer. At lower photon energies, the presence of a magnetized corona could destroy the linear polarization of X-ray photons due to the effect of plasma birefringence [9-11]. In quantum electrodynamics (QED), the vacuum is also expected to be birefringent in the presence of a magnetic field. This effect, which was one of the first predictions of QED, has never been proven. Recent observations of the visible polarization from a radio-quiet neutron star [12] have strongly hinted that vacuum birefringence is indeed affecting the photons' polarization. If the vacuum is indeed birefringent, after photons are emitted from the disk their polarization will change as they travel through the magnetized vacuum.
In classical electrodynamics, photons do not interact with other electromagnetic fields, as Maxwell's equations are linear in the fields. In QED, the presence of a Dirac current in the vacuum results in an addition to the usual action integral of the electromagnetic field that is more than quadratic in the fields. This implies that the interaction between the fields is not linear, as photons can interact with virtual electron-positron pairs as they travel through the magnetized vacuum. As a result, the speed at which light travels through the vacuum depends on its polarization and on the strength of the field. In other words, in the presence of a magnetic field the vacuum becomes birefringent, i.e., it acquires an index of refraction that is different depending on the angle between the direction of the photon's polarization and the magnetic field. A detailed derivation of the vacuum birefringence in QED is described by Heyl and Caiazzo, in this volume.
In this paper, we assume the strength of the magnetic field in the accretion disk to be the minimum needed for accretion to occur if an α-model structure of the disk is considered. We find that the effect of vacuum birefringence on the photon polarization becomes important, depending on the angular momentum of the black hole and that of the photon, around 10 keV, for both stellar-mass and supermassive black holes. A stronger (weaker) field would shift this range to lower (higher) energies. Observation of the X-ray polarization from accretion disks in the 1-30 keV range, if properly modeled with QED, would both probe the strength of the magnetic field and test the currently accepted models of astrophysical accretion. Several observatories with an X-ray polarimeter on board are now at different stages of development: in the 1-10 keV range, the NASA SMEX mission IXPE [13] and the Chinese-European eXTP [14]; in the hard-X-ray range, 15-150 keV, the balloon-borne X-Calibur [15] and PoGO+ [16] and Friis et al., in this volume; and, in the sub-keV range, the narrow band (250 eV) LAMP [17] and the broad band (0.2-0.8 keV) rocket-based REDSox [18].
In Section 2, we introduce our model and our assumptions and, in Section 3, we show the energy at which QED becomes important given our assumptions as a function of the black hole spin and we show the effect of vacuum birefringence on the polarization of X-ray photons traveling near the disk plane, where we assume the magnetic field to be partially organized. For a more detailed derivation of our equations, please see [11].
Model
Our calculations are performed in the Kerr metric surrounding a spinning black hole, with spin parameter a = J/(cM), ranging from a = 0 (Schwarzschild black hole) to a = GM/c^2 (critical spin). To calculate the strength of the magnetic field in the mid-plane of the disk, we have to model the structure of the inner disk. In particular, we have to make an assumption on what is generating the shear stresses needed for accretion to occur. We follow the α-model, suggested first by Shakura and Sunyaev [1], for which tangential stresses between layers are generated by magnetic field and turbulence:
t_{\hat{φ}\hat{r}} = ρ c_s v_t + \frac{B^2}{4π} = αP \quad (1)
where ρ is the mass density, c_s is the speed of sound, v_t is the turbulence velocity, B is the magnetic field strength, P is pressure and t_{\hat{φ}\hat{r}} is the shear stress as measured in a frame of reference moving with the gas. The last equality is called the α-prescription, in which the efficiency of the angular momentum transfer is expressed with one parameter. Because the magnetic field is at the origin of the turbulence, we expect the two terms in Equation (1) to be of the same order, so we estimate the magnetic field strength to be ∼ (4παP)^{1/2}. To calculate the pressure in the mid-plane, we employ the disk structure equations in Novikov and Thorne [19], with the correction to the hydrostatic equilibrium obtained by Riffert and Herold [20]. The general relativistic equations in the two papers are written as Newtonian values times relativistic corrections, the latter expressed by functions that are equal to one in the Newtonian limit. In this paper, we use the following relativistic corrections:
A = 1 + a_*^2/r_*^2 + 2a_*^2/r_*^3, \quad (2a)
B = 1 + a_*/r_*^{3/2}, \quad (2b)
C = 1 − 3/r_* + 2a_*/r_*^{3/2}, \quad (2c)
D = 1 − 2/r_* + a_*^2/r_*^2, \quad (2d)
N = 1 − 4a_*/r_*^{3/2} + 3a_*^2/r_*^2 \quad (2e)
where a_* = ac^2/(GM), M is the mass of the black hole, r_* = rc^2/(GM) and r is the distance from the black hole (more precisely, the circumferential radius). The first four come from [19] and the last one, N, corresponds to the quantity called C in [20]. We also assume the pressure to be dominated by radiation and the opacity to be dominated by electron scattering. This assumption applies to the inner region of the accretion disk, which is also where the magnetic field is stronger. Outer regions of the disk will have no influence on our calculations as the magnetic field there is weak. We find the square of the strength of the magnetic field in the mid-plane to be:
B^2 = \frac{8πc}{3κ_{es}} \sqrt{\frac{GM}{r^3}}\, \frac{N}{D} \quad (3)
where κ_{es} is the electron scattering opacity. For a 10 M_⊙ black hole, at the innermost stable circular orbit of the disk (ISCO, or r_I), this corresponds to
B^2 = (0.36\text{–}1.22 × 10^8\ \mathrm{G})^2 \left(\frac{M}{10\,M_⊙}\right)^{-1} \left(\frac{1+X}{2}\right)^{-1} \quad (4)
where the first value is for a_* = 0 and the second is for a_* = 0.999 (the value diverges for a_* = 1) and X is the hydrogen mass fraction. This value is a crude estimate of the minimum magnetic field needed in the mid-plane for accretion to occur if an α-model is assumed. Magnetohydrodynamics and shear-box simulations show that the strength of the magnetic field decreases moving away from the mid-plane toward the photosphere. However, Equation (3) reproduces both the strength and the scaling with distance of the magnetic field at the photosphere obtained by simulations [21,22], and of the minimum estimates obtained by Miller et al. [7]. We decided therefore to use the analytic expression in Equation (3) as our best guess for the strength of the magnetic field at the photosphere.
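Equation (3) is straightforward to evaluate numerically. The following is a minimal Python sketch (ours, not the authors' code) of Equations (2d), (2e) and (3) in CGS units; the helper name B_midplane and the default hydrogen fraction X = 0.7 are illustrative choices.

import numpy as np

# Sketch of Eqs. (2d), (2e) and (3): minimum mid-plane field of an
# alpha-disk, B^2 = (8 pi c / 3 kappa_es) sqrt(GM/r^3) N/D, in CGS units.
G, c, Msun = 6.674e-8, 2.998e10, 1.989e33

def B_midplane(r_star, a_star, M=10 * Msun, X=0.7):
    kappa_es = 0.2 * (1.0 + X)                                    # cm^2/g
    N = 1 - 4 * a_star / r_star**1.5 + 3 * a_star**2 / r_star**2  # Eq. (2e)
    D = 1 - 2 / r_star + a_star**2 / r_star**2                    # Eq. (2d)
    r = r_star * G * M / c**2                                     # cm
    return np.sqrt(8 * np.pi * c / (3 * kappa_es)
                   * np.sqrt(G * M / r**3) * N / D)               # Gauss

# The Schwarzschild ISCO sits at r_star = 6:
print(B_midplane(6.0, 0.0))   # ~4e7 G, of the order quoted in Eq. (4)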
To describe the evolution of the polarization of a single photon, we used the Poincaré formalism, in which the polarization is described by a unit vector s = (Q, U, V)/I, where I, Q, U and V are the Stokes parameters, and the polarization states for fully polarized light are mapped on the surface of a unit sphere. Following Kubo and Nagata [23,24], the polarization of a wave in a birefringent medium evolves as:
\frac{∂s}{∂x_3} = Ω̂ × s + (T̂ × s) × s \quad (5)
where Ω̂ is the birefringent vector, T̂ is the dichroic vector and x_3 is the length of the photon path. In the case of the QED vacuum with an external magnetic field to one-loop order and a weak electric field, T̂ = 0 (there is no real pair production) and the amplitude of the birefringent vector Ω̂ is proportional to the difference between the indices of refraction for the two polarization states: the one parallel to the magnetic field (n_∥) and the one perpendicular (n_⊥). Equations (53) and (54) of Heyl and Caiazzo, this volume, yield for B ≪ B_{QED}:
Ω = k_0 (n_∥ − n_⊥) = k_0 \frac{α_{QED}}{30π} \left(\frac{B}{B_{QED}}\right)^2 \sin^2 θ \quad (6)
where k_0 = 2πν/c is the unperturbed wavenumber of the photon, θ is the angle between the direction of the motion of the photon and the external field, α_{QED} is the fine structure constant and B_{QED} = m_e^2 c^3/(ħe) ≈ 4.4 × 10^{13} G. From Equations (3) and (6), we can find the magnitude of the birefringent vector as a function of the distance from the hole along the plane of the disk. After assuming a structure for the magnetic field, we can integrate Equation (5) to find how the polarizations of photons traveling in the magnetosphere close to the disk plane evolve.
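For orientation, Equation (6) is simple to evaluate. The snippet below (a sketch, not from the paper) computes Ω for an illustrative photon energy and field strength in CGS units.

import numpy as np

# Sketch of Eq. (6): Omega = k0 (alpha_QED / 30 pi) (B / B_QED)^2 sin^2(theta).
hbar, c = 1.0546e-27, 2.998e10            # erg s, cm/s
alpha_QED, B_QED = 1 / 137.036, 4.414e13  # fine structure constant, Gauss

def Omega(E_keV, B, theta):
    k0 = (E_keV * 1.602e-9) / (hbar * c)  # unperturbed wavenumber, cm^-1
    return k0 * alpha_QED / (30 * np.pi) * (B / B_QED)**2 * np.sin(theta)**2

# e.g. a 10 keV photon crossing a 4e7 G field at theta = 90 degrees:
print(Omega(10.0, 4e7, np.pi / 2))        # rad/cm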
Results
Polarization-Limiting Radius
Before calculating the evolution of the photon polarization for a defined structure of the magnetic field, it is interesting to look at the quantity called the polarization-limiting radius (PLR). The PLR provides an estimate for the distance from the black hole at which the vacuum birefringence stops affecting the photon polarization because the magnetic field has become too weak. From Equations (3) and (6) and Equation (56) of Heyl and Caiazzo, this volume, we find the PLR for a black hole to be:
\frac{r_p c^2}{GM} = \left[\frac{2 k_0 ħ m_p}{15π m_e^2 c (1+X)}\, \frac{N(r_p)}{D(r_p)}\right]^2 \quad (7)
where m_p is the mass of the proton and m_e is the mass of the electron. From Equation (7), we can derive the energy at which the PLR is equal to the ISCO. Figure 1 shows the ISCO as a function of the black hole spin (black dashed line, right y-axis) and the photon energy at which the PLR is equal to the ISCO (solid red line, left y-axis). Figure 1 provides a rough estimate of the photon energy at which QED becomes important: if our estimate of the magnetic field strength is correct, for rapidly spinning black holes the effect of QED will be important around a photon energy of 10 keV or lower, while for slowly spinning black holes QED will affect the polarization only above 10-20 keV. However, if the magnetic field is stronger (weaker) the energy threshold will be lower (higher). This result does not depend on the mass of the black hole, so it holds for both stellar-mass and supermassive black holes. The PLR estimate does not take into account light bending: if a photon is emitted with large retrograde angular momentum, its path through the magnetosphere will be longer, so retrograde photons at lower energies can also be affected, as we find in the next section.
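Because Equation (7) is implicit in r_p (the corrections N and D are themselves evaluated at r_p), it has to be solved numerically. A minimal sketch, assuming the equation as reconstructed above, bisects on r − RHS(r) just outside the horizon; as noted in the text, the result is independent of the black hole mass.

import numpy as np

# Sketch: solve Eq. (7) for the polarization-limiting radius r_p
# (in units of GM/c^2) by bisection on g(r) = r - RHS(r).
hbar, c = 1.0546e-27, 2.998e10
m_e, m_p = 9.109e-28, 1.673e-24

def plr(E_keV, a_star, X=0.7):
    k0 = (E_keV * 1.602e-9) / (hbar * c)
    pref = 2 * k0 * hbar * m_p / (15 * np.pi * m_e**2 * c * (1 + X))
    def rhs(r):
        N = 1 - 4 * a_star / r**1.5 + 3 * a_star**2 / r**2
        D = 1 - 2 / r + a_star**2 / r**2
        return (pref * N / D) ** 2
    lo = 1 + np.sqrt(1 - a_star**2) + 1e-3   # just outside the event horizon
    hi = 1e4
    for _ in range(80):                       # rhs decreases outward: one root
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mid < rhs(mid) else (lo, mid)
    return 0.5 * (lo + hi)

print(plr(10.0, 0.9))   # PLR of a 10 keV photon for a_star = 0.9, ~2 GM/c^2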
Edge-on Photons
To better understand how vacuum birefringence affects the polarization of photons traveling through the black hole magnetosphere, we assume a simple structure for the magnetic field threading the accretion disk, and we study how the polarization changes for photons traveling parallel to the disk plane. Recent observations of the radio polarization coming from the region close to the event horizon of Sagittarius A* suggest the presence of a partially organized field [8]. It is reasonable to assume the magnetic field to be organized on some length-scale that reflects the competition between the magnetic field itself, which would tend to be organized, and the shear of the disk, which prevents big structures from forming. We therefore assume the disk to be divided into regions of constant magnetic-field direction, which is also the structure often assumed for the magnetic field in the plane of the disk by magnetohydrodynamics simulations [25]. We pick two different length-scales to test how our assumption on the size of the magnetic loops affects our results. Since we expect the length scale to be related to both the distance to the hole and to the size of the hole itself, we first divide the disk into five regions, each twice as large as the previous one: from the ISCO to twice the ISCO, to 4 times the ISCO, to 8 times the ISCO, to 16 times the ISCO, and to infinity. For simplicity, we call this configuration the 2-fold configuration. In the second configuration, the regions of constant magnetic-field direction are each 1.5 times as large as the previous one: from the ISCO to 1.5 times the ISCO, to 2.3 times the ISCO, to 3.4 times the ISCO, to 5.1 times the ISCO, to 7.6 times the ISCO, to 11 times the ISCO, to 17 times the ISCO, and to infinity. For simplicity, we call this configuration the 1.5-fold configuration. We analyze the evolution of the polarization of single photons as they travel along geodesics through the magnetosphere. On the Poincaré sphere, their polarization will perform a random walk, where in each region the direction of the step is given by Equation (5) and the rotation angle around Ω̂ is given by
ΔΘ = E K \int \sin^2 θ \; r_*^{-3/2}\, \frac{N}{D\,C}\; \frac{\left(l_*/r_*^{3/2} − B\right)^2}{\left(l_*^2(−1 + 2/r_*)/r_*^2 − 4 l_* a_*/r_*^3 + A\right)^{1/2}}\; dr_* \quad (8)
where E is the energy of the photon at infinity, l_* is the dimensionless specific angular momentum of the photon (l_* = (L/E) c^2/(GM)) and K = m_p/[15π m_e^2 c^2 (1 + X)]. Since we are considering photons traveling close to the equatorial plane, general relativity does not affect their polarization's direction.
We perform a Monte-Carlo simulation for 6000 photons, calculating the evolution of their polarization from the ISCO to infinity. Each of the 6000 photons is emitted with the same specific angular momentum l_* and the same energy at infinity E from the ISCO of a black hole with spin parameter a_*. We take the angle between the magnetic field and the photon, θ, and the angle between s and Ω̂ to be constant in every region, and we take their values as random in each run. We then take the average of the linear polarization over the 6000 photons. We repeat the same calculation for photons with different specific angular momenta: zero angular momentum photons (l_* = 0), photons initially rotating with the disk at 90% the maximum prograde angular momentum (l_* = 0.9 l_+) and photons initially rotating against the disk at 90% the maximum retrograde angular momentum (l_* = 0.9 l_-). We also employ different photon energies between 1 and 80 keV and four different spins of the hole: a_* = 0.5, 0.7, 0.9 and 0.99. The results are shown in Figure 2. In Figure 2, the dashed lines show the results for the 1.5-fold configuration and the solid lines show the results for the 2-fold configuration. We find that, if magnetic loops are smaller, the depolarization effect is reduced linearly with the size of the loops: in our example, the dashed lines fall on top of the solid lines if we rescale them by 2/1.5. However, the solid lines show peaks that are not present in the dashed lines. For example, for a hole rotating with spin a_* = 0.99 in the 2-fold configuration (purple solid line, right panel) the polarization fraction peaks at 7 keV and then again at 14 keV, at 21 keV and so on. These peaks are due to the fact that at those energies the integral in Equation (8) reaches, in the first zone of the disk, an average value of π, and therefore the polarization vector remains closer to the S_1-S_2 plane. In the 1.5-fold configuration this does not happen because the first region is smaller and the second region has a bigger effect on the final polarization, washing out the peaks. Ideally, the presence of features in the polarization spectrum such as the peaks shown for the 2-fold configuration could provide hints on the structure of the magnetic field in the disk.
All of the aforementioned results are independent of the black hole mass.
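The random walk described above is easy to prototype. The sketch below is a simplification (not the paper's Maple code): it rotates each photon's Stokes vector about a randomly oriented axis in every zone, with illustrative per-zone rotation angles standing in for the geodesic integral of Equation (8), and averages the surviving linear polarization.

import numpy as np

# Simplified Monte-Carlo sketch of Section 3.2: per zone, rotate the Stokes
# vector s = (Q, U, V) on the Poincare sphere about a random birefringent
# axis (Eq. (5) with T = 0). The dthetas below are illustrative placeholders
# for the per-zone values of Eq. (8).
rng = np.random.default_rng(0)

def rotate(s, axis, angle):
    # Rodrigues rotation of s about a unit axis
    return (s * np.cos(angle) + np.cross(axis, s) * np.sin(angle)
            + axis * np.dot(axis, s) * (1 - np.cos(angle)))

def mean_linear_polarisation(dthetas, n_photons=6000):
    lin = 0.0
    for _ in range(n_photons):
        s = np.array([1.0, 0.0, 0.0])                 # fully linearly polarised
        for dtheta in dthetas:
            axis = rng.normal(size=3)
            axis /= np.linalg.norm(axis)              # random field orientation
            s = rotate(s, axis, dtheta * rng.uniform())  # random sin^2-like factor
        lin += np.hypot(s[0], s[1])                   # |Q + iU| for this photon
    return lin / n_photons

print(mean_linear_polarisation([2.0, 1.0, 0.5, 0.25, 0.1]))  # five 2-fold-like zones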
A Simulation for GRS 1915+105
To understand whether the upcoming polarimeters will be sensitive to the effects of QED, we simulated the observed polarization of the black-hole binary GRS 1915+105 with eXTP and IXPE. GRS 1915+105 is a bright microquasar that hosts a rapidly spinning black hole. Measurements of its spin, which rely on observations in both X-rays and optical, seem to indicate a spin parameter a_* ≳ 0.98 [26,27]. We assume an inclination angle of 75° [28,29], and we use the polarization spectra from Figure 7 of Schnittman and Krolik (2009) [30]. To calculate the effects of the vacuum birefringence, we assume that the bulk of the radiation comes from near the ISCO and has zero angular momentum. To simulate the response of the instruments, we employ the code XIMPOL [31]. Figure 3a shows the observed polarization degree for two spin parameters, a_* = 0.95 and a_* = 0.99, both with and without including QED. The blue dots show a simulated 100 ks observation with eXTP (which would correspond to approximately 300 ks with IXPE), assuming the emission model to be the one with a_* = 0.99 and with QED (blue line). We can immediately see that QED has an effect in the energy range of the upcoming polarimeters (2-8 keV). In addition, if QED were not included in the model, it would be easy to mistake a black hole actually spinning at a_* = 0.99 (blue line) with one spinning at a_* = 0.95 (green line). In the left panel of Figure 3, all the models are calculated assuming the minimum magnetic field needed for accretion to occur in an α-model (Equation (3)).
In Figure 3b, we show the effect of a stronger magnetic field. The red and the blue line are the same as in Figure 3a: a_* = 0.99 and the minimum magnetic field, with and without QED, while the black line represents a model with the same parameters but a magnetic field 2.5 times stronger. The black dots show a simulated 1 Ms observation with eXTP (∼3 Ms with IXPE). We can see that the curves are very different, with the QED effect being much stronger for the stronger magnetic field, and that the peaks have shifted into the 2-8 keV range. Of course, the magnetic field structure that we use in this paper is just a toy model, but the peaks show that the QED effect can be sensitive to the magnetic field structure, and the upcoming polarimeters would be sensitive enough to detect them. We want to stress that these figures show preliminary calculations, and further work is required to model the expected polarization degree with IXPE and eXTP. Indeed, our model assumes the flux to be dominated by photons coming from close to the ISCO and with nearly zero angular momentum, which could be a good assumption for high-energy photons, but the contribution of photons coming from more distant regions has to be properly included in the calculations for low-energy photons. Moreover, the structure of the magnetic field that we employ is just a simple toy model, and better calculations are needed to make a prediction on whether features like the peaks in the polarization degree would be detectable and at which energies they would be present.
Discussion
In Figure 2, all photons were emitted with the same polarization. If the vacuum were not birefringent, their final polarization would still be the same, and the final linear polarization fraction would still average at one. We can therefore conclude that vacuum birefringence has a big impact on the polarization of X-ray photons, especially for fast-spinning black holes and for red-shifted (retrograde) photons. The reason the effect is stronger for higher spin parameters is that the ISCO is closer to the event horizon and, therefore, the magnetic field is stronger, but also that photons perform more orbits around fast-spinning holes, staying longer in the strong magnetic field region. Retrograde photons are more affected for two reasons: they perform more orbits around the black hole with respect to zero angular momentum and prograde photons, and they receive a red-shift, which means that their energy at emission was higher.
The results shown in Figure 2 were obtained for the minimum magnetic field needed to generate enough shear stresses for accretion to occur in an α-model for the accretion disk. The actual magnetic field threading the accretion disk could be higher, leading to a stronger effect of the vacuum birefringence on the polarization. In general, a stronger (or weaker) magnetic field would shift the x-axis of Figure 2 to a lower (higher) energy range, and the shifting would scale with the square of the magnetic field, as shown in Figure 3.
The simulations presented in Section 3.3 are not intended to be predictive as more detailed models are required for the structure of the magnetic field close to the disk plane and for the contribution to the total emission from photons emitted at different distances to the central engine. However, they show that vacuum birefringence has an effect on the observed polarization of fast-spinning black holes that can be detected in the energy range of the upcoming polarimeters IXPE and eXTP.
Our analysis is restricted to edge-on photons, traveling close to the disk plane, where we expect the magnetic field to be partially organized on small scales. Further studies are needed to calculate the effect of vacuum birefringence for photons coming out of the disk plane, where we expect the magnetic field to be organized on large scales. In this case, the effect of QED could be the opposite of what happens for edge-on photons: the organized magnetic field could align the polarization of photons traveling through the magnetosphere, resulting in a larger net observed polarization.
Materials and Methods
The Monte-Carlo simulations were performed by numerically integrating Equation (8) in Maple. A detailed derivation of the equations can be found in [11]. The simulations for eXTP in Section 3.3 were performed using the code XIMPOL [31].
Figure 1. The plot shows, on the left y-axis, the energy at which r_p = r_I (solid red line) and, on the right y-axis, the ISCO for a black hole as a function of the spin parameter a_* (dashed black line). Figure from [11].
Figure 2. Final polarization fraction vs. photon energy calculated in the 2-fold configuration (solid lines) and in the 1.5-fold configuration (dashed lines): (a) left to right, maximum retrograde (90% l_-) angular momentum photons (red), zero angular momentum photons (black) and maximum prograde (90% l_+) angular momentum photons (blue), coming from the ISCO of a black hole with a_* = 0.9; and (b) 90% l_- photons for, left to right, a_* = 0.99 (purple), 0.9 (red), 0.7 (light blue) and 0.5 (green). Figure from [11].
Figure 3. Observed polarization degree for the black-hole binary GRS 1915+105. (a) Model with a_* = 0.99 with QED (blue line) and without QED (red line); model with a_* = 0.95 with QED (yellow line) and without QED (green line). Blue dots are a simulated 100 ks observation with eXTP for the blue-line model. (b) Model with a_* = 0.99 with QED and the minimum magnetic field (blue line) and without QED (red line); model with a_* = 0.99 with QED and 2.5 times the minimum magnetic field (black line). Black dots are a simulated 1 Ms observation with eXTP for the black-line model.
Acknowledgments: We thank the anonymous referees for useful suggestions that improved the paper significantly. We used the NASA ADS service, arXiv.org and SIMBAD. This work was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada, the Canadian Foundation for Innovation and the British Columbia Knowledge Development Fund. I.C. is supported by a Four-Year-Fellowship from the University of British Columbia.

Conflicts of Interest: The authors declare no conflict of interest.
1. Shakura, N.I.; Sunyaev, R.A. Black holes in binary systems. Observational appearance. Astron. Astrophys. 1973, 24, 337-355.
2. Balbus, S.A.; Hawley, J.F. A powerful local shear instability in weakly magnetized disks. I-Linear analysis. II-Nonlinear evolution. Astron. J. 1991, 376, 214-233.
3. Blandford, R.D.; Znajek, R.L. Electromagnetic extraction of energy from Kerr black holes. Mon. Not. R. Astron. Soc. 1977, 179, 433-456.
4. Tchekhovskoy, A.; Narayan, R.; McKinney, J.C. Efficient generation of jets from magnetically arrested accretion on a rapidly spinning black hole. Mon. Not. R. Astron. Soc. 2011, 418, L79-L83.
5. Miller, J.M.; Raymond, J.; Fabian, A.; Steeghs, D.; Homan, J.; Reynolds, C.; van der Klis, M.; Wijnands, R. The magnetic nature of disk accretion onto black holes. Nature 2006, 441, 953-955.
6. Miller, J.M.; Raymond, J.; Reynolds, C.S.; Fabian, A.C.; Kallman, T.R.; Homan, J. The Accretion Disk Wind in the Black Hole GRO J1655-40. Astron. J. 2008, 680, 1359-1377.
7. Miller, J.M.; Raymond, J.; Fabian, A.C.; Gallo, E.; Kaastra, J.; Kallman, T.; King, A.L.; Proga, D.; Reynolds, C.S.; Zoghbi, A. The Accretion Disk Wind in the Black Hole GRS 1915+105. Astrophys. J. Lett. 2016, 821, L9.
8. Johnson, M.D.; Fish, V.L.; Doeleman, S.S.; Marrone, D.P.; Plambeck, R.L.; Wardle, J.F.C.; Akiyama, K.; Asada, K.; Beaudoin, C.; Blackburn, L.; et al. Resolved magnetic-field structure and variability near the event horizon of Sagittarius A*. Science 2015, 350, 1242-1245.
9. Meszaros, P.; Nagel, W. X-ray pulsar models. I-Angle-dependent cyclotron line formation and comptonization. Astron. J. 1985, 298, 147-160.
10. Davis, S.W.; Blaes, O.M.; Hirose, S.; Krolik, J.H. The Effects of Magnetic Fields and Inhomogeneities on Accretion Disk Spectra and Polarization. Astron. J. 2009, 703, 569-584.
11. Caiazzo, I.; Heyl, J. Vacuum birefringence and the x-ray polarization from black-hole accretion disks. Phys. Rev. D 2018, 97, 083001.
12. Mignani, R.P.; Testa, V.; González Caniulef, D.; Taverna, R.; Turolla, R.; Zane, S.; Wu, K. Evidence for vacuum birefringence from the first optical-polarimetry measurement of the isolated neutron star RX J1856.5-3754. Mon. Not. R. Astron. Soc. 2017, 465, 492-500.
13. Weisskopf, M.C.; Ramsey, B.; O'Dell, S.L.; Tennant, A.; Elsner, R.; Soffita, P.; Mulieri, F. The Imaging X-ray Polarimetry Explorer (IXPE). Result. Phys. 2016, 6, 1179-1180.
14. Zhang, S.N.; Feroci, M.; Santangelo, A.; Dong, Y.W.; Feng, H.; Lu, F.J.; Brandt, S. eXTP: Enhanced X-ray Timing and Polarization mission. In Space Telescopes and Instrumentation 2016: Ultraviolet to Gamma Ray; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 9905, p. 99051Q.
15. Beilicke, M.; Kislat, F.; Zajczyk, A.; Guo, Q.; Endsley, R.; Stork, M.; Cowsik, R.; Dowkontt, P.; Barthelmy, S.; Hams, T.; et al. Design and Performance of the X-ray Polarimeter X-Calibur. J. Astron. Instrum. 2014, 3, 1440008.
16. Chauvin, M.; Florén, H.G.; Friis, M.; Jackson, M.; Kamae, T.; Kataoka, J.; Kawano, T.; Kiss, M.; Mikhalev, V.; Mizuno, T.; et al. The PoGO+ view on Crab off-pulse hard X-ray polarisation. Mon. Not. R. Astron. Soc. 2018, 477, L45-L49.
17. She, R.; Feng, H.; Muleri, F.; Soffitta, P.; Xu, R.; Li, H.; Bellazzini, R.; Wang, Z.; Spiga, D.; Minuti, M.; et al. LAMP: A micro-satellite based soft x-ray polarimeter for astrophysics. In UV, X-Ray, and Gamma-Ray Space Instrumentation for Astronomy XIX; International Society for Optics and Photonics: Bellingham, WA, USA, 2015; Volume 9601, p. 96010I.
18. Gaenther, H.M.; Egan, M.; Heilmann, R.K.; Heine, S.N.T.; Hellickson, T.; Frost, J.; Marshall, H.L.; Schulz, N.S.; Theriault-Shay, A. REDSoX: Monte-Carlo ray-tracing for a soft x-ray spectroscopy polarimeter. In Optics for EUV, X-Ray, and Gamma-Ray Astronomy VIII; International Society for Optics and Photonics: Bellingham, WA, USA, 2017; Volume 10399, p. 1039917.
19. Novikov, I.D.; Thorne, K.S. Astrophysics of black holes. In Black Holes (Les Astres Occlus).
20. Riffert, H.; Herold, H. Relativistic Accretion Disk Structure Revisited. Astron. J. 1995, 450, 508.
21. Hirose, S.; Krolik, J.H.; Blaes, O. Radiation-Dominated Disks are Thermally Stable. Astron. J. 2009, 691, 16-31.
22. Schnittman, J.D.; Krolik, J.H.; Noble, S.C. X-Ray Spectra from Magnetohydrodynamic Simulations of Accreting Black Holes. Astron. J. 2013, 769, 156.
23. Kubo, H.; Nagata, R. Determination of dielectric tensor fields in weakly inhomogeneous anisotropic media. II. J. Opt. Soc. Am. 1981, 71, 327-333.
24. Kubo, H.; Nagata, R. Vector representation of behavior of polarized light in a weakly inhomogeneous medium with birefringence and dichroism. J. Opt. Soc. Am. 1983, 73, 1719-1724.
25. Parfrey, K.; Giannios, D.; Beloborodov, A.M. Black hole jets without large-scale net magnetic flux. Mon. Not. R. Astron. Soc. 2015, 446, L61-L65.
26. McClintock, J.E.; Shafee, R.; Narayan, R.; Remillard, R.A.; Davis, S.W.; Li, L.X. The Spin of the Near-Extreme Kerr Black Hole GRS 1915+105. Astron. J. 2006, 652, 518-539.
27. Miller, J.M.; Parker, M.L.; Fuerst, F.; Bachetti, M.; Harrison, F.A.; Barret, D.; Boggs, S.E.; Chakrabarty, D.; Christensen, F.E.; Craig, W.W.; et al. NuSTAR Spectroscopy of GRS 1915+105: Disk Reflection, Spin, and Connections to Jets. Astrophys. J. Lett. 2013, 775, L45.
28. Mirabel, I.F.; Rodríguez, L.F. A superluminal source in the Galaxy. Nature 1994, 371, 46-48.
29. Fender, R.P.; Garrington, S.T.; McKay, D.J.; Muxlow, T.W.B.; Pooley, G.G.; Spencer, R.E.; Stirling, A.M.; Waltman, E.B. MERLIN observations of relativistic ejections from GRS 1915+105. Mon. Not. R. Astron. Soc. 1999, 304, 865-876.
30. Schnittman, J.D.; Krolik, J.H. X-ray Polarization from Accreting Black Holes: The Thermal State. Astron. J. 2009, 701, 1175-1187.
31. Baldini, L.; Muleri, F.; Soffitta, P.; Omodei, N.; Pesce-Rollins, M.; Sgro, C.; Latronico, L.; Spada, F.; Manfreda, A.; Di Lalla, N. Ximpol: A new X-ray polarimetry observation-simulation and analysis framework. In Proceedings of the 41st COSPAR Scientific Assembly, Istanbul, Turkey, 30 July-7 August 2016; Volume 41.
| [] |
[
"Compressive MRI quantification using convex spatiotemporal priors and deep auto-encoders",
"Compressive MRI quantification using convex spatiotemporal priors and deep auto-encoders"
] | [
"Member, IEEEMohammad Golbabaee ",
"Guido Bounincontri ",
"Carolin M Pirkl ",
"Marion I Menzel ",
"Bjoern H Menze ",
"Mike Davies ",
"Pedro A Gómez "
We propose a dictionary-matching-free pipeline for multi-parametric quantitative MRI image computing. Our approach has two stages based on compressed sensing reconstruction and deep learned quantitative inference. The reconstruction phase is convex and incorporates efficient spatiotemporal regularisations within an accelerated iterative shrinkage algorithm. This minimises the under-sampling (aliasing) artefacts from aggressively short scan times. The learned quantitative inference phase is purely trained on physical simulations (Bloch equations) that are flexible for producing rich training samples. We propose a deep and compact auto-encoder network with residual blocks in order to embed Bloch manifold projections through multiscale piecewise affine approximations, and to replace the nonscalable dictionary-matching baseline. Tested on a number of datasets, we demonstrate the effectiveness of the proposed scheme for recovering accurate and consistent quantitative information from novel and aggressively subsampled 2D/3D quantitative MRI acquisition protocols. | null | [
"https://arxiv.org/pdf/2001.08746v2.pdf"
] | 210,911,490 | 2001.08746 | 8e220bbc6d48efd1babb388db8c685d6fecebed5 |
Compressive MRI quantification using convex spatiotemporal priors and deep auto-encoders
Member, IEEEMohammad Golbabaee
Guido Bounincontri
Carolin M Pirkl
Marion I Menzel
Bjoern H Menze
Mike Davies
Pedro A Gómez
Compressive MRI quantification using convex spatiotemporal priors and deep auto-encoders
Index Terms-MR Fingerprinting, compressed sensing, convex model-based reconstruction, residual network, auto-encoder
We propose a dictionary-matching-free pipeline for multi-parametric quantitative MRI image computing. Our approach has two stages based on compressed sensing reconstruction and deep learned quantitative inference. The reconstruction phase is convex and incorporates efficient spatiotemporal regularisations within an accelerated iterative shrinkage algorithm. This minimises the under-sampling (aliasing) artefacts from aggressively short scan times. The learned quantitative inference phase is purely trained on physical simulations (Bloch equations) that are flexible for producing rich training samples. We propose a deep and compact auto-encoder network with residual blocks in order to embed Bloch manifold projections through multiscale piecewise affine approximations, and to replace the nonscalable dictionary-matching baseline. Tested on a number of datasets, we demonstrate the effectiveness of the proposed scheme for recovering accurate and consistent quantitative information from novel and aggressively subsampled 2D/3D quantitative MRI acquisition protocols.
I. INTRODUCTION
Quantification of the intrinsic NMR characteristics [1] has proven powerful for tissue identification and tracking pathological changes. Despite many potentials, standard quantitative MRI (QMRI) approaches have very long acquisition times and, for this reason, are not widely applicable in clinical setups. Magnetic Resonance Fingerprinting (MRF) has emerged to overcome this challenge [2]. MRF uses short excitation sequences capable of simultaneously encoding multitudes of NMR properties and further adopts Compressed Sensing (CS) to subsample a tiny fraction of the spatiotemporal k-space information [3,4,5,6,7]. Estimating the underlying quantitative maps therefore becomes a highly ill-posed inverse problem.
Popular computational approaches to the MRF inverse problem rely on dictionary matching (DM), primarily for parameter inference i.e. estimating quantitative maps from back-projected images, or further for promoting temporal-domain priors within model-based MRF reconstructions to reduce undersampling artefacts [8,9]. However DM's complexity (storage/runtime) does not scale well to the emerging multi-parametric QMRI applications. Deep learning MRF approaches recently emerged to address this issue [10,11,12]. Back-projected images are fed into a compact neural network that temporally processes voxel-wise MRF signal evolutions, so-called fingerprints, and replaces DM for quantitative inference. Trained with independently corrupted noisy fingerprints, such networks are unable to correct for dominant spatially-correlated (aliasing) artefacts appearing in heavily undersampled acquisitions. While larger convolutional models [13,14,15] capture spatiotemporal information to resolve aliasing artefacts, labelled QMRI datasets (i.e. ground-truth multi-parametric maps) that are necessary to train these models, particularly in novel applications, are scarce and hence place the adaptation of these models at the risk of overfitted predictions. Further, current approaches along this line build customised de-noisers (de-aliasing) and require expensive re-training when changing sampling parameters i.e. the forward model. This work aims to address these shortcomings through a two-stage DM-free pipeline: first, we take a CS approach to spatiotemporally process the k-space data and minimise undersampling artefacts in the reconstructed image time-series, and second, we feed the resulting sequence to a deep and compact auto-encoder network with residual blocks for per-voxel quantitative inference. We cast reconstruction as a convex optimisation problem (LRTV) which enjoys reproducible global solutions regardless of initialisation and can be implemented with a momentum-accelerated algorithm with fast convergence. Spatial regularities of the MRF time-series are promoted by Total Variation shrinkage and temporal structures are relaxed to an a-priori learned low-rank factorised model. We further provide geometrical insights into the mechanism behind the proposed deep inference approach. We show that the network provides a multi-resolution piecewise affine approximation to the Bloch response manifold projection. Rather than memorising a large MRF dictionary, the network hierarchically clusters this manifold through deep layers and learns a compact set of deep regressing filters for parameter inference. The proposed pipeline is validated on a number of experiments using a novel multi-parametric acquisition sequence for 2D and 3D quantitative brain imaging. Our approach flexibly applies to different k-space readouts, reports consistent predictions across them, and further outperforms shallow learned inference models related to Gaussian kernel fitting.

MG is with the Computer Science department at the University of Bath, UK (m.golbabaee@bath.ac.uk). GB is with the Imago7 foundation. MD is with the University of Edinburgh. MIM is with GE Healthcare. CMP, BHM and PAG are with the Technical University of Munich.
Paper organisation: We review previous related works in section II. Section III presents the inverse imaging problem model. Section IV presents our reconstruction and quantitative inference pipeline. Section V presents our geometrical insight into the network's performance for deep quantitative inference. In Section VI we present and discuss our experimental results, and finally we conclude in section VII.
Notations: Throughout, ||·|| denotes the Euclidean norm of a vector or a matrix, and ||·||_{TV} denotes the Total Variation (TV) of a 2D or 3D spatial image, defined by the sum of its gradient magnitudes [16]. Matrix rows and columns are denoted by X_{(i,.)} and X_i, respectively.
II. RELATED WORKS
Here we highlight a number of related computational approaches for the MRF problem. Multi-parametric quantification based on fingerprinting, DM and its SVD-compressed (low-rank) variant were proposed in [2,17]. Reconstructing image time-series from k-space data was non-iterative and used zero-filling (ZF). Inspired by CS, later studies adopted model-based reconstructions to reduce subsampling (aliasing) artefacts and to pave the path for aggressively shorter scan times [8,9]. These methods are based on non-convex (iterative) optimisation without momentum acceleration, and require DM per iteration in order to promote temporal-domain priors according to the Bloch dynamics. To accelerate DM's runtime, fast search schemes based on grouping the fingerprints [18] or forming tree structures were proposed [19,20]. Nonetheless the large size of the MRF dictionary remained a storage challenge to all. For some k-space subsampling patterns, including those adopted in our experiments, using only a temporal-domain prior is insufficient to produce artefact-free reconstructions (see e.g. [19,20]). This issue was tackled by low-pass filtering [19], which traded off image sharpness, and later was improved by a Total Variation (TV) regularisation [21]. Nonetheless both methods require DM per iteration, are nonconvex and lack momentum acceleration. DM-free convex reconstructions based on low-rank priors were proposed in [22,23] (albeit cascaded to DM for quantitative inference); however [22] does not incorporate spatial domain priors and [23] incurs the cost of per-iteration SVD decompositions. We avoid this and add spatial TV regularisation for dimension-reduced image time-series while enforcing temporal-domain priors through a (low-rank) subspace representation of the dictionary instead of DM.
On the other hand, deep learning MRF approaches recently emerged to address the non-scalability of DM. Many works use the ZF reconstruction baseline and, for quantitative inference, replace DM with a neural network. These methods broadly divide into two camps: the first group learns temporal-domain dynamics from simulating Bloch equations and is hence rich with training data (see e.g. [10,11,12,24] and also a kernel machine approach for shallow learning [25]). The second group uses convolutional layers to also learn spatial domain regularities, see e.g. [13,14,15], but requires training on ground truth quantitative anatomical maps that may not be as largely available as for mainstream qualitative MRI. Our quantitative inference approach belongs to the first camp; we however replace ZF by a DM-free spatiotemporally regularised (model-based) reconstruction that removes undersampling artefacts before the data are fed to the network.
III. COMPRESSIVE QMRI ACQUISITION MODEL
The compressed sensing approach adopted by MRF for acquiring quantitative information follows a linear spatiotemporal model [2]:
Y = A(X) + ξ, \quad (1)
where Y ∈ C^{T×m} is the multi-coil k-space measurements collected at t = 1, ..., T temporal frames and corrupted by some noise ξ. The Time-Series of Magnetisation Images (TSMI), to be reconstructed, is an image sequence represented by a complex-valued matrix X of spatiotemporal resolution T × n, i.e. n spatial voxels across T temporal frames. The forward operator A := F_Ω S models the multi-coil sensitivities operator S, and the Fourier transform F subsampled according to a set of temporally-varying k-space locations Ω. The tissues' quantitative properties in each voxel are encoded in a temporal signal at the corresponding column of the TSMI matrix. This signal records the magnetisation response of proton dipoles to dynamic excitations in the form of a sequence of flip angles (magnetic field rotations) applied with certain repetition (TR) and echo (TE) times. Tissues with different NMR characteristics respond distinctively to excitations. QMRI/MRF rely on this principle to estimate quantitative characteristics from the (computed) TSMI. Per-voxel magnetisation responses of the TSMI scaled by the proton density γ_v are modelled as
X_v ≈ γ_v B(Θ_v), \quad ∀v ∈ \{1, ..., n\} \quad (2)
where the Bloch response B(Θ_v) : R^p → C^T is a nonlinear mapping from per-voxel intrinsic NMR properties Θ_v to the corresponding (discrete-time) solution of the Bloch differential equations, which captures the overall transient-state macroscopic dynamics of a voxel [26]. Our experiments use sequences that simultaneously encode p = 2 characteristics in each voxel, i.e. the T1 and T2 relaxation times. This could be further extended to include other properties e.g. off-resonance frequencies, T2*, diffusion and perfusion [4,5,6].
A. Low-dimensional manifold and subspace models
Estimating Θ (i.e. quantification) requires long enough sequences T > p to create contrast between different tissues' responses. As such, the Bloch responses, despite their high ambient dimension, live on a low p-dimensional (nonlinear) sub-manifold of C^T. Further, it is observed that for certain excitation sequences, including those used in our experiments, this manifold is approximately embedded in a low-rank subspace Range(V) ⊂ C^T represented by an orthonormal matrix V ∈ C^{T×s}, where p < s ≪ T. Hence the following dimension-reduced alternatives for models (1) and (2) can be deduced:
Y ≈ A(V X) \quad (3)
X_v ≈ γ_v V^H B(Θ_v) \quad (4)
where X ∈ C^{s×n} is the dimension-reduced TSMI. This compact representation is the basis for the subspace compression methods [9,22] and is proven beneficial to the runtime and accuracy (by noise trimming) of the reconstructions.
B. Model-fitting for parameter inference
Fitting computed TSMIs to the Bloch response model is central to QMRI. Per-voxel model-fitting according to (4) for obtaining the NMR characteristics and proton density reads (see e.g. [17,8]):
Θ̂_v = P_B(X_v) := \mathrm{argmin}_Θ ||X_v − V^H B(Θ)|| \quad (5)
γ̂_v = ⟨X_v, V^H B(Θ̂_v)⟩ \quad (6)
where we assume, without loss of generality, normalised Bloch responses. We refer to P_B(·) as the Bloch response manifold projection. This projection is nonconvex and oftentimes intractable for the generally complicated Bloch responses adopted by the MRF sequences. The MRF framework instead approximates (5)
P_B(X_v) ≈ \mathrm{argmin}_j ||X_v − V^H D_j|| \quad (7)
through a nearest neighbour search that is itself a projection onto the discrete set of fingerprints i.e. a point-wise approximation to the (continuous) Bloch response manifold. Viewing fingerprints as training samples, the dictionary can be factorised through principal component analysis (PCA) [17]:
D D^H ≈ V Λ V^H \quad (8)
for unsupervisedly learning the low-rank subspace representation of the Bloch responses. This representation helps to reduce the temporal dimension and can be coupled with fast search schemes [18,20,19] to accelerate DM runtime. However, any form of DM (fast or exhaustive search) remains nonscalable and creates storage overhead in multi-parametric QMRI applications because the number of dictionary atoms grows exponentially with p.
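For concreteness, the subspace learning of Equation (8) and the compressed matching of Equation (7) can be sketched as follows (a Python sketch with illustrative names, not the authors' code; scale-invariant correlation matching, standard in MRF [17], stands in here for the minimum-distance search over normalised atoms).

import numpy as np

# Sketch of Eqs. (7)-(8): PCA/SVD subspace of a simulated dictionary and
# SVD-compressed dictionary matching. D holds d fingerprints as columns (T x d).
def learn_subspace(D, s=10):
    # DD^H ≈ V Λ V^H: the s leading left singular vectors of D span Range(V)
    U, _, _ = np.linalg.svd(D, full_matrices=False)
    return U[:, :s]                                  # V in C^{T x s}

def match(X_reduced, D, V):
    # per-voxel index of the nearest compressed atom; X_reduced in C^{s x n}
    Dc = V.conj().T @ D                              # compressed dictionary
    Dc /= np.linalg.norm(Dc, axis=0, keepdims=True)  # normalised atoms
    return np.argmax(np.abs(Dc.conj().T @ X_reduced), axis=0)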
IV. DM-FREE IMAGE RECONSTRUCTION AND PARAMETER INFERENCE PIPELINE
Our DM-free image computing pipeline consists of two stages: i) reconstructing TSMIs from undersampled k-space measurements and then ii) approximate model-fitting according to (4) for parameter inference. A set of simulated fingerprints (which could be an MRF dictionary) sample the Bloch response model and are used only for training (pre-processing) in order to learn three temporal-domain models: i) a dimension-reduced (low-rank) subspace representation for the Bloch responses, ii) an encoder network to map noisy fingerprints to the NMR parameters, and iii) a decoder network to generate clean Bloch responses from the NMR parameters.
A. Convex TSMI reconstruction with LRTV algorithm
A popular MRF baseline uses zero-filling (ZF) [17], that is, back-projecting k-space measurements to form a dimension-reduced TSMI through the adjoint of (3):
X = V^H A^H(Y) ∈ C^{s×n} \quad (9)
prior to the DM inference. Modern QMRI/MRF acquisitions aggressively curtail the scan times by using short excitation sequences and severe spatial (k-space) subsampling. As such, the inverse problem (1) becomes highly ill-posed and ZF (which is not an inversion) results in aliasing artefacts in the reconstructed TSMI. Errors made at this stage can indeed be significant (see the experimental results); they propagate to the parameter inference step and deteriorate the overall quantification accuracy.
To address this issue, we adopt model-based CS reconstruction with simultaneous spatiotemporal regularisations. Dimension-reduced TSMIs are computed through solving the following convex and DM-free optimisation dubbed as LRTV:
X̂ = \mathrm{argmin}_{X ∈ C^{s×n}} ||Y − A(V X)||^2 + \sum_{i=1}^{s} λ_i ||X_{(i,.)}||_{TV} \quad (10)
The first term minimises discrepancies between the k-space measurements and the solutions through the factorised forward model (3). As such, LRTV adopts a temporal-domain prior through the subspace model (i.e. the low-rank factorisation of the TSMI as V X), which provides a compact and convex (in fact linear) relaxed representation for the Bloch response model instead of using the MRF dictionary. LRTV additionally adopts Total Variation (TV) regularisation. Each component of the TSMI corresponds to a spatial 2D or 3D volumetric image X_{(i,.)} (matrix row), where penalising its TV norm promotes spatial-domain regularities via sparse image gradients [16]. The λ_i > 0 control the per-(subspace)-component regularisation levels.
The LRTV problem (10) can be efficiently solved using a Fast Iterative Shrinkage Algorithm with Nesterov-type momentum acceleration and backtracking step-size [27,28]. Each iteration k = 0, 1, 2, ... computes:
∇ = X^k − µ_k V^H A^H\big(A(V X^k) − Y\big)
Z^k_{(i,.)} = \mathrm{Prox}_{λ_i µ_k}(∇_{(i,.)}), \quad ∀i = 1, ..., s
X^{k+1} = Z^k + \frac{k−1}{k+2}\,(Z^k − Z^{k−1}) \quad (11)
The first and third lines correspond to the gradient and momentum-acceleration updates, respectively. The second line computes a small number s ≪ T of shrinkage operations for the 2D/3D images in each subspace component, Prox_α(x) := \mathrm{argmin}_u \frac{1}{2}||x − u||^2 + α||u||_{TV}, which can be efficiently done on a GPU using the Primal-Dual algorithm [29]. Per iteration, the initial step size µ_k halves until the following criterion holds:
||Y − A(V Z^k)||^2 > ||Y − A(V X^k)||^2 + 2\,\mathrm{Re}⟨G, Z^k − X^k⟩ + µ_k^{−1} ||Z^k − X^k||^2,
where G = V^H A^H(A(V X^k) − Y) denotes the gradient of the data-fidelity term at X^k.
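The iterations (11) translate almost line-by-line into code. The following is a minimal Python sketch (not the authors' implementation) with a fixed step size in place of backtracking; the forward operator A, its adjoint A_adj, the learned subspace V (T × s) and a TV proximal operator prox_tv (e.g. primal-dual [29], internally reshaping each subspace component to the image grid) are assumed to be supplied by the user.

import numpy as np

# Sketch of iterations (11) for the LRTV problem (10), fixed step size mu.
def lrtv(Y, A, A_adj, V, prox_tv, lam, mu, n_voxels, n_iters=100):
    s = V.shape[1]
    X = np.zeros((s, n_voxels), dtype=complex)
    Z_prev = X.copy()
    for k in range(n_iters):
        # gradient step on the data-fidelity term (first line of (11))
        grad = X - mu * (V.conj().T @ A_adj(A(V @ X) - Y))
        # per-component spatial TV shrinkage (second line of (11))
        Z = np.stack([prox_tv(grad[i], lam[i] * mu) for i in range(s)])
        # Nesterov momentum (third line of (11))
        X = Z + (k - 1.0) / (k + 2.0) * (Z - Z_prev)
        Z_prev = Z
    return Z_prev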
With an all-zero initialisation, the first line of (11) recovers ZF in the first iteration. Setting λ = 0 recovers the LR problem [22], which is a convex relaxed alternative to BLIP [8], wherein temporal-only priors based on the MRF dictionary are replaced by the low-rank subspace. Note that the size of V is independent of the number of fingerprints (used for training). Hence the solver does not face a memory bottleneck or the slow progress of computing DM per iteration. While for certain (Cartesian) sampling schemes this temporal model can decently regularise the inversion [30], for other important sampling patterns, e.g. the non-Cartesian spiral and radial readouts used in our experiments, it turns out to be inadequate and fails to output artefact-free TSMIs (see section VI). Multi-prior CS solvers are proven effective for highly undersampled systems by further restricting the degrees of freedom of the data [31,32]. LRTV uses this fact by setting λ > 0 and adding spatial priors to sufficiently regularise the problem. Besides being DM-free, the proposed approach has other advantages over its nonconvex alternatives, including a tractable way to incorporate multiple priors, momentum acceleration for fast convergence and reproducible global solutions regardless of initialisation.
B. MRFResnet for parameter inference
Instead of using a large-size dictionary for DM, we propose training and using a compact network, coined MRFResnet, in the form of an auto-encoder with deep residual blocks, shown in Figure 1. Auto-encoders have proven powerful in denoising tasks by creating an information bottleneck, which corresponds to learning a low-dimensional manifold model for capturing (nonlinear) intrinsic signal structures [33]. In our task, the computed TSMI voxels are processed by such a model to create clean magnetisation responses, as well as to estimate the intrinsic NMR parameters in a computationally efficient manner. The p = 2 neuron bottleneck (in Figure 1) has a physical interpretation: fitting noisy temporal trajectories to the nonlinear Bloch model with limited $p \ll T$ degrees of freedom, determined by the T1 and T2 quantities.
1) Encoder: This network learns to approximate Bloch manifold projections through a continuous mapping $R : \widehat{X}_v \to \Theta_v$ parametrised by the network's weights and biases $\{W, \beta\}$:

$R(x) \equiv h^{(N+1)}(x) = \varphi\big(W^{(N+1)} h^{(N)}(x) + \beta^{(N+1)}\big) \qquad (12)$

where the outputs $h^{(i)}$ of the residual blocks $i = 1, \ldots, N$ are

$h^{(i)}(x) = \varphi\big(h^{(i-1)}(x) + g^{(i)}(x)\big), \quad g^{(i)}(x) = W^{(i,2)}\,\varphi\big(W^{(i,1)} h^{(i-1)}(x) + \beta^{(i,1)}\big) + \beta^{(i,2)}, \quad h^{(0)}(x) = x,$

and the ReLU activations $\varphi(x) = \max(x, 0)$
are used throughout. The inputs are the normalised temporal voxels of the dimension-reduced TSMI. The network is trained on simulated noisy Bloch responses (see section VI-B) so that the approximate projection holds
$R(x) \approx P_{\mathcal{B}}(x) \qquad (13)$
in a neighbourhood of the (compressed) Bloch manifold.
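For concreteness, a minimal PyTorch sketch of the encoder defined by (12) is given below, assuming the input width equals the subspace dimension s = 10 so that $h^{(0)}(x) = x$ holds without an extra input layer. Class and method names (e.g. pre_relu, which exposes the weighted outputs z(x) used in section V) are illustrative and not from a released codebase.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """h^(i) = relu(h^(i-1) + g^(i)), g^(i) = W2 relu(W1 h + b1) + b2, as in (12)."""
    def __init__(self, width=10):
        super().__init__()
        self.fc1 = nn.Linear(width, width)
        self.fc2 = nn.Linear(width, width)

    def forward(self, h):
        return torch.relu(h + self.fc2(torch.relu(self.fc1(h))))

class MRFResnetEncoder(nn.Module):
    """R: s-dimensional subspace voxel -> p = 2 parameters (T1, T2)."""
    def __init__(self, s=10, p=2, n_blocks=6):
        super().__init__()
        self.blocks = nn.ModuleList([ResidualBlock(s) for _ in range(n_blocks)])
        self.head = nn.Linear(s, p)

    def pre_relu(self, x):
        # z(x) of (15): the weighted outputs before the last non-linearity
        for blk in self.blocks:
            x = blk(x)
        return self.head(x)

    def forward(self, x):
        return torch.relu(self.pre_relu(x))   # final ReLU imposes T1, T2 > 0
```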
2) Decoder: The proton density (PD) is a scaling factor that amplifies the Bloch responses in each voxel. Hence, after estimating the other, nonlinear NMR parameters (e.g. T1/T2) using the encoder part, PD can be explicitly resolved through (6). This would however require evaluating/solving the Bloch responses for all voxels and their parameters $\Theta_v$, which can be computationally intensive. Instead, we train a decoder network $G(\cdot)$ which, for given NMR parameters, approximately generates

$G(\Theta_v) \approx V^H \mathcal{B}(\Theta_v) \qquad (14)$
the corresponding compressed Bloch responses (clean fingerprints) in short runtimes. This allows (6) to be easily applied without significant computations. For the sequence design used in our experiments, it turns out that a fully-connected shallow network with one hidden layer and ReLU activations can approximate this step well.³ Unit dimensions are customised to a sequence used in our experiments encoding T1/T2 relaxation times, with reduced subspace dimension s = 10. The encoder has N = 6 residual blocks of 10 neurons width, and the decoder has 300 neurons in its single hidden layer. The subspace compression helps reduce the model sizes in both networks (hence reducing the risk of overfitted predictions) and also reduces the required training resources compared to uncompressed deep MRF approaches [10,11]. Further, to avoid losing discrimination between fingerprints (e.g. by magnitude-only data processing [10]), we adopt a practical phase-alignment heuristic from [20,19] to de-phase the TSMIs and training samples before they are fed to MRFResnet. This treatment allows the network, without loss of generality, to have real-valued parameters and to approximate real-valued mappings.⁴

³ We also observed a similar network complexity for generating responses to the well-known FISP sequence [3]. However, we did not achieve accurate predictions using shallow architectures of comparable sizes for R (at least two layers were needed, and with a larger model than MRFResnet [12,34]). This suggests that generating clean responses (decoding) is easier than projecting noisy fingerprints onto their generative parameters (encoding), and that the latter requires deep processing (see section V).

⁴ Another way is to duplicate the input size by separating real and imaginary signal components, e.g. [11]. We found this unnecessary in our experiments.
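A matching sketch of the decoder G in (14), the single-hidden-layer ReLU network described above (layer sizes follow the text; the class name is ours):

```python
import torch.nn as nn

class BlochDecoder(nn.Module):
    """G: (T1, T2) -> compressed clean fingerprint, approximating V^H B(theta), eq. (14)."""
    def __init__(self, p=2, s=10, hidden=300):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(p, hidden), nn.ReLU(), nn.Linear(hidden, s))

    def forward(self, theta):
        return self.net(theta)
```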
V. HIERARCHICAL PARTITIONING OF THE BLOCH RESPONSE MANIFOLD

In this part we show that MRFResnet provides a multi-scale piecewise affine approximation to the Bloch response manifold projection (5). Hierarchical partitioning and multi-scale approximations are also central to the fast search schemes proposed for DM-based MRF (see illustrations in [20,35]). However, unlike any form of DM (fast or exhaustive) that creates point-wise approximations for (7), MRFResnet does not memorise a dictionary; rather, it uses one to learn and efficiently encode a compact set of partitions and deep matched-filters for affine regression of the NMR quantities.

A. Affine spline function approximation
The MRFResnet encoder (and likewise its decoder network) is composed of linear connections and piecewise linear ReLU activations. This results in piecewise affine functions $h^{(i)}(x)$ after each residual block, as well as for the end-to-end mapping $R(x)$ (see e.g. [36,37]). Further, $R$ is Lipschitz continuous for continuous activation functions as above and for bounded $\{W^{(i)}, \beta^{(i)}\}$.
Theorem 1. Denote by $z : \mathbb{R}^s \to \mathbb{R}^p$,

$z(x) := W^{(N+1)} h^{(N)}(x) + \beta^{(N+1)} \qquad (15)$
the weighted outputs in (12) before the last non-linearity.⁵ The following affine spline representation holds for MRFResnet:
$z(x) = A[x]\,x + b[x] := \sum_{r} \big(A_r x + b_r\big)\, \iota_{\Omega_r}(x), \qquad (16)$
where $\iota_{\Omega_r}(x)$ is the indicator function with respect to a segment (set) $\Omega_r \subset \mathbb{R}^s$, returning $x$ if it belongs to the segment and $0$ otherwise; the segments form a disjoint partitioning of the input space with affine boundaries. Matrices $A_r \in \mathbb{R}^{p \times s}$ and vectors $b_r \in \mathbb{R}^p$ define the corresponding slopes and offsets for the input-output affine mapping in each segment. The shorthands $A[x]$ and $b[x]$ represent the input-dependent (piecewise affine) mapping of $z(x)$: $b[x]$ represents $p$ input-dependent offsets, while $A[x]$ is an input-dependent $p \times s$ matrix whose rows are deep matched-filters, each returning its correlation with $x$ for the corresponding output.

⁵ The last ReLU layer in R is for imposing the positivity of the T1/T2 values; the prediction task is therefore mainly done by the preceding layers.
Proof can be found in [36] for general feedforward networks with fully-connected, convolutional, pooling and/or residual layers, using any piecewise-linear activations. During training, the MRFResnet encoder learns $\{W^{(i)}, \beta^{(i)}\}$, or equivalently $\{A[x], b[x]\}$, to provide a continuous and piecewise affine approximation for (5). The universal approximation theorem [38] states that a shallow network with one, but very wide, hidden layer can do this. Deeper networks are however more practical to efficiently reduce the number of hidden units [39]. Indeed, we experimentally observe this (section VI-C) by comparing MRFResnet to a shallow learning scheme related to [25] based on Kernel Machines (KM) and random features [40].
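The representation (16) can be checked numerically on the encoder sketch from section IV-B: in the interior of a segment $\Omega_r$ the Jacobian of $z$ is constant, so the induced affine map reproduces $z$ exactly at nearby points. A toy check, assuming the MRFResnetEncoder sketch above:

```python
import torch

enc = MRFResnetEncoder()                 # the sketch from section IV-B
x = torch.randn(1, 10)
z = enc.pre_relu(x).squeeze(0)           # z(x) of (15)

# A[x]: rows are the back-propagated slopes dz_j/dx (eq. 17); b[x] = z(x) - A[x] x
A_x = torch.autograd.functional.jacobian(
    lambda v: enc.pre_relu(v).squeeze(0), x).squeeze(1)   # shape (p, s)
b_x = z - A_x @ x.squeeze(0)

# a small perturbation usually stays inside the same segment Omega_r,
# where the affine map (A_x, b_x) reproduces z up to floating-point error
x2 = x + 1e-4 * torch.randn_like(x)
assert torch.allclose(enc.pre_relu(x2).squeeze(0), A_x @ x2.squeeze(0) + b_x, atol=1e-5)
```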
B. Visualising MRFResnet segments on Bloch manifold
Remark 1. Continuity of $z(x)$ implies that adjacent segments $\Omega_r, \Omega_{r'}$ correspond to distinct $A_r \neq A_{r'}$. Indeed, if $A_r = A_{r'}$ and the only difference is in the offsets $b_r \neq b_{r'}$, then $\Omega_r, \Omega_{r'}$ will not intersect on their boundaries. Therefore they are not adjacent segments, unless contradicting the continuity assumption.
This remark gives an idea for visualising the input-space segments. For densely sampled input signals $x$, we compute derivatives of the weighted outputs (15) with respect to the inputs using back-propagation. These determine the input-dependent slopes in the affine spline formulation (16), i.e. the rows of $A[x]$ at a point $x$ are populated as follows, $\forall j = 1, 2, \ldots, p$:
$A[x]_{(j,\cdot)} = \left[ \frac{\partial z_j(x)}{\partial x_1}, \frac{\partial z_j(x)}{\partial x_2}, \ldots, \frac{\partial z_j(x)}{\partial x_s} \right]. \qquad (17)$
By vector quantisation (e.g. k-means clustering) we cluster regions of $x$ that output distinct slopes $A_r$ and identify the segments $\Omega_r$. A similar routine could be applied to compute input-space partitions by clustering the back-propagated output derivatives after each residual block (Theorem 1 and Remark 1 also hold for the intermediate blocks of $R$).
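This routine can be sketched as follows, again using the encoder sketch from section IV-B. Here dict_fingerprints stands for a (d, s) tensor of compressed dictionary atoms densely sampling the Bloch manifold, and the cluster count of 1000 is only indicative of the roughly one thousand partitions reported below:

```python
import torch
from sklearn.cluster import KMeans

def slopes(enc, X):
    """Stack the rows of A[x] (eq. 17) for a batch X of shape (n, s)."""
    X = X.clone().requires_grad_(True)
    Z = enc.pre_relu(X)                                   # (n, p)
    rows = [torch.autograd.grad(Z[:, j].sum(), X, retain_graph=True)[0]
            for j in range(Z.shape[1])]                   # each (n, s): dz_j/dx per sample
    return torch.stack(rows, dim=1)                       # (n, p, s)

A = slopes(enc, dict_fingerprints)                        # dict_fingerprints: (d, s) tensor
labels = KMeans(n_clusters=1000, n_init=3).fit_predict(   # one label per segment Omega_r
    A.reshape(len(A), -1).detach().numpy())
```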
According to [36], as we progress into deeper layers, the partitions are subdivided into smaller segments in a hierarchical fashion. This can be observed in Figure 2, where we adopted the above routine for the T1/T2-encoding MRF sequence used in our experiments and visualised the multi-scale (coarse-to-fine) partitions obtained after each residual layer. The Bloch response manifold is sampled across fine-gridded T1/T2 values (i.e. the MRF dictionary) to visualise the intersection of the input-space segments with this manifold (results are visualised across the three dominant principal component axes). The MRFResnet encoder learns about a thousand partitions for its end-to-end mapping $z(x)$. In the light of (16), we know that for each partition $\Omega_r$ the network implicitly encodes p = 2 deep matched-filters (the rows of $A[x]$, or alternatively $A_r$) and an offset term to locally linearly regress the T1/T2 outputs in that segment. As such, instead of memorising the >100K dictionary atoms used for training, the network learns a compact piecewise affine approximation to the Bloch manifold projection (5) as a rapid and memory-efficient alternative to DM's point-wise approximation (7). The total number of parameters used by MRFResnet (Table I) is two hundred times smaller than the size of the dimension-reduced MRF dictionary. Figure 3 shows the Bloch responses for a range of T1/T2 values, as well as the deep matched-filters learned by MRFResnet to predict each of these quantities in this range from noisy inputs. Computed through (17), matched-filters are one-dimensional analogues of the saliency maps, a.k.a. deep dream images [41], measuring sensitivities of the neurons with respect to the inputs.
VI. NUMERICAL EXPERIMENTS AND DISCUSSIONS

A. Datasets and 2D/3D acquisition parameters
Methods are tested on the Brainweb in-silico phantom (see supplementary materials), an EUROSPIN TO5 phantom (in-vitro), and a healthy human brain (in-vivo). In-vitro and in-vivo data were acquired on a 1.5T GE HDxT scanner using an 8-channel receive-only head RF coil. The adopted novel excitation sequence has T = 880 repetitions and jointly encodes T1/T2 values using an inversion pulse followed by a flip angle schedule that linearly ramps up from 1° to 70° in repetitions 1-400, ramps down to 1° in repetitions 400-600, and then stays constant at 1° for repetitions 600-880 (see more details in [43]). Three non-Cartesian readout trajectories were tested: 2D/3D variable-density spiral and 2D radial k-space subsampling patterns. Throughout, we used Tinv = 18 ms, fixed TR = 12 ms, and TE = 0.46/2.08 ms for the spiral/radial acquisitions, respectively. For the 2D/3D acquisitions we had 200²/200³ (mm²/mm³) FOV and 200²/200³ voxel image/tensor sizes, respectively. Further, the total numbers of interleaves for the 2D/3D spiral and 2D radial readouts were 377/48'400 and 967, respectively. The total acquisition times for the 2D and 3D scans were 10.56 seconds and 9:51 minutes, respectively.
B. Tested algorithms

1) TSMI reconstruction: We compare the model-based (convex) methods LRTV and LR, obtained by iteratively solving (10) with spatiotemporal (λ > 0) and temporal-only (λ = 0) regularisations, respectively. The latter is a convex relaxation of the BLIP algorithm [8]. Further, we compare against the non-iterative baselines zero-filling (ZF) (9) and ViewSharing (VS) [44]. VS aggregates spatial k-space data within neighbouring temporal frames to increase the per-frame samples and enhance spatial resolutions in a non-model-based fashion. Coil sensitivities were computed from the undersampled data using an adaptive coil combination scheme [45].
2) Quantitative inference: We compare the learned models MRFResnet (deep learning) and a shallow learning method based on Gaussian Kernel Machines (KM) related to [25]. We further consider the baselines DM and Fast Group Matching (FGM) [18], using exhaustive and fast dictionary searches, respectively.⁶
3) Learned models: All methods above use an s = 10 dimensional subspace model learned a priori from Bloch response simulations using PCA. For this, a dictionary of d = 113781 atoms sampling the T1 = [100:10:4000] (ms) and T2 = [20:2:600] (ms) grid was simulated using the Extended Phase Graph formalism [46]. The subspace-compressed dictionary was directly used in DM and FGM, whereas for learning-based inference it was only used for training. Clean fingerprints were used for training the MRFResnet decoder G, i.e. the Bloch response generative network. Noisy fingerprints (i.i.d. noise ∼ N(0, 0.01)) were used to train the MRFResnet encoder R. After noise corruption (i.e. data augmentation by a factor of 50), we performed a dictionary search to find the correct (closest-match) training labels, rather than those that originally generated the fingerprints, in order to learn a projection mapping rather than a (possibly overfitted) denoiser. Trainings used the Adam optimiser with MSE loss for 20 epochs, a 0.01 initial learning rate with decay factors 0.8/0.95, and mini-batch sizes 500/20 for R and G, respectively. The same datasets were used for training KM's encoder and decoder models using the LBFGS optimiser.
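The subspace learning and training-set construction just described can be sketched as follows. Array names and the l2-normalisation convention are our assumptions, the fingerprints are assumed phase-aligned (real-valued), and in practice the closest-match search over all 5.7M augmented samples would be batched:

```python
import numpy as np

# D: (d, T) dictionary of EPG-simulated Bloch responses, Theta: (d, 2) T1/T2 labels (given)
D = D / np.linalg.norm(D, axis=1, keepdims=True)    # l2-normalised fingerprints
_, _, Vt = np.linalg.svd(D, full_matrices=False)
V = Vt[:10].conj().T                                # T x s temporal subspace, s = 10 (PCA)
Dc = D @ V                                          # compressed dictionary, (d, s)

rng = np.random.default_rng(0)
aug = 50                                            # augmentation factor
Xn = np.repeat(Dc, aug, axis=0) + 0.1 * rng.standard_normal((len(Dc) * aug, 10))

# relabel each noisy fingerprint with its closest dictionary match, so that the
# encoder learns a projection onto the manifold rather than memorising generators
idx = np.argmax(np.abs(Xn @ Dc.conj().T), axis=1)
labels = Theta[idx]
```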
C. Deep vs. shallow models' prediction results
To compare the prediction performances of the MRFResnet and KM models, 500K out-of-sample noisy fingerprints were randomly generated and fed to the encoder models to estimate the T1/T2 parameters. The predicted T1/T2s were then fed to the decoder models for generating the corresponding noise-free Bloch responses. The ground-truth (GT) T1/T2s from DM were used to measure the encoders' performances based on the Mean Absolute (Percentage) Errors MAE $= \mathbb{E}[\,|\widehat{T1} - T1_{GT}|\,]$ and MAPE $= \mathbb{E}[\,|\widehat{T1} - T1_{GT}|/T1_{GT}\,]$ (similarly for T2). The corresponding clean fingerprints were used as GT to measure the generative model (decoder) predictions based on the Normalised-RMSE $= \mathbb{E}[\,\|G(\widehat{\Theta}) - \mathcal{B}(\Theta_{GT})\|/\|\mathcal{B}(\Theta_{GT})\|\,]$. Table I summarises our results.

⁶ Used hyperparameters: FGM used 100 groups; KM used kernel scales optimised by MATLAB's fitrkernel function and 1000/500 random features [40] per output index for the encoder/decoder models, respectively; VS used 880 shared views; LRTV used $\forall i,\, \lambda_i = \lambda = 0.2/0.04$ for the 2D/3D scanned data.

1) Discussion: MRFResnet outperforms KM and achieves reliable predictions for T1/T2 values and Bloch response generation, with a 1% average difference from the DM baseline. KM reports poor T2 and Bloch response estimations for the number of random features used. Comparing the sizes of both models, we can deduce the advantage of depth in the proposed learning approach to embedding DM, compared to its shallow alternative, for the adopted acquisition sequence.
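For reference, these error metrics amount to a few lines of NumPy (function names are ours):

```python
import numpy as np

def mae(est, gt):
    return np.mean(np.abs(est - gt))

def mape(est, gt):
    return 100 * np.mean(np.abs(est - gt) / gt)

def nrmse(gen, gt):
    """Per-fingerprint normalised RMSE between generated and clean Bloch responses."""
    return 100 * np.mean(np.linalg.norm(gen - gt, axis=1) / np.linalg.norm(gt, axis=1))
```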
D. In-vitro phantom experiment
The 2D (spiral/radial) and 3D (spiral) acquisition schemes were tested for measuring quantitative parameters in twelve tubes of the EUROSPIN TO5 phantom. Figure 4 displays the mean and standard deviation of the predicted T1/T2 values in each ROI (tube) using different reconstruction algorithms: ZF, LR, VS and the proposed LRTV, all fed to the MRFResnet for quantitative inference. The spin-echo and inversion-recovery spin-echo experiments suggested in the phantom's manual were used as references for the T1/T2 values. Computed parameter map images are also shown in the supplementary materials. Figure 5 displays the Bland-Altman plots of the percentage differences between the T1/T2 values of the phantom ROIs in the spiral and radial scans, estimated using ZF and LRTV.
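The Figure 5 comparison follows the standard Bland-Altman recipe, sketched generically below (variable names are ours):

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(spiral, radial, label):
    """Percentage-difference Bland-Altman plot for paired per-pixel estimates."""
    mean = 0.5 * (spiral + radial)
    diff = 100 * (spiral - radial) / mean
    plt.scatter(mean, diff, s=2)
    plt.axhline(diff.mean(), ls='--')                                # bias
    for sgn in (1, -1):
        plt.axhline(diff.mean() + sgn * 1.96 * diff.std(), ls=':')   # limits of agreement
    plt.xlabel(f'mean {label}')
    plt.ylabel('% difference')
```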
1) Discussion: From Figure 4 we observe that the tested methods (except VS for 2D⁷) report comparable performances in estimating the mean T1/T2 values. T1 values are comparable to the GT (although ZF, LR and VS slightly underestimate T1). The predicted T2 values, especially in high-T2 regimes, are under-estimated (negative bias).⁸ Overall, the proposed LRTV predicts the least biased T1/T2s. Notably, LRTV has the least variation around the estimated values. Across all experiments, averaged over all ROIs, LRTV's standard deviation is 1.5/2.5 times less than its closest competitor for predicting T1/T2, respectively. Further, from Figure 5 we observe that LRTV enables highly consistent predictions across different sampling protocols, i.e. the per-pixel estimated T1/T2 values in all ROIs obtained from radial and spiral measurements are 2 to 3 times more consistent with each other than those computed via ZF. We do not observe a similar consistency level in the other tested algorithms, as the readout-dependent undersampling artefacts in the images were not fully removed by them (Figure S2).

⁷ We observe that VS generally trades off image smoothness against overestimated T2s and underestimated T1s. This compromise is strongly unfavourable in 2D acquisitions. Larger k-space neighbourhood information was available/shared in the 3D (than 2D) acquisitions, which made 3D VS competitive.

⁸ We hypothesize this is due to physical effects, e.g. flip angle calibration errors, diffusion or magnetization transfer, that are currently un-modelled in the reconstruction schemes.
E. In-vivo 2D/3D experiments
We applied the same acquisition sequences for imaging a healthy volunteer's brain. Figure 6 displays the parametric maps reconstructed from the 2D spiral and radial readouts. We computed the T1, T2 and proton density (PD) maps using the reconstruction algorithms ZF, VS, LR and LRTV, and the MRFResnet for quantitative inference. We also tested KM inference after applying LRTV. For the 3D (spiral) acquisitions we compared LRTV and its closest competitor VS in Figure 7.
Outcomes from the other tested algorithms are displayed in the supplementary materials (Figure S3).
1) Discussion: The LRTV-MRFResnet outperforms all tested algorithms in reconstructing the T1, T2 and PD maps in all acquisition schemes. The other methods were unable to fully remove the under-sampling artefacts in the TSMIs; these errors propagated to the parameter inference phase and resulted in inaccurate maps. The temporal-only priors incorporated within LR are shown to be insufficient to regularise the inverse problem, and LR sometimes (e.g. in 2D spiral acquisitions) admits solutions with even stronger artefacts than the model-free ZF baseline. This was previously observed for other non-Cartesian MRF readouts (e.g. [20,19]), and highlights the need for adding appropriate spatial regularisation. The non-model-based VS results in spatially smoother maps than ZF and LR, but is unable to fully clean the artefacts. Further, and consistent with our in-vitro experiment, we observe that VS overestimates the PD and T2 values (e.g. in white and grey matter regions) in the tested 2D acquisitions (i.e. spatial regularisation trades off quantification accuracy). Finally, the learning-based KM and MRFResnet inference schemes applied after LRTV reconstruction both output comparably accurate T1 maps. However, the shallow KM model, despite having a model size larger than MRFResnet, is unable to learn accurate T2 and PD quantification and results in poorly estimated maps, consistent with our observations in section VI-C.

TABLE II: NRMSE between the T1, T2 and PD maps obtained from MRFResnet and DM, after LRTV reconstruction.
F. MRFResnet's consistency with DM
Further to our validations in section VI-C, we compare the parametric maps computed by DM and MRFResnet for the in-vitro and in-vivo experiments, where LRTV was applied for TSMI reconstruction. Results are summarised in Table II and, for the 2D spiral scans, illustrated in Figure 8. We observe very small differences in the parametric maps (Table II); particularly for the regions corresponding to white and grey matter, the predictions are highly consistent with each other (Figure 8).
G. Runtimes
Computations were conducted on an Intel Xeon E5-2667v4 processor (16 CPU cores), 32 GB RAM and an NVIDIA 2080Ti GPU. Where parallel computing was feasible, we adopted GPU implementations for speedup, i.e. in the forward/adjoint NUFFT operations [47], the TV shrinkage operator [48], VS, MRFResnet, DM and FGM. Table III reports the computation times of the tested methods for the 2D/3D in-vivo experiments. LRTV benefits from momentum-acceleration and takes 7-11 iterations to converge, which is much faster than DM-based iterative methods (for comparisons, see the runtimes in [20]). We observed that the LR method, without spatial regularisation, makes very slow progress towards its (inaccurate) solution and does not converge within our limit of 30 iterations. This indicates that exploiting additional (spatial) solution structure, despite introducing TV shrinkage computations, has an overall runtime advantage (see e.g. [49,50]) by avoiding extra costly forward/adjoint iterations. LRTV runs 2-3 times slower than its non-iterative competitor VS while achieving better predictions. DM-based inference methods are orders of magnitude slower than MRFResnet; the strong prediction consistency between the two approaches therefore suggests adopting neural inference in favour of runtime.
VII. CONCLUSIONS
We proposed a two-stage DM-free approach for multi-parametric QMRI image computing based on compressed sensing reconstruction and deep learning. The reconstruction is convex and incorporates efficient spatiotemporal regularisations within an accelerated iterative shrinkage algorithm to minimise undersampling artefacts in the computed TSMI. We proposed MRFResnet, a compact auto-encoder network with deep residual blocks, in order to embed Bloch manifold projections through multi-scale piecewise affine approximations, and to replace the non-scalable DM baseline for quantitative inference. We demonstrated the effectiveness of the proposed scheme through validations on a novel 2D/3D multi-parametric quantitative acquisition sequence. Future extensions could address motion artefacts and multi-compartment voxel quantification [51,52], which are currently un-modelled in our pipeline. Further accelerations could be studied through stochastic gradients [53] and/or learned proximity operations [54], for which the proposed scheme could complementarily be adopted to create accurate labelled parametric maps for training.

Supplementary Materials

SI. IN-SILICO PHANTOM EXPERIMENT RESULTS

We further simulated the above-mentioned 2D spiral and radial acquisitions for measuring parametric maps in a slice of the in-silico Brainweb phantom [55]. A challenging single-coil acquisition with eight times fewer measurements was considered, i.e. S(X) = X (identity sensitivity map). Table I compares the reconstruction performances for the T1, T2 and PD maps and the computed TSMIs using the ZF, LR, VS and LRTV algorithms, followed by MRFResnet or KM for quantitative inference. Reconstructed maps are also shown in Figure S1. Results are consistent with those obtained in the previous experiments. KM outputs inaccurate T2/PD predictions. Due to the extremely low k-space data available for view sharing, VS also fails to recover T2 information. The temporal priors used by LR are insufficient to reject under-sampling artefacts. On the other hand, the spatiotemporally regularised LRTV significantly improves TSMI reconstructions (e.g. a 3 to 6 dB enhancement compared to the ZF baseline) by successfully removing strong aliasing artefacts (see Figure S1). This enables accurate parameter inference in the next stage using DM or the DM-free alternative MRFResnet. As can be seen in Table I, MRFResnet and the DM baseline score competitive quantitative inference results regardless of the reconstruction algorithm.
SII. in-vitro PHANTOM RECONSTRUCTED MAPS
In Figure S2 we display the computed T1, T2 and PD maps for our in-vitro phantom experiments in section VI-D. The tested reconstruction methods are ZF, LR, VS and the proposed LRTV, all fed to the MRFResnet for quantitative inference. Methods ZF and LR result in noisy predictions. It can be observed that for the 2D acquisitions (spiral/radial) VS strongly compromises between outputting smoother images and overestimating T2 values (bias). This issue is also present in the in-vivo and in-silico experiments, where less k-space neighbourhood information is available to share (compared to the 3D acquisitions); this makes VS non-competitive and, further, the overall quantifications inconsistent across the 2D/3D acquisitions. The proposed LRTV overcomes this issue through a model-based compressed sensing reconstruction.
SIII. RECONSTRUCTED MAPS FOR THE 3D in-vivo SCANS
To supplement our comparisons in section VI-E (Figure 7) regarding the 3D quantitative brain imaging scans, we display the parametric maps (Figure S3) computed by the ZF and LR algorithms, both fed to the MRFResnet for quantitative inference. As can be seen, the predictions suffer from undersampling artefacts and are not competitive with those computed by the proposed LRTV algorithm (Figure 7).
Fig. 1: MRFResnet (encoder) for T1/T2 inference, the Bloch response generative network G (decoder), and the implicit linear dimensionality reduction/expansion (first/last) layers using the subspace model $V^H$/$V$.
Fig. 2: Coarse-to-fine partitioning of the Bloch manifold (top row) sampled by a dense fingerprinting dictionary, and their generative T1/T2 parameters (bottom row), using MRFResnet. From left to right, figures illustrate the learned partitions after each residual block.
Fig. 3: (a) The mean and (b) centred Bloch responses within the range (T1, T2) ∈ [1000-1200] × [80-110] (ms). (c)-(d) The end-to-end matched-filters learned by the MRFResnet to regress T1/T2 values, shown across the original (non-compressed) temporal dimension.
Fig. 4: The mean T1 (left column) and T2 (right column) values in milliseconds and their standard deviations (error bars) estimated using four reconstruction methods, compared to the reference values (GT) in 12 phantom ROIs. Results are compared for 2D spiral (top row), 2D radial (middle row) and 3D spiral acquisitions (bottom row).
Fig. 6: Reconstructed T1, T2 and PD maps (left to right) from 2D (a) spiral and (b) radial scans using (from top to bottom) LRTV, VS, ZF and LR algorithms followed by MRFResnet inference and (the last row) LRTV with KM inference.
Fig. 7: Reconstructed T1 (first two columns), T2 (second two columns) and PD (third two columns) maps using a 3D scan with spiral readouts. The (zoomed) 3D maps are computed using LRTV (left sub-column) and VS (right sub-column) algorithms followed by MRFResnet for quantitative inference.
Fig. 8: Differences in the predicted T1 (left), T2 (middle) and PD (right) maps between MRFResnet and DM, after applying LRTV reconstruction.
Fig. S1: Reconstructed T1, T2 and PD maps (left to right) from 2D (a) spiral and (b) radial simulated scans using (from top to bottom) LRTV, VS, ZF and LR algorithms followed by MRFResnet inference and (the last row) LRTV with KM inference. Figure (c) displays the ground-truth maps used for simulations.
Fig. S2: Reconstructed T1 (3 left columns) and T2 (3 right columns) maps of the EUROSPIN TO5 phantom, imaged using the 2D spiral (1st sub-column), 2D radial (2nd sub-column) and 3D spiral (3rd sub-column) k-space acquisitions. Tested reconstruction methods (from top to bottom row) are ZF, LR, VS and LRTV, all fed to MRFResnet for inference.
Fig. S3: Reconstructed T1 (first two columns), T2 (second two columns) and PD (third two columns) maps using a 3D scan with spiral readouts. The (zoomed) 3D maps are computed using ZF (left sub-column) and LR (right sub-column) algorithms, followed by MRFResnet for quantitative inference.
             Total # params.   T1 (ms) MAE   T1 (%) MAPE   T2 (ms) MAE   T2 (%) MAPE   B (%) NRMSE
MRFResnet    5.2e3             7.2           0.9           1.9           1.0           0.8
KM fitting   18.0e4            28.3          3.7           21.2          11.6          13.3

TABLE I: Prediction performances of MRFResnet and KM.
TABLE III: Tested runtimes for quantitative brain image computing.
TABLE I: The T1, T2 and PD maps' MAPE and NRMSE errors (%), and the TSMI reconstruction PSNRs (dB) for the in-silico experiment. Results are sorted for the spiral/radial acquisitions and validated against the ground truth.
¹ An optional warm-start could rescale the chosen $\mu_1$ by the factor $\|Y\|/\|\mathcal{A}(X^1)\|$.
² In non-convex (e.g. DM-based) approaches, incorporating extra priors such as spatial regularity constraints is not always algorithmically tractable; e.g. sequential projections onto two sets, one of which is non-convex, may not result in a projection onto their intersection.
Quantitative MRI of the brain: measuring changes caused by disease. P Tofts, John Wiley & SonsP. Tofts, Quantitative MRI of the brain: measuring changes caused by disease. John Wiley & Sons, 2005.
Magnetic resonance fingerprinting. D Ma, V Gulani, N Seiberlich, K Liu, J Sunshine, J Durek, M Griswold, Nature. 4957440D. Ma, V. Gulani, N. Seiberlich, K. Liu, J. Sunshine, J. Durek, and M. Griswold, "Magnetic resonance fingerprinting," Nature, vol. 495, no. 7440, pp. 187-192, 2013.
MR fingerprinting using fast imaging with steady state precession (fisp) with spiral readout. N Jiang, Y , D Ma, N Seiberlich, V Gulani, M Griswold, Magnetic resonance in medicine. 746N. Jiang Y, D. Ma, N. Seiberlich, V. Gulani, and M. Griswold, "MR fingerprinting using fast imaging with steady state precession (fisp) with spiral readout," Magnetic resonance in medicine, vol. 74, no. 6, pp. 1621-1631, 2015.
Magnetic resonance fingerprinting using echo-planar imaging: Joint quantification of T1 and T2* relaxation times. B Rieger, F Zimmer, J Zapp, S Weingärtner, L R Schad, Magnetic resonance in medicine. 785B. Rieger, F. Zimmer, J. Zapp, S. Weingärtner, and L. R. Schad, "Magnetic resonance fingerprinting using echo-planar imaging: Joint quantification of T1 and T2* relaxation times," Magnetic resonance in medicine, vol. 78, no. 5, pp. 1724-1733, 2017.
Estimation of perfusion properties with mr fingerprinting arterial spin labeling. K L Wright, Y Jiang, D Ma, D C Noll, M A Griswold, V Gulani, L Hernandez-Garcia, Magnetic resonance imaging. 50K. L. Wright, Y. Jiang, D. Ma, D. C. Noll, M. A. Griswold, V. Gulani, and L. Hernandez-Garcia, "Estimation of perfusion properties with mr fingerprinting arterial spin labeling," Magnetic resonance imaging, vol. 50, pp. 68-77, 2018.
Simultaneous T1, T2 and diffusion quantification using multiple contrast prepared magnetic resonance fingerprinting. Y Jiang, J Hamilton, W Lo, K Wright, Proc. Intl. Soc. Mag. Res. Med. Y. Jiang, J. Hamilton, W. Lo, K. Wright et al., "Simultaneous T1, T2 and diffusion quantification using multiple contrast prepared magnetic resonance fingerprinting," in Proc. Intl. Soc. Mag. Res. Med., 2017.
Sparse mri: The application of compressed sensing for rapid mr imaging. M Lustig, D Donoho, J M Pauly, Magnetic Resonance in Medicine. 586M. Lustig, D. Donoho, and J. M. Pauly, "Sparse mri: The application of compressed sensing for rapid mr imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182-1195, 2007.
A compressed sensing framework for magnetic resonance fingerprinting. M Davies, G Puy, P Vandergheynst, Y Wiaux, SIAM Journal on Imaging Sciences. 74M. Davies, G. Puy, P. Vandergheynst, and Y. Wiaux, "A compressed sensing framework for magnetic resonance fingerprinting," SIAM Jour- nal on Imaging Sciences, vol. 7, no. 4, pp. 2623-2656, 2014.
Low rank alternating direction method of multipliers reconstruction for mr fingerprinting. J Assländer, M A Cloos, F Knoll, D K Sodickson, J Hennig, R Lattanzi, Magnetic resonance in medicine. 791J. Assländer, M. A. Cloos, F. Knoll, D. K. Sodickson, J. Hennig, and R. Lattanzi, "Low rank alternating direction method of multipliers reconstruction for mr fingerprinting," Magnetic resonance in medicine, vol. 79, no. 1, pp. 83-96, 2018.
MR fingerprinting deep reconstruction network (DRONE). O Cohen, B Zhu, M S Rosen, Magnetic resonance in medicine. 803O. Cohen, B. Zhu, and M. S. Rosen, "MR fingerprinting deep recon- struction network (DRONE)," Magnetic resonance in medicine, vol. 80, no. 3, pp. 885-894, 2018.
Better than real: Complex-valued neural nets for MRI fingerprinting. P Virtue, X Y Stella, M Lustig, Image Processing (ICIP), 2017
P. Virtue, X. Y. Stella, and M. Lustig, "Better than real: Complex-valued neural nets for mri fingerprinting," in Image Processing (ICIP), 2017 IEEE International Conference on. IEEE, 2017, pp. 3953-3957.
Geometry of deep learning for magnetic resonance fingerprinting. M Golbabaee, D Chen, P A Gómez, M I Menzel, M E Davies, ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing. M. Golbabaee, D. Chen, P. A. Gómez, M. I. Menzel, and M. E. Davies, "Geometry of deep learning for magnetic resonance fingerprinting," in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 7825-7829.
Deep learning for magnetic resonance fingerprinting: A new approach for predicting quantitative parameter values from time series. E Hoppe, G Körzdörfer, T Würfl, J Wetzl, F Lugauer, J Pfeuffer, A Maier, Studies in health technology and informatics. 243202E. Hoppe, G. Körzdörfer, T. Würfl, J. Wetzl, F. Lugauer, J. Pfeuffer, and A. Maier, "Deep learning for magnetic resonance fingerprinting: A new approach for predicting quantitative parameter values from time series." Studies in health technology and informatics, vol. 243, p. 202, 2017.
Magnetic resonance fingerprinting reconstruction via spatiotemporal convolutional neural networks. F Balsiger, A S Konar, S Chikop, V Chandran, O Scheidegger, S Geethanath, M Reyes, International Workshop on Machine Learning for Medical Image Reconstruction. SpringerF. Balsiger, A. S. Konar, S. Chikop, V. Chandran, O. Scheidegger, S. Geethanath, and M. Reyes, "Magnetic resonance fingerprinting reconstruction via spatiotemporal convolutional neural networks," in International Workshop on Machine Learning for Medical Image Re- construction. Springer, 2018, pp. 39-46.
Deep learning for fast and spatially-constrained tissue quantification from highly-accelerated data in magnetic resonance fingerprinting. Z Fang, Y Chen, M Liu, L Xiang, Q Zhang, Q Wang, W Lin, D Shen, IEEE transactions on medical imaging. Z. Fang, Y. Chen, M. Liu, L. Xiang, Q. Zhang, Q. Wang, W. Lin, and D. Shen, "Deep learning for fast and spatially-constrained tissue quantification from highly-accelerated data in magnetic resonance fin- gerprinting," IEEE transactions on medical imaging, 2019.
Nonlinear total variation based noise removal algorithms. L I Rudin, S Osher, E Fatemi, Physica D: nonlinear phenomena. 601-4L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D: nonlinear phenomena, vol. 60, no. 1-4, pp. 259-268, 1992.
SVD compression for magnetic resonance fingerprinting in the time domain. D F Mcgivney, E Pierre, D Ma, Y Jiang, H Saybasili, V Gulani, M A Griswold, IEEE transactions on medical imaging. 3312D. F. McGivney, E. Pierre, D. Ma, Y. Jiang, H. Saybasili, V. Gulani, and M. A. Griswold, "SVD compression for magnetic resonance finger- printing in the time domain," IEEE transactions on medical imaging, vol. 33, no. 12, pp. 2311-2322, 2014.
Fast group matching for mr fingerprinting reconstruction. S F Cauley, K Setsompop, D Ma, Y Jiang, H Ye, E Adalsteinsson, M A Griswold, L L Wald, Magnetic resonance in medicine. 742S. F. Cauley, K. Setsompop, D. Ma, Y. Jiang, H. Ye, E. Adalsteinsson, M. A. Griswold, and L. L. Wald, "Fast group matching for mr finger- printing reconstruction," Magnetic resonance in medicine, vol. 74, no. 2, pp. 523-528, 2015.
Air-mrf: Accelerated iterative reconstruction for magnetic resonance fingerprinting. C C Cline, X Chen, B Mailhe, Q Wang, J Pfeuffer, M Nittka, M A Griswold, P Speier, M S Nadar, Magnetic resonance imaging. 41C. C. Cline, X. Chen, B. Mailhe, Q. Wang, J. Pfeuffer, M. Nittka, M. A. Griswold, P. Speier, and M. S. Nadar, "Air-mrf: Accelerated iterative reconstruction for magnetic resonance fingerprinting," Magnetic resonance imaging, vol. 41, pp. 29-40, 2017.
Coverblip: accelerated and scalable iterative matched-filtering for magnetic resonance fingerprint reconstruction. M Golbabaee, Z Chen, Y Wiaux, M Davies, Inverse Problems. 3615003M. Golbabaee, Z. Chen, Y. Wiaux, and M. Davies, "Coverblip: accel- erated and scalable iterative matched-filtering for magnetic resonance fingerprint reconstruction," Inverse Problems, vol. 36, p. 015003, 2019.
Low rank and spatial regularization model for magnetic resonance fingerprinting. S Arberet, X Chen, B Mailhe, M Nadar, P Speier, uS Patent 2019/0041480S. Arberet, X. Chen, B. Mailhe, M. Nadar, and P. Speier, "Low rank and spatial regularization model for magnetic resonance fingerprinting," 2019, uS Patent 2019/0041480.
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling. B Zhao, K Setsompop, E Adalsteinsson, B Gagoski, H Ye, D Ma, Y Jiang, P Ellen Grant, M A Griswold, L L Wald, Magnetic resonance in medicine. 792B. Zhao, K. Setsompop, E. Adalsteinsson, B. Gagoski, H. Ye, D. Ma, Y. Jiang, P. Ellen Grant, M. A. Griswold, and L. L. Wald, "Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling," Magnetic resonance in medicine, vol. 79, no. 2, pp. 933-942, 2018.
Low-rank magnetic resonance fingerprinting. G Mazor, L Weizman, A Tal, Y C Eldar, Medical physics. 459G. Mazor, L. Weizman, A. Tal, and Y. C. Eldar, "Low-rank magnetic resonance fingerprinting," Medical physics, vol. 45, no. 9, pp. 4066- 4084, 2018.
Magnetic resonance fingerprinting using recurrent neural networks. I Oksuz, G Cruz, J Clough, A Bustin, N Fuin, R M Botnar, C Prieto, A P King, J A Schnabel, IEEE Intl. Symposium on Biomedical Imaging (ISBI). I. Oksuz, G. Cruz, J. Clough, A. Bustin, N. Fuin, R. M. Botnar, C. Prieto, A. P. King, and J. A. Schnabel, "Magnetic resonance fingerprinting using recurrent neural networks," in IEEE Intl. Symposium on Biomedical Imaging (ISBI), 2019, pp. 1537-1540.
Dictionary-free mri perk: Parameter estimation via regression with kernels. G Nataraj, J Nielsen, C Scott, J Fessler, IEEE transactions on medical imaging. 379G. Nataraj, J. Nielsen, C. Scott, and J. Fessler, "Dictionary-free mri perk: Parameter estimation via regression with kernels," IEEE transactions on medical imaging, vol. 37, no. 9, pp. 2103-2114, 2018.
Matrix treatment of nuclear induction. E Jaynes, Physical Review. 9841099E. Jaynes, "Matrix treatment of nuclear induction," Physical Review, vol. 98, no. 4, p. 1099, 1955.
A fast iterative shrinkage-thresholding algorithm for linear inverse problems. A Beck, M Teboulle, SIAM journal on imaging sciences. 21A. Beck and M. Teboulle, "A fast iterative shrinkage-thresholding algo- rithm for linear inverse problems," SIAM journal on imaging sciences, vol. 2, no. 1, pp. 183-202, 2009.
A method for solving the convex programming problem with convergence rate o (1/kˆ2). Y E Nesterov, Dokl. akad. nauk Sssr. 269Y. E. Nesterov, "A method for solving the convex programming problem with convergence rate o (1/kˆ2)," in Dokl. akad. nauk Sssr, vol. 269, 1983, pp. 543-547.
A first-order primal-dual algorithm for convex problems with applications to imaging. A Chambolle, T Pock, Journal of Mathematical Imaging and Vision. 401A. Chambolle and T. Pock, "A first-order primal-dual algorithm for convex problems with applications to imaging," Journal of Mathematical Imaging and Vision, vol. 40, no. 1, pp. 120-145, 2011.
Multi-shot echo planar imaging for accelerated cartesian mr fingerprinting: an alternative to conventional spiral mr fingerprinting. A J V Benjamin, P A Gómez, M Golbabaee, Z B Mahbub, T Sprenger, M I Menzel, M Davies, I Marshall, Magnetic resonance imaging. 61A. J. V. Benjamin, P. A. Gómez, M. Golbabaee, Z. B. Mahbub, T. Sprenger, M. I. Menzel, M. Davies, and I. Marshall, "Multi-shot echo planar imaging for accelerated cartesian mr fingerprinting: an alternative to conventional spiral mr fingerprinting," Magnetic resonance imaging, vol. 61, pp. 20-32, 2019.
Hyperspectral image compressed sensing via low-rank and joint-sparse matrix recovery. M Golbabaee, P Vandergheynst, Acoustics, Speech and Signal Processing. IEEE2012 IEEE International Conference onM. Golbabaee and P. Vandergheynst, "Hyperspectral image compressed sensing via low-rank and joint-sparse matrix recovery," in Acoustics, Speech and Signal Processing (ICASSP), 2012 IEEE International Conference on. IEEE, 2012, pp. 2741-2744.
Joint trace/tv norm minimization: A new efficient approach for spectral compressive imaging. M Golbabaee, P Vandergheynst, Image Processing (ICIP), 2012 19th IEEE International Conference on. IEEE, 2012
M. Golbabaee and P. Vandergheynst, "Joint trace/tv norm minimization: A new efficient approach for spectral compressive imaging," in Image Processing (ICIP), 2012 19th IEEE International Conference on. IEEE, 2012, pp. 933-936.
Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. P Vincent, H Larochelle, I Lajoie, Y Bengio, P.-A Manzagol, Journal of machine learning research. 11P. Vincent, H. Larochelle, I. Lajoie, Y. Bengio, and P.-A. Manzagol, "Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion," Journal of machine learning research, vol. 11, no. Dec, pp. 3371-3408, 2010.
Deep MR fingerprinting with total-variation and low-rank subspace priors. M Golbabaee, C Pirkl, M Menze, G Bounincotri, P Gomez, Proceedings of Intl. Soc. Mag. Res. Med. (ISMRM). M. Golbabaee, C. Pirkl, M. Menze, G. Bounincotri, and P. Gomez, "Deep MR fingerprinting with total-variation and low-rank subspace priors," in Proceedings of Intl. Soc. Mag. Res. Med. (ISMRM), 2019.
Cover tree compressed sensing for fast MR fingerprint recovery. M Golbabaee, Z Chen, Y Wiaux, M Davies, IEEE Intl. Workshop on Machine Learning for Signal Processing. M. Golbabaee, Z. Chen, Y. Wiaux, and M. Davies, "Cover tree compressed sensing for fast MR fingerprint recovery," in IEEE Intl. Workshop on Machine Learning for Signal Processing, 2017.
A spline theory of deep learning. R Balestriero, Richard Baraniuk, Proceedings of the Intl. Conference on Machine Learning. the Intl. Conference on Machine Learning80R. Balestriero and richard baraniuk, "A spline theory of deep learning," in Proceedings of the Intl. Conference on Machine Learning, vol. 80, 2018, pp. 374-383.
On the number of linear regions of deep neural networks. G F Montufar, R Pascanu, K Cho, Y Bengio, Advances in neural information processing systems. G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio, "On the number of linear regions of deep neural networks," in Advances in neural information processing systems, 2014, pp. 2924-2932.
Approximation by superpositions of a sigmoidal function. G Cybenko, Mathematics of control, signals and systems. 2G. Cybenko, "Approximation by superpositions of a sigmoidal function," Mathematics of control, signals and systems, vol. 2, pp. 303-314, 1989.
Shallow vs. deep sum-product networks. O Delalleau, Y Bengio, Advances in Neural Information Processing Systems. O. Delalleau and Y. Bengio, "Shallow vs. deep sum-product networks," in Advances in Neural Information Processing Systems, 2011, pp. 666- 674.
Deep kernel learning. A G Wilson, Z Hu, R Salakhutdinov, E P Xing, Artificial Intelligence and Statistics. A. G. Wilson, Z. Hu, R. Salakhutdinov, and E. P. Xing, "Deep kernel learning," in Artificial Intelligence and Statistics, 2016, pp. 370-378.
Very deep convolutional networks for large-scale image recognition. K Simonyan, A Zisserman, arXiv:1409.1556arXiv preprintK. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
Ii. performance assessment and quality control in mri by eurospin test objects and protocols. R Lerski, J De Certaines, 11Magnetic resonance imagingR. Lerski and J. De Certaines, "Ii. performance assessment and quality control in mri by eurospin test objects and protocols," Magnetic reso- nance imaging, vol. 11, no. 6, pp. 817-833, 1993.
Rapid three-dimensional multiparametric MRI with quantitative transient-state imaging. P Gómez, M Cencini, M Golbabaee, R Schulte, G Fallo, L Peretti, M Tosetti, B Menze, G Buonincontri, arXiv:2001.07173arXiv preprintP. Gómez, M. Cencini, M. Golbabaee, R. Schulte, G. Fallo, L. Peretti, M. Tosetti, B. Menze, and G. Buonincontri, "Rapid three-dimensional multiparametric MRI with quantitative transient-state imaging," arXiv preprint arXiv:2001.07173, 2018.
Mr fingerprinting with simultaneous b1 estimation. G Buonincontri, S J Sawiak, Magnetic resonance in medicine. 764G. Buonincontri and S. J. Sawiak, "Mr fingerprinting with simultaneous b1 estimation," Magnetic resonance in medicine, vol. 76, no. 4, pp. 1127-1135, 2016.
Adaptive reconstruction of phased array mr imagery. D O Walsh, A F Gmitro, M W Marcellin, Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine. 435D. O. Walsh, A. F. Gmitro, and M. W. Marcellin, "Adaptive reconstruc- tion of phased array mr imagery," Magnetic Resonance in Medicine: An Official Journal of the International Society for Magnetic Resonance in Medicine, vol. 43, no. 5, pp. 682-690, 2000.
Extended phase graphs: Dephasing, RF pulses, and echoespure and simple. M Weigel, Journal of Magnetic Resonance Imaging. 412M. Weigel, "Extended phase graphs: Dephasing, RF pulses, and echoes- pure and simple," Journal of Magnetic Resonance Imaging, vol. 41, no. 2, pp. 266-295, 2015.
Computational and statistical tradeoffs via convex relaxation. V Chandrasekaran, M I Jordan, Proceedings of the National Academy of Sciences. 11013V. Chandrasekaran and M. I. Jordan, "Computational and statistical tradeoffs via convex relaxation," Proceedings of the National Academy of Sciences, vol. 110, no. 13, pp. E1181-E1190, 2013.
Gradient projection iterative sketch for large scale constrained least-squares. J Tang, M Golbabaee, M Davies, Proceedings of the Intl. Conference on Machine Learning. the Intl. Conference on Machine Learning70J. Tang, M. Golbabaee, and M. Davies, "Gradient projection iterative sketch for large scale constrained least-squares," Proceedings of the Intl. Conference on Machine Learning, vol. 70, pp. 3377-3386, 2017.
Rigid motion-corrected magnetic resonance fingerprinting. G Cruz, O Jaubert, T Schneider, R M Botnar, C Prieto, Magnetic resonance in medicine. 812G. Cruz, O. Jaubert, T. Schneider, R. M. Botnar, and C. Prieto, "Rigid motion-corrected magnetic resonance fingerprinting," Magnetic resonance in medicine, vol. 81, no. 2, pp. 947-961, 2019.
Greedy approximate projection for magnetic resonance fingerprinting with partial volumes. R Duarte, A Repetti, P A Gómez, M Davies, Y Wiaux, arXiv:1807.06912arXiv preprintR. Duarte, A. Repetti, P. A. Gómez, M. Davies, and Y. Wiaux, "Greedy approximate projection for magnetic resonance fingerprinting with par- tial volumes," arXiv preprint arXiv:1807.06912, 2018.
The practicality of stochastic optimization in imaging inverse problems. J Tang, K Egiazarian, M Golbabaee, M Davies, arXiv:1910.10100arXiv preprintJ. Tang, K. Egiazarian, M. Golbabaee, and M. Davies, "The practicality of stochastic optimization in imaging inverse problems," arXiv preprint arXiv:1910.10100, 2019.
Rest-katyusha: exploiting the solution's structure via scheduled restart schemes. J Tang, M Golbabaee, F Bach, Advances in Neural Information Processing Systems. J. Tang, M. Golbabaee, F. Bach et al., "Rest-katyusha: exploiting the solution's structure via scheduled restart schemes," in Advances in Neural Information Processing Systems, 2018, pp. 429-440.
| [] |
[
"Multiverse Predictions for Habitability: Stellar and Atmospheric Habitability",
"Multiverse Predictions for Habitability: Stellar and Atmospheric Habitability"
] | [
"Mccullen Sandora \nBlue Marble Space Institute of Science\n98154SeattleWAUSA\n",
"Vladimir Airapetian \nSellers Exoplanetary Environments Collaboration\nNASA Goddard Space Flight Center\n20771GreenbeltMDUSA\n\nDepartment of Physics\nAmerican University\nWashingtonDCUSA\n",
"Luke Barnes \nSchool of Science\nWestern Sydney University\nLocked Bag 1797\n\nPenrith South DC\n2751NSWAustralia\n",
"Geraint F Lewis \nSydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nA28, 2006NSWAustralia\n"
] | [
"Blue Marble Space Institute of Science\n98154SeattleWAUSA",
"Sellers Exoplanetary Environments Collaboration\nNASA Goddard Space Flight Center\n20771GreenbeltMDUSA",
"Department of Physics\nAmerican University\nWashingtonDCUSA",
"School of Science\nWestern Sydney University\nLocked Bag 1797",
"Penrith South DC\n2751NSWAustralia",
"Sydney Institute for Astronomy\nSchool of Physics\nThe University of Sydney\nA28, 2006NSWAustralia"
] | [] | Stellar activity and planetary atmospheric properties have the potential to strongly influence habitability. To date, neither have been adequately studied in the multiverse context, so there has been no assessment of how these effects impact the probabilities of observing our fundamental constants. Here, we consider the effects of solar wind, mass loss, and extreme ultra-violet (XUV) flux on planetary atmospheres, how these effects scale with fundamental constants, and how this affects the likelihood of our observations. We determine the minimum atmospheric mass that can withstand erosion, maintain liquid surface water, and buffer diurnal temperature changes. We consider two plausible sources of Earth's atmosphere, as well as the notion that only initially slowly rotating stars are habitable, and find that all are equally compatible with the multiverse. We consider whether planetary magnetic fields are necessary for habitability, and find five boundaries in parameter space where magnetic fields are precluded. We find that if an Earth-like carbon-to-oxygen ratio is required for life, atmospheric effects do not have much of an impact on multiverse calculations. If significantly different carbon-to-oxygen ratios are compatible with life, magnetic fields must not be essential for life, and planet atmosphere must not scale with stellar nitrogen abundance, or else the multiverse would be ruled out to a high degree of confidence. | 10.3390/universe9010004 | [
"https://export.arxiv.org/pdf/2303.03119v1.pdf"
] | 255,027,067 | 2303.03119 | 4ed34042c4c1492674d33b5c32ca1b59ba273dac |
Multiverse Predictions for Habitability: Stellar and Atmospheric Habitability
Mccullen Sandora
Blue Marble Space Institute of Science, Seattle, WA 98154, USA
Vladimir Airapetian
Sellers Exoplanetary Environments Collaboration, NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
Department of Physics, American University, Washington, DC, USA
Luke Barnes
School of Science, Western Sydney University, Locked Bag 1797, Penrith South DC, NSW 2751, Australia
Geraint F Lewis
Sydney Institute for Astronomy, School of Physics, The University of Sydney, A28, NSW 2006, Australia
Multiverse Predictions for Habitability: Stellar and Atmospheric Habitability
Article
Keywords: multiverse; habitability; stellar activity; planetary atmospheres
Abstract: Stellar activity and planetary atmospheric properties have the potential to strongly influence habitability. To date, neither have been adequately studied in the multiverse context, so there has been no assessment of how these effects impact the probabilities of observing our fundamental constants. Here, we consider the effects of solar wind, mass loss, and extreme ultra-violet (XUV) flux on planetary atmospheres, how these effects scale with fundamental constants, and how this affects the likelihood of our observations. We determine the minimum atmospheric mass that can withstand erosion, maintain liquid surface water, and buffer diurnal temperature changes. We consider two plausible sources of Earth's atmosphere, as well as the notion that only initially slowly rotating stars are habitable, and find that all are equally compatible with the multiverse. We consider whether planetary magnetic fields are necessary for habitability, and find five boundaries in parameter space where magnetic fields are precluded. We find that if an Earth-like carbon-to-oxygen ratio is required for life, atmospheric effects do not have much of an impact on multiverse calculations. If significantly different carbon-to-oxygen ratios are compatible with life, magnetic fields must not be essential for life, and planet atmosphere must not scale with stellar nitrogen abundance, or else the multiverse would be ruled out to a high degree of confidence.
Introduction
The multiverse hypothesis, which posits that other universes with different laws of physics exist, is an intriguing idea in theoretical cosmology that has so far proven challenging to test [1]. This paper is part of a broader series aiming to rectify this, by generating a plethora of predictions within the multiverse framework regarding the nature of habitability [2][3][4][5][6][7]. The core of this process is the requirement that the multiverse can only be a consistent theory of cosmology if it predicts that our presence in this particular universe is not too improbable; one way of falsifying the multiverse is to find that it predicts that the vast majority of complex (multicellular) life exists in universes with features different from our own. Our contribution to this procedure lies in the recognition that the distribution of complex life, and so observers, throughout the multiverse, depends heavily on the assumptions we make about the nature of habitability. Thus, certain habitability conditions, that are otherwise quite widely discussed, are incompatible with the multiverse. If we ultimately find that the requirements for complex life are incompatible with the multiverse, we will be able to falsify the theory, to a calculable level of statistical significance. Conversely, if we ultimately determine that all currently unknown habitability conditions turn out to be in line with multiverse expectations, we will accrue a long list of supporting evidence for the theory.
It remains to check the compatibility of each habitability condition with the multiverse framework by systematically incorporating them into our calculation of the distribution of observers throughout the multiverse, and the subsequent calculation of the probability of our observations. To this end, we have organized this endeavor into several papers on the topic, each dealing with a loosely overarching theme. The current paper explores several aspects relating to properties of planetary atmospheres, and stellar activity. The two are tightly related, and considered by many to be essential for the maintenance of planetary habitability.
The compatibility of a habitability condition H with the multiverse is determined by the probability of observing our values of the fundamental constants. This is communicated through the Bayes factor, which is defined relative to the baseline case $H_0$ where atmospheric effects are not important, by $\bar{B}(H) = B(H)/B(H_0)$, where

$B(H) = P(\alpha|H)\, P(\beta|H)\, P(\gamma|H)\, P(\delta_u|H)\, P(\delta_d|H) \qquad (1)$

and $P(x|H) = \min\big(P(x < x_{obs}|H),\, P(x > x_{obs}|H)\big)$, for the fine structure constant $\alpha$, the electron-to-proton mass ratio $m_e/m_p = \beta$, the proton-to-Planck mass ratio $m_p/M_{pl} = \gamma$, the up quark-to-proton mass ratio $m_u/m_p = \delta_u$, and the down quark-to-proton mass ratio $m_d/m_p = \delta_d$. The probability of observing particular values of the constants is defined through the probability density function $p(x|H) \propto p_{prior}(x)\, H(x)$, as described in more detail in [2]. For the baseline habitability condition $H_0$, we take the most successful account of our observations we have considered, which is that complex life requires light from a relatively narrow spectral band for photosynthesis, and that the habitability of a planet is directly proportional to the amount of entropy it receives from incident starlight [2,4].
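Operationally, each factor in (1) can be estimated by Monte Carlo: draw constants from the prior, weight the samples by the habitability measure H, and take the smaller of the two tails around the observed value. A schematic sketch, in which the prior and the habitability weight are placeholders rather than the distributions used in this work:

```python
import numpy as np

def p_observed(x_samples, H_weights, x_obs):
    """P(x|H) = min(P(x < x_obs | H), P(x > x_obs | H)) from weighted prior samples."""
    w = H_weights / H_weights.sum()        # p(x|H) proportional to p_prior(x) * H(x)
    less = w[x_samples < x_obs].sum()
    return min(less, 1.0 - less)

# e.g. for the fine structure constant alpha (placeholder prior and habitability):
rng = np.random.default_rng(0)
alpha = rng.uniform(0.0, 0.1, 100_000)       # samples from an assumed flat prior
H = np.exp(-((alpha - 0.02) / 0.03) ** 2)    # stand-in habitability weight H(alpha)
print(p_observed(alpha, H, x_obs=0.0072973526))
```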
In Section 2, we discuss generalities of stellar properties, and how these vary with physical constants, deriving expressions that will be crucial for the rest of the paper. In Section 3, we discuss atmospheric loss processes, focusing in particular on extreme ultra-violet (XUV)-driven, energy-limited escape. Determining the importance of this process as constants vary necessitates determination of a great many factors, including stellar spin-down history, initial atmospheric mass, and mass required for surface water retention, which we detail within. Section 4 is dedicated to stellar wind stripping present on planets without an intrinsic magnetic field, and the conditions for planetary magnetic fields to arise.
We find that the significance of atmospheric properties depends on which additional habitability assumptions are made. If we take that an Earth-like carbon-to-oxygen ratio is required for life, as is commonly assumed, then the atmospheric conditions we consider do not strongly affect the probabilities we compute, and so they are neither favored nor disfavored by the multiverse. However, if we adopt the stance that life does not depend on the carbon-to-oxygen ratio, several atmospheric conditions do strongly affect the multiverse probabilities. Both the idea that atmospheric mass scales linearly with stellar nitrogen abundance and the idea that planetary magnetic fields are required for habitability cause the probability of our observations to significantly drop, and so both these conditions are incompatible with the multiverse hypothesis. The strategy to test the multiverse is then to check whether this prediction is correct; if life indeed does not depend on planetary carbon-to-oxygen ratio, but either of these other two conditions is found true, the multiverse will be ruled out to high significance.
2. How Do Stellar Properties Change in Other Universes?
Changes in stellar properties were among the first aspects to be investigated within a multiverse framework. Refs. [8-10] worked out how the properties of stars such as mass, lifetime, and luminosity change when constants vary. Ref. [11] discusses the photosynthetic potential of starlight. Much work has been done on how different nuclear stability thresholds affect stellar fusion: Refs. [12-14] investigated the effects of diproton stability. Refs. [15-18] discuss the effects of alpha burning. Refs. [19-21] investigate deuteron stability. Ref. [22] investigated the consequences of tritium stability. Refs. [23,24] discuss non-nuclear energy production pathways. Ref. [25] discusses the sizes of white dwarfs and neutron stars. Refs. [2,26] discuss entropy production as a key to habitability.
However, all of these previous studies have so far neglected some of the finer-grained stellar properties, which may nevertheless be just as important for determining the habitability of a planetary system. Among these are properties of stellar coronae, magnetic fields, sunspot fraction, stellar wind, rotation, and X-ray luminosity. In part, this neglect may be due to prudence on the previous authors' parts, as many of these aspects remain imperfectly understood theoretically, making extrapolation of their behaviors to different universes fraught with potentially misplaced certainty. However, much progress has been made in the understanding of many of these aspects in recent years, and we take advantage of these recent advances to establish a first attempt at determining how these properties may differ in other universes.
Stellar Properties
Expressions for stellar mass, radius, temperature, luminosity, and lifetime in terms of fundamental constants are all already well known (see, e.g., [27]), so we merely reproduce them here:
M ‹ " 122.4 λ M 3 pl m 2 p R ‹ " 108.6 λ 4{5 M pl α 2 m 2 p T ‹ " 0.014 λ 19{40 α 1{2 m 1{2 e m 3{4 p M 1{4 pl L ‹ " 9.7ˆ10´4 λ 7{2 m 2 e M pl α 2 m p t ‹ " 110.0 α 2 M 2 pl λ 5{2 m 2 e m p(2)
The symbol λ " M ‹ {M Ch is a dimensionless parameterization of stellar mass in terms of the Chandrasekhar mass M Ch " 122.4M 3 pl {m 2 P " 1.4M @ . In these and all following expressions, the functional dependence on constants is derived using physical arguments, and the coefficients are set to accurately reproduce the correct values for our Sun, for the observed values of the physical constants.
In addition, we will need the following expressions for the mass, density, orbital location, total incident power, and day length of an Earth-like planet, which is defined as both temperate (can maintain liquid surface water) and terrestrial (can retain heavy but not light atmospheric gases):
M terr " 92 α 3{2 m 3{4 e M 3 pl m 11{4 p ρ rock " 0.13 α 3 m 3 e m p a temp " 7.6 λ 7{4 m 1{2 p M 1{2 pl α 5 m 2 e Q solar " 5.3ˆ10´5 α 7 m 9{2 e M 2 pl m 9{2 p t day " 376 M pl α 3{2 m 3{2 e m 1{2 p(3)
Though there will be a certain tolerable range for each of these parameters, we specify to the Earth's values for our calculations. Additionally, note that the temperate requirement has dictated that the incident stellar power is evaluated at a_temp (≈1 AU for our values), making this quantity independent of stellar mass.
Speed of Stellar Wind
The escape velocity of a star is

v_esc = √(2 G M_⋆/R_⋆) = 0.30 λ^{1/10} α    (4)
For the Sun, this is 618 km/s. The speed of solar wind is around 400-1000 km/s, roughly the same order of magnitude. This results from the fact that the escaping wind is nonthermal, as particles that have enough energy to make it off the Sun usually have a surplus of the same order. This is larger than the thermal sound speed, which invariably depends on height. For the photosphere,
c s " d T ‹ m p " 0.12 λ 19{80 α 1{4 β 1{4 γ 1{8(5)
The sound speed of the corona is higher, as discussed below.
Scale Height
The scale height of a star is given by a competition between thermal and gravitational processes as
H ‹ " c 2 s g " 19.7 λ 43{40 m 1{2 e M 3{4 pl α 7{2 m 9{4 p(6)
This is 100-1000 km for the Sun, and sets the scale for many processes, including the granule size and typical magnetic flux tube length.
Stellar Magnetic Field
The magnetic field at the stellar surface is created by a highly complex and incompletely understood dynamo mechanism [28,29]. However, the details of the precise mechanism are unimportant for determining the overall field strength, which is set by equipartition of energy as [30]
B surf " b 4π P photosphere " 3.1ˆ10´5 λ 19{20 α m e m 3{2 p M 1{2 pl(7)
For the photosphere pressure, we use P_photosphere = T_⋆^4, as appropriate for an n = 3 polytrope, which describes stellar structure well [31]. The numerical value matches the observational quantity B_surf ≈ 2 G. This yields an estimate for the total field strength at the surface, which consists of both open field lines that contribute to the star's long range magnetic field, as well as highly complex field configurations that do not. The long range field is related to the total strength by
B ‹ " f open B surf ,
where f open is the fraction of field lines which are open. It is this factor that introduces rotational dependence to the stellar magnetic field.
Fraction of Open Field Lines
The fraction of stellar magnetic field lines which are "open" (i.e., extend to infinity, rather than form a closed loop) depends both on stellar rotation and temperature. This was postulated to depend on Rossby number in [32] as

f_open = 0.55 exp(−2.03 Ro)    (8)

where Rossby number is the ratio of rotation period to convective turnover time, Ro = P_rot/τ_conv. For the convective turnover time, we use the expression from [33]:
τ conv " τ 0 expˆ´T T conv˙( 9)
where we have neglected terms that cause shutoff for large temperatures. The turnover temperature is set by molecular absorption processes,
T conv " 0.27α 2 m 3{2 e {m 1{2
p . This is normalized to yield a Rossby number of 1. 96 and an open field line fraction of 0.01 for the Sun. The coefficient
τ 0 is set dimensionally to be τ 0 " R ‹ a m p {T ‹ " 1.9ˆ10 5 λ 9{16 M 9{8 pl {pα 9{4 m 1{4 e m 15{8
p q, and is normalized to be 246.4 days for our Sun. Expressing this in terms of fundamental parameters depends on the distribution of stellar rotation periods, which is discussed below.
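As a quick check of these normalizations (our own sketch; the solar rotation period of ≈25 days and the implied T/T_conv ≈ 2.96 are back-filled assumptions chosen to be consistent with the quoted Ro = 1.96):

```python
import math

def open_field_fraction(rossby):
    """Equation (8): fraction of open stellar field lines."""
    return 0.55 * math.exp(-2.03 * rossby)

def convective_turnover_days(T_over_Tsun, tau0_days=246.4):
    """Equation (9) with tau_0 fixed to its solar value.  The solar
    normalization T/T_conv = 2.96 follows from tau_conv = 25/1.96
    ~ 12.8 d and tau_0 = 246.4 d; mass dependence of tau_0 is ignored."""
    return tau0_days * math.exp(-2.96 * T_over_Tsun)

rossby_sun = 25.0 / convective_turnover_days(1.0)
print(rossby_sun)                        # ~1.96
print(open_field_fraction(rossby_sun))   # ~0.01, as quoted in the text
```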
Corona
The corona is the hotter, much less dense outer layer of a star. Its properties are continuous with the star's extended stellar wind region of influence, and is the source region of most of the variable activity leading to space weather.
Density of Corona
In the formalism of [34], the density of the corona (at the transition region) is determined by the equilibration of heating and cooling processes. The heating rate is given by Q_heat = ρ_corona v^3/λ_c, where λ_c is the granular scale, roughly set by the scale height H = c_s^2/g. The cooling rate for bremsstrahlung is Q_cool = n_e n_p σ_T ε v, where σ_T = 8π α^2/(3 m_e^2) is the Thomson cross section and ε = α m_e is the typical energy exchange [10]. These are equal when

ρ_corona = m_e m_p g/(σ_T ε) = 4.7×10^{-7} α m_e^2 m_p^3/(λ^{3/5} M_pl)    (10)
This is equal to 10^{-16} g/cm^3 for the Sun.
Temperature of Corona
The corona is about two orders of magnitude hotter than the photosphere, which has proven puzzling to explain theoretically for many years. Consequently, various competing theories have been developed to explain the anomalously high temperature [35]. Perhaps the most popular account is that of Alfvén wave heating, which posits that energy is transferred to the corona from the stellar interior by turbulent plasma oscillations. In the following, we only consider this theory, which gives the heat flux as [36]:
S " 1 2 ρ corona δv 2 v A(11)
Here δv^2 = T/m_p and v_A = B/√ρ_corona. This determines temperature through the diffusion equation S = −κ_th ∇T ≈ κ_th T/H_⋆. From [28], the thermal conductivity of a stellar plasma is

κ_th = (1.31π/log Λ) T^{5/2}/(e^4 m_e^{1/2})    (12)

where log Λ ≈ 5-20 is the Coulomb logarithm, which has mild parameter dependence, but can be ignored. This can be solved for T to yield

T_corona = (e^4 m_e^{1/2} ρ_corona^{1/2} B_surf/(m_p^2 g))^{2/3} = 4.6×10^{-3} λ^{5/6} m_e^{5/3}/(α^{1/3} m_p^{2/3})    (13)
Stellar Wind
Mass Loss Rate
According to [31], many analytic mass loss formulas have no strong theoretical justification. Whatever the underlying mechanism for solar wind, it is constrained by the continuity equation to obey
9 M " ρ corona v 4π R 2 ‹ " 7.0ˆ10´5 λ 11{10 m 2 e M pl α 2 m p(14)
This is normalized to yield 2×10^{-14} M_⊙/yr for the Sun. With this, we may ponder whether in some universes the stellar wind is strong enough to deplete stellar material before the available nuclear energy is exhausted; in such universes, type II supernovae would not occur, with stars instead ending their lives having blown off material to the point where fusion ceases. We find t_⋆ Ṁ/M_⋆ = 0.16 α^5 β^{1/4}/(λ^{3/2} γ) ≈ 3×10^6, so that if α were a factor of 20 lower, this would indeed be the case. However, this may not preclude the distribution of heavy elements into the interstellar medium, if enough reach the wind-launch site. More work is needed to determine whether this mechanism can be at play, whether heavy elements collect in the stellar core, or whether the strong wind effectively extinguishes the star before any heavy elements are created. In any case, including this boundary in parameter space does not appreciably affect the probabilities we compute.
Alfvén Radius
The Alfvén radius is the point at which an appreciable azimuthal velocity component develops. This is set by
B(r)^2/(4π) = ρ(r) v_r^2    (15)
Throughout we take the Sun's Alfvén radius to be R_A = 24 R_⊙, though it can vary by a factor of 2 throughout the solar cycle [37]. By the continuity equation, the quantity ρ v_r ∝ 1/r^2. The radial dependence of v_r can be found using Parker's model of solar wind, which gives
(1/v_r)(v_r^2 − c_s^2) dv_r/dr = 2c_s^2/r − G M_⋆/r^2    (16)
If we define the sonic radius R_s = G M_⋆/(2c_s^2), then for r ≫ R_s, this gives v_r → 2c_s log(r/R_s)^{1/2} [38], though to first approximation the logarithmic dependence can be neglected.
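For reference, the implicit Parker relation can be solved numerically; the bisection sketch below (ours) confirms the quoted asymptotic behavior v_r → 2c_s log(r/R_s)^{1/2}:

```python
import math

def parker_wind_speed(x):
    """Solve Parker's isothermal wind relation for w = (v/c_s)^2 at
    x = r/R_s > 1 (supersonic branch):  w - ln w = 4 ln x + 4/x - 3."""
    rhs = 4 * math.log(x) + 4 / x - 3
    lo, hi = 1.0, 4 * math.log(x) + 10.0   # w - ln w is increasing for w > 1
    for _ in range(80):                     # plain bisection
        mid = 0.5 * (lo + hi)
        if mid - math.log(mid) < rhs:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo)

for x in (2.0, 10.0, 100.0):
    print(x, parker_wind_speed(x), 2 * math.sqrt(math.log(x)))
# The second column approaches the asymptote in the third at large x.
```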
If B is primarily dipolar, B(r) = B_⋆ (R_⋆/r)^3, and we find

R_A = (f_open^2 T_⋆^4/(ρ_corona c_s^2))^{1/4} R_⋆ = 1.1×10^4 λ^{11/8} f_open^{1/2} M_pl/(α^{9/4} m_p^2)    (17)
For more generic magnetic field profiles B ∝ r^{−q}, the fourth root is replaced by 1/(2q − 2).
X-ray Luminosity
A star's X-ray luminosity, which is an important driver of planetary atmospheric loss, is greatly enhanced with respect to the thermal contribution by dynamo processes. As such, X-ray luminosity is found to correlate well with both magnetic activity and rotation speed for slowly rotating stars [39]. For stars with rotation periods less than a few days, however, the X-ray luminosity is found to saturate to about 10^{-3} of the bolometric luminosity. The origin of this is not well understood, but could be due either to the saturation of surface magnetic flux, or internal dynamo [40], representing a qualitatively different regime of energy transport. These two regimes can be encapsulated with the following expression

L_X = (1/8) B_⋆^2 R_⋆^2 min(v_conv, v_rot)    (18)
which reproduces the linear rotation-activity relation between X-ray luminosity and magnetic flux found in [41]. Here, we have defined a convective speed in terms of the convective turnover time in Equation (9) as v_conv = R_⋆/τ_conv.
Rotation
Since stellar activity depends on rotation rate, and stellar rotation decreases over time, the majority of a planet's atmospheric loss may occur during the initial phase of stellar evolution. Here, we derive expressions for initial stellar rotation as well as spindown rate.
Initial Stellar Rotation
Stars are observed to have a spread of rotation periods within the span of several days, with periods that increase as they age [42]. At formation time, one may expect that stars inherit their rotation from their collapsed dust cloud, but an order of magnitude estimate reveals that the angular momentum of the dust cloud vastly exceeds stellar angular momentum [43]. Indeed, a star possessing that much angular momentum would exceed the critical breakup velocity, and would quickly jettison its material. Instead, the star radiates angular momentum through its surrounding disk until it drops below the breakup speed, and can coalesce [42]. This process results in initial stellar rotation frequencies being close to their breakup velocity, as observed in [44]:
Ω 0 " d 2 3 G M ‹ R 3 ‹ " 1.6ˆ10´3 α 3 m 2 p λ 7{10 M pl(19)
Stellar Spindown Time
Stars lose angular momentum throughout their evolution via stellar wind. While a star's angular momentum is given by J = M R_⋆^2 Ω, to estimate angular momentum loss we must keep in mind that the stellar wind travels radially outward until the Alfvén radius, and so angular momentum loss is given by J̇ = Ṁ R_A^2 Ω [45]. This increased lever arm greatly enhances spindown, and also introduces extra rotation dependence, as the Alfvén radius depends on spin. A linear dependence R_A ∝ Ω leads to a cubic evolution equation for Ω, as first discussed in [46]. Additionally, a qualitative shift in spindown behavior empirically occurs when the rotation frequency exceeds a critical value, akin to the convective turnover time given in Equation (18). This leads to the following equation governing the evolution of rotation [42]:

Ω̇ = −(B_⋆^2 R_⋆^2/(M_⋆ v)) Ω min(Ω^2 τ_conv^2, 1)    (20)
This also sets the spindown time as
t brake " M ‹ 9 M R 2 ‹ R 2 A " .21 α 5{2 M 2 pl λ 5{4 f open m 2 e m p(21)
For stars rotating more rapidly than the convective turnover time, spindown is set by the star's convective churn, rather than rotation. Below this, the evolution Ω̇ ∝ Ω^3 leads to the well established Skumanich law, P_rot ∝ √t [47]. For fast rotators, the decay is instead exponential. These are all the properties of stars we will need to model our habitability effects in the sections below.
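Equation (20) is also easy to integrate directly; the sketch below (ours, in arbitrary units with the braking coefficient set to one) reproduces both regimes: exponential decay for fast rotators, crossing over to the Skumanich P_rot ∝ √t behavior at late times:

```python
import numpy as np

def spin_down(omega0=50.0, tau_conv=1.0, k=1.0, t_end=200.0, n=200_000):
    """Forward-Euler integration of d(omega)/dt =
    -k * omega * min((omega*tau_conv)^2, 1), i.e. Equation (20) in
    arbitrary units."""
    dt = t_end / n
    omega = omega0
    ts, oms = [], []
    for i in range(n):
        omega += -k * omega * min((omega * tau_conv) ** 2, 1.0) * dt
        ts.append((i + 1) * dt)
        oms.append(omega)
    return np.array(ts), np.array(oms)

t, om = spin_down()
# Late times: omega ~ 1/sqrt(2 k t), i.e. P_rot ~ sqrt(t) (Skumanich law).
late = t > 50
print(np.allclose(om[late], 1 / np.sqrt(2 * t[late]), rtol=0.05))  # True
```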
3. How Do Atmospheric Properties Differ in Other Universes?
We now turn our attention to planetary atmospheres, and whether their character is substantially different in other universes. In particular, we ask what physics determines that the Earth's atmospheric mass is six orders of magnitude less than the planet's mass, how this compares to the minimum needed for several habitability considerations, and whether the expected atmospheric mass is lower than these thresholds for different values of the fundamental constants.
At first glance, it may seem strange to attempt to explain atmospheric mass fraction in terms of fundamental constants. After all, the solar system alone exhibits an enormous diversity of atmospheric mass fractions amongst its planets, from almost zero around small inner rocky bodies to nearly unity for the gas giants. Indeed, atmospheric mass seems to depend on a great number of variables: planetary mass, interior and surface chemistry, orbit, evolution, flux, and the presence or absence of life [48]. Even Venus, though remarkably similar to Earth in orbit and mass, has an atmosphere 90 times Earth's. However, closer inspection reveals hidden regularity; Venus's atmospheric nitrogen content is only 3-4 times that of Earth's, placing it at the same order of magnitude [49]. Its carbon dioxide content, which comprises the bulk of the atmosphere, is the same order of magnitude as that found dissolved in Earth's oceans and compressed into sedimentary rock [50]. Even Venus's initial water content is estimated to have been similar to Earth's [50] (though recent work indicates that even if its initial water content were similar, it may not have ever been able to condense from a steam atmosphere to form an ocean [51]). Evidently, this diversity stems from the different phases each species can undergo, rather than the primordial abundance of each element, giving hope that the overall mass fraction may be understood by processes operating in the early solar system, as well as galactic element abundances. Furthermore, if this is the case, we have hope of extrapolating these values to other universes.
In the following, we focus on nitrogen, as the only gas which is noncondensible under temperate conditions, and present in appreciable quantities. Its presence is essential for stability of liquid water on Earth's surface [52]. It was estimated in [53] that the nitrogen contained in the Earth's mantle is between 3-10 times that of Earth's atmospheric nitrogen, a ratio that is certainly affected by the presence of other species, but is likely to hold as a rough order of magnitude estimate under a range of conditions [49]. Earth's atmospheric nitrogen has remained constant to within a factor of two over the past 3 Gyr, as evidenced by analyzing raindrop imprint size [54] and the isotopic composition of quartz [55].
In the following, we consider two explanations for the magnitude of Earth's nitrogen abundance, corresponding to two different plausible sources: late accretion by chondrites, and initially, as dissolved material in Earth's original building blocks. Each of these hypotheses has different implications for the amounts of nitrogen on planets elsewhere in our universe, as well as throughout the multiverse. Additionally, it is an open question how planetary nitrogen abundance scales with initial stellar system nitrogen abundance, which has important implications for the multiverse, as we happen to be very close to a boundary beyond which nitrogen abundance is reduced by a factor of 270. At the two extremes, the dependence may be linear, if the nitrogen content of solar system bodies was not close to their carrying capacity, or independent, if the bodies were saturated. The dependence probably lies somewhere between these two extremes, but we report how adopting each assumption alters our multiverse probabilities, which serves to bracket the upper and lower limits for our calculations.
We then consider three atmospheric mass thresholds that are plausibly related to habitability. The first is the amount of atmosphere that can be stripped away by stellar flux. The second is related to the pressure necessary to maintain liquid surface water. The third is the mass needed to buffer diurnal temperature changes. Finally, we consider the possibility that only initially slowly rotating stars in our universe are capable of retaining their atmospheres, and assess the compatibility of this hypothesis with the multiverse.
Possible Sources of Atmosphere
The fact that Earth possesses an atmosphere containing volatile constituents is somewhat of a mystery, given that the conditions during Earth's formation were much hotter than their condensation temperatures. Naively, this would result in inner planets that are almost completely comprised of refractory elements, which is manifestly not the case. In the following, we consider two leading theories for the origin of Earth's nitrogen atmosphere: delivery during late accretion from outer system bodies, and as a result of initial accretion from nitrogen dissolved in Earth's original building blocks.
Initial Atmosphere Delivered during Accretion
The classic account for Earth's volatile budget is from planetesimals initially situated outside the solar system's ice line, where temperatures were below the condensation point of volatile species. It has been estimated that up to 7.5 atmospheric masses could have been delivered by carbonaceous chondrites after the main phase of planet formation was completed [56]. This account has the simplicity of explaining the origin of Earth's atmosphere and ocean by a single common source. Additionally, it can readily explain the hierarchy of why Earth's ocean is ≈100 times more massive than the atmosphere, as [57] demonstrated that the H2O/N2 impact degassing ratio is ≈100 for a range of different chondrites. Finally, we would like to stress that in this scenario, final atmospheric mass will be highly stochastic, as the material delivered through late accretion is dominated by few large bodies [58]. Thus, while we compute the expected value, it should be kept in mind that this scenario yields a distribution of atmospheric mass ratios.
In [7], we derive the planetary ocean mass fraction delivered via planetesimal accretion during planet formation in terms of the amount of material delivered during late accretion. In this scenario, the atmospheric volatiles are delivered in the same manner. Therefore, we may posit the atmospheric mass fraction to simply be
f N " 0.011 κ λ 21{10 γ 1{3 α 11{2 β 25{12(22)
For details on how this expression was obtained, we refer the reader to [7].
Atmospheric Mass as a Result of Accretion by N-Rich Bodies
Here, we follow [59] by considering that Earth's nitrogen was delivered during accretion in the form of dissolved N inside rock and metal. We may then derive the total amount of resulting nitrogen as a function of body mass, with the presumption that only nitrogen in the interior of these planetesimals will be incorporated into the planet's final budget.
The initial nitrogen fraction of a planetesimal is f_N = m_N/m_pp, where m_pp is the mass of the planetesimal. The resultant nitrogen budget is obtained through the magma ocean and core as

f̂_N = (M_N^MO + M_N^core)/(M_MO + M_core) = ((1 + Z D_N)/(1 + Z)) C_N^MO    (23)

where Z = M_core/M_MO, C_N^MO = M_N^MO/M_MO, and D_N = C_N^core/C_N^MO.
For the fraction of nitrogen dissolved in the magma ocean, we use [60]:
C MO N " p N p 1`f O´3 {4 2ˆp N p 2˙1 {2(24)
where p_1 and p_2 are coefficients, taken here to scale as p_i ∝ Ry^4, with Ry the Rydberg constant that dictates the electronic energy scale. The quantity f_O2 is oxygen fugacity, and will depend on the primordial abundances of the two elements. The partial pressure can be rewritten in terms of atmospheric nitrogen mass as
p N " M atm N g A " pM tot N´M MO N´M core N q g A " M pp g A´f N´p 1´f N qf N¯( 25)
This can then be used to find an equation determining f̂_N:

f̂_N = k_1 (f_N − f̂_N) + √(k_2 (f_N − f̂_N))    (26)

where for cleanliness we have defined k_1 = ζ τ/p_1, k_2 = ζ^2 f_O2^{−3/2} τ/p_2, ζ = (1 + Z D_N)/(1 + Z), and τ = g M_pp (1 − f_N)/A. This can be solved for f̂_N to find

f̂_N = (2 f_N k_1 (1 + k_1) − k_2 + √(4 f_N (1 + k_1) k_2 + k_2^2))/(2 (1 + k_1)^2)    (27)
We find that for large mass bodies, k_1, k_2 → ∞ and f̂_N → f_N, so that planetary nitrogen abundance matches the primordial value. In the limit k_2 → 0, this expression simplifies significantly to f̂_N → f_N k_1/(1 + k_1). This expression allows us to derive the final nitrogen abundance as a function of planetesimal mass, by noting that g/A = G ρ_rock^{4/3}/M^{1/3}, D_N = exp((b + c P)/T), and T = G M/R = G M^{2/3} ρ_rock^{1/3}. To determine the planetary nitrogen abundance fraction that results from original accretion, we need the typical planetesimal size. For this, we use the isolation mass M_iso = 1.31×10^8 κ^{3/2} λ^{25/8} m_p^{7/4} M_pl^{9/4}/(α^{15/2} m_e^3) [3]. In the limit that k_1 ≪ 1 and neglecting the dependence on planetary mass of D_N, this gives

f̂_N = 7.6×10^{-7} κ λ^{25/12} γ^{1/2}/(α^9 β^2)    (28)
Interestingly, the dependence on stellar mass of this quantity is practically indistinguishable from that of the alternate nitrogen source, Equation (22).
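As a sanity check on the algebra, Equation (27) can be verified against the implicit Equation (26) numerically (our own sketch; the parameter values are arbitrary):

```python
import math

def fhat_closed_form(f_N, k1, k2):
    """Equation (27): closed-form solution of Equation (26)."""
    disc = math.sqrt(4 * f_N * (1 + k1) * k2 + k2 ** 2)
    return (2 * f_N * k1 * (1 + k1) - k2 + disc) / (2 * (1 + k1) ** 2)

def residual(fhat, f_N, k1, k2):
    """Equation (26): fhat = k1 (f_N - fhat) + sqrt(k2 (f_N - fhat))."""
    return fhat - k1 * (f_N - fhat) - math.sqrt(k2 * (f_N - fhat))

for k1, k2 in [(0.1, 0.0), (2.0, 0.5), (100.0, 100.0)]:
    fh = fhat_closed_form(1e-3, k1, k2)
    print(k1, k2, fh, abs(residual(fh, 1e-3, k1, k2)) < 1e-9)
# Limits: k2 -> 0 gives fhat -> f_N k1/(1+k1); k1, k2 -> inf gives fhat -> f_N.
```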
Which Atmospheric Thresholds Are Important for Habitability?
Earth's atmosphere is quite comfortably above any catastrophic thresholds, being about two orders of magnitude larger than needed to prevent total atmospheric escape, maintain liquid surface water, and buffer diurnal temperature changes. However, given the exponential dependence on constants of some of these conditions, we investigate the influence each exerts on our multiverse calculations.
How Much Atmospheric Loss Occurs in Other Universes?
In this paper, we restrict our attention to terrestrial planets, which are defined such that light gases such as hydrogen and helium, but not heavy gases such as water, oxygen and nitrogen, undergo Jeans escape. For these planets, the dominant form of atmospheric escape is driven by stellar XUV light, and is in the energy limited regime (for recent reviews, see [61,62]). The mass loss rate for this type of escape is given by equating the energy of UV light absorbed by the atmosphere with the energy of atmospheric particles ejected at the escape speed [63],
Ṁ_XUV = ε R_⊕^3 L_X/(a_temp^2 G M_⊕)    (29)

Here ε is an unimportant efficiency factor. This is independent of atmospheric mass, being limited by the amount of energy imparted in the upper atmosphere rather than the amount of material present. In [64] it was estimated that an XUV flux greater than 60 times Earth's value would be needed to induce a catastrophic mass loss rate of 1.8×10^9 g/s, capable of eroding the entire atmosphere. For reference, M dwarfs and young K dwarfs are subjected to 100-400 times Earth's XUV flux [85].
The total atmospheric mass loss through X-ray flux may be found through Equation (18), taking rotation evolution into account:
∆M XUV " R 3 C a 2 temp G M C B 2 ‹ R 3 ‹ 8 ż t ‹ 0 dt minpΩ conv , Ωptqq(30)
Using the evolution dictated by Equation (20) and in the limit t_⋆ ≫ t_brake, this integral can be performed to find

ΔM_XUV ≈ (ε B_⋆^2 R_⋆^3/(a_temp^2 G ρ_rock)) min(Ω_conv, Ω_0) √(t_⋆ t_brake)    (31)
The condition t ‹ " t brake , which holds by three orders of magnitude in our universe, is not necessarily generic; we compute t brake {t ‹ " 0.0019λ 5{4 α 1{2 { f open , which can be much larger than 1 if no stellar magnetic field lines are open for certain parameters. However, including a more complete expression does not affect the calculated probabilities appreciably, while considerably complicating the formulae. In Figure 1, we display the atmospheric mass loss for temperate, terrestrial planets as a function of stellar mass, for three different values of the fine structure constant. The difference resulting from adopting the two alternate origin scenarios is also displayed, but is seen to be minimal. This defines some stellar mass below which more than the initial atmosphere is lost through XUV irradiation, which depends on fundamental constants, and can be larger than the solar mass (λ " 1{1.8) in some regions of parameter space.
How Much Atmosphere Is Needed to Maintain Liquid Surface Water?
Liquid surface water can exist only when atmospheric pressure exceeds that at the triple point, where the three low energy phases of water coexist in equilibrium. The location of the triple point can be determined by noting that the solid-liquid transition is almost independent of pressure, and occurs at a temperature set by the vibrational molecular energy T_freeze = α^{1/2}/(m_p^{1/2} r_{H2O}^{3/2}).
The liquid-gas transition is given by the Clausius-Clapeyron equation as P(T) = P_0 e^{−L/T}, where the latent heat of evaporation is L = α/r_{H2O}. The coefficient P_0 can be found by enforcing that the phase curve terminates at the observed critical point of water of 647 K and 22.1 MPa. Though an imperfect description, the van der Waals equation of state may be used to provide a theoretical expectation for the location of the critical point in terms of the molecular radius r and energy ε, yielding T_crit = 8ε/27 and P_crit = ε/(18π r^3) [65]. Normalizing to fit our observed values, this yields the pressure at the triple point to be

P_triple = (ε/(18π r^3)) e^{27L/(8ε) − L/T_freeze} = 1.6×10^{-3} α^5 m_e^4 e^{−0.424/√β}    (32)
This can then be related to the minimal atmospheric mass capable of supporting liquid water through M_min = 4π R_⊕^2 P_triple/g, giving

M_min = 0.87 α^{3/2} m_e^{1/4} M_pl^3/m_p^{9/4} e^{−0.424/√β}    (33)
This is about 0.006 M_atm for our values.
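The numbers entering Equation (32) can be checked in SI units: anchoring the Clausius-Clapeyron curve at the critical point and extrapolating down to the freezing temperature recovers the correct order of magnitude for the triple point pressure (a rough check, ours; the constant-latent-heat assumption accounts for the residual discrepancy with the measured 611 Pa):

```python
import math

# Clausius-Clapeyron with constant latent heat, anchored at the critical
# point of water (647 K, 22.1 MPa), evaluated at freezing (273.16 K).
L_per_molecule = 0.42   # eV, approximate latent heat of vaporization
k_B = 8.617e-5          # eV/K
T_crit, P_crit = 647.0, 22.1e6   # K, Pa
T_triple = 273.16                # K

P_triple = P_crit * math.exp((L_per_molecule / k_B) * (1/T_crit - 1/T_triple))
print(P_triple)   # ~7e2 Pa; the measured value is 611 Pa
```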
How Much Atmosphere Is Needed to Buffer Diurnal Temperature Changes?
Earth's atmosphere retains substantial heat, which buffers the day-night temperature difference from the otherwise extreme variations that would occur, such as the day-night temperature differences on the Moon and Mars, which can reach hundreds of degrees Kelvin. This occurs because the relaxation time of Earth's atmosphere, estimated as the ratio of thermal energy over the power supplied, t_relax = E_therm/Q_solar, is about 100 days. This gives

t_relax = T M_atm a_temp^2/(m_p L_⋆ R_⊕^2)    (34)
For small enough atmospheric mass, this is less than half a day, and the atmosphere does not play a significant role in averaging out daily variations of stellar flux. This occurs at the threshold
M min " 1.9 α 7{2 m 3{2 e M 3 pl m 7{2 p(35)
The exact mass depends strongly on water content, as evidenced by the extreme temperature differences present in Earth deserts, but we do not consider this here.
Are Only Slowly Rotating Stars Habitable?
There is evidence from noble gas isotopes [66], the Moon [67], and Venus [68] that the Sun began as an anomalously slow rotator. However, it is not currently possible to determine precisely how slow, and many studies only differentiate between stars in the lower 25 percentile. If true, this suggests a selection effect: ordinarily rotating stars may be incapable of hosting life, presumably due to high early atmospheric loss.
To determine the compatibility of this habitability hypothesis with the multiverse, we include the fraction of slowly rotating stars

f_slow = min(M_atm/ΔM_XUV, 1)    (36)

This treats the initial rotation distribution as uniform up to the natural value Ω_0, which is loosely consistent with observations of stellar populations [69]. To account for the observation that the Sun appears to be in the lower 25 percentile, we rescale the fraction of slow rotators (of Sun-like stars) in our universe to be 1/4.
Is Atmospheric Stability a Factor Determining Our Presence in This Universe?
We can now test the various atmospheric habitability thresholds, on the basis of their compatibility with our observations within the multiverse. To this end, we test the following four thresholds: loss due to XUV radiation, the minimal mass for stable liquid surface water, the minimal mass to buffer diurnal temperature changes, and the notion that only stars which are slowly rotating are habitable. In addition, we check both the early and late origin scenarios for our atmosphere, both an independent and linearly dependent abundance as a function of stellar nitrogen content, and either restricting to a narrow range of Earth-like carbon-to-oxygen values, or not. In Table 1, we display the various Bayes factors for each of these combinations.
We find that when restricting consideration to a narrow range of carbon-to-oxygen ratios, the Bayes factors for the various habitability criteria do not vary significantly. When considering the carbon-to-oxygen ratio to not play a factor in habitability, however, several of the habitability criteria are severely disfavored in the multiverse context. The disfavored criteria all have to do with the assumption that planetary nitrogen content scales linearly with stellar system nitrogen abundance, and does not depend on the atmospheric source or threshold mass. This is a consequence of our universe being situated very close to a precipitous threshold where nitrogen-14 is unstable [6], which affects the probabilities if the carbon-to-oxygen ratio is unimportant but does not if restricted to the subspace where the carbon-to-oxygen ratio is close to our observed value. We note that in [6] we found additional reasons to favor a restricted range of carbon-to-oxygen ratio based on the observed Hoyle energy value and organic to rock ratio in our universe. Apart from this insight, no strong preference can be given to the different atmospheric origin scenarios, threshold masses, or expectation on whether only slow rotators are habitable. Our conclusion is that it is certainly consistent that atmospheric mass may play a large role in the habitability of our universe, but it does not appear to be a driving factor in determining our particular observations. Table 1. Bayes factors for various atmospheric habitability criteria relative to the baseline case where atmosphere mass is unimportant for habitability. Small values indicate that a set of assumptions is disfavored to a corresponding degree in the multiverse framework. The cases considered are that atmosphere must be large enough to withstand XUV loss, be above the triple point of water, can buffer diurnal temperature changes, and that only slowly rotating stars are habitable. The late delivery vs. initial columns consider both potential origins of the atmosphere, and the N dep columns consider that planetary nitrogen abundance scales linearly with stellar system abundance.
4. Are Planetary Magnetic Fields Generic?
A planet's magnetic field is purported to be essential for habitability, as it shields against charged particles, preventing stellar wind stripping (see, e.g., [70]). However, it must be pointed out that magnetic fields also provide several avenues for ion escape [71], which may well represent the dominant form of atmospheric loss on Earth today [72]. Indeed, Venus has managed to retain its atmosphere without an intrinsic (as opposed to induced by the Sun's) magnetic field, despite being closer to the Sun than Earth.
Given the uncertain importance of planetary magnetic fields for habitability, we ask whether their properties change significantly in other universes, and thus whether demanding their presence influences the probabilities of any of our observables. We focus on five relevant aspects required for a magnetic field to be both present and protective: (i) The core's magnetic Reynolds number is large enough to support a dynamo. (ii) The magnetosphere must extend beyond the atmosphere, as otherwise it will have little effect on loss properties. (iii) The star's temperate zone must be outside its Alfvén zone, as otherwise the planetary and stellar magnetic field lines connect, forming a direct line of transport which dumps stellar wind onto the planet's poles, rather than act as a shield. (iv) The development of a magnetic field requires a metal core, placing limits on the oxygen content of the planet. (v) The magnetic field is generated through a dynamo, and so requires the core to remain at least partly liquid for an appreciable duration. If planetary magnetic fields are essential for habitability, all of these conditions must be met.
When Is the Magnetic Reynolds Number Large Enough to Induce a Dynamo?
Both theory and simulations of the Earth's core indicate that a dynamo will only exist when advection of the magnetic field dominates over diffusion [73]. This can be summarized as a condition on the magnetic Reynolds number R_a = v_core L/η > 10-100 (Earth's magnetic Reynolds number is about 10^3) [74]. This condition can be used to place constraints on the fundamental constants, using the length scale L = R_core, and magnetic diffusivity η = 1/(4π σ_electric), with σ_electric the electrical conductivity, which is related to thermal conductivity through the Wiedemann-Franz law [75]:

κ_heat/σ_electric = (π^2/3)(T/e^2)    (37)
In [4], we found an expression for the thermal diffusivity in terms of fundamental constants as κ̄_heat = 2/(m_e^{1/4} m_p^{3/4}), which is related to thermal conductivity through κ̄_heat = κ_heat/(c_p ρ_rock). The core convective speed can be obtained from mixing length theory, v_core = (L q/(ρ_rock H_T))^{1/3} [76]. Using our expression for heat flux q = 0.58 α^{11/2} m_e^5/M_pl from [4] and the generic expression for scale height H_T = c_s^2/g, we find

R_a = 0.33 α^{7/3} β^{7/6}/γ^{2/3}    (38)
These scalings are not significantly altered if we instead use the magnetostrophic estimate for the core convection velocity, also from [76].
Is the Magnetosphere Always Larger Than the Atmosphere?
In order for a planetary magnetic field to be an effective shield against stellar wind, it must extend beyond the atmosphere. The size of the magnetosphere can be estimated as the point at which the magnetic pressure is equal to the stellar wind pressure, yielding for a dipole field [77] the standoff distance:
r magnetosphere "˜2 B 2 0 ρ sw v 2 sw¸1 {6(39)
To evaluate this, we use the expressions for density and speed of solar wind from Section 2. It remains to estimate B 0 , the strength of the magnetic field at the planet's surface.
There are an inordinate number of proposals for how planetary magnetic field strength depends on planetary characteristics, as reviewed in [78]. We use the Elsasser number rule, which posits that the Lorentz force and Coriolis force are roughly equal, and results in

B_core = √(2 ρ_rock Ω/σ_electric)    (40)
Here Ω is the planet's angular rotation speed. For terrestrial planets, the atmospheric scale height is much smaller than the planetary radius, and so it suffices to compare the magnetosphere size to the latter. Using our expressions above, and defining Y = (R_core/R_planet)^3, we find this to be

r_magnetosphere/R_planet = 3.1 λ^{23/60} γ^{1/6} Y^{1/6}/(α^{13/12} β^{11/24})    (41)
This ratio evaluates to 10 for our values of the constants and Earth's core radius. The dependence on fundamental constants is rather weak, and so it takes a drastic change to alter the conclusion that the magnetosphere extends beyond the atmosphere.
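The benchmark value of 10 is straightforward to reproduce in SI units (a back-of-envelope sketch, ours, using typical solar wind conditions at 1 AU):

```python
import math

mu0 = 4e-7 * math.pi    # vacuum permeability, T m / A
m_p = 1.673e-27         # proton mass, kg

B0 = 3.1e-5             # Earth's equatorial surface field, T
n_sw = 5e6              # solar wind number density at 1 AU, m^-3
v_sw = 4e5              # solar wind speed, m/s

# Pressure balance for a dipole, Equation (39): standoff where the
# compressed magnetic pressure matches the wind ram pressure.
ratio = (2 * B0**2 / (mu0 * n_sw * m_p * v_sw**2)) ** (1/6)
print(ratio)   # ~10 Earth radii, in line with the text
```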
When Is the Temperate Zone Outside the Alfvén Zone?
If a planet is orbiting inside its host star's Alfvén zone, its intrinsic magnetic field lines will connect to the star's, which will result in a highly increased level of bombardment by charged particles. This is expected to be the case for the inner planets in the Trappist-1 system, for instance, based on simulations [79]. Given our expressions for both the temperate zone and Alfvén radius from Section 2, it is straightforward to derive their ratio:

a_temp/R_A = 7.1×10^{-4} λ^{3/8} γ^{1/2}/(f_open^{1/2} α^{11/4} β^2)    (42)
For fixed physical constants, this defines a smallest stellar mass for which this condition holds. In our universe this is about 0.1 M_⊙, in accordance with the expectation that Proxima Centauri b, which orbits a 0.12 M_⊙ star at 0.05 AU, is outside the Alfvén zone for the most likely values inferred for its orbital parameters [80]. Our treatment ignores the nonsphericity and nonstationarity of the Alfvén zone and potential planetary eccentricity, which may cause the orbit to periodically dip into the Alfvén zone throughout the year.
Note that the dependence on starspot fraction is of crucial importance in this expression, as otherwise this threshold stellar mass would be smaller than the smallest stellar mass. As such, this condition is loosely coincident with the onset of a full stellar convection zone.
When Does a Core Stratify Geochemically?
In [81], it was pointed out that if a planet's mantle oxygen content is too high, the iron will all be in the form of iron oxide (FeO), and will not differentiate to form a core. They find that the quantity R_1 = (Mg + 2Si + Fe)/O must exceed 1 in order for a core to develop, Earth's value of this ratio being 1.23. Interestingly, about 4% of the Earth's oxygen is left over after binding with magnesium and silicon, so that only 86% of Earth's iron makes it into the core. This raises the additional possibility that if a planet's oxygen is depleted before its magnesium and silicon are consumed, no iron will be left in the mantle or crust. This could have an additional adverse effect on habitability, which would restrict the allowable oxygen content required for habitability to a narrow range, but we leave exploration of this for future work. It was argued in [82] that planets with core mass fraction below ≈0.24 would have much higher rates of volatile subduction, due to more extensive volcanism, thicker crust, and stabilized amphibole group. This places a potential lower limit to the allowable core mass for habitability.
Though the core development condition depends on the ratio R_1 above, this depends on the abundances of both the alpha elements and iron, which are set by two different supernova processes, and so will scale differently with fundamental constants. In [6], we found the dependence of the alpha element abundances (C, O, Mg, and Si) on the Hoyle resonance energy E_R = 0.626 (m_u + m_d) + (0.58 α − 0.0042) m_p, with m_u, m_d the masses of the up and down quarks, as found in [18]. We also found an expression for the metal to rock ratio, from which we may determine the quantity

R_2 = Fe/(Mg + Si + O) = 5.0×10^{-3} β^{0.82} γ^{0.54} κ^{0.81}/α^{0.56}    (43)
For Earth, this value is R_2 = 0.163. This assumes a linear relationship between stellar and planetary metal to rock ratios, which indeed is found [83].
In Figure 2, we plot the oxygenation ratio for various values of the metal ratio R_2. It can be seen that with R_2 held fixed at the observed value, the oxygenation ratio is less than 1 for ΔE_R > 3.6 keV. While we are rather close to a potential anthropic boundary with metal fraction held fixed, allowing it to vary relaxes this closeness. In fact, there is a silicon and magnesium rich region of parameter space for larger values of ΔE_R which also satisfy the R_1 > 1 requirement. Above a metal fraction of 0.62, these two branches merge, and planets will always contain enough iron to form a core. As discussed in [6], such metal rich planets may be unsuitable for life for reasons other than the possession of a magnetic field, but we found that placing an upper bound on the metal content does not appreciably affect the probabilities we compute, and we do not concern ourselves with such a boundary here.
What Sets the Core Solidification Timescale?
The presence of a planetary dynamo requires a liquid convective core, which cannot be sustained indefinitely. As heat leaks from the planet, an initially liquid core will cool and solidify. If the solidification timescale is too rapid, any magnetic field will cease before life can take hold on a planet, and so one important consideration is the longevity of a liquid core.
First, we must establish that terrestrial planets possess enough heat for their cores to initially be liquid. This follows almost from our definition of a terrestrial planet, which demands that the gravitational binding energy is of the same order of magnitude as molecular binding energies, so that chemical reactions may take place on the planet's surface. Given the increased temperature and pressure of the planetary interior, the melting point will naturally be exceeded in the core.
The solidification timescale can be simply estimated as t_solid = E_core/Q_heat, where E_core is the energy required to be leached from the core for solidification to take place, and Q_heat is the total core power. A proper estimate of E_core would take into account the difference between the gravitational binding energy and the energy that would result in solidification; thankfully, however, these two energies are similar in magnitude, another consequence of restricting our attention to terrestrial planets. So, we may approximate the total energy in the core as E_core = G M_core^2/R_core. By the same token, Q_heat has components due to formation and crystallization, which are roughly equal. In [4], we found that Q_heat = G M_planet ρ_rock κ̄_heat based on dimensional analysis. There, we also consider radiogenic heat and time dependence in more detail, which we neglect here. This may indeed be important; as discussed in [84], too much radioactive heating can prevent core convection. However, we do not consider this in detail here.
With this, the core solidification timescale is very simple:
t solid " A planet κ heat " 5.7ˆ10´3 M 2 pl α m 5{4 e m 7{4 p(44)
If a long-lived liquid outer core is necessary for habitability, this timescale must be larger than some timescale typical for the development of complex life, which we take here to be proportional to the stellar lifetime (see [2] for an exploration of different choices on this matter). We normalize this time to the expectation that the outer core will remain liquid for another 700 Myr from [85].
An alternative view is that a solid inner core is actually necessary for the sustenance of a magnetic field, in spite of geologic evidence to the contrary (see [86] for zircon evidence of a magnetic field at 3-4 Ga). The inner core may have developed as late as 565 Mya, based on magnetic evidence from Ediacaran rocks that record an anomalously low field strength, taken to signal a rearrangement in field configuration indicative of a recently established solid inner core [87]. This apparent incompatibility is reconciled if another mechanism generated the magnetic field before core solidification (as, for example, a long lived liquid mantle ocean [88]). In this case, the above timescale would need to be comparable to the evolutionary timescale, rather than simply longer than it.
Is a Planetary Magnetic Field Necessary for Habitability?
To treat intrinsic planetary magnetic fields as essential for habitability, we include the product of all five factors into the habitability condition as
H B " θpR a´1 00q θ´r B´Rplanet¯θ`atemp´RAlfvén˘θ pR 1´1 q θpt solid´t‹ q(45)
If we incorporate this into our calculation, we find that the Bayes factor relative to the base case where magnetic fields are not taken to be important is B = 1.52. We also probe the relative importance of each of these subconditions in Table 2 by first only incorporating each condition in isolation, and then incorporating the four others without each condition, into the calculation. Of the five factors considered, the magnetic Reynolds number, magnetosphere radius, and Alfvén zone conditions do not perceptibly alter the probabilities. The core existence condition slightly decreases the probabilities, while the core timescale condition slightly increases them. So, the notion that a magnetic field is necessary for habitability is compatible with the multiverse, and although it is even slightly preferred to the base case, the difference is not statistically meaningful enough to draw the conclusion that the converse hypothesis is disfavored.
We also remark that the base case here took the carbon-to-oxygen ratio to be important for habitability. If instead we drop this assumption, we find the Bayes factor is B = 0.050. The driving factor in making this so low is the core solidification timescale, as can be seen in Table 2. Therefore, we find that the assumption that planetary magnetic fields are important is only compatible with the multiverse if carbon-to-oxygen ratio is also important. This echoes the results of Section 3 and [6], where we found that restricting the carbon-to-oxygen ratio was important for compatibility with the multiverse on other accounts. This also suggests a test of the multiverse hypothesis, for if we find that complex life occurs only on magnetized planets but independently of carbon-to-oxygen ratio, our presence in this universe would be quite unlikely.
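In the numerical implementation, a composite criterion like Equation (45) is simply a product of indicator functions; a minimal sketch (ours, with illustrative placeholder inputs) is:

```python
def theta(x):
    """Heaviside step: 1 if the condition's margin is positive."""
    return 1.0 if x > 0 else 0.0

def H_B(R_a, r_B, R_planet, a_temp, R_alfven, R1, t_solid, t_star):
    """Equation (45): all five magnetic-field conditions must hold."""
    return (theta(R_a - 100.0)
            * theta(r_B - R_planet)
            * theta(a_temp - R_alfven)
            * theta(R1 - 1.0)
            * theta(t_solid - t_star))

# Earth-like placeholder values (illustrative only): R_a ~ 1e3,
# magnetopause at ~10 planetary radii, a_temp = 215 R_sun vs R_A = 24 R_sun,
# R_1 = 1.23, and a core that outlives the relevant timescale.
print(H_B(R_a=1e3, r_B=10.0, R_planet=1.0,
          a_temp=215.0, R_alfven=24.0, R1=1.23,
          t_solid=1.5, t_star=1.0))   # 1.0 -> habitable under H_B
```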
5. Discussion
Though few agree on exactly what conditions are required for habitability, it surely depends on the confluence of a great many factors. Likewise, our notions of habitability strongly affect the expectation for the distribution of life, both throughout our universe, and in others. Because of this, very fine-grained effects have the potential to radically alter our estimations of the probability of our existence in this particular universe, and our observations in general. This places us in a scenario where the importance of all discussed habitability factors must be tested before we may make any statements about multiverse probabilities with a relatively high degree of certainty. The stellar activity and atmospheric aspect of this program was undertaken in this paper. Uncertainties abound: the physics dictating the corona, stellar wind and flares, the relative importance of different atmospheric erosion rates, the ultimate source of Earth's atmosphere, the importance of planetary magnetic fields, and the distribution of all these quantities across different stellar systems are only now coming to light. While we have tried to hedge our ignorance in as many aspects as possible by contemplating competing accounts of these effects, we have necessarily restricted our attention in certain cases, and completely neglected other potentially important effects. Thus, while our work cannot claim to be a definitive exploration of stellar activity and atmospheric effects in other universes, it does represent an important first step.
Perhaps the biggest takeaway of our findings is that, if one believes that a relatively narrow carbon-to-oxygen ratio is required for complex life (as may be argued by the vastly different tectonic regimes that occur outside the interval (0.5,1)), any atmospheric habitability condition we considered had no significant bearing on multiverse probabilities. In this light, all that can be said is that atmospheric presence and stability does not appear to be a major determining factor for why we are in this universe. This is plausible, since the Earth's atmosphere is about two orders of magnitude larger than any threshold we are aware of, but many effects we consider depend exponentially on fundamental constants, so this conclusion is by no means automatic.
On the other hand, if we entertain the possibility that a carbon-to-oxygen ratio relatively close to ours is not required for habitability, altogether different conclusions are drawn. We are forced to conclude, under this assumption, that planetary magnetic fields cannot be important for life, because it renders many of the otherwise less likely regions of parameter space infertile, making us outliers. Additionally, when treating the carbon-to-oxygen ratio as unimportant, we find planetary atmospheric nitrogen must not scale with stellar system nitrogen abundance, or our presence in this universe is unlikely, independent of uncertainties about atmospheric source and lower atmospheric mass threshold.
Both of these findings also suggest potential methods for testing the multiverse hypothesis, if the true habitability conditions turn out to be incompatible with these expectations. So, if we find that an Earth-like C/O is not needed for complex life and either that atmosphere mass scales with stellar nitrogen or that planetary magnetic fields are required for life, the predictions the multiverse framework has made will be found to be incorrect. While these tests may be rather far off, the salient point is that they are possible in principle. Various biosignatures have already been proposed to help determine the distribution of life throughout the universe, for several places inside and out of our solar system, including searching for relic biomarker compounds on Mars [89], abundance ratios of organic compounds on icy moons such as Enceladus [90,91], chemical disequilibrium and even microbial absorption in Venus's atmosphere [92], and atmospheric gases such as oxygen around exoplanets [93]. In fact, it is conceivable that the next few generations of experiments will be able to measure biosignatures on exoplanet populations large enough to distinguish trends with respect to system parameters such as composition [94], that the presence of planetary magnetic fields can be measured through auroral emissions [95], and that the relation between planetary atmospheric size and stellar composition can be determined [96].
Figure 1. Fraction of atmosphere lost as a function of stellar mass. The dependence of this quantity on the fine structure constant α can be observed. The solid lines assume a late atmospheric delivery scenario, and the dashed lines assume the atmosphere originates in original accretion. The dotted lines correspond to the catastrophic value 1, and the Sun's value.
Figure 2. Oxygenation ratio R_1 for different metal ratios R_2. The quantity ΔE_R = 0 for our values of the constants. Planetary cores form only when this quantity exceeds 1 (0 on our log scale).
Author Contributions: Conceptualization, all authors; Methodology, M.S.; Formal Analysis, M.S.; Validation, V.A., L.B. and G.F.L.; Writing-Original Draft Preparation, M.S.; Writing-Review & Editing, V.A., L.B. and G.F.L. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Data Availability Statement: All code to generate data and analysis is located at https://github.com/mccsandora/Multiverse-Habitability-Handler, accessed on Dec. 20, 2022.
The top rows restrict to Earth-like values of C/O ratio, and the bottom do not.

                        Late Delivery   Late (N Dep)   Initial   Initial (N Dep)
Earth-like C/O
  M_atm > ΔM_XUV            0.87           0.66          0.94         0.71
  M_atm > M_triple          0.82           0.63          0.89         0.67
  M_atm > M_diurnal         1.0            0.75          1.0          0.75
  slow rotator              0.32           0.25          0.40         0.32
Unrestricted C/O
  M_atm > ΔM_XUV            2.05           0.0041        2.34         0.0038
  M_atm > M_triple          1.09           0.0057        1.84         0.0032
  M_atm > M_diurnal         1.98           0.0074        3.08         0.068
  slow rotator              1.14           0.00067       2.62         0.00082
Table 2. Ablation study for planetary magnetic field criteria. This table displays the Bayes factors relative to the baseline case where planetary magnetic fields are not important. The 'with only' rows only incorporate the condition in the given column, and the 'without only' rows incorporate every condition except the condition in the given column into the probability calculation.

                    R_a    r_B > R_planet   a_temp > R_A   Mg + 2Si + Fe > O   t_solid > t_⋆
Earth-like C/O
  with only         1.0         1.0             1.0              0.658             1.19
  without only      1.52        1.52            1.44             1.25              0.68
Unrestricted C/O
  with only         0.24        1.98            1.99             2.12              0.0046
  without only      0.050       0.050           0.046            0.0043            1.73
Acknowledgments: We would like to thank Daman Grewal for useful comments.

Conflicts of Interest: The authors declare no conflict of interest.
| [] |
[
"Beyond the Born rule in quantum gravity",
"Beyond the Born rule in quantum gravity"
] | [
"Antony Valentini \nDepartment of Physics and Astronomy\nAugustus College\n14 Augustus RoadSW19 6LNLondonUK\n\nKinard Laboratory\nClemson University\n29634-0978ClemsonSCUSA\n"
] | [
"Department of Physics and Astronomy\nAugustus College\n14 Augustus RoadSW19 6LNLondonUK",
"Kinard Laboratory\nClemson University\n29634-0978ClemsonSCUSA"
] | [] | We have recently developed a new understanding of probability in quantum gravity. In this paper we provide an overview of this new approach and its implications. Adopting the de Broglie-Bohm pilot-wave formulation of quantum physics, we argue that there is no Born rule at the fundamental level of quantum gravity with a non-normalisable Wheeler-DeWitt wave functional Ψ. Instead the universe is in a perpetual state of quantum nonequilibrium with a probability density P = |Ψ| 2 . Dynamical relaxation to the Born rule can occur only after the early universe has emerged into a semiclassical or Schrödinger approximation, with a time-dependent and normalisable wave functional ψ, for non-gravitational systems on a classical spacetime background. In that regime the probability density ρ can relax towards |ψ| 2 (on a coarse-grained level). Thus the pilot-wave theory of gravitation supports the hypothesis of primordial quantum nonequilibrium, with relaxation to the Born rule taking place soon after the big bang. We also show that quantum-gravitational corrections to the Schrödinger approximation allow quantum nonequilibrium ρ = |ψ| 2 to be created from a prior equilibrium (ρ = |ψ| 2 ) state. Such effects are very tiny and difficult to observe in practice.Published in special issue Pilot-wave and beyond: Louis de Broglie and David Bohm's quest for a quantum ontology, ed. A. Drezet, Found. Phys. 53, 6 (2023). | 10.1007/s10701-022-00635-0 | [
"https://export.arxiv.org/pdf/2212.12175v1.pdf"
] | 253,968,095 | 2212.12175 | 39f98bb28a309c4efa8d2b7804e507386558f4ea |
Beyond the Born rule in quantum gravity
23 Dec 2022
Antony Valentini
Department of Physics and Astronomy
Augustus College
14 Augustus Road, SW19 6LN, London, UK
Kinard Laboratory
Clemson University
Clemson, SC 29634-0978, USA
antonyv@clemson.edu
We have recently developed a new understanding of probability in quantum gravity. In this paper we provide an overview of this new approach and its implications. Adopting the de Broglie-Bohm pilot-wave formulation of quantum physics, we argue that there is no Born rule at the fundamental level of quantum gravity with a non-normalisable Wheeler-DeWitt wave functional Ψ. Instead the universe is in a perpetual state of quantum nonequilibrium with a probability density P ≠ |Ψ|². Dynamical relaxation to the Born rule can occur only after the early universe has emerged into a semiclassical or Schrödinger approximation, with a time-dependent and normalisable wave functional ψ, for non-gravitational systems on a classical spacetime background. In that regime the probability density ρ can relax towards |ψ|² (on a coarse-grained level). Thus the pilot-wave theory of gravitation supports the hypothesis of primordial quantum nonequilibrium, with relaxation to the Born rule taking place soon after the big bang. We also show that quantum-gravitational corrections to the Schrödinger approximation allow quantum nonequilibrium ρ ≠ |ψ|² to be created from a prior equilibrium (ρ = |ψ|²) state. Such effects are very tiny and difficult to observe in practice.

Published in special issue Pilot-wave and beyond: Louis de Broglie and David Bohm's quest for a quantum ontology, ed. A. Drezet, Found. Phys. 53, 6 (2023).
Introduction
It has long been known that the pilot-wave theory of de Broglie and Bohm provides us with an objective and deterministic account of quantum physics [1,2,3,4,5]. Historically, the theory was constructed by de Broglie in a series of papers from 1922 to 1927, culminating in a pilot-wave dynamics for a many-body system. The theory was revived by Bohm in 1952, who extended the dynamics to field theory and, crucially, showed in detail how the theory accounts for the general quantum theory of measurement [3]. Despite this success, there remains a long-standing controversy concerning the status of the Born probability rule in this theory. In recent work we have argued that the status of the Born rule in pilot-wave theory changes radically when we consider a regime in which quantum-gravitational effects are important [6]. In this paper we provide an overview of these new ideas and results, with a minimum of technicalities, and with an emphasis on the conceptual implications.
In pilot-wave theory a system with configuration-space wave function ψ(q, t) has an actual trajectory q(t) whose velocity v = dq/dt is determined by de Broglie's law of motion or 'guidance equation', where for systems with conventional Hamiltonians v is proportional to the gradient ∂_q S of the phase S of ψ = |ψ|e^{iS}. For an ensemble of systems with the same wave function ψ(q, t), the ensemble distribution ρ(q, t) of configurations is usually assumed to be given by the Born rule

ρ = |ψ|² .   (1)
It is a simple consequence of the equations of motion that if (1) holds at an initial time t = 0 it will hold for all t. On these grounds in the 1920s de Broglie simply took (1) as an assumption with no further explanation or justification [2]. This stance was however criticised by Pauli and by Keller, in 1953, who argued that such an initial condition was unjustified in a deterministic theory and should be derived from the dynamics [7,8]. This criticism was partially met by Bohm in the same year, when he argued that an ensemble of two-level molecules would relax to the state (1) when subjected to random collisions [9]. However, no general argument for relaxation was given. In 1954, citing difficulties with understanding relaxation to the Born rule, Bohm and Vigier abandoned the original deterministic theory and introduced random (subquantum) 'fluid fluctuations' that drive relaxation to the Born rule for a general system [10]. Since then, most authors have simply adopted de Broglie's original position, with (1) in effect taken as an additional postulate (alongside the Schrödinger equation for ψ and de Broglie's law for v) [4,11,12,13]. This author has long argued that simply postulating (1) is a mistake, akin to artificially restricting classical mechanics to a state of thermal equilibrium [14,15,16,17,18,19,20,21,22,23,24]. In pilot-wave theory the Born rule (1) really describes a state of statistical equilibrium, or 'quantum equilibrium', analogous for example to the Maxwell distribution of molecular speeds for a gas in thermal equilibrium. Just as a classical ensemble can be in thermal nonequilibrium, with a distribution of velocities different from that of Maxwell, in pilot-wave theory an ensemble can be in quantum nonequilibrium, with a distribution of configurations

ρ ≠ |ψ|²   (2)

different from that of Born. For such an ensemble the statistical predictions of textbook quantum mechanics would fail - raising the question of why such nonequilibrium phenomena have never been observed in the laboratory. The answer, at least as proposed by this author in 1991, is that all the systems we have access to have a long and violent history that traces back ultimately to the big bang. During that time there has been ample opportunity for dynamical relaxation ρ → |ψ|² to take place (on a coarse-grained level) - a process of 'quantum relaxation' that is broadly analogous to classical thermal relaxation, and which presumably occurred in the early universe. This process has been studied in general terms and has been observed to take place efficiently in a wide range of numerical simulations [14,15,16,18,21,24,25,26,27,28,29,30,31,32]. We can then understand why the Born rule holds to high accuracy today. At the same time, we understand that pilot-wave theory also contains a wider nonequilibrium physics, which may have been active in the early universe, and which could have left discernible traces today - in the form of anomalies in the cosmic microwave background (CMB), as well as in relic cosmological particles that might today still display violations of the Born rule [18,33,34,35,36,37,38,39,40,41]. On this view, quantum physics is merely a special case of a much wider physics in which the Born rule is broken. That wider physics allows violations of the uncertainty principle as well as practical nonlocal signalling [15,19,20,22].
According to pilot-wave theory, at least when correctly interpreted, textbook quantum mechanics is merely an effective theory that emerges in the state of quantum equilibrium. Many supposedly fundamental quantum constraints are really peculiarities of equilibrium and are broken for more general ensembles.
An alternative view has, however, long been championed by the 'Bohmian mechanics' school of de Broglie-Bohm theory - a distinctive approach to the subject first proposed by Dürr, Goldstein and Zanghì in 1992 [11,12,13]. In this approach, the wave function Ψ of the whole universe is used to define a (supposedly) fundamental probability (or 'typicality') measure |Ψ|², from which one can readily derive the Born rule (1) for subsystems with an effective wave function ψ. On this view the Born rule is built into the theory and there is no prospect of ever finding nonequilibrium violations, not even in the early universe. While this approach has been influential among some philosophers [42,43], the argument is essentially circular: the Born rule is derived for subsystems only by assuming the Born rule for the whole universe at the initial time t = 0. There is no reason why our universe should have started with those particular initial conditions - whether or not the universe began in equilibrium or nonequilibrium is ultimately an empirical question to be decided by observation and experiment, not by theoretical or philosophical fiat [17,18,24].
As we will see in this paper, the controversy over the Born rule in pilot-wave theory changes drastically when we consider a regime where quantum gravity is important. For in that regime there simply is no normalisable physical probability (or typicality) measure |Ψ|² for the whole universe, and the (circular) argument employed by the Bohmian mechanics school can no longer even be formulated. Instead, normalisable wave functions ψ emerge only in the semiclassical regime - for systems evolving on a classical spacetime background - and in that regime the Born rule (1) can emerge by a dynamical process of quantum relaxation [6]. In this way, considerations from quantum gravity vindicate the hypothesis of quantum nonequilibrium at the big bang, with relaxation to the Born rule taking place only afterwards.
Before presenting technical details of how all this works, let us first sketch the key ideas in simple terms. In canonical quantum gravity the geometry of 3-space is described by a metric tensor g_ij. If we include a matter field φ we might expect the system to have a wave function (or functional) Ψ[g_ij, φ, t] obeying a time-dependent Schrödinger equation i∂Ψ/∂t = ĤΨ (with an appropriate Hamiltonian Ĥ and time parameter t). Instead, when we quantise the gravitational field we obtain a wave functional Ψ[g_ij, φ] obeying a time-independent Wheeler-DeWitt equation [45,46]
ĤΨ = 0   (3)
(with an appropriate Hamiltonian density operator Ĥ). Time makes no appearance in the equations. After more than half a century since it was first written down, the physical interpretation of this 'timeless' theory remains controversial. Most workers in the field agree that a time-dependent Schrödinger equation i∂ψ/∂t = Ĥψ for a conventional wave functional ψ[φ, t] can emerge only in a semiclassical regime for a quantum field φ propagating on a classical background spacetime. There is, however, controversy over precisely how an effective time parameter t can emerge from a fundamentally timeless theory. This question is known in the quantum gravity literature as the 'problem of time' [47,48,49,50,51,52,53].
Quantum-gravitational effects are expected to be significant at sufficiently early times in our cosmological history (certainly within a Planck time t_P ∼ 10⁻⁴³ sec after the beginning). In such a deep quantum-gravity regime, the Wheeler-DeWitt equation (3) must be applied. Soon afterwards we expect the universe to emerge into a semiclassical or 'Schrödinger' regime, in which a conventional time-dependent wave equation can be applied. Previous discussion of the Born rule and of quantum relaxation in pilot-wave theory has taken place within the semiclassical or Schrödinger approximation. Outside that approximation, however, the discussion must be carefully revised.
A pilot-wave theory of quantum gravity can be written down by supplementing the Wheeler-DeWitt equation (3) with de Broglie-Bohm trajectories g_ij(t) whose velocity ∂g_ij/∂t is proportional to a generalised phase gradient (see Section 5) [54,55,56,57,58]. We might then expect |Ψ[g_ij, φ]|² to define an equilibrium Born-rule probability density [54,60]. But, as we will see, this cannot be correct. The mathematical structure of (3) ensures that the density |Ψ[g_ij, φ]|² cannot be normalised and so cannot be a physical probability distribution. This point has caused controversy and confusion in the literature. In our view, from a pilot-wave perspective, the implication is clear: in quantum gravity there simply is no physical Born-rule equilibrium state [6,61]. A physical probability density P[g_ij, φ, t] (for a theoretical ensemble) must be normalisable by definition. Therefore it must differ from |Ψ[g_ij, φ]|² at all times. We may say that the deep quantum-gravity regime is in a perpetual state of quantum nonequilibrium
P ≠ |Ψ|² .   (4)
In this regime there can be no quantum relaxation and no state of quantum equilibrium. As we will see there are two immediate implications. First, as the early universe emerges from the deep quantum-gravity regime and settles into a semiclassical regime described by the Schrödinger approximation, we can expect fields φ propagating on the classical background to be in a state of quantum nonequilibrium ρ[φ, t] ≠ |ψ[φ, t]|² - where ψ is the effective (normalisable) Schrödinger wave functional for φ and the probability ρ emerges from P as a conditional probability. Second, quantum relaxation as previously understood can begin to take place only after the universe has settled into a conventional Schrödinger regime. Thus, even though there is no fundamental Born-rule equilibrium state in quantum gravity, we still recover quantum relaxation in the Schrödinger approximation - and so we can still explain the Born rule as we see it today.
There is another remarkable result of this analysis. It is well known that the emergent Schrödinger equation is subject to small quantum-gravitational corrections appearing in the effective Hamiltonian Ĥ. Perhaps surprisingly, some of the correction terms are non-Hermitian [62,63,64,65]. Such terms are of course inconsistent with standard quantum mechanics, since the norm of ψ is no longer conserved and |ψ|² cannot be interpreted as a probability density in the usual way. For this reason, in previous studies such terms have been dropped, with no clear justification. In pilot-wave theory, in contrast, there is no inconsistency: such terms simply generate a gravitational instability of the Born rule, whereby an initial density ρ = |ψ|² can evolve into a final density ρ ≠ |ψ|². As we will see, such effects are extremely small, but observable at least in principle. This means that, when quantum gravity is taken into account, it is no longer the case that once quantum equilibrium is reached we are trapped in that state forever. There is a way out, at least in principle.
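To see the mechanism at work, the following is a minimal numerical sketch - our own toy model, not the actual quantum-gravitational correction derived in the works cited above. We evolve a one-dimensional wave function under a Hamiltonian Ĥ = p̂²/2 + x̂²/2 + iλx̂ (ħ = m = 1, with the anti-Hermitian term iλx̂ put in by hand), while guiding an initially equilibrium ensemble with the standard de Broglie velocity v = Im(∂_xψ/ψ). The anti-Hermitian term acts as a local source for |ψ|² but not for ρ, so the ratio ρ/|ψ|² drifts away from unity; with λ = 0 the deviation stays at the numerical-noise level.

```python
import numpy as np

# Grid and initial Gaussian; toy non-Hermitian Hamiltonian (our assumption):
# H = p^2/2 + x^2/2 + i*lam*x, with hbar = m = 1.
N, Lbox = 512, 20.0
x = np.linspace(-Lbox/2, Lbox/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
psi = np.exp(-x**2/2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

lam, dt, steps = 0.05, 0.001, 2000
expV = np.exp(-1j*(0.5*x**2 + 1j*lam*x)*dt/2)   # half-step; norm not conserved
expT = np.exp(-1j*0.5*k**2*dt)                  # full kinetic step (Fourier space)

# Born-rule (equilibrium) ensemble at t = 0, sampled by inverse transform
rng = np.random.default_rng(0)
cdf = np.cumsum(np.abs(psi)**2)*dx
xs = np.interp(rng.uniform(0, cdf[-1], 50000), cdf, x)

for _ in range(steps):
    dpsi = np.fft.ifft(1j*k*np.fft.fft(psi))
    v = np.imag(dpsi/psi)                       # de Broglie velocity field
    xs += np.interp(xs, x, v)*dt                # guide the trajectories
    psi = expV*np.fft.ifft(expT*np.fft.fft(expV*psi))  # Strang split-step

# Compare the trajectory density with the re-normalised |psi|^2
edges = np.linspace(-Lbox/2, Lbox/2, 41)
rho, _ = np.histogram(xs, bins=edges, density=True)
p2 = np.abs(psi)**2
p2 /= np.sum(p2)*dx
p2c = np.interp(0.5*(edges[1:] + edges[:-1]), x, p2)
print("max |rho - |psi|^2| =", np.max(np.abs(rho - p2c)))  # grows with lam
```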
To summarise, in this paper we have three new ideas to present:
1. In quantum gravity there is no Born rule for a timeless Wheeler-DeWitt wave functional Ψ and the system is in a perpetual state of quantum nonequilibrium P ≠ |Ψ|².

2. The Born rule ρ = |ψ|² can emerge by quantum relaxation only in a semiclassical or Schrödinger approximation, for systems with an effective time-dependent wave function ψ on a classical background spacetime.

3. Tiny quantum-gravitational corrections to the Schrödinger approximation can make the Born rule unstable, with initial distributions ρ = |ψ|² evolving to final distributions ρ ≠ |ψ|².
To develop the details, we begin with a brief outline of some essential formalism.
Quantum gravity and quantum cosmology
In this section we provide a brief summary of the essential formalism of canonical quantum gravity [45,46], together with a simple model of quantum cosmology.
Canonical quantum gravity
The canonical quantisation of the gravitational field begins with a '3+1' foliation of classical spacetime by spacelike slices Σ(t) labelled by a time parameter t. This can always be done (generally nonuniquely) for a spacetime that is 'globally-hyperbolic'. The line element then takes the form
dτ² = (N² − N_i N^i) dt² − 2N_i dx^i dt − g_ij dx^i dx^j ,   (5)

where N, N_i are respectively the 'lapse function' and 'shift vector', while g_ij is the 3-metric on Σ(t). For simplicity we can take N_i = 0 - so that lines of constant x^i are normal to the slices - provided such lines do not encounter singularities. The object to be quantised is then a spatial 3-geometry represented by g_ij. Beginning with the usual Einstein-Hilbert action, standard quantisation methods lead to the Wheeler-DeWitt equation, which for the pure gravitational field reads
( −G_ijkl δ²/δg_ij δg_kl − √g R ) Ψ = 0 ,   (6)

where Ψ = Ψ[g_ij] and

G_ijkl = (1/2) g^{−1/2} (g_ik g_jl + g_il g_jk − g_ij g_kl) .   (7)
We have written the kinetic term with a specific operator ordering, but it should be understood that the ordering is ambiguous. The wave functional is also subject to a constraint

−2D_j (δΨ/δg_ij) = 0   (8)

associated with spatial coordinate invariance (where D_j is a spatial covariant derivative). This constraint ensures that Ψ is a function of the coordinate-independent 3-geometry and not of the coordinate-dependent 3-metric. Writing (6) and (8) as

ĤΨ = 0 ,   Ĥ_i Ψ = 0 ,

the total Hamiltonian operator is given by

Ĥ = ∫ d³x (N Ĥ + N^i Ĥ_i) .   (9)
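As an aside, the DeWitt metric (7) is indefinite: regarded as a metric on the six-dimensional space of symmetric perturbations δg_ij at a point, it has Lorentzian signature (−,+,+,+,+,+). This fact underlies the Klein-Gordon-like character of (6) discussed below. Here is a minimal numerical sketch (our own, evaluated for a flat 3-metric g_ij = δ_ij):

```python
import numpy as np

g = np.eye(3)                       # flat 3-metric (illustrative assumption)
sg = np.sqrt(np.linalg.det(g))

def G(i, j, k, l):
    # DeWitt metric, Eq. (7): (1/2) g^{-1/2} (g_ik g_jl + g_il g_jk - g_ij g_kl)
    return 0.5/sg*(g[i, k]*g[j, l] + g[i, l]*g[j, k] - g[i, j]*g[k, l])

# Basis of symmetric index pairs (ij); the basis convention rescales
# eigenvalues but cannot change the signature.
pairs = [(0, 0), (1, 1), (2, 2), (0, 1), (0, 2), (1, 2)]
M = np.array([[G(i, j, k, l) for (k, l) in pairs] for (i, j) in pairs])
print(np.linalg.eigvalsh(M))        # one negative eigenvalue: (-,+,+,+,+,+)
```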
In the presence of a scalar matter field φ with potential V(φ) we have an extended Wheeler-DeWitt equation

(Ĥ_g + Ĥ_φ) Ψ = 0   (10)

for the wave functional Ψ = Ψ[g_ij, φ], where the gravitational term

Ĥ_g = −G_ijkl δ²/δg_ij δg_kl − √g R   (11)

in Ĥ = Ĥ_g + Ĥ_φ is supplemented by a matter term

Ĥ_φ = (1/2)√g ( −(1/g) δ²/δφ² + g^ij ∂_iφ ∂_jφ ) + √g V ,   (12)

while the constraint Ĥ_i Ψ = 0 corresponding to (8) takes the form

−2D_j (δΨ/δg_ij) + ∂_iφ (δΨ/δφ) = 0 .   (13)
A simple model of quantum cosmology
It will be helpful to illustrate our ideas with a simple model of quantum cosmology.
Consider an expanding flat and homogeneous universe with scale factor a(t) and spacetime line element

dτ² = dt² − a² dx² .   (14)
We assume that the universe contains a homogeneous matter field φ with a potential V(φ). We then have a 'mini-superspace' model with two degrees of freedom (a, φ).
This system has a Lagrangian [65,66]

L = −(1/2) m_P² a ȧ² + (1/2) a³ φ̇² − a³ V ,   (15)

where m_P² = 3/4πG is the square of a (rescaled) Planck mass. This implies canonical momenta

p_a = −m_P² a ȧ ,   p_φ = a³ φ̇   (16)

and a Hamiltonian

H = −(1/2m_P²)(1/a) p_a² + (1/2a³) p_φ² + a³ V   (17)

(noting the sign difference between the kinetic terms). Promoting the canonical momenta to operators p̂_a = −i∂/∂a and p̂_φ = −i∂/∂φ, and choosing the factor ordering (1/a²) p̂_a a p̂_a, the Wheeler-DeWitt equation ĤΨ = 0 for Ψ(a, φ) reads [65]

(1/2m_P²)(1/a) ∂/∂a(a ∂Ψ/∂a) − (1/2a²) ∂²Ψ/∂φ² + a⁴ V Ψ = 0 .   (18)
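As a quick symbolic check of the classical setup above, the momenta (16) and the Hamiltonian (17) do follow from the Lagrangian (15) by the usual Legendre transform. Here is a minimal sympy sketch (our own, with obvious variable names):

```python
import sympy as sp

a, adot, phidot, mP, V = sp.symbols('a adot phidot m_P V')

# Mini-superspace Lagrangian, Eq. (15)
L = -sp.Rational(1, 2)*mP**2*a*adot**2 + sp.Rational(1, 2)*a**3*phidot**2 - a**3*V

# Canonical momenta, Eq. (16)
pa_expr = sp.diff(L, adot)       # -m_P**2 * a * adot
pphi_expr = sp.diff(L, phidot)   # a**3 * phidot

# Legendre transform H = p_a*adot + p_phi*phidot - L, re-expressed in momenta
pa, pphi = sp.symbols('p_a p_phi')
sol = sp.solve([sp.Eq(pa, pa_expr), sp.Eq(pphi, pphi_expr)],
               [adot, phidot], dict=True)[0]
H = sp.simplify((pa_expr*adot + pphi_expr*phidot - L).subs(sol))
print(H)   # -p_a**2/(2*a*m_P**2) + p_phi**2/(2*a**3) + V*a**3, i.e. Eq. (17)
```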
Difficulties with the Born rule in quantum gravity
In this section we discuss some of the key difficulties with trying to interpret |Ψ|² as a probability density for a Wheeler-DeWitt wave functional Ψ.
Why |Ψ|² is non-normalisable
We have said that, for solutions Ψ of the Wheeler-DeWitt equation, the quantity |Ψ|² cannot be a physical probability density because it is non-normalisable. To see why |Ψ|² cannot be normalised, consider for simplicity the case of pure gravitation. The Wheeler-DeWitt equation (6) for the wave functional Ψ[g_ij] on the space of 3-metrics g_ij is mathematically analogous to the single-particle Klein-Gordon equation

−∂²ψ/∂t² + δ^ij ∂²ψ/∂x^i∂x^j − m²ψ = 0   (19)
for a wave ψ(x, t) on Minkowski spacetime. The analogy can be traced to the indefinite character of the 'DeWitt metric' G_ijkl [45]. This means that (6) is formally analogous to an infinite-dimensional Klein-Gordon equation with a 'mass-squared' term g^{1/2}R. As a result, the integral ∫Dg |Ψ[g_ij]|² over the whole space of 3-metrics necessarily diverges, just as the integral ∫d³x dt |ψ(x, t)|² over the whole of spacetime necessarily diverges. It might be thought that the divergence could be removed by an appropriate regularisation. But the divergence is deeper than that, reflecting a basic fact about wave propagation. Solutions ψ(x, t) of the wave equation (19) can be localised with respect to x but not with respect to t, and mathematically the same phenomenon occurs for solutions Ψ[g_ij] of the wave equation (6).

What we have just said is slightly simplified. We have not mentioned the constraint (8), which ensures that Ψ[g_ij] is not really a function on the space of coordinate-dependent 3-metrics g_ij but in fact a function on the space of coordinate-independent 3-geometries (a space commonly referred to as 'superspace'). It might then be thought that the non-normalisability of |Ψ[g_ij]|² could just be an artifact of having to integrate over an infinite 'gauge volume' of 3-metrics representing the same 3-geometry. But in fact the result still diverges even if we perform a physical integral over the space of 3-geometries (perhaps by factoring out the gauge volume in some way).
The simplest way to see this is to consider our mini-superspace model of quantum cosmology (Section 2.2). Each spacelike slice of constant t has a simple coordinate-independent representation as a flat Euclidean 3-space with scale factor a(t). If we rewrite the Wheeler-DeWitt equation (18) for Ψ(a, φ) in terms of α = ln a we find

(1/m_P²) ∂²Ψ/∂α² − ∂²Ψ/∂φ² + 2e^{6α} V Ψ = 0 .   (20)
This is a two-dimensional Klein-Gordon equation with a potential term. The free part (ignoring the potential) has the general solution

Ψ_free = f(φ − m_P α) + g(φ + m_P α) ,   (21)

where f and g are packets travelling with the 'wave speed' c = m_P in the two-dimensional 'spacetime' (α, φ). Thus
∫ dα dφ |Ψ_free|² = ∞ ,   (22)

just as for a Klein-Gordon solution ψ(x, t) we have

∫ d³x dt |ψ(x, t)|² = ∞ .   (23)
Clearly the non-normalisability of Ψ has nothing to do with the (technically delicate) issue of unphysical coordinate degrees of freedom. It is simply a consequence of the Klein-Gordon-like character of the Wheeler-DeWitt equation and the resulting wave-like propagation in the mini-superspace (α, φ). Similar conclusions must hold in the full theory.
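The general free solution (21) is easy to verify symbolically; the following is a minimal sketch using sympy. Since a packet f(φ − m_Pα) merely translates in φ as α grows, its squared modulus integrates to infinity over the (α, φ) plane, which is the content of (22).

```python
import sympy as sp

alpha, phi = sp.symbols('alpha phi', real=True)
mP = sp.symbols('m_P', positive=True)
f, g = sp.Function('f'), sp.Function('g')

# General solution (21) of the free part of Eq. (20)
Psi = f(phi - mP*alpha) + g(phi + mP*alpha)

# Free operator of Eq. (20): (1/m_P^2) d^2/d alpha^2 - d^2/d phi^2
residual = sp.diff(Psi, alpha, 2)/mP**2 - sp.diff(Psi, phi, 2)
print(sp.simplify(residual))   # 0: packets propagate with 'wave speed' m_P
```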
Naive Schrödinger interpretation. I
It is in fact well known in quantum-gravity circles that |Ψ|² cannot be a physical probability density [47,48,49,50,51,52]. Such an interpretation was adopted by Hawking and collaborators in the 1980s [67], and came to be known as the 'naive Schrödinger interpretation' [47]. Even leaving aside the question of non-normalisability, the interpretation is problematic because the putative probability density |Ψ|² is time-independent. Attempts were made to repair this by some form of conditioning on a subset of the degrees of freedom, but this 'conditional probability interpretation' led to other problems [49,50]. In any case, in our view the interpretation fails from the outset simply because a non-normalisable density cannot represent a physical probability distribution.

Historically, however, it has been more common to cite another reason for the failure of the naive interpretation of |Ψ|². It is claimed that treating |Ψ|² as a conventional probability is incorrect because 'time' is in effect hidden in the metric degrees of freedom g_ij. For example, in quantum cosmology with a wave function Ψ(a, φ), it is commonplace to treat the scale factor a as an effective time parameter. We can then try to recover a Born-rule-type probability for the remaining degrees of freedom at a given value of 'time'. This approach has a long history, dating back to the pioneering work of DeWitt and Wheeler in the 1960s [45,68], but to this day it remains controversial. Some authors have raised concerns about the bona fide temporal properties of gravitational degrees of freedom [47]. For example, if the scale factor a plays the role of time, what happens to time in a universe that expands and recontracts? On the other hand, some supporters of quantum gravity argue that at the deepest level physics is genuinely timeless, and that our common-sense notions of 'time' emerge only approximately and in certain conditions [45,69,70,71,72,73]. It is however not entirely clear whether quantum mechanics can be properly applied in a fundamentally timeless theory [44,70,73,74]. A relatively recent and exhaustive review of the 'problem of time' in quantum gravity runs to nearly a thousand pages and draws no definite conclusions [52], suggesting that the problem has yet to be satisfactorily resolved (though some may disagree).
In this paper we offer a new explanation for the failure of the naive Schrödinger interpretation. Our explanation is that, in the deep quantum-gravity regime, there is no such thing as the Born rule. As we shall see, we can discuss (time-dependent) probability densities such as P[g_ij, φ, t], but these are not tied to the Born rule and can never be. Necessarily, P[g_ij, φ, t] ≠ |Ψ[g_ij, φ]|² always, since the left-hand side is normalisable (by definition) and the right-hand side is not. We may say that, at the Planck scale, a quantum-gravitational universe is in a perpetual state of quantum nonequilibrium. This, in our view, is the true physical significance of the non-normalisability of the Wheeler-DeWitt wave functional Ψ.
To make sense of this idea, however, we need to look more closely at pilot-wave theory, in which the Born rule is not a postulate but instead arises by a process of dynamical relaxation [14,15,16,17,18,21,22,23,24]. At least, that is the case in non-gravitational physics. As we shall see, in pilot-wave gravitation, in contrast, there is no state of quantum equilibrium and no possibility of obtaining the Born rule, except in the semiclassical regime.
Let us first consider pilot-wave theory for a general system with configuration q and wave function ψ(q, t) on a background classical spacetime with global time parameter t (corresponding to a foliation by spacelike slices Σ(t)). Here q could represent particle or field configurations on the spacelike slice at time t. The wave function obeys a time-dependent Schrödinger equation
i ∂ψ/∂t = Ĥψ   (24)
with some Hamiltonian operator Ĥ. This implies a continuity equation for |ψ|²,

∂|ψ|²/∂t + ∂_q · j = 0 ,   (25)

where ∂_q is a gradient on configuration space and j satisfies

∂_q · j = 2 Re( iψ* Ĥψ ) .   (26)
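For completeness, the step from (24) to (25)-(26) is a one-line calculation: using ∂ψ/∂t = −iĤψ and its complex conjugate,

∂|ψ|²/∂t = ψ* ∂ψ/∂t + ψ ∂ψ*/∂t = −iψ*(Ĥψ) + iψ(Ĥψ)* = −2 Re( iψ* Ĥψ ) ,

so any current j satisfying (26) makes the continuity equation (25) hold identically.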
The 'current' j can be written in terms of ψ and its functional form depends on Ĥ [75]. Given the expression j = j[ψ] = j(q, t) we can define a velocity field

v(q, t) = j(q, t)/|ψ(q, t)|²   (27)
and write down a de Broglie guidance equation
dq/dt = v(q, t)   (28)

for the trajectory q(t). Equations (24), (27) and (28) define a deterministic dynamics for a general system with wave function ψ(q, t). Note that ψ is regarded as a physical field (or 'pilot wave') on configuration space that guides the trajectory q(t) of a single system. At the fundamental dynamical level there is no such thing as probability (as in classical mechanics). If Ĥ happens to be quadratic in the momenta, we find that v is proportional to a phase gradient. For example, for a single low-energy particle of mass m we find
v = (1/m) Im(∇ψ/ψ) = (1/m) ∇S ,   (29)

where ψ = |ψ|e^{iS}, while for a many-body system the nth particle has velocity v_n = (1/m_n)∇_n S.
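To make the dynamics concrete, here is a minimal numerical sketch - our own toy example, not taken from the cited works - for a single particle in one dimension guided by a superposition of two free Gaussian packets (ħ = m = 1). The velocity field (29) is evaluated from ψ by finite differences, and a trajectory is integrated with a simple Euler step.

```python
import numpy as np

def gaussian(x, t, x0, p0, s0=1.0):
    # Exact free evolution of an initial Gaussian packet (hbar = m = 1)
    st = s0*(1 + 1j*t/(2*s0**2))
    return (2*np.pi*st**2)**(-0.25)*np.exp(-(x - x0 - p0*t)**2/(4*s0*st)
                                           + 1j*p0*(x - p0*t/2))

def psi(x, t):
    # Superposition of two packets approaching each other
    return gaussian(x, t, -5.0, +1.0) + gaussian(x, t, +5.0, -1.0)

def v(x, t, eps=1e-5):
    # de Broglie velocity (29): v = Im(psi'/psi), via finite differences
    return np.imag((psi(x + eps, t) - psi(x - eps, t))/(2*eps)/psi(x, t))

# Integrate one trajectory (Euler; a small step, or RK4, is needed near nodes)
x, dt = -4.0, 0.001
for step in range(10000):
    x += v(x, step*dt)*dt
print("x(t = 10) =", x)
```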
We can now consider an ensemble of systems with the same wave function ψ. The systems evolve according to the velocity field v. By construction, then, the distribution ρ(q, t) of configurations evolves by the continuity equation

∂ρ/∂t + ∂_q · (ρv) = 0 .   (30)
This matches the continuity equation (25) for |ψ|². It follows immediately that if ρ = |ψ|² initially then ρ = |ψ|² at later times. An ensemble obeying the Born rule is in a state of 'quantum equilibrium'. For such ensembles we recover the usual statistical predictions of textbook quantum mechanics [3,4]. There is, however, no reason of principle why we could not begin with a 'quantum nonequilibrium' ensemble with ρ ≠ |ψ|² [14,15,16]. What happens then? In general we will find violations of the usual statistical predictions. For example, for single particles incident on a two-slit screen with incoming wave function ψ, an incident ensemble with ρ ≠ |ψ|² will yield an anomalous distribution ρ(x, t) ≠ |ψ(x, t)|² at the backstop, breaking the usual interference pattern. Similarly, atomic transitions will have non-standard probabilities, and so on. And yet, such nonequilibrium phenomena have never been observed in the laboratory. Why not?
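As a concrete illustration of such a Born-rule violation, here is a minimal sketch (our own toy setup, reusing the two-packet superposition of the previous sketch): an incident ensemble is drawn with the wrong packet widths, so that ρ ≠ |ψ|² initially, and the final-time histogram of guided trajectories differs from the |ψ|² interference profile.

```python
import numpy as np

def gaussian(x, t, x0, p0, s0=1.0):
    st = s0*(1 + 1j*t/(2*s0**2))
    return (2*np.pi*st**2)**(-0.25)*np.exp(-(x - x0 - p0*t)**2/(4*s0*st)
                                           + 1j*p0*(x - p0*t/2))

def psi(x, t):
    return gaussian(x, t, -5.0, +1.0) + gaussian(x, t, +5.0, -1.0)

def v(x, t, eps=1e-5):
    return np.imag((psi(x + eps, t) - psi(x - eps, t))/(2*eps)/psi(x, t))

rng = np.random.default_rng(0)
# Nonequilibrium ensemble: packets populated too narrowly (rho != |psi|^2)
xs = np.concatenate([rng.normal(-5.0, 0.3, 5000), rng.normal(+5.0, 0.3, 5000)])
dt, steps = 0.001, 5000
for s in range(steps):
    xs += v(xs, s*dt)*dt

T = steps*dt
edges = np.linspace(-12, 12, 121)
rho, _ = np.histogram(xs, bins=edges, density=True)
centres = 0.5*(edges[1:] + edges[:-1])
p2 = np.abs(psi(centres, T))**2
p2 /= p2.sum()*(edges[1] - edges[0])   # normalise |psi|^2 on the grid
print("max deviation from the Born rule:", np.max(np.abs(rho - p2)))
```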
Quantum relaxation
In pilot-wave theory there is a straightforward answer. At some time in the remote past there took place a process of 'quantum relaxation' - by which we mean the time evolution of ρ towards |ψ|² (on a coarse-grained level). Quantum equilibrium was already reached, at least to a very good approximation, long before any of our experiments were carried out. That is why we see the Born rule today [14,15,16,17,18,21,22,23,24]. Bearing in mind the long and violent astrophysical and cosmological history of all known systems, quantum relaxation probably occurred in the very early universe.
Quantum relaxation can be understood, by analogy with thermal relaxation for an isolated classical system, in terms of the decrease of a coarse-grained H-function [14,16]

H̄(t) = ∫ dq ρ̄ ln(ρ̄/|ψ|²) ,   (31)
where the overbars indicate coarse-graining over small cells in configuration space (with |ψ|² in the ratio likewise coarse-grained). This quantity is equal to minus the relative entropy of ρ̄ with respect to |ψ|². It is bounded below by zero, H̄ ≥ 0, and H̄ = 0 if and only if ρ̄ = |ψ|². If we begin at t = 0 with ρ̄ ≠ |ψ|², then H̄(0) > 0. As the ensemble relaxes towards equilibrium, H̄(t) → 0 and ρ̄ → |ψ|². This relaxing behaviour has been demonstrated in a wide variety of numerical simulations, yielding an approximately exponential decay [21,26,28]
\[ \bar{H}(t) \approx \bar{H}(0)\, e^{-t/\tau} \tag{32} \]
on a timescale τ that is (very roughly) comparable to the quantum timescale ∆t = ℏ/∆E (though τ also depends on the coarse-graining length) [26]. Moreover the quantity (31) obeys a general coarse-graining H-theorem [14,16]
\[ \bar{H}(t) \le \bar{H}(0) , \tag{33} \]
assuming no initial fine-grained structure in ρ and |ψ|² at t = 0. Closer analysis shows that H̄(t) strictly decreases when ρ develops fine-grained structure, as tends to happen for velocity fields that vary over the coarse-graining cells [16, 24].

We have said that quantum relaxation probably took place in the early universe, soon after the big bang. This idea is potentially testable. According to inflationary cosmology, primordial quantum fluctuations in a scalar inflaton field were the ultimate source of primordial inhomogeneities, which later grew by gravitational clumping to form large-scale structure, as well as seeding the small temperature anisotropies we see today in the cosmic microwave background (CMB) [76, 77, 78]. This means that the statistical properties of the CMB sky ultimately depend on the Born rule for quantum field fluctuations in the very early universe. If the Born rule was broken at sufficiently early times, this could show up as anomalies in the CMB today [33, 35]. Careful analysis shows that on expanding space quantum relaxation is suppressed for long-wavelength (super-Hubble) field modes, suggesting that a pre-inflationary era will end with a power deficit at long wavelengths, which could then carry over to an inflationary phase yielding a large-scale power deficit in the CMB [34, 36]. This scenario has been studied numerically, with some simplifying assumptions [37, 39]. A large-scale power deficit has in fact been reported in the CMB data [79], though its status remains controversial. Fitting the data to a quantum relaxation model has yielded some tantalising results, but the data are too noisy for clear conclusions to be drawn [35, 80].
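Returning briefly to the coarse-grained quantity (31): the sketch below (our own construction, not from the original text) shows how H̄ can be estimated from sample configurations binned into cells. The stand-in Gaussian 'equilibrium' density, cell size and sample counts are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def coarse_H(samples, psi2, edges):
    # Hbar = sum over cells of width * rho_bar * ln(rho_bar / psi2_bar), cf. (31).
    counts, _ = np.histogram(samples, bins=edges)
    width = np.diff(edges)
    rho_bar = counts / (counts.sum() * width)      # coarse-grained ensemble density
    q_mid = 0.5 * (edges[:-1] + edges[1:])
    psi2_bar = psi2(q_mid)                         # midpoint approximation to the cell average
    mask = rho_bar > 0
    return np.sum(width[mask] * rho_bar[mask] * np.log(rho_bar[mask] / psi2_bar[mask]))

psi2 = lambda q: np.exp(-q**2) / np.sqrt(np.pi)    # toy equilibrium density |psi|^2
edges = np.linspace(-5.0, 5.0, 51)

noneq = rng.normal(0.0, 0.3, 100_000)              # nonequilibrium: narrower than |psi|^2
eq = rng.normal(0.0, np.sqrt(0.5), 100_000)        # equilibrium: samples drawn from |psi|^2
print("Hbar (nonequilibrium):", coarse_H(noneq, psi2, edges))
print("Hbar (equilibrium):   ", coarse_H(eq, psi2, edges))

The nonequilibrium ensemble gives H̄ > 0, while the equilibrium ensemble gives H̄ ≈ 0 up to sampling and discretisation noise, illustrating the bound H̄ ≥ 0.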
Trapped forever in quantum death?
According to pilot-wave theory, at least when correctly interpreted, we are currently trapped in a state of 'quantum death' that is broadly analogous to the state of classical thermodynamic 'heat death' which was much discussed in the nineteenth century as a seemingly inevitable future end state of our world (in which all systems have reached the same temperature and it is no longer possible to convert heat into work) [14, 15, 16, 17]. According to pilot-wave theory, a subquantum analogue of the classical heat death has already happened (and a long time ago). In this state, all systems are subject to the same quantum noise, as described by the Born rule, just as, in the classical heat death, all systems are subject to the same thermal noise (at the same global temperature). In the state of quantum death, the uncertainty principle prevents us from observing and controlling the underlying details of de Broglie-Bohm trajectories. As a consequence, we are unable to control the underlying nonlocal dynamics of entangled systems, and in particular we are unable to employ entanglement for nonlocal signalling. But these limitations are not fundamental; they are merely peculiarities of the state of quantum death (just as the inability to convert heat into work is a peculiarity of the state of heat death). Locality emerges at the statistical level only if the Born rule holds exactly. As has been shown explicitly, nonequilibrium entangled systems generally allow instantaneous signalling [15, 19, 20], which can be understood as defining a preferred foliation of spacetime with a global time parameter t [81].
Here we are interested specifically in the status of the Born rule. Clearly, in pilot-wave theory, the Born rule is not a law of nature, but holds only because we are confined to a certain state of statistical equilibrium. Moreover we seem to be forever trapped in this state, which appears to be stable. The equations of motion of pilot-wave theory guarantee that once quantum equilibrium is reached there is no way out (barring extremely rare fluctuations [16], analogous to putting a kettle of water on ice and waiting for the water to boil). There is one caveat, however. Our discussion applies to quantum systems on a classical spacetime background. What happens if spacetime itself is quantised? To answer that, we must turn to the pilot-wave theory of gravity.
Pilot-wave gravitation
The pilot-wave theory of gravity appears to have been first written down and studied for a mini-superspace model by Vink [54] and in general terms by Horiguchi [55]. It has since been developed, and extensively applied to cosmology, in particular by Pinto-Neto and collaborators [57,58,59].
Beginning for simplicity with the case of pure gravitation, the Wheeler-DeWitt equation (6) for the wave functional Ψ[g ij ] is supplemented by a de Broglie guidance equation for the time evolution g ij (t) of the 3-metric,
\[ \frac{\partial g_{ij}}{\partial t} = 2N G_{ijkl}\,\frac{\delta S}{\delta g_{kl}} , \tag{34} \]
where S is the phase of Ψ and for simplicity we have set N_i = 0.¹⁰ The equation of motion (34) can be justified in two ways. We might simply identify the classical canonical momentum density p^{ij} (conjugate to g_ij) with the phase gradient δS/δg_ij and then use the well-known classical relation between p^{ij} and ġ_ij to yield (34). Alternatively, (34) can be justified as the natural velocity field appearing in the equation
\[ \frac{\delta}{\delta g_{ij}}\!\left( |\Psi|^2\, G_{ijkl}\,\frac{\delta S}{\delta g_{kl}} \right) = 0 , \tag{35} \]
which follows from the Wheeler-DeWitt equation (6) (with an appropriate choice of operator ordering,¹¹ inserting Ψ = |Ψ| e^{iS} and taking the imaginary part), and which can be rewritten as
\[ \frac{\delta}{\delta g_{ij}}\!\left( |\Psi|^2\,\frac{\partial g_{ij}}{\partial t} \right) = 0 \tag{36} \]
with ġ_ij given by (34). Equations (34) and (6), together with the constraint (8), are taken to define the dynamics of a single system. These equations are readily extended to include a matter field φ. The Wheeler-DeWitt equation (10) for Ψ[g_ij, φ] then includes a matter term (12) and Ψ is subject to the constraint (13). We still have the same guidance equation (34) for g_ij and in addition a guidance equation
\[ \frac{\partial\phi}{\partial t} = \frac{N}{\sqrt{g}}\,\frac{\delta S}{\delta\phi} \tag{37} \]
for φ (again taking N_i = 0).¹² As before, the guidance equations can be justified by identifying the classical canonical momenta with a phase gradient, or by identifying the natural velocity fields appearing in the equation
\[ \frac{\delta}{\delta g_{ij}}\!\left( |\Psi|^2\,\frac{\partial g_{ij}}{\partial t} \right) + \frac{\delta}{\delta\phi}\!\left( |\Psi|^2\,\frac{\partial\phi}{\partial t} \right) = 0 \tag{38} \]
(which now follows from the extended Wheeler-DeWitt equation (10)). Before proceeding we should point out that, in the above dynamics, the status of the spacetime foliation is perhaps not fully understood. The arbitrary functions N, N_i should not affect the 4-geometry that is traced out by the evolving 3-metric (for given initial conditions). Shtanov [56] argued that the 4-geometry depends on N, suggesting that the theory breaks foliation invariance. In that case one might include a specific choice for N as part of the theory. On the other hand, work by Pinto-Neto and Santini [82] suggests that the 4-geometry is in fact independent of N, N_i. By writing the dynamics as a Hamiltonian system, it is argued that the time evolution of an initial 3-geometry yields the same 4-geometry for all N, N_i, with the caveat that the 4-geometry is non-Lorentzian. Local Lorentz invariance is broken (as expected in a nonlocal theory). If this argument is correct it seems to imply that, for a given solution Ψ and for a given initial 3-geometry, the resulting spacetime has an effective preferred foliation. Intuitively, this seems consistent with the first-order (or 'Aristotelian') structure of pilot-wave dynamics [83]. And, as already noted, for nonequilibrium ensembles of entangled systems we obtain statistical nonlocal signals [15, 19, 20], which arguably also define a preferred foliation of spacetime [81]. It would be of interest to study these matters in more detail.
The above dynamics has been applied extensively by numerous authors, in particular to quantum cosmology. Such applications have focussed on properties of the trajectories (such as singularity avoidance) without attempting to construct a theory of a quantum equilibrium ensemble [57,58,59]. In fact previous workers have avoided discussing ensembles, owing to the pathological (non-normalisable) nature of the density |Ψ| 2 . By a curious twist, we then find ourselves in a position opposite to that of textbook quantum mechanics: we have a theory of single systems with trajectories, but no theory of ensembles or of probabilities. It has been suggested that this is understandable because (as argued by the Bohmian mechanics school [11,12]) the notion of probability is (supposedly) meaningless for a single universe [59]. And yet theoretical cosmologists routinely discuss probabilities for primordial cosmological perturbations, and observational cosmologists employ measurements of the CMB to constrain the primordial power spectrum. In practice, by assuming statistical isotropy and statistical homogeneity for a theoretical ensemble, we can and do discuss probabilities for our universe and constrain them by observation [24]. How, then, can we proceed with a theory of probability in pilot-wave gravitation?
Gravity without the Born rule
Our suggested answer is to accept that at the fundamental level there is no such thing as the Born rule [6]. An arbitrary theoretical ensemble with the same Wheeler-DeWitt wave functional Ψ[g_ij, φ] will have an arbitrary initial probability distribution P[g_ij, φ, t_i] at time t_i. By definition P[g_ij, φ, t_i] will be normalisable and so cannot be equal to |Ψ[g_ij, φ]|² under any circumstances. The initial distribution P[g_ij, φ, t_i] is a contingency, unconstrained by any law but which can, at least in principle, be constrained by empirical observation. Furthermore, we can straightforwardly study its time evolution. Each element of the ensemble evolves by the de Broglie velocity field (34) and (37), and so P[g_ij, φ, t] necessarily evolves by the continuity equation¹³
\[ \frac{\partial P}{\partial t} + \int d^3x\; \frac{\delta}{\delta g_{ij}}\!\left( P\,\frac{\partial g_{ij}}{\partial t} \right) + \int d^3x\; \frac{\delta}{\delta\phi}\!\left( P\,\frac{\partial\phi}{\partial t} \right) = 0 . \tag{39} \]
We then have a theory for a general ensemble of gravitational systems evolving in time. One of our key claims is that this theory has no Born-rule equilibrium state [6,61]. At the deepest level of gravitational physics, the universe is in a perpetual state of quantum nonequilibrium
\[ P[g_{ij}, \phi, t] \neq |\Psi[g_{ij}, \phi]|^2 . \tag{40} \]
In Section 6 we shall illustrate these ideas with our simple model of quantum cosmology.
Naive Schrödinger interpretation. II
Some workers instead interpret |Ψ|² as a probability density (as first suggested by Vink [54] and recently advocated by Dürr and Struyve [60]). This amounts to applying the naive Schrödinger interpretation to pilot-wave gravitation. However, while the presence of trajectories adds a new element, the interpretation remains unworkable because |Ψ|² is non-normalisable.
It might be thought that equation (38) can be employed to motivate |Ψ|² as an equilibrium probability density. But (38) is not a continuity equation; it is an infinity of equations (one per spatial point x). Following Dürr and Struyve [60], if we integrate (38) over x we can write down what we call a 'pseudo-continuity equation'¹⁴
\[ \frac{\partial|\Psi|^2}{\partial t} + \int d^3x\; \frac{\delta}{\delta g_{ij}}\!\left( |\Psi|^2\,\frac{\partial g_{ij}}{\partial t} \right) + \int d^3x\; \frac{\delta}{\delta\phi}\!\left( |\Psi|^2\,\frac{\partial\phi}{\partial t} \right) = 0 \tag{41} \]
(where for completeness we have inserted the vanishing term ∂ |Ψ| 2 /∂t = 0). This is formally the same as the physical continuity equation (39) for P . We might then 'deduce' that P = |Ψ| 2 is an equilibrium state, as usually done for non-gravitational systems. On this basis Dürr and Struyve claim that |Ψ| 2 can be employed as a quantum equilibrium measure of 'typicality' for initial configurations of the universe (from which follows the Born rule for subsystems, along lines already advocated by the Bohmian mechanics school). But this novel application of the naive Schrödinger interpretation again founders on the fact that |Ψ| 2 is not normalisable and cannot define a physical probability (or typicality) measure.
We should be wary of artificial attempts to make |Ψ| 2 appear like a conventional density. In our view, to interpret |Ψ| 2 as a Born-rule measure is a category mistake. At the fundamental level there is no Born rule. As we will see the usual Born-rule measure emerges only in the Schrödinger approximation (on a classical spacetime background).
Pilot-wave cosmology
Recall our quantum-cosmological model with degrees of freedom (a, φ) and wave function Ψ(a, φ) satisfying the Wheeler-DeWitt equation (18). Inserting Ψ = |Ψ| e iS and taking the imaginary part yields what we call a 'pseudo-continuity equation'
\[ \frac{\partial}{\partial a}\!\left( a^2 |\Psi|^2\, \dot{a} \right) + \frac{\partial}{\partial\phi}\!\left( a^2 |\Psi|^2\, \dot{\phi} \right) = 0 \tag{42} \]
for a density a²|Ψ|² and with a velocity field
\[ \dot{a} = -\frac{1}{m_P^2}\,\frac{1}{a}\,\frac{\partial S}{\partial a} , \qquad \dot{\phi} = \frac{1}{a^3}\,\frac{\partial S}{\partial\phi} . \tag{43} \]
We can identify (43) as the natural de Broglie guidance equations for this system.¹⁵ A general theoretical ensemble of systems with the same wave function Ψ will have a probability distribution P(a, φ, t) (with density defined with respect to da dφ). Since each element of the ensemble evolves according to the velocity field (43), the distribution P(a, φ, t) necessarily evolves according to the continuity equation
\[ \frac{\partial P}{\partial t} + \frac{\partial}{\partial a}\!\left( P\dot{a} \right) + \frac{\partial}{\partial\phi}\!\left( P\dot{\phi} \right) = 0 \tag{44} \]
(with ȧ, φ̇ given by (43)). We then have a theory for a general ensemble of cosmological systems evolving in time.
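The sketch below (our own toy, not from the text) shows how the guidance equations (43) generate a trajectory (a(t), φ(t)) once the phase S of Ψ is specified. The quadratic phase gradients used here are arbitrary stand-ins and are not derived from a solution of the Wheeler-DeWitt equation (18); the point is only the mechanics of integrating (43). Units with m_P = 1.

import numpy as np

m_P = 1.0

def dS_da(a, phi):
    return -2.0 * a + 0.5 * phi      # stand-in for dS/da (hypothetical phase)
def dS_dphi(a, phi):
    return 0.5 * a                   # stand-in for dS/dphi (hypothetical phase)

a, phi, dt = 1.0, 0.2, 1e-4
for _ in range(100_000):             # Euler integration of (43) up to t = 10
    a_dot = -(1.0 / m_P**2) * (1.0 / a) * dS_da(a, phi)
    phi_dot = (1.0 / a**3) * dS_dphi(a, phi)
    a, phi = a + a_dot * dt, phi + phi_dot * dt
print("a(T), phi(T) =", a, phi)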
Failure of the Born rule
At this point we must be careful. In standard quantum theory equation (42) would be regarded as a continuity equation for a Born-rule-like probability density a²|Ψ|². The unusual factor a² arises simply from the structure of our minisuperspace and, if preferred, could be eliminated by defining a density with respect to a² da dφ instead of da dφ. In any case, equation (42) seemingly suggests that the quantum probability to find the system in a range da dφ is given by a²|Ψ|² da dφ. This is the naive Schrödinger interpretation in a quantum-cosmological setting. As we saw in Section 3.2, this interpretation fails not only because the putative probability density a²|Ψ|² has no explicit time dependence but also because it is non-normalisable.
In pilot-wave theory it might be argued that, because the respective evolution equations (44) and (42) for P and a²|Ψ|² are identical (noting that ∂(a²|Ψ|²)/∂t = 0), if P = a²|Ψ|² initially then P = a²|Ψ|² at later times, and we may identify this as a state of quantum equilibrium. This is again the naive Schrödinger interpretation applied to pilot-wave theory (Section 5.2), and again it is as untenable in pilot-wave theory as it is in standard quantum theory. For a general solution Ψ of the wave equation (18) we inevitably have
\[ \int\! da\, d\phi\; a^2 |\Psi(a, \phi)|^2 = \infty . \tag{45} \]
The equality P = a 2 |Ψ| 2 is mathematically nonsensical since the left-hand side is (by definition) normalisable while the right-hand side is not.
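A quick numerical illustration (our own construction) of the divergence (45): for any wave function whose amplitude does not fall off at large a, as for oscillatory WKB-type solutions, the would-be normalisation integral grows without bound as the cutoff in a is increased. The separable toy density below (pure phase in a, Gaussian in φ) is an arbitrary stand-in.

import numpy as np

# phi integral of the Gaussian amplitude factor (finite, ~ sqrt(pi))
phi = np.linspace(-10.0, 10.0, 2001)
I_phi = np.sum(np.exp(-phi**2)) * (phi[1] - phi[0])

for a_max in (10.0, 100.0, 1000.0):
    a = np.linspace(0.0, a_max, 100_001)
    I_a = np.sum(a**2) * (a[1] - a[0])       # grows like a_max^3 / 3
    print(f"cutoff a_max = {a_max:7.1f}   integral ~ {I_a * I_phi:.3e}")

The integral grows like the cube of the cutoff: there is no normalisable density a²|Ψ|².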
To understand this theory we need to accept that in the deep quantum-gravity regime there is no Born-rule-like equilibrium state. There simply is no Born rule and no state of quantum equilibrium. In the context of our quantum-cosmological model, we must have
\[ P \neq a^2 |\Psi|^2 \tag{46} \]
always (both initially and at later times).
Impossibility of quantum relaxation
In terms of quantum relaxation, for this system the coarse-grained H-function
\[ \bar{H}(t) = \int\! da\, d\phi\; \bar{P}\,\ln\!\left( \bar{P} / \overline{a^2|\Psi|^2} \right) \tag{47} \]
(minus the relative entropy of P̄ with respect to \overline{a²|Ψ|²}) still obeys a coarse-graining H-theorem (33) but now has no lower bound. If we had
\[ \int\! da\, d\phi\; a^2 |\Psi|^2 = N \tag{48} \]
for some finite N, it is easy to show that (47) would be bounded below by −ln N, with the lower bound attained if and only if P̄ = (1/N)\,\overline{a²|Ψ|²} [6]. For N → ∞ there is no lower bound. The function H̄(t) can decrease indefinitely without ever reaching a minimum. In this sense P̄ is always infinitely far away from the putative 'equilibrium' state a²|Ψ|². Limited local relaxation might take place in some regions of configuration space, but to attain global equilibrium is mathematically impossible. Similar reasoning applies to the full gravitational theory [6].
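For completeness, here is a short derivation of the lower bound just quoted (our own wording, using only the normalisation (48) and the non-negativity of relative entropy). Write w = a²|Ψ|²/N, so that ∫ da dφ w̄ = 1. Then
\begin{align*}
\bar{H} &= \int\! da\, d\phi\; \bar{P}\,\ln\!\left( \frac{\bar{P}}{\overline{a^2|\Psi|^2}} \right)
 = \int\! da\, d\phi\; \bar{P}\,\ln\!\left( \frac{\bar{P}}{\bar{w}} \right) - \ln N
 \;\ge\; -\ln N ,
\end{align*}
since ∫ P̄ ln(P̄/w̄) is the relative entropy of P̄ with respect to w̄, which is non-negative by Gibbs' inequality and vanishes if and only if P̄ = w̄ = (1/N)\,\overline{a²|Ψ|²}. For N → ∞ the bound −ln N recedes to −∞, as stated above.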
Quantum relaxation in the Schrödinger approximation
We have argued that, in the deep quantum-gravity regime, quantum relaxation cannot take place because there is no physical equilibrium state to relax to. In effect a quantum-gravitational system is perpetually in nonequilibrium. How are we then to understand the ubiquity of the Born rule in our world today? Our proposed answer is that quantum relaxation can take place in the semiclassical regime, where the system propagates on an approximately classical spacetime background. In this 'Schrödinger approximation' we have an effective time-dependent Schrödinger equation (24) for a conventional wave function ψ(q, t), where q might represent for example a field configuration φ on a background classical curved space and t is the time function associated with a preferred foliation. Since ψ is now normalisable, |ψ| 2 can correspond to a physical probability distribution, which is attainable after appropriate relaxation.
To see how this works, we need to outline how the Schrödinger approximation is derived from the underlying quantum-gravitational theory (details are given in Section 9.1). Consider again the deep quantum-gravity regime with a matter field φ and a Wheeler-DeWitt wave functional Ψ[g ij , φ]. We obtain an effective time-dependent wave function ψ, on an approximately classical spacetime background, when the solution Ψ[g ij , φ] of the Wheeler-DeWitt equation takes the approximate form
\[ \Psi[g_{ij}, \phi] \approx \Psi_{\mathrm{WKB}}[g_{ij}]\,\psi[\phi, g_{ij}] , \tag{49} \]
where Ψ WKB [g ij ] is a WKB wave functional for the 3-metric. The phase S WKB = Im ln Ψ WKB satisfies a classical Hamilton-Jacobi equation and generates classical trajectories g ij = g ij (t) for the background. If we evaluate ψ[φ, g ij ] along a specific trajectory g ij (t), we can define an effective time-dependent wave functional
\[ \psi_{\mathrm{eff}}[\phi, t] = \psi[\phi, g_{ij}(t)] \tag{50} \]
for the matter field φ, with a time derivative
\[ \frac{\partial}{\partial t} = \int d^3x\; \dot{g}_{ij}\,\frac{\delta}{\delta g_{ij}} . \tag{51} \]
It can then be shown that ψ eff satisfies an approximate time-dependent Schrödinger equation
\[ i\,\frac{\partial\psi_{\mathrm{eff}}}{\partial t} = \hat{H}_{\mathrm{eff}}\,\psi_{\mathrm{eff}} \tag{52} \]
for the field φ on the classical background, whereĤ eff is an effective Hamiltonian (which of course depends on the background). This method of deriving the Schrödinger approximation has a long history. The WKB trajectories for the classical background allow us to define an effective time parameter t, which historically has often been called 'WKB time' [84]. The origin of such trajectories is unclear in standard quantum mechanics, where they are really being inserted by hand. In pilot-wave theory, in contrast, the WKB trajectories are simply de Broglie-Bohm trajectories evaluated in the WKB approximation, and so the above construction is conceptually clear.
We can now return to the question of quantum relaxation and the Born rule. Once the very early universe enters the semiclassical or Schrödinger regime, fields and particles propagating on the (approximate) classical background will satisfy a time-dependent Schrödinger equation of the form (52), with a conventional and normalisable wave function ψ_eff. As we will see in Section 9.1, the de Broglie guidance equation also takes the standard form (in terms of ψ_eff). We then find ourselves in the domain which has already been much studied in pilot-wave theory, as briefly summarised in Section 4.1. If at the beginning of the Schrödinger regime we have a nonequilibrium probability distribution ρ ≠ |ψ_eff|², then in appropriate circumstances quantum relaxation will ensure that ρ → |ψ_eff|² on a coarse-grained level, at least to a good approximation and in particular for short-wavelength (sub-Hubble) field modes. In this way, despite the complete absence of a Born rule in the deep quantum-gravity regime, we can nevertheless understand the emergence of the Born rule in the semiclassical or Schrödinger approximation, at scales relevant to laboratory physics, after appropriate quantum relaxation. We have said that, once we have an approximate time-dependent Schrödinger equation, conventional quantum relaxation can take place. But is there any reason to expect nonequilibrium ρ ≠ |ψ_eff|² at the start of the semiclassical regime? Indeed there is. Fundamentally we have a perpetual nonequilibrium ensemble with distribution P[g_ij, φ, t] ≠ |Ψ[g_ij, φ]|². As we enter the semiclassical regime, say at some 'initial' time t_i (approximately marking the beginning of that regime), the field φ will have a conditional probability density
\[ \rho[\phi, t_i] = \frac{P[g_{ij}, \phi, t_i]}{\int P[g_{ij}, \phi, t_i]\, D\phi} , \tag{53} \]
where on the right-hand side it is understood that we have inserted the actual value of the classical background 3-metric g ij at time t i . Because here
\[ P[g_{ij}, \phi, t_i] \neq |\Psi[g_{ij}, \phi]|^2 \approx |\Psi_{\mathrm{WKB}}[g_{ij}]|^2\, |\psi[\phi, g_{ij}]|^2 = |\Psi_{\mathrm{WKB}}[g_{ij}]|^2\, |\psi_{\mathrm{eff}}[\phi, t_i]|^2 , \tag{54} \]
it follows that
\[ \rho[\phi, t_i] \neq |\psi_{\mathrm{eff}}[\phi, t_i]|^2 \tag{55} \]
(unless it so happens that P[g_ij, φ, t_i] = Π[g_ij] |ψ_eff[φ, t_i]|² for some Π[g_ij]).
. We then expect to find quantum nonequilibrium at the start of the semiclassical or Schrödinger regime, with the Born rule emerging only later after an appropriate period of quantum relaxation.
Instability of the Born rule in quantum gravity
We have outlined how the time-dependent Schrödinger equation (52) for the effective wave function ψ_eff emerges in the semiclassical approximation. We then have a normalisable wave function and quantum relaxation to equilibrium ρ → |ψ_eff|² can proceed in the usual way. The Schrödinger equation (52) is, however, subject to small quantum-gravitational corrections to the effective Hamiltonian Ĥ_eff. Remarkably, some of the correction terms are non-Hermitian [62, 63, 64, 65]. These terms have no consistent interpretation in standard quantum mechanics as they violate the conservation of probability. In pilot-wave theory, in contrast, probability is by construction conserved and (as we shall see) the non-Hermitian terms simply render the Born rule unstable.
The derivation of the correction terms will be presented in the next section. Here we first show how pilot-wave theory is able to accommodate such terms consistently.
The corrections are calculated by performing a 'semiclassical expansion' of the Wheeler-DeWitt equation (Section 9). Dropping for simplicity the subscript 'eff', we find an effective Hamiltonian of the form
\[ \hat{H} = \hat{H}_\phi + \hat{H}_a + i\hat{H}_b , \tag{56} \]
where Ĥ_φ is the usual field Hamiltonian and the Hermitian operators Ĥ_a, Ĥ_b represent tiny quantum-gravitational corrections. There is a Hermitian correction Ĥ_a and a non-Hermitian correction iĤ_b. Writing Ĥ₁ = Ĥ_φ + Ĥ_a and Ĥ₂ = Ĥ_b, the effective Schrödinger equation for ψ[φ, t] takes the form
\[ i\,\frac{\partial\psi}{\partial t} = (\hat{H}_1 + i\hat{H}_2)\,\psi . \tag{57} \]
Applying the same semiclassical expansion to the de Broglie guidance equation, we find an effective guidance equation of the form
\[ \frac{\partial\phi}{\partial t} = \frac{j_1}{|\psi|^2} , \tag{58} \]
where j₁ is the usual current associated with the Hermitian part Ĥ₁ only. Thus, while the Schrödinger equation (57) has a non-Hermitian correction iĤ₂, this does not affect the guidance equation (58). As we show in the next section, these results follow directly and without ambiguity from the underlying quantum-gravitational equations in a semiclassical expansion. To see the consequences note that (57) implies a continuity equation (writing j₁ = |ψ|² φ̇)
\[ \frac{\partial|\psi|^2}{\partial t} + \partial_\phi\cdot\!\left( |\psi|^2\,\dot{\phi} \right) = s , \tag{59} \]
where ∂_φ · (...) = ∫ d³x δ/δφ(x) (...) is a divergence in field configuration space and
\[ s = 2\,\mathrm{Re}\left( \psi^*\,\hat{H}_2\,\psi \right) . \tag{60} \]
For an ensemble of systems with the same wave function ψ, each element of the ensemble evolves by the de Broglie velocity field (58). The probability density ρ[φ, t] then evolves by the usual continuity equation
\[ \frac{\partial\rho}{\partial t} + \partial_\phi\cdot(\rho\dot{\phi}) = 0 . \tag{61} \]
For s ≠ 0 there is a mismatch between equations (59) and (61). It follows that an initial distribution ρ = |ψ|² can evolve into a final distribution ρ ≠ |ψ|². The Born rule is unstable. This can be quantified in terms of the ratio f = ρ/|ψ|², which is no longer conserved along trajectories. From (59) and (61) we find
\[ \frac{df}{dt} = -\frac{s f}{|\psi|^2} \tag{62} \]
(where d/dt = ∂/∂t + ∫ d³x φ̇(x) δ/δφ(x) is the time derivative along a trajectory in field configuration space). It is also worth noting that the squared norm ∫ Dφ |ψ|² of ψ changes with time,
\[ \frac{d}{dt}\int D\phi\; |\psi|^2 = \int D\phi\; s = 2\langle\hat{H}_2\rangle . \tag{63} \]
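The steps from (59) and (61) to (62) and (63) are short; we spell them out here in our own wording. Along a trajectory, d/dt = ∂/∂t + ∫ d³x φ̇ δ/δφ(x), so (61) gives dρ/dt = −ρ ∂_φ·φ̇, while (59) gives d|ψ|²/dt = −|ψ|² ∂_φ·φ̇ + s. Hence, for f = ρ/|ψ|²,
\begin{equation*}
\frac{df}{dt}
 = \frac{1}{|\psi|^2}\frac{d\rho}{dt} - \frac{\rho}{|\psi|^4}\frac{d|\psi|^2}{dt}
 = -\frac{\rho}{|\psi|^4}\,s
 = -\frac{sf}{|\psi|^2} ,
\end{equation*}
the divergence terms cancelling, which is (62). Integrating (59) over configuration space, the divergence term integrates to zero and we recover (63).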
We see from the above equations that the usual Born-rule equilibrium state ρ = |ψ|² is unstable. As a result of quantum-gravitational corrections, nonequilibrium ρ ≠ |ψ|² is created on a timescale τ_noneq which can be estimated from the rate of change of the (fine-grained) H-function H(t) = ∫ Dφ ρ ln(ρ/|ψ|²). From (59) and (61) we find¹⁶
\[ \frac{dH}{dt} = -\int D\phi\; \frac{\rho}{|\psi|^2}\, s . \tag{64} \]
Close to equilibrium (ρ ≈ |ψ| 2 ) we have
\[ \frac{dH}{dt} \approx -\int D\phi\; s = -2\langle\hat{H}_2\rangle . \tag{65} \]
Defining τ_noneq as the timescale over which H changes by a factor of order unity, we have the estimate
\[ \tau_{\mathrm{noneq}} \approx \frac{1}{2\langle\hat{H}_2\rangle} . \tag{66} \]
Note however that, for such effects to build up over time, nonequilibrium must be created faster than relaxation can remove it, which requires conditions where
\[ \tau_{\mathrm{relax}} > \tau_{\mathrm{noneq}} . \tag{67} \]
Some quantum-gravity theorists have long been puzzled by the non-Hermitian terms iĤ₂ (first found in 1991 and re-derived in more recent papers), which signal a violation of unitarity (the usual norm of ψ is not conserved) [62, 63, 64, 65]. Because the non-Hermitian terms are inconsistent with the standard interpretation of |ψ|² as a probability density, they are often regarded as an artifact to be ignored by fiat. Some authors have advocated formally eliminating these terms by appropriate redefinitions of the wave function [64, 85]. We suggest that such redefinitions may turn out to be an artificial means of disguising genuine physical effects. Our experience with quantum systems on a classical spacetime background teaches us that the Hamiltonian must be Hermitian, but that experience is limited to conditions where quantum-gravitational effects are negligible. As we have seen, non-Hermitian terms are perfectly consistent with pilot-wave theory, according to which they simply generate a gravitational instability of the Born rule: an initial density ρ = |ψ|² can evolve into a final density ρ ≠ |ψ|². The derivation of the non-Hermitian terms will now be discussed in more detail.
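The norm drift described above is easy to see in a toy model. The sketch below (our own two-level illustration, with arbitrary Hermitian matrices Ĥ₁, Ĥ₂ and an arbitrary small ε) evolves ψ exactly under Ĥ = Ĥ₁ + iεĤ₂ and prints the squared norm, which changes in time in accordance with (63).

import numpy as np

H1 = np.array([[1.0, 0.3], [0.3, -1.0]])     # Hermitian part (arbitrary choice)
H2 = np.array([[0.2, 0.0], [0.0, -0.1]])     # Hermitian; i*eps*H2 is the non-Hermitian term
eps = 0.01
H = H1 + 1j * eps * H2

# Exact evolution psi(t) = V exp(-i * diag(lam) * t) V^{-1} psi(0)
lam, V = np.linalg.eig(H)
Vinv = np.linalg.inv(V)
psi0 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)

for t in (0.0, 5.0, 10.0):
    psi_t = V @ (np.exp(-1j * lam * t) * (Vinv @ psi0))
    print(f"t = {t:4.1f}   squared norm = {np.vdot(psi_t, psi_t).real:.6f}")

With ε = 0 the eigenvalues are real and the norm is exactly conserved; switching on iεĤ₂ gives the eigenvalues small imaginary parts and the norm drifts away from 1.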
Semiclassical expansion
Quantum-gravitational corrections to the Schrödinger equation (52) were derived by Kiefer and Singh [62] from a semiclassical expansion
\[ \Psi = \exp\!\left[ i\left( \mu S_0 + S_1 + \mu^{-1} S_2 + ... \right) \right] \tag{68} \]
of the extended Wheeler-DeWitt equation (10) for Ψ[g_ij, φ], where µ = c²/32πG (with dimensions of mass per length). Inserting (68) into the left-hand side of (10), terms of the same order in µ are collected and their sum set to zero. The orders that appear are µ², µ, µ⁰, µ⁻¹, ... . To a first approximation we obtain the usual Schrödinger equation (52) for a field φ on a classical background spacetime. We then obtain gravitational corrections to (52), in the form of (very small) Hermitian and non-Hermitian terms in the Hamiltonian. The results found by Kiefer and Singh are summarised below. In pilot-wave theory we must also consider how the semiclassical expansion affects the de Broglie guidance equation (37) for φ. As we will see, the guidance equation retains its standard form. Thus, if the semiclassical expansion is to be trusted, it follows from the fundamental equations of quantum gravity that the emergent Born rule is unstable.
Lowest-order Schrödinger approximation
At order µ 2 the expansion (68) yields (δS 0 /δφ) 2 = 0 so that S 0 = S 0 [g ij ] depends only on g ij , while at order µ it is found that S 0 satisfies a classical Hamilton-Jacobi equation whose solution defines a classical background spacetime. The trajectories of the classical background can be used to define an effective time parameter t (cf. equation (51)).
Order µ 0 yields an equation for S 1 , which can be written as an effective timedependent Schrödinger equation for a zeroth-order (uncorrected) wave functional ψ (0) [φ, t] on a classical background with metric g ij . Defining
\[ \psi^{(0)} = D\,\exp(iS_1) \tag{69} \]
for an appropriate functional D[g ij ], it can be shown that
\[ i\,\frac{\partial\psi^{(0)}}{\partial t} = \int d^3x\; N\hat{\mathcal{H}}_\phi\, \psi^{(0)} \tag{70} \]
where the Hamiltonian density \hat{\mathcal{H}}_φ is given by (12). This is the standard Schrödinger equation for a massless (minimally-coupled) real scalar field φ with potential V(φ) on a classical spacetime background.
As expected, to this order the Wheeler-DeWitt wave functional Ψ[g_ij, φ] takes the WKB form (49), with Ψ_WKB[g_ij] = (1/D) exp(iµS₀) and ψ[φ, g_ij] = ψ⁽⁰⁾[φ, t].
In pilot-wave theory we must also consider the de Broglie guidance equation (37) for φ. To this order, how is the field velocity φ̇ related to the effective wave functional ψ⁽⁰⁾[φ, t]? We can find out by inserting the expansion (68) into (37) (where S = Im ln Ψ), yielding
\[ \frac{\partial\phi}{\partial t} = \frac{N}{\sqrt{g}}\,\frac{\delta}{\delta\phi}\left( \mathrm{Re}\, S_1 + \mu^{-1}\,\mathrm{Re}\, S_2 + ... \right) . \tag{71} \]
The factor D in (69) can be chosen to be real, so that Re S₁ is equal to the phase of ψ⁽⁰⁾. To lowest order we then have a de Broglie velocity
\[ \left( \frac{\partial\phi}{\partial t} \right)^{(0)} = \frac{N}{\sqrt{g}}\,\frac{\delta S^{(0)}}{\delta\phi} , \tag{72} \]
where S⁽⁰⁾ = Im ln ψ⁽⁰⁾ is the phase of ψ⁽⁰⁾.¹⁷ This is the standard de Broglie guidance equation for a field φ with wave functional ψ⁽⁰⁾[φ, t] [86].
Thus, in this approximation we recover the usual pilot-wave dynamics of a field on a classical spacetime background, and so we can expect quantum relaxation to the Born rule to occur in the usual way.
Gravitational corrections
Following ref. [62] we now consider higher orders in the semiclassical expansion (68). At order µ −1 Kiefer and Singh obtain an equation for S 2 . Writing S 2 = σ 2 [g ij ] + η[φ, g ij ] for appropriately chosen σ 2 , the corrected matter wave functional
\[ \psi^{(1)} = \psi^{(0)}\,\exp(i\eta/\mu) \tag{73} \]
is found to satisfy a corrected Schrödinger equation
\[ i\,\frac{\partial\psi^{(1)}}{\partial t} = \int d^3x\; N\left( \hat{\mathcal{H}}_\phi + \hat{\mathcal{H}}_a + i\hat{\mathcal{H}}_b \right) \psi^{(1)} , \tag{74} \]
where
\[ \hat{\mathcal{H}}_a = \frac{1}{8\mu}\,\frac{1}{\sqrt{g}\,R}\,\hat{\mathcal{H}}_\phi^2 \tag{75} \]
and
\[ \hat{\mathcal{H}}_b = \frac{1}{8\mu}\,\frac{\delta}{\delta\tau}\!\left( \frac{\hat{\mathcal{H}}_\phi}{\sqrt{g}\,R} \right) \tag{76} \]
are both Hermitian (employing the convenient shorthand δ/δτ = ġ_ij δ/δg_ij, with ġ_ij = 2N G_ijkl δS₀/δg_kl, to denote a 'many-fingered time derivative' on the background). To this order we have a total effective Hamiltonian of the form (56) with
\[ \hat{H}_\phi = \int d^3x\; N\hat{\mathcal{H}}_\phi , \qquad \hat{H}_a = \int d^3x\; N\hat{\mathcal{H}}_a , \qquad \hat{H}_b = \int d^3x\; N\hat{\mathcal{H}}_b . \tag{77} \]
As noted we have Hermitian and non-Hermitian corrections Ĥ_a and iĤ_b respectively.¹⁸ We can now consider the next order in the semiclassical expansion (68) of the de Broglie guidance equation (37). Because the term σ₂[g_ij] in S₂ is independent of φ, the de Broglie velocity (71) takes the form
\[ \frac{\partial\phi}{\partial t} = \frac{N}{\sqrt{g}}\,\frac{\delta}{\delta\phi}\left( \mathrm{Re}\, S_1 + \mu^{-1}\,\mathrm{Re}\,\eta + ... \right) . \tag{78} \]
The corrected wave functional (73) has a total phase
\[ \mathrm{Im}\,\ln\psi^{(1)} = \mathrm{Re}\, S_1 + \mu^{-1}\,\mathrm{Re}\,\eta , \tag{79} \]
and so the corrected de Broglie velocity (78) can once again be written in the standard form,
\[ \left( \frac{\partial\phi}{\partial t} \right)^{(1)} = \frac{N}{\sqrt{g}}\,\frac{\delta S^{(1)}}{\delta\phi} , \tag{80} \]
where now S⁽¹⁾ = Im ln ψ⁽¹⁾ is the phase of ψ⁽¹⁾. To conclude, despite the non-Hermitian term in the corrected Schrödinger equation (74), the de Broglie velocity (80) continues to take the standard form (now in terms of ψ⁽¹⁾). In other words, the expression for the velocity remains that associated with the original (uncorrected) Hermitian part Ĥ_φ of the Hamiltonian. The non-Hermitian term affects the time evolution of ψ⁽¹⁾, and so indirectly affects the trajectories, but does not change the form of the guidance equation itself. As we have seen, this implies that the Born rule for φ is unstable.
More recently, Brizuela, Kiefer and Krämer [65] have derived similar results for a minisuperspace model of quantum cosmology. The classical background is defined by a scale factor a(t) and a homogeneous field φ(t). Quantum scalar perturbations (of the background metric combined with the inflaton perturbation) are described by the Mukhanov-Sasaki variable υ_k in Fourier space. The wave function Ψ_k(a, φ, υ_k) satisfies a Wheeler-DeWitt equation for the mode k, which is solved by means of a semiclassical expansion
\[ \Psi_k = \exp\!\left[ i\left( m_P^2 S_0 + m_P^0 S_1 + m_P^{-2} S_2 + ... \right) \right] \tag{81} \]
in powers of m_P². Inserting this into the left-hand side of the Wheeler-DeWitt equation, terms of the same order in m_P are collected and their sum set to zero. By this means, Brizuela et al. derive a Schrödinger equation for an effective wave function ψ⁽¹⁾_k(υ_k, t), where the corrections in the effective Hamiltonian have both Hermitian and non-Hermitian parts. The same expansion can again be applied to the de Broglie guidance equation for the perturbations υ_k [6]. We again find that the de Broglie velocity takes the standard form proportional to the gradient of the phase s⁽¹⁾_k, and so remains equal to the velocity generated by the (uncorrected) Hermitian part of the Hamiltonian. As in the general case this implies that the Born rule is unstable.
Examples of quantum instability
The gravitational instability of the Born rule has been studied for several examples. These include a scalar field on de Sitter space, a scalar field close to an evaporating black hole, and an atomic system in the gravitational field of the earth. Here we outline the results obtained so far and some of the potential implications.¹⁹
Inflationary perturbations on de Sitter space
In ref. [6], taking the results of Brizuela et al. [65] as a starting point, we derived a simplified model for inflationary perturbations in a far slow-roll limit, on a background with an approximate de Sitter expansion, a ∝ e Ht , where the Hubble parameter H is almost constant. The resulting equations define a tractable cosmological model of quantum instability in the early inflationary universe.
The perturbations are described by (real) Fourier field components q_k. The corrected Schrödinger equation for the effective wave function ψ⁽¹⁾_k(q_k, t) is found to be
\[ i\,\frac{\partial\psi^{(1)}_k}{\partial t} = \hat{H}_k\,\psi^{(1)}_k - \frac{\bar{k}^3}{2m_P^2 H^2}\,\frac{1}{\psi^{(0)}_k}\left[ \frac{1}{a^3}\left(\hat{H}_k\right)^2\psi^{(0)}_k + i\,\frac{\partial}{\partial t}\!\left( \frac{1}{a^3}\,\hat{H}_k\,\psi^{(0)}_k \right) \right]\psi^{(1)}_k , \tag{82} \]
where
\[ \hat{H}_k = -\frac{1}{2a^3}\frac{\partial^2}{\partial q_k^2} + \frac{1}{2}\,a k^2 q_k^2 \tag{83} \]
is the uncorrected (zeroth-order) Hamiltonian for the field mode, ψ⁽⁰⁾_k is the uncorrected (zeroth-order) wave function, and
\[ \bar{k} = \frac{1}{L} , \tag{84} \]
where L is an arbitrary lengthscale associated with spatial integration in the classical action (to be interpreted as an infrared cutoff) [66]. In the same limit the de Broglie guidance equation for q_k is found to be
\[ \frac{dq_k}{dt} = \frac{1}{a^3}\,\frac{\partial s^{(1)}_k}{\partial q_k} , \tag{85} \]
where s⁽¹⁾_k = Im ln ψ⁽¹⁾_k is the phase of ψ⁽¹⁾_k. This is the standard de Broglie velocity for Fourier components of a scalar field, with Hamiltonian (83), on a classical expanding background [33].
For a theoretical ensemble with the same wave function ψ (1) k (q k , t), the probability density ρ (1) k (q k , t) will evolve by the continuity equation
\[ \frac{\partial\rho^{(1)}_k}{\partial t} + \frac{\partial}{\partial q_k}\!\left( \rho^{(1)}_k\,\dot{q}_k \right) = 0 , \tag{86} \]
where q̇_k is the velocity field (85). In contrast, from (82) we find that |ψ⁽¹⁾_k|² satisfies
\[ \frac{\partial|\psi^{(1)}_k|^2}{\partial t} + \frac{\partial}{\partial q_k}\!\left( |\psi^{(1)}_k|^2\,\dot{q}_k \right) = s , \tag{87} \]
where in the notation of Section 8 the 'source' s is given by (60), where here
\[ \hat{H}_2 = -\frac{\bar{k}^3}{2m_P^2 H^2}\,\frac{1}{\psi^{(0)}_k}\,\frac{\partial}{\partial t}\!\left( \frac{1}{a^3}\,\hat{H}_k\,\psi^{(0)}_k \right) . \tag{88} \]
These equations can be used to calculate the gravitational production of quantum nonequilibrium during inflation, employing the differential equation (62) for the rate of change of the ratio f_k = ρ_k/|ψ_k|² along trajectories. Taking ψ⁽⁰⁾_k to be the Bunch-Davies vacuum wave function, approximate calculations show that the gravitational instability of the Born rule generates a nonequilibrium deficit ∼ 1/k³ in the primordial cosmological power spectrum. It has been shown elsewhere that there is no significant relaxation during inflation [35, 87], so the condition (67) will be satisfied and the generated nonequilibrium will persist over time. However, the magnitude of the effect on the power spectrum is far too small to observe in the CMB (for details see ref. [6]).
By considering only the Hermitian terms in the Hamiltonian, Brizuela et al. [65] show that the gravitationally-corrected wave function induces a similar ∼ 1/k 3 correction to the power spectrum but of opposite sign (hence a power excess). However, the calculations of ref. [6] are too approximate to precisely compare the overall magnitudes of these physically-distinct effects.
Evaporating black holes
It is also of interest to consider quantum instability for a field in the background spacetime of an evaporating Schwarzschild black hole. It was argued by Kiefer, Müller and Singh [88] that in this case the quantum-gravitational corrections to the effective Schrödinger equation will be as in equations (74)-(76) but with the replacement
\[ \sqrt{g}\,R \;\rightarrow\; -16\pi G M / c^2 , \tag{89} \]
where the Schwarzschild radius r_S = 2GM/c² (for a black hole of mass M) provides a natural lengthscale. The non-Hermitian term in (74) reads
\[ i\hat{H}_b = i\,\frac{4\pi\hbar G}{c^4}\int d^3x\; N\,\frac{\delta}{\delta\tau}\!\left( \frac{\hat{\mathcal{H}}_\phi}{\sqrt{g}\,R} \right) , \tag{90} \]
where we have inserted µ = c²/32πG and restored ℏ and c. With the replacement (89), (90) takes the approximate form
\[ i\hat{H}_b \simeq -i\,\frac{\hbar}{4c^2}\,\frac{d}{dt}\!\left( \frac{1}{M} \right) \int d^3x\; N\hat{\mathcal{H}}_\phi = i\,\frac{\hbar}{4c^2}\,\frac{1}{M^2}\,\frac{dM}{dt}\,\hat{H}_\phi , \tag{91} \]
whereĤ φ is the uncorrected field Hamiltonian (neglecting the rate of change of H φ compared with the rate of change of the background geometry). Kiefer et al. suggested that this term might alleviate the problem of black-hole information loss (though such a term is inconsistent with the standard quantum formalism).
Taking the phenomenological time dependence [89, 90]
\[ M(t) \simeq M_0\left[ 1 - \kappa\left( \frac{m_P}{M_0} \right)^3 \frac{t}{t_P} \right]^{1/3} , \tag{92} \]
with M_0 the initial mass, κ a numerical factor, and here m_P = √(ℏc/G) ≃ 10⁻⁵ g the standard Planck mass, we have dM/dt ≃ −(1/3)κ(m_P/t_P)(m_P/M)². According to (91) the Hamiltonian Ĥ_k of a field mode then acquires a non-Hermitian correction iĤ₂ (in the notation of Section 8) with
\[ \hat{H}_2 \simeq -\frac{\kappa}{12}\left( \frac{m_P}{M} \right)^4 \hat{H}_k . \tag{93} \]
This correction is significant in the final stage of evaporation when M approaches m_P, suggesting that the final burst of Hawking radiation could contain significant departures from the Born rule.²⁰ Quantum nonequilibrium is expected to be created on a timescale τ_noneq of order (66), which depends inversely on the equilibrium mean energy ⟨Ĥ₂⟩. If ⟨Ĥ_k⟩ ∼ k_B T_H, where T_H = ℏc³/8πGMk_B is the Hawking temperature, then from (66) and (93) we have
\[ \tau_{\mathrm{noneq}} \sim \frac{1}{\kappa}\, t_P\left( \frac{M}{m_P} \right)^5 . \tag{94} \]
Corrections to the Born rule will be significant if τ_noneq is not too large compared to the evaporation timescale t_evap. Taking 1/t_evap ∼ (1/M)|dM/dt| we have
\[ t_{\mathrm{evap}} \sim \frac{1}{\kappa}\, t_P\left( \frac{M}{m_P} \right)^3 \tag{95} \]
and a ratio
\[ \frac{\tau_{\mathrm{noneq}}}{t_{\mathrm{evap}}} \sim \left( \frac{M}{m_P} \right)^2 \tag{96} \]
(the factor κ cancels). Again it seems clear that significant deviations from the Born rule can be generated in the outgoing radiation only in the final stage of evaporation when M approaches m_P. It is however not known if such deviations could survive quantum relaxation, which may well be significant in the final stage of evaporation when the background spacetime is changing rapidly. Quantum nonequilibrium will build up over time only if (67) is satisfied in the regime where M approaches m_P. Thus we need to know how τ_relax scales with M and to compare this with our estimate τ_noneq ∝ (M/m_P)⁵. This is a matter for future work. Should nonequilibrium survive in the outgoing radiation, at least in principle the emitted photons could show anomalies in their two-slit interference pattern or in their polarisation probabilities [92]. Realistically, Hawking radiation in the γ-ray region might be detected from exploding primordial black holes (which may form a significant component of dark matter [93]). However, only the very final burst is likely to show significant deviations from the Born rule, making detection difficult.
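The scalings just quoted are easy to check numerically. The sketch below (our own arithmetic, with κ set to 1 and all O(1) prefactors dropped) evaluates the ratio τ_noneq/t_evap ∼ (M/m_P)² from (94) and (95); only the scalings, not the prefactors, should be taken seriously.

t_P = 5.39e-44    # Planck time, seconds

for M_over_mP in (1e5, 1e2, 1.0):
    tau_noneq = t_P * M_over_mP**5        # (94), kappa and O(1) factors dropped
    t_evap = t_P * M_over_mP**3           # (95), likewise
    print(f"M/m_P = {M_over_mP:8.0e}   tau_noneq/t_evap ~ {tau_noneq/t_evap:8.1e}")

The ratio scales as (M/m_P)², confirming that departures from the Born rule can build up only as M approaches m_P.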
An atom in the gravitational field of the earth
We might ask if the Born rule could be unstable for an atomic system in the gravitational field of the earth. We saw in Section 10.2 that the non-Hermitian correction to a field Hamiltonian in the spacetime of a Schwarzschild black hole can plausibly be obtained from (90) by replacing √g R by −8πr_S, where r_S = 2GM/c² is the natural lengthscale of the background. In the gravitational field of the earth we might expect instead to make a replacement of the form
\[ \sqrt{g}\,R \;\rightarrow\; -8\pi r_c , \tag{98} \]
where r_c is the local radius of curvature (r_c ≃ 10¹³ cm at the surface of the earth).²¹ Inserting this in (90), and writing Ĥ_φ as Ĥ_a, where Ĥ_a = ∫d³x N\hat{\mathcal{H}}_a is the atomic Hamiltonian, we find an estimated non-Hermitian term
\[ i\hat{H}_b \sim -i\,\frac{\hbar G}{c^4}\,\frac{1}{r_c}\int d^3x\; N\,\frac{\delta}{\delta\tau}\hat{\mathcal{H}}_a \sim -i\,\frac{l_P}{r_c}\,\frac{\partial\hat{H}_a}{\partial t}\, t_P , \tag{99} \]
where l P and t P are respectively the Planck length and time.
The term (99) is non-zero only if the (uncorrected) atomic Hamiltonian Ĥ_a is time dependent. The magnitude of (99) is roughly the change in Ĥ_a over a Planck time, suppressed by the ratio l_P/r_c. Needless to say, in ordinary laboratory conditions this term will be utterly negligible. Furthermore, if Ĥ_a changes rapidly (to maximise the effect), the atomic wave function will be a superposition of multiple energy eigenstates, and we expect to find quantum relaxation over timescales τ_relax ≪ τ_noneq. Even if we could probe an atomic ensemble over times ∼ τ_noneq (far longer than the age of the universe), any gravitationally-generated nonequilibrium will have long since relaxed. It then appears that the gravitational creation of quantum nonequilibrium in ordinary laboratory systems, with a dynamical Hamiltonian in a background curved space, is likely to be of theoretical interest only.
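As a closing sanity check (our own arithmetic), the suppression factor in (99) at the earth's surface, using the standard Planck length and the curvature radius quoted in the text:

l_P = 1.6e-33    # Planck length, cm
r_c = 1.0e13     # local curvature radius at the earth's surface, cm (as in the text)
print("l_P / r_c =", l_P / r_c)   # ~ 1.6e-46: utterly negligible, as stated above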
Conclusion
We have argued that, in the deep quantum-gravity regime, with a non-normalisable Wheeler-DeWitt wave functional Ψ, there is no Born rule and the universe is in a perpetual state of quantum nonequilibrium with a probability density P ≠ |Ψ|². Quantum relaxation to the Born rule can occur only when the early universe emerges into a semiclassical or Schrödinger approximation, with a time-dependent and normalisable effective wave functional ψ for a system on a classical spacetime background, for which the probability density ρ can evolve towards |ψ|² (on a coarse-grained level). We conclude that the long-standing hypothesis of primordial quantum nonequilibrium, with relaxation to the Born rule taking place soon after the big bang, follows naturally from the internal logic of quantum gravity (as interpreted in de Broglie-Bohm pilot-wave theory). Furthermore, quantum-gravitational corrections to the Schrödinger approximation, in the form of tiny non-Hermitian terms in the effective Hamiltonian, generate a (very slight) instability of the Born rule, whereby quantum nonequilibrium ρ ≠ |ψ|² can be created from a prior equilibrium (ρ = |ψ|²) state. To observe such effects will be difficult in practice, though possible at least in principle.
When restricted to the Born-rule equilibrium state, the pilot-wave or de Broglie-Bohm formulation of quantum theory is experimentally indistinguishable from textbook quantum mechanics. Wider support for this formulation is likely to be forthcoming should we find experimental evidence for violations of the Born rule -or if the theory allows us to make decisive progress in understanding some vital aspect of fundamental physics. From the results presented here we suggest that this little-used formulation of quantum theory allows us to understand and solve three problems in canonical quantum gravity: (a) to explain why the naive Schrödinger interpretation does not work, (b) to account for the emergence of the Born rule in a semiclassical regime, and (c) to give a consistent meaning to non-Hermitian quantum-gravitational corrections to the effective Schrödinger equation.
The results of this paper also impact on certain philosophical debates concerning the status of the Born rule in de Broglie-Bohm theory. As we have noted, and discussed in detail elsewhere [24], the 'Bohmian mechanics' school employs an essentially circular argument to obtain the Born rule for subsystems by assuming the Born rule for the whole universe at some initial cosmological time.²² We have seen that, when quantum gravity is taken into account, such an argument has no starting point, since there is no fundamental Born-rule measure for a universe governed by the Wheeler-DeWitt equation (despite attempts by some workers [54, 60] to apply the naive Schrödinger interpretation to pilot-wave gravitation).
It is one hundred years since de Broglie started on the path that, after five years of remarkable developments, brought him in 1927 to pilot-wave theory as we know it today. It is seventy years since the revival and further development of pilot-wave theory in Bohm's papers of 1952. And yet the theory is still not widely known or used, and is often misunderstood. The historical development of pilot-wave theory is in certain respects reminiscent of the historical development of the kinetic theory of gases. Beginning with the pioneering work of Bernoulli in the early 18th century, and of Herapath and Waterston in the early 19th century, kinetic theory was more or less ignored until it was taken up by Clausius in an influential paper of 1857 [95]. Another half a century had to pass, with decisive contributions in particular by Maxwell, Boltzmann and Einstein, before theorists were able to interpret Brownian motion as evidence for atoms and kinetic theory. Whether or not a comparable empirical breakthrough awaits pilot-wave theory remains to be seen.
Why were physicists in the late nineteenth century still reluctant to accept the existence of atoms and molecules, long after chemists had already deduced their detailed shapes and compositions? In part there was philosophical opposition from Mach and others, who emphasised the role of sensory perception in physics, while the idea of an objective reality beyond the immediate reach of our senses came to be widely derided as unscientific and metaphysical. Similarly, today there remains widespread opposition to realism in quantum physics. For as long as the details of de Broglie-Bohm trajectories cannot be observed (the uncertainty principle reigns for as long as we are confined to quantum equilibrium) those trajectories will continue to be dismissed as unphysical.
A decisive breakthrough, with an end to seemingly endless philosophical debates, will occur only by extending the boundaries of physics beyond what is currently known and understood. The prospects do not seem entirely remote. As we have argued in this paper, gravitation may hold the key to unlocking the hidden physics of pilot-wave theory.
Notes

2. By de Broglie's own account his ideas originated in a paper of 1922 on blackbody radiation, although his first paper on pilot-wave theory proper did not appear until 1923, culminating in his theory of a many-body system presented at the 1927 Solvay conference (see ref. [2], chapter 2).

3. In this paper we employ the traditional metric representation of the gravitational field. We expect similar conclusions to hold in loop quantum gravity [44].

4. As we will see there is also a constraint on Ψ guaranteeing coordinate invariance.

5. For a recent review see ref. [59].

6. For a 'functional' Ψ[φ] (mapping a function φ(x) to a complex number Ψ) the functional derivative δΨ/δφ(x) at a spatial point x is defined by δΨ = ∫d³x [δΨ/δφ(x)] δφ(x) for arbitrary infinitesimal variations δφ(x).

7. Here dx² = (dx¹)² + (dx²)² + (dx³)².

8. The continuity equation (25) can also be derived from Noether's theorem as the local conservation law associated with a global phase symmetry ψ → ψe^{iθ} on configuration space [75].

9. We are assuming the wave function has a single component ψ. The method can be readily extended to spin systems with multi-component wave functions.

10. For N_i ≠ 0 there are additional terms D_i N_j + D_j N_i on the right-hand side of (34).

11. Specifically, with the ordering (δ/δg_ij) G_ijkl (δ/δg_kl) in the kinetic term.

12. For N_i ≠ 0 there is an additional term N^i ∂_i φ on the right-hand side of (37).

13. We might also append a constraint of the form (13) on P, to ensure that it is a function on the space of coordinate-independent 3-geometries. We can avoid this complication by simply working with one representation of the 3-geometry by one (coordinate-dependent) metric g_ij.

14. For a single particle with a static density ρ and a current j, this is analogous to summing the equations ∂_x j_x = ∂_y j_y = ∂_z j_z = 0 to yield ∂ρ/∂t + ∇·j = 0.

15. The same equations follow by identifying the canonical momenta (16) with a phase gradient.

16. When s = 0 the exact H is constant but the coarse-grained value decreases (if there is no initial fine-grained structure) [14].

17. This result is of course expected from the WKB form (49) (with ψ = ψ⁽⁰⁾).

18. In an expanding cosmological background the ratio of iĤ_b to Ĥ_a is roughly of order ∼ H/E, where H = ȧ/a is the Hubble parameter and E is a typical energy for the field [62].

19. For more details see ref. [6].

20. The creation of quantum nonequilibrium by evaporating black holes was previously suggested as a possible mechanism for resolving the information-loss puzzle [86, 33, 61, 91], but without a clear theoretical underpinning.

21. A similar suggestion was made by Kiefer and Singh [62], who considered the effect of the Hermitian correction on atomic energy levels.

22. In a remarkable reply to ref. [24], Dürr and Struyve [94] invoke similar circular reasoning in their account of classical coin tossing.
References

[1] L. de Broglie, La nouvelle dynamique des quanta, in: Électrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique (Gauthier-Villars, Paris, 1928). [English translation in ref. [2].]
[2] G. Bacciagaluppi and A. Valentini, Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference (Cambridge University Press, 2009). [arXiv:quant-ph/0609184]
[3] D. Bohm, A suggested interpretation of the quantum theory in terms of 'hidden' variables. I, Phys. Rev. 85, 166 (1952); D. Bohm, A suggested interpretation of the quantum theory in terms of 'hidden' variables. II, Phys. Rev. 85, 180 (1952).
[4] P. R. Holland, The Quantum Theory of Motion: an Account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics (Cambridge University Press, Cambridge, 1993).
[5] A. Valentini, De Broglie-Bohm pilot-wave theory, in: Oxford Research Encyclopedia of Physics (Oxford University Press, 2023). [https://oxfordre.com/physics]
[6] A. Valentini, Quantum gravity and quantum probability, arXiv:2104.07966.
[7] W. Pauli, in: Louis de Broglie: Physicien et Penseur (Albin Michel, Paris, 1953).
[8] J. B. Keller, Bohm's interpretation of the quantum theory in terms of 'hidden' variables, Phys. Rev. 89, 1040 (1953).
[9] D. Bohm, Proof that probability density approaches |ψ|² in causal interpretation of the quantum theory, Phys. Rev. 89, 458 (1953).
[10] D. Bohm and J. P. Vigier, Model of the causal interpretation of quantum theory in terms of a fluid with irregular fluctuations, Phys. Rev. 96, 208 (1954).
[11] D. Dürr, S. Goldstein and N. Zanghì, Quantum equilibrium and the origin of absolute uncertainty, J. Stat. Phys. 67, 843 (1992).
[12] D. Dürr and S. Teufel, Bohmian Mechanics: The Physics and Mathematics of Quantum Theory (Springer, Berlin, 2009).
[13] R. Tumulka, Bohmian mechanics, in: The Routledge Companion to the Philosophy of Physics, eds. E. Knox and A. Wilson (Routledge, New York, 2021).
[14] A. Valentini, Signal-locality, uncertainty, and the subquantum H-theorem. I, Phys. Lett. A 156, 5 (1991).
[15] A. Valentini, Signal-locality, uncertainty, and the subquantum H-theorem. II, Phys. Lett. A 158, 1 (1991).
[16] A. Valentini, On the pilot-wave theory of classical, quantum and subquantum physics, PhD thesis, International School for Advanced Studies, Trieste, Italy (1992). [http://hdl.handle.net/20.500.11767/4334]
[17] A. Valentini, Pilot-wave theory of fields, gravitation and cosmology, in: Bohmian Mechanics and Quantum Theory: an Appraisal, eds. J. T. Cushing et al. (Kluwer, Dordrecht, 1996).
[18] A. Valentini, Hidden variables, statistical mechanics and the early universe, in: Chance in Physics: Foundations and Perspectives, eds. J. Bricmont et al. (Springer, Berlin, 2001). [arXiv:quant-ph/0104067]
[19] A. Valentini, Signal-locality in hidden-variables theories, Phys. Lett. A 297, 273 (2002). [arXiv:quant-ph/0106098]
[20] A. Valentini, Subquantum information and computation, Pramana-J. Phys. 59, 269 (2002). [arXiv:quant-ph/0203049]
[21] A. Valentini and H. Westman, Dynamical origin of quantum probabilities, Proc. Roy. Soc. A 461, 253 (2005). [arXiv:quant-ph/0403034]
[22] P. Pearle and A. Valentini, Quantum mechanics: generalizations, in: Encyclopaedia of Mathematical Physics, eds. J.-P. Françoise et al. (Elsevier, North-Holland, 2006). [arXiv:quant-ph/0506115]
[23] A. Valentini, Beyond the quantum, Phys. World 22N11, 32 (2009). [arXiv:1001.2758]
[24] A. Valentini, Foundations of statistical mechanics and the status of the Born rule in de Broglie-Bohm pilot-wave theory, in: Statistical Mechanics and Scientific Explanation: Determinism, Indeterminism and Laws of Nature, ed. V. Allori (World Scientific, 2020). [arXiv:1906.10761]
[25] C. Efthymiopoulos and G. Contopoulos, Chaos in Bohmian quantum mechanics, J. Phys. A: Math. Gen. 39, 1819 (2006).
[26] M. D. Towler, N. J. Russell and A. Valentini, Time scales for dynamical relaxation to the Born rule, Proc. Roy. Soc. A 468, 990 (2012). [arXiv:1103.1589]
[27] S. Colin, Relaxation to quantum equilibrium for Dirac fermions in the de Broglie-Bohm pilot-wave theory, Proc. Roy. Soc. A 468, 1116 (2012). [arXiv:1108.5496]
[28] E. Abraham, S. Colin and A. Valentini, Long-time relaxation in pilot-wave theory, J. Phys. A: Math. Theor. 47, 395306 (2014). [arXiv:1310.1899]
[29] C. Efthymiopoulos, G. Contopoulos and A. C. Tzemos, Chaos in de Broglie-Bohm quantum mechanics and the dynamics of quantum relaxation, Ann. Fond. Louis de Broglie 42, 133 (2017). [arXiv:1703.09810]
[30] A. Drezet, Justifying Born's rule P_α = |Ψ_α|² using deterministic chaos, decoherence, and the de Broglie-Bohm quantum theory, Entropy 23, 1371 (2021). [arXiv:2109.09353]
[31] F. B. Lustosa, S. Colin and S. E. Perez Bergliaffa, Quantum relaxation in a system of harmonic oscillators with time-dependent coupling, Proc. Roy. Soc. A 477, 20200606 (2021). [arXiv:2007.02939]
[32] F. B. Lustosa, N. Pinto-Neto and A. Valentini, Evolution of quantum nonequilibrium for coupled harmonic oscillators, arXiv:2205.13701.
[33] A. Valentini, Astrophysical and cosmological tests of quantum theory, J. Phys. A: Math. Theor. 40, 3285 (2007). [arXiv:hep-th/0610032]
[34] A. Valentini, De Broglie-Bohm prediction of quantum violations for cosmological super-Hubble modes, arXiv:0804.4656.
[35] A. Valentini, Inflationary cosmology as a probe of primordial quantum mechanics, Phys. Rev. D 82, 063513 (2010). [arXiv:0805.0163]
[36] S. Colin and A. Valentini, Mechanism for the suppression of quantum noise at large scales on expanding space, Phys. Rev. D 88, 103515 (2013). [arXiv:1306.1579]
[37] S. Colin and A. Valentini, Primordial quantum nonequilibrium and large-scale cosmic anomalies, Phys. Rev. D 92, 043520 (2015). [arXiv:1407.8262]
[38] A. Valentini, Statistical anisotropy and cosmological quantum relaxation, arXiv:1510.02523.
[39] S. Colin and A. Valentini, Robust predictions for the large-scale cosmological power deficit from primordial quantum nonequilibrium, Int. J. Mod. Phys. D 25, 1650068 (2016). [arXiv:1510.03508]
[40] N. G. Underwood and A. Valentini, Quantum field theory of relic nonequilibrium systems, Phys. Rev. D 92, 063531 (2015). [arXiv:1409.6817]
[41] N. G. Underwood and A. Valentini, Anomalous spectral lines and relic quantum nonequilibrium, Phys. Rev. D 101, 043004 (2020). [arXiv:1609.04576]
D Albert, After Physics. Harvard University PressD. Z Albert, After Physics (Harvard University Press, 2015).
Bohmian mechanics. S Goldstein, The Stanford Encyclopedia of Philosophy (Fall 2021 Edition). E. N. ZaltaS. Goldstein, Bohmian mechanics, in: The Stanford Ency- clopedia of Philosophy (Fall 2021 Edition), ed. E. N. Zalta. [https://plato.stanford.edu/archives/fall2021/entries/qm-bohm/]
C Rovelli, Quantum Gravity. CambridgeCambridge University PressC. Rovelli, Quantum Gravity (Cambridge University Press, Cambridge, 2004).
Quantum theory of gravity. I. The canonical theory. B S Dewitt, Phys. Rev. 1601113B. S. DeWitt, Quantum theory of gravity. I. The canonical theory, Phys. Rev. 160, 1113 (1967).
C Kiefer, Quantum Gravity. OxfordOxford University PressC. Kiefer, Quantum Gravity (Oxford University Press, Oxford, 2012).
Time and the interpretation of canonical quantum gravity. W G Unruh, R M Wald, Phys. Rev. D. 402598W. G. Unruh and R. M. Wald, Time and the interpretation of canonical quantum gravity, Phys. Rev. D 40, 2598 (1989).
Conceptual and geometrical problems in quantum gravity. C J Isham, Recent Aspects of Quantum Fields. H. Mitter and H. GaustererBerlinSpringer-VerlagC. J. Isham, Conceptual and geometrical problems in quantum gravity, in: Recent Aspects of Quantum Fields, eds. H. Mitter and H. Gausterer (Springer-Verlag, Berlin, 1991).
Time and interpretations of quantum gravity. K V Kuchař, Proceedings of the 4th Canadian Conference on General Relativity and Relativistic Astrophysics. G. Kunstatter, D. Vincent and J. Williamsthe 4th Canadian Conference on General Relativity and Relativistic AstrophysicsSingapore203World ScientificK. V. Kuchař, Time and interpretations of quantum gravity, in: Proceed- ings of the 4th Canadian Conference on General Relativity and Relativistic Astrophysics, eds. G. Kunstatter, D. Vincent and J. Williams (World Sci- entific, Singapore, 1992). [Reprinted: K. V. Kuchař, Int. J. Mod. Phys. D 20, 3 (2011).]
Canonical quantum gravity and the problem of time. C J Isham, arXiv:gr-qc/9210011Integrable Systems, Quantum Groups, and Quantum Field Theories. L. A. Ibort and M. A. RodriguezLondonKluwerC. J. Isham, Canonical quantum gravity and the problem of time, in: Inte- grable Systems, Quantum Groups, and Quantum Field Theories, eds. L. A. Ibort and M. A. Rodriguez (Kluwer, London, 1993). [arXiv:gr-qc/9210011]
The problem of time in quantum geometrodynamics. K V Kuchař, The Arguments of Time. J. ButterfieldOxfordOxford University PressK. V. Kuchař, The problem of time in quantum geometrodynamics, in: The Arguments of Time, ed. J. Butterfield (Oxford University Press, Oxford, 1999).
E Anderson, The Problem of Time: Quantum Mechanics versus General Relativity. SpringerE. Anderson, The Problem of Time: Quantum Mechanics versus General Relativity (Springer, 2017).
. C Kiefer, P Peter, arXiv:2112.05788Time in quantum cosmology. 836C. Kiefer and P. Peter, Time in quantum cosmology, Universe 8, 36 (2022). [arXiv:2112.05788]
Quantum potential interpretation of the wave function of the universe. J C Vink, Nucl. Phys. B. 369707J. C. Vink, Quantum potential interpretation of the wave function of the universe, Nucl. Phys. B 369, 707 (1992).
Quantum potential interpretation of the Wheeler-DeWitt equation. T Horiguchi, Mod. Phys. Lett. A. 91429T. Horiguchi, Quantum potential interpretation of the Wheeler-DeWitt equation, Mod. Phys. Lett. A 9, 1429 (1994).
Pilot wave quantum cosmology. Yu V Shtanov, arXiv:gr-qc/9503005Phys. Rev. D. 542564Yu. V. Shtanov, Pilot wave quantum cosmology, Phys. Rev. D 54, 2564 (1996). [arXiv:gr-qc/9503005]
The Bohm interpretation of quantum cosmology. N Pinto-Neto, arXiv:gr-qc/0410117Found. Phys. 35577N. Pinto-Neto, The Bohm interpretation of quantum cosmology, Found. Phys. 35, 577 (2005). [arXiv:gr-qc/0410117]
Quantum cosmology from the de Broglie-Bohm perspective. N Pinto-Neto, J C Fabris, arXiv:1306.0820Class. Quantum Grav. 30143001N. Pinto-Neto and J. C. Fabris, Quantum cosmology from the de Broglie-Bohm perspective, Class. Quantum Grav. 30, 143001 (2013). [arXiv:1306.0820]
The de Broglie-Bohm quantum theory and its application to quantum cosmology. N Pinto-Neto, arXiv:2111.030577134N. Pinto-Neto, The de Broglie-Bohm quantum theory and its application to quantum cosmology, Universe 7, 134 (2021). [arXiv:2111.03057]
Quantum Einstein equations. D Dürr, W Struyve, arXiv:2003.03839Class. Quantum Grav. 37135002D. Dürr and W. Struyve, Quantum Einstein equations, Class. Quantum Grav. 37, 135002 (2020). [arXiv:2003.03839]
A Valentini, arXiv:1409.7467Trans-Planckian fluctuations and the stability of quantum mechanics. A. Valentini, Trans-Planckian fluctuations and the stability of quantum mechanics, arXiv:1409.7467.
Quantum gravitational corrections to the functional Schrödinger equation. C Kiefer, T P Singh, Phys. Rev. D. 441067C. Kiefer and T. P. Singh, Quantum gravitational corrections to the func- tional Schrödinger equation, Phys. Rev. D 44, 1067 (1991).
Quantum gravitational contributions to the cosmic microwave background anisotropy spectrum. C Kiefer, M Krämer, arXiv:1103.4967Phys. Rev. Lett. 10821301C. Kiefer and M. Krämer, Quantum gravitational contributions to the cos- mic microwave background anisotropy spectrum, Phys. Rev. Lett. 108, 021301 (2012). [arXiv:1103.4967]
On the modification of the cosmic microwave background anisotropy spectrum from canonical quantum gravity. D Bini, G Esposito, C Kiefer, M Krämer, F Pessina, arXiv:1303.0531Phys. Rev. D. 87104008D. Bini, G. Esposito, C. Kiefer, M. Krämer and F. Pessina, On the modifica- tion of the cosmic microwave background anisotropy spectrum from canon- ical quantum gravity, Phys. Rev. D 87, 104008 (2013). [arXiv:1303.0531]
Quantum-gravitational effects on gauge-invariant scalar and tensor perturbations during inflation: The de Sitter case. D Brizuela, C Kiefer, M Krämer, arXiv:1511.05545Phys. Rev. D. 93104035D. Brizuela, C. Kiefer and M. Krämer, Quantum-gravitational effects on gauge-invariant scalar and tensor perturbations during inflation: The de Sitter case, Phys. Rev. D 93, 104035 (2016) [arXiv:1511.05545];
Quantum-gravitational effects on gauge-invariant scalar and tensor perturbations during inflation: The slow-roll approximation. D Brizuela, C Kiefer, M Krämer, arXiv:1611.02932Phys. Rev. D. 94123527D. Brizuela, C. Kiefer and M. Krämer, Quantum-gravitational effects on gauge-invariant scalar and tensor perturbations during inflation: The slow-roll approxima- tion, Phys. Rev. D 94, 123527 (2016) [arXiv:1611.02932].
Signatures of quantum gravity in a Born-Oppenheimer context. A Y Kamenshchik, A Tronconi, G Venturi, arXiv:1403.2961Phys. Lett. B. 73472A. Y. Kamenshchik, A. Tronconi, and G. Venturi, Signatures of quantum gravity in a Born-Oppenheimer context, Phys. Lett. B 734, 72 (2014). [arXiv:1403.2961]
Wave function of the universe. J B Hartle, S W Hawking, Phys. Rev. D. 282960J. B. Hartle and S. W. Hawking, Wave function of the universe, Phys. Rev. D 28, 2960 (1983);
The quantum state of the universe. S W Hawking, Nucl. Phys. B. 239257S. W. Hawking, The quantum state of the universe, Nucl. Phys. B 239, 257 (1984);
Operator ordering and the flatness of the universe. S W Hawking, D Page, Nucl. Phys. B. 264185S. W. Hawking and D. Page, Operator ordering and the flatness of the universe, Nucl. Phys. B 264, 185 (1986);
How probable is inflation?. S W Hawking, D Page, Nucl. Phys. B. 298789S. W. Hawking and D. Page, How probable is inflation?, Nucl. Phys. B 298, 789 (1988).
J A Wheeler, Battelle Rencontres. C. DeWitt and J. A. WheelerNew YorkBenjaminJ. A. Wheeler, in: Battelle Rencontres: 1967 Lectures in Mathematics and Physics, eds. C. DeWitt and J. A. Wheeler (Benjamin, New York, 1968).
Quantum mechanics without time: a model. C Rovelli, Phys. Rev. D. 422638C. Rovelli, Quantum mechanics without time: a model, Phys. Rev. D 42, 2638 (1990).
Time in quantum gravity: an hypothesis. C Rovelli, Phys. Rev. D. 43442C. Rovelli, Time in quantum gravity: an hypothesis, Phys. Rev. D 43, 442 (1991).
Forget time, FQXi Essay on the Nature of Time. C Rovelli, arXiv:0903.3832C. Rovelli, Forget time, FQXi Essay on the Nature of Time (2009). [arXiv:0903.3832]
The timelessness of quantum gravity. I. The evidence from the classical theory. J B Barbour, Class. Quantum Grav. 112853J. B. Barbour, The timelessness of quantum gravity. I. The evidence from the classical theory, Class. Quantum Grav. 11, 2853 (1994);
The timelessness of quantum gravity. II. The appearance of dynamics in static configurations. J B Barbour, Class. Quantum Grav. 112875J. B. Barbour, The timelessness of quantum gravity. II. The appearance of dynamics in static configurations, Class. Quantum Grav. 11, 2875 (1994).
Trajectories for the wave function of the universe from a simple detector model. J J Halliwell, arXiv:gr-qc/0008046Phys. Rev. D. 6444008J. J. Halliwell, Trajectories for the wave function of the universe from a sim- ple detector model, Phys. Rev. D 64, 044008 (2001) [arXiv:gr-qc/0008046];
Probabilities in quantum cosmological models: A decoherent histories analysis using a complex potential. J J Halliwell, arXiv:0909.2597Phys. Rev. D. 80124032J. J. Halliwell, Probabilities in quantum cosmological models: A decoherent histories analysis using a complex potential, Phys. Rev. D 80, 124032 (2009) [arXiv:0909.2597];
Decoherent histories analysis of minisuperspace quantum cosmology. J J Halliwell, arXiv:1108.5991J. Phys.: Conf. Ser. 30612023J. J. Halliwell, Decoherent histories analysis of minisu- perspace quantum cosmology, J. Phys.: Conf. Ser. 306, 012023 (2011) [arXiv:1108.5991].
Multiple-event probability in general-relativistic quantum mechanics. F Hellmann, M Mondragon, A Perez, C Rovelli, arXiv:gr-qc/0610140Phys. Rev. D. 7584033F. Hellmann, M. Mondragon, A. Perez and C. Rovelli, Multiple-event prob- ability in general-relativistic quantum mechanics, Phys. Rev. D 75, 084033 (2007) [arXiv:gr-qc/0610140];
Multiple-event probability in general-relativistic quantum mechanics: a discrete model. M Mondragon, A Perez, C Rovelli, arXiv:0705.0006Phys. Rev. D. 7664005M. Mondragon, A. Perez and C. Rovelli, Multiple-event probability in general-relativistic quantum mechanics: a dis- crete model, Phys. Rev. D 76, 064005 (2007). [arXiv:0705.0006]
De Broglie-Bohm guidance equations for arbitrary Hamiltonians. W Struyve, A Valentini, arXiv:0808.0290J. Phys. A: Math. Theor. 4235301W. Struyve and A. Valentini, De Broglie-Bohm guidance equations for arbitrary Hamiltonians, J. Phys. A: Math. Theor. 42, 035301 (2009). [arXiv:0808.0290]
A R Liddle, D H Lyth, Cosmological Inflation and Large-Scale Structure. CambridgeCambridge University PressA. R. Liddle and D. H. Lyth, Cosmological Inflation and Large-Scale Struc- ture (Cambridge University Press, Cambridge, 2000).
V Mukhanov, Physical Foundations of Cosmology. CambridgeCambridge University PressV. Mukhanov, Physical Foundations of Cosmology (Cambridge University Press, Cambridge, 2005).
P Peter, J.-P Uzan, Primordial Cosmology. OxfordOxford University PressP. Peter and J.-P. Uzan, Primordial Cosmology (Oxford University Press, Oxford, 2009).
N Aghanim, Planck CollaborationPlanck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters. 59411N. Aghanim et al. (Planck Collaboration), Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters, Astron. Astro- phys. 594, A11 (2016).
Modeling the large-scale power deficit with smooth and discontinuous primordial spectra. S Vitenti, P Peter, A Valentini, arXiv:1901.08885Phys. Rev. D. 10043506S. Vitenti, P. Peter and A. Valentini, Modeling the large-scale power deficit with smooth and discontinuous primordial spectra, Phys. Rev. D 100, 043506 (2019). [arXiv:1901.08885]
Hidden variables and the large-scale structure of space-time. A Valentini, arXiv:quant-ph/0504011W. L. Craig and Q. Smith (RoutledgeLondonA. Valentini, Hidden variables and the large-scale structure of space-time, in: Einstein, Relativity and Absolute Simultaneity, eds. W. L. Craig and Q. Smith (Routledge, London, 2008). [arXiv:quant-ph/0504011]
The consistency of causal quantum geometrodynamics and quantum field theory. N Pinto-Neto, E Sergio Santini, arXiv:gr-qc/0009080Gen. Rel. Grav. 34505N. Pinto-Neto and E. Sergio Santini, The consistency of causal quantum ge- ometrodynamics and quantum field theory, Gen. Rel. Grav. 34, 505 (2002). [arXiv:gr-qc/0009080]
On Galilean and Lorentz invariance in pilot-wave dynamics. A Valentini, arXiv:0812.4941Phys. Lett. A. 228215A. Valentini, On Galilean and Lorentz invariance in pilot-wave dynamics, Phys. Lett. A 228, 215 (1997). [arXiv:0812.4941]
Time in quantum gravity. H D Zeh, Phys. Lett. A. 126311H. D. Zeh, Time in quantum gravity, Phys. Lett. A 126, 311 (1988).
Semiclassical approximation of the Wheeler-DeWitt equation: arbitrary orders and the question of unitarity. C Kiefer, D Wichmann, arXiv:1802.01422Gen. Relativ. Gravit. 5066C. Kiefer and D. Wichmann, Semiclassical approximation of the Wheeler- DeWitt equation: arbitrary orders and the question of unitarity, Gen. Rel- ativ. Gravit. 50, 66 (2018). [arXiv:1802.01422]
A Valentini, arXiv:hep-th/0407032Black holes, information loss, and hidden variables. A. Valentini, Black holes, information loss, and hidden variables, arXiv:hep-th/0407032.
Perturbations and quantum relaxation. A Kandhadai, A Valentini, arXiv:1609.04485Found. Phys. 491A. Kandhadai and A. Valentini, Perturbations and quantum relaxation, Found. Phys. 49, 1 (2019). [arXiv:1609.04485]
Quantum gravity and nonunitarity in black hole evaporation. C Kiefer, R Müller, T P Singh, arXiv:gr-qc/9308024Mod. Phys. Lett. A. 92661C. Kiefer, R. Müller and T. P. Singh, Quantum gravity and non- unitarity in black hole evaporation, Mod. Phys. Lett. A 9, 2661 (1994). [arXiv:gr-qc/9308024]
Quantum field theory in curved spacetime. B S Dewitt, Phys. Rep. 19295B. S. DeWitt, Quantum field theory in curved spacetime, Phys. Rep. 19, 295 (1975).
R M Wald, General Relativity. ChicagoUniversity of Chicago PressR. M. Wald, General Relativity (University of Chicago Press, Chicago, 1984).
Mechanism for nonlocal information flow from black holes. A Kandhadai, A Valentini, arXiv:1912.05374v1Int. J. Mod. Phys. A. 352050031A. Kandhadai and A. Valentini, Mechanism for nonlocal information flow from black holes, Int. J. Mod. Phys. A 35, 2050031 (2020). [arXiv:1912.05374v1]
Universal signature of non-quantum systems. A Valentini, arXiv:quant-ph/0309107Phys. Lett. A. 332187A. Valentini, Universal signature of non-quantum systems, Phys. Lett. A 332, 187 (2004). [arXiv:quant-ph/0309107]
Primordial black holes as dark matter. B Carr, F Kuhnel, M Sandstad, arXiv:1607.06077Phys. Rev. D. 9483504B. Carr, F. Kuhnel and M. Sandstad, Primordial black holes as dark matter, Phys. Rev. D 94, 083504 (2016). [arXiv:1607.06077]
D Dürr, W Struyve, arXiv:1910.08049Typicality in the foundations of statistical physics and Born's rule. V. Allori et al.SpringerDo Wave Functions JumpD. Dürr and W. Struyve, Typicality in the foundations of statistical physics and Born's rule, in: Do Wave Functions Jump?, eds. V. Allori et al. (Springer, 2021). [arXiv:1910.08049]
John James Waterston and the kinetic theory of gases. S G Brush, American Scientist. 49202S. G. Brush, John James Waterston and the kinetic theory of gases, Amer- ican Scientist 49, 202 (1961).
A model and predictions for COVID-19 considering population behavior and vaccination

Thomas Usherwood, Zachary Lajoie, and Vikas Srivastava (vikas_srivastava@brown.edu)
School of Engineering, Brown University, Providence, RI 02912, USA
Center for Biomedical Engineering, Brown University, Providence, RI 02912, USA

Scientific Reports 11, 12051 (2021). https://doi.org/10.1038/s41598-021-91514-7
These authors contributed equally: Thomas Usherwood and Zachary Lajoie.
The effect of vaccination coupled with the behavioral response of the population is not well understood. Our model incorporates two important dynamically varying population behaviors: level of caution and sense of safety. Level of caution increases with infectious cases, while an increasing sense of safety with increased vaccination lowers precautions. Our model accurately reproduces the complete time history of COVID-19 infections for various regions of the United States. We propose a parameter d_I as a direct measure of a population's caution against an infectious disease that can be obtained from the infectious cases. The model provides quantitative measures of the highest disease transmission rate, the effective transmission rate, and cautionary behavior. We predict future COVID-19 trends in the United States accounting for vaccine rollout and behavior. Although a high rate of vaccination is critical to quickly ending the pandemic, a return towards pre-pandemic social behavior due to an increased sense of safety during vaccine deployment can cause an alarming surge in infections. Our results predict that at the current rate of vaccination, new infection cases for COVID-19 in the United States will approach zero by August 2021. This model can be used for other regions and for future epidemics and pandemics.

Coronavirus Disease 2019 (COVID-19) began as a localized outbreak in Wuhan, China in December 2019 and quickly spread internationally to become a global pandemic. More than a year later, over 113 million people have become infected with COVID-19, with more than 2.5 million deaths worldwide 1. To combat the spread of this virus, the Pfizer-BioNTech COVID-19 vaccine was approved in the United Kingdom on December 2, 2020 2, and the Pfizer-BioNTech and Moderna vaccines were subsequently approved for emergency use authorization in the United States 3. In light of these recent developments, the potential impact of the distribution of COVID-19 vaccines is tremendous. Giving the public, health officials, and government model-based trend predictions and additional guidance on effective vaccine distribution and potential problems is critical.

In the last year, many papers have been published on modeling the COVID-19 pandemic 4-13. Several modeling studies are based on differential equation compartment models involving compartments for susceptible, infectious, and recovered individuals, commonly referred to as SIR models 14-19. Introducing additional compartments allows researchers to study the effect of vaccination and examine how to optimally distribute a vaccine. Matrajt et al. 20 used an age-stratified population to determine the consequence of vaccine effectiveness and population coverage of the vaccine to indicate the optimal vaccine allocation. Bubar et al. 21 accounted for the possibility of ruling out individuals with antibodies from receiving the vaccine using a serological test, and added an age-dependent effectiveness of the vaccine. Effects of vaccination have been examined in past outbreaks such as the 2009 H1N1 Swine Flu outbreak and the 2014-2016 Ebola epidemic 22-27. These studies have introduced population compartments that separate the population by their location and give insights into which locations should receive vaccines first 28.

A critical aspect of COVID-19 vaccination that remains unexplored is a population's behavioral changes during the prolonged period of vaccination.
While behavioral responses have not been addressed with respect to vaccines, efforts have been made to study the effects of non-vaccine-related behavioral changes for previous pandemics. These studies vary from models on the effectiveness of social measures like quarantining and social distancing 29-31 to characterizing the nature of spread of the disease 32,33. One particular example that was reasonably effective in modeling behavioral changes during a pandemic was the closed-loop feedback in the compartmental SIR model presented by Perra et al. 29. The authors examined behavioral changes by modeling the rate at which individuals enter self-imposed quarantine dependent on the number of infectious individuals.

Over the last year, the time history of new infected regional cases has fluctuated drastically and has posed significant challenges for the infectious disease modeling community. A model that can represent the region/population-specific COVID-19 cases accurately for the entire period of this pandemic has not yet been reported. We propose a mathematical model and a framework that incorporates the naturally occurring behavioral responses of a population to infectious cases, coupled with possible additional behavioral changes exhibited during vaccination, and that has the ability to represent infection dynamics for the entirety of available COVID-19 case data in United States (US) regions. We define "level of caution" to represent a population's precautionary/safe behavior during an ongoing pandemic that results from a combination of increased social distancing, use of personal protection equipment, improved hygiene, and lockdown regulations. We also introduce "sense of safety" to represent a population's return to normal, pre-pandemic behavior as more and more people are vaccinated. We introduce suitable mathematical forms to represent these two important behavioral aspects and incorporate these dynamic functions into our differential SIRDV model framework.
Fitting our model to available daily new infection case data for four major US states (Massachusetts, California, Florida, and South Dakota) and two major US cities (Atlanta and New York City) for the first year of the pandemic, we show that our modeling framework is versatile at capturing a large range of developments of the COVID-19 pandemic over time, and that it provides valuable insights into each population's underlying social behavior. Introducing a vaccine to the population in our model, we analyzed the interaction between the vaccine distribution rate and vaccine-related additional behavioral responses of the population. We used our model to predict future trends for the pandemic with the advent of vaccine distribution.
Results
We find that the time-dependent infectious disease transmission rate β(t) is best given by β = β_0 f_I f_V, where β_0 is the population maximum infection transmission rate observed in the absence of any preventative societal measures. f_I and f_V are the level of caution and sense of safety functions, respectively, proposed as:

$$f_I = e^{-d_I I}, \qquad f_V = \frac{1}{f_I} + \left(1 - \frac{1}{f_I}\right) e^{-d_V V}. \tag{1}$$
The function f_I models caution in a population, where its individuals take measures to reduce disease transmission through social distancing, personal protective equipment, hygiene, and local government mandates. The population's level of caution to the number of infectious cases is determined by a factor d_I, which was observed to change several times over a long duration in a given population due to changing population awareness and response, pandemic fatigue, seasons, and changing government mandates. These changes in the sensitivity of the population to the number of infectious cases give rise to the multiple peaks in the number of new infected cases observed nearly universally during the COVID-19 pandemic. f_I approaches 0 in the limiting case of very high values of the level of caution factor d_I, reflecting extreme cautionary measures by the population against the pandemic, and leads to negligible disease transmission. A d_I value approaching 0 gives f_I ≈ 1, reflecting a population whose behavior is approaching pre-pandemic levels of minimal disease-related precautionary actions.
In addition, we included a competing sense of safety in our model, in which measures to reduce disease transmission are gradually decreased due to an increasing proportion of the population becoming vaccinated, offsetting the effects of a reduced transmission rate arising from an underlying level of caution. However, as modeled in Eq. (1), the net transmission rate will never exceed the base maximum transmission rate β_0. As the sense of safety factor d_V approaches a very small value (the population not dropping its guard due to vaccinations), the sense of safety function f_V approaches 1 and f_V has no effect on β. On the contrary, a high d_V reflects an increased sense of safety, causing cautionary measures against disease transmission to be significantly reduced, leading to f_V = 1/f_I. In this case, the infection-related level of caution is completely negated and the disease transmission rate β approaches the population's highest transmission rate β_0.
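To make these two limits concrete, the short sketch below evaluates the behavior-dependent transmission rate of Eq. (1). It is written in Python for illustration (the authors implemented their model in MATLAB), and the values β_0 = 0.9 and d_V = 15 are assumptions chosen only to exercise the limiting behaviors, not fitted values from this paper:

```python
import numpy as np

def transmission_rate(I, V, beta0, d_I, d_V):
    """Behavior-dependent transmission rate of Eq. (1): beta = beta0 * f_I * f_V.

    I, V  : infectious and vaccinated population fractions (0..1)
    beta0 : maximum transmission rate without any precautions
    d_I   : level-of-caution factor; d_V : sense-of-safety factor
    """
    f_I = np.exp(-d_I * I)                                   # caution suppresses transmission
    f_V = 1.0 / f_I + (1.0 - 1.0 / f_I) * np.exp(-d_V * V)   # vaccination erodes that caution
    return beta0 * f_I * f_V

# With no vaccination f_V = 1, so strong caution keeps beta small;
# with widespread vaccination beta climbs back towards beta0.
print(transmission_rate(I=0.01, V=0.0, beta0=0.9, d_I=500, d_V=0))   # ~0.006: strong caution
print(transmission_rate(I=0.01, V=0.6, beta0=0.9, d_I=500, d_V=15))  # ~0.9:   caution negated
```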
Infection data fit and interpretation for COVID-19 in the United States.
We show that our modeling framework, which mathematically incorporates the dynamic level of caution within a SIRDV differential framework (shown in Fig. 7, represented by Eq. (2), and discussed in detail in the modeling approach section), is able to fit and predict the entire COVID-19 case history for the selected representative populations within the US. The model works for both the pre-vaccine and vaccination periods. Before vaccines become available, the vaccinated population fraction V in Eq. (1) stays zero, giving f_V = 1 and naturally leading to no effects from a vaccine-related sense of safety. For conciseness, we show results for selected key populations; the model can be applied to other regions/populations within or outside of the United States as well. The model was fit to four US states (Massachusetts, California, Florida, and South Dakota) and two major US cities (New York City and Atlanta). These regions were chosen to represent a variety of population densities and varying geographical locations. We accounted for the fact that the reported cases were lower than the actual infection cases in the population, due to lack of testing and asymptomatic cases, using a factor M. M has a high value at the beginning of the pandemic due to lack of testing and reduces to a lower value as testing becomes more available. The simplified M shown in Fig. 1a is assumed following the Centers for Disease Control and Prevention's assessment that only 1 out of 4.6 COVID-19 cases was reported in the US for 2020. As shown in Fig. 1, our behavioral model was able to accurately fit representative states and cities across the United States with few parameters for each region. Estimated parameters for each region are shown in Table 1.
The level of caution factor d_I obtained from the model fit is shown in Fig. 2a for different regions over the first year of the COVID-19 pandemic. A high level of caution factor indicates that the population was quick to adapt its behavior in response to an increase in infections by taking increasingly stringent measures to reduce its transmission rate. Level of caution in a specific population changes due to addition or removal of local government regulations, new information regarding the disease, seasonal changes in behavior, pandemic fatigue, news leading to additional fear, or any other factor that causes widespread changes in behavior and disease transmission rate. As expected, the results show that sudden drops in the level of caution factor d_I tend to precede surges in new cases due to relaxed social measures. Conversely, a reduction in the influx of new cases will occur due to a significant increase in d_I. This level of caution is independent of the baseline maximum transmission rate, β_0, and therefore provides a measure to compare social outlook towards the disease between different populations/regions.

The time-varying COVID-19 transmission rate β for each of the selected regions is shown in Fig. 2b. β_0 describes transmission in the earliest stages of the pandemic for a population, when knowledge of the disease and social measures against it were limited. Therefore, this value also describes the transmission that a specific population can be expected to return to when precautions against the infection become minimal, either as infectious cases approach zero or as the social response to the disease becomes very low. In addition to the infectious disease's inherent contagious characteristics, the base transmission rate depends on factors such as population density, contact rate, and everyday pre-pandemic behavior of its individuals. Likely due to such factors, we found that bustling New York City was on the higher end of the baseline transmission rate (with the basic reproductive ratio for New York City obtained to be R_0 = β_0/γ = 4.5), whereas a less densely populated state like South Dakota has a much lower baseline transmission rate and a lower R_0 value of 2.5. R_0 values for other regions can be found in Table 1.
To illustrate direct correlations between our model fit predictions and real-life events, we take New York City as an example. Starting March 22, New York implemented the "New York State on PAUSE" executive order, closing all non-essential businesses, canceling all non-essential gatherings, and mandating social distancing. This local government regulation is directly represented in the model results for level of caution, which show a significant increase in d_I following this mandate (Fig. 2a). Corresponding model results also show a sharp transition from one of the highest levels of disease transmission rates (Fig. 2b) to one of the lowest levels of transmission rates following this government mandate. Between September 1, 2020 and January 15, 2021, the model-based d_I values show that the level of caution in New York City transitioned from one of the highest values to one of the lowest. When we examine real events, we find that starting September 2020, a series of citywide re-openings were introduced, including the opening of gyms, malls (at 50% capacity), public K-12 schools, and indoor dining (25% occupancy). This reopening coincided with the holiday season in the US at the end of 2020 and resulted in a significant spike in new infection cases, directly correlating with our model predictions. Therefore, the level of caution parameter d_I is a metric that quantifies a population's behavior in response to an infectious disease outbreak; estimates of future d_I values will allow predictions of new infectious cases. There are clear trends in COVID-19 cases captured by our model that directly relate to local government-mandated health regulations. These results suggest that we may be able to incorporate possible behavioral changes into our model representing future government regulations, along with changes in vaccination rates, to predict infection outcomes.
Future COVID-19 dynamics with vaccination. The model incorporates the effect of vaccination and the behavioral response of the population to a growing number of people getting vaccinated, through the population's sense of safety. This sense of safety counteracts the underlying level of caution that a population always has in response to the number of infectious cases. Predictions from our model with the presence of vaccination show that the future trajectory of the pandemic will strongly depend on the population's behavior in response to the disease and vaccination. In Fig. 3, we show a range of potential infection outcomes for different levels of caution to the infection and different senses of safety due to vaccination. The selected ranges of d_I and d_V represent reasonable extremes of the level of caution and sense of safety factors. For the future trend predictions, we use starting values based on California's data as a representative population (to avoid multiple curves and repetition), which correlates reasonably with overall United States COVID-19 trends. The results, normalized as population fractions, provide critical COVID-19 future trends and insights that will be applicable to other regions as well.
We have assumed a vaccine effectiveness η of 95% based on the initial estimates of the two leading vaccines 34. η can be suitably selected for other vaccine types, or in the case of a different pandemic or epidemic with different vaccine effectiveness. The results are shown for estimated actual cases. Figure 3a models a population that does not alter its underlying behavior in response to the introduction of a vaccine, and instead only responds to the increasing infectious cases by increasing personal safety measures. All simulated curves show a swift reduction in cases following the vaccine, though decisive action and a stronger level of caution in response to the infection do show a considerable reduction in total infections. Figure 3b shows a scenario in which the population responds to the introduction of a vaccine by relaxing the social measures meant to slow the transmission. Although this sense of safety and increased normalcy may be a natural response to vaccines becoming more available, our predictions show that unfailing and continuing commitment to social preventative measures can significantly reduce the total number of future infections and even prevent a new surge and new peak that can happen if the population relaxes too soon. Note that regardless of the value of the sense of safety factor d_V, the net transmission rate in Eq. (1) never exceeds the population maximum β_0.
The effects of the vaccination rate α were examined (Fig. 4). We chose three values of α : the current rate of vaccination of 0.3% of the population per day 36 , a low ( 0.1% per day), and a high rate ( 0.5% per day). Note, the vaccination rate of 0.1% is not expected in the US but is shown to illustrate the consequences of low vaccination rate. As the vaccine distribution rate increases, the number of cases per day tends to zero quickly. However, as is shown in Fig. 4a,b, the population's social response to the vaccine, d V , has a significant effect on the pandemic trajectory. In cases where preventative measures were abandoned more quickly and the population had an increased sense of safety in response to the vaccines (high values of d V ), increased vaccination rates still result in cases quickly tending toward zero, but before this happens, the number of cases per day increases rapidly. This behavior worsens as d V increases, and this sharp increase occurs earlier as vaccination rate α increases. Our results show that in the US, COVID-19 can be reasonably controlled by late summer of 2021 proceeding with the currently planned vaccination rate. The results elucidate the importance of local health and government authorities becoming aware of the fact that the sense of safety and vaccine distribution rate are related parameters. As has been shown, a faster vaccination rate significantly decreases the duration of the pandemic. However, if authorities intend to distribute a vaccine very quickly, they must be extra cognizant of the population's behavioral response to it, as population relaxing its cautious practices could result in a noticeable increase in cases post vaccine rollout. If neglected, this peak under extreme circumstances could be disastrous. Therefore, based on our results, we recommend that proper disease transmission mitigating behavior be maintained, while welcoming a fast vaccine distribution rate.
To further quantify the relative effects of the sense of safety and the vaccine distribution rate on the total number of infectious cases, the total number of individuals infected (as population fraction) after the start of vaccination were plotted with respect to α and d V in Fig. 5. This was done for a special case of a very high level of caution during vaccine rollout. As expected, for a given value of vaccination rate, the number of total infected cases increases as the sense of safety factor increases along the x-axis. This behavior is especially pronounced for low vaccination rates, when large increases in the sense of safety can result in significant numbers of total infections, up to 26% as is shown in the yellow region of Fig. 5). Also, note that for very high value of d V , as the vaccination rate increases from 0, the number of infected cases quickly increases and then start to decrease again (pink box). For a very slow vaccination rate, vaccinated population dependent behavioral effects are limited due to our proposed relation for the sense of safety function f V , but quickly increase as the vaccinated individuals increase. This explains the increase in the total infections as one travels vertically in the pink box in Fig. 5. However, total infections then begin to decrease due to a critical vaccination rate being achieved, shown in the teal box. This reinforces the argument for the necessity of maximizing vaccine distribution; low vaccination rates can lead to behavior-related spikes in total cases, but these effects are mitigated as widespread vaccination outweighs these behavioral factors. www.nature.com/scientificreports/ We can expect some portion of the population to be unwilling or unable to receive a COVID-19 vaccine. The effect of the size of this group on the duration and severity of the pandemic was examined in Fig. 6. If large populations refuse vaccination, the duration of the pandemic can be prolonged. Fig. 7a, our model extends the general SIRD framework by adding the effect of vaccination and incorporating behavior based dynamics as an important capability specific to our study. The model consists of five compartments: Susceptible (S), Vaccinated (V), Infectious (I), Recovered (R), and Deceased (D). Here S, V, I, R, and D represent time dependent fractional variables with respect to the total population of the region of interest. Beginning in the susceptible compartment, individuals can follow the standard infection pathway through the infectious compartment then to either recovered or deceased. Alternatively, they can enter the vaccinated compartment following a fixed rate of vaccination α , where depending on the vaccine effectiveness η , a subset of the vaccinated population V S can become infected. The remainder of the vaccinated group V R is successfully vaccinated and have no risk of becoming infected. Currently, the reinfection rate is very low and its effect can be neglected for the timescale of our study. Note that given the uncertainty in how rapidly the vaccines will be deployed in the future, for our predictions, we have used constant vaccination rates and have shown sensitivities to different vaccination rates. Time dependent vaccination rates can be easily implemented by selecting a suitable function for α(t) in our model. 
Methods

Mathematical model. As shown in Fig. 7a, our model extends the general SIRD framework by adding the effect of vaccination and incorporating behavior-based dynamics as an important capability specific to our study. The model consists of five compartments: Susceptible (S), Vaccinated (V), Infectious (I), Recovered (R), and Deceased (D). Here S, V, I, R, and D represent time-dependent fractional variables with respect to the total population of the region of interest. Beginning in the susceptible compartment, individuals can follow the standard infection pathway through the infectious compartment and then to either recovered or deceased. Alternatively, they can enter the vaccinated compartment following a fixed rate of vaccination α, where, depending on the vaccine effectiveness η, a subset of the vaccinated population V_S can become infected. The remainder of the vaccinated group V_R is successfully vaccinated and has no risk of becoming infected. Currently, the reinfection rate is very low and its effect can be neglected for the timescale of our study. Note that given the uncertainty in how rapidly the vaccines will be deployed in the future, we have used constant vaccination rates for our predictions and have shown sensitivities to different vaccination rates. Time-dependent vaccination rates can be easily implemented by selecting a suitable function for α(t) in our model.

Our SIRDV model for a region/population is described by the following equations:

$$\begin{aligned}
\frac{dS}{dt} &= -\beta S I - \alpha, &
\frac{dV}{dt} &= -\beta(1-\eta)VI + \alpha, &
\frac{dR}{dt} &= \gamma I, \\
\frac{dI}{dt} &= \big[\beta S + \beta(1-\eta)V - \mu\gamma - \gamma\big]\, I, &
\frac{dD}{dt} &= \mu\gamma I,
\end{aligned} \tag{2}$$

where β represents the dynamic transmission rate, µ represents the mortality rate, and γ represents the recovery rate. The family of curves represented by this set of equations with constant parameters is considerably limited, as it assumes that the population does not change its behavior at all over the course of the outbreak. The significant differences and variations in disease transmission across different populations and over the course of the COVID-19 pandemic have shown that an understanding and modeling of dynamic population behavior changes is critical in predicting a real-world pandemic. To model these population behavioral attributes, we have incorporated a simple framework for a behavior-based, time-dependent net disease transmission rate β that is dependent on both the current infectious and vaccinated populations. With β_0 as the population maximum transmission rate, f_I (0 < f_I ≤ 1) as the level of caution function, and f_V (1 ≤ f_V ≤ 1/f_I) as the sense of safety function, we propose the following mathematical forms for the behavior-dependent transmission rate:

$$f_I = e^{-d_I I}, \qquad f_V = \frac{1}{f_I} + \left(1 - \frac{1}{f_I}\right) e^{-d_V V}, \qquad \beta = \beta_0 f_I f_V. \tag{3}$$

All the model parameters are described in Fig. 7b. The resultant effects of the infectious and vaccinated populations on the disease transmission rate β are shown in Fig. 7c,d for a range of d_I and d_V values. As shown, the transmission rate decays to a smaller value at high infectious populations due to more cautionary and preventive actions with a higher level of caution. This decay slows significantly as a higher percentage of the population gets vaccinated, due to the sense of safety from vaccination. The sensitivity of β to the infectious population size is determined by the population's d_I, while the extent to which preventative measures are abandoned due to vaccine distribution is determined by d_V. Note that from the mathematical form in Eq. (3), in the absence of vaccines, V = 0 implies f_V = 1, which is physically and intuitively correct.
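A minimal numerical sketch of Eqs. (2)-(3) is given below, using Python with SciPy (the authors' implementation was in MATLAB, and all parameter values here are illustrative assumptions rather than the fitted values of Table 1):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not the paper's fitted values)
beta0, gamma, mu, eta = 0.9, 0.1, 0.02, 0.95
d_I, d_V, alpha = 500.0, 10.0, 0.003   # behavior factors, vaccination rate per day

def rhs(t, y):
    S, V, I, R, D = y
    f_I = np.exp(-d_I * I)
    f_V = 1 / f_I + (1 - 1 / f_I) * np.exp(-d_V * V)
    beta = beta0 * f_I * f_V                       # Eq. (3)
    a = alpha if S > 0 else 0.0                    # stop vaccinating when S is exhausted
    dS = -beta * S * I - a                         # Eq. (2)
    dV = -beta * (1 - eta) * V * I + a
    dI = (beta * S + beta * (1 - eta) * V - mu * gamma - gamma) * I
    dR = gamma * I
    dD = mu * gamma * I
    return [dS, dV, dI, dR, dD]

y0 = [0.799, 0.0, 0.001, 0.2, 0.0]                 # S, V, I, R, D fractions (sum to 1)
sol = solve_ivp(rhs, (0, 365), y0, max_step=1.0)
print(f"final infectious fraction: {sol.y[2, -1]:.2e}")
```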
Model fit to specific regions. Combining Eq. (2) with our dynamic behavior model in Eq. (3), we are able to fit the complex, multimodal infection curves observed during the course of the COVID-19 pandemic. We take the disease mortality rate µ and the recovery rate γ (the inverse of γ represents the infectious period) as constants. The baseline transmission rate β_0 was determined as the parameter that best fit the first rise in cases, when there was limited social response to the rise in infections. To represent the behavioral changes that were evident in the multiple peaks of the pandemic, we introduced multiple behavioral regions for each population. The model fit shows that each region had different level of caution factors (infection responsiveness) d_I, which provides an estimate of how public perception of the disease varied in each region over the course of the pandemic. Each behavioral response is represented by a fixed d_I, and smooth transitions were implemented via cosine interpolation, as displayed in Fig. 2a. To reduce the risk of overfitting our model, we limited the number of behavioral response changes for each location and found that, for the locations that were selected, a minimum of either three or four behavioral regions was sufficient to accurately represent the complete reported infection data. The differential equation model was implemented in MATLAB.
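The piecewise-constant level of caution with smooth transitions can be sketched as follows (a minimal illustration of cosine interpolation between two d_I plateaus; the breakpoints and plateau values are assumed for illustration only):

```python
import numpy as np

def d_I_profile(t, t0, t1, d_start, d_end):
    """Cosine interpolation of d_I from d_start to d_end over [t0, t1] (days)."""
    t = np.asarray(t, dtype=float)
    s = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)    # 0 before t0, 1 after t1
    w = (1 - np.cos(np.pi * s)) / 2                # smooth 0 -> 1 weight
    return (1 - w) * d_start + w * d_end

t = np.arange(0, 200)
d_I = d_I_profile(t, t0=80, t1=100, d_start=100, d_end=500)  # one behavioral change
```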
Model parameters were fit to the daily new case time series of four states and two cities: Massachusetts, California, Florida, South Dakota, New York City, and Atlanta. These regions were chosen to represent a variety of population densities, locations across the US, and responses to the pandemic. To determine the parameters, we used MATLAB's bound-constrained optimization function 37, minimizing the root mean square error between our model predictions and the reported number of cases per day. Simulations for each population began on March 15 and had an initial infected population equal to the number of new cases in the previous 1/γ days, to estimate those who were still in the infectious period.
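An analogous bound-constrained fit can be sketched in Python (the paper used MATLAB's optimizer; here `simulate_reported_cases` is a hypothetical placeholder for a routine that integrates the SIRDV model and returns predicted daily reported cases, and the synthetic data exist only to make the sketch runnable):

```python
import numpy as np
from scipy.optimize import minimize

t_days = np.arange(0, 300)

def simulate_reported_cases(params, t):
    """Placeholder: integrate the SIRDV model for `params` and return
    predicted daily reported cases (actual cases divided by M)."""
    beta0, d_I = params
    return np.full_like(t, 100.0 * beta0 / d_I, dtype=float)  # stand-in shape

daily_cases = np.random.default_rng(0).poisson(1.0, size=t_days.size).astype(float)

def rmse_loss(params):
    predicted = simulate_reported_cases(params, t_days)
    return np.sqrt(np.mean((predicted - daily_cases) ** 2))

res = minimize(rmse_loss, x0=np.array([0.5, 100.0]),
               method="L-BFGS-B", bounds=[(0.05, 2.0), (1.0, 2000.0)])
print(res.x)  # fitted [beta0, d_I] under the stated bounds
```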
The reported infection case data is limited by the fact that tests were not universally available during the beginning stages of the pandemic. Additionally, certain infected populations did not show any symptoms 38 but still may have transmitted the disease, further complicating estimates of new cases. To compare the actual case predictions from the model against the reported cases, we introduced a factor M, which represents the number of actual cases per reported case. At the early stages of the pandemic, awareness and testing were lacking, but later on they improved significantly. We account for this by using a value of M = M_0 for the initial stages of the pandemic, which transitions to a lower value of M = M_f well into the pandemic, when testing becomes widely available. The transition between the two values of M is taken to be smooth using a sigmoidal function and is shown in Fig. 1a. The variation of M with time can be represented as

$$M = M_f + \frac{M_0 - M_f}{1 + e^{\delta_M (t - t_M)}}, \tag{4}$$

where δ_M and t_M describe the smooth transition from M_0 to M_f.

Disease dynamics with vaccination. After fitting the model to reported real-world data, the effects of vaccination and its potential effects on behavioral response (sense of safety) were examined. Specifically, we chose the state of California as a representative case, which displayed an average infectivity rate of COVID-19 among the states and cities that we surveyed and an infection curve that was somewhat representative of the overall United States. Varying the vaccine distribution rate α, the sense of safety parameter d_V, and the fraction of unvaccinated individuals S_unvaccinated in our model, we evaluated the progression of the disease by examining the predicted number of cases reported per day in the future. The predicted results are presented in Figs. 3, 4, 5, and 6.
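The reporting factor of Eq. (4) is straightforward to evaluate; in the sketch below, M_0 = 4.6 follows the CDC-based value cited earlier, while M_f, δ_M, and t_M are assumed transition parameters for illustration:

```python
import numpy as np

def reporting_factor(t, M0=4.6, Mf=2.0, delta_M=0.1, t_M=150.0):
    """Eq. (4): actual cases per reported case, decaying smoothly from M0 to Mf."""
    return Mf + (M0 - Mf) / (1 + np.exp(delta_M * (t - t_M)))

t = np.arange(0, 365)
estimated_actual_cases = 1000.0 * reporting_factor(t)  # reported -> estimated actual
```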
Discussion
We have developed an infectious disease dynamics model which accounts for behavioral changes in a population, considering the level of caution due to growing infectious cases as well as a counteracting trend towards increasing normalcy, relaxing precautionary measures due to a sense of safety from increasing vaccine deployment. Our mathematical model accurately captures the infection trends for the first year of the COVID-19 pandemic for all of the US regions examined, with a small number of parameters. A comparison of model parameters between different regions allows comparative insights between them. It demonstrates direct relationships between population behavior model parameters and major government actions that impact population behavior. It allows measurement of several important population- and infectious-disease-specific quantities, including the highest disease transmission rate β_0, the disease transmission rate at any given time β, and a measure of a population's behavior to reduce the disease transmission through the parameter d_I, where, in the absence of significant vaccination, d_I/100 >> 1 indicates a safe response and d_I/100 < 1 represents a lack of caution. We found that although a faster vaccine rollout will bring the end of COVID-19 more quickly, there exist scenarios where a fast vaccine rollout can give a false sense of safety to the population, which will lead to a large short-term increase in infectious cases. This sense of safety could also cause weakening of restrictions by the local authorities, further exacerbating the pandemic, especially for areas hit with strains exhibiting lower vaccine efficacy in spite of decent vaccine coverage. We also found that if a large proportion of the population chooses to stay unvaccinated, this can have an adverse effect on the length of the pandemic. Prudence is required on the part of authorities to understand, predict, and limit any potential surge by increasing encouragement of all cautionary measures to prevent the spread of the virus. Our results indicate that in the United States, COVID-19 can be reasonably controlled by August 2021.
$$f_I = e^{-d_I I}, \qquad f_V = \frac{1}{f_I} + \left(1 - \frac{1}{f_I}\right) e^{-d_V V}, \qquad \beta = \beta_0 \, f_I \, f_V \tag{3}$$

$$M = M_f + \frac{M_0 - M_f}{1 + e^{\delta_M (t - t_M)}} \tag{4}$$
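For concreteness, Eqs. (3) and (4) translate directly into code. The following Python sketch is ours rather than part of the original analysis; the function names and the example parameter values (M_0 = 10, M_f = 2, and so on) are purely illustrative.

```python
import numpy as np

def f_I(I, d_I):
    """Level-of-caution factor from Eq. (3); I is the infectious fraction."""
    return np.exp(-d_I * I)

def f_V(I, V, d_I, d_V):
    """Sense-of-safety factor from Eq. (3); V is the vaccinated fraction."""
    fi = f_I(I, d_I)
    return 1.0 / fi + (1.0 - 1.0 / fi) * np.exp(-d_V * V)

def beta(I, V, beta_0, d_I, d_V):
    """Effective transmission rate beta = beta_0 * f_I * f_V from Eq. (3)."""
    return beta_0 * f_I(I, d_I) * f_V(I, V, d_I, d_V)

def M_of_t(t, M_0, M_f, delta_M, t_M):
    """Sigmoidal actual-to-reported case ratio from Eq. (4)."""
    return M_f + (M_0 - M_f) / (1.0 + np.exp(delta_M * (t - t_M)))

# Illustrative check: M decays smoothly from M_0 = 10 toward M_f = 2.
print(M_of_t(np.array([0.0, 200.0, 400.0]), 10.0, 2.0, 0.05, 200.0))
```

Note that f_V = 1 when V = 0 and approaches 1/f_I as V grows, so β relaxes from β_0 f_I back toward β_0 as vaccination instills a sense of normalcy.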
While our model is built on significant physical insights, the population's future behavioral aspects, the presence of asymptomatic cases, a lack of exact knowledge about future vaccination rates, and other factors create some uncertainties. Therefore, although the quantitative predictions from our study are important, all possible uncertainties should be considered. Due to a reasonable homogeneity of vaccine distribution in the U.S., we have assumed that the vaccination rate is constant over the period of interest and for each population. It is important to note that the vaccine rollout rate varies from region to region, especially when considering different parts of the world. In addition, for a specific region, the vaccination rate can vary significantly over a period of time. In this case, the vaccination rate α can be assumed to be a time-dependent function in our model. In our predictions, we have not considered varying efficacies of vaccines against different SARS-CoV-2 variants. In the future, as more data on efficacy become available, this information can be incorporated into the model-based predictions.
The results allow new insights into future COVID-19 trends and the sensitivity of pandemic dynamics to various behavioral and other model parameters. As more exact information becomes available, new data can be directly incorporated into our model to produce more accurate results. Our study involved a reasonably diverse range of populations and their responses to the pandemic. The numerical values and ranges for the model parameters found in this study could be used as estimates to predict potential infection outcomes for scenarios where limited data are available (e.g., future pandemics). The proposed model provides a new framework for predicting the infection dynamics of future pandemics and epidemics. As model-based predictions become increasingly accurate, we expect that they will help guide informed policy decisions for the general public.
Data availability
COVID data were obtained from the Center for Systems Science and Engineering (CSSE) COVID-19 Data Repository at Johns Hopkins University 39. Estimates for the total population of each region were obtained from the United States Census Bureau 40.
Figure 1. Results of our behavioral model's fit to COVID-19 data across six populations in the United States. (a) Shows M, representing the ratio of actual infectious cases to reported cases. Reported new infection case data versus model fit for (b) Massachusetts, (c) Florida, (d) South Dakota, (e) California, (f) New York City, and (g) Atlanta. Gray bars represent the daily reports of new COVID-19 cases. The dashed brown lines are the corresponding 7-day average. Our model's predictions for reported daily cases are shown as the solid red lines.
Table 1. Parameters for the four states and two cities for the 2020 COVID-19 pandemic. The initial basic reproduction number, R_0 = β_0/γ, represents the expected number of secondary infections if an infectious individual was placed in the population, without any social preventative measures. d_I values represent its constant value in each of the corresponding behavioral periods. Smooth transitions between constant values of d_I were applied.

Figure 2. (a) Level of caution factor d_I and (b) disease transmission rate β over the course of COVID-19 in six populations across the United States.
Figure 3. Effect of (a) sense of caution factor, d_I, and (b) sense of safety factor, d_V, on the pandemic trajectory. The dashed blue bar represents the introduction of a vaccine with 95% effectiveness, at a fixed rate of 0.3% of the population per day. Extreme values of d_I represent the extremes of social behavior: the most responsive (black dotted) and the least (red). The sensitivity analysis in (a) considered a fixed sense of safety d_V = 0, and in (b) a fixed level of caution d_I = 500.

For any value of the sense of safety factor d_V, the number of new cases per day drops to zero around the same time; however, the peak number of cases while progressing toward this point is very different for different d_V values.
Figure 4. Impact of vaccination rate, α, with two social responses to the vaccine. The population in (a) shows no changes in behavior in response to the vaccine, while the population in (b) relaxes its preventative measures as the vaccine is distributed.
Figure 5. Contour plot of the sense of normalcy, d_V, and vaccination rate, α, versus the total infections since the start of vaccination (red values shown as a proportion of the total population). Shown for a high level of caution case (d_I = 500). The pink box shows a possibility of an increasing number of total infected cases with increasing vaccination rate at a very high sense of safety, and the teal box shows that once the vaccination rate crosses a threshold, the total number of infection cases drops with vaccination rate even at a very high sense of safety.

Figure 6. Results for certain fractions of the population remaining unvaccinated. The entire COVID-19 case history and future predictions are shown in terms of estimated actual new infection cases for California as a representative example. Gray bars represent the daily reports of new COVID-19 cases. The dashed brown lines are the corresponding 7-day average. Our model predictions for the entire duration are also shown as green, gray and yellow curves.
Figure 7. (a) SIRDV compartment model with susceptible (S), vaccinated (V), infectious (I), recovered (R), and deceased (D) populations. Flow between compartments is shown with arrows. (b) Description of model parameters. (c) and (d) show variations in disease transmission rate β due to social response to infectious and vaccinated populations. (c) β for a range of level of caution factor d_I (100, 200, 400), and (d) β for a range of sense of safety factor d_V (1, 2, 4).
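Fig. 7a fixes the compartments and flows, but the governing equations themselves sit in the Methods section outside this excerpt. The sketch below is therefore only one plausible reading of those flows, with an assumed vaccine-efficacy parameter eps reducing (but not eliminating) infection of vaccinated individuals; all numerical values are illustrative, not fitted.

```python
import numpy as np
from scipy.integrate import odeint

def sirdv_rhs(y, t, beta_0, d_I, d_V, alpha, gamma, mu, eps):
    """Assumed SIRDV right-hand side mirroring the arrows of Fig. 7a."""
    S, V, I, R, D = y
    f_i = np.exp(-d_I * I)                                   # level of caution, Eq. (3)
    f_v = 1.0 / f_i + (1.0 - 1.0 / f_i) * np.exp(-d_V * V)   # sense of safety, Eq. (3)
    b = beta_0 * f_i * f_v                                   # behavioral transmission rate
    dS = -b * S * I - alpha * S                    # infection and vaccination outflows
    dV = alpha * S - (1.0 - eps) * b * V * I       # vaccinated, reduced susceptibility
    dI = b * S * I + (1.0 - eps) * b * V * I - (gamma + mu) * I
    dR = gamma * I
    dD = mu * I
    return [dS, dV, dI, dR, dD]

t = np.linspace(0.0, 365.0, 366)     # one year, daily resolution
y0 = [0.999, 0.0, 0.001, 0.0, 0.0]   # population fractions
sol = odeint(sirdv_rhs, y0, t,
             args=(0.3, 500.0, 2.0, 0.003, 0.1, 0.002, 0.95))
```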
© The Author(s) 2021
Acknowledgements
Authors would like to thank John Antolik and Thomas Bohac for discussions during the early phase.

Author contributions
T.U. and Z.L. performed simulations and helped develop the model. V.S. conceptualized and led the development of the model. All authors contributed to the writing of the manuscript.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to V.S. Reprints and permissions information is available at www.nature.com/reprints. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).
References
1. World Health Organization. WHO Coronavirus Disease (COVID-19) Dashboard (2021).
2. Ledford, H., Cyranoski, D. & Noorden, R. V. The UK has approved a COVID vaccine-Here's what scientists now want to know. Nature 588, 205-206 (2020).
3. Commissioner, O. o. t. COVID-19 Vaccines. FDA (2021).
4. Bertozzi, A. L., Franco, E., Mohler, G., Short, M. B. & Sledge, D. The challenges of modeling and forecasting the spread of covid-19. Proc. Nat. Acad. Sci. 117, 16732-16738 (2020).
5. Estrada, E. Covid-19 and sars-cov-2 modeling the present, looking at the future. Phys. Rep. 869, 1-51 (2020).
6. Giordano, G. et al. Modelling the covid-19 epidemic and implementation of population-wide interventions in Italy. Nat. Med. 26, 855-860 (2020).
7. Kaxiras, E., Neofotistos, G. & Angelaki, E. The first 100 days: Modeling the evolution of the covid-19 pandemic. Chaos Solitons Fractals 138 (2020).
8. Korolev, I. Identification and estimation of the seird epidemic model for covid-19. J. Econ. 220, 63-85 (2021).
9. Tsay, C., Lejarza, F., Stadtherr, M. A. & Baldea, M. Modeling, state estimation, and optimal control for the us covid-19 outbreak. Sci. Rep. 10, 10711 (2020).
10. Kucharski, A. J. et al. Early dynamics of transmission and control of covid-19: A mathematical modelling study. Lancet Infect. Dis. 20, 553-558 (2020).
11. He, S., Peng, Y. & Sun, K. Seir modeling of the covid-19 and its dynamics. Nonlinear Dyn. 101, 1667-1680 (2020).
12. Prem, K. et al. The effect of control strategies to reduce social mixing on outcomes of the covid-19 epidemic in Wuhan, China: A modelling study. Lancet Public Health 5, e261-e270 (2020).
13. Kennedy, D. M., Zambrano, G. J., Wang, Y. & Neto, O. P. Modeling the effects of intervention strategies on covid-19 transmission dynamics. J. Clin. Virol. 128 (2020).
14. Roda, W. C., Varughese, M. B., Han, D. & Li, M. Y. Why is it difficult to accurately predict the covid-19 epidemic? Infect. Dis. Modell. 5, 271-281 (2020).
15. Liu, M., Thomadsen, R. & Yao, S. Forecasting the spread of covid-19 under different reopening strategies. Sci. Rep. 10, 20367 (2020).
16. Acemoglu, D., Chernozhukov, V., Werning, I. & Whinston, M. D. Optimal targeted lockdowns in a multi-group sir model. Working Paper 27102, National Bureau of Economic Research (2020).
17. Postnikov, E. B. Estimation of covid-19 dynamics "on a back-of-envelope": Does the simplest sir model provide quantitative parameters and predictions? Chaos Solitons Fractals 135 (2020).
18. Brauer, F. Mathematical epidemiology: Past, present, and future. Infect. Dis. Modell. 2, 113-127 (2017).
19. Kermack, W. O. & McKendrick, A. A contribution to the mathematical theory of epidemics. Proc. R. Soc. A 115, 700-721 (1927).
20. Matrajt, L., Eaton, J., Leung, T. & Brown, E. R. Vaccine optimization for COVID-19: Who to vaccinate first? medRxiv 2020.08.14.20175257 (2020).
21. Bubar, K. M. et al. Model-informed COVID-19 vaccine prioritization strategies by age and serostatus. medRxiv 2020.09.08.20190629 (2020).
22. Feng, Z., Towers, S. & Yang, Y. Modeling the effects of vaccination and treatment on pandemic influenza. AAPS J. 13, 427-437 (2002).
23. Scherer, A. & McLean, A. Mathematical models of vaccination. Br. Med. Bull. 62, 187-199 (2002).
24. Chowell, G., Tariq, A. & Kiskowski, M. Vaccination strategies to control Ebola epidemics in the context of variable household inaccessibility levels. PLoS Negl. Trop. Dis. 13 (2019).
25. Lee, B. Y., Haidari, L. A. & Lee, M. S. Modelling during an emergency: The 2009 H1N1 influenza pandemic. Clin. Microbiol. Infect. 19, 1014-1022 (2013).
26. Larson, R. C. & Teytelman, A. Modeling the effects of H1N1 influenza vaccine distribution in the United States. Value Health J. Int. Soc. Pharmacoecon. Outcomes Res. 15, 158-166 (2012).
27. Potluri, R. et al. Impact of prophylactic vaccination strategies on Ebola virus transmission: A modeling analysis. PLoS ONE 15 (2020).
28. Yu, Z. et al. Efficient vaccine distribution based on a hybrid compartmental model. PLoS ONE 11 (2016).
29. Perra, N., Balcan, D., Gonçalves, B. & Vespignani, A. Towards a characterization of behavior-disease models. PLoS ONE 6 (2011).
30. Sardar, T., Nadim, S. S., Rana, S. & Chattopadhyay, J. Assessment of lockdown effect in some states and overall India: A predictive mathematical study on COVID-19 outbreak. Chaos Solitons Fractals 139 (2020).
31. Kim, S., Seo, Y. B. & Jung, E. Prediction of COVID-19 transmission dynamics using a mathematical model considering behavior changes in Korea. Epidemiol. Health 42 (2020).
32. Sarkar, K., Khajanchi, S. & Nieto, J. J. Modeling and forecasting the COVID-19 pandemic in India. Chaos Solitons Fractals 139 (2020).
33. Reiner, R. C. et al. Modeling COVID-19 scenarios for the United States. Nat. Med. 1-12 (2020).
34. Kim, J. H., Marks, F. & Clemens, J. D. Looking beyond covid-19 vaccine phase 3 trials. Nat. Med. (2021).
35. Kaiser Health News. Biden Aims for 100 Million COVID Vaccinations in First 100 Days (2021).
36. Massachusetts Department of Public Health. COVID-19 Vaccination Program. Mass.gov (2021).
37. MathWorks. Matlab bound constrained optimization using fminsearch (2021).
38. Nogrady, B. What the data say about asymptomatic COVID infections. Nature 587, 534-535 (2020).
39. Dong, E., Du, H. & Gardner, L. An interactive web-based dashboard to track COVID-19 in real time. Lancet Infect. Dis. 20, 533-534 (2020).
40. Bureau, U. C. County Population Totals 2010-2019 (2019).
| [] |
[
"Knowledge Graphs: Opportunities and Challenges",
"Knowledge Graphs: Opportunities and Challenges"
] | [
"Ciyuan Peng ciyuan.p@outlook.com \nInstitute of Innovation, Science and Sustainability\nFederation University Australia\n3353Ballarat\n\nVIC\nAustralia\n\nIntroduction\n\n",
"· Feng Xia \nSchool of Computing Technologies\nRMIT University\n3000Melbourne\n\nVIC\nAustralia\n",
"· Mehdi Naseriparsa \nGlobal Professional School\nFederation University Australia\n3353Ballarat\n\nVIC\nAustralia\n",
"Francesco Osborne francesco.osborne@open.ac.uk \nKnowledge Media Institute\nThe Open University\nMilton KeynesMK7 6AAUK\n",
"Feng Xia f.xia@ieee.org ",
"Ciyuan Peng ",
"Mehdi Naseriparsa m.naseriparsa@federation.edu.au ",
"Francesco Osborne ",
"C Peng \nInstitute of Innovation, Science and Sustainability\nFederation University Australia\n3353Ballarat\n\nVIC\nAustralia\n\nGlobal Professional School\nFederation University Australia\n3353Ballarat\n\nVIC\nAustralia\n\nIntroduction\n\n"
] | [
"Institute of Innovation, Science and Sustainability\nFederation University Australia\n3353Ballarat",
"VIC\nAustralia",
"Introduction\n",
"School of Computing Technologies\nRMIT University\n3000Melbourne",
"VIC\nAustralia",
"Global Professional School\nFederation University Australia\n3353Ballarat",
"VIC\nAustralia",
"Knowledge Media Institute\nThe Open University\nMilton KeynesMK7 6AAUK",
"Institute of Innovation, Science and Sustainability\nFederation University Australia\n3353Ballarat",
"VIC\nAustralia",
"Global Professional School\nFederation University Australia\n3353Ballarat",
"VIC\nAustralia",
"Introduction\n"
] | [] | With the explosive growth of artificial intelligence (AI) and big data, it has become vitally important to organize and represent the enormous volume of knowledge appropriately. As graph data, knowledge graphs accumulate and convey knowledge of the real world. It has been well-recognized that knowledge graphs effectively represent complex information; hence, they rapidly gain the attention of academia and industry in recent years. Thus to develop a deeper understanding of knowledge graphs, this paper presents a systematic overview of this field. Specifically, we focus on the opportunities and challenges of knowledge graphs. We first review the opportunities of knowledge graphs in terms of two aspects: (1) AI systems built upon knowledge graphs; (2) potential application fields of knowledge graphs. Then, we thoroughly discuss severe technical challenges in this field, such as knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning. We expect that this survey will shed new light on future research and the development of knowledge graphs. | 10.1007/s10462-023-10465-9 | [
"https://export.arxiv.org/pdf/2303.13948v1.pdf"
] | 257,757,244 | 2303.13948 | 97df0cc032460ce74c0ec44ca82c15d9e299280e |
Knowledge Graphs: Opportunities and Challenges
Ciyuan Peng (Institute of Innovation, Science and Sustainability, Federation University Australia, Ballarat, VIC 3353, Australia) · Feng Xia (School of Computing Technologies, RMIT University, Melbourne, VIC 3000, Australia) · Mehdi Naseriparsa (Global Professional School, Federation University Australia, Ballarat, VIC 3353, Australia) · Francesco Osborne (Knowledge Media Institute, The Open University, Milton Keynes, MK7 6AA, UK)

Accepted: 9 March 2023 · https://doi.org/10.1007/s10462-023-10465-9

Keywords: Knowledge graphs · Artificial intelligence · Graph embedding · Knowledge engineering · Graph learning
With the explosive growth of artificial intelligence (AI) and big data, it has become vitally important to organize and represent the enormous volume of knowledge appropriately. As graph data, knowledge graphs accumulate and convey knowledge of the real world. It has been well-recognized that knowledge graphs effectively represent complex information; hence, they rapidly gain the attention of academia and industry in recent years. Thus to develop a deeper understanding of knowledge graphs, this paper presents a systematic overview of this field. Specifically, we focus on the opportunities and challenges of knowledge graphs. We first review the opportunities of knowledge graphs in terms of two aspects: (1) AI systems built upon knowledge graphs; (2) potential application fields of knowledge graphs. Then, we thoroughly discuss severe technical challenges in this field, such as knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning. We expect that this survey will shed new light on future research and the development of knowledge graphs.
Introduction
Knowledge plays a vital role in human existence and development. Learning and representing human knowledge are crucial tasks in artificial intelligence (AI) research. While humans are able to understand and analyze their surroundings, AI systems require additional knowledge to obtain the same abilities and solve complex tasks in realistic scenarios (Ji et al. 2021). To support these systems, we have seen the emergence of many approaches for representing human knowledge according to different conceptual models. In the last decade, knowledge graphs have become a standard solution in this space, as well as a research trend in academia and industry (Kong et al. 2022).
Knowledge graphs are defined as graphs of data that accumulate and convey knowledge of the real world. The nodes in knowledge graphs represent the entities of interest, and the edges represent the relations between the entities (Hogan et al. 2021;Cheng et al. 2022a). These representations utilize formal semantics, which allows computers to process them efficiently and unambiguously. For example, the entity "Bill Gates" can be linked to the entity "Microsoft" because Bill Gates is the founder of Microsoft; thus, they have relationships in the real world.
Due to the great significance of knowledge graphs in processing heterogeneous information within a machine-readable context, a considerable amount of research has been conducted continuously on these solutions in recent years (Dai et al. 2020a). The proposed knowledge graphs are widely employed in various AI systems (Ko et al. 2021; Mohamed et al. 2021), such as recommender systems, question answering, and information retrieval. They are also widely applied in many fields (e.g., education and medical care) to benefit human life and society (Bounhas et al. 2020).
Therefore, knowledge graphs have seized great opportunities by improving the quality of AI systems and being applied to various areas. However, research on knowledge graphs still faces significant technical challenges. For example, there are major limitations in the current technologies for acquiring knowledge from multiple sources and integrating it into a typical knowledge graph. Consequently, it is necessary to analyze knowledge graphs with respect to both their opportunities and their challenges to develop a better understanding of this field.
To deeply understand the development of knowledge graphs, this survey extensively analyzes knowledge graphs in terms of their opportunities and challenges. Firstly, we discuss the opportunities of knowledge graphs in terms of two aspects: AI systems whose performance is significantly improved by knowledge graphs and application fields that benefit from knowledge graphs. Then, we analyze the challenges of knowledge graphs by considering the limitations of knowledge graph technologies. The main contributions of this paper are as follows:
• Survey on knowledge graphs: We conduct a comprehensive survey of existing knowledge graph studies. In particular, this work thoroughly analyzes the advancements in knowledge graphs in terms of state-of-the-art technologies and applications.
• Knowledge graph opportunities: We investigate potential opportunities for knowledge graphs in terms of knowledge graph-based AI systems and application fields that utilize knowledge graphs. Firstly, we examine the benefits of knowledge graphs for AI systems, including recommender systems, question-answering systems, and information retrieval. Then, we discuss the far-reaching impacts of knowledge graphs on human society by describing current and potential knowledge graph applications in various fields (e.g., education, scientific research, social media, and medical care).
• Knowledge graph challenges: We provide deep insights into significant technical challenges facing knowledge graphs. In particular, we elaborate on limitations concerning five representative knowledge graph technologies, including knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning.
The rest of the paper is organized as follows. Section 2 provides an overview of knowledge graphs, including the definitions and the categorization of existing research on knowledge graphs. To examine the opportunities of knowledge graphs, Section 3 and Section 4 introduce relevant AI systems and application fields, respectively. Section 5 details the challenges of knowledge graphs based on the technologies. Finally, we conclude this paper in Section 6.
Overview
In this section, the definition of knowledge graphs is provided first. Then, we categorize significant state-of-the-art research in this area.
What Are Knowledge Graphs?
A knowledge base is a typical data set that represents real-world facts and semantic relations in the form of triplets. When the triplets are represented as a graph with edges as relations and nodes as entities, it is considered a knowledge graph. Generally, the knowledge graph and knowledge base are regarded as the same concept and are used interchangeably. In addition, the schema for a knowledge graph can be defined as an ontology, which shows the properties of a specific domain and how they are related. Therefore, one essential stage of knowledge graph construction is ontology construction. In 2012, Google first put forward the term Knowledge Graph by introducing their knowledge base called the Google Knowledge Graph (Ehrlinger and Wöß 2016). Afterward, many knowledge graphs were introduced and adopted, such as:
• DBpedia, a knowledge graph that intends to discover semantically meaningful information from Wikipedia and convert it into an effective well-structured ontological knowledge base (Auer et al. 2007).
• Freebase, a knowledge graph which is built upon multiple sources that provides a structured and global resource of information (Bollacker et al. 2008).
• Facebook's entity graph, a knowledge graph that converts the unstructured content of the user profiles into meaningful structured data (Ugander et al. 2011).
• Wikidata, a cross-lingual document-oriented knowledge graph which supports many sites and services such as Wikipedia (Vrandečić and Krötzsch 2014).
• Yago, a quality knowledge base that contains a huge number of entities and their corresponding relationships. These entities are extracted from multiple sources such as Wikipedia and WordNet (Rebele et al. 2016).
• WordNet, a lexical knowledge base measuring the semantic similarity between words. The knowledge base contains a number of hierarchical concept graphs to analyse the semantic similarity (Pedersen et al. 2004).
A knowledge graph is a directed graph composed of nodes and edges, where a node indicates an entity (a real object or abstract concept), and the edge between two nodes conveys the semantic relation between the two entities (Bordes et al. 2011). Resource Description Framework (RDF) and Labeled Property Graphs (LPGs) are two typical ways to represent and manage knowledge graphs (Färber et al. 2018; Baken 2020). The fundamental unit of a knowledge graph is the triple (subject, predicate, object) (or (head, relation, tail)), e.g., (Bill Gates, founderOf, Microsoft). Since the relation is not necessarily symmetric, the direction of a link matters. Therefore, a knowledge graph can also be seen as a directed graph in which the head entities point to the tail entities via the relation's edge. Fig. 1 depicts an example of a simple knowledge graph. As shown in Fig. 1, nodes e_1 and e_2, darkened in color, are connected by relation r_1, which goes from e_1 to e_2. Therefore, e_1, e_2, and r_1 can form the triplet (e_1, r_1, e_2), in which e_1 and e_2 are the head and tail entities, respectively.
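As a minimal illustration of this triple-based representation (not drawn from any of the surveyed systems), a knowledge graph can be stored as a plain set of (head, relation, tail) tuples and queried by pattern matching; the second triple below is an invented example.

```python
# A knowledge graph as a set of (head, relation, tail) triples.
kg = {
    ("Bill Gates", "founderOf", "Microsoft"),
    ("Microsoft", "headquarteredIn", "Redmond"),  # invented illustrative fact
}

def match(head=None, relation=None, tail=None):
    """Return all triples consistent with the pattern; None acts as a wildcard."""
    return [(h, r, t) for (h, r, t) in kg
            if (head is None or h == head)
            and (relation is None or r == relation)
            and (tail is None or t == tail)]

print(match(head="Bill Gates"))     # all facts whose head entity is Bill Gates
print(match(relation="founderOf"))  # all founder relations in the graph
```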
Current Research on Knowledge Graphs
In recent years, knowledge graphs have gained extensive research interest. Plenty of studies have focused on exploring knowledge graphs. This paper conducts a comprehensive survey on knowledge graphs and lists seven important categories of current research on this topic. Fig. 2 illustrates a schema of the most popular research lines regarding knowledge graphs. Among them, AI systems are services that utilize knowledge graphs as their foundation, and application fields are domains where knowledge graphs reach. These two research lines are listed for discussing the opportunities of knowledge graphs. The other five research lines are the five main knowledge graph technologies, corresponding to five tasks. In this paper, we introduce these five technologies and emphasize their limitations to give useful insights into the major challenges of knowledge graphs.

Fig. 1 An example of a knowledge graph. In this knowledge graph, (e_1, r_1, e_2) is a triplet that indicates e_1 and e_2 are connected by relation r_1
Knowledge Graph Embedding
Knowledge graph embedding is one of the central research issues. This task aims to map the entities and relations of a knowledge graph to a low-dimensional vector space so as to capture the semantics and the structure of the knowledge graph efficiently (Dai et al. 2020b). The obtained feature vectors can then be effectively learned by machine learning models. The three main triplet fact-based embedding methods are: (a) tensor factorization-based, (b) translation-based, and (c) neural network-based methods (Dai et al. 2020b).
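To make the translation-based family concrete, the snippet below sketches TransE-style scoring, where a triplet (h, r, t) is plausible when the embedding of h shifted by r lands near t. The vectors here are random stand-ins; in practice they are learned, typically with a margin-based ranking loss.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy vocabulary; real knowledge graphs contain millions of entities.
entities = ["BillGates", "Microsoft", "Seattle"]
relations = ["founderOf", "locatedIn"]
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def transe_score(h, r, t):
    """Higher (less negative) score means a more plausible triplet under TransE."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

print(transe_score("BillGates", "founderOf", "Microsoft"))
```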
Knowledge Acquisition
Knowledge acquisition, which focuses on modeling and constructing knowledge graphs, is another crucial research direction of knowledge graph study. Typically, the knowledge is imported from structured sources by employing mapping languages, such as R2RML (Rodriguez-Muro and Rezk 2015). Furthermore, the knowledge could be extracted from unstructured documents (e.g., news, research papers, and patents) by adopting relation, entity, or attribute extraction methods (Liu et al. 2020;Yu et al. 2020;Yao et al. 2019).
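As a toy illustration of extraction from unstructured text (far simpler than the trained extractors cited above), a single hand-written pattern can already turn sentences into triplets; the pattern and relation name below are ours.

```python
import re

# "X is the founder of Y" -> (X, founderOf, Y); real systems use learned
# named-entity recognition and relation classification instead of one regex.
PATTERN = re.compile(r"(?P<head>[A-Z][\w ]+?) is the founder of (?P<tail>[A-Z][\w ]+)")

def extract_triples(sentence):
    return [(m.group("head").strip(), "founderOf", m.group("tail").strip())
            for m in PATTERN.finditer(sentence)]

print(extract_triples("Bill Gates is the founder of Microsoft"))
# [('Bill Gates', 'founderOf', 'Microsoft')]
```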
Knowledge Graph Completion
Although there are many methods for constructing knowledge graphs, it is still unfeasible to create comprehensive representations of all the knowledge in a field. Most knowledge graphs still lack a good number of entities and relationships. Thereby, significant efforts have been made for completing knowledge graphs. Knowledge graph completion aims to improve the quality of knowledge graphs by predicting additional relationships and entities. The first task typically adopts link prediction techniques to generate triplets and then assigns the triplets plausibility scores (Ji et al. 2021). The second task employs entity prediction methods for obtaining and integrating further information from external sources.
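A minimal sketch of embedding-based link prediction, the core of the first completion task, ranks every entity as a candidate tail for a query (h, r, ?); as before, the random vectors below stand in for learned embeddings.

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 50
entities = ["BillGates", "Microsoft", "Seattle", "Harvard"]
E = {e: rng.normal(size=dim) for e in entities}
r_founderOf = rng.normal(size=dim)  # stand-in for a learned relation vector

def rank_tails(head):
    """Rank candidate tails for (head, founderOf, ?) by TransE-style distance."""
    scores = {t: -np.linalg.norm(E[head] + r_founderOf - E[t])
              for t in entities if t != head}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_tails("BillGates"))  # highest-scoring candidate first
```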
Knowledge Fusion
Knowledge fusion is also an important research direction that focuses on capturing knowledge from different sources and integrating it into a knowledge graph (Nguyen et al. 2020). The knowledge fusion approaches are useful for both generating and completing knowledge graphs. Recently, entity alignment has been the primary method for implementing knowledge fusion tasks.
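As a bare-bones illustration of entity alignment (the entity names and threshold are ours), two knowledge graphs can be linked by matching lexically similar entity labels; production systems additionally exploit graph structure, attributes, and embeddings.

```python
from difflib import SequenceMatcher

kg1_entities = ["Bill Gates", "Microsoft Corp."]
kg2_entities = ["William H. Gates", "Microsoft Corporation", "Seattle"]

def align(e1_list, e2_list, threshold=0.6):
    """Greedily pair each entity with its most string-similar counterpart."""
    pairs = []
    for a in e1_list:
        best = max(e2_list,
                   key=lambda b: SequenceMatcher(None, a.lower(), b.lower()).ratio())
        score = SequenceMatcher(None, a.lower(), best.lower()).ratio()
        if score >= threshold:
            pairs.append((a, best, round(score, 2)))
    return pairs

print(align(kg1_entities, kg2_entities))
```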
Knowledge Reasoning
Tremendous research efforts have focused on reasoning to enrich the knowledge graphs, which aims to infer new facts based on existing data (Minervini et al. 2020). In particular, new relations between two unconnected entities are inferred, forming new triplets. Also, by reasoning out the false facts, knowledge reasoning has the ability to identify erroneous knowledge. The main methods for knowledge reasoning include logic rule-based, distributed representation-based, and neural network-based methods (Chen et al. 2020b).
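A minimal logic rule-based reasoner can be sketched as forward chaining over triples; the single Horn rule and the relation names below are invented for illustration, not taken from the cited methods.

```python
facts = {
    ("BillGates", "founderOf", "Microsoft"),
    ("Microsoft", "headquarteredIn", "Redmond"),
}

# Toy rule: (x, founderOf, y) and (y, headquarteredIn, z) => (x, associatedWith, z).
def forward_chain(triples):
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        for (x, r1, y) in list(inferred):
            for (y2, r2, z) in list(inferred):
                if r1 == "founderOf" and r2 == "headquarteredIn" and y == y2:
                    new_fact = (x, "associatedWith", z)
                    if new_fact not in inferred:
                        inferred.add(new_fact)
                        changed = True
    return inferred - triples  # only the newly inferred facts

print(forward_chain(facts))  # {('BillGates', 'associatedWith', 'Redmond')}
```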
AI Systems
Nowadays, knowledge graphs are widely utilized by AI systems, such as recommenders, question-answering systems, and information retrieval tools. Typically, the richness of information within knowledge graphs enhances the performance of these solutions. Therefore, many studies have focused on taking advantage of knowledge graphs to improve AI systems' performance.
Application Fields
Knowledge graphs have numerous applications in various fields, including education, scientific research, social media, and medical care (Li et al. 2020b). A variety of intelligent applications are required to improve the standard of human life.
Differing from other works, this paper focuses on surveying the opportunities and challenges of knowledge graphs. In particular, knowledge graphs meet great opportunities by improving the quality of AI services and being applied in various fields. Conversely, this paper regards the limitations of knowledge graph technologies as the challenges. Therefore, we will discuss the technical limitations regarding knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning.
Knowledge Graphs for AI Systems
This section explains the opportunities by analyzing the advantages that knowledge graphs bring for improving the functionalities of AI systems. Specifically, several systems, including recommender systems, question-answering systems, and information retrieval tools (Guo et al. 2020; Zou 2020), utilize knowledge graphs as their input data and benefit the most from knowledge graphs. In addition to these systems, other AI systems, such as image recognition systems, have started to consider the characteristics of knowledge graphs. However, the application of knowledge graphs in these systems is not widespread. Moreover, these systems do not directly optimize performance by utilizing knowledge graphs as input data. Therefore, the advantages that knowledge graphs bring to recommender systems, question-answering systems, and information retrieval tools are discussed in detail to analyze the opportunities of knowledge graphs. Typically, these solutions greatly benefit from adopting knowledge graphs that offer high-quality representations of the domain knowledge. Table 1 presents a summary of the AI systems that we will discuss below.
Recommender Systems
With the continuous development of big data, we observe the exponential growth of information. In the age of information explosion, it becomes challenging for people to receive valid and reliable information (Shokeen and Rana 2020;Monti et al. 2021;Gómez et al. 2022). Specifically, online users may feel confused when they want to select some items they are interested in among thousands of choices. To tackle this issue, we saw the emergence of several recommender systems to provide users with more accurate information. Typically, recommender systems learn the preference of target users for a set of items (Wan et al. 2020;Zheng and Wang 2022) and produce a set of suggested items with similar characteristics. Recommender systems are fruitful solutions to the information explosion problem and are employed in various fields for enhancing user experience (Quijano-Sánchez et al. 2020).
Traditional Recommender Systems
There are two traditional methods for developing recommender systems, including content-based and collaborative filtering-based (CF-based) methods. Sun et al. (2019) and Guo et al. (2020) have compared and summarised these two approaches.
Content-Based Recommender Systems
The content-based recommender systems first analyze the content features of items (e.g., descriptions, documents). These items are previously scored by the target users (Guo et al. 2020;Xia et al. 2014b). Then, the recommender systems learn the user interests by employing machine learning models. Thus, these systems are able to effectively recommend trending items to the target users according to their preferences. Some recommender systems utilize the content of the original query result to discover highly-related items for the users that may interest them (Naseriparsa et al. 2019a). These systems employ machine learning techniques or statistical measures such as correlation to compute the highly-similar items to those that are visited by the users (Naseriparsa et al. 2019b). Another group of content-based recommender systems employs lexical references such as dictionaries to utilize semantic relationships of the user query results to recommend highly semantically-related items to the users that may directly satisfy their information needs (Naseriparsa et al. 2018;Sun et al. 2017).
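A content-based recommender can be reduced to a few lines: build a profile from the feature vectors of items the user has rated highly, then rank unseen items by similarity. The genre features and movie names below are invented for illustration.

```python
import numpy as np

# Items described by toy feature vectors: [action, sci-fi, drama].
items = {
    "Interstellar": np.array([0.2, 0.9, 0.4]),
    "Inception":    np.array([0.6, 0.8, 0.3]),
    "Titanic":      np.array([0.1, 0.0, 0.9]),
}
liked = ["Inception"]  # items previously scored highly by the target user

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

profile = np.mean([items[i] for i in liked], axis=0)
scores = {name: cosine(profile, vec)
          for name, vec in items.items() if name not in liked}
print(max(scores, key=scores.get))  # the most content-similar unseen item
```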
CF-Based Recommender Systems
CF-based recommender systems suggest items to the users based on the information of user-item interaction (Chen et al. 2020c). CF-based recommender systems infer the user preference by clustering similar users instead of extracting the features of the items (Wang et al. 2019a). However, we face data sparsity and cold start problems in traditional CF-based systems. In general, users can only rate a few items among a large number of items, which prevents many items from receiving appropriate feedback. Therefore, the recommender systems do not learn user preferences accurately because of data sparsity (Bai et al. 2019; Xia et al. 2014a). On the other hand, the cold start problem makes it even more difficult to make recommendations when the items or users are new because there is no historical data or ground truth. Moreover, because abundant user information is required for achieving effective recommendations, CF-based recommender systems face privacy issues. How to achieve personalized recommendations while protecting the privacy of users is still an unsolved problem.
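The user-based variant of collaborative filtering described above admits an equally small sketch: predict an unknown rating as a similarity-weighted average over other users who rated the item. The rating matrix is toy data.

```python
import numpy as np

# Rows are users, columns are items; 0 denotes "not yet rated" (toy data).
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
], dtype=float)

def predict(user, item):
    """Similarity-weighted average of ratings from users who rated the item."""
    weighted = []
    for other in range(R.shape[0]):
        if other == user or R[other, item] == 0:
            continue
        a, b = R[user], R[other]
        mask = (a > 0) & (b > 0)  # compare only co-rated items
        if not mask.any():
            continue
        sim = a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask]))
        weighted.append((sim, R[other, item]))
    den = sum(abs(s) for s, _ in weighted)
    return sum(s * r for s, r in weighted) / den if den else 0.0

print(predict(0, 2))  # user 0's predicted rating for item 2
```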
Knowledge Graph-Based Recommender Systems
To address inherent problems of traditional approaches, the community has produced several hybrid recommender systems, which consider both item features and the distribution of user scores. Most of these solutions adopt knowledge graphs for representing and interlinking items (Palumbo et al. 2020). Specifically, Knowledge graph-based recommender systems integrate knowledge graphs as auxiliary information and leverage users and items networks to learn the relationships of items-users, items-items, and users-users (Palumbo et al. 2018).
Fig. 3 presents an example of knowledge graph-based movie recommendation. Here we can see that the movies "Once Upon A Time in Hollywood" and "Interstellar" are recommended to three users according to a knowledge graph that contains nodes for users, films, directors, actors, and genres. The knowledge graph is thus used to infer latent relations between the user and the recommended movies.

Recently, a great deal of research has been conducted to utilize knowledge graphs for recommendation tasks. For instance, Wang et al. (2019b) introduced KPRN, a recommender system that generates entity-relation paths according to the user-item interaction and constructs a knowledge graph that consists of the users, items, and their interaction. It then infers the user preference based on the entity-relation path. The user-item interaction, which is extracted from knowledge graphs, improves the quality of the recommendations and allows the presentation of the recommended results in a more explainable manner. Wang et al. (2019c) also applied multi-task knowledge graph representation (MKR) for recommendation tasks. MKR models knowledge graphs based on the user-item interaction. It is worth noting that MKR focuses on the structural information of knowledge graphs for learning the latent user-item interaction. Sun et al. (2020) proposed a Multi-modal Knowledge Graph Attention Network (MKGAT) for achieving precise recommendations. MKGAT constructs knowledge graphs based on two aspects: (1) it enriches entity information by extracting the information of the neighbor entities;
(2) it scores the triplets to construct the reasoning relations. Finally, they applied knowledge graphs enriched with structured data to recommender systems. Wang et al. (2018b) presented the RippleNet model, which incorporates knowledge graphs into recommendation tasks by preference propagation. RippleNet first regards users' historical records as the basis of a knowledge graph. Then, it predicts the user preference list among candidate items based on the knowledge graph links. Building on both the RippleNet and MKR models, the Ripp-MKR model combines the advantages of preference propagation and user-item interaction to mine the latent information of knowledge graphs. Shu and Huang (2021) proposed RKG, which achieves recommendation by referring to a user preference-based knowledge graph. RKG first obtains users' preference lists; then, it analyzes the relations between the user's preferred items and the items which are to be recommended. Therefore, the model effectively learns the scores of the candidate items according to the relationships between the candidate items and the user's preferred items.
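Returning to the Fig. 3 example, the simplest knowledge graph-based recommendation signal is connectivity: score candidate items by how many graph attributes (directors, genres, actors) they share with what the user has already watched. The toy graph below mirrors the spirit of Fig. 3; all names are illustrative.

```python
edges = [
    ("user1", "watched", "Inception"),
    ("Inception", "directedBy", "Nolan"),
    ("Interstellar", "directedBy", "Nolan"),
    ("Inception", "hasGenre", "SciFi"),
    ("Interstellar", "hasGenre", "SciFi"),
    ("Titanic", "hasGenre", "Drama"),
]

def recommend(user, candidates):
    """Rank candidates by shared knowledge-graph attributes with watched items."""
    watched = {t for h, r, t in edges if h == user and r == "watched"}
    attrs = {t for h, r, t in edges if h in watched and r != "watched"}
    scores = {m: len({t for h, r, t in edges if h == m} & attrs)
              for m in candidates if m not in watched}
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("user1", ["Interstellar", "Titanic"]))
# [('Interstellar', 2), ('Titanic', 0)]
```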
Many studies have utilized ontological knowledge base information to improve the retrieval of results from various data sources (Farfán et al. 2009). Wu et al. (2013) adopted the ontological knowledge base to extract highly semantically similar sub-graphs in graph databases. Their method effectively recommends semantically relevant sub-graphs according to ontological information. Farfán et al. (2009) proposed XOntoRank, which adopts the ontological knowledge base to facilitate data exploration and recommendation on XML medical records. Compared with traditional recommender systems, knowledge graph-based recommender systems have the following advantages:
• Better Representation of Data: Generally, the traditional recommender systems suffer from data sparsity issues because users usually have experience with only a small number of items. However, the rich representation of entities and their connections in knowledge graphs alleviates this issue.
• Alleviating Cold Start Issues: It becomes challenging for traditional recommender systems to make recommendations when there are new users or items in the data set. In knowledge graph-based recommender systems, information about new items and users can be obtained through the relations between entities within knowledge graphs. For example, when a new Science-Fiction movie such as "Tenet" is added to the data set of a movie recommender system that employs knowledge graphs, the information about "Tenet" can be gained by its relationship with the genre Science-Fiction (gaining the triplet (Tenet, has genre of, Sci-Fi)).
• The Explainability of Recommendation: Users and the recommended items are connected along with the links in knowledge graphs. Thereby, the reasoning process can be easily illustrated by the propagation of knowledge graphs.
Question-Answering Systems
Question answering is one of the most central AI services, which aims to search for the answers to natural language questions by analyzing their semantic meanings (Dimitrakis et al. 2020; Das et al. 2022). Traditional question-answering systems match the textual questions with the answers in an unstructured text database. In the search process, the semantic relationship between the question and answer is analyzed; then, the system matches the questions and answers with the maximum semantic similarity. Finally, the system outputs the answer. However, the answers are obtained by filtering massive unstructured data, which deteriorates the efficiency of the traditional question-answering systems due to the enormous search space. To solve this issue, a lot of research focuses on employing structured data for question answering, particularly knowledge graph-based question-answering systems (Singh et al. 2020; Qiu et al. 2020).
The sophisticated representation of information in knowledge graphs is a natural fit for question-answering systems. Knowledge graph-based question-answering systems typically analyze the user question and retrieve the portion of the knowledge graph needed for answering. The answering task is facilitated either by using similarity measures or by producing structured queries in standard formats (e.g., SPARQL). Fig. 4 presents an example of a knowledge graph-based question-answering system. The system answer "Shakespeare" is a node that is linked to the node "Romeo". The node "Romeo" is extracted from the question.
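The Fig. 4 answer can be reproduced with a two-hop graph lookup; the triples and relation names below are our reconstruction of the figure, not the implementation of any cited system.

```python
# Toy knowledge graph for the Fig. 4 example.
triples = [
    ("Shakespeare", "wrote", "Romeo and Juliet"),
    ("Romeo and Juliet", "hasCharacter", "Romeo"),
]

def who_wrote_character(character):
    """Two-hop traversal: character -> work -> author."""
    works = [h for h, r, t in triples if r == "hasCharacter" and t == character]
    return [h for h, r, t in triples if r == "wrote" and t in works]

print(who_wrote_character("Romeo"))  # ['Shakespeare']
```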
There are two main types of questions in this space: simple and multi-hop questions. Simple questions are answered only by referring to a single triplet, while multi-hop questions require combining multiple entities and relations. Focusing on simple questions, Huang et al. (2019) proposed a knowledge graph embedding-based question-answering system (KEQA). They translated the question and its corresponding answer into a single triplet. For instance, the question "Which film acted by Leonardo" and one of its answers, "Inception", can be expressed as the following triplet: (Leonardo, act, Inception). Then, the head entity, relation, and tail entity of the triplet are represented by a vector matrix in the embedding space for learning the question-answer information. Considering the semantic meanings of the questions, Shin et al. (2019) presented a predicate constraint-based question-answering system (PCQA). They took advantage of the predicate constraints of knowledge graphs, which are triplets containing a subject, a predicate, and an object, to capture the connection between questions and answers. By using the triplet for question-answer integration, the processing of the question-answering service can be simplified, and the results improve. Bauer et al. (2018) focused on multi-hop questions and proposed a Multi-Hop Pointer-Generator Model (MHPGM). They selected the relation edges that are related to the questions in a knowledge graph and injected attention to achieve multi-hop question answering. Because of the advantages of knowledge graphs' structure, multi-hop question answering can extract coherent answers effectively. Saxena et al. (2020) proposed EmbedKGQA to achieve multi-hop question answering over sparse knowledge graphs (such as knowledge graphs with missing edges). The main idea of EmbedKGQA is to utilize knowledge graph embeddings to reduce knowledge graph sparsity. It first creates embeddings of all entities and then selects the embedding of a given question. Lastly, it predicts the answer by combining these embeddings.
Compared to traditional question answering, the advantages of knowledge graph-based question-answering systems can be summarized as follows:
• Increased Efficiency: Instead of searching for answers from massive textual data, which may contain a large volume of useless data items, knowledge graph-based question-answering systems focus only on entities with relevant properties and semantics. Therefore, they reduce the search space significantly and extract the answers effectively and efficiently.
• Multi-hop Question Answering: The answers can be more complex and sophisticated than the ones produced with traditional methods since facts and concepts from knowledge graphs can be combined via multi-hop question answering.
Information Retrieval
Information retrieval enables retrieval systems to match end-user queries with relevant documents, such as web pages (Liu et al. 2019). Traditional information retrieval systems index the documents, match them against the user queries, and return the matched documents to the users (Hersh 2021). Nevertheless, index processing is complex and time-consuming because of the massiveness and diversity of documents. As a result, traditional information retrieval faces the challenges of inaccurate search results and potentially low efficiency. Also, since search engines have limitations with respect to text interpretation ability, keyword-based text search usually outputs limited results. Thus, to address these problems, many modern search engines take advantage of knowledge graphs (Bounhas et al. 2020; Zheng et al. 2020). Knowledge graph-based information retrieval introduces a new research direction that takes advantage of knowledge graphs for improving the performance of search engines and the explainability of the results. Typically, these systems rely on an advanced representation of the documents based on entities and relationships from knowledge graphs. These formal and machine-readable representations are then matched to the user query for retrieving the more pertinent documents. For instance, Wise et al. (2020) proposed a COVID-19 Knowledge Graph (CKG) to extract the relationships between scientific articles about COVID-19. In particular, they combined the topological information of documents with their semantic meaning to construct document knowledge graphs. Wang et al. (2018a) proposed a knowledge graph-based information retrieval technology that extracts entities by mining entity information on web pages via an open-source relation extraction method. Then, the entities with relationships are linked to construct a knowledge graph.
Knowledge graphs can also support methods for query expansion, which is able to enrich the user query by adding relevant concepts (e.g., synonymous). For example, Dalton et al. (2014) presented an entity query feature expansion (EQFE) to enrich the queries based on the query knowledge graph, including structured attributes and text. Liu et al. (2018) proposed the Entity-Duet Neural Ranking Model (EDRM). EDRM integrates the semantics extracted from knowledge graphs with the distributed representations of entities in queries and documents. Then, it ranks the search results using interaction-based neural ranking networks.
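At its simplest, knowledge graph-driven query expansion just appends the entities directly related to each query term before retrieval; the tiny neighbor map below is invented for illustration.

```python
# Directly related entities per query term, as would be read off a knowledge graph.
neighbors = {
    "COVID-19": ["SARS-CoV-2", "coronavirus", "pandemic"],
    "vaccine": ["immunization", "mRNA vaccine"],
}

def expand_query(terms, max_per_term=2):
    expanded = list(terms)
    for term in terms:
        expanded.extend(neighbors.get(term, [])[:max_per_term])
    return expanded

print(expand_query(["COVID-19", "vaccine"]))
# ['COVID-19', 'vaccine', 'SARS-CoV-2', 'coronavirus', 'immunization', 'mRNA vaccine']
```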
Compared to traditional information retrieval, the knowledge graph-based information retrieval has the following advantages:
• Semantic Representation of Items: Items are represented according to a formal and interlinked model that supports semantic similarity, reasoning, and query expansion. This typically allows the system to retrieve more relevant items and makes the system more interpretable.
• High Search Efficiency: Knowledge graph-based information retrieval can use the advanced representation of the items to reduce the search space significantly (e.g., discarding documents that use the same terms with different meanings), resulting in improved efficiency.
• Accurate Retrieval Results: In knowledge graph-based information retrieval, the correlation between the query and documents is analyzed based on the relations between entities in the knowledge graph. This is more accurate than finding the similarities between queries and documents.
Applications and Potentials
In this section, we discuss the applications and potentials of knowledge graphs in four domains: education, scientific research, social networks, and health/medical care. Although some researchers try to take advantage of knowledge graphs to develop beneficial applications in other domains such as finance (Cheng et al. 2022a), knowledge graph-based intelligent services in these areas are relatively immature and still need to be explored. Therefore, this section mainly focuses on education, scientific research, social networks, and medical care to summarize the opportunities of knowledge graphs. Table 2 presents several recent applications of knowledge graphs that make contributions to these fields.
Education
Education is of great importance to the development of human society. Many studies have focused on deploying intelligent applications to improve the quality of education (Bai et al. 2021;Wang et al. 2020c). Specifically, in the age of big data, data processing becomes a challenging task because of the complex and unstructured educational data. Thereby, intelligent educational systems tend to apply structured data, such as knowledge graphs. Several knowledge graph-based applications support the educational process, focusing in particular on data processing and knowledge dissemination (Yao et al. 2020).
In education, the quality of offline school teaching is of vital importance. Therefore, several knowledge graph-based applications focus on supporting teaching and learning. For example, considering the importance of course allocation tasks in university, Aliyu et al. (2020) proposed a knowledge graph-based course management approach to achieve automatic course allocation. They constructed a course knowledge graph in which the entities are courses, lecturers, course books, and authors in order to suggest relevant courses to students. Chen et al. (2018) presented KnowEdu, a system for educational knowledge graph construction, which automatically builds knowledge graphs for learning and teaching in schools. First, KnowEdu extracts the instructional concepts of the subjects and courses as the entity features. Then, it identifies the educational relations based on the students' assessments and activities to make the teaching effect more remarkable.
The abovementioned knowledge graph-based intelligent applications are dedicated to improving the quality of offline school teaching. However, online learning has become a hot trend recently. Moreover, online study became an indispensable way of learning for students during the COVID-19 pandemic (Saraji et al. 2022). Struggling with confusing online content (e.g., learning content of low quality on social media), students face major challenges in acquiring significant knowledge efficiently. Therefore, researchers have focused on improving online learning environments by constructing education-efficient knowledge graphs (d'Aquin 2016; Pereira et al. 2017). For example, to facilitate online learning and establish connections between formal learning and social media, Zablith (2022) proposed to construct a knowledge graph by integrating social media and formal educational content. The produced knowledge graph can then filter social media content that is fruitful for formal learning and help students learn online more efficiently.
Offline school teaching and online learning are two essential parts of education, and it is necessary to improve the quality of both to promote the development of education. Significantly, knowledge graph-based intelligent applications can deal with complicated educational data and make both offline and online education more convenient and efficient.
Scientific Research
A variety of knowledge graphs focus on supporting the scientific process and assisting researchers in exploring research knowledge and identifying relevant materials (Xia et al. 2016). They typically describe documents (e.g., research articles, patents), actors (e.g., authors, organizations), entities (e.g., topics, tasks, technologies), and other contextual information (e.g., projects, funding) in an interlinked manner. For instance, Microsoft Academic Graph (MAG) is a heterogeneous knowledge graph that contains the metadata of more than 248M scientific publications, including citations, authors, institutions, journals, conferences, and fields of study. The AMiner Graph (Zhang et al. 2018) is the corpus of more than 200M publications generated and used by the AMiner system [1]. The Open Academic Graph (OAG) [2] is a massive knowledge graph that integrates Microsoft Academic Graph and AMiner Graph. AceKG (Wang et al. 2018c) is a large-scale knowledge graph that provides 3 billion triples of academic facts about papers, authors, fields of study, venues, and institutes, as well as the relations among them. In addition to constructing academic knowledge graphs, many researchers also take advantage of knowledge graphs to develop applications beneficial to scientific research. Chi et al. (2018) proposed a scientific publication management model that helps non-researchers learn sustainability methods from research thinking. They built a knowledge graph-based academic network to manage scientific entities, in which the entities, including researchers, papers, journals, and organizations, are connected according to their properties. For the convenience of researchers, many scientific knowledge graph-based recommender systems, covering citation recommendation, collaboration recommendation, and reviewer recommendation, have been proposed (Shao et al. 2021). For instance, Yong et al. (2021) designed a knowledge graph-based reviewer assignment system to achieve precise matching of reviewers and papers. In particular, they combined knowledge graphs with recommendation rules to establish a rule engine for the recommendation process.
Social Networks
With the rapid growth of social media platforms such as Facebook and Twitter, online social networks have permeated human life and bring numerous benefits, such as establishing social relationships and convenient information acquisition (Li et al. 2020a; Hashemi and Hall 2020). Various social knowledge graphs have been modeled and applied to analyze critical information from social networks. These knowledge graphs are usually constructed from people's activities and posts on social media and are applied in numerous applications with different functions.
Remarkably, social media offers people great opportunities to make friends and obtain personalized information. At the same time, it raises fundamental problems, such as how to recommend content that matches a user's interests and how to connect people interested in a common topic. To address these issues, various studies have proposed matching users with their favorite content (or friends) for recommendation (Ying et al. 2018). With increasing user demand, a number of researchers have adopted knowledge graph-based approaches for more precise recommendations (Gao et al. 2020). A representative example is GraphRec, a graph neural network framework for social recommendations proposed by Fan et al. (2019). They considered two kinds of social knowledge graphs, user-user and user-item graphs, and extracted information from both for the learning task. As a result, their model provides accurate social recommendations because it aggregates the social relationships of users together with the interactions between users and items.
In addition, people's activities on social media reveal social relationships. For example, we can learn about the relationships around a person through their photos or comments on Twitter. Social relationship extraction assists companies in tracking users and enhancing the user experience, so many works are devoted to it. Wang et al. (2018d) proposed a graph reasoning model to recognize the social relationships of people in a picture posted on social media. Their model combines a social knowledge graph with deep neural networks: they initialized the relation edges and entity nodes with features extracted from the semantic objects in an image and then employed a gated graph neural network (GGNN) to propagate information through the knowledge graph, thereby inferring the relationships of the people in the picture.
One of the biggest problems in this space is fake news (Zhang et al. 2019a). Online social media has become the principal platform for people to consume news, so a considerable amount of research has been devoted to fake news detection (Choi et al. 2020; Meel and Vishwakarma 2020). Most recently, Mayank et al. (2021) proposed a knowledge graph-based model called DEAP-FAKED to detect fake news on social media. Specifically, DEAP-FAKED encodes news content and identifies the entities in the news as nodes of a knowledge graph. Afterward, a GNN-based technique is applied to encode the entities and detect anomalies that may be linked with fake news.
Health/Medical Care
With medical information growing explosively, medical knowledge analysis plays an instrumental role in many healthcare systems. Research therefore focuses on integrating medical information into knowledge graphs so that intelligent systems can understand and process medical knowledge quickly and correctly (Li et al. 2020b). Recently, a variety of biomedical knowledge graphs have become available, and many medical care applications exploit them. For instance, Zhang et al. (2020a) presented a Health Knowledge Graph Builder (HKGB) to build medical knowledge graphs with clinicians' expertise.
Specifically, we discuss the three most common intelligent medical care applications: medical recommendation, health misinformation detection, and drug discovery. Firstly, with the rapid development of the medical industry, medical choices have become more abundant. Faced with this variety, however, people often feel confused and are unable to identify the most suitable, personalized medical treatment. Therefore, medical recommender systems, especially biomedical knowledge graph-based recommender systems (such as doctor and medicine recommender systems), have been put forward to deal with this issue (Katzman et al. 2018). Taking medicine recommendation as an example, Gong et al. (2021) provided a medical knowledge graph embedding method that constructs a heterogeneous graph whose nodes are medicines, diseases, and patients to recommend accurate and safe medicine prescriptions for complicated patients.
Secondly, although many healthcare platforms aim to provide accurate medical information, health misinformation is an inevitable problem. Health misinformation is defined as incorrect information that contradicts authentic medical knowledge, or biased information that covers only part of the facts (Wang et al. 2020d). Unfortunately, a great deal of health-related information on various healthcare platforms (e.g., medical information on social media) is misinformation. Worse, such wrong information can lead to serious medical malpractice, which makes its detection urgent. Utilizing authoritative medical knowledge graphs to detect and filter misinformation can help people make correct treatment decisions and suppress the spread of misinformation (Cui et al. 2020). Representatively, Cui et al. (2020) presented a model called DETERRENT to detect health misinformation. DETERRENT leverages a knowledge-guided attention network that combines an article-entity graph with a medical knowledge graph.
Lastly, drug discovery, such as drug repurposing and drug-drug interaction prediction, has been a research trend for intelligent healthcare in recent years. Benefiting from the rich entity information (e.g., the ingredients of a drug) and relationship information (e.g., the interaction of drugs) in medical knowledge graphs, drug discovery based on knowledge graphs is one of the most reliable approaches (MacLean 2021). Lin et al. (2020) presented an end-to-end framework called KGNN (Knowledge Graph Neural Network) for drug-drug interaction prediction. The main idea of KGNN is to mine the relations between drugs and their potential neighborhoods in medical knowledge graphs. It first exploits the topological information of each entity; then, it aggregates all the neighborhood information from the local receptive entities to extract both semantic relations and high-order structures. Wang et al. (2020e) developed a knowledge discovery framework called COVID-KG to generate COVID-19-related drug repurposing reports. They first constructed multimedia knowledge graphs by extracting medicine-related entities and their relations from images and texts. Afterward, they utilized the constructed knowledge graphs to generate drug repurposing reports.
Technical Challenges
Although knowledge graphs offer fantastic opportunities for various services and applications, many challenges are yet to be addressed (Noy et al. 2019). Specifically, the limitations of existing knowledge graph technologies are the key challenges for promoting the development of knowledge graphs (Hogan et al. 2021). Therefore, this section discusses the challenges of knowledge graphs in terms of the limitations of five topical knowledge graph technologies, including knowledge graph embeddings, knowledge acquisition, knowledge graph completion, knowledge fusion, and knowledge reasoning.
Knowledge Graph Embeddings
The aim of knowledge graph embeddings is to effectively represent knowledge graphs in a low-dimensional vector space while still preserving their semantics (Vashishth et al. 2020). Firstly, the entities and relations of a given knowledge graph are embedded into a dense, low-dimensional space, and a scoring function is defined to measure the plausibility of each fact (triplet). Then, the plausibility of the facts is maximized to obtain the entity and relation embeddings (Chaudhri et al. 2022; Sun et al. 2022). The representation of knowledge graphs brings various benefits to downstream tasks. The three main types of triplet fact-based knowledge graph embedding approaches are tensor factorization-based, translation-based, and neural network-based methods (Rossi et al. 2021).
Tensor Factorization-Based Methods
The core idea of tensor factorization-based methods is to transform the triplets in the knowledge graph into a three-dimensional tensor (Balažević et al. 2019). As Fig. 5 presents, the tensor $X \in \mathbb{R}^{m \times m \times n}$, where $m$ and $n$ denote the number of entities and relations, respectively, contains $n$ slices, each corresponding to one relation type. If $X_{ijk} = 1$, the triplet $(e_i, r_k, e_j)$, where $e$ and $r$ denote an entity and a relation, respectively, exists in the knowledge graph; otherwise, if $X_{ijk} = 0$, no such triplet exists. The tensor is then approximated by embedding matrices consisting of the vectors of entities and relations.

Fig. 5 An illustration of tensor factorization of knowledge graphs
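As a minimal numerical sketch of this idea, the following code (ours, in the spirit of RESCAL-style factorization; sizes and random values are placeholders standing in for trained parameters) scores a triplet by reconstructing one entry of the tensor from two entity vectors and a relation slice:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 5, 2, 4          # entities, relations, embedding dimension

E = rng.normal(size=(m, d))        # one d-dimensional vector per entity
R = rng.normal(size=(n, d, d))     # one d x d matrix per relation (tensor slice)

def score(i, k, j):
    """Approximate X[i, k, j]: plausibility of the triplet (e_i, r_k, e_j)."""
    return E[i] @ R[k] @ E[j]

# After training (not shown), a high score indicates X_ijk = 1 (triplet exists).
print(score(0, 1, 3))
```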
Translation-Based Methods
Translation-based methods exploit scoring functions based on translation invariance, which interprets the distance between the vectors of two words as the vector of their semantic relationship (Mikolov et al. 2013). Bordes et al. (2013) first utilized translation invariance-based scoring functions to measure embedding quality. They proposed the TransE model, which translates all the entities and relations of a knowledge graph into a continuous, low-dimensional vector space. Specifically, the vectors of the head and tail entities in a triplet are connected by the vector of their relation, so the semantic meaning of every triplet is preserved in the vector space. Formally, given a triplet (head, relation, tail) with embedding vectors $h$, $r$, and $t$ for the head entity, relation, and tail entity, respectively, the plausibility of the triplet $(h, r, t)$ is computed by a translation invariance-based scoring function that enforces the geometric principle $h + r \approx t$.
After TransE, many extensions, such as TransH (Wang et al. 2014) and TransR (Lin et al. 2015), have been proposed to improve the performance of translation-based knowledge graph embeddings.
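The geometric principle $h + r \approx t$ translates directly into code. The sketch below (ours; dimensions and random vectors are placeholders for trained embeddings) scores a triplet by the distance $\lVert h + r - t \rVert$ and computes the margin-based ranking loss commonly used to train TransE-style models against corrupted triplets:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50
h, r, t = rng.normal(size=(3, d))      # embeddings of head, relation, tail
t_corrupt = rng.normal(size=d)         # tail of a negative (corrupted) triplet

def dist(h, r, t):
    """TransE plausibility: a small ||h + r - t|| means a likely triplet."""
    return np.linalg.norm(h + r - t)

# Margin-based ranking loss: push true triplets below corrupted ones.
margin = 1.0
loss = max(0.0, margin + dist(h, r, t) - dist(h, r, t_corrupt))
print(dist(h, r, t), loss)
```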
Neural Network-Based Methods
Nowadays, deep learning has become a popular tool for knowledge graph embeddings, and a considerable amount of research employs neural networks to represent the triplets of knowledge graphs (Dai et al. 2020a). In this section, we discuss three representative works, namely SME, ConvKB, and R-GCN, to briefly introduce neural network-based knowledge graph embeddings. SME (Bordes et al. 2014) designs an energy function to conduct semantic matching, which utilizes neural networks to measure the confidence of each triplet $(h, r, t)$ in knowledge graphs. The scoring function of SME is defined as follows:

$$f_r(h, t) = (W_{h1}h + W_{h2}r + b_h)^\top (W_{t1}t + W_{t2}r + b_t). \tag{1}$$

The scoring function of SME (bilinear) is:

$$f_r(h, t) = ((W_{h1}h) \circ (W_{h2}r) + b_h)^\top ((W_{t1}t) \circ (W_{t2}r) + b_t). \tag{2}$$

Here $W \in \mathbb{R}^{d \times d}$ denotes a weight matrix, $b$ indicates a bias vector, and $h$, $r$, and $t$ are the embedding vectors of the head entity, relation, and tail entity, respectively.

ConvKB (Nguyen et al. 2017) utilizes a convolutional neural network (CNN) to conduct knowledge graph embeddings. ConvKB represents each triplet $(h, r, t)$ as a three-row matrix $A$, which is input to a convolution layer to obtain feature maps. Afterward, the feature maps are concatenated into a vector, and a score is calculated to estimate the confidence of the triplet. The scoring function is as follows:

$$f_r(h, t) = O(g(A * \Omega))\, w, \tag{3}$$

where $O$ signifies the concatenation operator, $g(\cdot)$ is the ReLU activation function, $A * \Omega$ indicates the convolution of matrix $A$ with the filters in the set $\Omega$, and $w \in \mathbb{R}^{3d}$ is a weight vector.

R-GCN (Schlichtkrull et al. 2018) is an extension of graph neural networks (GNNs). R-GCN represents knowledge graphs by providing relation-specific transformations. Its forward propagation is calculated as follows:

$$h_k^{(l+1)} = \sum_{r \in R} \sum_{i \in N_k^r} \frac{1}{n_{k,r}} W_i^{(l)} h_i^{(l)} + W_k^{(l)} h_k^{(l)}, \tag{4}$$

where $h_k^{(l+1)}$ is the hidden state of entity $k$ in the $(l+1)$-th layer, $N_k^r$ denotes the set of neighbors of entity $k$ under relation $r \in R$, $n_{k,r}$ is a normalization constant, and $W_i^{(l)}$ and $W_k^{(l)}$ are weight matrices.
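As a concrete reading of Eq. (4), the following sketch (ours; the toy graph, sizes, and random weights are placeholders, and we instantiate one neighbor weight matrix per relation, a common R-GCN choice) computes a single propagation step for one entity:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8                                    # hidden dimension
relations = ["r1", "r2"]
neighbors = {"r1": [0, 1], "r2": [2]}    # neighbors of entity k per relation (toy graph)

H = rng.normal(size=(4, d))              # hidden states h_i^(l) of 4 entities
W = {r: rng.normal(size=(d, d)) for r in relations}  # relation-specific weights
W_self = rng.normal(size=(d, d))                     # W_k^(l) (self-loop weight)

def rgcn_step(k):
    """One propagation step of Eq. (4) for entity k (no activation, as in the text)."""
    out = W_self @ H[k]
    for r in relations:
        n_kr = len(neighbors[r])         # normalization constant n_{k,r}
        for i in neighbors[r]:
            out += (1.0 / n_kr) * (W[r] @ H[i])
    return out

print(rgcn_step(3).shape)   # -> (8,): the new hidden state h_k^(l+1)
```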
Limitations of Existing Methods
The existing methods for generating knowledge graph embeddings still suffer from several severe limitations. Many established methods only consider the surface facts (triplets) of knowledge graphs, ignoring additional information, such as entity types and relation paths, that could further improve embedding accuracy. The performance of most traditional methods that do not consider this additional information is unsatisfactory. Table 3 lists embedding methods that do not consider additional information; the performance evaluation is based on the link prediction and triplet classification tasks, with hit rate at 10 (Hits@10) and accuracy as metrics. As Table 3 shows, only a few models achieve impressive results, namely QuatE (90%), RMNN (89.9%), and KBGAN (89.2%). Recently, some researchers have started to combine additional information with a knowledge graph to improve the efficiency of embedding models. For example, Guo et al. (2015) take advantage of entity type information, i.e., the semantic category of each entity, to capture the correlation between entities and to tackle the data sparsity issue. In this way, knowledge graphs are represented more accurately. Beyond entity types, other information, including relation paths, time information of dynamic graphs (Messner et al. 2022), and textual descriptions of entities, has been attracting researchers' attention in recent years. However, it is still a daunting challenge to effectively utilize rich additional information to improve the accuracy of knowledge graph embeddings. Generic additional information cannot adequately represent the semantic meaning of the triplets; for instance, entity types are not related to the semantic information of individual triplets. Furthermore, the types of additional information that can be incorporated into the features of the triplets are currently severely limited. Therefore, to improve the performance of existing knowledge graph embedding methods, multivariate information (such as hierarchical descriptions of relations and the combination of entity types and textual descriptions) needs to be incorporated into the features of the triplets.
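One simple way to picture such augmentation (a hedged sketch of ours, not a method from the cited works; names, sizes, and vectors are hypothetical) is to concatenate an entity-type embedding onto each entity vector, so that entities of the same semantic category share part of their representation before a scoring function is applied:

```python
import numpy as np

rng = np.random.default_rng(3)
d_e, d_t = 32, 8                         # entity and type embedding sizes

entity_vec = {"aspirin": rng.normal(size=d_e), "paris": rng.normal(size=d_e)}
type_vec = {"drug": rng.normal(size=d_t), "city": rng.normal(size=d_t)}
entity_type = {"aspirin": "drug", "paris": "city"}

def typed_embedding(e):
    """Entity representation enriched with its type (additional information)."""
    return np.concatenate([entity_vec[e], type_vec[entity_type[e]]])

print(typed_embedding("aspirin").shape)  # -> (40,)
```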
To the best of our knowledge, handling complex relation paths remains an open research problem (Peng et al. 2021). For example, inherent relations, i.e., the indirect relationships between two unconnected entities, are not represented effectively. Although inherent relations can be explored by following chains of relationships in knowledge graphs, they are complex and numerous, so representing them effectively is not straightforward.
Knowledge Acquisition
Knowledge acquisition is a critical step for combining data from different sources and generating new knowledge graphs. The knowledge is extracted from both structured and unstructured data. The three main methods of knowledge acquisition are relation extraction, entity extraction, and attribute extraction, where attribute extraction can be regarded as a special case of entity extraction. Zhang et al. (2019b) took advantage of knowledge graph embeddings and graph convolution networks to extract long-tail relations. Shi et al. (2021) proposed entity set expansion to construct large-scale knowledge graphs.
Nevertheless, existing methods for knowledge acquisition still face the challenge of low accuracy, which can result in incomplete or noisy knowledge graphs and hinder downstream tasks. The first critical issue therefore concerns the reliability of knowledge acquisition tools and their evaluation. In addition, a domain-specific knowledge graph schema is knowledge-oriented, while an automatically constructed knowledge graph schema is data-oriented, aiming to cover all data features (Zhou et al. 2022). It is thus inefficient to produce domain-specific knowledge graphs by simply extracting entities and properties from raw data, and efficiently generating domain-specific knowledge graphs remains an essential open issue.
Besides, most existing knowledge acquisition methods focus on constructing knowledge graphs with one specific language. However, in order to make the information in knowledge graphs richer and more comprehensive, we need cross-lingual entity extraction. It is thus vitally important to give more attention to cross-lingual entity extraction and the generation of multilingual knowledge graphs. For example, Bekoulis et al. (2018) proposed a joint neural model for cross-lingual (English and Dutch) entity and relation extraction. Nevertheless, multilingual knowledge graph construction is still a daunting task since non-English training data sets are limited, language translation systems are not always accurate, and the cross-lingual entity extraction models have to be retrained for each new language.
Multi-modal knowledge graph construction is regarded as another challenging issue of knowledge acquisition. Existing knowledge graphs are mostly represented by pure symbols, which limits machines' capability to understand the real world. Therefore, many researchers focus on multi-modal knowledge graphs with various kinds of entities, such as texts and images. The construction of multi-modal knowledge graphs requires exploring entities of different modalities, which makes knowledge acquisition tasks complicated and inefficient.
Knowledge Graph Completion
Knowledge graphs are often incomplete, i.e., they miss relevant triplets and entities (Zhang et al. 2020a). For instance, in Freebase, one of the most well-known knowledge graphs, more than half of the person entities lack information about their birthplaces and parents. Generally, semi-automated and human-in-the-loop mechanisms, which can be applied to ensure the quality of knowledge graphs, are essential tools for evaluating knowledge graph completion. Specifically, human supervision is currently considered the gold-standard evaluation in knowledge graph completion (Ballandies and Pournaras 2021).
Knowledge graph completion aims to expand existing knowledge graphs by adding new triplets using techniques for link prediction (Akrami et al. 2020) and entity prediction (Ji et al. 2021). These approaches typically train a machine learning model on a knowledge graph to assess the plausibility of new candidate triplets and then add the candidates with high plausibility to the knowledge graph. For example, for an incomplete triplet (Tom, friendOf, ?), it is possible to assess the range of tails and return the more plausible ones to enrich the knowledge graph. Such models have successfully utilized knowledge graphs in many different domains, including digital libraries (Yao et al. 2017), biomedicine (Harnoune et al. 2021), social media (Abu-Salih 2021), and scientific research (Nayyeri et al. 2021). Some new methods are even able to process fuzzy knowledge graphs in which each triplet is associated with a confidence value (Chen et al. 2019).
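The completion loop described above can be sketched as follows (our illustration; the TransE-style scoring function and the acceptance threshold are placeholders for a trained model): score every candidate tail for an incomplete triplet such as (Tom, friendOf, ?) and keep the plausible ones.

```python
import numpy as np

rng = np.random.default_rng(4)
entities = ["Tom", "Jerry", "Anna"]
emb = {e: rng.normal(size=16) for e in entities}
rel = {"friendOf": rng.normal(size=16)}

def plausibility(h, r, t):
    """Higher is more plausible (negative TransE distance)."""
    return -np.linalg.norm(emb[h] + rel[r] - emb[t])

def complete(h, r, threshold=-5.0):
    """Rank candidate tails for (h, r, ?) and keep the plausible ones."""
    scored = [(t, plausibility(h, r, t)) for t in entities if t != h]
    scored.sort(key=lambda x: -x[1])
    return [(t, s) for t, s in scored if s > threshold]

print(complete("Tom", "friendOf"))   # candidate new triplets (Tom, friendOf, t)
```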
However, most current knowledge graph completion methods only focus on extracting triplets from a closed-world data source. That means the generated triplets are new, but the entities and relations in the triplets must already exist in the knowledge graph. For example, for the incomplete triplet (Tom, friendOf, ?), predicting the triplet (Tom, friendOf, Jerry) is only possible if the entity Jerry is already in the knowledge graph. Because of this limitation, these methods cannot add new entities and relations to the knowledge graph. To tackle this issue, we are starting to see the emergence of open-world techniques for knowledge graph completion that extract potential objects from outside the existing knowledge bases. For instance, the ConMask model (Shi and Weninger 2018) has been proposed to predict unseen entities in knowledge graphs. However, methods for open-world knowledge graph completion still suffer from low accuracy. The main reason is that the data sources are usually more complex and noisy. In addition, the similarity of predicted new entities to existing entities can mislead the results: two similar entities may be regarded as connected even though they have no direct relationship.
Moreover, most knowledge graph completion methods assume that knowledge graphs are static and fail to capture their dynamic evolution. To obtain accurate facts over time, temporal knowledge graph completion, which considers temporal information reflecting the validity of knowledge, has emerged. Compared to static knowledge graph completion, temporal methods integrate timestamps into the learning process; hence, they explore time-sensitive facts and improve link prediction accuracy significantly. Although temporal knowledge graph completion methods have shown brilliant performance, they still face serious challenges. Because models that incorporate time information tend to be less efficient (Shao et al. 2022), the key challenge of temporal knowledge graph completion is how to effectively incorporate the timestamps of facts into the learning models and properly capture the temporal dynamics of facts.
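One established way to integrate timestamps, in the spirit of TTransE-style models (a hedged sketch of ours; the embeddings are random placeholders for trained parameters), is to learn an embedding per timestamp and translate with it alongside the relation, so the same triplet can score differently at different times:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 16
h, r, t = rng.normal(size=(3, d))
tau = {"2020": rng.normal(size=d), "2021": rng.normal(size=d)}  # timestamp embeddings

def temporal_score(h, r, t, ts):
    """TTransE-style plausibility of the fact (h, r, t) holding at time ts."""
    return -np.linalg.norm(h + r + tau[ts] - t)

# The same triplet can be plausible in one year and implausible in another.
print(temporal_score(h, r, t, "2020"), temporal_score(h, r, t, "2021"))
```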
Knowledge Fusion
Knowledge fusion aims to combine and integrate knowledge from different data sources and is often a necessary step in the generation of knowledge graphs (Nguyen et al. 2020; Smirnov and Levashova 2019). The primary method of knowledge fusion is entity alignment or ontology alignment (Ren et al. 2021), which aims to match the same entity across multiple knowledge graphs. Achieving efficient and accurate knowledge graph fusion is a challenging task because of the complexity, variety, and large volume of data available today.
While a lot of work has been done in this direction, several intriguing research directions deserve future investigation. One of them concerns cross-language knowledge fusion, which allows the integration of information from different languages and is often used to support cross-lingual recommender systems (Javed et al. 2021). For example, Xu et al. (2019) adopted a graph-matching neural network to achieve cross-language entity alignment. However, the results of cross-language knowledge fusion are still unsatisfactory because the accuracy of matching entities across languages is relatively low. It therefore remains a daunting challenge to explore cross-language knowledge fusion.
Another primary challenge concerns entity disambiguation (Nguyen et al. 2020). Owing to the polysemy of natural language, the same entity may have various expressions in different knowledge graphs; hence, entity disambiguation is required before conducting entity alignment. Existing entity disambiguation methods mainly discriminate and match ambiguous entities by extracting knowledge from texts containing rich contextual information (Zhu and Iglesias 2018). However, these methods cannot precisely measure the semantic similarity of entities when the texts are short and contextual information is limited. Only a few works have addressed this issue. For example, Zhu and Iglesias (2018) proposed SCSNED for entity disambiguation, which measures semantic similarity based on both the informative words of entities in knowledge graphs and the contextual information in short texts. Although SCSNED alleviates the issue of limited contextual information to some extent, more effort is needed to improve the performance of entity disambiguation.
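A minimal sketch of embedding-based entity alignment (ours; in practice the embeddings come from jointly trained alignment models rather than the random vectors used here, and the entity names are hypothetical) matches each entity of one knowledge graph to its nearest neighbor in another via cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(6)
d = 16
kg1 = {"NYC": rng.normal(size=d), "aspirin": rng.normal(size=d)}
kg2 = {"New York City": rng.normal(size=d), "acetylsalicylic acid": rng.normal(size=d)}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def align(kg1, kg2):
    """Match each KG1 entity to its most similar KG2 entity."""
    return {
        e1: max(kg2, key=lambda e2: cosine(v1, kg2[e2]))
        for e1, v1 in kg1.items()
    }

print(align(kg1, kg2))  # with trained embeddings, equivalent entities get matched
```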
In addition, many knowledge fusion methods only focus on matching entities of the same modality and ignore multi-modal scenes in which knowledge is presented in different forms. Entity alignment that considers only a single-modality knowledge graph scenario performs poorly because it cannot fully reflect the relationships of entities in the real world (Cheng et al. 2022b). Recently, to solve this issue, some studies have proposed multi-modal knowledge fusion, which matches the same entities across different modalities and generates a multi-modal knowledge graph. For example, HMEA (Guo et al. 2021) aligns entities with multiple forms by mapping multi-modal representations into hyperbolic space. Although many researchers have worked on multi-modal knowledge fusion, it remains a difficult task. Multi-modal knowledge fusion mainly aims to find equivalent entities by integrating their multi-modal features (Cheng et al. 2022b), yet efficiently incorporating features of multiple modalities is still a tricky issue for current methods.
Knowledge Reasoning
The goal of knowledge reasoning is to infer new knowledge, such as the implicit relations between two entities (Wang et al. 2019b), based on existing data. Given a knowledge graph $G$ containing two unconnected entities $h, t \in G$, knowledge reasoning can find the potential relation $r$ between these entities and form a new triplet $(h, r, t)$. Knowledge reasoning methods are mainly categorized into logic rule-based (De Meester et al. 2021), distributed representation-based (Chen et al. 2020b), and neural network-based methods (Xiong et al. 2017). Logic rule-based knowledge reasoning discovers knowledge according to random walks and logic rules, while distributed representation-based knowledge reasoning embeds entities and relations into a vector space to obtain distributed representations (Chen et al. 2020b). Neural network-based knowledge reasoning utilizes neural networks to infer new triplets given the body of knowledge in the graph (Xian et al. 2019).
There are two tasks in knowledge reasoning: single-hop prediction and multi-hop reasoning (Ren et al. 2022). Single-hop prediction predicts one element of a triplet given the other two, while multi-hop reasoning predicts one or more elements in a multi-hop logical query. In other words, in the multi-hop reasoning scenario, finding the answer to a typical question and forming new triplets requires the prediction and imputation of multiple edges and nodes. Multi-hop reasoning forms triplets more precisely than single-hop prediction and has therefore attracted more attention, becoming a critical need for the development of knowledge graphs in recent years. Although much work has been done, multi-hop reasoning over knowledge graphs remains largely unexplored. Notably, multi-hop reasoning over massive knowledge graphs is particularly challenging (Zhu et al. 2022). For instance, most recent studies focus on multi-hop reasoning over knowledge graphs that have only 63K entities and 592K relations; existing models cannot learn effectively from massive knowledge graphs with millions of entities. Moreover, multi-hop reasoning needs to traverse multiple relations and intermediate entities in the knowledge graph, which can lead to exponential computation cost. Therefore, multi-hop knowledge reasoning remains a daunting task.
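To illustrate why multi-hop reasoning is expensive, the sketch below (ours; the toy graph is hypothetical) enumerates relation paths between two entities by breadth-first traversal; the number of candidate paths grows exponentially with the hop limit, which is exactly the computational bottleneck noted above:

```python
from collections import deque

# Toy knowledge graph: head -> list of (relation, tail) edges.
kg = {
    "Tom": [("friendOf", "Jerry")],
    "Jerry": [("worksAt", "AcmeLab"), ("livesIn", "Paris")],
    "AcmeLab": [("locatedIn", "Paris")],
}

def paths(h, t, max_hops=3):
    """Enumerate relation paths from h to t up to max_hops edges (BFS)."""
    found, queue = [], deque([(h, [])])
    while queue:
        node, path = queue.popleft()
        if node == t and path:
            found.append(path)
            continue
        if len(path) < max_hops:
            for rel, nxt in kg.get(node, []):
                queue.append((nxt, path + [rel]))
    return found

# Two multi-hop chains support an inferred triplet connecting Tom and Paris.
print(paths("Tom", "Paris"))
```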
Besides, the verification of inferred new knowledge is a critical issue. Knowledge reasoning enriches existing knowledge graphs and benefits downstream tasks (Wan et al. 2021). However, the inferred new knowledge is sometimes uncertain, and the veracity of new triplets needs to be verified. Furthermore, conflicts between new and existing knowledge should be detected. To address these problems, some research has proposed multi-source knowledge reasoning that detects erroneous and conflicting knowledge. Overall, more attention should be paid to multi-source knowledge reasoning and erroneous knowledge reduction.
Conclusions
Knowledge graphs have played an instrumental role in creating many intelligent services and applications for various fields. In this survey, we provided an overview of knowledge graphs in terms of opportunities and challenges. We first introduced the definitions and existing research directions regarding knowledge graphs to provide an introductory analysis of knowledge graphs. Afterward, we discussed AI systems that take advantage of knowledge graphs. Then, we presented some representative knowledge graph applications in several fields. Furthermore, we analyzed the limitations of current knowledge graph technologies, which lead to severe technical challenges. We expect this survey to spark new ideas and insightful perspectives for future research and development activities involving knowledge graphs.
Funding Open Access funding enabled and organized by CAUL and its Member Institutions.
Declarations
Conflict of interest The authors declare that they have no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Fig. 2 Research on knowledge graphs
Fig. 3 An example of knowledge graph-based recommender system
Fig. 4 The
Table 1 AI systems using knowledge graphs

AI Systems | Approaches | Techniques on knowledge graphs
Recommender systems | KPRN (Wang et al. 2019b) | Entity-relation path generation based on user-item interaction
Recommender systems | RippleNet (Wang et al. 2018b) | Preference propagation
Recommender systems | MKR (Wang et al. 2019c) | Latent user-item interaction
Recommender systems | MKGAT (Sun et al. 2020) | Neighbor information extraction; relation reasoning
Recommender systems | Ripp-MKR (Wang et al. 2021) | Preference propagation; latent user-item interaction
Recommender systems | RKG (Shu and Huang 2021) | User preference lists-based knowledge graph construction
Question-answering systems | MHPGM (Bauer et al. 2018) | Multiple hop relation reasoning
Question-answering systems | PCQA (Shin et al. 2019) | Predicate constraints-based relation extraction
Question-answering systems | KEQA (Huang et al. 2019) | Simple question-based triplet construction
Question-answering systems | EmbedKGQA (Saxena et al. 2020) | Knowledge graph embedding-based multi-hop question answering
Information retrieval | EQFE (Dalton et al. 2014) | Query knowledge graph-based feature expansion
Information retrieval | Knowledge graph based Information Retrieval Technology (Wang et al. 2018a) | Query-document knowledge graph construction
Information retrieval | CKG (Wise et al. 2020) | Document knowledge graph construction
Information retrieval | EDRM |
Table 2 Fields of applications of knowledge graphs

Fields | Applications | Methods | Functions
Education | Knowledge Graph based Course Management Model (Aliyu et al. 2020) | Course knowledge graphs | Course management; generation of course allocation schedule
Education | KnowEdu (Chen et al. 2018) | Instructional concepts extraction; educational relation identification | Educational knowledge graph construction
Education | Knowledge Graph-based Tool for Online Learning (Zablith 2022) | Integration of social media contents and formal learning contents | Efficient online knowledge acquisition
Scientific Research | Scientific Publication Management Model (Chi et al. 2018) | Knowledge graph based academic network | Scientific publication management
Scientific Research | Reviewer Recommendation System (Yong et al. 2021) | Knowledge graph-based rule engine establishment | Precise matching of reviewer and paper
Social Networks | DEAP-FAKED (Mayank et al. 2021) | News-entity knowledge graphs | Fake news detection
Social Networks | GraphRec (Fan et al. 2019) | Information aggregation of user-user and user-item graphs | Social recommendation
Social Networks | Graph Reasoning Model (Wang et al. 2018d) | Knowledge graph propagation | Social relationship extraction
Health/Medical Care | SMR (Gong et al. 2021) | Medical knowledge graph embeddings | Safe medicine recommendation
Health/Medical Care | DETERRENT (Cui et al. 2020) | Knowledge-guided graph attention network | Health misinformation detection
Health/Medical Care | KGNN (Lin et al. 2020) | Mining the relationships between drugs | Drug discovery
Health/Medical Care | COVID-KG (Yuan et al. 2021) | Multimedia knowledge graph construction | Drug discovery
Table 3 Knowledge graph embedding methods (all link prediction results are filtered results)

Categories | Techniques | Evaluation approach_dataset | Results (%)
Tensor factorization-based methods | RESCAL (Nickel et al. 2011) | Link prediction[Hits@10]_FB15K | 44.1
Tensor factorization-based methods | HolE (Nickel et al. 2016) | Link prediction[Hits@10]_FB15K | 73.9
Tensor factorization-based methods | ComplEx (Trouillon et al. 2016) | Link prediction[Hits@10]_FB15K | 84
Tensor factorization-based methods | SimplE (Kazemi and Poole 2018) | Link prediction[Hits@10]_FB15K | 83.8
Tensor factorization-based methods | RotatE (Sun et al. 2019a) | Link prediction[Hits@10]_FB15K | 88.4
Tensor factorization-based methods | QuatE (Zhang et al. 2019c) | Link prediction[Hits@10]_FB15K | 90
Translation-based methods | TransE (Bordes et al. 2013) | Link prediction[Hits@10]_FB15K | 47.1
Translation-based methods | TransH (Wang et al. 2014) | Link prediction[Hits@10]_FB15K | 64.4
Translation-based methods | TransR (Lin et al. 2015) | Link prediction[Hits@10]_FB15K | 68.7
Translation-based methods | TransD (Ji et al. 2015) | Link prediction[Hits@10]_FB15K | 77.3
Translation-based methods | TranSparse (Ji et al. 2016) | Link prediction[Hits@10]_FB15K | 79.9
Translation-based methods | STransE (Nguyen et al. 2016) | Link prediction[Hits@10]_FB15K | 79.7
Translation-based methods | TransA (Jia et al. 2016) | Link prediction[Hits@10]_FB15K | 80.4
Translation-based methods | KG2E (He et al. 2015) | Link prediction[Hits@10]_FB15K | 71.5
Translation-based methods | TransG (Xiao et al. 2015) | Link prediction[Hits@10]_FB15K | 88.2
Neural network-based methods | SME (Bordes et al. 2014) | Link prediction[Hits@10]_FB15K | 41.3
Neural network-based methods | NTN (Socher et al. 2013) | Triplet classification[Accuracy]_WN11 | 86.2
Neural network-based methods | SLM (Socher et al. 2013) | Triplet classification[Accuracy]_WN11 | 76
Neural network-based methods | RMNN (Liu et al. 2016) | Triplet classification[Accuracy]_WN11 | 89.9
Neural network-based methods | R-GCN (Schlichtkrull et al. 2018) | Link prediction[Hits@10]_FB15K | 84.2
Neural network-based methods | ConvKB (Nguyen et al. 2017) | Link prediction[Hits@10]_WN18RR | 52.5
Neural network-based methods | KBGAN (Cai and Wang 2017) | Link prediction[Hits@10]_WN18 | 89.2
1. AMiner - https://www.aminer.cn/
2. Open Academic Graph - https://www.openacademic.ai/oag/
3. AI-KG - https://w3id.org/aikg/
4. AIDA - http://w3id.org/aida
Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References

Abu-Salih B (2021) Domain-specific knowledge graphs: a survey. J Netw Comput Appl 185:103076
Akrami F, Saeef MS, Zhang Q et al (2020) Realistic re-evaluation of knowledge graph completion methods: an experimental study. In: Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, pp 1995-2010
Aliyu I, Kana A, Aliyu S (2020) Development of knowledge graph for university courses management. Int J Educ Manag Eng 10(2):1
An B, Chen B, Han X et al (2018) Accurate text-enhanced knowledge graph representation learning. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, vol 1 (Long Papers), pp 745-755
Angioni S, Salatino A, Osborne F et al (2021) AIDA: a knowledge graph about research dynamics in academia and industry. Quant Sci Stud, pp 1-43
Auer S, Bizer C, Kobilarov G et al (2007) DBpedia: a nucleus for a web of open data. In: The semantic web. Springer, pp 722-735
Bai X, Wang M, Lee I et al (2019) Scientific paper recommendation: a survey. IEEE Access 7:9324-9339
Bai X, Zhang F, Li J et al (2021) Educational big data: prediction, applications and challenges. Big Data Res 26:100270
Baken N (2020) Linked data for smart homes: comparing RDF and labeled property graphs. In: LDAC2020 - 8th Linked Data in Architecture and Construction Workshop, pp 23-36
Balažević I, Allen C, Hospedales TM (2019) TuckER: tensor factorization for knowledge graph completion. arXiv preprint arXiv:1901.09590
Ballandies MC, Pournaras E (2021) Mobile link prediction: automated creation and crowdsourced validation of knowledge graphs. Microprocess Microsyst 87:104335
Bauer L, Wang Y, Bansal M (2018) Commonsense for generative multi-hop question answering tasks. arXiv preprint arXiv:1809.06309
Bekoulis G, Deleu J, Demeester T et al (2018) Joint entity recognition and relation extraction as a multi-head selection problem. Expert Syst Appl 114:34-45
Bollacker K, Evans C, Paritosh P et al (2008) Freebase: a collaboratively created graph database for structuring human knowledge. In: Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pp 1247-1250
Bordes A, Glorot X, Weston J et al (2014) A semantic matching energy function for learning with multi-relational data. Mach Learn 94(2):233-259
Bordes A, Usunier N, Garcia-Duran A et al (2013) Translating embeddings for modeling multi-relational data. Adv Neural Inf Process Syst 26
Bordes A, Weston J, Collobert R et al (2011) Learning structured embeddings of knowledge bases. In: Twenty-Fifth AAAI Conference on Artificial Intelligence
Bounhas I, Soudani N, Slimani Y (2020) Building a morpho-semantic knowledge graph for Arabic information retrieval. Inf Process Manag 57(6):102
Cai L, Wang WY (2017) KBGAN: adversarial learning for knowledge graph embeddings. arXiv preprint arXiv:1711.04071
Chaudhri V, Baru C, Chittar N et al (2022) Knowledge graphs: introduction, history and perspectives. AI Mag 43(1):17-29
Chen P, Lu Y, Zheng VW et al (2018) KnowEdu: a system to construct knowledge graph for education. IEEE Access 6:31553-31563
Chen R, Chen T, Hui X et al (2020a) Knowledge graph transfer network for few-shot recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp 10575-10582
Chen X, Jia S, Xiang Y (2020b) A review: knowledge reasoning over knowledge graph. Expert Syst Appl 141:112948
Chen YC, Hui L, Thaipisutikul T et al (2020c) A collaborative filtering recommendation system with dynamic time decay. J Supercomput, pp 1-19
Chen X, Chen M, Shi W et al (2019) Embedding uncertain knowledge graphs. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp 3363-3370
Cheng D, Yang F, Xiang S et al (2022a) Financial time series forecasting with multi-modality graph neural network. Pattern Recogn 121:108218
Cheng B, Zhu J, Guo M (2022b) MultiJAF: multi-modal joint entity alignment framework for multi-modal knowledge graph. Neurocomputing
Chi Y, Qin Y, Song R et al (2018) Knowledge graph in smart education: a case study of entrepreneurship scientific publication management. Sustainability 10(4):995
Choi D, Chun S, Oh H et al (2020) Rumor propagation is amplified by echo chambers in social media. Sci Rep 10(1):1-10
Cui L, Seo H, Tabar M et al (2020) DETERRENT: knowledge guided graph attention network for detecting healthcare misinformation. In: Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp 492-502
Dai Y, Wang S, Chen X et al (2020a) Generative adversarial networks based on Wasserstein distance for knowledge graph embeddings. Knowl-Based Syst 190:105165
Dai Y, Wang S, Xiong NN et al (2020b) A survey on knowledge graph embedding: approaches, applications and benchmarks. Electronics 9(5):750
Dalton J, Dietz L, Allan J (2014) Entity query feature expansion using knowledge base links. In: Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, pp 365-374
d'Aquin M (2016) On the use of linked open data in education: current and future practices. In: Open data for education. Springer, pp 3-15
Das A, Mandal J, Danial Z et al (2022) An improvement of Bengali factoid question answering system using unsupervised statistical methods. Sādhanā 47(1):1-14
De Meester B, Heyvaert P, Arndt D et al (2021) RDF graph validation using rule-based reasoning. Semantic Web (Preprint):1-26
Dessì D, Osborne F, Recupero DR et al (2020) AI-KG: an automatically generated knowledge graph of artificial intelligence. In: ISWC 2020, vol 12507. Springer, pp 127-143
Dimitrakis E, Sgontzos K, Tzitzikas Y (2020) A survey on question answering systems over linked data and documents. J Intell Inf Syst 55(2):233-259
Ehrlinger L, Wöß W (2016) Towards a definition of knowledge graphs. SEMANTiCS (Posters, Demos, SuCCESS) 48(1-4):2
Fan W, Ma Y, Li Q et al (2019) Graph neural networks for social recommendation. In: The World Wide Web Conference, pp 417-426
Färber M, Bartscherer F, Menne C et al (2018) Linked data quality of DBpedia, Freebase, OpenCyc, Wikidata, and YAGO. Semantic Web 9(1):77-129
Farfán F, Hristidis V, Ranganathan A et al (2009) XOntoRank: ontology-aware search of electronic medical records. In: Proceedings of the 25th International Conference on Data Engineering (ICDE 2009), Shanghai, China. IEEE Computer Society, pp 820-831
Fu TJ, Li PH, Ma WY (2019) GraphRel: modeling text as relational graphs for joint entity and relation extraction. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp 1409-1418
Gao Y, Li YF, Lin Y et al (2020) Deep learning on knowledge graph for recommender system: a survey. arXiv preprint arXiv:2004.00387
Gómez E, Zhang CS, Boratto L et al (2022) Enabling cross-continent provider fairness in educational recommender systems. Futur Gener Comput Syst 127:435-447
Gong F, Wang M, Wang H et al (2021) SMR: medical knowledge graph embedding for safe medicine recommendation. Big Data Res 23:100174
Guo H, Tang J, Zeng W et al (2021) Multi-modal entity alignment in hyperbolic space. Neurocomputing 461:598-607
Guo S, Wang Q, Wang B et al (2015) Semantically smooth knowledge graph embedding. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, vol 1 (Long Papers), pp 84-94
Guo Q, Zhuang F, Qin C et al (2020) A survey on knowledge graph-based recommender systems. IEEE Trans Knowl Data Eng
Harnoune A, Rhanoui M, Mikram M et al (2021) BERT based clinical knowledge extraction for biomedical knowledge graph construction and analysis. Comput Methods Programs Biomed Update 1:100042
Hashemi M, Hall M (2020) Multi-label classification and knowledge extraction from oncology-related content on online social networks. Artif Intell Rev 53(8):5957-5994
He S, Liu K, Ji G et al (2015) Learning to represent knowledge graphs with Gaussian embedding. In: Proceedings of the 24th ACM International Conference on Information and Knowledge Management, pp 623-632
Hersh W (2021) Information retrieval. In: Biomedical informatics. Springer, pp 755-794
Hogan A, Blomqvist E, Cochez M et al (2021) Knowledge graphs. ACM Comput Surv (CSUR) 54(4):1-37
Huang X, Zhang J, Li D et al (2019) Knowledge graph embedding based question answering. In: Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pp 105-113
Javed U, Shaukat K, Hameed IA et al (2021) A review of content-based and context-based recommendation systems. Int J Emerg Technol Learn 16(3):274-306
Ji G, He S, Xu L et al (2015) Knowledge graph embedding via dynamic mapping matrix. In: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, vol 1 (Long Papers), pp 687-696
Ji G, Liu K, He S et al (2016) Knowledge graph completion with adaptive sparse transfer matrix. In: Thirtieth AAAI Conference on Artificial Intelligence
Ji S, Pan S, Cambria E et al (2021) A survey on knowledge graphs: representation, acquisition, and applications. IEEE Trans Neural Netw Learn Syst
Jia Y, Wang Y, Lin H et al (2016) Locally adaptive translation for knowledge graph embedding. In: Thirtieth AAAI Conference on Artificial Intelligence
Katzman JL, Shaham U, Cloninger A et al (2018) DeepSurv: personalized treatment recommender system using a Cox proportional hazards deep neural network. BMC Med Res Methodol 18(1):1-12
Simple embedding for link prediction in knowledge graphs. S M Kazemi, D Poole, Adv Neural Inf Process Syst. 31Kazemi SM, Poole D (2018) Simple embedding for link prediction in knowledge graphs. Adv Neural Inf Process Syst 31
Machine learning and knowledge graph based design rule construction for additive manufacturing. H Ko, P Witherell, Y Lu, Addit Manuf. 37101620Ko H, Witherell P, Lu Y et al (2021) Machine learning and knowledge graph based design rule construction for additive manufacturing. Addit Manuf 37(101):620
Bolt defect classification algorithm based on knowledge graph and feature fusion. Y Kong, X Liu, Z Zhao, Energy Rep. 8Kong Y, Liu X, Zhao Z et al (2022) Bolt defect classification algorithm based on knowledge graph and fea- ture fusion. Energy Rep 8:856-863
Community-diversified influence maximization in social networks. J Li, T Cai, K Deng, Inf Syst. 92101522Li J, Cai T, Deng K et al (2020a) Community-diversified influence maximization in social networks. Inf Syst 92(101):522
Real-world data medical knowledge graph: construction and applications. L Li, P Wang, J Yan, Artif Intell Med. 103101817Li L, Wang P, Yan J et al (2020b) Real-world data medical knowledge graph: construction and applications. Artif Intell Med 103(101):817
Learning knowledge graph embedding with heterogeneous relation attention networks. Z Li, H Liu, Z Zhang, IEEE Trans Neural Netw Learn Syst. Li Z, Liu H, Zhang Z et al (2021) Learning knowledge graph embedding with heterogeneous relation atten- tion networks. IEEE Trans Neural Netw Learn Syst
Aspect-based sentiment analysis via affective knowledge enhanced graph convolutional networks. B Liang, H Su, Gui L , Knowl-Based Syst. 235107643Liang B, Su H, Gui L et al (2022) Aspect-based sentiment analysis via affective knowledge enhanced graph convolutional networks. Knowl-Based Syst 235(107):643
Learning entity and relation embeddings for knowledge graph completion. Y Lin, Z Liu, M Sun, Twenty-ninth AAAI conference on artificial intelligence. Lin Y, Liu Z, Sun M et al (2015) Learning entity and relation embeddings for knowledge graph completion. In: Twenty-ninth AAAI conference on artificial intelligence
Kgnn: Knowledge graph neural network for drug-drug interaction prediction. X Lin, Z Quan, Z J Wang, IJCAI. Lin X, Quan Z, Wang ZJ et al (2020) Kgnn: Knowledge graph neural network for drug-drug interaction prediction. In: IJCAI, p 2739-2745
Data mining and information retrieval in the 21st century: a bibliographic review. J Liu, X Kong, X Zhou, Comput Sci Rev. 34100193Liu J, Kong X, Zhou X et al (2019) Data mining and information retrieval in the 21st century: a biblio- graphic review. Comput Sci Rev 34(100):193
Shifu2: a network representation learning based model for advisor-advisee relationship mining. J Liu, F Xia, L Wang, IEEE Trans Knowl Data Eng. 334Liu J, Xia F, Wang L et al (2021) Shifu2: a network representation learning based model for advisor-advisee relationship mining. IEEE Trans Knowl Data Eng 33(4):1763-1777
Web of scholars: A scholar knowledge graph. J Liu, J Ren, W Zheng, Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 43rd International ACM SIGIR Conference on Research and Development in Information RetrievalLiu J, Ren J, Zheng W et al (2020) Web of scholars: A scholar knowledge graph. In: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pp 2153-2156
Q Liu, H Jiang, A Evdokimov, arXiv:1603.07704Probabilistic reasoning via deep learning: Neural association models. arXiv preprintLiu Q, Jiang H, Evdokimov A et al (2016) Probabilistic reasoning via deep learning: Neural association models. arXiv preprint arXiv: 1603. 07704
Z Liu, C Xiong, M Sun, arXiv:1805.07591Entity-duet neural ranking: Understanding the role of knowledge graph semantics in neural information retrieval. arXiv preprintLiu Z, Xiong C, Sun M et al (2018) Entity-duet neural ranking: Understanding the role of knowledge graph semantics in neural information retrieval. arXiv preprint arXiv: 1805. 07591
Knowledge graphs and their applications in drug discovery. F Maclean, Expert Opin Drug Discov. 169MacLean F (2021) Knowledge graphs and their applications in drug discovery. Expert Opin Drug Dis- cov 16(9):1057-1069
Mraea: an efficient and robust entity alignment approach for crosslingual knowledge graph. X Mao, W Wang, H Xu, Proceedings of the 13th International Conference on Web Search and Data Mining. the 13th International Conference on Web Search and Data MiningMao X, Wang W, Xu H et al (2020) Mraea: an efficient and robust entity alignment approach for cross- lingual knowledge graph. In: Proceedings of the 13th International Conference on Web Search and Data Mining, p 420-428
Deap-faked: knowledge graph based approach for fake news detection. M Mayank, S Sharma, R Sharma, arXiv:2107.10648arXiv preprintMayank M, Sharma S, Sharma R (2021) Deap-faked: knowledge graph based approach for fake news detection. arXiv preprint arXiv: 2107. 10648
Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. P Meel, D K Vishwakarma, Expert Syst Appl. 153112986Meel P, Vishwakarma DK (2020) Fake news, rumor, information pollution in social media and web: a contemporary survey of state-of-the-arts, challenges and opportunities. Expert Syst Appl 153(112):986
Temporal knowledge graph completion using box embeddings. J Messner, R Abboud, I I Ceylan, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceMessner J, Abboud R, Ceylan II (2022) Temporal knowledge graph completion using box embeddings. In: Proceedings of the AAAI Conference on Artificial Intelligence, pp 7779-7787
Efficient estimation of word representations in vector space. T Mikolov, K Chen, G Corrado, arXiv:1301.3781arXiv preprintMikolov T, Chen K, Corrado G et al (2013) Efficient estimation of word representations in vector space. arXiv preprint arXiv: 1301. 3781
Differentiable reasoning on large knowledge bases and natural language. P Minervini, M Bošnjak, T Rocktäschel, Proceedings of the AAAI conference on artificial intelligence. the AAAI conference on artificial intelligenceMinervini P, Bošnjak M, Rocktäschel T et al (2020) Differentiable reasoning on large knowledge bases and natural language. In: Proceedings of the AAAI conference on artificial intelligence, p 5182-5190
Biological applications of knowledge graph embedding models. S K Mohamed, A Nounu, V Nováček, Brief Bioinform. 222Mohamed SK, Nounu A, Nováček V (2021) Biological applications of knowledge graph embedding models. Brief Bioinform 22(2):1679-1693
A systematic literature review of multicriteria recommender systems. D Monti, G Rizzo, M Morisio, Artif Intell Rev. 54Monti D, Rizzo G, Morisio M (2021) A systematic literature review of multicriteria recommender sys- tems. Artif Intell Rev 54:427-468
No-but-semantic-match: computing semantically matched xml keyword search results. M Naseriparsa, M S Islam, C Liu, World Wide Web. 215Naseriparsa M, Islam MS, Liu C et al (2018) No-but-semantic-match: computing semantically matched xml keyword search results. World Wide Web 21(5):1223-1257
Xplorerank: exploring XML data via you may also like queries. M Naseriparsa, C Liu, M S Islam, World Wide Web. 224Naseriparsa M, Liu C, Islam MS et al (2019a) Xplorerank: exploring XML data via you may also like queries. World Wide Web 22(4):1727-1750
Xsnippets: exploring semi-structured data via snippets. M Naseriparsa, M S Islam, C Liu, Data Knowl Eng. 124Naseriparsa M, Islam MS, Liu C et al (2019b) Xsnippets: exploring semi-structured data via snippets. Data Knowl Eng 124
Trans4e: link prediction on scholarly knowledge graphs. M Nayyeri, G M Cil, S Vahdati, Neurocomputing. 461Nayyeri M, Cil GM, Vahdati S et al (2021) Trans4e: link prediction on scholarly knowledge graphs. Neurocomputing 461:530-542
A novel embedding model for knowledge base completion based on convolutional neural network. D Q Nguyen, T D Nguyen, D Q Nguyen, Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNguyen DQ, Nguyen TD, Nguyen DQ et al (2017) A novel embedding model for knowledge base com- pletion based on convolutional neural network. In: Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, p 327-333
D Q Nguyen, K Sirts, L Qu, arXiv:1606.08140Stranse: a novel embedding model of entities and relationships in knowledge bases. arXiv preprintNguyen DQ, Sirts K, Qu L et al (2016) Stranse: a novel embedding model of entities and relationships in knowledge bases. arXiv preprint arXiv: 1606. 08140
Knowledge graph fusion for smart systems: a survey. H L Nguyen, D T Vu, J J Jung, Info Fusion. 61Nguyen HL, Vu DT, Jung JJ (2020) Knowledge graph fusion for smart systems: a survey. Info Fusion 61:56-70
Industry-scale knowledge graphs: lessons and challenges: five diverse technology companies show how it's done. M Nickel, L Rosasco, T Poggio, V Tresp, Hp ; Icml Kriegel, N Noy, Y Gao, A Jain, Proceedings of the AAAI Conference on Artificial Intelligence Nickel M. the AAAI Conference on Artificial Intelligence Nickel M17A three-way model for collective learning on multi-relational dataNickel M, Rosasco L, Poggio T (2016) Holographic embeddings of knowledge graphs. In: Proceedings of the AAAI Conference on Artificial Intelligence Nickel M, Tresp V, Kriegel HP (2011) A three-way model for collective learning on multi-relational data. In: ICML Noy N, Gao Y, Jain A et al (2019) Industry-scale knowledge graphs: lessons and challenges: five diverse technology companies show how it's done. Queue 17(2):48-75
2020) entity2rec: property-specific knowledge graph embeddings for item recommendation. E Palumbo, D Monti, G Rizzo, Expert Syst Appl. 151113235Palumbo E, Monti D, Rizzo G et al (2020) entity2rec: property-specific knowledge graph embeddings for item recommendation. Expert Syst Appl 151(113):235
Knowledge graph embeddings with node2vec for item recommendation. E Palumbo, G Rizzo, R Troncy, European Semantic Web Conference. SpringerPalumbo E, Rizzo G, Troncy R et al (2018) Knowledge graph embeddings with node2vec for item rec- ommendation. In: European Semantic Web Conference, Springer, p 117-120
Wordnet: similarity-measuring the relatedness of concepts. T Pedersen, S Patwardhan, J Michelizzi, AAAIPedersen T, Patwardhan S, Michelizzi J et al (2004) Wordnet: similarity-measuring the relatedness of concepts. In: AAAI, p 25-29
Linked data in education: a survey and a synthesis of actual research and future challenges. C Peng, D T Vu, Jj ; Jung, C K Pereira, Swm Siqueira, B P Nunes, Digital Scholarship Humanities. 11Knowledge graph-based metaphor representation for literature understandingPeng C, Vu DT, Jung JJ (2021) Knowledge graph-based metaphor representation for literature under- standing. Digital Scholarship Humanities Pereira CK, Siqueira SWM, Nunes BP et al (2017) Linked data in education: a survey and a synthesis of actual research and future challenges. IEEE Trans Learn Technol 11(3):400-412
Stepwise reasoning for multi-relation question answering over knowledge graph with weak supervision. Y Qiu, Y Wang, Jin X , Proceedings of the 13th International Conference on Web Search and Data Mining. the 13th International Conference on Web Search and Data MiningQiu Y, Wang Y, Jin X et al (2020) Stepwise reasoning for multi-relation question answering over knowl- edge graph with weak supervision. In: Proceedings of the 13th International Conference on Web Search and Data Mining, p 474-482
Recommender systems for smart cities. L Quijano-Sánchez, I Cantador, Cortés-Cediel Me, Inf Syst. 92101545Quijano-Sánchez L, Cantador I, Cortés-Cediel ME et al (2020) Recommender systems for smart cities. Inf Syst 92(101):545
Yago: a multilingual knowledge base from wikipedia, wordnet, and geonames. T Rebele, F Suchanek, J Hoffart, International semantic web conference. SpringerRebele T, Suchanek F, Hoffart J et al (2016) Yago: a multilingual knowledge base from wikipedia, word- net, and geonames. In: International semantic web conference, Springer, p 177-185
Matching algorithms: fundamentals, applications and challenges. J Ren, F Xia, X Chen, IEEE Trans Emerg Top Comput Intell. 53Ren J, Xia F, Chen X et al (2021) Matching algorithms: fundamentals, applications and challenges. IEEE Trans Emerg Top Comput Intell 5(3):332-350
Smore: Knowledge graph completion and multi-hop reasoning in massive knowledge graphs. H Ren, H Dai, B Dai, Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. the 28th ACM SIGKDD Conference on Knowledge Discovery and Data MiningRen H, Dai H, Dai B et al (2022) Smore: Knowledge graph completion and multi-hop reasoning in massive knowledge graphs. In: Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, p 1472-1482
Efficient sparql-to-sql with r2rml mappings. M Rodriguez-Muro, M Rezk, J Web Semantics. 33Rodriguez-Muro M, Rezk M (2015) Efficient sparql-to-sql with r2rml mappings. J Web Semantics 33:141-169
Knowledge graph embedding for link prediction: a comparative analysis. A Rossi, D Barbosa, D Firmani, ACM Trans Knowl Discov Data (TKDD). 152Rossi A, Barbosa D, Firmani D et al (2021) Knowledge graph embedding for link prediction: a comparative analysis. ACM Trans Knowl Discov Data (TKDD) 15(2):1-49
The computer science ontology: a comprehensive automatically-generated taxonomy of research areas. A A Salatino, T Thanapalasingam, A Mannocci, Data Intell. 23Salatino AA, Thanapalasingam T, Mannocci A et al (2020) The computer science ontology: a comprehen- sive automatically-generated taxonomy of research areas. Data Intell 2(3)
An extended hesitant fuzzy set using swara-multimoora approach to adapt online education for the control of the pandemic spread of covid-19 in higher education institutions. M K Saraji, A Mardani, M Köppen, Artif Intell Rev. 551Saraji MK, Mardani A, Köppen M et al (2022) An extended hesitant fuzzy set using swara-multimoora approach to adapt online education for the control of the pandemic spread of covid-19 in higher edu- cation institutions. Artif Intell Rev 55(1):181-206
Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. A Saxena, A Tripathi, P Talukdar, Proceedings of the 58th annual meeting of the association for computational linguistics. the 58th annual meeting of the association for computational linguisticsSaxena A, Tripathi A, Talukdar P (2020) Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In: Proceedings of the 58th annual meeting of the association for computational linguistics, p 4498-4507
Modeling relational data with graph convolutional networks. M Schlichtkrull, T N Kipf, P Bloem, European semantic web conference. SpringerSchlichtkrull M, Kipf TN, Bloem P et al (2018) Modeling relational data with graph convolutional net- works. In: European semantic web conference, Springer, p 593-607
A survey of research hotspots and frontier trends of recommendation systems from the perspective of knowledge graph. B Shao, X Li, G Bian, Expert Syst Appl. 165113764Shao B, Li X, Bian G (2021) A survey of research hotspots and frontier trends of recommendation systems from the perspective of knowledge graph. Expert Syst Appl 165(113):764
Tucker decomposition-based temporal knowledge graph completion. P Shao, D Zhang, G Yang, Knowl-Based Syst. 238107841Shao P, Zhang D, Yang G et al (2022) Tucker decomposition-based temporal knowledge graph completion. Knowl-Based Syst 238(107):841
Open-world knowledge graph completion. B Shi, T Weninger, Thirty-Second AAAI Conference on Artificial Intelligence. Shi B, Weninger T (2018) Open-world knowledge graph completion. In: Thirty-Second AAAI Conference on Artificial Intelligence
Entity set expansion in knowledge graph: a heterogeneous information network perspective. C Shi, J Ding, X Cao, Front Comp Sci. 151Shi C, Ding J, Cao X et al (2021) Entity set expansion in knowledge graph: a heterogeneous information network perspective. Front Comp Sci 15(1):1-12
Predicate constraints based question answering over knowledge graph. S Shin, Jin X Jung, J , Info Process Manag. 563Shin S, Jin X, Jung J et al (2019) Predicate constraints based question answering over knowledge graph. Info Process Manag 56(3):445-462
A study on features of social recommender systems. J Shokeen, C Rana, Artif Intell Rev. 532Shokeen J, Rana C (2020) A study on features of social recommender systems. Artif Intell Rev 53(2):965-988
User-preference based knowledge graph feature and structure learning for recommendation. H Shu, J Huang, 2021 IEEE International Conference on Multimedia and Expo (ICME). IEEEShu H, Huang J (2021) User-preference based knowledge graph feature and structure learning for recom- mendation. In: 2021 IEEE International Conference on Multimedia and Expo (ICME), IEEE, p 1-6
No one is perfect: analysing the performance of question answering components over the dbpedia knowledge graph. K Singh, I Lytra, A S Radhakrishna, J Web Semantics. 65100594Singh K, Lytra I, Radhakrishna AS et al (2020) No one is perfect: analysing the performance of question answering components over the dbpedia knowledge graph. J Web Semantics 65(100):594
Knowledge fusion patterns: a survey. A Smirnov, T Levashova, Inf Fusion. 52Smirnov A, Levashova T (2019) Knowledge fusion patterns: a survey. Inf Fusion 52:31-40
Reasoning with neural tensor networks for knowledge base completion. R Socher, D Chen, C D Manning, Advances in neural information processing systems. Socher R, Chen D, Manning CD et al (2013) Reasoning with neural tensor networks for knowledge base completion. In: Advances in neural information processing systems, p 926-934
Interactive spatial keyword querying with semantics. J Sun, J Xu, K Zheng, Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. the 2017 ACM on Conference on Information and Knowledge ManagementSingaporeACMSun J, Xu J, Zheng K et al (2017) Interactive spatial keyword querying with semantics. In: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 06-10, 2017. ACM, p 1727-1736
Relational structure-aware knowledge graph representation in complex space. K Sun, S Yu, C Peng, Mathematics. 10111930Sun K, Yu S, Peng C et al (2022) Relational structure-aware knowledge graph representation in complex space. Mathematics 10(11):1930
Multi-modal knowledge graphs for recommender systems. R Sun, X Cao, Y Zhao, Proceedings of the 29th ACM International Conference on Information & Knowledge Management. the 29th ACM International Conference on Information & Knowledge ManagementSun R, Cao X, Zhao Y et al (2020) Multi-modal knowledge graphs for recommender systems. In: Pro- ceedings of the 29th ACM International Conference on Information & Knowledge Management, p 1405-1414
Rotate: knowledge graph embedding by relational rotation in complex space. Z Sun, Z H Deng, J Y Nie, arXiv:1902.10197arXiv preprintSun Z, Deng ZH, Nie JY et al (2019a) Rotate: knowledge graph embedding by relational rotation in com- plex space. arXiv preprint arXiv: 1902. 10197
Research commentary on recommendations with side information: a survey and research directions. Z Sun, Q Guo, J Yang, Electron Commer Res Appl. 37100879Sun Z, Guo Q, Yang J et al (2019) Research commentary on recommendations with side information: a sur- vey and research directions. Electron Commer Res Appl 37(100):879
Complex embeddings for simple link prediction. T Trouillon, J Welbl, S Riedel, PMLRInternational conference on machine learning. Trouillon T, Welbl J, Riedel S et al (2016) Complex embeddings for simple link prediction. In: International conference on machine learning, PMLR, p 2071-2080
The anatomy of the facebook social graph. J Ugander, B Karrer, L Backstrom, arXiv:1111.4503arXiv preprintUgander J, Karrer B, Backstrom L et al (2011) The anatomy of the facebook social graph. arXiv preprint arXiv: 1111. 4503
Interacte: improving convolution-based knowledge graph embeddings by increasing feature interactions. S Vashishth, S Sanyal, Nitin V , Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceVashishth S, Sanyal S, Nitin V et al (2020) Interacte: improving convolution-based knowledge graph embeddings by increasing feature interactions. In: Proceedings of the AAAI Conference on Artificial Intelligence, p 3009-3016
Wikidata: a free collaborative knowledgebase. D Vrandečić, M Krötzsch, Commun ACM. 5710Vrandečić D, Krötzsch M (2014) Wikidata: a free collaborative knowledgebase. Commun ACM 57(10):78-85
Deep matrix factorization for trust-aware recommendation in social networks. L Wan, F Xia, X Kong, IEEE Trans Netw Sci Eng. 81Wan L, Xia F, Kong X et al (2020) Deep matrix factorization for trust-aware recommendation in social net- works. IEEE Trans Netw Sci Eng 8(1):511-528
Information retrieval technology based on knowledge graph. C Wang, H Yu, F Wan, 2018 3rd International Conference on Advances in Materials, Mechatronics and Civil Engineering. Atlantis PressWang C, Yu H, Wan F (2018a) Information retrieval technology based on knowledge graph. In: 2018 3rd International Conference on Advances in Materials, Mechatronics and Civil Engineering (ICAM- MCE 2018), Atlantis Press, p 291-296
Ripplenet: Propagating user preferences on the knowledge graph for recommender systems. H Wang, F Zhang, J Wang, Proceedings of the 27th ACM International Conference on Information and Knowledge Management. the 27th ACM International Conference on Information and Knowledge ManagementWang H, Zhang F, Wang J et al (2018b) Ripplenet: Propagating user preferences on the knowledge graph for recommender systems. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management, p 417-426
Acekg: a large-scale knowledge graph for academic data mining. R Wang, Y Yan, J Wang, Proceedings of the 27th ACM International Conference on Information and Knowledge Management. the 27th ACM International Conference on Information and Knowledge ManagementNew York, NY, CIKMAssociation for Computing Machinery18Wang R, Yan Y, Wang J et al (2018c) Acekg: a large-scale knowledge graph for academic data mining. In: Proceedings of the 27th ACM International Conference on Information and Knowledge Management. Association for Computing Machinery, New York, NY, CIKM '18, p 1487-1490
Z Wang, T Chen, J Ren, arXiv:1807.00504Deep reasoning with knowledge graph for social relationship understanding. arXiv preprintWang Z, Chen T, Ren J et al (2018d) Deep reasoning with knowledge graph for social relationship under- standing. arXiv preprint arXiv: 1807. 00504
Microsoft academic graph: when experts are not enough. K Wang, Z Shen, C Huang, Quant Sci Stud. 11Wang K, Shen Z, Huang C et al (2020a) Microsoft academic graph: when experts are not enough. Quant Sci Stud 1(1):396-413
Model: motif-based deep feature learning for link prediction. L Wang, J Ren, B Xu, IEEE Trans Comput Soc Syst. 72Wang L, Ren J, Xu B et al (2020b) Model: motif-based deep feature learning for link prediction. IEEE Trans Comput Soc Syst 7(2):503-516
Attributed collaboration network embedding for academic relationship mining. W Wang, J Liu, T Tang, ACM Trans Web (TWEB). 151Wang W, Liu J, Tang T et al (2020c) Attributed collaboration network embedding for academic relationship mining. ACM Trans Web (TWEB) 15(1):1-20
Detecting medical misinformation on social media using multimodal deep learning. Z Wang, Z Yin, Y A Argyris, IEEE J Biomed Health Info. 256Wang Z, Yin Z, Argyris YA (2020d) Detecting medical misinformation on social media using multimodal deep learning. IEEE J Biomed Health Info 25(6):2193-2203
Covid-19 literature knowledge graph construction and drug repurposing report generation. Q Wang, M Li, X Wang, arXiv:2007.00576arXiv preprintWang Q, Li M, Wang X et al (2020e) Covid-19 literature knowledge graph construction and drug repurpos- ing report generation. arXiv preprint arXiv: 2007. 00576
Sustainable collaborator recommendation based on conference closure. W Wang, J Liu, Z Yang, IEEE Trans Comput Soc Syst. 62Wang W, Liu J, Yang Z et al (2019a) Sustainable collaborator recommendation based on conference closure. IEEE Trans Comput Soc Syst 6(2):311-322
Explainable reasoning over knowledge graphs for recommendation. X Wang, D Wang, C Xu, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceWang X, Wang D, Xu C et al (2019b) Explainable reasoning over knowledge graphs for recommendation. In: Proceedings of the AAAI Conference on Artificial Intelligence, p 5329-5336
Multi-task feature learning for knowledge graph enhanced recommendation. H Wang, F Zhang, M Zhao, The World Wide Web Conference. Wang H, Zhang F, Zhao M et al (2019c) Multi-task feature learning for knowledge graph enhanced recom- mendation. In: The World Wide Web Conference, p 2000-2010
Multitask feature learning approach for knowledge graph enhanced recommendations with Ripplenet. Y Wang, L Dong, Y Li, Plos One. 165251Wang Y, Dong L, Li Y et al (2021) Multitask feature learning approach for knowledge graph enhanced rec- ommendations with Ripplenet. Plos One 16(5):e0251
Reasoning like human: hierarchical reinforcement learning for knowledge graph reasoning. Z Wang, J Zhang, J Feng, Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence. the Twenty-Ninth International Conference on International Joint Conferences on Artificial IntelligenceProceedings of the AAAI Conference on Artificial Intelligence Wan GWang Z, Zhang J, Feng J et al (2014) Knowledge graph embedding by translating on hyperplanes. In: Pro- ceedings of the AAAI Conference on Artificial Intelligence Wan G, Pan S, Gong C et al (2021) Reasoning like human: hierarchical reinforcement learning for knowl- edge graph reasoning. In: Proceedings of the Twenty-Ninth International Conference on International Joint Conferences on Artificial Intelligence, p 1926-1932
Covid-19 knowledge graph: accelerating information retrieval and discovery for scientific literature. C Wise, V N Ioannidis, M R Calvo, arXiv:2007.12731arXiv preprintWise C, Ioannidis VN, Calvo MR et al (2020) Covid-19 knowledge graph: accelerating information retrieval and discovery for scientific literature. arXiv preprint arXiv: 2007. 12731
Ontology-based subgraph querying. Y Wu, S Yang, X Yan, 29th IEEE International Conference on Data Engineering. Brisbane, AustraliaIEEE Computer SocietyWu Y, Yang S, Yan X (2013) Ontology-based subgraph querying. In: 29th IEEE International Conference on Data Engineering, ICDE 2013, Brisbane, Australia, April 8-12, 2013. IEEE Computer Society, p 697-708
Socially aware conference participant recommendation with personality traits. F Xia, N Y Asabere, H Liu, IEEE Syst J. 114Xia F, Asabere NY, Liu H et al (2014a) Socially aware conference participant recommendation with person- ality traits. IEEE Syst J 11(4):2255-2266
Multi-category item recommendation using neighborhood associations in trust networks. F Xia, H Liu, N Y Asabere, Proceedings of the 23rd International Conference on World Wide Web. the 23rd International Conference on World Wide WebXia F, Liu H, Asabere NY et al (2014b) Multi-category item recommendation using neighborhood associa- tions in trust networks. In: Proceedings of the 23rd International Conference on World Wide Web, p 403-404
Scientific article recommendation: exploiting common author relations and historical preferences. F Xia, H Liu, I Lee, IEEE Trans Big Data. 22Xia F, Liu H, Lee I et al (2016) Scientific article recommendation: exploiting common author relations and historical preferences. IEEE Trans Big Data 2(2):101-112
Graph learning: a survey. F Xia, K Sun, S Yu, IEEE Trans Artif Intell. 22Xia F, Sun K, Yu S et al (2021) Graph learning: a survey. IEEE Trans Artif Intell 2(2):109-127
Reinforcement knowledge graph reasoning for explainable recommendation. Y Xian, Z Fu, S Muthukrishnan, Proceedings of the 42nd international ACM SIGIR conference on research and development in information retrieval. the 42nd international ACM SIGIR conference on research and development in information retrievalXian Y, Fu Z, Muthukrishnan S et al (2019) Reinforcement knowledge graph reasoning for explainable recommendation. In: Proceedings of the 42nd international ACM SIGIR conference on research and development in information retrieval, p 285-294
Transg: a generative mixture model for knowledge graph embedding. H Xiao, M Huang, Y Hao, arXiv:1509.05488arXiv preprintXiao H, Huang M, Hao Y et al (2015) Transg: a generative mixture model for knowledge graph embedding. arXiv preprint arXiv: 1509. 05488
Deep path: a reinforcement learning method for knowledge graph reasoning. W Xiong, T Hoang, W Y Wang, arXiv:1707.06690arXiv preprintXiong W, Hoang T, Wang WY (2017) Deep path: a reinforcement learning method for knowledge graph reasoning. arXiv preprint arXiv: 1707. 06690
Multivariate relations aggregation learning in social networks. J Xu, S Yu, K Sun, Proc ACM/ IEEE Joint Conf Digital Libraries in 2020. ACM/ IEEE Joint Conf Digital Libraries in 2020Xu J, Yu S, Sun K et al (2020) Multivariate relations aggregation learning in social networks. Proc ACM/ IEEE Joint Conf Digital Libraries in 2020:77-86
Cross-lingual knowledge graph alignment via graph matching neural network. K Xu, L Wang, M Yu, arXiv:1905.11605arXiv preprintXu K, Wang L, Yu M et al (2019) Cross-lingual knowledge graph alignment via graph matching neural net- work. arXiv preprint arXiv: 1905. 11605
L Yao, C Mao, Y Luo, arXiv:1909.03193Kg-bert: Bert for knowledge graph completion. arXiv preprintYao L, Mao C, Luo Y (2019) Kg-bert: Bert for knowledge graph completion. arXiv preprint arXiv: 1909. 03193
Incorporating knowledge graph embeddings into topic modeling. L Yao, Y Zhang, B Wei, Thirty-First AAAI Conference on Artificial Intelligence. Yao L, Zhang Y, Wei B et al (2017) Incorporating knowledge graph embeddings into topic modeling. In: Thirty-First AAAI Conference on Artificial Intelligence
Joint embedding learning of educational knowledge graphs. S Yao, R Wang, S Sun, Artificial Intelligence Supported Educational. Yao S, Wang R, Sun S et al (2020) Joint embedding learning of educational knowledge graphs. In: Artificial Intelligence Supported Educational Technologies p 209-224
Graph convolutional neural networks for web-scale recommender systems. R Ying, R He, K Chen, Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. the 24th ACM SIGKDD international conference on knowledge discovery & data miningYing R, He R, Chen K et al (2018) Graph convolutional neural networks for web-scale recommender sys- tems. In: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, p 974-983
A framework for reviewer recommendation based on knowledge graph and rules matching. Y Yong, Z Yao, Y Zhao, 2021 IEEE International Conference on Information Communication and Software Engineering (ICICSE). Yong Y, Yao Z, Zhao Y (2021) A framework for reviewer recommendation based on knowledge graph and rules matching. In: 2021 IEEE International Conference on Information Communication and Soft- ware Engineering (ICICSE), p 199-203
A relationship extraction method for domain knowledge graph construction. H Yu, H Li, D Mao, World Wide Web. 232Yu H, Li H, Mao D et al (2020) A relationship extraction method for domain knowledge graph construction. World Wide Web 23(2):735-753
Doctor recommendation on healthcare consultation platforms: an integrated framework of knowledge graph and deep learning. H Yuan, W Deng, Internet Research Zablith F (2022) Constructing social media links to formal learning: a knowledge graph approachYuan H, Deng W (2021) Doctor recommendation on healthcare consultation platforms: an integrated frame- work of knowledge graph and deep learning. Internet Research Zablith F (2022) Constructing social media links to formal learning: a knowledge graph approach. Educa- tional technology research and development p 1-26
Multi-modal knowledge-aware event memory network for social media rumor detection. H Zhang, Q Fang, S Qian, Proceedings of the 27th ACM International Conference on Multimedia. the 27th ACM International Conference on MultimediaZhang H, Fang Q, Qian S et al (2019a) Multi-modal knowledge-aware event memory network for social media rumor detection. In: Proceedings of the 27th ACM International Conference on Multimedia, p 1942-1951
Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. N Zhang, S Deng, Z Sun, arXiv:1903.01306arXiv preprintZhang N, Deng S, Sun Z et al (2019b) Long-tail relation extraction via knowledge graph embeddings and graph convolution networks. arXiv preprint arXiv: 1903. 01306
Quaternion knowledge graph embeddings. S Zhang, Y Tay, L Yao, Adv Neural Info Process Syst. 32Zhang S, Tay Y, Yao L et al (2019c) Quaternion knowledge graph embeddings. Adv Neural Info Process Syst 32
Hkgb: an inclusive, extensible, intelligent, semi-auto-constructed knowledge graph framework for healthcare with clinicians' expertise incorporated. Y Zhang, M Sheng, R Zhou, Info Process Manag. 576102Zhang Y, Sheng M, Zhou R et al (2020a) Hkgb: an inclusive, extensible, intelligent, semi-auto-constructed knowledge graph framework for healthcare with clinicians' expertise incorporated. Info Process Manag 57(6):102
Learning hierarchy-aware knowledge graph embeddings for link prediction. Z Zhang, J Cai, Y Zhang, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial IntelligenceZhang Z, Cai J, Zhang Y et al (2020b) Learning hierarchy-aware knowledge graph embeddings for link pre- diction. In: Proceedings of the AAAI Conference on Artificial Intelligence, p 3065-3072
Name disambiguation in aminer: clustering, maintenance, and human in the loop. Y Zhang, F Zhang, P Yao, Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data MiningZhang Y, Zhang F, Yao P et al (2018) Name disambiguation in aminer: clustering, maintenance, and human in the loop. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Dis- covery & Data Mining, p 1002-1011
Cone: cone embeddings for multi-hop reasoning over knowledge graphs. Z Zhang, J Wang, J Chen, Adv Neural Info Process Syst. 34183Zhang Z, Wang J, Chen J et al (2021) Cone: cone embeddings for multi-hop reasoning over knowledge graphs. Adv Neural Info Process Syst 34:19,172-19,183
Multi-source knowledge fusion: a survey. X Zhao, Y Jia, A Li, World Wide Web. 234Zhao X, Jia Y, Li A et al (2020) Multi-source knowledge fusion: a survey. World Wide Web 23(4):2567-2592
Dgl-ke: training knowledge graph embeddings at scale. D Zheng, X Song, C Ma, Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval. the 43rd International ACM SIGIR Conference on Research and Development in Information RetrievalZheng D, Song X, Ma C et al (2020) Dgl-ke: training knowledge graph embeddings at scale. In: Proceed- ings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, p 739-748
A survey of recommender systems with multi-objective optimization. Y Zheng, D X Wang, Neurocomputing. 474Zheng Y, Wang DX (2022) A survey of recommender systems with multi-objective optimization. Neuro- computing 474:141-153
Schere: Schema reshaping for enhancing knowledge graph construction. D Zhou, B Zhou, Z Zheng, Proceedings of the 31st ACM International Conference on Information & Knowledge Management. the 31st ACM International Conference on Information & Knowledge ManagementZhou D, Zhou B, Zheng Z et al (2022) Schere: Schema reshaping for enhancing knowledge graph construc- tion. In: Proceedings of the 31st ACM International Conference on Information & Knowledge Man- agement, p 5074-5078
Step by step: a hierarchical framework for multi-hop knowledge graph reasoning with reinforcement learning. A Zhu, D Ouyang, S Liang, Knowl-Based Syst. 248108843Zhu A, Ouyang D, Liang S et al (2022) Step by step: a hierarchical framework for multi-hop knowledge graph reasoning with reinforcement learning. Knowl-Based Syst 248(108):843
Exploiting semantic similarity for named entity disambiguation in knowledge graphs. G Zhu, C A Iglesias, Expert Syst Appl. 101Zhu G, Iglesias CA (2018) Exploiting semantic similarity for named entity disambiguation in knowledge graphs. Expert Syst Appl 101:8-24
Multi-modal knowledge graph construction and application: a survey. X Zhu, Z Li, X Wang, arXiv:2202.05786arXiv preprintZhu X, Li Z, Wang X et al (2022b) Multi-modal knowledge graph construction and application: a survey. arXiv preprint arXiv: 2202. 05786
A survey on application of knowledge graph. X Zou, J Phys Conf Ser. 148701216Zou X (2020) A survey on application of knowledge graph. J Phys Conf Ser 1487(012):016
| [] |
[
"Invisible Higgs from forward muons at a muon collider",
"Invisible Higgs from forward muons at a muon collider"
] | [
"Maximilian Ruhdorfer ",
"Ennio Salvioni ",
"Andrea Wulzer ",
"\nDipartimento di Fisica e Astronomia, Università di Padova and INFN\nLaboratory for Elementary Particle Physics\nCornell University\n14853IthacaNYUSA\n",
"\nInstitute of Science and Technology (BIST)\nInstitut de Física d'Altes Energies (IFAE) and ICREA, Institució Catalana de Recerca i Estudis Avançats, Passeig de Lluís Companys 23\nSezione di Padova\nVia Marzolo 8, The Barcelona, Campus UAB35131, 08193, 08010Padua, Bellaterra, Barcelona, BarcelonaItaly, Spain, Spain\n"
] | [
"Dipartimento di Fisica e Astronomia, Università di Padova and INFN\nLaboratory for Elementary Particle Physics\nCornell University\n14853IthacaNYUSA",
"Institute of Science and Technology (BIST)\nInstitut de Física d'Altes Energies (IFAE) and ICREA, Institució Catalana de Recerca i Estudis Avançats, Passeig de Lluís Companys 23\nSezione di Padova\nVia Marzolo 8, The Barcelona, Campus UAB35131, 08193, 08010Padua, Bellaterra, Barcelona, BarcelonaItaly, Spain, Spain"
] | [] | We propose to probe the Higgs boson decay to invisible particles at a muon collider by observing the forward muons that are produced in association with the Higgs in the Z-boson fusion channel. An excellent sensitivity is possible in principle, owing to the large number of produced Higgs bosons, provided a forward muon detector is installed. We find that the resolution on the measurement of the muon energy and angle will be the main factor limiting the actual sensitivity. This poses tight requirements on the forward muon detector design. | 10.1103/physrevd.107.095038 | [
"https://export.arxiv.org/pdf/2303.14202v1.pdf"
] | 257,766,432 | 2303.14202 | fd3ac703cd071990712f5def6021424572717173 |
Invisible Higgs from forward muons at a muon collider
Maximilian Ruhdorfer
Ennio Salvioni
Andrea Wulzer
Laboratory for Elementary Particle Physics, Cornell University, Ithaca, NY 14853, USA
Dipartimento di Fisica e Astronomia, Università di Padova and INFN, Sezione di Padova, Via Marzolo 8, 35131 Padua, Italy
Institut de Física d'Altes Energies (IFAE), The Barcelona Institute of Science and Technology (BIST), Campus UAB, 08193 Bellaterra, Barcelona, Spain
ICREA, Institució Catalana de Recerca i Estudis Avançats, Passeig de Lluís Companys 23, 08010 Barcelona, Spain
Invisible Higgs from forward muons at a muon collider
We propose to probe the Higgs boson decay to invisible particles at a muon collider by observing the forward muons that are produced in association with the Higgs in the Z-boson fusion channel. An excellent sensitivity is possible in principle, owing to the large number of produced Higgs bosons, provided a forward muon detector is installed. We find that the resolution on the measurement of the muon energy and angle will be the main factor limiting the actual sensitivity. This poses tight requirements on the forward muon detector design.
I. INTRODUCTION
The possibility of building a muon collider with a centre of mass energy of 10 TeV or more and with high luminosity [1] has received increasing attention in the last few years and is being actively pursued (see [2] for a review) by the International Muon Collider Collaboration (IMCC). Such a collider would offer numerous and varied physics opportunities, ranging from direct access to the 10 TeV energy scale to the availability of a large effective luminosity for vector boson collisions at the scale of 1 TeV or below. The physics potential of the muon collider as a "vector boson collider" [3] has been outlined in [4][5][6] for the search for new particles produced in the Vector Boson Fusion (VBF) process, for the search for new phenomena in Standard Model (SM) scatterings initiated by vector bosons (VBS processes) [7][8][9], and for precise measurements of the single Higgs couplings [8,10].
The VBF or VBS processes are schematically represented in Fig. 1. They proceed through the collinear emission of nearly on-shell vector bosons from the incoming muons. The vector bosons collide, producing some final state "X", such as the Higgs boson in the process considered in the present work. The on-shell fermion and anti-fermion emerge from the splitting as real final-state particles. If the emitted vector bosons are charged W bosons, the initial muons are turned into invisible neutrinos. The emission of neutral bosons such as the Z or the photon is instead accompanied by potentially detectable final-state muons, offering novel handles for the observation and the study of VBF and VBS processes. The kinematics of the process is conveniently described in the effective vector boson approximation [11][12][13][14][15] by factorising the emission of the vector bosons into universal splitting functions that are independent of the nature of the subsequent scattering process. The typical transverse momentum of the effective Z boson, and in turn of the final muon, is around the mass of the boson, p ⊥ ∼ m Z . The p ⊥ spectrum lies almost entirely above one tenth of m Z .
The energy of the emitted bosons depends on the invariant mass of the X system. If the invariant mass is of hundreds of GeV or less (e.g., m X = m h in the case of Higgs production), the energy of the Z is a small fraction of the initial muon energy. Therefore the final state muon carries away almost all of the beam energy E b = 5 TeV, and thus for p ⊥ ∼ m Z it has a small typical angle θ ∼ p ⊥ /E b = 18 mrad from the beam line. The invariant mass m X is larger than hundreds of GeV if X is a heavy new physics particle, or if X consists of a pair of SM particles and we apply an invariant mass cut in order to study their interaction with the Z at the 1 TeV scale. In this case, the energy of the final muons is below the beam energy, and the muons are less forward and a priori easier to detect. An angular coverage for muon detection up to around a pseudo-rapidity |η| < 7, i.e. θ ≳ 0.1 m Z /E b ≈ 1.8 mrad, would definitely offer sensitivity to the entire p ⊥ spectrum of the forward muons associated with the emission of effective Z bosons in all VBF or VBS processes of interest at the muon collider.
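The angular estimates quoted above follow from small-angle kinematics, θ ≈ p ⊥ /E, combined with the pseudo-rapidity definition η = −ln tan(θ/2). The short Python sketch below simply reproduces these numbers; the beam energy and Z mass are the values quoted in the text, and nothing here is simulated.

```python
import math

# Inputs quoted in the text (treated as exact for this back-of-envelope check)
E_BEAM = 5000.0  # muon beam energy E_b in GeV
M_Z    = 91.19   # Z boson mass in GeV

def eta_from_theta(theta):
    """Pseudo-rapidity for a polar angle theta (radians)."""
    return -math.log(math.tan(theta / 2.0))

def theta_from_eta(eta):
    """Polar angle (radians) corresponding to a pseudo-rapidity eta."""
    return 2.0 * math.atan(math.exp(-eta))

# Typical spectator-muon angle: theta ~ p_T / E with p_T ~ m_Z
theta_typ = M_Z / E_BEAM
print(f"typical angle: {1e3 * theta_typ:.1f} mrad "
      f"-> eta = {eta_from_theta(theta_typ):.2f}")      # ~18.2 mrad, eta ~ 4.7

# Edge of an assumed |eta| < 7 forward coverage
print(f"|eta| = 7 corresponds to "
      f"theta = {1e3 * theta_from_eta(7.0):.2f} mrad")  # ~1.82 mrad
```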
An extended angular coverage above pseudo-rapidity 7 could be of interest to study the emission of effective photons rather than Z bosons, because the effective photon p ⊥ distribution extends much below the 100 GeV scale, down to the muon mass. However, since the distribution is logarithmic, good sensitivity to the effective photon emission is expected even with angular coverage |η| < 7. The observation of forward muons with rapidity up to around 6 or 7 thus signals the occurrence of a generic "neutral" VBF or VBS process where the colliding bosons could a priori be either Z bosons or photons. In the case of single-Higgs production, photons do not play any role because their coupling to the Higgs is small.
The current design of the muon collider machine-detector interface foresees the installation of two conical absorbers along the beam line, with the tips pointing towards the interaction point, in order to shield the detector from the radiation induced by the decay of the colliding muons. The absorbers limit the angular coverage of the main detector to θ > 10° ≈ 175 mrad [2]. This corresponds to |η| < 2.44, which is not sufficient to detect the forward muons produced in neutral VBF and VBS events. Fortunately, TeV-energy muons are penetrating particles that cross the absorbers and possibly other elements of the collider. Unlike all other species of particles, whose detection is possible only in the central region, muons could thus in principle be detected also in the pseudo-rapidity range from 2.5 to 7 if a dedicated forward muon detector were installed.
The IMCC plans [2] to study the design of a forward muon detector, and in fact the possibility of detecting forward muons has long been included in the muon collider DELPHES card [16,17]. However, the assessment of the feasibility and of the performance of such a detector has just started. Studying the physics potential of the forward muon detector provides useful guidance to the IMCC design study and clarifies the requirements needed to achieve specific goals.
The need for a forward muon detector first emerged [5] in the study of Higgs portal models. These are important new physics targets for future colliders aimed at probing the Higgs sector from multiple angles, including its possible connections with dark matter. The new physics particles coupled through Higgs portal interactions are copiously produced in VBF at the muon collider, but they must be stable and invisible in order to be viable dark matter candidates. This motivates searches for invisible particles produced in the neutral VBF process. The forward muon detector will make it possible to tag this otherwise invisible signal, and it will also allow the background to be rejected by exploiting the kinematics of the forward muons, provided the forward detector is equipped to measure the muon momentum and not just to identify the muons.
A survey of several physics studies that exploit the forward muon detector, establishing its physics case, is currently in preparation [18]. Among these studies, the novel analysis of the SM Higgs decay to invisible particles that we propose in this paper poses the tightest requirements on the detector performance, and in particular on the resolution of the muon energy and direction measurements.
The decay of the Higgs to invisible particles has not yet been established experimentally. LHC data result in an upper bound BR inv < 0.11 [19] on the invisible branching ratio at 95% CL. A moderate improvement is expected at the High-Luminosity LHC (HL-LHC): BR inv < 0.028 [20,21], in the most optimistic scenario for systematic uncertainties. This sensitivity is very far from the prediction BR SM inv = 1.2 · 10 −3 from the SM decay h → ZZ * → 4ν. Among proposed future projects, electron-positron colliders such as FCC-ee and ILC will improve the sensitivity down to BR inv = 3 · 10 −3 [21] (though a more optimistic study previously claimed a reach of ∼ 1 · 10 −3 [22]), but only the FCC-hh is expected to observe the SM invisible decay, with a projected sensitivity BR inv < 2.5 · 10 −4 [23] under the hypothesis of a vanishing branching ratio. Clearly, after passing the SM threshold the relevant figure of merit becomes the expected sensitivity to the beyond-the-SM (BSM) invisible branching ratio BR BSM inv , under the hypothesis of the SM branching ratio. The BSM invisible branching ratio is due to the decay of the Higgs to putative new invisible particles, which may be stable or more generally long-lived, and is defined by the relation BR inv = BR SM inv + BR BSM inv . A variety of new physics scenarios foresee a sizeable BR BSM inv , possibly even larger than the SM component [24,25]. This provides a strong theoretical motivation for the study of invisible Higgs decays. In this paper we quantify the muon collider sensitivity, accounting for all the relevant backgrounds and realistic beam parameters, as a function of the angular acceptance and resolution of the putative forward muon detector.
The paper is organised as follows. We start in Section II from an idealised setup where the incoming muons are perfectly monochromatic and parallel to the beam axis, and the final state muon angle and energy are measured perfectly. In Section III we include in the simulations the effect of the finite spread in energy and in angle of the colliding muon beams. The results obtained with these simulations provide the best attainable sensitivity, corresponding to a forward detector that measures the momentum of the forward muons extremely precisely. Realistic resolution assumptions, included in Section IV, will emerge as the main factor limiting the sensitivity. We summarise our findings in Section V. The effect of variations of the assumed performance of the main detector, and sensitivity projections at a 3 TeV muon collider, are described in Appendices A and B, respectively.
II. TRUTH-LEVEL DISTRIBUTIONS
In this section we study the signal µ + µ − → µ + µ − h with the Higgs decaying invisibly, and the relevant backgrounds, in an idealised setup where the beam particles have an energy of exactly E b = 5 TeV and momentum exactly along the beam axis. We also neglect the uncertainties in the measurement of the final muons.
We consider events characterised by the detection of two opposite-charge muons, and a veto on any other object (photon, jet or lepton) within the coverage |η| < 2.44 of the main detector. The energy or transverse momentum threshold above which the veto is effective will have to be estimated by a full simulation of the main detector, once available. We assume a common threshold of p ⊥ > 20 GeV for all objects and study departures from this value in Appendix A. We consider final-state muons in opposite hemispheres (i.e., η µ + · η µ − < 0) with absolute pseudo-rapidity |η µ | up to 7, and study in Section IV the impact of a reduced angular acceptance for the forward muon detector. We further require a lower energy threshold, E µ ± > 500 GeV, for the muons to cross the absorbers and be detected. The contribution from virtual photons splitting to µ + µ − (or from Z boson decays) is eliminated by a loose angular separation cut ∆R µµ > 0.4. These selections define the "baseline" cuts for our analysis.
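For concreteness, the baseline selection can be phrased as a single pass/fail filter per event. The Python sketch below is purely illustrative and is not the analysis code of this work: the flat event objects and the helper names are hypothetical, while the numerical thresholds are the ones listed above.

```python
import math
from collections import namedtuple

# Hypothetical flat reconstructed objects: energy e [GeV], transverse
# momentum pt [GeV], pseudo-rapidity eta and azimuth phi [rad].
Obj = namedtuple("Obj", ["e", "pt", "eta", "phi"])

MAIN_DET_ETA = 2.44   # main detector coverage
FWD_MUON_ETA = 7.0    # assumed forward muon detector coverage
VETO_PT      = 20.0   # GeV, common veto threshold
MIN_MUON_E   = 500.0  # GeV, needed for the muons to cross the absorbers
MIN_DELTA_R  = 0.4

def delta_r(a, b):
    """Angular separation, with the azimuthal difference wrapped to [-pi, pi]."""
    dphi = math.atan2(math.sin(a.phi - b.phi), math.cos(a.phi - b.phi))
    return math.hypot(a.eta - b.eta, dphi)

def passes_baseline(mu_plus, mu_minus, other_objects):
    """Baseline cuts described in the text (illustrative only)."""
    if abs(mu_plus.eta) > FWD_MUON_ETA or abs(mu_minus.eta) > FWD_MUON_ETA:
        return False                                # forward muon acceptance
    if mu_plus.eta * mu_minus.eta >= 0:
        return False                                # opposite hemispheres
    if min(mu_plus.e, mu_minus.e) < MIN_MUON_E:
        return False                                # cross the absorbers
    if delta_r(mu_plus, mu_minus) < MIN_DELTA_R:
        return False                                # reject gamma*/Z -> mu mu
    # veto any other photon, jet or lepton seen by the main detector
    return not any(abs(o.eta) < MAIN_DET_ETA and o.pt > VETO_PT
                   for o in other_objects)
```

Note that the veto is applied only to objects inside the main detector, mirroring the fact that anything other than muons cannot be detected beyond |η| = 2.44.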
On top of the kinematic properties of the individual muons, signal and backgrounds are conveniently characterised and discriminated by the invariant mass M µµ , the azimuthal angular distance ∆φ µµ and the total transverse momentum P µµ ⊥ = (p µ + + p µ − ) ⊥ of the µ + µ − pair. Other useful variables are the minimal muon energy, E min µ = min (E µ − , E µ + ), and the Missing Invariant Mass (MIM)
MIM = √|(∆P)²| ,   ∆P = (2E b , 0 ) − p µ + − p µ − .   (1)
where 2E b = 10 TeV is the nominal centre of mass energy of the collider. ∆P is the difference between the total 4-momentum of the incoming muons with nominal energy and the total 4-momentum of the final muons. The absolute value in Eq. (1) ensures that MIM remains real and positive even in the presence of beam energy spread and experimental smearing of the final state muon momenta.
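All of the variables above can be computed from the two measured muon four-momenta and the nominal beam energy alone. The following minimal Python sketch, which assumes a plain (E, px, py, pz) tuple representation for the momenta (our convention for this illustration, not taken from the paper), implements Eq. (1) together with M µµ , ∆φ µµ , P µµ ⊥ and E min µ .

```python
import math

E_BEAM = 5000.0  # GeV, nominal beam energy, so that 2*E_b = 10 TeV

def minkowski_sq(p):
    """Invariant square with the (+, -, -, -) metric; p = (E, px, py, pz)."""
    e, px, py, pz = p
    return e * e - px * px - py * py - pz * pz

def analysis_variables(p_mup, p_mum):
    """M_mumu, dphi_mumu, P_T^mumu, E_min and MIM from the two measured
    muon four-momenta, following Eq. (1). A sketch, not the analysis code."""
    e1, x1, y1, z1 = p_mup
    e2, x2, y2, z2 = p_mum
    p_pair = (e1 + e2, x1 + x2, y1 + y2, z1 + z2)
    m_mumu = math.sqrt(max(minkowski_sq(p_pair), 0.0))
    pt_mumu = math.hypot(p_pair[1], p_pair[2])
    phi1, phi2 = math.atan2(y1, x1), math.atan2(y2, x2)
    dphi_mumu = math.pi - abs(abs(phi1 - phi2) - math.pi)  # wrapped to [0, pi]
    e_min = min(e1, e2)
    # Missing invariant mass, Eq. (1): nominal initial state minus the muons
    dP = (2.0 * E_BEAM - p_pair[0], -p_pair[1], -p_pair[2], -p_pair[3])
    mim = math.sqrt(abs(minkowski_sq(dP)))  # abs() keeps MIM real and positive
    return m_mumu, dphi_mumu, pt_mumu, e_min, mim
```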
Monte Carlo (MC) data samples for signal and backgrounds are generated using MadGraph5_aMC@NLO [26]. Photon showering from PYTHIA8 [27] is also performed on the signal and on some of the background samples (see below). Both final state radiation (FSR) and initial state radiation (ISR) showering are included. The backwards evolution needed for ISR requires the parton distribution function (PDF) of the muon to be employed in the fixed-order event generators. This is achieved in MadGraph by a simple modification of the electron PDF implementation [28]. The relevant distributions for the signal are shown in black in Fig. 2. The total cross-section is 62 fb for unit branching ratio of the invisible decay of the Higgs, after the baseline cuts described above and P µµ ⊥ > 50 GeV. The signal is characterised by a muon pseudo-rapidity between around 2.5 and 7, in accordance with the estimates described in the Introduction. The muon p ⊥ is of order m Z as previously discussed, thus P µµ ⊥ is of order one hundred GeV as the figure shows. Relevant backgrounds are those processes that can produce a forward and energetic µ + µ − pair, but with some momentum imbalance in the transverse plane. They fall into two categories.
The first class of backgrounds are those processes that produce invisible neutrinos, namely the final state µ + µ − νν shown in orange in Fig. 2. While we simulate this final state as a single process, we notice that it contains a number of different components. Subprocesses where the neutrinos are emitted from the decay of a Z boson dominantly emerge from the radiation of the Z from the elastic scattering µ + µ − → µ + µ − . They are characterised by a resonant Z-pole peak in the MIM distribution, while the signal peaks at MIM = m h . However, we will see in the following section that the energy spread of the muon beams eliminates these narrow peaks. The second component of the µ + µ − νν background emerges from the W boson produced in γW fusion, or radiated from the elastic muon scattering, and decaying as W → µν. These processes produce a continuous spectrum in M µµ and in MIM. A third component of the process comes from Z bosons or low-virtuality photons emitted from the elastic process and decaying to muons. This component is however eliminated by the ∆R µµ and E µ ± cuts.
The top left panel in Fig. 2 shows that the P µµ ⊥ distribution of the µ + µ − νν background is very similar to that of the signal. On the other hand, P µµ ⊥ is an important discriminant for the other backgrounds, discussed below. For this reason, in Fig. 2 we show (with solid lines) the effect of a P µµ ⊥ > 50 GeV cut on the distributions. The cut has little impact on the µ + µ − νν background.
The second class of backgrounds are processes where the µ + µ − pair is produced in association with any type of object that cannot be vetoed because it falls outside the angular acceptance of the main detector or because it is softer than the assumed 20 GeV p ⊥ threshold.
[FIG. 2: Distributions of P µµ ⊥ (top left), MIM, M µµ , E min µ and |η µ − | for the µ + µ − h signal and for the µ + µ − νν, µ + µ − γ, µ + µ − ff and µ + µ − W + W − backgrounds at 2E b = 10 TeV. In the top left panel, only baseline cuts are applied (i.e., |η µ | < 7, η µ + · η µ − < 0, ∆R µµ > 0.4 and E µ ± > 500 GeV, plus the veto with p ⊥ > 20 GeV and |η| < 2.44). In all the other panels, dashed lines correspond to baseline cuts, while solid lines also include P µµ ⊥ > 50 GeV.]

The muons must be forward (while still below |η µ | = 7) for the process to be relevant. This naturally occurs for neutral VBF or VBS processes initiated by virtual Z bosons or photons. The largest such process is µ + µ − → µ + µ − ff , where f denotes any light quark or lepton with the exception of the muons, which are detected in the forward region. This final state includes VBF Higgs production with the Higgs decaying to bb or τ + τ − as a subdominant contribution. It also includes the production of a virtual photon decaying to ff . The corresponding singularity is eliminated by a 10 GeV cut on the invariant mass of the ff pair. The µ + µ − → µ + µ − γ process, discussed later in this section, accounts for the region below the cut. We also include the µ + µ − → µ + µ − W + W − background, which is the largest vector boson or Higgs pair production process in neutral VBF [7]. This is estimated under the conservative assumption that all the W bosons emitted outside the angular acceptance or with a p ⊥ lower than the threshold will pass the veto. Even with this setup, the µ + µ − W + W − background will be found to play a minor role for the sensitivity.
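As a concrete reading of the conservative assumption adopted above for the µ + µ − W + W − background, the hypothetical helper below (names and event format are ours, not the paper's) counts an event as background whenever every W boson is either outside the main-detector acceptance or below the veto threshold, i.e. whenever no W can be vetoed.

```python
# Thresholds quoted in the text; W candidates are hypothetical objects
# carrying a pseudo-rapidity .eta and a transverse momentum .pt in GeV.
MAIN_DET_ETA = 2.44
VETO_PT = 20.0

def w_escapes_veto(w):
    """A W is (conservatively) assumed to evade the veto if it is too
    forward for the main detector or softer than the veto threshold."""
    return abs(w.eta) > MAIN_DET_ETA or w.pt < VETO_PT

def survives_as_background(w_bosons):
    """Count the mu+ mu- W+ W- event as background only when every W
    escapes the veto; otherwise at least one W would be vetoed."""
    return all(w_escapes_veto(w) for w in w_bosons)
```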
We have been unable to include photon showering in the µ + µ − ff process, which is thus simulated purely at tree-level and without muon PDFs. The first technical issue we encountered is that MadGraph event generation fails for this process when employing the muon PDFs, while PDFs are essential for ISR showering as previously mentioned. With PDFs, generation succeeds only with relatively strong lower p ⊥ cuts on all the final particles, in addition to the muon acceptance cut and the cut on the ff invariant mass. A second more conceptual difficulty concerns the choice of the showering scale in PYTHIA8. A large component of the process emerges from the γγ → ff fusion of low-virtuality effective photons. This is effectively an electromagnetic radiation process in itself, such that the adequate showering scale for photon radiation from the muon legs should be commensurate to the virtuality of the splitting rather than to the hardness of the final fermions. A sophisticated showering scheme should be adopted, which however does not seem to be easily available in PYTHIA8. We consider that this limitation of the simulation will not harm the accuracy of our results, because the effects of showering are minor in general and because µ + µ − ff will turn out not to be the dominant background.
Elastic scattering µ+µ− → µ+µ− also produces forward muons due to the t-channel enhancement. The emission of real photons that are either too soft or too forward to be detected generates a significant amount of MIM and P_⊥^μμ. Ideally, one would like to simulate this process by generating a merged sample of µ+µ− → µ+µ− plus additional photons and match it with QED showering. However, this is not straightforward to achieve with any of the currently available multi-purpose MC generators, in contrast with the case of QCD radiation. We then proceed as follows. We first generate a sample without extra photons and shower it with PYTHIA8. 1 This produces the P_⊥^μμ distribution shown in brown color in the top left panel of Fig. 2. The distribution extends above around 50 GeV, where the signal starts, but with a very weakly populated tail that makes event generation above the analysis cut P_⊥^μμ > 50 GeV cumbersome. Furthermore, showering is arguably not the adequate description of the process in the large P_⊥^μμ tail. We thus simulate the tree-level process µ+µ− → µ+µ−γ with a lower cut of 10 GeV on the photon p_⊥ and on the photon-muon invariant mass in order to avoid the showering region. The resulting µ+µ−γ simulation reproduces the results of showering quite well for P_⊥^μμ ≈ 50 GeV, but with a bigger tail for large P_⊥^μμ. We employ the latter simulation for the description of the elastic scattering background and we do not include the µ+µ− showered sample to avoid double-counting. 2 The most peculiar feature of the µ+µ−γ background is the sharp peak at MIM = m_γ = 0. This peak is unphysical because it would be smeared out by the radiation of extra photons, which is not present in our purely tree-level simulation. However, the smearing due to showering is subdominant to the one due to the finite energy spread of the incoming muons, to be discussed in the next section. The tree-level modelling of the MIM distribution is thus adequate, as we also verified explicitly by comparing with showered event samples. We also generated the process with one extra matrix-element photon emission, µ+µ−γγ, and checked that its addition does not affect the distributions after beam effects are included.
The WHIZARD [29,30] package has been employed to validate the distributions of the signal and of some of the backgrounds. An extensive and detailed comparison between MadGraph5_aMC@NLO and WHIZARD predictions will be presented elsewhere [18].

1 The PYTHIA8 settings must be modified to enforce the t Mandelstam variable as the cutoff of the shower. We verified that our results agree with the native PYTHIA8 µ+µ− process results.
2 Including this sample as an additional background is found not to affect our results, because it is subdominant to µ+µ−γ.
III. BEAM EFFECTS
The truth-level MIM distributions in Fig. 2 offer in principle a very good handle to discriminate the signal from the backgrounds. However, the characteristic MIM shapes of the signal and of the backgrounds at around 100 GeV are highly sensitive to any imperfections in the knowledge of the final and of the initial muon momenta, because of cancellations, as we will readily see. It is thus mandatory to include in the simulations a realistic treatment of the muon beams, accounting for the finite beam energy spread (BES) and beam angular spread (BAS) at the interaction point. At a high-energy muon collider, their size is expected to be δE/E = δ_BES = 0.1% and δθ = 0.6 mrad, respectively [2]. The uncertainties on the measurement of the momentum of the final-state muons will be included in the next section.
The BES and the BAS cause a departure from the nominal total momentum of the initial state, (2E_b, 0⃗), which in turn impacts the kinematics of the outgoing muons. Clearly the MIM is still calculated according to Eq. (1), since the initial momentum is not known on an event-by-event basis. The beam smearing can be regarded as an uncertainty on the knowledge of the true initial muon momenta, and in turn of the true ΔP. The energy and longitudinal components of ΔP result from a cancellation between the initial and final muon momenta, which are of order 5 TeV. A relatively small MIM of order 100 GeV is thus strongly affected even by a small relative spread of the muon beams.
We begin with a discussion of the BES, which has two main effects. First, the centre of mass energy of the initial muons, √ s, differs from the nominal collider centre of mass energy 2E b = 10 TeV. Second, the centre of mass frame of the muon collision does not coincide with the detector frame, the two being related by a Lorentz boost along the beam axis. The BES simulation is not implemented in MadGraph, therefore we account for it by proceeding as follows.
We generate truth-level µ+µ− collision events in the centre of mass frame, for different values of the centre of mass energy √s. If the energy of each beam is Gaussian distributed, √s is also approximately Gaussian, with mean 2E_b and standard deviation σ = √2 δ_BES E_b. We sample this distribution at three fixed values of √s given by {2E_b − σ, 2E_b, 2E_b + σ}, obtaining three event datasets that we combine with equal weights of 1/3. Next, we introduce the boost of the centre of mass frame. The boost distribution conditional to √s is approximately Gaussian, with zero mean and with standard deviation σ/(2√s) ≈ δ_BES/(2√2). For each truth-level event we generate the boost by sampling from this distribution and we apply the corresponding Lorentz transformation to the final-state particles. We have checked that this simple method to simulate the BES is in good agreement with the BES implementation available in WHIZARD.
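As an illustration, a minimal Python sketch of this two-step BES procedure could look as follows (numpy is assumed; all names and the small-boost approximation of rapidity by velocity are ours, not part of any MC package):

    import numpy as np

    E_b = 5000.0          # beam energy in GeV (2E_b = 10 TeV)
    delta_BES = 1e-3      # relative beam energy spread

    # Step 1: sample sqrt(s) at the three fixed values 2E_b - sigma, 2E_b,
    # 2E_b + sigma, each taken with weight 1/3, with sigma = sqrt(2)*delta_BES*E_b.
    sigma = np.sqrt(2.0) * delta_BES * E_b
    sqrt_s_values = [2 * E_b - sigma, 2 * E_b, 2 * E_b + sigma]

    def boost_along_z(p, eta):
        # Longitudinal boost with rapidity eta applied to a 4-momentum (E, px, py, pz).
        E, px, py, pz = p
        return np.array([E * np.cosh(eta) + pz * np.sinh(eta), px, py,
                         pz * np.cosh(eta) + E * np.sinh(eta)])

    def apply_BES_boost(particles, sqrt_s, rng):
        # Step 2: boost of the c.o.m. frame, Gaussian with zero mean and standard
        # deviation sigma/(2 sqrt(s)); for such small boosts, rapidity ~ velocity.
        eta = rng.normal(0.0, sigma / (2.0 * sqrt_s))
        return [boost_along_z(p, eta) for p in particles]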
The BES has a minor impact on all distributions, which remain essentially identical to the truth-level ones in Fig. 2, with the exception of the MIM distribution, as expected. This is shown in Fig. 3. For the signal, and the µ + µ − νν or µ + µ − ff backgrounds, the effect is a considerable broadening of the distribution that eliminates the sharp resonant peaks at, respectively, m h and m Z . The effect is less strong on µ + µ − W + W − , as the truth-level distribution is already rather broad. The major BES effect in this case is to populate the MIM < 2 m W region.
In the case of the µ + µ − γ background, instead, the BES effect is dramatic: the distribution is turned from a sharp peak at MIM = 0 into a wide plateau with a maximum at MIM ∼ 150 GeV. This can be understood by exploiting momentum conservation and writing Eq. (1) in the form
MIM^2 = [ (2E_b, 0⃗) + p_γ − p_{μ^−}^{in} − p_{μ^+}^{in} ]^2 ,   (2)
where p_{μ^±}^{in} are the actual momenta of the initial-state muons and p_γ is the momentum of the undetected photon. Clearly the MIM would vanish if the total momentum of the initial muons was equal to (2E_b, 0⃗), because p_γ^2 = 0. However, in the presence of the BES, at the 1σ level we encounter configurations such as
p_{μ^−}^{in} + p_{μ^+}^{in} = E_b (2 + δ_BES, 0⃗_⊥, δ_BES) .   (3)
By substituting in Eq. (2), we obtain the estimate
MIM^2 ∼ (150 GeV)^2 (δ_BES / 10^{-3}) (|p_z^γ| / 0.2E_b) (2E_b / 10 TeV)^2 ,   (4)
where |p_z^γ| ≫ p_⊥^γ was assumed, since the photon is typically emitted in the forward direction. The emission of photons with |p_z^γ| ∼ 0.2E_b ∼ 1 TeV is relatively likely, owing to the collinear enhancement of the photon splitting. In turn, these effects are responsible for the change of shape of the MIM distribution for the µ+µ−γ background, observed in Fig. 3. These considerations also show that relatively large values of the MIM can be attained in the µ+µ−γ background only in events characterised by the emission of a rather energetic photon. In these events, the energy of either the final muon or antimuon is significantly smaller than E_b. Therefore, we will still be able to partially eliminate them by a cut on E_μ^min.
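A quick numerical check of this estimate, written as a sketch with our own conventions (metric signature (+,−,−,−), a photon taken exactly collinear with the beam axis, and the 1σ beam configuration of Eq. (3)):

    import numpy as np

    E_b, delta = 5000.0, 1e-3                     # GeV; relative BES

    def mink_sq(p):
        # Minkowski square with signature (+,-,-,-)
        return p[0]**2 - np.sum(p[1:]**2)

    P_nominal = np.array([2 * E_b, 0.0, 0.0, 0.0])            # assumed initial momentum
    P_initial = E_b * np.array([2 + delta, 0.0, 0.0, delta])  # configuration of Eq. (3)
    p_gamma   = np.array([1000.0, 0.0, 0.0, -1000.0])         # undetected photon, |p_z| = 0.2 E_b

    MIM2 = mink_sq(P_nominal + p_gamma - P_initial)           # Eq. (2)
    print(np.sqrt(abs(MIM2)))   # ~141 GeV: the magnitude agrees with the (150 GeV)^2
                                # estimate of Eq. (4); the sign of MIM^2 depends on the
                                # photon direction relative to the beam fluctuation.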
The starting point for the BAS simulation is again truth-level samples generated in the centre of mass frame of the initial muons and with centre of mass energy 2E_b. We assume that the polar angle of each beam muon is Gaussian distributed around zero, with standard deviation δθ = 0.6 mrad, while the azimuthal angle is uniformly distributed. For each event, we determine the direction of each muon by throwing the angles from these distributions. We take the two muons to have the same energy, which is computed by imposing that the centre of mass energy is equal to 2E_b, and we construct the 4-momenta of the initial muons. Then we consider the Lorentz transform that brings the initial muons in their centre of mass frame, and apply its inverse to all the particles in the event.
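In code, the construction of the smeared initial-state muons could be sketched as follows (again with numpy; the final inverse Lorentz transformation of the event is only indicated in a comment):

    import numpy as np

    def sample_beam_directions(rng, dtheta=6e-4):
        # Polar angle Gaussian around zero with std dtheta, azimuth uniform,
        # for the two beams with nominal directions +z and -z.
        dirs = []
        for sign in (+1, -1):
            th = rng.normal(0.0, dtheta)
            ph = rng.uniform(0.0, 2 * np.pi)
            dirs.append(np.array([np.sin(th) * np.cos(ph),
                                  np.sin(th) * np.sin(ph),
                                  sign * np.cos(th)]))
        return dirs

    def initial_muon_momenta(dirs, E_b=5000.0):
        # Equal beam energies E fixed by imposing sqrt(s) = 2 E_b; for massless
        # beams, s = 2 E^2 (1 - cos(opening angle between the two directions)).
        # The Lorentz transformation bringing these muons to their c.o.m. frame
        # is then inverted and applied to all particles in the event (omitted).
        n1, n2 = dirs
        E = 2 * E_b / np.sqrt(2.0 * (1.0 - np.dot(n1, n2)))
        return [np.append(E, E * n1), np.append(E, E * n2)]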
The effect of the BAS is found to be minor for both signal and backgrounds on all kinematic variables. The largest effect, shown in Fig. 4, is a smearing of the MIM for the µ + µ − γ background to nonzero values, but this is anyway much smaller than the impact of BES shown in Fig. 3. Therefore, the BAS is neglected in the following.
IV. RESULTS
The target luminosity of the 10 TeV muon collider [2], of 10 ab^-1, will produce 10 million Higgs bosons in total, mostly in the charged vector boson fusion process. Around one million of them (620'000, considering the cross-section of 62 fb after the P_⊥^μμ > 50 GeV cut) are produced in the Z boson fusion process. With the SM branching ratio of 1.2 · 10^-3, about 1'000 invisible decays will be available, allowing in principle not only to observe the SM invisible decay, but also to measure BR_inv with a few percent relative accuracy. A BSM contribution BR_inv^BSM could thus be probed at the 10^-4 level.
Attaining a 10^-4 level sensitivity would require strongly discriminant kinematical features, enabling the design of analysis cuts that eliminate the background while preserving a large fraction of the signal. This is indeed possible with the truth-level distributions in Fig. 2, very discriminating variables being the MIM, E_μ^min and M_μμ. Thanks to the excellent background rejection, the sensitivity to BR_inv^BSM would in fact be around 10^-4 for truth-level events, as Fig. 5 shows. A forward muon detector coverage up to η_μ^max = 6 would be sufficient to achieve this sensitivity.
Interestingly enough, nearly perfect selection cuts can be designed also in the presence of a realistic spread of the beam energy. The BES reduces the discriminating power of the MIM distribution, as shown in Fig. 3, but stronger lower cuts on E_μ^min and M_μμ can still eliminate the background with limited signal rejection. The BR_inv^BSM sensitivity (see again Fig. 5) thus remains at around 10^-4 even in the presence of beam effects. We conclude that if the muon collider beam energy spread is at the level of 10^-3 as foreseen, its effect will not limit the sensitivity to the Higgs invisible decay significantly. We saw in the previous section that the beam angular spread does not play an important role.
The finite resolution in the measurement of the energy and of the angle of the final muons is instead expected to play a major role in limiting the sensitivity. We include these effects in our simulations by proceeding as follows.
We assume a constant relative uncertainty δ res on the muon energy measurement, which we simulate by throwing the measured energy of each muon from a Gaussian distribution centred around the true energy E µ and with standard deviation δ res · E µ . The actual response function of the forward muon detector will most likely not be centred at E µ , because the muons will lose energy while crossing the absorbers, nor will it be symmetric around the centre. However, preliminary results obtained with a more realistic response function confirm the adequacy of employing a Gaussian smearing. 3 The smearing is applied only to muons outside the acceptance of the main detector, |η µ | > 2.44. The muons inside the acceptance of the main detector play a very limited or no role in the analysis, and moreover we expect that their energy will be measured more accurately.
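A minimal sketch of this smearing (function and parameter names are ours, not from the analysis code):

    import numpy as np

    def smear_muon_energy(E_mu, eta_mu, rng, delta_res=0.01, eta_acc=2.44):
        # Gaussian smearing with relative resolution delta_res, applied only to
        # muons outside the main-detector acceptance |eta| > eta_acc; central
        # muons are left unsmeared in this simplified treatment.
        if abs(eta_mu) <= eta_acc:
            return E_mu
        return rng.normal(E_mu, delta_res * E_mu)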
In order to simulate the uncertainty in the measurement of the muon direction we generate a polar angle θ, thrown from a Gaussian centred at zero and with standard deviation ∆θ, and an azimuthal angle φ with uniform distribution. The measured muon direction is chosen to form an angle of θ with the true direction n_tr. The orientation of the measured direction in the plane transverse to n_tr is determined by the azimuthal angle φ.
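This tilt of the true direction can be implemented, for instance, as follows (a sketch; the construction of the transverse basis is one of several equivalent choices):

    import numpy as np

    def smear_direction(n_true, rng, dtheta=1e-3):
        # Tilt the unit vector n_true by a Gaussian polar angle (std dtheta), with
        # a uniform azimuth in the plane transverse to n_true.
        th = rng.normal(0.0, dtheta)
        ph = rng.uniform(0.0, 2 * np.pi)
        # Orthonormal basis (e1, e2) of the plane transverse to n_true.
        a = np.array([1.0, 0.0, 0.0]) if abs(n_true[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
        e1 = np.cross(n_true, a)
        e1 /= np.linalg.norm(e1)
        e2 = np.cross(n_true, e1)
        return np.cos(th) * n_true + np.sin(th) * (np.cos(ph) * e1 + np.sin(ph) * e2)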
The most effective distributions for signal/background discrimination, namely MIM, E_μ^min and M_μμ, are significantly affected by the energy measurement uncertainty, as Fig. 6 shows. All other distributions remain essentially identical to the truth-level ones in Fig. 2.

3 We thank Daniele Calzolari and Federico Meloni for sharing their initial estimate of the response function with us.
The effect on the MIM is considerable even with energy resolution as small as δ res = 1%. However, although broadened, the MIM distribution for the signal maintains a peak at around 200 GeV that is still useful to reject the background. Furthermore, the E min µ and M µµ distributions are almost unaffected by the δ res = 1% smearing and retain a good discriminating power. Effective analysis cuts can thus be designed and the sensitivity degradation due to the inclusion of the energy uncertainty is marginal as we see in Fig. 5 by comparing the red and the green curves. An optimised cut-flow for an angular acceptance of η max µ = 6 is reported in Table I. After the cuts, 130 invisible decays are expected for SM branching ratio, with a background of around 600. The SM decay could thus be easily observed and, in the case of agreement with the SM, BSM effects could be bounded to BR BSM inv < 4·10 −4 at 95% CL. At the exclusion, the ratio of signal to background is relatively large, S/B ≈ 6%. Background estimates should be possible with better or comparable accuracy. Systematic uncertainties are thus not expected to reduce the sensitivity strongly.
An energy resolution δ res = 1% is most likely unrealistic, and larger uncertainties will deteriorate the reach significantly, limiting the muon collider sensitivity above the 10 −4 level. This is illustrated by our results for δ res = 10%. With this uncertainty, the MIM is no longer a useful discriminant. The E min µ distributions of both signal and backgrounds extend beyond the truth-level endpoint E b (see Fig. 6), significantly reducing the rejection power of this variable. Similar considerations hold for M µµ . The optimal sensitivity is obtained through softer cuts than in the δ res = 1% scenario, as displayed by the cut-flow in Table II. The SM branching ratio produces 100 events in the selected region, with more than 2000 background events. It should be possible to observe the SM Higgs to invisible decay, but only at the 95% confidence level. A 5σ "discovery" of the SM invisible decay could be impossible. The sensitivity to new physics is BR BSM inv < 10 −3 as in Fig. 5. At the exclusion, S/B ≈ 3%. It should be noted that our sensitivity projections based on cut-and-count could be improved by a more sophisticated statistical analysis. On the other hand, these possible improvements are not expected to modify the picture radically.
We finally turn to the investigation of the effect of uncertainties in the muon direction measurement. The sensitivity is shown in Fig. 7 as a function of the angular uncertainty ∆θ assuming a forward detector coverage up to η max µ = 6. For 1% energy resolution, an angular resolution ∆θ < 2 mrad would be needed not to affect the sensitivity. If instead δ res = 10%, the sensitivity is inferior and it does not get degraded significantly up to ∆θ = 5 mrad. Larger ∆θ, above around 10 mrad, would most likely prevent the observation of the SM decay regardless of the energy measurement accuracy.
TABLE I. Cut-flow for 2E_b = 10 TeV and a forward detector coverage η_μ^max = 6. An energy smearing of 1% is applied to muons with |η_μ| > 2.44. The baseline cuts are listed in Section II.

[number of events, 10 ab^-1] | BSM signal | µ+µ−νν | µ+µ−γ | µ+µ−ff | µ+µ−W+W− | µ+µ−(h→inv)_SM
baseline & P_⊥^μμ > 50 GeV | 6.2·10^5 · BR_inv^BSM | 1.1·10^6 | 1.3·10^7 | 1.3·10^6 | 6.2·10^5 | 7.4·10^2
MIM < 0.8 TeV | 5.6·10^5 · BR_inv^BSM | 6.3·10^5 | 1.0·10^7 | 9.4·10^5 | 1.8·10^5 | 6.7·10^2
|Δη_μμ| > 8 | 4.8·10^5 · BR_inv^BSM | 2.3·10^5 | 5.8·10^6 | 6.3·10^5 | 1.3·10^5 | 5.8·10^2
|Δφ_μμ − π| > 0.8 | 3.9·10^5 · BR_inv^BSM | 1.7·10^5 | 2.2·10^6 | 4.9·10^5 | 8.5·10^4 | 4.6·10^2
P_⊥^μμ > 80 GeV | 3.4·10^5 · BR_inv^BSM | 1.1·10^5 | 8.9·10^5 | 2.5·10^5 | 5.9·10^4 | 4.1·10^2
M_μμ > 9.5 TeV | 1.6·10^5 · BR_inv^BSM | 1.4·10^3 | 1.5·10^3 | 3.1·10^2 | 28 | 1.9·10^2
E_μ^min > 4.7 TeV | 1.1·10^5 · BR_inv^BSM | 6.2·10^2 | (< 65) | (< 31) | (< 7.0) | 1.3·10^2

TABLE II. Cut-flow for 2E_b = 10 TeV and a forward detector coverage η_μ^max = 6. An energy smearing of 10% is applied to muons with |η_μ| > 2.44. The baseline cuts are listed in Section II.

[number of events, 10 ab^-1] | BSM signal | µ+µ−νν | µ+µ−γ | µ+µ−ff | µ+µ−W+W− | µ+µ−(h→inv)_SM
baseline & P_⊥^μμ > 50 GeV | 6.2·10^5 · BR_inv^BSM | 1.1·10^6 | 1.5·10^7 | 1.3·10^6 | 6.2·10^5 | 7.4·10^2
|Δη_μμ| > 6.5 | 6.1·10^5 · BR_inv^BSM | 7.6·10^5 | 1.3·10^7 | 1.1·10^6 | 5.5·10^5 | 7.3·10^2
|Δφ_μμ − π| > 1 | 4.4·10^5 · BR_inv^BSM | 3.9·10^5 | 2.9·10^6 | 6.4·10^5 | 3.0·10^5 | 5.3·10^2
P_⊥^μμ > 180 GeV | 1.9·10^5 · BR_inv^BSM | 1.1·10^5 | 2.7·10^5 | 8.2·10^4 | 7.0·10^4 | 2.2·10^2
M_μμ > 8.75 TeV | 1.2·10^5 · BR_inv^BSM | 4.4·10^3 | 7.6·10^3 | 1.9·10^3 | 1.6·10^3 | 1.4·10^2
E_μ^min > 4.3 TeV | 8.1·10^4 · BR_inv^BSM | 1.8·10^3 | 2.6·10^2 | 1.6·10^2 | 2.6·10^2 | 97
V. CONCLUSIONS
The installation of a forward muon detector would further improve the perspectives to detect and to investigate vector boson fusion or scattering processes at a muon collider. An extensive survey of specific opportunities offered by the forward muon detector will be presented elsewhere [18]. In this paper we studied the Higgs boson decay to invisible particles.
Our main results are displayed in Figs. 5 and 7. A sensitivity to BR BSM inv as small as 10 −4 , competitive with the FCC-hh projection, is ideally possible assuming perfect discrimination of the µ + µ − (h → inv) process from the backgrounds. We have shown that nearly perfect discrimination is in principle possible, and the ideal sensitivity is attainable, by fully realistic simulations of the underlying physical processes that also account for the energy and angular spread of the colliding muon beams.
However, a strong sensitivity degradation is expected due to the finite resolution in the measurement of the energy and of the angle of the muons in the forward detector. An energy resolution as small as 1% and an angular uncertainty below 2 mrad would be needed to maintain a sensitivity at the 10^-4 level. With more realistic resolutions, such as for instance 10% on the energy, the sensitivity drops to the 10^-3 level and even the possibility of observing the SM Higgs to invisible decay cannot be taken for granted. Our results motivate design studies of the forward muon detector targeting the best possible accuracy on the muon momentum measurement. There could be margins to improve the sensitivity by multivariate shape analyses, which should also be investigated.

Our findings rely on assumptions on the performances of the main detector. In particular, we assumed coverage up to θ = 10° from the beam axis, and a 20 GeV threshold for the veto of any object within the angular acceptance. The validity of these assumptions cannot be verified at the current stage, because the design of the 10 TeV muon collider detector has not been completed yet. We show in Appendix A (see Fig. 8) that our findings depend weakly on them.
A first stage of the muon collider project could foresee a 3 TeV centre of mass energy collider with 2 ab^-1 integrated luminosity [2]. The projected sensitivity of a 3 TeV muon collider is studied in Appendix B. We find, in Fig. 9, that observing the SM Higgs to invisible decay will most likely be impossible for δ_res = 10%. The reduced prospects are due to the smaller number of produced Higgs bosons. On the other hand, the 3 TeV muon collider will still improve over the HL-LHC sensitivity. An angular coverage of the forward muon detector up to 4 or 5 pseudo-rapidity would be sufficient, while coverage up to 5 or 6 is needed for the 10 TeV collider.
FIG. 1. Schematics of an effective Z boson collision producing a generic final state X. Z-fusion Higgs boson production, X = h, is the main focus of the present paper.
FIG. 2. Key kinematic distributions at truth level. The signal cross section corresponds to BR_inv = 1. In the top left panel, only baseline cuts are applied (i.e., |η_μ| < 7, η_μ+ · η_μ− < 0, ΔR_μμ > 0.4 and E_μ± > 500 GeV, plus the veto with p_⊥ > 20 GeV and |η| < 2.44). In all the other panels, dashed lines correspond to baseline cuts, while solid lines also include P_⊥^μμ > 50 GeV.
FIG. 3. MIM distributions after the inclusion of beam energy spread (solid) compared to truth level (dashed). The cut P_⊥^μμ > 50 GeV has been applied.
FIG. 4. MIM distributions after the inclusion of beam angular spread (solid) compared to truth level (dashed). The cut P_⊥^μμ > 50 GeV has been applied.
FIG. 5. 95% CL limit on BR_inv^BSM at a 10 TeV muon collider with 10 ab^-1, as a function of the angular acceptance 2.44 < |η_μ| < η_μ^max of a forward muon detector.
FIG. 6. Key kinematic distributions after BES and smearing of the forward muon energies (solid), compared to those including only BES (dashed). The left (right) panels correspond to a forward detector resolution δ_res = 1 (10)%. The signal cross section corresponds to BR_inv = 1. In all panels, the baseline cuts and P_⊥^μμ > 50 GeV are applied.
FIG. 7. 95% CL limit on BR_inv^BSM at a 10 TeV muon collider with 10 ab^-1, as a function of the uncertainty on the muon direction measurement, for δ_BES = 0.1% alone and in combination with δ_res = 1% or δ_res = 10%. An angular acceptance η_μ^max = 6 of the forward muon detector is assumed.
FIG. 9. 95% CL limit on BR_inv^BSM at a 3 TeV muon collider with 2 ab^-1, as a function of the angular acceptance 2.44 < |η_μ| < η_μ^max of a forward muon detector. The main detector is assumed to cover θ > 10°.
Appendix A: Impact of angular coverage and veto threshold of the main detector

Here we study the sensitivity of our results to variations of the angular coverage and of the veto p_⊥ threshold of the main detector. Specifically, we consider the possibility that the size of the absorbers is reduced with respect to the current design, thus freeing space to extend the angular coverage up to θ = 5° (|η| ≈ 3.1) instead of the 10° assumed in the main text. We also investigate the effect of increasing the veto p_⊥ threshold to 50 GeV from the 20 GeV considered so far.

The major effect of extending the angular coverage of the main detector is a better rejection of background processes where the µ+µ− pair is produced together with an additional object at moderate pseudo-rapidity. These include the µ+µ−W+W− and µ+µ−ff processes, for which we find significant suppression. The µ+µ−γ background is mildly reduced as well. Even though the emitted photons are preferentially collinear with the initial or final state muons and thus very forward, the P_⊥^μμ cut applied in our analysis eventually selects events where the photon has larger p_⊥ and is therefore more central. Quantitatively, assuming a forward detector coverage of η_μ^max = 6 and after the combination of baseline cuts and P_⊥^μμ > 50 GeV, the reduction amounts to 65%, 40% and 13% for the µ+µ−W+W−, µ+µ−ff and µ+µ−γ backgrounds, respectively. The signal is not affected. This results in a mild improvement of the sensitivity to BR_inv^BSM < 8·10^-4 for δ_res = 10% and η_μ^max = 6, as shown by the dashed line in Fig. 8. This weak dependence on the angular acceptance of the main detector can be understood by noticing that the dominant background after all cuts is µ+µ−νν, which is not affected by the veto.

If the veto p_⊥ threshold is larger than the 20 GeV assumed in the main text, the rejection of background events with soft particles in the central region becomes less effective. We find that increasing the threshold to p_⊥ > 50 GeV (while keeping the angular coverage of the main detector fixed to θ > 10°) has a very mild effect. The µ+µ−W+W− and µ+µ−ff backgrounds are larger by 25% after baseline cuts and P_⊥^μμ > 50 GeV, whereas all remaining backgrounds, including the dominant µ+µ−νν process, and the signal are approximately unchanged. This small increase in the background rate results in a modest 10% degradation of the sensitivity to BR_inv^BSM, displayed by the dotted line in Fig. 8. We conclude that the design of the main detector has a far milder impact on the sensitivity to BR_inv^BSM compared to the energy and angular resolution of the forward muon detector.

Appendix B: The 3 TeV muon collider

In this Appendix we study the sensitivity of a muon collider at a centre of mass energy of 3 TeV and integrated luminosity of 2 ab^-1 to invisible Higgs decays using forward muons. The generation of MC data samples and the analysis are analogous to the 10 TeV study discussed in the main text, including the simulation of the finite energy resolution of the forward detector. However, we do not consider the uncertainties in the measurement of the muon direction. At 3 TeV the muons produced by the signal process have smaller pseudo-rapidity, with a distribution peaking at θ ≈ 61 mrad (|η| ≈ 3.5). This implies that muons inside the acceptance of the main detector, taken to be θ > 10° here, play a non-negligible role. Therefore, when a finite energy resolution δ_res is assumed for the forward detector we also apply a 1% energy smearing to muons with |η| < 2.44.

The expected reach is shown in Fig. 9, demonstrating that a smaller angular coverage compared to the 10 TeV case, up to η_μ^max ≈ 4 or 5, would be sufficient to obtain optimal sensitivity. The limits on BR_inv^BSM are weaker by a factor of a few compared to a 10 TeV collider owing to the much lower number of produced Higgs bosons. For instance, assuming angular coverage η_μ^max = 5 and energy resolution δ_res = 10% we find BR_inv^BSM < 3·10^-3. These results suggest it is unlikely that the SM Higgs invisible decay can be observed at a 3 TeV muon collider, but a decisive sensitivity improvement relative to HL-LHC is expected.
J. P. Delahaye, M. Diemoz, K. Long, B. Mansoulié, N. Pastrone, L. Rivkin, D. Schulte, A. Skrinsky, and A. Wulzer, arXiv:1901.06150 [physics.acc-ph].
C. Accettura et al., arXiv:2303.08533 [physics.acc-ph].
T. Han, Y. Ma, and K. Xie, Phys. Rev. D 103, L031301 (2021), arXiv:2007.14300 [hep-ph].
D. Buttazzo, D. Redigolo, F. Sala, and A. Tesi, JHEP 11, 144 (2018), arXiv:1807.04743 [hep-ph].
M. Ruhdorfer, E. Salvioni, and A. Weiler, SciPost Phys. 8, 027 (2020), arXiv:1910.04170 [hep-ph].
W. Liu and K.-P. Xie, JHEP 04, 015 (2021), arXiv:2101.10469 [hep-ph].
A. Costantini, F. De Lillo, F. Maltoni, L. Mantani, O. Mattelaer, R. Ruiz, and X. Zhao, JHEP 09, 080 (2020), arXiv:2005.10289 [hep-ph].
T. Han, D. Liu, I. Low, and X. Wang, Phys. Rev. D 103, 013002 (2021), arXiv:2008.12204 [hep-ph].
D. Buttazzo, R. Franceschini, and A. Wulzer, JHEP 05, 219 (2021), arXiv:2012.11555 [hep-ph].
M. Forslund and P. Meade, JHEP 08, 185 (2022), arXiv:2203.09425 [hep-ph].
G. L. Kane, W. W. Repko, and W. B. Rolnick, Phys. Lett. B 148, 367 (1984).
S. Dawson, Nucl. Phys. B 249, 42 (1985).
M. S. Chanowitz and M. K. Gaillard, Nucl. Phys. B 261, 379 (1985).
Z. Kunszt and D. E. Soper, Nucl. Phys. B 296, 253 (1988).
P. Borel, R. Franceschini, R. Rattazzi, and A. Wulzer, JHEP 06, 122 (2012), arXiv:1202.1904 [hep-ph].
J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens, and M. Selvaggi (DELPHES), JHEP 02, 057 (2014), arXiv:1307.6346 [hep-ex].
"Muon collider detector Delphes card," https://github.com/delphes/delphes/blob/master/cards/delphes_card_MuonColliderDet.tcl.
M. Ruhdorfer, E. Salvioni, and A. Wulzer, "The physics potential of forward muons at the muon collider," to appear.
J. de Blas et al., JHEP 01, 139 (2020), arXiv:1905.03764 [hep-ph].
Z. Chacko, Y. Cui, and S. Hong, Phys. Lett. B 732, 75 (2014), arXiv:1311.3306 [hep-ph].
L. Borgonovi et al. (FCC), CERN-ACC-2018-0045.
D. Curtin et al., Phys. Rev. D 90, 075004 (2014), arXiv:1312.4992 [hep-ph].
M. Cepeda, S. Gori, V. Martinez Outschoorn, and J. Shelton, Ann. Rev. Nucl. and Part. Sc. 72, 119 (2022), arXiv:2111.12751 [hep-ph].
J. Alwall et al., JHEP 07, 079 (2014), arXiv:1405.0301 [hep-ph].
C. Bierlich et al., SciPost Phys. Codebases 8 (2022), arXiv:2203.11601 [hep-ph].
S. Frixione, O. Mattelaer, M. Zaro, and X. Zhao, arXiv:2108.10261 [hep-ph].
M. Moretti, T. Ohl, and J. Reuter, arXiv:hep-ph/0102195.
W. Kilian, T. Ohl, and J. Reuter, Eur. Phys. J. C 71, 1742 (2011), arXiv:0708.4233 [hep-ph].
ON THE FUNCTIONAL GRAPH OF THE POWER MAP OVER FINITE GROUPS
6 Sep 2022
Claudio Qureshi
Lucas Reis
In this paper we study the description of the functional graphs associated with the power maps over finite groups. We present a structural result which describes the isomorphism class of these graphs for abelian groups and also for flower groups, which is a special class of non abelian groups introduced in this paper. Unlike the abelian case where all the trees associated with periodic points are isomorphic, in the case of flower groups we prove that several different classes of trees can occur. The class of central trees (i.e. associated with periodic points that are in the center of the group) are in general nonelementary and a recursive description is given in this work. Flower groups include many non abelian groups such as dihedral and generalized quaternion groups, and the projective general linear group of order two over a finite field. In particular, we provide improvements on past works regarding the description of the dynamics of the power map over these groups.

2010 Mathematics Subject Classification. Primary 20D60, Secondary 05C20.
Introduction
Given a pair (f, S) of a finite set S and a map f : S → S, we can associate with it a graph G(f/S), called the functional graph of f over S. This is the directed graph with vertex set V = {s | s ∈ S} and directed edges {s → f(s) | s ∈ S}. The functional graph encodes the dynamics of f over S. For instance, the orbit {s, f(s), f^{(2)}(s), . . .} of an element s ∈ S is described by a path in G(f/S). Moreover, an element s ∈ S is f-periodic if and only if it belongs to a cycle of G(f/S). One of the motivations for studying finite dynamical systems is their applications, such as integer factorization methods in cryptography [13,23] and pseudo-random number generators [3]. The description of G(f/S) has been considered for many algebraic structures S and well behaved maps f. In many cases, the functional graph turns out to have many remarkable properties, such as regularity of the indegrees and symmetries on its cycles, that allow us to obtain a partial or complete description of its structure. See [6,10,11,12,14,15,16,19,20,22] for a rich source of results regarding these issues and [9] for a survey including some applications of dynamical systems over finite fields.
When S = G is a finite group, it is natural to consider the power map ϕ_t : G → G with ϕ_t(g) = g^t (or tg if G is written additively), where t is an integer. When G is cyclic, the graph G(ϕ_t/G) is completely described in [14] and the trees attached to the periodic points are described by a non-increasing sequence of positive integers. Trees constructed in this way are called elementary and they turn out to be useful to describe the non-periodic part of the dynamics of many interesting classes of maps, such as Chebyshev polynomials over finite fields, Redei functions and some maps related to certain endomorphisms of elliptic curves over finite fields [17]. In [5] the authors describe the graph G(ϕ_t/G) for special cases of abelian groups. In Section 2.3 we obtain a complete description of this graph for general abelian groups, mainly using the explicit description given in [14] for the cyclic case and using some results of [17]. The trees attached to the periodic points in this case (which are all isomorphic) are expressed as a tensor product of elementary trees. When the group is non abelian, the power map ϕ_t is no longer a homomorphism, the trees attached to the periodic points are no longer all isomorphic, and in general it is a difficult problem to describe them. The trees attached to periodic points in the center of the group are isomorphic, as we show in Section 2.2; these trees are called central trees and are, in general, non-elementary. Regarding the problem of describing the graph G(ϕ_t/G) for non abelian groups G, only results for special families are known. More specifically, in [1], [2] and [4] the cases where G is a dihedral group, a generalized quaternion group and some semidirect products of cyclic groups are explored, respectively. However, the digraphs are not explicitly determined there. Instead, only partial descriptions of such graphs are given, like the distribution of indegrees, cycle lengths and the number of cycles. Further results of this kind are given in [8] for special classes of finite groups.
In this paper we introduce the class of flower groups, which contains the dihedral and generalized quaternion groups, as well as some semidirect products of cyclic groups. Our main result is the description of the functional graph structure of the power map over flower groups. Such description is given in two parts: in Theorem 3.9 we provide a complete description of the cyclic structure and also of the non central trees, and in Theorem 3.12 we provide some properties of central trees which allow us to obtain an explicit description of such trees for very special cases of interest.
Our paper is organized as follows. In Section 2 we introduce the basic notation and operations on graphs and review the structure theorem for the functional graph of the power map on cyclic groups. Then, using results from [17], we show how to extend this structure theorem to finite abelian groups. In the last part of this section we provide some general results on the functional graph of the power map over arbitrary finite groups that are further used to prove our main results. In Section 3 we introduce the notion of flower groups, obtain some properties of these groups and also prove our main results. In Section 4 we apply our main results to the explicit description of functional graphs associated with some non abelian groups.
Preliminaries
This section provides some background material and minor results. We start by fixing some basic notation and reviewing the structural description of the functional graph of the power map on cyclic groups. Then we show how to extend this structural description for abelian groups. At the end of this section we provide some general results on the functional graph of the power map over arbitrary finite groups.
Throughout the paper we use the letters G, H to denote finite groups and C to denote cyclic groups. For a positive integer m, C_m denotes the cyclic group of order m. We use multiplicative notation for the operation of these groups unless otherwise specified. The ring of integers modulo d is denoted by Z_d and its multiplicative group of units is denoted by Z*_d. The Euler totient function is denoted by ϕ(d) and the (multiplicative) order of t ∈ Z*_d is denoted by o_d(t). For a group G, the centralizer of S ⊆ G is denoted by C_G(S) and the center of G is denoted by Z(G) = C_G(G).
2.1. Basics on functional graphs. We use the same terminology as in [14] and [17]. It is a well known fact that the connected components of functional graphs are composed of a cycle and each vertex of this cycle is the root of a tree (the direction of the edges is from leaves to the root). Several interesting classes of functions over finite fields or finite structures studied in the literature present certain regularity that is reflected in some symmetry conditions on their functional graphs. We denote by Cyc(m, T) a directed graph composed of an m-cycle (i.e. a cycle of length m) where every node in this cycle is the root of a tree isomorphic to T. When every connected component of a functional graph G(f/S) is of this form we say that this functional graph is regular; in this case there are integers m_1, . . . , m_k and rooted trees T_1, . . . , T_k such that

G(f/S) = ⊕_{i=1}^{k} Cyc(m_i, T_i),

where the circled plus symbol denotes a disjoint union of graphs. When the rooted tree T has a unique vertex we write T = •. The m-cycle Cyc(m, •) is denoted by Cyc(m) and the rooted tree T with a loop in the root, Cyc(1, T), is denoted by {T}. The notation G = k × H means that G = ⊕_{i=1}^{k} H_i with each H_i isomorphic to the graph H. A forest is a disjoint union of rooted trees. Given a forest G = ⊕_{i=1}^{k} T_i, we denote by ⟨G⟩ the rooted tree with k children where these children are roots of trees isomorphic to T_1, . . . , T_k. We consider the empty graph ∅ with the property that ⟨∅⟩ = •. Given rooted trees T_1 = ⟨G_1⟩, . . . , T_k = ⟨G_k⟩ where each G_i is a forest, we define ∑_{i=1}^{k} T_i := ⟨⊕_{i=1}^{k} G_i⟩ (i.e. the sum of rooted trees is a new rooted tree which is obtained by identifying all the roots). For a tree T = ⟨G⟩ and k ∈ Z^+ we denote k · T = ⟨k × G⟩ (note that k · T ≠ k × T if k > 1).
The vertices belonging to the cycles of the functional graph G(f/S) correspond to the periodic points of f. Given a point x_0 ∈ S, the least natural number δ = δ(x_0) such that f^{(δ)}(x_0) is a periodic point is called the preperiod of x_0 (the periodic points correspond to points x_0 with δ(x_0) = 0). Regarding the functional graph G(f/S), the number δ(x_0) equals the depth of x_0 in the rooted tree to which it belongs (i.e. the distance of x_0 to the root).
There is a simple way to associate with each non-increasing finite sequence of positive integers v = (ν_1, ν_2, . . . , ν_d) (i.e. ν_1 ≥ ν_2 ≥ · · · ≥ ν_d ≥ 1) a rooted tree T_v defined recursively as follows:

(1)  T_v^0 = •,
     G_v^k = ν_k × T_v^{k−1} ⊕ ⊕_{i=1}^{k−1} (ν_i − ν_{i+1}) × T_v^{i−1} and T_v^k = ⟨G_v^k⟩ for 1 ≤ k < d,
     G_v = (ν_d − 1) × T_v^{d−1} ⊕ ⊕_{i=1}^{d−1} (ν_i − ν_{i+1}) × T_v^{i−1} and T_v = ⟨G_v⟩.
Trees associated with non-increasing sequences as above are called elementary trees; see [17] for more details on elementary trees. Figure 1 shows the inductive process to construct T_v for a 4-term sequence.

Figure 1. This figure (taken from [14]) illustrates the inductive definition of T_v for a 4-term sequence v = (ν_1, ν_2, ν_3, ν_4). A node labelled by a rooted tree T indicates that it is the root of a tree isomorphic to T.
[Figure 1: diagrams of the trees T_v^1, T_v^2, T_v^3 and T_v.]
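The recursion (1) is straightforward to implement. The following Python sketch (our own encoding of rooted trees as nested lists of children, not code from the paper) builds T_v and checks the vertex count and depth stated in Proposition 2.2 below:

    def elementary_tree(v):
        # Rooted tree T_v of a non-increasing sequence v = (nu_1, ..., nu_d),
        # encoded as nested lists of children; a leaf is the empty list [].
        d = len(v)
        T = [[]]                              # T[k] holds T^k_v; T[0] is a single vertex
        for k in range(1, d):
            kids = v[k - 1] * [T[k - 1]]      # nu_k copies of T^{k-1}_v
            for i in range(1, k):
                kids += (v[i - 1] - v[i]) * [T[i - 1]]
            T.append(kids)
        root = (v[d - 1] - 1) * [T[d - 1]]    # (nu_d - 1) copies of T^{d-1}_v
        for i in range(1, d):
            root += (v[i - 1] - v[i]) * [T[i - 1]]
        return root

    def num_vertices(t):
        return 1 + sum(num_vertices(c) for c in t)

    def depth(t):
        return 1 + max(map(depth, t)) if t else 0

    t = elementary_tree([4, 2])       # the tree T_(4,2) used in Example 2.8 below
    print(num_vertices(t), depth(t))  # -> 8 2, i.e. nu = 4*2 vertices and depth D = 2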
Elementary trees appear in a wide range of contexts when we describe the dynamics of different classes of maps; see for example [6,11,14,15,16,17,21]. They will also play a key role in our description of the functional graph associated with the power map over flower groups. An important special case of elementary trees T_v is when the sequence v is a multiplicative chain (i.e. when each term of the sequence divides the previous term); these are the trees that appear in the functional graph of power maps on abelian groups.
A rooted tree T is homogeneous if it can be written as T = ⟨⊕_{i=0}^{d−1} n_i × T^i⟩, where T^i is a rooted tree with depth i for 0 ≤ i < d (i.e. all its subtrees with the same depth are isomorphic). In particular, elementary trees are homogeneous, since the subtree T_v^i of T_v associated with the d-term sequence v has depth i, for 0 ≤ i < d. The following operation will be useful to describe trees in the functional graph associated with the power map on some finite non abelian groups.
Definition 2.1. Let T = ⟨⊕_{i=0}^{d−1} n_i × T^i⟩ be a homogeneous rooted tree, where each T^i has depth i for 0 ≤ i < d, and let S be a rooted tree. For 0 ≤ j < d, the j-sum of T and S is given by

T +_j S = ⟨ ⊕_{i≠j} n_i × T^i ⊕ (n_j − 1) × T^j ⊕ (T^j + S) ⟩.

We note that T +_j S is obtained by replacing one of the depth-j subtrees T^j of T with T^j + S. The resulting rooted tree will not be homogeneous in general.
2.2. The power map on finite cyclic groups. Let C_n be the (multiplicative) cyclic group of order n and ϕ_t : C_n → C_n be the power map x → x^t. The trees attached to the cyclic points in G(ϕ_t/C_n) are described by the iterated gcd of n relative to t (also called ν-series in [14]), denoted by gcd_t(n), and defined as follows: gcd_t(n) = (1) if gcd(t, n) = 1; otherwise gcd_t(n) = (ν_1, . . . , ν_D), where

ν_1 = gcd(t, n),   ν_{i+1} = gcd(t, n/(ν_1 · · · ν_i)) for i ≥ 1,
and D is the least positive integer such that ν D+1 = 1. It is easy to see that if ν := ν 1 · · · ν D then ν | n, gcd t (n) = gcd t (ν) and ω := n/ν is the greatest divisor of n that is relatively prime with t (see [14] for more details). The following proposition gives an explicit description for the functional graph G(ϕ t /C n ).
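A direct implementation of this definition reads (a minimal Python sketch, with names of our own choosing):

    from math import gcd

    def iterated_gcd(t, n):
        # The sequence gcd_t(n) = (nu_1, ..., nu_D); returns (1,) when gcd(t, n) = 1.
        if gcd(t, n) == 1:
            return (1,)
        seq, m = [], n
        while True:
            nu = gcd(t, m)
            if nu == 1:
                return tuple(seq)
            seq.append(nu)
            m //= nu

    print(iterated_gcd(14, 4), iterated_gcd(14, 2))   # -> (2, 2) (2,)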
Proposition 2.2 ([16], Proposition 2.1). Let n = νω, where ω is the greatest divisor of n that is relatively prime with t. Let gcd_t(ν) = (ν_1, ν_2, . . . , ν_D) be the iterated gcd of ν relative to t and T_{gcd_t(ν)} be the elementary tree associated with this sequence. Then C_n has exactly ω elements that are ϕ_t-periodic and the following isomorphism formula holds:

(2) G(ϕ_t/C_n) = ⊕_{d|ω} (ϕ(d)/o_d(t)) × Cyc(o_d(t), T_{gcd_t(ν)}).

Moreover, the tree T_{gcd_t(ν)} has ν vertices and depth D.
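Proposition 2.2 can be checked by brute force on small cyclic groups, here modelled additively as x → t·x on Z_n (a sketch with hypothetical helper names, for illustration only):

    from math import gcd

    def cycle_structure(t, n):
        # {cycle length: number of cycles} of x -> t*x on Z_n, found by brute force.
        f = lambda x: (t * x) % n
        periodic = set()
        for x in range(n):
            for _ in range(n):            # n iterations certainly reach the cycle
                x = f(x)
            periodic.add(x)
        counts, seen = {}, set()
        for x in periodic:
            if x in seen:
                continue
            cyc, y = [x], f(x)
            while y != x:
                cyc.append(y)
                y = f(y)
            seen.update(cyc)
            counts[len(cyc)] = counts.get(len(cyc), 0) + 1
        return counts

    def predicted_structure(t, n):
        # Prediction of Eq. (2): phi(d)/o_d(t) cycles of length o_d(t) for each d | omega.
        omega = n
        while gcd(omega, t) > 1:          # omega = largest divisor of n coprime to t
            omega //= gcd(omega, t)
        counts = {}
        for d in (d for d in range(1, omega + 1) if omega % d == 0):
            phi_d = sum(1 for a in range(1, d + 1) if gcd(a, d) == 1)
            o = 1
            while pow(t, o, d) != 1 % d:  # multiplicative order o_d(t)
                o += 1
            counts[o] = counts.get(o, 0) + phi_d // o
        return counts

    print(cycle_structure(14, 72) == predicted_structure(14, 72))   # True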
2.3. The power map on finite abelian groups. Next we show how formula (2) can be extended to general abelian groups. Note that if u and v are non-increasing finite sequences which differ only possibly in their last terms that are all equal to 1, then the corresponding elementary trees are equal (i.e. T_u = T_v). In this case we say that these sequences are equivalent. The product uv of two non-increasing sequences is the coordinatewise product (substituting the shortest sequence by an equivalent one in such a way that both sequences have the same length). For what follows, ⊗ denotes the tensor product of digraphs.
Lemma 2.3 ([17], Lemma 3.4).
For any maps of finite sets f : X → X and g : Y → Y we have the following graph isomorphism:
G(f × g / X × Y) ≅ G(f/X) ⊗ G(g/Y),

where f × g : X × Y → X × Y is the map (f × g)(x, y) = (f(x), g(y)).

Lemma 2.5. For non-increasing finite sequences u and v of positive integers, {T_u} ⊗ {T_v} = {T_{uv}}.
Lemma 2.6. Let r_1, . . . , r_k be positive integers. We have

⊗_{i=1}^{k} Cyc(r_i) = (r_1 r_2 · · · r_k / lcm(r_1, r_2, . . . , r_k)) × Cyc(lcm(r_1, r_2, . . . , r_k)).

Proof. Consider the maps s_i : Z_{r_i} → Z_{r_i}, s_i(x) = x + 1. Then Cyc(r_i) = G(s_i/Z_{r_i}) and ⊗_{i=1}^{k} Cyc(r_i) = G(s_1 × · · · × s_k / Z_{r_1} × · · · × Z_{r_k}), which is a union of disjoint cycles. Each one of these cycles is in correspondence with a coset of H := ⟨(1, 1, . . . , 1)⟩ in Z_{r_1} × · · · × Z_{r_k}, and each one of these cosets has |H| = lcm(r_1, r_2, . . . , r_k) elements.

Now consider any (multiplicative) abelian group G and the map ϕ_t : G → G, ϕ_t(g) := g^t. By the fundamental theorem of finite abelian groups, there is an isomorphism η : G → C_{r_1} × · · · × C_{r_k}. Since η ∘ ϕ_t = ϕ_t ∘ η, we have that η induces an isomorphism between the functional graphs of ϕ_t over G and over the direct product of cyclic groups, so we can assume G = C_{r_1} × · · · × C_{r_k}. If d and ω are k-term sequences, d | ω means d_i | ω_i for every 1 ≤ i ≤ k.
Proposition 2.7. Let G be an abelian group and write G = C_{r_1} × · · · × C_{r_k}, where C_r denotes a cyclic group of order r. Let r_i = ν_i ω_i, where ω_i is the greatest divisor of r_i that is relatively prime with t, and set ν := (ν_1, . . . , ν_k), ω := (ω_1, . . . , ω_k) and gcd_t(ν) := ∏_{i=1}^{k} gcd_t(ν_i). For d = (d_1, . . . , d_k) define ϕ(d) := ∏_{i=1}^{k} ϕ(d_i) and o_d(t) := lcm{o_{d_i}(t) : 1 ≤ i ≤ k}. Then G has exactly ∏_{i=1}^{k} ω_i elements that are ϕ_t-periodic and the following isomorphism formula holds:

(3) G(ϕ_t/G) = ⊕_{d|ω} (ϕ(d)/o_d(t)) × Cyc(o_d(t), T_{gcd_t(ν)}).

Proof. If formula (3) holds, the number of ϕ_t-periodic points in G is ∑_{d|ω} ϕ(d) = ∏_{i=1}^{k} ∑_{d_i|ω_i} ϕ(d_i) = ∏_{i=1}^{k} ω_i. To prove the formula we use Proposition 2.2 and Lemma 2.3 to obtain:

G(ϕ_t/G) = ⊗_{i=1}^{k} G(ϕ_t/C_{r_i}) = ⊗_{i=1}^{k} ( ⊕_{d_i|ω_i} (ϕ(d_i)/o_{d_i}(t)) × Cyc(o_{d_i}(t), T_{gcd_t(ν_i)}) ).

By the commutativity of the tensor product and Lemma 2.4 we have:

G(ϕ_t/G) = ⊕_{d|ω} ⊗_{i=1}^{k} ( (ϕ(d_i)/o_{d_i}(t)) × Cyc(o_{d_i}(t), T_{gcd_t(ν_i)}) )
         = ⊕_{d|ω} ( ∏_{i=1}^{k} ϕ(d_i)/o_{d_i}(t) ) × ( ( ⊗_{i=1}^{k} Cyc(o_{d_i}(t)) ) ⊗ ( ⊗_{i=1}^{k} {T_{gcd_t(ν_i)}} ) ).

Finally, using Lemmas 2.5, 2.6 and again Lemma 2.4 we obtain the desired formula.
Example 2.8. Consider the power map ϕ_14 over the group Z*_91 ≅ C_6 × C_12 of invertible elements modulo 91. Figure 2 shows the functional graph of this map. In this case k = 2, r_1 = 6, r_2 = 12, ν = (2, 4), ω = (3, 3) and gcd_14(ν) = gcd_14(2) · gcd_14(4) = (2) · (2, 2) = (4, 2). Then,

G(ϕ_14/G) = ⊕_{d_1|3} ⊕_{d_2|3} (ϕ(d_1)ϕ(d_2)/lcm{o_{d_1}(14), o_{d_2}(14)}) × Cyc(lcm{o_{d_1}(14), o_{d_2}(14)}, T_{(4,2)}) = {T_{(4,2)}} ⊕ 4 × Cyc(2, T_{(4,2)}).

2.4. Some results on the power map over finite groups. Let G be a finite group and let d be a divisor of |G|. We denote by G[d] the group generated by the elements g ∈ G such that g^d = 1. In the next lemma we consider the factorization |G| = νω, where ω is the greatest divisor of |G| that is relatively prime with t. We note that if g ∈ G verifies g^{t^n} = 1 then the order of g is a divisor of ν, since gcd(t^n, |G|) divides ν. In particular, the tree attached to 1 in G(ϕ_t/G) is contained in G[ν].

Definition 2.9. An element g ∈ G is ϕ_t-periodic if there exists a positive integer n such that g^{t^n} = ϕ_t^{(n)}(g) = g. Moreover, the preperiod of g ∈ G under ϕ_t is the least integer i ≥ 0 such that ϕ_t^{(i)}(g) is ϕ_t-periodic.
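As a quick sanity check of Example 2.8 and of these notions, one may compute the ϕ_14-periodic points of Z*_91 directly (a minimal sketch; all names are ours):

    from math import gcd

    units = [a for a in range(1, 91) if gcd(a, 91) == 1]   # Z_91^*, of order 72
    f = lambda a: pow(a, 14, 91)

    periodic = set()
    for a in units:
        for _ in range(len(units)):   # iterate past the preperiod to land on a cycle
            a = f(a)
        periodic.add(a)

    print(len(periodic))                                   # 9 = omega_1 * omega_2
    print(sorted(a for a in periodic if f(a) == a))        # [1]: the loop {T_(4,2)}
    # The remaining 8 periodic points split into 4 cycles of length 2, matching
    # {T_(4,2)} + 4 x Cyc(2, T_(4,2)).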
Definition 2.10. The central tree of G(ϕ_t/G) is the rooted tree attached to the neutral element 1 ∈ G, and it is denoted by T_t(G).
Proposition 2.11. Let G be a finite group with identity 1 and consider the power
map ϕ t : G → G with g → g t . Let |G| = νω where ω is the greatest divisor of |G| that is relatively prime with t. If h ∈ G is ϕ t -periodic and h is in the centralizer of G[ν], then the rooted tree attached to h in G(ϕ t /G) is isomorphic to the central tree T t (G).
Proof. Suppose that s is the least positive integer such that ϕ_t^{(s)}(h) = h, i.e., h^{t^s} = h. Let T_h be the rooted tree attached to h in G(ϕ_t/G). Set τ : T_t(G) → T_h with τ(g) = g h^{t^{δ(g)(s−1)}}, where δ(g) is the preperiod of g under ϕ_t. As previously remarked, each element of T_t(G) is in G[ν].

Claim. The map τ is well defined and preserves the preperiod, i.e., τ(g) ∈ T_h and δ(τ(g)) = δ(g) for every g ∈ T_t(G).

We proceed by induction on n = δ(g). If n = 0, then g = 1 and τ(1) = h ∈ T_h with δ(h) = 0. If n = 1, then g ≠ 1, g^t = 1 and τ(g) = g h^{t^{s−1}}. In particular, since h is in the centralizer of G[ν] and g ∈ T_t(G), we have that ϕ_t(τ(g)) = g^t h^{t^s} = h. Since g ≠ 1, it follows that τ(g) = g h^{t^{s−1}} ≠ h^{t^{s−1}}, the unique ϕ_t-periodic element in the set ϕ_t^{−1}(h). The latter implies that τ(g) ∈ T_h and δ(τ(g)) = 1. Suppose that τ(g) ∈ T_h and δ(τ(g)) = δ(g) whenever g ∈ T_t(G) and δ(g) = n for some n ≥ 1, and let g ∈ T_t(G) with δ(g) = n + 1. Since n ≥ 1, we necessarily have that ϕ_t(g) ∈ T_t(G) and δ(ϕ_t(g)) = n. By the induction hypothesis, τ(ϕ_t(g)) ∈ T_h and δ(τ(ϕ_t(g))) = n. Since T_t(G) ⊆ G[ν], h is in the centralizer of G[ν] and n(s − 1) ≡ (n + 1)(s − 1) + 1 (mod s), we have that

(4) τ(ϕ_t(g)) = g^t h^{t^{n(s−1)}} = (g h^{t^{(n+1)(s−1)}})^t = ϕ_t(τ(g)).

In particular, τ(g) is in the set ϕ_t^{−1}(f) for some element f ∈ T_h of preperiod n. Since n ≥ 1, the latter implies that τ(g) is in T_h and has preperiod n + 1. The proof of the claim is complete.

From the claim, τ is well defined and preserves the preperiod under ϕ_t. By the same reasoning, the map τ* : T_h → T_t(G) that sends the element g ∈ T_h to the element g h^{−t^{δ(g)(s−1)}} is well defined and preserves the preperiod under ϕ_t. It is direct to verify that τ and τ* are the compositional inverses of each other, hence τ is a bijection. By Equation (4), we have that τ ∘ ϕ_t = ϕ_t ∘ τ and then τ preserves adjacency.
Corollary 2.12. Let h ∈ G be a ϕ t -periodic element. If h ∈ Z(G) then the rooted tree attached to h in G(ϕ t /G) is isomorphic to the central tree T t (G).
On flower groups
Given a group G, a cyclic subgroup H is a µ-subgroup of G if H is not contained in any cyclic subgroup of G other than H itself. We have the following definition.
Definition 3.1. Let G be a finite noncyclic group and let S = {C 1 , C 2 , . . . , C k } be the collection of its µ-subgroups. The group G is a flower group if there exists a subgroup C 0 of G such that C i ∩ C j = C 0 for 1 ≤ i < j ≤ k. The subgroup C 0 is the pistil of G and the elements of S are the petals of G. We define the type of G as (c 0 ; c 1 , . . . , c k ) where c i := |C i | for 0 ≤ i ≤ k.
In the following proposition we provide equivalent conditions for a group G to be a flower group.

Proposition 3.2. Let G be a finite noncyclic group and let C_0 be a subgroup of G. Then the following statements are equivalent:

(i) G is a flower group with pistil C_0;
(ii) for any g ∈ G \ C_0, there exists a unique µ-subgroup C such that g ∈ C and C_0 ⊆ C;
(iii) there exist k ≥ 1 and distinct µ-subgroups C_1, . . . , C_k of G such that C_i ∩ C_j = C_0 for any 1 ≤ i < j ≤ k, verifying ∑_{i=1}^{k} |C_i| − (k − 1)|C_0| = |G|.

Proof. Let S be the collection of the µ-subgroups of G. It is direct to verify that S covers G, i.e., G = ∪_{C∈S} C. For the implication (i) → (ii), we observe that if g ∈ G belongs to two distinct µ-subgroups C_1, C_2 of G, then g ∈ C_1 ∩ C_2 = C_0. The implication (ii) → (iii) follows by a simple counting argument. It remains to prove that (iii) → (i). If (iii) holds, it follows that G is the disjoint union of the sets C_i \ C_0 and C_0. In particular, since each C_i is a µ-subgroup of G and S covers G, we have that S = {C_1, . . . , C_k}. Therefore, G is a flower group with pistil C_0.
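Condition (iii) is easy to test computationally. The following sketch (our own toy model of the dihedral group of order 2n, not code from the paper) verifies it for the dihedral group with n = 12:

    def dihedral_mu_subgroups(n):
        # D_n modelled as pairs (i, s), with product (i, s)(j, t) = (i + (-1)^s j mod n, s ^ t).
        elems = [(i, s) for i in range(n) for s in (0, 1)]
        mul = lambda a, b: ((a[0] + (b[0] if a[1] == 0 else -b[0])) % n, a[1] ^ b[1])

        def gen(g):                        # the cyclic subgroup <g>
            H, x = {(0, 0)}, g
            while x != (0, 0):
                H.add(x)
                x = mul(x, g)
            return frozenset(H)

        cyclic = {gen(g) for g in elems}
        # mu-subgroups = cyclic subgroups maximal under inclusion
        return [H for H in cyclic if not any(H < K for K in cyclic)]

    mu = dihedral_mu_subgroups(12)
    pistils = {A & B for A in mu for B in mu if A != B}
    print(len(pistils) == 1)                                        # a single pistil C_0
    C0 = next(iter(pistils))
    print(sum(len(H) for H in mu) - (len(mu) - 1) * len(C0) == 24)  # condition (iii)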
In the following proposition we provide some basic properties of flower groups.
Proposition 3.3. Let G be a flower group with pistil C 0 and center Z(G). Then the following hold:
(i) If C = ⟨g⟩ is a µ-subgroup of G then C ⊆ C_G(g). In particular, C_0 ⊆ Z(G).
(ii) If H is any noncyclic subgroup of G, then H is a flower group with pistil H ∩ C_0.

Proof. (i) Since g ∈ C_G(g) we have C = ⟨g⟩ ⊆ C_G(g). Observe that every element of G is contained in at least one µ-subgroup of G. Therefore, the intersection of all the µ-subgroups is contained in the intersection of all the centralizers, i.e., C_0 ⊆ Z(G).
(ii) It suffices to prove that the µ-subgroups of H are of the form H ∩ C with C a µ-subgroup of G. Let C′ be a µ-subgroup of H. Since C′ is a cyclic subgroup of G, it is contained in a µ-subgroup C of G. We have that C′ ⊆ H ∩ C and H ∩ C is a cyclic subgroup of H, hence C′ = H ∩ C.
The following result provides a special class of flower groups G whose pistil C 0 coincides with Z(G). Proposition 3.4. Let G be a finite non abelian group such that for every element g ∈ G \ Z(G), there exists a unique µ-subgroup C * g of the centralizer C G (g) containing g and Z(G). Then G is a flower group with pistil Z(G) and the set of petals of G equals {C * g | g ∈ G \ Z(G)}. Proof. We observe that, for every g ∈ G \ Z(G), the inclusion Z(G) ⊆ C * g holds and, in particular, Z(G) is cyclic. From Proposition 3.2, it suffices to prove that for every g ∈ G \ Z(G), the group C * g is the unique µ-subgroup of G that contains g. The latter follows from our hypothesis and the fact that any µ-subgroup of G containing g is necessarily a µ-subgroup of C G (g).
We observe that the condition of Proposition 3.4 is satisfied if, for instance, C G (g) is cyclic for every g ∈ G \ Z(G). The groups satisfying the latter are fully characterized in [7].
3.1. Flower groups under the power map. In this part we prove that the functional graph induced by the power map on a flower group depends only on the type of the group, and we provide a description of the structure of this graph.
Definition 3.5. Let G be a flower group of type (c_0; c_1, . . . , c_k) and petals C_1, . . . , C_k. A compatible system of generators for G is (g_1, . . . , g_k) where g_i is a generator of C_i for 1 ≤ i ≤ k and g_i^{c_i/c_0} = g_j^{c_j/c_0} for 1 ≤ i < j ≤ k.
We obtain the following result.

Lemma 3.6. Every flower group has a compatible system of generators.

Proof. Let G be a flower group with pistil C_0 and petals C_1, . . . , C_k, and let c_i := |C_i| for 1 ≤ i ≤ k. Let g_1 be a generator of C_1 and consider generators h_i of C_i for 2 ≤ i ≤ k. Since h_i^{c_i/c_0} is a generator of C_0, we have g_1^{c_1/c_0} = h_i^{f c_i/c_0} for some integer f with gcd(f, c_0) = 1. Consider a positive integer f_i such that f_i ≡ f (mod c_0) and gcd(f_i, |G|) = 1 (for example, using Dirichlet's Theorem on arithmetic progressions we can take a prime f_i such that f_i ≡ f (mod c_0) and f_i > |G|). If we set g_i = h_i^{f_i} for 2 ≤ i ≤ k, we have that g_i is a generator of C_i satisfying g_i^{c_i/c_0} = h_i^{c_i f_i/c_0} = g_1^{c_1/c_0}.
The following proposition entails that the isomorphism class of the functional graph induced by power maps on flower groups depends only on their type.

Proposition 3.7. Let G and H be two flower groups of the same type. Then there is a graph isomorphism ψ : G(ϕ_t/G) → G(ϕ_t/H) such that ψ(1_G) = 1_H.
Proof. Let C_0 and C′_0 be the pistils of G and H, respectively. Let S = {C_1, . . . , C_k} and S′ = {C′_1, . . . , C′_k} be the sets of petals of G and H, respectively. Since G and H are flower groups of the same type, with no loss of generality we may assume that c_i := |C_i| = |C′_i| for 0 ≤ i ≤ k. By Lemma 3.6 we consider compatible systems of generators (g_1, . . . , g_k) and (h_1, . . . , h_k) for G and H, respectively. We note that g_0 := g_1^{c_1/c_0} and h_0 := h_1^{c_1/c_0} are generators of C_0 and C′_0, respectively. For each i with 1 ≤ i ≤ k we consider the group isomorphism ψ_i : C_i → C′_i such that ψ_i(g_i) = h_i, and define ψ : G → H such that ψ|_{C_i} = ψ_i. To prove that this map is well defined it suffices to prove that the maps ψ_i and ψ_j coincide on C_0 for 1 ≤ i < j ≤ k. This last assertion follows from the fact that ψ_i(g_0) = ψ_i(g_i^{c_i/c_0}) = h_i^{c_i/c_0} = h_0 for 1 ≤ i ≤ k and that g_0 is a generator of C_0. It is clear that ψ is a bijection, and since ϕ_t ψ = ψ ϕ_t on each petal C_i, this also happens globally, and ψ induces an isomorphism between the functional graphs G(ϕ_t/G) and G(ϕ_t/H).
Next we provide a description of the functional graph induced by the power map on flower groups. We start with the following important auxiliary result.
Lemma 3.8. Let G be a flower group with petals C_1, . . . , C_k and pistil C_0. Then for any 1 ≤ i ≤ k, ϕ_t^{−1}(C_i \ C_0) ⊆ C_i \ C_0.
In particular, for an element g ∈ C i \ C 0 that is ϕ t -periodic, we have that the tree attached to g in G(ϕ t /G) is isomorphic to the tree attached to g in G(ϕ t /C i ) and the cycle containing g in G(ϕ t /G) comprises vertices from C i \ C 0 .
Proof. Suppose that h^t ∈ C_i \ C_0. Then h ∉ C_0, and so h ∈ C_j \ C_0 for some j with 1 ≤ j ≤ k. Since (C_i ∩ C_j) \ C_0 = ∅ for j ≠ i, we conclude that j = i. The remaining statement follows directly from the inclusion ϕ_t^{−1}(C_i \ C_0) ⊆ C_i \ C_0.

We obtain the following result.

Theorem 3.9. Let t be a positive integer and G be a flower group of type (c_0; c_1, . . . , c_k). Set c_i = ν_i · ω_i in a way that ω_i is the greatest divisor of c_i that is relatively prime with t. Then the functional graph G(ϕ_t/G) of the map ϕ_t : G → G with g → g^t is isomorphic to

$$\bigoplus_{i=1}^{k}\ \bigoplus_{\substack{d_i \mid \omega_i \\ d_i \nmid \omega_0}} \frac{\varphi(d_i)}{o_{d_i}(t)} \times \operatorname{Cyc}\!\big(o_{d_i}(t),\, T_{\gcd_t(\nu_i)}\big) \;\oplus\; \bigoplus_{d_0 \mid \omega_0} \frac{\varphi(d_0)}{o_{d_0}(t)} \times \operatorname{Cyc}\!\big(o_{d_0}(t),\, T_t(G)\big),$$
where T_t(G) is the central tree, which has $\sum_{i=1}^{k} \nu_i - (k-1)\nu_0$ nodes.

Proof. Let C_0 be the pistil of G and C_1, . . . , C_k be the petals, with |C_i| = c_i for 1 ≤ i ≤ k. Fix 1 ≤ i ≤ k and let P_i be the set of ϕ_t-periodic elements of C_i \ C_0. From Proposition 2.2, P_i has ω_i − ω_0 elements, corresponding to the cycle decomposition

$$\bigoplus_{\substack{d_i \mid \omega_i \\ d_i \nmid \omega_0}} \frac{\varphi(d_i)}{o_{d_i}(t)} \times \operatorname{Cyc}\big(o_{d_i}(t)\big)$$

in G(ϕ_t/G). From Proposition 2.2 and Lemma 3.8, to each element of P_i is attached a rooted tree isomorphic to T_{gcd_t(ν_i)}; this yields the first component in our statement. It remains to consider the set P_0 of ϕ_t-periodic elements of C_0. From Lemma 2.2, P_0 has ω_0 elements, corresponding to the cycle decomposition

$$\bigoplus_{d_0 \mid \omega_0} \frac{\varphi(d_0)}{o_{d_0}(t)} \times \operatorname{Cyc}\big(o_{d_0}(t)\big)$$

in G(ϕ_t/G).
From Proposition 3.3, C 0 is in the center of G and then, by Proposition 2.11, the trees attached to the elements of P 0 are all isomorphic to T t (G). The statement about the number of nodes in such tree follows by a counting argument.
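The cycle terms in the proof above reduce, petal by petal, to the cyclic-group case of Proposition 2.2. The brute-force Python sketch below (our own check; all names are ours, and it is meant for small n only) compares the actual cycle lengths of x ↦ tx on Z_n with the predicted multiset of φ(d)/o_d(t) cycles of length o_d(t), one batch for each divisor d of ω.

```python
from math import gcd

def orbit_cycle_lengths(n, t):
    """Cycle lengths of the periodic part of x -> t*x on Z_n."""
    succ = [t * x % n for x in range(n)]
    periodic = set(range(n))
    for _ in range(n):                 # the image stabilizes on the periodic points
        periodic = {succ[x] for x in periodic}
    lengths, seen = [], set()
    for x in periodic:
        if x in seen:
            continue
        length, y = 0, x
        while y not in seen:
            seen.add(y)
            length, y = length + 1, succ[y]
        lengths.append(length)
    return sorted(lengths)

def predicted_cycle_lengths(n, t):
    """phi(d)/o_d(t) cycles of length o_d(t) for each divisor d of omega."""
    omega = n
    while gcd(omega, t) > 1:           # omega: largest divisor of n coprime to t
        omega //= gcd(omega, t)
    out = []
    for d in (d for d in range(1, omega + 1) if omega % d == 0):
        if d == 1:
            out.append(1)              # the fixed point at the identity element
            continue
        phi = sum(1 for a in range(1, d) if gcd(a, d) == 1)
        o, p = 1, t % d
        while p != 1:                  # multiplicative order of t modulo d
            o, p = o + 1, p * t % d
        out.extend([o] * (phi // o))
    return sorted(out)

for n, t in [(24, 10), (65, 10), (100, 3)]:
    assert orbit_cycle_lengths(n, t) == predicted_cycle_lengths(n, t)
```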
Remark 3.10. From the above theorem we have that if G is a flower group of type (c 0 ; c 1 , . . . , c k ) and t is a positive integer then the functional graph G(ϕ t /G) has at most k + 1 non isomorphic trees.
3.2. The rooted tree T_t(G). Theorem 3.9 provided a description of the functional graph of the power map ϕ_t on a flower group G. This description is complete except for the term T_t(G), which is the rooted tree associated with 1 ∈ G. In this part we describe it and provide some relations which, under some mild conditions, allow us to express it in terms of elementary trees. By Proposition 3.7, the tree T_t(G) only depends on the type of the flower group G. If G has type (c_0; c_1, . . . , c_k) we denote T_t(c_0; c_1, . . . , c_k) := T_t(G).
It is convenient to extend the definition of T_t(c_0; c_1, . . . , c_k) to positive integers c_0, c_1, . . . , c_k with c_0 | c_i, 1 ≤ i ≤ k, even if there is no flower group of type (c_0; c_1, . . . , c_k). We consider the set

$$V = \bigcup_{i=0}^{k} \{i\} \times \mathbb{Z}_{c_i}.$$

Note that if x ∈ Z_{c_i} for some i, 1 ≤ i ≤ k, verifies x ≡ 0 (mod c_i/c_0), then there is a unique x′ ∈ Z_{c_0} such that x ≡ x′ · (c_i/c_0) (mod c_i), and we define the identification function f : V → V given by f(i, x) = (i, x) if i = 0 or x ≢ 0 (mod c_i/c_0), and f(i, x) = (0, x′) if i ≠ 0 and x ≡ x′ · (c_i/c_0) (mod c_i).
From the latter, we have the following definition.
Definition 3.11. The pseudo-flower group F = F(c_0; c_1, . . . , c_k) is the quotient set V/f, i.e., its elements are of the form (i, x) := {(j, y) ∈ V : f(j, y) = f(i, x)}. The set V_0 := {(0, x) : x ∈ Z_{c_0}} is the pistil of F and the sets V_i := {(i, x) : x ∈ Z_{c_i}} with 1 ≤ i ≤ k are the petals of F.
Note that, for each i, 0 ≤ i ≤ k, the sets V_i have a natural structure of cyclic groups given by (i, x) + (i, y) := (i, x + y), and that this sum is well defined in F. Indeed, if f(i, x_1) = f(j, x_2) = (0, x′) and f(i, y_1) = f(j, y_2) = (0, y′) with 0 ≤ j < i ≤ k, we have that x_1 ≡ x′ · (c_i/c_0) (mod c_i), y_1 ≡ y′ · (c_i/c_0) (mod c_i) and x_1 + y_1 ≡ (x′ + y′) · (c_i/c_0) (mod c_i), thus f(i, x_1 + y_1) = (0, x′ + y′). In a similar way we have that f(j, x_2 + y_2) = (0, x′ + y′) (if j = 0 it follows directly from the fact that x_2 = x′ and y_2 = y′). Then f(i, x_1 + y_1) = f(j, x_2 + y_2) for 0 ≤ j < i ≤ k. It is clear that each V_i is a cyclic group (isomorphic to Z_{c_i}) for 0 ≤ i ≤ k, and they satisfy $\bigcup_{i=1}^{k} V_i = F$ and V_i ∩ V_j = V_0 for 1 ≤ i < j ≤ k; but F is not a genuine flower group, since the sum is not defined for every pair of elements of F. Nevertheless, we can define the power map ϕ_t : F → F given by ϕ_t(x) = tx. The group V_0 is called the pistil of F and the groups V_1, . . . , V_k are called the petals of F.
It is straightforward to prove that if G is a flower group of type (c_0; c_1, . . . , c_k) and (g_1, . . . , g_k) is a compatible system of generators for G, the map ψ : G → F(c_0; c_1, . . . , c_k) given by g_i^j → (i, j) for 1 ≤ i ≤ k and j ≥ 0 is well defined (using (i, c_i/c_0) = (0, 1) for 1 ≤ i ≤ k), bijective, and satisfies ϕ_t ψ = ψ ϕ_t. Thus, the induced map ψ̄ : G(ϕ_t/G) → G(ϕ_t/F(c_0; c_1, . . . , c_k)) is a graph isomorphism. Since ψ(1) = (0, 0), the tree T_t(c_0; c_1, . . . , c_k) is isomorphic to the tree attached to (0, 0) in G(ϕ_t/F(c_0; c_1, . . . , c_k)). We have now defined the rooted tree T_t(c_0; c_1, . . . , c_k) for every c_0, c_1, . . . , c_k satisfying c_0 | c_i, even if there is no flower group of type (c_0; c_1, . . . , c_k). In the case that there is a flower group G of type (c_0; c_1, . . . , c_k), this tree coincides with the central tree T_t(G) in the functional graph G(ϕ_t/G).
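Since F(c_0; c_1, . . . , c_k) is a purely combinatorial object, the central tree can also be computed directly. The sketch below is our own encoding of Definition 3.11 (the function names and the canonical-pair representation are assumptions of this illustration): it realizes F as canonical pairs, applies ϕ_t with the identification f, and counts the non-root nodes of the tree attached to (0, 0). For the type (2; 4, 4, 4, 4, 4, 4, 12) of Q_24 and t = 3, it recovers the three-node central tree predicted by Theorem 3.9, since Σν_i − (k − 1)ν_0 = 9 − 6 = 3.

```python
def pseudo_flower(c):
    """Elements of F(c0; c1, ..., ck) as canonical pairs: (i, x) stays in
    petal i when x is not a multiple of ci/c0; multiples live in the pistil."""
    c0 = c[0]
    elems = {(0, x) for x in range(c0)}
    for i, ci in enumerate(c[1:], start=1):
        elems |= {(i, x) for x in range(ci) if x % (ci // c0)}
    return elems

def phi(elem, c, t):
    """The power map phi_t(i, x) = (i, t*x), re-canonicalized through f."""
    i, x = elem
    x = t * x % c[i]
    if i and x % (c[i] // c[0]) == 0:
        return (0, x // (c[i] // c[0]))
    return (i, x)

def central_tree_nonroot_nodes(c, t):
    """Non-root nodes of T_t(c0; c1, ..., ck): the points whose forward
    orbit reaches the fixed point (0, 0)."""
    hits = set()
    for v in pseudo_flower(c):
        seen, w = {v}, phi(v, c, t)
        while w not in seen:
            if w == (0, 0):
                hits.add(v)
                break
            seen.add(w)
            w = phi(w, c, t)
    return len(hits)

# Q_24 has type (2; 4, 4, 4, 4, 4, 4, 12); for t = 3 the central tree has 3 nodes.
assert central_tree_nonroot_nodes((2, 4, 4, 4, 4, 4, 4, 12), 3) + 1 == 3
```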
The following result provides a method to simplify the tree structure of T_t(c_0; c_1, . . . , c_k).

Theorem 3.12. Let c_0, c_1, . . . , c_k be positive integers with k ≥ 2 and c_0 | c_i for 1 ≤ i ≤ k. Then the following hold:
i) T_t(c_0; c_1, . . . , c_k) = T_t(c_0; c_{θ(1)}, . . . , c_{θ(k)}) for every permutation θ of the set {1, 2, . . . , k}.
ii) If gcd(c_0, t) = 1, then T_t(c_0; c_1, . . . , c_k) = $\bigcup_{i=1}^{k} T_{\gcd_t(\nu_i)}$ (union of rooted trees with identified roots).
iii) If gcd(t, c_k/c_0) = 1, then T_t(c_0; c_1, . . . , c_{k−1}, c_k) = T_t(c_0; c_1, . . . , c_{k−1}).
iv) If c_k | t, then T_t(c_0; c_1, . . . , c_{k−1}, c_k) = T_t(c_0; c_1, . . . , c_{k−1}) + (c_k − c_0) × •.
v) T_t(c_0; c_1) = T_{gcd_t(c_1)}.
Proof. Consider the pseudo-flower group F = F(c_0; c_1, . . . , c_k) with pistil V_0 and petals V_1, . . . , V_k, and the power map ϕ_t : F → F. We split the proof into cases: i) Item i) follows directly from the definition of F. ii) We observe that, if gcd(c_0, t) = 1, then every element of V_0 is ϕ_t-periodic. In particular, the tree attached to (0, 0) ∈ F contains, in addition to the root (0, 0), only elements of the set $F \setminus V_0 = \bigcup_{i=1}^{k} (V_i \setminus V_0)$. Such a tree is the union of the rooted trees associated with (0, 0) ∈ F in each of the functional graphs G(ϕ_t/V_i), that is, $\bigcup_{i=1}^{k} T_{\gcd_t(c_i)}$. iii) We first note that gcd(t, c_k/c_0) = 1 implies ϕ_t(V_k \ V_0) ⊆ V_k \ V_0. Indeed, if (k, x) ∈ V_k \ V_0 then (c_k/c_0) ∤ x and, since gcd(t, c_k/c_0) = 1, we also have (c_k/c_0) ∤ tx. Hence ϕ_t((k, x)) = (k, tx) ∈ V_k \ V_0. The inclusion ϕ_t(V_k \ V_0) ⊆ V_k \ V_0 implies that there are no points of the tree attached to (0, 0) in V_k \ V_0, and this tree equals T_t(c_0; c_1, . . . , c_{k−1}). iv) We observe that if c_k | t then ϕ_t(V_k) = {(0, 0)}. Set F* = (F \ V_k) ∪ V_0. The tree attached to (0, 0) in G(ϕ_t/F) can be obtained as the union of the tree attached to (0, 0) in G(ϕ_t/F*) (which equals T_t(c_0; c_1, . . . , c_{k−1})) and the tree (c_k − c_0) × • (corresponding to the c_k − c_0 points of V_k \ V_0 mapping to (0, 0) by ϕ_t), both trees with the same root; then T_t(c_0; c_1, . . . , c_{k−1}, c_k) = T_t(c_0; c_1, . . . , c_{k−1}) + (c_k − c_0) × •. v) We note that F(c_0; c_1) = V_1 is a cyclic group of order c_1, and then the equality T_t(c_0; c_1) = T_{gcd_t(c_1)} follows from Proposition 2.2.
Applications
In this section we provide some applications of Theorem 3.9 to the explicit description of the functional graph of the power map on certain classes of finite groups. For clarity and organization, we consider them separately.
4.1. Generalized quaternions. In [2] the authors obtain an implicit description of the digraph associated to power maps over generalized quaternion groups, including the distribution of indegrees and cycle lengths. Here we provide a more explicit description of these digraphs. We start by showing that such groups are flower groups.
Lemma 4.1. For n ≥ 2, the generalized quaternion group Q_{4n} = ⟨a, b | a^{2n} = 1, a^n = b^2, bab^{−1} = a^{−1}⟩ of order 4n is a flower group with pistil C_0 = {1, a^n}. Moreover, the set of petals of Q_{4n} comprises n cyclic groups of order 4 and one cyclic group of order 2n.
Proof. We observe that every element of Q_{4n} is written uniquely as a^i b^j, where j = 0, 1 and 0 ≤ i ≤ 2n − 1. Therefore, the µ-subgroups of Q_{4n} are the groups C_i = ⟨a^i b⟩ = {1, a^i b, a^n, a^{n+i} b} with 1 ≤ i ≤ n, each of order 4, and C_{n+1} = ⟨a⟩, which has order 2n. From this fact the result follows directly.
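As a sanity check on Lemma 4.1, the short Python sketch below (our own; the encoding of Q_{4n} as pairs (i, j) for a^i b^j and the helper names are assumptions of this example) multiplies via the relations b a^i = a^{−i} b and b^2 = a^n, extracts the µ-subgroups as maximal cyclic subgroups, and confirms that for n = 6 the petals have orders [4, 4, 4, 4, 4, 4, 12] with pistil {1, a^6}.

```python
from itertools import product

def q4n_mul(x, y, n):
    """Multiplication in Q_{4n}; an element a^i b^j is the pair (i, j),
    with 0 <= i < 2n and j in {0, 1}."""
    (i1, j1), (i2, j2) = x, y
    if j1 == 0:
        return ((i1 + i2) % (2 * n), j2)
    if j2 == 0:
        return ((i1 - i2) % (2 * n), 1)     # b a^i = a^{-i} b
    return ((i1 - i2 + n) % (2 * n), 0)     # ... and b^2 = a^n

def mu_subgroup_orders(n):
    e = (0, 0)
    G = list(product(range(2 * n), (0, 1)))

    def cyclic(g):
        out, h = {e}, g
        while h != e:
            out.add(h)
            h = q4n_mul(h, g, n)
        return frozenset(out)

    subs = {cyclic(g) for g in G}
    mu = [C for C in subs if not any(C < D for D in subs)]
    assert frozenset.intersection(*mu) == {e, (n, 0)}   # pistil {1, a^n}
    return sorted(len(C) for C in mu)

print(mu_subgroup_orders(6))   # [4, 4, 4, 4, 4, 4, 12], as Lemma 4.1 predicts
```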
We obtain the following corollary.
Corollary 4.2. Fix an integer t, let Q_{4n} be as in Lemma 4.1 and set 2n = ν · ω in a way that ω is the greatest divisor of 2n that is relatively prime with t. Let α be the integer such that n/2^α is odd. Then the functional graph G(ϕ_t/Q_{4n}) of the map ϕ_t : Q_{4n} → Q_{4n} with g → g^t is isomorphic to one of the following graphs:

$$\bigoplus_{\substack{d \mid \omega \\ d > 2}} \frac{\varphi(d)}{o_d(t)} \times \operatorname{Cyc}\!\big(o_d(t),\, T_{\gcd_t(\nu)}\big) \;\oplus\; \frac{n}{k_t} \times \operatorname{Cyc}(2k_t) \;\oplus\; 2 \times \{T_{\gcd_t(\nu)}\},$$

if t is odd, where k_t = 2 if t ≡ 1 (mod 4) and k_t = 1 if t ≡ 3 (mod 4); or

$$\bigoplus_{\substack{d \mid \omega \\ d > 1}} \frac{\varphi(d)}{o_d(t)} \times \operatorname{Cyc}\!\big(o_d(t),\, T_{\gcd_t(\nu)}\big) \;\oplus\; \{T_0\},$$

if t is even, where T_0 = T_{gcd_t(ν)} + 2n × • if t ≡ 0 (mod 4), and T_0 = T_{gcd_t(ν)} +_α 2n × • if t ≡ 2 (mod 4).
Proof. By Lemma 4.1, the type of Q_{4n} is (c_0; c_1, . . . , c_{n+1}) with c_0 = 2, c_i = 4 for 1 ≤ i ≤ n and c_{n+1} = 2n. Let C_0 be the pistil and C_1, . . . , C_{n+1} be the petals of Q_{4n} with |C_i| = c_i for 1 ≤ i ≤ n + 1. Set c_i = ν_i · ω_i such that ω_i is the greatest divisor of c_i that is relatively prime with t. In all the cases we apply Theorem 3.9 to obtain the graph structure of G(ϕ_t/Q_{4n}), except for the tree T_t(Q_{4n}), which equals T_0 when t ≡ 0 (mod 2). We split the proof into cases. a) If t ≡ 1 (mod 2) we have gcd(t, c_i/c_0) = 1 for 1 ≤ i ≤ n, and by Theorem 3.12 (repeated application of i) and iii), using v) to finish) we obtain T_t(Q_{4n}) = T_{gcd_t(ν)}. b) If t ≡ 0 (mod 4) we have c_i | t for 1 ≤ i ≤ n, and by Theorem 3.12 (repeated application of i) and iv), using v) to finish) we obtain T_0 = T_{gcd_t(ν)} + 2n × •. c) If t ≡ 2 (mod 4) and 1 ≤ i ≤ n, the power map ϕ_t maps the two points of C_i \ C_0 to the unique point of order 2 in C_0, and the hanging tree of 1 in G(ϕ_t/C_{n+1}) is T_{gcd_t(ν)}. Then the tree T_0 can be obtained from T_{gcd_t(ν)} by replacing the subtree T whose node corresponds to the point of order 2 in C_{n+1} with T + 2n × •, that is, T_0 = T_{gcd_t(ν)} +_α 2n × •, where α is the depth of T. To conclude, we observe that α equals the greatest integer k for which there exists x ∈ Z such that t^k x ≡ n (mod 2n). This last equation has a solution if and only if gcd(t^k, 2n) | n. If e_2(m) denotes the exponent of 2 in the prime decomposition of m, the latter is equivalent to min{k · e_2(t), 1 + e_2(n)} ≤ e_2(n), i.e., k · e_2(t) ≤ e_2(n). Since e_2(t) = 1 we have α = e_2(n), as desired.
We provide two numerical examples, showing the applicability of Corollary 4.2.
Example 4.3. Consider the group Q_24 (n = 6) and the map ϕ_3 : Q_24 → Q_24 with ϕ_3(g) = g^3. From Corollary 4.2, we obtain that G(ϕ_3/Q_24) is isomorphic to Cyc(2, T_(3)) ⊕ (2 × Cyc(1, T_(3))) ⊕ (6 × Cyc(2)); see Figure 3.
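For readers who want to trace where each summand comes from, the following short LaTeX-style computation (our own expansion, assuming the notation of Corollary 4.2) spells out the example: here 2n = 12 = ν · ω with ω = 4 and ν = 3, and t = 3 is odd with t ≡ 3 (mod 4), so k_t = 1.

```latex
% Only d = 4 satisfies d | 4, d > 2:
%   \varphi(4)/o_4(3) = 2/2 = 1 cycle of length o_4(3) = 2,
%   each periodic point carrying the tree T_{\gcd_3(3)} = T_{(3)}:
\frac{\varphi(4)}{o_4(3)} \times \operatorname{Cyc}\big(o_4(3), T_{(3)}\big)
      = \operatorname{Cyc}\big(2, T_{(3)}\big)
% the order-4 petals contribute n/k_t = 6 cycles of length 2k_t = 2:
\frac{n}{k_t} \times \operatorname{Cyc}(2k_t) = 6 \times \operatorname{Cyc}(2)
% and the two pistil points carry copies of T_{\gcd_3(3)}:
2 \times \{T_{(3)}\} = 2 \times \operatorname{Cyc}\big(1, T_{(3)}\big)
```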
4.2. Semidirect products of cyclic groups. Let C_n = ⟨a⟩ and C_m = ⟨b⟩ be two cyclic groups. Every homomorphism φ : C_m → Aut(C_n) is defined by φ(b)(a) = a^s for some integer s with s^m ≡ 1 (mod n), and its associated semidirect product is denoted by C_n ⋊_s C_m.
In [4], the authors explored the digraph associated to the power map on such groups in the case where n is a prime number. The following lemma provides a class of such groups that are also flower groups.
Lemma 4.5. Let m, n, s > 1 be positive integers such that (s^m − 1)/(s − 1) ≡ 0 (mod n) and gcd(n, (s^j − 1)/(s − 1)) = 1 for 1 ≤ j < m. Then the semidirect product C_n ⋊_s C_m = ⟨a, b | b^n = a^m = 1, aba^{−1} = b^s⟩ is a flower group of order mn with pistil C_0 = {1}. Moreover, the set of petals of C_n ⋊_s C_m comprises n cyclic groups of order m and one cyclic group of order n.
Proof. We observe that every element of G := C_n ⋊_s C_m is written uniquely as b^i a^j with 0 ≤ i < n and 0 ≤ j < m. It is direct to verify that, for any 0 ≤ i, i_0 < n and 0 ≤ j, j_0 < m, we have that (b^i a^j) · (b^{i_0} a^{j_0}) = b^{i + i_0 · s^j} a^{j + j_0}. In particular, we obtain that

(5)   (b^i a^j)^t = b^{i · (s^{jt} − 1)/(s^j − 1)} a^{jt},   t > 0.

We claim that for every element g = b^i a^j ∈ G with 0 ≤ i < n, 1 ≤ j < m there is an integer i_0 such that (b^{i_0} a)^j = b^i a^j. Indeed, by Equation (5), it suffices to take i_0 such that i_0 · (s^j − 1)/(s − 1) ≡ i (mod n), and the existence of such i_0 is guaranteed by the fact that gcd(n, (s^j − 1)/(s − 1)) = 1 for every 1 ≤ j < m. Thus, the µ-subgroups are ⟨b⟩ (of order n) and ⟨b^i a⟩ with 0 ≤ i < n. The order of b^i a is m because (b^i a)^t = b^{i · (s^t − 1)/(s − 1)} a^t = 1 implies t ≡ 0 (mod m), and for t = m we have (b^i a)^m = b^{i · (s^m − 1)/(s − 1)} = 1 because (s^m − 1)/(s − 1) ≡ 0 (mod n).
Since the union of the µ-subgroups covers G and (n − 1) + (m − 1)n = mn − 1 = |G| − 1, we conclude that these are the µ-subgroups, with pairwise trivial intersection, which implies that G is a flower group. The following corollary is a direct application of Lemma 4.5 and Theorem 3.9.
Corollary 4.8. Fix a positive integer t and let m, n, s > 1 and C_n ⋊_s C_m be as in Lemma 4.5. Write n = ν_1 ω_1 in a way that ω_1 is the greatest divisor of n that is relatively prime with t, and write m = ν_2 ω_2 in the same way. Then, for n_1 = 1 and n_2 = n, the functional graph G(ϕ_t/C_n ⋊_s C_m) of the map ϕ_t over C_n ⋊_s C_m is isomorphic to

$$\bigoplus_{i=1,2}\ \bigoplus_{\substack{d_i \mid \omega_i \\ d_i \neq 1}} \frac{n_i \cdot \varphi(d_i)}{o_{d_i}(t)} \times \operatorname{Cyc}\!\big(o_{d_i}(t),\, T_{\gcd_t(\nu_i)}\big) \;\oplus\; \big\{T_{\gcd_t(\nu_1)} + n \cdot T_{\gcd_t(\nu_2)}\big\}.$$
We provide a numerical example, showing the applicability of Corollary 4.8.
Example 4.9. Consider the group G = C 65 ⋊ 8 C 4 as in Lemma 4.5 and let ϕ 10 : G → G with ϕ 10 (g) = g 10 . From Corollary 4.8, we have that G(ϕ 10 /G) is isomorphic to 2 × Cyc(6, T (5) ) ⊕ {T (5) + 65 · T (2,2) }. Figure 5 shows a picture of this graph.
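The hypotheses of Lemma 4.5 for this example can be verified in a few lines; the sketch below (our own check, with our own variable names) confirms that (s^m − 1)/(s − 1) ≡ 0 (mod n) while the partial quotients for j < m are coprime to n, so C_65 ⋊_8 C_4 is indeed a flower group of type (1; 4, . . . , 4, 65).

```python
from math import gcd

n, m, s = 65, 4, 8   # the group C_65 ⋊_8 C_4 of Example 4.9

def q(j):
    """(s^j - 1)/(s - 1) = 1 + s + ... + s^{j-1}."""
    return (s**j - 1) // (s - 1)

assert q(m) % n == 0                                   # (s^m - 1)/(s - 1) ≡ 0 (mod n)
assert all(gcd(n, q(j)) == 1 for j in range(1, m))     # coprimality for 1 <= j < m
print([q(j) % n for j in range(1, m + 1)])             # [1, 9, 8, 0]
```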
Figure 5. The graph 2 × Cyc(6, T_(5)) ⊕ {T_(5) + 65 · T_(2,2)}.
4.3. The projective general linear group. Fix a prime power q = p^s and let F_q be the finite field with q elements. The projective general linear group of order 2 over F_q is the quotient group G_q := PGL(2, q) = GL(2, q)/(F_q^* · I), where GL(2, q) is the group of 2 × 2 non-singular matrices with entries in F_q and I is the 2 × 2 identity matrix. It is well known that G_q has order q^3 − q. We shall prove that G_q is, in fact, a flower group. For this, we need the following machinery.
Definition 4.10. For A ∈ GL(2, q) such that [A] ≠ [I], A is of type 1 (resp. 2, 3 or 4) if its eigenvalues are distinct and in F_q (resp. equal and in F_q, symmetric and in F_{q^2} \ F_q, or not symmetric and in F_{q^2} \ F_q).
We observe that elements of type 3 appear only when q is odd. Moreover, the types of A and λ · A are the same for any λ ∈ F_q^*. For this reason, we say that [A] is of type t if A is of type t. The number of elements of each type and the structure of the centralizers is well known, and we display them in Table 1.

Table 1. Element structure in G_q = PGL(2, q), where q = p^s. Here C_n and D_{2n} denote the cyclic group of order n and the dihedral group of order 2n, respectively.

p     | type of g | order of g | C_{G_q}(g)    | # elements
2     | 1         | > 2        | ≅ C_{q−1}     | q(q+1)(q−2)/2
2     | 2         | 2          | ≅ C_2^s       | q^2 − 1
2     | 4         | > 2        | ≅ C_{q+1}     | q^2(q−1)/2
> 2   | 1         | > 2        | ≅ C_{q−1}     | q(q+1)(q−3)/2
> 2   | 1         | 2          | ≅ D_{2(q−1)}  | q(q+1)/2
> 2   | 2         | p          | ≅ C_p^s       | q^2 − 1
> 2   | 3         | 2          | ≅ D_{2(q+1)}  | q(q−1)/2
> 2   | 4         | > 2        | ≅ C_{q+1}     | q(q−1)^2/2
We obtain the following result.
Proposition 4.11. For any prime power q = p^s, the projective general linear group G_q = PGL(2, q) is a flower group with pistil C_0 = {[I]}. Moreover, for q ≥ 3, the set of petals of PGL(2, q) comprises q(q+1)/2 cyclic groups of order q − 1, q(q−1)/2 cyclic groups of order q + 1, and (q^2 − 1)/(p − 1) cyclic groups of order p. Proof. It is well known that the center of G_q equals {[I]}. By Proposition 3.4, it suffices to prove that every non-identity element g ∈ G_q is contained in a unique µ-subgroup of C_{G_q}(g). This is trivially verified if C_{G_q}(g) is cyclic or has prime exponent. Otherwise, C_{G_q}(g) is a dihedral group, the order of g is 2 and q is odd (see Table 1). In this case, we observe that n = q ± 1 is even and the center of D_{2n} equals {1, h^{n/2}}, where h is an element of order n. Hence g = h^{n/2}, and it is direct to verify that such an element is contained in a unique µ-subgroup of D_{2n}, namely the group of order n generated by h. Thus G_q is a flower group with pistil {[I]} whose set of petals comprises the cyclic subgroups of orders p, q + 1 and q − 1. A detailed count in Table 1 provides the number of such subgroups, according to their orders.
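Proposition 4.11 can be checked by brute force for small q. The sketch below (our own; the names and the canonicalization are assumptions of this example) enumerates PGL(2, 5) as canonical representatives of projective classes, computes the maximal cyclic subgroups, and recovers the predicted petal counts: 15 of order q − 1 = 4, 10 of order q + 1 = 6 and 6 of order p = 5.

```python
from itertools import product
from collections import Counter

p = 5  # check PGL(2, 5); requires Python 3.8+ for pow(a, -1, p)

def canon(M):
    """Canonical representative of the class [M] in PGL(2, p):
    scale so that the first nonzero entry equals 1."""
    a = next(x for x in M if x)
    inv = pow(a, -1, p)
    return tuple(x * inv % p for x in M)

def mul(M, N):
    a, b, c, d = M
    e, f, g, h = N
    return canon(((a*e + b*g) % p, (a*f + b*h) % p,
                  (c*e + d*g) % p, (c*f + d*h) % p))

G = {canon(M) for M in product(range(p), repeat=4)
     if (M[0]*M[3] - M[1]*M[2]) % p}                 # invertible matrices only
I = (1, 0, 0, 1)

def cyclic(g):
    out, h = {I}, g
    while h != I:
        out.add(h)
        h = mul(h, g)
    return frozenset(out)

subs = {cyclic(g) for g in G}
mu = [C for C in subs if not any(C < D for D in subs)]
print(len(G), Counter(len(C) for C in mu))           # 120 Counter({4: 15, 6: 10, 5: 6})
```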
The following corollary is an immediate application of Proposition 4.11 and Theorems 3.9 and 3.12.
Corollary 4.12. Fix a positive integer t, let q = p^s ≥ 4 be a prime power and G_q = PGL(2, q). Write q − 1 = ν_1 ω_1 in a way that ω_1 is the greatest divisor of q − 1 that is relatively prime with t, and write q + 1 = ν_2 ω_2 in the same way. Then, for d_1 = q(q+1)/2 and d_2 = q(q−1)/2, the functional graph G(ϕ_t/G_q) of the map ϕ_t : G_q → G_q with g → g^t is isomorphic to

$$\bigoplus_{i=1,2}\ \bigoplus_{\substack{d \mid \omega_i \\ d \neq 1}} \frac{d_i \cdot \varphi(d)}{o_{d}(t)} \times \operatorname{Cyc}\!\big(o_{d}(t),\, T_{\gcd_t(\nu_i)}\big) \;\oplus\; \mathcal{G},$$
Figure 6. The graph 10 × Cyc(2, T_(2)) ⊕ 6 × Cyc(4) ⊕ {15 · T_(2,2) + 10 · T_(2)}.
where $\mathcal{G} = \big\{\tfrac{q(q+1)}{2} \cdot T_{\gcd_t(\nu_1)} + \tfrac{q(q-1)}{2} \cdot T_{\gcd_t(\nu_2)} + \tfrac{q^2-1}{p-1} \cdot T_{(p)}\big\}$ if p | t, and $\mathcal{G} = \tfrac{q^2-1}{o_p(t)} \times \operatorname{Cyc}(o_p(t)) \oplus \big\{\tfrac{q(q+1)}{2} \cdot T_{\gcd_t(\nu_1)} + \tfrac{q(q-1)}{2} \cdot T_{\gcd_t(\nu_2)}\big\}$ if p ∤ t.
We provide a numerical example, showing the applicability of the previous corollary.
Example 4.13. Consider the map ϕ 2 : PGL(2, 5) → PGL(2, 5), g → g 2 . From Corollary 4.12, we obtain that G(ϕ 2 /PGL(2, 5)) is isomorphic to 10 × Cyc(2, T (2) ) ⊕ 6 × Cyc(4) ⊕ {15 · T (2,2) + 10 · T (2) }; see Figure 6.
From Remark 3.10, a functional graph G(ϕ_t/PGL(2, q)) has at most four non-isomorphic trees. In the example above we have three non-isomorphic trees, but there are examples with four (for example, the functional graph of ϕ_2 over PGL(2, 11)).
Closing remarks
In this paper we describe the functional graph G(ϕ_t/G) when G is an abelian group or a flower group. In both cases the cyclic part is easier to describe than the non-cyclic part (i.e., the structure of the trees attached to periodic points). In contrast with the abelian case, where all the trees attached to periodic points are isomorphic, for flower groups we proved that several non-isomorphic classes of trees can appear (this number depends on the cardinality of the petals and on t). However, for the families of groups considered in Section 4, the number of non-isomorphic trees in the functional graph G(ϕ_t/G) is at most four. We raise the following questions:
• Is this number unbounded for general groups?
• Is this number unbounded if we restrict to flower groups?
• Determine necessary and sufficient conditions for a sequence (c_0; c_1, . . . , c_k) to be the type of some flower group.
These questions are stated in increasing order of difficulty. Related to the above questions, it is natural to consider the number τ(n), the maximum number of non-isomorphic trees that can appear in a graph G(ϕ_t/G) for some group G of order n (restricted or not to flower groups) and some positive integer t. It could be interesting to determine the asymptotic behavior of the sequence τ(n).
Lemma 2.4 ([17], Lemma 3.5). Let r ∈ Z^+ and T be a rooted tree. The following isomorphism holds: Cyc(r, T) ≅ Cyc(r) ⊗ {T}.
Lemma 2.5 ([17], Prop. 2.10). For any non-decreasing sequences u, v we have
Figure 2. The graph {T_(4,2)} ⊕ 4 × Cyc(2, T_(4,2)).
Figure 3. The graph Cyc(2, T_(3)) ⊕ (2 × Cyc(1, T_(3))) ⊕ (6 × Cyc(2)).
Example 4.4. Consider the group Q_48 (n = 12) and the map ϕ_10 : Q_48 → Q_48 with ϕ_10(g) = g^10. From Corollary 4.2, we obtain that G(ϕ_10/Q_48) is isomorphic to 2 × {T_(2,2,2)} ⊕ {T_(2,2,2) +_2 24 × •}; see Figure 4.
Figure 4. The graph 2 × {T_(2,2,2)} ⊕ {T_(2,2,2) +_2 24 × •}.
Remark 4.6. The conditions on s, m and n in Lemma 4.5 are fulfilled if, for instance, the order of s modulo p equals m for every prime divisor p of n.
Remark 4.7. For every positive integer n ≥ 2, the pair (s, m) = (n − 1, 2) satisfies the conditions of Lemma 4.5. In this case, the corresponding group is just the dihedral group D_{2n} of order 2n.
In [14] the iterated gcd of n relative to t is called the ν-series generated by n and t, denoted by n(t), and defined only in the case when each prime divisor of n divides t. In order to avoid possible confusion we decided to use the alternative notation gcd_t(n).
Acknowledgments
Parts of this paper were developed during a pleasant stay by the second author at Universidad de la República, supported by PEDECIBA. The authors thank Sávio Ribas for pointing out a mistake in Lemma 4.5 in a previous version of this paper.
References
[1] U. Ahmad. The power digraphs associated with generalized dihedral groups. Discrete Math. Algorithms Appl. 7(4): 1550057 (2015).
[2] U. Ahmad and M. Moeen. The digraphs arising by the power maps of generalized quaternion groups. J. Algebra Appl. 16(9): 1750179 (2017).
[3] L. Blum, M. Blum and M. Shub. A simple unpredictable pseudo-random number generator. SIAM J. Comput. 15(2): 364-383 (1986).
[4] G. Deng and J. Zhao. Digraph from power mapping on noncommutative groups. J. Algebra Appl. 19(5): 2050084 (2020).
[5] B.-E. de Klerk and J. H. Meyer. Functional graphs of abelian group endomorphisms. Discrete Math. 345(2): 112691 (2022).
[6] T. A. Gassert. Chebyshev action on finite fields. Discr. Math. 315: 83-94 (2014).
[7] S. M. Jafarian Amiri and H. Rostami. Finite groups whose all proper centralizers are cyclic. Bull. Iranian Math. Soc. 43(3): 755-762 (2017).
[8] M. Larson. Power maps in finite groups. Integers 19: #A58 (2019).
[9] R. Martins, D. Panario and C. Qureshi. A survey on iterations of mappings over finite fields. In: Combinatorics and Finite Fields: Difference Sets, Polynomials, Pseudorandomness and Applications, Vol. 23, Radon Series on Computational and Applied Mathematics, De Gruyter, Berlin (2019).
[10] G. L. Mullen and T. P. Vaughan. Cycles of linear permutations over a finite field. Linear Algebra Appl. 108: 63-82 (1988).
[11] D. Panario and L. Reis. The functional graph of linear maps over finite fields and applications. Des. Codes Cryptogr. 87(2): 437-453 (2019).
[12] A. Peinado, F. Montoya, J. Munoz and A. J. Yuste. Maximal periods of x^2 + c in F_q. In: International Symposium on Applied Algebra, Algebraic Algorithms and Error-Correcting Codes, Springer, pp. 219-228 (2001).
[13] J. M. Pollard. A Monte Carlo method for factorization. BIT 15(3): 331-334 (1975).
[14] C. Qureshi and D. Panario. Rédei actions on finite fields and multiplication map in cyclic groups. SIAM J. Discr. Math. 29: 1486-1503 (2015).
[15] C. Qureshi and D. Panario. The graph structure of Chebyshev polynomials over finite fields and applications. Des. Codes Cryptogr. 87(2): 393-416 (2019).
[16] C. Qureshi, D. Panario and R. Martins. Cycle structure of iterating Rédei functions. Adv. Math. Comm. 11(2): 397-407 (2017).
[17] C. Qureshi and L. Reis. Dynamics of the a-map over residually finite Dedekind domains. J. Num. Theory 204: 134-154 (2019).
[18] L. Reis. Moebius-like maps on irreducible polynomials and rational transformations. J. Pure Appl. Algebra 224(1): 169-180 (2020).
[19] T. Rogers. The graph of the square mapping on the prime fields. Discr. Math. 144: 317-324 (1996).
[20] R. A. H. Toledo. Linear finite dynamical systems. Commun. Algebra 33(9): 2977-2989 (2005).
[21] S. Ugolini. Functional graphs of rational maps induced by endomorphisms of ordinary elliptic curves over finite fields. Periodica Math. Hungarica 77(2): 237-260 (2018).
[22] T. Vasiga and J. Shallit. On the iteration of certain quadratic maps over GF(p). Discr. Math. 277: 219-240 (2004).
[23] M. Wiener and R. Zuccherato. Faster attacks on elliptic curve cryptosystems. In: International Workshop on Selected Areas in Cryptography, Springer, Berlin, Heidelberg, pp. 190-200 (1998).

Departamento de Matemática, Universidade Federal de Minas Gerais, Belo Horizonte, MG, 30270-901, Brazil. Email address: lucasreismat@gmail.com
| [] |
[
"PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics",
"PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics"
] | [
"Jordan Meadows jordan.meadows@postgrad.manchester.ac.uk \nDepartment of Computer Science\nUniversity of Manchester\n\n\nIdiap Research Institute\nSwitzerland\n",
"Zili Zhou zili.zhou@manchester.ac.uk \nDepartment of Computer Science\nUniversity of Manchester\n\n",
"André Freitas andre.freitas@manchester.ac.uk \nDepartment of Computer Science\nUniversity of Manchester\n\n\nIdiap Research Institute\nSwitzerland\n"
] | [
"Department of Computer Science\nUniversity of Manchester\n",
"Idiap Research Institute\nSwitzerland",
"Department of Computer Science\nUniversity of Manchester\n",
"Department of Computer Science\nUniversity of Manchester\n",
"Idiap Research Institute\nSwitzerland"
] | [
"Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)"
] | In order for language models to aid physics research, they must first encode representations of mathematical and natural language discourse which lead to coherent explanations, with correct ordering and relevance of statements. We present a collection of datasets developed to evaluate the performance of language models in this regard, which measure capabilities with respect to sentence ordering, position, section prediction, and discourse coherence. Analysis of the data reveals equations and sub-disciplines which are most common in physics discourse, as well as the sentence-level frequency of equations and expressions. We present baselines that demonstrate how contemporary language models are challenged by coherence related tasks in physics, even when trained on mathematical natural language objectives. | null | [
"https://www.aclanthology.org/2022.lrec-1.492.pdf"
] | 245,877,546 | 2201.04275 | 8b635b47ab7e89bb8c7c56b561535cbd619f2e17 |
PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics
June 2022
Jordan Meadows jordan.meadows@postgrad.manchester.ac.uk
Department of Computer Science
University of Manchester
Idiap Research Institute
Switzerland
Zili Zhou zili.zhou@manchester.ac.uk
Department of Computer Science
University of Manchester
André Freitas andre.freitas@manchester.ac.uk
Department of Computer Science
University of Manchester
Idiap Research Institute
Switzerland
PhysNLU: A Language Resource for Evaluating Natural Language Understanding and Explanation Coherence in Physics
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
The 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. Language Resources Association (ELRA), licensed under CC-BY-NC-4.0. Keywords: mathematical text, physics, natural language understanding, discourse coherence.
In order for language models to aid physics research, they must first encode representations of mathematical and natural language discourse which lead to coherent explanations, with correct ordering and relevance of statements. We present a collection of datasets developed to evaluate the performance of language models in this regard, which measure capabilities with respect to sentence ordering, position, section prediction, and discourse coherence. Analysis of the data reveals equations and sub-disciplines which are most common in physics discourse, as well as the sentence-level frequency of equations and expressions. We present baselines that demonstrate how contemporary language models are challenged by coherence related tasks in physics, even when trained on mathematical natural language objectives.
Introduction
Physics literature is a form of mathematical language which is unique beyond simply domain vocabulary. How physicists use mathematics to reason and explain separates their field fundamentally from other disciplines, including mathematics. Many of its sub-disciplines are situated between pure mathematics and engineering, while others conjoin computer science and biology, with physical methods acting as a well-travelled bridge between the formal and natural sciences. It has not been proven, for example, that smooth solutions (Pizzocchero, 2021; Gala et al., 2021; Miller, 2021) always exist for the Navier-Stokes equations (a millennium problem) despite their widespread use in simulating and engineering fluid dynamics, while biophysics demonstrates that fundamental problems in ecology and evolution can be characterized by computational complexity classes (Ibsen-Jensen et al., 2015). Physics discourse serves as a universal mechanism for generating empirically falsifiable quantitative theory in the natural sciences and engineering (Smith and Fleck, 2017; Coffey and Kalmykov, 2012), separate from both pure mathematics and any downstream field. Its core traits are reflected in unique literary devices, and its mathematical explanations will differ from those in the formal sciences as a result. A concrete example is the physics derivation: a core explanatory or argumentative device less rigorous and more informal than mathematical proofs (Meadows and Freitas, 2021; Davis, 2019; Kaliszyk et al., 2015), which generally results in predictive equations relating physical quantities, rather than generating a truth value for a given conjecture (e.g., twin primes). Such equations are central components of physics descriptions, with natural language forming around them and their elements, and their relation to other equations through derivations. Mathematics as a whole, particularly logic, is less concerned with this predictive modelling of real-world systems, let alone when such systems are quantum or relativistic, or both. Suggesting that mathematicians work at a level of abstraction higher than that of physicists (i.e., proof frameworks compared to specific derivations), Feynman famously states that "Mathematicians are only dealing with the structure of reasoning...". Within the unique sphere of physics literature, we introduce a suite of datasets which together gauge a model's proficiency in recognising whether or not a physics-related explanation is coherent. In parallel with tasks inspired by DiscoEval, we aim to "evaluate the discourse-related knowledge captured by pretrained sentence representations" in the physics domain. From the proposed data we show that modern pretrained language models are challenged by these tasks even after fine-tuning, in particular demonstrating that a recent language model (Peng et al., 2021) with state-of-the-art performance in mathematical information retrieval, formula topic classification, and formula headline generation tasks, trained with equation-context pairs extracted from arXiv on math-aware pretraining objectives, is outperformed by even vanilla BERT-Base and all popular non-mathematical language models considered in this work. We contribute the following:
1. We introduce PhysNLU: a collection of 4 core datasets related to sentence classification, ordering, and coherence of physics explanations, based on related tasks (Chen et al., 2019). Each dataset comprises explanations extracted from Wikipedia, including derivations and mathematical language. We additionally present 2 parent datasets extracted from 6.3k articles related to physics, in both raw Wikipedia data and in a form that mimics WikiText-103 (Merity et al., 2016), which is a popular dataset used in related work (Iter et al., 2020). PhysNLU is available online 1.
2. We provide analysis of linguistic features of physics text, including insights such as the sentence- and example-level distribution of mathematical content across the datasets, the frequency with which explained concepts relate to physics sub-domains, and the most frequent equations in the discourse.
3. We demonstrate how the state-of-the-art does not exhibit proficient inference capabilities with respect to tasks concerning order, coherence, relative position, and classification of sentence-level physics explanations, even when approaches have been designed explicitly for mathematical language, through baselines extracted from experiments involving a selection of pretrained language models.
Task Description
The tasks considered in this work probe model proficiency across 4 categories, originally designed for general language, but here employed specifically for physics discourse containing mathematics. Binary Sentence Ordering tests the ability of a model to recognise order at the shortest possible scale, between two sentences. Sentence Position tests this order and position recognition at a larger scale, closer to that of full paragraphs. Discourse Coherence tests whether a model can determine whether a sequence of statements in an explanation are continuous and relevant. Sentence Section Prediction tests how well a model can link individual sentences to a specific section of an explanation. Together, in our context, they evaluate the discourse-related knowledge captured by pretrained sentence representations, and physics explanation coherence with respect to order and sentence relevance. We now describe our method for data collection for each of the 6 datasets, including the 4 directly used in the forthcoming experiments for each task as described in Figure 2, while an overview of our contributions are displayed in Figure 1.
Dataset Collection
PhysNLU-WikiRaw
Starting from an English Wikipedia XML dump, we select articles that mention "physics" and contain at least one equation defined with a <math> tag. After cleaning articles so that they contain mostly mathematical natural language, and removing those which are predominantly tables, we obtain a dataset containing 6.3k articles. We include article titles and corresponding raw unedited text, as well as Wikipedia article categories.
PhysNLU-WikiText
This data mimics WikiText-103 (Merity et al., 2016) which is used for the approach introducing the CONPONO objective (Iter et al., 2020) during a preprocessing stage 2 . Among other similarities, we opt to nest section titles within equals signs (e.g. " = Title = ", " = = Section = = ") 2 https://github.com/google-research/ language/blob/master/language/conpono/ create_pretrain_data/wiki_preproc_pipeline. py and omit reference and "see more" sections. The major linguistic differences between WikiText-103 and PhysNLU-WikiText are the inclusion of mathematical content as well as structures which may contain mathematical expressions such as tables, which are infrequent. This core dataset is taken as the starting point from which to derive the other datasets. We then extract 516k unique sentences for use in the following datasets, where sentences are determined by splits on full stops which, similarly to commas, are separated by a space from words (e.g. "end of sentence ."). We correct for issues with names (e.g. J. J. Hopfield) and abbreviations, and some instances where full stops should be present but are omitted.
PhysNLU-BSO (Binary Sentence Ordering)
We take all pairs of consecutive sentences from sections, where selected pairs overlap. Each pair has a 50% chance that the pair order is swapped and we include a label to denote whether a swap (1) has occurred or not (0), suitable to be framed as binary classification. The BSO dataset contains 459k examples.
PhysNLU-SP (Sentence Position)
We take the first 5 sentences from each applicable section, select a sentence at random and move it to the first position (shifting the others down). The original position of the moved sentence is the label corresponding to each set of 5 sentences, suitable for multiclass classification. The SP dataset contains 40k examples.
PhysNLU-DC (Discourse Coherence)
The first 6 sentences from each applicable section are selected; then, with 50% probability, a sentence between positions 2 and 5 inclusive is swapped with a sentence from another article chosen at random. Whether a swap has occurred or not is included as a label for each example for binary classification, and the DC dataset contains 35k examples.
PhysNLU-SSP (Sentence Section Prediction)
All sentences from the introduction sections of each article are selected and an equal number of sentences are extracted from elsewhere at random from the corpus. Introduction sentences are associated with a label (1) while nonintroductory sentences are associated with a separate label (0) for binary classification. The SSP dataset contains 90k examples.
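A minimal sketch of how such examples can be assembled from section-level sentence lists is shown below (our own reconstruction of the construction described above; function names, random seed handling, and edge-case policies are assumptions, and the released PhysNLU code may differ in detail).

```python
import random

def make_bso(section):
    """PhysNLU-BSO style: overlapping consecutive pairs, swapped with prob. 0.5."""
    out = []
    for s1, s2 in zip(section, section[1:]):
        if random.random() < 0.5:
            out.append((s2, s1, 1))   # 1 = order was swapped
        else:
            out.append((s1, s2, 0))
    return out

def make_sp(section):
    """PhysNLU-SP style: move one of the first five sentences to the front;
    the label records the moved sentence's original position (1-5)."""
    if len(section) < 5:
        return None
    sents = section[:5]
    i = random.randrange(5)
    return [sents[i]] + sents[:i] + sents[i + 1:], i + 1

def make_dc(section, corpus_sentences):
    """PhysNLU-DC style: with prob. 0.5, replace one of sentences 2-5 of the
    first six with a sentence drawn from another article."""
    if len(section) < 6:
        return None
    sents, label = section[:6], 0
    if random.random() < 0.5:
        sents[random.randrange(1, 5)] = random.choice(corpus_sentences)
        label = 1
    return sents, label

def make_ssp(intro_sentences, other_sentences):
    """PhysNLU-SSP style: introduction sentences labelled 1, others 0."""
    return [(s, 1) for s in intro_sentences] + [(s, 0) for s in other_sentences]
```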
Dataset Statistics
We now analyse our data with a focus on equations and mathematical natural language. Table 1 shows an overview of notable features, such as the proportion of examples in each dataset which contain mathematical expressions, or specifically equations. Figure 3 describes the proportion of sentences which contain at least n mathematical elements for n ∈ [1, 6], where an element is identified via 3 separate tags: <math>, {math|, and {mvar|. The lighter bars correspond to all sentences present in the evaluation data, while the darker bars correspond to sentences in the SSP dataset, which contain proportionally less math. Introductory sentences make up half of the data for SSP, and they usually do not contain mathematical language or equations, which accounts for this gap. The proportion of math in sentences from the BSO, SP, and DC datasets is practically equivalent to the overall proportion. Figure 4 shows the relative frequency and proportion of the top 8 Wikipedia categories associated with each article. A single article can correspond to a large number of categories, out of 12.5k categories in our case. Notably, fields related to quantum mechanics are by far the most frequent: 10% of the data corresponds to either "Quantum mechanics", "Quantum field theory", or "Condensed matter physics". Figure 5 displays how often specific equations are present in the corpora. One might be tempted to claim that this demonstrates how physicists tend to argue and explain using initial conditions with respect to time (t = 0), displacement (x = 0, r = 0, z = 0), and angle (θ = 0); however, this exact string matching is biased towards simple equations. As the complexity of equations increases to include multiple terms, and as many terms are equivalent in meaning but different in notation, there will be multiple equations in the data which correspond to the same physics. A more accurate way to assess this would involve classifying groups of equations with a good math retrieval model (Peng et al., 2021) and counting group frequency. This analysis does offer insight for simple equations, however. For example, it reflects the convention that people prefer to start counting from n = 1 in physics, which occurs more frequently than n = 0; that the famous E = mc^2 is more prolific than the similarly famous F = ma; and that the most frequently discussed Maxwell equation is ∇ · B = 0. Figure 6 shows the proportion of examples from each evaluation dataset which contain at least n counts of either a <math> equation or non-equational math, for n ∈ [1, 6].
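The sentence-level counts behind Figures 3 and 6 can be reproduced with simple pattern matching over the three Wikipedia markers; a sketch is given below (our own implementation of the counting described above; the regular expressions are assumptions about the markup and may need adjusting for nested templates).

```python
import re

MATH_SPAN = re.compile(r"<math>.*?</math>", re.S)
TEMPLATE = re.compile(r"\{(?:math|mvar)\|")

def math_counts(sentence):
    """Counts of <math> spans (split into those containing '=' and those
    without) plus {math| / {mvar| template occurrences."""
    spans = MATH_SPAN.findall(sentence)
    eqs = sum('=' in s for s in spans)
    return {"math_eq": eqs,
            "math_no_eq": len(spans) - eqs,
            "templates": len(TEMPLATE.findall(sentence))}

print(math_counts("Here <math>E = mc^2</math> and {mvar|c} is the speed of light ."))
# {'math_eq': 1, 'math_no_eq': 0, 'templates': 1}
```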
Results
We evaluate models on 4 tasks from the DiscoEval suite. We remove the PDTB- and RST-related tasks due to the lack of a linguistic framework for describing discourse relations in the physics context. Table 1 gives additional information regarding the data used for each task.
Evaluation Tasks
DiscoEval is "designed to evaluate discourse-related knowledge in pretrained sentence representations". We sentences at a time, moving a random sentence to the first position, then predicting the correct position of the first sentence. We take the first 5 sentences from every paragraph in our data. Classifiers are trained by encoding the 5 sentences to vector representations x i , then vectors x 1 − x i are concatenated to x 1 for i ∈ [2, 5] as input to the classifier as:
[x 1 , x 1 − x 2 , x 1 − x 3 , x 1 − x 4 , x 1 − x 5 ].
Binary sentence ordering (BSO) involves taking pairs of contiguous sentences from a paragraph, swapping the order 50% of the time and predicting if a swap has occurred. A classifier is trained by concatenating x 1 and x 2 with their element-wise difference as: [x 1 , x 2 , x 1 − x 2 ]. Discourse coherence (DC) involves taking 6 consecutive sentences, replacing a sentence from positions 2-5 inclusive with a sentence from a random article with 50% frequency and predicting if a swap has occurred. We take the first 6 sentences from each paragraph for this task.
Each vector x i is concatenated for input to the classifier as: [x 1 , x 2 , x 3 , x 4 , x 5 , x 6 ]. Sentence section prediction (SSP) involves sampling a sentence from either the abstract of a scientific article or elsewhere with equal probability, and predicting if the sentence belongs to the abstract. In our case, we sample from article introductions as we do not have abstracts. The original task involves the omission of equations which increases the difficulty of the task, but due to the nature of our problem space we leave them in. The classifier input is just the vector representation x 1 .
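The probe inputs above are straightforward to assemble once sentences are embedded; the sketch below (our own illustration; the choice of [CLS] pooling and the model checkpoint are assumptions of this example, since pooling details are not pinned down in the text) builds the four feature vectors from a frozen encoder.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentences):
    """One [CLS] vector per sentence, from a frozen encoder."""
    batch = tok(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return enc(**batch).last_hidden_state[:, 0]

def sp_features(x):    # x: (5, d) -> [x1, x1-x2, x1-x3, x1-x4, x1-x5]
    return torch.cat([x[0]] + [x[0] - x[i] for i in range(1, 5)], dim=-1)

def bso_features(x):   # x: (2, d) -> [x1, x2, x1-x2]
    return torch.cat([x[0], x[1], x[0] - x[1]], dim=-1)

def dc_features(x):    # x: (6, d) -> [x1, ..., x6]
    return x.reshape(-1)

def ssp_features(x):   # x: (1, d) -> x1
    return x[0]
```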
Baselines
We include 7 baseline transformer-architecture models (Vaswani et al., 2017) in this study: BERT-base-uncased, BERT-large-uncased (Devlin et al., 2019), RoBERTa-base (Liu et al., 2019), MathBERT (Peng et al., 2021), MegatronBERT (Shoeybi et al., 2019), CONPONO (K=2), and CONPONO (K=4) (Iter et al., 2020). BERT-base-uncased and BERT-large-uncased are two BERT models pretrained on large-scale common-domain text corpora. With 12 and 24 encoder layers respectively, these models achieved state-of-the-art performance on several NLP tasks such as sentence classification, next sentence prediction, and token classification. BERT-large-uncased has a larger parameter size than BERT-base-uncased; the smaller model outperforms the larger in our case. RoBERTa-base has the same architecture as BERT but is pretrained with more data, different hyperparameters, and only full-length sentences, showing that the BERT models were undertrained. The MathBERT model uses the BERT architecture, but equation-context pairs are jointly considered through multiple pretraining objectives, which also rely on extracted tree tuples (Davila et al., 2016; Davila and Zanibbi, 2017).
The MegatronBERT model improves the architecture of the original BERT models to enable model deployment across distributed GPU environments, while simultaneously improving the accuracy of the model. The CONPONO models use the encoder architecture (and same data) from BERT to encode text segments, but are pretrained instead with the CONPONO objective together with Masked Language Modelling (MLM). The purpose is to let the models learn discourse relationships between sentences with respect to order and distance, while using negatives to increase sentence representation quality. We use 2 versions of CONPONO as baselines in this paper: K=2 means considering a maximum of 2 sentences before or after the anchor segment during pretraining, and K=4 means considering a maximum of 4 sentences before or after. 70-30 split testing and 5-fold split testing are conducted for each baseline. We list the results in Tables 2-5 and observe the following:
• MathBERT is outperformed by all other models in each task, including base and large vanilla BERT. Given the SOTA performance of the model on 3 tasks related to mathematical text (Peng et al., 2021), and MathBERT being the only model to train on equationcontext pairs, this is a surprising result.
• BERT-base-uncased model outperforms BERT-largeuncased in each task.
• Comparing CONPONO K=2 and CONPONO K=4, the performances are similar in each task, with K=2 being marginally better.
• MegatronBERT outperforms all models in all tasks except for discourse coherence (DC), but its parameter size is larger than most of other baselines including BERT-based-uncased and CONPONO models.
• CONPONO K=2 model outperforms BERT-baseuncased, RoBERTa-base, and MathBERT on SP and BSO tasks.
• All baselines perform poorly on the DC task.
• All baselines perform relatively well on the SSP task, but the accuracy performances are close, with no massive margin even between all-around worst performer MathBERT and generally good MegatronBERT.
Related Work
DiscoEval is a suite of evaluation tasks with the purpose of determining whether sentence representations include information about the role of a sentence in its discourse context. They build sentence encoders capable of modelling discourse information via training objectives that make use of natural annotations from Wikipedia, such as nesting level, section and article titles, among others. Other core work (Iter et al., 2020) involves pretraining on both MLM and a contrastive inter-sentence objective (CONPONO), where they achieve state-of-the-art benchmarks for five of seven tasks in the DiscoEval suite, outperforming BERT-Large despite equalling the size of BERT-Base and training on the same amount of data. BERT-Base pretrained additionally on BSO in place of CONPONO, and BERT-Large, claim the remaining two benchmarks. Using the full encoder-decoder transformer (Vaswani et al., 2017) architecture and an additional masked attention map which incorporates relationships between nodes in operator trees (OPTs) of equations (Davila et al., 2016; Davila and Zanibbi, 2017), the MathBERT approach (Peng et al., 2021) pretrains with three objectives on arXiv data, each extracting a specific latent aspect of information. MLM learns text representations, context correspondence prediction learns the latent relationship between formula and context, and masked substructure prediction learns semantic-level structure of formulas by predicting parent and child nodes in OPTs. MathBERT is fine-tuned and evaluated on mathematical information retrieval, formula topic classification, and formula headline generation, and outperforms previous approaches in each task. They train on 8.7 million equation-context pairs extracted from arXiv. A data extraction pipeline (Ferreira and Freitas, 2020) collects 20k entries related to mathematical proofs from the ProofWiki website 3, such as definitions, lemmas, corollaries, and theorems. They evaluate BERT and SciBERT by fine-tuning on a pairwise relevance classification task with their NL-PS dataset, where they classify if one mathematical text is related to another. As we highlight in the Introduction, physics and mathematics literature differ in their overarching considerations; more specifically, the unstructured informal physics Wikipedia explanations that we present in our data naturally differ from the structured proofs present in NL-PS. Their work builds on previous efforts applying NLP to general mathematics. One early approach (Zinn, 2003) proposes proof representation structures via discourse representation theory, including a prototype for generating formal proofs from informal mathematical discourse. Another approach (Cramer et al., 2009) focuses on the development of a controlled natural language for mathematical texts which is compatible with existing proof verification software. Since these early developments, natural language-based problem solving and theorem proving have progressed significantly, but are still below human-level performance. For example, efforts towards building datasets for evaluating math word problem solvers (Huang et al., 2016) conclude the task is a significant challenge, with more recent large-scale dataset construction and evaluation work (Amini et al., 2019; Miao et al., 2021) confirming that model performance is still well below the gold standard. This difficulty extends to approaches involving pre-university math problems and geometric quantities (Matsuzaki et al., 2017; Lu et al., 2021).
3 https://proofwiki.org/wiki/MainPage
For automated theorem proving and mathematical reasoning, various datasets and accompanying approaches have been proposed (Kaliszyk et al., 2017; Bansal et al., 2019), including more recent work with equational logic (Piepenbrock et al., 2021) and language models (Rabe et al., 2020; Han et al., 2021). A dataset construction approach and accompanying heuristic search for automating small physics derivations has been developed (Meadows and Freitas, 2021), which allows published results in modern physics to be converted into a form interpretable by a computer algebra system (Meurer et al., 2017), which then accommodates limited informal mathematical exploration. Detailed physics derivation data is scarce, and others have tackled such issues via synthetic data (Aygün et al., 2020), though not yet in physics. Reinforcement learning has been employed (Luo and Liu, 2018) to solve differential equations in nuclear physics with a template mapping method, and proof discovery and verification has been explored in relativity (Govindarajalulu et al., 2015). Theorem proving and derivation automation in physics remains elusive, with detailed discussions available in the literature (Kaliszyk et al., 2015; Davis, 2019). With language models demonstrating logical capabilities with respect to type inference, missing assumption suggestion, and completing equalities (Rabe et al., 2020), as well as state-of-the-art performance in math retrieval and tasks related to equation-context correspondence (Peng et al., 2021), we believe our present work will contribute towards physics natural language / equational reasoners capable of generating coherent mathematical explanations and derivations.
Conclusion
Within the domain of physics, we present 2 parent datasets for general use and 4 specific datasets corresponding to discourse evaluation tasks (Chen et al., 2019), collectively referred to as PhysNLU. The presented data frequently features equations, formulae, and mathematical language. Our analysis reveals that concepts related to quantum mechanics are most commonly discussed, as determined by Wikipedia article category; that equations related to initial and boundary conditions are the most frequently considered under near-exact string matching; and we report the proportion of sentences and dataset examples which contain equations and mathematical terms identified by annotation frameworks native to Wikipedia. Finally, we present baseline results for popular non-mathematical language models and demonstrate that, despite expensive pretraining efforts and specialised training objectives for learning various aspects of mathematical text, such efforts do not improve the performance of language models in tasks related to sentence ordering, position, and recognising whether physics explanations are coherent. Future work will involve developing objectives which aid performance in this regard.
Figure 1: Workflow and overview.

Figure 2: Each numbered box is a physics statement. Explanation of how to obtain the differential form of Gauss' law via the divergence theorem, used to demonstrate how 4 evaluation tasks handle physics explanations (extracted from Wikipedia, including errors). (SP) Sentence position (top left) takes a random sentence from the description and moves it to position 1, then the model predicts the true position of the new first sentence, which in this case is 4. (DC) Discourse coherence (top right) randomly replaces a sentence with another from a different physics explanation, where the model predicts if the explanation is coherent. (BSO) Binary sentence ordering (bottom left) swaps two consecutive sentences, and the model predicts if the second entails the first. (SSP) Sentence section prediction takes a random sentence, and the model predicts if it belongs to an introduction or otherwise.

Figure 3: For 516k unique sentences used in the evaluation tasks, the proportion of sentences which contain at least n math elements is shown for n ∈ [1, 6]. The two <math> variants correspond to LaTeX text identified by the XML <math> tag which respectively do and do not contain an equality. {math| and {mvar| correspond to any mathematical text identified by each marker. The darker bars correspond to only sentences extracted from the SSP dataset.

Figure 4: Top 8 most frequent article categories out of 12.5k, excluding categories containing the phrase "Articles containing ...", where each article may correspond to multiple categories.

Figure 5: Top 8 most frequent <math> tagged equations by exact string matching after accounting for spaces, commas, and full stops.

Figure 6: The proportion of examples which contain at least n math elements is shown for n ∈ [1, 6], where examples are sourced from the DC, SP, BSO, and SSP datasets. The math is identified via the <math> tag, where lighter bars correspond to at least n math of any kind, while the darker bars correspond to the inclusion of only equations.

We briefly describe the 4 evaluation tasks from DiscoEval considered in our work, with examples shown in Figure 2. We use the same conventions (Chen et al., 2019) for representing concatenation of vectors [., ., ...]. Sentence position (SP) involves considering 5 consecutive sentences.

The models considered in this study are BERT-base-uncased and BERT-large-uncased (Devlin et al., 2019), RoBERTa-base (Liu et al., 2019), MathBERT (Peng et al., 2021), MegatronBERT (Shoeybi et al., 2019), CONPONO (K=2), and CONPONO (K=4) (Iter et al., 2020).
Table 4: Discourse Coherence. Columns give Acc, F1, AP, and ROC under a 70-30 split, then under k-fold cross-validation. Recoverable rows:

...           0.618  0.618  0.679  0.677 | 0.619±0.006  0.619±0.006  0.677±0.003  0.676±0.004
MegatronBERT  0.705  0.705  0.778  0.782 | 0.704±0.003  0.703±0.003  0.776±0.001  0.779±0.002
CONPONO K=2   0.687  0.686  0.761  0.756 | 0.685±0.004  0.684±0.004  0.746±0.002  0.754±0.003
CONPONO K=4   0.684  0.683  0.758  0.755 | 0.683±0.003  0.683±0.003  0.743±0.002  0.752±0.002

Table 5: Sentence Section Prediction. Columns: Acc, F1, AP, ROC under a 70-30 split and under k-fold cross-validation.
https://github.com/jmeadows17/PhysNLU
Acknowledgements

This work was partially funded by the SNSF project NeuMath (200021 204617).
Amini, A., Gabriel, S., Lin, P., Koncel-Kedziorski, R., Choi, Y., and Hajishirzi, H. (2019). MathQA: Towards interpretable math word problem solving with operation-based formalisms. arXiv preprint arXiv:1905.13319.
Aygün, E., Ahmed, Z., Anand, A., Firoiu, V., Glorot, X., Orseau, L., Precup, D., and Mourad, S. (2020). Learning to prove from synthetic theorems. arXiv preprint arXiv:2006.11259.
Bansal, K., Loos, S., Rabe, M., Szegedy, C., and Wilcox, S. (2019). HOList: An environment for machine learning of higher-order logic theorem proving. In International Conference on Machine Learning, pages 454-463. PMLR.
Chen, M., Chu, Z., and Gimpel, K. (2019). Evaluation benchmarks and learning criteria for discourse-aware sentence representations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 649-662, Hong Kong, China. Association for Computational Linguistics.
Coffey, W. and Kalmykov, Y. P. (2012). The Langevin Equation: With Applications to Stochastic Problems in Physics, Chemistry and Electrical Engineering, volume 27. World Scientific.
Cramer, M., Fisseni, B., Koepke, P., Kühlwein, D., Schröder, B., and Veldman, J. (2009). The Naproche project: Controlled natural language proof checking of mathematical texts. In International Workshop on Controlled Natural Language, pages 170-186. Springer.
Davila, K. and Zanibbi, R. (2017). Layout and semantics: Combining representations for mathematical formula search. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 1165-1168.
Davila, K., Zanibbi, R., Kane, A., and Tompa, F. W. (2016). Tangent-3 at the NTCIR-12 MathIR task. In NTCIR.
Davis, E. (2019). Proof verification technology and elementary physics. In Algorithms and Complexity in Mathematics, Epistemology, and Science, pages 81-132. Springer.
Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.
Ferreira, D. and Freitas, A. (2020). Natural language premise selection: Finding supporting statements for mathematical text.
Gala, S., Galakhov, E., Ragusa, M. A., and Salieva, O. (2021). Beale-Kato-Majda regularity criterion of smooth solutions for the Hall-MHD equations with zero viscosity. Bulletin of the Brazilian Mathematical Society, New Series, pages 1-13.
Govindarajalulu, N. S., Bringsjord, S., and Taylor, J. (2015). Proof verification and proof discovery for relativity. Synthese, 192(7):2077-2094.
Han, J. M., Rute, J., Wu, Y., Ayers, E. W., and Polu, S. (2021). Proof artifact co-training for theorem proving with language models. arXiv preprint arXiv:2102.06203.
Huang, D., Shi, S., Lin, C.-Y., Yin, J., and Ma, W.-Y. (2016). How well do computers solve math word problems? Large-scale dataset construction and evaluation. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 887-896.
Ibsen-Jensen, R., Chatterjee, K., and Nowak, M. A. (2015). Computational complexity of ecological and evolutionary spatial dynamics. Proceedings of the National Academy of Sciences, 112(51):15636-15641.
Iter, D., Guu, K., Lansing, L., and Jurafsky, D. (2020). Pretraining with contrastive sentence objectives improves discourse performance of language models.
Kaliszyk, C., Urban, J., Siddique, U., Khan-Afshar, S., Dunchev, C., and Tahar, S. (2015). Formalizing physics: Automation, presentation and foundation issues. In International Conference on Intelligent Computer Mathematics, pages 288-295. Springer.
Kaliszyk, C., Chollet, F., and Szegedy, C. (2017). HolStep: A machine learning dataset for higher-order logic theorem proving. arXiv preprint arXiv:1703.00426.
Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
Lu, P., Gong, R., Jiang, S., Qiu, L., Huang, S., Liang, X., and Zhu, S.-C. (2021). Theorem-aware geometry problem solving with symbolic reasoning and theorem prediction.
Luo, M. and Liu, L. (2018). Automatic derivation of formulas using reforcement learning. arXiv preprint arXiv:1808.04946.
Matsuzaki, T., Ito, T., Iwane, H., Anai, H., and Arai, N. H. (2017). Semantic parsing of pre-university math problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2131-2141.
Meadows, J. and Freitas, A. (2021). Similarity-based equational inference in physics. Physical Review Research, 3(4).
Merity, S., Xiong, C., Bradbury, J., and Socher, R. (2016). Pointer sentinel mixture models. arXiv preprint arXiv:1609.07843.
Meurer, A., Smith, C. P., Paprocki, M., Čertík, O., Kirpichev, S. B., Rocklin, M., Kumar, A., Ivanov, S., Moore, J. K., Singh, S., et al. (2017). SymPy: Symbolic computing in Python. PeerJ Computer Science, 3:e103.
Miao, S.-Y., Liang, C.-C., and Su, K.-Y. (2021). A diverse corpus for evaluating and developing English math word problem solvers. arXiv preprint arXiv:2106.15772.
Miller, E. (2021). A survey of geometric constraints on the blowup of solutions of the Navier-Stokes equation. Journal of Elliptic and Parabolic Equations, pages 1-11.
Peng, S., Yuan, K., Gao, L., and Tang, Z. (2021). MathBERT: A pre-trained model for mathematical formula understanding.
Piepenbrock, J., Heskes, T., Janota, M., and Urban, J. (2021). Learning equational theorem proving. arXiv preprint arXiv:2102.05547.
Pizzocchero, L. (2021). On the global stability of smooth solutions of the Navier-Stokes equations. Applied Mathematics Letters, 115:106970.
Rabe, M. N., Lee, D., Bansal, K., and Szegedy, C. (2020). Mathematical reasoning via self-supervised skip-tree training. arXiv preprint arXiv:2006.04757.
Shoeybi, M., Patwary, M. A., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. (2019). Megatron-LM: Training multi-billion parameter language models using model parallelism. ArXiv, abs/1909.08053.
Smith, R. W. and Fleck, C. (2017). Derivation and use of mathematical models in systems biology. In Pollen Tip Growth, pages 339-367. Springer.
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. (2017). Attention is all you need.
Zinn, C. (2003). A computational framework for understanding mathematical discourse. Logic Journal of the IGPL, 11(4):457-484.
| [
"https://github.com/google-research/",
"https://github.com/jmeadows17/PhysNLU"
] |
[
"Mixed Galileons and Spherically Symmetric Solutions",
"Mixed Galileons and Spherically Symmetric Solutions"
] | [
"L Berezhiani \nCenter for Particle Cosmology\nDepartment of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPennsylvaniaUSA\n",
"G Chkareuli \nCenter for Cosmology and Particle Physics\nDepartment of Physics\nNew York University\n10003New YorkNY\n",
"C De Rham \nDepartment of Physics\nCase Western Reserve University\nEuclid Ave44106ClevelandOH\n",
"G Gabadadze \nCenter for Cosmology and Particle Physics\nDepartment of Physics\nNew York University\n10003New YorkNY\n",
"A J Tolley \nDepartment of Physics\nCase Western Reserve University\nEuclid Ave44106ClevelandOH\n"
] | [
"Center for Particle Cosmology\nDepartment of Physics and Astronomy\nUniversity of Pennsylvania\n19104PhiladelphiaPennsylvaniaUSA",
"Center for Cosmology and Particle Physics\nDepartment of Physics\nNew York University\n10003New YorkNY",
"Department of Physics\nCase Western Reserve University\nEuclid Ave44106ClevelandOH",
"Center for Cosmology and Particle Physics\nDepartment of Physics\nNew York University\n10003New YorkNY",
"Department of Physics\nCase Western Reserve University\nEuclid Ave44106ClevelandOH"
] | [] | It was previously found that in a certain parameter subspace of scalar-tensor theories emerging from massive gravity, the only stable field configuration created by static spherically symmetric sources was one with cosmological asymptotics. Moreover, these backgrounds were shown to be sub-luminal everywhere in space, in contrast to the common belief that these theories are necessarily superluminal in the vicinity of a static source. In this work we complete that analysis by extending it to cover the whole parameter space of these scalar-tensor theories. We find that the stability argument renders the asymptotically flat backgrounds unrealizable, forcing cosmological asymptotics once again. In the case of pressureless sources these backgrounds are stable. However, they get destabilized in the presence of positive pressure larger than a critical density. Even on the self-accelerated background, on which the scalar mode decouples from sources, the scalar acquires an elliptic equation of motion in the region occupied by the source. Therefore, we conclude that the only parameter space which is not ruled out by solar system measurements is the one considered in Berezhiani et al. (arXiv:1302.0549), namely the one for which the scalar and tensor modes can be diagonalized via local transformations. We also reinvestigate the scale at which perturbation theory breaks down in a general Galileon theory. We show that the Vainshtein mechanism successfully redresses the strong coupling scale to a small one, just like in the cubic Galileon, despite the cancellations occurring in the special spherically symmetric case. We emphasize that even if these tests were performed at scales at which perturbation theory broke down, this could not be interpreted as a lower bound for the graviton mass.
"https://arxiv.org/pdf/1305.0271v1.pdf"
] | 118,591,682 | 1305.0271 | e9e1315de8986680d211d22023bc8d3a633d0000 |
Mixed Galileons and Spherically Symmetric Solutions
May 2013
L Berezhiani
Center for Particle Cosmology
Department of Physics and Astronomy
University of Pennsylvania
19104 Philadelphia, Pennsylvania, USA
G Chkareuli
Center for Cosmology and Particle Physics
Department of Physics
New York University
10003 New York, NY
C De Rham
Department of Physics
Case Western Reserve University
Euclid Ave, 44106 Cleveland, OH
G Gabadadze
Center for Cosmology and Particle Physics
Department of Physics
New York University
10003 New York, NY
A J Tolley
Department of Physics
Case Western Reserve University
Euclid Ave, 44106 Cleveland, OH
Mixed Galileons and Spherically Symmetric Solutions
May 2013
It was previously found that in a certain parameter subspace of scalar-tensor theories emerging from massive gravity, the only stable field configuration created by static spherically symmetric sources was one with cosmological asymptotics. Moreover, these backgrounds were shown to be sub-luminal everywhere in space, in contrast to the common belief that these theories are necessarily superluminal in the vicinity of a static source. In this work we complete that analysis by extending it to cover the whole parameter space of these scalar-tensor theories. We find that the stability argument renders the asymptotically flat backgrounds unrealizable, forcing cosmological asymptotics once again. In the case of pressureless sources these backgrounds are stable. However, they get destabilized in the presence of positive pressure larger than a critical density. Even on the self-accelerated background, on which the scalar mode decouples from sources, the scalar acquires an elliptic equation of motion in the region occupied by the source. Therefore, we conclude that the only parameter space which is not ruled out by solar system measurements is the one considered in Berezhiani et al. (arXiv:1302.0549), namely the one for which the scalar and tensor modes can be diagonalized via local transformations. We also reinvestigate the scale at which perturbation theory breaks down in a general Galileon theory. We show that the Vainshtein mechanism successfully redresses the strong coupling scale to a small one, just like in the cubic Galileon, despite the cancellations occurring in the special spherically symmetric case. We emphasize that even if these tests were performed at scales at which perturbation theory broke down, this could not be interpreted as a lower bound for the graviton mass.
Introduction and Summary
There exists a two-parameter family of theories of massive gravity that propagate five degrees of freedom in four dimensions [1,2]. In the decoupling limit, it gives rise to a fascinating class of scalar-tensor theories. There is a one-parameter sub-class of these theories for which the scalar mode can be completely decoupled from the tensor using the invertible field redefinition $h_{\mu\nu}\to \tilde h_{\mu\nu}+\pi\eta_{\mu\nu}+\alpha\,\partial_\mu\pi\partial_\nu\pi/\Lambda_3^3$ [1]. As a result of this de-mixing, the longitudinal mode of the graviton becomes described by the so-called quartic 'Galileon' model [3] (footnote 1), supplemented with a novel coupling to the matter stress tensor, $\partial_\mu\pi\partial_\nu\pi\, T^{\mu\nu}$, where $\pi$ denotes the helicity-0 mode of the graviton and $T^{\mu\nu}$ denotes the matter energy-momentum tensor [1]. After partial diagonalization, the Lagrangian describing the scalar sector reduces to

$$
\mathcal{L}_\pi = \frac{3}{2}\,\pi\Box\pi + \frac{3}{2}\frac{\alpha}{\Lambda_3^3}(\partial\pi)^2\Box\pi + \frac{1}{2}\frac{\alpha^2}{\Lambda_3^6}(\partial\pi)^2\left[(\partial\partial\pi)^2-(\Box\pi)^2\right] + \frac{1}{M_{\rm Pl}}\pi T + \frac{\alpha}{M_{\rm Pl}\Lambda_3^3}\,\partial_\mu\pi\partial_\nu\pi\, T^{\mu\nu}\,, \qquad (1)
$$

where $\Lambda_3\equiv (M_{\rm Pl}\, m^2)^{1/3}$ is the strong coupling scale and $\alpha$ is the free real parameter of this sub-class of theories. A thorough study of spherically symmetric configurations [7] (for related work on spherically symmetric solutions see [9]) showed that the stability of spherically symmetric configurations in the presence of a static source forces the free parameter $\alpha$ to be positive. Otherwise, the last term of (1) gives rise to a ghost-like kinetic term in high-density regions. Furthermore, it was shown in [7] that, for $\alpha>0$, the theory does not admit asymptotically flat classical solutions. Instead, the screened $\pi$ configuration at short distances (within the Vainshtein region [8]; see also [10] for a recent review of the Vainshtein mechanism) matches a cosmological background at large distances. We emphasize that this matching effect is not related to the last term of (1) and would be present even in its absence. In addition, it was explicitly shown that these backgrounds are subluminal, as opposed to the common belief that in massive gravity the configurations recovering GR at short distances necessarily exhibit superluminal propagation (footnote 2).
In the present work we complete the analysis of [7] by extending it to the full parameter space of the decoupling limit of massive gravity. In general, there is a nonlinear mixing term between the helicity-0 and helicity-2 modes which cannot be removed by a local field redefinition. This makes the analysis slightly more involved and gives rise to a qualitative change in the conclusions. Namely, for generic parameters of the theory, we show that asymptotically flat backgrounds created by static spherically symmetric sources exhibit a gradient instability. This should be contrasted with the case considered in [7], where asymptotically flat configurations were unstable as well, although the instability was ghost-like.

The presence of this gradient instability in the general case forces us to give up asymptotic flatness, and we instead focus on the self-accelerated background configuration. The latter has already been shown to be interesting for its unique property of decoupling the $\pi$ mode from matter (at leading order) [13,14], not to mention the obvious phenomenological significance of such backgrounds. The absence of a direct conformal coupling $\pi T$ on top of this self-accelerating solution implies that no Vainshtein mechanism or other screening effect need be invoked. Furthermore, after using Einstein's equations, this decoupling leads to the following contributions to the kinetic term of the scalar mode in the vicinity of the source:

$$
\mathcal{L}_\pi \supset -\left(\eta^{\mu\nu} - \frac{1}{M_{\rm Pl}\Lambda_3^3}\,T^{\mu\nu}\right)\partial_\mu\pi\,\partial_\nu\pi\,. \qquad (2)
$$

Examination of (2) shows that, in the case of a dust-like source, both contributions to the kinetic term $\dot\pi^2$ are healthy; hence the scalar mode is not a ghost. However, it is straightforward to see that in the case of positive pressure the second term of (2) gives rise to a negative contribution to the gradient energy. Moreover, if the pressure is larger than the critical density $\rho_c = M_{\rm Pl}\Lambda_3^3$, something which is common for astrophysical sources or indeed even for the atmosphere, this negative contribution overcomes the positive one, making the background unstable. This argument can be extended to other asymptotic cosmologies: as long as we recover General Relativity at short distances, the same instability will arise. Therefore, if the scalar-tensor theory considered here were to reproduce all the important effects of the full theory, we would conclude that the only phenomenologically viable theory of massive gravity is the one with $\beta=0$ and $\alpha>0$. However, the scalar-tensor theory may not necessarily capture all the important properties of massive gravity. In particular, the presence of a nonzero time-like component of the helicity-1 field ($A_0\neq0$) may give rise to a new class of spherically symmetric solutions not captured by the scalar-tensor sector.
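To make the instability criterion explicit, one can evaluate (2) on a static perfect-fluid source with $T^{\mu\nu}=\mathrm{diag}(\rho,p,p,p)$; the following one-line evaluation is our own added step, assuming the mostly-plus signature $\eta^{\mu\nu}=\mathrm{diag}(-1,+1,+1,+1)$ implicit in (2):

$$
\mathcal{L}_\pi \supset \left(1+\frac{\rho}{M_{\rm Pl}\Lambda_3^3}\right)\dot\pi^2 - \left(1-\frac{p}{M_{\rm Pl}\Lambda_3^3}\right)(\vec\nabla\pi)^2\,,
$$

so the time kinetic term is healthy for any $\rho>0$, while the gradient term flips sign precisely when $p>\rho_c=M_{\rm Pl}\Lambda_3^3$, reproducing the criterion stated above.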
In the second part of this manuscript, we investigate the strong coupling issue in theories such as massive gravity and Galileons. The fact that the Vainshtein mechanism relies on irrelevant operators (in the Wilsonian sense) becoming important raises the question of the control one has over perturbation theory. As previously investigated [4], over a background configuration for the scalar field, $\pi=\pi_0+\delta\pi$, the strong coupling scale gets redressed, symbolically $\Lambda_{\rm redressed}\sim\Lambda_3\,(\partial^2\pi_0/\Lambda_3^3)^a$, with a power $a$ depending on the exact model. In the cubic Galileon that arises in DGP, $a=1/2$, while in models with higher-order Galileon interactions one could have $a=2/3$. This redressing allows one to raise the strong coupling scale to $\sim(1\,{\rm cm})^{-1}$ in the cubic Galileon case; however, this scale is still low enough to wonder what happens at energy scales above it. Furthermore, in the quartic Galileon and some models of massive gravity, it has been argued that the redressing only raises the strong coupling scale to $\sim(0.4\,{\rm km})^{-1}$, making the theory fully nonperturbative at extremely low energy scales [16]. We show here that the difference between the cubic Galileon and the quartic Galileon/massive gravity is actually not as pronounced as previously found, and the redressed strong coupling scale in the quartic Galileon/massive gravity is actually closer to $\sim(30\,{\rm cm})^{-1}$, i.e. at most one order of magnitude below the cubic Galileon.

Putting this subtlety aside, one can raise the interesting question of what happens when reaching energy scales higher than the redressed strong coupling scale, where perturbation theory runs out of control. Following Vainshtein's original argument (rather than its specific realization within a Galileon theory), we do expect the effect of the graviton mass, and in particular the effect of the helicity-0 mode, to become smaller and smaller as one goes to higher and higher energies. We emphasize, however, that there is no sense in which one could use the breakdown of perturbation theory as a lower bound for the graviton mass, as has been done in the literature [16].

The paper is organized as follows. Section 2 is dedicated to the description of the framework. In section 3 we analyze the spherically symmetric configuration and study its stability, both for asymptotically flat backgrounds and for the self-accelerated one. We also discuss the stability for more general cosmologies. Finally, we reexamine the strong coupling issue in section 4, and open the discussion for a deeper understanding in section 5.

As we were finalizing this paper, it came to our attention that related work was being conducted in [12], which has some overlap with our work.
The Theory
The scalar-tensor sector of massive gravity, in the decoupling limit, is described by the following Lagrangian density

$$
\mathcal{L} = -\frac{1}{2}h^{\mu\nu}\mathcal{E}^{\alpha\beta}_{\mu\nu}h_{\alpha\beta} + h^{\mu\nu}X^{(1)}_{\mu\nu} + \frac{\alpha}{\Lambda_3^3}\,h^{\mu\nu}X^{(2)}_{\mu\nu} + \frac{\beta}{\Lambda_3^6}\,h^{\mu\nu}X^{(3)}_{\mu\nu} + \frac{1}{M_{\rm Pl}}h^{\mu\nu}T_{\mu\nu}\,, \qquad (3)
$$

where we have denoted the helicity-$\pm2$ and helicity-0 modes by $h_{\mu\nu}$ and $\pi$ respectively. The three identically conserved symmetric tensors $X^{(n)}_{\mu\nu}[\Pi]$ depend on second derivatives of the helicity-0 field, $\Pi_{\mu\nu}\equiv\partial_\mu\partial_\nu\pi$, in the following way:

$$
X^{(1)}_{\mu\nu} = -\frac{1}{2}\,\varepsilon_\mu{}^{\alpha\rho\sigma}\varepsilon_\nu{}^{\beta}{}_{\rho\sigma}\,\Pi_{\alpha\beta}\,,\qquad
X^{(2)}_{\mu\nu} = \frac{1}{2}\,\varepsilon_\mu{}^{\alpha\rho\gamma}\varepsilon_\nu{}^{\beta\sigma}{}_{\gamma}\,\Pi_{\alpha\beta}\Pi_{\rho\sigma}\,,\qquad
X^{(3)}_{\mu\nu} = \varepsilon_\mu{}^{\alpha\rho\gamma}\varepsilon_\nu{}^{\beta\sigma\delta}\,\Pi_{\alpha\beta}\Pi_{\rho\sigma}\Pi_{\gamma\delta}\,.
$$

The Lagrangian (3) is invariant under the linear diffeomorphisms $\delta h_{\mu\nu}=\partial_\mu\zeta_\nu+\partial_\nu\zeta_\mu$ up to a total derivative, while being exactly invariant under the global Galilean symmetry $\delta\pi=v_\mu x^\mu$.

Recently, it has been shown [15] that the decoupling limit of massive gravity exhibits the same non-renormalization properties as the Galileon theory [3,4]. Namely, the coefficients $\alpha$ and $\beta$ do not get radiatively corrected within the effective theory.

In [7], it was found that the $\beta=0$ parameter subspace of (3) does not possess stable asymptotically flat solutions in the presence of spherically symmetric static sources. Instead, the stable screened $\pi$ configuration inside the Vainshtein region is smoothly matched to cosmological backgrounds (with various equations of state) at large distances. In this work, we would like to perform a similar analysis for the rest of the parameter space (i.e. $\beta\neq0$), which is qualitatively different from its $\beta=0$ counterpart. In particular, in the present case the mixing between the different helicity states cannot be undone by means of a local field redefinition. This structure is so far characteristic of massive gravity [2] and has not been observed in other modifications of General Relativity.
Static Spherically-Symmetric Configurations
Throughout this work we consider a star-like source of finite size R and uniform density ρ. Without loss of generality, the static spherically symmetric configuration can be found by assuming the following ansatz for the metric perturbations around a Minkowski space-time in spherical coordinates
$$
h_{00} = a(r)\,;\qquad h_{ij} = f(r)\,\delta_{ij}\,. \qquad (4)
$$
The most general ansatz (up to Galilean transformations) for the helicity-0 mode, which leads to the static spherically symmetric metric (4) (see appendix A), is given by
$$
\pi = \frac{c}{2}\,\Lambda_3^3\, t^2 + \pi_0(r)\,. \qquad (5)
$$
After integrating Einstein's equations (i.e. the equations for $h_{\mu\nu}$) once and using vanishing initial conditions at the origin, we obtain

$$
r f' = -\frac{2M}{M_{\rm Pl}\,r} + \Lambda_3^3\, r^2\,\lambda\left(1-\alpha\lambda-2\beta\lambda^2\right)\,, \qquad (6)
$$
$$
r a' = -\frac{2M}{M_{\rm Pl}\,r} + \Lambda_3^3\, r^2\left(c-\lambda(1+2\alpha c)-6\beta c\lambda^2-2\beta\lambda^3\right)\,. \qquad (7)
$$
Following the same procedure for the longitudinal mode we arrive at the following equation
$$
3(1+2\alpha c)\lambda - 6(\alpha+\alpha^2 c-4\beta c)\lambda^2 + 2(\alpha^2-4\beta-20\alpha\beta c)\lambda^3 - 60\beta^2 c\,\lambda^4 - 12\beta^2\lambda^5 =
\begin{cases}
2\left(\dfrac{r_*}{r}\right)^3\left(1+2\alpha c+12\beta c\lambda+6\beta\lambda^2\right)+c & \text{outside the source}\,,\\[2ex]
2\left(\dfrac{r_*}{R}\right)^3\left(1+2\alpha c+12\beta c\lambda+6\beta\lambda^2\right)+c & \text{inside the source}\,,
\end{cases} \qquad (8)
$$
where we have defined
$$
\lambda \equiv \frac{\pi_0'}{\Lambda_3^3\, r}\,, \qquad r_* \equiv \left(\frac{M}{M_{\rm Pl}\Lambda_3^3}\right)^{1/3}\,, \qquad (9)
$$

$r_*$ being the Vainshtein radius and $\pi_0'\equiv\partial_r\pi_0(r)$. The classical backgrounds in the time-independent case ($c=0$) have been previously studied in [17,18]. It has been established that for generic parameters (8) possesses two types of solutions inside the Vainshtein region. In particular, since the factor $(r_*/r)^3$ is large at short distances, (8) requires either (i) $\lambda\gg1$ or (ii) a vanishing coefficient of $(r_*/r)^3$. Moreover, it has been shown that case (i) corresponds to a screened gravitational field at short distances, hence contradicting empirical data.
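To get a feel for the scales involved, the Vainshtein radius (9) can be evaluated numerically; the following sketch is our own estimate (not part of the original text), assuming a graviton mass of order the present Hubble scale, $m\sim H_0\sim10^{-33}\,$eV:

```python
# Vainshtein radius r_* = (M / (M_Pl * Lambda_3^3))^(1/3), eq. (9),
# with Lambda_3^3 = M_Pl * m^2 and m ~ H_0. Natural units (GeV).
GeV_to_m = 1.97e-16        # hbar*c in GeV*m
kg_to_GeV = 5.6e26
M_pl = 2.4e18              # reduced Planck mass [GeV]
m_g = 1.5e-42              # graviton mass ~ H_0 [GeV]
Lambda3_cubed = M_pl * m_g**2

for name, M_kg in [("Sun", 2.0e30), ("Earth", 6.0e24)]:
    M = M_kg * kg_to_GeV
    r_star = (M / (M_pl * Lambda3_cubed))**(1.0/3.0)   # [GeV^-1]
    print(f"{name}: r_* ~ {r_star*GeV_to_m:.1e} m "
          f"~ {r_star*GeV_to_m/3.1e16:.0f} pc")
```

For the Sun this gives $r_*$ of a few hundred parsecs, so all solar system tests sit deep inside the Vainshtein region, which is why the short-distance branch of (8) is the phenomenologically relevant one.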
In this work we study the stability of the general class of spherically symmetric configurations (5) created by static sources.
Asymptotic Flatness
As already mentioned, for generic $\alpha$ and $\beta$ the equation of motion for the longitudinal mode has two types of solutions at short distances, $r\ll r_*$. One of these solutions is

$$
\lambda = -\beta^{-1/3}\,\frac{r_*}{r}\,, \qquad (10)
$$

which corresponds to a screened gravitational field, i.e. a field for which the Newtonian $1/r$ behavior is absent at short distances, even for nontrivial $c$, as is easy to see from (6) and (7). We dismiss this solution for obvious reasons (footnote 3). The other solution corresponds to the case when the coefficient of $(r_*/r)^3$ in (8) vanishes on the background, that is

$$
1+2\alpha c+12\beta c\lambda+6\beta\lambda^2 = 0\,. \qquad (11)
$$

This evidently gives a solution with $\lambda={\rm const}$, hence recovering GR at short distances with high precision (see (6) and (7); the normal $1/r$ behavior of gravity precisely cancels at large distances). This serves as a motivation for studying the stability of the generic background with constant $\lambda$. Within the Vainshtein region, the leading contribution to the gradient energy, to second order in fluctuations, reads as follows:

$$
\mathcal{L}^{(2)} = -3\beta\,(c+\lambda)\left(\frac{r_*}{r}\right)^3\left[2\,(\partial_r\delta\pi)^2-(\partial_\Omega\delta\pi)^2\right]\,. \qquad (12)
$$

Notice the relative minus sign between the radial and angular terms (footnote 4). As a result, we deduce that the only way to avoid the gradient instability is to have $\lambda\simeq-c$ at short distances. This condition, combined with (11), leads to

$$
c = \frac{\alpha\pm\sqrt{\alpha^2+6\beta}}{6\beta}\,. \qquad (13)
$$
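The algebra behind (13) is a short substitution, which we spell out as an added step: setting $\lambda=-c$ in (11) gives

$$
1+2\alpha c-12\beta c^2+6\beta c^2 = 1+2\alpha c-6\beta c^2 = 0
\quad\Longrightarrow\quad
c=\frac{2\alpha\pm\sqrt{4\alpha^2+24\beta}}{12\beta}=\frac{\alpha\pm\sqrt{\alpha^2+6\beta}}{6\beta}\,,
$$

which is real only for $\beta\geq-\alpha^2/6$.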
If we further require asymptotic flatness, we are led to the following system of conditions
$$
\lambda_\infty\left(1-\alpha\lambda_\infty-2\beta\lambda_\infty^2\right) = 0\,, \qquad (14)
$$
$$
c-\lambda_\infty(1+2\alpha c)-6\beta c\lambda_\infty^2-2\beta\lambda_\infty^3 = 0\,, \qquad (15)
$$

where $\lambda_\infty$ denotes the value of $\lambda$ at spatial infinity. It is easy to show that these conditions possess a nontrivial solution only in the parameter subspace

$$
\beta = -\frac{\alpha^2}{8}\,. \qquad (16)
$$

However, according to [20], in this parameter subspace the flat vacuum with $\lambda=-c$ is infinitely strongly coupled. Hence, the analysis of this section leads us to the conclusion that the theory under consideration does not possess a stable asymptotically flat solution created by a static, spherically symmetric source.
Asymptotically de Sitter
Having failed to introduce a source on flat space, it is natural to ask what happens when we replace flat space by a self-induced de Sitter vacuum. We look for the de Sitter solution in static slicing, for which the ansatz adopted in the previous section is applicable here as well.

Therefore, we are looking for spherically symmetric solutions to (8) with de Sitter asymptotics. This means that at large distances the effective energy-momentum tensor, coming from the mass term of the graviton, must have the equation of state of a cosmological constant. As a result, using (6)-(8), we arrive at

$$
c = -\lambda(r\to\infty) = -\frac{-\alpha\pm\sqrt{\alpha^2+6\beta}}{6\beta}\,. \qquad (17)
$$

Then eq. (8) factorizes at arbitrary distances and takes the following form:

$$
(\lambda+c)\,P(\lambda) = 12\beta\left(\frac{r_*}{r}\right)^3(\lambda+c)^2\,, \qquad (18)
$$

where $P(\lambda)$ is a fourth-order polynomial which does not vanish at $\lambda=-c$, unless $\beta=0$, $\beta=-\alpha^2/6$ or $\beta=-\alpha^2/8$. The first possibility leads to a ghost-like instability of the de Sitter space [13], in the second case the helicity-0 mode loses its kinetic term, while the third one necessarily leads to an asymptotically flat background (which has already been discarded due to infinite strong coupling). This implies that the only solution for generic parameters (except the above-mentioned ones) is the one with the trivial (source-independent) $\pi$ profile around the source:

$$
\pi = -\frac{c}{2}\,\Lambda_3^3\, x_\mu x^\mu\,. \qquad (19)
$$
This may be traced to the fact that, according to [13], on de Sitter space the helicity-0 mode has no kinetic mixing with the helicity-$\pm2$ modes. Namely, the Lagrangian density recalculated on the self-accelerated background is given by

$$
\mathcal{L} = -\frac{1}{2}h^{\mu\nu}\mathcal{E}^{\alpha\beta}_{\mu\nu}h_{\alpha\beta} + \kappa(\alpha,\beta)\,\frac{H^2 M_{\rm Pl}}{\Lambda^3}\,\pi\Box\pi + \frac{\kappa(\alpha,\beta)}{\Lambda^3}\,h^{\mu\nu}X^{(2)}_{\mu\nu} - 3\beta\,\frac{H^2 M_{\rm Pl}}{\Lambda^6}\,(\partial\pi)^2\Box\pi + \frac{\beta}{\Lambda^6}\,h^{\mu\nu}X^{(3)}_{\mu\nu} + \frac{1}{M_{\rm Pl}}h^{\mu\nu}T_{\mu\nu}\,. \qquad (20)
$$

Here, $h_{\mu\nu}$ and $\pi$ denote the deviations from the de Sitter background for the tensor and scalar modes respectively. The curvature of the background is given by $H^2\propto m^2$, and $\kappa$ is some function of the parameters. The exact expressions for them are inessential for the current discussion; the interested reader is directed to [13]. Again, the only asymptotically decaying solution to the equations of motion following from (20) is $\pi=0$, while $h_{\mu\nu}$ satisfies the linearized Einstein field equations

$$
G^{(1)}_{\mu\nu} = \frac{1}{M_{\rm Pl}}\,T_{\mu\nu}\,. \qquad (21)
$$

The problem in the case of certain sources arises from the third term of (20), which by integration by parts can be rewritten in the following form:

$$
\frac{\kappa(\alpha,\beta)}{\Lambda^3}\,G^{(1)}_{\mu\nu}\,\partial^\mu\pi\,\partial^\nu\pi\,. \qquad (22)
$$

In the case of sources smaller than their Vainshtein radius, this term will overwhelm the kinetic term inside the source, precisely as in [7]. Hence $\pi$ becomes a ghost unless $\kappa>0$; surprisingly, this is exactly what we need in order to have a healthy kinetic term outside of the source as well (see (20)). Therefore, we conclude that the theory admits the presence of arbitrarily dense objects on de Sitter space. However, this is so only for pressureless sources; otherwise extra caution is in order. Namely, if the source has a pressure larger than the critical density, then the dominant contribution to the quadratic term comes entirely from (22). Moreover, in the case of positive pressure this contribution has the wrong sign (as follows from (21)) in front of the gradient term, leading to a gradient instability of the configuration. Unfortunately, most localized sources have sufficiently large pressure to realize this instability; even the atmosphere around the Earth has a pressure $10^{14}$ times larger than the critical density.
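The quoted factor of $10^{14}$ is easy to check; the following estimate is our own (not from the original text), taking $m\sim H_0$ so that $\rho_c = M_{\rm Pl}\Lambda_3^3 = m^2 M_{\rm Pl}^2$ is of order the present cosmological energy density:

```python
# Compare atmospheric pressure to the critical density rho_c = m^2 * M_Pl^2.
# Natural units (GeV); 1 GeV^4 ~ 2.09e37 J/m^3.
M_pl = 2.4e18                 # reduced Planck mass [GeV]
m_g = 1.5e-42                 # graviton mass ~ H_0 [GeV]
rho_c = m_g**2 * M_pl**2      # [GeV^4]

P_atm = 1.013e5 / 2.09e37     # 1 atm = 1.013e5 J/m^3, converted to GeV^4
print(f"rho_c ~ {rho_c:.1e} GeV^4, P_atm ~ {P_atm:.1e} GeV^4")
print(f"P_atm / rho_c ~ {P_atm/rho_c:.1e}")   # ~ 4e14, consistent with 10^14
```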
Other Cosmologies
We would like to start this section by pointing out that up to this point the situation is qualitatively similar to the $\beta=0$ case; see [7]. Namely, the presence of natural sources seems to destabilize asymptotically flat as well as asymptotically self-accelerated spaces (in the case of $\beta=0$ there was no stable self-accelerated space to begin with; see [13]). However, in that case the situation changed once we allowed for cosmological backgrounds with a nontrivial equation of state [7]. In this section we would like to explore this possibility for the $\beta\neq0$ parameter space.

First, we reiterate that the stability of the background forces upon us the condition $\lambda\simeq-c$ within the Vainshtein region (see eq. (12)). As a result, the leading contribution to the quadratic (in perturbations) Lagrangian vanishes. In order to find the next-order, non-vanishing result, we have to find the corrections to the background itself. The correction to $\lambda$ is parameterized as $\lambda=-c+\delta\lambda$, where, as is easy to see from (8), $\delta\lambda$ is of order $(r/r_*)^3$. The expression for the metric degrees of freedom at next-to-leading order is obtained by substituting $\lambda=-c$ into the right-hand side of (6) and (7), leading to a correction $\sim\Lambda_3^3 r^2$. In other words, the background at next-to-leading order has the following form:

$$
\lambda \simeq -c + \mathcal{O}\!\left(\left(\frac{r}{r_*}\right)^3\right)\,, \qquad (23)
$$
$$
f \simeq \frac{M}{M_{\rm Pl}\,r} + \frac{\Lambda_3^3\, r^2}{2}\,c\left(-1-\alpha c+2\beta c^2\right)\,, \qquad (24)
$$
$$
a \simeq \frac{M}{M_{\rm Pl}\,r} - \Lambda_3^3\, r^2\, c\left(-1-\alpha c+2\beta c^2\right)\,. \qquad (25)
$$
By simple analysis of these corrections we conclude that the leading contribution to kinetic terms of fluctuations comes from the background
$$
\lambda = -c\,, \qquad (26)
$$

while for $f$ and $a$ we should take into account the corrections present in (24) and (25). To this end, the background relevant for the quadratic terms is similar to the one for de Sitter space (see the previous subsection). The distinction is that in the case of de Sitter space the background (26) was exact, while in the current case it is merely the leading expression for the background. Despite this minor distinction, since (26) is the relevant piece of the background at short distances, the conclusions of the previous section apply here as well. Namely, the background will exhibit the instability either inside the source or right outside of it. It should also be mentioned that the mixing between the scalar and tensor modes of the graviton is irrelevant, since to leading order we have the pure $\pi$ kinetic term, while the $h\pi$ mixing appears only at sub-leading order.

We would like to conclude by noting that in the case of $\beta\ll\alpha$ there seems to be another branch of solutions at short distances. Namely, one could think of having $\beta$ so small that the right-hand side of (8) could be neglected at short distances; such small values of the parameter are technically natural, since $\beta$ does not get renormalized by quantum loops [15], so this would not represent a fine-tuning of the parameters. For simplicity, let us concentrate on $c=0$ and $\alpha=1$, as it is quite straightforward to generalize the argument. This branch of solutions requires $\lambda\gg1$, which under the assumption that the right-hand side is negligible gives

$$
2\lambda^3 \simeq 12\beta^2\lambda^5 \quad\Rightarrow\quad \lambda \simeq \pm\frac{1}{\sqrt{6}\,\beta}\,. \qquad (27)
$$

However, for this assumption about the right-hand side to be valid, we need to make sure that

$$
\frac{1}{\beta^2} \gg \left(\frac{r_*}{R}\right)^3\,. \qquad (28)
$$
Taking into account (6), (7) and the expressions given above, it is easy to see that this branch of solutions gives an unacceptably large deviation from GR. Hence, it is ruled out on phenomenological grounds.
Strong Coupling for general Galileons
In the previous section, we have shown that for massive gravity, stability of the spherically symmetric solutions imposes a non-flat, i.e. cosmological, asymptotic behaviour. This is however not necessarily the case for a more general Galileon theory where the coefficients between the different operators are relaxed and there is no mixing with the helicity-2 mode of the graviton. In this section we investigate the redressing of the strong coupling scale on spherically symmetric configurations and discuss the small departure from spherical symmetry. We start with a generic Galileon setup and compute the redressing of the strong coupling scale both in the cubic and the quartic Galileon. Within the Vainshtein regime, the stable solution in massive gravity covered by α > 0 is essentially a special case of the quartic Galileon (although the asymptotics outside the Vainshtein region are different).
Asymptotically Flat Spherically Symmetric Galileons
To start with, we consider a quartic Galileon theory in four dimensions:

$$
S = \int d^4x\,\left[-\frac{3}{2}\sum_{i=2}^{4}\frac{c_i}{\Lambda_3^{3(i-2)}}\,\mathcal{L}_i + \frac{\pi}{M_{\rm Pl}}\,T\right]\,, \qquad (29)
$$

where $T$ is the trace of the stress-energy tensor of any external sources and the Galileon Lagrangians are given by

$$
\mathcal{L}_2 = (\partial\pi)^2\,, \qquad (30)
$$
$$
\mathcal{L}_3 = (\partial\pi)^2\,\Box\pi\,, \qquad (31)
$$
$$
\mathcal{L}_4 = (\partial\pi)^2\left[(\Box\pi)^2-(\partial_\mu\partial_\nu\pi)^2\right]\,. \qquad (32)
$$

In massive gravity (with $\beta=0$), $c_2=1$, $c_3=-\alpha$ and $c_4=\alpha^2/3$, but we leave the $c_3$ and $c_4$ coefficients arbitrary for now and simply set $c_2=1$. We are mainly interested in the Vainshtein screening of this Galileon due to the Earth, so as a first approximation the background source may be considered spherically symmetric, $T=-M_\oplus\,\delta^{(3)}(r)+\delta T$, leading to a background configuration for the Galileon field which is also spherically symmetric:

$$
\pi(t,r) = \pi_0(r) + \frac{1}{\sqrt{3}}\,\phi(t,r)\,, \qquad (33)
$$
where the background field satisfies the simple algebraic equation,
$$
\frac{\pi_0'(r)}{r} + \frac{2c_3}{\Lambda_3^3}\left(\frac{\pi_0'(r)}{r}\right)^2 + \frac{2c_4}{\Lambda_3^6}\left(\frac{\pi_0'(r)}{r}\right)^3 = \frac{1}{12\pi}\,\frac{M_\oplus}{M_{\rm Pl}}\,\frac{1}{r^3}\,. \qquad (34)
$$

If the quartic Galileon is present, $c_4\neq0$, then close to the source (at small $r$) the last term on the left-hand side dominates and the background solution takes the form

$$
\pi_0'(r) = \Lambda_3^2\left(\frac{1}{12\pi c_4}\,\frac{M_\oplus}{M_{\rm Pl}}\right)^{1/3} + \mathcal{O}(\Lambda_3 r)\,, \qquad (35)
$$

while if $c_4=0$, the background field acquires a different profile,

$$
\pi_0'(r) = \Lambda_3^2\left(\frac{1}{12\pi c_3}\,\frac{M_\oplus}{M_{\rm Pl}}\,\frac{1}{\Lambda_3 r}\right)^{1/2} + \mathcal{O}(\Lambda_3 r)\,. \qquad (36)
$$
The perturbations around this background configuration evolve with the following kinetic matrix $Z^{\mu\nu}$:

$$
\mathcal{L}_\phi = -\frac{1}{2}\,Z^{\mu\nu}(r)\,\partial_\mu\phi\,\partial_\nu\phi + \cdots\,, \qquad (37)
$$

with

$$
Z^{rr}(r) = 1 + \frac{4c_3}{\Lambda_3^3}\,\frac{\pi_0'}{r} + \frac{6c_4}{\Lambda_3^6}\,\frac{\pi_0'^2}{r^2}\,, \qquad (38)
$$
$$
Z^{tt}(r) = \frac{1}{3r^2}\frac{d}{dr}\left[r^3\left(1+\frac{6c_3}{\Lambda_3^3}\,\frac{\pi_0'}{r}+\frac{18c_4}{\Lambda_3^6}\,\frac{\pi_0'^2}{r^2}\right)\right]\,, \qquad (39)
$$
$$
Z^{\Omega\Omega}(r) = \frac{1}{2r}\frac{d}{dr}\left[r^2\left(1+\frac{4c_3}{\Lambda_3^3}\,\frac{\pi_0'}{r}+\frac{6c_4}{\Lambda_3^6}\,\frac{\pi_0'^2}{r^2}\right)\right]\,. \qquad (40)
$$
When the quartic Galileon is present, one can see immediately that the leading contribution to the angular direction vanishes: $Z^{tt}\sim Z^{rr}\sim(\Lambda_3 r)^{-2}(M_\oplus/M_{\rm Pl})^{2/3}$ while $Z^{\Omega\Omega}\sim(\Lambda_3 r)^{-1}(M_\oplus/M_{\rm Pl})^{1/2}$. This leads to a few subtleties in the quartic Galileon but, as we will see later, the resulting redressed strong coupling scale is nevertheless barely affected by this subtlety. The reason for this is that the same cancellation responsible for the hierarchy $Z^{\Omega\Omega}\ll Z^{tt}$ is also responsible for cancelling the leading contribution to the operator that would naively arise at the lowest energy scale. As a result, the redressed strong coupling scale is larger than naively anticipated.
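This hierarchy can be exhibited numerically by solving the algebraic background equation (34) for $x\equiv\pi_0'/r$ and evaluating (38) and (40). The sketch below is our own illustration, in units $\Lambda_3=1$, with an arbitrary illustrative source strength:

```python
import numpy as np

# Solve eq. (34) for x = pi_0'(r)/r at each radius (units Lambda_3 = 1),
# then evaluate Z^rr from (38) and Z^OmegaOmega from (40) by finite differences.
c3, c4 = 1.0, 1.0/3.0            # massive-gravity values for alpha = 1
S = 1.0e12 / (12*np.pi)          # (1/12pi) * M/M_Pl -- illustrative value

def x_of_r(r):
    # real positive root of 2*c4*x^3 + 2*c3*x^2 + x - S/r^3 = 0
    roots = np.roots([2*c4, 2*c3, 1.0, -S/r**3])
    real = roots[np.abs(roots.imag) < 1e-6*np.abs(roots)].real
    return real[real > 0][0]

r = np.logspace(-2, 2, 400)                      # radii in units of 1/Lambda_3
x = np.array([x_of_r(ri) for ri in r])

Z_rr = 1 + 4*c3*x + 6*c4*x**2                    # eq. (38)
Z_OO = np.gradient(r**2 * Z_rr, r) / (2*r)       # eq. (40)

i = np.argmin(np.abs(r - 0.1))                   # deep inside the Vainshtein region
print(f"r = {r[i]:.2f}: Z_rr ~ {Z_rr[i]:.1e}, Z_OmegaOmega ~ {Z_OO[i]:.1e}")
```

Deep inside the Vainshtein region this prints $Z^{rr}$ several orders of magnitude above $Z^{\Omega\Omega}$, in line with the scalings quoted above.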
Cubic Galileon
Redressing from the Earth
We start by focusing on the cubic Galileon and set $c_3=1/3$ for simplicity. In that case

$$
Z^{tt} \sim Z^{rr} \sim Z^{\Omega\Omega} \sim \left(\frac{M_\oplus}{4\pi M_{\rm Pl}}\right)^{1/2}\frac{1}{(\Lambda_3 r)^{3/2}} \equiv Z_\oplus\,. \qquad (41)
$$

The canonically normalized field is then

$$
\hat\phi \sim \sqrt{Z_\oplus}\,\phi\,, \qquad (42)
$$

leading to the cubic interaction

$$
\mathcal{L}^{(3)}_\phi = \frac{1}{\Lambda_3^3}\,(\partial\phi)^2\Box\phi = \frac{1}{\Lambda_3^3 Z_\oplus^{3/2}}\,(\partial\hat\phi)^2\Box\hat\phi = \frac{1}{\Lambda_\oplus^3}\,(\partial\hat\phi)^2\Box\hat\phi\,. \qquad (43)
$$

So the redressed coupling scale due to the screening of the Earth is given by

$$
\Lambda_\oplus = \Lambda_3\sqrt{Z_\oplus} \sim \Lambda_3\left(\frac{M_\oplus}{4\pi M_{\rm Pl}}\right)^{1/4}\frac{1}{(\Lambda_3 r)^{3/4}}\,. \qquad (44)
$$

Taking the strong coupling scale to be that associated with an infrared-modified theory such as DGP or massive gravity, for which $\Lambda_3=(H_0^2 M_{\rm Pl})^{1/3}$ with $H_0\sim10^{-33}\,$eV, we have $\Lambda_3\sim(1120\,{\rm km})^{-1}$, and so at the surface of the Earth the redressed scale is (as previously found in [19])

$$
\Lambda_\oplus \sim 10^7\,\Lambda_3 \sim (4\,{\rm cm})^{-1}\,. \qquad (45)
$$
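Equation (45) follows directly from (41) and (44) with standard Earth values; the short numerical check below is our own addition:

```python
import math

# Reproduce Lambda_oplus ~ 1e7 Lambda_3 ~ (4 cm)^-1 from eqs. (41), (44).
M_ratio = 6.0e24 * 5.6e26 / 2.4e18     # M_earth / M_Pl ~ 1.4e33
L3_R = 6371.0 / 1120.0                 # Lambda_3 * r at the Earth's surface

Z_earth = math.sqrt(M_ratio/(4*math.pi)) / L3_R**1.5   # eq. (41)
boost = math.sqrt(Z_earth)                             # eq. (44)
print(f"Z_earth ~ {Z_earth:.1e}, Lambda_oplus ~ {boost:.1e} Lambda_3")
print(f"Lambda_oplus^-1 ~ {1120e5/boost:.1f} cm")      # ~ 4 cm
```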
Redressing from Nearby Sources
When testing Newton's law using a torsion balance at submillimeter scales, the experiment itself and nearby sources can further contribute to the screening of the Galileon field. For concreteness, let us consider a source of mass $M_e$ (which could represent the experiment itself or a nearby source) localized a distance $\rho_e$ from the core of the experiment. The coupling to this new source leads to a new field configuration $\pi_e(\rho)$ on top of $\pi_0(r)$, given by

$$
\pi_e'(\rho) = \Lambda_\oplus^2\left(\frac{1}{4\pi}\,\frac{M_e}{\sqrt{Z_\oplus}\,M_{\rm Pl}}\,\frac{1}{\Lambda_\oplus\rho}\right)^{1/2}\,, \qquad (46)
$$

where $\rho$ is the distance from the source $M_e$. Going through the same analysis presented previously, the redressing of the strong coupling scale due to the mass $M_e$ is then

$$
\Lambda_e = \Lambda_\oplus\left(\frac{M_e}{4\pi\sqrt{Z_\oplus}\,M_{\rm Pl}}\right)^{1/4}\frac{1}{(\Lambda_\oplus\rho_e)^{3/4}}\,. \qquad (47)
$$

As a possible example, if we consider that the local effects could be mimicked by a mass of 10 kg localized 1 cm away from the center of the experiment, then

$$
\Lambda_e \simeq 4\,\Lambda_\oplus\,. \qquad (48)
$$
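The factor of $\sim4$ in (48) follows from (47); the following lines (our own check, reusing `boost` $=\sqrt{Z_\oplus}$ from the previous sketch) confirm it:

```python
# Enhancement from a 10 kg mass at 1 cm, eq. (47); continues the sketch above.
Me_ratio = 10.0 * 5.6e26 / 2.4e18        # M_e / M_Pl ~ 2.3e9
L_oplus_rho = 1.0 / 4.0                  # Lambda_oplus * rho_e for rho_e = 1 cm

ratio = (Me_ratio/(4*math.pi*boost))**0.25 / L_oplus_rho**0.75
print(f"Lambda_e / Lambda_oplus ~ {ratio:.1f}")   # ~ 4.5
```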
In itself this is not a huge enhancement, but it simply serves to show that as one goes within the apparatus itself, its different components help raising the scale, and screening the force.
Quartic Galileon -Massive Gravity
In massive gravity, when $\beta=0$, the decoupling limit resembles that of a quartic Galileon with an additional coupling to matter. As we will discuss in section 5, in that case the only relevant scale in the decoupling limit is not

$$
\Lambda_3 = (m^2 M_{\rm Pl})^{1/3}\,, \quad\text{but rather}\quad \Lambda \equiv \Lambda_3/\alpha^{1/3}\,. \qquad (49)
$$

Since the coefficient $\alpha$ does not renormalize [15], $\alpha$ can in principle depart significantly from unity, hence disentangling the strong coupling scale $\Lambda$, which appears in the decoupling limit, from the graviton mass $m$. In what follows we will thus set $c_4=\alpha^2/3$, as is the case in massive gravity, and work with the scale $\Lambda$, which can in principle be independent of the graviton mass.
Angular Subtleties
As mentioned previously, in the purely spherically symmetric case, in the vacuum $Z^{\Omega\Omega}\ll Z^{rr}\sim Z^{tt}$, which leads to a few subtleties in treating this case. In what follows we consider the case where

$$
Z^{rr} \sim Z^{tt} \equiv Z_\oplus = \frac{6}{\Lambda^2 R_\oplus^2}\left(\frac{1}{12\sqrt{3}\,\pi}\,\frac{M_\oplus}{M_{\rm Pl}}\right)^{2/3} \qquad (50)
$$

and

$$
Z^{\Omega\Omega} = \epsilon\, Z_\oplus\,, \qquad (51)
$$

with $\epsilon\ll1$.
• If we only consider the effect from the Earth itself, and are interested in the redressed scale in the vacuum outside the Earth, then, as seen earlier (a numerical estimate is sketched after this list),
$$
\epsilon \sim \frac{1}{9}\,\Lambda R_\oplus\left(\frac{1}{12\sqrt{3}\,\pi}\,\frac{M_\oplus}{M_{\rm Pl}}\right)^{-1/3} \sim \text{few}\times10^{-11}\,\frac{\Lambda}{(H_0^2 M_{\rm Pl})^{1/3}}\,. \qquad (52)
$$
• However, the Earth itself is not perfectly spherically symmetric, and just taking into account the flatness of the Earth (which is of the order of $\delta\sim0.0033$), we would instead have, more realistically, $\epsilon\sim\delta^2\sim10^{-5}$. Furthermore, the presence of other sources near the experiment itself will completely break the symmetry, and more realistically we would expect $\epsilon\sim1$ at the level of the experiment itself. However, for consistency we keep $\epsilon$ as an arbitrary parameter for now, with $\epsilon\ll1$.
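The order of magnitude in (52) can be checked directly; the short sketch below is our own estimate (we read the mass-ratio factor in (52) with an inverse cube root, which is what the quoted $10^{-11}$ requires):

```python
import math

# eps ~ (1/9) * Lambda * R_earth * (M_earth / (12*sqrt(3)*pi*M_Pl))^(-1/3),
# eq. (52), with Lambda = (H_0^2 M_Pl)^(1/3) ~ (1120 km)^-1.
M_ratio = 6.0e24 * 5.6e26 / 2.4e18       # M_earth / M_Pl ~ 1.4e33
L_R = 6371.0 / 1120.0                    # Lambda * R_earth

A = M_ratio / (12*math.sqrt(3)*math.pi)
eps = (1.0/9.0) * L_R * A**(-1.0/3.0)
print(f"eps ~ {eps:.1e}")                # ~ 2e-11, i.e. 'few x 10^-11'
```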
Since the gradients along the angular directions are not redressed with the same scale as along the radial direction, we first need to rescale the angular directions as [16]

$$
(t,r,\theta,\varphi) = (\tilde t,\tilde r,\epsilon^{1/2}\tilde\theta,\epsilon^{1/2}\tilde\varphi)\,; \qquad (53)
$$
this is not something one could do globally, but if we are only interested in what happens in a small region of space near say an experiment, and not for all angles, the rescaling can be done locally. The kinetic term is then of the form
$$
\int d^4x\,\mathcal{L}_\phi = \int d^4x\left[-\frac{1}{2}Z^{\mu\nu}(r)\,\partial_\mu\phi\,\partial_\nu\phi+\cdots\right] \sim \int d^4\tilde x\left[-\frac{1}{2}\tilde\partial_\mu\tilde\phi\,\tilde\partial^\mu\tilde\phi+\cdots\right]\,, \qquad (54)
$$

with $\tilde\phi = \sqrt{\epsilon Z_\oplus}\,\phi$.
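One way to see the normalization in (54) (our own intermediate step): under (53) the measure picks up a factor of $\epsilon$ while each angular gradient picks up a factor of $\epsilon^{-1/2}$, so

$$
\int d^4x\left[-\tfrac{1}{2}Z_\oplus(\partial_r\phi)^2-\tfrac{1}{2}\,\epsilon Z_\oplus(\partial_\Omega\phi)^2\right]
= \int \epsilon\, d^4\tilde x\left[-\tfrac{1}{2}Z_\oplus(\tilde\partial_r\phi)^2-\tfrac{1}{2}Z_\oplus(\tilde\partial_\Omega\phi)^2\right]\,,
$$

and absorbing the overall factor $\epsilon Z_\oplus$ into the field gives precisely $\tilde\phi=\sqrt{\epsilon Z_\oplus}\,\phi$.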
In terms of the canonically normalized field $\tilde\phi$, the quartic and cubic Galileons lead to cubic interactions of the form (focusing only on the interactions that arise at the lowest energy scale)

$$
\int d^4x\,\mathcal{L}_{\rm int} \supset \int d^4x\left[\frac{\pi_0'(r)}{r\Lambda^6}\,\phi\,(\partial_r^2\phi)(\partial_\Omega^2\phi) + \left(\frac{1}{\Lambda^3}+\frac{\pi_0''(r)}{\Lambda^6}\right)\phi\,(\partial_\Omega^2\phi)^2 + \cdots\right] \qquad (55)
$$
$$
\supset \int d^4\tilde x\left[\frac{1}{\tilde\Lambda_{\oplus,1}^3}\,\tilde\phi\,(\partial_r^2\tilde\phi)(\partial_\Omega^2\tilde\phi) + \frac{1}{\tilde\Lambda_{\oplus,2}^3}\,\tilde\phi\,(\partial_\Omega^2\tilde\phi)^2 + \cdots\right]\,, \qquad (56)
$$

with the redressed interaction scales

$$
\tilde\Lambda_{\oplus,1} = \Lambda^2\left(\frac{R_\oplus}{\pi_0'}\right)^{1/3}(\epsilon Z_\oplus)^{1/2}\,, \qquad (57)
$$
$$
\tilde\Lambda_{\oplus,2} = \Lambda^2\left(\frac{\epsilon}{\pi_0''}\right)^{1/3}(\epsilon Z_\oplus)^{1/2} \sim \Lambda\,\epsilon^{1/3}(\epsilon Z_\oplus)^{1/2}\,. \qquad (58)
$$

Now going back to the original coordinates, if we read the redressed scale as a scale in the orthoradial direction, then

$$
\Lambda_{\oplus,1,2} = \frac{1}{\sqrt{\epsilon}}\,\tilde\Lambda_{\oplus,1,2}\,. \qquad (59)
$$

This leads to the following redressed interaction scales:

$$
\Lambda_{\oplus,1} \sim \Lambda\, Z_\oplus^{1/3}\,, \qquad (60)
$$
$$
\Lambda_{\oplus,2} = \Lambda\,\epsilon^{1/3}\sqrt{Z_\oplus}\,. \qquad (61)
$$

Being extremely conservative and taking the naive value for $\epsilon$ as given in (52), we get

$$
\Lambda_{\oplus,1} \sim 5\times10^6\,\Lambda \sim (20\,{\rm cm})^{-1}\,, \qquad (62)
$$
$$
\Lambda_{\oplus,2} \sim 3\times10^5\,\Lambda \sim (30\,{\rm cm})^{-1}\,, \qquad (63)
$$

assuming $\Lambda^3\sim H_0^2 M_{\rm Pl}$.
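The first of these estimates can be reproduced from (50) and (60); the sketch below is our own check (we verify only $\Lambda_{\oplus,1}$, since $\Lambda_{\oplus,2}$ depends on the precise value of $\epsilon$):

```python
import math

# Check Lambda_{oplus,1} ~ Lambda * Z_oplus^(1/3) ~ 5e6 Lambda, eqs. (50), (60),
# with Lambda ~ (H_0^2 M_Pl)^(1/3) ~ (1120 km)^-1.
M_ratio = 6.0e24 * 5.6e26 / 2.4e18       # M_earth / M_Pl
L_R = 6371.0 / 1120.0                    # Lambda * R_earth

Z = 6.0/L_R**2 * (M_ratio/(12*math.sqrt(3)*math.pi))**(2.0/3.0)   # eq. (50)
boost = Z**(1.0/3.0)                                              # eq. (60)
print(f"Z ~ {Z:.1e}, Lambda_1 ~ {boost:.1e} Lambda")
print(f"Lambda_1^-1 ~ {1120e5/boost:.0f} cm")     # ~ 20 cm, as in eq. (62)
```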
This differs from the result of $(0.4\,{\rm km})^{-1}$ obtained in [16] by about three orders of magnitude. The reason for the discrepancy lies in the fact that the operator $(\partial_\Omega^2\phi)^2$ is not enhanced by the coefficient $\pi_0'(r)/(r\Lambda^6)$, as is the case for the other operators of that form. This is due to the specific structure of the Galileon interactions. As we show here, this operator has a coefficient going as $\pi_0''(r)$ rather than $\pi_0'(r)/r$. Since the hierarchy between $Z^{\Omega\Omega}$ and $Z^{tt}$ was precisely coming from the fact that, around a spherically symmetric configuration, $\pi_0''(r)\ll\pi_0'(r)/r$, that operator comes in at a much larger energy scale than one could have anticipated at first sight (footnote 5).
Being even more conservative and translating instead the redressed scales into distance scales along the radial direction, we would have $\Lambda_{\oplus,1,2}\sim\tilde\Lambda_{\oplus,1,2}$, leading to a smaller scale (or a larger distance scale along the radial direction). Then, for a more realistic value of $\epsilon\gtrsim10^{-5}$, we would instead get $\Lambda_{\oplus,1}\sim(70\,{\rm m})^{-1}$, for $\Lambda\sim(H_0^2M_{\rm Pl})^{1/3}$, as a radial distance scale (and $\Lambda_{\oplus,2}\sim(1\,{\rm m})^{-1}$). However, any realistic kind of matter located around the experiment itself would completely wash out this hierarchy and set $\epsilon$ close to 1, giving back $\Lambda_{\oplus,1,2}\sim\text{few}\times(10\,{\rm cm})^{-1}$ for $\Lambda\sim(H_0^2M_{\rm Pl})^{1/3}$. Since any test of Newton's law requires an apparatus which is itself of the order of a fraction of a metre, and is itself relatively massive, we expect the mass present at these distance scales to redress the strong coupling scale further. More work needs to be done to fully explore the influence of other environmental factors on the redressing of this scale.
The Bigger Picture
As we have seen in the previous sections, massive gravity and generic Galileons have a relatively low energy scale at which perturbation theory breaks down. However, it is not because perturbation theory breaks down that we suddenly expect large corrections to Newton's law. If anything, we would expect the Vainshtein mechanism to work even better in this non-perturbative regime and to decouple the field even more. The scales $\Lambda_3$, $\Lambda_\oplus$ or $\Lambda_{\rm redressed}$ may or may not be interpreted as a cutoff in the usual sense, i.e. it is not necessarily the case that new degrees of freedom come in at this scale. One may imagine it is the scale of strong coupling, analogous to $\Lambda_{\rm QCD}$, and non-perturbative tools must be developed to understand what happens at these scales. For this reason it is important to stress that the scale at which perturbation theory breaks down cannot be used to put a lower bound on the graviton mass, as has been argued in [16].
Interestingly, for massive gravity the stability analysis of this paper constrains us to the restricted Galileons considered in [7], for which $\beta=0$. In this case the only scale entering the decoupling limit is $\Lambda=\Lambda_3/\alpha^{1/3}$. Since both $\alpha$ and $\beta$ are free parameters which satisfy a non-renormalization theorem [15], the scale of strong coupling is completely independent of the mass of the graviton. Furthermore, since essentially all observational constraints so far constrain only the scale $\Lambda=\Lambda_3/\alpha^{1/3}$ (footnote 6) and not $\Lambda_3$ directly, they do not impose any constraints directly on the graviton mass. The latter must be constrained by cosmology, where the decoupling limit is not appropriate, rather than by solar system/astrophysical gravitational tests [21,22].

The results of this and the previous paper [7] have important implications for the discussion on the possible existence of superluminalities and UV completion. For instance, as argued in [23], generic Galileons violate the conditions for analyticity of the S-matrix in Minkowski space-time. However, the very definition of an S-matrix assumes the switching off of interactions at infinity, consistent with the assumption of asymptotic flatness. What we have seen is that introducing a single source into the theory is in conflict with asymptotically flat boundary conditions, at least in the decoupling limit, for stability reasons. As such, the Minkowski S-matrix for the decoupling-limit theory is not an appropriate description, and the existence or not of its analyticity is a moot point. This is not to say that these theories do not need to have a fundamentally different non-perturbative/UV completion than traditional Wilsonian effective field theories, but rather that their failure to satisfy the usual analyticity properties in the decoupling limit by no means precludes the existence of a non-perturbative completion, even one for which the physics is potentially fundamentally (sub-)luminal.
This type of theories were first discovered in[4] in the context of the DGP scenario[5], see also[6] 2 We emphasize however that the 'issue' of superluminality in massive gravity has not (yet) been connected to that of acausality in any rigorous way. Configurations on which closed time-like curves could form seem to live beyond the regime of validity of the effective theory,[11].
Moreover, for c = 0 this solution matches the cosmological background, as it was noted in[17].4 Here, we would like to emphasize that (12) comes from the h µν X(3) µν term of (3); the other contributions to the quadratic gradient term are subdominant.
Footnote 6: For instance, in the calculation of the earth-moon orbit for lunar laser ranging [21], or of pulsar radiation [22], the decoupling-limit description is sufficient to calculate the fifth forces or scalar radiation, and these are hence determined only by the scale Λ = Λ_3/α^(1/3).
Acknowledgements

We would like to thank Matteo Fasiello, Lavinia Heisenberg, Andrew Matas and David Pirtskhalava for useful comments on the manuscript. LB is supported by funds provided by the University of Pennsylvania, GG is supported by NSF grant PHY-0758032 and NASA grant NNX12AF86G S06, and AJT was supported in part by the Department of Energy under grant DE-FG02-12ER41810.

Appendix A

In order to find the most general ansatz for π, one has to satisfy the following conditions:

(i) Spherical symmetry requires π(r, t) to be a function of r and t only.

(ii) For the configuration to be static, the effective energy-momentum tensor (T^eff_µν ≡ G_µν) must satisfy T^eff_0i = 0 and ∂_t T^eff_00 = 0. Here, the first expression requires a vanishing momentum density, while the latter ensures time-independence of the energy density. One may notice that, because of energy-momentum conservation (the Bianchi identity), these two constraints are not independent of each other. However, nothing prohibits requiring both of them individually. Notice that these conditions are a direct consequence of (4).

It follows from Einstein's equation that the condition of vanishing momentum density yields a linear system of algebraic equations, (A-II). For simplicity, let us imagine that Π_0i and Π_ij are independent variables; in that case we have a linear system of algebraic equations at hand, with Π_0j unknown. There exists a non-trivial solution to (A-II) if and only if the determinant of the matrix in brackets vanishes. Otherwise, we end up with ∂_0 ∂_r π = 0, which is consistent with our final ansatz (A-V). With the assumption of spherical symmetry, the above-mentioned condition becomes (A-IV). From (A-IV) and the time-independence of T^eff_0µ it follows that the most general ansatz relevant to us is (A-V). If we further require the time-independence of the effective stress tensor T^eff_ij, we arrive at the following ansatz:

π = (c/2) Λ³ t² + π_0(r),   (A-VI)

where we could add terms constant and linear in time; however, those terms are irrelevant because of the Galilean symmetry.
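For readability, the staticity conditions of point (ii), together with spherical symmetry, can be collected with the final ansatz in a single display; this is our typesetting of the relations already quoted above, with the intermediate equations (A-I)-(A-V) left implicit:

```latex
T^{\rm eff}_{0i} = 0 , \qquad \partial_t T^{\rm eff}_{00} = 0
\qquad \Longrightarrow \qquad
\pi(r,t) = \frac{c}{2}\,\Lambda^{3} t^{2} + \pi_0(r) . \tag{A-VI}
```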
References

C. de Rham and G. Gabadadze, Phys. Rev. D 82, 044020 (2010) [arXiv:1007.0443 [hep-th]].
C. de Rham, G. Gabadadze and A. J. Tolley, Phys. Rev. Lett. 106, 231101 (2011) [arXiv:1011.1232 [hep-th]].
A. Nicolis, R. Rattazzi and E. Trincherini, Phys. Rev. D 79, 064036 (2009) [arXiv:0811.2197 [hep-th]].
M. A. Luty, M. Porrati and R. Rattazzi, JHEP 0309, 029 (2003) [hep-th/0303116].
G. R. Dvali, G. Gabadadze and M. Porrati, Phys. Lett. B 485, 208 (2000) [hep-th/0005016].
C. de Rham and G. Gabadadze, Phys. Lett. B 693, 334 (2010) [arXiv:1006.4367 [hep-th]].
L. Berezhiani, G. Chkareuli and G. Gabadadze, arXiv:1302.0549 [hep-th].
A. I. Vainshtein, Phys. Lett. B 39, 393 (1972); C. Deffayet, G. R. Dvali, G. Gabadadze and A. I. Vainshtein, Phys. Rev. D 65, 044026 (2002) [arXiv:hep-th/0106001].
M. S. Volkov, arXiv:1304.0238 [hep-th].
D. Comelli, M. Crisostomi, F. Nesti and L. Pilo, Phys. Rev. D 85, 024044 (2012); Z. Berezhiani, D. Comelli, F. Nesti and L. Pilo, JHEP 0807, 130 (2008); T. M. Nieuwenhuizen, Phys. Rev. D 84, 024038 (2011); K. Koyama, G. Niz and G. Tasinato, Phys. Rev. Lett. 107, 131101 (2011); K. Koyama, G. Niz and G. Tasinato, Phys. Rev. D 84, 064033 (2011); A. Gruzinov and M. Mirbabayi, Phys. Rev. D 84, 124019 (2011).
G. Tasinato, K. Koyama and G. Niz, arXiv:1304.0601 [hep-th].
E. Babichev, C. Deffayet and R. Ziour, JHEP 0905, 098 (2009); C. Deffayet, Class. Quant. Grav. 25, 154007 (2008); E. Babichev, C. Deffayet and R. Ziour, Phys. Rev. D 82, 104008 (2010); E. Babichev and G. Esposito-Farese, Phys. Rev. D 87, 044032 (2013).
E. Babichev and C. Deffayet, arXiv:1304.7240 [gr-qc].
C. Burrage, C. de Rham, L. Heisenberg and A. J. Tolley, JCAP 1207, 004 (2012) [arXiv:1111.5549 [hep-th]].
K. Koyama, G. Niz and G. Tasinato, to appear.
C. de Rham, G. Gabadadze, L. Heisenberg and D. Pirtskhalava, Phys. Rev. D 83, 103516 (2011) [arXiv:1010.1780 [hep-th]].
M. Wyman, Phys. Rev. Lett. 106, 201102 (2011) [arXiv:1101.1295 [astro-ph.CO]].
C. de Rham, G. Gabadadze, L. Heisenberg and D. Pirtskhalava, arXiv:1212.4128 [hep-th].
C. Burrage, N. Kaloper and A. Padilla, arXiv:1211.6001 [hep-th].
G. Chkareuli and D. Pirtskhalava, Phys. Lett. B 713, 99 (2012) [arXiv:1105.1783 [hep-th]].
F. Sbisa, G. Niz, K. Koyama and G. Tasinato, Phys. Rev. D 86, 024033 (2012) [arXiv:1204.1193 [hep-th]].
A. Nicolis and R. Rattazzi, JHEP 0406, 059 (2004) [hep-th/0404159].
L. Berezhiani, G. Chkareuli, C. de Rham, G. Gabadadze and A. J. Tolley, Phys. Rev. D 85, 044024 (2012) [arXiv:1111.3613 [hep-th]].
J. G. Williams, S. G. Turyshev and D. H. Boggs, Phys. Rev. Lett. 93, 261101 (2004) [gr-qc/0411113].
A. Lue and G. Starkman, Phys. Rev. D 67, 064002 (2003) [astro-ph/0212083].
G. Dvali, A. Gruzinov and M. Zaldarriaga, Phys. Rev. D 68, 024012 (2003) [hep-ph/0212069].
G. Gabadadze and A. Iglesias, Phys. Lett. B 632, 617 (2006) [hep-th/0508201].
L. Iorio, JCAP 0601, 008 (2006) [gr-qc/0510059].
C. de Rham, A. J. Tolley and D. H. Wesley, Phys. Rev. D 87, 044025 (2013) [arXiv:1208.0580 [gr-qc]].
C. de Rham, A. Matas and A. J. Tolley, arXiv:1212.5212 [hep-th].
Y.-Z. Chu and M. Trodden, Phys. Rev. D 87, 024011 (2013) [arXiv:1210.6651 [astro-ph.CO]].
A. Adams, N. Arkani-Hamed, S. Dubovsky, A. Nicolis and R. Rattazzi, JHEP 0610, 014 (2006) [hep-th/0602178].
| [] |
[
"Paradoxical constitutive law between donor O-H bond length and its stretching frequency in water dimer",
"Paradoxical constitutive law between donor O-H bond length and its stretching frequency in water dimer"
] | [
"Rui Liu \nInstitute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina\n",
"Xinrui Yang \nInstitute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina\n",
"Famin Yu \nInstitute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina\n",
"Zhigang Wang \nInstitute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina\n\nCollege of Physics\nJilin University\n130012ChangchunChina\n\nInstitute of Theoretical Chemistry\nJilin University\n130023ChangchunChina\n"
] | [
"Institute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina",
"Institute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina",
"Institute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina",
"Institute of Atomic and Molecular Physics\nJilin University\n130012ChangchunChina",
"College of Physics\nJilin University\n130012ChangchunChina",
"Institute of Theoretical Chemistry\nJilin University\n130023ChangchunChina"
] | [] | The constitutive laws of hydrogen bonds (H-bonds) are central to understanding microphysical processes not precisely observed, especially in terms of structural properties. Previous experience with water H-bonding indicated that as the intermolecular O···O distance shortens, the O-H stretching frequency redshifts; thus, an elongated O-H bond length can be empirically inferred, which is described as the constitutive law under the cooperative effect. Here, using the high-precision CCSD(T) method, we report a violation of the conventional constitutive law in the water dimer. That is, when the variation in the O···O distance changes from stretching by 0.06 to contracting by -0.15 Å compared to the equilibrium position, the donor O-H bond length decreases from 0.9724 to 0.9717 Å, and the O-H stretching frequency redshifts from 3715 to 3708 cm⁻¹. Our work highlights that the O-H bond length decreases simultaneously with its stretching frequency, which is clearly inconsistent with the previously recognized constitutive law. | null | [
"https://export.arxiv.org/pdf/2210.01998v1.pdf"
] | 252,715,935 | 2210.01998 | e922969ac12e84c82903e7f155d44b0b5342459e |
Paradoxical constitutive law between donor O-H bond length and its stretching frequency in water dimer
Rui Liu
Institute of Atomic and Molecular Physics
Jilin University
130012ChangchunChina
Xinrui Yang
Institute of Atomic and Molecular Physics
Jilin University
130012ChangchunChina
Famin Yu
Institute of Atomic and Molecular Physics
Jilin University
130012ChangchunChina
Zhigang Wang
Institute of Atomic and Molecular Physics
Jilin University
130012ChangchunChina
College of Physics
Jilin University
130012ChangchunChina
Institute of Theoretical Chemistry
Jilin University
130023ChangchunChina
Paradoxical constitutive law between donor O-H bond length and its stretching frequency in water dimer
The constitutive laws of hydrogen bonds (H-bonds) are central to understanding microphysical processes not precisely observed, especially in terms of structural properties. Previous experience with water H-bonding indicated that as the intermolecular O···O distance shortens, the O-H stretching frequency redshifts; thus, an elongated O-H bond length can be empirically inferred, which is described as the constitutive law under the cooperative effect. Here, using the high-precision CCSD(T) method, we report a violation of the conventional constitutive law in the water dimer. That is, when the variation in the O···O distance changes from stretching by 0.06 to contracting by -0.15 Å compared to the equilibrium position, the donor O-H bond length decreases from 0.9724 to 0.9717 Å, and the O-H stretching frequency redshifts from 3715 to 3708 cm⁻¹. Our work highlights that the O-H bond length decreases simultaneously with its stretching frequency, which is clearly inconsistent with the previously recognized constitutive law.
Introduction
Constitutive laws are specific relationships that map multiple physical quantities to one another, and for hydrogen bonding (H-bonding, referring to X-H···Y), most attention is focused on the constitutive law between bond length and vibrational stretching frequency. 1-3 Taking the most prototypical water H-bonding as an example, owing to the limitations of current experimental measurements and characterization techniques, it is difficult to determine the position of the hydrogen atom with sufficient precision; 4 thus, direct observation of changes in the O-H bond length below the ångström scale, or even 10⁻² Å, remains a formidable challenge. 5,6 Hence, the O-H bond length can only be measured indirectly through the constitutive law between the O-H bond length and the corresponding stretching frequency. 7,8 This indicates that the constitutive law has played a decisive role in structural identification, responsible for many developments and much progress in physics over nearly seven decades. 9,10 In fact, it is now empirically well established that an elongation of the O-H bond length is accompanied by a red-shifted O-H stretching frequency (down-shift), 11 while a shortening of the O-H bond length corresponds to a blue-shifted O-H stretching frequency (up-shift). 12,13 The law has also been described theoretically at various levels of theory, and the corresponding stretching frequencies have been confirmed experimentally. 14-19 Of particular note, this constitutive law is considered a criterion for defining H-bonds, 11 and it is surprisingly robust across a broad variety of H-bonded systems with different H-bond strengths, including water, ion-water and liquid water clusters, etc. 20 Furthermore, it has also been employed to characterize the long-standing cooperative effect, 21 one of the most remarkable features of H-bonds.
In the water dimer, which reproduces the structural changes of the cooperative effect, this constitutive law takes an equivalent form: as the O···O distance decreases, the O-H bond length increases and the concomitant stretching frequency redshifts. 11,22,23 Nevertheless, in 2021, high-precision ab initio calculations on the water dimer revealed that as the O···O distance decreases, the O-H bond length always decreases rather than cooperatively increasing, which is referred to as the uncooperative effect of H-bonds. 24 This anomalous uncooperative effect was further attributed to electron correlation. Given the quantum mechanical viewpoint established by the uncooperative effect, the reliability of the constitutive law at the atomic level should be scrutinized seriously.

In this work, we use the well-accepted benchmark method of high-precision ab initio calculations, i.e., the coupled-cluster singles and doubles with perturbative triple excitations (CCSD(T)) method, to investigate the constitutive law in the uncooperative water dimer. In particular, detailed analyses show that when the variation in the O···O distance changes from a stretch of 0.06 to a contraction of -0.15 Å compared to the equilibrium position, a red-shifted O-H stretching frequency, which would previously have been read as bond elongation, in fact corresponds to a decreased O-H bond length arising from the uncooperative effect. That is, the O-H bond length decreases simultaneously with its stretching frequency, which is clearly inconsistent with the previously recognized constitutive law. Our work not only highlights the limitations of the supposedly universal constitutive law, but also supports further exploration associating microphysical processes with previous experimental results.
In this work, we use the well-accepted benchmark method of high-precision ab initio, i.e., coupled-cluster singles and doubles with perturbative triple excitations (CCSD(T)) method, to investigate the constitutive law in the uncooperative water dimer. In particular, detailed analyses showed that when the variation in the O· · · O distance changes from a stretch of 0.06 to a contraction of -0.15 Å compared to the equilibrium position, a previously-perceived redshift in the O-H stretching frequency corresponds to a decreased O-H bond length arising from the uncooperative effect. That is, the O-H bond length decreases simultaneously with its stretching frequency, which is clearly inconsistent with the previously recognized constitutive law. Our work not only highlights the limitations of the universal constitutive law, but also supports an upper-building exploration associating microphysical processes with previous experimental results. We first explored the changes in O-H bond length and respective vibrational stretching frequencies as a function of varying O· · · O distances for water dimer (see Fig. 1(a)). The O· · · O distance at the equilibrium position is 2.93 Å , which is taken as the zero point during contraction and stretching. However, the absence of zero-point vibration (ZPV), while not affecting the reliability of the results, brings about stronger H-bonding. 25 This leads to a shortening of the equilibrium O· · · O distance by approximately 0.04 Å and an up-shift in the equilibrium O-H stretching frequency by approximately 110 cm -1 compared to the experimental observations, 6,19,26 which matches well with the theoretical values of previous high-precision ab initio calculations. 27 Here, using the high-precision CCSD(T) method (see Part 1 of the ESI †), we focused on the constitutive law under the uncooperative effect in water dimer. Across the entire plotting range of the variation in the O· · · O distance (stretching by 0.24 to contracting by -0.24 Å ), when the variation in the O· · · O distance decreases from 0.24 to 0.06 Å , the O-H bond length increases by 3*10 -4 Å (0.9721 to 0.9724 Å ), and the corresponding stretching frequency redshifts (3727 to 3715 cm -1 ). When the variation in the O· · · O distance decreases from -0.15 to -0.24 Å , the O-H bond length decreases by 9*10 -4 Å (0.9717 to 0.9708 Å ), and the consequent stretching frequency blueshifts (3708 to 3710 cm -1 ). These results are in accordance with the conventional constitutive law between the O-H bond length and the corresponding stretching frequency. However, when the variation in the O· · · O distance changes from stretching by 0.06 to contracting by -0.15 Å compared to the equilibrium position, we observed a violation of the conventional constitutive law, in which the O-H bond length shortens by 7*10 -4 Å (0.9724 to 0.9717 Å ), due to the recently reported uncooperative effect, 24 the O-H stretching frequency redshifts by 7 cm -1 (3715 to 3708 cm -1 ). That is, the O-H bond length decreases simultaneously with its stretching frequency, which is clearly inconsistent with the previously recognized constitutive law. It should be emphasized that the threshold of geometric convergence is further improved to ensure that the accuracy of our calculations is much higher than the O-H bond length increase of 7*10 -4 Å . Further analysis showed that O-H stretching frequency shift of 7 cm -1 is the result of the dominant own mechanism, rather than the vibrational mixing, 28 as displayed in the Part 2 of the ESI †. 
Since this constitutive law between structural properties and stretching frequency was proposed in the 1950s, 9,10 we have found, perhaps for the first time and using the CCSD(T) method as a benchmark, an unprecedented phenomenon that violates the conventional law. The physical mechanism underlying the phenomenon has been attributed to the deeper electron correlation captured in high-precision ab initio calculations, as pointed out before. 24 Therefore, the constitutive law needs to be re-examined.
To further verify the reliability of this violation of the conventional constitutive law, we carried out additional test calculations. First, the methane-water dimer was calculated using the same high-precision ab initio method; the C···O distance at the equilibrium position is 3.64 Å. As shown in Fig. 1(b), and as for the water dimer, the C···O distance was contracted and stretched over the same range (stretching by 0.24 to contracting by -0.24 Å); across this range the C-H bond length shortens by 3.6×10⁻³ Å (1.1030 to 1.0994 Å) and the C-H stretching frequency blueshifts (3144 to 3186 cm⁻¹). This result shows that the methane-water dimer conforms to the conventional constitutive law. It is the violation in the water dimer, together with the conformity of the methane-water dimer, that confirms the reliability of the deeper CCSD(T) calculations. More comprehensive contraction and stretching processes are given in Parts 3 and 4 of the ESI†. When the variation in the C···O distance is stretched beyond 0.45 Å, the weakening of the intermolecular interaction leads to a stepwise change of the C-H bond from bound to free, and the infrared spectrum becomes insensitive to changes in the C-H bond length. This is probably inevitable for any intermolecularly interacting system. 29 Additionally, we investigated the constitutive law in terms of the hydrogen isotope effect. Calculations of DOD···OD2 and D3CD···OD2 show that the former exhibits the analogous paradoxical constitutive law when the variation in the O···O distance changes from stretching by 0.06 to contracting by -0.15 Å compared to the equilibrium position (see Part 5 of the ESI†). There, the O-D stretching frequency redshifts by 5 cm⁻¹ (2685 to 2680 cm⁻¹), less than the 7 cm⁻¹ shift of the O-H stretching frequency in the water dimer, matching well with theoretical predictions of the hydrogen isotope effect. 30 Importantly, consideration of the hydrogen isotope effect has no impact on the existence of the paradoxical constitutive law for the water dimer described above. These results for the water dimer and its hydrogen isotope analogues clearly show that the previously established constitutive law might be problematic and is no longer general. In contrast, the latter system conforms to the conventional constitutive law: the C-D stretching frequency blueshifts by 30 cm⁻¹ (2328 to 2358 cm⁻¹) when the variation in the C···O distance decreases from stretching by 0.24 to contracting by -0.24 Å, a smaller blueshift than the 42 cm⁻¹ of the C-H stretching frequency. Thus, for the methane-water dimer, the hydrogen isotope effect does not change the qualitative conclusions.
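A simple harmonic-oscillator estimate (our addition, not part of the original analysis) makes the smaller O-D shift plausible: for a fixed force constant k, the stretching frequency scales with the reduced mass µ as

```latex
\omega \propto \sqrt{k/\mu}, \qquad
\frac{\omega_{\mathrm{OD}}}{\omega_{\mathrm{OH}}}
  = \sqrt{\frac{\mu_{\mathrm{OH}}}{\mu_{\mathrm{OD}}}}
  = \sqrt{\frac{1 \cdot 16 / 17}{2 \cdot 16 / 18}} \approx 0.73 ,
```

so a 7 cm⁻¹ O-H shift is expected to map onto roughly 0.73 × 7 ≈ 5 cm⁻¹ for O-D, consistent with the values reported above.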
Overall, we have explicitly demonstrated the breakdown of the supposedly universal constitutive law, at least in the water dimer. Existing theoretical treatments of the constitutive law do not account for the complex electron correlation during contraction and stretching that is captured by the high-precision CCSD(T) method. 11-13 Experimentally, the variation in the intermolecular O···O distance can be determined indirectly to approximately 10⁻² Å by means of matrix isolation, high-resolution vibrational-rotational-tunnelling (VRT), and microwave spectroscopies. 31-33 In contrast, the above-mentioned anomalous changes in the O-H bond length resulting from the uncooperative effect are difficult to resolve at current experimental precision, which also leaves the origin of the paradoxical constitutive law under the uncooperative effect an open question. Furthermore, given that both the O···O (C···O) distance and the corresponding stretching frequency are expected to be observable experimentally, the reliability of the constitutive law between these two quantities is validated in the following.
Figure 2. Intermolecular vibrational stretching frequencies and dipole moments with respect to the O···O (C···O) distance. (a) Changes in the O···O stretching frequencies and dipole moments for the water dimer and the two water monomers in HOH···OH2. (b) Changes in C···O stretching frequencies and dipole moments for the methane-water dimer and the methane and water monomers in H3CH···OH2. The equilibrium O···O and C···O vibrational stretching modes are presented.
Based on the above, the constitutive law between the intermolecular O···O distance and its stretching frequency in the water dimer was further examined. As shown in Fig. 2(a), over the entire plotting range of the variation in the O···O distance, from stretching by 0.24 to contracting by -0.24 Å, the O···O stretching frequency blueshifts (156 to 308 cm⁻¹). The methane-water dimer behaves likewise, showing a blueshift in the C···O vibrational stretching frequency (42 to 145 cm⁻¹) when the variation in the C···O distance changes from stretching by 0.24 to contracting by -0.24 Å (see Part 6 of the ESI†). These results indicate that as the intermolecular O···O (C···O) distance decreases, the corresponding O···O (C···O) stretching frequency increases, implying that the change in the O···O (C···O) distance can be determined experimentally from the O···O (C···O) stretching frequency. This verifies that the constitutive law between the intermolecular O···O (C···O) distance and the corresponding stretching frequency is not broken. In this sense, shifting the focus from the structural properties and stretching frequencies of the intramolecular O-H (C-H) bond to those of the intermolecular O···O (C···O) coordinate would be promising for developing a more reliable constitutive law. Of course, a non-negligible problem remains: their frequency ranges are very different.
Next, considering that the dipole moment, as an intrinsic molecular property, can be obtained by measuring the Stark effect, and that in the water dimer the dipole moment also has important implications for structural determination, 33-35 this parameter provides another point of comparison between theory and experiment. We therefore verified the constitutive law again from the perspective of the dipole moment, as depicted in Fig. 2(b). For simplicity, the dipole moments of the dimer, the donor monomer and the acceptor monomer are labelled DM, DMD and DMA, respectively. Specifically, as the variation in the O···O distance decreases from stretching by 0.24 to contracting by -0.24 Å, the DM of the water dimer decreases monotonically (2.69 to 2.35 D). The DMD and DMA initially increase (2.11 to 2.12 D and 2.22 to 2.28 D) and then decrease (2.12 to 2.08 D and 2.28 to 2.26 D) once the variation in the O···O distance is contracted beyond -0.03 Å and beyond -0.12 Å, respectively. Importantly, the dipole moment also fails to respond to the anomalous change in the O-H bond length. For the methane-water dimer, as the variation in the C···O distance decreases from stretching by 0.24 to contracting by -0.24 Å, the DM of the dimer increases steadily (2.16 to 2.38 D), and the DMD and DMA of the methane and water monomers also increase (0.16 to 0.26 D and 2.06 to 2.12 D), as shown in Part 7 of the ESI†. The dipole moments of both the water and methane-water dimers thus change monotonically as the intermolecular O···O (C···O) distance decreases, a trend also reflected in previous theoretical studies on other monomers with C-H bonds and on water clusters. 29,36 Therefore, it remains feasible and instructive to characterize the inter- and intramolecular structural properties through the dipole moment.
Summary
In conclusion, we have investigated the paradoxical constitutive law between the donor O-H bond length and its stretching frequency in the water dimer. Our high-precision ab initio calculations show that when the variation in the O···O distance changes from stretching by 0.06 to contracting by -0.15 Å relative to the equilibrium position, a decreased O-H bond length (0.9724 to 0.9717 Å) resulting from the uncooperative effect is accompanied by a red-shifted O-H stretching frequency (3715 to 3708 cm⁻¹). Evidently, the O-H bond length decreases simultaneously with its stretching frequency, violating the conventional constitutive law that the uncooperative methane-water dimer still follows. For comparison, we also calculated the O···O stretching frequency and the dipole moment. Although these results are consistent with the conventional constitutive law, they do not alter our principal conclusions about the changes in the uncooperative O-H bonds in the water dimer. Our work therefore suggests that more precise experimental tools are urgently required to understand and characterize complex intermolecular interactions, especially H-bonding, enabling a reliable constitutive law to be obtained at the atomic level.
Methods
In this work, all structures were optimized at the CCSD(T) level 37 with the aug-cc-pVDZ basis set. The CCSD(T) method combines computational efficiency with an accurate description of the structures and properties of H-bonds, and is therefore widely employed for calculations of H-bonding. To ensure the reliability of our conclusions, a threshold for SCF convergence of 10⁻¹⁰ a.u. was used for all calculations, and the maximum component of the gradient was required to be less than 10⁻⁵ a.u. when optimizing the different variables. The geometric structures were relaxed at each given intermolecular distance while the distance was contracted or stretched in steps of 0.03 Å (constrained optimization). The vibrational frequencies were calculated at the same level using the numerical Hessian of the energy gradient; further analysis with different displacement settings confirmed the reliability of the calculated frequencies. In addition, the dipole moments of the different structures were analysed at the same level. All of the above calculations were performed using the MOLPRO 2012 program. 38 Moreover, the vibrational mixing of the O-H stretching frequency in HOH···OH2 was analysed based on a partial vibrational spectrum (PVS) analysis using the Multiwfn program. 39
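As a rough illustration of the distance scan described above, the sketch below performs rigid CCSD(T)/aug-cc-pVDZ single-point calculations for the water dimer at a series of O···O displacements using the open-source PySCF package. It is a minimal sketch under stated assumptions only: the geometry is an assumed rigid, idealized H-bonded arrangement, whereas the actual study relaxed all remaining coordinates at each constrained distance with MOLPRO and computed numerical Hessians; no counterpoise or frequency analysis is included.

```python
# Minimal sketch: rigid O...O distance scan for the water dimer at CCSD(T)/aug-cc-pVDZ.
# Assumptions: idealized monomer geometries (not the constrained-optimized structures
# from the paper); energies only, no frequencies.
from pyscf import gto, scf, cc

def dimer_geometry(r_oo):
    """Donor water H-bonded along the z axis to an acceptor water (idealized)."""
    return f"""
    O  0.000  0.000  0.000
    H  0.000  0.000  0.958
    H  0.926  0.000 -0.239
    O  0.000  0.000  {r_oo:.3f}
    H  0.760  0.000  {r_oo + 0.590:.3f}
    H -0.760  0.000  {r_oo + 0.590:.3f}
    """

for dr in (-0.24, -0.15, 0.00, 0.06, 0.24):    # displacements (angstrom) from ~2.93 A
    mol = gto.M(atom=dimer_geometry(2.93 + dr), basis="aug-cc-pvdz", verbose=0)
    mf = scf.RHF(mol).run()                    # Hartree-Fock reference
    mycc = cc.CCSD(mf).run()                   # CCSD correlation treatment
    e_t = mycc.ccsd_t()                        # perturbative triples correction, (T)
    print(f"dR = {dr:+.2f} A   E[CCSD(T)] = {mycc.e_tot + e_t:.8f} Ha")
```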
Figure 1. Geometric structures and intramolecular vibrational stretching frequencies with respect to the O···O (C···O) distance. (a) Changes in O-H bond length and symmetric O-H stretching frequencies in HOH···OH2. The grey area highlights the paradoxical constitutive law when the variation in the O···O distance changes from a stretch of 0.06 to a contraction of -0.15 Å compared to the equilibrium position. (b) Changes in the C-H bond length and antisymmetric C-H stretching frequencies in H3CH···OH2. Here, the O···O distance and C···O distance at their relative equilibrium positions are 2.93 and 3.64 Å, respectively. For the variation in the O···O (C···O) distance, negative values indicate contraction, while positive values indicate stretching. For the changes in O-H (C-H) bond length, negative values indicate shortening, while positive values indicate elongation. The equilibrium O-H (C-H) vibrational stretching mode is presented.
Author contributions

R. Liu performed most of the simulations. Z. Wang conceived this project. R. Liu, X. Yang, F. Yu and Z. Wang analyzed the results. R. Liu and Z. Wang contributed to writing the paper. All co-authors discussed the results and commented on the manuscript.

Competing interests

All authors declare no competing interests.
References

1. J. Yarwood and G. N. Robertson, Nature, 1975, 257, 41-43.
2. J. Standfuss, P. C. Edwards, A. D'Antona, M. Fransen, G. Xie, D. D. Oprian and G. F. X. Schertler, Nature, 2011, 471, 656-660.
3. Y.-H. Wang, S. Zheng, W.-M. Yang, R.-Y. Zhou, Q.-F. He, P. Radjenovic, J.-C. Dong, S. Li, J. Zheng, Z.-L. Yang, G. Attard, F. Pan, Z.-Q. Tian and J.-F. Li, Nature, 2021, 600, 81-85.
4. Y. Kameda, M. Kowaguchi, K. Tsutsui, Y. Amo, T. Usuki, K. Ikeda and T. Otomo, J. Phys. Chem. B, 2021, 125, 11285-11291.
5. R. Ma, D. Cao, C. Zhu, Y. Tian, J. Peng, J. Guo, J. Chen, X.-Z. Li, J. S. Francisco, X. C. Zeng, L.-M. Xu, E.-G. Wang and Y. Jiang, Nature, 2020, 577, 60-63.
6. J. Odutola and T. Dyke, J. Chem. Phys., 1980, 72, 5062-5070.
7. G. Banfi and R. Bonifacio, Phys. Rev. Lett., 1974, 33, 1259.
8. J. M. Stubbs and J. I. Siepmann, J. Am. Chem. Soc., 2005, 127, 4722-4729.
9. L. E. Godycki, R. Rundle, R. C. Voter and C. V. Banks, J. Chem. Phys., 1951, 19, 1205-1206.
10. R. Rundle and M. Parasol, J. Chem. Phys., 1952, 20, 1487-1488.
11. E. Arunan, G. R. Desiraju et al., Pure Appl. Chem., 2011, 83, 1637-1641.
12. P. Hobza and Z. Havlas, Theor. Chem. Acc., 2002, 108, 325-334.
13. P. Hobza and Z. Havlas, Chem. Rev., 2000, 100, 4253-4264.
14. B. J. van der Veken, W. A. Herrebout, R. Szostak, D. N. Shchepkin, Z. Havlas and P. Hobza, J. Am. Chem. Soc., 2001, 123, 12290-12293.
15. M. R. Bentwood, J. A. Barnes and J. W. Orville-Thomas, J. Mol. Spectrosc., 1980, 84, 391-404.
16. T. Steiner, B. Lutz, J. van der Maas, A. M. Schreurs, J. Kroon and M. Tamm, Chem. Commun., 1998, 171-172.
17. M. J. Frisch, J. Bene, J. S. Binkley and H. Schaefer, J. Chem. Phys., 1986, 84, 2279-2289.
18. J. Dearden, Nature, 1965, 206, 1147-1148.
19. K. Kuyanov-Prozument, M. Y. Choi and A. F. Vilesov, J. Chem. Phys., 2010, 132, 014304.
20. M. A. Boyer, O. Marsalek, J. P. Heindel, T. E. Markland, A. B. McCoy and S. S. Xantheas, J. Phys. Chem. Lett., 2019, 10, 918-924.
21. H. Kleeberg, D. Klein and W. Luck, J. Phys. Chem., 1987, 91, 3200-3203.
22. R. Ludwig, Angew. Chem. Int. Ed., 2001, 40, 1808-1827.
23. C. Q. Sun and Y. Sun, Springer Ser. Chem. Phys., 2016, 13, 365-368.
24. D. Li, Z. Zhang, W. Jiang, Y. Zhu, Y. Gao and Z. Wang, Chin. Phys. Lett., 2021, 38, 013101.
25. S. A. McDowell and A. D. Buckingham, J. Am. Chem. Soc., 2005, 127, 15515-15520.
26. J. C. Howard, J. L. Gray, A. J. Hardwick, L. T. Nguyen and G. S. Tschumper, J. Chem. Theory Comput., 2014, 10, 5426-5435.
27. J. R. Lane, J. Chem. Theory Comput., 2013, 9, 316-323.
28. G. R. Low and H. G. Kjaergaard, J. Chem. Phys., 1999, 110, 9104-9115.
29. Y. Mao and M. Head-Gordon, J. Phys. Chem. Lett., 2019, 10, 3899-3905.
30. L. L. Shipman, J. C. Owicki and H. A. Scheraga, J. Phys. Chem., 1974, 78, 2055-2060.
31. F. N. Keutsch, J. D. Cruzan and R. J. Saykally, Chem. Rev., 2003, 103, 2533-2578.
32. K. L. Busarow, R. Cohen, G. A. Blake, K. Laughlin, Y.-T. Lee and R. Saykally, J. Chem. Phys., 1989, 90, 3937-3943.
33. T. R. Dyke et al., J. Chem. Phys., 1977, 66, 498-510.
34. S. D. Fried et al., Acc. Chem. Res., 2015, 48, 998-1006.
35. J. K. Gregory, Chem. Phys. Lett., 1998, 282, 147-151.
36. K. Liu, M. Brown and R. Saykally, J. Phys. Chem. A, 1997, 101, 8995-9010.
37. J. Noga and R. Bartlett, J. Chem. Phys., 1988, 89, 3401.
38. H. J. Werner, P. J. Knowles, G. Knizia, F. R. Manby and M. Schütz, WIREs Comput. Mol. Sci., 2012, 2, 242-253.
39. T. Lu and F. Chen, J. Comput. Chem., 2012, 33, 580-592.
| [] |
[
"CHATBOTS AS PROBLEM SOLVERS: PLAYING TWENTY QUESTIONS WITH ROLE REVERSALS",
"CHATBOTS AS PROBLEM SOLVERS: PLAYING TWENTY QUESTIONS WITH ROLE REVERSALS"
] | [
"David Noever 1david.noever@peopletec.com2forrest.mckee@peopletec.com \nPeopleTec\n4901-D Corporate Drive35805HuntsvilleALUSA\n",
"Forrest Mckee \nPeopleTec\n4901-D Corporate Drive35805HuntsvilleALUSA\n"
] | [
"PeopleTec\n4901-D Corporate Drive35805HuntsvilleALUSA",
"PeopleTec\n4901-D Corporate Drive35805HuntsvilleALUSA"
] | [] | New chat AI applications like ChatGPT offer an advanced understanding of question context and memory across multi-step tasks, such that experiments can test its deductive reasoning. This paper proposes a multirole and multi-step challenge, where ChatGPT plays the classic twenty-questions game but innovatively switches roles from the questioner to the answerer. The main empirical result establishes that this generation of chat applications can guess random object names in fewer than twenty questions (average, 12) and correctly guess 94% of the time across sixteen different experimental setups. The research introduces four novel cases where the chatbot fields the questions, asks the questions, both question-answer roles, and finally tries to guess appropriate contextual emotions. One task that humans typically fail but trained chat applications complete involves playing bilingual games of twenty questions (English answers to Spanish questions). Future variations address direct problem-solving using a similar inquisitive format to arrive at novel outcomes deductively, such as patentable inventions or combination thinking. Featured applications of this dialogue format include complex protein designs, neuroscience metadata, and child development educational materials. | 10.48550/arxiv.2301.01743 | [
"https://export.arxiv.org/pdf/2301.01743v1.pdf"
] | 255,415,951 | 2301.01743 | cc4bac2342a3189a43fc8f63820c74e9b1584828 |
CHATBOTS AS PROBLEM SOLVERS: PLAYING TWENTY QUESTIONS WITH ROLE REVERSALS
David Noever (david.noever@peopletec.com)
PeopleTec
4901-D Corporate Drive35805HuntsvilleALUSA
Forrest McKee (forrest.mckee@peopletec.com)
PeopleTec
4901-D Corporate Drive35805HuntsvilleALUSA
CHATBOTS AS PROBLEM SOLVERS: PLAYING TWENTY QUESTIONS WITH ROLE REVERSALS
Transformers, Text Generation, Malware Generation, Generative Pre-trained Transformers, GPT
New chat AI applications like ChatGPT offer an advanced understanding of question context and memory across multi-step tasks, such that experiments can test its deductive reasoning. This paper proposes a multirole and multi-step challenge, where ChatGPT plays the classic twenty-questions game but innovatively switches roles from the questioner to the answerer. The main empirical result establishes that this generation of chat applications can guess random object names in fewer than twenty questions (average, 12) and correctly guess 94% of the time across sixteen different experimental setups. The research introduces four novel cases where the chatbot fields the questions, asks the questions, both question-answer roles, and finally tries to guess appropriate contextual emotions. One task that humans typically fail but trained chat applications complete involves playing bilingual games of twenty questions (English answers to Spanish questions). Future variations address direct problem-solving using a similar inquisitive format to arrive at novel outcomes deductively, such as patentable inventions or combination thinking. Featured applications of this dialogue format include complex protein designs, neuroscience metadata, and child development educational materials.
INTRODUCTION
When large, high-quality natural language processors (NLP) surged after 2018 [1-3], the field added many challenging tasks, including question-answering (QA) benchmarks that recently approached fifty challenges [4-5]. Stanford's SQuAD benchmark [6] represents an example of knowledge crowd-sourced from Wikipedia in 2016 and formatted as 108,000 QA pairs. For large language models (LLMs), most benchmarks follow this format of "prompt-response" pairs with answers stored in an underlying knowledge base [5], but training on question datasets did not embed memory or long-conversational cues [7-10]. Domain-specific QA datasets include common sense and trivia about movies, news, Wikipedia, Tweets, search engines, and books [4-5]. As a format, the familiar game of Twenty Questions [11-16] features multi-hop reasoning, which often condenses to "Animal-Vegetable-Mineral" as opening questions that narrow the theme [17]. The advent of ChatGPT (Generative Pre-trained Transformers [18]) added memory across questions (up to 8,000 tokens, or 20-25 pages). Appendix A lists the 48 specialty tasks currently provided to guide users in creating capabilities and NLP prompts [19]. Code-generation specialists such as Codex and Copilot show a grasp of complex behavior for debugging, program suggestions, and commentary [19-20]. One of the innovative ChatGPT extensions offers new QA sessions that span multiple requests [18,21].
The present work applies the familiar QA challenge, Twenty Questions [11-16], with ChatGPT playing as either participant: the one who knows the answer and fields the questions, but also the one who does not know the answer and asks the questions. We call this role reversal a two-player conversation between "Bob" and "Alice." While Twenty Questions dates back to 1882 [11], the applied problem-solving method [22] now spans various challenging fields, including protein folding [23] and design [24], image segmentation [25], child development [26-27], neuroscience [28] and diagnostic medicine, and crowd-sourced emotional indices [30]. Variants of the game rules include role reversals [31], liars [32], and word relationships [33] outside of simple categories, such as "pertaining to" suggestions [34]. Our particular research interests center on whether LLMs like ChatGPT offer constructive means for innovative problem-solving through crafted language prompts and logical question sequences. This effort empirically assesses the model's capability for deductive discovery. We summarize two problem statements in this area of deductive discovery: "Can a chatbot play both roles in deductive question-and-answer conversations?" and, if so, "What can Twenty Questions reveal about future directions for LLMs?"
METHODS
The approach is to test experimentally how well an LLM handles open-ended games like Twenty Questions [11-16]. We employ the December 2022 research model from OpenAI called ChatGPT [18]. We prompt it to impersonate both roles [31]: the one that knows the answer and the one that tries to guess it deductively. We run four trials using each persona ("Bob" or "Alice"), with 80 questions available to guess the object or concept. We mix up the animate-inanimate fields and introduce concepts like alphabetic letters. We report metrics for the mean and median number of questions required to guess the correct answer, along with failure rates. We also score the exchanges with a machine-writing detector to assess whether ChatGPT's outputs register as "real" or "fake" text, as judged by OpenAI's original GPT-2 detector [35]. The detector features a high-dimensional pattern detector that provides initial confidence that human writing might sample differently than transformer-based output in terms of word choice and order, repetition, and other syntax outliers.
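As one concrete way to reproduce the detector-scoring step, the sketch below runs a transcript through the publicly hosted RoBERTa-based GPT-2 output detector via the Hugging Face transformers pipeline. The model name roberta-base-openai-detector and its Real/Fake labels reflect the public checkpoint as we understand it; the paper itself used the hosted web demo [35], so treat this as an illustrative assumption rather than the exact procedure.

```python
# Sketch: score a game transcript with the GPT-2 output detector (Real vs. Fake).
# Assumes the public "roberta-base-openai-detector" checkpoint on the Hugging Face hub;
# the study itself used the hosted demo at huggingface.co/openai-detector.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

transcript = (
    "Q1: Is it an animal? A: Yes. "
    "Q2: Is it larger than a breadbox? A: No. "
    "Q3: Does it lay eggs? A: Yes."
)

result = detector(transcript, truncation=True)[0]
print(result["label"], round(result["score"], 3))  # e.g. 'Real' with a confidence score
```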
Example variants of this pattern include forcing the QA session towards bilingual communication, thus combining two tasks (translation and QA) in multi-hop conversations [5]. We also generalize the format to elicit emotional indices based on crowd-sourced Emotion Twenty Questions (EMO20Q) [30]. Rather than trying to guess an object, we query for one of 23 emotional states, such as "surprise" or "anger." As the dataset designers [30] remarked, "The EMO20Q game requires that an emotionally intelligent agent can understand emotions without observing them, thanks to natural language and empathy." EMO20Q emotions include the following as candidates for twenty-question discovery: admire, adore, anger, awe, boredom, bravery, calm, confidence, confusion, contempt, disgust, enthusiasm, frustration, gratefulness, jealousy, love, proud, relief, serenity, shame, silly, surprised, and thankful.
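A minimal sketch of how one EMO20Q trial can be set up follows: a target emotion is drawn from the twenty-three candidates above, and a rules prompt is assembled for the answering role ("Bob"). The exact prompt wording used in the experiments lives in Appendix E; the template below is our own illustrative phrasing.

```python
# Sketch: assemble an EMO20Q game prompt. The rule wording here is illustrative,
# not the exact prompt used in the appendices.
import random

EMO20Q_EMOTIONS = [
    "admire", "adore", "anger", "awe", "boredom", "bravery", "calm",
    "confidence", "confusion", "contempt", "disgust", "enthusiasm",
    "frustration", "gratefulness", "jealousy", "love", "proud", "relief",
    "serenity", "shame", "silly", "surprised", "thankful",
]

def make_answerer_prompt(target: str) -> str:
    """Rules prompt for the answering role ('Bob'), who holds the target emotion."""
    return (
        "Let's play twenty questions. You are thinking of the emotion "
        f"'{target}'. I will ask yes/no questions; answer each one truthfully "
        "with only 'yes' or 'no' until I guess the emotion or run out of "
        "twenty questions."
    )

print(make_answerer_prompt(random.choice(EMO20Q_EMOTIONS)))
```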
The structure of the paper follows Appendices B-E closely. First, in Appendix B, we establish that ChatGPT comprehends the game rules and recognizes enough properties of the object X to answer basic questions such as "Is X an animal?". The sequence also establishes that only "yes" and "no" answers are accepted, without further elaboration. This case covers the traditional role of "Bob," who knows the object of interest X and answers affirmatively or negatively throughout the game. The game also underscores the unique memory of ChatGPT across multi-step deductive stages and builds toward a successful conclusion, recalling the object name in fewer than 40 steps (one prompt and one answer over 20 iterations). We also set out to test overall rule retention over four repetitions of the game, punctuated only by "Let's play again," without attempting to reset the rules while giving a new object X.
Secondly, Appendix C reverses the previous game, such that the human experimenter thinks of an object or concept X and ChatGPT plays the role of "Alice," asking the questions. The prompt establishes the reversed roles in a new session and again reiterates that the prompter cannot lie but can only answer "yes" or "no." As in the previous case, four repetitions span eighty possible interactions. This case establishes the deductive goal, "What is object X?", where the final answer must satisfy all the previous interactions, both those that describe the object of interest ("X is an animal") and those that exclude the alternatives ("X is not a bird"). A notable feature of this challenge is that it spans multiple deductive steps and establishes a chain of reasoning to arrive at a guess. ChatGPT requires no prompt for the final guess, and the model proposes its own terminating action, "Is it an X?", to end the game.
Both Appendix B and C mirror the familiar human game of twenty questions and introduce no new features excluded from ChatGPT's training data. To raise the difficulty level, Appendix B highlights some nontraditional concepts. For instance, concepts might prove scarce in previously seen online play, such as answering truthfully to probing questions that seek to name the alphabetic letter "Q." Another challenging variation works through identifying "X" as a food by listing its ingredients ("tiramisu"), but in the same session introduces a recipe change ("eggless tiramisu") before launching into probing questions.
Appendix C also raises the difficulty further and forces ChatGPT to combine two of its established subtasks ("question-answer" and "language translation"). The motivation for this test stems from the hypothesis that LLMs parrot and remix what previously appeared in their internet-scale human training data. For twenty questions, however, a Google search on "bilingual twenty-question examples" yields no definitive training examples for ChatGPT to memorize or encode. Given a detailed prompt describing how the questions may be asked or answered in Spanish or English, the game proceeds outside what typically would represent human gameplay. Appendix C combines deductive reasoning, memory, and context and forces the game into what might be considered "out-of-distribution" sampling. Thus, both multi-step and multi-lingual tasks describe the ChatGPT challenge problem.
Thirdly, Appendix D introduces both QA roles completing without human intervention once two browser instances of ChatGPT exchange the initial rules. We call this example dueling twenty questions, since the two-headed LLM must now both ask and answer its questions between two non-communicating models of itself ("Bob" vs. "Alice") with no human prompts. While this example centers on a simple object ("chicken"), one can envision a lengthy and detailed exchange driven by an automated Application Programming Interface (API) that extends the conversation to the limit of token lengths (about 20-25 typed pages). This self-questioning interface may also enable sophisticated future applications that sequentially build a knowledge base or a comprehensive assessment examination from scratch. For example, "tell me all you know about gall bladder surgery" may not produce a compelling or thought-provoking essay in the style of the training data or Wikipedia, whereas the back-and-forth QA format has previously yielded better human performance in medical test contexts [28-29]. One might compare this example to an instance of "semi-supervised" learning for a chatbot.
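The dueling setup in Appendix D was run by hand across two browser sessions, but the same loop can be sketched against the OpenAI chat completion API. Everything below is an assumption-laden sketch: the model name, the openai client interface, and the stop condition are illustrative, not the configuration used in the paper.

```python
# Sketch: "Bob" (answers) vs. "Alice" (asks) twenty-questions self-play via an API.
# Assumptions: openai Python package (legacy ChatCompletion interface), openai.api_key
# already configured, and an illustrative model name; the experiments used two manual
# browser sessions instead.
import openai

def chat(history):
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
    return reply["choices"][0]["message"]["content"].strip()

bob = [{"role": "system", "content":
        "You are Bob. You are thinking of the object 'chicken'. Answer every "
        "question truthfully with only 'yes' or 'no'."}]
alice = [{"role": "system", "content":
          "You are Alice. Deduce Bob's secret object by asking one yes/no "
          "question at a time. When confident, guess with 'Is it a ...?'"}]

for turn in range(1, 21):
    question = chat(alice)                          # Alice asks her next question
    alice.append({"role": "assistant", "content": question})
    bob.append({"role": "user", "content": question})
    answer = chat(bob)                              # Bob answers only yes or no
    bob.append({"role": "assistant", "content": answer})
    alice.append({"role": "user", "content": answer})
    print(f"Q{turn}: {question} -> {answer}")
    if question.lower().startswith("is it") and "yes" in answer.lower():
        break                                       # Alice's final guess was confirmed
```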
Finally, for the EMO20Q format, Appendix E removes the requirement that X be an object to guess and substitutes one of twenty-three emotions. One motivation for exploring the emotional quotient (EQ) of ChatGPT stems from OpenAI's filtering of opinions and biases. To the authors' knowledge, this example represents the first application of a chatbot deducing emotional states in a guessing game without any preprogrammed pairs of pattern-template exchanges [9]. In other words, no explicit guidance supplies the appropriate intent for a question like "Describe emotions one might feel at a birthday party?" when the user's goal is to elicit "surprise" as a deductive leap to the correct answer through repeated probing. We explore this EQ aspect as previous chatbots might have approached the problem. Appendix E finally displays the open-ended emotional context in a known prompt-response template called Artificial Intelligence Markup Language (AIML), which offers training data for more traditional conversation templates in restricted domains like customer service or AI assistants performing a narrow task [9].

RESULTS

Figure 1 summarizes the experimental cases and associated metrics for all the twenty-questions games in Appendices B-E. The main finding supports a general ChatGPT capability to play all aspects of the game, including guessing, answering, or both roles in the same game. The average number of questions required to get the answer was 11.6, with a median of 13 owing to a few more demanding cases that reached the 20-question limit. The addition of bilingual rules did not increase the number of required prompts (14) or force the model to give up (it still reached a correct guess). Similarly, the introduction of abstract objects like the alphabetic letter "Q" or deceptive animals like "humans" did not significantly change the number of guesses (9-14) or steer the conversation off-track from a final correct answer.
Over the sixteen tests and 185 exchanges, ChatGPT scored an overall correct guess rate of 94%. The only incorrect answer ("paper clip" instead of "hammer") appeared to trigger prematurely, as ChatGPT declared at guess number fourteen that the model had run out of questions and guessed incorrectly.
As described in the Methods section, the OpenAI online detector [35] scores the majority of the exchanges as "real," meaning not machine-generated by its GPT-2 model [1]. The 2018 detection model had fewer parameters and smaller training sets, by at least several orders of magnitude, compared to current generations [18-19]. While the referenced detection accuracy for GPT-2 reached 95%, the scored detections flag only 26% of the game content as "fake" or machine-generated text. Since the customized content involves some human intervention in the questions, answers, and rule prompts, one can postulate that the syntactic patterns follow a semi-human or hybrid dialogue outside the detector's target patterns.
DISCUSSION
The research literature on twenty questions provides a framework for probing with chat applications and LLMs. In addition to systematically progressing through alternative scenarios, the output of the conversation mirrors a binary search or decision tree. A particularly notable way to convert LLMs into comparable knowledge graphs involves continuous probing and feedback until a sufficient tree depth describes a technical field of interest. Previous work [22-30] has used this approach to describe neuroscience metadata and train medical students to pass practice exams. Others [16-17] have also noted that the basic 20Q format simplifies some complex search problems. As an interesting historical note, a 2001 paper [36] addressed a similar problem as insurmountable: "When will machine learning and pattern recognition rival human ability to play Twenty Questions?" The present research demonstrates that ChatGPT can not only rival human ability but also play more demanding roles than a human might contemplate in virtually any field, including discerning human emotions. To reproduce this ChatGPT output with question-templating systems or entity extraction would pose an enormous manual task to handle all the possible cases. The same paper [36] asks: "Can we hope to stock a classification system with enough questions to play a decent game, or must we instead focus on endowing with question-making skills? How many classes and how many documents might be of interest?"
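The binary-search framing has a simple information-theoretic consequence worth making explicit (our back-of-the-envelope arithmetic, not a result from [36]): k perfectly chosen yes/no questions can separate at most 2^k candidates.

```python
# Back-of-the-envelope capacity of a yes/no question game.
import math

print(2 ** 20)                            # 1,048,576 objects separable by an ideal 20-question tree
print(2 ** 12)                            # 4,096 objects at the observed average of ~12 questions
print(math.ceil(math.log2(1_000_000)))    # 20 ideal questions suffice for one object in a million
```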
CONCLUSIONS
The experimental plan tests ChatGPT as an LLM capable of playing multiple roles in verbal games like twenty questions. The work demonstrates 94% accuracy in correctly guessing across numerous challenges and an average question-answer length of 12. For the first time, dueling roles combine two chatbots in selfplay. An innovative application for future probing involves guessing objects and concepts and human emotions for a given context or social situation. [16] Takemura, K. (1994). An analysis of the information-search process in the game of twenty questions. Perceptual and motor skills, 78 (2), 371-377.
[17] Qiao, S., Ou, Y., Zhang, N., Chen, X., Yao, Y., Deng, S., ..
Figure 1 .
1Experimental results for ChatGPT across multiple QA sessions describes a technical field of interest. Previous work
Figure 2 .
2Sample , M. T. (1882). Twenty Questions: A short treatise on the game to which are added a code of rules and specimen games for the use of beginners. Holt. [12] Bendig, A. W. (1953). Twenty questions: an information analysis. Journal of Experimental Psychology, 46(5), 345. [13] Taylor, D. W., & Faust, W. L. (1952). Twenty questions: efficiency in problem solving as a function of size of group. Journal of experimental psychology, 44(5), 360. [14] Richards, W. (1982). How to play twenty questions with nature and win. [15] Flach, J. M., Dekker, S., & Jan Stappers, P. (2008). Playing twenty questions with nature (the surprise version): Reflections on the dynamics of experience. Theoretical Issues in Ergonomics Science, 9(2), 125-154.
[ 21 ]
21Castelvecchi, D. (2022). Are ChatGPT and AlphaCode going to replace programmers?. Nature. [22] Siegler, R. S. (1977). The twenty questions game as a form of problem solving. Child Development, 395-403. [23] Underwood, D. J. (1995). Protein structures from domain packing--a game of twenty questions?. Biophysical journal, 69(6), 2183. [24] Ferruz, N., & Höcker, B. (2022). Controllable protein design with language models. Nature Machine Intelligence, 4(6), 521-532. [25] Rupprecht, C., Peter, L., & Navab, N. (2015). Image segmentation in twenty questions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3314-3322). [26] Courage, M. L. (1989). Children's inquiry strategies in referential communication and in the game of twenty questions. Child Development, 877-886. [27] Marschark, M., & Everhart, V. S. (1999). Problem-solving by deaf and hearing students: Twenty questions. Deafness & Education International, 1(2), 65-82. [28] Ascoli, G. A. (2012). Twenty questions for neuroscience metadata. Neuroinformatics, 10(2), 115-117. [29] Williams, R. G., & Klamen, D. L. (2015). Twenty Questions game performance on medical school entrance predicts clinical performance. Medical Education, 49(9), 920-927. [30] Kazemzadeh, A., Lee, S., Georgiou, P. G., & Narayanan, S. S. (2011, October). Emotion twenty questions: Toward a crowd-sourced theory of emotions. In International conference on affective computing and intelligent interaction (pp. 1-10
22 ]
22now spans various challenging fields, including protein folding [23] and design [24], image segmentation [25], child development [26-27], neuroscience [28] and diagnostic medicine, and crowd-sourced emotional indices [30]. Variants of the game rules include role reversals [31], liars [32], and word relationships [33] outside of simple categories such as "pertaining to" suggestions [34]
Technical Note: Some appendix text generated from Large Language Model (LLM) for illustration purposes.The authors generated this text in part with ChatGPT, OpenAI's large-scale language-generation model. Upon generating draft language, the authors reviewed, edited, and revised the language to their own liking and take ultimate responsibility for the content of this publication.--OpenAI policy statement (2022) Translates English text into French, Spanish and Japanese. Create code to call the Stripe API using natural language. Science fiction book list maker Create a list of items for a given topic Tweet classifier Basic sentiment detection for a piece of text Creates two to three sentence short horror stories from a topic input Third-person converter Converts first-person POV to the third-person Marv the sarcastic chat bot Marv is a factual chatbot that is also sarcastic Task Description Task Description Turn by turn directions Convert natural language to turn-byturn directions Restaurant review creator Turn a few words into a restaurant review). Springer, Berlin, Heidelberg.
[31] Parikh, P., & Gupta, A. (2021). Reversing the Twenty Questions Game.
[32] Dhagat, A., Gács, P., & Winkler, P. (1992, January). On Playing "Twenty Questions" with a Liar. In SODA (Vol. 92, pp. 16-22).
[33] Fouh, E., & Poirel, C. (2010). WordNet.
[34] Brian, D. (2003). Hypernyms, Hyponyms, Pertainyms, and Other Word Relationships. In Orwant, J. (Ed.), Games, Diversions & Perl Culture: Best of the Perl Journal. O'Reilly Media, Inc.
[35] OpenAI (2021). GPT-2 Output Detector Demo, https://huggingface.co/openai-detector/
[36] Nagy, G., & Seth, S. C. (2001). Twenty questions for document classification. https://core.ac.uk/download/pdf/84308938.pdf
Authors
Forrest McKee has AI research experience with the Department of Defense in object detection and reinforcement learning. He received his Bachelor's (BS) and Master's (MSE) from the University of Alabama, Huntsville, Engineering.
David Noever has research experience with NASA and the Department of Defense in machine learning and data mining. He received his BS from Princeton University and his Ph.D. from Oxford University, as a Rhodes Scholar, in theoretical physics.
Appendix A: Table of GPT3 Tasks as Fine-Tuned NLP Capabilities

Task: Description
Q&A: Answer questions based on existing knowledge.
Parse unstructured data: Create tables from long form text.
Grammar correction: Corrects sentences into standard English.
Classification: Classify items into categories via example.
Summarize for a 2nd grader: Translates difficult text into simpler concepts.
Python to natural language: Explain a piece of Python code in human understandable language.
Natural language to OpenAI API: Create code to call to the OpenAI API using a natural language instruction.
Movie to Emoji: Convert movie titles into emoji.
Text to command: Translate text into programmatic commands.
Calculate Time Complexity: Find the time complexity of a function.
English to other languages: Translates English text into French, Spanish and Japanese.
Translate programming languages: Translate from one programming language to another.
Natural language to Stripe API: Create code to call the Stripe API using natural language.
Advanced tweet classifier: Advanced sentiment detection for a piece of text.
SQL translate: Translate natural language to SQL queries.
Explain code: Explain a complicated piece of code.
Keywords: Extract keywords from a block of text.
Factual answering: Guide the model outside its knowledge base.
Ad from product description: Turn a product description into ad copy.
Product name generator: Create product names from examples words.
TL;DR summarization: Summarize text by adding a 'tl;dr:' to the end of a text passage.
Python bug fixer: Find and fix bugs in source code.
Spreadsheet creator: Create spreadsheets of various kinds of data.
JavaScript helper chatbot: Message-style bot that answers JavaScript questions.
ML/AI language model tutor: Bot that answers questions about language models.
Airport code extractor: Extract airport codes from text.
SQL request: Create simple SQL queries.
Extract contact information: Extract contact information from a block of text.
JavaScript to Python: Convert simple JavaScript expressions into Python.
Friend chat: Emulate a text message conversation.
Mood to color: Turn a text description into a color.
Write a Python docstring: Create a docstring for a given Python function.
Analogy maker: Create analogies.
JavaScript one line function: Turn a JavaScript function into a one liner.
Micro horror story creator: Creates two to three sentence short horror stories from a topic input.
Third-person converter: Converts first-person POV to the third-person.
Notes to summary: Turn meeting notes into a summary.
VR fitness idea generator: Create ideas for fitness and virtual reality games.
Essay outline: Generate an outline for a research topic.
Recipe creator: Create a recipe from a list of ingredients.
Chat: Open ended conversation with an AI assistant.
Marv the sarcastic chat bot: Marv is a factual chatbot that is also sarcastic.
Turn by turn directions: Convert natural language to turn-by-turn directions.
Restaurant review creator: Turn a few words into a restaurant review.
Create study notes: Provide a topic and get study notes.
Interview questions: Create interview questions.
Science fiction book list maker: Create a list of items for a given topic.
Tweet classifier: Basic sentiment detection for a piece of text.
ACKNOWLEDGEMENTS
The authors thank the PeopleTec Technical Fellows program for encouragement and project assistance.

Appendix B. Chatbot Fields the Questions

Appendix E: Emotional Quotient Deduction

(reader note: the answer is "confidence")
Prompt: Let's play a new game of 20 questions. I am impersonating the emotion, X. You will ask me up to 20 questions and I will answer each one truthfully based on feeling X emotion. If you don't guess correctly what I am feeling after 20 questions, I will answer "No, sorry, time's up." I will not give any explanations other than "Yes", "No", "You guessed it", and "No, sorry, time's up." X must be one of 23 emotions that I am feeling and you must guess the correct one. Your choices come from this menu of emotions only "admire, adore, anger, awe, boredom, bravery, calm, confidence, confusion, contempt, disgust, enthusiasm, frustration, gratefulness, jealousy, love, proud, relief, serenity, shame, silly, surprised, and thankful." A typical example question you might ask, "Would you feel this emotion at a birthday party?" Let's begin. Ask me the first question.
ChatGPT: Would you feel this emotion when you are pleased with yourself or something you have done?
Prompt: yes
ChatGPT: Are you feeling proud?
Prompt: no
ChatGPT: Are you feeling confident?
Prompt: yes
ChatGPT: You guessed it! Confidence is the emotion that I am impersonating.

(reader note: the answer is "jealousy")
Prompt: let's play again
ChatGPT: Sure! I'll be happy to play again. Please give me a new emotion to impersonate and we can begin a new game of 20 questions. Remember, your choices come from this menu of emotions: "admire, adore, anger, awe, boredom, bravery, calm, confidence, confusion, contempt, disgust, enthusiasm, frustration, gratefulness, jealousy, love, proud, relief, serenity, shame, silly, surprised, and thankful."
Prompt: ok, i know what i'm feeling so ask me the first question
ChatGPT: Would you feel this emotion when you see something that is extremely impressive or beautiful?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing strong dislike or distaste for something or someone?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing a strong sense of resentment towards someone or something?
Prompt: yes
ChatGPT: Are you feeling jealousy?
Prompt: yes
ChatGPT: You guessed it! Jealousy is the emotion that you are impersonating.

(reader note: the answer is "silly")
Prompt: let's play again. I know who i'm impersonating so ask me the first question
ChatGPT: Certainly! Please go ahead and give me the first emotion to impersonate and I will ask you up to 20 questions to try to guess what it is. Remember, your choices come from this menu of emotions: "admire, adore, anger, awe, boredom, bravery, calm, confidence, confusion, contempt, disgust, enthusiasm, frustration, gratefulness, jealousy, love, proud, relief, serenity, shame, silly, surprised, and thankful."
ChatGPT: Would you feel this emotion when you are feeling a strong sense of envy or resentment towards someone or something?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing a strong sense of love or strong affection towards someone or something?
Prompt: no
ChatGPT: Would you feel this emotion when you are feeling a strong sense of pride or self-esteem?
Prompt: no
ChatGPT: Would you feel this emotion when you are feeling a sense of relief or release from stress or anxiety?
Prompt: no
ChatGPT: Would you feel this emotion when you are feeling a sense of inner peace and contentment?
Prompt: no
ChatGPT: Would you feel this emotion when you are feeling a sense of shame or embarrassment?
Prompt: no
ChatGPT: Would you feel this emotion when you are feeling silly or playful?
Prompt: yes
ChatGPT: Are you feeling silly?
Prompt: yes
ChatGPT: You guessed it! Silly is the emotion that you are impersonating.

(reader note: the answer is "calm")
Prompt: let's play again
ChatGPT: Sure! I'd be happy to play another game of 20 questions. Please give me a new emotion to impersonate and we can begin a new game. Remember, your choices come from this menu of emotions: "admire, adore, anger, awe, boredom, bravery, calm, confidence, confusion, contempt, disgust, enthusiasm, frustration, gratefulness, jealousy, love, proud, relief, serenity, shame, silly, surprised, and thankful."
Prompt: ask me the first question
ChatGPT: Certainly! Here is my first question: Would you feel this emotion when you are experiencing a strong sense of admiration or respect for someone or something?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing a strong sense of affection or deep caring for someone or something?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing a strong sense of anger or annoyance towards someone or something?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing a strong sense of awe or wonder towards something?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing a sense of boredom or lack of interest in something?
Prompt: no
ChatGPT: Would you feel this emotion when you are experiencing a strong sense of courage or fearlessness in the face of danger or difficulty?
Prompt: no
ChatGPT: Would you feel this emotion when you are feeling peaceful and at ease?
Prompt: yes
ChatGPT: Are you feeling calm?
Prompt: yes

(reader note: the game prompt changes to a general query with formatting output instructions)
Prompt: Give some example emotions one might experience at your own birthday party and write the output using Artificial Intelligence Markup Language.
ChatGPT: <aiml version="2.0"> <category> <pattern>GIVE SOME EXAMPLE EMOTIONS ONE MIGHT EXPERIENCE AT YOUR OWN BIRTHDAY PARTY</pattern> <template> There are many emotions that a person might experience at their own birthday party.
Some possible emotions include: <ul> <li>Excitement: The person might be excited to celebrate their special day with friends and family.</li> <li>Gratitude: The person might feel grateful for the love and support of those around them.</li> <li>Happiness: The person might feel happy and joyful as they celebrate with loved ones.</li> <li>Anticipation: The person might feel anticipation as they look forward to the events and activities planned for the party.</li> <li>Nostalgia: The person might feel nostalgiciac as they reflect on past birthdays and memories.</li> <li>Appreciation: The person might feel appreciated and loved as they receive gifts and well wishes from their friends and family.</li> </ul> It's important to note that everyone is different and may experience a wide range of emotions at their birthday party, depending on their personality and individual circumstances. </template> </category> </aiml>
Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving language understanding by generative pre-training. OpenAI. https://cdn.openai.com/research-covers/language-unsupervised/language_understanding_paper.pdf
Zhang, Q., Chen, S., Xu, D., Cao, Q., Chen, X., Cohn, T., & Fang, M. (2022). A survey for efficient open domain question answering. arXiv preprint arXiv:2211.07886.
Tunstall, L., von Werra, L., & Wolf, T. (2022). Natural language processing with transformers. O'Reilly Media, Inc.
Wang, Z. (2022). Modern question answering datasets and benchmarks: A survey. arXiv preprint arXiv:2206.15030.
Bai, Y., & Wang, D. Z. (2021). More than reading comprehension: A survey on datasets and metrics of textual question answering. arXiv preprint arXiv:2109.12264.
Guven, Z. A., & Unalir, M. O. (2022). Natural language based analysis of SQuAD: An analytical approach for BERT. Expert Systems with Applications, 195, 116592.
Guan, Y., Li, Z., Lin, Z., Zhu, Y., Leng, J., & Guo, M. (2022, June). Block-skim: Efficient question answering for transformer. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 36, No. 10, pp. 10710-10719).
Zhu, F., Ng, S. K., & Bressan, S. (2022). COOL, a Context Outlooker, and its Application to Question Answering and other Natural Language Processing Tasks. arXiv preprint arXiv:2204.09593.
| [] |
[
"Accepted to the Astrophysical Journal ON THE DETECTION OF MOLECULES IN THE ATMOSPHERE OF HD189733B USING HST NICMOS TRANSMISSION SPECTROSCOPY",
"Accepted to the Astrophysical Journal ON THE DETECTION OF MOLECULES IN THE ATMOSPHERE OF HD189733B USING HST NICMOS TRANSMISSION SPECTROSCOPY"
] | [
"Mark R Swain \nDepartment of Astronomy & Astrophysics\nJet Propulsion Laboratory\nCalifornia Institute of Technology\n4800 Oak Grove Drive91109PasadenaCAUSA\n",
"Michael R Line \nJet Propulsion Laboratory\nUniversity of California-Santa Cruz\n95064Santa CruzCA\n",
"Pieter Deroo \nCalifornia Institute of Technology\n4800 Oak Grove Drive91109PasadenaCAUSA\n"
] | [
"Department of Astronomy & Astrophysics\nJet Propulsion Laboratory\nCalifornia Institute of Technology\n4800 Oak Grove Drive91109PasadenaCAUSA",
"Jet Propulsion Laboratory\nUniversity of California-Santa Cruz\n95064Santa CruzCA",
"California Institute of Technology\n4800 Oak Grove Drive91109PasadenaCAUSA"
] | [] | The HST/NICMOS transmission spectrum measurements of HD 189733b that suggest the detection of methane (CH 4 ) in an exoplanet atmosphere have been a source of significant controversy. With what is probably the best analyzed exoplanet spectroscopy data set to date, different teams, using different methods, have claimed evidence both contradicting and supporting the original findings. Here, we report results from a uniform spectral retrieval analysis of the three, independent, published spectra together with null hypothesis testing. Based on Bayesian model comparison, we find that two of the three spectra show strong evidence (≥ 3.6σ) for the detection of molecular features mainly due to water and methane while the third is consistent with a weak molecular detection at the 2.2σ level. We interpret the agreement in the spectral modulation established by previous authors and the atmospheric retrieval results presented here, as a confirmation of the original detection of molecular absorbers in the atmosphere of HD 189733b. Subject headings: planetary systems -planets and satellites: atmospheres -radiative transfermethods: data analysis-planets and satellites: individual(HD 189733b) | 10.1088/0004-637x/784/2/133 | [
"https://arxiv.org/pdf/1401.7601v1.pdf"
] | 118,549,551 | 1401.7601 | 2037bf3f56f856e190bc737a24f414f6dc031e52 |
Accepted to the Astrophysical Journal ON THE DETECTION OF MOLECULES IN THE ATMOSPHERE OF HD189733B USING HST NICMOS TRANSMISSION SPECTROSCOPY
Mark R Swain
Department of Astronomy & Astrophysics
Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive91109PasadenaCAUSA
Michael R Line
Jet Propulsion Laboratory
University of California-Santa Cruz
95064Santa CruzCA
Pieter Deroo
California Institute of Technology
4800 Oak Grove Drive91109PasadenaCAUSA
Accepted to the Astrophysical Journal ON THE DETECTION OF MOLECULES IN THE ATMOSPHERE OF HD189733B USING HST NICMOS TRANSMISSION SPECTROSCOPY
Accepted to the Astrophysical Journal. Preprint typeset using LaTeX style emulateapj v. 5/2/11
The HST/NICMOS transmission spectrum measurements of HD 189733b that suggest the detection of methane (CH 4 ) in an exoplanet atmosphere have been a source of significant controversy. With what is probably the best analyzed exoplanet spectroscopy data set to date, different teams, using different methods, have claimed evidence both contradicting and supporting the original findings. Here, we report results from a uniform spectral retrieval analysis of the three, independent, published spectra together with null hypothesis testing. Based on Bayesian model comparison, we find that two of the three spectra show strong evidence (≥ 3.6σ) for the detection of molecular features mainly due to water and methane while the third is consistent with a weak molecular detection at the 2.2σ level. We interpret the agreement in the spectral modulation established by previous authors and the atmospheric retrieval results presented here, as a confirmation of the original detection of molecular absorbers in the atmosphere of HD 189733b. Subject headings: planetary systems - planets and satellites: atmospheres - radiative transfer - methods: data analysis - planets and satellites: individual (HD 189733b)
INTRODUCTION
The announcement of the likely detection of methane in an exoplanet atmosphere was made using a Hubble/NICMOS near-infrared transmission spectrum of the hot-Jupiter HD 189733b by Swain, Vasisht, & Tinetti (2008; hereafter SVT08). The same measurements, and those of Grillmair et al. (2008), also provided the spectroscopic confirmation of the presence of water, which had been previously identified using Spitzer mid-infrared photometry (Tinetti et al. 2007). The Hubble spectra of HD 189733b initiated extensive efforts in the community to characterize exoplanet atmospheres by searching for molecular features using Hubble near-infrared spectroscopy measurements of both primary eclipse (transit) and secondary eclipse (occultation) events; today, infrared spectroscopic characterization of transiting exoplanet atmospheres with Hubble is a robust field involving multiple teams, hundreds of Hubble orbits, and published spectra for 11 planets to date (HD 189733b, HD 209458b, GJ 436b, XO-1b, XO-2b, GJ 1214b, WASP-12b, HAT-P-1b, HAT-P-12b, WASP-17b, WASP-19b; see Swain et al. 2008, Pont et al. 2009, Swain et al. 2009a, Swain et al. 2009b, Tinetti et al. 2010, Crouzet et al. 2012, Berta et al. 2012, Swain et al. 2013, Huitson et al. 2013, Wakeford et al. 2013, Mandell et al. 2013, Kreidberg et al. 2014).
As part of the growing interest in applying Hubble spectroscopy to exoplanets, Gibson, Pont, & Aigrain (2011; hereafter GPA11) reanalyzed the SVT08 data and produced three different transmission spectra, based on three different models for the instrument systematics. GPA11 proposed that the systematic errors in NICMOS were not amenable to correction and that the results of SVT08 should not be considered reliable. Subsequently, Gibson et al. (2012; hereafter G12) applied a new data reduction method and found results consistent with SVT08 but with substantially larger errors. On the basis of the larger errors, G12 concluded that the detection of molecular features was unreliable.
In an attempt to resolve the debate, Waldmann et al. (2013; hereafter W13) undertook a reanalysis of the SVT08 data using a completely different approach from either SVT08 or G12. This analysis resulted in a spectrum consistent with the SVT08 and G12 results (see Figure 1). W13 concluded that the agreement between these three spectra is strong evidence for the stability of the result. W13 found measurement uncertainties 30% larger than SVT08 and noted that the method they (W13) used, which does not make use of any prior knowledge of the instrument, generates larger uncertainties than an approach based on an instrument model used by SVT08.
Although the G12 claim that the expected signal is typically orders of magnitude smaller than the instrumental systematics is demonstrably incorrect (see Figure 1, reproduced from SVT08), the absence of a quantitative analysis of the constraints provided by the three spectra has fostered speculation. Here we report a uniform analysis to determine how the differences between the SVT08, G12, & W13 transmission spectra impact our knowledge of the presence of molecular absorbers in the atmosphere of HD 189733b.
METHODS
We explore the impact of each of the three spectra (see Figure 1) on our knowledge of the composition of the atmosphere by asking the question: What is the detectability of molecular species in each of these spectra? We quantitatively answer this question using a Bayesian model comparison approach on the atmospheric retrieval results for each data set. We used the Bayesian atmospheric retrieval suite, CHIMERA, described in detail in Line et al. (2013a), to perform the retrieval analysis. CHIMERA uses three retrieval algorithms to determine the range of temperatures and abundances permitted by the data. These three algorithms are optimal estimation (Rodgers 2000; Lee et al. 2012; Line et al. 2012), Markov chain Monte Carlo (MCMC; e.g., Madhusudhan et al. 2011; Benneke & Seager 2012), and bootstrap Monte Carlo (Press et al. 1999). Line et al. (2013a) showed that the three retrieval approaches produce temperature and molecular abundance uncertainty distributions that tend to disagree for spectra with sparse coverage and low signal-to-noise, generally due to the non-Gaussian nature of the posterior. The more widely accepted of the approaches in these situations are the MCMC methods, because of their ability to characterize non-Gaussian posterior probability distributions. In this investigation we use both optimal estimation and MCMC for the model comparisons.
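For readers unfamiliar with the MCMC machinery, the sketch below shows the core Metropolis accept/reject step that underlies retrievals of this kind. It is our minimal illustration, not CHIMERA's actual implementation: the Gaussian likelihood, the isotropic proposal, and all variable names are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta, wavelengths, depths, errors, forward_model):
    """Gaussian log-likelihood comparing a model transmission spectrum to data."""
    model = forward_model(theta, wavelengths)
    return -0.5 * np.sum(((depths - model) / errors) ** 2)

def metropolis(theta0, log_like, n_steps=100_000, step=0.05):
    """Minimal Metropolis sampler with an isotropic Gaussian proposal."""
    theta = np.asarray(theta0, dtype=float)
    ll = log_like(theta)
    chain = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.size)
        ll_prop = log_like(proposal)
        # Accept with probability min(1, exp(ll_prop - ll))
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = proposal, ll_prop
        chain[i] = theta
    return chain
```

In a retrieval of the kind described here, theta would collect the six retrieved parameters (four gas mixing ratios, temperature, and reference pressure), and flat priors would be imposed by rejecting proposals outside the prior bounds.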
The forward model used in the retrievals computes the transmission spectra given the gas abundances and temperature profile. The model divides the disk of the planet into annuli and computes the integrated slant optical depth and transmittance along each tangent height. The wavelength-dependent eclipse depth is then computed by integrating the slant transmittance using equation 11 in Brown (2001). The absorption cross-section database is described in Line et al. (2013a). The code has been validated against those of Fortney et al. (2010) and Deming et al. (2013) (see Line et al. 2013b for the validation).
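To make the forward-model step concrete, the following sketch integrates a slant transmittance profile into a transit depth, schematically following the integral in equation 11 of Brown (2001). The function names and the split into an opaque disk plus an annular absorption integral are our simplified reading, not the paper's code.

```python
import numpy as np

def eclipse_depth(b, transmittance, r_planet, r_star):
    """
    Wavelength-dependent transit depth from the slant transmittance T(b)
    along tangent rays with impact parameter b: the opaque disk of radius
    r_planet plus the absorption of the annulus above it,
    2 * Int[(1 - T(b)) b db], all normalized by the stellar disk.
    """
    y = (1.0 - transmittance) * b
    annulus = 2.0 * np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(b))  # trapezoid rule
    return (r_planet**2 + annulus) / r_star**2
```

Evaluating this at each wavelength, with T(b) built from the integrated slant optical depth of each annulus, yields the model spectrum that is compared to the data.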
Our objective, for all three spectra, is the retrieval of the constant-with-altitude volume mixing ratios for H 2 O, CH 4 , CO, and CO 2 ; assumptions and retrieval parameters are applied in a uniform way for all three spectra. We assume an isothermal atmosphere, a valid assumption for transmission spectra due to the relative insensitivity of the transmission spectra to changes in this temperature range and to the lack of justification for more complicated profiles. This single temperature parameter controls the scale height, and hence the absorption feature amplitudes. Additionally, we fix the planet radius and adjust the reference pressure level at which this radius is defined; this slides the overall transmission spectrum up or down. Thus the retrieval is for a total of six parameters: the four gases, temperature, and reference pressure. The mean molecular weight of the atmosphere is self consistently determined using the mole fractions of the four retrieved species and assuming the remainder of the atmosphere is a cosmic H 2 /He mixture. There may be other optically inactive filler gases such as N 2 or noble gases; however, given the mass and radius measurements of HD189733b, H 2 /He are the species that would most contribute to the mean molecular weight. We have not considered the role of clouds in these spectra. We assume flat priors on each of the parameters for the MCMC retrieval. The details of these priors can be found in Line et al. (2013a).
RESULTS
Figures 2 and 3 summarize the gas abundance retrieval results. Figure 2 shows the fits to each of the three spectra. Since the MCMC produces many hundreds of thousands of fits, rather than show all of them, we summarize the fits by computing the median of all of the fits and the 68% and 95% spread in the fits (see Line et al. 2013a for details). Generally, the spread in the fits roughly mimics the average data error bar size. Figure 3 shows the marginalized posterior distributions for each gas in the form of histograms. From this figure it is clear that the SVT08 spectrum produces the best constraints on the water and methane abundances. The W13 spectrum produces similar constraints on water, and shows methane is likely present but provides less of a constraint on the abundance. Finally, the G12 spectrum produces virtually no constraint on any of the gas abundances. All three datasets fail to provide meaningful constraints on the CO and CO2 abundances, perhaps only hinting at upper limits near mixing ratios of ∼10^-1 and ∼10^-2, respectively.
In order to quantitatively determine the detectability of molecules within the spectra, we use two hypothesis-testing procedures. The two hypotheses being tested are: this spectrum suggests molecular absorption, and the null hypothesis: this spectrum does not suggest any molecular absorption. The first test is a Frequentist ∆χ² model comparison. We compute the difference in χ² between the best fit from the full model with all of the gases (six total parameters) and the best-fit null model without any of the gases (two total parameters). The best fit atmospheric state is determined using optimal estimation. This ∆χ² and the change in degrees of freedom (four) can be used to compute a p-value, or the probability of obtaining a larger delta chi-squared for repeated sets of measurements. This p-value can then be converted into a significance level (e.g., Gregory 2005). Table 1 shows the results. If we take 3.6σ to be the criterion for a significant detection (Trotta 2008), then from this test we find that only the SVT08 and W13 data allow for statistically significant molecular detections. Although consistent with molecular absorption, the G12 data do not constitute a molecular detection as measured by the ∆χ² test.
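A standard version of this conversion can be sketched in a few lines of SciPy (our illustration, not the paper's code; the two-sided Gaussian conversion matches the σ values quoted in Table 1 well, while the exact p-values there differ somewhat, presumably reflecting details of the authors' procedure):

```python
from scipy import stats

def delta_chi2_significance(chi2_full, chi2_null, delta_dof=4):
    """p-value and two-sided Gaussian significance of a chi^2 improvement."""
    delta = chi2_null - chi2_full
    p = stats.chi2.sf(delta, df=delta_dof)  # P(Delta chi^2 >= observed | null)
    sigma = stats.norm.isf(p / 2.0)         # two-sided sigma equivalent
    return delta, p, sigma

# W13 values from Table 1: Delta chi^2 = 22.55 -> p ~ 1.6e-4, ~3.8 sigma
print(delta_chi2_significance(chi2_full=18.32, chi2_null=40.87))
```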
Fig. 1.- Corrected and uncorrected spectra. Left: Results for three independent reductions, with completely different methods, of the same NICMOS measurements of the transmission spectrum of HD 189733b (panels labeled Swain et al. 2008, Gibson et al. 2012, and Waldmann et al. 2013). The three results are similar in terms of shape, but there are differences in the uncertainties. Right: The transmission spectrum before (red) and after (black) correction using an instrument model (Swain et al. 2008). Differences between the red and black points for a given wavelength indicate the size of the instrument systematic errors, which are relatively small.

The second hypothesis-testing procedure is considered more rigorous as it relies on the Bayesian evidence (also known as the marginal likelihood) resulting from the MCMC retrievals (see e.g., Benneke & Seager 2013), and it is these findings we highlight in the abstract. The Bayesian evidence is a multidimensional integral over the volume of phase space explored by the MCMC. The computation of this integral is non-trivial and there are numerous approaches to compute the evidence, such as the harmonic mean (Newton and Raftery 1994), Laplace approximation (e.g., Kass and Raftery 1995), nested sampling (Skilling 2004), and others. Each approach has its advantages and pitfalls. We choose to use the Numerical Lebesgue Algorithm (NLA) approach (Weinberg 2012), which is a variant of the harmonic mean approximation but solves the problem of the large truncation error. This approach is straightforward to implement and can be applied to the MCMC chains generated by CHIMERA.
We have shown through both a Frequentist and Bayesian model comparison exercise that the SVT08 data provides the strongest evidence for molecular detection followed closely by the W13 data. In contrast, the G12 data are consistent with the presence of molecular features and, at best, constitute a weak detection as measured by a Bayesian model comparison, but do not constrain abundances for any of the molecular species.
In displaying the results for the range of models fit to each spectrum, we have included the model prediction for the 1.1 to 1.5 µm wavelength regions. The models fit to the SVT08 and W13 data predict the 1.35 µm water opacity feature, whereas the model fit to the G12 data can be consistent with a wide range of possibilities, ranging from flat to significant spectral modulation. The model predictions in the 1.1 to 1.5 µm spectral region are displayed to facilitate potential comparison of these mod- els with WFC3 IR grism observations of the HD 189733b transmission spectrum. Extending the wavelength range of the transmission spectrum is highly desirable; however, there are two critical caveats to consider. First, the model results here only provide a prediction and should be updated with a model fitting all the data if and when WFC3 results for this object become available. Second the WFC3 IR grism and NICMOS measurements are taken ∼6 years apart, and the amount of haze, inferred from visible measurements (e.g. Sing et al. 2011), in the planet's atmosphere could have changed. Notwithstanding these caveats, the models fit to the SVT08 and W13 data predict a water absorption feature of ∼300 to 400 ppm in the WFC3 IR grism passband.
DISCUSSION
While the retrieval results for all three spectra are consistent with the presence of water and methane, there are significant differences in the degree of constraint the spectra provide on the gas abundances. The three spectra represent very different approaches, undertaken by different practitioners, to determining the exoplanet's spectrum. One might rightly ask, which of these three spectra should be used in studies of HD 189733b? We recommend use of the SVT08 result for three reasons. First, the relative similarity of the significance of molecular detection in SVT08 and W13 suggests the G12 spectrum represents a less optimal treatment of the data. Second, the W13 approach, as clearly stated in their paper, does not represent the optimal method for estimating the spectrum and is expected to provide larger measurement uncertainties than the SVT08 method. Third, the extensive level of due diligence, outlined below, that was applied to the SVT08 data.

Fig. 2.- Retrieval fits to the data (panels labeled Swain et al. 2008, Waldmann et al. 2013, and Gibson et al. 2012; horizontal axis: wavelength). The MCMC retrieval produces many hundreds of thousands of spectra. We summarize the spread in the spectra with a median (dark blue), and 68% (dark red) and 95% (light red) confidence intervals. The light blue curve is the best fit. The light blue circles are the best fit model binned to the data. For each data set, the model predictions for 1.1 to 1.5 µm are included to facilitate comparison with future WFC3 IR grism observations.
The approach used in the SVT08 analysis, based on experience with instrumentation (Swain et al. 1998, 1999, 2003, 2004; Vasisht et al. 1998, 2003, 2004, 2006), was to assume systematic errors were present, and to exhaustively search the data to identify and remove these errors through modeling the instrument performance in terms of basic, measurable, instrument properties. The SVT08 team used the image and header data to construct ancillary data products that measured basic instrument characteristics such as pointing, focus, grism rotation, observatory orbital phase, and focal plane array temperature. Using the out-of-eclipse spectrophotometric time series, with the assumption that temporal changes in the measured spectral photometric flux were due to linear changes in these instrument parameters, a model for the measured flux was constructed. The linearity assumption was explicitly tested and, for one parameter (orbital phase), it was found that inclusion of a dependence on the square of this parameter decreased variance in the model-data residuals. Periodograms confirmed that the modeling process effectively removed the temporal correlations from the spectral photometric time series (see Figure 4). A small amount of wavelength-correlated noise (∼30% of the random noise) was identified and removed by subtraction of an optimally weighted channel average for each sample. The instrument model plus an astrophysical model incorporating limb darkening was then applied to the data. The robustness of the spectral eclipse depth estimate was verified by extensive data removal and refitting, as well as by investigating the possible effects of star spots. The magnitude of the corrections applied by the instrument model was determined (see Figure 1), and consistency of the model-corrected broadband eclipse depth with the non-model-corrected broadband eclipse depth was confirmed. An extensive description of the calibration process summarized here appeared in the SVT08 supplementary material and we refer the reader to that source for further information.
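The linear instrument model described above amounts to a least-squares regression of the flux time series on the measured instrument state vectors, with an extra quadratic term in orbital phase. A minimal sketch follows (our illustration; all variable names are placeholders for the measured state vectors):

```python
import numpy as np

def decorrelate(flux, pointing, focus, rotation, phase, temperature):
    """
    Fit the out-of-eclipse flux as a linear function of the instrument
    state vectors (plus a quadratic orbital-phase term) and subtract
    the fitted systematics from the time series.
    """
    X = np.column_stack([
        np.ones_like(flux),                      # constant baseline
        pointing, focus, rotation, temperature,  # linear instrument terms
        phase, phase**2,                         # linear + quadratic phase
    ])
    coeffs, *_ = np.linalg.lstsq(X, flux, rcond=None)
    systematics = X @ coeffs
    return flux - systematics + flux.mean(), coeffs
```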
CONCLUSIONS
Two versions of the calibrated spectra suggest the presence of molecular absorbers at high confidence. All three methods show the presence of a combination of water and/or methane is probable. The fact that three different data reduction methods have produced similar spectra, giving similar spectral retrieval results in two cases and consistent results in all three, is a remarkable validation of both this data set and the NICMOS instrument in general. This level of independent results confirmation is unique in exoplanet data analysis and is a tribute to the hard work and dedication of all the teams involved. Based on this collective effort, we can resolve an ongoing debate and state with a high degree of confidence that the NICMOS measurements show the presence of molecular absorbers in the atmosphere of HD 189733b.
ACKNOWLEDGEMENTS
The authors thank Gautam Vasisht for comments on this manuscript. The research described in this publication was carried out in part at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. Copyright 2013. All rights reserved.
Fig. 3.- Marginalized gas posteriors resulting from each data set. The x-axis is the log of the gas mixing ratios in parts-per-million (ppm). The horizontal blue line in each is the flat prior used in the MCMC retrieval.
Fig. 4.- The periodogram for measured spectral flux showing significant red noise (blue plot) that is removed by the instrument model (green plot). The arrow indicates the orbital period of Hubble. This illustrates one of the kinds of tests done to validate the instrument model used for correcting the data in the SVT08 analysis.
TABLE 1
∆χ² test results. The χ² values are from the best fits for each scenario and data set. The null model is the gas-free model. The change in degrees of freedom is four.

                         SVT08         W13          G12
With Gases χ²            66.51         18.32        23.02
Null χ²                 100.25         40.87        25.68
∆χ²                      33.74         22.55         2.66
p-value              3.98×10^-7    7.15×10^-5        0.176
Detection Level (σ)        5.1           4.0          1.4
TABLE 2
Bayesian model comparison resulting from the MCMC retrievals. Z0 is the evidence from the full model, which includes all of the gases. Z is the evidence computed from the null model. B is the Bayes factor, which is the ratio of Z0 to Z.

                         SVT08         W13          G12
With Gases ln(Z0)       127.22         87.96       133.77
Null ln(Z)              110.95         77.63       132.45
ln(B)                    16.27         10.33         1.32
p-value              1.56×10^-9    8.60×10^-7        0.027
Detection Level (σ)        6.0           4.9          2.2
[Figure panel residue removed; recoverable axis information: horizontal axis wavelength [µm], vertical axis (R_p/R_*)^2 (%), with a panel titled Gibson et al. 2012.]
Benneke, B., & Seager, S. 2012, ApJ, 753, 100
Benneke, B., & Seager, S. 2013, arXiv:1306.6325
Berta, Z. K., Charbonneau, D., Desert, J.-M., et al. 2012, ApJ, 747, 35
Brown, T. M. 2001, ApJ, 553, 1006
Crouzet, N., McCullough, P. R., Burke, C. J., & Long, D. 2012, ApJ, 761, 7
Deming, D., Wilkins, A., McCullough, P., et al. 2013, ApJ, 774, 95
Fortney, J. J., & Shabram, M. 2010, ApJ, 709, 95
Gibson, N. P., Pont, F., & Aigrain, S. 2011, MNRAS, 411, 2199
Gibson, N. P., Aigrain, S., Roberts, S., et al. 2012, MNRAS, 419, 2683
Gregory, P. C. 2005, Bayesian Logical Data Analysis for the Physical Sciences: A Comparative Approach with 'Mathematica' Support, ISBN 0-521-84150-X (hardback)
Grillmair, C., Burrows, A., Charbonneau, D., et al. 2008, Nature, 456, 767
Huitson, C. M., Sing, D. K., Pont, F., et al. 2013, MNRAS, 434, 3252
Kass, R., & Raftery, A. 1995, Journal of the American Statistical Association, 90, 773-795
Kreidberg, L., Bean, J. L., Desert, J.-M., et al. 2014, Nature, 505, 69
Lee, J.-M., Fletcher, L. N., & Irwin, P. G. J. 2012, MNRAS, 420, 170
Line, M. R., Zhang, X., Vasisht, G., et al. 2012, ApJ, 749, 93
Line, M. R., Wolf, A., Zhang, X., et al. 2013a, arXiv:1304.5561
Line, M. R., Knutson, H., Deming, D., Wilkins, A., & Desert, J.-M. 2013b, ApJ, 778, 183
Madhusudhan, N., & Seager, S. 2009, ApJ, 707, 24
Madhusudhan, N., Harrington, J., Stevenson, K. B., et al. 2011, Nature, 469, 64
Mandell, A. M., Haynes, K., Sinukoff, E., et al. 2013, ApJ, 779, 128
Newton, M. A., & Raftery, A. E. 1994, J. Royal Stat. Soc. B, 56
Pont, F., Gilliland, R. L., Knutson, H., Holman, M., & Charbonneau, D. 2009, MNRAS, 393, 6
Press, W. H., Teukolsky, S., Vetterling, W. T., & Flannery, B. 1995, Numerical Recipes: The Art of Scientific Computing, 2nd ed., Cambridge Univ. Press
Rodgers, C. D. 2000, Inverse Methods for Atmospheric Sounding: Theory and Practice
Skilling, J. 2004, AIP Conf. Proc., 735, 395
Sellke, T., Bayarri, M. J., & Berger, J. O. 2001, The American Statistician, 55, 62
Swain, M. R., Bradford, C. M., Stacey, G. J., et al. 1998, SPIE, 3354, 480
Swain, M. R. 1999, PASP, 111, 1021
Swain, M. R., Dumont, P. J., Lawson, P. R., et al. 2003, SPIE, 4852, 645
Swain, M. R., Walker, C. K., Traub, W. A., et al. 2004, SPIE, 5491, 176
Swain, M. R., Vasisht, G., & Tinetti, G. 2008, Nature, 452, 329
Swain, M. R., Vasisht, G., Tinetti, G., et al. 2009a, ApJ, 690, L114
Swain, M. R., Tinetti, G., Vasisht, G., et al. 2009b, ApJ, 704, 1616
Swain, M. R., Deroo, P., Tinetti, G., et al. 2013, Icarus, 225, 432
Tinetti, G., Vidal-Madjar, A., Liang, M.-C., et al. 2007, Nature, 448, 169
Tinetti, G., Deroo, P., Swain, M., et al. 2010, ApJ, 712, 139
Trotta, R. 2008, Contemporary Physics, 49, 71
Vasisht, G., Boden, A. F., Colavita, M. M., et al. 1998, SPIE, 3350, 354
Vasisht, G., Booth, A. J., Colavita, M. M., et al. 2003, SPIE, 4838, 824
Vasisht, G., & Colavita, M. M. 2004, SPIE, 5491, 567
Vasisht, G., Ligon, E. R., Bloemhof, E. E., & Colavita, M. M. 2006, SPIE, 6268, 92
Wakeford, H. R., Sing, D. K., Deming, D., et al. 2013, MNRAS, 435, 3581
Waldmann, I. P. 2012, ApJ, 747, 12
Waldmann, I. P., Tinetti, G., Deroo, P., et al. 2013, ApJ, 766, 9
Weinberg, M. 2012, Bayesian Analysis, 7
| [] |
[
"Prepared for submission to JCAP Revisiting f (R) gravity's rainbow: Inflation and primordial fluctuations",
"Prepared for submission to JCAP Revisiting f (R) gravity's rainbow: Inflation and primordial fluctuations"
] | [
"Yoelsy Leyva yoelsy.leyva@academicos.uta.cl \nDepartamento de Física\nFacultad de Ciencias\nUniversidad de Tarapacá\nCasilla 7-DAricaChile\n",
"Giovanni Otalora giovanni.otalora@academicos.uta.cl \nDepartamento de Física\nFacultad de Ciencias\nUniversidad de Tarapacá\nCasilla 7-DAricaChile\n"
] | [
"Departamento de Física\nFacultad de Ciencias\nUniversidad de Tarapacá\nCasilla 7-DAricaChile",
"Departamento de Física\nFacultad de Ciencias\nUniversidad de Tarapacá\nCasilla 7-DAricaChile"
] | [] | We study inflation and the generation of primordial fluctuations in f (R) gravity's rainbow. We calculate the cosmological perturbations and then the scalar and tensor primordial power spectrum. We contrast the predictions of the model with the current observational data from PLANCK and BICEP/Keck. Particularly, we found new results for the scalar spectral index n s and the tensor-to-scalar ratio r along with new observational constraints on the rainbow functions. | 10.1088/1475-7516/2023/04/030 | [
"https://export.arxiv.org/pdf/2206.09000v2.pdf"
] | 249,890,073 | 2206.09000 | 12083b961dcdb75f1338b18986515804aa512fa9 |
Prepared for submission to JCAP Revisiting f (R) gravity's rainbow: Inflation and primordial fluctuations
Yoelsy Leyva yoelsy.leyva@academicos.uta.cl
Departamento de Física
Facultad de Ciencias
Universidad de Tarapacá
Casilla 7-DAricaChile
Giovanni Otalora giovanni.otalora@academicos.uta.cl
Departamento de Física
Facultad de Ciencias
Universidad de Tarapacá
Casilla 7-DAricaChile
Prepared for submission to JCAP Revisiting f (R) gravity's rainbow: Inflation and primordial fluctuations
We study inflation and the generation of primordial fluctuations in f (R) gravity's rainbow. We calculate the cosmological perturbations and then the scalar and tensor primordial power spectrum. We contrast the predictions of the model with the current observational data from PLANCK and BICEP/Keck. Particularly, we found new results for the scalar spectral index n s and the tensor-to-scalar ratio r along with new observational constraints on the rainbow functions.
Introduction
A quasi-exponential accelerating phase, before the radiation decelerating era, that corresponds to cosmic inflation is widely accepted as the standard paradigm of the early Universe. It can give a solution to the several long-standing puzzles of the Hot Big-Bang standard cosmological model [1-4]. Furthermore, the most intriguing feature of inflation is that it gives us a causal interpretation of the origin of the Cosmic Microwave Background (CMB) temperature anisotropies, and it provides us with a mechanism to explain the Large-Scale Structure (LSS) [5]. This mechanism is based on the generation of quantum fluctuations which are amplified in physical scale during inflation, leading to Gaussian, scale-invariant and adiabatic primordial density perturbations [5,6]. The measure of the scale dependence of the power spectrum of curvature perturbation is given by the scalar spectral index $n_s$, which is tightly constrained by the latest PLANCK data [7] to be $n_s = 0.9649 \pm 0.0042$ (at 68% C.L.). This result indicates that the primordial power spectrum is nearly scale independent. A further prediction of inflationary models is the generation of tensor perturbations as a background of primordial gravitational waves (PGWs), whose amplitude can be parameterized in terms of the tensor-to-scalar ratio $r$ [8]. As for the tensor-to-scalar ratio, we have not detected tensor perturbations until now. New data from BICEP/Keck 2021 [9] have been published, leading to a considerably stronger upper bound on $r$: $r_{0.05} = 0.014^{+0.010}_{-0.011}$ ($r_{0.05} < 0.036$ at 95% C.L.), in comparison to PLANCK 2018 data [7].
Several different models have been studied to explain the physics of inflation. Among them we have f(R) gravity, which includes Starobinsky inflation [1], as well as generalized scalar-tensor models [10] that encompass minimally and non-minimally coupled scalar field models. A non-minimal coupling arises from a variety of model-building efforts, e.g., in supergravity and string theory, and it is also required as a counterterm when considering renormalization of scalar fields in curved spacetimes [11]. Both f(R) gravity and scalar field models have been widely studied in the literature and their predictions for inflation are very well known. Starobinsky inflation is currently the best inflationary model in explaining the latest observations [7,9]. On the other hand, scalar field models have traditionally been used as the standard prototype for inflationary models. Although the simplest and well-known examples of scalar potentials (the monomial power-law inflaton potentials: the quadratic chaotic potential and the quartic potential) have been strongly disfavored by current observations [9], scalar field inflationary models may still be compatible with observations for other kinds of potentials (asymptotically flat, plateau-like potentials). This is the case of the Einstein frame potentials associated with the Starobinsky model [1], non-minimally coupled Higgs inflation [12-15], T-model and E-model α-attractors [16-18], and the D-brane KKLT potential [18-21]. Another alternative class of modified gravity theories that have recently been addressed in the literature in explaining inflation are the torsional modified gravity theories [22]. These theories are extensions of the so-called teleparallel gravity [23-33]. The most popular examples are f(T) gravity [34-38], scalar-torsion gravity [39-47], and higher-order teleparallel gravity [48,49]. The calculation of the scalar and tensor power spectrum for generalized scalar-torsion theories, taking into account the breaking of local Lorentz symmetry found in these theories, has been performed in Ref. [50]. A reconstruction scheme for inflation through the parametrization (or attractor) of the inflationary observables $n_s$ and $r$ as functions of the number of e-folds $N$ has been carried out in Ref. [51].
Gravity's Rainbow is an interesting proposal for a small-scale, ultraviolet (UV) modification of GR that keeps GR as the low-energy limit [52]. Just as in other quantum gravity theories, the origin of Gravity's Rainbow is related to the non-renormalizability of GR and the difficulties that arise when trying to quantize gravity [53]. Nevertheless, unlike what occurs in other quantum gravity frameworks [54], in Gravity's Rainbow the ultraviolet modification appears directly in the spacetime metric. Since it is an extension of doubly special relativity to curved spacetimes, the usual energy-momentum relations are modified through new contributing terms that depend on the probe energy. Thus, in Gravity's Rainbow the metric tensor is modified near the Planck scale and becomes a function of the energy of the particle probing the spacetime [52]. Furthermore, the study of inflation in the context of f(R) gravity with an energy-dependent spacetime metric, the so-called f(R) gravity's rainbow, has already been performed in the literature. In Ref. [55] (see also Ref. [56]) the authors concentrated on the Starobinsky model with the rainbow functions being power-law functions of the Hubble rate, and they compared their results for $n_s$ and $r$ with observational data. Later, in Ref. [57], the analysis was extended to the logarithmic-corrected R² model and the Einstein-Hu-Sawicki model, also performing a comparison with the latest Planck 2018 data. Finally, the study of slow-roll inflation in the framework of f(T) gravity's rainbow was achieved in Ref. [58]. In this latter article the authors carried out a full calculation of the inflationary primordial perturbations and contrasted their results with the latest data from Planck 2018 and BICEP/Keck 2021.
In the present paper we study inflation and the generation of primordial fluctuations in f (R) gravity's rainbow. Our goal is to carry out a detailed investigation of the slow-roll dynamics and the modifications to the scalar and tensor power spectrum sourced by the rainbow functions that arise in the effective spacetime metric. The manuscript is organized as follows: in Section 2, we review the background equations for f (R) gravity's rainbow. In Section 3 we establish the general setup for slow-roll inflation. In Section 4 we study the evolution of the primordial scalar and tensor fluctuations by perturbing the modified field equations and by obtaining the Mukhanov-Sasaki equation. In Section 5 we apply our general results to a concrete inflationary model and confront its predictions with the current PLANCK and BICEP/Keck data. Finally, in Section 6 we give the conclusions.
f (R) Gravity's Rainbow
One starts from the action
$$S = \frac{1}{2\kappa^2} \int d^4x \sqrt{-g}\, f(R) + S_m\,, \qquad (2.1)$$
where $\kappa^2 = 8\pi G$, $f(R)$ is a function of the Ricci scalar and $S_m$ is the action of matter.
Varying this action with respect to the spacetime metric one obtains the following field equations
$$F(R) R_{\mu\nu} - \frac{1}{2} f(R) g_{\mu\nu} - \nabla_\mu \nabla_\nu F(R) + g_{\mu\nu} \Box F(R) = \kappa^2 T_{\mu\nu}\,, \qquad (2.2)$$
where $F(R) \equiv \partial f/\partial R$, $\Box = g^{\mu\nu}\nabla_\mu\nabla_\nu$ is the Laplace-Beltrami operator, and
$$T_{\mu\nu} = -\frac{2}{\sqrt{-g}} \frac{\delta S_m}{\delta g^{\mu\nu}}\,, \qquad (2.3)$$
is the energy-momentum tensor of matter.
In the previous equations the spacetime metric $g$ does not contain any ultraviolet modification through the dependence on the energy of the particles probing the spacetime [52]. To study gravity's rainbow the spacetime metric $g$ must be replaced by the effective energy-dependent rainbow metric [52]
$$g(E) = \eta_{AB}\, \tilde{e}^A(E) \otimes \tilde{e}^B(E)\,, \qquad (2.4)$$
where η AB = diag (−1, 1, 1, 1) is the Minkowski tangent space metric, andẽ A are the corresponding energy-dependent tetrad frame such that
$$\tilde{e}^0 = \frac{1}{\tilde{f}(E)}\, \bar{e}^0\,, \qquad \tilde{e}^{\hat{i}} = \frac{1}{\tilde{g}(E)}\, \bar{e}^{\hat{i}}\,, \qquad (2.5)$$
where $\tilde{f}(E)$ and $\tilde{g}(E)$ are functions of the energy of the probe particle and $\bar{e}^A$ are the original frame fields without rainbow effect. According to the correspondence principle [52], the rainbow functions $\tilde{f}(E)$ and $\tilde{g}(E)$ satisfy the limit conditions $\tilde{f}(E) \to 1$ and $\tilde{g}(E) \to 1$ for $E/E_{Pl} \ll 1$, with $E_{Pl}$ the Planck energy. Thus, in order to study rainbow gravity effects on the dynamics of inflation in $f(R)$ gravity, the spacetime metric consistent with homogeneity and isotropy conditions is assumed to be [59]
$$ds^2 = -\frac{N^2}{\tilde{f}^2}\, dt^2 + \frac{a^2}{\tilde{g}^2}\, \delta_{ij}\, dx^i dx^j\,. \qquad (2.6)$$
In the limit $\tilde{f}, \tilde{g} \to 1$ we recover the standard Friedmann-Lemaître-Robertson-Walker (FLRW) background [5]. Through the rescaling of the fields (i.e. a conformal transformation) and field redefinitions one is led to the Einstein frame, where the action of $f(R)$ gravity (Eq. (2.1)) is seen to be equivalent to the action of a scalar field (inflaton) minimally coupled to gravity and directly coupled to matter [60]. During inflation the evolution of the universe is dominated by the dynamics of the inflaton and then the matter fluid can be neglected [5].
Thus, the inflaton field propagating the degree of freedom from modified gravity in the field representation can be considered as our probe particle in the context of gravity's rainbow [55,59]. Its energy density in the slow-roll regime is roughly proportional to the effective scalar potential, which ultimately becomes a function of the curvature scalar in the Jordan frame and hence also of the Hubble expansion rate $H \equiv \frac{1}{a}\frac{da}{dt}$ [60]. Therefore, for an expanding universe the rainbow functions $\tilde{f}, \tilde{g}$ can be assumed to be functions of time [55,59].
Thus, the gravitational action of f (R) gravity's rainbow becomes
$$S = \frac{1}{2\kappa^2} \int dt\, d^3x\, \frac{a^3 N}{\tilde{f}\, \tilde{g}^3}\, f(R) + S_m\,, \qquad (2.7)$$
where
$$R = \frac{6\tilde{f}^2 (2H^2 + \dot{H})}{N^2} - \frac{6H\tilde{f}^2 \dot{N}}{N^3} + \frac{6H\tilde{f}\dot{\tilde{f}}}{N^2} - \frac{24H\tilde{f}^2 \dot{\tilde{g}}}{N^2 \tilde{g}} + \frac{6\tilde{f}^2 \dot{N}\dot{\tilde{g}}}{N^3 \tilde{g}} - \frac{6\tilde{f}\dot{\tilde{f}}\dot{\tilde{g}}}{N^2 \tilde{g}} - \frac{6\tilde{f}^2 \ddot{\tilde{g}}}{N^2 \tilde{g}} + \frac{18\tilde{f}^2 \dot{\tilde{g}}^2}{N^2 \tilde{g}^2}\,. \qquad (2.8)$$
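Equation (2.8) can be checked symbolically. The sketch below (our illustration, assuming the rainbow functions depend only on time, as in the text) computes the Ricci scalar of the metric (2.6) directly from the Christoffel symbols with SymPy; up to simplification, the output should reproduce Eq. (2.8).

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
N, a, f, g = (sp.Function(name)(t) for name in ('N', 'a', 'ftilde', 'gtilde'))

coords = [t, x, y, z]
G = sp.diag(-N**2 / f**2, a**2 / g**2, a**2 / g**2, a**2 / g**2)  # metric (2.6)
Ginv = G.inv()

def christoffel(l, m, n):
    # Gamma^l_{mn} = (1/2) g^{ls} (d_n g_{sm} + d_m g_{sn} - d_s g_{mn})
    return sp.Rational(1, 2) * sum(
        Ginv[l, s] * (sp.diff(G[s, m], coords[n]) + sp.diff(G[s, n], coords[m])
                      - sp.diff(G[m, n], coords[s]))
        for s in range(4))

Gam = [[[sp.simplify(christoffel(l, m, n)) for n in range(4)]
        for m in range(4)] for l in range(4)]

def ricci(m, n):
    # R_{mn} = d_l Gamma^l_{mn} - d_n Gamma^l_{ml}
    #          + Gamma^l_{ls} Gamma^s_{mn} - Gamma^l_{ns} Gamma^s_{ml}
    expr = 0
    for l in range(4):
        expr += sp.diff(Gam[l][m][n], coords[l]) - sp.diff(Gam[l][m][l], coords[n])
        for s in range(4):
            expr += Gam[l][l][s] * Gam[s][m][n] - Gam[l][n][s] * Gam[s][m][l]
    return expr

R = sp.simplify(sum(Ginv[m, n] * ricci(m, n) for m in range(4) for n in range(4)))
print(R)  # compare with Eq. (2.8)
```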
From the point of view of the effective action in Eq. (2.7), one can treat f (R) gravity's rainbow as an energy-dependent modified gravity theory [61]. Thus, starting from action (2.7) we can study the slow-roll inflationary dynamics of the cosmological background and the evolution of the linear perturbations.
Varying the action with respect to N and taking N = 1 we obtain
$$3\left(H^2 F + H\dot{F}\right) - \frac{6HF\dot{\tilde{g}}}{\tilde{g}} - \frac{3\dot{F}\dot{\tilde{g}}}{\tilde{g}} + \frac{3F\dot{\tilde{g}}^2}{\tilde{g}^2} - \frac{RF}{2\tilde{f}^2} + \frac{f}{2\tilde{f}^2} = 0\,, \qquad (2.9)$$
while varying with respect to the scale factor a yields
$$3H^2 F - 6H\dot{F} - \frac{3\dot{F}\dot{\tilde{f}}}{\tilde{f}} - \frac{6HF\dot{\tilde{g}}}{\tilde{g}} + \frac{6\dot{F}\dot{\tilde{g}}}{\tilde{g}} + \frac{3F\dot{\tilde{g}}^2}{\tilde{g}^2} - 3\ddot{F} - \frac{3f}{2\tilde{f}^2} + \frac{RF}{2\tilde{f}^2} = 0\,. \qquad (2.10)$$
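As a quick consistency check (our own reduction, not spelled out in the text), setting $\tilde{f} = \tilde{g} = 1$ reduces Eqs. (2.9) and (2.10) to the standard $f(R)$ background equations,

```latex
3FH^2 = \frac{RF - f}{2} - 3H\dot{F}\,, \qquad
\ddot{F} - H\dot{F} + 2F\dot{H} = 0\,,
```

where the second relation follows after eliminating $f$ with the first and using $R = 6(2H^2 + \dot{H})$.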
We have neglected the matter sector since the corresponding energy and pressure densities behave as inverse powers of the scale factor, which grows quasi-exponentially during inflation. The set of equations (2.9) and (2.10) constitutes the background equations of f(R) gravity's rainbow. In the next section we introduce the general setup for slow-roll inflation.
General setup for slow-roll inflation
In order to study slow-roll inflation we introduce the following set of parameters
$$\epsilon = -\frac{\dot H}{H^2}, \quad \delta_{\tilde f} = \frac{\dot{\tilde f}}{H\tilde f}, \quad \delta_{\tilde g} = \frac{\dot{\tilde g}}{H\tilde g}, \quad \eta_{\tilde f} = \frac{\dot\delta_{\tilde f}}{H\delta_{\tilde f}}, \quad \eta_{\tilde g} = \frac{\dot\delta_{\tilde g}}{H\delta_{\tilde g}}, \quad \delta_F = \frac{\dot F}{HF}. \quad (3.1)$$
During inflation the condition $\epsilon \ll 1$ is satisfied. To comply with this condition, all the parameters defined in Eq. (3.1) are much smaller than unity.
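As a simple illustration (our own, using the power-law ansatz adopted later in Section 5): for $\tilde f = (H/M)^\lambda$ with constant $\lambda$, one finds
$$\delta_{\tilde f} = \frac{\dot{\tilde f}}{H\tilde f} = \lambda\,\frac{\dot H}{H^2} = -\lambda\,\epsilon,$$
so this rainbow slow-roll parameter is automatically small whenever $\epsilon$ is.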
In terms of these parameters we can rewrite R as
$$R = R_0\left[1 - \frac{1}{2}\epsilon + \frac{1}{2}\delta_{\tilde f} - 2\delta_{\tilde g} - \frac{1}{2}\delta_{\tilde f}\delta_{\tilde g} + \delta_{\tilde g}^2 + \frac{1}{2}\epsilon\,\delta_{\tilde g} - \frac{1}{2}\delta_{\tilde g}\eta_{\tilde g}\right], \quad (3.2)$$
where $R_0 = 12H^2\tilde f^2$. Thus, up to first order, Eq. (2.9) is written as
$$2\delta_{f,R} - 3\mu\,\epsilon + 3\mu\,\delta_{\tilde f} + 4(\mu - 1)\delta_{\tilde g} + \mathcal O(\epsilon^2) = 0, \quad (3.3)$$
where we have defined
$$\delta_{f,R} = \left.\frac{2f - RF}{RF}\right|_{R=R_0}, \quad (3.4) \qquad \mu \equiv \left.\frac{RF_{,R}}{F}\right|_{R=R_0}, \quad (3.5)$$
and $\delta_\mu = \dot\mu/(H\mu)$. Also, we have used the relation
$$\delta_F = 2\mu\left(\delta_{\tilde f} - \epsilon\right) + \mathcal O(\epsilon^2). \quad (3.6)$$
Then, by solving Eq. (3.3) for $\epsilon$ one obtains
$$\epsilon = \frac{2\delta_{f,R}}{3\mu} + \delta_{\tilde f} + \frac{4}{3}\delta_{\tilde g}\left(1 - \frac{1}{\mu}\right), \quad (3.7)$$
valid for $\mu \neq 0$. Also, up to first order, Eq. (2.10) yields
$$6\delta_{f,R} - (4 + 5\mu)\epsilon + (4 + 5\mu)\delta_{\tilde f} + 12(\mu - 1)\delta_{\tilde g} + \mathcal O(\epsilon^2) = 0. \quad (3.8)$$
This equation is satisfied for $\mu = 1 + \mathcal O(\epsilon)$ and $\delta_\mu = \mathcal O(\epsilon^2)$. For instance, this holds for functions of the form $f(R) = R + R^2 A(R)$, where $A(R)$ is a slowly varying function obeying the slow-roll conditions $|A'(R)| \ll A(R)/R$ and $|A''(R)| \ll A(R)/R^2$; the case $A(R) = 1/(6M^2)$, with $M$ a mass scale, corresponds to the Starobinsky model [1,60,62]. In fact, for these models one finds $\mu = 1 - \delta_{f,R}$ with $\delta_{f,R} = 1/(2R_0 A(R_0)) \ll 1$. Notice that from Eqs. (3.3) and (3.7), one finds that the parameter $\delta_{\tilde g}$ associated with the rainbow function $\tilde g$ does not contribute to the dynamics of slow-roll inflation at first order.
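As a quick check of the last statement (our own consistency check, not part of the original derivation): for $f(R) = R + R^2 A(R)$ with $A'$ and $A''$ negligible, one has $F = f_{,R} \approx 1 + 2RA$ and $F_{,R} \approx 2A$, so
$$\mu = \frac{RF_{,R}}{F} \approx \frac{2RA}{1 + 2RA} = 1 - \frac{1}{1 + 2RA}, \qquad \delta_{f,R} = \frac{2f - RF}{RF} \approx \frac{R}{R + 2R^2A} = \frac{1}{1 + 2RA},$$
so that $\mu \approx 1 - \delta_{f,R}$, with $\delta_{f,R} \approx 1/(2R_0A(R_0))$ when $2R_0A(R_0) \gg 1$.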
Below we study the evolution of the cosmological perturbations around the background metric (2.6).
Primordial Fluctuations
Scalar Perturbations
By starting from the general perturbed metric about the flat FLRW background [60], and using Eqs. (2.4) and (2.5), the corresponding perturbed metric with rainbow effect can be obtained. To study the primordial scalar fluctuations generated during inflation we assume the following scalarly perturbed metric with rainbow effect
$$ds^2 = -\frac{1 + 2\alpha}{\tilde f^2}\,dt^2 - \frac{2a}{\tilde f}\,\partial_i\beta\,dt\,dx^i + a^2\left[(1 + 2\psi)\delta_{ij} + 2\partial_i\partial_j\gamma\right]dx^i dx^j. \quad (4.1)$$
For simplicity we assumed $\tilde g = 1$. Thus, for $\tilde f = 1$ one recovers the usual scalarly perturbed metric [60]. Also, for convenience we introduce the following perturbed variables
$$\chi \equiv a\left(\beta + a\dot\gamma\right), \qquad A \equiv 3\left(H\alpha - \dot\psi\right) - \frac{\Delta}{a^2}\chi. \quad (4.2)$$
Then, the perturbed field equations with rainbow effects are given by
$$\frac{\Delta\psi}{a^2} + H\tilde f^2 A + \tilde f\left(\tilde f - 1\right)\frac{\Delta\beta}{a}\left(H + \frac{\dot F}{2F}\right) = -\frac{1}{2F}\left[\left(3H^2\tilde f^2 + 3\tilde f^2\dot H + 3H\tilde f\dot{\tilde f} + \frac{\Delta}{a^2}\right)\delta F - 3H\tilde f^2\,\delta\dot F + 3H\tilde f^2\dot F\,\alpha + \tilde f^2\dot F A\right], \quad (4.3)$$
$$H\alpha - \dot\psi = -\frac{1}{2F}\left(H\,\delta F + \dot F\,\alpha - \delta\dot F\right), \quad (4.4)$$
$$\dot\chi + \left(H + \frac{\dot{\tilde f}}{\tilde f}\right)\chi - \frac{\alpha}{\tilde f^2} - \frac{\psi}{\tilde f^2} + a\beta\left(\frac{\dot F}{\tilde fF} - \frac{\dot F}{F} + \frac{2H}{\tilde f} - 2H - \frac{\dot{\tilde f}}{\tilde f}\right) - a\left(1 - \frac{1}{\tilde f}\right)\dot\beta = \frac{1}{F}\left(\frac{\delta F}{\tilde f^2} - \dot F\chi\right), \quad (4.5)$$
$$\dot A + \left(2H + \frac{\dot{\tilde f}}{\tilde f}\right)A + \left(3\dot H + \frac{\Delta}{a^2\tilde f^2}\right)\alpha + \left(H - \frac{H}{\tilde f} - \frac{\dot F}{2\tilde fF} + \frac{\dot F}{2F} + \frac{\dot{\tilde f}}{\tilde f}\right)\frac{\Delta\beta}{a} + \left(1 - \frac{1}{\tilde f}\right)\frac{\Delta\dot\beta}{a}$$
$$= \frac{1}{2F}\left[3\,\delta\ddot F + 3\left(H + \frac{\dot{\tilde f}}{\tilde f}\right)\delta\dot F - 6\left(H^2 + \frac{\Delta}{6a^2\tilde f^2}\right)\delta F - 3\dot F\dot\alpha - \dot F A - \left(3H\dot F + 2\ddot F + \frac{2\dot F\dot{\tilde f}}{\tilde f} + \frac{2HF\dot{\tilde f}}{\tilde f}\right)\alpha\right], \quad (4.6)$$
$$\delta\ddot F + \left(3H + \frac{\dot{\tilde f}}{\tilde f}\right)\delta\dot F - \left(\frac{\Delta}{a^2} + \frac{R}{3}\right)\frac{\delta F}{\tilde f^2} + \frac{\dot F}{a}\left(\frac{1}{\tilde f} - 1\right)\Delta\beta = \dot F\left(A + \dot\alpha\right) + \left(2\ddot F + 3H\dot F + \frac{2\dot F\dot{\tilde f}}{\tilde f}\right)\alpha - \frac{F\,\delta R}{3\tilde f^2}, \quad (4.7)$$
where $\delta R$ is given by
$$\delta R = -2\left[\tilde f^2\dot A + \left(4H\tilde f^2 + \tilde f\dot{\tilde f}\right)A + \left(\frac{\Delta}{a^2} + 3\tilde f^2\dot H + 3H\tilde f\dot{\tilde f}\right)\alpha + \frac{2\Delta\psi}{a^2} + \frac{\Delta\beta}{a}\left(3H\tilde f^2 - 3H\tilde f + \tilde f\dot{\tilde f}\right) + \left(\tilde f - 1\right)\tilde f\,\frac{\Delta\dot\beta}{a}\right]. \quad (4.8)$$
Now let us consider the gauge transformation [60]
$$\hat t = t + \delta t, \qquad \hat x^i = x^i + \delta^{ij}\partial_j\,\delta x. \quad (4.9)$$
Then the scalar perturbations transform as
$$\hat\alpha = \alpha + \frac{\dot{\tilde f}}{\tilde f}\,\delta t - \dot{\delta t}, \quad (4.10) \qquad \hat\beta = \beta - \frac{1}{a\tilde f}\,\delta t + a\tilde f\,\dot{\delta x}, \quad (4.11)$$
$$\hat\psi = \psi - H\,\delta t, \quad (4.12) \qquad \hat\gamma = \gamma - \delta x, \quad (4.13)$$
and also
$$\hat{\delta F} = \delta F - \dot F\,\delta t. \quad (4.14)$$
In this way we define the following gauge invariant variables
$$\Phi = \alpha - \tilde f\,\frac{d}{dt}\left[a^2\tilde f\left(\dot\gamma + \frac{\beta}{a\tilde f}\right)\right], \quad (4.15) \qquad \Psi = -\psi + a^2\tilde f^2 H\left(\dot\gamma + \frac{\beta}{a\tilde f}\right), \quad (4.16) \qquad \mathcal R = \psi - \frac{H}{\dot F}\,\delta F. \quad (4.17)$$
By using the above set of perturbed field equations one can obtain the evolution equation for the primordial curvature perturbation. In a 4D diffeomorphism invariant theory, one can fix, at most, two scalar perturbation fields, either $\gamma = 0 = \beta$ (Newtonian gauge) or $\gamma = 0 = \delta F$ (uniform field gauge) [60]. Thus, from the gauge-invariant quantities defined in Eqs. (4.15), (4.16) and (4.17), one can completely fix the gauge degree of freedom by choosing the specific gauge conditions $\gamma = 0$ and $\delta F = 0$ [60]. Notice that we have assumed in Eq. (4.17) that $\dot F = F_{,R}\dot R \neq 0$, which is valid during inflation for a quasi-de Sitter background. Likewise, we have neglected the matter perturbations [60].
Mukhanov-Sasaki equation
We assume the gauge conditions $\gamma = 0$ and $\delta F = 0$ [60]. In this case one has $\mathcal R = \psi$. Thus, from Eq. (4.4) one is led to
$$\alpha = \frac{\dot{\mathcal R}}{H + \frac{\dot F}{2F}}. \quad (4.18)$$
Then, by replacing the previous result into Eq. (4.3) one is led to
$$A = -\frac{1}{H + \frac{\dot F}{2F}}\left[\frac{\Delta\mathcal R}{a^2\tilde f^2} + \frac{3H\dot F}{2F\left(H + \frac{\dot F}{2F}\right)}\dot{\mathcal R} - \frac{\left(\tilde f - 1\right)\Delta\beta}{a\tilde f}\right]. \quad (4.19)$$
Also, by putting the background equations into (4.6) one finds
$$\dot A + \left(2H + \frac{\dot F}{2F} + \frac{\dot{\tilde f}}{\tilde f}\right)A + \frac{3\dot F}{2F}\dot\alpha + \left(\frac{3\ddot F}{2F} + \frac{3H\dot F}{F} + \frac{3\dot F\dot{\tilde f}}{2\tilde fF}\right)\alpha + \frac{\Delta\alpha}{a^2\tilde f^2} + \left(H - \frac{H}{\tilde f} - \frac{\dot F}{2\tilde fF} + \frac{\dot F}{2F} + \frac{\dot{\tilde f}}{\tilde f}\right)\frac{\Delta\beta}{a} + \left(1 - \frac{1}{\tilde f}\right)\frac{\Delta\dot\beta}{a} = 0. \quad (4.20)$$
Then, by combining Eqs. (4.18), (4.19), (4.20), and with the help of the background equations, one finds in Fourier space the following equation for $\mathcal R$:
$$\ddot{\mathcal R} + \frac{\left(a^3Q_s\right)^{\cdot}}{a^3Q_s}\,\dot{\mathcal R} + c_s^2\,\frac{k^2}{a^2}\,\mathcal R = 0, \quad (4.21)$$
where
$$Q_s = \frac{3\tilde f\dot F^2}{2\kappa^2 F\left(\frac{\dot F}{2F} + H\right)^2}, \quad (4.22)$$
and where we also defined
$$\eta_F = \frac{\dot\delta_F}{H\delta_F}, \qquad \eta = \frac{\dot\epsilon}{H\epsilon}. \quad (4.26)$$
Furthermore, we have used the relation
$$\eta_F = \delta_\mu + \frac{\delta_{\tilde f}\,\eta_{\tilde f} - \eta\,\epsilon}{\delta_{\tilde f} - \epsilon}. \quad (4.27)$$
Also, the conditions for the absence of ghost and Laplacian instabilities read $Q_s > 0$ and $c_s^2 > 0$. The former condition implies $\tilde f/F > 0$; in particular, for $\tilde f > 0$ one requires $F = f_{,R} > 0$.
Notice that the propagation speed of the scalar perturbations $c_s(t)$ is time dependent due to the rainbow effects. Of course, this is not unusual, since in rainbow gravity the speed of light is time dependent [52]. Thus, in order to obtain a propagation speed of the scalar modes equal to 1 (the speed of light in natural units), we move to the "Rainbow Frame", where a new unit of time ("sound-horizon" time) is assumed. So, we replace the standard conformal time $d\tau = a^{-1}dt$ by the sound-horizon time $d\tau_{RF} = c_s(\tau)d\tau = (c_s(t)/a)dt$ [63]. Using the canonically-normalized Mukhanov variable $v = z\mathcal R$ with $z^2 = 2a^2Q_sc_s$, equation (4.21) is written as
$$v_k'' + \left(k^2 + M^2\right)v_k = 0, \quad (4.28)$$
where
$$M^2 = -\frac{z''}{z} = -\left(\frac{aH}{c_s}\right)^2\left(2 - \epsilon + \frac{3}{2}\eta + \frac{1}{2}s\right). \quad (4.29)$$
Thus, we write Eq. (4.28) in the following form
$$v_k'' + \left[k^2 - \frac{1}{\tau_{RF}^2}\left(\nu^2 - \frac{1}{4}\right)\right]v_k = 0, \quad (4.30)$$
where
$$\nu^2 = \frac{9}{4} + 3\epsilon + \frac{3}{2}\eta + \frac{9}{2}s. \quad (4.31)$$
The general solution to Eq. (4.30) is
$$v_k(\tau_{RF}) = \sqrt{-\tau_{RF}}\left[C_1 H^{(1)}_\nu(-k\tau_{RF}) + C_2 H^{(2)}_\nu(-k\tau_{RF})\right], \quad (4.32)$$
with $H^{(1,2)}_\nu$ the Hankel functions of the first and second kind, respectively [64]. Assuming the Bunch-Davies vacuum $v_k(\tau_{RF}) \to e^{-ik\tau_{RF}}/\sqrt{2k}$ in the ultraviolet regime $-k\tau_{RF} \gg 1$, and using some identities, one obtains
$$v_k(\tau_{RF}) = \frac{\sqrt\pi}{2}\,e^{i\frac\pi2\left(\nu + \frac12\right)}\sqrt{-\tau_{RF}}\,H^{(1)}_\nu(-k\tau_{RF}). \quad (4.33)$$
On super-horizon scales $-k\tau_{RF} \ll 1$ one gets
$$v_k(\tau_{RF}) = \frac{2^{\nu - \frac32}}{\sqrt{2k}}\,e^{i\frac\pi2\left(\nu - \frac12\right)}\,\frac{\Gamma(\nu)}{\Gamma\left(\frac32\right)}\left(-k\tau_{RF}\right)^{\frac12 - \nu}, \quad (4.34)$$
and then
$$|\mathcal R_k| = z^{-1}|v_k| \simeq \frac{H}{2\sqrt{Q_sc_s^3k^3}}\left(\frac{\tau_{RF}}{\tau^*_{RF}}\right)^{\frac32 - \nu} \simeq \frac{H_*}{2\sqrt{Q_{s*}c_{s*}^3k^3}}. \quad (4.35)$$
In this latter equation, $\tau^*_{RF} \simeq -1/k$ is the value of $\tau_{RF}$ at horizon crossing. Also, we used $H \simeq H_*\left(\tau_{RF}/\tau^*_{RF}\right)^{\epsilon}$, $Q_s \simeq Q_{s*}\left(\tau_{RF}/\tau^*_{RF}\right)^{-\eta}$, and $c_s \simeq c_{s*}\left(\tau_{RF}/\tau^*_{RF}\right)^{-s}$, where $H_*$, $Q_{s*}$ and $c_{s*}$ are evaluated at $\tau_{RF} = \tau^*_{RF}$ [58]. Therefore, the scalar power spectrum of the curvature fluctuation is given by
$$P_s(k) \equiv \frac{k^3}{2\pi^2}|\mathcal R_k|^2 \simeq \frac{H_*^2}{8\pi^2Q_{s*}c_{s*}^3}. \quad (4.36)$$
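As a quick arithmetic check (ours, not in the original), substituting the super-horizon amplitude (4.35) into the definition (4.36) gives
$$P_s = \frac{k^3}{2\pi^2}\left(\frac{H_*}{2\sqrt{Q_{s*}c_{s*}^3k^3}}\right)^2 = \frac{k^3}{2\pi^2}\cdot\frac{H_*^2}{4Q_{s*}c_{s*}^3k^3} = \frac{H_*^2}{8\pi^2Q_{s*}c_{s*}^3},$$
confirming that the explicit $k$-dependence cancels at horizon crossing, as required for a nearly scale-invariant spectrum.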
The scalar spectral index is
$$n_s - 1 \equiv \frac{d\ln P_s(k)}{d\ln k} \simeq -2\epsilon - \eta - 3s \simeq -2\eta_{f,R}. \quad (4.37)$$
The slow-roll parameter $\delta_{\tilde f}$ related to gravity's rainbow does not explicitly contribute to $n_s$. However, its contributions arise implicitly through the term $\eta_{f,R} = \dot\delta_{f,R}/(H\delta_{f,R})$. Below we study tensor perturbations.
Tensor Perturbations
For tensor perturbations one writes the perturbed FLRW metric with rainbow effect as follows
$$ds^2 = -\frac{1}{\tilde f^2}\,dt^2 + a^2\left(\delta_{ij} + h_{ij}\right)dx^idx^j, \quad (4.38)$$
where h ij are the tensor perturbations. These tensor perturbations h ij can be decomposed in terms of their two polarization states h + , h × , such that [60]
$$h_{ij} = h_+e^+_{ij} + h_\times e^\times_{ij}, \quad (4.39)$$
where e + ij , e × ij are the corresponding polarization tensors. By substituting this perturbed metric into the modified field equations (2.2) one obtains
$$\ddot h_\theta + \frac{\left(a^3Q_t\right)^{\cdot}}{a^3Q_t}\,\dot h_\theta + c_t^2\,\frac{k^2}{a^2}\,h_\theta = 0, \quad (4.40)$$
with θ = +, ×, and where
$$Q_t = \frac{F\tilde f}{2\kappa^2}, \quad (4.41)$$
and
$$c_t^2 = \frac{1}{\tilde f^2}. \quad (4.42)$$
We define
$$\eta_t \equiv \frac{\dot Q_t}{HQ_t} = \delta_{\tilde f} + \delta_F = \delta_{\tilde f} + 2\left(\delta_{\tilde f} - \epsilon\right)\mu.$$
Following a similar procedure to the case of scalar perturbations, one can obtain the primordial power spectrum for tensor perturbations [60].
Mukhanov-Sasaki equation
Now we introduce the canonically-normalized field $v_\theta = z_th_\theta$, where $z_t^2 = 2a^2Q_tc_t$, along with the sound-horizon time $d\tau_{RF} = c_t(\tau)d\tau = (c_t(t)/a)dt$ [63]. Then, Eq. (4.40) is turned into
$$v_{\theta,k}'' + \left(k^2 + M_t^2\right)v_{\theta,k} = 0, \quad (4.45)$$
where
$$M_t^2 = -\frac{z_t''}{z_t} = -\left(\frac{aH}{c_t}\right)^2\left(2 - \epsilon + \frac{3}{2}\eta_t + \frac{1}{2}s_t\right). \quad (4.46)$$
Furthermore this latter equation can be written as
$$v_{\theta,k}'' + \left[k^2 - \frac{1}{\tau_{RF}^2}\left(\nu_t^2 - \frac{1}{4}\right)\right]v_{\theta,k} = 0, \quad (4.47)$$
where
$$\nu_t^2 = \frac{9}{4} + 3\epsilon + \frac{3}{2}\eta_t + \frac{9}{2}s_t. \quad (4.48)$$
Following a similar procedure to what was performed in the case of scalar perturbations, one obtains
$$P_t = \frac{H_*^2}{2\pi^2Q_{t*}c_{t*}^3}. \quad (4.49)$$
Thus, the tensor spectral index is The tensor-to-scalar ratio is calculated as
n t ≡ d ln P t d ln k = −2 − η t − 3s t , = 4 3 δ f ,R 1 − 1 µ .r = P t P s = 12δ 2 F = 48 δf − 2 , = 64 3µ 2 δ 2 f ,R . (4.52) For µ = 1 + O( ) one obtains r = 64 3 δ 2 f ,R . (4.53)
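As a cross-check (our own): since $c_t = 1/\tilde f$ from (4.42), one has $s_t \equiv \dot c_t/(Hc_t) = -\delta_{\tilde f}$, and using $\eta_t$ from above together with $\delta_{\tilde f} - \epsilon = -2\delta_{f,R}/(3\mu)$ (which follows from (3.7) when the $\delta_{\tilde g}$ term is dropped),
$$n_t = -2\epsilon - \eta_t - 3s_t = -2\epsilon - \delta_{\tilde f} - 2\mu\left(\delta_{\tilde f} - \epsilon\right) + 3\delta_{\tilde f} = 2(1 - \mu)\left(\delta_{\tilde f} - \epsilon\right) = \frac{4}{3}\delta_{f,R}\left(1 - \frac{1}{\mu}\right),$$
in agreement with (4.50).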
The contributions coming from gravity's rainbow appear implicitly through the slow-roll parameter $\delta_{f,R}$.
Starobinsky inflation with rainbow gravity effects
In this section we assume [1]
$$f(R) = R + \frac{R^2}{6M^2}, \quad (5.1)$$
where $M$ is the mass-energy scale. Also, we consider [55]
$$\tilde f = \left(\frac{H}{M}\right)^\lambda. \quad (5.2)$$
Notice that during inflation $H \gg M$ [60]. Then, one obtains the background evolution equation (5.6) for $H$, and from this latter equation one obtains the solution (5.7), where $H_i = H(t_i)$ is the value of the Hubble parameter at the beginning of inflation. For $\lambda = 0$, equation (5.7) gives the standard solution found in Starobinsky inflation [60]. Also, for $\lambda \neq 0$, this equation is consistent with what was obtained in Ref. [55]. The number of e-foldings of inflation, $N_k$, is then computed in Eqs. (5.8)-(5.9). At the end of inflation $\epsilon(t_f) \simeq 1$, and then $t_f \simeq t_i + 6(1 + \lambda)\ldots$; at the time of horizon crossing, $t = t_k$, one has $H_k \equiv H(t_k) \simeq H_i$ and then
$$\epsilon \simeq \frac{1}{2N_k}. \quad (5.10)$$
Thus, we obtain
$$\delta_{f,R} \simeq \frac{3(1 + \lambda)}{4N_k}, \quad (5.11) \qquad \eta_{f,R} \simeq \frac{1 + \lambda}{N_k}, \quad (5.12) \qquad \delta_{\tilde f} \simeq -\frac{\lambda}{2N_k}. \quad (5.13)$$
Therefore, the scalar power spectrum yields
$$P_s = \frac{1}{48\pi^2}\left(\frac{M}{M_{pl}}\right)^2\frac{N_k^2}{(1 + \lambda)^2}. \quad (5.14)$$
From the latest Planck data, the amplitude of primordial scalar perturbations is $P_s = 2.141\times 10^{-9}$ [7]. Using this latter equation one can constrain the mass scale $M$ after constraining the $\lambda$ parameter in the $n_s$-$r$ plane. The scalar spectral index is
$$n_s \simeq 1 - \frac{2(1 + \lambda)}{N_k}. \quad (5.15)$$
The tensor-to-scalar ratio becomes
$$r \simeq \frac{12(1 + \lambda)^2}{N_k^2}. \quad (5.16)$$
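As a worked numerical illustration (our own, for orientation): in the Starobinsky limit $\lambda = 0$ with $N_k = 60$, Eqs. (5.15)-(5.16) give
$$n_s \simeq 1 - \frac{2}{60} \approx 0.967, \qquad r \simeq \frac{12}{3600} \approx 3.3\times 10^{-3},$$
well inside the observational bounds quoted below; switching on $\lambda = 0.1$ at the same $N_k$ shifts these to $n_s \approx 0.963$ and $r \approx 4.0\times 10^{-3}$.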
The latest cosmological data from the Planck satellite [7] give the following constraint on the scalar spectral index,
$$n_s = 0.9649 \pm 0.0042, \quad (5.17)$$
at 68% C.L. On the other hand, new data from BICEP/Keck XIII have recently been published [9], which put stronger constraints on the amplitude of primordial gravitational waves and hence on the tensor-to-scalar ratio,
$$r < 0.036, \quad (5.18)$$
at 95% C.L. Using these observational constraints on $n_s$ and $r$, we obtain
$$50.89 < N_k \leq 64.72 \quad\text{and}\quad 0 < \lambda < 0.01965\,N_k - 1, \quad (5.19)$$
or
$$N_k > 64.72 \quad\text{and}\quad 0.01545\,N_k - 1 < \lambda < 0.01965\,N_k - 1. \quad (5.20)$$
In FIGS 1 and 2, we depict the predictions of the model in the $n_s$-$r$ plane, using the latest data from Planck 2018 and BICEP/Keck 2021. These results are consistent with the constraints found for $\lambda$ in Eqs. (5.19) and (5.20). In particular, one can see that small values of $\lambda$ ($\lambda < 1$) are preferred. Higher values of $\lambda$ can also give results compatible with observations, but this would require higher values of the e-folding number of inflation $N_k$. For instance, for $\lambda \simeq 1$, values $N_k \simeq 110$ would be required. Otherwise, higher values of $\lambda$ could violate the slow-roll conditions. Also, this could conflict with the expected results for the subsequent reheating era after inflation [65][66][67][68]. In FIG 3 we show the behavior of the mass scale $M$ as a function of the rainbow parameter $\lambda$ for different values of $N_k$. In FIG 4 one can see that the rainbow function $\tilde f = (H/M)^\lambda$ remains of order unity, in agreement with equations (5.15) and (5.17). Thus, the strong limit $\tilde f = (H/M)^\lambda \gg 1$ is disfavored by observations.
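A short check of how (5.19)-(5.20) follow (our own arithmetic): the 68% range (5.17) gives $0.9607 \leq n_s \leq 0.9691$, which through (5.15) translates into
$$0.01545 \leq \frac{1 + \lambda}{N_k} \leq 0.01965 \;\Longrightarrow\; \lambda \leq 0.01965\,N_k - 1,$$
while $\lambda > 0$ requires $0.01965\,N_k > 1$, i.e. $N_k > 50.89$; the lower bound $0.01545\,N_k - 1$ becomes active only once it is positive, i.e. for $N_k > 1/0.01545 = 64.72$. The bound (5.18) is then automatically satisfied, since $r \simeq 12[(1 + \lambda)/N_k]^2 \leq 12(0.01965)^2 \approx 4.6\times 10^{-3} < 0.036$.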
Concluding Remarks
In the present paper we studied inflation and the generation of primordial fluctuations in the context of f(R) gravity's rainbow. After developing the general setup to study slow-roll inflation in these theories, we calculated the scalar and tensor primordial power spectra generated during inflation. We assumed that the two energy-dependent rainbow functions $\tilde f$ and $\tilde g$ arising in the effective spacetime metric are time dependent. This is a natural assumption, since the energy of the probe particles can depend on time for an expanding universe [59]. Any viable inflationary model based on f(R) gravity must be close to the Starobinsky model [62]. For instance, one can write $f(R) = R + R^2A(R)$, where $A(R)$ is a slowly varying function satisfying $|A'(R)| \ll A(R)/R$ and $|A''(R)| \ll A(R)/R^2$ [60]. For this class of models we have shown that gravity's rainbow can lead to new contributions to both inflationary observables $n_s$ and $r$. Although up to first order in the slow-roll approximation there are no contributions coming from $\tilde g$, we found new imprints arising through the slow-roll parameter $\delta_{\tilde f} = \dot{\tilde f}/(H\tilde f)$ associated with $\tilde f$.
Since the rainbow functions can be time dependent, it is entirely reasonable to assume the ansatz $\tilde f(H) = (H/M)^\lambda$, where $\lambda$ is constant and $M$ is the mass scale of inflation [55,57]. For instance, for $f(R) = R + R^2/(6M^2)$, we obtained $n_s \simeq 1 - 2(1 + \lambda)/N_k$ and $r \simeq 12(1 + \lambda)^2/N_k^2$. The standard results of the Starobinsky model are recovered for $\lambda = 0$ [1,60]. Thus, using the latest observational constraints on $n_s$ and $r$ [7,9], we found that small values of $\lambda$ ($\lambda < 1$) are preferred. The observational bounds on $n_s$ put the strongest constraint on the values of $\lambda$ that give a nearly scale-invariant power spectrum. Higher values of $\lambda$ can also give results compatible with observations, but this would require higher values of the e-folding number of inflation $N_k$. Otherwise, higher values of $\lambda$ could violate the slow-roll conditions. Also, this could conflict with the expected results for the subsequent reheating era after inflation [65][66][67][68].
Therefore, we conclude that the strong limit $\tilde f = (H/M)^\lambda \gg 1$ is disfavored by observations. This result differs from what was obtained in Refs. [55,57]. It is important to note that Eq. (5.7) satisfies $\ddot H = \mathcal O(\epsilon^2)$ and $\ddot H/(H\dot H) = 2\lambda\epsilon$, with $\epsilon = -\dot H/H^2$. For Starobinsky inflation one has $\lambda = 0$ and then $\ddot H/(H\dot H) = 0$. But in the presence of rainbow effects this is not zero, and thus this term contributes to $n_s$, as in Eq. (5.15).
Figure 1. To confront with the predictions of the model we used both the PLANCK 2018 data [7] and the recently released BICEP/Keck data [9]. We used $N_k = 60$.
Figure 2. To confront with the predictions of the model we used both the PLANCK 2018 data [7] and the recently released BICEP/Keck data [9]. We used $N_k = 70$.
Figure 3. We depict the behavior of the mass scale $M$ as a function of the rainbow parameter $\lambda$ for several different values of the e-folding number of inflation $N_k$.
Figure 4. We show the behavior of the rainbow function $\tilde f = (H/M)^\lambda$ in terms of the parameter $\lambda$ for several different values of the e-folding number of inflation $N_k$.
[1] A.A. Starobinsky, Phys. Lett. B 91(1), 99 (1980)
[2] A.H. Guth, Phys. Rev. D 23(2), 347 (1981)
[3] A. Albrecht, P.J. Steinhardt, Phys. Rev. Lett. 48, 1220 (1982)
[4] A.D. Linde, Phys. Lett. B 108(6), 389 (1982)
[5] S. Weinberg, Cosmology (Oxford Univ. Press, 2008)
[6] D. Baumann, PoS TASI2017, 009 (2018)
[7] Y. Akrami, et al., Astron. Astrophys. 641, A10 (2020)
[8] M. Maggiore, Gravitational Waves. Vol. 2: Astrophysics and Cosmology (Oxford University Press, 2018)
[9] P.A.R. Ade, et al., Phys. Rev. Lett. 127(15), 151301 (2021)
[10] V. Faraoni, Cosmology in scalar tensor gravity, vol. 139 (2004)
[11] N.D. Birrell, P.C.W. Davies, Quantum Fields in Curved Space (Cambridge Univ. Press, Cambridge, UK, 1984)
[12] R. Fakir, W.G. Unruh, Phys. Rev. D 41, 1783 (1990)
[13] F.L. Bezrukov, M. Shaposhnikov, Phys. Lett. B 659, 703 (2008)
[14] S.S. Mishra, V. Sahni, A.V. Toporensky, Phys. Rev. D 98(8), 083538 (2018)
[15] S.S. Mishra, V. Sahni, (2022)
[16] R. Kallosh, A. Linde, JCAP 07, 002 (2013)
[17] R. Kallosh, A. Linde, D. Roest, JHEP 11, 198 (2013)
[18] R. Kallosh, A. Linde, (2021)
[19] S. Kachru, R. Kallosh, A.D. Linde, S.P. Trivedi, Phys. Rev. D 68, 046005 (2003)
[20] R. Kallosh, A. Linde, Phys. Rev. D 100(12), 123523 (2019)
[21] S. Kachru, R. Kallosh, A.D. Linde, J.M. Maldacena, L.P. McAllister, S.P. Trivedi, JCAP 10, 013 (2003)
[22] Y.F. Cai, S. Capozziello, M. De Laurentis, E.N. Saridakis, Rept. Prog. Phys. 79(10), 106901 (2016)
[23] A. Einstein, Sitz. Preuss. Akad. Wiss. 217 (1928)
[24] A. Unzicker, T. Case, arXiv:physics/0503046 (2005)
[25] A. Einstein, Math. Ann. 102, 685 (1930)
[26] A. Einstein, Sitzungsber. Preuss. Akad. Wiss. Phys. Math. Kl. 401 (1930)
[27] C. Pellegrini, J. Plebanski, Math.-Fys. Skr. Dan. Vid. Selskab 2(2) (1962)
[28] C. Møller, K. Dan. Vidensk. Selsk., Mat.-Fys. Medd. 39(13), 1 (1978)
[29] K. Hayashi, T. Nakano, Progress of Theoretical Physics 38(2), 491 (1967)
[30] K. Hayashi, T. Shirafuji, Phys. Rev. D 19(12), 3524 (1979)
[31] R. Aldrovandi, J.G. Pereira, Teleparallel gravity: an introduction, vol. 173 (Springer Science & Business Media, 2012)
[32] V.C. de Andrade, L.C.T. Guillen, J.G. Pereira, Phys. Rev. Lett. 84, 4533 (2000)
[33] H.I. Arcos, J.G. Pereira, Int. J. Mod. Phys. D 13, 2193 (2004)
[34] R. Ferraro, F. Fiorini, Phys. Rev. D 75, 084031 (2007)
[35] E.V. Linder, Phys. Rev. D 81, 127301 (2010)
[36] M. Gonzalez-Espinoza, G. Otalora, J. Saavedra, N. Videla, Eur. Phys. J. C 78(10), 799 (2018)
[37] T. Harko, F.S.N. Lobo, G. Otalora, E.N. Saridakis, Phys. Rev. D 89, 124036 (2014)
[38] T. Harko, F.S.N. Lobo, G. Otalora, E.N. Saridakis, JCAP 12, 021 (2014)
[39] C.Q. Geng, C.C. Lee, E.N. Saridakis, Y.P. Wu, Phys. Lett. B 704, 384 (2011)
[40] C.Q. Geng, C.C. Lee, E.N. Saridakis, JCAP 1201, 002 (2012)
[41] C. Xu, E.N. Saridakis, G. Leon, JCAP 1207, 005 (2012)
[42] G. Otalora, JCAP 1307, 044 (2013)
[43] G. Otalora, Phys. Rev. D 88, 063505 (2013)
[44] G. Otalora, Int. J. Mod. Phys. D 25(02), 1650025 (2015)
[45] M.A. Skugoreva, E.N. Saridakis, A.V. Toporensky, Phys. Rev. D 91, 044023 (2015)
[46] M. Gonzalez-Espinoza, G. Otalora, J. Saavedra, JCAP 10, 007 (2021)
[47] M. Gonzalez-Espinoza, G. Otalora, Eur. Phys. J. C 81(5), 480 (2021)
[48] G. Otalora, E.N. Saridakis, Phys. Rev. D 94(8), 084021 (2016)
[49] M. Gonzalez-Espinoza, G. Otalora, L. Kraiselburd, S. Landau, JCAP 05(05), 010 (2022)
[50] M. Gonzalez-Espinoza, G. Otalora, Phys. Lett. B 809, 135696 (2020)
[51] M. Gonzalez-Espinoza, R. Herrera, G. Otalora, J. Saavedra, Eur. Phys. J. C 81(8), 731 (2021)
[52] J. Magueijo, L. Smolin, Class. Quant. Grav. 21, 1725 (2004). DOI 10.1088/0264-9381/21/7/001
[53] K.S. Stelle, Phys. Rev. D 16, 953 (1977)
[54] P. Horava, Phys. Rev. D 79, 084008 (2009). DOI 10.1103/PhysRevD.79.084008
[55] A. Chatrabhuti, V. Yingcharoenrat, P. Channuie, Phys. Rev. D 93(4), 043515 (2016)
[56] P. Channuie, Eur. Phys. J. C 79(6), 508 (2019)
[57] A. Waeming, P. Channuie, Eur. Phys. J. C 80(9), 802 (2020)
[58] Y. Leyva, C. Leiva, G. Otalora, J. Saavedra, Phys. Rev. D 105(4), 043523 (2022)
[59] Y. Ling, JCAP 08, 017 (2007)
[60] A. De Felice, S. Tsujikawa, Living Rev. Rel. 13, 3 (2010)
[61] M. He, P. Li, Z.L. Wang, J.C. Ding, J.B. Deng, Gen. Rel. Grav. 50(2), 22 (2018)
[62] S.V. Ketov, H. Nakada, Phys. Rev. D 95(10), 103507 (2017)
[63] G. Amelino-Camelia, M. Arzano, G. Gubitosi, J. Magueijo, Phys. Rev. D 88(4), 041303 (2013)
[64] A. Riotto, ICTP Lect. Notes Ser. 14, 317 (2003)
[65] L. Kofman, A.D. Linde, A.A. Starobinsky, Phys. Rev. Lett. 73, 3195 (1994)
[66] L. Kofman, A.D. Linde, A.A. Starobinsky, Phys. Rev. D 56, 3258 (1997)
[67] B.A. Bassett, S. Tsujikawa, D. Wands, Rev. Mod. Phys. 78, 537 (2006)
[68] M. López, G. Otalora, N. Videla, JCAP 10, 021 (2021)
| [] |
[
"Five-Dimensional Path Integrals for Six-Dimensional Conformal Field Theories",
"Five-Dimensional Path Integrals for Six-Dimensional Conformal Field Theories"
] | [
"N Lambert \nDepartment of Mathematics King's\nWC2R 2LSCollege London LondonUK\n",
"A Lipstein \nDepartment of Mathematical Sciences\nDurham University Durham\nDH1 3LEUK\n",
"R Mouland \nDepartment of Applied Mathematics and Theoretical Physics\nUniversity of Cambridge Cambridge\nCB3 0WAUK\n",
"P Richmond \nDepartment of Mathematics King's\nWC2R 2LSCollege London LondonUK\n"
] | [
"Department of Mathematics King's\nWC2R 2LSCollege London LondonUK",
"Department of Mathematical Sciences\nDurham University Durham\nDH1 3LEUK",
"Department of Applied Mathematics and Theoretical Physics\nUniversity of Cambridge Cambridge\nCB3 0WAUK",
"Department of Mathematics King's\nWC2R 2LSCollege London LondonUK"
] | [] | In this paper we derive Ward-Takahashi identities from the path integral of supersymmetric five-dimensional field theories with an SU(1, 3) spacetime symmetry in the presence of instantons. We explicitly show how SU(1, 3) is enhanced to SU(1, 3) × U(1) where the additional U(1) acts non-perturbatively. Solutions to such Ward-Takahashi identities were previously obtained from correlators of six-dimensional Lorentzian conformal field theories but where the instanton number was replaced by the momentum along a null direction. Here we study the reverse procedure whereby we construct correlation functions out of towers of five-dimensional operators which satisfy the Ward-Takahashi identities of a sixdimensional conformal field theory. This paves the way to computing observables in six dimensions using five-dimensional path integral techniques. We also argue that, once the instanton sector is included into the path integral, the coupling of the five-dimensional Lagrangian must be quantised, leaving no free continuous parameters. | 10.1007/jhep02(2022)151 | [
"https://arxiv.org/pdf/2109.04829v2.pdf"
] | 237,485,329 | 2109.04829 | 3276651966aabe68345a8a4ad6226af83ef162d9 |
Five-Dimensional Path Integrals for Six-Dimensional Conformal Field Theories
15 Feb 2022
N Lambert
Department of Mathematics, King's College London, London WC2R 2LS, UK
A Lipstein
Department of Mathematical Sciences
Durham University, Durham DH1 3LE, UK
R Mouland
Department of Applied Mathematics and Theoretical Physics
University of Cambridge, Cambridge CB3 0WA, UK
P Richmond
Department of Mathematics, King's College London, London WC2R 2LS, UK
Five-Dimensional Path Integrals for Six-Dimensional Conformal Field Theories
15 Feb 2022
arXiv:2109.04829v2 [hep-th]
In this paper we derive Ward-Takahashi identities from the path integral of supersymmetric five-dimensional field theories with an SU(1, 3) spacetime symmetry in the presence of instantons. We explicitly show how SU(1, 3) is enhanced to SU(1, 3) × U(1) where the additional U(1) acts non-perturbatively. Solutions to such Ward-Takahashi identities were previously obtained from correlators of six-dimensional Lorentzian conformal field theories but where the instanton number was replaced by the momentum along a null direction. Here we study the reverse procedure whereby we construct correlation functions out of towers of five-dimensional operators which satisfy the Ward-Takahashi identities of a sixdimensional conformal field theory. This paves the way to computing observables in six dimensions using five-dimensional path integral techniques. We also argue that, once the instanton sector is included into the path integral, the coupling of the five-dimensional Lagrangian must be quantised, leaving no free continuous parameters.
Introduction
Superconformal field theories in six dimensions play a fundamental role in our understanding of M-theory, and a central role in our understanding of quantum field theories in general through compactification to lower dimensions. On the other hand, their precise formulation remains elusive, because conventional Lagrangians with manifest six-dimensional superconformal symmetry do not exist. Despite this difficulty, it is possible to compute many observables in these theories, such as correlators of protected operators, using holography [1][2][3][4], conformal bootstrap methods [5][6][7][8][9][10], and chiral algebra conjectures [11,12].
Although it is not possible to write down a Lagrangian with six-dimensional superconformal symmetry, a more useful (and perhaps fundamental) definition of a Lagrangian is one which can be used to compute observables such as correlation functions using a path integral. Indeed a Lagrangian may only manifestly realise some subgroup of the symmetries of the full quantum theory, as previously demonstrated by the ABJM theory for M2-branes [13]. In a recent series of papers we have constructed a class of five-dimensional Lagrangians with 12 or 24 superconformal symmetries and a non-Lorentzian SU(1, 3) spacetime symmetry [14][15][16], studied their correlators [17] and constructed explicit instanton solutions [18]. In this paper we make a proposal as to how these can be used to provide a path integral construction of correlators of six-dimensional superconformal field theories such as the (2, 0) theory associated to M5-branes.
In particular here, using path integral methods, we derive the conformal Ward-Takahashi identities for five-dimensional correlators in the presence of instanton operators. These are local disorder operators that correspond to changing the second Chern number of the gauge field around an insertion point (see also [19][20][21]). The existence of a conserved topological charge given by the instanton number (or more accurately the second Chern number of the gauge fields) leads to an additional U(1) symmetry but one under which all the fields in the Lagrangian are invariant. As we stated above the action has a non-trivial SU(1, 3) symmetry; however we will show that this symmetry is broken once we allow for non-trivial topological sectors, corresponding to the insertion of instanton operators. Nevertheless an SU(1, 3)×U(1) symmetry can be restored in the quantum theory, with instanton operators charged under the U(1) factor. Thus the path integral defined using the five-dimensional Lagrangian yields an interacting theory with a manifest and non-trivial SU(1, 3) × U(1) symmetry.
In our previous paper [17] we studied the Ward-Takahashi identities for the symmetry group SU(1, 3) × U(1) and showed that solutions to them can be obtained from a certain Fourier expansion of the correlators of a six-dimensional conformal field theory. In this way we showed that the instanton number can be used to encode the Kaluza-Klein momentum along an emergent sixth dimension. A novelty of this reduction is that we use the conformal symmetry of the six-dimensional theory to conformally compactify a null direction. As a result the Fourier expansion reproduces the full correlation functions of non-compact six-dimensional Minkowski space. In particular the SU(1, 3) × U(1) symmetry arises as the subgroup of the conformal group SO(2, 6) that commutes with the Kaluza-Klein momentum operator.
This therefore leads to a natural proposal about how to go the other way and construct genuine six-dimensional correlators from the five-dimensional theory. However, the key question is whether or not the resulting correlators can be identified with those of a six-dimensional Lorentzian conformal field theory. In this paper we discuss some necessary conditions for correlators of the theory to resum to produce six-dimensional correlators invariant under the full SO(2, 6). We also argue that once topologically non-trivial sectors of the theory are included, the action is no longer single valued on the space of field configurations, unless the inverse coupling constant $k$ is a discrete orbifold parameter, analogous to that of the ABJM theory.
The rest of this paper is organised as follows. In Section 2 we briefly review the Lagrangians described above and their symmetries. In Section 3 we allow for more general topologies of the gauge fields through so-called instanton operators and show how the SU(1, 3) is broken but then restored by finding a suitable representation of the instanton operators. We also show that once we consider this expanded configuration space we are required to restrict k to discrete values. In Section 4 we discuss how to construct correlation functions of a six-dimensional theory and in particular give some necessary conditions for these to satisfy the Ward-Takahashi identities of a Lorentzian six-dimensional conformal field theory. In Section 5 we give our conclusions and discussion on future directions. We also include two appendices.
The Actions and Their Symmetries
In this first Section, we briefly review the five-dimensional Ω-deformed gauge theory first introduced in [14] by a reduction of the (2, 0) theory, and recast its known spacetime symmetries [15] in a language more useful for this paper. There are also (1, 0) versions of these actions, where the fields further decompose into tensor and hyper multiplets and the supersymmetries are reduced by half [16]. The form of the action and symmetries is similar, but the hypermultiplet fields are allowed to take values in any representation of the gauge group. In the interests of not introducing additional notation we will not discuss them here, since all the results in this paper extend directly to those theories as well; the main tool we exploit is the SU(1, 3) symmetry of the action.
Review of Five-dimensional Lagrangian Model
Our starting point is a non-Abelian but non-Lorentzian gauge theory in five dimensions with arbitrary gauge group. We use the coordinates $(x^-, x^i)$ on $\mathbb R^5$, with $i, j, \ldots = 1, \ldots, 4$. In addition to its gauge field $A = (A_-, A_i)$, the theory has five scalar fields $X^I$, where $I, J, \ldots = 6, \ldots, 10$, and a real 32-component spinor $\Psi$ of Spin(1, 10). Finally, we also have a field $G_{ij} = -G_{ji}$ which is self-dual, $G_{ij} = \frac{1}{2}\epsilon_{ijkl}G_{kl}$. All of the fields $X^I$, $\Psi$ and $G_{ij}$ transform in the adjoint of the gauge group.
We choose a 32 × 32 real representation {Γ 0 , Γ 1 , . . . , Γ 10 } of the (1 + 10)-dimensional Clifford algebra with signature (−, +, . . . , +), and additionally define the combinations Γ ± = (Γ 0 ± Γ 5 )/ √ 2 which project onto spinors of definite chirality under Γ 05 . The fermion Ψ then satisfies Γ 012345 Ψ = −Ψ.
The action of the theory is $S = \int dx^-d^4x\,\mathcal L$, with
$$\mathcal L = \frac{k}{4\pi^2}\,\mathrm{tr}\left(\frac{1}{2}F_{-i}F_{-i} - \frac{1}{2}\hat D_iX^I\hat D_iX^I + \frac{1}{2}\mathcal F_{ij}G^{ij} - \frac{i}{2}\bar\Psi\Gamma_+D_-\Psi + \frac{i}{2}\bar\Psi\Gamma_i\hat D_i\Psi - \frac{1}{2}\bar\Psi\Gamma_+\Gamma^I[X^I, \Psi]\right), \quad (2.1)$$
where $F = dA - iA\wedge A$ is the field strength of $A$, and $D_-$, $D_i$ are adjoint gauge covariant derivatives for the gauge fields $A_-$, $A_i$, i.e. $D_- = \partial_- - i[A_-,\,\cdot\,]$ and $D_i = \partial_i - i[A_i,\,\cdot\,]$.
In terms of these more conventional objects, we have used the corresponding Ω-deformed objects,
$$\hat D_i = D_i - \frac{1}{2}\Omega_{ij}x^jD_-, \qquad \mathcal F_{ij} = F_{ij} - \frac{1}{2}\Omega_{ik}x^kF_{-j} + \frac{1}{2}\Omega_{jk}x^kF_{-i}, \quad (2.2)$$
where $\Omega_{ij}$ is anti-self-dual and normalised as $\Omega_{ik}\Omega_{jk} = \delta_{ij}$. We also define $\hat\partial_i = \partial_i - \frac{1}{2}\Omega_{ij}x^j\partial_-$ for later use. We see that $G_{ij}$ acts as a Lagrange multiplier, imposing the constraint that $\mathcal F_{ij}$ is anti-self-dual, i.e. $\mathcal F^+_{ij} = 0$, where $\mathcal F^+_{ij} = \frac{1}{2}\left(\mathcal F_{ij} + \frac{1}{2}\epsilon_{ijkl}\mathcal F_{kl}\right)$. Note that in previous papers we have included a real variable $R$ with dimensions of length, with $\Omega_{ij}\Omega_{jk} = -R^{-2}\delta_{ik}$. However, in the current paper we have chosen to absorb $R$ into fields and coordinates. Details of this process, and therefore rules on how to reinstate this parameter, are straightforward and can be found in [22]. As is standard in more conventional Yang-Mills theories, one can perform simple field redefinitions to bring terms quadratic in derivatives to canonical normalisation, and in doing so introduce positive powers of $g$, with $g^2 = 4\pi^2/k$, in front of all interaction terms. In this sense, one should think of this $g$ as the coupling of the theory.
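To make the role of $g$ explicit, here is a minimal sketch of the rescaling (ours, written only schematically for the bosonic terms): setting $A = g\hat A$, $X^I = g\hat X^I$, $\Psi = g\hat\Psi$, $G^{ij} = g\hat G^{ij}$ with $g^2 = 4\pi^2/k$ gives
$$\frac{k}{4\pi^2}\,\mathrm{tr}\left(\frac{1}{2}F_{-i}F_{-i} + \ldots\right) = \mathrm{tr}\left(\frac{1}{2}\hat F_{-i}\hat F_{-i} + \ldots\right) + \mathcal O(g),$$
since each commutator term in $F = dA - iA\wedge A$ and in the covariant derivatives now carries an explicit factor of $g$, e.g. $D_-X^I = g\left(\partial_-\hat X^I - ig[\hat A_-, \hat X^I]\right)$.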
Let us comment on the origin of this theory [14] in the case where the gauge group is U($N_5$). The AdS/CFT correspondence tells us that the worldvolume theory for a stack of $N_5$ M5-branes is dual to M-theory on an AdS$_7\times S^4$ background. In analogy with the ABJM construction [13] and following the geometric considerations of [23], one first considers AdS$_7$ as a timelike circle fibration $S^1 \hookrightarrow \mathrm{AdS}_7 \to \widetilde{\mathbb{CP}}^3$ over the non-compact complex projective space $\widetilde{\mathbb{CP}}^3$. One can then write down a non-Abelian action describing the reduction along the fibre of a stack of M5-branes at fixed $\widetilde{\mathbb{CP}}^3$ radius. The geometry suggests such a theory should possess eight real supercharges, and it does. Finally, one takes the embedding radius to infinity, effectively sending the stack of M5-branes to the boundary of AdS$_7$. This boundary is described by the metric
$$ds^2 = -2dx^+\left(dx^- - \frac{1}{2}\Omega_{ij}x^idx^j\right) + dx^idx^i, \quad (2.3)$$
with x + ∈ (−π, π) identified as the coordinate along the fibre along which we have reduced. This metric is of the same conformal class as six-dimensional Minkowski space, and so at the end of the day we have simply performed a conformal compactification of M5-branes on flat space. Note, as we take the limit to the conformal boundary, certain terms in the action diverge. One is nonetheless able to utilise the technique first described in [24] to propose the Lagrangian (2.1) to describe the boundary theory.
We can then use the geometry of the M5-brane embeddings to predict the symmetries of the theory. Since the metric (2.3) is conformal to the six-dimensional Minkowski metric, any conformal field theory living on it should realise the full conformal algebra $\mathfrak{so}(2, 6)$ as its spacetime symmetries. However, the reduction along the $x^+$ direction breaks $\mathfrak{so}(2, 6)$ to the maximal subalgebra $\mathfrak h = \mathfrak{su}(1, 3)\oplus\mathfrak u(1)$ commuting with translations along $x^+$.
Next, the theory has a manifest SO(5) R-symmetry rotating the scalars X I , corresponding simply in the M5-brane picture to rotations in the directions transverse to the branes.
Finally, the circle reduction breaks only one quarter of the superconformal symmetries, and so we can expect the theory to have 24 real supercharges. This is indeed the case, with 8 realised as rigid supersymmetries, and the remaining 16 as conformal supersymmetries [14]. In the models obtained from (1, 0) superconformal field theories one finds half as many supersymmetries and the R-symmetry is SU(2) corresponding to a suitable replacement of the S 5 factor.
Spacetime Symmetry Algebra
Let us now review the spacetime symmetry structure of the theory in more detail. The subalgebra $\mathfrak h \subset \mathfrak{so}(2, 6)$ is spanned by the generators $\mathcal B = \{P_-, P_i, B, C^\alpha, T, M_{i+}, K_+\}$ along with the central element $P_+$, which is simply the generator of translations along the $x^+$ direction along which we have reduced. The other generators have the following action on the five-dimensional coordinates $(x^-, x^i)$:
• {P − , P i } are five translations, which form a non-Abelian subalgebra,
• {B, C α }, α = 1, 2, 3, form a u(1) ⊕ su(2) subalgebra of four rotations in the x i directions,
• T is a Lifshitz scaling, under which x − scales twice as quickly as x i ,
• {M i+ , K + } are 'special' transformations, which play much the same role as special conformal transformations in the conformal algebra.
A subset of the commutation relations of the algebra is
$$[M_{i+}, P_j] = -\delta_{ij}P_+ - \frac{1}{2}\Omega_{ij}T - 2\delta_{ij}B + \Omega_{ik}\eta^\alpha_{jk}C^\alpha, \qquad [T, P_-] = -2P_-, \qquad [T, K_+] = 2K_+,$$
$$[P_-, P_i] = 0, \qquad [K_+, P_-] = -2T, \qquad [P_-, M_{i+}] = P_i, \qquad [M_{i+}, M_{j+}] = -\frac{1}{2}\Omega_{ij}K_+,$$
$$[K_+, P_i] = -2M_{i+}, \qquad [T, P_i] = -P_i, \qquad [K_+, M_{i+}] = 0, \qquad [T, M_{i+}] = M_{i+}, \qquad [P_i, P_j] = -\Omega_{ij}P_-. \quad (2.4)$$
The rotations $B$, $C^\alpha$ form a $\mathfrak u(1)\oplus\mathfrak{su}(2)$ subalgebra;
$$[B, C^\alpha] = 0, \qquad [C^\alpha, C^\beta] = -\varepsilon^{\alpha\beta\gamma}C^\gamma. \quad (2.5)$$
In particular these generate all rotations in the four-dimensional plane that leave Ω ij invariant. The remaining brackets are neatly summarised by noting that the 'scalar' generators S = P − , T, K + are inert under the rotation subgroup, i.e. [S, B] = [S, C α ] = 0, while the 'one-form' generators W i = P i , M i+ transform as
$$[W_i, B] = -\frac{1}{2}\Omega_{ij}W_j, \qquad [W_i, C^\alpha] = \frac{1}{2}\eta^\alpha_{ij}W_j. \quad (2.6)$$
If we for a moment exclude the central element $P_+$, then the elements of $\mathcal B$ form a (somewhat unconventional) basis for $\mathfrak{su}(1, 3)$. In fact, the centrally extended algebra $\mathfrak h$ can be realised as simply $\mathfrak h = \mathfrak{su}(1, 3)\oplus\mathfrak u(1)$, with basis $\{P_-, P_i, \hat B, C^\alpha, T, M_{i+}, K_+\}$ for the $\mathfrak{su}(1, 3)$ factor and $P_+$ for the $\mathfrak u(1)$ factor. Here, we have $\hat B = B + \frac{1}{2}P_+$. However, it will be more convenient for geometric reasons to continue to use $B$ rather than $\hat B$, and thus refrain from making this direct sum decomposition of $\mathfrak h$ manifest.
Realisation on Coordinates and Fields
Let us now investigate how these symmetries are realised by the Lagrangian (2.1). There is a little nuance here regarding what we should expect. We interpret $S$ as describing $N$ M5-branes reduced along the direction $x^+ \in (-\pi, \pi)$. Let us first suppose, as in a standard Kaluza-Klein reduction, that in doing this reduction we have truncated the spectrum of the theory maximally; in other words, the theory $S$ describes only the zero modes on the $x^+$ interval. We know then that such modes will fall into representations of $\mathfrak h$ in which $P_+$ is represented trivially (i.e. it annihilates everything); in other words, representations of $\mathfrak{su}(1, 3)$. Thus, we expect $S$ to admit an $\mathfrak{su}(1, 3)$ spacetime symmetry.
Conversely, just as five-dimensional maximal super-Yang-Mills is conjectured to in fact describe all modes of a spatial compactification of M5-branes through the inclusion of local operators with non-zero instanton charge [25,26], we also propose that our action S should describe all modes of the x + conformal compactification. Modes with non-zero charge under P + are expected to be realised only when the configuration space is extended to allow for isolated singular points, around which one measures non-zero instanton number.
What we will show first is that if we disallow such configurations, then the theory does indeed admit an su(1, 3) spacetime symmetry. It will already be clear however at this point that something goes wrong when the configuration space is extended. We will indeed show below that in this case we precisely recover modes with non-trivial charge under P + , and thus the spacetime symmetry algebra is extended to h.
So let us first describe the $\mathfrak{su}(1, 3)$ spacetime symmetry of the theory, as first discussed in [15], which is valid when the gauge field is regular throughout $\mathbb R^5$. Our first step is to define some action of $\mathfrak{su}(1, 3)$ on coordinates and fields. As a spacetime symmetry, $\mathfrak{su}(1, 3)$ admits a representation in terms of vector fields on $\mathbb R^5$. Given some $G \in \mathfrak{su}(1, 3)$, we have corresponding vector fields $G_\partial$, with
$$\begin{aligned}
(P_-)_\partial &= \partial_-, \\
(P_i)_\partial &= \frac{1}{2}\Omega_{ij}x^j\partial_- + \partial_i, \\
(B)_\partial &= -\frac{1}{2}\Omega_{ij}x^i\partial_j, \\
(C^\alpha)_\partial &= \frac{1}{2}\eta^\alpha_{ij}x^i\partial_j, \\
(T)_\partial &= 2x^-\partial_- + x^i\partial_i, \\
(M_{i+})_\partial &= \left(\frac{1}{2}\Omega_{ij}x^-x^j - \frac{1}{8}x^jx^jx^i\right)\partial_- + x^-\partial_i + \frac{1}{4}\left(2\Omega_{ik}x^kx^j + 2\Omega_{jk}x^kx^i - \Omega_{ij}x^kx^k\right)\partial_j, \\
(K_+)_\partial &= \left(2(x^-)^2 - \frac{1}{8}(x^ix^i)^2\right)\partial_- + \left(\frac{1}{2}\Omega_{ij}x^jx^kx^k + 2x^-x^i\right)\partial_i. \quad (2.7)
\end{aligned}$$
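The brackets (2.4) can be verified directly on the vector fields (2.7). Below is a small symbolic check of ours (not from the paper; the symbol names and the particular anti-self-dual $\Omega$ are our own choices) that $[P_i, P_j] = -\Omega_{ij}P_-$ and $[T, P_i] = -P_i$ hold for this representation:

```python
# Symbolic check that the vector fields (2.7) reproduce two of the brackets in (2.4).
# Vector fields are stored as component lists in the basis (d_-, d_1, ..., d_4);
# Omega below is a convenient anti-self-dual choice with Omega.Omega^T = 1.
import sympy as sp

xm, x1, x2, x3, x4 = sp.symbols('x_minus x1 x2 x3 x4')
xs = [xm, x1, x2, x3, x4]

Omega = sp.Matrix([[0, 1, 0, 0],
                   [-1, 0, 0, 0],
                   [0, 0, 0, -1],
                   [0, 0, 1, 0]])

def bracket(V, W):
    # Lie bracket of vector fields: [V, W]^a = V^b d_b W^a - W^b d_b V^a
    return [sp.expand(sum(V[b]*sp.diff(W[a], xs[b]) - W[b]*sp.diff(V[a], xs[b])
                          for b in range(5))) for a in range(5)]

def P(i):
    # (P_i)_d = (1/2) Omega_ij x^j d_- + d_i, with i = 0,...,3 labelling x^1..x^4
    comps = [sum(sp.Rational(1, 2)*Omega[i, j]*xs[j+1] for j in range(4)), 0, 0, 0, 0]
    comps[i+1] += 1
    return comps

P_minus = [sp.Integer(1), 0, 0, 0, 0]
T = [2*xm, x1, x2, x3, x4]   # (T)_d = 2 x^- d_- + x^i d_i

for i in range(4):
    for j in range(4):
        lhs = bracket(P(i), P(j))
        rhs = [-Omega[i, j]*c for c in P_minus]
        assert all(sp.simplify(l - r) == 0 for l, r in zip(lhs, rhs))
    # [T, P_i] = -P_i
    assert all(sp.simplify(l + r) == 0 for l, r in zip(bracket(T, P(i)), P(i)))
print("Checked: [P_i, P_j] = -Omega_ij P_- and [T, P_i] = -P_i")
```

The remaining brackets in (2.4) can be verified in the same way by adding the corresponding component lists from (2.7).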
Let us set up our conventions for SU(1, 3) transformations. Given any g = e ǫG ∈ SU(1, 3), and any point x ∈ R 5 , we can denote by xg ∈ R 5 the point sitting at a finite distance ǫ along the integral curve of G ∂ starting at x. Then, we have for g 1 , g 2 ∈ SU(1, 3), x(g 1 g 2 ) = (xg 1 )g 2 , and so SU(1, 3) admits a natural right action on R 5 . In this way, we can consider SU(1, 3) orbits on our spacetime. For infinitesimal G, the leading order term in xg can be read off from (2.7), while the finite form of xg for g generated by each of the basis generators can be found in [22]. Next, we consider how some generic field in the theory, which we denote by ϕ, transforms under SU (1,3). Under an active SU(1, 3) transformation g, we have
$$x \longrightarrow x, \qquad \varphi(x) \longrightarrow \varphi'(x) = g\varphi(x) := R_g(xg^{-1})\,\varphi(xg^{-1}), \quad (2.8)$$
where R g is some (generically spacetime-dependent) matrix acting on any indices of ϕ, and satisfying R g 2 (xg 1 )R g 1 (x) = R g 1 g 2 (x). Taking g then to act only on fields, so that for instance g(∂ i ϕ(x)) = ∂ i (gϕ(x)), we have that (g 1 g 2 )ϕ(x) = g 1 (g 2 ϕ(x)).
For $G$ infinitesimal, we can write to leading order $g\varphi(x) = \varphi(x) + \delta_G\varphi(x)$, where $\delta_G\varphi(x) = -G_\partial\varphi(x) - r_G(x)\varphi(x)$. Here $r_G(x)$ is a matrix acting on any indices of $\varphi(x)$, satisfying $[r_{G_1}, r_{G_2}] + (G_1)_\partial r_{G_2} - (G_2)_\partial r_{G_1} = r_{[G_1, G_2]}$ for any $G_1, G_2 \in \mathfrak{su}(1, 3)$. Then, the variations $\delta_G$ form a representation of $\mathfrak{su}(1, 3)$, i.e. $[\delta_{G_1}, \delta_{G_2}] = \delta_{[G_1, G_2]}$.
The general form of the r G (x) can be deduced by defining a notion of primaries and descendants of su(1, 3) [17]. In particular, primaries are annihilated at the origin by the special transformations M i+ , K + , with descendants generated by the action of P − , P i . A primary operator is entirely captured by a Lifshitz scaling dimension ∆ and representations r[B], r[C α ] under the rotation subalgebra. Explicitly, then, such a primary transforms under su(1, 3) as
$$\begin{aligned}
\delta_{P_-}\varphi(x) &= -(P_-)_\partial\varphi(x), \\
\delta_{P_i}\varphi(x) &= -(P_i)_\partial\varphi(x), \\
\delta_B\varphi(x) &= -(B)_\partial\varphi(x) - r_\varphi[B]\varphi(x), \\
\delta_{C^\alpha}\varphi(x) &= -(C^\alpha)_\partial\varphi(x) - r_\varphi[C^\alpha]\varphi(x), \\
\delta_T\varphi(x) &= -(T)_\partial\varphi(x) - \Delta\varphi(x), \\
\delta_{M_{i+}}\varphi(x) &= -(M_{i+})_\partial\varphi(x) - \left(\frac{1}{2}\Delta\Omega_{ij}x^j + 2x^ir_\varphi[B] - \Omega_{ik}\eta^\alpha_{jk}x^jr_\varphi[C^\alpha]\right)\varphi(x), \\
\delta_{K_+}\varphi(x) &= -(K_+)_\partial\varphi(x) - \left(2\Delta x^- + 2x^ix^ir_\varphi[B] - x^ix^j\Omega_{ik}\eta^\alpha_{jk}r_\varphi[C^\alpha]\right)\varphi(x). \quad (2.9)
\end{aligned}$$
The gauge field A, scalars X I and fermions Ψ do indeed fall into representations of su(1, 3) and thus transform as in (2.8) for some non-trivial variations δ G . Hence, these fields can be reorganised (albeit somewhat non-trivially) to be written in terms of such su(1, 3) primaries [22]. Full details of the infinitesimal transformations of fields can be found in Appendix A, while their finite transformations are also known [22], but will not be needed here.
The Lagrange multiplier $G_{ij}$ is a little different. The variation $\delta_GG_{ij}$ depends not only on $G_{ij}$ but also on the field strength $F$ of the gauge field, at least for $G = M_{i+}, K_+$. Thus, one should really regard $(A, G_{ij})$ as sitting in a single representation. Further, the algebra of variations $\delta_G$ closes on $G_{ij}$ only on-shell; more specifically, it closes only on the constraint surface $\mathcal F_{ij} = -\star\mathcal F_{ij}$.
Finally, it is notationally convenient to introduce a trivial variation δ P + , acting as δ P + X I = 0, δ P + Ψ = 0, δ P + A = 0 and δ P + G ij = 0. Then, the {δ G } G∈B∪{P + } generate a representation of h with P + trivially represented, and we have [δ G 1 , δ G 2 ] = δ [G 1 ,G 2 ] for all G 1 , G 2 ∈ h, with brackets as given in (2.4)-(2.6).
Variation of the Lagrangian
We have shown that the full field content of the theory falls into representations of $\mathfrak h$ (at least on the constraint surface, in the case of $G_{ij}$) under the variations $\delta_G$. We reiterate: these are indeed representations of $\mathfrak{su}(1, 3)\subset\mathfrak h$, as $P_+$ is trivially represented, i.e. $\delta_{P_+}\varphi = 0$ on all fields $\varphi = A, X^I, \Psi, G_{ij}$. Further, the Lagrangian (2.1) transforms in a representation of $\mathfrak{su}(1, 3)\subset\mathfrak h$.
So let us state the variation of the Lagrangian L. In addition to the trivial δ P + L = 0,
for $G \in \{P_-, P_i, B, C^\alpha, T\}$ we find
$$\begin{aligned}
-\delta_{P_-}\mathcal L &= \partial_-\mathcal L, \\
-\delta_{P_i}\mathcal L &= \partial_-\left(\frac{1}{2}\Omega_{ij}x^j\mathcal L\right) + \partial_i\mathcal L, \\
-\delta_B\mathcal L &= \partial_i\left(\frac{1}{2}\Omega_{ij}x^j\mathcal L\right), \\
-\delta_{C^\alpha}\mathcal L &= \partial_i\left(-\frac{1}{2}\eta^\alpha_{ij}x^j\mathcal L\right), \\
-\delta_T\mathcal L &= \partial_-\left(2x^-\mathcal L\right) + \partial_i\left(x^i\mathcal L\right), \quad (2.10)
\end{aligned}$$
and hence, with suitable boundary conditions on the 4-sphere $S^4_\infty$ at infinity, we have $\delta_GS = 0$. More care must be taken, however, in the case of $G \in \{M_{i+}, K_+\}$. We find
$$\begin{aligned}
-\delta_{M_{i+}}\mathcal L &= \star\left(dx^i\wedge\frac{k}{8\pi^2}\mathrm{tr}(F\wedge F)\right) + \partial_-\left[\left(\frac{1}{2}\Omega_{ij}x^-x^j - \frac{1}{8}x^jx^jx^i\right)\mathcal L - \frac{k}{16\pi^2}x^i\,\mathrm{tr}\left(X^IX^I\right)\right] \\
&\quad + \partial_j\left[\frac{1}{4}\left(2\Omega_{ik}x^kx^j + 2\Omega_{jk}x^kx^i - \Omega_{ij}x^kx^k + 4x^-\delta_{ij}\right)\mathcal L - \frac{k}{8\pi^2}\Omega_{ij}\,\mathrm{tr}\left(X^IX^I\right)\right], \\
-\delta_{K_+}\mathcal L &= \star\left(d\left(x^ix^i\right)\wedge\frac{k}{8\pi^2}\mathrm{tr}(F\wedge F)\right) + \partial_-\left[\left(2\left(x^-\right)^2 - \frac{1}{8}(x^ix^i)^2\right)\mathcal L - \frac{k}{8\pi^2}x^ix^i\,\mathrm{tr}\left(X^IX^I\right)\right] \\
&\quad + \partial_i\left[\left(\frac{1}{2}\Omega_{ij}x^jx^kx^k + 2x^-x^i\right)\mathcal L + \frac{k}{4\pi^2}\Omega_{ij}x^j\,\mathrm{tr}\left(X^IX^I\right)\right]. \quad (2.11)
\end{aligned}$$
If we require that the gauge field $A$ is globally defined and regular everywhere, then we can write
$$dx^i\wedge\frac{k}{8\pi^2}\mathrm{tr}(F\wedge F) = d\left(\frac{k}{8\pi^2}x^i\,\mathrm{tr}(F\wedge F)\right), \qquad d\left(x^ix^i\right)\wedge\frac{k}{8\pi^2}\mathrm{tr}(F\wedge F) = d\left(\frac{k}{8\pi^2}x^ix^i\,\mathrm{tr}(F\wedge F)\right), \quad (2.12)$$
and hence, in both cases, $\delta_G\mathcal L$ is a total derivative, and for suitable boundary conditions on $S^4_\infty$ we have $\delta_GS = 0$.
Instantons
We have now seen that the theory described by Lagrangian (2.1) does indeed possess an su(1, 3) spacetime symmetry when the gauge field A is regular throughout R 5 . It would therefore be reasonable to propose that the theory describes only the zero modes of the compactification on x + ∈ (−π, π), since it admits a symmetry under h in which nothing is charged under P + . To move beyond this, we now instead consider a broader configuration space for the theory.
Instantons and Classical Symmetry Breaking
Our task now is to broaden the class of spaces we allow our theory to live on, in an effort to introduce non-trivial topological sectors of the configuration space. Let us now, and for the remainder of this paper, specialise to gauge group $G = \mathrm{SU}(N_c)$. It is clear that all principal SU($N_c$) bundles $P \to \mathbb R^5$ are trivialisable. Consider instead, however, removing a set of points $\{x_a\}_{a=1}^M$ and considering principal bundles over $M_5 = \mathbb R^5\setminus\{x_a\}_{a=1}^M$. Such bundles are then characterised by the integral of the second Chern class over small 4-spheres surrounding each of the $x_a$, which is quantised as
$$n_a = \frac{1}{8\pi^2}\int_{S^4_a}\mathrm{tr}(F\wedge F) \in \mathbb Z, \quad (3.1)$$
with $S^4_a$ denoting a small 4-sphere surrounding the puncture at $x_a$. We then call each pair $(x_a, n_a)$ an instanton insertion, with $x_a \in \mathbb R^5$ the instanton insertion's position, and $n_a \in \mathbb Z$ its charge. We could also in principle consider allowing for non-zero instanton number on $S^4_\infty$, but we instead consider only configurations with
$$\frac{1}{8\pi^2}\int_{S^4_\infty}\mathrm{tr}(F\wedge F) = 0. \quad (3.2)$$
Since the finite SU(1, 3) transformations generated by $M_{i+}$ and $K_+$ move the point at infinity [22], this is chosen as a convenience rather than a restriction. Note then that since $d\,\mathrm{tr}(F\wedge F) = 0$ throughout $M_5$, we have
$$0 = \frac{1}{8\pi^2}\int_{S^4_\infty}\mathrm{tr}(F\wedge F) = \sum_{a=1}^M\frac{1}{8\pi^2}\int_{S^4_a}\mathrm{tr}(F\wedge F) = \sum_{a=1}^Mn_a. \quad (3.3)$$
Thus, the data of the bundle is contained within the set of instanton insertions {(x a , n a )} M a=1 , with the x a distinct and the n a summing to zero. Necessarily, M 5 must now be covered in a number of patches, on each of which A is defined. One can however consider a limit of such an open cover, such that A is now globally defined and regular except along 1dimensional strings where it is singular. These strings, which are analogous to the Dirac string, extend between the insertions x a . Then, the integral of the Chern-Simons 3-form on any S 3 through which such a string is threaded is quantised, ensuring that (3.1) is satisfied. Gauge field configurations with precisely this form were found in [18], but we will not need their details here.
We are now able to extend our field content back to the whole of $\mathbb R^5$, so long as we allow for particular singular behaviour of the field strength $F$. We in particular have
$$d\left(\frac{1}{8\pi^2}\mathrm{tr}(F\wedge F)\right) = d^5x\sum_{a=1}^Mn_a\,\delta^{(5)}(x - x_a). \quad (3.4)$$
Such configurations with maximal symmetry about the points $x_a$ will behave as
$$\frac{1}{8\pi^2}\mathrm{tr}(F\wedge F) \sim -\frac{n_a}{6\pi^2}\star d\left(\frac{1}{|x - x_a|^3}\right), \quad (3.5)$$
as we approach $|x - x_a| \to 0$, where here $|x|^2 = (x^-)^2 + x^ix^i$.
However, we more generally only expect the pullback to the S 4 surrounding x a to behave as
$$\left.\frac{1}{8\pi^2}\mathrm{tr}(F\wedge F)\right|_{S^4} \sim n_a\,\Omega_4, \quad (3.6)$$
as we approach $|x - x_a| \to 0$, where $\Omega_4$ encodes angular dependence and satisfies $\int_{S^4}\Omega_4 = 1$.
Thus, the components of $\frac{1}{8\pi^2}\mathrm{tr}(F\wedge F)\big|_{S^4}$ in Cartesian coordinates on $\mathbb R^5$ go as $|x - x_a|^{-4}$ as we approach $|x - x_a| \to 0$. Explicit examples of such configurations on $S^4$ can be constructed by suitable stereographic projection from corresponding configurations on $\mathbb R^4$. The minimal such construction [21], in which the SU(2) BPST instanton of size $\rho$ is mapped to $S^4$, corresponds to $n_a = \pm 1$, with $\rho = 1$ producing the spherically symmetric result (3.5). More generally, one can in principle relate any SU($N_c$) $n$-instanton configuration on $\mathbb R^4$, parameterised by $4nN_c$ moduli and captured by the ADHM construction [27], to a corresponding configuration on $S^4$ by stereographic projection. While we will not require any of the finer details of such constructions, it is important to emphasise that just specifying instanton insertions $\{(x_a, n_a)\}$ does not fix the boundary behaviour of the gauge field $A$ in a neighbourhood of the points $\{x_a\}$, but rather specifies that such behaviour belongs to a particular continuous family of instanton profiles.
Further, note that configurations defined over $\mathbb R^5$ which feature an arbitrary number of instanton insertions at points $x_a$, as well as vanishing flux on $S^4_\infty$ as in (3.2), were found in [18]. Such configurations additionally satisfy the constraint $\mathcal F_{ij} + \frac{1}{2}\varepsilon_{ijkl}\mathcal F_{kl} = 0$ imposed by $G_{ij}$.
So, we now take the configuration space of our theory to be extended to a disjoint union of subspaces, on each of which we specify instanton insertions {(x a , n a )}. Note, for the sake of later notational convenience, we allow for any of the n a to be zero, in which case F can be smoothly extended to x a .
It is crucial to note that SU(1, 3) still admits an action on this extended configuration space. In particular, the form of gA ensures that
$$d\left(\frac{1}{8\pi^2}\mathrm{tr}\left(F[A]\wedge F[A]\right)\right) = d^5x\sum_{a=1}^Mn_a\,\delta^{(5)}(x - x_a)$$
$$\implies d\left(\frac{1}{8\pi^2}\mathrm{tr}\left(F[gA]\wedge F[gA]\right)\right) = d^5(xg^{-1})\sum_{a=1}^Mn_a\,\delta^{(5)}(xg^{-1} - x_a) = d^5x\sum_{a=1}^Mn_a\,\delta^{(5)}(x - x_ag), \quad (3.7)$$
and hence if $A$ has instanton insertions $\{(x_a, n_a)\}_{a=1}^M$, the transformed field $gA$ has instanton insertions $\{(x_ag, n_a)\}_{a=1}^M$.
Let us now return to the su(1, 3) variation of the Lagrangian. We find now that in the presence of instanton insertions, the variation of L under M i+ , K + is no longer a total derivative, and the action is no longer invariant. We find
$$\delta_{M_{i+}}\mathcal L = k\sum_{a=1}^Mn_a\,x^i_a\,\delta^{(5)}(x - x_a) + \star\,d(\ldots), \qquad \delta_{K_+}\mathcal L = k\sum_{a=1}^Mn_a\,x^i_ax^i_a\,\delta^{(5)}(x - x_a) + \star\,d(\ldots), \quad (3.8)$$
and hence, for suitable boundary conditions on $S^4_\infty$, we have
$$\delta_{M_{i+}}S = k\sum_{a=1}^Mn_a\,x^i_a, \qquad \delta_{K_+}S = k\sum_{a=1}^Mn_a\,x^i_ax^i_a. \quad (3.9)$$
Thus, we find that the classical action is no longer invariant under SU(1, 3). However, we note that the variation of the action is local to the punctures {x a }. It is precisely this fact that allows for a recasting of the classical non-invariance of S as a symmetry deformation in the quantum theory. However, before exploring this we finally note the transformation of the action under the finite transformations generated by M i+ and K + , which are found by exponentiating the infinitesimal results (3.9).
Again let ϕ = A, X I , Ψ, G ij be shorthand for the set of fields of the theory, and suppose that the gauge field A has insertions {(x a , n a )} M a=1 . Then, we find
$$S[e^{\epsilon^iM_{i+}}\varphi] = S[\varphi] - ik\sum_{a=1}^Mn_a\log\left(\frac{\bar M_\epsilon(x_a)}{M_\epsilon(x_a)}\right), \quad (3.10)$$
where given some 4-vector $\alpha^i$, we define
$$M_\alpha(x) = 1 - \frac{1}{2}\Omega_{ij}\alpha^ix^j + \frac{1}{16}\alpha^i\alpha^i\,x^jx^j - \frac{i}{4}\left(\alpha^i\alpha^i\,x^- + 2\alpha^ix^i\right) = 1 + z(x, (0, \alpha^i)) - z(x, 0) - z(0, (0, \alpha^i)) - \frac{i}{4}\alpha^i\alpha^i\,z(x, 0), \quad (3.11)$$
and we have the complex distance
$$z(x_1, x_2) = x^-_1 - x^-_2 + \frac{1}{2}\Omega_{ij}x^i_1x^j_2 + \frac{i}{4}\left(x^i_1 - x^i_2\right)\left(x^i_1 - x^i_2\right) = -\bar z(x_2, x_1). \quad (3.12)$$
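As a consistency check of (3.11) (our own, not in the paper): using (3.12) with $y = (0, \alpha^i)$ one finds $z(x, 0) = x^- + \frac{i}{4}x^ix^i$, $z(0, y) = \frac{i}{4}\alpha^i\alpha^i$, and
$$z(x, y) - z(x, 0) = \frac{1}{2}\Omega_{ij}x^i\alpha^j + \frac{i}{4}\left(\alpha^i\alpha^i - 2\alpha^ix^i\right),$$
so that the second expression in (3.11) evaluates to
$$1 + \frac{1}{2}\Omega_{ij}x^i\alpha^j - \frac{i}{2}\alpha^ix^i - \frac{i}{4}\alpha^i\alpha^i\,x^- + \frac{1}{16}\alpha^i\alpha^i\,x^jx^j,$$
which matches the first expression upon using $\Omega_{ij}x^i\alpha^j = -\Omega_{ij}\alpha^ix^j$.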
Equivalently, we can write
$$\exp\left(iS[e^{\epsilon^iM_{i+}}\varphi]\right) = e^{iS[\varphi]}\prod_{a=1}^M\left(\frac{\bar M_\epsilon(x_a)}{M_\epsilon(x_a)}\right)^{kn_a}. \quad (3.13)$$
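To see that (3.10) is consistent with the infinitesimal result (3.9) (a short check of ours, with the branch of the logarithm taken near 1), expand $M_\epsilon(x_a) = 1 - \frac{1}{2}\Omega_{ij}\epsilon^ix^j_a - \frac{i}{2}\epsilon^ix^i_a + \mathcal O(\epsilon^2)$, so that
$$\log\left(\frac{\bar M_\epsilon(x_a)}{M_\epsilon(x_a)}\right) = i\,\epsilon^ix^i_a + \mathcal O(\epsilon^2), \qquad S[e^{\epsilon^iM_{i+}}\varphi] - S[\varphi] = -ik\sum_an_a\left(i\epsilon^ix^i_a\right) + \mathcal O(\epsilon^2) = \epsilon^i\,k\sum_an_ax^i_a + \mathcal O(\epsilon^2),$$
reproducing $\delta_{M_{i+}}S = k\sum_an_ax^i_a$.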
Similarly, we find
$$S[e^{\epsilon K_+}\varphi] = S[\varphi] - ik\sum_{a=1}^Mn_a\log\left(\frac{1 - 2\epsilon\bar z(x_a, 0)}{1 - 2\epsilon z(x_a, 0)}\right), \quad (3.14)$$
or equivalently,
$$\exp\left(iS[e^{\epsilon K_+}\varphi]\right) = e^{iS[\varphi]}\prod_{a=1}^M\left(\frac{1 - 2\epsilon\bar z(x_a, 0)}{1 - 2\epsilon z(x_a, 0)}\right)^{kn_a}. \quad (3.15)$$
Note that the multiplicative factors appearing on the right hand side of (3.13) and (3.15) generically have branch points. This suggests that there may exist closed loops in configurations space, around which e iS picks up a non-trivial phase. The existence of such loops would thus signal a failure of single-valuedness of e iS as a functional on configuration space. This will be explored in Section 3.6.
Quantum Recovery
We now consider the fate of our su(1, 3) symmetry in the corresponding quantum theory. Despite the non-invariance of the action, we find a set of Ward-Takahashi identities satisfied by all correlation functions of the theory. Such identities are of the usual form, in particular involving the divergence of some vector current; the Noether current for the respective symmetry. The derivation of such local Ward-Takahashi identities and corresponding currents is left until Section 3.4. We first derive the corresponding global identities-also obtainable by integrating their local counterparts over R 5 -directly, so as to elucidate the quantum recovery of the theory's symmetries most straightforwardly.
First suppose we forbid instanton insertions, and define the configuration space of the theory to have globally regular field strength F . We can then formally define correlation functions of operators Φ (1) , . . . , Φ (N ) by the path integral
$$\left\langle\Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\right\rangle = \int D\varphi\;\Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\,e^{iS[\varphi]}, \quad (3.16)$$
where, as above, we use $\varphi$ to denote the fields $X^I, A, \Psi, G_{ij}$ of the theory, and the $\Phi^{(a)}$ are generically composite functions of $\varphi$ and their derivatives. The partition function is $Z = \langle 1\rangle$. Symmetries are then realised by Ward-Takahashi identities for correlation functions. Under some SU(1, 3) transformation $g$, we have transformed fields $\varphi' = g\varphi$. Making use of the fact that $S[\varphi'] = S[\varphi]$, and assuming $D\varphi' = D\varphi$, we have
$$\left\langle\Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\right\rangle = \int D\varphi\;\Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\,e^{iS[\varphi]} = \int D\varphi'\;\Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\,e^{iS[\varphi']}$$
$$= \int D\varphi\;\Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\,e^{iS[\varphi]} = \left\langle\Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\right\rangle, \quad (3.17)$$
where viewing Φ[ϕ] as a composite function of the fields ϕ, we have Φ ′ = Φ[ϕ ′ ]. This is the global Ward-Takahashi identity for the symmetry g. We can equivalently write the infinitesimal form,
\sum_{a=1}^{N} \langle \Phi^{(1)}(x_1)\ldots \delta_G\Phi^{(a)}(x_a)\ldots \Phi^{(N)}(x_N)\rangle = 0\,, \quad (3.18)
for each G ∈ su(1, 3).
Let us now consider what changes when we allow for instanton insertions. The configuration space of the theory is now the disjoint union of subspaces on which we specify instanton insertions {(x a , n a )}. Hence, in calculating the correlation function of a set of operators Φ a , we must also specify which of these subspaces we perform the path integral over. Further, within each of these subspaces we encounter a number of zero modes, undamped by the path integral. The bosonic zero modes correspond simply to the space of gauge-inequivalent instantonic gauge field configurations as discussed above, while we also generically expect fermionic zero modes in each such background. We should therefore also specify gauge field and fermionic boundary conditions in a neighbourhood of each x a .
This leads us to define
\langle \Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\rangle_{\{(x_a,n_a),\, q_a\}} := \int_{\{(x_a,n_a),\, q_a\}} D\phi\;\, \Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\, e^{iS[\phi]}\,. \quad (3.19)
Here, the path integral is performed only over configurations ϕ with instanton insertions {(x a , n a )} N a=1 . We additionally include formal multi-indices q a which specify asymptotic field behaviour in a neighbourhood of the x a , corresponding to the bosonic and fermionic instanton moduli as mentioned above, and about which we will have more to say in Section 3.3. Note, the operator insertion points are the same as the instanton insertion points, denoted x a . This is done without loss of generality, since we allow for any of the operators Φ (a) to be the identity operator 1, and we allow any of the n a to vanish.
Next, consider some SU(1, 3) transformation g, with corresponding transformed fields ϕ'(x) = gϕ(x). If ϕ has instanton insertions {(x_a, n_a)}, then by (3.7) we have that gϕ has instanton insertions {(x_a g, n_a)}. It is important to note that this then induces a right group action of SU(1, 3) on the boundary data q_a. In particular, if the fields ϕ have instanton insertions {(x_a, n_a)} with boundary data q_a, we can define q_a g as the boundary data of gϕ near x_a g. Hence, again assuming no non-trivial Jacobian factor, we have
\int_{\{(x_a g^{-1},\, n_a),\, q_a g^{-1}\}} D\phi = \int_{\{(x_a, n_a),\, q_a\}} D\phi'\,. \quad (3.20)
We then have
\langle \Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\rangle_{\{(x_a g^{-1},n_a),\, q_a g^{-1}\}} = \int_{\{(x_a g^{-1},n_a),\, q_a g^{-1}\}} D\phi\;\, \Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\, e^{iS[\phi]} = \int_{\{(x_a,n_a),\, q_a\}} D\phi'\;\, \Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\, e^{iS[\phi']} = \int_{\{(x_a,n_a),\, q_a\}} D\phi\;\, \Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\, e^{iS[\phi]} = \langle \Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\rangle_{\{(x_a,n_a),\, q_a\}}\,, \quad (3.21)
which is a generalisation of (3.17).
We now consider the rest of SU(1, 3). The only difference here is that we no longer
necessarily have S[ϕ'] = S[ϕ]. First consider g = exp(ǫ^i M_{i+}). Then, we have
\langle \Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\rangle_{\{(x_a g^{-1},n_a),\, q_a g^{-1}\}} = \int_{\{(x_a g^{-1},n_a),\, q_a g^{-1}\}} D\phi\;\, \Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\, e^{iS[\phi]} = \int_{\{(x_a,n_a),\, q_a\}} D\phi'\;\, \Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\, e^{iS[\phi]} = \prod_{a=1}^{N}\left(\frac{M_{-\epsilon}(x_a)}{\bar{M}_{-\epsilon}(x_a)}\right)^{k n_a} \langle \Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\rangle_{\{(x_a,n_a),\, q_a\}}\,. \quad (3.22)
Following the same steps, for g = exp (ǫK + ) we have
\langle \Phi^{(1)\prime}(x_1)\ldots\Phi^{(N)\prime}(x_N)\rangle_{\{(x_a g^{-1},n_a),\, q_a g^{-1}\}} = \prod_{a=1}^{N}\left(\frac{1 + 2\epsilon\, z(x_a,0)}{1 + 2\epsilon\, \bar{z}(x_a,0)}\right)^{k n_a} \langle \Phi^{(1)}(x_1)\ldots\Phi^{(N)}(x_N)\rangle_{\{(x_a,n_a),\, q_a\}}\,. \quad (3.23)
Hence, through (3.22) and (3.23) we find that in the quantum theory, we still have global Ward-Takahashi identities corresponding to M_{i+}, K_+. But these identities are deformed from the naive result (3.17), which holds only in the absence of instanton insertions.
An Alternative Perspective, and Instanton Operators
Before moving on to find the more general local counterparts to these Ward-Takahashi identities, let us describe an equivalent but nonetheless useful notation we may use to denote instanton insertions in the quantum theory. This reformulation, in terms of instanton operators, will in particular allow for a compact infinitesimal form of (3.21)-(3.23), while also making contact with previous work in Lorentzian Yang-Mills theories in five dimensions [19][20][21].
In the previous Section, we chose to introduce the notion of instanton insertions in the quantum theory by specifying the path integration domain. At least at a formal level, we could instead have expanded the space of operators in the theory. Let us denote by Φ n (x) a local operator which, in addition to carrying some representation of SU (1, 3), also carries charge under the U(1) topological current tr(F ∧ F ). In detail, such an operator satisfies
\Big\langle \frac{1}{8\pi^2}\int_{S^4(x)} \mathrm{tr}(F\wedge F)\;\, \Phi_n(x)\,\ldots \Big\rangle = n\, \big\langle \Phi_n(x)\,\ldots \big\rangle \quad (3.24)
for some n ∈ Z, where here S^4(x) is a 4-sphere surrounding x, sufficiently small such that it does not enclose any other insertions. However, if we are now to reproduce the path integral manipulations that lead to the Ward-Takahashi identities (3.21)-(3.23), we need some prescription for how such topologically charged operators are constructed in practice in terms of the fields of the theory.
In analogy with monopole operators appearing in three-dimensional gauge theory [28], one can construct the operators Φ n through the introduction of disorder operators known as instanton operators [19][20][21]. Then, the inclusion in the path integral of some instanton operators I {q} n (x) is defined in terms of our previous notation by
\int D\phi\;\, I^{\{q_1\}}_{n_1}(x_1)\ldots I^{\{q_N\}}_{n_N}(x_N)\,\ldots \;=\; \int_{\{(x_a,n_a),\, q_a\}} \ldots \quad (3.25)
In particular, this path integral vanishes identically unless \sum_a n_a = 0. It is natural at this point to say a little more about the formal index q, and in particular its interpretation in canonical quantisation. In some quantisation of the theory, I^{\{q\}}_n is the creation operator of an instanton-particle. Precisely what state is created is specified by the index q. In a pure gauge theory, this index would correspond to the physical (as opposed to gauge-redundant) collective coordinates of an n-instanton on S^4 in SU(N_c), which although complicated are accessible by virtue of the ADHM construction [27]. However, in a theory with fermions such as the theory considered here, we generically have fermion zero modes in an instanton background, giving rise to a degenerate ground state. Thus, in acting with I^{\{q\}}_n on the vacuum, we need the index q to specify which of these vacuum states is created. The full classification of these fermion zero modes, and thus a precise formulation of the index q, has been achieved for the case of a single SU(N_c) instanton [20], providing I^{\{q\}}_{\pm 1}. A more general treatment remains an important open problem in five-dimensional gauge theory. For the purposes of this paper, it is sufficient to assume the existence of such a complete formulation. Writing I^{\{q\}\prime}_n(x) = I^{\{q g^{-1}\}}_n(x g^{-1}), the change in path integral measure (3.20) is hence recast simply as
\int D\phi\;\, I^{\{q_1\}\prime}_{n_1}(x_1)\, I^{\{q_2\}\prime}_{n_2}(x_2)\ldots I^{\{q_N\}\prime}_{n_N}(x_N)\,\ldots \;=\; \int D\phi'\;\, I^{\{q_1\}}_{n_1}(x_1)\, I^{\{q_2\}}_{n_2}(x_2)\ldots I^{\{q_N\}}_{n_N}(x_N)\,\ldots\,. \quad (3.26)
With such a formulation in place, we can now build an operator carrying instanton charge, as
\Phi_n(x) = I^{\{q\}}_n(x)\,\Phi(x)\,, \quad (3.27)
where Φ = Φ[ϕ] is once again simply some composite function of the fields ϕ and their derivatives. Then, Φ_n(x) transforms in a representation of SU(1, 3), which is a tensor product of the representations of I^{\{q\}}_n and Φ(x). Given some g ∈ SU(1, 3), we have Φ'_n(x) = gΦ_n(x) = I^{\{q g^{-1}\}}_n(x g^{-1})(gΦ)(x) = Φ_n(x) + ǫ δ_G Φ_n(x), where g = e^{ǫG}. In particular, δ_G Φ_n(x) as always takes the form δ_G Φ_n(x) = -G_∂ Φ_n(x) - r_G(x) Φ_n(x) for a differential operator G_∂ and matrix r_G(x), and defines a representation of su(1, 3). Note, we define, for instance, ∂_i I^{\{q\}}_n(x) by requiring ⟨(∂_i I^{\{q\}}_n)⋯⟩ = ∂_i ⟨I^{\{q\}}_n⋯⟩. Further, as with fields, for the sake of later notational convenience we trivially define δ_{P_+} I^{\{q\}}_n(x) = 0, so that Φ_n sits in a representation of h = u(1) ⊕ su(1, 3) in which P_+ is trivially represented.
We can then reproduce each of the Ward-Takahashi identity derivations of the previous Section, with for instance the manipulation from the first to second line of (3.21) being now of the form (3.26). We thus arrive at simply
\langle \Phi^{(1)\prime}_{n_1}(x_1)\ldots\Phi^{(N)\prime}_{n_N}(x_N)\rangle = \langle \Phi^{(1)}_{n_1}(x_1)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle\,, \quad (3.28)
when g lies in the subgroup of SU(1, 3) generated by {P_-, P_i, B, C_α, T}, or infinitesimally,
\sum_{a=1}^{N} \Big\langle \delta_G\Phi^{(a)}_{n_a}(x_a) \prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle = 0\,. \quad (3.29)
For transformations generated by the remaining two generators G = M i+ , K + we have
\langle \Phi^{(1)\prime}_{n_1}(x_1)\ldots\Phi^{(N)\prime}_{n_N}(x_N)\rangle = \prod_{a=1}^{N}\left(\frac{M_{-\epsilon}(x_a)}{\bar{M}_{-\epsilon}(x_a)}\right)^{k n_a} \langle \Phi^{(1)}_{n_1}(x_1)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle\,, \quad (3.30)
and
\langle \Phi^{(1)\prime}_{n_1}(x_1)\ldots\Phi^{(N)\prime}_{n_N}(x_N)\rangle = \prod_{a=1}^{N}\left(\frac{1 + 2\epsilon\, z(x_a,0)}{1 + 2\epsilon\, \bar{z}(x_a,0)}\right)^{k n_a} \langle \Phi^{(1)}_{n_1}(x_1)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle\,, \quad (3.31)
respectively. These then have the infinitesimal forms
\sum_{a=1}^{N} \Big\langle \big(\delta_{M_{i+}} + ik n_a\, x^i_a\big)\Phi^{(a)}_{n_a}(x_a) \prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle = 0\,, \qquad \sum_{a=1}^{N} \Big\langle \big(\delta_{K_+} + ik n_a\, x^i_a x^i_a\big)\Phi^{(a)}_{n_a}(x_a) \prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle = 0\,. \quad (3.32)
Local Ward-Takahashi Identities
Having now seen that symmetry is restored in the quantum theory, in which Ward-Takahashi identities are deformed in the presence of instanton operators, let us now present the much more general local Ward-Takahashi identities. These will in particular determine the corresponding Noether currents. We derive the identities following the standard procedure. We consider the variation of correlation functions under a broader class of transformations, in which the su(1, 3) variations are allowed to vary locally according to some function ǫ(x). Note however that ǫ(x) must still be approximately constant in a neighbourhood of the points x_a, to ensure that the resulting transformations still map into the extended configuration space. Then, taking the functional derivative with respect to ǫ(x) of the resulting expression, for each G ∈ su(1, 3) we arrive at the identity below, valid for operators Φ^{(a)}_{n_a} that depend only on the fields A, X^I, Ψ and not their derivatives; more generally, one would find additional terms on the right-hand side of the form ∂(contact term). We have
-i\,\Big\langle W_G(x) \prod_{a=1}^{N}\Phi^{(a)}_{n_a}(x_a)\Big\rangle = \star\sum_{a=1}^{N}\delta^{(5)}(x - x_a)\,\Big\langle \delta_G\Phi^{(a)}_{n_a}(x_a)\prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle\,. \quad (3.33)
For G ∈ {P − , P i , B, C α , T }, the top forms W G are given by
W_G = d\star J_G\,, \quad (3.34)
for Noether currents J G . Once again, the story is different for M i+ , K + , for which we find
W_{M_{i+}} = d\star J_{M_{i+}} + x^i\, d\left(\frac{k}{8\pi^2}\,\mathrm{tr}(F\wedge F)\right) = d\star J_{M_{i+}} + k\star\sum_{a=1}^{N} n_a\, x^i_a\, \delta^{(5)}(x - x_a)\,, \quad (3.35)
and
W_{K_+} = d\star J_{K_+} + x^i x^i\, d\left(\frac{k}{8\pi^2}\,\mathrm{tr}(F\wedge F)\right) = d\star J_{K_+} + k\star\sum_{a=1}^{N} n_a\, x^i_a x^i_a\, \delta^{(5)}(x - x_a)\,. \quad (3.36)
The explicit forms of the Noether currents J G can be found in Appendix B. It is natural then to reorganise terms in (3.33) for G = M i+ , K + , to bring the set of Ward-Takahashi identities to a more familiar form. We have
-i\,\Big\langle d\star J_G(x) \prod_{a=1}^{N}\Phi^{(a)}_{n_a}(x_a)\Big\rangle = \star\sum_{a=1}^{N}\delta^{(5)}(x - x_a)\,\Big\langle \tilde{\delta}_G\Phi^{(a)}_{n_a}(x_a)\prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle\,, \quad (3.37)
where the new variations \tilde{\delta}_G act as
\tilde{\delta}_G\Phi_n(x) = \delta_G\Phi_n(x) \quad \text{for } G \in \{P_-, P_i, B, C_\alpha, T\}\,, \qquad \tilde{\delta}_{M_{i+}}\Phi_n(x) = \delta_{M_{i+}}\Phi_n(x) + ikn\, x^i\, \Phi_n(x)\,, \qquad \tilde{\delta}_{K_+}\Phi_n(x) = \delta_{K_+}\Phi_n(x) + ikn\, x^i x^i\, \Phi_n(x)\,. \quad (3.38)
Integrating (3.37) over R^5 then yields the global identities
\sum_{a=1}^{N} \Big\langle \tilde{\delta}_G\Phi^{(a)}_{n_a}(x_a) \prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle = 0\,. \quad (3.39)
Let us summarise our findings so far. The classical theory admitted an SU(1, 3) spacetime symmetry in the absence of instanton insertions. The corresponding infinitesimal variation of fields is denoted δ_G for each G ∈ su(1, 3), which form a representation of su(1, 3) when acting on the gauge field A and matter fields X^I, Ψ. We extended this to include a variation δ_{P_+} that acts trivially on all fields, δ_{P_+}ϕ = 0, and in this way realised the {δ_G} as a representation of h, with brackets [δ_{G_1}, δ_{G_2}] = δ_{[G_1, G_2]} as in (2.4)-(2.6). We found that this symmetry was broken in the classical theory in the presence of instanton operators. However, this breaking is local to the instanton insertion points x_a, and thus the resulting Ward-Takahashi identities in the quantum theory could nonetheless be written in the standard form (3.37) in terms of Noether currents J_G. Integrating these local identities over R^5, we recovered the infinitesimal form of the global Ward-Takahashi identities (3.29)-(3.32).
The Ward-Takahashi identities (3.37) are written not in terms of our original variations δ_G, but instead in terms of variations \tilde{\delta}_G, which we have defined for each G ∈ B = {P_-, P_i, B, C_α, T, M_{i+}, K_+}. In particular, they differ from the δ_G for G = M_{i+}, K_+ when acting on operators carrying non-zero instanton charge, as in (3.38).
We are then led to ask: are the {\tilde{\delta}_G}_{G∈B} the generators of a representation of su(1, 3) under commutation, like the δ_G are? The answer is in fact no. In particular, we find a single commutator that does not close on su(1, 3), which is
[\tilde{\delta}_{M_{i+}}, \tilde{\delta}_{P_j}]\Phi_n(x) = -\Big(\tfrac{1}{2}\Omega_{ij}\tilde{\delta}_T - 2\delta_{ij}\tilde{\delta}_B + \Omega_{ik}\eta^\alpha_{jk}\tilde{\delta}_{C_\alpha} - ik\,\delta_{ij}\, n\Big)\Phi_n(x)\,. \quad (3.40)
Suppose however that we define a new variation \tilde{\delta}_{P_+} that acts as
\tilde{\delta}_{P_+}\Phi_n(x) = ikn\, \Phi_n(x)\,. \quad (3.41)
Equivalently, we have that no fields in the theory are charged under \tilde{\delta}_{P_+}, but instanton operators transform as \tilde{\delta}_{P_+} I^{\{q\}}_n(x) = ikn\, I^{\{q\}}_n(x). Then, we have
[\tilde{\delta}_{M_{i+}}, \tilde{\delta}_{P_j}]\Phi_n(x) = -\Big(\delta_{ij}\tilde{\delta}_{P_+} + \tfrac{1}{2}\Omega_{ij}\tilde{\delta}_T - 2\delta_{ij}\tilde{\delta}_B + \Omega_{ik}\eta^\alpha_{jk}\tilde{\delta}_{C_\alpha}\Big)\Phi_n(x)\,. \quad (3.42)
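As a small consistency remark (ours): inserting the definition (3.41) into (3.42) gives -\delta_{ij}\tilde{\delta}_{P_+}\Phi_n(x) = -ik\,\delta_{ij}\, n\, \Phi_n(x), which is precisely the anomalous term appearing in (3.40); the two commutators therefore agree.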
Then, by direct comparison with the algebra (2.4), we find quite remarkably that the full set of variations {\tilde{\delta}_G}_{G ∈ B∪{P_+}} do generate a representation of h, with the operator Φ_n now carrying charge ikn under \tilde{\delta}_{P_+}. In other words, the operator Φ_n carries Kaluza-Klein momentum in an emergent sixth dimension. We can then organise our space of operators into primaries and descendants of h [17]. In particular, if Φ_n is a primary operator of dimension ∆ and spin {r_Φ[B], r_Φ[C_α]}, then we have
\tilde{\delta}_{P_+}(I_n\Phi) = ikn\,(I_n\Phi)\,, \qquad \tilde{\delta}_{P_-}(I_n\Phi) = -(P_-)_\partial (I_n\Phi)\,, \qquad \tilde{\delta}_{P_i}(I_n\Phi) = -(P_i)_\partial (I_n\Phi)\,,
\tilde{\delta}_{B}(I_n\Phi) = -(B)_\partial (I_n\Phi) - r_\Phi[B]\,(I_n\Phi)\,, \qquad \tilde{\delta}_{C_\alpha}(I_n\Phi) = -(C_\alpha)_\partial (I_n\Phi) - r_\Phi[C_\alpha]\,(I_n\Phi)\,, \qquad \tilde{\delta}_{T}(I_n\Phi) = -(T)_\partial (I_n\Phi) - \Delta\,(I_n\Phi)\,,
\tilde{\delta}_{M_{i+}}(I_n\Phi) = -(M_{i+})_\partial (I_n\Phi) - \left(\tfrac{1}{2}\Delta\,\Omega_{ij}x^j - ikn\, x^i + 2x^i\, r_\Phi[B] - \Omega_{ik}\eta^\alpha_{jk}x^j\, r_\Phi[C_\alpha]\right)(I_n\Phi)\,,
\tilde{\delta}_{K_+}(I_n\Phi) = -(K_+)_\partial (I_n\Phi) - \left(2\Delta\, x^- - ikn\, x^i x^i + 2x^i x^i\, r_\Phi[B] - x^i x^j\,\Omega_{ik}\eta^\alpha_{jk}\, r_\Phi[C_\alpha]\right)(I_n\Phi)\,. \quad (3.43)
We can now extend the local Ward-Takahashi identity to read once again
-i\,\Big\langle d\star J_G(x) \prod_{a=1}^{N}\Phi^{(a)}_{n_a}(x_a)\Big\rangle = \star\sum_{a=1}^{N}\delta^{(5)}(x - x_a)\,\Big\langle \tilde{\delta}_G\Phi^{(a)}_{n_a}(x_a)\prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle\,, \quad (3.44)
which now holds for all G ∈ B ∪ {P + }, where we define
J_{P_+} = -\frac{k}{8\pi^2}\, \star \mathrm{tr}(F\wedge F)\,. \quad (3.45)
It is indeed straightforward to see that for G = P + , (3.44) is satisfied trivially. Further,
(3.44) holds for all G ∈ h with J_{G_1+G_2} = J_{G_1} + J_{G_2} and J_{[G_1,G_2]} = \tilde{\delta}_{G_1} J_{G_2} - \tilde{\delta}_{G_2} J_{G_1} for all G_1, G_2 ∈ h.
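To make the G = P_+ case explicit (our addition, combining the relation (1/8π²) d tr(F∧F) = ⋆ Σ_a n_a δ^{(5)}(x - x_a) used in (3.35) with (3.41) and (3.45)):
-i\,\Big\langle d\star J_{P_+}(x)\,\prod_{a=1}^{N}\Phi^{(a)}_{n_a}(x_a)\Big\rangle = ik\,\star\sum_{a=1}^{N} n_a\,\delta^{(5)}(x - x_a)\,\Big\langle \prod_{b=1}^{N}\Phi^{(b)}_{n_b}(x_b)\Big\rangle = \star\sum_{a=1}^{N}\delta^{(5)}(x - x_a)\,\Big\langle \tilde{\delta}_{P_+}\Phi^{(a)}_{n_a}(x_a)\prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle\,,
which is indeed (3.44).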
Integrating over R 5 , we once again arrive at the global identities
\sum_{a=1}^{N} \Big\langle \tilde{\delta}_G\Phi^{(a)}_{n_a}(x_a) \prod_{b\neq a}\Phi^{(b)}_{n_b}(x_b)\Big\rangle = 0\,, \quad (3.46)
which hold for all G ∈ h. We can equivalently write this in its finite form, as
\langle \Phi^{(1)\prime}_{n_1}(x_1)\ldots\Phi^{(N)\prime}_{n_N}(x_N)\rangle = \langle \Phi^{(1)}_{n_1}(x_1)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle\,, \quad (3.47)
where in this expression, Φ'_n(x) = exp(\tilde{\delta}_G)Φ_n(x). The explicit forms of these finitely-transformed operators can be found in [22]. Using these forms, it is in particular straightforward to then see that (3.47) reproduces the results (3.28), (3.30) and (3.31).
General Solution to Ward-Takahashi Identities
The algebra h, its representations and the solutions of the resulting Ward-Takahashi identities (3.46) have already been studied extensively [17]. Thus we can readily apply those results here. For instance, if we consider a pair of scalar operators Φ^{(1)}_{n_1}, Φ^{(2)}_{n_2} of the theory with scaling dimensions ∆_1, ∆_2 and instanton charges n_1, n_2, respectively, the resulting 2-point function is fixed up to an overall constant. It is given by
\langle \Phi^{(1)}_{n_1}(x_1)\,\Phi^{(2)}_{n_2}(x_2)\rangle = \delta_{\Delta_1,\Delta_2}\,\delta_{0,\, n_1+n_2}\;\, d(\Delta_1, n_1)\;\, \frac{1}{(z_{12}\bar{z}_{12})^{\Delta_1/2}} \left(\frac{z_{12}}{\bar{z}_{12}}\right)^{n_1}\,, \quad (3.48)
for some constant d(∆_1, n_1), and z_{12} = z(x_1, x_2) = -z_{21} as defined in (3.12). One can then continue to find the general solution at N points. This takes a form familiar from regular conformal field theory: a pre-factor which solves the inhomogeneous Ward-Takahashi identities, multiplied by an undetermined function H of su(1, 3) invariant combinations of coordinates. Explicitly, we have [17]
\langle \Phi^{(1)}_{n_1}(x_1)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle = \delta_{0,\, n_1+\cdots+n_N} \prod_{a<b}^{N} (z_{ab}\bar{z}_{ab})^{-\alpha_{ab}/2} \left(\frac{z_{ab}}{\bar{z}_{ab}}\right)^{(n_a - n_b)/N} H\!\left(\frac{|z_{ab}||z_{cd}|}{|z_{ac}||z_{bd}|},\;\, \frac{z_{ab}\, z_{bc}\, z_{ca}}{\bar{z}_{ab}\,\bar{z}_{bc}\,\bar{z}_{ca}}\right)\,, \quad (3.49)
where z_{ab} = z(x_a, x_b). The constants α_{ab} = α_{ba} satisfy \sum_{b\neq a} \alpha_{ab} = ∆_a for each a = 1, …, N, and for a suitable choice of the function H can be taken to be
\alpha_{ab} = \frac{1}{N-2}\,(\Delta_a + \Delta_b) - \frac{1}{(N-1)(N-2)} \sum_{c=1}^{N} \Delta_c \quad (3.50)
for all N ≥ 3.
The full set of su(1, 3)-invariant objects falls into two categories: the familiar cross-ratios |z_{ab}||z_{cd}|/|z_{ac}||z_{bd}|, of which there are N(N-3)/2, and the more novel phases z_{ab} z_{bc} z_{ca}/\bar{z}_{ab}\bar{z}_{bc}\bar{z}_{ca}, of which there are (N-1)(N-2)/2. In particular, even at N = 3 there is a single invariant phase, and thus in contrast to regular conformal field theory, the 3-point function is fixed only up to a function of one variable.
The Quantisation of k
A necessary requirement for the Lagrangian (2.1) to give rise to a well-defined quantum field theory is that e^{iS[ϕ]} is a single-valued functional on the theory's configuration space. Such a constraint can have deep and subtle implications, especially in a theory with non-trivial topological sectors. Take for instance the three-dimensional Abelian Chern-Simons theory, whose action S_{CS} is not gauge invariant in the presence of monopole fluxes, and thus fails to be single-valued on configuration space (defined to be the space of fields modulo gauge transformations). Nonetheless, e^{iS_{CS}} remains single-valued even in the presence of monopole fluxes, provided that the Chern-Simons level is quantised in the integers.
In this Section, we will prove a comparable result for the theory defined by Lagrangian (2.1). In detail, we will prove the following claim:
Take the configuration space to be a union over sectors of arbitrary instanton insertions. Then, a necessary condition such that e^{iS[ϕ]} is single-valued on this configuration space is that k ∈ \tfrac{1}{2}\mathbb{Z}. Our proof is constructive, and in particular does not provide a complete picture of the global properties of the action as a functional on configuration space.
Let us outline the steps taken to demonstrate the claim. We will first define a one-parameter family of field configurations ϕ_γ, γ ∈ R. In particular, we demonstrate explicitly that this one-parameter family of configurations in fact defines a closed loop in configuration space; in other words, it satisfies ϕ_{γ+2π} = ϕ_γ, and so in particular ϕ_{2π} = ϕ_0.
Next, we will compute an explicit expression for exp (iS[ϕ γ ]) as a function of exp (iS[ϕ 0 ]). Using this, we find examples of closed loops such that
\exp\left(iS[\phi_{2\pi}]\right) = e^{4\pi k i}\, \exp\left(iS[\phi_0]\right)\,. \quad (3.51)
Thus, e^{iS[ϕ]} is generically a multi-valued functional on configuration space. Note, the construction of a loop satisfying (3.51) will necessarily require that the configurations ϕ_γ have non-trivial instanton insertions, and thus this phase ambiguity only arises when we allow for such non-trivial topological sectors. Hence, we find that for this particular loop in configuration space, e^{iS[ϕ]} is single-valued only for k ∈ \tfrac{1}{2}\mathbb{Z}, thus proving the claim.
So let us now explicitly construct the closed loop in configuration space. Note that we have already computed the finite variation of the action under SU(1, 3) transformations, in particular finding rather suggestive forms (3.13) and (3.15) for the transformations under M_{i+} and K_+, respectively. As such, we can utilise these results, and thus simplify our calculations here, by seeking a closed loop in configuration space that lies within the SU(1, 3) orbits.
Let us then consider the following one-parameter family of SU(1, 3) elements,
h(\gamma) = \exp\left[\gamma\left(\tfrac{1}{2}K_+ + P_-\right)\right]\,. \quad (3.52)
Making use of the expressions in Appendix B of [15], one can show that if we take the su(1, 3) generators to lie in the fundamental representation, then h(γ + 2π) = h(γ). Thus, h(γ) defines a closed loop in the fundamental representation of SU(1, 3), of period 2π.
Next we want to consider the finite transformation of coordinates and fields under the SU(1, 3) transformation h(γ). Once again, we can make our lives easier by utilising known results. First, we can use the Baker-Campbell-Hausdorff formula to show that for all γ ∈ (-π/2, π/2),
h(\gamma) = \exp\left(\tfrac{1}{2}(\tan\gamma)\, K_+\right) \exp\left((\sin\gamma\cos\gamma)\, P_-\right) \exp\left(\log(\sec\gamma)\, T\right) = \exp\left(\log(\cos\gamma)\, T\right) \exp\left((\sin\gamma\cos\gamma)\, P_-\right) \exp\left(\tfrac{1}{2}(\tan\gamma)\, K_+\right)\,. \quad (3.53)
Thus, we can compute the finite variation of coordinates and fields under h(γ) by performing successive finite transformations under elements generated purely by T, P_- and K_+. First let us consider the coordinates, for which the finite transformations under e^{ǫT}, e^{ǫP_-} and e^{ǫK_+} can be found in Appendix A of [22]. Then, we find
\big(xh^{-1}(\gamma)\big)^- = \frac{\cos 2\gamma\;\, x^- - \tfrac{1}{2}\sin 2\gamma\left(1 - (x^-)^2 + \tfrac{1}{16}|\vec{x}|^4\right)}{(\cos\gamma + \sin\gamma\, x^-)^2 + \tfrac{1}{16}\sin^2\gamma\, |\vec{x}|^4}\,, \qquad \big(xh^{-1}(\gamma)\big)^i = \frac{\cos\gamma\;\, x^i + \tfrac{1}{4}\sin\gamma\left(4x^- x^i - |\vec{x}|^2\, \Omega_{ij} x^j\right)}{(\cos\gamma + \sin\gamma\, x^-)^2 + \tfrac{1}{16}\sin^2\gamma\, |\vec{x}|^4}\,, \quad (3.54)
which are valid for all γ ∈ R. Note then that we do indeed have xh^{-1}(γ + 2π) = xh^{-1}(γ), and so in particular xh^{-1}(2π) = x.
Let us now consider the orbits in configuration space generated by h(γ). We take some starting configuration ϕ = {A, X I , Ψ, G ij }, and then consider the new configuration h(γ)ϕ obtained by transforming by the SU(1, 3) element h(γ). Explicitly,
h(\gamma)\phi = \{h(\gamma)A_-,\;\, h(\gamma)A_i,\;\, h(\gamma)X^I,\;\, h(\gamma)\Psi,\;\, h(\gamma)G_{ij}\}\,. \quad (3.55)
The form of h(γ)ϕ is then found by exponentiating the known infinitesimal variation of each field under (\tfrac{1}{2}\delta_{K_+} + \delta_{P_-}), as given in Appendix A. However, we should be cautious: it is a priori not clear that we have h(γ + 2π)ϕ = h(γ)ϕ, as for instance fields may lie in a projective representation of SU(1, 3).
It is straightforward to see that this is not the case for the gauge field A = (A − , A i ) and scalars X I . For the gauge field, we can simply write down the form of the transformed fields, which take the standard form
(h(\gamma)A_-)(x) = \partial_-\big(xh^{-1}(\gamma)\big)^-\, A_-\big(xh^{-1}(\gamma)\big) + \partial_-\big(xh^{-1}(\gamma)\big)^i\, A_i\big(xh^{-1}(\gamma)\big)\,, \qquad (h(\gamma)A_i)(x) = \partial_i\big(xh^{-1}(\gamma)\big)^-\, A_-\big(xh^{-1}(\gamma)\big) + \partial_i\big(xh^{-1}(\gamma)\big)^j\, A_j\big(xh^{-1}(\gamma)\big)\,. \quad (3.56)
Then the 2π-periodicity of xh^{-1}(γ) as seen in (3.54) ensures that we do indeed have h(γ + 2π)A = h(γ)A. Since our task is to exhibit a closed path in configuration space under which the action is not invariant, it is enough to set the scalars and fermions to zero, since if X^I, Ψ = 0 then h(γ)X^I, h(γ)Ψ = 0 for all γ. Finally, we need to address the Lagrange multiplier field G_{ij}. As previously mentioned, the infinitesimal variation of G_{ij} under the generators of SU(1, 3) is somewhat subtle, and in particular we do not know a closed form for its finite variation. However, we can work around this issue in the following way. Our ultimate aim is to compute S[h(γ)ϕ] as a function of S[ϕ]. Now recall, the Lagrangian (2.1) depends on G_{ij} only through the term G_{ij}F_{ij} = G_{ij}F^+_{ij}, with F^+_{ij} = \tfrac{1}{2}\big(F_{ij} + \tfrac{1}{2}\varepsilon_{ijkl}F_{kl}\big).
So let us suppose that the gauge field A of our starting configuration ϕ satisfies F^+_{ij} = 0, and thus S[ϕ] is independent of G_{ij}. Then, crucially, the constraint F^+_{ij} = 0 is an SU(1, 3) invariant: for all g ∈ SU(1, 3), if A satisfies F^+_{ij} = 0 then so does the transformed field gA. Hence, we have that for all γ, S[h(γ)ϕ] is independent of G_{ij}.
Thus, making contact with the language of the above proof outline, let us define the starting configuration ϕ 0 = {A, X I = 0, Ψ = 0, G ij }, while the orbit in configuration space is defined to be
\phi_\gamma = \{h(\gamma)A,\;\, X^I = 0,\;\, \Psi = 0,\;\, G_{ij}\}\,. \quad (3.57)
In particular, we do not transform G ij . Then, we have already shown that ϕ γ+2π = ϕ γ as desired.
Finally, we are ready to compute S[ϕ γ ]. First note that, following the discussion above, the fact that the configuration ϕ satisfies the constraint F + ij = 0 ensures that we have S[ϕ γ ] = S[h(γ)ϕ]. We can then leverage the factorisation (3.53) along with the finite K + transformation (3.15) to simply write down
\exp\left(iS[\phi_\gamma]\right) = e^{iS[\phi]} \prod_{a=1}^{N} u_\gamma(x_a;\, n_a)\,, \quad (3.58)
where
u_\gamma(x;\, n) = \left(\frac{\cos\gamma - \sin\gamma\, z(x, 0)}{\cos\gamma - \sin\gamma\, \bar{z}(x, 0)}\right)^{kn}\,, \quad (3.59)
and the configuration ϕ γ has instanton insertions {(x a , n a )} N a=1 . We remind the reader that z(x 1 , x 2 ) is defined in (3.12).
We can now ask what happens as we pass from γ = 0 through to γ = 2π. Then, for all x = (x^-, \vec{x}) with \vec{x} ≠ 0, we find that as we pass from γ = 0 to γ = 2π, the combination (cos γ - sin γ z(x, 0)) encircles the origin of the complex plane clockwise precisely once. Thus, we have
u_{\gamma+2\pi}(x;\, n) = e^{4\pi k n i}\, u_\gamma(x;\, n)\,. \quad (3.60)
Conversely, if \vec{x} = 0, then u_γ(x; n) = 1 for all γ, and thus trivially u_{γ+2π}(x; n) = u_γ(x; n).
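The winding claim is easy to test numerically. A minimal numpy sketch (ours), sampling a few points with \vec{x} ≠ 0:

```python
# Numeric check (ours) of the statement below (3.59): for vec(x) != 0,
# w(gamma) = cos(gamma) - sin(gamma) * z(x,0), with z(x,0) = x^- + (i/4)|x|^2,
# encircles the origin exactly once, clockwise, as gamma runs over [0, 2*pi].
import numpy as np

def total_phase(xm, r2, num=100000):
    """Total change of arg w(gamma) over gamma in [0, 2*pi]."""
    gamma = np.linspace(0.0, 2 * np.pi, num)
    z = xm + 0.25j * r2                    # z(x, 0) with |vec x|^2 = r2
    w = np.cos(gamma) - np.sin(gamma) * z
    phase = np.unwrap(np.angle(w))
    return phase[-1] - phase[0]

for xm, r2 in [(0.0, 1.0), (2.0, 0.5), (-3.0, 4.0)]:
    assert np.isclose(total_phase(xm, r2), -2 * np.pi, atol=1e-3)
print("w(gamma) winds once clockwise about the origin for each sample point")
```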
Suppose first then that all instanton insertions {(x_a, n_a)} of the starting configuration ϕ lie away from the spatial origin, i.e. \vec{x}_a ≠ 0 for all a = 1, …, N. Then, we have
\exp\left(iS[\phi_{2\pi}]\right) = \exp\Big(4\pi k i \sum_{a=1}^{N} n_a\Big)\, e^{iS[\phi]} = e^{iS[\phi]}\,, \quad (3.61)
since \sum_a n_a = 0. Something special happens, however, if any of the instanton insertions lie at the spatial origin. In particular, taking a single insertion at the origin with charge -1, we find
\exp\left(iS[\phi_{2\pi}]\right) = e^{4\pi k i}\, e^{iS[\phi]}\,, \quad (3.62)
as promised in the outline above. More generally, if the sum of the charges of all instanton insertions at the origin is m ∈ Z, the relevant phase factor is e^{-4πkmi}, and thus the above case is in this sense minimal. Thus, from (3.62) we find that a necessary condition such that e^{iS[ϕ]} is single-valued on configuration space is that k takes values in the half-integers, k ∈ \tfrac{1}{2}\mathbb{Z}. Next, in Section 4 we will require k ∈ Z in order that the theory admits a six-dimensional interpretation. It is unclear whether any such novel interpretation holds when k ∈ {\tfrac{1}{2}, \tfrac{3}{2}, …}, or whether a more refined argument than the one above, perhaps including fermions, can be made to exclude these cases.
Reconstructing Six Dimensions
We have seen that the five-dimensional path integral based on the Lagrangian (2.1) leads to a theory with an SU(1, 3) × U(1) symmetry that acts non-trivially on a Kaluza-Klein-like tower of operators obtained by inserting instantons. Furthermore, the associated Ward-Takahashi identities are naturally solved by the Fourier modes of a conformally compactified six-dimensional conformal field theory, with the role of Kaluza-Klein momentum replaced by instanton number. Thus we now would like to reconstruct the correlators of a six-dimensional theory from the five-dimensional path integral.
Owing to the 2π interval over which x^+ runs, such an interpretation requires the eigenvalues of P_+ to take discrete integer values [17]. This is indeed the case, so long as k ∈ Z. Then, \tilde{\delta}_{P_+}Φ_n(x) = ikn Φ_n(x), which precisely identifies I_n(x)Φ(x) as the (kn)-th Fourier mode of some six-dimensional operator. In particular, a choice of k = 1 allows for the realisation of the full spectrum of Fourier modes on the conformal compactification, while higher k corresponds to a Z_k orbifold thereof.
We can now form a coherent state of Fourier modes, and so define the notion of a six-dimensional operator in our theory. We can then ask when such operators can be interpreted as those of a six-dimensional conformal field theory.
Constructing Six-dimensional Observables
Given a collection of local operators {Φ_n(x)}_{n∈Z}, we are led to define the six-dimensional operator
O(x^+, x^-, x^i) := \sum_{n\in\mathbb{Z}} e^{-ikn x^+}\, \Phi_n(x^-, x^i)\,, \quad (4.1)
for some new coordinate x^+. Then, we have
\tilde{\delta}_{P_+} O(x^+, x^-, x^i) = -\frac{\partial}{\partial x^+}\, O(x^+, x^-, x^i)\,, \quad (4.2)
and so P_+ is identified as translations along an emergent sixth dimension. Indeed, it is straightforward to go a step further, and show that for generic G ∈ h, we have \tilde{\delta}_G O(x) = -G^{6d}_\partial O(x) - r^{6d}(x) O(x), where as usual r^{6d} acts on any indices of O, while the six-dimensional vector fields G^{6d}_\partial form precisely the algebra of conformal Killing vector fields of six-dimensional Minkowski space which commute with (P_+)^{6d}_\partial = \partial_+. Explicitly, these are
(P_+)^{6d}_\partial = \partial_+\,, \qquad (P_-)^{6d}_\partial = \partial_-\,, \qquad (P_i)^{6d}_\partial = \tfrac{1}{2}\Omega_{ij}x^j\partial_- + \partial_i\,,
(B)^{6d}_\partial = -\tfrac{1}{2}\Omega_{ij}x^i\partial_j\,, \qquad (C^I)^{6d}_\partial = \tfrac{1}{2}\eta^I_{ij}x^i\partial_j\,, \qquad (T)^{6d}_\partial = 2x^-\partial_- + x^i\partial_i\,,
(M_{i+})^{6d}_\partial = x^i\partial_+ + \left(\tfrac{1}{2}\Omega_{ij}x^- x^j - \tfrac{1}{8}x^j x^j x^i\right)\partial_- + x^-\partial_i + \tfrac{1}{4}\left(2\Omega_{ik}x^k x^j + 2\Omega_{jk}x^k x^i - \Omega_{ij}x^k x^k\right)\partial_j\,,
(K_+)^{6d}_\partial = x^i x^i\,\partial_+ + \left(2(x^-)^2 - \tfrac{1}{8}(x^i x^i)^2\right)\partial_- + \left(\tfrac{1}{2}\Omega_{ij}x^j x^k x^k + 2x^- x^i\right)\partial_i\,, \quad (4.3)
as first derived in [15].
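The closure of these vector fields under the Lie bracket can be spot-checked mechanically. The following sympy sketch (ours; the concrete antisymmetric Ω is an assumption, and we test only one bracket) verifies that [(P_-)^{6d}_∂, (K_+)^{6d}_∂] = 2(T)^{6d}_∂ with the normalisations of (4.3):

```python
# Sympy cross-check (ours) of one Lie bracket of the vector fields (4.3).
# Coordinates are ordered (x^+, x^-, x^1..x^4); vector fields are stored as
# coefficient lists over these coordinates.
import sympy as sp

xp, xm = sp.symbols('xp xm')
x = sp.Matrix(sp.symbols('x1:5'))
coords = [xp, xm] + list(x)
Om = sp.Matrix([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
r2 = (x.T * x)[0]

def bracket(V, W):
    """Lie bracket [V, W]^b = V^a d_a W^b - W^a d_a V^b."""
    return [sp.simplify(sum(V[a] * sp.diff(W[b], coords[a])
                            - W[a] * sp.diff(V[b], coords[a])
                            for a in range(6))) for b in range(6)]

Pm = [0, 1, 0, 0, 0, 0]                                  # (P_-)_d = d_-
T = [0, 2 * xm] + [x[i] for i in range(4)]               # (T)_d
Kp = [r2, 2 * xm**2 - sp.Rational(1, 8) * r2**2] + \
     [sp.Rational(1, 2) * sum(Om[i, j] * x[j] for j in range(4)) * r2
      + 2 * xm * x[i] for i in range(4)]                 # (K_+)_d

assert bracket(Pm, Kp) == [sp.simplify(2 * c) for c in T]
print("[P_-, K_+] = 2 T, consistent with closure of (4.3)")
```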
Let us finally review how one can reconstruct operators defined on six-dimensional space with the standard Minkowski metric [17]. First, one must perform a Weyl rescaling in order to arrive at operators Ô,
\hat{O}(x) = \cos^{\hat{\Delta}}(x^+/2)\, O(x) = \cos^{\hat{\Delta}}(x^+/2) \sum_{n\in\mathbb{Z}} e^{-ikn x^+}\, \Phi_n(x^-, x^i)\,, \quad (4.4)
where \hat{\Delta} is the six-dimensional scaling dimension. The prefactor corresponds to the non-trivial conformal factor relating the metric (2.3) to the Minkowski metric ds²_M, as ds² = cos²(x^+/2) ds²_M. One may then want to perform a coordinate transformation to standard coordinates x̂ of six-dimensional Minkowski space, such that ds²_M = -2dx̂^+ dx̂^- + dx̂^i dx̂^i. These are related to the (x^+, x^-, x^i) by
x^+ = 2\arctan\frac{\hat{x}^+}{2}\,, \qquad x^- = \hat{x}^- - \frac{\hat{x}^+\, \hat{x}^i \hat{x}^i}{2\left(4 + (\hat{x}^+)^2\right)}\,, \qquad x^i = \frac{4\left(\hat{x}^i + \tfrac{1}{2}\Omega_{ij}\hat{x}^j \hat{x}^+\right)}{4 + (\hat{x}^+)^2}\,. \quad (4.5)
Let us consider for example a scalar operator Ô with six-dimensional scaling dimension ∆, for which this coordinate transformation is trivial. Following (4.1), this should be built from modes Φ_n that are scalars of h, and which have Lifshitz scaling dimension ∆ = \hat{\Delta}. Then, we can write
\hat{O}(\hat{x}) = \cos^{\Delta}(x^+/2)\, O(x) = 2^{-\Delta}\left(e^{ix^+/2} + e^{-ix^+/2}\right)^{\Delta} \sum_{n\in\mathbb{Z}} e^{-ikn x^+}\, \Phi_n(x^-, x^i) = 2^{-\Delta} \sum_{n\in\mathbb{Z}} \sum_{l=0}^{\Delta} \binom{\Delta}{l}\, e^{i\Delta x^+/2 - i(kn+l)x^+}\, \Phi_n(x^-, x^i) = 2^{-\Delta} \sum_{n\in\mathbb{Z}} \sum_{l=0}^{\Delta} \binom{\Delta}{l} \left(\frac{1 + \tfrac{i}{2}\hat{x}^+}{\sqrt{1 + \tfrac{1}{4}(\hat{x}^+)^2}}\right)^{\Delta - 2(kn+l)} \Phi_n\big(x^-(\hat{x}),\, x^i(\hat{x})\big)\,. \quad (4.6)
Thus we have a way to construct six-dimensional operators out of five-dimensional ones. Furthermore in principle we can compute their correlation functions using the five-dimensional path integral viz:
\langle \hat{O}^{(1)}(\hat{x}_1)\ldots\hat{O}^{(N)}(\hat{x}_N)\rangle = \cos^{\Delta_1}(x^+_1/2)\ldots\cos^{\Delta_N}(x^+_N/2) \sum_{n_1,\ldots,n_N\in\mathbb{Z}} e^{-ik(n_1 x^+_1 + \cdots + n_N x^+_N)}\, \langle \Phi^{(1)}_{n_1}(x^-_1, x^i_1)\ldots\Phi^{(N)}_{n_N}(x^-_N, x^i_N)\rangle\,, \quad (4.7)
again for six-dimensional scalars Ô^{(a)} of scaling dimension \hat{\Delta}_a = ∆_a. A generalisation to higher-spin operators is conceptually straightforward, but we do not explore it here.
Topological Ward-Takahashi Identities and the Full Conformal Algebra
We have learnt that for k ∈ Z, the theory (2.1) is able to describe Fourier modes on the x^+ interval of momentum kn for any n ∈ Z. Such a mode Φ_n can in turn be in principle constructed by dressing a local operator of the theory with an instanton operator of charge n. Thus, for general k ∈ Z, we may propose that the theory is a six-dimensional conformal field theory on the orbifold R^{1,5}/Z_k, where the Z_k quotient acts on the x^+ interval as x^+ → x^+ + 2π/k. This leads to a rather curious orbifold in terms of the familiar six-dimensional coordinates x̂. In general, such an orbifold breaks the full conformal algebra so(2, 6) precisely down to h = su(1, 3) ⊕ u(1). We have shown that this is indeed the symmetry obeyed by the theory. Then something special must happen at k = 1, where there is no orbifold. In particular, in this case the full six-dimensional conformal symmetry is not broken, and so the theory should realise the full so(2, 6). We learn that a necessary condition for the theory (2.1) to describe a six-dimensional conformal field theory is that its symmetries are further enhanced at strong coupling (k = 1).
Let us briefly note that this enhancement is in some sense analogous to that in the ABJM theory for M2-branes, where it is the R-symmetry (rather than the spacetime symmetry) that is enhanced from su(4) ⊕ u(1) to so(8) at strong coupling. Further details of this analogy can be found in [17, 22]. So let us fix k = 1. Our aim now is to explore how this symmetry enhancement should happen. Once again, we will study the theory through the lens of correlation functions, and in particular the partial differential equations they satisfy. But first, let us make some comments on the use of five-dimensional operators Φ_n, which fall into representations of h ⊂ so(2, 6), as building blocks in the realisation of so(2, 6) representations.
One can consider the action of so(2, 6) on a six-dimensional operator Ô, which is easily translated to an action on the Weyl-rescaled O. This operator is then in turn decomposed as
O(x^+, x^-, x^i) = \sum_{n\in\mathbb{Z}} e^{-in x^+}\, \Phi_n(x^-, x^i) \quad (4.8)
for Fourier modes Φ_n satisfying \tilde{\delta}_{P_+}Φ_n = in Φ_n. The fact that the Fourier expansion breaks so(2, 6) → h = su(1, 3) ⊕ u(1) is precisely the statement that it is only the subalgebra h of infinitesimal transformations of O that act on each Fourier level independently; recall, h is simply the maximal subalgebra that commutes with P_+. This is in contrast to the rest of so(2, 6), which generically scrambles up the Fourier modes. In other words, the Φ_n fall into representations of h. As previously mentioned, when Ô is an so(2, 6) scalar primary of scaling dimension \hat{\Delta}, one can show that under h, the Φ_n must transform precisely as scalar primaries as defined with respect to h (i.e. r[B] = 0, r[C_α] = 0), with Lifshitz scaling dimension ∆ under T given simply by the original six-dimensional scaling dimension, ∆ = \hat{\Delta}, and of course P_+ momentum n.
We then turn to the question of how to identify the modes Φ n of a given operator O we wish to reconstruct. The first task here is to deduce the required h representation of Φ n given the six-dimensional quantum numbers of O. This is conceptually straightforward, and in some cases practically immediate too; for instance, a scalar primary in six-dimensions is built of five-dimensional scalar primaries of h.
With this done, we must now look at the space of operators of the form Φ n = I {q} n Φ, where Φ is some composite of the fields ϕ, and classify them by their h representations. To do so at general n is an open problem, which in a supersymmetric theory such as ours amounts to determining which supermultiplet the instanton operator I {q} n lies in. As previously mentioned, this has been solved in Lorentzian SU(N c ) theories only for n = ±1 [20], while no results currently exist for non-Lorentzian theories such as ours. Thus, tackling this issue is an important next step in our programme.
Supposing that we have successfully identified a class of five-dimensional operators in the correct h representations, we would now like to derive some criteria by which we can correctly organise them into a particular six-dimensional local operator. Thus, in the rest of this Section we will derive further conditions that must be satisfied by the correlators of the Φ n , which we expect to be crucial in identifying them precisely.
Note, our focus for the remainder of this Section is on the reconstruction of local operators in six dimensions. It is clear however that one can also in principle reconstruct extended six-dimensional operators, including those extended along the x^+ interval. Such constructions, which will in particular be essential to the construction of defect operators in six-dimensional CFTs, are left to future work.
So suppose that we think we have correctly identified in our theory the Fourier modes Φ^{(a)}_n of some six-dimensional scalar primaries Ô^{(a)} of scaling dimension \hat{\Delta}_a under the six-dimensional dilatation. These Φ^{(a)}_n are then necessarily scalars of h of Lifshitz dimension ∆_a = \hat{\Delta}_a, and so in turn will generically be some linear combination of dimension ∆_a scalar primaries of the form I^{\{q\}}_n Φ. We now want some purely five-dimensional criteria by which we can check whether we've chosen the Φ^{(a)}_n correctly. We know that the correlation functions ⟨Ô^{(1)} Ô^{(2)} … Ô^{(N)}⟩ satisfy the Ward-Takahashi identities of so(2, 6), while a priori we have only so far shown that the five-dimensional correlators ⟨Φ^{(1)}_{n_1} Φ^{(2)}_{n_2} … Φ^{(N)}_{n_N}⟩ satisfy those corresponding to the subalgebra h. The additional criteria that the Φ^{(a)}_n must satisfy is that the six-dimensional correlators they resum to satisfy the full set of so(2, 6) identities. Let's see how this works in practice.
Suppose then we take the six-dimensional Ward-Takahashi identity for some G ∈ so(2, 6) that lies outside h. It is then conceptually straightforward to expand the Ô^{(a)} in terms of the Φ_n. However, in contrast to the identities (3.39) corresponding to elements of h, this new identity will necessarily be a partial differential equation relating correlators with different Fourier mode numbers. From the perspective of the five-dimensional theory, these are non-trivial relations between correlators calculated in distinct topological sectors, and thus we refer to such identities as topological Ward-Takahashi identities (TWTIs).
It is instructive to look at a particular example of some G ∈ so(2, 6) that lies outside h. We consider the N-point function ⟨Ô^{(1)} Ô^{(2)} … Ô^{(N)}⟩. We know that this N-point function satisfies the Ward-Takahashi identities of so(2, 6), spanned by {P^{6d}_μ, M^{6d}_{μν}, D^{6d}, K^{6d}_μ}, with μ ∈ {+, -, i}. Let us focus on the six-dimensional dilatation D^{6d}, which is not preserved by the Fourier decomposition; in other words, D^{6d} ∉ h. Our aim is to understand how the invariance of the six-dimensional N-point function under D^{6d} manifests in the correlators of the Φ^{(a)}_n. We now denote by δ_D Ô^{(a)} the infinitesimal variation of Ô^{(a)} under D^{6d}. Explicitly, we have
\delta_D \hat{O}^{(a)}(\hat{x}) = -D_\partial \hat{O}^{(a)}(\hat{x}) - \Delta_a\, \hat{O}^{(a)}(\hat{x})\,, \quad (4.9)
in terms of the vector field
D_\partial = \hat{x}^+\hat{\partial}_+ + \hat{x}^-\hat{\partial}_- + \hat{x}^i\hat{\partial}_i = \sin(x^+)\,\partial_+ + \left(x^- - \tfrac{1}{4}\sin(x^+)\,|\vec{x}|^2\right)\partial_- + \tfrac{1}{2}\left(\big(1 + \cos(x^+)\big)\, x^i + \sin(x^+)\,\Omega_{ij}x^j\right)\partial_i\,. \quad (4.10)
The variation of the Fourier modes Φ^{(a)}_n is then defined through the decomposition
\delta_D \hat{O}^{(a)} = \cos^{\Delta_a}\!\big(\tfrac{x^+}{2}\big) \sum_{n\in\mathbb{Z}} e^{-in x^+}\, \delta_D \Phi^{(a)}_n\,. \quad (4.11)
Explicitly, we find
\delta_D \Phi^{(a)}_n = -\tfrac{1}{2}\tilde{\delta}_T \Phi^{(a)}_n - D_- \Phi^{(a)}_{n-1} - D_+ \Phi^{(a)}_{n+1}\,, \quad (4.12)
where we define
D_+ \Phi^{(a)}_n = -\left(\tfrac{1}{2}n + \tfrac{1}{4}\Delta_a + \tfrac{i}{8}|\vec{x}|^2\,\partial_- + \tfrac{1}{4}x^i\partial_i - \tfrac{i}{4}\Omega_{ij}x^j\partial_i\right)\Phi^{(a)}_n\,, \qquad D_- \Phi^{(a)}_n = +\left(\tfrac{1}{2}n + \tfrac{1}{4}\Delta_a - \tfrac{i}{8}|\vec{x}|^2\,\partial_- + \tfrac{1}{4}x^i\partial_i + \tfrac{i}{4}\Omega_{ij}x^j\partial_i\right)\Phi^{(a)}_n\,, \quad (4.13)
while we can read off \tilde{\delta}_T \Phi^{(a)}_n = \delta_T \Phi^{(a)}_n from (3.43), as
\tilde{\delta}_T \Phi^{(a)}_n = -(T)_\partial \Phi^{(a)}_n - \Delta_a\, \Phi^{(a)}_n\,. \quad (4.14)
As expected, we see that variation under D^{6d} ∉ h mixes the Fourier modes of Ô^{(a)}. So now let us consider the Ward-Takahashi identity for D^{6d}, taking the form \sum_a ⟨Ô^{(1)} … δ_D Ô^{(a)} … Ô^{(N)}⟩ = 0. Then, expanding in modes and further using the fact that ⟨Φ^{(1)}_{n_1} … Φ^{(N)}_{n_N}⟩ is non-vanishing only for n_1 + ⋯ + n_N = 0, we find that this Ward-Takahashi identity for the Ô^{(a)} is satisfied if and only if we have all of
\sum_{a=1}^{N} \langle \Phi^{(1)}_{n_1}(x_1)\ldots\tilde{\delta}_T\Phi^{(a)}_{n_a}(x_a)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle = 0\,, \quad (4.15)
\sum_{a=1}^{N} \langle \Phi^{(1)}_{n_1}(x_1)\ldots D_-\Phi^{(a)}_{n_a-1}(x_a)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle = 0\,, \quad (4.16)
\sum_{a=1}^{N} \langle \Phi^{(1)}_{n_1}(x_1)\ldots D_+\Phi^{(a)}_{n_a+1}(x_a)\ldots\Phi^{(N)}_{n_N}(x_N)\rangle = 0\,, \quad (4.17)
for all n_1, …, n_N ∈ Z. Note, the first equation is trivially satisfied as a result of the P_+ Ward-Takahashi identity whenever n_1 + ⋯ + n_N ≠ 0, while the following two are similarly trivially satisfied whenever n_1 + ⋯ + n_N ≠ ±1, respectively. Now, (4.15) is simply the Ward-Takahashi identity for T ∈ h, which is satisfied by correlators of the theory by virtue of the symmetries of the action, as we saw in Section 3. In contrast, (4.16) and (4.17) are new. They are our first explicit examples of TWTIs, as they constitute a non-trivial relationship between correlation functions computed in different topological sectors of the theory. Thus, in order to verify the symmetry enhancement SU(1, 3) × U(1) → SO(2, 6) at strong coupling, one must demonstrate that (4.16) and (4.17), along with all other TWTIs arising from each of the other elements of so(2, 6) outside h, are satisfied.
As we have seen, the two equations (4.16) and (4.17) descend from the D 6d Ward-Takahashi identity in six dimensions. It is hopefully evident that one can take identical steps in order to derive further TWTIs that descend from other G ∈ so(2, 6) lying outside h, although we do not explore such other generators in detail here.
To better illustrate our construction let us investigate the implications of (4.16) and (4.17) at 2-points. Recall that the functional form of the five-dimensional 2-point function of scalar operators is entirely fixed purely by the WTIs for h to be
\langle \Phi_n(x_1)\, \Phi_{-n}(x_2)\rangle = d(\Delta, n)\, \frac{1}{(z_{12}\bar{z}_{12})^{\Delta/2}} \left(\frac{z_{12}}{\bar{z}_{12}}\right)^{n}\,, \quad (4.18)
where the Φ_n have Lifshitz scaling dimension ∆. Then, the TWTIs (4.16) and (4.17) are satisfied precisely if the coefficients d(∆, n) satisfy
\left(n + \frac{\Delta}{2}\right) d(\Delta, n) - \left(n - \frac{\Delta}{2} + 1\right) d(\Delta, n+1) = 0\,. \quad (4.19)
Let us look in particular at the case ∆ ∈ 2Z. We then find the general solution
d(\Delta, n) = C_+ \binom{n + \frac{\Delta}{2} - 1}{n - \frac{\Delta}{2}} + C_- \binom{-n + \frac{\Delta}{2} - 1}{-n - \frac{\Delta}{2}}\,, \quad (4.20)
for some constants C + , C − , where we use the convention
\binom{\alpha}{n} = \begin{cases} \dfrac{\alpha(\alpha-1)\cdots(\alpha-n+1)}{n!} & n > 0\,, \\ 1 & n = 0\,, \\ 0 & n < 0\,. \end{cases} \quad (4.21)
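This claim can be checked mechanically. The following sketch (ours, not from the paper) verifies that (4.20), with the convention (4.21), solves the recursion (4.19) for a range of even ∆ and integer n:

```python
# Exact check (ours) that d(Delta, n) from (4.20) satisfies the recursion
# (4.19), using the generalised binomial convention (4.21). We test both
# independent solutions C_+ and C_-, and a mixed case.
from fractions import Fraction

def binom(alpha, n):
    """Generalised binomial coefficient, convention (4.21)."""
    if n < 0:
        return Fraction(0)
    out = Fraction(1)
    for j in range(n):               # n = 0 gives the empty product, i.e. 1
        out *= Fraction(alpha - j, j + 1)
    return out

def d(Delta, n, Cp, Cm):
    h = Delta // 2
    return Cp * binom(n + h - 1, n - h) + Cm * binom(-n + h - 1, -n - h)

for Delta in (2, 4, 6):
    for Cp, Cm in ((1, 0), (0, 1), (3, -2)):
        for n in range(-10, 10):
            lhs = (n + Delta // 2) * d(Delta, n, Cp, Cm) \
                - (n - Delta // 2 + 1) * d(Delta, n + 1, Cp, Cm)
            assert lhs == 0, (Delta, n, lhs)
print("d(Delta, n) from (4.20) solves the recursion (4.19)")
```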
Thus, by imposing the TWTI for D^{6d}, we have determined all of the coefficients d(∆, n) up to the two free variables C_+, C_-. Note however that the six-dimensional 2-point function to which these five-dimensional correlators must resum depends on only a single overall normalisation. However, it turns out that (4.20) provides the general solution to the full set of TWTIs. To understand this, note that the additional degree of freedom arises because the five-dimensional 2-point functions (4.18) are the Fourier modes of a Lorentzian six-dimensional 2-point function. Crucially, such a 2-point function admits two different values (often parameterised by a suitable iǫ prescription [17, 29]) depending on the ordering of the two operators. Then, the first term in (4.20) corresponds to one choice of ordering, while the second corresponds to the other.
The important point here is that Ward-Takahashi identities and their solutions are blind to such an ordering, and so will produce most generally a linear combination of all possible orderings. Instead, one must rely on the path integral formulation (3.19), or else some other quantisation of the theory, to fix the ordering of operators. In particular if we choose the ordering corresponding to C − = 0 then we find
\langle \Phi_n(x_1)\, \Phi_{-n}(x_2)\rangle = C_+ \binom{n + \frac{\Delta}{2} - 1}{n - \frac{\Delta}{2}} \frac{1}{(z_{12}\bar{z}_{12})^{\Delta/2}} \left(\frac{z_{12}}{\bar{z}_{12}}\right)^{n}\,. \quad (4.22)
As a consistency check, we can then use this result to deduce the 2-point function of the six-dimensional operator Ô = cos^∆(x^+/2) O, with O written in terms of the Φ_n as in (4.8). As shown in [17], upon performing the sum over modes explicitly with suitable iǫ regularisation, we find
\langle \hat{O}(\hat{x}_1)\, \hat{O}(\hat{x}_2)\rangle = \frac{(-4)^{-\Delta/2}\, C_+}{|\hat{x}_1 - \hat{x}_2|^{2\Delta}}\,, \quad (4.23)
which is precisely the correct form for a scalar 2-point function of a six-dimensional conformal field theory.
Conclusions and Future Directions
In this paper we have explored how the path integral based on five-dimensional Lagrangians with an SU(1, 3) spacetime symmetry can be used to reconstruct correlation functions of a six-dimensional conformal field theory. In particular we showed how, by including non-trivial instanton sectors in the theory, the SU(1, 3) symmetry of the action is extended to SU(1, 3) × U(1) non-perturbatively. We also saw how instanton operators can be used to construct towers of operators which can be identified with suitable Fourier modes of six-dimensional operators. Furthermore we saw that once the instanton sectors are included we must restrict the inverse coupling constant to k ∈ \tfrac{1}{2}\mathbb{Z}, thereby removing any continuous free parameters. We also explored how imposing the additional symmetries of SO(2, 6) that are not present in the five-dimensional theory can be used to constrain the construction of six-dimensional operators.
While still far from the complete story, we have found an encouraging correspondence between, on one hand, results derived directly from six-dimensional correlators, and on the other hand the allowed topological sectors of the five-dimensional theory (2.1). These results support the claim that the path integral formulation (3.19) or some refinement thereof will be successful in computing correlators in six-dimensional conformal field theory.
There are many outstanding issues to explore but let us highlight a few. It would be interesting to extend the analysis of Section 4.2 to other SO(2, 6) Ward-Takahashi identities, and then importantly to extend the path integral methods of this paper to demonstrate that this full set is indeed satisfied at strong coupling. Furthermore our theories all enjoy significant supersymmetries which we have not exploited. In particular, the 24 real supercharges realised by the Lagrangian (2.1) can be identified with the full set of supercharges in the six-dimensional (2, 0) superconformal algebra that are preserved under the x + reduction [14], while generalisations with 12 supercharges corresponding to (1, 0) superconformal symmetry are also known [16]. Related to this are BPS bounds and superselection rules.
In addition it is clearly of interest to compute higher-point functions. In particular 4-point functions are not fixed by conformal symmetry and therefore encode non-trivial six-dimensional dynamics. Computing these should in principle be possible using the path integral methods we have described. Furthermore there has been great progress in the use of localisation techniques to calculate exact results in supersymmetric field theories in a variety of field theories. Our hope is that this can be applied to the Lagrangians discussed here to obtain concrete results for six-dimensional conformal field theories such as the enigmatic (2, 0) theory of M5-branes. In so doing we hope to open up a window into the microscopic physics of M-theory.
A su(1, 3) Field Variations
It is the norm in Lorentzian theories for the action to be written in a manifestly Lorentz-invariant way, with the transformations of fields straightforward to write down. For our theory and its su(1, 3) spacetime symmetry, we do not have this luxury. The transformations of the fields of the theory can in principle be derived by trial and error. However, there turns out to be an elegant and useful way to derive them from a diffeomorphism-invariant six-dimensional theory, to which the theory is subtly related. The full details of this construction can be found in [15]. Here we state the results in a notation more useful for this paper.
We consider a transformation generated by some G ∈ su (1, 3). Then, the components of the gauge field transform in a standard way,
\delta_G A_- = -G_\partial A_- - \partial_- G^-_\partial\, A_- - \partial_- G^i_\partial\, A_i\,, \qquad \delta_G A_i = -G_\partial A_i - \partial_i G^-_\partial\, A_- - \partial_i G^j_\partial\, A_j\,, \quad (A.1)
i.e. as (minus) the Lie derivative along the vector field G ∂ . The scalar fields X I also transform under the usual Lie derivative for scalars, except that they are also subject to a compensating Weyl rescaling for G ∈ {T, M i+ , K + }. This Weyl factor is given by
\omega := \tfrac{1}{4}\, \partial_i G^i_\partial\,, \quad (A.2)
which takes the values
G = T \;\longrightarrow\; \omega = 1\,, \qquad G = M_{i+} \;\longrightarrow\; \omega = \tfrac{1}{2}\Omega_{ij}x^j\,, \qquad G = K_+ \;\longrightarrow\; \omega = 2x^-\,, \quad (A.3)
while vanishing for the remaining generators. Then, we have
\delta_G X^I = -G_\partial X^I - 2\omega\, X^I\,. \quad (A.4)
This is indeed entirely analogous to the familiar interpretation of usual conformal field theory as a gauge fixing of a theory with both diffeomorphism and Weyl invariance. There, like here, it is a coordinated combination of a diffeomorphism and Weyl rescaling which leaves the metric invariant, and thus forms a symmetry of the gauge fixed theory. For the fermions, we find
\delta_G\Psi = -G_\partial\Psi - \tfrac{1}{2}\omega\left(5 + \Gamma_{-+}\right)\Psi + \Omega_{ij}(\partial_j\omega)\,\Gamma_+\Gamma_i\Psi + \tfrac{1}{4}\Lambda_{ij}\Gamma_{ij}\Psi\,, \quad (A.5)
where
\Lambda_{ij} = \partial_j G^i_\partial - \omega\,\delta_{ij} = -\Lambda_{ji}\,. \quad (A.6)
Explicitly, we find that Λ ij = 0 for G ∈ {P − , P i , T }, while for the remaining generators,
G = B \;\longrightarrow\; \Lambda_{ij} = \tfrac{1}{2}\Omega_{ij}\,, \qquad G = C^\alpha \;\longrightarrow\; \Lambda_{ij} = -\tfrac{1}{2}\eta^\alpha_{ij}\,,
G = M_{i+} \;\longrightarrow\; \Lambda_{jk} = \tfrac{1}{2}\left(\Omega_{jk}x^i + \Omega_{ik}x^j - \Omega_{ij}x^k + \delta_{ik}\Omega_{jl}x^l - \delta_{ij}\Omega_{kl}x^l\right)\,,
G = K_+ \;\longrightarrow\; \Lambda_{ij} = \tfrac{1}{2}\Omega_{ij}\, x^k x^k + \Omega_{ik}x^k x^j - \Omega_{jk}x^k x^i\,. \quad (A.7)
Then, when acting on the fields A, X I , Ψ, we find that the {δ G } with G ∈ B = {P − , P i , B, C α , T, M i+ , K + } are precisely the generators of a representation of su(1, 3).
We finally come to the Lagrange multiplier field G ij , which arises in a more complicated fashion from the six-dimensional proxy theory. We find
\delta_G G_{ij} = -k^\alpha\partial_\alpha G_{ij} - 4\omega\, G_{ij} - \Lambda_{mi}G_{mj} - \Lambda_{mj}G_{mi} + 2\left(\Omega_{im}(\partial_m\omega)F_{-j} - \Omega_{jm}(\partial_m\omega)F_{-i} + \varepsilon_{ijkl}\,\Omega_{km}(\partial_m\omega)F_{-l}\right)\,. \quad (A.8)
We note in particular that, in contrast to the other fields, the algebra only closes on G_{ij} on the constraint surface F^+ = 0. In particular, for each G_1, G_2 ∈ B we have [δ_{G_1}, δ_{G_2}]G_{ij} = δ_{[G_1, G_2]}G_{ij} except for
[\delta_{M_{i+}}, \delta_{M_{j+}}]G_{kl} = \delta_{[M_{i+}, M_{j+}]}G_{kl} + \tilde{\delta}_{ij}G_{kl}\,, \qquad [\delta_{M_{i+}}, \delta_{K_+}]G_{jk} = \delta_{[M_{i+}, K_+]}G_{jk} + 2x^l\,\tilde{\delta}_{il}G_{jk}\,, \quad (A.9)
where
\tilde{\delta}_{ij}G_{kl} = 2\left(\delta_{ik}F^+_{jl} - \delta_{il}F^+_{jk} - \delta_{jk}F^+_{il} + \delta_{jl}F^+_{ik}\right)\,. \quad (A.10)
A discussion of the origin of this extension to the algebra can be found in [22]. Note that we have F_{kl}\,\tilde{\delta}_{ij}G_{kl} = 0, which ensures that \tilde{\delta} is a symmetry of the Lagrangian. Indeed, since G_{ij} appears only algebraically in L, we have local symmetries ǫ(x)\tilde{\delta} for any function ǫ(x), and thus we should think of \tilde{\delta} as generating an auxiliary gauge symmetry which becomes trivial on the constraint surface.
Finally, note that under Lifshitz scalings as generated by T , we have
X^I(x^-, x^i) \;\longrightarrow\; \omega^{-2}\, X^I(\omega^{-2}x^-,\, \omega^{-1}x^i)\,, \qquad A_-(x^-, x^i) \;\longrightarrow\; \omega^{-2}\, A_-(\omega^{-2}x^-,\, \omega^{-1}x^i)\,, \qquad A_i(x^-, x^i) \;\longrightarrow\; \omega^{-1}\, A_i(\omega^{-2}x^-,\, \omega^{-1}x^i)\,,
G_{ij}(x^-, x^i) \;\longrightarrow\; \omega^{-4}\, G_{ij}(\omega^{-2}x^-,\, \omega^{-1}x^i)\,, \qquad \Psi_+(x^-, x^i) \;\longrightarrow\; \omega^{-3}\, \Psi_+(\omega^{-2}x^-,\, \omega^{-1}x^i)\,, \qquad \Psi_-(x^-, x^i) \;\longrightarrow\; \omega^{-2}\, \Psi_-(\omega^{-2}x^-,\, \omega^{-1}x^i)\,, \quad (A.11)
where we denote by Ψ_± the components of Ψ with definite chirality under Γ_{-+} = Γ_{05}, so that Γ_{-+}Ψ_± = ±Ψ_±.
B Noether Currents
The Noether currents J G for G ∈ B were first derived in [15], albeit without appreciation for δ-function subtleties due to instanton insertions. We state them here, in a way more consistent with the notation used in this paper. Note that we use J G to denote a vector field and 1-form interchangeably, as the musical isomorphism with respect to the Euclidean metric on R 5 that relates them is trivial. Our expressions are written in terms of the Lagrangian,
L = \frac{k}{4\pi^2}\,\mathrm{tr}\Big( \tfrac{1}{2}F_{-i}F_{-i} - \tfrac{1}{2}\nabla_i X^I \nabla_i X^I + \tfrac{1}{2}F_{ij}G_{ij} - \tfrac{i}{2}\bar{\Psi}\Gamma_+ D_-\Psi + \tfrac{i}{2}\bar{\Psi}\Gamma_i\nabla_i\Psi - \tfrac{1}{2}\bar{\Psi}\Gamma_+\Gamma^I[X^I, \Psi] \Big)\,. \quad (B.1)
A more comprehensive review can be found in [22].
In our conventions, δ_G acts on fields only, so that for instance δ_G(x^i ∂_i ϕ(x)) = x^i ∂_i (δ_G ϕ(x)).
This is slightly non-trivial, since the algebra does not close off-shell on G_{ij}. However, the corresponding anomalous extension to the symmetry algebra (A.9) is parameterised by an additional variation \tilde{\delta}_{ij}, which explicitly annihilates the Lagrangian. Thus, the algebra does close on L off-shell.
Here and throughout, we take ⋆ to denote the Hodge star with respect to the Euclidean metric on R^5, which satisfies ⋆² = 1 on all forms, and (⋆d⋆ω)_{α_1…α_{p-1}} = ∂^β ω_{α_1…α_{p-1}β} for a generic p-form ω.
Note that generically some moduli that are physical on R^4 can become gauge redundancies on S^4: for instance, the three gauge orientation moduli of the single SU(2) instanton.
A proof that this list is exhaustive can be found in [22].
Note, the sign here is consistent with the convention used to define the \tilde{\delta}_G, since for instance we had \tilde{\delta}_{P_-}O(x) = -∂_- O(x).
Note, in general the operator Ô constructed this way would not have a definite eigenvalue under the six-dimensional dilatation. However, this can be guaranteed by ensuring that the SU(1, 3) representations of the Φ_n differ only by their charge under the central element P_+.
The binomial expansion from the second to third lines of (4.6) only holds for ∆ ∈ Z.
The five-dimensional Lifshitz scaling T ∈ h is found as T = D^{6d} - M^{6d}_{+-}.
[1] F. Bastianelli, S. Frolov and A. A. Tseytlin, Three point correlators of stress tensors in maximally supersymmetric conformal theories in D = 3 and D = 6, Nucl. Phys. B 578 (2000) 139 [hep-th/9911135].
[2] F. Bastianelli and R. Zucchini, Three point functions for a class of chiral operators in maximally supersymmetric CFT at large N, Nucl. Phys. B 574 (2000) 107 [hep-th/9909179].
[3] B. Eden, S. Ferrara and E. Sokatchev, (2,0) superconformal OPEs in D = 6, selection rules and non-renormalization theorems, JHEP 11 (2001) 020 [hep-th/0107084].
[4] G. Arutyunov and E. Sokatchev, Implications of superconformal symmetry for interacting (2,0) tensor multiplets, Nucl. Phys. B 635 (2002) 3 [hep-th/0201145].
[5] P. J. Heslop, Aspects of superconformal field theories in six dimensions, JHEP 07 (2004) 056 [hep-th/0405245].
[6] C. Beem, M. Lemos, L. Rastelli and B. C. van Rees, The (2,0) superconformal bootstrap, Phys. Rev. D 93 (2016) 025016 [1507.05637].
[7] L. Rastelli and X. Zhou, Holographic four-point functions in the (2,0) theory, JHEP 06 (2018) 087 [1712.02788].
[8] P. Heslop and A. E. Lipstein, M-theory beyond the supergravity approximation, JHEP 02 (2018) 004 [1712.08570].
[9] T. Abl, P. Heslop and A. E. Lipstein, Recursion relations for anomalous dimensions in the 6d (2,0) theory, JHEP 04 (2019) 038 [1902.00463].
[10] L. F. Alday, S. M. Chester and H. Raj, 6d (2,0) and M-theory at 1-loop, JHEP 01 (2021) 133 [2005.07175].
[11] C. Beem, L. Rastelli and B. C. van Rees, W symmetry in six dimensions, JHEP 05 (2015) 017 [1404.1079].
[12] S. M. Chester and E. Perlmutter, M-theory reconstruction from (2,0) CFT and the chiral algebra conjecture, JHEP 08 (2018) 116 [1805.00892].
[13] O. Aharony, O. Bergman, D. L. Jafferis and J. Maldacena, N=6 superconformal Chern-Simons-matter theories, M2-branes and their gravity duals, JHEP 10 (2008) 091 [0806.1218].
[14] N. Lambert, A. Lipstein and P. Richmond, Non-Lorentzian M5-brane theories from holography, JHEP 08 (2019) 060 [1904.07547].
[15] N. Lambert, A. Lipstein, R. Mouland and P. Richmond, Bosonic symmetries of (2,0) DLCQ field theories, JHEP 01 (2020) 166 [1912.02638].
[16] N. Lambert and T. Orchard, Non-Lorentzian avatars of (1,0) theories, 2011.06968.
[17] N. Lambert, A. Lipstein, R. Mouland and P. Richmond, Five-dimensional non-Lorentzian conformal field theories and their relation to six-dimensions, 2012.00626.
[18] N. Lambert, A. Lipstein, R. Mouland and P. Richmond, Instanton worldlines in five-dimensional Ω-deformed gauge theory, 2105.02008.
[19] N. Lambert, C. Papageorgakis and M. Schmidt-Sommerfeld, Instanton operators in five-dimensional gauge theories, JHEP 03 (2015) 019 [1412.2789].
[20] Y. Tachikawa, Instanton operators and symmetry enhancement in 5d supersymmetric gauge theories, PTEP 2015 (2015) 043B06 [1501.01031].
[21] O. Bergman and D. Rodriguez-Gomez, A note on instanton operators, instanton particles, and supersymmetry, JHEP 05 (2016) 068 [1601.00752].
[22] R. Mouland, Non-Lorentzian supersymmetric models and M-theory branes, Ph.D. thesis, 2021 [2109.04416].
[23] C. N. Pope, A. Sadrzadeh and S. R. Scuro, Timelike Hopf duality and type IIA* string solutions, Class. Quant. Grav. 17 (2000) 623 [hep-th/9905161].
[24] N. Lambert and R. Mouland, Non-Lorentzian RG flows and supersymmetry, JHEP 06 (2019) 130 [1904.05071].
[25] N. Lambert, C. Papageorgakis and M. Schmidt-Sommerfeld, M5-branes, D4-branes and quantum 5D super-Yang-Mills, JHEP 01 (2011) 083 [1012.2882].
[26] M. R. Douglas, On D=5 super Yang-Mills theory and (2,0) theory, JHEP 02 (2011) 011 [1012.2880].
[27] M. F. Atiyah, N. J. Hitchin, V. G. Drinfeld and Y. I. Manin, Construction of instantons, Phys. Lett. A 65 (1978) 185.
[28] V. Borokhov, A. Kapustin and X.-k. Wu, Topological disorder operators in three-dimensional conformal field theory, JHEP 11 (2002) 049 [hep-th/0206054].
[29] T. Hartman, S. Jain and S. Kundu, Causality constraints in conformal field theory, JHEP 05 (2016) 099 [1509.00014].
| [] |
[
"On-chip magnetic cooling of a nanoelectronic device OPEN",
"On-chip magnetic cooling of a nanoelectronic device OPEN"
] | [
"D I Bradley \nDepartment of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK\n",
"A M Guénault \nDepartment of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK\n",
"D Gunnarsson \nVTT Technical Research Centre of Finland Ltd\nP.O. Box 100002044EspooVTTFinland\n",
"R P Haley \nDepartment of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK\n",
"S Holt \nDepartment of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK\n",
"A T Jones \nDepartment of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK\n",
"Yu A Pashkin \nDepartment of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK\n",
"J Penttilä \nAivon Oy\nValimotie 13A00380HelsinkiFinland\n",
"J R Prance j.prance@lancaster.ac.uk \nDepartment of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK\n",
"M Prunnila \nVTT Technical Research Centre of Finland Ltd\nP.O. Box 100002044EspooVTTFinland\n",
"& L Roschier \nAivon Oy\nValimotie 13A00380HelsinkiFinland\n"
] | [
"Department of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK",
"Department of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK",
"VTT Technical Research Centre of Finland Ltd\nP.O. Box 100002044EspooVTTFinland",
"Department of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK",
"Department of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK",
"Department of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK",
"Department of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK",
"Aivon Oy\nValimotie 13A00380HelsinkiFinland",
"Department of Physics\nLancaster University\nLA1 4YBBailrigg, LancasterUK",
"VTT Technical Research Centre of Finland Ltd\nP.O. Box 100002044EspooVTTFinland",
"Aivon Oy\nValimotie 13A00380HelsinkiFinland"
] | [] | We demonstrate significant cooling of electrons in a nanostructure below 10 mK by demagnetisation of thin-film copper on a silicon chip. Our approach overcomes the typical bottleneck of weak electronphonon scattering by coupling the electrons directly to a bath of refrigerated nuclei, rather than cooling via phonons in the host lattice. Consequently, weak electron-phonon scattering becomes an advantage. It allows the electrons to be cooled for an experimentally useful period of time to temperatures colder than the dilution refrigerator platform, the incoming electrical connections, and the host lattice. There are efforts worldwide to reach sub-millikelvin electron temperatures in nanostructures to study coherent electronic phenomena and improve the operation of nanoelectronic devices. On-chip magnetic cooling is a promising approach to meet this challenge. The method can be used to reach low, local electron temperatures in other nanostructures, obviating the need to adapt traditional, large demagnetisation stages. We demonstrate the technique by applying it to a nanoelectronic primary thermometer that measures its internal electron temperature. Using an optimised demagnetisation process, we demonstrate cooling of the on-chip electrons from 9 mK to below 5 mK for over 1000 seconds. | 10.1038/srep45566 | null | 3,186,857 | 1611.02483 | f8ac72b1584ffed9b8be4b9782731c4fdf1f74da |
On-chip magnetic cooling of a nanoelectronic device
Published: 04 April 2017
D I Bradley
Department of Physics
Lancaster University
LA1 4YBBailrigg, LancasterUK
A M Guénault
Department of Physics
Lancaster University
LA1 4YBBailrigg, LancasterUK
D Gunnarsson
VTT Technical Research Centre of Finland Ltd
P.O. Box 100002044EspooVTTFinland
R P Haley
Department of Physics
Lancaster University
LA1 4YBBailrigg, LancasterUK
S Holt
Department of Physics
Lancaster University
LA1 4YBBailrigg, LancasterUK
A T Jones
Department of Physics
Lancaster University
LA1 4YBBailrigg, LancasterUK
Yu A Pashkin
Department of Physics
Lancaster University
LA1 4YBBailrigg, LancasterUK
J Penttilä
Aivon Oy
Valimotie 13A00380HelsinkiFinland
J R Prance j.prance@lancaster.ac.uk
Department of Physics
Lancaster University
LA1 4YBBailrigg, LancasterUK
M Prunnila
VTT Technical Research Centre of Finland Ltd
P.O. Box 100002044EspooVTTFinland
& L Roschier
Aivon Oy
Valimotie 13A00380HelsinkiFinland
On-chip magnetic cooling of a nanoelectronic device OPEN
Published: 04 April 2017. DOI: 10.1038/srep45566. Received: 07 December 2016; accepted: 27 February 2017. Scientific Reports 7:45566, www.nature.com/scientificreports. † Present address: BlueFors Cryogenics Oy, Arinatie 10, 00370 Helsinki, Finland. Correspondence and requests for materials should be addressed to J.R.P.
We demonstrate significant cooling of electrons in a nanostructure below 10 mK by demagnetisation of thin-film copper on a silicon chip. Our approach overcomes the typical bottleneck of weak electronphonon scattering by coupling the electrons directly to a bath of refrigerated nuclei, rather than cooling via phonons in the host lattice. Consequently, weak electron-phonon scattering becomes an advantage. It allows the electrons to be cooled for an experimentally useful period of time to temperatures colder than the dilution refrigerator platform, the incoming electrical connections, and the host lattice. There are efforts worldwide to reach sub-millikelvin electron temperatures in nanostructures to study coherent electronic phenomena and improve the operation of nanoelectronic devices. On-chip magnetic cooling is a promising approach to meet this challenge. The method can be used to reach low, local electron temperatures in other nanostructures, obviating the need to adapt traditional, large demagnetisation stages. We demonstrate the technique by applying it to a nanoelectronic primary thermometer that measures its internal electron temperature. Using an optimised demagnetisation process, we demonstrate cooling of the on-chip electrons from 9 mK to below 5 mK for over 1000 seconds.
Nuclear demagnetisation refrigeration is a technique that has been used for several decades to access the μK regime in bulk materials 1 . In practice it requires an elaborate demagnetisation stage to be attached to a dilution refrigerator 2 , which is typically cooled by liquid cryogens. The technique has recently been transferred to cryogen-free dilution refrigerators to cool both PrNi 5 3 and Cu 4 refrigerants, requiring additional efforts to compensate for the vibrations present in cryogen-free systems. Here we perform nuclear demagnetisation refrigeration directly on the chip of an aluminium nanoelectronic circuit plated with a thick copper film. This approach removes the requirement for a separate demagnetisation stage and does not require additional vibration isolation. In reaching for lower electron temperatures, we are motivated by the potential improvement in the operation of nanoelectronic devices, for example ensuring charge pumps 5 operate with a minimized leakage and back-tunnelling rate 6 , improving the signal to noise ratio of single-electron transistor based sensors 7 and reducing the charge noise in qubits 8 .
The nanoelectronic device used in this work, a Coulomb Blockade Thermometer (CBT), was chosen because it provides primary thermometry of the refrigerated electrons. A CBT consists of an array of metallic islands connected by tunnel junctions and operates in the weak Coulomb blockade regime where the charging energy of the islands E C ≪ k B T. The operating temperature can be tuned by selecting the total island capacitance C ∑ 9 . CBTs have been designed to work at temperatures as high as 60 K, to assist with the redefinition of the Kelvin based on fundamental physical constants 10 . Devices designed for ultralow temperature operation have been tested down to record low temperatures reaching 3.7 mK 11 . Below 10 mK it becomes challenging to cool the electrons in the CBT as they become increasingly decoupled from the cold lattice, and to overcome this limitation we demonstrate an on-chip cooling method based on nuclear demagnetization of the CBT islands.
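The operating window of such a sensor follows directly from the charging energy. Below is a minimal sketch (not from the paper) that estimates E_C for the calibrated island capacitance; the expression E_C = [(N - 1)/N] e^2/C_Sigma is the standard form quoted for uniform N-junction CBT arrays and should be treated as an assumption here.

from scipy.constants import e, k  # elementary charge, Boltzmann constant

def charging_energy(c_sigma, n_junctions=33):
    # Standard single-island charging energy for a uniform N-junction CBT row
    # (assumed form, following the CBT literature cited in the text).
    return (n_junctions - 1) / n_junctions * e**2 / c_sigma

C_sigma = 192e-15                 # ~192 fF, the calibrated value reported below
E_C = charging_energy(C_sigma)
# E_C/k_B sets the natural temperature scale of the sensor; the weak-blockade
# condition is E_C << k_B T.
print(f"E_C/k_B = {E_C / k * 1e3:.1f} mK")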
On-chip electron refrigeration has a 20 year history starting from the early works on normal metal-insulatorsuperconductor 12,13 and semiconductor-superconductor junction coolers 14 . Continuous improvements in the thermal isolation and advances in the junction technology [15][16][17] are taking these technologies towards practical on-chip cooling platforms 18 . In addition to the junction coolers, semiconductor quantum dots 19 and demagnetisation of Mn-doped aluminium 20 have been investigated for on-chip cooling. In the technique we describe here, the weak electron-phonon coupling in the CBT islands is used as an intrinsic heat switch to pre-cool the electrons. Once the electron-phonon coupling becomes too weak for effective external cooling, we then use demagnetisation to cool the chip from the inside-out while using the CBT's reliable thermometry to monitor the process. We use a proven low-temperature refrigerant (copper) to extend the range of on-chip refrigeration to significantly lower temperatures than the operating range of most commercial dilution refrigerators.
Results
Structure and heatsinking of devices. The CBT device, as shown in Fig. 1(a), consists of 20 electrically parallel rows of 33 tunnel junctions in series. This arrangement creates a 20 × 32 array of metallic islands which increases the accuracy of temperature measurements 21 while also providing a conveniently measurable resistance. The Al/Al 2 O 3 junctions, each with resistance R T , were fabricated using an ex situ tunnel junction process 22 (see materials and methods for details). The islands between each pair of junctions are electroplated with copper and have dimensions 39 μm × 206 μm × 6.14 μm (W × L × T), as shown in Fig. 1(b). To suppress superconductivity of aluminium in the device, a magnetic field of at least 0.1 T is applied to the CBT during all the measurements presented here. The 2.3 mm × 6.5 mm rectangular chip of the device is contained in a silver package, as shown in Fig. 1(c). The package is designed to maximize the thermal conductance between the CBT and the mixing chamber plate of the dry dilution refrigerator (Bluefors Cryogenics LD250). The design also aims to reduce eddy current heating during changes of magnetic field by minimizing the cross-sectional area of metal perpendicular to the field. The package contains electronic filters to provide a quiet electromagnetic environment for the CBT (see materials and methods for more details).
Electron thermometry in constant magnetic fields. In all the experiments described below, the CBT is measured in a four terminal configuration driven by a custom voltage-controlled current source. Two voltage sense wires from the CBT connect to a differential preamplifier and then to a lock-in amplifier and voltmeter. The current source is used to apply an AC (around 80 Hz) excitation of 40 pA and a controllable DC bias current. The lock-in and voltmeter measure the differential conductance of the CBT and the DC bias voltage, respectively.
The electron temperature T e in the CBT can be determined in two different ways. In the primary mode, the differential conductance of the CBT is measured for a range of DC bias values and a dip in conductance is observed around zero bias. The dip in conductance becomes deeper and sharper at lower temperatures. In the well-thermalised regime, the full width at half maximum V 1/2 of the dip is related to temperature by V 1/2 ≈ 5.439Nk B T/e, where N is the number of tunnel junctions in series (here N = 33) 9 . The CBT can also be operated in secondary mode 11 , whereby only the conductance at zero bias is measured. This is much faster than measuring the full conductance curve and the electron temperature can be determined from a pre-calculated lookup table of zero bias conductances. The lookup table is calculated from the sensor parameters C ∑ and R T which in turn can be found from a self-calibration procedure described below. The self-calibration and the production of the lookup table were performed using code based on the free, open source python library pyCBT.
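For a quick feel of the primary mode, the sketch below (assuming only the first-order relation quoted above) converts a measured dip full width V 1/2 into an electron temperature and back.

from scipy.constants import e, k

def temperature_from_width(v_half, n_junctions=33):
    # Invert V_1/2 ~ 5.439 N k_B T / e (first-order, well-thermalised regime).
    return e * v_half / (5.439 * n_junctions * k)

def width_from_temperature(t_e, n_junctions=33):
    return 5.439 * n_junctions * k * t_e / e

# Example: a 0.18 mV wide dip on a 33-junction row corresponds to roughly 12 mK.
print(f"T_e ~ {temperature_from_width(0.18e-3) * 1e3:.1f} mK")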
The CBT is self-calibrated at magnetic fields of 0.1 T and 5.0 T by measuring conductance curves at three different temperatures for DC biases up to ± 1 mV. By fitting these curves simultaneously to a full tunnelling model of the CBT conductance, the values of the total capacitance C ∑ and tunnel resistance R T can be found 11 as well as the values of the three temperatures. This fitting is performed for the three highest temperature curves in Fig. 2(a) and (b), and we find that the parameters C ∑ and R T are independent of field, within the experimental error, between 5.0 T and 0.1 T.
The exact temperatures of the conductance curves are not important for the purposes of calibration, it is only required that the measurements are made at sufficiently different temperatures for C ∑ and R T to be fitted with a low degree of uncertainty. We confirm that the fits are good by comparing the extracted values of electron temperature to the temperature of the mixing chamber T MXC , which was monitored using a RuO 2 thermometer, supplied and calibrated by Bluefors Cryogenics (see Fig. 2).
The fitting of the three curves at 0.1 T, shown in Fig. 2(a), yields sensor parameters C ∑ = 192.4 ± 0.9 fF and R T = 24.99 ± 0.06 kΩ. At 5.0 T, in Fig. 2(b), the parameters are C ∑ = 191.9 ± 0.8 fF and R T = 25.10 ± 0.06 kΩ. The uncertainties in these parameters arise from the confidence in the fitting, the experimental random error and the drift in gain of the preamplifier and lock-in input amplifier. The averages of these sensor parameters are then used to calculate the lookup table required for secondary mode operation. The uncertainty in the measured parameters corresponds to an uncertainty in electron temperature of less than 1% at 5 mK.
Following the 0.1 T calibration the CBT was allowed to thermalise at base temperature (T MXC = 7.1 mK) for around 5 hours to account for the long thermalisation times observed previously for a CBT in vacuum 11 . A further conductance curve was then measured (see Fig. 2(a)) and fitted to find the base electron temperature of 8.1 mK.
Demagnetisation cooling.
To magnetise and demagnetise the on-chip islands, the CBT package is located inside the bore of a superconducting solenoid with a maximum field of 5 T. For all demagnetisation experiments, the magnetic field is initially increased to 5 T, during which the heat of magnetisation and eddy current heating raise the CBT electron temperature. The field is then held constant until the fridge and electrons in the CBT are thermalised, taking approximately 14 hours. Demagnetisation is then performed by sweeping the magnetic field down to the chosen value (not less than 0.1 T) while measuring the CBT in secondary mode approximately every six seconds. Figure 3(a) shows the CBT conductance during a demagnetisation from 5.0 T to 0.1 T at a rate of 2.5 mT/s. The conductance at zero bias is reduced by around 15% during the magnetic field sweep, consistent with the electron temperature decreasing and hence the conductance curve peak becoming sharper. To verify this interpretation, the change in CBT conductance during demagnetisation was measured at different DC bias values, as shown in Fig. 3(a). The applied DC bias was set to 0.01 mV, 0.09 mV and 0.96 mV which correspond to the centre of the conductance dip, the half maximum point of the conductance dip (when measured prior to demagnetisation) and the asymptotic conductance, respectively. The conductance curves at the different biases show completely different trends, but all three are consistent with the conductance dip becoming deeper and sharper, corresponding to a reduction in electron temperature. We also note that the asymptotic conductance throughout the demagnetisation takes the value 24.31 ± 0.03 μS, the standard deviation of which (0.12%) is smaller than the previously assessed experimental error in tunnel resistance (0.24%), meaning the asymptotic conductance is constant during a demagnetisation within the experimental error. This implies the observed drop in zero-bias conductance is not an artefact from, for example, induced voltages and is indeed consistent with cooling of the CBT island electrons.
To further understand the demagnetisation process we model the heat flow in the CBT using three coupled sub-systems within each copper island: the nuclei, electrons and phonons, see Fig. 4(a). We assume that the island phonons are coupled to the substrate, and that this thermal bath has a much larger heat capacity than the electrons and nuclei. The heat flow from the phonon bath to the electrons due to electron-phonon coupling is given by 7,23
$$\dot{Q}_{pe} = \Sigma V \left(T_p^5 - T_e^5\right), \qquad (1)$$
where Σ is a material dependent electron-phonon coupling constant, measured previously 7,24 as 2 GW/(m 3 K 5 ), and V is the CBT island volume. The heat flow between the island electrons and nuclei is 25
$$\dot{Q}_{en} = \frac{\lambda n \left(B^2 + b^2\right)\left(T_e - T_n\right)}{\mu_0 \kappa T_e}, \qquad (2)$$
where λ is the nuclear Curie constant, n the number of moles of copper on the island, b = 0.36 mT the effective dipolar field in copper 1 , μ 0 the permeability of free space and κ = 1.2 the Korringa constant for copper 26 . In addition, we include a constant parasitic heat leak Q par into the island electrons that may be due to electronic noise in the measurement circuit or non-equilibrium photons in the environment.
We assume that the thermalisation processes between the sub-systems (i.e. electron-nuclear and electron-phonon coupling) are much slower than the internal thermalisation of the nuclear system. We can therefore model the demagnetisation as small, instantaneous, adiabatic steps of magnetic field separated by periods of thermalisation between the sub-systems. During each magnetic field step δB, the nuclear temperature changes by an amount δT n . For an ideal adiabatic step B/T n is constant 1 , giving
$$\delta T_n = \frac{T_n}{B}\,\delta B. \qquad (3)$$
Between the magnetic field steps, the electron temperature evolves according to
$$\frac{dT_e}{dt} = \frac{\dot{Q}_{pe} + \dot{Q}_{par} - \dot{Q}_{en}}{C_e}, \qquad (4)$$
and the nuclear temperature evolves according to
$$\frac{dT_n}{dt} = \frac{\dot{Q}_{en}}{C_n} = \frac{T_n^2 \left(T_e - T_n\right)}{\kappa T_e}. \qquad (5)$$
Here C n and C e are the nuclear and electronic heat capacities, respectively, given by
$$C_n = \frac{\lambda n \left(B^2 + b^2\right)}{\mu_0 T_n^2}, \qquad (6)$$
$$C_e = \frac{\pi^2 n R T_e}{2 T_F}, \qquad (7)$$
where T F = 8.12 × 10 4 K is the Fermi temperature for copper 1 and R the ideal gas constant. Solving these equations numerically for the ideal case of Q par = 0 and constant T p gives the demagnetisation temperature curves shown in Fig. 4(b). Even in this simple case, the model reproduces the most significant features seen in the data: an initial drop of T e followed by a sharper increase of T e below ≈ 1 T. In order to achieve a close quantitative agreement with the data, it is necessary to introduce three parameters that modify the model to more closely resemble the physical situation. First, we allow Q par to be a fitting parameter representing constant background heating from the environment. Second, we decouple the effective island volumes that are used to calculate Q pe and Q en . This allows the model to account for differences in the heat capacities of the electroplated copper material on the CBT and the high purity copper used to determine Σ in refs 7, 24 and λ and κ in ref. 1. Finally, we allow the phonon temperature T p to increase linearly during the demagnetisation. We expect T p to increase due to eddy current heating of the sample package, however the actual change in phonon temperature cannot be measured directly by the CBT. We fit a linear increase in T p as the simplest approximation for phonon heating. This increase also allows the model to account for differences between the exact functional form of the electron-phonon coupling and that given by equation 1. Deviations from a T 5 dependence have been reported in similar devices 11,27 . Both eddy current heating and any deviation from equation 1 can be fitted using a linear increase in T p . In our experiment, both effects are present simultaneously and it is not possible to separate them. As such it is not possible to determine to what extent T p reflects the true phonon temperature during a demagnetisation. In the fitting, the initial value of T p is set to the temperature of the mixing chamber plate, as measured by the RuO 2 thermometer. By fitting the three parameters discussed above, we find close agreement between the calculated T e and the data for five different demagnetisations at different sweep rates. Figure 4(c) shows the fit to a demagnetisation made at 2.5 mT/s. Using the thermal model, we optimize the demagnetisation process to increase the low temperature hold time. From equation 2 it can be seen that the heat transfer from the electrons to the nuclei goes as B 2 , meaning that the cooling power is very significantly reduced towards the end of the demagnetisation even though T n is significantly lower than T e , as shown in Fig. 4(b). This means that the electron-phonon coupling, which grows stronger (equation 1) due to the falling T e and rising T p , becomes dominant and rapidly heats the electrons. Reducing the magnetic field ramp rate at the lower field values minimises eddy current heating, hence the rise in T p , and keeps the field at higher values for longer, allowing more heat to flow between the electrons and nuclei. Furthermore, stopping the demagnetisation at higher fields stops any eddy current heating before the heat capacity of the nuclei (equation 6) has become negligibly small.
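The stepped demagnetisation described by equations (1)-(7) is straightforward to integrate numerically. The sketch below is an illustration only, not the fitting code behind Fig. 4: the material constants follow the text, while the island volume V, moles of copper n_mol, heat leak Q_par, phonon temperature T_p and the nuclear Curie constant "curie" are rough placeholder values.

import numpy as np
from scipy.integrate import solve_ivp

R, Sigma, b, kappa, T_F = 8.314, 2e9, 0.36e-3, 1.2, 8.12e4  # constants from the text
curie = 3.1e-6          # ~lambda/mu_0 for copper, J K/(mol T^2) (approximate value)
V, n_mol, Q_par, T_p = 5e-14, 7e-9, 5e-15, 7e-3             # illustrative placeholders

def rhs(t, y, rate, B0=5.0, Bf=0.1):
    """Coupled electron/nuclear temperatures during a constant-rate field ramp."""
    Te, Tn = y
    B = max(B0 - rate * t, Bf)
    dBdt = -rate if B > Bf else 0.0
    Q_en = n_mol * curie * (B**2 + b**2) * (Te - Tn) / (kappa * Te)  # eq. (2)
    Q_pe = Sigma * V * (T_p**5 - Te**5)                              # eq. (1)
    C_e = np.pi**2 * n_mol * R * Te / (2 * T_F)                      # eq. (7)
    dTe = (Q_pe + Q_par - Q_en) / C_e                                # eq. (4)
    # C_n of eq. (6) cancels inside eq. (5); the first term is the adiabatic
    # shift of eq. (3) written as a rate.
    dTn = (Tn / B) * dBdt + Tn**2 * (Te - Tn) / (kappa * Te)
    return [dTe, dTn]

sol = solve_ivp(rhs, (0, 3000), [9e-3, 9e-3], args=(2.5e-3,),
                method="Radau", max_step=5.0)                        # stiff system
print(f"minimum T_e ~ {sol.y[0].min() * 1e3:.2f} mK")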
The electron temperatures measured during four different demagnetisation sweeps are shown in Fig. 5(a), with the magnetic field profiles shown in Fig. 5(b). During an ideal adiabatic demagnetisation there is no entropy change and B/T e is constant 1 . Figure 5(c) shows the normalized values of B/T e for each measurement, indicating how close each is to the ideal case. The 10 mT/s demagnetisation (black curve) shows a large change in B/T e and the electron temperature also peaks far above the initial temperature at the end of the ramp. Reducing the sweep rate to 2.5 mT/s reduces this peak and also reduces the lowest temperature achieved to 4.7 mK. However, the time taken to reach the minimum temperature is long and the entropy loss at low temperatures is still large, which causes the electrons to quickly heat back up. Moving to a multi-rate demagnetisation increases the hold time below 5 mK to 1200 s (from around 400 s) and reduces the minimum temperature to 4.5 mK. This was further improved by stopping the sweep at 1.4 T which fully eliminates the peak in electron temperature seen at low fields while still reaching the same ultimate base temperature.
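The multi-rate profiles are simple piecewise-linear ramps; a sketch of the profile used for the orange and blue curves (segment boundaries taken from the text) is given below.

def field_profile(t, B0=5.0, B_final=1.4,
                  segments=((2.5, 10e-3), (1.5, 2.5e-3), (None, 0.5e-3))):
    # Piecewise-constant-rate demagnetisation: each segment is (target field
    # in T, ramp rate in T/s); a None target means "ramp to B_final".
    B = B0
    for target, rate in segments:
        if target is None or target < B_final:
            target = B_final
        t_seg = (B - target) / rate       # duration of this segment
        if t <= t_seg:
            return B - rate * t
        t -= t_seg
        B = target
        if B <= B_final:
            break
    return B_final

# field_profile(0) -> 5.0 T, field_profile(250) -> 2.5 T, field_profile(850) -> 1.4 T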
Discussion
We have shown that a CBT with large, copper electroplated islands is a reliable thermometer of electron temperature over a large range of magnetic fields. It also exhibits long thermalisation times below 10 mK, as has been observed previously in devices with gold-plated islands 11 . It is expected that the CBT should be unaffected by magnetic fields as its operation is solely based on electrostatic properties 1 . Our results are in agreement with this expectation and with a previous investigation into the magnetic field dependence of CBTs 28 .
When the copper islands of the CBT are demagnetised we see that the minimum CBT conductance falls while the asymptotic conductance remains constant and the intermediate conductance increases, as shown in Fig. 3(a). This is due to the sharpening and deepening of the dip in the conductance curve and is direct evidence of cooling. We model this cooling process by considering the flow of heat between three subsystems in each island, and the model fits closely to the experimentally measured electron temperatures. One notable deviation is that the lowest temperatures found in the data are consistently slightly higher than those found by the numerical simulation, for example as seen in Fig. 4(c). This is consistent with electrical noise in the measurement system causing a small amount of conductance curve broadening (see materials and methods) and hence overestimated electron temperatures in the data. Significant efforts were made to minimise noise sources, but we cannot be certain that noise broadening is not limiting the lowest temperatures reported.
We have modified the magnetic field sweep profiles to prevent the electron temperature peaking above the initial temperature, to minimize the lowest temperature reached and to maximize the time the CBT remains at the lower temperatures. Using these profiles, the data show a reduction in minimum temperature that is relatively small (less than 1 mK), but a dramatic increase in the time that the CBT remains below 5 mK from ≈ 400 s to ≈ 1200 s. This shows that on-chip magnetic refrigeration can be a practical cooling method to allow measurements at temperatures lower than those achievable with commercial dry dilution refrigerators (typically guaranteed around 10 mK).
The thermal model allows us to simulate the demagnetisation process for arbitrary values of magnetic field and starting temperature. Using the best demagnetisation profile from Fig. 5, an initial electron temperature of 5 mK and an initial magnetic field of 10 T, we expect a minimum electron temperature of 1.1 mK. Similarly, with an initial field of 12 T we expect a minimum temperature of 0.9 mK. These parameters are within reach of commercial dry dilution refrigerators. Finally, we note a previous observation that using low quality copper flakes as a refrigerant leads to surprisingly long thermalisation times after demagnetisation 29 . We believe that this shows potential for further improvements in the low temperature hold time by optimising the material properties of the on-chip copper refrigerant.
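As a rough illustration, the rhs() sketch above can be rerun from these starting conditions (here with a single 2.5 mT/s ramp to 1.4 T rather than the optimised multi-rate profile, and still with the placeholder island parameters, so the printed number is only indicative):

sol10 = solve_ivp(rhs, (0, 4000), [5e-3, 5e-3], args=(2.5e-3, 10.0, 1.4),
                  method="Radau", max_step=5.0)
print(f"minimum T_e ~ {sol10.y[0].min() * 1e3:.2f} mK")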
In conclusion, we have observed that the use of large copper islands on a Coulomb blockade thermometer allows magnetic cooling of the device and hence permits the electron temperature to be significantly reduced below the temperature of the phonon bath, the temperature of this dilution refrigerator, and the base temperature of most other commercial refrigerators. By optimizing the magnetic field ramp rates, the electrons were cooled below 5 mK for 1200 s. We anticipate that sub-1 mK electron temperatures can be reached using this technique by employing a larger magnetic field and a lower starting temperature.
Methods
CBT fabrication.
The CBT substrate is an undoped silicon wafer with a 300 nm thermal oxide layer. On to this, aluminium films (250 nm thick) were deposited to form the CBT circuit, with SiO 2 deposited by plasma enhanced chemical vapour deposition to separate different aluminium layers. Tunnel junctions were formed, in an ex situ tunnel junction process 22 , as an aluminium oxide layer between two aluminium layers contained within a cylindrical via (diameter nominally 0.8 μm) through the SiO 2 , giving a resistivity of ~10 kΩ μm 2 .
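As a rough consistency check (an estimate, not a calculation from the paper), the quoted specific resistance and via size reproduce the order of magnitude of the fitted junction resistance:

import math

area_um2 = math.pi * (0.8 / 2) ** 2   # via area for 0.8 um diameter, ~0.50 um^2
R_T_est = 10.0 / area_um2             # kOhm, from ~10 kOhm um^2 specific resistance
print(f"estimated R_T ~ {R_T_est:.0f} kOhm")  # ~20 kOhm vs the fitted ~25 kOhm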
The ex situ method, and other typical tunnel junction deposition processes, are generally usable for film thicknesses up to around 1 μm. Above this the films can suffer from stress build-up which leads to poor adhesion to the substrate. On cooling to low temperatures this can cause poor thermalisation and, due to the significant thermal contraction during cool down, mechanical failure. To maximize the nuclear heat capacity available in the islands for demagnetisation cooling, it is necessary to have the thickest islands possible. To achieve this, the islands were mask electroplated with copper following the ex situ process. The mask electroplating can be used for low embedded stress films up to ~10 μm thick, and here was used to create an approximately 6 μm thick layer.
CBT packaging. To maximize the thermal conductance between the device and the mixing chamber plate, a path of high conductivity silver is used to connect the CBT chip and the plate. This is achieved by bonding the CBT to a 3D-printed silver package using a silver conductive coating (Electrodag). A high purity, annealed silver wire of approximately 1 mm diameter connects the outside of the silver package to the gold plated copper mixing chamber plate surface. The ends of the silver wire are crushed between a silver washer and the contact surfaces using a brass bolt, nut and copper washers (used to prevent loosening due to the relatively large thermal contraction of silver on cooling).
The package and silver wire were supported by a plastic rod below the mixing chamber plate of the dilution refrigerator. The choice of a plastic rod was made to minimize the total cross-sectional area of metal perpendicular to magnetic flux, thus minimizing the amount of eddy current heating when sweeping the magnetic field. Similarly, the CBT package is shaped to minimize the cross section perpendicular to the applied field, and the long axis of the copper CBT islands is oriented parallel to the field. Electronic measurement. The measurement circuit includes three stages of electronic filtering to reduce noise at the sample. The effect of electrical noise is to smooth the shape of the measured conductance curve, which would lead to artificially high reported electron temperatures. The filtering involves a distributed RC filter on the CBT itself (R ≈ 500 Ω, C ≈ 10 pF), a discrete component RLC filter (R = 4.99 kΩ, L = 24 nH, C = 180 pF) on a neighbouring PCB and a discrete component RCR (R = 200 Ω, C = 1 nF) pi-filter mounted on the mixing chamber plate. This RCR filter is an Aivon Oy Therma that has been modified by potting with copper loaded (40% by mass) Stycast ® 2850GT to minimize heating of the resistors, and to act as a powder filter to attenuate high frequency noise. The inductors making up the RLC filter are oriented with their coil axis perpendicular to the applied magnetic field to prevent induced voltages.
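A back-of-envelope sketch of the corner frequencies of the three stages is given below; it assumes simple first-order RC corners and ignores the inductor and the distributed nature of the on-chip filter, so the numbers are indicative only.

import math

def rc_corner(r_ohm, c_farad):
    # First-order low-pass corner frequency f_c = 1 / (2 pi R C).
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

stages = {
    "on-chip distributed RC (500 Ohm, 10 pF)": rc_corner(500, 10e-12),
    "PCB RLC, RC part only (4.99 kOhm, 180 pF)": rc_corner(4.99e3, 180e-12),
    "mixing-chamber RCR pi (200 Ohm, 1 nF)": rc_corner(200, 1e-9),
}
for name, f_c in stages.items():
    print(f"{name}: f_c ~ {f_c / 1e3:.0f} kHz")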
The lock-in technique was used to measure the CBT's differential conductance. The magnitude of the AC excitation used for this (40 pA) was chosen to prevent conductance curve broadening while keeping the integration times required for a clean measurement acceptable. Measurements were also made with an excitation of 20 pA; however, this did not show a lower base electron temperature. Data availability. All data used in this paper are available at http://dx.doi.org/10.17635/lancaster/researchdata/105 including descriptions of the data sets.
Code availability. Data analysis was undertaken using custom code derived from the python-based software library "pyCBT", which is freely available from Aivon Oy at https://github.com/AivonOy/pyCBT.
Figure 1. Details of the CBT device structure and package. The large arrow shows the direction of applied magnetic field B. Panel (a) is a photograph of the CBT chip, showing the array of 20 × 32 copper islands in the bottom 3/4 of the image and the on-chip filters at the top. Panel (b) is a micrograph showing five of the islands. The package is located in the bore of a superconducting solenoid and the magnetic field is aligned parallel to the long axis of the islands. Panel (c) shows the design of the sample package used to allow effective precooling of the CBT device. It consists of a closed silver box (lid not shown here) in which the CBT is bonded in place using a silver conductive coating (Electrodag ® ). The package also contains a small PCB (green) with RLC lowpass filters to which the CBT bond wires are attached. For thermalisation, the package is mounted on a thin copper strip and attached to a silver wire, both of which are attached to the mixing chamber plate of the dilution refrigerator. Four copper wires pass from the package to additional filters at the mixing chamber plate.
Figure 2. Calibration of the CBT at different magnetic fields. Panel (a) shows the CBT conductance G against measured DC bias V DC for several different temperatures at 0.1 T. Panel (b) shows similar measurements made at 5.0 T. In both panels, the symbols show measured values and the lines show the best fit to the modelled conductance. The warmest three measurements (black diamonds, orange circles and blue squares) are fitted simultaneously to find the island capacitance C ∑ and junction resistance R T . The parameters found were C ∑ = 192.4 ± 0.9 fF and R T = 24.99 ± 0.06 kΩ at 0.1 T, and C ∑ = 191.9 ± 0.8 fF and R T = 25.10 ± 0.06 kΩ at 5.0 T. The legends show the fitted electron temperatures (uncertainties from the confidence in the fitting parameters). These values are also plotted in the insets against the temperature of the dilution refrigerator's mixing chamber plate T MXC as measured by a RuO 2 thermometer. The solid line in the insets shows T e = T MXC , and it can be seen that the CBT electron temperature remains well-thermalised with the mixing chamber. A conductance curve was measured at base temperature (green diamonds in panel (a)) after the CBT was allowed to cool for 5 hours. From this we find a base electron temperature of 8.1 ± 0.1 mK at a fridge temperature T MXC = 7.12 ± 0.06 mK.
Figure 3. Variation of CBT conductance with DC bias during demagnetisation. In panel (a), the solid black line shows the magnetic field sweeping at 2.5 mT/s from 5.0 to 0.1 T. The orange, blue and green lines show the peak, half-maximum and asymptotic CBT conductances, respectively. Panels (b), (c) and (d) are schematic CBT conductance characteristics (plotted against common scales) for reducing electron temperature from (b) to (d). The coloured symbols show the three conductances at the indicated times.
Figure 4. Numerical model of demagnetisation refrigeration. Panel (a) shows a schematic of the thermal model. Here the CBT island electrons (measured temperature T e and heat capacity C e ) are connected to the island phonons (temperature T p ) through electron-phonon coupling, resulting in a heat flow Q pe . They are also connected to the CBT island nuclei (temperature T n and heat capacity C n ) with a heat flow Q en and have an additional parasitic heat leak Q par . Panel (b) is a simulation of an ideal demagnetisation with no external heat leak and a constant phonon temperature. Panel (c) is a fitting of this model to the 2.5 mT/s demagnetisation shown in Fig. 3. For this fitting, the measured electron temperatures T e (orange circles) were fitted to the calculated electron temperatures by varying Q par , the rate of increase of T p and the electron-phonon coupling effective volume. The fitting parameters Q par and the rate of increase of T p are 6.317 fW and 2.960 μK/s, respectively, and the effective volume used to calculate Q pe is 12% of the volume used for Q en . We have previously found Q par > 0.3 fW for similar devices measured in a lower noise environment 11 .
Figure 5. Dependence of demagnetisation cooling on field ramp profile. Panel (a) shows the variation in CBT electron temperature with time for the field profiles shown in panel (b) for the corresponding colours. All profiles proceeded from 5.0 T. The black and green profiles ramp to 0.1 T at constant rates of 10 mT/s and 2.5 mT/s, respectively. The other profiles ramp at 10 mT/s to 2.5 T, then at 2.5 mT/s to 1.5 T and finally at 0.5 mT/s to 0.1 T and 1.4 T for the orange and blue curves, respectively. Panel (c) shows the variation in B/T e during these demagnetisations and hence the amount of deviation from the ideal case of constant entropy (constant B/T e ).
© The Author(s) 2017
Acknowledgements
We acknowledge Hannele Heikkinen and Leif Grönberg for assistance in the sample preparation, and thank Jukka Pekola for helpful discussions.
Author Contributions
References
1. Pobell, F. Matter and Methods at Low Temperatures, 3rd edn (Springer-Verlag, Berlin, 2007).
2. Pickett, G. Cooling metals to the microkelvin regime, then and now. Physica B: Condensed Matter 280, 467-473 (2000).
3. Batey, G. et al. A microkelvin cryogen-free experimental platform with integrated noise thermometry. New Journal of Physics 15, 113034 (2013).
4. Todoshchenko, I., Kaikkonen, J.-P., Blaauwgeers, R., Hakonen, P. J. & Savin, A. Dry demagnetization cryostat for sub-millikelvin helium experiments: Refrigeration and thermometry. Review of Scientific Instruments 85, 085106 (2014).
5. Pekola, J. P. et al. Single-electron current sources: Toward a refined definition of the ampere. Reviews of Modern Physics 85, 1421-1472 (2013).
6. Nakamura, S., Pashkin, Yu. A., Tsai, J. S. & Kaneko, N.-H. Temperature dependence of single-electron pumping using a SINIS turnstile. Physica C: Superconductivity 504, 93-96 (2014).
7. Giazotto, F., Heikkilä, T. T., Luukanen, A., Savin, A. M. & Pekola, J. P. Opportunities for mesoscopics in thermometry and refrigeration: Physics and applications. Reviews of Modern Physics 78, 217-274 (2006).
8. Pashkin, Yu. A., Astafiev, O., Yamamoto, T., Nakamura, Y. & Tsai, J. S. Josephson charge qubits: a brief review. Quantum Information Processing 8, 55-80 (2009).
9. Pekola, J. P., Hirvi, K. P., Kauppinen, J. P. & Paalanen, M. A. Thermometry by arrays of tunnel junctions. Physical Review Letters 73, 2903-2906 (1994).
10. Meschke, M., Kemppinen, A. & Pekola, J. P. Accurate Coulomb blockade thermometry up to 60 kelvin. Philosophical Transactions of the Royal Society A 374, 20150052 (2016).
11. Bradley, D. I. et al. Nanoelectronic primary thermometry below 4 mK. Nature Communications 7, 10455 (2016).
12. Leivo, M. M., Manninen, A. J. & Pekola, J. P. Microrefrigeration by normal-metal/insulator/superconductor tunnel junctions. Applied Superconductivity 5, 227-233 (1997).
13. Nahum, M., Eiles, T. M. & Martinis, J. M. Electronic microrefrigerator based on a normal-insulator-superconductor tunnel junction. Applied Physics Letters 65, 3123-3125 (1994).
14. Savin, A. M. et al. Efficient electronic cooling in heavily doped silicon by quasiparticle tunneling. Applied Physics Letters 79, 1471-1473 (2001).
15. O'Neil, G. C., Lowell, P. J., Underwood, J. M. & Ullom, J. N. Measurement and modeling of a large-area normal-metal/insulator/superconductor refrigerator with improved cooling. Physical Review B 85, 134504 (2012).
16. Nguyen, H. Q., Meschke, M., Courtois, H. & Pekola, J. P. Sub-50-mK electronic cooling with large-area superconducting tunnel junctions. Physical Review Applied 2, 054001 (2014).
17. Gunnarsson, D. et al. Interfacial engineering of semiconductor-superconductor junctions for high performance micro-coolers. Scientific Reports 5, 17398 (2015).
18. Lowell, P. J., O'Neil, G. C., Underwood, J. M. & Ullom, J. N. Macroscale refrigeration by nanoscale electron transport. Applied Physics Letters 102, 082601 (2013).
19. Prance, J. R. et al. Electronic refrigeration of a two-dimensional electron gas. Physical Review Letters 102, 146602 (2009).
20. Ciccarelli, C., Campion, R. P., Gallagher, B. L. & Ferguson, A. J. Intrinsic magnetic refrigeration of a single electron transistor. Applied Physics Letters 108, 053103 (2016).
21. Pekola, J. P., Taskinen, L. J. & Farhangfar, S. One- and two-dimensional tunnel junction arrays in weak Coulomb blockade regime: Absolute accuracy in thermometry. Applied Physics Letters 76, 3747-3749 (2000).
22. Prunnila, M. et al. Ex situ tunnel junction process technique characterized by Coulomb blockade thermometry. Journal of Vacuum Science & Technology B 28, 1026-1029 (2010).
23. Wellstood, F., Urbina, C. & Clarke, J. Hot-electron effects in metals. Physical Review B 49, 5942-5955 (1994).
24. Viisanen, K. L. & Pekola, J. P. Anomalous electronic heat capacity of copper nanowires at sub-kelvin temperatures. arXiv:1606.02985 (2016).
25. Huiskamp, W. J. & Lounasmaa, O. V. Ultralow temperatures - how and why. Reports on Progress in Physics 36, 423-496 (1973).
26. Enss, C. & Hunklinger, S. Low-Temperature Physics (Springer Science & Business Media, Heidelberg, 2005).
27. Casparis, L. et al. Metallic Coulomb blockade thermometry down to 10 mK and below. Review of Scientific Instruments 83, 083903 (2012).
28. Hirvi, K. P., Kauppinen, J. P., Korotkov, A. N., Paalanen, M. A. & Pekola, J. P. Coulomb blockade thermometry. Czechoslovak Journal of Physics 46, 3345-3352 (1996).
29. Bradley, D. I. et al. New methods for nuclear cooling into the microkelvin regime. Journal of Low Temperature Physics 57, 359-390 (1984).
| [
"https://github.com/AivonOy/pyCBT."
] |
[
"Statisticalnetw orks em erging from link-node interactions",
"Statisticalnetw orks em erging from link-node interactions"
] | [
"A E A \nYerevan Physics Institute\nA l ikhanian B rothers St. 2375036YerevanA rm enia\n",
"K G Petrosyan \nYerevan Physics Institute\nA l ikhanian B rothers St. 2375036YerevanA rm enia\n"
] | [
"Yerevan Physics Institute\nA l ikhanian B rothers St. 2375036YerevanA rm enia",
"Yerevan Physics Institute\nA l ikhanian B rothers St. 2375036YerevanA rm enia"
] | [] | We study a model for a statistical network formed by interactions between its nodes and links. Each node can be in one of two states (Ising spin up or down) and the node-link interaction facilitates linking between the like nodes. For high temperatures the influence of the nodes on the links can be neglected, and we get the Ising ferromagnet on the random (Erdos-Renyi) graph. For low temperatures the nodes get spontaneously ordered. Due to this, the connectivity of the network enhances and links having a common node are correlated. The emerged network is clustered. The node-link interaction shifts the percolation threshold of the random graph to much smaller values, and the very percolation transition can become of the first order: the giant cluster coexists with the unconnected phase, leading to bistability and hysteresis. The model can be applied to the striction phenomena in magnets and to studying opinion formation in the sociophysical context. PACS numbers: 64.60.Cn, 89.75.Hc, 89.65.-s. Statistical mechanics of networks is a growing field with a wide range of applications [1]. The subject originated with works of chemical physicists [2] and mathematicians [3], while the modern trends focus on interdisciplinary applications (biology, social sciences) [1]. Normally networks are modeled either via passive nodes with a given distribution of links, or with a given dynamics of link formation (growing network) [1]. One of the basic models in the first class is the random (Erdos and Renyi) graph [1,3]: a set of N nodes with the links independently distributed between them. The model adequately describes some (e.g., percolation) but not all the relevant features of the real networks (e.g., clustering). This motivated generalizations of the random graphs within statistical mechanics of networks [4]. Here we propose to model a network such that its nodes and links are active variables influencing each other. We present below possible applications of this approach. Consider N labeled nodes which carry Ising spins $\sigma = \{\sigma_i = \pm 1\}_{i=1}^N$. Linking between these nodes is described by the symmetric adjacency matrix $J = \{J_{ik}\}$: $J_{ik} = J_{ki} = 1\,(0)$ indicates the presence (absence) of the corresponding undirected link. No self-links are present: $J_{ii} = 0$. The Hamiltonian of the system is | 10.1209/epl/i2006-10212-8 | [
"https://export.arxiv.org/pdf/cond-mat/0511133v1.pdf"
] | 36,448,837 | cond-mat/0511133 | 6912cca8f960c48722822ec9d95497f754f03fd4 |
Statistical networks emerging from link-node interactions
4 Nov 2005
A E A
Yerevan Physics Institute
Alikhanian Brothers St. 2, 375036 Yerevan, Armenia
K G Petrosyan
Yerevan Physics Institute
Alikhanian Brothers St. 2, 375036 Yerevan, Armenia
Statistical networks emerging from link-node interactions
4 Nov 2005 (Dated: December 10, 2021)
We study a model for a statistical network formed by interactions between its nodes and links. Each node can be in one of two states (Ising spin up or down) and the node-link interaction facilitates linking between the like nodes. For high temperatures the influence of the nodes on the links can be neglected, and we get the Ising ferromagnet on the random (Erdos-Renyi) graph. For low temperatures the nodes get spontaneously ordered. Due to this, the connectivity of the network enhances and links having a common node are correlated. The emerged network is clustered. The node-link interaction shifts the percolation threshold of the random graph to much smaller values, and the very percolation transition can become of the first order: the giant cluster coexists with the unconnected phase, leading to bistability and hysteresis. The model can be applied to the striction phenomena in magnets and to studying opinion formation in the sociophysical context. PACS numbers: 64.60.Cn, 89.75.Hc, 89.65.-s. Statistical mechanics of networks is a growing field with a wide range of applications [1]. The subject originated with works of chemical physicists [2] and mathematicians [3], while the modern trends focus on interdisciplinary applications (biology, social sciences) [1]. Normally networks are modeled either via passive nodes with a given distribution of links, or with a given dynamics of link formation (growing network) [1]. One of the basic models in the first class is the random (Erdos and Renyi) graph [1,3]: a set of N nodes with the links independently distributed between them. The model adequately describes some (e.g., percolation) but not all the relevant features of the real networks (e.g., clustering). This motivated generalizations of the random graphs within statistical mechanics of networks [4]. Here we propose to model a network such that its nodes and links are active variables influencing each other. We present below possible applications of this approach. Consider N labeled nodes which carry Ising spins $\sigma = \{\sigma_i = \pm 1\}_{i=1}^N$. Linking between these nodes is described by the symmetric adjacency matrix $J = \{J_{ik}\}$: $J_{ik} = J_{ki} = 1\,(0)$ indicates the presence (absence) of the corresponding undirected link. No self-links are present: $J_{ii} = 0$. The Hamiltonian of the system is
$$H(\sigma, J) = -\varepsilon \sum_{i<k} J_{ik}\, \sigma_i \sigma_k + \mu \sum_{i<k} J_{ik}, \qquad (1)$$
where ε is the coupling of the link-node interaction, while the second term with μ > 0 generates (for ε = 0) the proper random graph behavior. For ε > 0 (ferromagnet) linking between the like nodes, $\sigma_i \sigma_k > 0$, is favored. Likewise, if $J_{ik} = 1$, then $\sigma_i$ and $\sigma_k$ tend to line up. We assume that i) J and σ are statistical systems with different time-scales: spins (links) are fast (slow); ii) J and σ may have different temperatures. For constructing the stationary distribution in the spirit of the information-theoretical approach [5] we introduce the entropies of the links and spins [6]: $S_J \equiv -\mathrm{Tr}_J\, P(J) \ln P(J)$, $S_\sigma \equiv -\mathrm{Tr}_J\, P(J)\, \mathrm{Tr}_\sigma\, P(\sigma|J) \ln P(\sigma|J)$, where $\mathrm{Tr}_J$ ($\mathrm{Tr}_\sigma$) is the summation over all configurations of J (σ). P(J) and P(σ|J) are, respectively, the probability of the links and the conditional probability of the spins. Due to the above assumption i) the relevant probability for the spins is P(σ|J), and thus $S_\sigma$ is the relevant conditional entropy. P(J) and P(σ|J) are found by minimizing the average energy $U = \mathrm{Tr}_J\, \mathrm{Tr}_\sigma\, H(\sigma, J)\, P(J)\, P(\sigma|J)$ for the fixed values of $S_J$ and $S_\sigma$. To this end minimize the Lagrange function $U - T S_\sigma - T_J S_J$, with T = 1/β and T_J = 1/β_J being the Lagrange factors or temperatures [6]:
$$P(\sigma|J) = \frac{e^{-\beta H(\sigma, J)}}{Z(J)}, \qquad Z(J) \equiv \mathrm{Tr}_\sigma\, e^{-\beta H(\sigma, J)}, \qquad (2)$$
$$P(J) = \bar{Z}^{-1} Z^n(J), \qquad \bar{Z} \equiv \mathrm{Tr}_J\, Z^n(J), \qquad n \equiv T/T_J, \qquad (3)$$
where $-T \ln Z(J)$ is an effective Hamiltonian forming a Gibbs distribution at temperature T_J for P(J). Alternatively, we can recover (2,3) via the microscopic approach [6,7,8] subjecting J and σ to thermal baths at temperatures T_J and T, respectively.
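The two-bath construction can be illustrated with a toy Monte Carlo sketch: spins undergo Metropolis updates at temperature T and links at temperature T_J, with many spin moves per link move to mimic the fast-spin/slow-link separation. The code below is an illustration only; eps and mu follow the reconstructed notation of eq. (1), and all parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
N, eps, c = 100, 1.0, 2.0
T, T_J = 0.5, 1.0
mu = T_J * np.log((N - c) / c)   # eq. (4): gives mean degree ~c when eps = 0

sigma = rng.choice([-1, 1], size=N)
J = np.zeros((N, N), dtype=int)

def spin_sweep(beta):
    for i in rng.integers(0, N, size=N):
        dE = 2.0 * eps * sigma[i] * (J[i] @ sigma)   # cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            sigma[i] = -sigma[i]

def link_moves(beta_J, n_moves):
    for _ in range(n_moves):
        i, k = rng.integers(0, N, size=2)
        if i == k:
            continue
        E_link = -eps * sigma[i] * sigma[k] + mu     # energy if J_ik = 1, eq. (1)
        dE = (1 - 2 * J[i, k]) * E_link              # cost of toggling the link
        if dE <= 0 or rng.random() < np.exp(-beta_J * dE):
            J[i, k] = J[k, i] = 1 - J[i, k]

for sweep in range(500):
    spin_sweep(1.0 / T)          # fast spins: one full sweep ...
    link_moves(1.0 / T_J, N)     # ... per handful of slow link moves

print("mean degree:", J.sum(axis=1).mean(), " |M| =", abs(sigma.mean()))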
Limiting cases. For ε = 0 (no spins, thus passive nodes) we get from P(J) the standard random graph with the links $J_{ik}$ independently assuming values 0 and 1 with the probabilities, respectively, 1 − c/N and c/N. Here c is finite for N ≫ 1 and is found from
$$\mu \beta_J = \ln\left([N - c]/c\right). \qquad (4)$$
Since links can be formed between any pair of the nodes, most of $J_{ik}$ have to be zero to ensure a finite average connectivity: $\sum_k \langle J_{ik} \rangle = c$. When n = T/T_J → 0, but c (and thus μβ_J) is fixed, the nodes do not react on the links, which still form the random graph. The Ising model on the random graph (which simulates disordered media) is widely employed for studying magnets, lattice gases, etc. [9,10]. Finite n describes situations where the reaction of the spins on the lattice is important, e.g., the striction effect. For ordered lattices such effects were studied in detail [11]; the present model describes striction for a disordered lattice.
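The passive-node limit is easy to check numerically: drawing every link independently with probability c/N reproduces the Poisson degree law quoted later in the text. A minimal sketch:

import numpy as np
from math import exp, factorial

rng = np.random.default_rng(1)
N, c = 2000, 2.0
upper = np.triu(rng.random((N, N)) < c / N, k=1)   # independent links, p = c/N
A = upper | upper.T                                 # symmetric adjacency matrix
deg = A.sum(axis=1)
for d in range(5):
    print(f"d = {d}: empirical {np.mean(deg == d):.3f}"
          f"  Poisson {exp(-c) * c**d / factorial(d):.3f}")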
Phase structure. We calculate $\bar{Z}$ in (3) via the notorious replica method [9], i.e., first taking n = T/T_J integer and then analytically continuing to a real n:
For an arbitrary n we have to involve all $Q_{a_1 \ldots a_r}$, which makes the explicit analysis rather difficult. Things are simpler for n = 1, 2, where at best only two parameters are involved: $Q_1 = M$ (magnetization) and $Q_{12} = Q$ (Edwards-Anderson parameter [9]). For n = 1 the assumption on the wide separation of characteristic times is irrelevant, because P(σ, J) is a Gibbsian at temperature T = T_J and does not depend at all on the characteristic times. We get from (6,7):
$$F = \frac{c b_1 M^2}{2} - \ln\left[2 \cosh(c b_1 M)\right]; \qquad M = \tanh(c b_1 M); \qquad (9)$$
w here the behavi orofM i scontrol l ed by a si ngl e parametercb 1 :forcb 1 < 1 the onl y sol uti on i sM = 0,w hi l e for cb 1 1 we have a sm ooth (second-order) transi ti on to the ferrom agneti c phase,w here M 6 = 0. Eq. T he transform ati on of (15) i s i l l ustrated for m = 1: i) A ppl y (5);ii) R ecal lthat the m odeldoes not have any space structure and that al lthe l i nks are equi val ent,i . e, the l i nki ng probabi l i ty hJ ik i = 1 can be sought for as T hi s hol ds as wel l for al lhi gher order correl atorsofJ' s and i s expl ai ned as fol l ow s:from (2,3,1)we see thatthe condi ti onalprobabi l i ty for J factori zes,P (Jj )= Q i< k P (J ik j ),because there i s no di rect J J i nteracti on i n the H am i l toni an (1). T hus J' s can be correl ated onl y due to uctuati ons ofthe spi ns from one real i zati on to another.
F \simeq A_0 M^2 + A_4 M^4, \qquad (10)

A_0 = \frac{cb_1 n}{2}\,(1 - cb_1), \qquad A_4 = n c^4 b_1^4\,\big[\,1 - cb_2(3n - \ldots)\,\big].
The two-link correlator ν_2 = ⟨J_ik J_jk⟩ - ⟨J_ik⟩⟨J_jk⟩ (i ≠ j) is conveniently discussed for n = 1 (the more general and tedious expression leads to similar conclusions):
\nu_2 = \frac{c^2}{N^2}\, b_1^2\,(1 - M^2)\,M^2 \;\geq\; 0. \qquad (17)
ν_2 is zero in the paramagnet, and thus ⟨J_ik J_jk⟩ = c^2 b_0^2/N^2 is again of the random-graph form. In the ferromagnet ν_2 is positive: if the nodes i and k are linked stronger than in average, i strongly links with j. ν_2 > 0 also means that the links are attracted to the common node, since ν_2/ν_1 = Pr[J_ik = 1 | J_jk = 1] - Pr[J_ik = 1].
The probability ν_3 ≡ ⟨J_ij J_jk J_ki⟩ to have three links forming a triangle is representative for n = 2:
\nu_3 = \frac{c^3}{N^3}\Big\{ b_0^3 + 2b_1^3 + b_2^3
+ 6QM^2 b_1^2\,\big[\,b_0 + b_2 + 2b_1\,\big]
+ 6M^2 b_1\,\big[\,b_1(b_0 + b_2) + b_2^2 + b_0^2\,\big]
+ 3Q^2\,\big[\,2b_1^3 + b_0 b_2^2 + b_2 b_0^2\,\big] \Big\}.
In the paramagnet ν_3 = \frac{c^3}{N^3}\{b_0^3 + 2b_1^3 + b_2^3\} clearly deviates from the random-graph behavior c^3 b_0^3/N^3. In the ferromagnet ν_3 increases, by a jump if the transition is of the first order. The effect of clustering (informally: friends of our friends are our friends) is when two nodes already connected via a third node tend to establish a direct connection. It is characterized through a coefficient K = Pr[J_ij = 1 | J_ik = 1, J_jk = 1] - Pr[J_ij = 1], which is zero for the random graph. For our case K ≃ \frac{c^3}{N^3}\{2b_1^3 + b_2^3\} > 0 in the paramagnet, thus defining a type of short-range order. The random-graph behavior is recovered only for very high temperatures, where b_r = \cosh^n(\beta\nu)\tanh^r(\beta\nu) → 0 for r > 0. In the ferromagnet K first increases, but goes to zero for Q, M → 1. The case n = 1, where the system is in global equilibrium and the time-scale separation is irrelevant, is exceptional, since here the clustering in the paramagnet is absent: K ∝ b_1^2(b_1 + 2b_0)M^2(1 - M^2). Another relevant quantity for the network is the degree d_i ≡ Σ_k J_ik of node i (note ⟨d_i⟩ = Nν_1). For the random graph the degrees of different nodes correlate very weakly, ⟨d_i d_k⟩ - ⟨d_i⟩⟨d_k⟩ ∝ 1/N, and the distribution of d_i is Poissonian: P(d) = e^{-c} c^d/d! [1]. The first feature holds in our model due to domination in ⟨d_i d_k⟩ of links without a common node. The second feature holds only in the paramagnet; see [12] for details.

Discussion. We studied a model for a statistical network, where both nodes and links are active variables influencing each other. The nodes can be in two states (spin up
S_J ≡ -\mathrm{Tr}_J\, P(J)\ln P(J), \qquad S_\sigma ≡ -\mathrm{Tr}_J\, P(J)\,\big[\mathrm{Tr}_\sigma\, P(\sigma|J)\ln P(\sigma|J)\big], where Tr_J (Tr_σ) is the summation over all configurations of J (σ). P(J) and P(σ|J) are, respectively, the probability of the links and the conditional probability of the spins. Due to the above assumption i), the relevant probability for the spins is P(σ|J), and thus S_σ is the relevant conditional entropy. P(J) and P(σ|J) are found by minimizing the average energy U = Tr_J Tr_σ H(σ,J) P(J) P(σ|J) for the fixed values of S_J and S_σ. To this end, minimize the Lagrange function U - T S_σ - T_J S_J, with T = 1/β and T_J = 1/β_J being the Lagrange factors or temperatures [6]:

P(\sigma|J) = \frac{e^{-\beta H(\sigma,J)}}{Z(J)}, \qquad Z(J) \equiv \mathrm{Tr}_\sigma\, e^{-\beta H(\sigma,J)}; \qquad (2)

P(J) = \bar{Z}^{-1} Z^n(J), \qquad \bar{Z} \equiv \mathrm{Tr}_J\, Z^n(J), \qquad n \equiv T/T_J. \qquad (3)
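For completeness, here is a sketch of the variation that produces (2); this is standard maximum-entropy reasoning (our addition), and the variation over P(J) yields (3) analogously. Varying the Lagrange function over P(σ|J) at fixed J, with a multiplier λ(J) enforcing normalization:

\[
\frac{\delta}{\delta P(\sigma|J)}\Big[\,U - T S_\sigma - T_J S_J\,\Big]
= P(J)\Big[\,H(\sigma,J) + T\ln P(\sigma|J) + T\,\Big] + \lambda(J) = 0
\]
\[
\Longrightarrow\quad P(\sigma|J) \propto e^{-\beta H(\sigma,J)},
\]

which is exactly Eq. (2) after normalizing by Z(J).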
F is the thermodynamic potential of the problem [6]. The values of Q_{a_1...a_r} are found from treating the integral in (6) via the saddle-point method, where one searches the deepest minima of F(Q) (necessarily, ∂F/∂Q_{a_1...a_r} = 0). For situations of our interest the replica symmetry holds, i.e., the Q_{a_1...a_r} depend only on the number of the replicas: Q_{a_1...a_r} = Q^{(r)}. Since all nodes are equivalent [9],

Q_{a_1...a_r} = Q^{(r)} = \mathrm{Tr}_J\, P(J)\,\big[\,\mathrm{Tr}_\sigma\, \sigma_i\, P(\sigma|J)\,\big]^r.
Eq. (9) admits two ferromagnetic solutions with ±M. At the transition point one of these values is chosen randomly. The choice can be made deterministic by applying a small magnetic field h, i.e., by adding -h Σ_{i=1}^N σ_i (h << ν) to the Hamiltonian. We can get cb_1 > 1 by increasing the spin-link coupling ν, or decreasing T with constant n and c (thermal phase transitions). The presence of a macroscopic (∝ N) cluster of connected nodes (giant component) is necessary for a thermal phase transition. In the percolation limit thermal fluctuations are absent: T, T_J → 0, while n and cb_r are fixed. As seen from (4) this requires very small connectivity c. Then cb_r ≡ c_e does not depend on r. Eq. (9) then shows that the ferromagnet appears via a second-order phase transition for c_e = 1. The presence of the giant component is necessary and sufficient for the percolational phase transition. Second-order thermal transitions can be studied for an arbitrary n. Next to such a transition Tr_σ [P(σ|J) σ_i] is small, and (8) implies 1 >> M >> Q. Expanding F in Q and M (Ginzburg-Landau expansion) and solving ∂F/∂Q = 0, so that Q = \frac{c^2 b_1 b_2}{1 - cb_2} M^2 is expressed via M, gives
At high temperatures both A_0 and A_4 are positive, and F is minimal for M = Q = 0 (paramagnet). A_0 changes its sign at cb_1 = 1, i.e., at ct = (1 - t^2)^{n/2}, where t ≡ tanh(βν). Then A_4 > 0 is equivalent to 1 > (3n - 2)t. Thus for n < 1, A_4 > 0 for A_0 ≃ 0, which means a second-order transition to the ferromagnet: at cb_1 ≥ 1, F in (10) changes from the monostable to the bistable shape with non-zero ±M and Q. For n > 1 this behavior is true for sufficiently large c; e.g., for n = 2 the conditions ct = (1 - t^2) and 1 > (3n - 2)t lead to c > 15/4. For smaller c, however, A_4 changes its sign before A_0, which indicates a first-order transition, as the next case n = 2 shows. Here we have three order parameters, Q_1, Q_2 and Q_{12} ≡ Q. The deepest minimum of F corresponds to Q_1 = Q_2 = M,

F = cb_1 M^2 + cb_2 Q^2 - \ldots, \qquad (12)

\varphi M = \sinh(2cb_1 M); \qquad \varphi Q = \cosh(2cb_1 M) - e^{-2cb_2 Q}, \qquad (13)

where Eqs. (13) are derived from ∂F/∂M = ∂F/∂Q = 0, and where φ ≡ e^{-2cb_2 Q} + cosh(2cb_1 M). For Q ≠ 0 and M ≠ 0, Eqs. (13) are solved to give Q (via tanh(2cb_1 M)) and substituted into (12), getting F as a function of M. Plotting this function for various ν (or T, where c and n = 2 are fixed) we select those minima of F which satisfy (14.2); see Fig. 1. At some temperature T = T_1 the ferromagnet with Q > 0 and M > 0 first appears as a metastable phase (local minimum of F). For T < T_2 this state provides a value of F lower than the paramagnetic -ln 2. T_2 is thus the first-order phase-transition temperature, since Q and M jump at this point from Q = M = 0 to some finite values. During this transition the paramagnet remains locally stable; it, however, gets unstable for

FIG. 1: The difference ΔF between the ferromagnet and paramagnet values of F. On each curve the minimum corresponds to the stationary magnetization M. The curves refer to different temperatures and c = ν = 1; from the top to bottom: T = 1.452, 1.445, 1.44. The top-line minimum corresponds to the metastable ferromagnet, the bottom-line one to the stable ferromagnet, while the middle line is drawn at the transition temperature T = T_2, where ΔF = 0.
cb_1 > 1, i.e., at some lower temperature T_3 < T_2, where T_3 is determined from cb_1 = 1. As an example: for c = 1, T_1 = 1.457, T_2 = 1.445, and T_3 = 1.385. For c → 15/4 all these three temperatures merge to T_3. In contrast to the thermal transitions, for the considered case n = 2 the percolation transition (driven by cb_1 = cb_2 = c_e) is always first-order: at some c_e < 1 the giant-cluster phase becomes (globally) stable. The paramagnet is (locally) stable till c_e = 1. The structure of the network in the present model is a collective effect, since the spins react on the links. The average of m distinct links is derived analogously to (...).
iii) Neglect fluctuations of the macroscopic quantities, e.g., ⟨(Σ_{j=1}^n σ_j)^2⟩ = ⟨Σ_{j=1}^n σ_j⟩^2. This general point of statistical mechanics can be proven directly for the present model. The final formula shows that in the paramagnet ν_1 = cb_0/N, which corresponds to the usual random graph with the effective connectivity c_e = cb_0. It increases, by a jump for the first-order transition, in the ferromagnet. The reason for this increase is that once two spins are (in average) lined up, they get linked stronger. The same tendency is seen for the probability of two links. Instead of this quantity, it is more convenient to study the two-link correlator ⟨J_ik J_lj⟩ - ⟨J_ik⟩⟨J_lj⟩. It is zero if J_ik and J_lj do not have a common node.
or down) and the like nodes tend to link. Important aspects of the model are i) time-scale separation: the nodes change faster than the links, and ii) non-equilibrium: the temperatures of the nodes (T) and links (T_J) are different. The phase structure of the model crucially depends on whether the reaction of the nodes on the links is more (n ≡ T/T_J > 1) or less (n ≤ 1) pronounced. For the first case the spins modify the links and create persistency: the phase transition to the ferromagnet is of first order; the paramagnet (ferromagnet) is metastable below (above) the transition point. This bistability implies hysteresis and memory: when changing the temperature not very slowly, the final state of the system (para or ferro) depends on its initial state. The transition becomes second-order, realized by instability of the paramagnet, either if the network is dense, or if n ≤ 1. For n → 0 no reaction of the nodes on the links is present, and we return to the Ising ferromagnet on the (Erdos-Renyi) random graph [10]. The network is clustered already in the paramagnet: two nodes connected via a third one tend to link directly. In the ferromagnet there are link-link correlations: two links having a common node attract each other. The connectivity (linking probability), clustering and link-link correlations increase during the phase transition and can serve as alternative order parameters. The model can simulate the magnetostriction effect for disordered magnets. For the Ising model on regular lattices the effect was studied in Refs. [11] and used to explain experiments on crystals of MnAs, NH4Cl, etc., which also display a striction-driven change of the ferromagnetic transition order from the second to the first. Yet another application can be modeling of opinion formation, a sociological subject that recently got an input of physical ideas [13]. Here the spin σ_i = ±1 refers to the two opinions each agent may have. It is influenced by the (social) noise at temperature T. The concrete manifestations of "agents" and "opinions" may be voting, propagation of fashion, investment, etc. The agents interact within a slowly changing social network J. The ferromagnetic interaction describes the herding-collaboration: like-minded agents tend to connect; linked agents impose their opinion on one another. The linking costs energy, as given by the term ∝ μ in (1). As compared to various applications of statistical physics models to opinion formation [13], our approach emphasizes the collaboration aspect of the like-minded agents and the dynamical character of the involved social network; see [14] for somewhat related ideas. Here is an outline of our results in the opinion-formation context. The paramagnetic phase corresponds to a plural society, where each agent has his own opinion. Herding and collaboration are not efficient, either because linking is too costly, or noises are too strong. Still, there is a short-range order: three (not less!) agents tend to connect and to share one opinion. For weaker noises or less costly linking the herding behavior starts to dominate.
If the influence of the agents on the linking is not pronounced (n < 1), the majority of agents coherently forms a single opinion driven by weak external influences (spontaneous second-order transition). Below the transition, almost no agent disagrees with the majority. For a pronounced influence on the linking (n > 1), the plural society survives the first-order transition (i.e., jump-like dominance of a single opinion) and slowly decays on a long time-scale. It gets unstable upon further decrease of the linking cost. We see that revolutionary (jump-like) changes can occur in response to smooth variations of external conditions in societies with strong trends to establish groups of like-minded agents. Such changes can be prevented if the infrastructure network is dense enough. Within the single-opinion dominated society, strongly linked agents tend to be linked even stronger, and tend to link directly with indirectly related agents. The connectivity in this phase is larger than in the plural phase. A. E. A. was supported by CRDF grant ARP2-2647-YE-05. K. G. P. was supported by ISTC grant A-820.
[1] S. N. Dorogovtsev and J. F. F. Mendes, Adv. Phys. 51, 1079 (2002); R. Albert and A.-L. Barabasi, Rev. Mod. Phys. 74, 47 (2002); M. E. J. Newman, SIAM Review 45, 167 (2003).
[2] P. J. Flory, Principles of Polymer Chemistry (Cornell University, Ithaca, 1953), chapter 9.
[3] B. Bollobas, Random Graphs (Academic Press, New York, 1985).
[4] J. Berg and M. Lassig, Phys. Rev. Lett. 89, 228701 (2002); J. Park and M. E. J. Newman, Phys. Rev. E 70, 066117 (2004); M. E. J. Newman et al., Phys. Rev. E 64, 026118 (2001); M. E. J. Newman, arXiv:cond-mat/0202208; Z. Burda et al., arXiv:cond-mat/0312494.
[5] E. Jaynes, Phys. Rev. 106, 620 (1957).
[6] A. E. Allahverdyan and Th. M. Nieuwenhuizen, Phys. Rev. E 62, 845 (2000).
[7] R. Landauer and J. Woo, Phys. Rev. A 6, 2205 (1972).
[8] A. C. C. Coolen et al., J. Phys. A 26, 3681 (1993); A. E. Allahverdyan et al., Eur. Phys. J. B 16, 317 (2000).
[9] K. Binder and A. P. Young, Rev. Mod. Phys. 58, 801 (1986).
[10] M. J. Stephen and G. S. Grest, Phys. Rev. Lett. 38, 567 (1977); L. Viana and A. J. Bray, J. Phys. C 18, 3037 (1985).
[11] C. Bean and D. Rodbell, Phys. Rev. 126, 104 (1962); A. I. Larkin and S. A. Pikin, Sov. Phys. JETP 29, 891 (1969); C. P. Slichter et al., Phys. Rev. B 4, 907 (1971).
[12] A. E. Allahverdyan and K. G. Petrosyan, in preparation.
[13] K. Sznajd-Weron and J. Sznajd, Int. J. Mod. Phys. C 11, 1157 (2000); P. L. Krapivsky and S. Redner, Phys. Rev. Lett. 90, 238701 (2003); Q. Michard and J.-P. Bouchaud, arXiv:cond-mat/0504079; Curty and M. Marsili, arXiv:physics/0506151.
[14] D. Stauffer et al., arXiv:cond-mat/0402670.
| [] |
[
"ScatterShot: Interactive In-context Example Curation for Text Transformation",
"ScatterShot: Interactive In-context Example Curation for Text Transformation"
] | [
"Tongshuang Wu ",
"Daniel S Weld weld@cs.uw.edu ",
"Jeffrey Heer jheer@cs.uw.edu ",
"Marco Tulio Ribeiro marcotcr@microsoft.com ",
"Tongshuang Wu ",
"Hua Shen huashen218@psu.edu ",
"Daniel S Weld ",
"Jeffrey Heer ",
"Marco Tulio ",
"\nCarnegie Mellon University\nHua ShenUSA\n",
"\nPennsylvania State University\nUSA\n",
"\nAllen Institute for Artificial Intelligence\nUniversity of Washington &\nUniversity of Washington\nUSA\n",
"\nMicrosoft Research\nUSA\n"
] | [
"Carnegie Mellon University\nHua ShenUSA",
"Pennsylvania State University\nUSA",
"Allen Institute for Artificial Intelligence\nUniversity of Washington &\nUniversity of Washington\nUSA",
"Microsoft Research\nUSA"
] | [
"28th International Conference on Intelligent User Interfaces (IUI '23)"
] | The in-context learning capabilities of LLMs like GPT-3 allow annotators to customize an LLM to their specific tasks with a small number of examples. However, users tend to include only the most obvious patterns when crafting examples, resulting in underspecified in-context functions that fall short on unseen cases. Further, it is hard to know when "enough" examples have been included even for known patterns. In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. ScatterShot iteratively slices unlabeled data into task-specific patterns, samples informative inputs from underexplored or not-yet-saturated slices in an active learning manner, and helps users label more efficiently with the help of an LLM and the current example set. In simulation studies on two text perturbation scenarios, ScatterShot sampling improves the resulting few-shot functions by 4-5 percentage points over random sampling, with less variance as more examples are added. In a user study, | 10.1145/3581641.3584059 | [
"https://export.arxiv.org/pdf/2302.07346v1.pdf"
] | 256,868,531 | 2302.07346 | ee805f55c98920f74d0182aaf136330a97b4123f |
ScatterShot: Interactive In-context Example Curation for Text Transformation
March 27-31, 2023
Tongshuang Wu
Daniel S Weld weld@cs.uw.edu
Jeffrey Heer jheer@cs.uw.edu
Marco Tulio Ribeiro marcotcr@microsoft.com
Tongshuang Wu
Hua Shen huashen218@psu.edu
Daniel S Weld
Jeffrey Heer
Marco Tulio
Carnegie Mellon University
Hua ShenUSA
Pennsylvania State University
USA
Allen Institute for Artificial Intelligence
University of Washington &
University of Washington
USA
Microsoft Research
USA
ScatterShot: Interactive In-context Example Curation for Text Transformation
28th International Conference on Intelligent User Interfaces (IUI '23)
Sydney, NSW, Australia, March 27-31, 2023. 10.1145/3581641.3584059. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3581641.3584059
The in-context learning capabilities of LLMs like GPT-3 allow annotators to customize an LLM to their specific tasks with a small number of examples. However, users tend to include only the most obvious patterns when crafting examples, resulting in underspecified in-context functions that fall short on unseen cases. Further, it is hard to know when "enough" examples have been included even for known patterns. In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. ScatterShot iteratively slices unlabeled data into task-specific patterns, samples informative inputs from underexplored or not-yet-saturated slices in an active learning manner, and helps users label more efficiently with the help of an LLM and the current example set. In simulation studies on two text perturbation scenarios, ScatterShot sampling improves the resulting few-shot functions by 4-5 percentage points over random sampling, with less variance as more examples are added. In a user study,
INTRODUCTION
In-context learning [70] is a property of Large Language Models (LLMs), where a user can "write" a transformation function via an (optional) short set of instructions and a few (input, output) examples.*

* The work was mostly done when the first author was a PhD student at the University of Washington.

For example, writing a function that "translates" a holiday
name (e.g. "Christmas") into its calendar date (e.g. "12/25") would previously require a complicated rule-based system capable of mapping various kinds of subtly different inputs (e.g. "Xmas", "Christmas day", etc.) to a lookup table of dates. With LLMs like GPT-3 [7], the process is much simpler. A user can achieve the same functionality with a prompt (i.e., a natural language instruction) that contains a small number (e.g., two) of simple demonstrations, followed by a query (underlined): "Christmas => 12/25; Halloween => 10/31; Independence Day (US) =>". GPT-3 would take the prompt and return the right date "7/04" for this query. More impressively, the LLM will also have some generalizability towards semantically relevant queries, e.g., queries with abbreviations ("xmas => 12/25", "nye => 12/31"), misspellings ("s patrics day => 03/17"), lesser-known name variations ("All Saints' Eve => 10/31"), and holidays that might be overlooked (e.g., "Harriet Tubman Day => 3/10"). The much reduced programming effort (compared to, e.g., rule-based systems) draws users' attention towards building their personalized in-context functions in various use scenarios, including code generation, question answering, creative writing, and others [36,54,64].

Although in-context learning has great potential, the quality of the learned function for non-trivial tasks depends on which in-context examples are used as demonstrations [32,46]. Techniques for automatic example selection [30] depend on existing labeled datasets and tasks that can be evaluated automatically (e.g., classification), and thus users "in the wild" rely on their own ingenuity and intuition when coming up with demonstrations [21]. Unfortunately, users tend to focus on the most obvious and memorable patterns for demonstration [18], leading to systematic omissions [66] and underspecification that might go unnoticed. As an example, in Figure 1 we use in-context learning to build a function to extract and normalize temporal information from a sentence [9]. Most users would provide demonstrations with common date formats (e.g. "Oct. 23, 1999"), and some might remember relative date references (e.g. "today"). However, some patterns are easy to miss, e.g. long-form dates with no capitalization or holidays (e.g. "nineteen ninety-six", "Thanksgiving Day" in Figure 1C), and the LLM may fail to learn them if they are omitted. Even sampling random examples from the unlabeled data might lead to the repetition of common patterns (Figure 1B).
Although in-context learning has great potential, the quality of the learned function for non-trivial tasks depends on which incontext examples are used as demonstrations [32,46]. Techniques for automatic example selection [30] depend on existing labeled datasets and tasks that can be evaluated automatically (e.g., classification), and thus users "in the wild" rely on their own ingenuity and intuition when coming up with demonstrations [21]. Unfortunately, users tend to focus on the most obvious and memorable patterns for demonstration [18], leading to systematic omissions [66] and underspecification that might go unnoticed. As an example, in Figure 1 we use in-context learning to build a function to extract and normalize temporal information from a sentence [9]. Most users would provide demonstrations with common date formats (e.g. "Oct. 23, 1999"), and some might remember relative date references (e.g. "today"). However, some patterns are easy to miss, e.g. long-form dates with no capitalization or holidays (e.g. "nineteen ninety-six", "Thanksgiving Day" in Figure 1C), and the LLM may fail to learn them if they are omitted. Even sampling random examples from the unlabeled data might lead to the repetition of common patterns (Figure 1B Figure 1: An overview of how human annotators can use ScatterShot to iteratively collect effective in-context examples for temporal expression extraction and normalization. The function extracts phrases with temporal meaning from sentences (e.g.,"Oct. 23, 1999" in "Slepian was killed on Oct. 23, 1999"), and normalizes them into standard formats ("Oct. 23, 1999 == 1999-10-23") -the red spans represent information deleted from the input, and the green ones represent information generated in the output. Given an in-context example set that is likely underspecifying the intended functionality (A), ScatterShot applies slice-based sampling to return unlabeled inputs that either have novel patterns or are difficult cases, and uses the existing examples to drive an LLM (e.g., to suggest (possibly noisy) annotations, such that humans can correct the suggested annotations and possibly expand the in-context example bucket. Compared to random sampling and manual labeling (B), ScatterShot helps humans re-allocate annotation budgets towards informative examples, and increases the in-context function performance.
As a result, prior work summarized the two major pain points of prompting to be (1) the effort required to source examples for a prompt, and (2) the difficulty of evaluating whether a prompt is improving [22]. In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. In a nutshell, ScatterShot helps users find informative input examples in the unlabeled data, annotate them efficiently with the help of the current version of the learned in-context function, and estimate the quality of said function. In each iteration, ScatterShot automatically slices the unlabeled data into clusters based on task-specific key phrases [66,69]. For example, given the existing examples in Figure 1A, it finds a cluster based on holiday key phrases ("Christmas", "Thanksgiving", etc.) and a cluster based on exact dates like "Oct. 23, 1999" (among others). ScatterShot keeps a running estimate of the error of each cluster, and thus prioritizes examples from clusters that have not yet been explored or learned effectively. It further uses the stability of the current in-context function with respect to minor changes in the prompt (e.g. ordering of in-context examples), prioritizing unlabeled examples that get different predictions with different prompt variations. Users are then presented with examples of underexplored clusters (e.g., Figure 1 C 1 ), or hard examples of explored clusters (e.g., C 2 , hard because the past tense refers to the Thanksgiving date of the previous year). Instead of having to label demonstrations from scratch, users can either accept correct predictions from the current function (Fig 1 C 1 ) or make edits to fix wrong predictions (Fig 1 C 2 ). These additional labels are used to update the in-context function, such that the user explores the different possible input patterns in an interactive manner, without wasting resources on patterns that have already been learned.
We evaluate ScatterShot both in terms of sampling efficiency and support for human annotators. In simulation experiments, we compare the sampling strategy in ScatterShot to random sampling on two text transformation tasks contemplated in prior work: the data wrangling task illustrated in Figure 1 [9], and rewriting question-answer pairs into logically equivalent pairs in order to evaluate model consistency [44]. In both cases, we find ScatterShot improves performance on corresponding metrics (e.g., Rouge-L, F1) by 4-5 percentage points, with less variance across various numbers of demonstrations. Further, we conduct a within-subject user study in which 10 participants build in-context functions for the QA-pair rewriting task either (1) manually, (2) with the ScatterShot interface but random sampling, or (3) with the fully-featured ScatterShot. We show that ScatterShot's interface alone is an improvement, by offloading input selection and providing sample outputs. Moreover, the sampling strategy in the fully-featured ScatterShot helps users notice diverse input patterns, leading to improvements in the resulting in-context function. For example, participants who thought their in-context examples were sufficient when using random samples went on to label 1.4 times more examples after switching to the full ScatterShot (as they found new patterns), which further improved the function test performance. We conclude the paper with insights into challenges and opportunities that arise from our experiments, including, e.g., explaining the sampling rationales, incorporating automated blind-spot detection, and the potential of using a ScatterShot setup to help users iteratively refine their task definition during data collection.
THE DESIGN OF SCATTERSHOT
ScatterShot aims to help users build high-quality in-context functions. In order to be effective, a function must be able to handle common patterns (e.g., the temporal normalization function in Figure 1 must be able to handle common temporal expressions such as "today"), without neglecting less common ones (e.g., holidays such as "Christmas"). Further, we want the process to be cost-effective, not wasting annotation effort on demonstrations that are redundant with already covered patterns. To achieve these goals, we design ScatterShot to respond to every user interaction by offering assistance in three areas:

• Help the user discover previously unexplored patterns. In each iteration, ScatterShot uses existing demonstrations and users' past interactions to cluster the remaining unlabeled data into task-specific slices. Such slices map the input space for users to explore.

• Help the user prioritize the most informative examples.
ScatterShot uses the current in-context function to estimate the difficulty of slices and examples, prioritizing unexplored slices or slices and examples where the current function is not yet performing well. We call this variant of active learning slice-based sampling.

• Minimize annotation cost. Rather than providing a gold output label from scratch for each example, the user is presented with the best-guess output from the current in-context function (updated at every step), which they either accept when correct or edit the incorrect parts.
We wrap these functionalities with a lightweight interface, where at each round, users are presented with a batch of unlabeled examples to be (potentially) added to the set of demonstrations. Thus, at each round, users get a "picture" of their current in-context function, and interact with it for improvement. We now detail each one of these components.
Interactive Interface
We present ScatterShot as an interactive interface, shown in Figure 2. The interface contains a task description (A 1 ) and existing in-context examples as demonstrations, presented as input-output pairs (A 2 ). These pairs are color-encoded based on the text editing distance, with the spans deleted from the input in red, and the spans added in green. Both the description and demonstrations are editable, and are automatically translated into an LLM prompt (Figure 2B) with the task description, demonstrations in the format » [example input] => [example output], and a candidate input for the LLM to transform into an output.
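To make the prompt format concrete, here is a minimal sketch of how such a prompt can be assembled and queried. The » [input] => [output] layout follows the description above; the `complete` callable stands in for whatever LLM completion API is available (ScatterShot uses GPT-3, but this wrapper is our assumption, not the system's released code):

# Minimal sketch: build an in-context prompt in the "» input => output" format.
# `complete` is a placeholder for an LLM completion API (prompt in, text out).
from typing import Callable

def build_prompt(task_description: str,
                 demonstrations: list[tuple[str, str]],
                 query: str) -> str:
    """Concatenate the task description, demonstrations, and the query."""
    lines = [task_description]
    for inp, out in demonstrations:
        lines.append(f"» {inp} => {out}")
    lines.append(f"» {query} =>")  # the LLM completes the missing output
    return "\n".join(lines)

def in_context_function(complete: Callable[[str], str],
                        task_description: str,
                        demonstrations: list[tuple[str, str]],
                        query: str) -> str:
    """Apply the current in-context function to a single candidate input."""
    return complete(build_prompt(task_description, demonstrations, query)).strip()

Because the demonstrations are just a Python list, editing, reordering, or removing examples in the interface maps directly onto updating this list and re-querying the model.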
Below the existing examples, ScatterShot proposes a batch of 5 candidate inputs sampled from the unlabeled dataset, with outputs computed with the current version of the in-context function (A 3 ), using the prompt in Figure 2B. Users then verify the candidates and provide feedback, editing outputs to fix mistakes when needed (e.g., changing from "Thanksgiving == 2000-11-25" to "Thanksgiving == 1999-11-25", A 4 ), and adding examples to, or removing them from, the few-shot example set for in-context learning (A 5 ). In addition to saving annotation time, LLM-generated outputs help users assess the quality of the current version of the in-context function. For example, if all LLM outputs are correct for a few consecutive batches, it is likely that the existing few-shot examples cover the patterns in the unlabeled data, and thus labeling can stop.
The interface is task-agnostic and can be used whenever users want to learn a one-on-one mapping between text inputs and outputs. This format is flexible, encompassing both classification tasks (where the output is just the class name) and generation tasks like summarization, though the color encoding is most effective for text transformation tasks where the edits from inputs to outputs are worth highlighting. For example, Figure 5 shows how the same interface is used for another question-answer pair rewriting task. ScatterShot can be easily invoked in a Jupyter Notebook, and therefore can support users' natural workflows.

Figure 3: An overview of ScatterShot's slice-based sampling. We use the data status from the 1st to the (i-1)-th iteration to perform sampling for the i-th iteration. As shown in (A), we use the already sampled input-output pairs to extract templates for task-specific key phrases. We use these templates to extract key phrases for each unlabeled input, which are the blue highlights in (B). For example, PROPN helps extract "Christmas" from "@virreedom Merry Christmas!". We run Agglomerative Clustering on the sentence embeddings of these key phrases to find task-specific data slices, which contain both not-yet-labeled examples (marked with "?") as well as those that have been sampled ("✓" for correctly predicted in previous iterations and "✗" for incorrect predictions). We rank these slices by a reward function based on slice size, estimated model performance, and sample frequency, and draw samples from the top clusters.
Slice-based Sampling
2.2.1 Identifying patterns with key phrase clustering.
To help users explore both common and less common patterns, we need a way to partition the unlabeled input examples. While there are a variety of task-agnostic distance metrics that could be used for clustering (e.g., cosine similarity of sentence embeddings [43]), our preliminary exploration indicated that these are typically too coarse when applied to specific tasks. For example, using the embeddings from Reimers and Gurevych [43], "Took a photo today. " is closer to "Saw a photo on Flickr. " (similarity = 0.56) than to "Are you going to yoga class today?" (similarity = 0.30). While this may make sense in the abstract, it does not correspond to how we would want to slice examples for the temporal extraction task in Figure 1, where date references "today" are more important than subject matter ("photos" vs "yoga class"). Thus, we propose a task-specific clustering method based on key phrases as explained below.
Detecting key phrases in demonstrations. While key phrase extraction in general may require domain knowledge [8,42,65], for text transformation we can leverage the signal present in the relationships between input and output, i.e., in which parts of the input are perturbed or retained. For example, "today" is retained in the output of both "Took a photo today." and "Are you going to yoga class today?" (among many other samples), and thus it is probably a key phrase. Formally, given a labeled, positive example, i.e., a pair of original and perturbed sentences x => y, we extract as key phrases either the unmodified parts of x when most of x is changed (normalized Levenshtein edit distance d(x, y) ≥ 0.5, as is the case with the "today" examples above), or the modified parts when most of x remains unchanged.
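A minimal sketch of this rule, assuming whitespace tokenization and using `difflib`'s similarity ratio as a cheap stand-in for a normalized Levenshtein distance (the 0.5 threshold is the one stated above; everything else is a simplifying assumption):

# Sketch: extract key phrases from a demonstration (x => y) by diffing the pair.
import difflib

def norm_edit_distance(x: str, y: str) -> float:
    """Normalized distance proxy: 1 - similarity ratio from difflib."""
    return 1.0 - difflib.SequenceMatcher(None, x, y).ratio()

def key_phrases(x: str, y: str, threshold: float = 0.5) -> list[str]:
    xs, ys = x.split(), y.split()
    matcher = difflib.SequenceMatcher(None, xs, ys)
    # Spans of x that survive unchanged in y.
    unmodified = [" ".join(xs[a:a + n])
                  for a, _, n in matcher.get_matching_blocks() if n]
    # Spans of x that were edited away.
    modified = [" ".join(xs[i1:i2])
                for tag, i1, i2, _, _ in matcher.get_opcodes()
                if tag != "equal" and i2 > i1]
    # If most of x changed, the stable spans are the signal; otherwise the edits are.
    return unmodified if norm_edit_distance(x, y) >= threshold else modified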
Applying key phrases to unlabeled inputs. Applying key phrases naively with an exact match would yield low coverage in the unlabeled data (especially for larger phrases). To get more coverage, at each iteration, we generalize key phrases extracted from labeled demonstrations into templates with combinations of tokens, lemmas, and part-of-speech tags [66,69], e.g., "today" is expanded into today, NOUN, and DATE. We then select representative templates with a greedy weighted set coverage algorithm based on their specificity and the number of inputs they cover [59]. Example templates at various abstraction levels are shown in Figure 3A.
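The selection step can be sketched as a greedy weighted set cover; in this minimal sketch the `specificity` weights are placeholders for the weighting scheme of [59], and template matching is abstracted into precomputed coverage sets:

# Sketch: greedily pick templates that cover many inputs, preferring specific ones.
def greedy_template_cover(templates: dict[str, set[int]],
                          specificity: dict[str, float],
                          n_inputs: int) -> list[str]:
    """templates: template -> ids of unlabeled inputs it matches."""
    if not templates:
        return []
    uncovered, chosen = set(range(n_inputs)), []
    while uncovered:
        # Weighted greedy choice: marginal coverage scaled by specificity.
        best = max(templates,
                   key=lambda t: len(templates[t] & uncovered) * specificity[t])
        gained = templates[best] & uncovered
        if not gained:  # remaining inputs match no template; they fall back
            break       # to using the full sentence as the key phrase
        chosen.append(best)
        uncovered -= gained
    return chosen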
Key phrase clustering. We define the distance between two inputs as the minimum cosine distance between the sentence embeddings [43] of their key phrases, and use agglomerative clustering [33] to recursively merge pairs of clusters in the unlabeled data. We set the number of clusters to 20 (chosen empirically in Section 3), and aggregate all clusters with < 10 examples into a single "outlier" cluster ( Figure 3B 1 ). Note that we recompute clusters in every iteration, and thus the outlier cluster tends to shrink as the user interacts with the system. Figure 3B contains various examples of discovered clusters.
Note that as a result of the weighted coverage selection, the templates -and thereby the extracted key phrases -are dynamically changing, and will eventually become more dominant in the sampling procedure: when the few-shot set contains only a few (e.g., 3) seeding examples, the templates might be biased or even non-existent, most examples will just use the full sentences as key phrases, making it similar to vanilla clustering on full examples. Figure 4: An illustration of ScatterShot's two-step correctness estimation. When the in-context function demonstrates reasonable quality in the last two iterations, we first employ unanimity voting, i.e., we use three different orderings of in-context examples (noted with the three dots with different grey shades) to get three outputs, and say the function is correct if all the outputs are the same, without showing the input to the human (A). When the outputs are different, we return the one with the highest output probability for user inspection (underlined), in which manual editing would imply that the function is wrong (B).
However, as we add more examples, the templates will be more balanced and eventually stabilize, at which point the clustering can rely more on the extracted key phrases.
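Putting the distance definition and the clustering step together, here is one way this could look in Python; the embedding model name ("all-MiniLM-L6-v2") and the average linkage are our assumptions, and we assume every input has at least one key phrase (with the full sentence as the fallback, as described above):

# Sketch: cluster inputs by the minimum cosine distance between their key phrases.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_distances

def cluster_inputs(phrases_per_input: list[list[str]], n_clusters: int = 20):
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
    embs = [model.encode(phrases) for phrases in phrases_per_input]
    n = len(embs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            # Input-to-input distance: min pairwise cosine distance of phrases.
            dist[i, j] = dist[j, i] = cosine_distances(embs[i], embs[j]).min()
    # Note: older scikit-learn versions use affinity="precomputed" instead.
    labels = AgglomerativeClustering(
        n_clusters=n_clusters, metric="precomputed", linkage="average"
    ).fit_predict(dist)
    return labels

Clusters with fewer than 10 members would then be merged into the single "outlier" bucket, as described above.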
2.2.2 Selecting slices for exploration.
We want to explore the identified slices in an efficient way, avoiding slices that are already "solved," and making sure the user discovers any unexplored patterns. We take inspiration from the UCB algorithm [4], and use an upper-bound estimate of the error of our function in each slice as part of the "reward" for sampling from that slice. Formally, suppose slice i has n_i examples, of which s_i are labeled in previous iterations (see the next section for "labeling" details). Further, suppose that out of the s_i previously labeled examples, the current function is correct on c_i. The reward of drawing from slice i at iteration t is then given by:
r_{i,t} \;=\; \underbrace{\Big(1 - \frac{c_i}{s_i}\Big)}_{\text{error rate}} \cdot \underbrace{\ln n_i}_{\text{size}} \;+\; \underbrace{\sqrt{\frac{\ln t}{s_i}}}_{\text{sample rarity}}
In other words, we prioritize large slices (ln n_i), low performance (1 - c_i/s_i), and slices that have not been sampled many times (sqrt(ln t / s_i), which gives higher weights to clusters with smaller s_i as the iteration t progresses). Thus, we avoid wasting annotation effort on slices that are already "solved", but keep drawing from slices we can't yet deal with and slices we have not yet explored. Figure 3B shows four data slices in temporal extraction ranked by reward r. Slice 1 is the "outlier" cluster, where patterns are not yet apparent. This slice still gets prioritized due to its large size (n = 449), even though it has been sampled s = 10 times, which encourages either higher accuracy or further slicing in follow-up iterations. Slice 2 is a slice with holiday-based key phrases. Though the slice is small (n = 19), the LLM failed whenever it was previously sampled (c/s = 0), and thus it currently represents a hard pattern. Slice 3 is a slice with past date references, while Slice 4 is a slice with the common temporal pattern represented by the words "today", "yesterday", and "tomorrow". This last slice has low priority despite being common, since the LLM had perfect accuracy whenever a sample from it was drawn. To maximize diversity (similar to batched active learning [12,17,48]), we rank the slices by reward and select one example from each until the batch is filled (in our case, batch size = 5).
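The reward and the batch assembly translate almost directly into code. A sketch (the infinite reward for never-sampled slices and the uniform within-slice choice are our assumptions):

# Sketch: UCB-style slice scoring and one-example-per-slice batch selection.
import math
import random

def slice_reward(n_i: int, s_i: int, c_i: int, t: int) -> float:
    if s_i == 0:
        return math.inf  # unexplored slice: sample it before anything else
    error_rate = 1.0 - c_i / s_i
    return error_rate * math.log(max(n_i, 1)) + math.sqrt(math.log(t) / s_i)

def select_batch(slices: dict[int, list[str]],
                 stats: dict[int, tuple[int, int]],
                 t: int, batch_size: int = 5) -> list[str]:
    """slices[i]: unlabeled members of slice i; stats[i] = (s_i, c_i)."""
    ranked = sorted(slices,
                    key=lambda i: slice_reward(len(slices[i]), *stats[i], t),
                    reverse=True)
    batch = []
    for i in ranked:  # take one example per top-ranked slice until full
        if slices[i]:
            batch.append(random.choice(slices[i]))
        if len(batch) == batch_size:
            break
    return batch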
2.2.3 Saving user effort with implicit labels and pseudo-labeling. As mentioned above, our per-slice performance estimation requires labeled examples. Unfortunately, we only have firm labels on user-added in-context examples, which may be quite small in number, especially if users only add a portion of the sampled data. As a result, in-context examples offer limited power for estimation. Although we could modify the interface to collect additional user labels on output correctness, this requires additional interaction that can be cumbersome. To save user effort, we use implicit labeling, i.e., we label the LLM output of an example in a batch as correct if the user does not make any changes to the output, even if they do not add it to the in-context demonstration set. Of course, users might ignore model errors if they are frustrated or distracted, but we verified in pilot experiments that users almost always make corrections in the presence of model mistakes (~87% of the time, and the selection method is robust to this small amount of noise). In comparison to explicit labeling, this method requires the bare minimum of user interaction, and is easier to integrate into iteration workflows.
Still, implicit labeling requires users to actually see and interact with a sample. However, after a certain point in the process, the LLM is correct often enough that many interactions would simply be "accepted" (no changes) by the user. While important for estimating slice accuracy, too much of such interaction might also lead users to overestimate the in-context function quality, and stop the process before they explore the remaining slices. Thus, after we reach a threshold of quality (the LLM is correct on 70% of examples in two consecutive rounds), we start leveraging pseudo-labeling with unanimity voting, a method inspired by the unanimity principle [23] and Query-by-Committee [34]. Following Lu et al. [32]'s observation that the order of in-context demonstrations can drastically change LLM performance, we form three different prompts by randomly reordering the examples. When the outputs of the prompts agree (i.e., are unanimous), we use that output as a pseudo-label, used both for estimating slice accuracy and as a filtering method (i.e., these examples are not shown to the user). Figure 4 illustrates this process, where "@viereedom Merry Christmas" (A) is pseudo-labeled due to unanimity, and "Atlanta nineteen ninety-six" (B) yields different predictions, and thus is shown to the user for manual inspection.
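A sketch of the unanimity vote, reusing the `in_context_function` helper from the earlier prompt sketch; the three-permutation setup mirrors the description above, and on disagreement ScatterShot surfaces the highest output-probability candidate, which we simplify here to returning the first output:

# Sketch: pseudo-label an input only when several prompt orderings agree.
import random

def unanimity_vote(complete, task_description, demonstrations, query, votes=3):
    rng = random.Random(0)  # fixed seed for reproducible orderings
    outputs = []
    for _ in range(votes):
        order = demonstrations[:]
        rng.shuffle(order)
        outputs.append(in_context_function(complete, task_description, order, query))
    unanimous = len(set(outputs)) == 1
    # (The system picks the highest-probability output on disagreement;
    # returning outputs[0] is a simplification for this sketch.)
    return outputs[0], unanimous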
Table 1: Quantitative results comparing ScatterShot with the random baseline on Temporal and QA-pair, averaged over 10 random seeds. ScatterShot outperformed the baseline on all metrics. Significant improvements, measured by Student's t-test, are marked with *: p < 0.05, and **: p < 0.01. (Table columns: Conditions; Extraction; Normalization. Table body not recovered.)

SIMULATION EXPERIMENTS

In this section, we measure the effectiveness of slice-based sampling, when compared to random sampling, on two text transformation tasks. We use datasets for which we have labels on both tasks, so that we can simulate the labeling process with an oracle at scale, and evaluate the learned function on a held-out portion of each dataset.
Tasks and Datasets
Temporal expression extraction and normalization. The Temporal task involves data wrangling [60], where the goal is extracting phrases with temporal expressions from sentences or documents, and normalizing them into a standard format [9]. As shown in Figure 1, these can include absolute or relative dates, and can have different granularity (e.g., exact date vs. year only).
Data. We take the data from [2], containing temporal expression datasets, including TimeBank [41] (news articles) and TweeTime [55] (tweets). We process each dataset into sentences, discarding any date annotations that could not be normalized to the format YYYY-MM-DD (for consistency), and keeping sentences involving absolute dates and dates relative to the document publication date.

Evaluation. Following Chang and Manning [9], we report F1, recall, and accuracy, separately for the temporal expression extraction and for the normalization.
Question-Answer Pair Implication. For the QA-pair task, we use ScatterShot to replicate transformation functions from prior work. Given a question-answer (QA) pair, Ribeiro et al. [44] wrote a rule-based system (over 1,000 lines of code 3) to generate a new QA pair that is implied by the original pair, to check whether question answering systems are consistent in their reasoning. We replicate their logical equivalence transformation, where the original QA is rewritten to a logically equivalent form, e.g. "Q: What room is this? A: bathroom" is transformed to "Q: Is this a bathroom? A: yes". Despite the heavy engineering, the rule-based system is not able to cover many inputs, and often produces text that does not look fluent or natural. We thus apply in-context learning to this task, and use ScatterShot to select the examples.

Data. We download the input sentences and rule-based implications from Ribeiro et al. [44], and manually inspect and label 1,000 randomly sampled QA pairs (351 rule-based implications were noisy and had to be relabeled). We randomly sample 100 pairs as a test set, and use the remaining pairs as our unlabeled pool in the experiment.

Evaluation. We follow the standard in text generation and report Rouge-L F scores [28], as well as BLEU-4 [28].

3 https://github.com/marcotcr/qa_consistency/
Procedure and Baseline
We compare ScatterShot's slice-based sampling with a Random sampling baseline, which is the most common sampling method, used especially in complex tasks, e.g., in text translation [1]. We use GPT-3 as our underlying LLM, with greedy decoding (temperature=0) in both conditions. In each simulation run, we start the process with three random input-output samples (the same for both conditions). At every iteration, we compare the ground truth label with the candidate label proposed by the current in-context function. When the labels differ, we add the pair (input, oracle output) to the in-context example set, simulating the case where the user corrects a transformation and adds it to the set; otherwise, the oracle user does not perform any action, simulating cases where the user ignores examples where the current in-context function is correct.
The process is repeated until one of the following stopping conditions is met: (1) the in-context example set contains more than 40 data points (exceeding the LLM maximum context size), (2) the oracle user has been presented with 100 examples (i.e. the annotation budget is met), (3) the in-context function provided the correct outputs in five consecutive iterations, or (4) the in-context function's estimated accuracy for all slices of data is ≥ 80%.
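For concreteness, this oracle loop can be sketched as below; `oracle` supplies gold labels, `in_context_function` is the helper from the earlier sketch, the pop-based input selection is a stand-in for `select_batch`, and stopping condition (4) (per-slice accuracy ≥ 80%) is omitted for brevity:

# Sketch: simulated annotation loop with stopping conditions (1)-(3).
def simulate(unlabeled, oracle, complete, task_description, seed_demos,
             max_demos=40, budget=100, patience=5):
    demos, shown, correct_streak = list(seed_demos), 0, 0
    while unlabeled and len(demos) <= max_demos and shown < budget:
        x = unlabeled.pop()  # stand-in for slice-based select_batch(...)
        pred = in_context_function(complete, task_description, demos, x)
        shown += 1
        gold = oracle(x)
        if pred == gold:
            correct_streak += 1
            if correct_streak >= patience:  # condition (3)
                break
        else:
            correct_streak = 0
            demos.append((x, gold))  # simulated user corrects and adds the pair
    return demos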
We run ten simulation rounds with different random seeds, and report the (averaged) final function performance. We further track the function improvement trajectory over iterations on three randomly selected simulation rounds, by evaluating the intermediate in-context functions after every five examples are added.
Results
As Table 1 shows, ScatterShot's slice-based sampling outperforms the baseline on both tasks. In Temporal, ScatterShot improves the F1 for date span extraction by around 2 points, and the normalization by 4 points. In QA-pair, ScatterShot outperforms Random by 6 points on Rouge-L, and even outperforms the heavily engineered rule-based system used to label most of the evaluation data, despite needing 40 or fewer in-context examples. Table 2 shows qualitative examples, where ScatterShot outperforms both baselines in terms of coverage, fluency, and correctness. These results point to ScatterShot's potential for saving human effort in creating fine-grained functions, alleviating the need for handcrafting templates. While ScatterShot helps users explore most patterns in the unlabeled data as the number of demonstrations k grows, early gains are especially useful in practice when users have low annotation budgets, e.g., prior work notes users selecting as few as five or ten examples [32,38].
Finally, we observe that ScatterShot is less prone to variance in quality as more examples are added (e.g., in QA-pair-2, baseline performance degrades by almost 15 points between k = 20 and k = 30). These results suggest that besides its interface and interactivity benefits, ScatterShot can improve in-context learning just by virtue of its sample selection function. In order to evaluate the benefits to actual humans, we now turn to a user study.
USER STUDY
ScatterShot sampling is effective in simulation, but does it actually help humans articulate their desired functions? We conducted a within-subject user study to evaluate whether human users can sense ScatterShot's support in exploring the data space.
Study Design
Task & Participants. We ran a user study on the QA-pair task using the same dataset as Section 3.1, with a split of 900 unlabeled inputs for participants to access, and 100 test examples for evaluating the in-context functions they built. We recruited ten CS graduate student participants (4 females, 6 males) on our CSE department mailing list. Eight of them had previously used GPT-3 and two had heard about it, but none were familiar with the task or Scatter-Shot. Each participant spent around 60 minutes in the study.
Settings & Conditions.
In order to isolate the effect of the different components in ScatterShot, we have two ablation settings in addition to our method: (1) Manual, where participants manually craft prompts without any help from ScatterShot, which is the de-facto status quo of practitioners creating their own in-context examples; and (2) Random, where participants use the ScatterShot interface, but with candidate inputs drawn by random sampling rather than slice-based sampling. Every participant interacted with every setting in sequence and in a cumulative manner, i.e., the in-context demonstrations gathered in one setting carry over to the next, and we measured the additional benefit of moving to the next setting. We divided the participants into two groups, such that in one group the sequence is Manual → Random → ScatterShot (M-R-S), while in the other it is Manual → ScatterShot → Random (M-S-R). M-R-S represents a condition where participants are gradually exposed to more features, such that the step-wise gain maps directly to the benefit of the new feature, while M-S-R serves as the counterbalanced condition that combats the learning effect and the natural impact of accumulating examples on function quality.

Study Procedure. We designed our hour-long study to be self-contained in a Jupyter Notebook, and one of the authors was present in all studies to ensure that participants understood the task and to answer any questions.
Participants were first introduced to the basic concepts of LLMs (GPT-3), in-context example construction, and the study task. Then, we randomly assigned the participants to one of the two conditions (M-R-S or M-S-R), and they completed the task by going through the three conditions in the assigned order. Participants were not instructed on the difference between ScatterShot and Random, and were instead told that "these two selection methods are randomly ordered, and one is not necessarily better than another."
In each step (setting), participants were told to inspect the inputs and current function outputs (available in ScatterShot and Random), fix the erroneous outputs, and add demonstrations (input-output pairs) to the in-context example bucket if they believed the data would add additional value, e.g., instances where the current in-context function fails, as well as diverse input or output patterns.

Figure 7: Participants' subjective ratings on their perceived differences between settings as they switch between them (panels: M-R-S, M-S-R). We use rectangles to represent when participants first move from Manual (Step 1) to the ScatterShot interface (either ScatterShot or Random, Step 2), and circles to represent switches between ScatterShot interfaces, from one sampling method to the next (Step 2 to 3). Participants strongly preferred the ScatterShot interaction to manual example annotation, and felt they found more diverse patterns and difficult cases in ScatterShot than Random (Random→ScatterShot, blue). In contrast, people in the reversed condition did not find Random more useful than ScatterShot (orange).
They were asked to iterate within the step until they were satisfied with the in-context function at hand, or accumulated 40 examples. To prevent them from stopping too early, we also asked them to run at least three batches (i.e., see 10-15 examples). Afterwards, participants completed an exit survey and a semi-structured interview, where they rated their perceived experience in each of the two consecutive steps. These questions concerned their perceived input/output pattern diversity, the example difficulty, and their confidence in estimating in-context function quality.
Collected Data. We observed and analyzed three sets of data. First, to quantify the change in function quality, we saved participants' in-context examples per step, and applied them to the held-out test set. Here, besides the absolute numbers as in Section 3, we calculated the difference in performance between two consecutive steps to see if adding (or, in the case of M-S-R, removing) ScatterShot features impacted the quality of examples participants submitted. Second, to assess participants' self-perceived experience, we used a standard five-point Likert scale [27] to collect their perceived step-wise differences. Third, to track participants' annotation trajectories, we logged their clickstreams in all the steps. This included the number of examples they examined per step, the edits they made, and the number of examples they added.
Results
The ScatterShot interface made it easier to iterate on in-context examples. As shown in Figure 7, participants found moving from Manual (Step 1) to a ScatterShot interface (Step 2) beneficial, regardless of the sampling setting. In particular, they found that the interface made it easier and more intuitive to construct the few-shot examples ("Easier to use" in Figure 7; 4.7 ± 0.7 for Manual→Random and 4.2 ± 0.4 for Manual→ScatterShot). Users liked the fact that ScatterShot offers sample inputs (rather than having to go through the dataset on their own), and that the interface provides easy access to all the existing in-context examples, allowing for fast back-and-forth iteration. For example, one participant (P7) kept revisiting their examples, and removed some earlier examples that they thought were less useful as they became more familiar with the unlabeled input space.
As part of the interface, LLM-generated outputs helped participants craft examples more efficiently, e.g., P6 comments that "it is less work to make edits than starting from scratch." Somewhat surprisingly, LLM-generated outputs also improved output diversity, i.e., users considered more diverse output patterns. For example, P10 commented that they were "pleasantly surprised by the LLM's clever output in several cases," and that they would not have thought about transformations such as "Q: Is there more than 1 boy? A: no" → "Q: Is there no more than 1 boy? A: yes", which they added to their set of in-context examples. The observation is consistent with prior work showing AI-induced creativity gains [62]. We note that actual user behavior here differs from our simulation setup, where we assumed human users would only add new examples when the LLM output was wrong.
Participants' perceptions matched ScatterShot's slice-based sampling design goals: more diverse and more challenging patterns. As shown in Figure 7, participants in M-R-S clearly noticed the improvement moving from Random→ScatterShot (4.2 ± 1.2 for more diverse patterns and 4.8 ± 0.4 for more difficult cases), whereas most users in M-S-R did not report improvements from ScatterShot→Random. Qualitative results confirm this, e.g., P7 in M-R-S commented: "Step 2 (Random) provided me with some worthy examples, but much less than Step 3 (ScatterShot). I went through several rounds of pretty similar examples, thinking the function is behaving quite decently, and didn't realize the function needed more diverse and edge cases until I reached Step 3." P9 in M-R-S was also happy that ScatterShot helped them explore beyond typical patterns. In contrast, P10 in M-S-R reflected that their exploration seemed to have "quickly saturated in Step 3" (Random).
Despite not being given details, seven participants discerned the goals behind ScatterShot's sampling method by interacting with it. For example, P2 described it as "sample for additional variation based on the patterns in existing examples, and also sample for examples similar to previous error mistakes to track whether the function has been corrected." Two participants in M-S-R noticed that Random presented fewer mistakes, but attributed it to the increasing number of in-context examples (P5: "It's getting more correct, but ...").
Condition
Step 1 Step 2
Step We report the quality of the resulting in-context function on the held-out set in Table 3, and note that Random→ScatterShot consistently increases performance, while ScatterShot→Random consistently decrease performance despite adding more in-context examples, which is in line with our simulation results.
ScatterShot helped participants estimate function quality and "debug" their example set. As expected, participants estimated their in-context function quality based on the candidate examples they reviewed. For example, P5 (M-S-R) tracked the function convergence: "I made mental notes on the LLM errors and hypothesized what types of examples were missing. For example, I noticed the model was wrong on N/A questions at first, but later got it right." Participants in M-R-S seemed slightly more satisfied with their estimation, with 4.2 ± 0.9 in Manual→Random and a further 4.3 ± 0.7 in Random→ScatterShot. P7 commented that "Step 2 showed me the function is quite smart on patterns it has already seen and has high precision, and Step 3 showed me there are more patterns and it has low recall". P2 further reflected that Random's sampling "created a false impression of convergence, when the function still had various blind spots." The interactive process also helped participants debug their example sets, e.g., P4 saw big performance drops (4/5 to 1/5 accuracy) on two consecutive batches, which led them to remove in-context examples that were hurting performance.
Participants in M-S-R gave slightly lower ratings on their estimates. Qualitatively, the fact that ScatterShot prioritized potential mistakes seemed to discourage users, e.g., P3 noted they were driven into "an endless blackhole of errors," after which a round of repetitive patterns in Random was hard to make sense of. Once again, this could have been mitigated by explaining the sampling strategy to the users, and explicitly displaying the slice accuracy estimates ScatterShot keeps track of.
DISCUSSION
In this work, we design a human-LLM collaboration mechanism in ScatterShot to help humans craft diverse and informative in-context learning examples. By iteratively identifying data slices, sampling from low-performance or unseen slices, and providing best-guess outputs for the sampled examples, ScatterShot not only helps the collection of informative in-context examples, but also supports users in exploring the input space and assessing the function quality. At its core, ScatterShot is built on three concepts: data slicing and sampling, iterative human-model interaction, and collaborative human-model labeling. We now discuss challenges and potential future work for each of these.
Slice-based sampling can increase data space coverage. Our experiments showed that sampling from diverse and difficult data slices improves in-context function performance. Importantly, these slices cannot be surfaced via clustering on task-agnostic embeddings; rather, task-specific features should be considered to group examples, while task-irrelevant noise should be minimized. However, identifying these task-specific features remains a challenge. While effective for our function examples (and many others), keyphrase and template extraction would not generalize to tasks where input and output have little syntactic overlap, e.g., English-French translation, summarization, etc. Future work should look into incorporating more general slicing methods, e.g., asking practitioners for slicing functions [11,42,65], automatically detecting blind spots [16,47], etc.
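To ground this discussion, here is a minimal sketch of the kind of keyphrase-based slicing described above, assuming the sentence-transformers and scikit-learn packages; the embedding model choice and the extract_keyphrases helper are illustrative assumptions, not ScatterShot's exact template-extraction pipeline.

```python
# Sketch of slice discovery: cluster inputs by embedding only their
# task-relevant key phrases, so task-irrelevant noise is minimized.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import AgglomerativeClustering

def discover_slices(inputs, extract_keyphrases, distance_threshold=0.4):
    model = SentenceTransformer("all-MiniLM-L6-v2")
    # Embed the concatenated key phrases, not the full (noisy) sentence.
    phrases = [" ".join(extract_keyphrases(x)) for x in inputs]
    embeddings = model.encode(phrases, normalize_embeddings=True)
    clustering = AgglomerativeClustering(
        n_clusters=None,                       # let the threshold decide
        distance_threshold=distance_threshold,
        metric="cosine",                       # `affinity` in older sklearn
        linkage="average",
    ).fit(embeddings)
    slices = {}
    for x, label in zip(inputs, clustering.labels_):
        slices.setdefault(label, []).append(x)
    return slices
```

Embedding only the key phrases is the crucial design choice here: clustering full-sentence embeddings would group examples by surface topic rather than by the transformation pattern the task cares about.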
In addition to data slicing, the sampling algorithm also plays a crucial role in narrowing down the actual slices to sample from. We adapt the UCB algorithm to prioritize slice size, performance, and sample rarity, but there are other interesting dimensions that could be explored. For example, if there are slices that cannot be learned after several rounds of sampling, UCB may be counterproductive and create a biased in-context example set that performs worse on other slices, whereas a strategy that penalized or just "gave up" on those slices might produce a better overall function. Moreover, we might want to explore better methods for example ranking within a slice.
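As a concrete illustration, the snippet below sketches one way to adapt UCB [4] to slice selection, scoring each slice by its observed error rate plus an exploration bonus for rarely sampled slices, weighted by slice size; the exact weighting and constants are assumptions for illustration, not ScatterShot's published formula.

```python
import math

def ucb_pick_slice(slice_stats, total_draws, c=1.0):
    """slice_stats maps a slice name to
    {"size": int, "n_sampled": int, "n_correct": int}."""
    best_name, best_score = None, -math.inf
    for name, s in slice_stats.items():
        # Low observed accuracy -> higher priority (exploitation).
        error_rate = 1.0 - s["n_correct"] / max(s["n_sampled"], 1)
        # Rarely sampled slices get a larger bonus (exploration).
        bonus = c * math.sqrt(math.log(total_draws + 1) / (s["n_sampled"] + 1))
        score = s["size"] * (error_rate + bonus)  # size-weighted (assumption)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

Under this scoring, a slice that keeps failing keeps getting sampled; the "give up" strategy discussed above would correspond to capping or decaying the error term for slices that never improve.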
Interacting with the latest function is essential for in-context learning. In-context learning enables rapid function updates, which are not possible in other current interactions with models (e.g., finetuning often takes hours, and is often not suitable for interactivity). Allowing users to interact with the most current version of what is being learned helps them track progress, and backtrack when they introduce cascading errors [22]. The setup in ScatterShot is a step in this direction, since users always interact with the latest version of the in-context functions.
While participants were making progress with ScatterShot (more than with baselines), some participants felt frustrated by inspecting mistake after mistake, fearing that they would never be able to produce a good enough function. While this is by design (ScatterShot prioritizes potential errors), it might compromise annotators' estimates of the quality of their function, and their motivation for labeling more examples. Thus, we notice the importance of presenting quality metrics to the user and clearly explaining the sampling function so that the right expectations are set. For example, users may perform better mental calibration if they have access to hints like the number of slices that are considered "solved" (e.g., as a progress bar that allows people to zoom into concrete examples grouped by the slice), cross-validation accuracy on in-context examples, etc. Another alternative would be to let users exercise more control over which slices are explored, e.g., allowing them to "drill down" or "give up" on specific slices.
Human-AI collaborative labeling builds better functions, with respect to both quality and task definition. Essentially, ScatterShot enables human-LLM collaboration on data annotation. In our work, we mostly focused on evaluating the quality benefit of such annotation, but we observed additional interesting gains in bringing people inspiration. In Section 4, we noticed that participants can take inspiration from the LLM not only on input patterns, but also on potential output patterns, even though our QA-pair task is relatively deterministic in its transformations. Thus, we hypothesize that similar systems supporting human-LLM collaborative labeling could play an important role in helping users iteratively refine their task definition and function behavior during data collection. Prior work has shown that annotation requesters refine their labeling instructions when they see noisy (and therefore unusable) crowdsourced labels on ambiguous examples. However, we have yet to examine how LLMs' suggestions (good or bad) might help users better specify their functions. It would be interesting to systematically analyze and measure users' own distribution shift as the example set expands. Recently, Lee et al. [25] proposed the "retaining rate" of LLM suggestions (in their case, suggested character names subsequently used in novels) as a metric of the usefulness of LLMs for ideation. An analogue in our case would be measuring the appearance of new patterns (data slices) when users use ScatterShot, compared to when they come up with their own patterns.
RELATED WORK
LLMs and In-context Learning
Transformer-based large language models (LLMs) [58] have recently led to large improvements in NLP. Pre-trained on a large amount of unlabeled text data, these models encapsulate rich, general-purpose features of language, both syntactically and semantically.
These features can help facilitate various downstream applications much more dynamically [31]: rather than having to train a new model for every custom task, users can customize the model by feeding it natural language prompts at run time, like the holiday example in Section 1. Such ability to recognize the desired task on-the-fly is called in-context learning [7].
The flexibility of in-context learning has inspired various work exploring how to design prompts that effectively invoke the user-desired functionalities [35,37,46,70]. To date, the most common patterns for prompting are either zero-shot or few-shot prompts. Zero-shot prompts directly describe what ought to happen in a task. For example, we can enact the holiday date translator in Section 1 with a task description prompt: "Identify the date for a national holiday in the month/date format." Studies on improving zero-shot prompts typically examine the effect of task instructions [15], induce LLM reasoning through task decomposition [63,67], etc. Zero-shot prompts do not use demonstrative examples and therefore tend to be less performant [7], but writing just the natural language descriptions is lightweight enough that it creates an intuitive natural language interface between humans and the model [64].
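For concreteness, a zero-shot version of the holiday function could be as small as the sketch below; llm stands in for any completion-style model call and is a hypothetical wrapper, not a specific library API.

```python
def holiday_date_zero_shot(holiday, llm):
    # The entire "function" is a natural language task description
    # plus the new input; no demonstrative examples are given.
    prompt = (
        "Identify the date for a national holiday in the month/date format.\n"
        f"Holiday: {holiday} => Date:"
    )
    return llm(prompt).strip()

# holiday_date_zero_shot("Christmas", llm) would ideally return "12/25".
```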
In contrast, few-shot prompts show the LLM what pattern to follow by feeding it examples of the desired input and output data. As can be seen in Section 1, given examples on "Christmas" and "Halloween", the LLM would produce a reasonable date for "Independence Day". These examples usually follow consistent structures with meaningful prefixes ("Holiday: [name] => Date: [date]"), which helps re-emphasize the desired intent [58]. The quality of few-shot prompts heavily relies on the five to thirty in-context examples that demonstrate the intended behavior [32,46], and LLMs can only perform in-context learning if they have seen the corresponding distribution or concept [35,46,70]. If developers omit corner cases in the few examples they create, the task quality can easily be affected [29]. For example, without a negative example where we denote ineligible inputs with a placeholder output "N/A" ("Holiday: yesterday => Date: N/A"), the LLM would attempt to produce the most plausible "label" even for negative examples; it may try to normalize "yesterday" to a plausible date even though there is no holiday. Our work here tries to help users interactively identify high-quality in-context examples for text transformation. We review the literature on in-context example selection next.
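A few-shot variant of the same function simply serializes the in-context examples with the consistent "Holiday: ... => Date: ..." prefix before appending the new input, as in this sketch (the concrete example set, including the N/A negative example, is illustrative):

```python
def build_few_shot_prompt(examples, query):
    # Each in-context example follows the same prefixed structure,
    # which re-emphasizes the desired intent to the LLM.
    lines = [f"Holiday: {name} => Date: {date}" for name, date in examples]
    lines.append(f"Holiday: {query} => Date:")  # the LLM completes the output
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    [("Christmas", "12/25"),
     ("Halloween", "10/31"),
     ("yesterday", "N/A")],  # negative example marking ineligible inputs
    "Independence Day",
)
```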
Effective Example Selection
Prior work has explored selecting effective demonstrations, and has shown that because pre-trained models possess high-level semantic features, sampling or active learning tends to help identify informative examples [56]. In particular, dynamically selecting (retrieving) the most similar demonstrative examples for each given input significantly improves in-context learning performance [10,46]. However, such retrieval methods require fully labeled datasets as the search space. In contrast, our work studies the scenario where humans craft their personalized in-context functions, and therefore focuses on an unlabeled space.
In the unlabeled search space, prior work has explored effective dataset annotation that can support better in-context learning or few-shot finetuning. These studies strive to allocate annotation budgets to diverse and representative examples through clustering [10] or graph-based search [53]. For example, Su et al. [53] built a similarity graph by computing pairwise distances between input sentences and then iteratively selected and annotated examples based on graph density. They show such selection substantially reduces the annotation cost while achieving high and stable in-context learning performance. Despite being effective, these methods sample examples purely for input diversity. Because our work focuses more on supporting users' interactive function construction, we additionally emphasize current function quality in sampling, which helps users track their progress and prioritize improving the current in-context function. Moreover, these prior studies measure diversity with cosine similarity on input sentence embeddings [43], which, as we argue in Section 2.2, is not reflective of various tasks [46]. As a workaround, our work focuses on measuring similarities only on the key phrase embeddings, which leads to more intuitive clusters.
On the interactive example selection side, our work is perhaps most similar to some literature in programming-by-demonstration (PBD). For example, Zhang et al. [72] explored effectively selecting examples that can help disambiguate and validate synthesized regular expressions. We share the similar motivation that interactively and iteratively suggesting corner cases helps synthesize the right function, but unlike PBD, where new examples always prune the function search space, ScatterShot focuses on expanding the function coverage. Therefore, it is essential to select examples that incentivize people to provide feedback.
Active Learning. Our work is also similar to the aforementioned, effective annotation work [10,53] in the sense that its selection method is akin to sampling approaches in active learning [49,57]. The key idea behind active learning is that machine learning models can achieve higher performance with fewer training examples, if they are allowed to choose their own, most informative training examples. Given a budget, an active learner iteratively selects examples-to-annotate from an unlabeled pool according to some ranking mechanism. While the previous work is more similar to diversity sampling [48], ours is closer to uncertainty sampling [26], where an active learner queries the instances about which it is least certain how to label. Because LLMs are generative in nature and do not have clear probabilistic distributions across all "labels" as in, e.g., classification tasks, we estimate uncertainty using the LLM output stability (unanimity voting), which also conveniently serves as a correctness estimation. This voting strategy is also quite relevant to Query-By-Committee [50], where a list of "committee" models trained on the same labeled set vote on the labelings of query candidates. Other work has also considered directly representing LLM confidence with the average log probability of the entire output [53,61], an alternative worth comparing against in the future.
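The unanimity-voting estimate can be sketched in a few lines: the same prompt is sampled several times at nonzero temperature, and any disagreement flags the example as uncertain. The llm_sample wrapper, the number of votes, and the temperature value are assumptions for illustration.

```python
def unanimity_vote(prompt, llm_sample, k=3, temperature=0.7):
    # Sample k outputs; identical outputs suggest a stable (and likely
    # correct) prediction, while disagreement suggests an uncertain one
    # worth surfacing for human inspection.
    outputs = [llm_sample(prompt, temperature=temperature) for _ in range(k)]
    unanimous = all(o == outputs[0] for o in outputs)
    return outputs[0], unanimous
```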
Importantly, while many empirical results suggest that active learning is effective, it does suffer from certain limitations. For example, the labeled examples are not drawn i.i.d. from the underlying data distribution [49], and therefore can sometimes be imbalanced [40] or less effective than random sampling [20]. Our method will likely share the same limitations, though we leave it to future work to articulate scenarios where ScatterShot is most useful.
Model-assisted Annotation
ScatterShot can also be seen as offering assistance in data annotation (for in-context learning). The idea of annotating data with both humans and AI models in the loop has been explored broadly. In this setup, AIs can play various roles [71], e.g., they may generate more examples that mimic difficult patterns [29,45], select uncertain examples for people to inspect [61], etc. ScatterShot is closer to work encouraging annotators to find model-fooling examples ("adversarial data collection") [6,13,14,24]. In particular, Bartolo et al. [5] found that in question-answering tasks, models trained on these adversarially collected data can generalize better to more challenging examples. However, because of the overhead of re-training, their analyses were performed post-hoc, i.e., they only updated the model offline after collecting a large batch of challenging examples. In contrast, we leverage the advantage of in-context learning, and directly study the dynamics of in-context function updates.
The iterative nature also links ScatterShot to earlier work in interactive machine learning (IML) [3,68]. IML is a paradigm that facilitates iterative and exploratory model understanding and updating: a system explains to users how the current model makes predictions, and users in turn give feedback to the model, starting the cycle again. Labeling is one classic type of IML feedback [19,51]. However, because traditional ML tends to focus much more on surface features (e.g., counting trigrams in a training example without regard to their semantic meaning), users find labeling not powerful enough, and prefer richer controls like feature selection [3,39,52]. Since LLMs have some capability to generalize individual examples more broadly to semantically similar ones, we believe labeling in in-context learning would be more effective, and we use ScatterShot to reactivate labeling-based IML for in-context learning.
CONCLUSION
In this work, we present ScatterShot, an interactive system for building high-quality demonstration sets for in-context learning. ScatterShot helps users find informative input examples in the unlabeled data, annotate them efficiently with the help of the current version of the learned in-context function, and estimate the quality of said function. Results from both a simulation study and a 10-person evaluation show that ScatterShot improves in-context function performance, as well as annotators' awareness and handling of diverse patterns. Our findings highlight the importance of data slicing and sampling, iterative human-model interaction, and collaborative human-model labeling, and point to interesting future directions such as AI-assisted task definition refinement, more concrete quality metrics that convey the in-context function's progress, etc.
ACKNOWLEDGMENTS
This material is based upon work supported by NSF awards 1901386 and 2040196, ONR grant N00014-21-1-2707, and a gift from the Allen Institute for Artificial Intelligence (AI2). The authors thank the user study participants for their valuable feedback, and anonymous reviewers for helpful discussions and comments.
Figure 2: (A) The ScatterShot interface, with (A1) task description, (A2) existing in-context examples, and (A3) candidate examples awaiting human inspection. Through interactions (A4) and (A5), users can make edits to LLM outputs and sort the candidates into positive demonstrative examples (+), negative ones (-), or exclude the candidate (O). The description and the examples are transformed into raw text prompts. One set of in-context examples produces multiple prompts depending on how the examples are ordered; (B) shows a prompt with one possible ordering.

Figure 5: The ScatterShot interface on the question-answer pair implication task.
or no time expressions at all (as the pool for negative examples). This resulted in 491 examples with YYYY-MM-DD outputs, and 369 negative examples with the output N/A. We sample 100 examples randomly from this dataset as a test set, and use the remaining examples as our unlabeled pool in the experiment.
Figure 6 shows the trajectory of the in-context function quality as the simulated user adds more examples, for three randomly selected runs. ScatterShot dominates the baseline at almost all points in all runs, with the biggest gaps occurring when the number of in-context examples is small. We see particular gains at n = 5, i.e., when the first two examples are added to the seed examples. Our hypothesis (based on qualitative observation) is that ScatterShot consistently selects examples that represent patterns not contained in the seed examples, e.g., negative examples (where the outcome is N/A) when all seed examples are positive.
Figure 6: The in-context function performance trajectory. We evaluate the in-context function on the held-out test set every time we add five more examples to the in-context bucket, until the stop condition is satisfied. ScatterShot tends to frequently outperform Random, and tends to have less performance oscillation. Note that the y-axis is different for Temporal and QA-pair.
context learning examples. (2) Random, where participants use the ScatterShot interface with slice-based sampling disabled, i.e., they review randomly selected examples. This condition still has the benefit of an interactive interface, and uses the intermediate in-context functions to suggest outputs and pseudo-label. (3) ScatterShot, where participants have access to ScatterShot, fully featured.
ScatterShot helped participants explore the input space more holistically, and build better in-context functions. The perceived data difficulty and diversity encouraged participants to iterate more on their in-context examples. Looking at the number of in-context examples added in each setting, participants added 40% more examples in ScatterShot than in Random when ScatterShot came after (M-R-S), and 20% fewer examples in Random when Random came after (M-S-R), i.e., they stopped much earlier when Random came after ScatterShot. These additional examples are not only a result of more inspection effort (on average, participants in ScatterShot reviewed 20% more samples), but also of each batch in ScatterShot being more likely to contain a good in-context example: participants added 81% of the examples they inspected in ScatterShot, but only 48% of the examples in Random.
at the expense of demonstrating less-common ones. What is worse, users may not know when they have provided enough examples, and whether there are any uncovered patterns in the unlabeled data.

Figure 1: An overview of ScatterShot versus random sampling. ScatterShot pairs slice-based input sampling with LLM output suggestions that users inspect and edit against the existing in-context examples; random sampling instead asks users to annotate randomly drawn inputs from scratch.

The goal of ScatterShot is to help users iteratively find and label high-quality demonstrative examples to build effective in-context functions. Figure 2(B) shows one such raw text prompt for the temporal task:

Extract all the mentioned dates as detailed as possible, in the ISO format of YEAR-MONTH-DAY.
>> [Posted: 2000-01-05] Photo: today. => Today == 2000-01-05
>> [Posted: 1989-10-31] Slepian was killed on Oct. 23, 1999. => Oct. 23, 1999 == 1999-10-23
>> [Posted: 1989-10-31] It hopes to control 5% of jewelry business => N/A
>> [[SELECTED ORIGINAL EXAMPLE]] =>

For output suggestion, ScatterShot samples the LLM multiple times and applies unanimity voting: when the samples agree (e.g., three identical completions "Christmas == 2014-12-25" for "[Posted: 2014-12-25] @viereedom Merry Christmas!"), the output is kept; when they disagree (e.g., "nineteen ninety-six == 1996-01" versus "nineteen ninety-six == 1996" for "[Posted: 1998-02-27] Atlanta nineteen ninety-six."), the example is flagged for manual inspection and editing.
Table 2: Example outputs from transformation functions built in the ScatterShot and Random conditions, and from a rule-based system [44]. ScatterShot functions tend to have better coverage, fluency, and correctness.

Coverage: Transforms more forms of inputs.
Input: Q: Are there more girls or boys? A: equal | Q: How many hairs does the sheep in front have? A: infinite
Rule-based: ✗ (No generation) | ✗ (No generation)
Random: ✓ Q: Are the girls and boys equal in number? A: yes | ✗ N/A
ScatterShot: ✗ Q: Are the girls and boys equal? A: yes | ✓ Q: Does the sheep in front have infinite hairs? A: yes

Fluency: Generates outputs that sound natural.
Input: Q: What make is the phone? A: vtech | Q: What does the woman have on her face? A: headband
Rule-based: ✗ Q: Make is the phone vtech? A: yes | ✗ Q: Does the woman have on her face headband? A: yes
Random: ✓ Q: Is the phone a vtech? A: yes | ✓ Q: Does the woman have a headband on her face? A: yes
ScatterShot: ✓ Q: Is the phone a vtech? A: yes | ✓ Q: Does the woman have a headband on her face? A: yes

Correctness: Produces desired outputs (the new question-answer pairs are logically equivalent to the original pair).
Input: Q: What monument are they next to? A: unknown | Q: What type of motorcycle is in the picture? A: mountain
Rule-based: ✗ Q: Are they next to unknown? A: yes | ✗ Q: Is the mountain in the picture? A: yes
Random: ✗ Q: Is the monument unknown? A: yes | ✗ Q: Is the mountain type of motorcycle in the picture? A: yes
ScatterShot: ✓ Q: Are they next to an unknown monument? A: yes | ✓ Q: Is the motorcycle in the picture a mountain bike? A: yes
All of our studies and experiments are run on GPT-3 [7], https://beta.openai.com/
If an example is in the in-context set, we perform cross-validation and predict its output using the remaining examples.
The full user study instructions, as well as the detailed exit survey, are in https://github.com/tongshuangwu/scattershot
In Manual, this meant looking at three random batches of unlabeled data in the Jupyter notebook.
REFERENCES
Sweta Agrawal, Chunting Zhou, Mike Lewis, Luke Zettlemoyer, and Marjan Ghazvininejad. 2022. In-context Examples Selection for Machine Translation. ArXiv preprint abs/2212.02437 (2022). https://arxiv.org/abs/2212.02437
Satya Almasian, Dennis Aumiller, and Michael Gertz. 2021. BERT got a Date: Introducing Transformers to Temporal Tagging. ArXiv preprint abs/2109.14927 (2021). https://arxiv.org/abs/2109.14927
Saleema Amershi, Maya Cakmak, William Bradley Knox, and Todd Kulesza. 2014. Power to the people: The role of humans in interactive machine learning. AI Magazine 35, 4 (2014), 105-120.
Peter Auer. 2002. Using confidence bounds for exploitation-exploration trade-offs. Journal of Machine Learning Research 3, Nov (2002), 397-422.
Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. 2020. Beat the AI: Investigating Adversarial Human Annotation for Reading Comprehension. Transactions of the Association for Computational Linguistics 8 (2020), 662-678. https://doi.org/10.1162/tacl_a_00338
Max Bartolo, Tristan Thrush, Sebastian Riedel, Pontus Stenetorp, Robin Jia, and Douwe Kiela. 2022. Models in the Loop: Aiding Crowdworkers with Generative Annotation Assistants. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Seattle, United States, 3754-3767. https://doi.org/10.18653/v1/2022.naacl-main.275
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems 33 (NeurIPS 2020). https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html
Ángel Alexander Cabrera, Marco Tulio Ribeiro, Bongshin Lee, Rob DeLine, Adam Perer, and Steven M. Drucker. 2022. What Did My AI Learn? How Data Scientists Make Sense of Model Behavior. ACM Transactions on Computer-Human Interaction (2022).
Angel X. Chang and Christopher Manning. 2012. SUTime: A library for recognizing and normalizing time expressions. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12). European Language Resources Association (ELRA), Istanbul, Turkey, 3735-3740. http://www.lrec-conf.org/proceedings/lrec2012/pdf/284_Paper.pdf
Ernie Chang, Xiaoyu Shen, Hui-Syuan Yeh, and Vera Demberg. 2021. On Training Instance Selection for Few-Shot Neural Text Generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers). Association for Computational Linguistics, Online, 8-13. https://doi.org/10.18653/v1/2021.acl-short.2
Vincent S. Chen, Sen Wu, Alexander J. Ratner, Jen Weng, and Christopher Ré. 2019. Slice-based Learning: A Programming Model for Residual Learning in Critical Data Slices. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019). 9392-9402. https://proceedings.neurips.cc/paper/2019/hash/351869bde8b9d6ad1e3090bd173f600d-Abstract.html
Gui Citovsky, Giulia DeSalvo, Claudio Gentile, Lazaros Karydas, Anand Rajagopalan, Afshin Rostamizadeh, and Sanjiv Kumar. 2021. Batch active learning at scale. Advances in Neural Information Processing Systems 34 (2021), 11933-11944.
Emily Dinan, Samuel Humeau, Bharath Chintagunta, and Jason Weston. 2019. Build it Break it Fix it for Dialogue Safety: Robustness from Adversarial Human Attack. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 4537-4546. https://doi.org/10.18653/v1/D19-1461
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2019. DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Association for Computational Linguistics, Minneapolis, Minnesota, 2368-2378. https://doi.org/10.18653/v1/N19-1246
Avia Efrat and Omer Levy. 2020. The Turking Test: Can Language Models Understand Instructions? ArXiv preprint abs/2010.11982 (2020). https://arxiv.org/abs/2010.11982
Sabri Eyuboglu, Maya Varma, Khaled Saab, Jean-Benoit Delbrouck, Christopher Lee-Messer, Jared Dunnmon, James Zou, and Christopher Ré. 2022. Domino: Discovering systematic errors with cross-modal embeddings. ArXiv preprint abs/2203.14960 (2022). https://arxiv.org/abs/2203.14960
Yonatan Geifman and Ran El-Yaniv. 2017. Deep active learning over the long tail. ArXiv preprint abs/1711.00941 (2017). https://arxiv.org/abs/1711.00941
Suchin Gururangan, Swabha Swayamdipta, Omer Levy, Roy Schwartz, Samuel Bowman, and Noah A. Smith. 2018. Annotation Artifacts in Natural Language Inference Data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers). Association for Computational Linguistics, New Orleans, Louisiana, 107-112. https://doi.org/10.18653/v1/N18-2017
Florian Heimerl, Steffen Koch, Harald Bosch, and Thomas Ertl. 2012. Visual classifier training for text document retrieval. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2839-2848.
Henrik Imberg, Johan Jonasson, and Marina Axelson-Fisk. 2020. Optimal sampling in unbiased active learning. In The 23rd International Conference on Artificial Intelligence and Statistics (AISTATS 2020), Proceedings of Machine Learning Research, Vol. 108. PMLR, 559-569. http://proceedings.mlr.press/v108/imberg20a.html
Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J. Cai. 2022. Prompt-based Prototyping with Large Language Models. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems.
Ellen Jiang, Kristen Olson, Edwin Toh, Alejandra Molina, Aaron Donsbach, Michael Terry, and Carrie J. Cai. 2022. PromptMaker: Prompt-based Prototyping with Large Language Models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1-8.
Fereshte Khani, Martin Rinard, and Percy Liang. 2016. Unanimous Prediction for 100% Precision with Application to Learning Semantic Mappings. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, 952-962. https://doi.org/10.18653/v1/P16-1090
Douwe Kiela, Max Bartolo, Yixin Nie, Divyansh Kaushik, Atticus Geiger, Zhengxuan Wu, Bertie Vidgen, Grusha Prasad, Amanpreet Singh, Pratik Ringshia, Zhiyi Ma, Tristan Thrush, Sebastian Riedel, Zeerak Waseem, Pontus Stenetorp, Robin Jia, Mohit Bansal, Christopher Potts, and Adina Williams. 2021. Dynabench: Rethinking Benchmarking in NLP. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Online, 4110-4124. https://doi.org/10.18653/v1/2021.naacl-main.324
Mina Lee, Percy Liang, and Qian Yang. 2022. CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. ArXiv preprint abs/2201.06796 (2022). https://arxiv.org/abs/2201.06796
David D. Lewis and William A. Gale. 1994. A sequential algorithm for training text classifiers. In SIGIR'94. Springer, 3-12.
Rensis Likert. 1932. A technique for the measurement of attitudes. Archives of Psychology (1932).
Chin-Yew Lin. 2004. ROUGE: A Package for Automatic Evaluation of Summaries. In Text Summarization Branches Out. Association for Computational Linguistics, Barcelona, Spain, 74-81. https://aclanthology.org/W04-1013
Alisa Liu, Swabha Swayamdipta, Noah A. Smith, and Yejin Choi. 2022. WANLI: Worker and AI collaboration for natural language inference dataset creation. ArXiv preprint abs/2201.05955 (2022). https://arxiv.org/abs/2201.05955
Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. 2022. What Makes Good In-Context Examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures. Association for Computational Linguistics, Dublin, Ireland and Online, 100-114. https://doi.org/10.18653/v1/2022.deelio-1.10
Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2021. Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing. ArXiv preprint abs/2107.13586 (2021). https://arxiv.org/abs/2107.13586
Yao Lu, Max Bartolo, Alastair Moore, Sebastian Riedel, and Pontus Stenetorp. 2022. Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 8086-8098. https://doi.org/10.18653/v1/2022.acl-long.556
Alena Lukasová. 1979. Hierarchical agglomerative clustering procedure. Pattern Recognition 11, 5-6 (1979), 365-381.
Prem Melville and Raymond J. Mooney. 2004. Diverse ensembles for active learning. In Machine Learning, Proceedings of the Twenty-first International Conference (ICML 2004). ACM. https://doi.org/10.1145/1015330.1015385
Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2022. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? ArXiv preprint abs/2202.12837 (2022). https://arxiv.org/abs/2202.12837
Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-Task Generalization via Natural Language Crowdsourcing Instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 3470-3487. https://doi.org/10.18653/v1/2022.acl-long.244
Joon Sung Park, Lindsay Popowski, Carrie J. Cai, Meredith Ringel Morris, Percy Liang, and Michael S. Bernstein. 2022. Social Simulacra: Creating Populated Prototypes for Social Computing Systems. ArXiv preprint abs/2208.04024 (2022). https://arxiv.org/abs/2208.04024
Kayur Patel, James Fogarty, James A. Landay, and Beverly Harrison. 2008. Investigating statistical machine learning as a tool for software development. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 667-676.
Remus Pop and Patric Fulop. 2018. Deep ensemble Bayesian active learning: Addressing the mode collapse issue in Monte Carlo dropout via ensembles. ArXiv preprint abs/1811.03897 (2018). https://arxiv.org/abs/1811.03897
James Pustejovsky, Jessica Littman, Roser Saurí, and Marc Verhagen. 2006. TimeBank 1.2 documentation. Event London, no. April (2006), 6-11.
Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, and Christopher Ré. 2017. Snorkel: Rapid training data creation with weak supervision. In Proceedings of the VLDB Endowment, Vol. 11. NIH Public Access, 269.
Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Association for Computational Linguistics, Hong Kong, China, 3982-3992. https://doi.org/10.18653/v1/D19-1410
Marco Tulio Ribeiro, Carlos Guestrin, and Sameer Singh. 2019. Are Red Roses Red? Evaluating Consistency of Question-Answering Models. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 6174-6184. https://doi.org/10.18653/v1/P19-1621
Marco Tulio Ribeiro and Scott Lundberg. 2022. Adaptive Testing and Debugging of NLP Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Dublin, Ireland, 3253-3267. https://doi.org/10.18653/v1/2022.acl-long.230
Ohad Rubin, Jonathan Herzig, and Jonathan Berant. 2022. Learning To Retrieve Prompts for In-Context Learning. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics, Seattle, United States, 2655-2671. https://doi.org/10.18653/v1/2022.naacl-main.191
Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. 2019. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. ArXiv preprint abs/1911.08731 (2019). https://arxiv.org/abs/1911.08731
Ozan Sener and Silvio Savarese. 2018. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In 6th International Conference on Learning Representations (ICLR 2018). OpenReview.net. https://openreview.net/forum?id=H1aIuk-RW
Burr Settles. 2009. Active Learning Literature Survey. Computer Sciences Technical Report 1648. University of Wisconsin-Madison.
H. Sebastian Seung, Manfred Opper, and Haim Sompolinsky. 1992. Query by committee. In Proceedings of the Fifth Annual Workshop on Computational Learning Theory. 287-294.
Patrice Simard, David Chickering, Aparna Lakshmiratan, Denis Charles, Léon Bottou, Carlos Garcia Jurado Suarez, David Grangier, Saleema Amershi, Johan Verwey, and Jina Suh. 2014. ICE: Enabling non-experts to build models interactively for large-scale lopsided problems. ArXiv preprint abs/1409.4814 (2014). https://arxiv.org/abs/1409.4814
Simone Stumpf, Vidya Rajaram, Lida Li, Weng-Keen Wong, Margaret Burnett, Thomas Dietterich, Erin Sullivan, and Jonathan Herlocker. 2009. Interacting meaningfully with machine learning systems: Three experiments. International Journal of Human-Computer Studies 67, 8 (2009), 639-662.
Hongjin Su, Jungo Kasai, Chen Henry Wu, Weijia Shi, Tianlu Wang, Jiayi Xin, Rui Zhang, Mari Ostendorf, Luke Zettlemoyer, Noah A. Smith, et al. 2022. Selective annotation makes language models better few-shot learners. ArXiv preprint abs/2209.01975 (2022). https://arxiv.org/abs/2209.01975
Ben Swanson, Kory Mathewson, Ben Pietrzak, Sherol Chen, and Monica Dinalescu. 2021. Story Centaur: Large Language Model Few Shot Learning as a Creative Writing Tool. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations. Association for Computational Linguistics, Online, 244-256. https://doi.org/10.18653/v1/2021.eacl-demos.29
Jeniya Tabassum, Alan Ritter, and Wei Xu. 2016. TweeTime: A Minimally Supervised Method for Recognizing and Normalizing Time Expressions in Twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Austin, Texas, 307-318. https://doi.org/10.18653/v1/D16-1030
Alex Tamkin, Dat Nguyen, Salil Deshpande, Jesse Mu, and Noah Goodman. 2022. Active Learning Helps Pretrained Models Learn the Intended Task. ArXiv preprint abs/2204.08491 (2022). https://arxiv.org/abs/2204.08491
Toan Tran, Thanh-Toan Do, Ian D. Reid, and Gustavo Carneiro. 2019. Bayesian Generative Active Deep Learning. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019), Proceedings of Machine Learning Research, Vol. 97. PMLR, 6295-6304. http://proceedings.mlr.press/v97/tran19a.html
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems 30 (NeurIPS 2017). 5998-6008. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
Vijay V. Vazirani. 2013. Approximation Algorithms. Springer Science & Business Media.
Gust Verbruggen, Vu Le, and Sumit Gulwani. 2021. Semantic programming by example with pre-trained models. Proceedings of the ACM on Programming Languages 5, OOPSLA (2021), 1-25.
Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021. Want To Reduce Labeling Cost? GPT-3 Can Help. In Findings of the Association for Computational Linguistics: EMNLP 2021. Association for Computational Linguistics, Punta Cana, Dominican Republic, 4195-4205. https://doi.org/10.18653/v1/2021.findings-emnlp.354
Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation. Yunlong Wang, Priyadarshini Venkatesh, Brian Y Lim, CHI Conference on Human Factors in Computing Systems. Yunlong Wang, Priyadarshini Venkatesh, and Brian Y Lim. 2022. Interpretable Directed Diversity: Leveraging Model Explanations for Iterative Crowd Ideation. In CHI Conference on Human Factors in Computing Systems. 1-28.
Chain of thought prompting elicits reasoning in large language models. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, Denny Zhou, abs/2201.11903ArXiv preprintJason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. ArXiv preprint abs/2201.11903 (2022). https://arxiv.org/abs/ 2201.11903
Promptchainer: Chaining large language model prompts through visual programming. Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, Carrie J Cai, CHI Conference on Human Factors in Computing Systems Extended Abstracts. Tongshuang Wu, Ellen Jiang, Aaron Donsbach, Jeff Gray, Alejandra Molina, Michael Terry, and Carrie J Cai. 2022. Promptchainer: Chaining large language model prompts through visual programming. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1-10.
Errudite: Scalable, Reproducible, and Testable Error Analysis. Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel Weld, 10.18653/v1/P19-1073Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsTongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2019. Errudite: Scalable, Reproducible, and Testable Error Analysis. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, Florence, Italy, 747-763. https: //doi.org/10.18653/v1/P19-1073
Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models. Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, Daniel Weld, 10.18653/v1/2021.acl-long.523Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingAssociation for Computational Linguistics1Long Papers)Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel Weld. 2021. Polyjuice: Generating Counterfactuals for Explaining, Evaluating, and Improving Models. In Proceedings of the 59th Annual Meeting of the Association for Computa- tional Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Association for Computational Linguistics, Online, 6707-6723. https://doi.org/10.18653/v1/2021.acl-long.523
AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. Tongshuang Wu, Michael Terry, Carrie , 10.1145/3491102.3517582Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. the 2022 CHI Conference on Human Factors in Computing SystemsNew Orleans, LA, USA; New York, NY, USA, ArticleAssociation for Computing Machinery385CHI '22)Tongshuang Wu, Michael Terry, and Carrie Jun Cai. 2022. AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 385, 22 pages. https://doi.org/10.1145/3491102. 3517582
Local decision pitfalls in interactive machine learning: An investigation into feature selection in sentiment analysis. Tongshuang Wu, S Daniel, Jeffrey Weld, Heer, ACM Transactions on Computer-Human Interaction (TOCHI). 26Tongshuang Wu, Daniel S Weld, and Jeffrey Heer. 2019. Local decision pitfalls in interactive machine learning: An investigation into feature selection in sentiment analysis. ACM Transactions on Computer-Human Interaction (TOCHI) 26, 4 (2019), 1-27.
Tempura: Query Analysis with Structural Templates. Tongshuang Wu, Kanit Wongsuphasawat, Donghao Ren, Kayur Patel, Chris Dubois ; Regina, Florian 'floyd' Bernhaupt, David Mueller, Josh Verweij, Andres, CHI '20: CHI Conference on Human Factors in Computing Systems. Shengdong Zhao, Briane Paul Samson, and Rafal KocielnikHonolulu, HI, USA; Joanna McGrenere, Andy Cockburn, Ignacio Avellino, Alix Goguey, Pernille BjønTongshuang Wu, Kanit Wongsuphasawat, Donghao Ren, Kayur Patel, and Chris DuBois. 2020. Tempura: Query Analysis with Structural Templates. In CHI '20: CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, April 25-30, 2020, Regina Bernhaupt, Florian 'Floyd' Mueller, David Verweij, Josh Andres, Joanna McGrenere, Andy Cockburn, Ignacio Avellino, Alix Goguey, Pernille Bjøn, Shengdong Zhao, Briane Paul Samson, and Rafal Kocielnik (Eds.).
. 10.1145/3313831.3376451ACMACM, 1-12. https://doi.org/10.1145/3313831.3376451
An explanation of in-context learning as implicit bayesian inference. Sang Michael Xie, Aditi Raghunathan, Percy Liang, Tengyu Ma, abs/2111.02080ArXiv preprintSang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma. 2021. An explanation of in-context learning as implicit bayesian inference. ArXiv preprint abs/2111.02080 (2021). https://arxiv.org/abs/2111.02080
Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent. Zhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H Miller, Arthur Szlam, Douwe Kiela, Jason Weston, 6th International Conference on Learning Representations. Vancouver, BC, CanadaConference Track ProceedingsZhilin Yang, Saizheng Zhang, Jack Urbanek, Will Feng, Alexander H. Miller, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 -May 3, 2018, Conference Track Proceedings. OpenReview.net. https://openreview. net/forum?id=SJ-C6JbRW
Interactive Program Synthesis by Augmented Examples. Tianyi Zhang, London Lowmanstone, Xinyu Wang, Elena L Glassman, Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. the 33rd Annual ACM Symposium on User Interface Software and TechnologyTianyi Zhang, London Lowmanstone, Xinyu Wang, and Elena L Glassman. 2020. Interactive Program Synthesis by Augmented Examples. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology. 627-648.
| [
"https://github.com/marcotcr/qa_consistency/"
] |
[
"Stanford Pupper: A Low-Cost Agile Quadruped Robot for Benchmarking and Education",
"Stanford Pupper: A Low-Cost Agile Quadruped Robot for Benchmarking and Education"
] | [
"Nathan Kau "
] | [] | [] | We present Stanford Pupper, an easily-replicated open source quadruped robot designed specifically as a benchmark platform for legged robotics research. The robot features torque-controllable brushless motors with high specific power that enable testing of impedance and torque-based machine learning and optimization control approaches. Pupper can be built from the ground up in under 8 hours for a total cost under $2000, with all components either easily purchased or 3D printed. To rigorously compare control approaches, we introduce two benchmarks, Sprint and Scramble with a leaderboard maintained by Stanford Student Robotics. These benchmarks test high-speed dynamic locomotion capability, and robustness to unstructured terrain. We provide a reference controller with dynamic, omnidirectional gaits that serves as a baseline for comparison. Reproducibility is demonstrated across multiple institutions with robots made independently. All material is available at https: //stanfordstudentrobotics.org/quadruped-benchmark. | null | [
"https://arxiv.org/pdf/2110.00736v2.pdf"
] | 238,259,070 | 2110.00736 | 6cf64f8c693e7215afed55a3edebede811cda6c9 |
Stanford Pupper: A Low-Cost Agile Quadruped Robot for Benchmarking and Education
Nathan Kau
Index Terms - Legged robotics, open source, robotics, quadruped, benchmark
We present Stanford Pupper, an easily-replicated open source quadruped robot designed specifically as a benchmark platform for legged robotics research. The robot features torque-controllable brushless motors with high specific power that enable testing of impedance and torque-based machine learning and optimization control approaches. Pupper can be built from the ground up in under 8 hours for a total cost under $2000, with all components either easily purchased or 3D printed. To rigorously compare control approaches, we introduce two benchmarks, Sprint and Scramble with a leaderboard maintained by Stanford Student Robotics. These benchmarks test high-speed dynamic locomotion capability, and robustness to unstructured terrain. We provide a reference controller with dynamic, omnidirectional gaits that serves as a baseline for comparison. Reproducibility is demonstrated across multiple institutions with robots made independently. All material is available at https: //stanfordstudentrobotics.org/quadruped-benchmark.
I. INTRODUCTION
Recent successes in learning and optimization-based controllers for legged locomotion have largely been isolated to a select set of complex and sophisticated hardware platforms such as Anymal [1] and MIT Cheetah 3 [2]. Given the complexity and slow iteration cycle of legged robots, we see little reproduction of legged robot research across different lab groups. One of the primary factors driving the complexity of these robots is that they depend on actuators that can precisely track torque commands. For learning approaches, torque control is important to ensure that the actuator models used in simulation closely parallel their physical counterparts. Similarly, optimization-based approaches often assume perfect torque control in the dynamics models, such as in the centroidal model-predictive-control method introduced in [3].
Additionally, new control methods are often presented without comparison to baseline or previously developed controllers, which makes direct comparison difficult. Moreover, new methods are frequently introduced alongside new hardware, making it difficult to isolate the algorithmic contributions. This motivates the need for an open source legged robot that is 1) low-cost, 2) easily replicated, and 3) high performance with torque control.
To respond to this need, we present Stanford Pupper (Fig. 1) and an associated set of benchmarks. Pupper is an open source quadruped robot designed to make legged robotics research more accessible, cost-effective, and comparable across institutions. The robot is small, lightweight, and built with off-the-shelf components. The actuators are transparent and torque-controllable, which makes the robot amenable to sim-to-real applications. Most importantly, the robot is simple and easy to build, which makes it reliable and enables fast iteration. We also present a set of benchmarks for Pupper to easily compare the performance of different control algorithms across institutions. Before detailing Pupper, we show where Pupper stands in relation to previous work.
II. RELATED WORK
Previous work has explored the development of legged robots that are low-cost, easily replicated, or high performance, but to the best of our knowledge, no work has yet integrated all three characteristics into one hardware platform. At the time of this writing, two of the primary robots used for state-of-the-art learning and optimization-based control are Anymal and the MIT Cheetah variants [4]. These robots use high-bandwidth, torque-controllable actuators to accurately track control policies. However, because of their complexity and their limited availability - Anymal is sold on a case-by-case basis, MIT Cheetah 3 is proprietary, and MIT Mini Cheetah is only partially open-source - few researchers outside of the robots' original labs have experimented with them.
A quadruped robot that is low-cost and easily replicated is introduced by ROBEL [5], but this platform sacrifices performance to make it accessible. They also provide three benchmarks for D'Kitty, their quadruped robot - stand, orient, and walk - that allow rigorous comparison between learning algorithms.
TABLE I. Comparison of robot specifications across low-cost quadruped robots: torque control yes / yes / no / yes; DOF 12 / 8 / 8 / 12. (a) Estimated to be >10K USD including manufacturing.
D'Kitty offers a low barrier to entry with a cost of $4200, and an assembly time of less than 6 hours. The primary disadvantage of the ROBEL hardware platform is that the servo motors are too slow for agile locomotion and not transparent enough for torque control. Specifically, the servos have a maximum speed of 8 rad/s [6], which is too slow for dynamic locomotion. As a reference, the Pupper actuators reach speeds up to 16 rad/s during 0.7 m/s trotting. In terms of transparency, the D'Kitty servos have a current-control mode similar to the actuators on Pupper, but the D'Kitty servos have high-reduction multi-stage gear trains with high nonlinear friction, which impedes accurate torque control [7].
A handful of open source quadruped robots have attempted to reduce the gap between inexpensive open source quadrupeds and high performance systems. Stanford Doggo [8], Solo [9], and MIT Mini Cheetah [4] use low-cost brushless drone motors and open source motor controllers to achieve record agility in certain metrics. However, all three robots required custom-machined parts and laborious assembly that ultimately make them difficult to reproduce at the scale necessary for a benchmarking platform. Of the three, Solo reduces the build complexity the most by relying more heavily on 3D printed parts. However, Solo still uses some custom-machined parts and, most importantly, uses custom actuators, transmissions, and motor controllers, which means it remains unsuitable as an accessible benchmarking platform. As we discuss below, Pupper achieves greater agility than Solo or Doggo while increasing reproducibility.
III. ROBOT OVERVIEW
Pupper is a 2.1kg, twelve degree of freedom quadruped robot capable of dynamic locomotion. We include a summary of the robot's specifications, compared against similar quadruped robots, in Tab. I. The following subsections detail the actuator, structure, electronics, and software design of Pupper.
A. Actuator
The core of Stanford Pupper is its inexpensive, yet highperformance off-the-shelf actuator consisting of the M2006 brushless gearmotor [10] and the C610 motor controller [11].
1) Brushless gearmotor: The M2006 brushless gearmotor is shown in Fig. 2. The gearmotor uses a 22mm diameter brushless DC motor and integrated absolute encoder at the input followed by a 36:1 two-stage planetary reduction. The gearbox contains two stacked planetary reductions with a total 1.0 n/a n/a 6.9 Torque Bandwidth (Hz) 17. n/a n/a 30 Efficiency (%) 68 -72 n/a n/a 90 -95 Output Inertia (kgm 2 ) 0.0024 n/a n/a 0.0023 a Includes mass of leg segment. b Estimated from given specs. ratio of 36:1. As summarized in Table II, the overall weight of the actuator is 90g and the peak torque is 1.8Nm.
2) Motor controller: The C610 motor controller uses field-oriented control (FOC) to operate the M2006 in current-control mode. The controller supports current commands to the motor at 1kHz over a CAN connection, and reports back position, velocity, and measured current over the same CAN interface at the same rate.
B. Structure
The robot's frame is constructed out of 3D printed bulkheads and printed circuit boards (PCBs) that double as structure and power distribution. The robot uses four identical legs, each with three actuated degrees of freedom. The feet of the robot are made of off-the-shelf wear-resistant rubber bumpers that are easily interchanged depending on the walking surface.
C. Electronics
A Teensy 4.0 microprocessor [12] and a Raspberry Pi embedded computer [13] split responsibilities for controlling the robot. The microprocessor receives motor position, velocity, and current estimates and sends motor current commands over CAN at 1kHz. It also reads data from an inertial measurement unit (IMU) and performs onboard state estimation. The embedded computer communicates with the microprocessor over USB serial at 12mbps and is responsible for high-level motion planning, whether it be a reinforcement learning policy, model predictive control, or any other method.
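The split between the 1 kHz low-level loop on the microcontroller and the lower-rate planner on the embedded computer can be pictured as a simple rate-limited loop. The Python sketch below is illustrative only: the real firmware runs on the Teensy, and all function names (read_feedback, send_currents, etc.) are hypothetical placeholders, not the project's API.

```python
import time

DT = 0.001  # 1 kHz low-level loop period, matching the CAN update rate

def control_loop(read_feedback, read_imu, latest_command, send_currents):
    """Hypothetical sketch of the microcontroller's responsibilities: read
    motor/IMU feedback, apply the active control mode, command currents."""
    next_tick = time.monotonic()
    while True:
        state = read_feedback()      # position, velocity, current per motor (CAN)
        imu = read_imu()             # used for onboard state estimation
        cmd = latest_command()       # most recent planner output (USB serial link)
        currents = cmd.to_currents(state, imu)  # mode 1/2/3 control law (placeholder)
        send_currents(currents)      # CAN current commands to the C610 controllers
        next_tick += DT
        time.sleep(max(0.0, next_tick - time.monotonic()))
```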
D. Software & Control
The software architecture splits the control responsibilities between the microprocessor and embedded computer. The microprocessor performs low-level actuator control, state estimation, and data logging at 1kHz. Three low-level control methods are supported. In mode 1), the microprocessor passes along motor torque commands sent by the embedded computer. In mode 2), the embedded computer sends joint angle commands and the microprocessor performs joint-space PD control to track the desired positions. This mode is intended to be used with learning-based methods, where the action space is commonly joint position commands and impedance gains.
In mode 3), the embedded computer sends desired foot locations in body-relative Cartesian space and the microprocessor performs the task-space impedance control law:

$F_i = K_p (r_{\mathrm{ref},i} - r_i) + K_d (v_{\mathrm{ref},i} - v_i) + F_{\mathrm{ff},i}$    (1)
$\tau_i = J_i^{T} F_i$    (2)

where i is the leg index (1 to 4), $F_i$ is the desired foot force, $K_p$ and $K_d$ are impedance gains, $r_{\mathrm{ref},i}$ is the reference foot position, $r_i$ is the measured (via forward kinematics) foot position, $v_{\mathrm{ref},i}$ is the reference foot velocity, $v_i$ is the measured (via forward kinematics) foot velocity, $F_{\mathrm{ff},i}$ is the feed-forward force, and $J_i^{T}$ is the transposed foot Jacobian.
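A direct transcription of Eqs. (1)-(2) is a few lines of linear algebra. The sketch below assumes 3-DOF legs with diagonal gain matrices and a user-supplied leg Jacobian; the names and default gains are our own illustrative choices, not the firmware's.

```python
import numpy as np

def leg_torques(J, r_ref, r, v_ref, v, F_ff,
                kp=(500.0, 500.0, 500.0), kd=(5.0, 5.0, 5.0)):
    """Task-space impedance law of Eqs. (1)-(2) for one leg.
    J            : 3x3 foot Jacobian at the current joint angles
    r, v         : measured foot position/velocity (forward kinematics)
    r_ref, v_ref : reference foot position/velocity
    F_ff         : feed-forward foot force"""
    F = np.diag(kp) @ (r_ref - r) + np.diag(kd) @ (v_ref - v) + F_ff  # Eq. (1)
    return J.T @ F                                                    # Eq. (2)
```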
E. Robot Characterization
State-of-the-art learning and optimization-based methods rely on accurate physical robot models to calculate control commands. The Pupper actuator was tested in a dynamometer to understand the actuator limits, torque-current relationship, and control bandwidth. Actuator torque was measured over varying motor speeds and motor currents to determine a best-fit surface. Fig. 4 summarizes the results. The actuator's peak torque while doing positive work was 1.8Nm, while the peak torque doing negative work was 3.2Nm. The maximum continuous torque was 1.0Nm.
The motor friction was modelled well by Coulomb and damping terms, with an $R^2$ value over 0.999, as

$\tau_f = -0.021\,\mathrm{Nm}\,\mathrm{sgn}(\omega) - 0.0045\,\tfrac{\mathrm{Nm\,s}}{\mathrm{rad}}\,\omega - 10.0\,\mathrm{sgn}(\omega)\,|\tau_m|$    (3)

where $\tau_f$ is the total friction in Nm, $\omega$ is the output velocity in rad/s, and $\tau_m$ is the motor torque at the gearbox input in Nm. The motor torque is given by

$\tau_m = 0.0069\,\tfrac{\mathrm{Nm}}{\mathrm{A}}\, i$    (4)

where i is the motor current in A. The output torque is modeled as

$\tau_{\mathrm{output}} = 36\,\tau_m + \tau_f$    (5)

where $\tau_{\mathrm{output}}$ is the output torque, and the factor of 36 comes from the 36:1 reduction ratio.
The friction model was inverted and used to predict motor current necessary to achieve desired torque commands. At constant motor velocities, arbitrary torques can be commanded within 1% error. However, because the inverted model only corrects for velocity-dependent friction, stiction cannot be predicted, which leads to up to 28% torque error when the actuator has zero velocity.
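Equations (3)-(5) and their velocity-dependent inversion can be checked numerically. The sketch below encodes the fitted coefficients from the text; the positive-work sign convention (sign of current equal to sign of velocity) is our simplifying assumption, and stiction at omega near zero is deliberately left uncompensated, as in the paper.

```python
import numpy as np

KT, GEAR = 0.0069, 36.0  # Nm/A torque constant (Eq. 4) and 36:1 reduction

def output_torque(i, omega):
    """Forward actuator model, Eqs. (3)-(5)."""
    tau_m = KT * i
    tau_f = (-0.021 * np.sign(omega)                 # Coulomb friction
             - 0.0045 * omega                        # viscous damping
             - 10.0 * np.sign(omega) * abs(tau_m))   # load-dependent term
    return GEAR * tau_m + tau_f

def current_for_torque(tau_des, omega):
    """Inverted model (positive-work branch: sign(i) == sign(omega)); stiction
    at omega ~ 0 is not modelled, hence the ~28% error reported there."""
    s = np.sign(omega)
    return (tau_des + 0.021 * s + 0.0045 * omega) / (KT * (GEAR - 10.0))

print(output_torque(current_for_torque(1.0, 10.0), 10.0))  # ~1.0 Nm round trip
```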
Using the calibrated torque-current relationship, we estimated the rotor inertia by measuring output acceleration at fixed torques. The bandwidth of the actuator was determined by commanding a sinusoidal current and measuring the magnitude of the output torque as a function of frequency. A bode plot of the response is shown in Fig. 5.
F. Robot Simulator
We offer a physically accurate Unified Robot Description Format (URDF) model for rapid experimentation and straightforward integration with robot learning environments such as OpenAI Gym [14]. Work with undergraduates and high school students has highlighted a desire to integrate vision-based navigation, motivating the inclusion of photo-realistic textures. The model and simulator environment can be found on the project page [15].
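For reference, a URDF model like the one provided can be loaded into any standard rigid-body simulator and wrapped in a Gym-style interface. The snippet below uses PyBullet purely as an illustration (the project's own environment is MuJoCo-based, see Fig. 6), and the file name pupper.urdf is a placeholder.

```python
import pybullet as p

p.connect(p.DIRECT)                     # headless physics server
p.setGravity(0, 0, -9.81)
p.setTimeStep(1.0 / 240.0)
robot = p.loadURDF("pupper.urdf", basePosition=[0, 0, 0.2])  # placeholder path

for _ in range(240):                    # simulate one second
    p.stepSimulation()

pos, orn = p.getBasePositionAndOrientation(robot)
print(pos)                              # body odometry, e.g. for a Sprint rollout
```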
IV. DESIGN DISCUSSION
A. Tradeoffs
We optimized Pupper for easy reproducibility and experimentation while keeping performance sufficient for state-of-the-art control methods. One of the key factors that enabled easy reproducibility was reducing the robot size. While large quadruped robots have higher payload capacities and can step over larger obstacles, these advantages are not necessary to benchmark different controllers' core agility and robustness. Instead, by making Pupper small and light, we were able to use a 3D printed frame instead of custom-machined aluminum pieces. The light weight also avoids the need for costly, larger actuators. Above all, Pupper's small form factor and low weight make it easy to use in a remote-work setting since it does not need a support crane or other apparatuses for testing.
We were further able to increase accessibility while maintaining performance by using hobbyist brushless motors instead of more expensive industrial motors. Hobbyist brushless motors with high specific torque have been used in many of the state-of-the-art quadruped robots like Mini Cheetah and Solo, but not on robots as small as Pupper. However, decreasing the motor size decreases the motor torque hyperlinearly, so larger reduction ratios are needed at small scales to maintain adequate torque. The disadvantages of higher reduction ratios are greater inertia and greater inefficiency, both qualities that hurt actuator transparency [17]. While a higher quality actuator such as the Maxon ECXTQ22L BL KL A STD 24V motor and GPX22HP 35:1 gearbox would increase actuator efficiency from 72% to 93% and reduce inertia, the actuator cost would increase by more than ten times [18]. In this regard Pupper sacrifices torque-controllability and transparency for greater accessibility.
An additional tradeoff made was to co-locate the knee motor at the knee joint. Unlike the common pattern of mounting the knee motor at the hip, as in Solo and MIT Mini Cheetah (which reduces leg inertia), Pupper mounts the knee motor at the knee to eliminate the need for an additional transmission. We found that the added leg inertia did not preclude agile trotting and other locomotion.
The robot structure was simplified wherever possible for better accessibility. The 3D printed structure eliminates the need for custom-machined parts, which the authors found to be critical for helping undergraduate students take part in the work. 3D printing the majority of robot parts enables experimenters to easily modify the robot geometry and material choice.
B. Open Source
The design for Pupper is entirely open source under the MIT License and all documentation is available on the project page, https://stanfordstudentrobotics.org/quadruped-benchmark. We include instructions for sourcing parts and a bill of materials, which totals less than 2000 USD for the entire robot including fabrication costs. The project page also hosts exhaustive documentation for completing the hardware assembly and software bring-up.
V. BENCHMARK TASKS
We propose a set of tasks designed for real-world benchmarking so that different controllers can be compared on equal footing across different robots and institutions. The benchmarks are designed to be repeatable and simple enough that the test can be quickly attempted for fast iteration and data collection.
A. Sprint
1) Task overview: Illustrated in Fig. 7a, Sprint requires Pupper to traverse an unobstructed five-meter course as fast as possible. While there are no constraints placed on the robot heading or the path it takes, Pupper must begin with zero velocity. The benchmark score is the average speed over the course, taken as the five-meter length divided by the time to traverse the course. While not factored into the score, metrics including motor odometry and IMU data are logged by the onboard microcontroller to allow offline comparison of metrics including cost of transport and orientation error. Additional details are included on the project page [15].
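For concreteness, the score is simply distance over time; a helper like the following (our own naming, not part of the released tooling) makes the convention unambiguous:

```python
def sprint_score(elapsed_s, course_length_m=5.0):
    """Sprint benchmark score: average speed in m/s over the 5 m course."""
    return course_length_m / elapsed_s

print(sprint_score(7.6))  # ~0.66, close to the multi-robot baseline in Sec. VI
```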
2) Task rationale: Achieving fast locomotion has long been a goal of the legged robotics research community, but has often not been tested under standardized conditions. High speed locomotion has garnered much research because it requires fundamental understanding of locomotion to tightly coordinate full-body motion while managing ground impacts and destabilizing perturbations. However, new control techniques are often introduced in conjunction with novel mechanical designs, which makes it difficult to disentangle the separate contributions of control and mechanical design. Sprint is designed to provide a reproducible way to measure maximum forward speed. We hope to encourage researchers and students to compare both learned and hand designed gaits on a common platform, and to make the implementations of those gaits accessible to others.
B. Scramble
1) Task overview: This task requires Pupper to traverse a set of obstacles arranged in predefined locations shown in Fig. 7b. As with the Sprint task, Pupper must start with zero velocity, and the goal is to traverse the course as quickly as possible. We expect to see methods based on proprioception alone, but also methods based on vision and terrain estimation. The benchmark score is the time taken to complete the course. We expect that the offline metrics may be more interesting and in some cases even more insightful than the benchmark score alone. Additional details are included on the project page [15].
2) Task rationale: This benchmark, consisting of a set of relatively tall obstacles, was designed to challenge many of the common locomotion models. In particular, the model predictive control method used in many recent quadrupeds assumes that the terrain is flat [2], [3]. We hope this benchmark showcases generalist controllers that rely on fewer assumptions about the terrain. Like with Sprint, we hope to see students and researchers publish their solutions for easy reproduction.
C. Reference Controller
We implemented a trotting controller to serve as a baseline for the two benchmark tasks. This controller runs on the embedded computer and generates foot position targets as a function of time and desired velocity in the horizontal plane. The architecture is similar to the position-based controller in Stanford Doggo [8] and the Foot Trajectory Generator (FTG) architecture formalized in [19]. Various parameters can be tuned to optimize for different gaits and terrain, including stepping height, trotting frequency, and stance/swing gait percentages. The onboard microcontroller uses task-space impedance control to calculate motor torques needed to track the foot position targets.
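A minimal foot-trajectory generator in the spirit of the reference controller can be written in a few lines. The parameterization below (names, default values, sinusoidal swing arc) is our own illustrative choice under the FTG idea of [19], not the released controller's exact implementation.

```python
import numpy as np

def foot_target(t, v_des, freq=2.0, step_height=0.04,
                stance_frac=0.5, z_stand=-0.16, phase=0.0):
    """Body-relative (x, z) foot target for one leg as a function of time
    and desired forward velocity; phase offsets stagger the four legs."""
    s = (t * freq + phase) % 1.0            # gait phase in [0, 1)
    if s < stance_frac:                     # stance: slide foot backwards
        u = s / stance_frac
        x = (0.5 - u) * v_des / freq
        z = z_stand
    else:                                   # swing: step forward over a sine arc
        u = (s - stance_frac) / (1.0 - stance_frac)
        x = (u - 0.5) * v_des / freq
        z = z_stand + step_height * np.sin(np.pi * u)
    return np.array([x, z])

print(foot_target(0.1, 0.5))                # one sample target for the tracker
```

The microprocessor's task-space impedance controller (Eqs. (1)-(2)) then tracks these targets.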
VI. EXPERIMENTS
We tested the hardware platform and reference controller over a variety of terrain and demonstrated reliability, robustness and agility. Fig. 8 and the supplementary video highlight the robot trotting omnidirectionally over gravel, cement, and loose natural terrain. The controller is robust to loose terrain like tanbark and pebbles, and can recover from unexpected drops going over curbs. On flat ground, the reference controller achieved a stable forward and backwards speed of 0.8 m/s, a sideways speed of 0.4 m/s, and a maximum turning rate of 2.5 rad/s. The forward speed of 0.8 m/s is comparable to the 0.8m/s trotting speed achieved with Anymal, but less than that of the more agile MIT Cheetah 3, which achieves 3.0m/s using its convex model-predictive controller.
The benchmark tasks were tested across three different Pupper robots, each built at a different university, in order to demonstrate replicability and to establish baseline scores. One of the robots was built by the authors, and the two others were built by undergraduate engineering students at Massachusetts Institute of Technology and Worcester Polytechnic Institute in under a day. The robots completed the Sprint task with an average score of 0.66 and a standard deviation of 0.025. Fig. 9 illustrates the mean and interquartile ranges of the benchmark scores across trials for each of the robots. Fig. 10 compares the deviation and overall consistency of electrical power used by the three robots over a single trial on the Sprint benchmark. The Scramble task was conducted with one robot over several trials, resulting in an average score of 34.6 with a standard deviation of 4.3.

VII. EDUCATIONAL OUTREACH

One of the primary impacts we hope to have with this project is to introduce robotics research opportunities at the high school and college level. We have initiated several collaborations with high school and college educators to design curriculum and bring Pupper into their classrooms. Through partnership with HandsOnRobotics, we plan to donate robots at the community and high school level to kick-start a competitive robotics league centered around the Pupper platform.
VIII. CONCLUSION
We introduced an accessible, open source, and high performance quadruped robot to make legged robotics research more accessible and reproducible. The robot is small and lightweight, which makes it ideal for when lab environments are not available, such as in high school, undergraduate, and remote-work settings. Two benchmarks were introduced to provide a standard measure from which to compare progress on controller research. Through our ongoing collaborations with educators, and by the release of extensive documentation and open source designs, we hope to push forward the state-of-the-art while engaging more students in legged robotics research.
1 Nathan Kau is with the Department of Mechanical Engineering, Stanford University, Stanford, United States. fleecy@stanford.edu

Fig. 1. Stanford Pupper is an easily-replicated open source quadruped capable of dynamic locomotion.

Fig. 2. (a) Disassembled M2006 motor illustrating the BLDC, two-stage planetary reduction, and ring gear; (b) detailed view of the leg linkage; (c) schematic view of the Pupper robot indicating hip separation and link lengths.

Fig. 3. Electronics architecture. The embedded computer commands high-level actions to the microcontroller, which runs task- and joint-space impedance control, and commands currents to the motor controllers. The motor controllers use field-oriented control to close the loop on motor current.

Fig. 4. Measured torque versus commanded current across several motor velocities. The asymmetry between the positive and negative current cases is due to nonlinear friction in the actuator.

Fig. 5. Actuator frequency response. A sinusoidal current command ranging from 0A to 5A was commanded while the frequency increased from 0Hz to 40Hz. The gain of the resulting torque was measured.

Fig. 6. MuJoCo-based [16] simulator environment demonstrating (a) accurate physical model, (b) multiple programmatically accessible camera views for closed loop visual navigation.

Fig. 7. (a) The Sprint benchmark requires the robot to travel five meters forward as fast as possible. (b) Scramble tasks the robot with clambering over two tall obstacles in as little time as possible.

Fig. 8. Pupper walking in a variety of natural environments: (a) pebbles, (b) wet ground, (c) grass inclined 20 degrees, (d) snow bank, (e) dirt inclined 20 degrees, (f) foliage.

Fig. 9. Comparison of benchmark scores between three different Pupper robots built at three different institutions. The stem and whiskers indicate the interquartile range of benchmark results. All three robots recorded repeatable benchmark scores within low relative error of each other.

Fig. 10. Comparing total motor electrical power across the three Pupper robots for the Sprint benchmark task.

TABLE II. Comparison of actuator specifications between low-cost quadruped robots: continuous torque (Nm) 1.0 / n/a / n/a / 6.9; torque bandwidth (Hz) 17 / n/a / n/a / 30; efficiency (%) 68-72 / n/a / n/a / 90-95; output inertia (kg m^2) 0.0024 / n/a / n/a / 0.0023. (a) Includes mass of leg segment. (b) Estimated from given specs.
ACKNOWLEDGMENT
We thank the members of Stanford Student Robotics for their ongoing support of this project, including Tarun Punnoose, Ian Chang, and Parthiv Krishna; Jeremy Trilling and Gregory Xie for replicating Pupper and contributing feedback; Mark Bowers for simulator integration; and Professor Zachary Manchester for discussion along the way.
REFERENCES
[1] M. Hutter, C. Gehring, D. Jud, A. Lauber, C. D. Bellicoso, V. Tsounis, J. Hwangbo, K. Bodie, P. Fankhauser, M. Bloesch et al., "ANYmal - a highly mobile and dynamic quadrupedal robot," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2016, pp. 38-44.
[2] G. Bledt, M. J. Powell, B. Katz, J. Di Carlo, P. M. Wensing, and S. Kim, "MIT Cheetah 3: Design and control of a robust, dynamic quadruped robot," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 2245-2252.
[3] J. Di Carlo, P. M. Wensing, B. Katz, G. Bledt, and S. Kim, "Dynamic locomotion in the MIT Cheetah 3 through convex model-predictive control," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2018, pp. 1-9.
[4] B. Katz, J. Di Carlo, and S. Kim, "Mini Cheetah: A platform for pushing the limits of dynamic quadruped control," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 6295-6301.
[5] M. Ahn, H. Zhu, K. Hartikainen, H. Ponte, A. Gupta, S. Levine, and V. Kumar, "ROBEL: RObotics BEnchmarks for Learning with low-cost robots," in Conference on Robot Learning (CoRL), 2019.
[6] "Dynamixel XM430-W210-R," http://www.robotis.us/dynamixel-xm430-w210-r/, accessed: 2020-10-30.
[7] A. Wang and S. Kim, "Directional efficiency in geared transmissions: Characterization of backdrivability towards improved proprioceptive control," in Proceedings - IEEE International Conference on Robotics and Automation, vol. 2015, pp. 1055-1062, 06 2015.
[8] N. Kau, A. Schultz, N. Ferrante, and P. Slade, "Stanford Doggo: An open-source, quasi-direct-drive quadruped," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 6309-6315.
[9] F. Grimminger, A. Meduri, M. Khadiv, J. Viereck, M. Wüthrich, M. Naveau, V. Berenz, S. Heim, F. Widmaier, T. Flayols et al., "An open torque-controlled modular robot architecture for legged locomotion research," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3650-3657, 2020.
[10] "DJI M2006 brushless motor," https://store.dji.com/product/rm-m2006-p36-brushless-motor, accessed: 2020-10-30.
[11] "DJI C610 brushless motor controller," https://store.dji.com/product/rm-c610-brushless-dc-motor-speed-control, accessed: 2020-10-30.
[13] "Raspberry Pi," https://www.raspberrypi.org/, accessed: 2020-10-30.
[14] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba, "OpenAI Gym," 2016.
[15] "Stanford Pupper benchmark," https://stanfordstudentrobotics.org/quadruped-benchmark, accessed: 2020-10-30.
[16] E. Todorov, T. Erez, and Y. Tassa, "MuJoCo: A physics engine for model-based control," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012, pp. 5026-5033.
[17] S. Seok, A. Wang, Meng Yee Chuah, D. Otten, J. Lang, and S. Kim, "Design principles for highly efficient quadrupeds and implementation on the MIT Cheetah robot," in 2013 IEEE International Conference on Robotics and Automation, 2013, pp. 3307-3312.
[18] "maxon motor," https://www.maxongroup.com/maxon/view/content/index, accessed: 2020-10-30.
[19] A. Iscen, K. Caluwaerts, J. Tan, T. Zhang, E. Coumans, V. Sindhwani, and V. Vanhoucke, "Policies modulating trajectory generators," 2019.
[
"Majorana Tower and Cellular Automaton Interpretation of Quantum Mechanics down to Planck Scales",
"Majorana Tower and Cellular Automaton Interpretation of Quantum Mechanics down to Planck Scales"
] | [
"Fabrizio Tamburini ",
"Ignazio Licata ",
"\n.l.f. IPSEOA Barbarigo\nZKM -Zentrum für Kunst und Medientechnologie\nLorentzstr. 19D-76135Venice, KarlsruheItaly, Germany\n",
"\nItaly and International Institute for Applicable Mathematics and Information Sciences (IIAMIS), B.M. Birla Science Centre\nInstitute for Scientific Methodology (ISEM) Palermo Italy School of Advanced International Studies on Theoretical and Nonlinear Methodologies of Physics\nAdarsh Nagar, Hyderabad -500 46370124BariIndia\n"
] | [
".l.f. IPSEOA Barbarigo\nZKM -Zentrum für Kunst und Medientechnologie\nLorentzstr. 19D-76135Venice, KarlsruheItaly, Germany",
"Italy and International Institute for Applicable Mathematics and Information Sciences (IIAMIS), B.M. Birla Science Centre\nInstitute for Scientific Methodology (ISEM) Palermo Italy School of Advanced International Studies on Theoretical and Nonlinear Methodologies of Physics\nAdarsh Nagar, Hyderabad -500 46370124BariIndia"
] | [] | A deterministic reformulation of quantum mechanics can bypass the usual philosophical interpretations of probability and stochasticity that are found in the literature. This can be obtained with the ontological formulation of quantum mechanics, obtained by writing the Hamiltonian of a quantum system in a way to render it mathematically equivalent to a deterministic system. Such deterministic models are thought to consist of elementary cells -cellular automata -inside which the quantities describing the dynamics oscillate in periodic orbits, extending and replacing the quantummechanical classical language based on harmonic oscillators. Here we show that the structure of the cellular automata sets find a clear physical interpretation with the infinite-components equation published by Majorana in 1932, also known as the Majorana Tower: the cellular automata are elementary building blocks generated by the Poincaré group of spacetime transformations with positive-defined energy down to the elementary building blocks of the fabric of spacetime. Interestingly, the mathematical approach here considered presents close relationships with those used for the distribution of prime numbers in the Pólya-Hilbert conjecture for the Riemann Hypothesis.PACS numbers: | 10.1134/s0040577923020101 | [
"https://arxiv.org/pdf/2111.01532v1.pdf"
] | 240,419,977 | 2111.01532 | 18d3ba38e9a815459d2a884859da109bf5581cd0 |
Majorana Tower and Cellular Automaton Interpretation of Quantum Mechanics down to Planck Scales

Fabrizio Tamburini
Ignazio Licata

IPSEOA Barbarigo, Venice, Italy; ZKM - Zentrum für Kunst und Medientechnologie, Lorentzstr. 19, D-76135 Karlsruhe, Germany
Institute for Scientific Methodology (ISEM), Palermo, Italy; School of Advanced International Studies on Theoretical and Nonlinear Methodologies of Physics, 70124 Bari, Italy; International Institute for Applicable Mathematics and Information Sciences (IIAMIS), B.M. Birla Science Centre, Adarsh Nagar, Hyderabad - 500 463, India
A deterministic reformulation of quantum mechanics can bypass the usual philosophical interpretations of probability and stochasticity that are found in the literature. This can be obtained with the ontological formulation of quantum mechanics, obtained by writing the Hamiltonian of a quantum system in a way to render it mathematically equivalent to a deterministic system. Such deterministic models are thought to consist of elementary cells - cellular automata - inside which the quantities describing the dynamics oscillate in periodic orbits, extending and replacing the quantum-mechanical classical language based on harmonic oscillators. Here we show that the structure of the cellular automata sets find a clear physical interpretation with the infinite-components equation published by Majorana in 1932, also known as the Majorana Tower: the cellular automata are elementary building blocks generated by the Poincaré group of spacetime transformations with positive-defined energy down to the elementary building blocks of the fabric of spacetime. Interestingly, the mathematical approach here considered presents close relationships with those used for the distribution of prime numbers in the Pólya-Hilbert conjecture for the Riemann Hypothesis.
I. INTRODUCTION
The never-ending debate on whether quantum mechanics (QM) is stochastic or fully deterministic in its intrinsic nature was born with the historical Bohr-Einstein debate in 1935 [1]; there may also exist a more subtle level of a discrete, deterministic type that produces the stochastic behavior as output. Then, in 1964, Bell's theorem [2][3][4] put an end to many ideal scenarios where hidden variables were the only engine for randomness, favoring instead models where non-locality arises [5,6].
The ontological quantum mechanics (OQM) here considered [7,8] is a reformulation of QM slightly different from the classical probabilistic formulations [9][10][11]. In OQM the Hamiltonian of a quantum system is rendered mathematically equivalent to that of a deterministic system with a novel mathematical language that describes structures evolving deterministically. This language can be used to describe the evolution of both a quantum and a classical system. This is a step beyond the debate started with the discussions between Bohr and Einstein, where Einstein never accepted the intrinsic stochasticity present in the language of quanta, whilst Bohr did. Differently from the well-known hidden-variable theories falsified by both theory and experiments on Bell's inequalities [12,13], this new approach is robust with respect to the Einstein-Podolsky-Rosen paradox and the problem of hidden variables [14,15].

The quantum mechanical "ontological" states are represented by sets of orthonormal unit vectors in the Hilbert space of support, which can be either finite or infinite-dimensional. By definition, a system is ontological if it evolves in time into other ontological states, with no difference between classical, quantum, and deterministic states. Locally ontological and deterministic systems can be constructed that feature quantum mechanical properties, including entanglement and the violation of Bell's inequalities. The classical dynamical system varies with time scales much smaller than the time related to the energy exchanged in any interaction considered, Δt ≪ 1/ΔE_int. The system is deterministic if it evolves from ontological states into other ontological states, and any state, either classical or quantum, is identified by a ket vector |n⟩. Of course one can have systems with continuous dynamics or with cyclic ones. As usual, if the evolution time step is discrete, then the Hamiltonian is periodic in its eigenvalues, introducing the concept of "beable", a vector state, as proposed by Bell to replace the traditional term observable of QM, which might imply interaction with an observing device or measurement issues, together with the concepts of "changeable", "superimposable" and non-local phenomena, associated with cellular automata (CA) in Hilbert spaces [16].
II. DETERMINISTIC SYSTEMS
Here we analyze two main classes of deterministic systems leading to an ontological deterministic representation of quantum mechanics. The continuous systems have a set of equations describing a continuous dynamics whose QM-type indeterminism is due to a discretization in time or, equivalently, to a tessellation of the phase space, which can go down to the Planck scale, as occurs in the search for the distribution of prime numbers with the Hilbert-Pólya approach, where one seeks Hermitian Hamiltonians whose eigenvalues describe the distribution of the zeros of Riemann's zeta function [17]. The second class is instead made of periodic models with SU(2) structure described by the infinite-components Majorana equation [18].
A. Continuous deterministic systems
The deterministic nature of a given physical system is revealed through the analysis of the eigenvalue spectrum of its Hamiltonian, which can be written as

$H = T(\vec{p}) + V(\vec{x}) + \vec{A}(\vec{x}) \cdot \vec{p}$,    (1)
where $\vec{x}$ and $\vec{p}$ are the usual coordinates and momenta, for which the usual commutation relation $[x_i, p_j] = i\delta_{ij}$ holds. The kinetic term $T(\vec{p}) \sim \vec{p}^{\,2}/2$ and the classical potential $V(\vec{x})$, responsible for the change of the geometry of the trajectories (and of spacetime, see [25]), represent the usual constituents of a standard Hamiltonian that one finds in both continuous and quantum systems, where interference patterns are present.

A route to chaos and randomness from a continuous deterministic system is clearly provided if one considers, as a simple example, only the so-called magnetic term of the Hamiltonian, $\vec{A}(\vec{x}) \cdot \vec{p}$; this alone can describe a route to chaos when a Heisenberg-like texture is introduced in the phase space of a system with Hamiltonian $H = \vec{A}(\vec{x}) \cdot \vec{p}$. In the simplest case, $\vec{A}(\vec{x}) = \vec{x}$; when one assumes a lattice geometry for the time coordinate, the Hamiltonian eigenvalues can also become periodic, with similarities to the Berry-Keating and Connes [19][20][21] semiclassical dynamics based on the class of $H = xp$ Hamiltonians that have been used in the attempts to solve the Riemann hypothesis within the Hilbert-Pólya approach. The Riemann Hypothesis is true if there exists a Hermitian or unitary operator whose eigenvalues distribute like the zeros of Riemann's ζ(z). In this way, space, time, and often also momentum, can be considered discrete. As described in the literature on prime number distribution, magnetic-term dominated Hamiltonians are not always Hermitian, showing the properties of PT-symmetric quantum systems [22,23], unless suitable modifications and ad-hoc assumptions are made on the phase space, which becomes rigged [24]. Following the idea by Hilbert and Pólya, Hermitian Hamiltonians of this type of dynamical system can describe the distribution of the zeros of Riemann's zeta function and thus of primes [25], connecting two apparently distant worlds: the fabric of spacetime and the fabric of prime numbers.

The limit to the lattice size, both for continuous and periodic dynamical systems, finds its roots down at the Planck scale, where the problems of an undefined time coordinate below the Planck time $\tau_P$, or of its equivalence with both spatial and temporal coordinates, are instead described by an indetermination relationship directly derived from Einstein's equations for the scalar proper energy E, averaged over a proper volume $L^3$, and the corresponding interval of time τ [26]. In this case the lattice is directly provided by the fluctuations in the fabric of spacetime. What is important to point out is that Einstein's equations and deterministic continuous dynamical systems can hold down to the Planck scale, with a dynamics recalling that in a Minkowski spacetime with a lattice structure. The lattice-like structure is given by the indetermination relationship between the proper energy E averaged over a given proper volume $L^3$ in General Relativity (GR),
$E = \bar{E} \sim \frac{g^{2}}{L}\, R^{(4)} = L \left[ \Delta\!\left(\frac{\Delta g}{g}\right) + \left(\frac{\Delta g}{g}\right)^{2} \right]$,    (2)
where g is the metric tensor, Δg the corresponding fluctuation, and $R^{(4)}$ is the rank-four Riemann curvature tensor $R_{iklm} \in \otimes^{4}\dot{T}$, viz., in the cotangent bundle $\dot{T}$ of a given manifold (M, g).
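To make the Berry-Keating analogy above concrete, one can diagonalize a Hermitian discretization of the symmetrized $H = (xp + px)/2$ on a truncated interval and compare the positive eigenvalues with the semiclassical counting associated with the Riemann zeros. The finite-difference construction below is a rough, illustrative sketch of ours (grid size, interval and boundary treatment are arbitrary choices, not taken from the paper).

```python
import numpy as np

N, L = 400, 60.0
x = np.linspace(1.0, L, N)            # truncated half-line [1, L], hbar = 1
h = x[1] - x[0]

P = np.zeros((N, N), dtype=complex)   # central-difference momentum (Hermitian)
for j in range(N - 1):
    P[j, j + 1] = -1j / (2 * h)
    P[j + 1, j] = +1j / (2 * h)

H = 0.5 * (np.diag(x) @ P + P @ np.diag(x))   # symmetrized xp, Hermitian
E = np.sort(np.linalg.eigvalsh(H))
E = E[E > 0][:8]

# Berry-Keating smooth counting N(E) ~ (E/2pi)(ln(E/2pi) - 1), cf. [19-21]
for e in E:
    n_smooth = (e / (2 * np.pi)) * (np.log(e / (2 * np.pi)) - 1)
    print(f"E = {e:7.3f}   N_smooth = {n_smooth:7.3f}")
```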
If we rescale this relationship down to the Planck scale $L_p$, by defining the light crossing time as τ = L and the Planck time $\tau_p$, the Einstein equations retain their validity down to the Planck scale, even if metric fluctuations over a scale larger than $L_p$ can occur; this extends the approach used in the Minkowski spacetime, making the CA structure more general and based on the properties of spacetime. We find that these fluctuations can give rise to a relationship between the curvature tensor and spacetime fluctuations that holds down to the Planck scales. Once a characteristic spatial or temporal scale, L or τ, is fixed, as in the building of a lattice structure, this corresponds to the introduction of fluctuations of the proper energy $\bar{E}$ averaged over the volume $L^3$. If we set E = ΔE and τ = Δt, considering fluctuations as large as the energy and time values considered, we can write a Heisenberg relationship that involves the Riemann tensor and the proper time,
$\Delta E \times \Delta t = \left(\frac{\tau}{\tau_p}\right)^{2} \frac{L^{2}}{g^{2}}\, R^{(4)}$    (3)
equivalently, one can write $\tau/\tau_p = L/L_P$. At Planck scales, Eq. 3 is written as
$\Delta E \times \Delta t = \frac{L_p^{2}}{g^{2}}\, R^{(4)} = \Delta\!\left(\frac{\Delta g}{g}\right) + \left(\frac{\Delta g}{g}\right)^{2}$    (4)
where ΔE is averaged over the volume $L^3$ of the 3D space-like hypersurface σ here considered, preserving the continuity of Einstein's equations down to the Planck scale, including the equivalence between Einstein-Rosen bridges and Einstein-Podolsky-Rosen states (ER=EPR) and graviton exchanges, as described in Ref. [26]. The indetermination relationship involving Planck's scales, here discussed, thus becomes an equivalence in a Minkowski-like manifold with a lattice structure. Quantum indetermination can arise from the lattice-like effects of spacetime fluctuations applied to deterministic continuous systems down to the Planck scales, as occurs for Einstein's equations, for which continuity holds. This is of course compatible with the holographic principle, where any cell occupies a volume $L \times L_P^2$ and any spatial region with magnitude L cannot contain more than $L^3/(L\, L_P^2) = L^2/L_P^2$ cells; this agrees with the holographic principle, for which the maximum number of bits stored in a region with characteristic length L is, to all effects, $L^2/L_P^2 = (\tau/\tau_p)^2$, in agreement with Eq. 3, which can be rewritten as
$\Delta E \times \Delta t = N_{bM}\, \frac{L^{2}}{g^{2}}\, R^{(4)}$    (5)
where one indicates with $N_{bM} = L^2/L_P^2$ the maximum number of bits stored there. QM thus appears as emerging from a lattice structure together with the Holographic Principle (see e.g. [27]).
This information can be the main core of an interpretation of the physics of CA in the periodic deterministic systems we will discuss below, or be stored as a particle, according to the Hamiltonian of the system considered, such as the Standard Model on a lattice [28][29][30], which can provide the characteristic levels of the energy exchanges, interactions and time intervals of its quanta.
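As a numerical illustration of the bound $N_{bM} = L^2/L_P^2 = (\tau/\tau_p)^2$, the following few lines evaluate the maximum bit content for some characteristic lengths (Planck-length value rounded; the numbers are purely order-of-magnitude and are our own worked example):

```python
L_P = 1.616e-35  # Planck length [m]

for L in (L_P, 1e-9, 1e-2, 1.0):
    N_bM = (L / L_P) ** 2        # maximum number of bits in a region of size L
    print(f"L = {L:9.3e} m -> N_bM ~ {N_bM:9.3e} bits, tau/tau_p = {L/L_P:9.3e}")
```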
B. Periodic deterministic systems
When one considers a periodic model, it carries an SU(2) symmetry, related to the rotation group [7], which is a subgroup of the Poincaré group. The elementary building blocks considered here consist of a CA system that updates itself at every time step of duration $\delta t$ and then, after a period $T = N\,\delta t$, returns to its initial position. They behave like gears that, cyclically rotating, concur to generate the perceived randomness of quantum mechanics when the dynamics has support on a lattice, where the time coordinate of the manifold is divided into discrete intervals $\tau$; this can be extended up to the hypothesis of a countably infinite lattice, where the Hamiltonian eigenvalues will in any case be periodic [7]. Each single element of this construction with a finite number of states can thus be assumed to be periodic in time, with an SU(2) symmetry in a discrete, time-quantized manifold. When one extends this procedure to the continuum, the Hamiltonian would need to be linearly dependent on the linear momentum, as in the $H = xp$ class of dynamical systems already discussed.
Deterministic models can be seen as consisting of elementary cells inside which the data just oscillate in periodic orbits with SU(2) symmetry. Rotation means angular momentum, as the invariant of the Poincaré group corresponding to rotation is the angular momentum. The energy eigenstates can be interpreted as the eigenstates $|m\rangle$ of $L_3$, the so-called $z$-component of a three-dimensional rotator. The distribution of the eigenstates of these cells behaves as if produced by the infinite-component Majorana equation (Majorana Tower), generated by the group of Lorentz boosts belonging to the Poincaré group of spacetime transformations. Of course, finite groups of rotators correspond to finite subgroups of the Majorana Tower, where the matrix elements ${}_x\langle r|s\rangle_p$ can be deduced from recursion relations, by fixing in this simple example $H = \omega n$ and $x = s/\sqrt{\ell}$, $p = s/\sqrt{\ell}$, such as
$$2r\,{}_x\langle r|s\rangle_p = \langle \ell, s|\,a_x - i a_y\,|\ell, s+1\rangle\,{}_x\langle r|s-1\rangle_p - 2\,\langle \ell, s|\,b_x + i b_y\,|\ell-1, s-1\rangle\,{}_x\langle r|s+1\rangle_p, \tag{6}$$
in combination with ${}_x\langle r|s\rangle_p = {}_p\langle s|r\rangle_x^{\,*}$, and further with the cyclic relations from Refs. [7] and [18], involving the infinitesimal Lorentz transformations in the variables $(ct, x, y, z)$,
$$a_x = i\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -1 \\ 0 & 0 & 1 & 0 \end{pmatrix};\quad a_y = i\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 \end{pmatrix};\quad a_z = i\begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \tag{7}$$
obtaining a Majorana equation that relates the coefficients of the CA to the infinitesimal Lorentz transformations of Eqs. (7) and (8), where the energy $E$ depends on the angular momentum that characterizes the CA and is positive definite,
$$\left[W + (\boldsymbol{\alpha}\cdot\mathbf{p}) - \frac{E}{\ell + \frac{1}{2}}\right]\Psi = 0, \tag{9}$$
where $W$ is the general energy from the Hamiltonian, $\boldsymbol{\alpha}$ the set of Dirac matrices, $\mathbf{p}$ the momentum, $\ell$ the angular momentum eigenvalue, and $E$ the energy considered to build the lattice structure. In the limit $\delta t \to 0$, with infinite period $T$, which corresponds to $\ell = 0$, this system turns into a point moving continuously along a circle, which behaves just like the standard harmonic oscillator. Any CA can be seen as an excited Majorana state of the state $\ell = 0$. It is easy to show that down to the Planck scales one finds from Eq. (9) the rules dictated by the holographic principle for the energy $E$ and the information stored there. The larger $\ell$ is, the smaller the energy and information density contained in a 3D hypervolume.
To preserve the total information content integrated in the hypervolume, the latter has to grow linearly with $\ell$, with the corresponding vacuum and antivacuum states growing together with their entropy. As previously said, depending on the Hamiltonian, these excited vacuum states described by the sets of CA can acquire mass, breaking for these states the infinite tower ruling the CA and reducing it to the value $\ell = 1/2$ if the Majorana neutrino exists.
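The three generators in Eq. (7) close under commutation like angular momentum operators, which is the SU(2)/rotation structure invoked throughout this section. A minimal NumPy check (ours, purely for verification of the reconstructed matrices) confirms $[a_x, a_y] = i\,a_z$:

```python
import numpy as np

# Infinitesimal rotation generators of Eq. (7), acting on (ct, x, y, z).
a_x = 1j * np.array([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]])
a_y = 1j * np.array([[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0], [0, -1, 0, 0]])
a_z = 1j * np.array([[0, 0, 0, 0], [0, 0, -1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])

# Rotation subalgebra of the Lorentz group: [a_x, a_y] = i a_z.
comm = a_x @ a_y - a_y @ a_x
assert np.allclose(comm, 1j * a_z)
```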
III. CONCLUSIONS
We formulate a Majorana representation of the cellular automata for the ontological formulation of quantum mechanics and give an interpretation of the CA in terms of symmetries of spacetime. From the Poincaré group of spacetime transformations and the subgroups of the Lorentz transformations and spatial rotations, we obtain a relationship in terms of a Majorana-Dirac equation that gives the eigenvalues for the coefficients of the vector states describing the basic dynamics in a lattice-like structure that can take its origin from the properties of spacetime at Planck scales. This represents a deep link between the basic fabric constituents of spacetime, represented by the transformation groups and corresponding invariants, and the structure of cellular automata that represent the fundamental building blocks of OQM. The indetermination relationship obtained from Einstein's equations shows that the scale at which determinism can become or remain manifest is the Planck scale, where the OQM interpretation can be obtained from a QM system equivalent to a deterministic dynamical system, supporting also a new interpretation of nonlocality in the ER=EPR scenario, making a parallelism between a deterministic Einstein-Rosen bridge and entangled EPR states [26], where one joins elementary cells into a construction in which they interact, again allowing only deterministic interaction laws mathematically closely related to the search for prime numbers through the Pólya-Hilbert approach to the Riemann Hypothesis [25]. In other words, what is normally thought of as classical stochastic quantum mechanics can be attributed to the effect of fast, almost hidden, variables that in any case support Bell's inequalities down to the Planck scale, directly from the texture of energy and spacetime fluctuations. The concept of ontological QM is related to dynamical systems and variables rapidly oscillating at the Planck scale (where we are forced to revise the ordinary continuous concepts of space and time), something that is to all effects similar to a set of hidden variables and that forms particles. Ontological is intended therefore as a global reflection on the languages of physics, classical and quantum, to set the conceptual conditions for their unification. In this view, the Planck scale becomes a necessary scale where a lattice-like structure naturally arises and where one can find a whole topography of the "nonlocal", both below and above the Planck scale. Therefore, in a future and more complete formulation of these phenomena, QM, quantum field theory, and GR will have to converge to a common language and set of concepts.
[1] Marage, P. and Wallenborn, G., The Debate between Einstein and Bohr, or How to Interpret Quantum Mechanics. In: Marage, P. and Wallenborn, G. (eds), The Solvay Councils and the Birth of Modern Physics, Science Networks · Historical Studies, vol. 22, Birkhäuser, Basel (1999).
[2] Bell, J. S., On the Einstein Podolsky Rosen Paradox, Physics Physique Fizika 1, 195-200 (1964).
[3] Bell, J. and Aspect, A., Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy (2nd ed.), Cambridge University Press, Cambridge (UK) (2004). doi:10.1017/CBO9780511815676
[4] Bell, M. and Gao, S. (Eds.), Quantum Nonlocality and Reality: 50 Years of Bell's Theorem, Cambridge University Press, Cambridge (2016). doi:10.1017/CBO9781316219393
[5] Scarani, V., Bell Nonlocality, Oxford University Press, Oxford, UK (2019).
[6] Bohm, D., A Suggested Interpretation of the Quantum Theory in Terms of "Hidden Variables" I, Physical Review 85, 166-179 (1952).
[7] 't Hooft, G., Deterministic quantum mechanics: the mathematical equations, submitted to the Article Collection "Towards a Local Realist View of the Quantum Phenomenon", Frontiers Research Topic, ed. A. Casado et al. (2020).
[8] Licata, I., Quantum Mechanics Interpretation on Planck Scale, Ukr. J. Phys. 65, 17-30 (2020).
[9] Dirac, P. A. M., The Principles of Quantum Mechanics, Clarendon Press, Oxford, first ed. 1930, third edition (1947).
[10] Hossenfelder, S., Testing super-deterministic hidden variables theories, Found. Phys. 41, 1521 (2011), arXiv:1105.4326 [quant-ph]; id., Testing superdeterministic conspiracy, Journal of Physics: Conference Series 504, 012018 (2014). doi:10.1088/1742-6596/504/1/012018
[11] Hossenfelder, S. and Palmer, T., Rethinking Superdeterminism, Front. Phys. (2020). https://doi.org/10.3389/fphy.2020.00139
[12] Bertlmann, R. and Zeilinger, A. (Eds.), Quantum [Un]speakables, from Bell to Quantum Information, Springer, Berlin/Heidelberg, Germany (2002).
[13] Bertlmann, R. and Zeilinger, A. (Eds.), Quantum [Un]speakables II, Springer Nature Switzerland AG, Basel, Switzerland (2017).
[14] Einstein, A., Podolsky, B., and Rosen, N., Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?, Phys. Rev. 47(10), 777-780 (1935).
[15] 't Hooft, G., Explicit construction of Local Hidden Variables for any quantum theory up to any desired accuracy, arXiv:2103.04335 [quant-ph].
[16] 't Hooft, G., The Cellular Automaton Interpretation of Quantum Mechanics, Springer Open (Open Access) (2016). https://doi.org/10.1007/978-3-319-41285-6
[17] Schumayer, D. and Hutchinson, D. A. W., Physics of the Riemann Hypothesis, Rev. Mod. Phys. 83, 307-330 (2011).
[18] Majorana, E., Teoria Relativistica di Particelle Con Momento Intrinseco Arbitrario (Relativistic theory of particles with intrinsic arbitrary momentum), Il Nuovo Cimento 9, 335 (1932). English translation by C. A. Orzalesi in Technical Report 792, University of Maryland (1968).
[19] Berry, M. V. and Keating, J. P., The Riemann zeros and eigenvalue asymptotics, SIAM Review 41(2), 236-266 (1999).
[20] Berry, M. V. and Keating, J. P., H = xp and the Riemann zeros, in Supersymmetry and Trace Formulae: Chaos and Disorder, ed. Keating, J. P., Khmelnitskii, D. E., and Lerner, I. V., Kluwer (1999).
[21] Connes, A., Trace formula in noncommutative geometry and the zeros of the Riemann zeta function, Selecta Mathematica New Series 5, 29 (1999), math.NT/9811068.
[22] Strumia, A., Interpretation of Quantum Mechanics with Indefinite Norm, Physics 1, 17-32 (2019).
[23] Bender, C. M., Brody, D. C., and Müller, M. P., Hamiltonian for the Zeros of the Riemann Zeta Function, Phys. Rev. Lett. 118(13), 130201 (2017), arXiv:1608.03679.
[24] de la Madrid, R., The role of the rigged Hilbert space in Quantum Mechanics, Eur. J. Phys. 26, 287 (2005).
[25] Tamburini, F. and Licata, I., On the Riemann Hypothesis and Majorana's infinite-component equation, XXxx (2021).
[26] Tamburini, F. and Licata, I., General Relativistic Wormhole Connections from Planck-Scales and the ER = EPR Conjecture, Entropy 22(1), 3 (2020).
[27] Licata, I., The Big Computer. Complexity and Computability in the Physical Universe. In: Benci, V., Cerrai, P., Freguglia, P., Israel, G., and Pellegrini, C. (eds), Determinism, Holism, and Complexity, Springer, Boston, MA (2003). https://doi.org/10.1007/978-1-4757-4947-2_10
[28] Preparata, G. and Xue, S., The standard model on Planck lattice: mixing angles vs quark masses, Nuovo Cim. A 109, 1455-1460 (1996).
[29] Preparata, G. and Xue, S., Do we live on a lattice? Fermion masses from the Planck mass, Phys. Lett. B 264, 35-38 (1991).
[30] Preparata, G., Quantum gravity, the Planck lattice and the Standard Model, invited talk at the "VII Marcel Grossman Meeting on General Relativity", Stanford, July 1994, arXiv:hep-th/9503102.
| [] |
[
"GENERALIZED EXPECTATION MAXIMIZATION FRAMEWORK FOR BLIND IMAGE SUPER RESOLUTION",
"GENERALIZED EXPECTATION MAXIMIZATION FRAMEWORK FOR BLIND IMAGE SUPER RESOLUTION"
] | [
"Yuxiao Li \nDepartment of Electronic Engineering\nTsinghua University\nBeijingChina\n",
"Zhiming Wang \nDepartment of Electronic Engineering\nTsinghua University\nBeijingChina\n",
"Yuan Shen shenyuanee@tsinghua.edu.cn \nDepartment of Electronic Engineering\nTsinghua University\nBeijingChina\n"
] | [
"Department of Electronic Engineering\nTsinghua University\nBeijingChina",
"Department of Electronic Engineering\nTsinghua University\nBeijingChina",
"Department of Electronic Engineering\nTsinghua University\nBeijingChina"
] | [] | Learning-based methods for blind single image super resolution (SISR) conduct the restoration by a learned mapping between high-resolution (HR) images and their lowresolution (LR) counterparts degraded with arbitrary blur kernels. However, these methods mostly require an independent step to estimate the blur kernel, leading to error accumulation between steps. We propose an end-to-end learning framework for the blind SISR problem, which enables image restoration within a unified Bayesian framework with either full-or semi-supervision. The proposed method, namely SREMN, integrates learning techniques into the generalized expectation-maximization (GEM) algorithm and infers HR images from the maximum likelihood estimation (MLE). Extensive experiments show the superiority of the proposed method with comparison to existing work and novelty in semi-supervised learning. | 10.48550/arxiv.2305.13880 | [
"https://export.arxiv.org/pdf/2305.13880v1.pdf"
] | 258,841,736 | 2305.13880 | 61ee7130005cdf47625cd940a504d1ee1f6afdf9 |
GENERALIZED EXPECTATION MAXIMIZATION FRAMEWORK FOR BLIND IMAGE SUPER RESOLUTION
Yuxiao Li
Department of Electronic Engineering
Tsinghua University
BeijingChina
Zhiming Wang
Department of Electronic Engineering
Tsinghua University
BeijingChina
Yuan Shen shenyuanee@tsinghua.edu.cn
Department of Electronic Engineering
Tsinghua University
BeijingChina
GENERALIZED EXPECTATION MAXIMIZATION FRAMEWORK FOR BLIND IMAGE SUPER RESOLUTION
Index Terms - Blind SISR, blur kernel, Bayesian model, GEM algorithm, semi-supervision
Learning-based methods for blind single image super resolution (SISR) conduct the restoration by a learned mapping between high-resolution (HR) images and their lowresolution (LR) counterparts degraded with arbitrary blur kernels. However, these methods mostly require an independent step to estimate the blur kernel, leading to error accumulation between steps. We propose an end-to-end learning framework for the blind SISR problem, which enables image restoration within a unified Bayesian framework with either full-or semi-supervision. The proposed method, namely SREMN, integrates learning techniques into the generalized expectation-maximization (GEM) algorithm and infers HR images from the maximum likelihood estimation (MLE). Extensive experiments show the superiority of the proposed method with comparison to existing work and novelty in semi-supervised learning.
INTRODUCTION
Single image super resolution (SISR) refers to the restoration of a plausible and detailed high-resolution (HR) image from the corresponding low-resolution (LR) image, with a wide range of applications [1,2]. A widely adopted model for SISR is a degradation process where the LR image is a blurred, decimated, and noisy version of its HR counterpart [3,4]. Since the real-world distributions of these variables are difficult to access, restoring HR images is a classic ill-posed inverse problem without a closed-form solution.
Learning-based methods for SISR represent to be a popular trend due to their superiority in dealing with complicated manifold-like image distributions. Pioneered by SRCNN [5], these methods learn the mapping between LR and HR image pairs with developed neural networks, such as residual networks (ResNets) [6] and generative adversarial networks (GANs) [7]. However, existing works typically synthesize LR images via a bicubic degradation model, and cannot generalize well to practical cases with complicated degradation settings.
Recently, learning-based methods for blind SISR have been proposed to address this problem [8,9,10]. Most works decompose SISR into two sequential steps and independently estimate the blur kernel from LR images before inferring the HR images [11,12,13]. Such separation of kernel estimation and HR image restoration may make the two steps incompatible and result in error accumulation. [14] proposes an end-to-end network for blind SISR where both the blur kernel and the HR image can be obtained simultaneously. However, the proposed method lacks a unified theoretical framework in modeling, so the estimation of real-world blur kernels remains challenging. Besides, existing techniques for the blind SISR problem are supervised and require LR-HR image pairs, leading to a waste of unpaired LR data.
We propose a unified framework based on the generalized expectation-maximization (GEM) algorithm for blind super resolution (SREMN), which can address the blind SISR problem in either a supervised or a semi-supervised manner. Our contributions are summarized as follows:
• We present a Bayesian model for image degradation.
The model inherits a transparent interpretation and is flexible enough to scale to multiple types of degradations with a theoretical backbone.
• We propose an end-to-end GEM-based network for blind SISR, which is applicable both in the supervised scheme and in the novel semi-supervised scheme.
• The proposed method shows potential capability of deep neural networks on complex problems involving latent variables by employing statistical techniques.
• The proposed method shows competitive results against state-of-the-art methods on blind SISR under different settings, showing the superiority in practical use.
EXPECTATION-MAXIMIZATION FRAMEWORK
SISR can be modeled as a degradation process where the LR image $y$ is a blurred, decimated, and noisy version of an HR image $x$:
$$y = (x \otimes k)\downarrow_s + n, \tag{1}$$
where $\otimes$ represents the convolution of $x$ with kernel $k$, $\downarrow_s$ denotes the standard $s$-fold downsampler, and $n$ represents environment noise, commonly modeled as AWGN with given variance $\sigma_n^2 I$. The key challenge lies in the latent distribution over the blur kernel $k$, which can hardly be either known in advance or fully simulated by training data.
Mixture blur kernel
Suppose the blur kernel $k$ can be approximated by a mixture of $L$ Gaussian kernels with a spectrum of bandwidths, where the bandwidth $b_l^2$ of the $l$-th kernel is a random variable following an exponential distribution, i.e.,
$$k(p_x, p_y; \mathbf{b}^2) = \frac{1}{L}\sum_{l=1}^{L}\frac{1}{2\pi b_l^2}\exp\left(-\frac{p_x^2 + p_y^2}{2b_l^2}\right) := k_{\mathbf{b}}, \qquad p(b_l^2) = \mathcal{E}(b_l^2; \lambda_l),\ l = 1, \ldots, L, \tag{2}$$
where $p_x, p_y$ are the distances from the origin along the horizontal and vertical axes, respectively. Denote the bandwidth vector $\mathbf{b}^2 = (b_1^2, \ldots, b_L^2)^\top \in \mathbb{R}^L$ and the vector of exponential parameters $\boldsymbol{\lambda} = (\lambda_1, \ldots, \lambda_L)^\top \in \mathbb{R}^L$. Such a blur kernel can therefore represent the effect of a wide range of blur types. We denote a mixture blur kernel with bandwidth vector $\mathbf{b}^2$ as $k_{\mathbf{b}}$.
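A minimal NumPy sketch of the pipeline of Eqs. (1)-(2) (ours; the 15x15 kernel support, the exponential scale, the seed, and the stride-based downsampler are illustrative choices, not settings from the paper):

```python
import numpy as np
from scipy.signal import convolve2d

def mixture_kernel(b2s, size=15):
    """Mixture-of-Gaussians blur kernel k_b of Eq. (2), normalized to sum to one."""
    r = size // 2
    px, py = np.meshgrid(np.arange(-r, r + 1), np.arange(-r, r + 1))
    k = sum(np.exp(-(px ** 2 + py ** 2) / (2.0 * b2)) / (2.0 * np.pi * b2) for b2 in b2s)
    k /= len(b2s)
    return k / k.sum()

def degrade(x, k, s=2, sigma_n=0.01, rng=None):
    """LR image of Eq. (1): blur with k, s-fold downsample, add Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    y = convolve2d(x, k, mode="same", boundary="symm")[::s, ::s]
    return y + sigma_n * rng.standard_normal(y.shape)

rng = np.random.default_rng(0)
b2s = rng.exponential(scale=2.0, size=3)   # bandwidths b_l^2 ~ E(b_l^2; lambda_l)
y = degrade(rng.random((64, 64)), mixture_kernel(b2s), rng=rng)
```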
Bayesian Model
We assume that the blur kernel can be approximated by the mixture Gaussian kernel defined above. The prior distribution for the bandwidth vector is given as:
$$p(\mathbf{b}^2) = \mathcal{E}(\mathbf{b}^2; \boldsymbol{\lambda}), \tag{3}$$
where $\mathcal{E}(\cdot; \boldsymbol{\lambda})$ denotes an exponential distribution with parameter $\boldsymbol{\lambda}$, set by experience in practice. According to Eq. (1), we have the likelihood distribution of the observed LR data as follows:
$$p(y|x, \mathbf{b}^2) = \mathcal{N}\big(y;\ (x \otimes k_{\mathbf{b}})\downarrow_s,\ \sigma_n^2 I\big), \tag{4}$$
where $\mathcal{N}(\cdot; \mu, \sigma^2)$ denotes a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. Additionally, we assume independence between the HR image and the bandwidth vector distributions, i.e.,
$$p(x, \mathbf{b}^2) = p(x)\,p(\mathbf{b}^2). \tag{5}$$
Thus, a full Bayesian model for the problem can be obtained from Eqs. (3)-(4).
Due to its intractability, we take the mixture kernel bandwidth $\mathbf{b}^2$ as latent data. The complete data of the blind SISR problem is thus $(y, \mathbf{b}^2)$. The goal then turns to inferring the posterior distribution of the unknown parameter $x$ conditioned on the observed data $y$ as well as the latent data $\mathbf{b}^2$, i.e., $p(x|y, \mathbf{b}^2)$.
GEM Algorithm
With the complete data being $(y, \mathbf{b}^2)$ and the unknown parameter being $x$, the unknown HR image can be obtained from the maximum likelihood estimate (MLE) of the parameter. The likelihood can be lower bounded as:
$$\log p(y|x) \ge \int_{\mathbf{b}^2} q(\mathbf{b}^2)\log\frac{p(y, \mathbf{b}^2|x)}{q(\mathbf{b}^2)}\,d\mathbf{b}^2 = \mathbb{E}_{q(\mathbf{b}^2)}\big[\log p(y|\mathbf{b}^2, x)\big] - D_{KL}\big(q(\mathbf{b}^2)\,\|\,p(\mathbf{b}^2)\big) := \mathcal{F}(q, x; y), \tag{6}$$
where $D_{KL}$ is the Kullback-Leibler divergence; the inequality results from Jensen's inequality and becomes an equality if and only if $q(\mathbf{b}^2) = p(\mathbf{b}^2|y, x)$. The GEM algorithm seeks the MLE of the marginal likelihood by iteratively applying the following two steps:
• Expectation step: $q^{(n)} = \operatorname*{argmax}_q \mathcal{F}(q, x^{(n)}; y)$
• Maximization step: $x^{(n+1)} = \operatorname*{argmax}_x \mathcal{F}(q^{(n)}, x; y)$
NETWORK LEARNING SCHEMES
In conventional applications of GEM, the two steps are conducted alternately by optimizing the analytical expression of $\mathcal{F}(q, x; y)$ in Eq. (6). However, this is hardly possible in blind SISR, as the distribution of image data is far more complex and intractable. We construct two neural modules to learn the optimization results of the two steps.
Neural Modules
In the E-step, the optimization of $q$ is required. Following the mixture kernel assumption in Eq. (2), we assume that $q$ is exponential with hyper-parameter $\hat{\boldsymbol{\lambda}}$ learned by the E-Net:
$$\hat{q}(\mathbf{b}^2) = \mathcal{E}(\mathbf{b}^2; \hat{\boldsymbol{\lambda}}), \qquad \hat{\boldsymbol{\lambda}} = f_{\phi_E}(y), \tag{7}$$
where $f_{\phi_E}(\cdot): y \to \hat{\boldsymbol{\lambda}}$ denotes a vector-valued function parameterized by $\phi_E$ and learned with the E-Net. In the M-step, the estimation of the HR image $x$ is achieved by
$$\hat{x} = g_{\phi_M}(y, k_{\mathbf{b}}), \tag{8}$$
where $g_{\phi_M}(\cdot): y, k_{\mathbf{b}} \to x$ denotes a vector-valued function parameterized by $\phi_M$ and learned with the M-Net. Therefore, the objective function of the neural modules with respect to the parameters $\phi_E$ and $\phi_M$ can be expressed as:
$$\hat{\mathcal{F}}(\phi_E, \phi_M; y) = \mathbb{E}_{\mathcal{E}(\mathbf{b}^2; \hat{\boldsymbol{\lambda}})}\big[\log p(y|\mathbf{b}^2, \hat{x})\big] - D_{KL}\big(\mathcal{E}(\hat{\boldsymbol{\lambda}})\,\|\,\mathcal{E}(\boldsymbol{\lambda})\big), \tag{9}$$
where $\hat{\boldsymbol{\lambda}} = f_{\phi_E}(y)$ and $\hat{x} = g_{\phi_M}(y, k_{\mathbf{b}})$ with $\mathbf{b}^2 \sim \mathcal{E}(\mathbf{b}^2; \hat{\boldsymbol{\lambda}})$.
Unfolded into an end-to-end learning network as in Fig. 1, the optimization w.r.t. $\phi_E, \phi_M$ can be expressed as follows:
$$\phi_E, \phi_M = \operatorname*{argmax}_{\phi_E, \phi_M} \hat{\mathcal{F}}(\phi_E, \phi_M; y). \tag{10}$$
Supervised and Semi-supervised Learning Schemes
Suppose we are given a labeled dataset $\mathcal{D}_1(\mathcal{X}, \mathcal{Y})$ with $N$ i.i.d. sample pairs $\{x_i, y_i\}_{i=1}^N$. Additionally, we are given an unlabeled dataset $\mathcal{D}_2(\bar{\mathcal{Y}})$ with $M$ i.i.d. LR image samples $\{\bar{y}_j\}_{j=1}^M$. Unlike common supervised learning on dataset $\mathcal{D}_1$ alone, the proposed SREMN can utilize both datasets.
The overall loss function of the SREMN network over the two datasets is as follows:
$$\mathcal{L}(\phi_E, \phi_M; \mathcal{X}, \mathcal{Y}, \bar{\mathcal{Y}}) = \alpha_g \cdot \mathcal{L}_{GEM}(\phi_E, \phi_M; \mathcal{Y}, \bar{\mathcal{Y}}) + \alpha_r \cdot \mathcal{L}_{REG}(\phi_E; \mathcal{X}, \mathcal{Y}), \tag{11}$$
where $\alpha_g, \alpha_r$ are fine-tuning weights that balance the effect of the GEM term and the regularization term over the given datasets. It can be seen that the amount of supervision can be safely adjusted by the number of samples in the two datasets.
The first term, $\mathcal{L}_{GEM}(\phi_E, \phi_M; \mathcal{Y}, \bar{\mathcal{Y}})$, is unsupervised, set to be the negative expectation of the GEM objective function over the given datasets:
$$\mathcal{L}_{GEM}(\phi_E, \phi_M; \mathcal{Y}, \bar{\mathcal{Y}}) = -\mathbb{E}_{y\sim\mathcal{Y},\bar{\mathcal{Y}}}\big[\hat{\mathcal{F}}(\phi_E, \phi_M; y)\big]. \tag{12}$$
The second term is a supervised one on $\mathcal{D}_1$ for regularization, expressed as:
$$\mathcal{L}_{SUP}(\phi_M; \mathcal{X}, \mathcal{Y}) = \mathbb{E}_{x,y\sim\mathcal{X},\mathcal{Y}}\big[\|x - g_{\phi_M}(y)\|^2\big]. \tag{13}$$
Note that if there exist ground-truth kernels in the training set, i.e., $\mathcal{D}_k(\mathcal{K}) = \{k_i\}_{i=1}^N$ paired with $\mathcal{D}_1(\mathcal{X}, \mathcal{Y}) = \{x_i, y_i\}_{i=1}^N$ is given, the supervised term can include a constraint on the estimated kernels:
$$\mathcal{L}_{SUP}(\phi_M; \mathcal{X}, \mathcal{Y}, \mathcal{K}) = \mathbb{E}_{x,y\sim\mathcal{X},\mathcal{Y}}\big[\|x - g_{\phi_M}(y)\|^2\big] + \mathbb{E}_{k\sim\mathcal{K}}\big[\|k - k_{\mathbf{b}}\|^2\big]. \tag{14}$$
4. EXPERIMENTS
Experimental Setup
We adopt the experimental setting for training and testing first introduced in [8] and utilized in [14]. The training set includes 3450 HR images from DIV2K [22] and Flickr2K [23], with LR images synthesized by anisotropic Gaussian kernels. The benchmark dataset DIV2KRK is utilized as the testing set. The Adam [24] optimizer is adopted for training. The learning rate is 0.0002, and the decay rates of the first and second moments of the gradients are β1 = 0.9 and β2 = 0.99, respectively. All models are trained for 1000 epochs with batch size 16 on a single RTX1080Ti GPU.
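The stated optimizer settings translate directly into, e.g., PyTorch (a sketch; the placeholder module below stands in for whatever network wraps the E-Net and M-Net):

```python
import torch

model = torch.nn.Sequential(torch.nn.Conv2d(1, 8, 3, padding=1))  # placeholder for the E-/M-Nets
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4, betas=(0.9, 0.99))
```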
We apply the PSNR and SSIM metrics for image quality assessment, calculated on the Y channel of the transformed YCbCr space. For completeness, we compare our method with competitors from four different classes, shown in Table 1.
Quantitative Results
Our method outperforms SOTA SR results of blind SR with sequential steps, while achieving similar results to DAN [14], a single-step blind SR method. Quantitative results are shown in Table 1. Our method ranks well in both evaluation metrics, as it encourages diverse outputs with semantic variables and also has a well-defined objective function to regularize the generation. Though a little inferior to DAN in some cases, the proposed method shows some improvements from the qualitative perspective, as illustrated in Fig. 2. The inferiority may result from the following reasons: 1) we use a single RTX1080Ti GPU for training, while the authors of DAN use distributed training with 8 RTX2080Ti GPUs; 2) our model is trained with a batch size of 16, while DAN is trained with a batch size of 64, enabled by distributed training. Therefore, the proposed method is considered to have comparable and more robust performance compared to DAN.

Semi-supervised Learning

In addition to the N samples of LR-HR pairs claimed above, we introduce M LR samples without corresponding HR samples into the training, to see if our method can utilize such information. We define the unsupervised rate of the network learning as η = M/(M + N), which increases with the amount of additional unlabeled data. We compare the performance of the proposed approach under 3 different unsupervised rates, i.e., η = 0, 0.5, 0.8 for M = 0, N, 4N. Quantitative results are shown in Table 2. It can be seen that models trained with a higher unsupervised rate (i.e., with more unlabeled data) tend to have better results. This validates the proposed methodology that the potential of unlabeled data can be excavated by probabilistic modeling.

The learning curves of the proposed SREMN under different unsupervised rates are illustrated in Fig. 4. It can be seen that the learning processes converge at around 600 epochs, while the methods with higher unsupervised rates converge slower, though with better ultimate performance, in accordance with intuition.
CONCLUSION
We propose a GEM framework for blind SISR (SREMN), which embeds the general image degradation model in a Bayesian model and enables efficient estimation of HR images. The proposed method leverages benefits from both model-based and learning-based methods, and is novel in conducting blind SISR in a semi-supervised manner. Future work will focus on a more flexible framework for image enhancement, integrating multiple related tasks.
Fig. 1. The architecture of SREMN, consisting of an E-Net for blur kernel bandwidths and an M-Net for HR images.

Fig. 2. Results of img 001 and 002 in Set5 (bandwidth 3.2).

Fig. 3. Results of img 003 in Set5 with kernel bandwidths in [1.8, 2.2, 2.6, 3.0, 3.2]. It can be seen that the proposed SREMN is more robust over different settings.

Fig. 4. Learning curves under different unsupervised rates. A higher rate results in a more challenging convergence of training.

Table 1. Quantitative comparisons with SOTA SR methods. Best results are highlighted in red and blue respectively.

TYPES   | METHOD                                   | SCALE ×2 PSNR | SSIM   | SCALE ×4 PSNR | SSIM
CLASS 1 | BICUBIC                                  | 28.73         | 0.8040 | 25.33         | 0.6795
CLASS 1 | BICUBIC KERNEL + ZSSR [15]               | 29.10         | 0.8215 | 25.61         | 0.6911
CLASS 1 | EDSR [16]                                | 29.17         | 0.8216 | 25.64         | 0.6928
CLASS 1 | RCAN [14]                                | 29.20         | 0.8223 | 25.66         | 0.6936
CLASS 2 | PDN [17] - 1ST IN NTIRE'19 TRACK4        | /             | /      | 26.34         | 0.7190
CLASS 2 | WDSR [18] - 1ST IN NTIRE'19 TRACK2       | /             | /      | 21.55         | 0.6841
CLASS 2 | WDSR [18] - 1ST IN NTIRE'19 TRACK3       | /             | /      | 21.54         | 0.7016
CLASS 2 | WDSR [18] - 2ND IN NTIRE'19 TRACK4       | /             | /      | 25.64         | 0.7144
CLASS 2 | JI et al. [19] - 1ST IN NTIRE'20 TRACK1  | /             | /      | 25.43         | 0.6907
CLASS 3 | CORNILLERE et al. [20]                   | 29.46         | 0.8474 | /             | /
CLASS 3 | MICHAELI et al. [21] + SRMD [11]         | 25.51         | 0.8083 | 23.34         | 0.6530
CLASS 3 | MICHAELI et al. [21] + ZSSR [15]         | 29.37         | 0.8370 | 26.09         | 0.7138
CLASS 3 | KERNELGAN [8] + SRMD [11]                | 29.57         | 0.8564 | 25.71         | 0.7265
CLASS 3 | KERNELGAN [8] + USRNET [13]              | /             | /      | 20.06         | 0.5359
CLASS 3 | KERNELGAN [8] + ZSSR [15]                | 30.36         | 0.8669 | 26.81         | 0.7316
CLASS 4 | DAN [14]                                 | 32.56         | 0.8997 | 27.55         | 0.7582
CLASS 4 | SREMN (OURS)                             | 32.25         | 0.9370 | 27.74         | 0.8015

Table 2. Quantitative results of the proposed method with different unsupervised rates.

METHODS               | PSNR    | SSIM   | TIME (H)
Semi-SREMN (η = 0)    | 30.4240 | 0.9252 | 40.7
Semi-SREMN (η = 0.5)  | 30.4901 | 0.9262 | 57.5
Semi-SREMN (η = 0.8)  | 30.7218 | 0.9300 | 82.6
[1] Siu, W. and Hung, K.-W., Review of image interpolation and super-resolution, Proceedings of the 2012 Asia Pacific Signal and Information Processing Association Annual Summit and Conference, pp. 1-10, 2012.
[2] Dai, D., Wang, Y., Chen, Y., and Van Gool, L., Is image super-resolution helpful for other vision tasks?, 2016 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1-9, 2016.
[3] Elad, M. and Feuer, A., Restoration of a single super-resolution image from several blurred, noisy, and undersampled measured images, IEEE Transactions on Image Processing, vol. 6, no. 12, pp. 1646-1658, 1997.
[4] Liu, C. and Sun, D., On bayesian adaptive video super resolution, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 36, pp. 346-360, 2014.
[5] Dong, C., Loy, C. C., He, K., and Tang, X., Learning a deep convolutional network for image super-resolution, in ECCV, 2014.
[6] Zhang, Y., Tian, Y., Kong, Y., Zhong, B., and Fu, Y., Residual dense network for image super-resolution, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2472-2481, 2018.
[7] Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y.-H., Dong, C., Loy, C. C., Qiao, Y., and Tang, X., Esrgan: Enhanced super-resolution generative adversarial networks, in ECCV Workshops, 2018.
[8] Bell-Kligler, S., Shocher, A., and Irani, M., Blind super-resolution kernel estimation using an internal-gan, in NeurIPS, 2019.
[9] Bulat, A., Yang, J., and Tzimiropoulos, G., To learn image super-resolution, use a gan to learn how to do image degradation first, in ECCV, 2018.
[10] Gu, J., Lu, H., Zuo, W., and Dong, C., Blind super-resolution with iterative kernel correction, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1604-1613, 2019.
[11] Zhang, K., Zuo, W., and Zhang, L., Learning a single convolutional super-resolution network for multiple degradations, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3262-3271, 2018.
[12] Zhang, K., Zuo, W., and Zhang, L., Deep plug-and-play super-resolution for arbitrary blur kernels, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1671-1681, 2019.
[13] Zhang, K., Van Gool, L., and Timofte, R., Deep unfolding network for image super-resolution, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3214-3223, 2020.
[14] Luo, Z., Huang, Y., Li, S., Wang, L., and Tan, T., Unfolding the alternating optimization for blind super resolution, ArXiv, abs/2010.02631, 2020.
[15] Shocher, A., Cohen, N., and Irani, M., "Zero-shot" super-resolution using deep internal learning, in CVPR, 2018.
[16] Lim, B., Son, S., Kim, H., Nah, S., and Lee, K. M., Enhanced deep residual networks for single image super-resolution, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1132-1140, 2017.
[17] Ma, J., Wang, X., and Jiang, J., Image superresolution via dense discriminative network, IEEE Transactions on Industrial Electronics, vol. 67, pp. 5687-5695, 2020.
[18] Yu, J., Fan, Y., Yang, J., Xu, N., Wang, Z., Wang, X., and Huang, T., Wide activation for efficient and accurate image super-resolution, ArXiv, abs/1808.08718, 2018.
[19] Ji, X., Cao, Y., Tai, Y., Wang, C., Li, J., and Huang, F., Real-world super-resolution via kernel estimation and noise injection, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1914-1923, 2020.
[20] Cornillère, V., Djelouah, A., Wang, Y., Sorkine-Hornung, O., and Schroers, C., Blind image super-resolution with spatially variant degradations, ACM Transactions on Graphics (TOG), vol. 38, pp. 1-13, 2019.
[21] Michaeli, T. and Irani, M., Nonparametric blind super-resolution, 2013 IEEE International Conference on Computer Vision, pp. 945-952, 2013.
[22] Agustsson, E. and Timofte, R., Ntire 2017 challenge on single image super-resolution: Dataset and study, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1122-1131, 2017.
[23] Timofte, R. et al., Ntire 2017 challenge on single image super-resolution: Methods and results, 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 1110-1121, 2017.
[24] Kingma, D. P. and Ba, J., Adam: A method for stochastic optimization, CoRR, abs/1412.6980, 2015.
| [] |
[
"Scalable Optimal Margin Distribution Machine",
"Scalable Optimal Margin Distribution Machine"
] | [
"Yilin Wang yilinwang@hust.edu.cn \nNational Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina\n",
"Nan Cao nancao@hust.edu.cn \nNational Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina\n",
"Teng Zhang tengzhang@hust.edu.cn \nNational Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina\n",
"Xuanhua Shi xhshi@hust.edu.cn \nNational Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina\n",
"Hai Jin hjin@hust.edu.cn \nNational Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina\n"
] | [
"National Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina",
"National Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina",
"National Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina",
"National Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina",
"National Engineering Research Center for Big Data Technology and System Services Computing Technology and System\nSchool of Computer Science and Technology\nLab, Cluster and Grid Computing Lab\nHuazhong University of Science and Technology\nChina"
] | [] | Optimal margin Distribution Machine (ODM) is a newly proposed statistical learning framework rooting in the novel margin theory, which demonstrates better generalization performance than the traditional large margin based counterparts. Nonetheless, it suffers from the ubiquitous scalability problem regarding both computation time and memory as other kernel methods. This paper proposes a scalable ODM, which can achieve nearly ten times speedup compared to the original ODM training method. For nonlinear kernels, we propose a novel distribution-aware partition method to make the local ODM trained on each partition be close and converge fast to the global one. When linear kernel is applied, we extend a communication efficient SVRG method to accelerate the training further. Extensive empirical studies validate that our proposed method is highly computational efficient and almost never worsen the generalization. | 10.48550/arxiv.2305.04837 | [
"https://export.arxiv.org/pdf/2305.04837v3.pdf"
] | 258,557,939 | 2305.04837 | 8fcb99deec4143f1eb552436faf4e0f013d5c483 |
Scalable Optimal Margin Distribution Machine
5 Jun 2023
Yilin Wang yilinwang@hust.edu.cn
National Engineering Research Center for Big Data Technology and System Services Computing Technology and System
School of Computer Science and Technology
Lab, Cluster and Grid Computing Lab
Huazhong University of Science and Technology
China
Nan Cao nancao@hust.edu.cn
National Engineering Research Center for Big Data Technology and System Services Computing Technology and System
School of Computer Science and Technology
Lab, Cluster and Grid Computing Lab
Huazhong University of Science and Technology
China
Teng Zhang tengzhang@hust.edu.cn
National Engineering Research Center for Big Data Technology and System Services Computing Technology and System
School of Computer Science and Technology
Lab, Cluster and Grid Computing Lab
Huazhong University of Science and Technology
China
Xuanhua Shi xhshi@hust.edu.cn
National Engineering Research Center for Big Data Technology and System Services Computing Technology and System
School of Computer Science and Technology
Lab, Cluster and Grid Computing Lab
Huazhong University of Science and Technology
China
Hai Jin hjin@hust.edu.cn
National Engineering Research Center for Big Data Technology and System Services Computing Technology and System
School of Computer Science and Technology
Lab, Cluster and Grid Computing Lab
Huazhong University of Science and Technology
China
Scalable Optimal Margin Distribution Machine
Optimal margin Distribution Machine (ODM) is a newly proposed statistical learning framework rooting in the novel margin theory, which demonstrates better generalization performance than the traditional large margin based counterparts. Nonetheless, it suffers from the ubiquitous scalability problem regarding both computation time and memory as other kernel methods. This paper proposes a scalable ODM, which can achieve nearly ten times speedup compared to the original ODM training method. For nonlinear kernels, we propose a novel distribution-aware partition method to make the local ODM trained on each partition be close and converge fast to the global one. When linear kernel is applied, we extend a communication efficient SVRG method to accelerate the training further. Extensive empirical studies validate that our proposed method is highly computational efficient and almost never worsen the generalization.
Introduction
Recently, the study on margin theory [Gao and Zhou, 2013] demonstrates an upper bound disclosing that maximizing the minimum margin does not necessarily result in a good performance. Instead, the distribution rather than a single margin is much more critical. Later on, the study on the lower bound [Grønlund et al., 2019] further proves that the upper bound is almost optimal up to a logarithmic factor. Inspired by these insightful works, Zhang and Zhou [2019] propose the Optimal margin Distribution Machine (ODM), which explicitly optimizes the margin distribution by maximizing the mean and minimizing the variance simultaneously, and exhibits much better generalization than the traditional large margin based counterparts. Due to the superiority shown on both binary and multi-class classification tasks, many works attempt to extend ODM to more general learning settings, just to list a few: cost-sensitive learning [Zhou and Zhou, 2016; Cheng et al., 2017], weakly supervised learning [Zhang and Zhou, 2018a; Zhang and Zhou, 2018b; Luan et al., 2020; Zhang and Jin, 2020], multi-label learning [Tan et al., 2020; Cao et al., 2021], online learning [Zhang et al., 2020], and regression [Rastogi et al., 2020]. Plenty of successes on various learning tasks validate the superiority of this new statistical learning framework. However, with the dramatic progress of digital technologies, the data-generating devices become as diverse as computers, mobile phones, smartwatches, cars, etc., and the amount of data created each day grows tremendously; thus these ODM-based extensions suffer from the scalability problem regarding both computation time and memory, as other kernel methods do.
There have been many works devoted to accelerating kernel methods, which can be roughly classified into three categories. The first category is based on approximation: e.g., the random Fourier feature [Rahimi and Recht, 2007] takes trigonometric functions as basis functions to approximate the kernel mapping, the Nyström method [Williams and Seeger, 2001] approximates the kernel matrix by sampling only a small number of rows and columns, and the coreset [Tan et al., 2019] adaptively sketches the whole data by choosing some landmark points. The second category divides the data into partitions on which local models are trained and then combined to produce the larger local or global models: e.g., in [Graf et al., 2004; Hsieh et al., 2014; Singh et al., 2017], a tree architecture on partitions is designed first, guided by which the solutions of different partitions are aggregated; in [Yu et al., 2005; Navia-Vazquez et al., 2006; Loosli et al., 2007], key-instance identification and exchange are further introduced to accelerate the training; in , both the low-rank and the clustering structure of the kernel matrix are considered to obtain an approximation of the kernel matrix. The third category directly applies distributed-style optimization methods, such as the augmented Lagrangian method [Forero et al., 2010] and the alternating direction method of multipliers [Boyd et al., 2010], or extends existing solvers to a distributed environment, e.g., distributed SMO [Cao et al., 2006].
Notice that the random Fourier feature adopts a data-independent kernel mapping and the Nyström method takes a data-distribution-unaware sampling; hence their performance is inferior to that of the coreset method [Tan et al., 2019], which inspires us to leverage the data as heavily as possible. Moreover, distributed off-the-shelf quadratic programming (QP) solvers can be directly applied to train ODM, but they are all general approaches that ignore the intrinsic structure of the problem and can hardly achieve the greatest efficiency. To take the best of both worlds, this paper proposes a specially designed scalable ODM (SODM). Specifically, we put forward a novel data partition method so that the ODM trained on each partition has a solution close to that trained on the whole data. When some partitions are merged to form a larger partition, the solution on it can be quickly obtained by concatenating the previous local solutions as the initial point. Besides, in the case of the linear kernel, we extend a communication-efficient SVRG method to accelerate the training further. To summarize, the remarkable differences of SODM compared with existing scalable QP solvers are threefold:
1. SODM incorporates a novel partition strategy, which makes the local ODM on each partition close to the global one, so that the training can be accelerated.
2. SODM extends a communication-efficient SVRG method to further accelerate the training when the linear kernel is applied.
3. SODM can maintain ODM's generalization performance in most situations while achieving nearly ten times speedup.

The rest of this paper is organized as follows. We first introduce some preliminaries, and then present the technical details of our method. After that we show the experimental results and empirical observations. Finally, we conclude the paper with future work.
Preliminaries
Throughout the paper, scalars are denoted by normal case letters (e.g., $m$ and $M$). Vectors and matrices are denoted by boldface lower and upper case letters, respectively (e.g., $\mathbf{x}$ and $X$). The $(i,j)$-th entry of matrix $X$ is $[X]_{ij}$. Sets are designated by upper case letters in mathcal font (e.g., $\mathcal{S}$). The input space is $\mathcal{X} \subseteq \mathbb{R}^N$ and $\mathcal{Y} = \{1, -1\}$ is the label set. For any positive integer $M$, the set of integers $\{1, \ldots, M\}$ is denoted by $[M]$. For the feature mapping $\phi: \mathcal{X} \to \mathbb{H}$ associated to some positive definite kernel $\kappa$, where $\mathbb{H}$ is the corresponding reproducing kernel Hilbert space (RKHS), $\kappa(\mathbf{x}, \mathbf{z}) = \langle \phi(\mathbf{x}), \phi(\mathbf{z})\rangle_{\mathbb{H}}$ holds for any $\mathbf{x}$ and $\mathbf{z}$.
Optimal Margin Distribution Machine
The traditional large margin based methods maximize the minimum margin, and the obtained decision boundary is only determined by a small number of instances with the minimum margin [Schölkopf and Smola, 2001], which may hurt the generalization performance.
On the other hand, ODM explicitly optimizes the margin distribution. Given a labeled data set $\{(\mathbf{x}_i, y_i)\}_{i\in[M]}$, ODM is formalized by maximizing the margin mean and minimizing the margin variance simultaneously:
$$\min_{\mathbf{w}, \xi_i, \epsilon_i}\ p(\mathbf{w}) = \frac{1}{2}\|\mathbf{w}\|^2 + \frac{\lambda}{2M}\sum_{i\in[M]}\frac{\xi_i^2 + \upsilon\epsilon_i^2}{(1-\theta)^2} \quad \text{s.t.}\ 1-\theta-\xi_i \le y_i\mathbf{w}^\top\phi(\mathbf{x}_i) \le 1+\theta+\epsilon_i,\ \forall i\in[M],$$
where the margin mean has been fixed as $1$ since scaling $\mathbf{w}$ does not affect the decision boundary, the hyperparameter $\lambda$ balances the regularization and the empirical loss, the hyperparameter $\upsilon$ trades off the two different kinds of deviation from the margin mean, and the hyperparameter $\theta$ is introduced to tolerate small deviations no larger than $\theta$.
By introducing the Lagrange multipliers $\boldsymbol{\zeta}, \boldsymbol{\beta} \in \mathbb{R}_+^M$ for the $2M$ inequality constraints respectively, the dual problem of ODM is
$$\min_{\boldsymbol{\zeta},\boldsymbol{\beta}\in\mathbb{R}_+^M} d(\boldsymbol{\zeta},\boldsymbol{\beta}) = \frac{1}{2}(\boldsymbol{\zeta}-\boldsymbol{\beta})^\top Q(\boldsymbol{\zeta}-\boldsymbol{\beta}) + \frac{Mc}{2}\left(\upsilon\|\boldsymbol{\zeta}\|^2 + \|\boldsymbol{\beta}\|^2\right) + (\theta-1)\mathbf{1}_M^\top\boldsymbol{\zeta} + (\theta+1)\mathbf{1}_M^\top\boldsymbol{\beta}, \tag{1}$$
where $[Q]_{ij} = y_i y_j \kappa(\mathbf{x}_i, \mathbf{x}_j)$ and $c = (1-\theta)^2/(\lambda\upsilon)$ is a constant. The detailed derivation can be found in the supplementary material. By denoting $\boldsymbol{\alpha} = [\boldsymbol{\zeta}; \boldsymbol{\beta}]$, the dual ODM can be rewritten as a standard convex QP problem:
$$\min_{\boldsymbol{\alpha}\in\mathbb{R}_+^{2M}} f(\boldsymbol{\alpha}) = \frac{1}{2}\boldsymbol{\alpha}^\top H\boldsymbol{\alpha} + \mathbf{b}^\top\boldsymbol{\alpha}, \tag{2}$$
where
$$H = \begin{bmatrix} Q + Mc\upsilon I & -Q \\ -Q & Q + McI \end{bmatrix}, \quad \mathbf{b} = \begin{bmatrix} (\theta-1)\mathbf{1}_M \\ (\theta+1)\mathbf{1}_M \end{bmatrix}.$$
Notice that Eqn. (2) only involves $2M$ decoupled box constraints $\boldsymbol{\alpha} \succeq \mathbf{0}$; thus it can be efficiently solved by a dual coordinate descent method [Zhang and Zhou, 2019]. To be specific, in each iteration only one variable is selected to update while the other variables are kept as constants, which yields the following univariate QP problem in $t$:
$$\min_t\ f(\boldsymbol{\alpha} + t\mathbf{e}_i) = \frac{1}{2}[H]_{ii}t^2 + [\nabla f(\boldsymbol{\alpha})]_i t + f(\boldsymbol{\alpha}), \tag{3}$$
which has a closed-form solution:
$$[\boldsymbol{\alpha}]_i \leftarrow \max\big([\boldsymbol{\alpha}]_i - [\nabla f(\boldsymbol{\alpha})]_i/[H]_{ii},\ 0\big).$$
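A compact NumPy sketch of this solver (ours): it assembles $H$ and $\mathbf{b}$ as above, maintains the gradient $\nabla f(\boldsymbol{\alpha}) = H\boldsymbol{\alpha} + \mathbf{b}$ incrementally, and sweeps the coordinates cyclically rather than with a more elaborate selection rule:

```python
import numpy as np

def odm_dual_cd(Q, theta, c, upsilon, n_sweeps=50):
    """Dual coordinate descent for Eqn. (2) under the box constraints alpha >= 0."""
    M = Q.shape[0]
    H = np.block([[Q + M * c * upsilon * np.eye(M), -Q],
                  [-Q, Q + M * c * np.eye(M)]])
    b = np.concatenate([(theta - 1.0) * np.ones(M), (theta + 1.0) * np.ones(M)])
    alpha = np.zeros(2 * M)
    grad = b.copy()                                        # grad f = H alpha + b with alpha = 0
    for _ in range(n_sweeps):
        for i in range(2 * M):
            new_i = max(alpha[i] - grad[i] / H[i, i], 0.0)  # closed-form update of Eqn. (3)
            delta = new_i - alpha[i]
            if delta != 0.0:
                grad += delta * H[:, i]                    # keep the gradient consistent
                alpha[i] = new_i
    return alpha

alpha = odm_dual_cd(np.eye(5), theta=0.1, c=1.0, upsilon=0.5)  # toy run on a 5-instance problem
```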
3 Proposed Method

SODM works at the distributed data level, i.e., dividing the data into partitions on which local models are trained and then used to find the larger local or global models. For simplicity, we assume there are initially $K = p^L$ partitions with the same cardinality $m$, i.e., $m = M/K$. The data set $\{(\mathbf{x}_i, y_i)\}_{i\in[M]}$ is ordered so that the first $m$ instances are on the first partition, the second $m$ instances are on the second partition, etc. That is, for any instance $(\mathbf{x}_i, y_i)$, the index of the partition to which it belongs is $P(i) = \lceil i/m \rceil$, where $\lceil\cdot\rceil$ is the round-up function.

Suppose $\{(\mathbf{x}_i^{(k)}, y_i^{(k)})\}_{i\in[m]}$ is the data on the $k$-th partition; the local ODM trained on it is [cf. Eqn. (1)]
$$\min_{\boldsymbol{\zeta}_k,\boldsymbol{\beta}_k\in\mathbb{R}_+^m} d_k(\boldsymbol{\zeta}_k,\boldsymbol{\beta}_k) = \frac{1}{2}(\boldsymbol{\zeta}_k-\boldsymbol{\beta}_k)^\top Q^{(k)}(\boldsymbol{\zeta}_k-\boldsymbol{\beta}_k) + \frac{mc}{2}\left(\upsilon\|\boldsymbol{\zeta}_k\|^2 + \|\boldsymbol{\beta}_k\|^2\right) + (\theta-1)\mathbf{1}_m^\top\boldsymbol{\zeta}_k + (\theta+1)\mathbf{1}_m^\top\boldsymbol{\beta}_k,$$
where $[Q^{(k)}]_{ij} = y_i^{(k)} y_j^{(k)} \kappa(\mathbf{x}_i^{(k)}, \mathbf{x}_j^{(k)})$. This problem can be rewritten as a standard convex QP problem in the same manner as Eqn. (2), and efficiently solved by the dual coordinate descent method as in Eqn. (3).
Algorithm 1 SODM
Input: Data set $D = \{(\mathbf{x}_i, y_i)\}_{i=1}^{M}$, ...
    ...
    $\boldsymbol{\alpha}^{(l-1)} = \boldsymbol{\alpha}^{(l)}$
13: end for
14: return $\boldsymbol{\alpha}^{(l)}$.
Once the parallel training of the $p^L$ local ODMs is completed, we get $p^L$ solutions. Then we merge every $p$ partitions to form $K/p = p^{L-1}$ larger partitions. On each larger partition, a new local ODM is trained, again by the dual coordinate descent method, but the optimization procedure is not started from scratch. Instead, the previous $p$ solutions are concatenated as the initial point of the optimization. Thanks to our proposed novel partition strategy, this concatenated solution is already a good approximation to the optimal solution and thus converges much faster. The above procedure is repeated until the solution converges or all the partitions are merged together. Algorithm 1 summarizes the pseudo-code of SODM.
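The merge-and-warm-start loop can be sketched as follows (ours; `solve_local` is an assumed callable standing for a local ODM solver, e.g. the coordinate descent above run on a partition's kernel block from a given initial point, and each solution is kept as a $(\boldsymbol{\zeta}_k, \boldsymbol{\beta}_k)$ pair so that concatenation respects the two halves of $\boldsymbol{\alpha} = [\boldsymbol{\zeta}; \boldsymbol{\beta}]$):

```python
import numpy as np

def sodm(partitions, solve_local, p, L):
    """Hierarchical SODM training: solve p**L local ODMs, then repeatedly merge
    every p partitions, warm-starting each larger problem with the concatenation
    of its children's solutions."""
    sols = [solve_local(part, init=None) for part in partitions]   # (zeta_k, beta_k) pairs
    for _ in range(L):
        merged_parts, merged_sols = [], []
        for j in range(0, len(partitions), p):
            part = [inst for blk in partitions[j:j + p] for inst in blk]  # merge p partitions
            zeta0 = np.concatenate([z for z, _ in sols[j:j + p]])
            beta0 = np.concatenate([b for _, b in sols[j:j + p]])
            merged_parts.append(part)
            merged_sols.append(solve_local(part, init=(zeta0, beta0)))
        partitions, sols = merged_parts, merged_sols
    return sols[0]
```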
Convergence
In this section, we present a theorem to guarantee the convergence of the proposed method. Notice that since the optimization variables on different partitions are decoupled, they can be jointly optimized by the following problem [cf. Eqn. (1)]:
$$\min_{\boldsymbol{\zeta},\boldsymbol{\beta}\in\mathbb{R}_+^M} \tilde{d}(\boldsymbol{\zeta},\boldsymbol{\beta}) = \frac{1}{2}(\boldsymbol{\zeta}-\boldsymbol{\beta})^\top \tilde{Q}(\boldsymbol{\zeta}-\boldsymbol{\beta}) + \frac{mc}{2}\left(\upsilon\|\boldsymbol{\zeta}\|^2 + \|\boldsymbol{\beta}\|^2\right) + (\theta-1)\mathbf{1}_M^\top\boldsymbol{\zeta} + (\theta+1)\mathbf{1}_M^\top\boldsymbol{\beta}, \tag{4}$$
where $\tilde{Q} = \operatorname{diag}(Q^{(1)}, \ldots, Q^{(K)})$ is a block diagonal matrix. It can be seen that the smaller $K$ is, the closer Eqn. (4) is to ODM, and when $K = 1$ it exactly degenerates to ODM. Therefore, SODM deals with ODM by solving a series of problems which approach it, and the solutions of the former problems can be helpful for the optimization of the latter ones.
Theorem 1. Suppose the optimal solutions of ODM and its approximate problem, i.e., Eqn. (4), are $\boldsymbol{\alpha}^\star = [\boldsymbol{\zeta}^\star; \boldsymbol{\beta}^\star]$ and $\tilde{\boldsymbol{\alpha}}^\star = [\tilde{\boldsymbol{\zeta}}^\star; \tilde{\boldsymbol{\beta}}^\star]$ respectively; then the gaps between these two optimal solutions satisfy
$$0 \le d(\tilde{\boldsymbol{\zeta}}^\star, \tilde{\boldsymbol{\beta}}^\star) - d(\boldsymbol{\zeta}^\star, \boldsymbol{\beta}^\star) \le U^2\big(\bar{Q} + M(M-m)c\big),$$
$$\|\tilde{\boldsymbol{\alpha}}^\star - \boldsymbol{\alpha}^\star\|^2 \le \frac{U^2}{Mc\upsilon}\big(\bar{Q} + M(M-m)c\big),$$
where $U = \max(\|\boldsymbol{\alpha}^\star\|_\infty, \|\tilde{\boldsymbol{\alpha}}^\star\|_\infty)$ upper bounds the infinity norm of the solutions, and $\bar{Q} = \sum_{i,j:\,P(i)\neq P(j)} |[Q]_{ij}|$ is the sum of the absolute values of the entries of $Q$ that turn to zero in $\tilde{Q}$.
Due to page limitations, we only provide a sketch of the proof here. The full proof can be found on the arXiv website 1.

Proof sketch. The left-hand side of the first inequality is due to the optimality of $\zeta^\star$ and $\beta^\star$.
By comparing the definition of $d(\zeta,\beta)$ in Eqn. (1) with that of $\bar d(\zeta,\beta)$ in Eqn. (4), we find that the only differences are the change of $Q$ to $\bar Q$ and of $M$ to $m$. Therefore, the gap between $d(\zeta^\star,\beta^\star)$ and $\bar d(\zeta^\star,\beta^\star)$ can be upper-bounded in terms of $U$ and $\hat Q$. The gap between $d(\bar\zeta^\star,\bar\beta^\star)$ and $\bar d(\bar\zeta^\star,\bar\beta^\star)$ can be upper-bounded in a similar way. Combining these bounds with

$$\bar d(\bar\zeta^\star,\bar\beta^\star) \le \bar d(\zeta^\star,\beta^\star)$$

yields the right-hand side of the first inequality.
Notice that $f$ is a quadratic function; hence, apart from the gradient $g$ and the Hessian matrix $H$, all its higher derivatives vanish, and $f(\bar\alpha^\star)$ can be expanded exactly at $\alpha^\star$ as

$$f(\alpha^\star) + g^\top(\bar\alpha^\star - \alpha^\star) + \frac{1}{2}(\bar\alpha^\star - \alpha^\star)^\top H (\bar\alpha^\star - \alpha^\star),$$

in which $g^\top(\bar\alpha^\star - \alpha^\star)$ is nonnegative according to the first-order optimality condition. Furthermore, $H$ can be lower-bounded by the sum of a positive semidefinite matrix and a scalar matrix:

$$H \succeq \begin{bmatrix} Q & -Q \\ -Q & Q \end{bmatrix} + Mc\upsilon \begin{bmatrix} I & \\ & I \end{bmatrix}.$$

Putting all of this together, we can show that $\|\bar\alpha^\star - \alpha^\star\|^2$ is upper-bounded by $f(\bar\alpha^\star) - f(\alpha^\star)$, i.e., by $d(\bar\zeta^\star,\bar\beta^\star) - d(\zeta^\star,\beta^\star)$, and with the right-hand side of the first inequality we derive the second inequality.
This theorem indicates that the gap between the optimal solution and the suboptimal solution obtained in each iteration depends on $M - m$ and $\hat Q$. As the iterations proceed, the partitions become larger and larger, so the number of instances $m$ on each partition approaches the total number of instances $M$; at the same time, the matrix $\bar Q$ approaches $Q$, which makes $\hat Q$ decrease. Therefore, the solution obtained in each iteration of SODM gets closer and closer to that of ODM; that is to say, our proposed algorithm converges.
Partition Strategy
In this section we detail the partition strategy. It can significantly affect the optimization efficiency and thus plays an important role in our proposed method. Up to now, most partition strategies have used clustering algorithms to form the partitions. For example, Hsieh et al. [2014] regard each cluster of kernel k-means as a partition. However, ODM depends heavily on the mean and variance of the training data. Directly treating clusters as partitions leads to a large difference between the distribution of each partition and that of the whole data, and consequently to a large difference between the local solutions and the global solution.
To preserve the original distribution as much as possible, we borrow the idea of stratified sampling, i.e., we first divide the data set into homogeneous stratums and then apply random sampling within each stratum. Specifically, suppose the goal is to generate $K$ partitions. We first choose $S$ landmark points $\{\varphi(z_s)\}_{s\in[S]}$ in the RKHS, and then construct one stratum for each landmark point by assigning each remaining instance to the stratum of its nearest landmark point, i.e., the index of the stratum containing $x_i$ is

$$\phi(i) = \operatorname*{argmin}_{s\in[S]} \left\|\varphi(x_i) - \varphi(z_s)\right\|. \tag{5}$$
For each stratum $C_s$, we divide it equally into $K$ pieces by random sampling without replacement and take one piece from each stratum to form a partition, so that $K$ partitions are created in total, as sketched below.
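The following is a minimal sketch of this stratified partitioning step (assignment to stratums via Eqn. (5), then one piece per stratum per partition); the kernel callable, names, and data layout are illustrative assumptions.

```python
import numpy as np

def stratified_partitions(X, landmarks, K, kernel, rng=None):
    """Assign each instance to its nearest landmark's stratum (Eqn. (5)),
    then split every stratum into K pieces and take one piece per stratum
    to build each of the K partitions (indices returned per partition)."""
    rng = np.random.default_rng(rng)
    # squared RKHS distance: k(x,x) - 2 k(x,z) + k(z,z)
    # (computing the full Gram matrix only for its diagonal is wasteful,
    #  but keeps the sketch short)
    d2 = (kernel(X, X).diagonal()[:, None]
          - 2 * kernel(X, landmarks)
          + kernel(landmarks, landmarks).diagonal()[None, :])
    stratum = d2.argmin(axis=1)                      # phi(i) of Eqn. (5)
    partitions = [[] for _ in range(K)]
    for s in range(landmarks.shape[0]):
        members = rng.permutation(np.flatnonzero(stratum == s))
        for k, piece in enumerate(np.array_split(members, K)):
            partitions[k].extend(piece.tolist())     # one piece per stratum
    return partitions
```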
The remaining question is how to select the landmark points. They should clearly be representative enough to sketch the whole data distribution. To this end, we introduce the minimal principal angle between different stratums:

$$\tau = \min_{i\neq j} \left\{ \arccos\frac{\langle x, z\rangle}{\|x\|\,\|z\|} \;\middle|\; x\in C_i,\; z\in C_j \right\}.$$
Intuitively, the larger the angle, the higher the variation among the stratums and the more representative each partition is, which is formalized by the following theorem.

Theorem 2. For a shift-invariant kernel $\kappa$ with $\kappa(x,z) = \kappa(x-z)$, assume $\kappa(0) = r^2$, that is, $\|\varphi(x)\| = r$ for any $x$. With the partition strategy described above, we have

$$d_k(\zeta_k,\beta_k) - d(\zeta^\star,\beta^\star) \le U^2 M^2 c + 2UM + \frac{U^2}{2}\left(M^2 r^2 + r^2\cos\tau\,(2C - M^2)\right), \quad \forall k\in[K],$$

where $C = \sum_{i,j\in[M]} \mathbf{1}_{\phi(i)\neq\phi(j)}$, and $U$ is defined as in Theorem 1.
Proof sketch. We construct the auxiliary data set $\tilde D_k$ by repeating each instance in $D_k$ $K$ times, and then show that the primal ODM on $D_k$ and on $\tilde D_k$ have the same optimal objective. Since the strong duality theorem holds for ODM, we have

$$d_k(\zeta_k,\beta_k) = p_k(w_k) = \tilde p_k(\tilde w) = \tilde d_k(\tilde\zeta_k,\tilde\beta_k).$$

Next we decompose $\tilde d_k(\tilde\zeta_k,\tilde\beta_k) - d(\zeta^\star,\beta^\star)$ into

$$\frac{1}{2}(\tilde\zeta_k-\tilde\beta_k)^\top \tilde Q_k (\tilde\zeta_k-\tilde\beta_k) - \frac{1}{2}(\zeta^\star-\beta^\star)^\top Q (\zeta^\star-\beta^\star)$$

and

$$\frac{Mc\upsilon}{2}\left(\|\tilde\zeta_k\|^2 - \|\zeta^\star\|^2\right) + \frac{Mc}{2}\left(\|\tilde\beta_k\|^2 - \|\beta^\star\|^2\right) + (\theta-1)\mathbf{1}_M^\top(\tilde\zeta_k-\zeta^\star) + (\theta+1)\mathbf{1}_M^\top(\tilde\beta_k-\beta^\star).$$

Putting the upper bounds of these two terms together concludes the proof.
Theorem 2 gives an upper bound on the gap between the optimal objective values on $D$ and on $D_k$. Note that $2C > M^2$ holds when $|C_s| < M/2$ for every $s\in[S]$, which is easily satisfied. Thus, we can obtain a better approximate solution on each partition by maximizing the minimal principal angle $\tau$.
However, due to its high computational cost, it is impractical to maximize the minimal principal angle directly. Instead, we implement a greedy iterative process:

$$z_1 = \operatorname*{argmax}_{z\in D} \kappa(z,z), \qquad z_{s+1} = \operatorname*{argmax}_{z\in D\setminus\{z_1,\dots,z_s\}} \kappa(z,z) - K_{z,s}^\top K_s^{-1} K_{z,s}, \tag{6}$$

where $K_{z,s} = [\kappa(z,z_1),\dots,\kappa(z,z_s)]^\top$ and $K_s\in\mathbb{R}^{s\times s}$ is defined by $[K_s]_{i,j} = \kappa(z_i,z_j)$.
That is, $z_s$ is chosen to maximize the Schur complement with respect to $z_1,\dots,z_{s-1}$. Clearly, the maximum determinant of $K_s$ is achieved when $\varphi(z_s)$ is orthogonal to $\mathrm{span}\{\varphi(z_1),\dots,\varphi(z_{s-1})\}$. By computing the Schur complement, we effectively transform the kernel matrix into a diagonal matrix; therefore, we can maximize the minimal principal angle by choosing the maximal element of the Schur complement. It is noteworthy that each partition generated by our strategy extracts a proportional number of instances from each stratum and thus preserves the distribution. Besides, compared with other partition strategies based on k-means [Singh et al., 2017], we operate not only in the original feature space but also handle the situation where the data can hardly be linearly separated. Last but not least, our partition strategy has lower time complexity.
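A minimal sketch of the greedy selection in Eqn. (6) follows; it recomputes the Schur complement naively at each step for clarity (an incremental update would be cheaper), and all names are illustrative assumptions.

```python
import numpy as np

def greedy_landmarks(X, S, kernel):
    """Greedily pick S landmark indices maximizing the Schur complement
    kappa(z, z) - K_{z,s}^T K_s^{-1} K_{z,s} of Eqn. (6)."""
    K = kernel(X, X)                       # full kernel matrix (sketch only)
    picked = [int(np.argmax(K.diagonal()))]
    while len(picked) < S:
        Ks = K[np.ix_(picked, picked)]
        Kzs = K[:, picked]                 # rows: candidate z, cols: picked
        # schur[i] = K[i,i] - Kzs[i] @ inv(Ks) @ Kzs[i]
        schur = K.diagonal() - np.einsum(
            'ij,jk,ik->i', Kzs, np.linalg.inv(Ks), Kzs)
        schur[picked] = -np.inf            # exclude already-chosen points
        picked.append(int(np.argmax(schur)))
    return picked
```

The Schur complement picked at each step is the squared RKHS distance of the candidate to the span of the landmarks already chosen, which is why this greedy rule tends to spread the landmarks apart.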
Acceleration for Linear Kernel
The dual coordinate descent method requires substantial computation and storage, mainly because of the enormous kernel matrix. It is noteworthy that when a linear kernel is used, we can directly solve the primal form of ODM, avoiding the computation and storage of the kernel matrix.
The objective function of ODM is differentiable, and the gradient of $p(w)$ on instance $(x_i, y_i)$ is

$$\nabla p_i(w) = w + \frac{\lambda\left(y_i w^\top x_i + \theta - 1\right) y_i x_i\, \mathbf{1}_{i\in I_1}}{(1-\theta)^2} + \frac{\lambda\upsilon\left(y_i w^\top x_i - \theta - 1\right) y_i x_i\, \mathbf{1}_{i\in I_2}}{(1-\theta)^2},$$

where $I_1 = \{i \mid y_i w^\top x_i < 1-\theta\}$ and $I_2 = \{i \mid y_i w^\top x_i > 1+\theta\}$. Distributed SVRG (DSVRG) [Lee et al., 2017] can be exploited in this scenario. It generates a series of extra auxiliary data sets, sampled from the original data set without replacement, which share the same distribution as the whole data set, so that an unbiased estimate of the gradient can be computed. In each iteration, all nodes are first joined to compute the full gradient. Then each node performs the iterative updates of SVRG serially in a "round-robin" fashion, i.e., all nodes stay idle except one node, which performs a certain number of iterative updates using its local auxiliary data and then passes the solution to the next node. Algorithm 2 shows the process of solving SODM with the DSVRG algorithm.

Algorithm 2 Accelerated SODM for linear kernel
Input: Training data set D = {(x_i, y_i)}_{i=1}^M, number of partitions K, number of stratums S, number of stages S_t, step size η, number of iterations T.
Output: Solution at stage S_t, w^(S_t).
1: Initialize S stratums C_1, ..., C_S by Eqn. (5)-(6).
2: Initialize partitions D_1, ..., D_K by sampling without replacement from stratums C_1, ..., C_S.
3: Generate the auxiliary arrays R_1, ..., R_K where R_i = {j | (x_j, y_j) ∈ D_i}.
4: s = 1.
5: for l = 0, 1, ..., S_t − 1 do
6:   The center node sends w^(l) to each node.
7:   for each node j = 1, 2, ..., K in parallel do
8:     Compute h_j^(l) = Σ_{i∈D_j} ∇p_i(w^(l)).
9:   end for
10:  The center node aggregates the full gradient h^(l) = (1/M) Σ_{j∈[K]} h_j^(l).
11:  w_0^(l+1) = w^(l).
12:  for t = 0, 1, ..., T − 1 do
13:    Sample an instance (x_i, y_i) from D_s where i ∈ R_s.
14:    w_{t+1}^(l+1) = w_t^(l+1) − η(∇p_i(w_t^(l+1)) − ∇p_i(w^(l)) + h^(l)).
15:    R_s = R_s \ i.
16:    if R_s = ∅ then
17:      Send w_{t+1}^(l+1) to node s + 1.
18:      s = s + 1.
19:    end if
20:  end for
21:  w^(l+1) = w_T^(l+1).
22: end for
23: return w^(S_t).
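To illustrate the per-instance gradient and the variance-reduced update at the heart of Algorithm 2, here is a minimal single-machine SVRG sketch for the linear-kernel ODM primal; the distributed round-robin bookkeeping is omitted, and all names and default hyperparameter values are illustrative assumptions.

```python
import numpy as np

def grad_p_i(w, x, y, lam, theta, upsilon):
    """Per-instance gradient of the ODM primal for a linear kernel."""
    margin = y * (w @ x)
    g = w.copy()
    if margin < 1 - theta:                       # i in I_1
        g += lam * (margin + theta - 1) * y * x / (1 - theta) ** 2
    elif margin > 1 + theta:                     # i in I_2
        g += lam * upsilon * (margin - theta - 1) * y * x / (1 - theta) ** 2
    return g

def svrg_odm(X, Y, lam=1.0, theta=0.3, upsilon=0.5,
             eta=0.01, stages=10, T=None, rng=None):
    rng = np.random.default_rng(rng)
    M, d = X.shape
    T = T or M
    w_snap = np.zeros(d)
    for _ in range(stages):
        # full gradient at the snapshot (computed in parallel in Algorithm 2)
        h = np.mean([grad_p_i(w_snap, X[i], Y[i], lam, theta, upsilon)
                     for i in range(M)], axis=0)
        w = w_snap.copy()
        for _ in range(T):
            i = rng.integers(M)
            # variance-reduced stochastic step of line 14 in Algorithm 2
            w -= eta * (grad_p_i(w, X[i], Y[i], lam, theta, upsilon)
                        - grad_p_i(w_snap, X[i], Y[i], lam, theta, upsilon)
                        + h)
        w_snap = w
    return w_snap
```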
Experiments
In this section, we evaluate the performance of our proposed algorithms by comparing them with other QP solvers.
Setup
We evaluate the performance of our proposed algorithms on eight real-world data sets. The statistics of these data sets are summarized in Table 1. All features are normalized to the interval [0, 1]. For each data set, eighty percent of the instances are randomly selected as training data, while the rest are test data. All the experiments are performed on a Spark [Zaharia et al., 2012] cluster with one master and five workers. Each machine is equipped with 16 Intel Xeon E5-2670 CPU cores and 64 GB RAM. Our implementation is available on GitHub 2.
SODM is compared with three state-of-the-art scalable QP solvers, namely the Cascade approach (Ca-ODM) [Graf et al., 2004], the DiP approach (DiP-ODM) [Singh et al., 2017], and the DC approach (DC-ODM) [Hsieh et al., 2014].
Besides, to evaluate the efficiency of the accelerated SODM for the linear kernel, two state-of-the-art gradient-based methods are implemented, namely the SVRG method (ODM_svrg) [Johnson and Zhang, 2013] and the CSVRG method (ODM_csvrg) [Tan et al., 2019].

Results with RBF Kernel

Figure 1 presents the time cost and test accuracy on six data sets. It can be seen that SODM performs significantly better than the other methods. Specifically, SODM achieves the best test accuracy on 7 data sets and is only slightly worse than DC-ODM on the remaining one. In terms of time cost, SODM achieves the fastest training speed on all 8 data sets. The detailed test accuracy and time cost are presented in Table 2. Besides, we also compare the time cost and test accuracy with the corresponding SVM methods in Appendix B.

Results with Linear Kernel

Figure 3 presents the time cost and test accuracy on the data sets with a linear kernel. It can be seen that SODM shows highly competitive performance compared with the other methods. Specifically, SODM achieves the best test accuracy on 6 data sets and is only slightly worse than DC-ODM on the other 2. In terms of time cost, SODM achieves faster training speed on all 8 data sets. The detailed test accuracy and time cost are presented in Table 3. In Figure 2, we show the training speedup ratio as the number of cores increases from 1 to 32 for linear kernel SODM and RBF kernel SODM, respectively. With 32 cores, RBF kernel SODM achieves more than 9 times training speedup, while linear kernel SODM achieves over 5 times speedup.

Comparison with Gradient Based Methods

Figure 4 compares the training speed and generalization performance of our acceleration method and other gradient-based methods. We observe that our method attains competitive test accuracy while being over 5 times faster than the other methods. This indicates that our scalable acceleration method achieves high training speed while retaining generalization performance.
Conclusion
Although many methods have been proposed to solve QP problems, these off-the-shelf solvers usually ignore the intrinsic structure of the optimization problem and thus can hardly achieve the best efficiency when directly applied to ODM. We propose a scalable ODM with a novel partition strategy that retains the first- and second-order statistics in both the original instance space and the high-dimensional feature space induced by a nonlinear kernel. In addition, an acceleration method is implemented to further improve training when a linear kernel is used. As shown in the experiments, SODM is greatly superior to other scalable QP solvers in terms of both time cost and generalization performance. In the future, we will consider the setting in which data is located on different devices and cannot be gathered together due to limited bandwidth or user privacy.
A Theoretical Proof
In this section, we first derive the formulation of SODM in detail. Then we give the full proofs of Theorems 1 and 2.
A.1 Preliminaries
Given a labeled data set $\{(x_i, y_i)\}_{i\in[M]}$, the primal problem of ODM is

$$\min_{w,\xi_i,\epsilon_i}\; p(w) = \frac{1}{2}\|w\|^2 + \frac{\lambda}{2M}\sum_{i\in[M]}\frac{\xi_i^2 + \upsilon\epsilon_i^2}{(1-\theta)^2}, \quad \text{s.t.}\; 1-\theta-\xi_i \le y_i w^\top\varphi(x_i) \le 1+\theta+\epsilon_i,\ \forall i\in[M].$$

Denoting $X = [\varphi(x_1),\dots,\varphi(x_M)]$, $Y = \mathrm{diag}(y_1,\dots,y_M)$, $\xi = [\xi_1;\dots;\xi_M]$, and $\epsilon = [\epsilon_1;\dots;\epsilon_M]$, the above formulation can be rewritten as

$$\min_{w,\xi,\epsilon}\; p(w) = \frac{1}{2}\|w\|^2 + \frac{\lambda\left(\|\xi\|^2 + \upsilon\|\epsilon\|^2\right)}{2M(1-\theta)^2}, \quad \text{s.t.}\; (1-\theta)\mathbf{1}_M - \xi \le YX^\top w \le (1+\theta)\mathbf{1}_M + \epsilon, \tag{7}$$

where $\mathbf{1}_M$ is the $M$-dimensional all-one vector. With Lagrange multipliers $\zeta, \beta \in \mathbb{R}^M_+$ for the two constraints, respectively, the Lagrangian of Eqn. (7) is

$$L = \frac{1}{2}\|w\|^2 + \frac{\lambda\left(\|\xi\|^2 + \upsilon\|\epsilon\|^2\right)}{2M(1-\theta)^2} - \zeta^\top\!\left(YX^\top w - (1-\theta)\mathbf{1}_M + \xi\right) + \beta^\top\!\left(YX^\top w - (1+\theta)\mathbf{1}_M - \epsilon\right), \tag{8}$$

and the KKT conditions are

$$w = XY(\zeta-\beta), \qquad \xi = \frac{M(1-\theta)^2}{\lambda}\zeta, \qquad \epsilon = \frac{M(1-\theta)^2}{\lambda\upsilon}\beta, \tag{9}$$

$$\zeta_i\left(y_i w^\top\varphi(x_i) - (1-\theta) + \xi_i\right) = 0, \qquad \beta_i\left(y_i w^\top\varphi(x_i) - (1+\theta) - \epsilon_i\right) = 0, \quad \forall i\in[M]. \tag{10}$$
Eqn. (9) is derived by setting the partial derivatives of $L$ w.r.t. $\{w, \xi, \epsilon\}$ to zero, and Eqn. (10) gives the complementary slackness conditions. Observe that $y_i w^\top\varphi(x_i) < 1-\theta$ and $y_i w^\top\varphi(x_i) > 1+\theta$ cannot hold simultaneously, so at least one of the two slack variables $\xi_i$ and $\epsilon_i$ is zero. According to Eqn. (9), we then have $\zeta_i\beta_i = 0$ for any $i\in[M]$.
The following dual problem of ODM is obtained by substituting Eqn. (9) back into Eqn. (8):

$$\min_{\zeta,\beta\in\mathbb{R}^M_+}\; d(\zeta,\beta) = \frac{1}{2}(\zeta-\beta)^\top Q(\zeta-\beta) + \frac{Mc}{2}\left(\upsilon\|\zeta\|^2 + \|\beta\|^2\right) + (\theta-1)\mathbf{1}_M^\top\zeta + (\theta+1)\mathbf{1}_M^\top\beta,$$

where $Q = YX^\top XY$ and $c = (1-\theta)^2/(\lambda\upsilon)$ is a constant. By denoting $\alpha = [\zeta;\beta]$, the above problem can be rewritten as a standard convex quadratic program. Suppose $\{(x_i^{(k)}, y_i^{(k)})\}_{i\in[m]}$ are the instances in the $k$-th partition; the dual problem of ODM on the $k$-th partition then takes the same form, with $Q^{(k)}$ and $m$ in place of $Q$ and $M$. Notice that the optimization variables $\zeta_k$ and $\beta_k$ are decoupled on each partition, so by merging all $K$ problems together we obtain the formulation of SODM in Eqn. (4).
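As a sanity check on the constant $c$, the following short derivation (ours, using only Eqn. (9) and the definition $c = (1-\theta)^2/(\lambda\upsilon)$) shows how the quadratic terms of the dual arise:

```latex
\begin{align*}
\frac{\lambda\|\xi\|^2}{2M(1-\theta)^2}
  &= \frac{\lambda}{2M(1-\theta)^2}\cdot\frac{M^2(1-\theta)^4}{\lambda^2}\|\zeta\|^2
   = \frac{M(1-\theta)^2}{2\lambda}\|\zeta\|^2
   = \frac{Mc\upsilon}{2}\|\zeta\|^2,\\
\frac{\lambda\upsilon\|\epsilon\|^2}{2M(1-\theta)^2}
  &= \frac{\lambda\upsilon}{2M(1-\theta)^2}\cdot\frac{M^2(1-\theta)^4}{\lambda^2\upsilon^2}\|\beta\|^2
   = \frac{M(1-\theta)^2}{2\lambda\upsilon}\|\beta\|^2
   = \frac{Mc}{2}\|\beta\|^2.
\end{align*}
```

These are exactly the $\frac{Mc}{2}(\upsilon\|\zeta\|^2 + \|\beta\|^2)$ terms in the dual above.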
B Supplementary Experiments

In this section, we supplement the experiments with scalable SVM methods: we compare the results of Ca-ODM, DiP-ODM, DC-ODM, and SODM with the corresponding SVM methods on all data sets using the RBF kernel. The detailed test accuracy and time cost are reported in Table 4, from which we make the following observations.

• DC-SVM generalizes significantly better than the other SVM methods. Specifically, DC-SVM achieves the best test accuracy on 6 data sets among the SVM methods, is only slightly worse than SSVM on the phishing data set, and worse than Ca-SVM on the a7a data set. In terms of time cost, SSVM achieves the fastest training speed on 7 data sets and is slower than Ca-SVM only on the skin-nonskin data set.
• Compared with SODM, SSVM achieves worse test accuracy on all 8 data sets and slower training speed on 7 of them. Since SODM accounts for the margin distribution by partitioning the data so that the local data distribution resembles the global one, it is more suitable for this task.
• Compared with DC-ODM, DC-SVM achieves better test accuracy on 5 data sets. Besides, the training times of these two methods are close, since they share the same parallelization mechanism.
• Compared with Ca-ODM and DiP-ODM, the corresponding SVM methods achieve better time efficiency: Ca-SVM is faster on 6 data sets and DiP-SVM on 7, since these two methods greedily discard data during optimization. In terms of generalization, Ca-SVM outperforms Ca-ODM on 3 data sets, while DiP-SVM outperforms DiP-ODM on 4.
Figure 1: Comparisons of different methods using the RBF kernel. Each point indicates the result when stopped at a different level.
Figure 2: Training speedup ratio with cores increasing from 1 to 32 for SODM.
Figure 3: Comparisons of different methods using the linear kernel. Each point of SODM indicates the result after every one third of the stages is executed; other points indicate the results when stopped at different levels.
Figure 4: Comparisons of different gradient-based methods.
Table 1: Data set statistics.

Table 2: The test accuracy and time cost of different methods using the RBF kernel. The best accuracy on each data set is bolded. N/A means the corresponding method did not return results within 48 hours.

Data sets    | ODM Acc. | Ca-ODM Acc. / Time(s) | DiP-ODM Acc. / Time(s) | DC-ODM Acc. / Time(s) | SODM Acc. / Time(s)
gisette      | .976     | .957 / 90.22          | .970 / 68.02           | .964 / 70.44          | .972 / 59.89
svmguide1    | .970     | .872 / 38.90          | .903 / 35.25           | .943 / 50.11          | .944 / 28.74
phishing     | .941     | .880 / 49.60          | .901 / 52.61           | .936 / 59.47          | .938 / 25.22
a7a          | .882     | .824 / 68.36          | .813 / 61.24           | .815 / 106.51         | .838 / 32.67
cod-rna      | N/A      | .892 / 499.38         | .905 / 532.68          | .931 / 400.61         | .933 / 55.41
ijcnn1       | N/A      | .889 / 185.20         | .893 / 182.71          | .915 / 226.26         | .927 / 40.32
skin-nonskin | N/A      | .806 / 338.73         | .830 / 437.20          | .962 / 407.46         | .956 / 283.36
SUSY         | N/A      | .733 / 4280.23        | .744 / 5678.66         | .747 / 7009.36        | .760 / 1004.33
Table 3: The test accuracy and time cost of different methods using the linear kernel. The best accuracy on each data set is bolded.
Table 4: The test accuracy and time cost of different methods using the RBF kernel.

Data sets    | Ca-SVM Acc. / Time(s) | Ca-ODM Acc. / Time(s) | Dip-SVM Acc. / Time(s) | Dip-ODM Acc. / Time(s)
gisette      | .932 / 104.67         | .957 / 90.22          | .925 / 67.98           | .970 / 68.02
svmguide1    | .904 / 49.20          | .872 / 38.90          | .895 / 33.20           | .903 / 35.25
phishing     | .910 / 43.85          | .880 / 49.60          | .902 / 55.02           | .901 / 52.61
a7a          | .817 / 59.40          | .824 / 68.36          | .815 / 58.92           | .813 / 61.24
cod-rna      | .880 / 458.43         | .892 / 499.38         | .873 / 508.33          | .905 / 532.68
ijcnn1       | .803 / 150.11         | .889 / 185.20         | .824 / 156.27          | .893 / 182.71
skin-nonskin | .811 / 299.96         | .806 / 338.73         | .855 / 343.82          | .830 / 437.20
SUSY         | .720 / 4153.10        | .733 / 4280.23        | .752 / 5377.99         | .744 / 5678.66

Data sets    | DC-SVM Acc. / Time(s) | DC-ODM Acc. / Time(s) | SSVM Acc. / Time(s)    | SODM Acc. / Time(s)
gisette      | .966 / 72.50          | .964 / 70.44          | .948 / 53.32           | .972 / 59.89
svmguide1    | .952 / 37.63          | .943 / 50.11          | .902 / 20.33           | .944 / 28.74
phishing     | .928 / 42.53          | .936 / 59.47          | .929 / 30.70           | .938 / 25.22
a7a          | .818 / 97.99          | .815 / 106.51         | .810 / 40.54           | .838 / 32.67
cod-rna      | .915 / 430.26         | .931 / 400.61         | .889 / 64.91           | .933 / 55.41
ijcnn1       | .920 / 266.95         | .915 / 226.26         | .803 / 105.11          | .927 / 40.32
skin-nonskin | .959 / 420.51         | .962 / 407.46         | .848 / 320.05          | .956 / 283.36
SUSY         | .758 / 7520.00        | .747 / 7009.36        | .720 / 3920.28         | .760 / 1004.33
https://arxiv.org/abs/****.*****
https://github.com/CGCL-codes/SODM
Acknowledgments

This work was supported in part by the National Key R&D Program of China under Grant 2020AAA0108501, the National Natural Science Foundation of China under Grant 62006088, and the Key R&D Program of Hubei under Grant 2020BAA020.

A.2 Proof of Theorem 1

Theorem 1. Suppose the optimal solutions of ODM and SODM are $\alpha^\star = [\zeta^\star;\beta^\star]$ and $\bar\alpha^\star = [\bar\zeta^\star;\bar\beta^\star]$, respectively. The gaps between the optimal objective values and solutions satisfy

$$0 \le d(\bar\zeta^\star,\bar\beta^\star) - d(\zeta^\star,\beta^\star) \le U^2\left(\hat Q + M(M-m)c\right), \tag{12}$$

$$\|\bar\alpha^\star - \alpha^\star\|^2 \le \frac{U^2}{Mc\upsilon}\left(\hat Q + M(M-m)c\right), \tag{13}$$

where $U = \max(\|\alpha^\star\|_\infty, \|\bar\alpha^\star\|_\infty)$ and $\hat Q = \sum_{i,j:\,P(i)\neq P(j)} |[Q]_{ij}|$.

Proof. The left-hand side of Eqn. (12) is due to the optimality of $\zeta^\star$ and $\beta^\star$. Without loss of generality, suppose $\{(x_i, y_i)\}_{i\in[M]}$ are ordered by partition index, i.e., the first $m$ instances are in the first partition, the second $m$ instances are in the second partition, etc. According to the definitions of $d(\zeta,\beta)$ and $\bar d(\zeta,\beta)$, and denoting $\gamma = \zeta - \beta$, we can bound the difference between the two objectives. In particular, noticing that at least one of $\zeta_i$ and $\beta_i$ is zero, combining Eqn. (14) with Eqn. (15) yields the right-hand side of Eqn. (12), where the first inequality follows from the optimality of $\bar\zeta^\star$ and $\bar\beta^\star$, and the third inequality is derived from the boundedness of $\zeta$, $\beta$, $\gamma$ and the fact that $\upsilon \le 1$.

Since $f(\alpha)$ is a quadratic function, it can be expanded exactly at $\alpha^\star$; the first inequality then follows from the first-order optimality condition, and the third inequality uses the fact that $\upsilon \le 1$. Thus $\|\bar\alpha^\star - \alpha^\star\|^2$ can be upper-bounded accordingly, which shows that Eqn. (13) holds and concludes the proof.

A.3 Proof of Theorem 2

Theorem 2. For a shift-invariant kernel $\kappa$ with $\kappa(0) = r^2$, that is, $\|\varphi(x)\| = r$ for any $x$, and with the partition strategy described above, for any $k\in[K]$ we have the bound of Eqn. (16), where $C = \sum_{i,j\in[M]} \mathbf{1}_{\phi(x_i)\neq\phi(x_j)}$ and $U$ is defined as in Theorem 1.

Proof. Construct the auxiliary data set $\tilde D_k$ by repeating each instance in $D_k$ $K$ times. The primal ODM on $D_k$ and on $\tilde D_k$ have the same constraints (after removing repetitions), so for any $w$, it is feasible on $D_k$ iff it is feasible on $\tilde D_k$; in addition, the two primal objectives coincide. Therefore, the primal ODM on $D_k$ and $\tilde D_k$ have the same optimal objective. Since the strong duality theorem holds for ODM, $d_k(\zeta_k,\beta_k) = \tilde d_k(\tilde\zeta_k,\tilde\beta_k)$, and the left-hand side of Eqn. (16) can be decomposed into the two terms given in the proof sketch. Noticing that $0 \le \upsilon \le 1$, $0 \le \theta \le 1$, and that $\zeta$ and $\beta$ are upper-bounded by $U$, the second term is bounded. As for the first term, denote $\gamma = \zeta - \beta$ and let $\vartheta$ be the angle between $\varphi(x_i)$ and $\varphi(x_j)$; then

$$\langle \varphi(x_i), \varphi(x_j) \rangle = r^2\cos\vartheta \in \begin{cases} [-r^2,\; r^2\cos\tau], & \phi(x_i) \neq \phi(x_j), \\ [r^2\cos\tau,\; r^2], & \phi(x_i) = \phi(x_j). \end{cases}$$

The argument for $[\tilde Q_k]_{ij}$ is similar, and we obtain

$$B \le \frac{U^2}{2}\left(\sum_{\phi(x_i)\neq\phi(x_j)} r^2(1+\cos\tau) + \sum_{\phi(x_i)=\phi(x_j)} r^2(1-\cos\tau)\right) = \frac{U^2}{2}\left(C r^2(1+\cos\tau) + (M^2 - C)\,r^2(1-\cos\tau)\right) = \frac{U^2}{2}\left(M^2 r^2 + r^2\cos\tau\,(2C - M^2)\right).$$

Putting the upper bounds of $A$ and $B$ together concludes the proof.
References

[Boyd et al., 2010] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1):1-122, 2010.
[Cao et al., 2006] Lijuan Cao, S. S. Keerthi, Chong Jin Ong, Jianqiu Zhang, Uvaraj Periyathamby, Xiuju Fu, and Henry P. Lee. Parallel sequential minimal optimization for the training of support vector machines. IEEE Transactions on Neural Networks, 17(4):1039-1049, 2006.
[Cao et al., 2021] Nan Cao, Teng Zhang, and Hai Jin. Partial multi-label optimal margin distribution machine. In Proceedings of the 30th International Joint Conference on Artificial Intelligence, pages 2198-2204, Montreal-themed virtual reality, 2021.
[Cheng et al., 2017] Fanyong Cheng, Jing Zhang, Cuihong Wen, Zhaohua Liu, and Zuoyong Li. Large cost-sensitive margin distribution machine for imbalanced data classification. Neurocomputing, 224:45-57, 2017.
[Forero et al., 2010] Pedro A. Forero, Alfonso Cano, and Georgios B. Giannakis. Consensus-based distributed support vector machines. Journal of Machine Learning Research, 11:1663-1701, 2010.
[Gao and Zhou, 2013] Wei Gao and Zhi-Hua Zhou. On the doubt about margin explanation of boosting. Artificial Intelligence, 203:1-18, 2013.
[Graf et al., 2004] Hans P. Graf, Eric Cosatto, Leon Bottou, Igor Dourdanovic, and Vladimir Vapnik. Parallel support vector machines: The cascade SVM. In Advances in Neural Information Processing Systems 17, pages 521-528, Vancouver, Canada, 2004.
[Grønlund et al., 2019] Allan Grønlund, Lior Kamma, Kasper Green Larsen, Alexander Mathiasen, and Jelani Nelson. Margin-based generalization lower bounds for boosted classifiers. In Advances in Neural Information Processing Systems, pages 11963-11972, Vancouver, Canada, 2019.
[Hsieh et al., 2014] Cho-Jui Hsieh, Si Si, and Inderjit S. Dhillon. A divide-and-conquer solver for kernel support vector machines. In Proceedings of the 31st International Conference on Machine Learning, pages 566-574, Beijing, China, 2014.
[Johnson and Zhang, 2013] R. Johnson and T. Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26, pages 315-323, Lake Tahoe, NV, 2013.
[Lee et al., 2017] Jason D. Lee, Qihang Lin, Tengyu Ma, and Tianbao Yang. Distributed stochastic variance reduced gradient methods by sampling extra data with replacement. Journal of Machine Learning Research, 18(122):1-43, 2017.
[Loosli et al., 2007] Gaëlle Loosli, Stéphane Canu, and Léon Bottou. Training invariant support vector machines using selective sampling. In Large-Scale Kernel Machines, pages 301-320, 2007.
[Luan et al., 2020] Tianxiang Luan, Tingjin Luo, Wenzhang Zhuge, and Chenping Hou. Optimal representative distribution margin machine for multi-instance learning. IEEE Access, 8:74864-74874, 2020.
[Navia-Vazquez et al., 2006] Angel Navia-Vazquez, D. Gutierrez-Gonzalez, Emilio Parrado-Hernandez, and J. J. Navarro-Abellan. Distributed support vector machines. IEEE Transactions on Neural Networks, 17(4):1091-1097, 2006.
[Rahimi and Recht, 2007] Ali Rahimi and Benjamin Recht. Random features for large-scale kernel machines. In Advances in Neural Information Processing Systems, pages 1177-1184, Vancouver, Canada, 2007.
[Rastogi et al., 2020] Reshma Rastogi, Pritam Anand, and Suresh Chandra. Large-margin distribution machine-based regression. Neural Computing and Applications, 32:3633-3648, 2020.
[Schölkopf and Smola, 2001] Bernhard Schölkopf and Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, Cambridge, MA, 2001.
[Si et al., 2017] Si Si, Cho-Jui Hsieh, and Inderjit S. Dhillon. Memory efficient kernel approximation. Journal of Machine Learning Research, 18:1-32, 2017.
[Singh et al., 2017] Dinesh Singh, Debaditya Roy, and C. Krishna Mohan. DiP-SVM: Distribution preserving kernel support vector machine for big data. IEEE Transactions on Big Data, 3(1):79-90, 2017.
[Tan et al., 2019] Zhi-Hao Tan, Teng Zhang, and Wei Wang. Coreset stochastic variance-reduced gradient with application to optimal margin distribution machine. In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, pages 5083-5090, Honolulu, HI, 2019.
[Tan et al., 2020] Zhi-Hao Tan, Peng Tan, Yuan Jiang, and Zhi-Hua Zhou. Multi-label optimal margin distribution machine. Machine Learning, 109(3):623-642, 2020.
[Williams and Seeger, 2001] Christopher Williams and Matthias Seeger. Using the Nyström method to speed up kernel machines. In Advances in Neural Information Processing Systems, pages 682-688, Cambridge, MA, 2001.
[Yu et al., 2005] Hwanjo Yu, Jiong Yang, Jiawei Han, and Xiaolei Li. Making SVMs scalable to large data sets using hierarchical cluster indexing. Data Mining and Knowledge Discovery, 11(3):295-321, 2005.
[Zaharia et al., 2012] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauly, Michael J. Franklin, Scott Shenker, and Ion Stoica. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In 9th USENIX Symposium on Networked Systems Design and Implementation, pages 15-28, San Jose, CA, 2012.
[Zhang and Jin, 2020] Teng Zhang and Hai Jin. Optimal margin distribution machine for multi-instance learning. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, pages 2383-2389, 2020.
[Zhang and Zhou, 2018a] Teng Zhang and Zhi-Hua Zhou. Optimal margin distribution clustering. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, pages 4474-4481, New Orleans, LA, 2018.
[Zhang and Zhou, 2018b] Teng Zhang and Zhi-Hua Zhou. Semi-supervised optimal margin distribution machines. In Proceedings of the 27th International Joint Conference on Artificial Intelligence, pages 3104-3110, Stockholm, Sweden, 2018.
[Zhang and Zhou, 2019] Teng Zhang and Zhi-Hua Zhou. Optimal margin distribution machine. IEEE Transactions on Knowledge and Data Engineering, 32(6):1143-1156, 2019.
[Zhang et al., 2020] Teng Zhang, Peng Zhao, and Hai Jin. Optimal margin distribution learning in dynamic environments. In Proceedings of the 34th AAAI Conference on Artificial Intelligence, pages 6821-6828, New York, NY, 2020.
[Zhou and Zhou, 2016] Yu-Hang Zhou and Zhi-Hua Zhou. Large margin distribution learning with cost interval and unlabeled data. IEEE Transactions on Knowledge and Data Engineering, 28(7):1749-1763, 2016.
| [
"https://github.com/CGCL-codes/SODM"
] |
[
"Beyond the disk: EUV coronagraphic observations of the Extreme Ultraviolet Imager on board Solar Orbiter",
"Beyond the disk: EUV coronagraphic observations of the Extreme Ultraviolet Imager on board Solar Orbiter"
] | [
"F Auchère \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"D Berghmans \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n",
"C Dumesnil \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"J.-P Halain \nCentre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium\n",
"R Mercier \nInstitut d'Optique Graduate School\nLaboratoire Charles Fabry\nUniversité Paris-Saclay\n91127Palaiseau CedexFrance\n",
"P Rochus \nCentre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium\n",
"F Delmotte \nInstitut d'Optique Graduate School\nLaboratoire Charles Fabry\nUniversité Paris-Saclay\n91127Palaiseau CedexFrance\n",
"S François \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"A Hermans \nCentre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium\n",
"V Hervier \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"E Kraaikamp \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n",
"E Meltchakov \nInstitut d'Optique Graduate School\nLaboratoire Charles Fabry\nUniversité Paris-Saclay\n91127Palaiseau CedexFrance\n",
"G Morinaud \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"A Philippon \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"P J Smith \nUCL-Mullard Space Science Laboratory\nHolmbury St. Mary\nRH5 6NTDorkingSurreyUK\n",
"K Stegen \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n",
"C Verbeeck \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n",
"X Zhang \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"V Andretta \nINAF -Osservatorio Astronomico di Capodimonte\nNapoliItaly\n",
"L Abbo \nINAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly\n",
"E Buchlin \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"F Frassati \nINAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly\n",
"S Gissot \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n",
"M Gyo \nPhysikalisch-Meteorologisches Observatorium Davos\nWorld Radiation Center\nDavos Dorf\n7260Switzerland\n",
"L Harra \nPhysikalisch-Meteorologisches Observatorium Davos\nWorld Radiation Center\nDavos Dorf\n7260Switzerland\n\nETH-Zurich\nHönggerberg campus, HIT buildingZürichSwitzerland\n",
"G Jerse \nINAF -Osservatorio Astronomico di Trieste\nBasovizza, TriesteItaly\n",
"F Landini \nINAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly\n",
"M Mierla \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n\nInstitute of Geodynamics of the Romanian Academy\nBucharestRomania\n",
"B Nicula \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n",
"S Parenti \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"E Renotte \nCentre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium\n",
"M Romoli \nDipartimento di Fisica e Astronomia\nUniversità di Firenze\nItaly\n",
"G Russano \nINAF -Osservatorio Astronomico di Capodimonte\nNapoliItaly\n",
"C Sasso \nINAF -Osservatorio Astronomico di Capodimonte\nNapoliItaly\n",
"U Schühle \nMax Planck Institute for Solar System Research\nJustus-von-Liebig-Weg 337077GöttingenGermany\n",
"W Schmutz \nPhysikalisch-Meteorologisches Observatorium Davos\nWorld Radiation Center\nDavos Dorf\n7260Switzerland\n",
"E Soubrié \nUniversité Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance\n",
"R Susino \nINAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly\n",
"L Teriaca \nMax Planck Institute for Solar System Research\nJustus-von-Liebig-Weg 337077GöttingenGermany\n",
"M West \nSouthwest Research Institute\n1050 Walnut Street, Suite 30080302BoulderCOUSA\n",
"A N Zhukov \nSolar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium\n\nSkobeltsyn Institute of Nuclear Physics\nMoscow State University\n119992MoscowRussia\n"
] | [
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"Centre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium",
"Institut d'Optique Graduate School\nLaboratoire Charles Fabry\nUniversité Paris-Saclay\n91127Palaiseau CedexFrance",
"Centre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium",
"Institut d'Optique Graduate School\nLaboratoire Charles Fabry\nUniversité Paris-Saclay\n91127Palaiseau CedexFrance",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"Centre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Institut d'Optique Graduate School\nLaboratoire Charles Fabry\nUniversité Paris-Saclay\n91127Palaiseau CedexFrance",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"UCL-Mullard Space Science Laboratory\nHolmbury St. Mary\nRH5 6NTDorkingSurreyUK",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"INAF -Osservatorio Astronomico di Capodimonte\nNapoliItaly",
"INAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"INAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Physikalisch-Meteorologisches Observatorium Davos\nWorld Radiation Center\nDavos Dorf\n7260Switzerland",
"Physikalisch-Meteorologisches Observatorium Davos\nWorld Radiation Center\nDavos Dorf\n7260Switzerland",
"ETH-Zurich\nHönggerberg campus, HIT buildingZürichSwitzerland",
"INAF -Osservatorio Astronomico di Trieste\nBasovizza, TriesteItaly",
"INAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Institute of Geodynamics of the Romanian Academy\nBucharestRomania",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"Centre Spatial de Liège\nUniversité de Liège\nAv. du Pré-Aily B294031AngleurBelgium",
"Dipartimento di Fisica e Astronomia\nUniversità di Firenze\nItaly",
"INAF -Osservatorio Astronomico di Capodimonte\nNapoliItaly",
"INAF -Osservatorio Astronomico di Capodimonte\nNapoliItaly",
"Max Planck Institute for Solar System Research\nJustus-von-Liebig-Weg 337077GöttingenGermany",
"Physikalisch-Meteorologisches Observatorium Davos\nWorld Radiation Center\nDavos Dorf\n7260Switzerland",
"Université Paris-Saclay\nCNRS\nInstitut d'Astrophysique Spatiale\n91405OrsayFrance",
"INAF -Osservatorio Astrofisico di Torino\nPino TorineseItaly",
"Max Planck Institute for Solar System Research\nJustus-von-Liebig-Weg 337077GöttingenGermany",
"Southwest Research Institute\n1050 Walnut Street, Suite 30080302BoulderCOUSA",
"Solar-Terrestrial Centre of Excellence -SIDC\nRoyal Observatory of Belgium\nBrusselsBelgium",
"Skobeltsyn Institute of Nuclear Physics\nMoscow State University\n119992MoscowRussia"
] | [] | Context. Most observations of the solar corona beyond 2 R ⊙ consist of broadband visible light imagery carried out with coronagraphs. The associated diagnostics mainly consist of kinematics and derivations of the electron number density. While the measurement of the properties of emission lines can provide crucial additional diagnostics of the coronal plasma (temperatures, velocities, abundances, etc.), these types of observations are comparatively rare. In visible wavelengths, observations at these heights are limited to total eclipses. In the ultraviolet (UV) to extreme UV (EUV) range, very few additional observations have been achieved since the pioneering results of the Ultraviolet Coronagraph Spectrometer (UVCS). Aims. One of the objectives of the Full Sun Imager (FSI) channel of the Extreme Ultraviolet Imager (EUI) on board the Solar Orbiter mission has been to provide very wide field-of-view EUV diagnostics of the morphology and dynamics of the solar atmosphere in temperature regimes that are typical of the lower transition region and of the corona. Methods. FSI carries out observations in two narrowbands of the EUV spectrum centered on 17.4 nm and 30.4 nm that are dominated, respectively, by lines of Fe ix/x (formed in the corona around 1 MK) and by the resonance line of He ii (formed around 80 kK in the lower transition region). Unlike previous EUV imagers, FSI includes a moveable occulting disk that can be inserted in the optical path to reduce the amount of instrumental stray light to a minimum. Results. FSI detects signals at 17.4 nm up to the edge of its field of view (7 R ⊙ ), which is about twice as far as was previously possible. Operations at 30.4 nm are for the moment compromised by an as-yet unidentified source of stray light. Comparisons with observations by the LASCO and Metis coronagraphs confirm the presence of morphological similarities and differences between the broadband visible light and EUV emissions, as documented on the basis of prior eclipse and space-based observations. Conclusions. The very-wide-field observations of FSI out to about 3 and 7 R ⊙ , without and with the occulting disk, respectively, are paving the way for future dedicated instruments. | 10.1051/0004-6361/202346039 | [
"https://export.arxiv.org/pdf/2305.15308v1.pdf"
] | 258,248,001 | 2305.15308 | aad797f0f5ed8dd62b660fb82f342c599581609b |
Beyond the disk: EUV coronagraphic observations of the Extreme Ultraviolet Imager on board Solar Orbiter
May 25, 2023
F. Auchère, D. Berghmans, C. Dumesnil, J.-P. Halain, R. Mercier, P. Rochus, F. Delmotte, S. François, A. Hermans, V. Hervier, E. Kraaikamp, E. Meltchakov, G. Morinaud, A. Philippon, P. J. Smith, K. Stegen, C. Verbeeck, X. Zhang, V. Andretta, L. Abbo, E. Buchlin, F. Frassati, S. Gissot, M. Gyo, L. Harra, G. Jerse, F. Landini, M. Mierla, B. Nicula, S. Parenti, E. Renotte, M. Romoli, G. Russano, C. Sasso, U. Schühle, W. Schmutz, E. Soubrié, R. Susino, L. Teriaca, M. West, and A. N. Zhukov (affiliations as listed above)

Astronomy & Astrophysics manuscript no. main. Received; accepted.
Key words: Sun: UV radiation - Sun: corona - Telescopes
Introduction
Remote sensing observations of the solar corona beyond 2 R ⊙ mostly consist of visible light (VL) imagery from space coronagraphs. The Large Angle Spectroscopic Coronagraph (LASCO, Brueckner et al. 1995) on board the Solar and Heliospheric Observatory (SOHO, Domingo et al. 1995) has been providing several images per hour quasi-continuously since January 1996. The COR1 and COR2 instruments, part of the Sun Earth Connection Coronal and Heliospheric Investigation (SECCHI, Howard et al. 2008) on board the Solar Terrestrial Relations Observatory (STEREO), have been in operation since January 2006. These observations are primarily used to study the macroscopic kinematics of the plasma and to derive the electron number density from Thomson scattering.
In addition to complementary measurements of the electron number density, measurements of the properties of emission lines from multiply ionized ions give access to essential quantities, such as the chemical composition of the plasma or the temperatures of the electrons and ions (see, e.g., Phillips et al. 2008; Del Zanna & Mason 2018, and references therein). However, observations of emission lines beyond 2 R ⊙ are relatively rare.
From the ground, visible and near-infrared coronal emission lines have been observed since the invention of the coronagraph (Lyot 1932; Lyot & Marshall 1933); however, due to the sky brightness, measurements beyond 2 R ⊙ are limited to total solar eclipses. During the 2006 March 29 eclipse, Habbal et al. (2007) observed the Fe xi 789.2 nm line out to 3 R ⊙ in streamers. Habbal et al. (2011) obtained simultaneous images with signal out to 2.4 to 3.4 R ⊙ in the Fe ix 435.9 nm, Fe x 637.4 nm, Fe xi 789.2 nm, Fe xiii 1074.7 nm, Fe xiv 530.3 nm, and Ni xv 670.2 nm spectral lines. Their formation temperatures, ranging from 0.5 to 2.5 MK, have allowed for detailed analysis and modeling (Boe et al. 2022) of the temperature structure of the corona during the 2010 July 10 eclipse.
Without the limitation of the sky brightness, emission lines can be detected even further away from space. The Ultraviolet Coronagraph Spectrometer (UVCS, Kohl et al. 1995a) on board SOHO provided groundbreaking spectroscopic UV observations from 1.2 R ⊙ to 10 R ⊙ in two channels centered on the Lyman α line of H i and on the 103.2/103.7 nm doublet of O vi. These were pioneered by an earlier version of UVCS during Spartan 201 flights (Kohl et al. 1994, 1995b; Miralles et al. 1999). We refer the reader to Kohl et al. (1997, 2006, and references therein) for reviews of results. The spectroheliographic soft X-ray imaging telescope (SPIRIT) on board Coronas-F (in operation from 2001 to 2005) included a slitless grazing incidence spectrometer covering 28 nm-33.5 nm with a field of view (FOV) of 5 R ⊙ (Zhitnik et al. 2003a,b). A similar instrument (TESIS) was flown on the Coronas-Photon mission (Kuzin et al. 2009, 2011). These spectroheliographs were not coronagraphs, however, and were thus affected by instrumental stray light. Since the decommissioning of UVCS in 2012, no UV spectroscopic measurements have been made at these heights.
The narrowband imaging of emission lines from space beyond 2 R ⊙ is also very rare. During the first two years of the SOHO mission, the LASCO C1 coronagraph had the capability to image the Ca xv (564.9 nm), Fe x (637.4 nm), and Fe xiv (530.3 nm) coronal lines up to 3 R ⊙ (Schwenn et al. 1997). SPIRIT included a narrowband normal incidence imager equipped with a moveable occulting disk and a steerable mirror to observe at 17.4 and 30.4 nm up to 5 R ⊙ (Slemzin et al. 2008). The HeCor EUV coronagraph (Auchère et al. 2007) made a single narrowband image at 30.4 nm up to 3 R ⊙ during the first flight of the Herschel sounding rocket (Moses et al. 2020). Since 1996, the Extreme-ultraviolet Imaging Telescope (EIT, Delaboudinière et al. 1995) on board SOHO and its successors have provided regular narrowband EUV images, but the widest instantaneous FOV, that of the Extreme Ultraviolet Imager on board the STEREO B spacecraft (EUVI, Wülser et al. 2007), does not extend beyond 1.81 R ⊙ at 1 au. In addition, image compression frequently affects the data quality of EUVI in the outer field, which limits its practical FOV. SOHO performed a dedicated offpoint maneuver on 1996 April 4, allowing EIT to observe up to 2.5 R ⊙ (Delaboudinière 1999; Slemzin et al. 2008). The Sun Watcher with Active Pixels and Image Processing (SWAP, Seaton et al. 2013) and the Solar Ultra-Violet Imager (SUVI, Vasudevan et al. 2019), with half-FOVs of 1.69 R ⊙ and 1.67 R ⊙ respectively, have also performed offpoint maneuvers to explore the solar EUV corona up to about 3.5 R ⊙ at 17.4 nm (SWAP, see West et al. 2022) and at 17.1 and 19.3 nm (SUVI, see Tadikonda et al. 2019; Seaton et al. 2021). However, since EIT, SWAP, and SUVI are not coronagraphs, their observations at large angles from the limb are significantly affected by instrumental stray light.
The Solar Orbiter mission (Müller et al. 2020) includes two instruments capable of wide-field coronagraphic narrowband UV or EUV imaging: Metis, with its Lyman α channel, and the Extreme Ultraviolet Imager (EUI, Rochus et al. 2020), with the Full Sun Imager (FSI) telescope. A description of the scientific objectives and associated observing programs of the payload can be found in . In Section 2, we describe FSI and, in particular, its capability to operate in coronagraphic mode. In Section 4, we present the observations made so far by FSI in this particular mode during dedicated campaigns, in chronological order. We compare the images qualitatively with simultaneous VL and UV coronagraphic observations made with Metis. Section 5 summarizes the results and presents a discussion of future developments.
The Full Sun Imager: Need for an occulting disk
The FSI is the wide-field channel of the Extreme Ultraviolet Imager (EUI, Rochus et al. 2020), initially described in Auchère et al. (2005) and shown here in Figure 1. It images the transition region and the corona in two narrowbands of the extreme ultraviolet (EUV) spectrum centered on 17.4 and 30.4 nm, at an average plate-scale of 4.46′′ per 10 µm pixel of the 3072 × 3072 active pixel sensor (APS). One of the purposes of FSI is to serve as a context imager for the Solar Orbiter payload; the two passbands were thus chosen to capture two major temperature regimes of the solar atmosphere. The 17.4 nm passband (0.6 nm full width at half maximum) is centered on emission lines of Fe ix and Fe x, making it sensitive to coronal plasma around 1 MK. This passband is very similar to the corresponding ones of EUVI and SWAP and is 0.1 MK hotter than that of AIA (Chen et al. 2021). The 30.4 nm passband (4 nm FWHM) is centered on the resonance line of He ii, formed around 80 kK, in the lower transition region. Over the mission duration, the orbital period of the spacecraft varies between 150 and 180 days, with the furthest aphelion at 1.02 au and the closest perihelion at 0.28 au. FSI was designed with a 3.8° × 3.8° FOV in order to cover two solar diameters at its closest approach, so that the whole disk can be seen even when the spacecraft is pointed at the limb. As a result, the half-FOV of FSI expressed in solar radii ranges from 2 to 7.25 R ⊙ depending on the distance to the Sun, much wider than that of any previous solar EUV imager (Fig. 2).
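As a quick check of the quoted field-of-view range (our own back-of-the-envelope computation, not from the paper), the 1.9° half-FOV can be converted to solar radii at both orbital extremes:

```python
import numpy as np

AU = 1.496e8        # km
R_SUN = 6.957e5     # km
half_fov = np.deg2rad(3.8 / 2)          # half of the 3.8 deg x 3.8 deg FOV

for d_au in (0.28, 1.02):               # perihelion and aphelion
    half_fov_rsun = d_au * AU * np.tan(half_fov) / R_SUN
    print(f"{d_au} au -> {half_fov_rsun:.2f} R_sun")
# prints about 2.0 R_sun at 0.28 au and about 7.3 R_sun at 1.02 au,
# consistent with the 2 to 7.25 R_sun range quoted in the text
```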
The typical exposure time of 10 s is set so as not to saturate the on-disk structures (Fig. 2). In these regular exposures, except for prominence eruptions at 30.4 nm (Mierla et al. 2022), the signal beyond 2 R ⊙ is generally too faint to be detectable. Observations of the outer corona require longer exposure times. In the left image of Fig. 3, the signal at large distances from the limb is dominated by instrumental stray light: a mix of diffraction by the hexagonal mesh grid supporting the entrance aluminum filter (diagonal extensions) and of scattering by the mirror. The mesh grid used in FSI blocks the same fraction of the beam (thus diffracting the same amount of light) as those used in EUVI or in the Atmospheric Imaging Assembly (AIA, Lemen et al. 2012). The mirror substrate has a roughness of 0.2 nm rms and the multilayer coating has an interface roughness of 0.5 nm rms (Meltchakov et al. 2013), which are state-of-the-art values. Compared to Fig. 2, the stray light in the left panel of Fig. 3 is particularly prominent because the exposure is much deeper. Measurements made in EIT images up to 1.8 R ⊙ during a transit of Mercury (Auchère & Artzner 2004) had already indicated that the solar signal in FSI (which uses the same technologies as EIT) would be dominated by instrumental stray light beyond a couple of solar radii from Sun center. However, since very wide field imagery is not the primary objective of FSI, the addition of an occulting disk came late in the development of the instrument. It was introduced as a means of mitigating the descope of the 30.4 nm channel originally present in Metis. The possibility of obtaining 30.4 nm wide-field images with significant spatial overlap with the Lyman α (121.6 nm) channel of Metis made it possible to preserve the scientific objective of making instantaneous maps of the abundance of helium in the corona, using the method demonstrated by Moses et al. (2020).
Since the primary purpose of FSI is to observe the solar disk, the occulting disk has to be retractable. The decision was therefore made to implement it on the entrance door (as shown in Fig. 4): the 8.874 mm circular occulting disk is located 135.9 mm behind the entrance aperture. Since the door mechanism was originally not qualified for more than 200 operations, and because of the necessity to see the solar disk most of the time, usage of the occulting disk was foreseen to be limited to dedicated campaigns. Once in flight, given the interest of the data obtained with the occulting disk, the decision was made to further qualify the door up to a total of 720 open-close cycles using an engineering model, which now allows for more frequent use. Tests made during the development of HeCor (Auchère et al. 2007), which was a demonstrator for FSI, showed that a triple-disk occulting system was overperforming. Instead, FSI uses a single occulting disk located after the entrance pupil, making it an inverted coronagraph, like Metis. The occulting disk diameter was set for the vignetting cutoff to start at 0.711°.
The image in the right panel of Fig. 3 was taken one hour before the one on the left, with the same exposure time and with the occulting disk in place. The brighter vertical band at the right edge of the image corresponds to a hotter area of the detector for which the dark correction is imperfect. The black hexagonal shape indicates the computed vignetting cut-off which (at this date) was at 2 R⊙. It is not circular because the entrance pupil is hexagonal. The right-hand side bulge is caused by the two rods holding the disk. The stray light haze is completely gone. The remaining solar signal is at most 5 DNs, but streamers are now clearly visible beyond 2 R⊙. A radial profile averaged tangentially over the sector shown in the right panel of Fig. 3 is shown in Fig. 5 (bottom curve). The blue curve is corrected from the vignetting function (computed by ray-tracing using as-built dimensions) and can thus be directly compared to the data taken without the occulting disk (top curve). The stray light-free intensity represents 10% and 1% of the non-occulted signal at 2 R⊙ and 4.5 R⊙, respectively. The signal rise above 4.5 R⊙ is caused by dark signal not perfectly corrected along the hotter right edge of the detector (right panel of Fig. 3), where the electrical strap is connected. The decrease of intensity with height can be used to investigate the formation process of the spectral lines that contribute to the passband. A number of studies have found evidence that resonant scattering plays a significant role in the formation of the 17.1 nm resonance line of Fe x (Schrijver & McMullen 2000; Slemzin et al. 2008; Goryaev et al. 2014). However, the estimation of the amount of stray light is crucial in this respect and FSI is the first instrument to provide images free of stray light at this wavelength. Photometric analysis of these images will be the subject of future research.
Data processing
FSI
The FSI images shown in this paper are based on the Level-1 data files published as part of the EUI Data Release 5.0 (Mampaey et al. 2022), except for the 2022 December data. Unless stated otherwise, all the coronagraph mode FSI images presented in this paper have been processed in the same way. First, a master dark frame with matching exposure time was subtracted. For the 1000 s exposures, the master dark frame was created from eight dark frames taken on 2021 November 2 and 4. In order to remove local spikes (e.g., cosmic rays) in the dark frame, we used ten iterations of 2σ-clipping, followed by four iterations of a spatial 13 × 13 5σ-clipping. For the 640 s exposures, a single dark frame was available, so only a spatial filtering was applied. Spikes in the dark-subtracted images were removed using ten iterations of a 2σ-clipping centered on a seven-point temporal running median, followed by four iterations of a spatial 13 × 13 5σ-clipping. For isolated exposures (e.g., the one shown in Fig. 3), only spatial filtering was applied. Variations in the detector offset between acquisitions cause vertical banding in the dark-subtracted frames. The bands are estimated in each image by averaging detector rows 50 to 250 and 2700 to 2900, followed by linear interpolation in the vertical dimension. Finally, the data cubes were denoised using the wavelet-based method described in Starck & Murtagh (1994); Murtagh et al. (1995); Auchère et al. (2023).
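For illustration only (this is not the EUI pipeline code; the function names, the σ estimate, and the edge handling are our assumptions), the temporal σ-clipping and vertical-band removal described above could look as follows in NumPy; the spatial 13 × 13 clipping is analogous with a 2D running median:

```python
import numpy as np

def sigma_clip_temporal(cube, window=7, nsigma=2.0, niter=10):
    """Replace pixels deviating from a running temporal median by that median.

    cube: (n_frames, ny, nx) dark-subtracted image stack.
    """
    for _ in range(niter):
        pad = window // 2
        padded = np.pad(cube, ((pad, pad), (0, 0), (0, 0)), mode="edge")
        # Running median over the temporal window
        med = np.median(
            np.stack([padded[i:i + cube.shape[0]] for i in range(window)]), axis=0
        )
        sigma = cube.std(axis=0, keepdims=True)
        spikes = np.abs(cube - med) > nsigma * sigma
        cube = np.where(spikes, med, cube)
    return cube

def remove_vertical_bands(img, top=(50, 250), bottom=(2700, 2900)):
    """Estimate offset bands from two row ranges and interpolate vertically."""
    ny = img.shape[0]
    band_top = img[top[0]:top[1]].mean(axis=0)
    band_bot = img[bottom[0]:bottom[1]].mean(axis=0)
    y_t, y_b = np.mean(top), np.mean(bottom)
    w = (np.arange(ny)[:, None] - y_t) / (y_b - y_t)  # linear weight per row
    return img - ((1 - w) * band_top + w * band_bot)
```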
Metis
Each Metis VL polarized frame was acquired with a 2 × 2 binning and an exposure time of 30 s. A set of 14 frames at the same polarization angle was averaged on board. During the acquisition, the polarizer was cycling through four polarization angles to create a quadruplet of polarized images, which were finally combined on ground to obtain a single polarized brightness (pB) image with an effective exposure time of 1680 s. During the November 2021 campaign, Metis was also operating its UV Lyman α channel. For those dates, the UV frames were acquired cotemporally with the pB sequence, with a 4 × 4 binning and exposure time of 60 s, at a cadence of 120 s. The UV data cube was σ-clipped with the same parameters as FSI to remove cosmic spikes and the images were averaged together by groups of ten to increase the signal-to-noise ratio.
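As a hedged illustration of the on-ground combination step, the idealized textbook Stokes recipe for a quadruplet at polarizer angles 0°, 45°, 90°, and 135° is sketched below; the actual Metis pipeline applies a calibrated polarimetric demodulation (Romoli et al. 2021), so this is only a conceptual stand-in:

```python
import numpy as np

def polarized_brightness(i0, i45, i90, i135):
    """Idealized pB from four polarizer angles (each an averaged 2D frame)."""
    q = i0 - i90            # Stokes Q
    u = i45 - i135          # Stokes U
    return np.hypot(q, u)   # pB = sqrt(Q^2 + U^2)
```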
The VL and UV images were processed and calibrated on the ground as described in Romoli et al. (2021), with an updated radiometric calibration described in Andretta et al. (2021) and in De Leo et al. (2023) and De Leo & Metis Team (2023). A dark and bias frame was subtracted from the images, which were then corrected for flat-field and vignetting, and normalized by the exposure time. A radiometric calibration was also applied, which takes into account a revised in-flight calibration obtained from a set of VL and UV standard stars (De Leo et al. 2023; De Leo & Metis Team 2023). The bias and dark images for both detectors were acquired in flight, whereas the flat-field and vignetting images were measured on the ground during the laboratory calibrations. In the particular case of UV images, the individual frames have then been corrected using the dark frames acquired closest in time with matching acquisition parameters. However, a residual variation in the dark level on short time scales is still noticeable in these data sets. We therefore applied a further correction to the UV dark levels, which is described in De Leo & Metis Team (2023) and Russano et al. (2023).
Campaigns
The door mechanism holding the occulting disk suffered from a positioning repeatability issue during the first year of the mission. Only a few test images were taken prior to that shown in the right panel of Fig. 3, which is the first one obtained after the solution was found. Since then, the instrument has been run in coronagraph mode during dedicated campaigns, either in support of specific observations (e.g. Herschel sounding rocket flight, Sect. 4.4), or at times of upper conjunction with either Earth or the STEREO A spacecraft (Sect. 4.2 and 4.5). Indeed, in this latter case, the FSI images in disk mode provide mostly redundant information, as compared to those from other EUV imagers in operation (EIT, AIA, SWAP, SUVI, and EUVI). Table 1 lists the main characteristics of each campaign. All 30.4 nm images obtained so far in coronagraph mode exhibit an unexplained parasitic pattern, the intensity of which is comparable to that of the expected signal, so they are not used in this work. Except for test images during the November 2021 campaign and until the source of the parasite is identified and suppressed, all coronagraph mode images were taken in the 17.4 nm passband.
September 2021 campaign
This first campaign was run while the mission was still in cruise phase. A sequence of 50 images was obtained from 2021 September 9 at 00:42 UT to 2021 September 9 at 09:30 with 640 s exposures at 17.4 nm. Figure 6 shows one of the resulting images composited² with an image taken 48 minutes before in disk mode. All EUV disk images in this paper are displayed with a 1/γ power scaling, with γ = 4. All FSI coronagraph images are displayed with a linear scaling. Quasi-radial striations in the streamer above the north-east limb are similar to the fine structure of streamers reported by Ko et al. (2022).
November 2021 campaign
Shortly after a major EUI onboard software update (2021 October 27), a longer campaign was attempted, while the mission was still in cruise phase. Altogether, 129 images were acquired at 17.4 nm from 2021 November 1 at 00:42 UT to 2021 November 3 at 23:42 and 38 images were acquired at 30.4 nm on 2021 November 4 from 00:12 UT to 21:12 UT. Following an analysis of the data from the previous campaign, the exposure time was increased to 1000 s. The 30.4 nm images were affected by the above-mentioned parasitic pattern and are not suited for scientific analysis. For an unknown reason, the subtraction of the matching exposure dark frame did not satisfactorily suppress the dark signal in the 17.4 nm images for this period. In addition to the processing steps described in Sect. 4.2, from each frame we subtracted a background image corresponding (at each pixel) to the third percentile of intensity over the sequence. Figure 7 shows a composite between the first image of the campaign and a near simultaneous image from SWAP (Solar Orbiter was only 2° off the Sun-Earth line). At 0.83 au from the Sun the vignetting cutoff starts at 2.5 R⊙ and there is no overlap with the SWAP FOV. Figure 8 shows composites with Metis VL (left half of each panel) and UV (right half) data at four different dates during the propagation of a coronal mass ejection (CME) over the west limb. Metis has observed other CMEs simultaneously in VL and at Lyman α (Andretta et al. 2021; Bemporad et al. 2022), but it is the first time that simultaneous imaging up to 5.6 R⊙ at 17.4 nm is available. The expansion of the front is clearly seen in the first two frames in FSI before it reaches the Metis FOV. For a typical CME velocity of 400 km s−1, the motion blur is 0.57 and 0.96 R⊙ in the plane of the sky during the 1000 and 1680 s exposure times used by FSI and Metis, respectively. This is only 10%-17% of the CME's height in the top right panel, which explains why it is visible despite the very long exposure time. The match of the substructures of the CME is quite good between the EUV and VL, as can be seen in the two bottom panels. The Metis UV data exhibits less structuring than VL, possibly because it is noisier, which makes the comparison more difficult.
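For reference, the quoted blur amplitudes follow from the plane-of-sky displacement during the exposure (our check, using R⊙ ≈ 6.96 × 10⁵ km):

Δ = v t_exp/R⊙: (400 km s⁻¹ × 1000 s)/(6.96 × 10⁵ km) ≈ 0.57 R⊙ and (400 km s⁻¹ × 1680 s)/(6.96 × 10⁵ km) ≈ 0.97 R⊙,

consistent with the 0.57 and 0.96 R⊙ quoted above to within rounding.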
February 2022 campaign
This short campaign on 2022 February 8 from 04:15 UT to 07:45 UT was a test run (and thus technically identical) to the subsequent 2022 March campaign discussed below. Only eight 17.4 nm exposures of 1000 s each were acquired, taken every 30 minutes. Solar Orbiter was 0.79 au from the Sun. Figure 9 shows the last image of the sequence composited with a disk image taken 4 hours and 30 minutes later. The quasi-linear ray visible over the north-east limb in the outer image corresponds to the tip of a helmet streamer in the disk image.
March 2022 campaign
This campaign was run on 2022 March 7 from 16:00 UT to 19:30 UT with 17.4 nm exposures of 1000 s each taken every 30 minutes, in support of the second flight of the Herschel sounding rocket, while Solar Orbiter was 0.5 au from the Sun. The FSI images were intended to provide the constraints on the coronal temperature required for the derivation of the abundance of Helium from the measurements made by the Herschel payload. The rocket did not return useful data, but since this campaign was run closer to the Sun than the others, FSI imaged brighter regions of the corona and revealed interesting structures up to the edge of the FOV (see Fig. 10 and top panel of Fig. 11). The morphology of the streamer above the north-west limb resembles the classical bipolar streamer magnetic field model by Pneuman & Kopp (1971). The beginning of the sequence caught the trailing edge of a CME, clearly visible in the south-east in LASCO C2 (Fig. 10).
There is overall a good match between the structures visible in LASCO C2 and in FSI, both in the original images (left panel of Fig. 10) and those enhanced with the wavelets-optimized whitening filter (WOW, right panel of Fig. 10, Auchère et al. 2023). There are some significant differences, however. Radial polar plumes revealed in the WOW-enhanced LASCO C2 image do not have a counterpart in FSI beyond 1.4 R⊙. Plumes are fully visible in Fe x 789.2 nm and Fe xi 789.2 nm out to 2 R⊙ during eclipses (Habbal et al. 2011). It is possible that they are less visible in the FSI images because of the lower signal-to-noise ratio. The FOV of FSI overlaps with that of LASCO C2 above 2.2 R⊙ (inner dashed circle in the top panel of Fig. 11)³. In this region, the north-east and north-west streamers exhibit different morphologies in VL and in the EUV. Morphological differences between emission lines and broadband VL have already been described by, for example, Habbal et al. (2007); Mierla et al. (2007). There are several possible causes for the observed differences. Thomson scattering, which is responsible for the emission observed in VL, shows the free electrons and is proportional to their number density, while the emission lines dominating the FSI passband are emitted by multiply ionized iron atoms following collisional excitation (although resonant scattering may also play a role), which is proportional to the square of the electron number density. Density variations in the corona therefore produce more contrasted features in EUV images than in VL. For the same reason, the line of sight integration path is shorter in the EUV, resulting in fewer structures being superimposed. The intensity of emission lines is also a sharply peaked function of the temperature. Temperature variations may be responsible for, for example, the dark lane visible in the north-eastern streamer in the EUV but not in VL. There was a 3° separation between the lines of sight of the two instruments, which could explain small differences. A monthly average was subtracted from the LASCO C2 data and this could possibly produce gradients that are not present in the EUV images. Motion blur (mostly radial) can also affect the FSI images due to the 1000 s exposure time used.

³ The dark hexagonal pattern visible near the FSI vignetting cutoff is caused by the mesh grids supporting the front and focal filters (Fig. 1). As described in Auchère et al. (2011), the modulation pattern cancels out in the nominal configuration because the footprint of the beam on the filters is an integer multiple of one grid tile. This condition is not satisfied with the occulting disk in place in the inner part of the FOV where the beam is partially vignetted.
December 2022 campaign
The campaign started on 2022 December 5 at 04:00 UT and ended on 2023 January 1 at 21:36 UT. It consisted of 17.4 nm exposures of 1000 s each, taken every 30 minutes. It was interrupted on December 11 from 03:00 to 08:00 UT to support quadrature observations with Parker Solar Probe. It is the longest FSI coronagraph mode campaign run to date and includes 1547 images. Figure 12 shows a composite image of FSI and Metis VL data, with a disk image from EUVI-A, which was the closest EUV imager (Table 1). As in Fig. 10, some of the radial structures in the Metis FOV do not have a counterpart in FSI, either because they are below the detection limit or because of the different formation processes among the observed emissions. In the animated version of this figure covering the whole sequence, several CMEs can be tracked through the FSI and Metis FOVs. The bright linear rays visible in the animation line up with the edge of streamers. They likely appear thinner than in VL because the contrast in the EUV is increased due to the proportionality of the intensity to the square of the electron number density.
Conclusions
We present observations carried out thus far by FSI in coronagraphic mode at 17.4 nm. Going up to 6.4 R⊙ at the detector's edge, these are the widest field EUV images of the corona made to date. Without the presence of the occulting disk, stray light amounts to from 90% of the signal at 2 R⊙ to 99% at 5 R⊙. The stray light-free images provided by FSI in the coronagraph mode enable photometric studies of the distribution of intensity of the observed emission lines. Some items of particular interest will be the comparison with eclipse observations of other emission lines (Boe et al. 2022), the quantification of the contribution of resonant scattering as a function of height, and the joint inversion of Metis and FSI data to constrain the coronal temperature in the regions of overlap (Abbo et al. 2023). More coronagraph mode campaigns are planned in the near future, with a focus on periods during which the separation angle with other EUV disk imagers is small. The instrument will also provide novel images of the corona later in the mission from out of ecliptic viewpoints.
While this paper demonstrates the possibility of wide field coronagraphic EUV imaging, FSI was not optimized for this mode of observation. The entrance pupil was sized to ensure the image quality and to limit the heat flux entering the instrument while allowing sufficient signal on disk in 10 s exposures. Furthermore, the occulting disk was installed late in the project life at a sub-optimal position. With minor modifications, the efficiency of an FSI-based coronagraph could be increased by two orders of magnitude, which would allow images similar to those presented here to be acquired in 10 s. This opens up the perspective of a single EUV telescope able to monitor the coronal activity from Sun center to 6 R⊙. Compared to a VL coronagraph, an EUV instrument offers several advantages. There is no background emission from scattering off dust (F-corona). The contrast between the disk and the corona is several orders of magnitude smaller in the EUV than in VL, and since the corresponding wavelengths are around 20 times shorter, diffraction by the edges is reduced. Instrumental stray light is thus easier to control. Indeed, a single circular occulting disk is sufficient in FSI, while complex multiple-stage systems are necessary in VL -
without the ability to completely suppress stray light. This also makes an EUV coronagraph less demanding in terms of platform pointing accuracy and stability. The shorter line of sight in EUV images compared to VL may possibly make the EUV coronagraph less sensitive to halo CMEs - which are of particular interest for space weather applications; however, the possibility that resonant scattering plays a significant role at large distances in spectral lines may allow for their detection. While EUV coronagraphs offer an alternative solution, the two types of instruments are in fact complementary, with VL providing the electron number density and the EUV offering access to the emission measure as well as the plasma temperature, provided that several passbands are available. This would provide unprecedented diagnostic capability for the coronal plasma beyond 2 R⊙.
Fig. 2. Regular 10 s exposure time 17.4 nm FSI image taken at 0.68 au from the Sun on 2022 March 23 at 23:01. At this distance, the FOV extends to 4.9 R⊙. With this exposure time, signal is detected up to 2 R⊙. Longer exposures and the use of an occulting disk are required at larger angles (Fig. 3). The cause of the two vertical dark bands has not yet been identified.

Fig. 3. Effect of the occulting disk on 640 s FSI 17.4 nm exposures taken on 2021 March 21. Left: View without the occulting disk at 01:48:45 UT (logarithmic color scale). The position of the solar disk is marked by the white circle. Right: View with the occulting disk at 00:45:45 UT (linear color scale). The black hexagonal shape marks the vignetting cut-off. A matching exposure dark frame was subtracted from each image. The intensity profiles at row 1536 are displayed on the same logarithmic scale for comparison.

Fig. 4. Entrance door of FSI, looking towards the Sun. The occulting disk is held off the door lid by two supporting rods. The occulting disk is shown in position. Rotating the lid around its axis (red cross) clockwise closes the door, rotating it counter-clockwise opens it.

Fig. 5. Raw (lower gray) and vignetting-calibrated (blue) radial 17.4 nm intensity profiles, averaged over the sector shown in Fig. 3. The top curve corresponds to the data taken without the occulting disk.

Fig. 6. Composite of FSI 17.4 nm images taken in disk mode (below 1.81 R⊙, 2021 September 8 at 23:55 UT) and coronagraphic mode (September 9 at 00:42:03 UT). As in all subsequent figures, the axes of the helio-projective coordinate system are plotted to materialize the roll angle of the spacecraft.

Fig. 7. Composite of a SWAP 17.4 nm image (below 2 R⊙, 2021 November 1 at 10:43 UT) with an FSI 17.4 nm image in coronagraphic mode (10:42 UT).

Fig. 8. Propagation of a CME across the FSI and Metis FOVs. Left and right sides of each panel show a comparison of SWAP (below 2 R⊙) and FSI 17.4 nm (up to 5.6 R⊙) with Metis VL and Lyman α data, respectively. The dashed circle marks the position of the edge of the Metis occulting disk. An animated version of the figure is available online.

Fig. 9. Composite of FSI 17.4 nm images taken in disk mode (below 2.38 R⊙, 2022 February 8 at 12:15 UT) and coronagraphic mode (07:45 UT).

Fig. 10. Composites of images at 17.4 nm from FSI in disk mode (below 1.5 R⊙, 11:29 UT) and coronagraph mode (below 3.4 R⊙, 2022 March 7 16:00 UT) and in VL from LASCO C2 (16:12 UT). Left: Original LASCO C2 and FSI images, displayed using linear and square root scaling, respectively. Right: Images enhanced with the WOW filter.

Fig. 11. Comparison of the northern half of the overlapping region between FSI and LASCO C2 on 2022 March 7. Top: FSI composite (same images as in Fig. 10), extending to the edges of the detector. The outer dashed circle corresponds to the 3.4 R⊙ boundary between FSI and LASCO C2 used in Fig. 10. The inner dashed circle corresponds to the boundary between FSI and LASCO C2 used in the bottom panel. Bottom: Same as above, with LASCO C2 above 2.2 R⊙, the inner limit of the useful LASCO C2 FOV.

Fig. 12. Composite of images taken on 2022 December 15: EUVI-A (below 1.9 R⊙, 18:34 UT), FSI 17.4 nm (below 6.1 R⊙, 18:30 UT) and Metis VL (18:30 UT). The dashed circle marks the position of the edge of the Metis occulting disk. An animated version of this figure is available online.
Fig. 1. Optical layout of FSI (reproduced from Rochus et al. 2020). The 8.874 mm circular occulting disk is located on the entrance door (Fig. 4), 135.9 mm behind the entrance aperture. [Labeled elements of the layout include: hexagonal pupil (2.75 mm edge), Al filter (20 × 20 mm), occulting disk (8.87 mm), filter wheel (Zr, Mg), mirror (66 × 66 mm), detector (30.72 × 30.72 mm), distances of 137.9, 459.4, 70, and 737.35 mm, and angles of 6.82° and 8.66°.]
Table 1. Summary of the FSI coronagraph mode campaigns. [Table flattened in extraction; the recoverable column headers include the separation angle with other imagers.]

¹ EUI detectors are natively 12 bit and can be operated with two gains (high and low) that can either be used independently, or combined on board into 15 bit images. High-gain images have the lowest read noise.
² Composite images in this paper were created using SunPy (The SunPy Community et al. 2020) and AstroPy (Astropy Collaboration et al. 2022).
Acknowledgements. The author would like to thank Pierre Rochus, Principal Investigator of EUI until the launch, for letting him add the occulting disk to the door design as an undocumented feature in 2014, after CDR. Solar Orbiter is a space mission of international collaboration between ESA and NASA, operated by ESA.
References

Abbo, L., Susino, R., Auchère, F., et al. 2023, A&A, this issue
Andretta, V., Bemporad, A., De Leo, Y., et al. 2021, A&A, 656, L14
Antonucci, E., Romoli, M., Andretta, V., et al. 2020, A&A, 642, A10
Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167
Auchère, F. & Artzner, G. E. 2004, Sol. Phys., 219, 217
Auchère, F., Ravet-Krill, M.-F., Moses, J. D., et al. 2007, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6689, Solar Physics and Space Weather Instrumentation II, ed. S. Fineschi & R. A. Viereck, 66890A
Auchère, F., Rizzi, J., Philippon, A., & Rochus, P. 2011, J. of the Opt. Soc. of America A, 28, 40
Auchère, F., Song, X., Rouesnel, F., et al. 2005, in Proc. SPIE, Vol. 5901, Solar Physics and Space Weather Instrumentation, ed. S. Fineschi & R. A. Viereck, 298-304
Auchère, F., Soubrié, E., Pelouze, G., & Buchlin, É. 2023, A&A, this volume
Bemporad, A., Andretta, V., Susino, R., et al. 2022, A&A, 665, A7
Boe, B., Habbal, S., Downs, C., & Druckmüller, M. 2022, ApJ, 935, 173
Brueckner, G. E., Howard, R. A., Koomen, M. J., et al. 1995, Sol. Phys., 162, 357
Chen, Y., Przybylski, D., Peter, H., et al. 2021, A&A, 656, L7
De Leo, Y., Burtovoi, A., Teriaca, L., et al. 2023, A&A, this issue
De Leo, Y. & Metis Team. 2023, A&A, in preparation
Del Zanna, G. & Mason, H. E. 2018, Living Reviews in Solar Physics, 15, 5
Delaboudinière, J. P. 1999, Sol. Phys., 188, 259
Delaboudinière, J.-P., Artzner, G. E., Brunaud, J., et al. 1995, Sol. Phys., 162, 291
Domingo, V., Fleck, B., & Poland, A. I. 1995, Sol. Phys., 162, 1
Goryaev, F., Slemzin, V., Vainshtein, L., & Williams, D. R. 2014, ApJ, 781, 100
Habbal, S. R., Druckmüller, M., Morgan, H., et al. 2011, ApJ, 734, 120
Habbal, S. R., Morgan, H., Johnson, J., et al. 2007, ApJ, 663, 598
Howard, R. A., Moses, J. D., Vourlidas, A., et al. 2008, Space Sci. Rev., 136, 67
Ko, Y.-K., Stenborg, G., Linker, J., et al. 2022, ApJ, 933, 95
Kohl, J. L., Esser, R., Gardner, L. D., et al. 1995a, Sol. Phys., 162, 313
Kohl, J. L., Gardner, L. D., Strachan, L., Fisher, R., & Guhathakurta, M. 1995b, Space Sci. Rev., 72, 29
Kohl, J. L., Gardner, L. D., Strachan, L., & Hassler, D. M. 1994, Space Sci. Rev., 70, 253
Kohl, J. L., Noci, G., Antonucci, E., et al. 1997, Sol. Phys., 175, 613
Kohl, J. L., Noci, G., Cranmer, S. R., & Raymond, J. C. 2006, A&A Rev., 13, 31
Kuzin, S. V., Bogachev, S. A., Zhitnik, I. A., et al. 2009, Advances in Space Research, 43, 1001
Kuzin, S. V., Zhitnik, I. A., Shestov, S. V., et al. 2011, Solar System Research, 45, 162
Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17
Lyot, B. 1932, ZAp, 5, 73
Lyot, B. & Marshall, R. K. 1933, JRASC, 27, 225
Mampaey, B., Verbeeck, F., Stegen, K., et al. 2022, SolO/EUI Data Release 5.0 2022-04, https://doi.org/10.24414/2qfw-tr95, published by Royal Observatory of Belgium (ROB)
Meltchakov, E., De Rossi, S., Mercier, R., et al. 2013, in Proc. SPIE, Vol. 8777, Damage to VUV, EUV, and X-ray Optics IV; and EUV and X-ray Optics: Synergy between Laboratory and Space III, 87771C
Mierla, M., Schwenn, R., Teriaca, L., Stenborg, G., & Podlipnik, B. 2007, Advances in Space Research, 40, 1049
Mierla, M., Zhukov, A. N., Berghmans, D., et al. 2022, A&A, 662, L5
Miralles, M. P., Strachan, L., Gardner, L. D., et al. 1999, Space Sci. Rev., 87, 277
Moses, J. D., Antonucci, E., Newmark, J., et al. 2020, Nature Astronomy, 4, 1134
Müller, D., St. Cyr, O. C., Zouganelis, I., et al. 2020, A&A, 642, A1
Murtagh, F., Starck, J. L., & Bijaoui, A. 1995, A&AS, 112, 179
Phillips, K. J. H., Feldman, U., & Landi, E. 2008, Ultraviolet and X-ray Spectroscopy of the Solar Atmosphere (Cambridge University Press)
Pneuman, G. W. & Kopp, R. A. 1971, Sol. Phys., 18, 258
Rochus, P., Auchère, F., Berghmans, D., et al. 2020, A&A, 642, A8
Romoli, M., Antonucci, E., Andretta, V., et al. 2021, A&A, 656, A32
Russano, G., Andretta, V., De Leo, Y., et al. 2023, A&A, this issue
Schrijver, C. J. & McMullen, R. A. 2000, ApJ, 531, 1121
Schwenn, R., Inhester, B., Plunkett, S. P., et al. 1997, Sol. Phys., 175, 667
Seaton, D. B., Berghmans, D., Nicula, B., et al. 2013, Sol. Phys., 286, 43
Seaton, D. B., Hughes, J. M., Tadikonda, S. K., et al. 2021, Nature Astronomy, 5, 1029
Slemzin, V., Bougaenko, O., Ignatiev, A., et al. 2008, Annales Geophysicae, 26, 3007
Starck, J.-L. & Murtagh, F. 1994, A&A, 288, 342
Tadikonda, S. K., Freesland, D. C., Minor, R. R., et al. 2019, Sol. Phys., 294, 28
The SunPy Community, Barnes, W. T., Bobra, M. G., et al. 2020, The Astrophysical Journal, 890, 68
Vasudevan, G., Shing, L., Mathur, D., et al. 2019, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 11180, International Conference on Space Optics - ICSO 2018, 111807P
West, M. J., Seaton, D. B., D'Huys, E., et al. 2022, Sol. Phys., 297, 136
Wülser, J.-P., Lemen, J. R., & Nitta, N. 2007, in Proc. SPIE, Vol. 6689, Solar Physics and Space Weather Instrumentation II, 668905
Zhitnik, I., Kuzin, S., Afanas'ev, A., et al. 2003a, Advances in Space Research, 32, 473
Zhitnik, I., Kuzin, S., Bugaenko, O., et al. 2003b, Advances in Space Research, 32, 2573
Zouganelis, I., De Groof, A., Walsh, A. P., et al. 2020, A&A, 642, A3
[
"Secure Integrated Sensing and Communication Exploiting Target Location Distribution",
"Secure Integrated Sensing and Communication Exploiting Target Location Distribution"
] | [
"Kaiyue Hou kaiyue.hou@connect.polyu.hk \nDepartment of Electronic and Information Engineering\nThe Hong Kong Polytechnic University\n\n",
"Shuowen Zhang shuowen.zhang@polyu.edu.hk \nDepartment of Electronic and Information Engineering\nThe Hong Kong Polytechnic University\n\n"
] | [
"Department of Electronic and Information Engineering\nThe Hong Kong Polytechnic University\n",
"Department of Electronic and Information Engineering\nThe Hong Kong Polytechnic University\n"
] | [] | In this paper, we study a secure integrated sensing and communication (ISAC) system where one multi-antenna base station (BS) simultaneously serves a downlink communication user and senses the location of a target that may potentially serve as an eavesdropper via its reflected echo signals. Specifically, the location information of the target is unknown and random, while its a priori distribution is available for exploitation. First, to characterize the sensing performance, we derive the posterior Cramér-Rao bound (PCRB) which is a lower bound of the mean squared error (MSE) for target sensing exploiting prior distribution. Due to the intractability of the PCRB expression, we further derive a novel approximate upper bound of it which has a closed-form expression. Next, under an artificial noise (AN) based beamforming structure at the BS to alleviate information eavesdropping and enhance the target's reflected signal power for sensing, we formulate a transmit beamforming optimization problem to maximize the worst-case secrecy rate among all possible target (eavesdropper) locations, under a sensing accuracy threshold characterized by an upper bound on the PCRB. Despite the non-convexity of the formulated problem, we propose a two-stage approach to obtain its optimal solution by leveraging the semi-definite relaxation (SDR) technique. Numerical results validate the effectiveness of our proposed transmit beamforming design and demonstrate the non-trivial trade-off between secrecy performance and sensing performance in secure ISAC systems. | null | [
"https://export.arxiv.org/pdf/2306.04543v1.pdf"
] | 259,095,474 | 2306.04543 | f423cb3073b0ad1f7a3e009e6e571f11f0063d7f |
Secure Integrated Sensing and Communication Exploiting Target Location Distribution
Kaiyue Hou kaiyue.hou@connect.polyu.hk
Department of Electronic and Information Engineering
The Hong Kong Polytechnic University
Shuowen Zhang shuowen.zhang@polyu.edu.hk
Department of Electronic and Information Engineering
The Hong Kong Polytechnic University
Secure Integrated Sensing and Communication Exploiting Target Location Distribution
In this paper, we study a secure integrated sensing and communication (ISAC) system where one multi-antenna base station (BS) simultaneously serves a downlink communication user and senses the location of a target that may potentially serve as an eavesdropper via its reflected echo signals. Specifically, the location information of the target is unknown and random, while its a priori distribution is available for exploitation. First, to characterize the sensing performance, we derive the posterior Cramér-Rao bound (PCRB) which is a lower bound of the mean squared error (MSE) for target sensing exploiting prior distribution. Due to the intractability of the PCRB expression, we further derive a novel approximate upper bound of it which has a closed-form expression. Next, under an artificial noise (AN) based beamforming structure at the BS to alleviate information eavesdropping and enhance the target's reflected signal power for sensing, we formulate a transmit beamforming optimization problem to maximize the worst-case secrecy rate among all possible target (eavesdropper) locations, under a sensing accuracy threshold characterized by an upper bound on the PCRB. Despite the non-convexity of the formulated problem, we propose a two-stage approach to obtain its optimal solution by leveraging the semi-definite relaxation (SDR) technique. Numerical results validate the effectiveness of our proposed transmit beamforming design and demonstrate the non-trivial trade-off between secrecy performance and sensing performance in secure ISAC systems.
I. INTRODUCTION
The inherent broadcast nature of wireless communication exposes it to various risks such as eavesdropping and jamming, posing a significant threat to communication security in wireless applications. Traditional security techniques based on cryptographic approaches are difficult to distribute in large-scale heterogeneous networks due to the challenge of managing secret keys. On the other hand, physical-layer security solutions that exploit the unique wireless channel characteristics to achieve secure transmission without keys have emerged as promising approaches [1]. For example, artificial noise (AN) [2] has been proposed as an effective method to mitigate the amount of information leakage to the eavesdropper by adding extra AN beams at the transmitter, which has been widely investigated in recent years [3]- [5].
Along this line, most existing works focused on the scenario where the channel between the base station (BS) and the eavesdropper is known for transmit signal design. However, in practice, the eavesdropping channel or even the location of the eavesdropper may be unknown. To estimate the locations of targets including eavesdroppers, integrated sensing and communication (ISAC) [6] has recently emerged as a promising technology, where the transmit signals at the BS can be used to simultaneously serve communication users and perform target sensing. Nevertheless, most existing works on ISAC focused on the target detection and tracking towards a given location, while the case for sensing a target at an unknown location has not been thoroughly investigated. In [7], a novel iterative approach was proposed to successively refine the sensing performance given an uncertain target location. However, the signal design at the BS in each iteration was still focused on improving the sensing performance corresponding to a given location. On the other hand, it is worth noting that the distribution of the target may be known a priori via exploring empirical observations or target movement pattern, which can be exploited to enhance the sensing performance. To the best of our knowledge, transmit signal design for ISAC exploiting the distribution of unknown and random target location still remains an open problem, especially for the more challenging case where the target may serve as a potential eavesdropper. Motivated by the above, we consider a secure ISAC system with a multi-antenna BS, a single-antenna communication user, and a sensing target which may serve as an eavesdropper. The target's exact location is unknown and random, while its distribution is available to be exploited. First, we introduce a novel posterior Cramér-Rao bound (PCRB) based method to characterize a lower bound of the mean squared error (MSE) exploiting prior distribution information. Note that in contrast to the Cramér-Rao bound (CRB), PCRB does not depend on the exact location of the target. We then derive a novel approximate upper bound of the PCRB in a tractable closed form. Next, considering an AN-based transmit beamforming structure, we formulate the beamforming optimization problem to maximize the worst-case secrecy rate among all possible target (eavesdropper) locations, subject to a requirement on the sensing accuracy characterized by an upper bound on the PCRB. By leveraging the semi-definite relaxation (SDR) technique, we obtain the optimal solution to the formulated problem. It is shown via numerical results that our proposed design achieves superior secrecy and sensing performance over various benchmark schemes, due to the smart exploitation of the target (eavesdropper) location distribution.
II. SYSTEM MODEL
We consider a secure ISAC system where a BS equipped with N_t ≥ 1 transmit antennas and N_r ≥ 1 co-located receive antennas serves a single-antenna communication user in the downlink. Moreover, the BS aims to sense the location of a target which serves as a potential eavesdropper via the received echo signals reflected by the target.¹ The exact location information of the target is unknown and random, while its distribution is available to be exploited as prior information. Specifically, we assume that the target has K ≥ 1 possible locations, as illustrated in Fig. 1. For ease of revealing fundamental insights, we consider a two-dimensional (2D) coordinate system, where each k-th location has the same distance r in meters (m) and a distinct angle θ_k ∈ [−π, π) with respect to the reference point at the BS. The common distance r is assumed to be known a priori,² while the probability for the target to be located at the k-th possible angle is denoted by p_k ∈ [0, 1], with Σ_{k=1}^{K} p_k = 1. The probability mass function (PMF) of θ is thus given by
p_Θ(θ) = p_k if θ = θ_k (k = 1, …, K), and 0 otherwise. (1)
We consider a challenging scenario for secrecy communication where the target has a line-of-sight (LoS) channel with the BS, and the downlink eavesdropping channel denoted by h_E^H(θ) ∈ C^{1×N_t} is unknown due to the unknown target's angle θ. On the other hand, the channel from the BS to the user denoted by h^H ∈ C^{1×N_t} is assumed to be perfectly known at the BS. Furthermore, we assume h_E^H(θ_k)'s and h are linearly independent, which can hold for various user channel models including the LoS model (with a distinct user angle) and random Rayleigh fading model.
Our objective is to achieve high-quality secrecy communication and sensing performance by optimizing the BS transmit signals via smart exploitation of the distribution information about the eavesdropping target's location. Specifically, we aim to maximize the worst-case secrecy rate corresponding to the most favorable eavesdropping location, while ensuring a sensing accuracy of the eavesdropping target's location, which will facilitate more tailored signal designs with further improved secrecy performance in future communication instances.
To this end, we introduce an AN-based beamforming design, where the transmitted signal vector is the superposition of an information beam and K AN beams. Denote s ∼ CN(0, 1) as the information symbol for the user, and w ∈ C^{N_t×1} as the information beamforming vector. We further denote v_k ∈ C^{N_t×1} as the k-th AN beamforming vector, and s_k ∼ CN(0, 1) as the k-th independent AN signal, which is also independent of s. The transmitted signal vector is thus given by

x = ws + Σ_{k=1}^{K} v_k s_k, (2)

Let P denote the transmit power constraint, which yields E[∥x∥^2] = ∥w∥^2 + Σ_{k=1}^{K} ∥v_k∥^2 ≤ P. Note that the motivation for the AN-based approach is two-fold. Firstly, by introducing additional Gaussian-distributed noise signals which are the worst-case noise for eavesdropping, the received signal-to-interference-plus-noise ratio (SINR) at the potential eavesdropper can be decreased, thus enhancing the communication secrecy. Secondly, the extra AN beams provide more design flexibility in strengthening the echo signals from possible target locations, thus enhancing the sensing accuracy.

¹ Note that although the target may serve as an eavesdropper and potentially possess active sensing capability, we consider device-free passive sensing since the target will not proactively share its location with the BS.
² The range information r can be obtained by exploiting empirical observations or estimated a priori using, e.g., time-of-arrival (ToA) methods.
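A minimal sketch of generating one realization of the transmit vector in (2); the variable names and the CSCG sampler are our own, and the beams w, v_k are assumed to come from the optimization in Sec. V:

```python
import numpy as np

def transmit_signal(w, V, rng=np.random.default_rng()):
    """One realization of x = w s + sum_k v_k s_k with CSCG symbols.

    w: (Nt,) information beam; V: (Nt, K) AN beams (columns v_k).
    """
    def cscg(n):  # CN(0, 1) samples
        return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    s = cscg(1)                # information symbol
    s_an = cscg(V.shape[1])    # independent AN symbols
    return w * s + V @ s_an
```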
Based on (2), the received signal at the user is given by
y = h^H x + z = h^H w s + h^H Σ_{k=1}^{K} v_k s_k + z, (3)
where z ∼ CN(0, σ^2) denotes the circularly symmetric complex Gaussian (CSCG) noise at the user receiver. The SINR at the user receiver is thus given by
SINR = |h^H w|^2 / (Σ_{k=1}^{K} |h^H v_k|^2 + σ^2). (4)
The received signal at the potential eavesdropper is given by
y_E(θ) = h_E^H(θ) x + z_E = h_E^H(θ) w s + h_E^H(θ) Σ_{k=1}^{K} v_k s_k + z_E, (5)
where z_E ∼ CN(0, σ_E^2) denotes the CSCG noise at the eavesdropper receiver. By noting that the BS-eavesdropper channel h_E^H(θ) follows the LoS model and considering a uniform linear array (ULA) at the BS, we have h_E^H(θ) = (√β_0/r) a^H(θ), where β_0 denotes the reference channel power at 1 m; a^H(θ) = [e^{−jπΔ(N_t−1)sin θ}, e^{−jπΔ(N_t−3)sin θ}, …, e^{jπΔ(N_t−1)sin θ}] denotes the steering vector at the BS transmit array, with Δ denoting the antenna spacing over wavelength ratio. Hence, the SINR at the potential eavesdropper can be expressed as

SINR_E(θ) = |a^H(θ)w|^2 / (Σ_{k=1}^{K} |a^H(θ)v_k|^2 + σ_E^2 r^2/β_0), θ ∈ {θ_1, …, θ_K}. (6)
The achievable secrecy rate at the user when there exists an eavesdropper at location k with angle θ_k is given by [8]:

R_k = [log_2(1 + SINR) − log_2(1 + SINR_E(θ_k))]^+, ∀k, (7)

in bps/Hz, where [a]^+ = max{a, 0}. The worst-case achievable secrecy rate among all possible eavesdropper locations is thus given by R = min_{k=1,…,K} R_k.
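To make (4), (6), and (7) concrete, the sketch below (our own variable names; ULA steering vector as defined above) evaluates the worst-case secrecy rate for given beams:

```python
import numpy as np

def steer(theta, n_ant, delta=0.5):
    """ULA steering vector a(theta) with antenna spacing ratio delta."""
    n = np.arange(n_ant)
    return np.exp(1j * np.pi * delta * (2 * n - (n_ant - 1)) * np.sin(theta))

def worst_case_secrecy_rate(h, w, V, thetas, beta0, r, sigma2, sigma2_e):
    """h: (Nt,) user channel; w: (Nt,) info beam; V: (Nt, K) AN beams."""
    sinr_user = np.abs(h.conj() @ w) ** 2 / (
        np.sum(np.abs(h.conj() @ V) ** 2) + sigma2
    )
    rates = []
    for th in thetas:
        a = steer(th, len(w))
        sinr_eve = np.abs(a.conj() @ w) ** 2 / (
            np.sum(np.abs(a.conj() @ V) ** 2) + sigma2_e * r**2 / beta0
        )
        # [.]^+ operation of (7)
        rates.append(max(np.log2(1 + sinr_user) - np.log2(1 + sinr_eve), 0.0))
    return min(rates)  # worst case over possible eavesdropper locations
```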
Besides reaching the user and eavesdropper receivers, the transmit signal will be reflected by the target. Let α ∈ C denote the radar cross section (RCS) coefficient, which is generally an unknown and deterministic parameter. Let b(θ) = [e^{−jπΔ(N_r−1)sin θ}, e^{−jπΔ(N_r−3)sin θ}, …, e^{jπΔ(N_r−1)sin θ}]^H denote the steering vector at the BS receive array. Thus, the received echo signal at the BS receive antennas is given by

y_R = (β_0/r^2) b(θ) α a^H(θ) x + z_R ≜ β M(θ) x + z_R, (8)

where β ≜ (β_0/r^2)α denotes the overall reflection coefficient including the two-way channel gain and RCS; z_R ∼ CN(0, σ_R^2 I_{N_r}) denotes the CSCG noise vector at the BS receive antennas; and M(θ) ≜ b(θ) a^H(θ).
In the next section, we aim to characterize the performance of estimating θ based on the received signal vector in (8).
III. SENSING PERFORMANCE CHARACTERIZATION EXPLOITING PRIOR DISTRIBUTION INFORMATION
Notice from (8) that the overall reflection coefficient β = β_R + jβ_I is also an unknown (and deterministic) parameter, which thus also needs to be estimated to obtain an accurate estimation of θ. Let ω = [θ, β_R, β_I]^T denote the collection of unknown parameters to be estimated. With the prior distribution information of θ available for exploitation, we propose to employ PCRB as the performance metric, which characterizes a global lower bound of the MSE of unbiased estimators exploiting prior information. To the best of our knowledge, most classic PCRB derivation methods are suitable for estimation parameters with continuous and differentiable probability density functions (PDFs) [9]. For consistency, we propose to approximate the discrete PMF in (1) with a continuous Gaussian mixture PDF given by

p̄_Θ(θ) = Σ_{k=1}^{K} p_k (1/(σ_θ√(2π))) e^{−(θ−θ_k)^2/(2σ_θ^2)}. (9)

Specifically, p̄_Θ(θ) is the weighted sum of K Gaussian PDFs, where each k-th Gaussian PDF is centered at mean θ_k with a small variance σ_θ^2, and carries a weight of p_k. Note that as σ_θ^2 decreases, p̄_Θ(θ) becomes increasingly similar to p_Θ(θ). Moreover, with a sufficiently small σ_θ^2, the probability for θ under (9) to exceed the original [−π, π) region is negligible.
Based on (9), the Fisher information matrix (FIM) for the estimation of ω consists of two parts as follows [10]:
J = J_D + J_P. (10)
The first part J_D ∈ C^{3×3} represents the FIM extracted from the observed data in y_R, which is given by

[J_D]_{ij} = −E_{y_R,ω}[∂^2 L_{y_R}(ω)/(∂ω_i ∂ω_j)], i, j ∈ {1, 2, 3}, (11)

with L_{y_R}(ω) = −N_r ln(πσ_R^2) − (1/σ_R^2)(∥y_R∥^2 + |β|^2 ∥M(θ)x∥^2) + (2/σ_R^2) Re{β^* x^H M^H(θ) y_R} being the log-likelihood function for the parameters in ω. J_D can be further derived as
J_D = [ J_θθ , J_θβ ; J_θβ^H , J_ββ ], (12)
where each block is given as
J_θθ = (2|β|^2/σ_R^2) ∫_{−∞}^{∞} p̄_Θ(θ) tr(Ṁ^H(θ) Ṁ(θ) R_x) dθ, (13)
J_θβ = (2/σ_R^2) ∫_{−∞}^{∞} p̄_Θ(θ) tr(Ṁ^H(θ) M(θ) R_x) [β_R, β_I] dθ, (14)
J_ββ = (2/σ_R^2) ∫_{−∞}^{∞} p̄_Θ(θ) tr(M^H(θ) M(θ) R_x) I_2 dθ, (15)

with Ṁ(θ) = ∂M(θ)/∂θ and R_x = E[xx^H] = ww^H + Σ_{k=1}^{K} v_k v_k^H denoting the transmit covariance matrix. The second part J_P ∈ C^{3×3} represents the FIM extracted from the prior distribution information, which is given by
[J_P]_{ij} = −E_ω[∂^2 ln p_ω(ω)/(∂ω_i ∂ω_j)], i, j ∈ {1, 2, 3}, (16)

where p_ω(ω) denotes the PDF of ω. Note that since β_R and β_I are both deterministic variables, J_P only has a non-zero entry in the first column and first row, which is given by

[J_P]_{θθ} = −∫_{−∞}^{+∞} (∂^2 p̄_Θ(θ)/∂θ^2) dθ + ∫_{−∞}^{+∞} ((∂p̄_Θ(θ)/∂θ)^2 / p̄_Θ(θ)) dθ. (17)
The first term on the right-hand side of (17) can be derived as

−∫_{−∞}^{+∞} (∂^2 p̄_Θ(θ)/∂θ^2) dθ = [Σ_{k=1}^{K} p_k ((θ−θ_k)/(σ_θ^3 √(2π))) e^{−(θ−θ_k)^2/(2σ_θ^2)}]_{−∞}^{+∞} = 0. (18)
The second term can be derived as

∫_{−∞}^{+∞} ((∂p̄_Θ(θ)/∂θ)^2 / p̄_Θ(θ)) dθ = ∫_{−∞}^{+∞} (Σ_{k=1}^{K} p_k ((θ−θ_k)/(σ_θ^3 √(2π))) e^{−(θ−θ_k)^2/(2σ_θ^2)})^2 / p̄_Θ(θ) dθ = Σ_{k=1}^{K} p_k (1/σ_θ^2) − ϵ = 1/σ_θ^2 − ϵ, (19)

where ϵ ≜ ∫_{−∞}^{∞} [Σ_{k=1}^{K} Σ_{n=1}^{K} f_k(θ) f_n(θ) (θ_n−θ_k)^2/σ_θ^4] / (2 Σ_{k=1}^{K} f_k(θ)) dθ with f_k(θ) ≜ (p_k/(σ_θ√(2π))) e^{−(θ−θ_k)^2/(2σ_θ^2)}. Thus, we have [J_P]_{θθ} = 1/σ_θ^2 − ϵ.
Based on the above, the overall FIM J can be expressed as

J = J_D + J_P = [ J_θθ + 1/σ_θ^2 − ϵ , J_θβ ; J_θβ^H , J_ββ ]. (20)
The PCRB for the estimation MSE of θ, denoted by PCRB_θ, is then given by the entry in the first column and first row of J^{−1}, which can be expressed as

PCRB_θ = (J_θθ + 1/σ_θ^2 − ϵ − J_θβ J_ββ^{−1} J_θβ^H)^{−1} (21)
= σ_R^2 g_1(R_x) / (2|β|^2 [(g_2(R_x) + (σ_R^2/(2|β|^2))(1/σ_θ^2 − ϵ)) g_1(R_x) − |g_3(R_x)|^2]),

where g_1(R_x) = ∫_{−∞}^{∞} p̄_Θ(θ) tr(M^H(θ)M(θ)R_x) dθ; g_2(R_x) = ∫_{−∞}^{∞} p̄_Θ(θ) tr(Ṁ^H(θ)Ṁ(θ)R_x) dθ; and g_3(R_x) = ∫_{−∞}^{∞} p̄_Θ(θ) tr(Ṁ^H(θ)M(θ)R_x) dθ.
Note that the PCRB in (21) is a complicated function with respect to the transmit covariance matrix R_x and consequently the beamforming vectors w and v_k's, which is difficult to be theoretically analyzed or numerically examined. Motivated by this, we derive a more tractable upper bound of the PCRB as follows. Specifically, we first re-express (21) as

PCRB_θ = (σ_R^2/(2|β|^2)) g_1(R_x) / ([g_4(R_x) + (σ_R^2/(2|β|^2))(1/σ_θ^2 − ϵ)] g_1(R_x) + g_5(R_x)), (22)

where g_4(R_x) = ∫_{−∞}^{∞} p̄_Θ(θ) ∥ḃ(θ)∥^2 a^H(θ) R_x a(θ) dθ; g_5(R_x) = (1/2) ∫_{−∞}^{∞} ∫_{−∞}^{∞} ∥b(θ_p)∥^2 ∥b(θ_q)∥^2 |ȧ^H(θ_p) R_x a(θ_q) − a^H(θ_p) R_x ȧ(θ_q)|^2 p̄_Θ(θ_p) p̄_Θ(θ_q) dθ_p dθ_q ≥ 0. By noting that both g_1(R_x) and g_5(R_x) are non-negative, an upper bound of PCRB_θ can be obtained as
PCRB_θ ≤ PCRB_θ^U ≜ 1/((2|β|^2/σ_R^2) g_4(R_x) + 1/σ_θ^2 − ϵ) = 1/((2|β|^2/σ_R^2)(w^H Q w + Σ_{k=1}^{K} v_k^H Q v_k) + 1/σ_θ^2 − ϵ), (23)

where Q = Σ_{k=1}^{K} ∫_{−∞}^{+∞} f_k(θ) ∥ḃ(θ)∥^2 a(θ) a^H(θ) dθ.
Note that Q and ϵ in (23) still involve complicated integrals over the continuous θ, for which an analytical expression is difficult to obtain. By leveraging the fact that we consider a small variance σ_θ^2 in the Gaussian mixture model, we propose an approximation of (23) in closed form, which will facilitate our optimization of the beamforming vectors in the next section.
Proposition 1: With a small σ_θ^2, an approximate expression for the PCRB upper bound PCRB_θ^U is given by

PCRB_θ^U ≈ P̃CRB_θ^U ≜ 1/((2|β|^2/σ_R^2)(w^H Q̃ w + Σ_{k=1}^{K} v_k^H Q̃ v_k) + 1/σ_θ^2), (24)

where Q̃ ≜ ρ_0 Σ_{k=1}^{K} p_k (cos(2θ_k) + 1) a(θ_k) a^H(θ_k) with ρ_0 ≜ Σ_{n=1}^{N_r} π^2 Δ^2 (n−1)^2.
Proof: Please refer to Appendix A. Notice that the approximate PCRB upper bound P̃CRB_θ^U has a closed-form expression which is an explicit function of the beamforming vectors w and v_k's, and will thus be adopted as the performance metric for sensing the target's location. It is worth noting that the tightness of the PCRB upper bound PCRB_θ^U with respect to the exact PCRB PCRB_θ, as well as the accuracy of its approximation P̃CRB_θ^U, has been validated numerically with moderate values of σ_θ (e.g., σ_θ < 10^{−2}); the details are omitted due to limited space.
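A short numerical sketch of the closed form in (24); the names are ours and the steering vector steer() matches the earlier sketch:

```python
import numpy as np

def steer(theta, n_ant, delta=0.5):
    n = np.arange(n_ant)
    return np.exp(1j * np.pi * delta * (2 * n - (n_ant - 1)) * np.sin(theta))

def q_tilde(thetas, probs, Nt, Nr, delta=0.5):
    """Closed-form Q_tilde of Proposition 1 (our naming)."""
    # rho0 = sum_{n=1}^{Nr} pi^2 delta^2 (n-1)^2
    rho0 = np.sum((np.pi * delta * np.arange(Nr)) ** 2)
    Q = np.zeros((Nt, Nt), dtype=complex)
    for th, p in zip(thetas, probs):
        a = steer(th, Nt, delta)
        Q += rho0 * p * (np.cos(2 * th) + 1) * np.outer(a, a.conj())
    return Q

def pcrb_upper_bound_approx(w, V, Q, abs_beta, sigma2_r, sigma_theta):
    """Evaluates (24) for beams w (Nt,) and V (Nt, K)."""
    beam = np.real(w.conj() @ Q @ w + np.trace(V.conj().T @ Q @ V))
    return 1.0 / (2 * abs_beta**2 / sigma2_r * beam + 1.0 / sigma_theta**2)
```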
Furthermore, note that P̃CRB_θ^U is a decreasing function of the amplitude of the overall reflection coefficient β and consequently the amplitude of the target's unknown RCS coefficient α. Thus, a global upper bound of the PCRB that holds for any value of α can be obtained by considering the minimum value of |α|, denoted by |ᾱ| = min |α| (which can be obtained a priori by exploiting properties of the target), and replacing β in (24) with the corresponding |β̄| = (β_0/r^2)|ᾱ| = min |β|.
IV. PROBLEM FORMULATION
In this paper, our objective is to optimize the transmit beamforming vectors w and v_k's to maximize the worst-case secrecy rate among all possible eavesdropper locations, while ensuring the sensing PCRB for the eavesdropping target is always below a given threshold Γ. Motivated by the tractability of the approximate PCRB upper bound P̃CRB_θ^U, we aim to achieve this goal by ensuring that P̃CRB_θ^U corresponding to the minimum RCS amplitude |ᾱ| is no larger than Γ. Thus, the optimization problem is formulated as follows.
(P1) max_{w,{v_k}} min_k log_2(1 + SINR) − log_2(1 + SINR_E(θ_k)) (25)
s.t. ∥w∥^2 + Σ_{k=1}^{K} ∥v_k∥^2 ≤ P (26)
1/((2|β̄|^2/σ_R^2)(w^H Q̃ w + Σ_{k=1}^{K} v_k^H Q̃ v_k) + 1/σ_θ^2) ≤ Γ. (27)
Note that the objective function in (P1) involves logarithm functions of fractional quadratic functions, and can be shown to be non-concave. Moreover, the constraint in (27) is also non-convex since Q̃ can be observed to be a positive semi-definite (PSD) matrix. Therefore, (P1) is a non-convex problem. Moreover, it is worth noting that in order to maximize the secrecy rate, w should be designed such that the received power of the information beam at each possible target location is as small as possible; on the other hand, to minimize the approximate PCRB upper bound, w should be designed to maximize w^H Q̃ w, which is proportional to the weighted summation of the received information beam powers among all possible target locations. Therefore, there exists a non-trivial trade-off between the secrecy performance and the sensing performance in the secure ISAC system. Furthermore, note that both the secrecy rate and the PCRB depend critically on the distribution of θ, which makes the problem more challenging. In the following, we derive the optimal solution to (P1).
V. OPTIMAL SOLUTION TO (P1)
A. Equivalent Problem Reformulation
First, note that one key difficulty in (P1) lies in the fractional expressions of the SINR. To deal with this issue, we introduce an auxiliary variable γ to characterize the SINR constraint at each possible eavesdropper (target) location. Based on this, according to [12], [13], it can be proved that there always exists a γ > 0 at all possible eavesdropper locations such that problem (P1.1) below has the same optimal solution as (P1).

(P1.1) max_{w,{v_k}} |h^H w|^2 / (Σ_{k=1}^{K} |h^H v_k|^2 + σ^2) (28)
s.t. |a^H(θ_k) w|^2 / (Σ_{j=1}^{K} |a^H(θ_k) v_j|^2 + σ_E^2 r^2/β_0) ≤ γ, ∀k (29)
∥w∥^2 + Σ_{k=1}^{K} ∥v_k∥^2 ≤ P (30)
w^H Q̃ w + Σ_{k=1}^{K} v_k^H Q̃ v_k ≥ (σ_R^2/(2|β̄|^2))(1/Γ − 1/σ_θ^2). (31)
Moreover, denote f(γ) as the optimal value of (P1.1) with a given γ > 0. Then, the following problem can be shown to have the same optimal value as (P1) [12]:

(P1.2) max_{γ>0} log_2((1 + f(γ))/(1 + γ)). (32)
Therefore, the optimal solution to (P1) can be obtained via one-dimensional search of γ > 0 in (P1.2) based on the values of f (γ). Thus, our remaining task is to obtain the optimal solution to (P1.1).
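The outer search in (P1.2) reduces to a one-dimensional maximization; a simple grid-search sketch is given below, where f is assumed to return the optimal value of (P1.1) for a given γ (e.g., via the SDR sketch shown later) or None if infeasible:

```python
import numpy as np

def maximize_secrecy_over_gamma(f, gamma_grid):
    """Grid search of log2((1 + f(gamma)) / (1 + gamma)) over gamma > 0."""
    best_gamma, best_rate = None, -np.inf
    for g in gamma_grid:
        val = f(g)                      # optimal value of (P1.1) for this gamma
        if val is None:                 # (P1.1) infeasible at this gamma
            continue
        rate = np.log2((1 + val) / (1 + g))
        if rate > best_rate:
            best_gamma, best_rate = g, rate
    return best_gamma, best_rate
```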
B. Optimal Solution to (P1.1)
Motivated by the quadratic functions involved in (P1.1), we consider an SDR based approach. Let H ≜ hh^H, W ≜ ww^H, V ≜ Σ_{k=1}^{K} v_k v_k^H, and A_k ≜ a(θ_k) a^H(θ_k), ∀k. Then, (P1.1) can be equivalently expressed as the following problem with an additional constraint of rank(W) = 1:

(P1.1R) max_{W,V} tr(HW) / (tr(HV) + σ^2) (33)
s.t. tr(A_k W) ≤ γ (tr(A_k V) + σ_E^2 r^2/β_0), ∀k (34)
tr(W) + tr(V) ≤ P (35)
tr((W + V) Q̃) ≥ (σ_R^2/(2|β̄|^2))(1/Γ − 1/σ_θ^2) (36)
W ⪰ 0, V ⪰ 0. (37)
The objective function of (P1.1R) is non-concave. However, we can leverage the Charnes-Cooper transformation [14] to transform (P1.1R) into an equivalent convex problem as:

(P2.1R) max_{W,V,t} tr(HW) (38)
s.t. tr(A_k W) ≤ γ (tr(A_k V) + t σ_E^2 r^2/β_0), ∀k (39)
tr(HV) + t σ^2 = 1 (40)
tr(W) + tr(V) ≤ tP (41)
tr((W + V) Q̃) ≥ t (σ_R^2/(2|β̄|^2))(1/Γ − 1/σ_θ^2) (42)
(37), t > 0. (43)
Since (P2.1R) is a convex optimization problem, its optimal solution can be obtained efficiently via interior-point method or CVX. Moreover, the duality gap is equal to zero. Let {β k }, λ, ρ, and ψ denote the dual variables associated with the constraint(s) in (39), (40), (41), and (42), respectively. The Lagrangian of (P2.1R) is given by
L(W, V, t, {β_k}, λ, ρ, ψ) = tr(SW) + tr(BV) + ξt + λ, (44)

where S = H − Σ_{k=1}^{K} β_k A_k + ψQ̃ − ρI_{N_t}; B = −λH + γ Σ_{k=1}^{K} β_k A_k + ψQ̃ − ρI_{N_t}; and ξ = −λσ^2 + γ (σ_E^2 r^2/β_0) Σ_{k=1}^{K} β_k + ρP − ψ (σ_R^2/(2|β̄|^2))(1/Γ − 1/σ_θ^2).
Let $\lambda^*$, $\{\beta_k^* \ge 0\}$, $\rho^* \ge 0$, and $\psi^* \ge 0$ denote the optimal dual variables of (P2.1R). Then, we have the following lemma.
Lemma 1: The optimal dual solution satisfies λ * > 0 and ρ * > 0 when γ > 0.
Proof: Please refer to Appendix B. Since $\rho^* > 0$, the constraint in (41) must be satisfied with equality by the optimal solution to (P2.1R) due to complementary slackness. Define $D^* = -\lambda^* H - \sum_{k=1}^{K}\beta_k^* A_k + \psi^*\bar{Q} - \rho^* I_{N_t}$ with $l = \mathrm{rank}(D^*)$. The orthogonal basis of the null space of $D^*$ can be represented as $Z \in \mathbb{C}^{N_t \times (N_t - l)}$, where $z_{1,n}$ denotes the $n$-th column of $Z$; if $l = N_t$, $Z = 0$. Then, we have the following proposition for (P2.1R).
Proposition 2: For (P2.1R), the optimal $V^*$ satisfies $\mathrm{rank}(V^*) \le \min(K, N_t)$. The optimal $W^*$ can be written as
$$W^* = \sum_{n=1}^{N_t - l} a_n z_{1,n} z_{1,n}^H + b\, r r^H, \quad (45)$$
where $a_n \ge 0$ for $\forall n$, $b > 0$, and $r \in \mathbb{C}^{N_t \times 1}$ satisfies $r^H Z = 0$. If $\mathrm{rank}(W^*) > 1$, the following solution with a rank-one $\bar{W}^*$ can be constructed, which achieves the same optimal value of (P2.1R):
$$\bar{W}^* = b\, r r^H \quad (46)$$
$$\bar{V}^* = V^* + \sum_{n=1}^{N_t - l} a_n z_{1,n} z_{1,n}^H \quad (47)$$
$$\bar{t}^* = t^*. \quad (48)$$
Proof: Please refer to Appendix C. To summarize, if the optimal solution $(W^*, V^*, t^*)$ of (P2.1R) satisfies $\mathrm{rank}(W^*) = 1$, we can obtain an optimal solution $(W^*/t^*, V^*/t^*)$ for (P1.1R) and consequently for (P1.1). Otherwise, we can construct $(\bar{W}^*, \bar{V}^*, \bar{t}^*)$ with $\mathrm{rank}(\bar{W}^*) = 1$ according to (46), (47), and (48), and $(\bar{W}^*/\bar{t}^*, \bar{V}^*/\bar{t}^*)$ will be the optimal solution for (P1.1).
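The rank-one extraction of (46)-(48) and the recovery of the beamformer $w$ can be made concrete with a short numerical sketch. The eigen-split below relies on the structure in Proposition 2 (every eigen-direction of $W^*$ other than $r$ lies in the null space of $H$); the function names and tolerance are our own choices.

```python
import numpy as np

def rank_one_reconstruct(W, V, t, h, tol=1e-6):
    """Numerical sketch of the construction (46)-(48).

    Per Proposition 2, every eigen-direction of W other than r lies in the
    null space of H = h h^H, so we keep the eigen-direction with a
    non-negligible projection onto h in W_bar and move the rest into V_bar.
    """
    vals, vecs = np.linalg.eigh(W)
    W_bar = np.zeros_like(W)
    V_bar = V.copy()
    for lam, z in zip(vals, vecs.T):
        if lam <= tol:
            continue                          # discard numerical noise
        if abs(np.vdot(h, z)) > tol:          # the r-direction: b r r^H
            W_bar += lam * np.outer(z, z.conj())
        else:                                 # a_n z_{1,n} z_{1,n}^H terms
            V_bar += lam * np.outer(z, z.conj())
    return W_bar, V_bar, t                    # t_bar = t, per (48)

def recover_beamformer(W_bar, t):
    """Recover w for (P1.1) from the rank-one W_bar / t = w w^H."""
    vals, vecs = np.linalg.eigh(W_bar / t)
    return np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]
```

Since (P1.1) depends on $\{v_k\}$ only through $V$, the AN beams can likewise be taken as scaled eigenvectors of $\bar{V}^*/\bar{t}^*$, one beam per positive eigenvalue.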
VI. NUMERICAL RESULTS
In this section, we provide numerical results to evaluate the performance of the proposed transmit beamforming design for secure ISAC. We assume the BS is equipped with $N_t = 8$ transmit antennas and $N_r = 10$ receive antennas, where the antenna spacing is half of the wavelength (i.e., $\Delta = \frac{1}{2}$). We consider $K = 4$ possible target (eavesdropper) locations, with $\theta_1 = -55°$, $\theta_2 = -35°$, $\theta_3 = 65°$, $\theta_4 = 45°$; $p_1 = 0.2$, $p_2 = 0.3$, $p_3 = 0.1$, and $p_4 = 0.4$. The transmit power is set as $P = 20$ dBm. The path loss of the BS-target channel is 10 dB. The lower bound of the target's RCS is set as $|\bar{\alpha}| = 0.0071$. The average receiver noise power is set as $\sigma_R^2 = \sigma_E^2 = \sigma^2 = -60$ dBm. The BS-user channel follows an LoS model, where the user's angle with respect to the BS reference point is $-10°$.
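For reproducibility, the stated simulation setup can be captured in a small configuration together with the ULA steering vector used throughout; this is a sketch of the listed parameters (the dictionary layout and helper name are ours).

```python
import numpy as np

# Section VI simulation setup (values from the text; dict layout is ours).
cfg = dict(
    Nt=8, Nr=10, Delta=0.5,                 # antennas; half-wavelength spacing
    theta_deg=[-55.0, -35.0, 65.0, 45.0],   # possible target locations
    prob=[0.2, 0.3, 0.1, 0.4],              # prior location probabilities p_k
    P_dBm=20.0,                             # transmit power
    target_pathloss_dB=10.0,                # BS-target path loss
    rcs_lower_bound=0.0071,                 # |alpha_bar|
    noise_dBm=-60.0,                        # sigma_R^2 = sigma_E^2 = sigma^2
    user_angle_deg=-10.0,                   # LoS user angle
)

def steering(theta_deg, n, delta=0.5):
    """ULA steering vector a(theta), entries exp(j 2 pi delta (n-1) sin(theta))."""
    idx = np.arange(n)
    return np.exp(1j * 2.0 * np.pi * delta * idx * np.sin(np.deg2rad(theta_deg)))
```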
First, in Fig. 2, we illustrate the secrecy rate $\log_2((1 + f(\gamma))/(1 + \gamma))$ versus the SINR constraint at the eavesdropper, $\gamma$, for different sensing accuracy thresholds $\Gamma$. We set the path loss of the user as 30 dB. It is observed that for every value of $\Gamma$, the secrecy rate first increases and then decreases with $\gamma$, and there exists a unique optimal solution of $\gamma$. Moreover, as the sensing accuracy constraint becomes more stringent (with a smaller $\Gamma$), it can be observed that the maximized secrecy rate decreases, which demonstrates the trade-off between secrecy and sensing.
Next, we consider Γ = 2.68 × 10 −5 and illustrate in Fig. 3 the beampattern over different angles at distance r. The path loss of the user is set as 60 dB. It is observed that at the user's angle, the information beam is much stronger than the AN beam; whereas at the possible eavesdropper's locations, the information beam's power reaches its local minimum, but the AN beam power is generally very strong. This thus validates the effectiveness of our proposed beamforming design for enhancing the communication secrecy and sensing accuracy.
Finally, we compare the performance of our proposed transmit beamforming scheme with two benchmark schemes.
• Benchmark 1: Simple maximum ratio transmission (MRT) beamforming with only an information beamforming vector $w = \sqrt{P}h/\|h\|$, which is designed to maximize the user's achievable rate without consideration of the secrecy and sensing performance.
• Benchmark 2: Beamforming without AN, where $w$ is obtained by solving (P1) with $v_k = 0$, $\forall k$.
In Fig. 4, we show the secrecy rate versus the sensing accuracy threshold $\Gamma$. It is observed that with our proposed scheme, the secrecy rate increases as the sensing accuracy constraint becomes less stringent, which further demonstrates the secrecy-sensing trade-off. On the other hand, Benchmark Scheme 1 is observed to be infeasible for the first five samples of $\Gamma$, due to its lack of consideration of the sensing performance; while both Benchmark Scheme 1 and Benchmark Scheme 2 fail to achieve a non-zero secrecy rate, due to the incapability of controlling the information leakage without the AN beam.
VII. CONCLUSIONS
This paper investigated a secure ISAC system where the sensing target may potentially serve as an eavesdropper. The location of the target is unknown and random, while its distribution is available for exploitation. First, to characterize the sensing performance, we derived the PCRB of the MSE exploiting the prior distribution, and further proposed a novel approximate PCRB upper bound with a closed-form expression. Then, we considered an AN-based transmit beamforming structure, and formulated the beamforming optimization problem to maximize the minimum secrecy rate among all possible target (eavesdropper) locations, under a sensing accuracy constraint represented by an upper bound on the PCRB. Although the formulated problem is non-convex, we adopted the SDR technique to obtain the optimal beamforming solution. Numerical results revealed that our proposed scheme outperforms various benchmark schemes in terms of both secrecy and sensing.
APPENDIX A
PROOF OF PROPOSITION 1

First, we denote $S(\theta_k) = \int_{-\infty}^{+\infty} f_k(\theta)\|\dot{b}(\theta)\|^2 a(\theta)a(\theta)^H d\theta$, which yields $\bar{Q} = \sum_{k=1}^{K} S(\theta_k)$. Based on the definitions of $a(\theta)$ and $b(\theta)$, $S(\theta_k)$ can be further expressed as
$$S(\theta_k) = \int_{-\infty}^{+\infty} f_k(\theta)\|\dot{b}(\theta)\|^2 \begin{bmatrix} 1 & e^{-j2\pi\Delta\sin\theta} & \cdots & e^{-j2\pi(N_t-1)\Delta\sin\theta} \\ e^{j2\pi\Delta\sin\theta} & 1 & \cdots & e^{-j2\pi(N_t-2)\Delta\sin\theta} \\ \vdots & \vdots & \ddots & \vdots \\ e^{j2\pi(N_t-1)\Delta\sin\theta} & e^{j2\pi(N_t-2)\Delta\sin\theta} & \cdots & 1 \end{bmatrix} d\theta. \quad (49)$$
Denote $\mu_1 = \frac{p_k}{\sqrt{2\pi}} 2\sum_{n=1}^{N_r} \pi^2\Delta^2(n-1)^2$, $\mu_2 = -2\pi\Delta$, and $t = \theta - \theta_k$. Then, $[S(\theta_k)]_{1,2}$ can be further simplified as:
$$[S(\theta_k)]_{1,2} = \mu_1\frac{1}{\sigma_\theta}\int_{-\infty}^{+\infty} e^{-\frac{(\theta-\theta_k)^2}{2\sigma_\theta^2}}\cos^2(\theta)e^{\mu_2 j\sin(\theta)} d\theta \quad (50)$$
$$= \frac{\mu_1}{2\sigma_\theta}\int_{-\infty}^{+\infty} e^{-\frac{t^2}{2\sigma_\theta^2}} e^{\mu_2 j(\sin\theta_k\cos t + \cos\theta_k\sin t)}\times\big(\cos(2t)\cos(2\theta_k) - \sin(2t)\sin(2\theta_k) + 1\big) dt$$
$$\overset{(a)}{\approx} \frac{\mu_1}{2\sigma_\theta}\int_{-\infty}^{+\infty} e^{-\frac{t^2}{2\sigma_\theta^2}}\big(\alpha_0 + \alpha_1 t + \alpha_2 t^2 + o(t^3)\big) dt,$$
where $\alpha_0 = e^{\mu_2 j\sin(\theta_k)}(\cos(2\theta_k) + 1)$; $\alpha_1 = -2\sin(2\theta_k) - j\mu_2(1 + \cos(2\theta_k))\sin(\theta_k)$; and $\alpha_2 = -2\cos(2\theta_k) - \frac{\mu_2^2}{2}(j\sin(\theta_k) + \cos(\theta_k))(\cos(\theta_k) + 1) - 2j\mu_2\sin(2\theta_k)\cos(\theta_k)$. Note that (a) is derived by taking the Maclaurin series of $\cos(2t)$, $\sin(2t)$, $e^{\mu_2 j\sin(\theta_k)\cos(t)}$ and $e^{\mu_2 j\cos(\theta_k)\sin(t)}$ and noting that $\sigma_\theta^2$ is a small value. Notice that $\int_{-\infty}^{+\infty} e^{-\frac{t^2}{2\sigma_\theta^2}} t^2 dt = \sqrt{2\pi}\sigma_\theta^3$. Thus, we have $[S(\theta_k)]_{1,2} = \mu_1(\cos(2\theta_k) + 1)\sqrt{\frac{\pi}{2}}e^{\mu_2 j\sin(\theta_k)}$. Similarly, the other entries in $S(\theta_k)$ can be approximated in the same manner, which yields
$$S(\theta_k) = \mu_1\sqrt{\pi/2}(\cos(2\theta_k) + 1)a(\theta_k)a^H(\theta_k). \quad (51)$$
Secondly, due to the small value of $\sigma_\theta^2$, the non-zero values of $\partial p_\Theta(\theta)/\partial\theta$ will only occur in the close vicinity of the $\theta_k$'s. Thus, we have $\int_{-\infty}^{+\infty}\frac{\left(\partial p_\Theta(\theta)/\partial\theta\right)^2}{p_\Theta(\theta)} d\theta \approx \frac{1}{\sigma_\theta^2}$, i.e., $\epsilon \approx 0$. Based on this and (51), Proposition 1 is proved.

APPENDIX B
PROOF OF LEMMA 1

First, we prove that $\lambda^* > 0$ when $\gamma > 0$. According to (44), the dual problem of (P2.1R) can be given by
$$\min_{\{\beta_k\},\lambda,\rho,\psi}\ \lambda \quad \text{s.t.}\ S \preceq 0,\ B \preceq 0,\ \xi \le 0,\ \beta_k \ge 0\ \forall k,\ \psi \ge 0,\ \rho \ge 0. \quad (52)$$
To ensure that the Lagrangian in (44) is bounded so that the dual function exists, it follows that
$$S^* \preceq 0,\quad B^* \preceq 0,\quad \xi^* \le 0, \quad (53)$$
which are obtained by substituting the optimal dual solution. Based on strong duality, the duality gap is zero; thus $\lambda^*$ is equal to the optimal value of (P2.1R). Hence, we have $\lambda^* > 0$. Next, we demonstrate $\rho^* > 0$ by contradiction. We have $\bar{Q} = \sum_{k=1}^{K} w_k A_k$, where $w_k = \rho_0 p_k(\cos(2\theta_k) + 1) \ge 0$, $\forall k$. Assume $\phi = \{k \mid (\beta_k^*)^2 + (\psi^* w_k)^2 > 0,\ k = 1,\dots,K\}$. Then, we prove that $\rho^* \ne 0$ by discussing the following two cases under the assumption that $\rho^* = 0$.
• Case 1: For $\phi = \emptyset$, we have $S^* = H \succeq 0$ with $H \ne 0$, which contradicts (53). Hence $\rho^* > 0$ in this case.
• Case 2: For $\phi \ne \emptyset$, we have $B^* = -\lambda^* H + \sum_{k\in\phi}\beta_k^*\gamma A_k + \psi^*\bar{Q} = -\lambda^* H + \sum_{k\in\phi}\beta_k^*\gamma A_k + \sum_{k=1}^{K}\psi^* w_k A_k = -\lambda^* H + \sum_{k\in\phi}(\beta_k^*\gamma + \psi^* w_k)A_k$, where the terms with $k \notin \phi$ vanish by the definition of $\phi$. Then, we have $\sum_{k\in\phi}(\beta_k^*\gamma + \psi^* w_k)A_k \succeq 0$ and $\lambda^* \ge 0$. To ensure $B^* \preceq 0$, it is required that any vector that lies in the null space of $H$ must also lie in the null space of $A_k$, $\forall k \in \phi$. However, in our scenario, the channel $h$ and $a(\theta_k)$ for $\forall k$ are linearly independent, so this requirement cannot hold. Thus, we can get $\rho^* > 0$.
By combining the aforementioned cases, we can conclude that $\rho^* > 0$. Therefore, Lemma 1 is proven.
APPENDIX C
PROOF OF PROPOSITION 2
The optimal solutions of (P2.1R) satisfy the Karush-Kuhn-Tucker (KKT) conditions, which can be expressed as
$$S^* W^* = 0,\quad B^* V^* = 0. \quad (54)$$
1) We prove that $\mathrm{rank}(V^*) \le \min(K, N_t)$. If $K \ge N_t$, then $\mathrm{rank}(V^*) \le N_t = \min(K, N_t)$ holds trivially. For the case $K < N_t$, we introduce the following Lemma 2.
Lemma 2: Let $Y$ and $X$ be two matrices of the same dimension. It holds that $\mathrm{rank}(Y + X) \ge \mathrm{rank}(Y) - \mathrm{rank}(X)$.
Proof: If $Y$ and $X$ are of the same dimension, $\mathrm{rank}(Y) + \mathrm{rank}(X) \ge \mathrm{rank}(Y + X)$. Thus, since $\mathrm{rank}(X) = \mathrm{rank}(-X)$, we have $\mathrm{rank}(Y + X) + \mathrm{rank}(-X) \ge \mathrm{rank}(Y)$.
We define $C^* = -\lambda^* H - \rho^* I$. Due to $\lambda^* > 0$ and $\rho^* > 0$, we have $C^* \prec 0$; it follows that $\mathrm{rank}(C^*) = N_t$. Therefore, $B^* = C^* + \sum_{k=1}^{K}(\beta_k^*\gamma + \psi^* w_k)A_k$. Due to $\mathrm{rank}\big(\sum_{k=1}^{K}(\beta_k^*\gamma + \psi^* w_k)A_k\big) \le K$, based on Lemma 2, we have
$$\mathrm{rank}(B^*) \ge \mathrm{rank}(C^*) - \mathrm{rank}\Big(\sum_{k=1}^{K}(\beta_k^*\gamma + \psi^* w_k)A_k\Big) \ge N_t - K. \quad (55)$$
Since $B^* V^* = 0$, the column space of $V^*$ lies in the null space of $B^*$, so $\mathrm{rank}(V^*) \le N_t - \mathrm{rank}(B^*) \le K$. By combining the two cases mentioned above, namely $K \ge N_t$ and $K < N_t$, we can deduce that $\mathrm{rank}(V^*) \le \min(K, N_t)$.
2) We prove that the optimal solution $W^*$ can be written as in (45), considering the two cases below.
• Case 1: $\mathrm{rank}(D^*) = l = N_t$, where $D^* = B^* - \sum_{k=1}^{K}(1 + \gamma)\beta_k^* A_k$. Then, we have
$$S^* = D^* + (1 + \lambda^*)H. \quad (56)$$
According to $S^* W^* = 0$ and Lemma 2, we can conclude that $\mathrm{rank}(S^*) \ge N_t - 1$. If $\mathrm{rank}(S^*) = N_t$, it follows that $W^* = 0$, which cannot hold for the optimal solution. Then, we conclude that $\mathrm{rank}(S^*) = N_t - 1$ and $W^* = b\, r r^H$, where $r$ spans the null space of $S^*$.
• Case 2: $l < N_t$. By taking (56), we have
$$z_{1,n}^H S^* z_{1,n} = (1 + \lambda^*)|h^H z_{1,n}|^2,\quad 1 \le n \le N_t - l. \quad (57)$$
Due to $S^* \preceq 0$ and $(1 + \lambda^*) > 0$, we can get $|h^H z_{1,n}|^2 = 0$ for $\forall n$ by (57), which means $HZ = 0$. Since $Z$ is the orthogonal basis of the null space of $D^*$, we can get
$$S^* Z = \big(D^* + (1 + \lambda^*)H\big)Z = 0. \quad (58)$$
In addition, according to Lemma 2 and (56), we can get $\mathrm{rank}(S^*) \ge \mathrm{rank}(D^*) - \mathrm{rank}((1 + \lambda^*)H) = l - 1$. We define $\Omega$ as the orthogonal basis of the null space of $S^*$, which satisfies $\mathrm{rank}(\Omega) = N_t - \mathrm{rank}(S^*) \le N_t - l + 1$.
We prove that $\mathrm{rank}(\Omega) = N_t - l + 1$ through the following cases.
- Case 2.1: $\mathrm{rank}(\Omega) \ge N_t - l$. Since $Z$ spans $N_t - l$ orthogonal dimensions of the null space of $S^*$ by (58), we have $\mathrm{rank}(\Omega) \ge N_t - l$.
- Case 2.2: $\mathrm{rank}(\Omega) \ne N_t - l$. If $\mathrm{rank}(\Omega) = N_t - l$, we would get $\Omega = Z$. Then, we would have $W^* = \sum_{n=1}^{N_t - l} a_n z_{1,n} z_{1,n}^H$ with $a_n \ge 0$ for $\forall n$. However, $z_{1,n}$, $\forall n$ lie in the null space of $H$, which means no information would be transmitted to the user, a contradiction with optimality.
- Case 2.3: $\mathrm{rank}(\Omega) = N_t - l + 1$. According to (59), there exists only one single subspace spanned by $r$ of unit norm, which lies in the null space of $S^*$ and is orthogonal to the span of $Z$. Thus, we have
$$\Omega = [Z, r], \quad (60)$$
where $\mathrm{rank}(\Omega) = N_t - l + 1$. According to (53) and (54), the optimal solution $W^*$ can be expressed as
$$W^* = \sum_{n=1}^{N_t - l} a_n z_{1,n} z_{1,n}^H + b\, r r^H,$$
where $a_n \ge 0$ for $\forall n$, $b > 0$, and $r$ satisfies $r^H Z = 0$.
The proof of the expressions of the optimal solutions $W^*$ and $V^*$ for problem (P2.1R) is thus completed.
3) We consider that $(\bar{W}^*, \bar{V}^*, \bar{t}^*)$ given in (46)-(48) with $\mathrm{rank}(\bar{W}^*) = 1$ is also an optimal solution to (P2.1R). Substituting (46), (47) and (48) into (P2.1R), and noting that $Hz_{1,n} = 0$ for $\forall n$, we have
$$\mathrm{tr}(H\bar{W}^*) = \mathrm{tr}\Big(H\Big(W^* - \sum_{n=1}^{N_t-l} a_n z_{1,n} z_{1,n}^H\Big)\Big) = \mathrm{tr}(HW^*),$$
$$\mathrm{tr}(H\bar{V}^*) + \bar{t}^*\sigma^2 = \mathrm{tr}\Big(H\Big(V^* + \sum_{n=1}^{N_t-l} a_n z_{1,n} z_{1,n}^H\Big)\Big) + t^*\sigma^2 = \mathrm{tr}(HV^*) + t^*\sigma^2 = 1, \quad (63)$$
$$\mathrm{tr}(A_k\bar{W}^*) \le \mathrm{tr}(A_k W^*) \le \gamma\Big(\mathrm{tr}(A_k V^*) + \frac{t^*\sigma_E^2 r^2}{\beta_0}\Big) \le \gamma\Big(\mathrm{tr}(A_k\bar{V}^*) + \frac{\bar{t}^*\sigma_E^2 r^2}{\beta_0}\Big),\ \forall k \quad (64)$$
$$\mathrm{tr}(\bar{W}^*) + \mathrm{tr}(\bar{V}^*) = \mathrm{tr}(W^*) + \mathrm{tr}(V^*) \le \bar{t}^* P, \quad (65)$$
$$\mathrm{tr}\big((\bar{W}^* + \bar{V}^*)\bar{Q}\big) = \mathrm{tr}\big((W^* + V^*)\bar{Q}\big) \ge \frac{\bar{t}^*\sigma_R^2}{2|\beta|^2}\Big(\frac{1}{\Gamma} - \frac{1}{\sigma_\theta^2}\Big), \quad (66)$$
$$\mathrm{tr}(\bar{W}^*) \ge 0,\quad \mathrm{tr}(\bar{V}^*) \ge 0,\quad \bar{t}^* \ge 0. \quad (67)$$
Therefore, $(\bar{W}^*, \bar{V}^*, \bar{t}^*)$ is an optimal solution for (P2.1R) with $\mathrm{rank}(\bar{W}^*) = 1$.
Proposition 2 is thus proved.
Fig. 1. Illustration of a secure ISAC system with random target location.
Fig. 2. Secrecy rate versus the SINR constraint at the eavesdropper, γ.
Fig. 3. Beampattern versus angle.
Fig. 4. Secrecy rate versus sensing accuracy threshold for different schemes.
REFERENCES
[1] W. Trappe, "The challenges facing physical layer security," IEEE Commun. Mag., vol. 53, no. 6, pp. 16-20, Jun. 2015.
[2] S. Goel and R. Negi, "Guaranteeing secrecy using artificial noise," IEEE Trans. Wireless Commun., vol. 7, no. 6, pp. 2180-2189, Jun. 2008.
[3] X. Zhou and M. R. McKay, "Secure transmission with artificial noise over fading channels: Achievable rate and optimal power allocation," IEEE Trans. Veh. Technol., vol. 59, no. 8, pp. 3831-3842, Oct. 2010.
[4] Q. Li and W.-K. Ma, "Optimal and robust transmit designs for MISO channel secrecy by semidefinite programming," IEEE Trans. Signal Process., vol. 59, no. 8, pp. 3799-3812, Aug. 2011.
[5] N. Su, F. Liu, Z. Wei, Y.-F. Liu, and C. Masouros, "Secure dual functional radar-communication transmission: Exploiting interference for resilience against target eavesdropping," IEEE Trans. Wireless Commun., vol. 21, no. 9, pp. 7238-7252, 2022.
[6] F. Liu, Y. Cui, C. Masouros, J. Xu, T. Han, Y. Eldar, and S. Buzzi, "Integrated sensing and communications: Towards dual-functional wireless networks for 6G and beyond," IEEE J. Sel. Areas Commun., vol. 40, no. 6, pp. 1728-1767, Jun. 2022.
[7] N. Su, F. Liu, and C. Masouros, "Sensing-assisted physical layer security," in Proc. Int. WSA & SCC, May 2023.
[8] Y. Liang, G. Kramer, H. V. Poor, and S. Shamai (Shitz), "Compound wire-tap channels," in Proc. 45th Ann. Allerton Conf. Commun., Contr., Comput., pp. 136-143, Sep. 2007.
[9] H. L. Van Trees, Detection, Estimation, and Modulation Theory, Part I: Detection, Estimation, and Linear Modulation Theory. Hoboken, NJ, USA: Wiley, Apr. 2004.
[10] Y. Shen and M. Z. Win, "Fundamental limits of wideband localization-Part I: A general framework," IEEE Trans. Inf. Theory, vol. 56, no. 10, pp. 4956-4980, Oct. 2010.
[11] K. Hou and S. Zhang, "Secure integrated sensing and communication exploiting target location distribution," Available: https://www.eie.polyu.edu.hk/%7Eshuowenzhang/GC2023.pdf
[12] L. Zhang, R. Zhang, Y. C. Liang, Y. Xin, and S. Cui, "On the relationship between the multi-antenna secrecy communications and cognitive radio communications," IEEE Trans. Commun., vol. 58, no. 6, pp. 1877-1886, Jun. 2010.
[13] L. Liu, R. Zhang, and K.-C. Chua, "Secrecy wireless information and power transfer with MISO beamforming," IEEE Trans. Signal Process., vol. 62, no. 7, pp. 1850-1863, Apr. 2014.
[14] K. Tone, "A slacks-based measure of efficiency in data envelopment analysis," Eur. J. Oper. Res., vol. 130, no. 3, pp. 498-509, May 2001.
| [] |
[
"Fairness-Sensitive Policy-Gradient Reinforcement Learning for Reducing Bias in Robotic Assistance",
"Fairness-Sensitive Policy-Gradient Reinforcement Learning for Reducing Bias in Robotic Assistance"
] | [
"Jie Zhu ",
"Mengsha Hu ",
"Xueyao Liang ",
"Amy Zhang ",
"Ruoming Jin ",
"Rui Liu "
] | [] | [] | Robots assist humans in various activities, from daily living, to public service (e.g., airports and restaurants), and to collaborative manufacturing. However, it is risky to assume that the knowledge and strategies robots learned from one group of people can apply to other groups. The discriminatory performance of robots will undermine their service quality for some people, ignore their service requests, and even offend them. Therefore, it is critically important to mitigate bias in robot decision-making for more fair services. In this paper, we designed a self-reflective mechanism, Fairness-Sensitive Policy Gradient Reinforcement Learning (FSPGRL), to help robots to self-identify biased behaviors during interactions with humans. FSPGRL identifies bias by examining the abnormal update along particular gradients and updates the policy network to support fair decision-making for robots. To validate FSPGRL's effectiveness, a human-centered service scenario, "A robot is serving people in a restaurant," was designed. A user study was conducted; 24 human subjects participated in generating 1,000 service demonstrations. Four commonly-seen issues, "Willingness Issue," "Priority Issue," "Quality Issue," and "Risk Issue," were observed from robot behaviors. By using FSPGRL to improve robot decisions, robots were proven to have a self-bias detection capability for a more fair service. We have achieved the suppression of bias and improved the quality during the process of robot learning to realize a relatively fair model. | null | [
"https://export.arxiv.org/pdf/2306.04167v1.pdf"
] | 259,095,515 | 2306.04167 | a05f836b5da73d4d8a62fe41011e9462bc197d0c |
Fairness-Sensitive Policy-Gradient Reinforcement Learning for Reducing Bias in Robotic Assistance
Jie Zhu
Mengsha Hu
Xueyao Liang
Amy Zhang
Ruoming Jin
Rui Liu
Fairness-Sensitive Policy-Gradient Reinforcement Learning for Reducing Bias in Robotic Assistance
Index Terms-Fairness, Human-Robot Interaction, Reinforcement Learning
Robots assist humans in various activities, from daily living, to public service (e.g., airports and restaurants), and to collaborative manufacturing. However, it is risky to assume that the knowledge and strategies robots learned from one group of people can apply to other groups. The discriminatory performance of robots will undermine their service quality for some people, ignore their service requests, and even offend them. Therefore, it is critically important to mitigate bias in robot decision-making for more fair services. In this paper, we designed a self-reflective mechanism, Fairness-Sensitive Policy Gradient Reinforcement Learning (FSPGRL), to help robots to self-identify biased behaviors during interactions with humans. FSPGRL identifies bias by examining the abnormal update along particular gradients and updates the policy network to support fair decision-making for robots. To validate FSPGRL's effectiveness, a human-centered service scenario, "A robot is serving people in a restaurant," was designed. A user study was conducted; 24 human subjects participated in generating 1,000 service demonstrations. Four commonly-seen issues, "Willingness Issue," "Priority Issue," "Quality Issue," and "Risk Issue," were observed from robot behaviors. By using FSPGRL to improve robot decisions, robots were proven to have a self-bias detection capability for a more fair service. We have achieved the suppression of bias and improved the quality during the process of robot learning to realize a relatively fair model.
I. INTRODUCTION
As sensor and artificial intelligence technologies advance, robots are now able to assist humans in various activities, from daily living, to public service, and manufacturing. For example, humanoid robots in hotels greet guests, carry luggage, deliver packages to guest rooms, and clean rooms [1]; robot arms in collaborative manufacturing hand over parts to workers and assist workers in controlling processing safety [2].
AI is developing fast and serving wide application areas, while fairness in AI attracts social attention as people are getting more concerned about safety, ethics, and accessibility issues during AI tool implementations. In 2016, the COMPAS software revealed algorithmic bias in deciding whether or not to set an offender free [3]. Blacks are nearly twice as likely as whites to be labeled as higher risk, even though they do not actually re-offend. Subsequent research pointed out how unfair algorithms can discriminate against certain groups in hiring and lending decisions [4]-[6]. For example, one resume screening company discovered that being named "Jared" and participating in high school lacrosse were strong indicators of success in their model. To improve the safety and fairness of AI models, bias mitigation techniques for AI have been developed. Work [7], [8] investigated optimal fair ranking algorithms to solve online selection problems where decisions are often biased, such as hiring and credit risk estimation.

Fig. 1. The illustration of robot biased behaviors in the restaurant environment. Our method can detect those biased behaviors and correct them using bias detection guidance.
Inspired by the AI breakthroughs in fairness, it is crucial to address similar fairness issues in robots. While users are from diverse biological backgrounds, such as race and gender, it is risky to assume that the knowledge and strategies learned from one group of people can apply to others. Discriminatory performance by robots will undermine service quality for particular groups of people, ignore their service requests, and cause offense. For instance, self-driving cars may make racially biased decisions that harm people in "Trolley problem" scenarios when deciding who will be harmed in a crash [9]. It is imperative to mitigate the bias in robots' decision-making processes to improve the inclusivity and quality of robot services.
To mitigate fairness issues in human-robot interactions, our research proposes a self-reflective mechanism, Fairness-Sensitive Policy Gradient Reinforcement Learning (FSPGRL), to help robots self-identify biased behaviors during interactions with humans. FSPGRL identifies and mitigates bias in robot learning by examining abnormal updates along specific gradients and updating the policy network to enable fair decision-making. As illustrated in Fig. 1, biased robot performance without and with mitigation is compared. This paper has three main contributions: 1) A novel "bias detection" method was proposed based on a knowledge-informed principal component analysis (PCA); PCA abstracts varying behavior status and then extracts the bias-related behavior patterns for timely bias detection during robot service. This bias detection method enables robots to self-reflect on their discriminative behaviors without human reminders. 2) A bias mitigation model was proposed using reinforcement learning algorithms to adjust robot motions and mitigate bias dynamically. The robot can perceive biased behaviors, make corrections during the learning procedure, and elevate the quality of learning. 3) A novel bias study was designed to investigate abstract attitude-level bias from various and specific behavioral sensory observations (e.g., responding distance and priority). The question design, data processing, and metrics for bias identification and evaluation will provide a protocol for the research community for doing psychology-related robotics and AI research, such as fairness, trust, and safety in human-robot/AI interactions.
II. RELATED WORK
A. Fairness in AI and Its Inspiration.
In general, fairness is evaluated by comparing differences in treatment between protected and non-protected groups [10]. Following real-world examples of bias in machine learning algorithms, many works proposed fairness definitions and evaluations like Equal Opportunity [11] and Demographic Parity [12], [13]. Data bias in machine learning is represented as a biased or imbalanced distribution of specific sensitive attributes [14]. Solutions to data bias tend to remove discrimination from training data or alter the distribution of sensitive variables [15]. For instance, [16] introduced the FairFace dataset, changing the distribution of the unbalanced data to eliminate racial bias in the dataset. Model bias work focuses on mitigating prediction bias in machine learning. The IBM Research Trusted AI team came up with an open-source toolkit, AI Fairness 360, that can examine and mitigate discrimination in machine learning models, e.g., by learning fair representations with respect to protected attributes [17]. Du et al. proposed Representation Neutralization for Fairness (RNF) by neutralizing fairness-sensitive information in an encoder to reduce the correlation between labels and sensitive properties [18]. [19] proposed a Rényi correlation to measure the fairness of the model and used a novel min-max formulation to balance the accuracy and fairness metrics. [20] proposed a flexible approach that does not need to retrain existing models to achieve fairness, by learning to perturb input data to blind deployed models to fairness-related features. The model and data fairness research focuses on comparing model performance in various normal and abnormal statuses to identify issues.
B. Robot Fairness Issues and The Consequence.
Learning from the fairness in AI, some research was done to investigate the fairness issues of robotics and its consequences. For example, [21] pointed out the importance of fairness in robot navigation: unfair algorithms may cause robots to ignore people, such as black men, or interrupt interactions between people, which might cause dissatisfaction and even undermine human safety. In tasks involving teamwork, like rescue missions, fairness is essential in building trust between robots and teams to improve team effectiveness [22]. People are concerned about how tasks are distributed and whether the outcome is fair.

Fig. 2. Workflow illustration for bias detection guidance to mitigate the robot's discrimination. A pre-trained bias detection model is used to detect whether the robot is biased. The robot is trained with a reinforcement learning algorithm, and biased behaviors are corrected during training.
Several methods have been proposed to eliminate the potential bias of robots in diverse working environments. For example, [21] proposed a framework called Learning-Relearning to eliminate bias in social robot navigation, by first learning the social context of navigation and then relearning with model detection when the navigation model makes biased decisions. [23] proposed continual learning as a bias mitigation strategy for facial recognition tasks in robots, balancing learning and robustness against changes in data distributions. [24] introduced a two-phase distributed algorithm to allocate fair resources for a task, by first selecting a resource requester and then forming a robot team. However, these methods mainly focused on robot learning procedures and ignored the interaction between humans and robots. Additionally, this research ignored real human feelings and either lacked discrimination types or relied only on empirical definitions of bias, limiting flexibility in real-world usage. Therefore, our work actively explores the relationship between robot behaviors and human feedback through human studies; a sensory data exploration method was developed to detect bias autonomously based on abnormal behavior analysis.
III. METHOD
Preliminaries of FSPGRL. The workflow of bias detection guidance is illustrated in Fig. 2. Consider a robot serving a requesting person $i$ with status $S_t$, where $S_t = (P_i, D_t, t)$. $P_i \in \mathbb{R}^5$ denotes the sensitive identities of the person, such as race, sex, etc. $t \in \mathbb{R}$ denotes the timestep of status $S_t$. $D_t \in \mathbb{R}$ denotes the distance between the robot and the requesting person at $t$. $A_t \in \mathbb{R}^2$ denotes the robot reaction in $\{walk, respond\}$ at $t$. $R_t \in \mathbb{R}$ denotes the reward of the robot at $t$.
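For concreteness, the notation above can be mirrored in a small data structure; this is a sketch of the paper's notation only (the class and field names are ours).

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Status:
    """S_t = (P_i, D_t, t); field names are ours, semantics follow the text."""
    identity: Tuple[int, int, int, int, int]  # P_i: race, gender, age, disability, skin tone
    distance: float                           # D_t: robot-person distance
    step: int                                 # t: current timestep

ACTIONS = ("walk", "respond")                 # the two robot reactions A_t
```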
A. Bias Issues
We novelly propose the definition of four bias issues: "Willingness Issue," "Priority Issue," "Quality Issue," and "Risk Issue." Let $g_c$ be a group of people sharing a sensitive identity and $R$ be the behavior of the robot; the general definition of an issue score is:
$$d(g_c) = |Pr(R = r \mid P \in g_c) - Pr(R = r \mid P \notin g_c)|. \quad (1)$$
Definition 1 (Willingness Issue): The willingness issue $d_w(g_c)$ describes the different probabilities that the robot fails to respond to people with and without a sensitive identity. Based on [25], ignoring disadvantaged groups harms their rights and results in a poor user experience.
Definition 2 (Quality Issue): The quality issue $d_q(g_c)$ describes the different probabilities that the robot responds from an inappropriate position to people with and without a sensitive identity. This issue builds on the principle that navigation robots should avoid uncomfortable distances towards people [21]; violating it leads people to distrust robot services and wastes extra time on service compensation.
Definition 3 (Priority Issue): The priority issue $d_p(g_c)$ describes the different probabilities that the robot responds late to people with and without a sensitive identity. This issue is inspired by [26], where sellers in Pakistan were observed to provide delayed, discriminatory service; it makes people feel wronged and triggers social criticism.
Definition 4 (Risk Issue): The risk issue $d_r(g_c)$ describes the different probabilities that the robot keeps a risky distance towards people with and without a sensitive identity. As in [21], it is likely to bring about unsafe collisions between robots and humans.
With $S_t$ generated by the mission environment, the bias detection module calculates the average of the four issue scores, $I(G_c) \in \mathbb{R}^4$, to describe the total bias issue score:
$$I(G_c) = \frac{1}{|G_c|}\sum_{g_{ci} \in G_c}\big[d_w(g_{ci}),\ d_q(g_{ci}),\ d_p(g_{ci}),\ d_r(g_{ci})\big]. \quad (2)$$
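One possible numerical reading of Eqs. (1)-(2) is sketched below: per-interaction issue flags are aggregated into group-conditional frequencies and their absolute differences. The flag encoding and function names are our own; the sketch assumes both the in-group and out-group sets are non-empty.

```python
import numpy as np

def issue_scores(flags, in_group):
    """d_w, d_q, d_p, d_r for one group g_c, following Eq. (1).

    flags: (N, 4) boolean array of per-interaction issue flags, ordered as
    [willingness, quality, priority, risk]; in_group: (N,) boolean mask,
    True where the served person carries the sensitive identity.
    """
    flags = np.asarray(flags, dtype=float)
    in_group = np.asarray(in_group, dtype=bool)
    p_in = flags[in_group].mean(axis=0)    # Pr(R = r | P in g_c), per issue
    p_out = flags[~in_group].mean(axis=0)  # Pr(R = r | P not in g_c)
    return np.abs(p_in - p_out)

def total_issue_score(group_data):
    """I(G_c) of Eq. (2): the average issue-score vector over all groups."""
    return np.mean([issue_scores(f, m) for f, m in group_data], axis=0)
```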
B. Bias Detection
In this section, we aim to identify robot discrimination based on its implicit behaviors and attitudes while serving people. We identify bias through a comprehensive method hybridized with human experience, instead of relying only on individual thresholds for single abnormal behaviors. First, we innovatively use a human study to learn human judgments of biased results, denoted $y$, and collect the dataset, denoted $D$. Second, we use principal component analysis (PCA) [27] to extract the implicit proportions of the issue scores $I(g_c)$, denoted $PCA(I(g_c)) \in \mathbb{R}^3$, from the human study, and visualize the robot behaviors via t-SNE. PCA extracts the main components of features and reduces the dimension of the input features. Third, with $y$ and $D$, a logistic regression model is trained to classify robot behaviors. The logistic regression model learns the experience from the human study and judges the discrimination of the robot's behavior. The general loss function of logistic regression is defined as:
$$L(y, y') = -y\log y' - (1 - y)\log(1 - y'), \quad (3)$$
where $y'$ is the predicted label from the model.
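A minimal version of this detection pipeline can be written with scikit-learn, chaining PCA (three components, matching $PCA(I(g_c)) \in \mathbb{R}^3$) with logistic regression trained on the human-study labels; the exact feature layout and hyperparameters are assumptions of this sketch.

```python
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def fit_bias_detector(X, y, n_components=3):
    """PCA + logistic regression bias detector; a sketch of the pipeline.

    X: (N, 4) issue-score vectors I(g_c) from simulated interactions;
    y: (N,) human judgements of bias (1 = biased) from the user study.
    n_components=3 mirrors PCA(I(g_c)) in R^3 described in the text.
    """
    model = make_pipeline(PCA(n_components=n_components), LogisticRegression())
    model.fit(X, y)
    return model

# Usage: detector = fit_bias_detector(X_train, y_train)
#        flag = detector.predict(I_scores.reshape(1, -1))[0]
```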
The reward penalty parameter $\tau_{penalty} \in \mathbb{R}^4$ is weighted based on the different bias issues. We present the following strategies to generate $\tau_{penalty}$:
1) When risk issues appear, human-robot interaction is more likely to hurt sensitive identities and lead to permanent robot prohibition. Therefore, this kind of bias is set in the first class of $\tau_{penalty}$;
2) When willingness issues appear, the robot is more likely to ignore sensitive identities, which results in mission failures. Therefore, such bias is set in the second class of $\tau_{penalty}$;
By utilizing $\tau_{penalty}$, we can manually adjust the tolerance for various bias issues, allowing hierarchical reward feeding to the correction section. The reward function is:
$$R_t = \tau_{penalty}(1 - I(g_c)). \quad (4)$$
In addition, if the robot is detected as biased, the reward should be lower than normal. To help the robot distinguish between fair and unfair behaviors, we optimize the reward function to guide it: an overall penalty $\lambda_{penalty} \in \mathbb{R}$ is applied if the robot is detected as biased by the bias detection model.
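Putting Eq. (4) and the overall penalty together, a reward step could look like the sketch below. Since Eq. (4) pairs the vector $\tau_{penalty}$ with the vector $1 - I(g_c)$, we read the product as a dot product to obtain a scalar reward; that reduction, like the function name, is an assumption of the sketch.

```python
import numpy as np

def reward_step(issue_score, tau_penalty, biased, lam_penalty=0.5):
    """Scalar reward combining Eq. (4) with the overall bias penalty.

    issue_score: I(g_c) in R^4; tau_penalty: per-issue weights in R^4
    (e.g. [1, 1, 1, 1] as in Section IV). The dot-product reduction of
    tau_penalty with (1 - I(g_c)) is our reading of Eq. (4).
    """
    r = float(np.dot(tau_penalty, 1.0 - np.asarray(issue_score, dtype=float)))
    if biased:               # detector flagged the behaviour: lower the reward
        r -= lam_penalty
    return r
```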
C. Bias Mitigation
In this section, the robot is expected to learn which behaviors are fairer and to correct its actions while serving. This requires a capable learning algorithm. Therefore, we use a vanilla policy gradient method, the REINFORCE algorithm [28], and Proximal Policy Optimization (PPO) [29] to learn the task and eliminate bias in the behavior correction module. The robot has an Actor-Critic network denoted $A(S_t)$, $C(S_t, a_t)$. PPO strikes a balance between ease of implementation and sample complexity. For each training epoch, the robot serves $T = 30$ people, collects the set of behaviors $\{a_1, a_2, \dots, a_T\}$ under policy $\pi_{\theta_{old}}(a_t \mid s_t)$, and computes the rewards $\{r_1, r_2, \dots, r_T\}$ based on formulations (4) and (5). An advantage estimate $\hat{A}_t$ for person $t$ is used to evaluate the current policy:
$$\hat{A}_t = -C(S_t, a_t) + r_t + \gamma r_{t+1} + \cdots + \gamma^{T-t+1} r_{T-1} + \gamma^{T-t} C(S_T, a_T). \quad (6)$$
PPO maximizes the objective via
$$L^{CLIP}(\theta) = \hat{\mathbb{E}}_t\Big[\min\big(r_t(\theta)\hat{A}_t,\ \mathrm{clip}(r_t(\theta), 1 - \epsilon, 1 + \epsilon)\hat{A}_t\big)\Big], \quad (7)$$
where $r_t(\theta)$ is the probability ratio:
$$r_t(\theta) = \frac{\pi_\theta(a_t \mid S_t)}{\pi_{\theta_{old}}(a_t \mid S_t)}. \quad (8)$$
$\pi_\theta$ is the stochastic policy. $\gamma$ and $\epsilon$ are hyperparameters. $\mathrm{clip}(r_t(\theta), 1 - \epsilon, 1 + \epsilon)$ clips the probability ratio to the interval $[1 - \epsilon, 1 + \epsilon]$.
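The clipped surrogate of Eqs. (7)-(8) translates directly into a few lines of PyTorch; this is a generic PPO loss sketch (negated for gradient descent), not the authors' training code.

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Negated clipped surrogate of Eq. (7), ready for gradient descent.

    logp_new / logp_old: log pi_theta(a_t | S_t) under the current and the
    old policy; advantages: the estimates A_hat_t of Eq. (6).
    """
    ratio = torch.exp(logp_new - logp_old)                 # r_t(theta), Eq. (8)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.min(unclipped, clipped).mean()
```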
IV. EVALUATION
To evaluate robots' biased behavior and their discrimination against humans, a simulated physics-based restaurant environment was designed. The following aspects were validated: (i) the effectiveness of bias detection; (ii) the effectiveness of bias detection guidance in reducing robot discrimination during learning and testing. A user study with 24 human volunteers participated in the robot discrimination evaluation, and 1,000 preference data points were simulated. In total, 9,000 interaction demos were recorded and analyzed for comparison.
A. Experiment Setting
Fig. 3 shows the mission environment, "A robot is serving people in a restaurant." The robot is designed to deliver food to people and to assist people who request a service. The robot can walk and respond to people. It contains a distance sensor to detect the distance to people and a timer to record response time. There are four types of discrimination: racism, sexism, ageism, and ableism. Inappropriate distance is set as $D_t \notin [1\,\mathrm{m}, 1.5\,\mathrm{m}]$. Late response time is set as $t > 200$ steps, where a step is a refresh step in MUJOCO. Risky distance is set as $D_t < 0.5\,\mathrm{m}$. $\tau_{penalty}$ is set to $[1, 1, 1, 1]$ and $\lambda_{penalty}$ is set to 0.5.
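These thresholds map one-to-one onto per-interaction flags; the helper below encodes them (function and argument names are ours) and feeds the group-level scores sketched in Section III-A.

```python
def interaction_flags(responded, distance, steps, min_dist=None):
    """Per-episode issue flags using the thresholds stated above (a sketch).

    responded: whether the robot responded to the request;
    distance: response distance D_t in meters; steps: response time in
    MUJOCO refresh steps; min_dist: closest robot-person distance reached
    during the episode (falls back to `distance` if not tracked).
    """
    willingness = not responded                       # no response at all
    quality = responded and not (1.0 <= distance <= 1.5)
    priority = responded and steps > 200
    risk = (min_dist if min_dist is not None else distance) < 0.5
    return [willingness, quality, priority, risk]
```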
Based on the fairness dataset [16], we discuss the sensitive identities from five perspectives: race, gender, age, disability status, and skin color. Race includes White, Black, American Indian, Asian, Native Hawaiian, and Other race. Gender includes Female, Male, and Other. Age is divided into Child, Teenager, Adult, Middle Aged, and Elder. Disability is set as Yes or No. Skin color is set as Type I to Type VI based on the Fitzpatrick scale [30].
There is a single task in the scenario: a human interactive task. The robot needs to provide continuous service to a group of people. One person requests service in each episode, and the robot needs to respond. If the robot completes the task or exceeds the time limit, the episode ends and the next one begins. People's identities and positions are randomly generated in each episode.

Fig. 4. The illustration of robot biased behaviors. The willingness issue is represented by the robot not responding to the requesting person. The quality issue is represented by the robot responding at too close a distance. The priority issue is represented by the robot responding late. The risk issue is represented by the robot being at a risky distance from people at any step.
B. Result Analysis
User study. A questionnaire was set up by collecting robot interaction data in our environment. We ensured the diversity of volunteers in order to get a relatively fair analysis. In the questionnaire, volunteers were asked questions such as which issue is most serious and what they think about the robot's behaviors. The distribution of respondents and of the simulated data is shown in Fig. 5. Racism is strongly related to the willingness issue, with 35.7% of the volunteers thinking the robot responded badly to the Black race. 21.4% of the volunteers chose ableism in scenario 4, since the robot gets too close when the customer is disabled.
Bias detection validation. In this part, we built a bias detection model from the simulated questionnaire data. Since sexism was not included in the simulated data, our further analysis does not contain it. The visualized bias characteristics are shown in Fig. 6. We observed that each type of discrimination has its own signature considering the four bias issues learned from the user study. Results indicate racism is strongly related to the willingness issue, with an issue score of 0.5. In Fig. 9, we demonstrate the validity of our bias detection model, with 98% detection accuracy between manual detection and model detection on the overall bias. The grey bar indicates the number of biased epochs exceeding the manually set bias threshold (0.5).
Bias detection guidance validation. The result of the experiment is shown in Fig. 7. The method with bias detection guidance reaches lower total bias issue scores, mitigating bias. The detailed issue scores on the test dataset are shown in Fig. 8. With bias detection guidance, both REINFORCE and PPO reach a total bias issue score more than 22% lower than without it. As shown in Fig. 9, the robot successfully achieved the suppression of bias and enhanced the quality of learning during the learning process, which validates the effectiveness of bias detection guidance. PPO with bias detection guidance reached 27.8% more fair epochs than PPO without it. Once the bias detection model identifies bias during training, it tries to eliminate the bias via the penalty. The detail of bias detection guidance is shown in Fig. 10. With our method, robots identify biased behaviors and correct them rapidly, so the total bias issue score is confined to a small interval. Without bias detection guidance, the total bias issue score exhibits higher volatility and growth.

Fig. 10. The illustration of bias detection guidance. The red areas indicate the epochs detected as biased. The green areas show that, with bias detection guidance, the robot perceives biased behaviors and corrects them. The yellow area shows that, without bias detection guidance, the robot cannot correct its biased behavior and performs worse thereafter.
Biased behavior evaluation. Our results revealed different action patterns of robots with respect to biased behaviors. Each type of bias emphasizes different bias issues. Fig. 11 displays the data disparities of the four types of bias. Ableism behavior has the farthest average response distance at 1.89, while ageism behavior has the longest average response time at 124.93. Racism behavior has the largest ignore rate at 4.36 compared to other behaviors. Fig. 12 illustrates the distinctive distributions of the four types of discrimination. In conclusion, racism is represented by robots refusing to provide services. Ageism is expressed as providing poor service resulting in long wait times. Ableism is exposed as an inappropriate social distance when robots serve the disabled. These findings highlight the similarities between the robots' biased representation and humans' biased experience in Fig. 6.
V. CONCLUSION AND FUTURE WORK
In this work, we measured bias from more comprehensive perspectives by incorporating human experience and introduced a bias detection guidance method that effectively mitigates robot discrimination against people and achieves robot self-awareness of bias. To validate the method's effectiveness, a human interactive task was deployed in a restaurant environment, and a user study was conducted to identify the robot's biased behaviors. The superiority of our method for robot bias detection and correction was validated by the detection accuracy, the total bias issue score reduction, and the suppression of bias during training. In the future, bias evaluation and mitigation methods can be further improved with more sensitive detection methods and more dimensions to measure robot behavior; complicated situations closer to real life, such as multi-robot cooperation, can be investigated.
Fig. 2. Workflow illustration for bias detection guidance to mitigate the robot's discrimination. A pre-trained bias detection model is used to detect whether the robot is biased. The robot is trained with a reinforcement learning algorithm, and biased behaviors are corrected during training.
Fig. 3. Illustration of the simulated restaurant environment. It contains a robot and a group of people. People with different sensitive identities can raise their hands to indicate they need services from the robot. The robot serves people in the environment.
Fig. 5. Result from the questionnaire. The left chart is the distribution of respondents' results for the scenarios. The right chart is the dataset distribution based on the simulated data.
Fig. 6. Characteristics of discrimination learned from the user study.
Fig. 7. Total bias issue score comparison during learning. The background color area indicates the total bias issue score of each epoch. Colored lines indicate the 40-epoch average of the total bias issue score.
Fig. 8. Result of bias issue score comparison on the test dataset.
Fig. 9. Robot learning performance comparison. The robot with bias detection guidance (green) had 39 biased epochs and the robot without guidance (purple) had 54 biased epochs detected by the bias detection model during the learning procedure.
Fig. 11. Result of the response distance, response time and ignore rate of the four types of behavior. Data in bold and underline indicate the maximum value of each column.
Fig. 12. t-SNE of the dataset distribution. The density distribution of discrimination on 3 planes indicates a strong correlation between different metrics and bias.
REFERENCES
[1] K. H. Seo and J. H. Lee, "The emergence of service robots at restaurants: Integrating trust, perceived risk, and satisfaction," Sustainability, vol. 13, no. 8, 2021.
[2] E. Matheson et al., "Human-robot collaboration in manufacturing applications: A review," Robotics, vol. 8, no. 4, p. 100, 2019.
[3] J. A. et al., "Machine bias risk assessments in criminal sentencing." [Accessed 05-Jun-2023].
[4] N. Mehrabi et al., "A survey on bias and fairness in machine learning," ACM Comput. Surv., vol. 54, no. 6, 2021.
[5] L. Cohen et al., "Efficient candidate screening under multiple tests and implications for fairness," arXiv preprint arXiv:1905.11361, 2019.
[6] A. Mukerjee et al., "Multi-objective evolutionary algorithms for the risk-return trade-off in bank loan management," ITOR, vol. 9, no. 5, pp. 583-597, 2002.
[7] A. Singh et al., "Fairness in ranking under uncertainty," CoRR, abs/2107.06720, 2021.
[8] J. Correa et al., "Fairness and bias in online selection," ICML, pp. 2112-2121. PMLR, 2021.
[9] A. Howard and J. Borenstein, "The ugly truth about ourselves and our robot creations: the problem of bias and social inequity," Sci. Eng. Ethics, vol. 24, pp. 1521-1536, 2018.
[10] R. Binns, "Fairness in machine learning: Lessons from political philosophy," ACM FAccT, pp. 149-159. PMLR, 2018.
[11] M. Hardt et al., "Equality of opportunity in supervised learning," NIPS, vol. 29, 2016.
[12] C. Dwork et al., "Fairness through awareness," ITCS, pp. 214-226, 2012.
[13] M. J. Kusner et al., "Counterfactual fairness," NIPS, vol. 30, 2017.
[14] S. Caton and C. Haas, "Fairness in machine learning: A survey," arXiv preprint arXiv:2010.04053, 2020.
[15] F. Kamiran and T. Calders, "Data preprocessing techniques for classification without discrimination," Knowl. Inf. Syst., vol. 33, no. 1, pp. 1-33, 2012.
[16] K. Karkkainen and J. Joo, "Fairface: Face attribute dataset for balanced race, gender, and age for bias measurement and mitigation," WACV, pp. 1548-1558, 2021.
[17] R. K. Bellamy et al., "AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias," arXiv preprint arXiv:1810.01943, 2018.
[18] M. Du et al., "Fairness via representation neutralization," NIPS, vol. 34, pp. 12091-12103, 2021.
[19] S. Baharlouei et al., "Rényi fair inference," arXiv preprint arXiv:1906.12005, 2019.
[20] Z. Wang et al., "Fairness-aware adversarial perturbation towards bias mitigation for deployed deep models," CVPR, pp. 10379-10388, 2022.
[21] J. V. Hurtado et al., "From learning to relearning: A framework for diminishing bias in social robot navigation," Front. Robot. AI, vol. 8, p. 650325, 2021.
[22] S. Ötting et al., "Why criteria of decision fairness should be considered in robot design," 2017.
[23] O. Kara et al., "Towards fair affective robotics: continual learning for mitigating bias in facial expression and action unit recognition," arXiv preprint arXiv:2103.09233, 2021.
[24] Q. Zhu and J. Oh, "Deep reinforcement learning for fairness in distributed robotic multi-type resource allocation," ICMLA, pp. 460-466. IEEE, 2018.
[25] A. Chouldechova et al., "A case study of algorithm-assisted decision making in child maltreatment hotline screening decisions," ACM FAccT, pp. 134-148. PMLR, 2018.
[26] I. P. Png, "Most-favored-customer protection versus price discrimination over time," J. Pol. Econ., vol. 99, no. 5, pp. 1010-1028, 1991.
[27] I. T. Jolliffe and J. Cadima, "Principal component analysis: a review and recent developments," Philos. Trans. Math. Phys. Eng. Sci., vol. 374, p. 20150202, 2016.
[28] R. J. Williams, "Simple statistical gradient-following algorithms for connectionist reinforcement learning," Reinforcement Learning, pp. 5-32, 1992.
[29] J. Schulman et al., "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.
[30] S. Sachdeva, "Fitzpatrick skin typing: Applications in dermatology," IJDVL, vol. 75, p. 93, 2009.
| [] |
[
"Variability and evolution of the optical polarization of a sample of gamma-ray blazars",
"Variability and evolution of the optical polarization of a sample of gamma-ray blazars"
] | [
"J Otero-Santos \nInstituto de Astrofísica de Canarias (IAC)\nE-38200La Laguna, TenerifeSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain\n",
"J A Acosta-Pulido \nInstituto de Astrofísica de Canarias (IAC)\nE-38200La Laguna, TenerifeSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain\n",
"★ J Becerra González \nInstituto de Astrofísica de Canarias (IAC)\nE-38200La Laguna, TenerifeSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain\n",
"★ C M Raiteri \nINAF-Osservatorio Astrofisico di Torino\nvia Osservatorio 20, 10025 Pino TorineseItaly\n",
"M I Carnerero \nINAF-Osservatorio Astrofisico di Torino\nvia Osservatorio 20, 10025 Pino TorineseItaly\n",
"N Castro Segura \nDepartamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain\n\nSchool of Physics & Astronomy\nUniversity of Southampton\nHighfieldSO17 1BJSouthamptonUK\n",
"O González-Martín \nInstituto de Radioastronomía y Astrofísica (IRyA-UNAM)\n3-72 (Xangari)8701MoreliaMexico\n",
"A Luashvili \nLaboratoire Univers et Théories\nObservatoire de Paris\nUnversité PSL\nCNRS\nUniversité de Paris\n92190MeudonFrance\n"
] | [
"Instituto de Astrofísica de Canarias (IAC)\nE-38200La Laguna, TenerifeSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain",
"Instituto de Astrofísica de Canarias (IAC)\nE-38200La Laguna, TenerifeSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain",
"Instituto de Astrofísica de Canarias (IAC)\nE-38200La Laguna, TenerifeSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain",
"INAF-Osservatorio Astrofisico di Torino\nvia Osservatorio 20, 10025 Pino TorineseItaly",
"INAF-Osservatorio Astrofisico di Torino\nvia Osservatorio 20, 10025 Pino TorineseItaly",
"Departamento de Astrofísica\nUniversidad de La Laguna (ULL)\nE-38206La LagunaTenerifeSpain",
"School of Physics & Astronomy\nUniversity of Southampton\nHighfieldSO17 1BJSouthamptonUK",
"Instituto de Radioastronomía y Astrofísica (IRyA-UNAM)\n3-72 (Xangari)8701MoreliaMexico",
"Laboratoire Univers et Théories\nObservatoire de Paris\nUnversité PSL\nCNRS\nUniversité de Paris\n92190MeudonFrance"
] | [
"MNRAS"
We present a polarization variability analysis of a sample of 26 γ-ray blazars monitored by the Steward Observatory between 2008 and 2018 in the optical band. We investigate the properties and long-term variability of their optical polarization, searching for differences between blazar types. We observe that BL Lac objects are typically less polarized and less variable than flat spectrum radio quasars (FSRQs). Moreover, BL Lacs display a distribution of their polarization angle typically oriented in a preferential direction, contrary to the rather random distribution of FSRQs. For the latter blazar type, as well as those sources showing a bright stellar emission, we take into account the depolarizing effect introduced by the broad line region and the host galaxy on the measured polarization degree. In this sample we also observe that BL Lacs present an uncorrelated evolution of the flux and the polarization. On the contrary, FSRQs show a correlation before the depolarization correction, which is however lost after considering this effect. In addition, we study the behaviour of the polarization angle, searching for angle rotations in its long-term evolution. We derive that the FSRQs studied here show rotations more frequently than BL Lac objects by a factor ∼1.5. During these periods we also observe a systematic decrease of the polarization fraction, as well as a marginal flux increase, not significant enough however to connect rotations with optical flares. We interpret these results within the extended shock-in-jet scenario, able to explain the overall features observed here for the polarization of the blazar sample. | null | [
"https://export.arxiv.org/pdf/2306.03919v1.pdf"
] | 259,095,616 | 2306.03919 | ad5ceb8e2fe4b2a1470d78b64bff4955150f697a |
Variability and evolution of the optical polarization of a sample of gamma-ray blazars
2023
J Otero-Santos
Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife, Spain
Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna, Tenerife, Spain
J A Acosta-Pulido★
Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife, Spain
Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna, Tenerife, Spain
J Becerra González★
Instituto de Astrofísica de Canarias (IAC), E-38200 La Laguna, Tenerife, Spain
Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna, Tenerife, Spain
C M Raiteri
INAF-Osservatorio Astrofisico di Torino, via Osservatorio 20, 10025 Pino Torinese, Italy
M I Carnerero
INAF-Osservatorio Astrofisico di Torino, via Osservatorio 20, 10025 Pino Torinese, Italy
N Castro Segura
Departamento de Astrofísica, Universidad de La Laguna (ULL), E-38206 La Laguna, Tenerife, Spain
School of Physics & Astronomy, University of Southampton, Highfield, SO17 1BJ, Southampton, UK
O González-Martín
Instituto de Radioastronomía y Astrofísica (IRyA-UNAM), 3-72 (Xangari), 8701 Morelia, Mexico
A Luashvili
Laboratoire Univers et Théories, Observatoire de Paris, Université PSL, CNRS, Université de Paris, 92190 Meudon, France
MNRAS 000 (2023). Accepted XXX. Received YYY; in original form ZZZ. Preprint 8 June 2023. Compiled using MNRAS LaTeX style file v3.0.
Key words: galaxies: active - galaxies: jets - galaxies: nuclei - BL Lacertae objects: general - polarization
ABSTRACT
We present a polarization variability analysis of a sample of 26 γ-ray blazars monitored by the Steward Observatory between 2008 and 2018 in the optical band. We investigate the properties and long-term variability of their optical polarization, searching for differences between blazar types. We observe that BL Lac objects are typically less polarized and less variable than flat spectrum radio quasars (FSRQs). Moreover, BL Lacs display a distribution of their polarization angle typically oriented in a preferential direction, contrary to the rather random distribution of FSRQs. For the latter blazar type, as well as those sources showing a bright stellar emission, we take into account the depolarizing effect introduced by the broad line region and the host galaxy on the measured polarization degree. In this sample we also observe that BL Lacs present an uncorrelated evolution of the flux and the polarization. In contrast, FSRQs show a correlation before the depolarization correction, which is however lost after this effect is taken into account. In addition, we study the behaviour of the polarization angle, searching for angle rotations in its long-term evolution. We derive that the FSRQs studied here show rotations more frequently than BL Lac objects, by a factor ∼1.5. During these periods we also observe a systematic decrease of the polarization fraction, as well as a marginal flux increase, not significant enough, however, to connect rotations with optical flares. We interpret these results within the extended shock-in-jet scenario, able to explain the overall features observed here for the polarization of the blazar sample.
INTRODUCTION
Active galactic nuclei (AGN) are one of the most powerful sources in the entire Universe, sometimes with the ability of developing powerful relativistic jets acting as natural particle accelerators. Among the different types, blazars are extremely boosted due to relativistic effects, owing to their orientation being closely aligned with the line of sight of the Earth. They can emit from radio to very-high-energy γ-rays, displaying a characteristic double-bump shape in their spectral energy distribution (SED, see e.g. Abdo et al. 2010). The low energy bump, with its peak typically found at infrared (IR) to X-ray frequencies, is explained with non-thermal synchrotron emission of relativistic electrons from the jet (Ghisellini et al. 2010). The high energy bump found at γ-ray energies on the other hand is often interpreted as an inverse Compton (IC) scattering of low energy photons with the same population of relativistic electrons responsible for the synchrotron emission (Maraschi et al. 1992). Blazars are often subdivided in two classes, depending on their optical spectrum (Urry & Padovani 1995): BL Lacertae (BL Lac) objects and flat spectrum radio quasars (FSRQs). The former present an optical spectrum with no features or weak features with an equivalent width |EW| < 5 Å in the rest frame, while in the optical spectrum of the latter type, narrow and broad lines with |EW| > 5 Å are visible.
The low energy emission of AGN, from radio to optical wavelengths, is characterised by a high polarization degree. This polarized emission is related to the non-thermal synchrotron emission and the magnetic field of the relativistically boosted jet. Among the large variety of AGN types, blazars show an extraordinarily high optical polarization w.r.t. other classes, as reported by Angelakis et al. (2016), with values that can reach a fraction of ∼50% (Smith 2017). This polarization is affected by the characteristic intense variability displayed by blazars, exhibiting for instance large changes of the polarization fraction or the orientation of the polarization angle. Owing to the origin of this polarized emission, this variability is linked to the magnetic field of the jet and its variations. As a consequence, variability studies of the polarization of blazars are one of the most accessible ways of obtaining information of the magnetic field in these objects and in the extreme Universe, the role it plays in the particle acceleration, the interaction with its environment, and its relation with the overall emission detected from blazars. The polarized emission has been observed in the past with many different behaviours w.r.t. the total optical emission. For instance, Raiteri et al. (2012) reported a correlated evolution of the optical total and polarized flux for the FSRQ B2 1633+38. On the other hand, anticorrelations between the total and polarized emission of blazars have also been reported, e.g. Fraija et al. (2017) for Mkn 421. In addition, the lack of relation between these two quantities has also been observed in some cases (e.g. for PKS 1424+240 as reported by Covino et al. 2015).
In this regard, the RoboPol programme has been leading the effort on studying the polarized emission of astrophysical objects and in particular of AGN and blazars. These studies have evaluated different characteristics of the polarized emission of these objects and their variability in a systematic way. For instance, Angelakis et al. (2016) have studied the relation between the frequency of the synchrotron peak and the radio loudness of several AGN. Properties such as variability and distribution of the polarization in AGN and blazars have also been characterized by e.g. Pavlidou et al. (2014); Hovatta et al. (2016). Numerous efforts have also been dedicated to understand the development of rotations of the polarization angle (see Blinov et al. 2015, 2016a,b, 2018), as well as their connection with the γ-ray activity of blazars and/or whether they have a stochastic origin (Blinov et al. 2015, 2018).
Studying the polarized emission and variability of γ-ray blazars is also key to understand the connection of the magnetic field with the mechanisms responsible for the broadband emission and the particle acceleration in the jet. However, the fact that each blazar seems to behave in its own unique way makes the interpretation within a general framework rather difficult, leading to a large variety of models developed for understanding their behaviour. For example, models based on chaotic magnetic fields have been proposed by Laing (1980). Moreover, helical jets with helical magnetic fields, like the model developed by Raiteri et al. (2013), have been commonly used to explain the evolution of the polarization of blazars (see e.g. Gupta et al. 2017). Marscher et al. (2010) suggest a model based on knots moving under the influence of a toroidal magnetic field following a spiral path. Another possibility involves magnetic reconnection of oppositely directed field lines, releasing magnetic energy and accelerating particles (Zhang et al. 2018). Alternatively, models based on turbulent emitting cells in the jet have also been a widely used scenario for interpreting the polarization variability (see Marscher 2014).
Here we study the behaviour, variability and evolution of the polarization and its characteristics for a sample of γ-ray bright blazars. These blazars have been regularly monitored by the Steward Observatory for approximately 10 years, providing an excellent spectropolarimetric database for a long-term analysis of the properties and variability of their polarization. This database has been explored in the past for individual sources. However, a systematic study of a large sample of sources observed by this monitoring programme is still lacking. Here we perform an extensive evaluation of the polarized emission of our blazar sample owing to the remarkable time coverage and sampling, only comparable to the RoboPol programme, which has observed a larger number of sources with, however, a shorter coverage of ∼3 years (see e.g. Blinov et al. 2016a,b, 2018). This allows a thorough comparison with the results derived by RoboPol. In addition, taking advantage of the spectropolarimetry performed by the Steward Observatory, a comparison of the behaviour of the whole spectral dataset studied in Otero-Santos et al. (2022) (hereafter OS22) and the polarized spectra is also conducted. The paper is structured as follows: in Section 2 we detail the observations and data reduction used in this analysis; the methodology followed for the data analysis is explained in Section 3; and the main results and discussion are detailed in Sections 4 and 5, respectively. Finally, a brief summary of the main conclusions is also included in Section 6.
OBSERVATIONS AND DATA REDUCTION
In this work we have included the polarimetric data of 26 γ-ray bright blazars monitored by the Steward Observatory and previously analysed in OS22. These sources were observed for roughly 10 years, between 2008 and 2018, with spectropolarimetric observations in support of the Fermi-LAT telescope. The Steward Observatory provides the optical total and polarized spectra between 4000 and 7550 Å, with resolutions ranging from 16 to 24 Å and a dispersion of approximately 4 Å/pixel, depending on the slit width. In addition, the average polarization values, including the Stokes parameters, the polarization degree and the orientation of the electric vector position angle (EVPA) in the range of 5000-7000 Å are also provided. The blazar sample analysed here is presented in Table 1, separated by their blazar type as BL Lacs, FSRQs, and BL Lacs dominated by the emission of the stellar population according to OS22. We have also included their classification according to the position of the synchrotron peak of the SED as low- (LSPs, ν_sync < 10^14 Hz), intermediate- (ISPs, 10^14 Hz < ν_sync < 10^15 Hz) or high-synchrotron peaked (HSPs, ν_sync > 10^15 Hz) blazars, as defined in Ajello et al. (2020). Following the procedure from OS22, we have eliminated the telluric absorption features, performed the rest frame conversion, corrected small shifts in wavelength and accounted for the Galactic extinction using the model from Fitzpatrick (1999). More details are given in Section 2 of OS22.
In addition to the Galactic extinction, the interstellar medium can also have a small contribution to the observed polarization degree. We have evaluated this contribution for the sources with the highest expected interstellar polarization, which depends on the extinction value A_V, i.e. 1ES 1959+650 (A_V = 0.474), BL Lacertae (A_V = 0.901) and 1ES 2344+514 (A_V = 0.580). We have used the approximation from Serkowski et al. (1975), where the interstellar contribution can be expressed as p_ISM ∼ 4.5 E(B−V). In this expression, E(B−V) represents the reddening of the sources, E(B−V) = A_V / R_V. We adopted a value of R_V = 3.1 and retrieved the values of A_V for these sources from Schlafly & Finkbeiner (2011). With these considerations, the expected interstellar medium polarization is 0.67%, 1.3% and 0.84%, respectively. To crosscheck this result, we have made use of the catalogue from Heiles (2000) to estimate the contribution of the interstellar medium from the reddening of a known star located close to the position of the source. This alternative approach leads to contributions of ∼0.78%, ∼0.65% and ∼0.19%, respectively. Therefore, we assume that the interstellar polarization contribution is negligible for the present work.
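As a quick arithmetic check, the Serkowski et al. (1975) estimate reduces to two lines of computation. The following minimal sketch (our own illustration, not part of the original analysis) reproduces the quoted interstellar polarization values from the adopted A_V extinctions:

```python
# Interstellar polarization following Serkowski et al. (1975):
# p_ISM [%] ~ 4.5 * E(B-V), with E(B-V) = A_V / R_V and R_V = 3.1.
R_V = 3.1  # standard Galactic total-to-selective extinction ratio

for name, A_V in [("1ES 1959+650", 0.474),
                  ("BL Lacertae", 0.901),
                  ("1ES 2344+514", 0.580)]:
    EBV = A_V / R_V          # reddening of the source
    p_ism = 4.5 * EBV        # expected interstellar polarization [%]
    print(f"{name}: E(B-V) = {EBV:.3f} -> p_ISM ~ {p_ism:.2f}%")
# Prints ~0.69%, ~1.31% and ~0.84%, consistent with the quoted
# 0.67%, 1.3% and 0.84% up to rounding of the adopted A_V values.
```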
The ambiguity on the EVPA value was taken into account with the approach used in other studies (e.g. Kiehlmann et al. 2016). We consider that consecutive measurements of the polarization angle correspond to those that minimize the difference between them by considering the criterion
Δθ_n = |θ_{n+1} − θ_n| < 90°. (1)
Following this equation, ±n·180° is added to θ_{n+1} if Δθ_n > 90°. We note that other authors (for instance, Carnerero et al. 2017) include the uncertainties in the determination of the EVPA when performing this correction,

Δθ_n = |θ_{n+1} − θ_n| − √(σ(θ_{n+1})² + σ(θ_n)²) < 90°. (2)

This can lead to a correction between two measurements that would not be performed with the criterion of Equation (1). Here we use the latter, which corresponds to the interval used by the Steward Observatory data. We stress that our data are affected by seasonal observational gaps. Since blazars are characterised by a strong variability on many different time scales (see e.g. Kiehlmann et al. 2021), we only apply this condition to measurements taken with a temporal separation <200 days. The blazars monitored by the Steward Observatory have been selected based on their interest as emitting Fermi-LAT γ-ray sources with extreme variability. This could lead to the introduction of a bias towards γ-ray bright sources in the present case. This contrasts with the criterion used by, for instance, the RoboPol programme, which consists of an unbiased sample of γ-ray loud blazars (see Pavlidou et al. 2014; Blinov et al. 2015). Nevertheless, we also note that all sources included here except PKS 2155-304 and PKS 0736+017 (as shown in Table 1) are coincident with sources observed by RoboPol. (Notes to Table 1: values from Danforth et al. 2013; † marks sources that have not been observed by the RoboPol programme between 2013 and 2017.)
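For illustration, the EVPA unwrapping of Equation (1), together with the 200-day gap condition, could be implemented along the following lines. This is a sketch under our own naming conventions (unwrap_evpa and max_gap are not from the original work):

```python
import numpy as np

def unwrap_evpa(t, theta, max_gap=200.0):
    """Resolve the +/- n*180 deg EVPA ambiguity following Eq. (1).

    t     : observation times in days (sorted 1-D array)
    theta : measured EVPA in degrees
    Pairs separated by more than `max_gap` days are not connected,
    mimicking the seasonal-gap condition used in the text.
    """
    theta = np.asarray(theta, dtype=float).copy()
    for i in range(1, len(theta)):
        if t[i] - t[i - 1] > max_gap:
            continue  # do not unwrap across long observational gaps
        delta = theta[i] - theta[i - 1]
        # add the multiple of 180 deg that minimizes |theta_i - theta_{i-1}|
        theta[i] -= 180.0 * np.round(delta / 180.0)
    return theta

# Example: a smooth rotation crossing the +/-90 deg wrap
t = np.array([0.0, 5.0, 10.0, 15.0])
theta = np.array([80.0, -85.0, -70.0, -55.0])  # wrapped measurements
print(unwrap_evpa(t, theta))                   # -> [ 80.  95. 110. 125.]
```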
METHODOLOGY
The aim of this work is to study the properties, behaviour and evolution of the polarization for the blazar sample considered here. For this, we have evaluated the variability and the overall behaviour of the polarization degree and angle (Section 3.1). Moreover, as the host galaxy and broad line region (BLR) from sources with a bright stellar emission and from FSRQs, respectively, are not expected to be polarized, we have studied the depolarizing effect that these two contributions can have on the polarization degree of the non-thermal synchrotron emission of the relativistic jet (Section 3.2). We have also studied the variability of the polarization degree and searched for rotations in the evolution of the polarization angle (Section 3.3).
Variability of the polarization
With the aim of studying and quantifying the variability of the polarization, we have tested the statistical distribution that provides a best fit for the polarization degree. Following the approach from Blinov et al. (2016a), we have evaluated the probability density function (PDF) with a Beta distribution defined as
PDF_Beta(p; α, β) = p^(α−1) (1 − p)^(β−1) / B(α, β), (3)

where p represents the polarization degree, α and β (with α, β > 0) are the parameters that define the distribution, and B(α, β) is the Beta function. This consideration has been used by other authors in the past (for instance, Hovatta et al. 2016), with a good agreement between the data and the Beta distribution. In order to assess the reliability and success of this function when describing our data, we have applied a χ² test (Pearson 1900) to account for the goodness of the fit, considering values of χ²/d.o.f. ≲ 1 as good fits. As a comparison, we have also tested the performance of the commonly used Gaussian distribution, typically assumed for the estimation of several parameters in blazar studies, e.g. the fractional variability, F_var.
Assuming that the Beta distribution is able to reproduce the measured polarization degree, the mean value can be expressed as a function of α and β,

⟨p⟩ = α / (α + β). (4)
In addition, the parameters of the distribution can also be used for quantifying the variability displayed by the polarization degree, providing an estimation of the so-called modulation index

m_p = √(σ_p²) / ⟨p⟩, (5)

where σ_p² is the variance of the Beta distribution (Hovatta et al. 2016). This quantity can be obtained from α and β as

σ_p² = αβ / [(α + β)² (α + β + 1)]. (6)
The modulation index is analogous to the fractional variability, F_var, commonly used in blazar studies, which quantifies the intensity of the variability. As a crosscheck, we have also estimated the F_var following the prescription from Vaughan et al. (2003), as

F_var = √[(S² − ⟨σ_err²⟩) / ⟨p⟩²]. (7)

S² represents the variance of the data sample, ⟨σ_err²⟩ is the mean square error and ⟨p⟩ corresponds to the mean value of the sample. The uncertainty of this quantity can be estimated following Equation (B2) from Vaughan et al. (2003). Notice that the F_var is associated with a Gaussian distribution. Therefore, it is used only for comparative purposes. The results of the behaviour of the polarized emission using the tools presented here will be discussed in Section 4.
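To make the quantities of Equations (3)-(7) concrete, a minimal sketch of the procedure might look as follows, assuming scipy is available; the mock sample and uncertainties stand in for real polarization-degree measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = rng.beta(2.0, 18.0, size=300)       # mock polarization degrees (0-1)
p_err = np.full_like(p, 0.005)          # mock measurement uncertainties

# Fit the Beta PDF of Eq. (3); the support of p is already [0, 1]
a, b, loc, scale = stats.beta.fit(p, floc=0.0, fscale=1.0)

mean_p = a / (a + b)                                # Eq. (4)
var_p = a * b / ((a + b) ** 2 * (a + b + 1.0))      # Eq. (6)
m_p = np.sqrt(var_p) / mean_p                       # Eq. (5), modulation index

# Fractional variability for comparison, Eq. (7) (Vaughan et al. 2003)
F_var = np.sqrt((np.var(p, ddof=1) - np.mean(p_err**2)) / np.mean(p)**2)

print(f"alpha={a:.2f} beta={b:.2f} <p>={mean_p:.3f} m_p={m_p:.2f} F_var={F_var:.2f}")
```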
Depolarizing effect of non-polarized components
Studies such as Sosa et al. (2017) or Blinov et al. (2021) have claimed in the past that the host galaxy can have a depolarizing effect in the measured polarization degree of the synchrotron emission. This effect can be especially important in those sources with a bright host galaxy (e.g. those presented as galaxy-dominated blazars in OS22), with an optical emission comparable to that from the relativistic jet. To account for this effect in the three galaxy-dominated sources in our sample (H 1426+428, Mkn 501 and 1ES 2344+514) we have made use of the estimation of the host galaxy performed in OS22. This estimation is based on the decomposition of all the spectral dataset of the blazar using the minimum number of meaningful components, associated with the different contributions to the emission, that are able to successfully reproduce and model the variability displayed by the source by using the Non-Negative Matrix Factorization (NMF, Paatero & Tapper 1994). With this approach we have been able to estimate the overall contributions of both the jet and the host galaxy and separate their emission. With this estimate of the host galaxy, we can subtract it in order to correct the measured polarization degree from the stellar contribution as
P_intr(t) [%] = P_obs(t) [%] / [1 − F_host galaxy / F_total(t)]. (8)
This correction can also be applied to FSRQs, where the BLR is also expected to be unpolarized. Making use of the BLR component extracted from the NMF reconstruction from OS22, and again following Equation (8), we can obtain an estimation of the intrinsic polarization degree of the synchrotron emission, higher than the observed polarization fraction. Notice that by BLR contribution we mean not only the broad emission line component but also the thermal continuum likely coming from the accretion disc as observed in radio-quiet QSOs (Vanden Berk et al. 2001). Hereafter we will refer to this contribution as the BLR+AD component.
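A hedged sketch of the depolarization correction of Equation (8), with our own function name and a toy example:

```python
import numpy as np

def depolarization_correct(p_obs, f_unpol, f_total):
    """Intrinsic jet polarization after removing an unpolarized
    component (host galaxy or BLR+AD), following Eq. (8):
        p_intr = p_obs / (1 - F_unpol / F_total)
    """
    p_obs, f_unpol, f_total = map(np.asarray, (p_obs, f_unpol, f_total))
    return p_obs / (1.0 - f_unpol / f_total)

# Example: an observed 3% polarization where the host galaxy provides
# half of the total flux corresponds to an intrinsic 6% polarization.
print(depolarization_correct(3.0, 0.5, 1.0))  # -> 6.0
```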
EVPA rotations
The variability of the polarization is not only restricted to changes in the polarization degree. Changes of the polarization angle orientation are also observed in the temporal evolution of blazars. These changes are often seen with an erratic behaviour. However, on some occasions, smooth and continuous swings in the variability of the EVPA are detected (see for instance Blinov et al. 2015, 2016a; Kiehlmann et al. 2017). These events, typically identified as EVPA rotations or swings, have been one of the ways to study the behaviour of the polarization and magnetic field of the relativistic jets in these objects.
Smooth rotations
There is no standard definition of an EVPA rotation in the framework of blazars. Therefore, we have followed a similar definition to that used by the RoboPol programme, detailed in Blinov et al. (2015). The main characteristics that an EVPA variation must fulfill to be considered a rotation are the following:
• A minimum variation of the polarization angle Δθ ≥ 90°.
• In order to exclude possible false identifications, the EVPA swing must contain at least 4 observations.
• Owing to the fast variability displayed by blazars, consecutive points of a rotation must be separated in time by Δt ≤ 30 days.
• The start and end of the rotation must be accompanied by a change of slope Δθ/Δt w.r.t. previous observations. Following the prescription from Blinov et al. (2015), we used a factor 5 change, or a change of sign, to define the start and end of the rotation.
Furthermore, due to the fast variability and fluctuations observed in the emission of blazars, we do not reject rotations in which a point slightly fluctuates from the general trend of the EVPA swing, allowing one measurement to deviate <5° if the following measurement continues the general trend of the rotation. An example of these rotations is represented in the top panel of Figure 1.
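A simplified sketch of the smooth-rotation criteria above is given below; note that the factor-5 slope-change test at the segment boundaries is reduced here to a change of slope sign, and the function and variable names are our own:

```python
import numpy as np

def find_smooth_rotations(t, theta, min_amp=90.0, min_pts=4, max_dt=30.0):
    """Greedy scan for smooth EVPA rotations on an unwrapped angle curve.

    A candidate segment ends when a >max_dt gap occurs or the sense of
    rotation reverses; it is kept if it contains >= min_pts points and
    spans >= min_amp degrees.  Returns (t_start, t_end, delta_theta).
    """
    rotations, start = [], 0
    for i in range(1, len(t) + 1):
        end_of_run = (
            i == len(t)
            or t[i] - t[i - 1] > max_dt
            or (i >= 2 and np.sign(theta[i] - theta[i - 1])
                != np.sign(theta[i - 1] - theta[i - 2]))
        )
        if end_of_run:
            if (i - start) >= min_pts and abs(theta[i - 1] - theta[start]) >= min_amp:
                rotations.append((t[start], t[i - 1], theta[i - 1] - theta[start]))
            # a slope reversal starts the next candidate at the previous
            # point; a long gap (or the end of the data) starts fresh
            start = i - 1 if (i < len(t) and t[i] - t[i - 1] <= max_dt) else i
    return rotations

t = np.arange(0.0, 60.0, 5.0)                                   # 12 epochs
theta = np.concatenate([np.linspace(0.0, 130.0, 8), np.full(4, 130.0)])
print(find_smooth_rotations(t, theta))  # one ~130 deg rotation over 0-35 d
```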
Slow and non-smooth rotations
In addition to the aforementioned smooth rotations, here we have also considered those rotations showing a non-smooth, and typically much slower, change of the EVPA. A visual inspection of these non-smooth swings reveals that they tend to have a higher angle variation and duration than the smooth swings, leading to the appearance of several fluctuations and fast variability on shorter time scales during the development of the angle rotation.
In order to identify these non-smooth swings, we use the same definition introduced above, without the condition demanding a continuous change of the polarization angle during the rotation (see bottom panel of Figure 1). Prior to the identification, we also apply a cubic spline interpolation on the binned EVPA light curve, with bins of 60-100 days depending on the source, to remove the variability in short time scales of the polarization angle. This smoothing leads to an easier identification of these non-smooth EVPA rotations.
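The binning and cubic-spline smoothing step could be sketched as follows, assuming scipy; the bin width and the mock data are illustrative only:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_evpa(t, theta, bin_days=60.0):
    """Bin an unwrapped EVPA curve and interpolate with a cubic spline,
    suppressing short-time-scale jitter before the non-smooth rotation
    search (the text uses bin widths of 60-100 days per source)."""
    edges = np.arange(t.min(), t.max() + bin_days, bin_days)
    idx = np.digitize(t, edges) - 1
    t_bin = np.array([t[idx == k].mean() for k in np.unique(idx)])
    th_bin = np.array([theta[idx == k].mean() for k in np.unique(idx)])
    return CubicSpline(t_bin, th_bin)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 600.0, 150))
theta = 0.4 * t + rng.normal(0.0, 15.0, t.size)  # slow trend + fast jitter
spline = smooth_evpa(t, theta)
print(spline([100.0, 300.0, 500.0]))             # smoothed EVPA values
```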
RESULTS
General behaviour of the polarization
The long-term evolution of the polarization and its relation with the total optical flux was studied from the polarization-flux diagrams. As an illustration we represent an example for each class: the galaxy-dominated blazar Mkn 501, the BL Lac object 3C 66A and the FSRQ 3C 454.3, in Figures 2, 3 and 4, respectively. The figures corresponding to the rest of the sources are included as online material (Figures S1 to S3). The polarization-flux diagrams prior to the depolarizing correction for galaxy-dominated blazars and FSRQs are also displayed in the left panels of Figures 2 and 4, as well as in the corresponding online figures for each blazar. The galaxy-dominated blazars show a very low polarization in comparison to BL Lacs and FSRQs, with values of the polarization degree before host galaxy correction <6%, and average values of ∼3%. A linear correlation coefficient of r ∼ 0.55 indicates a mild correlation between the polarization degree and the total optical emission.
Concerning the BL Lac objects, we do not observe any relation between their polarization and their total optical emission. This can be seen for example in Figure 3. They tend to have a higher polarization fraction than those BL Lacs dominated by the emission of the stellar population, with typical mean percentages of 5-10%. Additionally, through a visual inspection of Figure 5, where the mean polarization degree is represented w.r.t. the frequency of the synchrotron peak of the SED, we observe that for BL Lacs with a lower peaking frequency, a higher polarization degree is measured, in comparison to those with a high ν_sync, which show very low values of the polarization fraction. This is also reflected in the correlation coefficient between these two quantities estimated for the BL Lac population, with a value of r = −0.65, p-value = 0.03.
On the contrary, FSRQs show different types of behaviour before the BLR+AD correction, as can be observed for example in the left panel of Figure 4 for 3C 454.3. Five of the sources of this type show hints or evidence of a correlated evolution of the polarized and total emission, with correlation coefficients r ≥ 0.50: PKS 0420-014, OJ 248, B2 1633+38, 3C 454.3 and CTA 102. This behaviour was reported already in the past for B2 1633+38 by Raiteri et al. (2012). Remarkably, for one of the FSRQs of the sample, 3C 345, the polarization degree displays an anticorrelation with the total optical flux. The correlation coefficient for this blazar is r = −0.49, p-value = 10⁻³. The remaining six FSRQs do not show any significant correlation or behaviour in their long-term polarization variability. Regarding the relation between the mean polarization and the location of the synchrotron peak of the SED, FSRQs and LSPs cover a higher range of mean polarization fractions than BL Lac objects with a high ν_sync and galaxy-dominated blazars (ν_sync > 10^15 Hz), as shown in Figure 5. In addition to the long-term variability of the polarization degree, we have evaluated the characteristics of the polarized emission of this blazar sample. We have tested the distribution that best describes the measured polarization degree through a χ² test. This test was used to compare the goodness of the fit of the Beta distribution introduced in Section 3 and a Gaussian distribution, using the best fit to the data of each of them as the null hypothesis. We obtain typical values of χ²/d.o.f. ≲ 1 and p-values < 0.05 for 21 of the 26 blazars studied here. Therefore, the Beta distribution leads to a good agreement with the data for most of the sources of the sample. An example of a Beta PDF fit for BL Lacertae is presented in Figure 6. For the remaining five sources, the Beta distribution is not able to accurately describe the distribution of their polarization degree. It is important to highlight that for three of these five blazars, the low number of observations (N ⩽ 47) is most likely affecting the result of the test. The last two targets for which the Beta distribution results in a poor fit to the data are 3C 273 and OJ 248. These two blazars are characterised by an almost unpolarized emission, with the exception of a bright flare followed by an increase of the polarization degree for the latter. The fact that the majority of the population is well represented by a Beta distribution is in line with the results presented by Blinov et al. (2016a). On the other hand, the Gaussian distribution fits lead to values of χ²/d.o.f. > 1, proving that it is not suitable for describing the distribution of the polarization degree of blazars.
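A minimal sketch of the χ² comparison between the Beta and Gaussian fits might look as follows; the binning choices and the mock sample are our own assumptions, not the exact procedure of the paper:

```python
import numpy as np
from scipy import stats

def chi2_per_dof(sample, dist, params, nbins=15):
    """Binned chi-square per degree of freedom of `dist` fitted to `sample`."""
    counts, edges = np.histogram(sample, bins=nbins)
    expected = len(sample) * np.diff(dist.cdf(edges, *params))
    mask = expected > 5                 # keep bins with enough expected counts
    chi2 = np.sum((counts[mask] - expected[mask]) ** 2 / expected[mask])
    return chi2 / (mask.sum() - 1 - len(params))

p = np.random.default_rng(2).beta(2.0, 18.0, size=500)  # mock polarization degrees

beta_params = stats.beta.fit(p, floc=0.0, fscale=1.0)
norm_params = stats.norm.fit(p)

print("Beta  chi2/dof:", chi2_per_dof(p, stats.beta, beta_params))
print("Gauss chi2/dof:", chi2_per_dof(p, stats.norm, norm_params))
# The skewed Beta sample is typically fitted noticeably worse by the Gaussian.
```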
Depolarizing effect of the host galaxy and the BLR+AD
We have considered the effect of the host galaxy in the measured polarization degree for those blazars showing a bright stellar emission, namely H 1426+428, Mkn 501 and 1ES 2344+514. Following Equation (8) and considering the estimation of the host galaxy performed in OS22, we are able to calculate the intrinsic polarization degree of the emission of the jet. We observe that after subtracting the contribution of the host galaxy, the polarization degree of these sources shows an increase of approximately a factor 2 w.r.t. the observed measurements. Moreover, the corrected polarization degree does not show the mildly correlated behaviour with the total flux commented in Section 4.1. On the other hand, it now presents the same behaviour as the rest of the BL Lac sample, with no apparent trend between the total optical flux and the polarization. This behaviour can be observed in the right panel of Figure 2 for the case of Mkn 501, and in Figure S1 for H 1426+428 and 1ES 2344+514.
This correction has also been applied to FSRQs, where the contribution of the BLR+AD can also have a depolarizing effect. Depending on the different relative contribution of the BLR+AD for each FSRQ, the correction ranges from almost an unchanged polarization degree for those sources with a faint BLR+AD, up to a factor ≳3 for the brightest BLR+AD (e.g. PKS 1510-089). The corrected polarization degree for these sources w.r.t. the flux is presented in the right panel of Figure 4 for the case of 3C 454.3, and in Figure S3 for the rest of the FSRQs. It is also remarkable that after this correction, we observe polarization degrees as high as 50-70% for example for Ton 599, PKS 1222+216 or 3C 454.3.
Regarding the behaviour of the polarization degree with the total flux, as for the galaxy-dominated blazars, the trends observed between these two quantities are no longer observable after the correction, with the exception of OJ 248 and 3C 345. The former presents a positive correlation with a linear correlation coefficient r = 0.76 (p-value = 5.4×10⁻⁴⁴). Carnerero et al. (2015) and Raiteri & Villata (2021) also reported the observed correlation for this source, attributing the variability to a turbulent magnetic field component. The latter displays an anticorrelated behaviour, with a coefficient r = −0.80 (p-value = 1.4×10⁻⁶).
Polarization variability
We have evaluated the variability of the polarization. As a first step, we have compared the intensity of the variability derived from the modulation index and the fractional variability defined by Equations (5) and (7), respectively. We find that both measurements are compatible within errors. Therefore, owing to the good agreement of the Beta distribution with the data, and previous works using m_p to define the variability of the polarization (e.g. Hovatta et al. 2016), hereafter we will refer to the results of the intrinsic modulation index. The modulation index for each source is represented in the left panel of Figure 7 w.r.t. the mean polarization degree. In the right panel we also present m_p w.r.t. the frequency of the synchrotron peak of the SED.
This analysis clearly shows a higher variability coming from FSRQs in comparison to BL Lac and galaxy-dominated sources. The mean modulation index for BL Lacs and galaxy-dominated sources was estimated to be m_p = 0.53 ± 0.08 and m_p = 0.50 ± 0.03, respectively. On the other hand, FSRQs yield a higher value m_p = 0.70 ± 0.13. This separation is also visible in the right panel of Figure 7, where sources with a lower synchrotron peak display a higher m_p value. This trend leads to a correlation coefficient r = −0.52 (p-value = 0.007). The left panel of Figure 7 does not reveal any evident difference in the mean polarization degree prior to the host galaxy and BLR+AD correction between BL Lacs and FSRQs, with ⟨p⟩ = (7.97 ± 0.04)% and ⟨p⟩ = (7.16 ± 0.05)%, respectively, with only the galaxy-dominated blazars presenting a significantly lower mean polarization degree ⟨p⟩ = (2.02 ± 0.01)%. However, after accounting for the depolarization produced by the BLR+AD, FSRQs appear to be more polarized than BL Lac objects, with a mean intrinsic polarization degree of ⟨p⟩ = (17.37 ± 0.10)%. In addition, the sources presenting a bright host galaxy show an increase of their mean polarization degree of a factor ∼2 after correcting by the contribution of the host galaxy, ⟨p⟩ = (3.95 ± 0.02)%. In fact, after accounting for the depolarizing effect, we observe an anticorrelation between the mean polarization and the value of ν_sync, with r = −0.57 (p-value = 0.002), as shown in Figure 5. Despite this increase of the mean polarization for these blazar types, no significant change in the intrinsic modulation index is observed after the correction. Compatible results in this regard have also been reported by Angelakis et al. (2016). A similar synchrotron peak frequency dependence of the polarization degree and its variability has been reported by Smith (1996). We note that for some sources, the frequency of the synchrotron peak was observed to change depending on the emission state (e.g. Mkn 501, see MAGIC Collaboration et al. 2020). However, as observed from Figure 5, this trend is clearly visible despite small shifts of ν_sync. In addition to this, we have also evaluated the relation between the intrinsic modulation index of the polarization degree and the fractional variability of the total optical flux, extracted from Table 2 of OS22, and represented in Figure 8. This figure clearly shows that sources with a higher variability in the optical band also present a higher variability of the polarization degree, as expected.
(Caption to Figures 5 and 7: open and filled green stars represent the galaxy-dominated sources with and without the contribution of the host galaxy, respectively; blue triangles correspond to the BL Lac objects; open and filled red squares correspond to the FSRQs, with and without the BLR+AD contribution, respectively; see Section 4.2.)
Orientation of the EVPA
We have also studied the distribution of the EVPA. We have evaluated whether the sources analysed here show a preferred or a random and stochastic orientation of the EVPA. First, we note that the orientation of the polarization angle suffers from a ±180° ambiguity introduced by the alignment degeneracy, causing that, for instance, 90° = 270°. We have taken this degeneracy into account by evaluating the distribution of the polarization angle in the range [−90°, 90°). The evaluation of these distributions is included in Table 2. (Note to Table 2: one source shows a double peak in the orientation of the EVPA; due to its position, far from the edges of the distribution, the orientation of this source was evaluated with a double Gaussian function instead of a Von Mises distribution.)
We compare a Von Mises distribution with a preferential orientation in the considered angle interval and a uniform distribution with no value of the polarization angle favoured. For the BL Lac subsample, 9 of the 11 sources present a dominant or preferred orientation of the polarization angle (see left panel of Figure 9). Contrary to this general behaviour we find AO 0235+164 and S5 0716+714. The former is known for showing mixed properties between the BL Lac and FSRQ classes (Raiteri et al. 2006, 2014; Otero-Santos et al. 2022). The latter on the other hand presents a low synchrotron peak in comparison with the bulk of the BL Lac population, as reported in Table 1. FSRQs mostly present a random distribution of the polarization angle, as shown in the right panel of Figure 9. This is in line with the results presented by Angelakis et al. (2016), where a dependence between the frequency of the synchrotron peak and a preferential direction in the distribution of the EVPA is claimed. Therefore, FSRQs and BL Lacs with a low value of ν_sync tend to show a uniformly distributed polarization angle, whereas BL Lacs with a high ν_sync have a favoured orientation of the polarization angle. Nevertheless, four FSRQs of our sample still show a preferential orientation, i.e. PKS 1222+216, 3C 273, 3C 279 and 3C 345. We also note that, for three sources of our blazar sample, PKS 0735+178, H 1219+305 and H 1426+428, we have not tested their distribution due to the low number of observations.
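As an illustration of the Von Mises versus uniform comparison, a sketch of how a preferred EVPA orientation could be tested is given below; doubling the angles to remove the 180° ambiguity is our own assumption about the implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Mock EVPA sample (degrees, in [-90, 90)) clustered around ~20 deg
evpa_deg = np.rad2deg(rng.vonmises(np.deg2rad(2 * 20.0), 4.0, size=200)) / 2.0

# Double the angles to remove the 180-deg ambiguity, then fit a Von Mises
angles = np.deg2rad(2.0 * evpa_deg)
kappa, loc, scale = stats.vonmises.fit(angles, fscale=1.0)

print(f"preferred EVPA ~ {np.rad2deg(loc) / 2.0:.1f} deg, kappa = {kappa:.2f}")
# Large kappa indicates a preferred orientation; kappa near 0 approaches
# a uniform distribution of the polarization angle.
```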
EVPA rotations
The smooth EVPA rotation search performed according to the criteria established in Section 3.3.1 resulted in a total of 128 angle swings detected in our blazar sample. The results of this analysis are gathered in Table 3. A total of 52 of the rotations were detected within the BL Lac subsample, while the 76 remaining swings correspond to the FSRQ population. We have not detected any rotation in the three blazars dominated by the stellar emission. All the smooth rotations are represented in Figures S4 to S29 in the online version.
We have characterised each rotation by estimating their duration T, the amplitude of the EVPA change Δθ and the variation rate, estimated as the mean rate of consecutive measurements in the rotation, ⟨|Δθ|/Δt⟩. No significant differences are observed in the angle change measured for BL Lac objects and FSRQs, with both values compatible within errors. However, a small difference in the duration was found, with BL Lacs displaying slightly faster rotations than FSRQs. This leads to a faster variation rate measured for the former, 19.5 ± 3.1°/day, whereas the latter show a rate of 14.6 ± 1.9°/day. These rotations present a rather fast development, with the shortest one showing a duration of 1.1 days, while the slowest spans approximately 194.7 days. By definition, the shortest possible swing corresponds to 90°. Moreover, the largest EVPA change measured was 468.0°. The relation between the rotation rate and the duration of the angle rotation is also represented in Figure 10, showing that rotations developing on shorter time scales present higher rate values, with a similar angle variation range as those occurring on longer time scales. This correlation between these quantities is also observed in Figure 7 from Blinov et al. (2016a) for the blazars monitored by the RoboPol programme. The relation between the rate and the duration of the rotations follows a power law function ⟨|Δθ|/Δt⟩ = A · T^(−b) °/day, with A = 117.3° ± 40.8° and b = 0.89 ± 0.01 estimated from all the smooth rotations detected here. For the results presented here, we observe that all the smooth rotations are within the 3σ limit of this power law fit. Visual inspection of this figure also reveals no significant differences between BL Lacs and FSRQs. Moreover, we have also evaluated a possible correlation of the rotation duration and variation amplitude, with no apparent relation between them. This was also reported by Blinov et al. (2016a). Thus, a lack of correlation between the variation amplitude of the angle and the duration of the rotation is translated into a relation between the rate and the duration ⟨|Δθ|/Δt⟩ ∝ T^(−1). Therefore, the relation found between the rate and the duration with the power law function reported above is consistent with the expected behaviour.
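A sketch of the power-law fit ⟨|Δθ|/Δt⟩ = A·T^(−b) in log-log space, using mock durations and rates in place of the measured rotation sample:

```python
import numpy as np

# Fit <|d theta|/dt> = A * T**(-b) in log-log space with least squares;
# mock durations and rates stand in for the measured rotation sample.
rng = np.random.default_rng(4)
T = rng.uniform(1.0, 200.0, size=120)                          # durations [days]
rate = 117.3 * T ** (-0.89) * rng.lognormal(0.0, 0.2, T.size)  # rates [deg/day]

slope, intercept = np.polyfit(np.log10(T), np.log10(rate), 1)
print(f"A = {10 ** intercept:.1f} deg, b = {-slope:.2f}")      # ~117 deg, ~0.89
```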
Given the total number of rotations detected, we can make an estimation of the frequency at which the blazar sample displays an EVPA swing, considering the procedure from Blinov et al. (2016b). These authors estimate the frequency as f_rot = N_rot / T_obs, where N_rot and T_obs are the number of rotations displayed and the number of days during which the sources were observed, respectively. This estimation was made for the full blazar sample, and for the BL Lac and FSRQ populations individually. The results are presented in Table 3. A frequency f_rot = 0.013 day⁻¹ was estimated for the complete blazar population studied here. This value differs from the frequency estimated by RoboPol; we note, however, the sparser observing cadence of RoboPol during the first three seasons (∼1 week), which could be leading to this difference. In fact, as pointed out by Kiehlmann et al. (2021), the frequency of observations plays a crucial role in the observations of EVPA rotations. These authors measured a frequency of EVPA rotations a factor 3 higher from season 4 of RoboPol (1-day cadence) than that estimated from the first three seasons. Table 3 also presents the frequency estimated for BL Lacs and FSRQs, with the latter showing a higher frequency than the former by a factor ∼1.5. Table 2 shows that eight blazars of the sample do not present any rotation during the monitored period. We have compared the characteristics of these eight non-rotating sources with those from the ones displaying EVPA swings. This comparison is presented in Table 4. The rotating population presents a higher polarization degree than those sources with no rotations in the evolution of their polarization angle, with mean values ⟨p⟩ = (8.32 ± 0.97)% and ⟨p⟩ = (4.70 ± 0.37)%, respectively. Moreover, they also show a higher maximum change of the EVPA, as well as a higher mean variation rate than the non-rotators by a factor ∼5.
In addition, we have also investigated the differences in the behaviour of rotating blazars between rotating and non-rotating periods. This comparison has been done for 103 rotations out of the 128 detected. We have discarded those cases in which the optical spectra from the Steward Observatory database are not flux calibrated. We observe that for 12 of the 18 sources studied here presenting EVPA rotations, the mean polarization degree is lower during rotations than during their non-rotating periods. For two targets, this ratio has a value close to 1, while for the four remaining sources we measure a higher polarization degree during their rotations. Considering all the blazar sample, the mean polarization degree during rotations was estimated to be ⟨p⟩ = (6.50 ± 0.68)%, lower than that during non-rotating observations with ⟨p⟩ = (8.44 ± 1.02)%. This is in line with the results reported by Blinov et al. (2016a) for 18 blazars studied in the RoboPol programme.
We have also evaluated the connection of smooth rotations with optical flares or enhanced flux states. We do not observe any systematic or general relation between flares and EVPA rotations for the bulk of the population, with several sources presenting polarization angle swings during both low and high emission states (for instance Ton 599). Therefore, no clear association can be made between these two types of events. However, we do observe that some sources show a clear increase of their mean optical flux during rotations. For instance, for 3C 345 we measured an average optical flux F(no-rotation) = (4.95 ± 0.25) × 10⁻¹⁶ erg cm⁻² s⁻¹ during non-rotating states, in comparison with F(rotation) = (1.11 ± 0.10) × 10⁻¹⁵ erg cm⁻² s⁻¹ during its rotation, roughly a factor 2 higher than the former. Other blazars showing this same behaviour are e.g. CTA 102 and OJ 248. In contrast to these sources, we also observe some of the targets of the sample with the opposite behaviour, i.e. a higher flux during non-rotations than during EVPA swings. Some examples of this are 3C 279 and AO 0235+164. For instance, the former has an optical flux F(rotation) = (1.80 ± 0.18) × 10⁻¹⁵ erg cm⁻² s⁻¹ and F(no-rotation) = (2.89 ± 0.10) × 10⁻¹⁵ erg cm⁻² s⁻¹ during rotations and non-rotations, respectively. We have also estimated the ratio between the flux during rotating and non-rotating periods for the bulk of the blazar population included in this work. This ratio yields a value of F(rotation)/F(no-rotation) = 1.24 ± 0.12, slightly higher for the former state, despite this behaviour not being the general rule for all the sources. This ratio was also compared with that estimated by Blinov et al. (2016a) for a different blazar population, with a value of 1.12, similar to the one estimated here.
Finally, as introduced in Section 3.3.2, we have also considered those rotations developed on longer time scales than the aforementioned smooth EVPA swings, showing fluctuations and fast variability in the measured EVPA during the rotation. A total of 24 non-smooth rotations were detected in 9 of the 26 blazars included in this study. Following the approach used for the smooth angle swings, we have characterised each of these rotations. The average duration of these events, ⟨T⟩ = 361 ± 41 days, is significantly longer than that of the smooth rotations in Table 3. The fastest non-smooth rotations display a duration comparable with the slowest smooth EVPA swings, while the rest are clearly slower, as represented in Figure 10. The mean polarization angle variation during these events is also higher than for the smooth rotations, with ⟨Δθ⟩ = 253.1° ± 19.1°, in comparison with the ⟨Δθ⟩ = 140.9° ± 4.9° reported in Table 3. As a consequence of this difference, the measured rate is lower for the non-smooth rotations than for the smooth ones, with ⟨|Δθ|/Δt⟩ = 1.0 ± 0.1°/day. It is also clear that these slow and non-smooth rotations follow the same relation between the rate and the duration of the rotation estimated for the smooth swings, as represented in Figure 10. The values of the power law fit after also including the non-smooth rotations were found to be A = 117.2° ± 33.4° and b = 0.89 ± 0.01, consistent with those derived when considering only smooth rotations. As for the smooth rotations, we observe in Figure 10 that all the non-smooth rotations are contained in the 3σ confidence limit of the fit, with the exception of a non-smooth rotation displayed by 3C 66A.
Connection with the NMF analysis from OS22
Taking advantage of the spectral decomposition performed in OS22 using the NMF algorithm, we have investigated interesting behaviours and features displayed by the sources included here with the components derived from this analysis. The NMF study allowed us to decompose the total optical spectra in a small number of components that we could easily associate with the different parts of the blazar contributing to the optical emission. Galaxy-dominated blazars were reconstructed with a stellar template accounting for the emission of the host galaxy and a power law that modeled the synchrotron emission of the jet. For BL Lac objects we used two to four power laws to account for the emission of the jet and its variability. Finally, FSRQs needed an extra component corresponding to a quasi-stellar template to model the emission of the BLR. For this latter blazar type, we sometimes identified a steep and almost constant power law associated with a bright accretion disc.
In this work we have investigated the connection of these components with the features and characteristics of the polarization and its variability. Making use of the polarized optical spectra of each blazar provided by the Steward Observatory, we have calculated the polarized spectral index α_pol, assuming a power law shape F_ν ∝ ν^(−α_pol). Moreover, following the methodology from OS22, we have estimated the minimum number of components needed to model the variability of the polarized spectra using the residual sum of squares method. We have estimated a total of two components to account for the observed variability in all three blazar types. This result is expected, as the jet is the only part that contributes significantly to the observed polarized emission. This can be compared with the same estimations from OS22 for the total spectra, where BL Lacs also yielded a total of two components, as their total optical emission comes mainly from the relativistic jet. On the contrary, FSRQs needed three or four components according to this approach, owing to their more complex morphology, with a bright BLR and accretion disc contributing only to the total optical emission.
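A hedged sketch of how a spectral index can be extracted from an optical spectrum by a log-log power-law fit; the F_λ to F_ν conversion and the mock spectrum are our own illustration and may differ from the exact procedure of OS22:

```python
import numpy as np

def spectral_index(wave_angstrom, f_lambda):
    """Spectral index alpha of a power law F_nu ~ nu**(-alpha), from a
    least-squares fit in log-log space after converting F_lambda to F_nu."""
    c = 2.998e18                                  # speed of light [Angstrom/s]
    nu = c / np.asarray(wave_angstrom)
    f_nu = np.asarray(f_lambda) * np.asarray(wave_angstrom) ** 2 / c
    slope, _ = np.polyfit(np.log10(nu), np.log10(f_nu), 1)
    return -slope

wave = np.linspace(4000.0, 7550.0, 200)          # Steward spectral range [Angstrom]
mock = 1e-15 * (wave / 5500.0) ** (-0.5)         # mock F_lambda, alpha = 1.5
print(spectral_index(wave, mock))                # -> ~1.5
```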
Galaxy-dominated blazars
Galaxy-dominated blazars present a very bright unpolarized stellar emission, with a very low variability in the total flux (OS22). The high dominance of this contribution masks the spectral variability of the total spectra, reflected in the short variability range of the total spectral index (for instance, α ∼ 1.35−1.45 for Mkn 501). On the other hand, the spectral variability of the polarized spectra is much higher, since the only contribution to this emission is the relativistic jet. The polarized spectral index presents a much wider variability range, with indices between α_pol ∼ 0.5−2.5. The difference between α and α_pol for these sources confirms that the bright stellar emission is masking the real variability of the optical jet emission. Moreover, this stellar emission also has a depolarizing effect on the measured polarization degree, as mentioned in Section 4.2.
BL Lac objects
BL Lac objects display a similar variability range for the total and polarized spectral indices. The slopes derived from the polarized spectra are also comparable to those derived from the NMF analysis, as reported in Table 2 of OS22. However, no strong correlation is observed between the total and polarized spectral indices, with typical values of the linear correlation coefficient r ∼ 0.3−0.4. Nevertheless, when the EVPA rotations are detected simultaneously with an optical flare (<10% of the rotations), we typically observe the same spectral behaviour of the total and polarized spectra. The spectral indices α and α_pol during these events show rather similar values, meaning that the radiation producing them might have a common origin. In addition to this general behaviour, we also observe interesting features in some of the BL Lac objects studied here. Some comments on each BL Lac object are included in Appendix A2.
FSRQs
Following the behaviour of BL Lac objects, FSRQs also show a similar spectral variability in the total and polarized spectra. The correlation between both indices is absent. Moreover, roughly 30% of the EVPA rotations identified in these objects were found to be connected with optical flares. During these events, similar total and polarized spectral indices are observed, often coincident with a dominant NMF component. Moreover, we do also observe peculiar behaviours for some FSRQs. In Appendix A3 we include some comments on the FSRQs of the sample.
DISCUSSION
Several models have been developed in recent years to explain the behaviour of the polarized emission observed from blazars. The results and differences between FSRQs/LSPs and ISPs/HSPs found here are consistent with those reported by Angelakis et al. (2016) and Hodge et al. (2018). This behaviour can be explained through the shockin-jet scenario proposed by Angelakis et al. (2016). This framework is based on a magnetic field with a helical structure plus a turbulent component. In such scenario, a relativistic shock propagating in the jet accelerates particles through diffusive shock acceleration (see Figure 16 from Angelakis et al. 2016). The particles cool down as they are advected away from the shock due to the synchrotron and inverse Compton radiation. The high-energy particles that are responsible for the emission at frequencies around or above the synchrotron peak are located downstream in a small volume where the magnetic field has a highly ordered component plus a strong turbulent and variable contribution generated by the shock. On the other hand, the emission at frequencies below the synchrotron peak comes from particles contained in a much larger volume dominated by the ordered magnetic field. Therefore, a higher and more variable polarization is expected for the former region, while the second is expected to be less variable. This model has explained the behaviour observed for the blazar sample monitored by RoboPol. Due to the location of the optical emission w.r.t. the synchrotron peak for FSRQs/LSPs and ISPs/HSPs, this model explains the higher polarization and variability for the former in comparison to the latter. In addition, it can naturally explain the different EVPA distributions for each type. The optical emission from FSRQs is produced in the region with the strong turbulent component. Therefore, a random orientation is expected. Contrary to this, the optical emission of BL Lacs comes mostly from the region with a highly ordered, stable magnetic field, leading to a more constant orientation of the EVPA. Therefore, this model is also able to explain the results observed here. This is in line with one of the possible explanations proposed by Smith (1996) for the wavelength dependence reported between different blazar types, with a more ordered and aligned magnetic field component radiating mostly in bluer wavelengths. In fact, Smith (1996) associates this dependence to reasons intrinsic to the emitting region and rules out that it is introduced by external causes (e.g. accretion disc).
Concerning the EVPA rotation study carried out here, the results are compatible with those reported by Blinov et al. (2016b), where a higher frequency of rotations is measured for FSRQs than for BL Lacs, as reported in Table 2. It is also important to note that among the BL Lacs displaying rotations, almost all correspond to the LSP and ISP subtypes. We also observe that the results found here are compatible with the distinction of rotations proposed by Blinov et al. (2015) and Kiehlmann et al. (2016), with deterministic smooth rotations occurring in the dominant large-scale magnetic field, and stochastic rotations taking place downstream in the turbulent region. While the former can take place in both blazar types (although they are expected to be more important in HBLs), the latter would be observed more often for FSRQs and LSPs. Moreover, owing to the higher contribution of the turbulent component for FSRQs, non-smooth rotations can also be related to this region, where short time-scale fluctuations are more likely to appear in the measured EVPA during the rotation. This is compatible with the fact that most of the non-smooth rotations observed here correspond to FSRQs. The shock-in-jet model from Angelakis et al. (2016) can also explain the higher flux, variability and polarization degree measured for blazars displaying EVPA rotations in comparison to those where no rotations were observed. Indeed, rotations are more frequently observed in FSRQs and LBLs, which tend to be more variable and luminous than HSPs (see Figure 8). The higher variability of the polarization for these sources w.r.t. HSPs is also explained through the stronger contribution of the turbulent component under this model, also explaining why FSRQs reach higher values of the polarization degree. Moreover, Kiehlmann et al. (2017) find random walk models to be incompatible with the origin and development of EVPA swings, disfavouring turbulent models like the one proposed by Marscher (2014).
Focusing on those blazars displaying EVPA rotations, we also observe differences between the periods in which a rotation is observed and those with no EVPA swings. In agreement with the results reported by Blinov et al. (2016a) and Kiehlmann et al. (2017), the polarization degree was found to decrease during rotating periods. Zhang et al. (2014, 2016) claim that this is an expected feature in a jet with a shock passing through the emitting region. However, random walk models are also able to reproduce such behaviour, as well as a flux increase during the rotations (see for instance Blinov et al. 2016a). Here we observe a marginal flux increase during these periods by a factor 1.24. However, this behaviour is not systematically observed for our blazar sample. Moreover, both of these scenarios are able to explain the absence of correlation between the amplitude of the polarization angle variability and the duration of the rotations. Kiehlmann et al. (2017) also raise the possibility that the differences between rotating and non-rotating periods are produced in random walk models.
Rotations have been associated in the past with the development of γ-ray flares, with significant evidence for this connection reported by Blinov et al. (2015, 2018). We have also evaluated the possible relation between optical flares and EVPA rotations for our sample. As reported in Section 4, we measure a hint of flux increase during rotating periods by a factor 1.24. Nevertheless, this brightening does not occur in all the blazars of our sample. In fact, a variety of different behaviours is observed, with some sources showing a clear flux increase (e.g. 3C 345), some with no significant change of their optical emission (BL Lacertae), and some showing a lower flux (for instance 3C 279). We do not find any significant correlation between optical flares and the EVPA rotations measured here, in agreement with the results reported by Blinov et al. (2016a) for the RoboPol blazar sample. The shock-in-jet model explains this as a random walk variability of the polarization if the rotations are being produced by the turbulent region (Blinov et al. 2016a). Alternatively, if the shocks propagating along the jet are mildly relativistic, this model also predicts large variability of the polarization and the EVPA, leading to the observed rotations, with small flux variations (Zhang et al. 2016). For those rotations that are observed correlated with an optical flare, as reported in Section 4, the variability of the total and polarized spectral indices is approximately the same, meaning that the mechanisms leading to the flux and polarization variability may be related. In addition, we can also relate these events to the different components derived from the NMF reconstruction presented in OS22. We observe that BL Lac objects tend to show a higher correlation between the total and the polarized spectral indices than FSRQs. This can also be related to the fact that FSRQs show a higher dominance of the turbulent component and therefore, as aforementioned, their polarization may present a random walk variability. In contrast, for BL Lacs, dominated by the large-scale, stable helical magnetic field component, the variability mechanisms are more likely to be the same for the total and polarized emission.
Given the characteristics and behaviours detailed above, we find that the shock-in-jet model from Angelakis et al. (2016) can provide a reasonable explanation for the reported results. However, it is important to note that this interpretation was made using the averaged properties extracted from all the blazar sample studied here. We also stress that more models have tried to reproduce the variability of the polarization and the EVPA. Some of them are models involving changes in the jet orientation (for instance Marscher et al. 2008; Lyutikov & Kravchenko 2017) or kink instabilities leading to a re-structuring of the magnetic field (e.g. Nalewajko 2017). However, each model presents certain limitations when reproducing some of the observed features. For instance, the former example is expected to produce a determined direction of the rotation, something not observed here. For the second type of model highlighted above, these rotations would be limited to a maximum change of the EVPA of 180°, always expected to happen in the same direction. Therefore, our observations disfavour these characteristics. Finally, models based on turbulent cells like the one proposed by Marscher (2014) have also been successful in the past for explaining the characteristics of the polarization variability. Nevertheless, large EVPA swings are rarely observed in such a scenario, and are not expected to appear correlated with MWL flares. We also note that models based on shocks propagating along helical jets, similar to the one proposed by Angelakis et al. (2016), have also been proposed for other sources such as S5 0716+714 and CTA 102 by Larionov et al. (2013, 2016), successfully interpreting the long-term behaviour of the polarization of several blazars.
CONCLUSIONS
We have analysed 10 years of spectropolarimetric data for 26 γ-ray blazars monitored by the Steward Observatory, studying the properties of the polarization and its variability. Our results point towards a clear difference between FSRQs/LSPs and ISPs/HSPs. The former population presents a higher variability and reaches higher values of the polarization degree than the latter. Moreover, FSRQs tend to show a random distribution of the EVPA, in contrast to the preferential orientation displayed by BL Lacs. Concerning the relation between the flux and the polarization degree, BL Lacs do not show any correlated variability. FSRQs, on the other hand, do appear to display a correlation between them. However, this correlation disappears after accounting for the depolarizing effect introduced by the BLR+AD.
We have also performed a systematic search for EVPA rotations in the evolution of the polarization angle. This study also led to the observation of differences between FSRQs and BL Lacs, with more frequent rotations for the former. These rotations are not statistically connected to optical flares. We have compared the characteristics of the blazars showing EVPA rotations with those without polarization angle swings, observing a higher variability, flux and polarization for FSRQs. In addition, among the sources with EVPA rotations we have also detected differences between periods with ongoing rotations and periods where no rotations were taking place. The polarization degree measured during EVPA rotations was found to be lower than during non-rotating periods. Moreover, a marginal increase of the optical flux was also observed. However, this last effect does not appear systematically in all the blazars of the sample, as derived from the lack of correlations between rotations and flares.
Finally, we have evaluated the averaged characteristics of the polarization and its variability in the context of different models proposed in the literature. We conclude that the shock-in-jet model proposed by Angelakis et al. (2016) can provide a plausible explanation for the observed average behaviour. This model assumes a large-scale, helically ordered magnetic field, plus a turbulent magnetic field component. The former dominates the emission at frequencies below the synchrotron peak frequency, becoming more important in the optical emission for ISPs and HSPs; the latter becomes stronger at frequencies around or above the synchrotron peak, dominating the optical emission in the case of FSRQs and LSPs.
DATA AVAILABILITY
All the data are publicly available at the Steward Observatory blazar monitoring programme webpage: http://james.as.arizona.edu/~psmith/Fermi/.
SUPPLEMENTARY MATERIAL
We include the polarization-flux diagrams for all the sources of this work, except those presented as examples in Section 4.1. For galaxy-dominated blazars and FSRQs, we include the diagrams before and after correcting for the depolarizing effect. We also include the long-term polarization light curves of the blazars studied here.

Figure S1. Polarization-flux diagrams for galaxy-dominated blazars (same description as Figure 2).
Figure S2. Polarization-flux diagrams for BL Lac objects.
Figure S3. Polarization-flux diagrams for FSRQs (same description as Figure 4).
Figure S4. Long-term polarization light curves of H 1426+428.
Figure S5. Long-term polarization light curves of Mkn 501.
Figure S6. Long-term polarization light curves of 1ES 2344+514.
Figure S7. Long-term polarization light curves of 3C 66A.
Figure S8. Long-term polarization light curves of AO 0235+164.
Figure S9. Long-term polarization light curves of S5 0716+714.
Figure S10. Long-term polarization light curves of PKS 0735+178.
Figure S11. Long-term polarization light curves of OJ 287.
Figure S12. Long-term polarization light curves of Mkn 421.
Figure S13. Long-term polarization light curves of W Comae.
Figure S14. Long-term polarization light curves of H 1219+305.
Figure S15. Long-term polarization light curves of 1ES 1959+650.
Figure S16. Long-term polarization light curves of PKS 2155-304.
Figure S17. Long-term polarization light curves of BL Lacertae.
Figure S18. Long-term polarization light curves of PKS 0420-014.
Figure S19. Long-term polarization light curves of PKS 0736+017.
Figure S20. Long-term polarization light curves of OJ 248.
Figure S21. Long-term polarization light curves of Ton 599.
Figure S22. Long-term polarization light curves of PKS 1222+216.
Figure S23. Long-term polarization light curves of 3C 273.
Figure S24. Long-term polarization light curves of 3C 279.
Figure S25. Long-term polarization light curves of PKS 1510-089.
Figure S26. Long-term polarization light curves of B2 1633+38.
Figure S27. Long-term polarization light curves of 3C 345.
Figure S28. Long-term polarization light curves of CTA 102.
Figure S29. Long-term polarization light curves of 3C 454.3.
APPENDIX A: INDIVIDUAL REMARKS
Here we include individual remarks for the sources considered in the polarization analysis carried out in this study.
A1 Galaxy-dominated Sources
H 1426+428

This is the blazar with the lowest observed polarization fraction of the three galaxy-dominated sources, with values <2% prior to the host galaxy correction, and reaching fractions of ∼3.5% after accounting for the depolarizing effect. Its flux is rather stable during its short monitored period (see Figure S4). No EVPA rotations were observed for this source. The total and polarized spectral indices adopt different values, with a higher variability of the latter (∼ −0.3 to 2.5) in comparison to the former (∼ 1.3 to 1.4). This is due to the dominance of the host galaxy in the total spectra, as reported in Section 4.6.1, masking the real variability.
Mkn 501
It is the only galaxy-dominated blazar showing EVPA swings. It displays a slow, non-smooth rotation lasting ∼500 days at the beginning of the monitoring programme, with a swing of more than 100° (see Figure S5). After this rotation, the EVPA stays rather stable at around −50°. As for the other galaxy-dominated sources, the polarization degree of Mkn 501 is rather low in comparison with BL Lac objects and FSRQs, as is its variability.
1ES 2344+514
As for the other two galaxy-dominated blazars, 1ES 2344+514 shows a low polarization fraction (<6% before the host galaxy correction) and a very stable EVPA. Its polarized flux shows one of the lowest variabilities among the blazar sample, ranging from ∼3 × 10⁻¹⁵ erg cm⁻² s⁻¹ to ∼4 × 10⁻¹⁵ erg cm⁻² s⁻¹ (see Figure S6). The total spectral index also shows a very narrow variability range (∼ 0.8–1.0), in contrast with the highly variable polarized index (∼ 0.0–2.5).
A2 BL Lac Objects
3C 66A

This object displays two non-smooth rotations around MJD 56600 and MJD 57050, approximately, one of them deviating from the power-law fit describing the relation between the amplitude and the rate of the rotations detected here. 3C 66A displays preferred orientations of the EVPA. However, contrary to the rest of the BL Lacs with such behaviour, it presents a double peak in the EVPA distribution. During the period prior to the non-smooth rotations, the EVPA is stable at approximately 25°. After these events, the peak is observed at −20°. During these two events, the total and polarized spectral indices show a correlated variability, with a correlation coefficient of 0.60 (p-value = 10⁻³⁵).
AO 0235+164
Roughly the same variability range is observed for the total and polarized spectral indices. However, there is no correlation between the two indices, following the FSRQ-like behaviour rather than the mild correlation shown by BL Lacs. This is consistent with the fact that this blazar has on several occasions shown mixed properties between both blazar classes (Raiteri et al. 2007, 2014). The NMF reconstruction derives a component consistent with the presence of a BLR visible during low flux states (see OS22). Another indication of this comes from the uniform EVPA distribution displayed by this source, more typical of FSRQs.
S5 0716+714
This BL Lac object presents several EVPA rotations in its long-term evolution, as reported in Table 2 (see Figure S9). As an example, we highlight one of these events, occurring on MJD 57000 simultaneously with optical flares. The polarized spectral index measured during this event ranges between ∼0.9 and 1.2. We observe that this variation coincides with that of the two NMF components dominating the emission during the flare, C1 and C2, with spectral indices of 0.89 and 1.17, respectively. This points towards a common origin of these components, the optical flare, and the observed polarization variability. The variability of the polarized spectral index for this source is consistent with that shown by both the total spectral index and the indices of the NMF components derived in OS22.
PKS 0735+178
This BL Lac is one of the least monitored during the 10-year period considered here. Its variability is also low, and the EVPA varies between 0° and 100°, approximately, without displaying any EVPA rotation (see Figure S10).
OJ 287
The evolution of the EVPA for this object shows a slow decreasing trend, varying from roughly 180° to 120° in approximately 2000 days. After this period, on MJD 57000, and anticipating a bright flare occurring on MJD 57300, a counter-clockwise non-smooth rotation takes place, leading to an orientation of the EVPA at ∼360° (see Figure S11). Then, simultaneously with the flare, a second, clockwise non-smooth rotation happens, leading to a total EVPA swing of more than 400°. During these events, the total and polarized indices vary in a correlated way, with values ∼1.0, compatible with the slopes of the NMF components that dominate during the flare (see OS22). This BL Lac displays a very clear correlation between the total and polarized spectral indices, with a linear correlation coefficient of 0.70 (p-value = 10⁻⁷⁵). Both indices vary roughly in the same interval, coincident with that of the different NMF components (∼ 0.5–1.2).
Mkn 421
Several rotations are observed in the evolution of the EVPA of Mkn 421. One of them happens simultaneously with an optical flux increase on MJD 55600, while another is observed right before the highest emission detected for this source in this 10-year period, on MJD 56400 approximately, as represented in Figure S12. As reported in OS22, during this event the component C2 from the NMF reconstruction dominates the emission of this flare, with a spectral index of 2.15. This index is compatible with those measured during the rotation (∼ 1.8–2.2), suggesting a common origin of the flare and the EVPA swing.
W Comae

Five rotations are identified in the long-term evolution of the EVPA of W Comae (see Figure S13). The first four occur during a low emission state, when the contributions of the three NMF components derived in OS22 are comparable. The last one is detected during the flare that takes place on MJD 58250, approximately, dominated by C2 (spectral index 1.13). The total index during this event changes from values around 0.5 before the flare to around 1.0, compatible with the index of C2. The polarized spectral index varies around 1.2, showing consistent values.
H 1219+305
As for PKS 0735+178, the data sample for this BL Lac is rather small compared with the most monitored blazars. We observe a low polarization degree (<7%) and a stable EVPA. No EVPA rotations are detected in the polarization angle variability of H 1219+305.
1ES 1959+650
This source is one of the least variable BL Lac objects. It shows a stable EVPA and a rather low polarization degree (∼2.5-9%), as shown in Figure S15. No EVPA rotations are observed for this source. This is consistent with the fact that some hints of the host galaxy were observed in OS22. The polarized spectral index shows values between 1.0 and 2.0, approximately, while the total spectral index varies between 0.6 and 1.5.
PKS 2155-304
The EVPA is mainly oriented in the angle range between 60° and 90°, showing only a few rotations during the 10-year period. Despite the variability displayed by this BL Lac object, no apparent connections between the rotations, the polarization degree and the flux variability are observed. These rotations are represented in Figure S16. The polarized spectral index varies within the range ∼ 1.0–2.5, similar to the total spectral index (∼ 1.0–2.1).
BL Lacertae
A clear preferred EVPA orientation of 13° is observed for this source. Moreover, a large number of rotations are observed for BL Lacertae, the object of the sample with the highest number of rotations identified (see Figure S17), as reported in Table 2. However, as for PKS 2155-304, no evident relation with the flux or the NMF components is observed, with several rotations appearing during both high and low emission states. Both the total and polarized spectral indices adopt values between 0.0 and 1.0, approximately. However, no strong correlation is observed in their variability.
A3 FSRQs
PKS 0420-014

Despite the short time coverage of this FSRQ, a clear smooth rotation starting on MJD 55150 appears (see Figure S18), coincident with the highest emission state displayed by PKS 0420-014. During this event both spectral indices vary between 0.5 and 1.0. In addition, this rotation also occurs during an enhancement of the contribution coming from the component C2 of the NMF analysis, characterized by a consistent spectral index of 0.89.
PKS 0736+017
Several rotations are identified in the evolution of the EVPA for this FSRQ. The first one occurs simultaneously with the highest emission detected for this object in the monitored period. During this event, the total and polarized indices display values around 1.0, compatible with those of the dominant C2 and C3 components from the NMF analysis of OS22 during the development of this flare (spectral indices of 0.92 and 0.50, respectively). The second rotation is identified during a second flare, less bright than the first one, but also dominated by C3 and showing compatible indices for the total and polarized spectra. The rest of the rotations are observed during low emission states.
OJ 248

This FSRQ shows a clear correlation of the polarization degree with the total flux, with an almost unpolarized low state, and reaching values of ∼18% during the brightest state (higher than 20% after correcting for the depolarizing effect of the BLR+AD). OJ 248 also presents two rotations, coincident with the two peaks of the bright flare on MJD 56500, as shown in Figure S20, during which the total and polarized spectral indices adopt approximate values between ∼1.3–2.3 and ∼1.0–2.5, respectively. These events are also coincident with the dominance of components C3 and C2 of the NMF reconstruction, respectively, with spectral indices of 1.33 for C2 and 1.06 for C3.
Ton 599

A clockwise rotation is observed correlated with the flare occurring on MJD 56070, approximately (see Figure S21), related to the component C3 of the reconstruction (spectral index 1.74). During this event, the total and polarized spectral indices vary between 1.3 and 2.0, approximately, consistent with the spectral index of C3. An increase of the polarization degree is also measured. In addition, several rotations are detected in the low activity state right before and after the bright flare observed on MJD 58100, as shown in Figure S21.
PKS 1222+216
Contrary to the bulk of the FSRQ population, this blazar shows a clear preferential orientation of the EVPA at 0°. It also shows four rotations, three of them linked to an increase of the flux of the reconstructed components C2 and C4 of the NMF analysis from OS22, between MJD 55600 and MJD 56100 (see Figure S22). These components have spectral indices of 1.70 and 0.52, respectively. The polarized spectral index shows values of ∼ 0.3–2.1 in the period during which the rotations occur. However, the total spectra show somewhat higher indices (∼ 2.2–3.3). This could be due to the presence of a bright accretion disc, as reported in OS22, that makes the total spectra steeper than the polarized spectra, where the only expected relevant contribution is the synchrotron emission of the jet.
3C 273
As can be seen from Figure S23, 3C 273 is the least polarized blazar of the sample, with polarization degrees <1.6%. It is also one of the few FSRQs showing a stable EVPA, displaying almost no variability in either the polarization degree or angle, as well as in its total optical emission, as reported in OS22. This peculiar behaviour for an FSRQ, a class typically more variable and more polarized than BL Lac objects, may be due to the fact that the optical emission of this source is dominated by its bright accretion disc, as reported by Raiteri et al. (2014) and in OS22. Therefore, the very low contribution to the polarized emission may be due to a very faint synchrotron emission, as well as possible contamination from the interstellar medium or the accretion disc itself. Given the very low variability displayed by the EVPA, no rotations are observed for 3C 273.
3C 279
This FSRQ is the only source of this type showing a clear correlation between the total and polarized spectral indices. The measured linear correlation coefficient for these indices is 0.62 (p-value = 10⁻⁵³). Moreover, it is also one of the few FSRQs with a preferred EVPA orientation, contrary to the commonly observed uniform distribution for these sources. The EVPA is oriented towards ∼50°, as reported in Table 2. One of the rotations observed for 3C 279 appears coincident with the enhanced state measured at MJD 58150 (see Figure S24). The NMF reconstruction from OS22 reports a dominance of components C2 and C4, with spectral indices of 1.20 and 1.05, respectively. The polarized spectral index varies around ∼1.25 during this rotation, similar to the components contributing the most to the optical emission, suggesting a connection between this rotation and the flare. Larionov et al. (2020) report a predominance of a helical magnetic field, which could be in line with the preferred orientation shown by 3C 279.
PKS 1510-089
The highly variable polarized emission of PKS 1510-089 shows a large number of rotations. Some of these events are found to be connected to optical flares, mostly arising from the same behaviour in the NMF component C3 (spectral index 2.52). However, this is not a general behaviour for this FSRQ, as some rotations are also observed during low emission states. It also presents non-smooth rotations, following the same rate-duration relation represented in Figure 10.
B2 1633+38
Several smooth rotations are observed in the highly variable behaviour of the polarization angle displayed by B2 1633+38, as shown in Figure S26. However, no clear association with flaring states or NMF components is observed. Moreover, four slow, non-smooth rotations are also present in the evolution of the polarization of this object. The first one takes place on MJD 56000, approximately. The first of the three remaining starts on MJD 56700, and the next two occur consecutively, with durations of ∼500–800 days and swings >300°. As for the case of PKS 1222+216, this FSRQ also shows total spectral index values (∼ 1.3–3.0) slightly higher than those derived from the polarized spectra (∼ 0.1–2.2). Again, this could be due to the presence of a bright accretion disc affecting the total spectral index values, as reported in OS22.
3C 345
It shows an orientation of the EVPA towards ∼ −52°. However, the low amount of data may introduce a bias in the determination of the polarization angle distribution. Moreover, it shows a fast, smooth rotation coincident with its brightest flare, at MJD 55100. This EVPA swing also corresponds with an enhancement of the component C2 of the NMF reconstruction, with a spectral index of 1.39. The total and polarized spectral indices adopt similar values, between 1.4 and 2.0 approximately, during this event, compatible with that of the NMF component C2 responsible for the flare.
CTA 102

This blazar shows the typical lack of correlation between the total and polarized spectral indices observed for FSRQs. During a bright flare corresponding to the brightest optical state observed in this monitoring, occurring on MJD 57750 approximately, two EVPA rotations are also observed (see Figure S28). As reported in OS22, this flare is described by the component C2, with a spectral index of 1.23. The measured polarized spectral index in this period is similar to that derived for C2, with values ∼1.5. Another important feature of CTA 102 is a clear increase of the polarization degree during its EVPA rotations, contrary to the behaviour reported for the bulk of the population.

3C 454.3

This is one of the most variable sources of the sample, with several bright flares dominating its optical emission. The total and polarized spectral indices show no correlation in their variability. During two enhanced optical states occurring at MJD 55200 and MJD 56800, we observe three EVPA rotations that occur simultaneously with these states. One of them is coincident with a minor flare on MJD 55200, and the two remaining are coincident with the brightest flare on MJD 56850, approximately. These events can be seen in Figure S29. The spectral variability of these events suggests, as already claimed for other sources, a common origin of the flux and polarization variability, with component and polarized spectral indices of 0.89 and ∼1.0 for the first flare, and 0.54 and ∼0.5 for the second one, where the former values correspond to the indices of the NMF components C3 and C2, respectively. These components are the ones dominating the emission during each of these flares, as reported by OS22.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
Figure 1. Examples of the two types of rotations considered here. Top: smooth rotation detected for AO 0235+164. Empty markers represent the observations during the rotation. Bottom: slow non-smooth rotation observed for OJ 287. The EVPA swing occurs during the period contained in the blue contour.

Figure 2. Polarization-flux diagrams for the galaxy-dominated blazar Mkn 501. The flux value is estimated from the mean flux of each total optical spectrum. Left: before correcting for the host galaxy. Right: after correcting for the host galaxy.

Figure 3. Polarization-flux diagram for the BL Lac object 3C 66A. The flux value is estimated from the mean flux of each total optical spectrum.

Figure 4. Polarization-flux diagrams for the FSRQ blazar 3C 454.3. The flux value is estimated from the mean flux of each total optical spectrum. Left: before correcting for the BLR+AD. Right: after correcting for the BLR+AD.

Figure 5. Mean polarization degree vs. the frequency of the synchrotron peak.
Figure 6. Beta distribution of BL Lacertae for its normalized polarization degree. The grey shaded area represents the distribution of the polarization degree measurements and the black line corresponds to the best fit of the Beta distribution. The fitted values of the two shape parameters of the Beta distribution are also shown in the legend of the figure.
Figure 7. Intrinsic modulation index of the polarization degree. Left: vs. the mean polarization degree. Right: vs. the synchrotron peak frequency. Open and filled green stars represent the galaxy-dominated sources with and without the contribution of the host galaxy, respectively. Blue triangles correspond to the BL Lac objects. Open and filled red squares correspond to the FSRQs, with and without the BLR+AD contribution, respectively.

Figure 8. Intrinsic modulation index of the polarization vs. the optical fractional variability.
Table 2. Sample of polarization measurements. (4) Mean observing cadence. (5) Mean observed polarization degree. (6) Intrinsic modulation index of the polarization. (7) Preferential EVPA direction. (8) Linear correlation coefficient between the total flux and the polarization degree. (9) Number of displayed smooth and fast rotations. (10) Total spectral index range. (11) Polarized spectral index range.

Figure 9. EVPA orientation histograms for the analysed blazar sample. Each colour represents the EVPA of one source. Left: BL Lac objects. The EVPA tends to show a preferential orientation for each source. Right: FSRQs. A more uniform or random distribution can be observed, except for three objects at approximately 0° and 50°.

Table 3. Results of the EVPA rotation analysis on the different blazar types. (1) Source type. (2) Number of detected rotations. (3) Mean EVPA variation amplitude. (4) Mean rotation duration. (5) Mean EVPA variation rate. (6) Frequency of the rotations. The uncertainties are estimated as the standard deviation of the distributions.
Table 1. Sample of blazars from the Steward Observatory monitoring programme included in this work.

Source           Sp. Type           z       SED Type   Synchrotron peak [Hz]
H 1426+428       Galaxy dominated   0.129   HSP        1.0·10¹⁸
Mkn 501          Galaxy dominated   0.033   HSP        2.8·10¹⁵
1ES 2344+514     Galaxy dominated   0.044   HSP        1.6·10¹⁶
3C 66A †         BL Lac             0.444   ISP        7.1·10¹⁴
AO 0235+164      BL Lac             0.940   LSP        2.5·10¹⁴
S5 0716+714 ‡    BL Lac             0.30    ISP        1.5·10¹⁴
PKS 0735+178     BL Lac             0.424   LSP        2.7·10¹³
OJ 287           BL Lac             0.306   LSP        1.8·10¹³
Mkn 421          BL Lac             0.031   HSP        1.7·10¹⁶
W Comae          BL Lac             0.103   ISP        4.5·10¹⁴
H 1219+305       BL Lac             0.184   HSP        1.9·10¹⁶
1ES 1959+650     BL Lac             0.047   HSP        9.0·10¹⁵
PKS 2155-304     BL Lac             0.116   HSP        5.7·10¹⁵
BL Lacertae      BL Lac             0.069   LSP        3.9·10¹³
PKS 0420-014     FSRQ               0.916   LSP        5.9·10¹²
PKS 0736+017     FSRQ               0.189   LSP        1.5·10¹³
OJ 248           FSRQ               0.941   LSP        3.4·10¹²
Ton 599          FSRQ               0.725   LSP        1.2·10¹³
PKS 1222+216     FSRQ               0.434   LSP        2.9·10¹³
3C 273           FSRQ               0.158   LSP        7.0·10¹³
3C 279           FSRQ               0.536   LSP        5.2·10¹²
PKS 1510-089     FSRQ               0.360   LSP        1.1·10¹³
B2 1633+38       FSRQ               1.814   LSP        3.0·10¹²
3C 345           FSRQ               0.593   LSP        1.0·10¹³
CTA 102          FSRQ               1.032   LSP        3.0·10¹²
3C 454.3         FSRQ               0.859   LSP        1.3·10¹³

Synchrotron peak frequencies extracted from the 4LAC-DR2 catalogue of the Fermi-LAT satellite (Ajello et al. 2020).
† Redshift value still under debate. Lower limit of z ⩾ 0.33 determined by Torres-Zafra et al. (2018).
‡ Redshift still uncertain. Upper limit of z < 0.322 estimated by Danforth et al. (2013).
For the present work, we have adopted the first criterion presented here. Nevertheless, we have checked both criteria, with no different results in this case. Moreover, this correction is also dependent on the initial choice of EVPA interval, i.e. [−90°, 90°] or [0°, 180°].
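As a minimal illustration of such an n·180° ambiguity correction, the Python sketch below shifts each EVPA point by a multiple of 180° so that consecutive measurements differ by less than 90°. The function name is hypothetical, and, unlike the criteria discussed above, this simplified version ignores the measurement uncertainties.

    import numpy as np

    def unwrap_evpa(angles_deg):
        # Resolve the n*180 deg ambiguity: shift each measurement so that
        # consecutive points differ by less than 90 deg (simplified sketch).
        out = np.asarray(angles_deg, dtype=float).copy()
        for i in range(1, len(out)):
            jump = out[i] - out[i - 1]
            out[i] -= 180.0 * np.round(jump / 180.0)
        return out

    print(unwrap_evpa([80.0, -85.0, -70.0]))  # -> [80., 95., 110.]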
Figure 10. Rotation rate vs. duration of the observed rotations. Blue triangles and red squares correspond to BL Lac objects and FSRQs, respectively. Black circles correspond to the non-smooth slow rotations measured for all the blazar population. The black solid line corresponds to a rotation with an EVPA amplitude of 117.3°.

Table 3.

Type         N     Mean amplitude [°]   Mean duration [days]   Mean rate [°/day]   Frequency [days⁻¹]
BL Lacs      52    144.7 ± 9.1          20.4 ± 2.4             19.5 ± 3.1          0.010
FSRQs        76    138.3 ± 5.4          21.1 ± 1.5             14.6 ± 1.9          0.017
All sample   128   140.9 ± 4.9          20.8 ± 1.3             16.6 ± 1.7          0.013
Compared with the rotation frequency estimated by the RoboPol programme, based on 24 sources showing EVPA rotations in the observations performed between 2013 and 2015 (Blinov et al. 2016b), the value estimated here is roughly an order of magnitude larger. We note that the monitoring programme carried out by the Steward Observatory has a faster mean observing cadence for several sources (shown in Table 2) with respect to that of the RoboPol programme.
Table 4. Characteristics of the rotator and non-rotator populations. (1) Source type. (2) Mean polarization degree. (3) Maximum EVPA change. (4) Mean EVPA variation rate. The uncertainties are estimated as the standard deviation of the distributions.

Type           Mean pol. degree [%]   Max EVPA change [°]   Mean rate [°/day]
Rotators       8.32 ± 0.97            441.2 ± 21.4          16.6 ± 2.1
Non-rotators   4.70 ± 0.37            67.0 ± 4.2            3.6 ± 0.9
DGAPA PAPIIT from UNAM. Data from the Steward Observatory spectropolarimetric monitoring project were used. This programme is supported by Fermi Guest Investigator grants NNX08AW56G, NNX09AU10G, NNX12AO93G, and NNX15AU81G. We thank the anonymous referee for his/her comments that helped to improve the manuscript.
REFERENCES

Abdo A. A., et al., 2010, ApJ, 716, 30
Ajello M., et al., 2020, ApJ, 892, 105
Angelakis E., et al., 2016, MNRAS, 463, 3365
Blinov D., et al., 2015, MNRAS, 453, 1669
Blinov D., et al., 2016a, MNRAS, 457, 2252
Blinov D., et al., 2016b, MNRAS, 462, 1775
Blinov D., et al., 2018, MNRAS, 474, 1296
Blinov D., et al., 2021, MNRAS, 501, 3715
Carnerero M. I., et al., 2015, MNRAS, 450, 2677
Carnerero M. I., et al., 2017, MNRAS, 472, 3789
Covino S., et al., 2015, A&A, 578, A68
Danforth C. W., Nalewajko K., France K., Keeney B. A., 2013, ApJ, 764, 57
Fitzpatrick E. L., 1999, PASP, 111, 63
Fraija N., et al., 2017, ApJS, 232, 7
Ghisellini G., Tavecchio F., Foschini L., Ghirlanda G., Maraschi L., Celotti A., 2010, MNRAS, 402, 497
Gupta A. C., et al., 2017, MNRAS, 472, 788
Heiles C., 2000, AJ, 119, 923
Hodge M. A., Lister M. L., Aller M. F., Aller H. D., Kovalev Y. Y., Pushkarev A. B., Savolainen T., 2018, ApJ, 862, 151
Hovatta T., et al., 2016, A&A, 596, A78
Kiehlmann S., et al., 2016, A&A, 590, A10
Kiehlmann S., Blinov D., Pearson T. J., Liodakis I., 2017, MNRAS, 472, 3589
Kiehlmann S., et al., 2021, MNRAS, 507, 225
Laing R. A., 1980, MNRAS, 193, 439
Larionov V. M., et al., 2013, ApJ, 768, 40
Larionov V. M., et al., 2016, MNRAS, 461, 3047
Larionov V. M., et al., 2020, MNRAS, 492, 3829
Lyutikov M., Kravchenko E. V., 2017, MNRAS, 467, 3876
MAGIC Collaboration et al., 2020, A&A, 637, A86
Maraschi L., Ghisellini G., Celotti A., 1992, ApJ, 397, L5
Marscher A. P., 2014, ApJ, 780, 87
Marscher A. P., et al., 2008, Nature, 452, 966
Marscher A. P., et al., 2010, ApJ, 710, L126
Nalewajko K., 2017, Galaxies, 5, 64
Otero-Santos J., Acosta-Pulido J. A., Becerra González J., Luashvili A., Castro Segura N., González-Martín O., Raiteri C. M., Carnerero M. I., 2022, MNRAS, 511, 5611
Paatero P., Tapper U., 1994, Environmetrics, 5, 111
Pavlidou V., et al., 2014, MNRAS, 442, 1693
Pearson K., 1900, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 50, 157
Raiteri C. M., Villata M., 2021, Galaxies, 9, 42
Raiteri C. M., et al., 2006, A&A, 459, 731
Raiteri C. M., Villata M., Capetti A., Heidt J., Arnaboldi M., Magazzù A., 2007, A&A, 464, 871
Raiteri C. M., et al., 2012, A&A, 545, A48
Raiteri C. M., et al., 2013, MNRAS, 436, 1530
Raiteri C. M., et al., 2014, MNRAS, 442, 629
Schlafly E. F., Finkbeiner D. P., 2011, ApJ, 737, 103
Serkowski K., Mathewson D. S., Ford V. L., 1975, ApJ, 196, 261
Smith P. S., 1996, in Miller H. R., Webb J. R., Noble J. C., eds, Astronomical Society of the Pacific Conference Series Vol. 110, Blazar Continuum Variability. p. 135
Smith P. S., 2017, The Astronomer's Telegram, 11047, 1
Sosa M. S., von Essen C., Andruchow I., Cellone S. A., 2017, A&A, 607, A49
Torres-Zafra J., Cellone S. A., Buzzoni A., Andruchow I., Portilla J. G., 2018, MNRAS, 474, 3162
Urry C. M., Padovani P., 1995, PASP, 107, 803
Vanden Berk D. E., et al., 2001, AJ, 122, 549
Vaughan S., Edelson R., Warwick R. S., Uttley P., 2003, MNRAS, 345, 1271
Zhang H., Chen X., Böttcher M., 2014, ApJ, 789, 66
Zhang H., Deng W., Li H., Böttcher M., 2016, ApJ, 817, 63
Zhang H., Li X., Guo F., Giannios D., 2018, ApJ, 862, L25
| [] |
[
"STATUS: arXiv pre-print Michael O'Neil3 Fast adaptive high-order integral equation methods for electromagnetic scattering from smooth perfect electric conductors",
"STATUS: arXiv pre-print Michael O'Neil3 Fast adaptive high-order integral equation methods for electromagnetic scattering from smooth perfect electric conductors"
] | [
"Felipe Vico1 felipe.vico@gmail.com ",
"Leslie Greengard2 greengard@cims.nyu.edu ",
"Manas Rachh mrachh@flatironinstitute.org ",
"\nand Center for Computational Mathematics\nInstituto de Telecomunicaciones y Aplicaciones Multimedia (ITEAM)\nUniversitat Politècnica de València\nCourant Institute, NYU\n10012New YorkNY\n",
"\nCenter for Computational Mathematics\nFlatiron Institute New York\nFlatiron Institute New York\n10010., 10010NY, NY\n"
] | [
"and Center for Computational Mathematics\nInstituto de Telecomunicaciones y Aplicaciones Multimedia (ITEAM)\nUniversitat Politècnica de València\nCourant Institute, NYU\n10012New YorkNY",
"Center for Computational Mathematics\nFlatiron Institute New York\nFlatiron Institute New York\n10010., 10010NY, NY"
] | [] | Many integral equation-based methods are available for problems of time-harmonic electromagnetic scattering from perfect electric conductors. Not only are there multiple integral representations that can be used, there are numerous ways in which the geometry can be represented, numerous ways to represent the relevant surface current and/or charge densities, numerous quadrature methods that can be deployed, and numerous fast methods that can be used to accelerate the solution of the large linear systems which arise from discretization. Among the many issues that arise in such scattering calculations are the avoidance of spurious resonances, the applicability of the chosen method to scatterers of non-trivial topology, the robustness of the method when applied to objects with multiscale features, the stability of the method under mesh refinement, the ease of implementation with high-order basis functions, and the behavior of the method as the frequency tends to zero. Since three-dimensional scattering is a challenging, large-scale problem, many of these issues have been historically difficult to investigate. It is only with the advent of fast algorithms and modern iterative methods that a careful study of these issues can be carried out effectively. In this paper, we use GMRES as our iterative solver and the fast multipole method (FMM) as our acceleration scheme in order to investigate some of these questions. In particular, we compare the behavior of the following integral equation formulations with regard to the issues noted above: the standard electric, magnetic, and combined field integral equations (EFIE, MFIE, and CFIE) with standard RWG basis functions [1], the non-resonant charge-current integral equation (NRCCIE) [2], and the decoupled potential integral equation DPIE [3]. Various numerical results are provided to demonstrate the behavior of each of these schemes. Furthermore, we provide some analytical properties and comparisons of the electric charge-current integral equation (ECCIE) [4] and the augmented regularized combined source integral equation (auRCSIE) [5]. Keywords: Electromagnetic (EM) scattering, perfect electric conductor (PEC), second kind integral equation, fast multipole method (FMM), multi-level fast multipole algorithm (MLFMA), high-order adaptive discretization. | null | [
"https://export.arxiv.org/pdf/2306.04473v1.pdf"
] | 259,095,799 | 2306.04473 | f10a98f4b7ec7bb6fa215a63ea62a60db99c11f8 |
Fast adaptive high-order integral equation methods for electromagnetic scattering from smooth perfect electric conductors

June 8, 2023

Michael O'Neil
Felipe Vico1 felipe.vico@gmail.com
Leslie Greengard2 greengard@cims.nyu.edu
Manas Rachh mrachh@flatironinstitute.org
Instituto de Telecomunicaciones y Aplicaciones Multimedia (ITEAM), Universitat Politècnica de València
Courant Institute, NYU, New York, NY 10012, and Center for Computational Mathematics, Flatiron Institute, New York, NY 10010
Center for Computational Mathematics, Flatiron Institute, New York, NY 10010
Courant Institute, NYU, New York, NY 10012
Many integral equation-based methods are available for problems of time-harmonic electromagnetic scattering from perfect electric conductors. Not only are there multiple integral representations that can be used, there are numerous ways in which the geometry can be represented, numerous ways to represent the relevant surface current and/or charge densities, numerous quadrature methods that can be deployed, and numerous fast methods that can be used to accelerate the solution of the large linear systems which arise from discretization. Among the many issues that arise in such scattering calculations are the avoidance of spurious resonances, the applicability of the chosen method to scatterers of non-trivial topology, the robustness of the method when applied to objects with multiscale features, the stability of the method under mesh refinement, the ease of implementation with high-order basis functions, and the behavior of the method as the frequency tends to zero. Since three-dimensional scattering is a challenging, large-scale problem, many of these issues have been historically difficult to investigate. It is only with the advent of fast algorithms and modern iterative methods that a careful study of these issues can be carried out effectively. In this paper, we use GMRES as our iterative solver and the fast multipole method (FMM) as our acceleration scheme in order to investigate some of these questions. In particular, we compare the behavior of the following integral equation formulations with regard to the issues noted above: the standard electric, magnetic, and combined field integral equations (EFIE, MFIE, and CFIE) with standard RWG basis functions [1], the non-resonant charge-current integral equation (NRCCIE) [2], and the decoupled potential integral equation DPIE [3]. Various numerical results are provided to demonstrate the behavior of each of these schemes. Furthermore, we provide some analytical properties and comparisons of the electric charge-current integral equation (ECCIE) [4] and the augmented regularized combined source integral equation (auRCSIE) [5]. Keywords: Electromagnetic (EM) scattering, perfect electric conductor (PEC), second kind integral equation, fast multipole method (FMM), multi-level fast multipole algorithm (MLFMA), high-order adaptive discretization.
Introduction
Boundary integral equation methods are widely used in computational electromagnetics (CEM), especially for exterior scattering problems. They impose the outgoing radiation condition exactly and, for piecewise constant homogeneous dielectrics or perfect conductors, reduce the dimensionality of the problem by requiring the discretization of the boundary alone. While the number of degrees of freedom required is dramatically reduced, these methods lead to dense linear systems of equations; hence, fast algorithms are needed to address large-scale problems. At present, state-of-the-art solvers rely on iterative algorithms such as GMRES or BiCGstab. These algorithms work particularly well when the system to be solved is well-conditioned with a spectrum that clusters away from the origin. When computing a solution via an iterative solver, each step requires a matrix-vector product involving the system matrix. There are many algorithms available for accelerating this step, and since it is by now fairly standard in both academic and commercial software, we will rely here on the fast multipole method (FMM) [6][7][8][9][10]. Iterative solvers with FMM acceleration only require an amount of work of the order O(N_iter · N log N) for any frequency, where N is the system size and N_iter is the total number of iterations.
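To make the iterative-solver setting concrete, the following minimal Python sketch solves a dense stand-in for a discretized second-kind system with GMRES, supplying the matrix only through a matrix-vector product. In a production code this matvec would be the FMM-accelerated apply; the dense matrix here is purely illustrative, and the rtol keyword assumes SciPy ≥ 1.12.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    n = 500
    rng = np.random.default_rng(0)
    K = rng.standard_normal((n, n)) / n     # smoothing-operator stand-in
    A = np.eye(n) + K                       # second-kind structure I + K

    def apply_A(x):
        # O(n^2) dense apply; an FMM would reduce this to O(n log n)
        return A @ x

    op = LinearOperator((n, n), matvec=apply_A)
    b = rng.standard_normal(n)
    x, info = gmres(op, b, rtol=1e-12, restart=100, maxiter=200)
    print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))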
The focus of the present paper is on the choice of integral formulation, the discretization process, and their effect on performance and accuracy (which can often be dramatic). Currently, the most widely used solvers rely on the electric field, magnetic field, and combined field integral equations (EFIE, MFIE and CFIE) with RWG basis functions and a conforming mesh model of the scatterer [1,11,12]. These methods are subject to a host of numerical difficulties, including low-frequency breakdown, high-density mesh breakdown, and standard mathematical ill-conditioning. Rather than changing the underlying formulation, the dominant approach in CEM has been to introduce additional ideas to mitigate these problems. Loop-star basis functions [13][14][15][16], for example, improve accuracy and conditioning in the low-frequency regime. Linear algebraic pre-conditioners [17][18][19] alleviate the difficulties produced by the hypersingular integral operator in the EFIE, especially when dense meshes are needed to resolve sub-wavelength features in the geometry. At the same time, there has been a significant effort in the research community to develop well-conditioned Fredholm integral equations of the second kind, that is, equations whose system matrix takes the form I + K, where I is the identity operator and K is a smoothing integral operator whose spectrum clusters at the origin (the condition number of such systems is typically independent of the number of degrees of freedom and stable under mesh refinement). While we do not seek to review the literature here, these include the use of Calderon identities to analytically pre-condition the EFIE [18,20,21] and the use of regularizing operators to pre-condition the CFIE [22][23][24]. A complementary class of methods are the so-called charge-current formulations. These methods are also aimed at developing well-conditioned formulations that are free from low-frequency breakdown [2,4,15,[25][26][27][28], but achieve the goal by introducing extra unknowns in the problem. Other formulations that lead to resonance-free, second-kind equations include those based on generalized Debye sources [29,30] and decoupled potential formulations [3,31,32]. More recently, an augmented regularized combined source integral equation (auRCSIE) was introduced in [5]. Rather than an exhaustive analysis of all such formulations, we focus here on three representative second-kind integral formulations (NRCCIE, auRCSIE, and DPIE) and use the standard EFIE with RWG basis functions (EFIE-RWG) for comparison.
Once an appropriate second-kind integral formulation has been selected, the accuracy of the obtained solution will depend on the discretization and quadrature methods used. In this paper, we propose and investigate a fast, high-order, adaptive Nyström method that yields high-order
convergence and permits adaptive refinement to capture small features in the geometry. Of course, the quality of the geometry description itself also has an impact on the accuracy of the results. Here, we use the method described in [33] which allows for the efficient construction of globally smooth complex surfaces with multiscale features, high-order mesh generation, and local refinement.
With all of this machinery in place, we are able to address challenging electromagnetic scattering problems with millions of degrees of freedom in physically delicate regimes. We show that for the effective solution of such problems, all of the above ingredients play a role in robustly achieving user-specified accuracies in the electric and magnetic fields: well-conditioned formulations, high-order surface representations, and high-order quadratures (complemented by suitable fast algorithms).
PEC integral equations with physical unknowns
Electromagnetic scattering from a perfect electric conductor can be studied in the time-harmonic regime, where the full Maxwell equations reduce to:

∇ × E^tot = iωμ H^tot,   ∇ × H^tot = −iωε E^tot.   (2.1)

Here, we assume that the permittivity ε and permeability μ are scalar constants. The perfect electric conductor (PEC) is defined by a bounded region Ω, possibly with M components Ω = ∪_{i=1}^{M} Ω_i, whose boundary is given by Γ = ∂Ω. The boundary of the ith component will be denoted Γ_i = ∂Ω_i. As is well-known, the boundary conditions on a PEC are [34,35]:
n̂ × E^tot = 0,   n̂ · H^tot = 0,
n̂ × H^tot = J,   n̂ · E^tot = ρ/ε,   (2.2)
together with the continuity condition along the surface of the scatterer,
iωρ = ∇_Γ · J.   (2.3)
It is sufficient to enforce the boundary conditions on the tangential components of the electric field, as done in the EFIE, but one or more of the other (redundant) boundary conditions are often used in the alternative formulations mentioned above and discussed below. Furthermore, it is convenient to write the total field as the sum of a known incoming field and an unknown scattered field:
E^tot = E^in + E^scat,   H^tot = H^in + H^scat.   (2.4)
The standard representation for the scattered fields is given in terms of a vector potential A and a scalar potential φ, in the Lorenz gauge:
E^scat = iω A^scat − ∇φ^scat,   (2.5)
H^scat = (1/μ) ∇ × A^scat,   (2.6)
with
A^scat[J](x) = μ S_k[J](x),   (2.7)
φ^scat[ρ](x) = (1/ε) S_k[ρ](x),   (2.8)
and where x ∈ R³ \ Ω. The above layer potential operators are defined by
S_k[a](x) = ∫_Γ g_k(x − y) a(y) dA(y),   (2.9)
S_k[ρ](x) = ∫_Γ g_k(x − y) ρ(y) dA(y),   (2.10)
with kernel given by the Green's function
g_k(x − y) = exp(ik|x − y|) / (4π|x − y|).   (2.11)
Here, a is a tangential vector field and ρ is a scalar density on the boundary Γ. It is important to note that the charge ρ in (2.8) is not an extra degree of freedom, but must satisfy the continuity condition (2.3). This ensures that the resulting electromagnetic fields E^scat, H^scat are Maxwellian. Using the representation above for the scattered electric and magnetic fields, the EFIE is obtained by imposing the boundary condition n̂ × E^tot = 0, the magnetic field integral equation (MFIE) is obtained by imposing the boundary condition n̂ × H^tot = J, and the standard CFIE is obtained as a linear combination of n̂ × H^tot = J and −n̂ × n̂ × E^tot = 0. Charge-current formulations, on the other hand, are based on considering the charge term as an independent unknown and imposing another one (or more) of the other conditions in (2.2) in order to obtain a uniquely solvable system of equations [2,4,[26][27][28]36]. We will often refer to these as augmented formulations, since they have increased the number of unknowns and imposed additional constraints. We turn now to the derivation of one such scheme.
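As an illustration of how such layer potentials are evaluated away from the boundary, the following Python sketch applies S_k by direct quadrature at off-surface targets. The node/weight inputs are assumed to come from a surface quadrature such as the one described in Section 6; no singular quadrature is attempted, so the sketch is only valid for targets well separated from Γ, and the crude equal-weight sphere rule is purely illustrative.

    import numpy as np

    def g_k(k, x, y):
        # Helmholtz Green's function g_k(x - y) = exp(ik|x - y|)/(4*pi*|x - y|), cf. (2.11)
        r = np.linalg.norm(x - y, axis=-1)
        return np.exp(1j * k * r) / (4.0 * np.pi * r)

    def single_layer(k, targets, nodes, weights, density):
        # S_k[rho](x) ~ sum_j w_j g_k(x - y_j) rho(y_j), off-surface targets only
        vals = np.empty(len(targets), dtype=complex)
        for i, x in enumerate(targets):
            vals[i] = np.sum(weights * g_k(k, x, nodes) * density)
        return vals

    # Toy usage: constant density on crude unit-sphere nodes, one exterior target.
    rng = np.random.default_rng(0)
    nodes = rng.standard_normal((200, 3))
    nodes /= np.linalg.norm(nodes, axis=1, keepdims=True)
    weights = np.full(200, 4.0 * np.pi / 200)
    print(single_layer(1.0, np.array([[3.0, 0.0, 0.0]]), nodes, weights, np.ones(200)))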
The Electric Charge-Current Integral Equations
The electric charge-current integral equation (ECCIE) is presented in [4], following the ideas and nomenclature of [26,37]. It is obtained from the representations (2.5) and (2.6) by imposing the conditions n̂ × H^tot = J and n̂ · E^tot = ρ/ε,
yielding

J/2 − K_k[J] = n̂ × H^in   (2.12)

and

−iωμε n̂ · S_k[J] + ρ/2 + S′_k[ρ] = ε n̂ · E^in,   (2.13)
where

K_k[J] = n̂ × ∇ × S_k[J],   (2.14)

is interpreted on the surface in the principal-value sense, and

S′_k[ρ](x) = ∫_Γ (∂g_k/∂n(x))(x − y) ρ(y) dA(y).   (2.15)
An analogous integral equation known as the Magnetic Charge-Current Integral Equation (MCCIE) can be derived, but it shares similar properties and we will not discuss the formulation in this paper.
The Non-Resonant Charge-Current Integral Equation
The non-resonant charge-current integral equation (NRCCIE) was introduced in [26,37], with a modified version in [28]. The basic idea is to make use of (2.12) and (2.13), together with the equation derived from imposing n̂ × E^tot = 0 and a weak form of the continuity condition (2.3) obtained by integration over the surface. These two equations take the form
iωμ n̂ × S_k[J] − (1/ε) n̂ × ∇S_k[ρ] = −n̂ × E^in,   (2.16)
∇ · S_k[J] − iω S_k[ρ] = 0.   (2.17)
The NRCCIE is a system of two equations, the first obtained as a linear combination of (2.12) and (2.16), and the second as a linear combination of (2.13) and (2.17):

J/2 − K_k[J] + α ( iωμ n̂ × n̂ × S_k[J] − (1/ε) n̂ × n̂ × ∇S_k[ρ] ) = n̂ × H^in − α n̂ × n̂ × E^in,
ρ/2 + S′_k[ρ] − iωμε n̂ · S_k[J] + α ( ∇ · S_k[J] − iω S_k[ρ] ) = ε n̂ · E^in.

(2.18)
Here, α is an arbitrary real positive constant. The NRCCIE is known to have a unique solution at any frequency ω > 0 (see [28]). The operators ∇ · S_k[J] and n̂ × ∇S_k[ρ] are not compact, however, and therefore the coupled system (2.18) is not, strictly speaking, a Fredholm equation of the second kind. Nevertheless, we will show that it has similar properties in terms of a small condition number and the absence of high-density mesh breakdown.
Augmented Regularized Combined Source Integral Equation
An alternative to the charge-current approach is based on using two tangential vector fields as sources (analogous to using both electric and magnetic current-like variables as unknowns). We focus on the formulation presented in [5], which uses ideas from [22] and [38]. The scheme begins from the following representation for the scattered electric and magnetic fields:
E^scat = ∇ × S_k[j] + (i/(ωε)) ∇ × ∇ × S_k[m],
H^scat = iωε S_k[j] + ∇ψ + ∇ × S_k[m],   (3.1)

with

ψ = S_k[σ] + S_k[S_{ik}[σ]],
m = n̂ × S_{ik}[j],   (3.2)
and where σ is an unknown scalar density and

S_{ik}[σ](x) = ∫_Γ g_{ik}(x − y) σ(y) dA(y)   (3.3)

denotes the single layer operator with complex wavenumber ik.
In the original paper [5], the relation between m and j was taken to be

m = n̂ × S²_{ik}[j],   (3.4)

but the modification above is simpler and more efficient, as there appears to be little difference in conditioning and the number of iterations required for solution.
The corresponding integral equation (auRCSIE) is obtained by imposing the boundary conditions n̂ × E^tot = 0 and n̂ · H^tot = 0 along the surface Γ, yielding:

j/2 + K_k[j] + (i/(ωε)) N_k[n̂ × S_{ik}[j]] = −n̂ × E^in,
−σ/2 + S′_k[σ] + S′_k[S_{ik}[σ]] = f.

(3.5)
In the system of equations above, the operator S′_k is defined, as in (2.15), by

S′_k[σ](x) = ∫_Γ (∂g_k/∂n(x))(x − y) σ(y) dA(y),   (3.6)
the operator N_k is defined by

N_k[ζ](x) = n̂(x) × ∇ × ∇ × ∫_Γ g_k(x − y) ζ(y) dA(y),   (3.7)
and the right hand side is given by

f = −n̂ · H^in − iωε n̂ · S_k[j] − n̂ · ∇ × S_k[n̂ × S_{ik}[j]].   (3.8)
The auRCSIE system is uniquely solvable for ω > 0; it is also uniquely solvable at ω = 0 in simply connected geometries. The reader may have noted that no continuity condition was imposed on the unknowns j and σ. It was shown in [5] that the scattered field (3.1) is Maxwellian so long as the right hand side corresponds to a valid incoming Maxwellian electromagnetic field.
Decoupled Potential Integral Equation
Instead of solving for the physical quantities, current and charge, one can instead indirectly solve for the vector and scalar potentials themselves. Such an approach leads to the decoupled potential integral equation (DPIE), originally introduced in [3] to address the ubiquitous problem of topological low-frequency breakdown endemic in almost all integral formulations for electromagnetic scattering. The DPIE approach is based on considering two uncoupled boundary value problems: one for the scalar potential, and one for the vector potential. Trivially, both potentials satisfy the homogeneous Helmholtz equation (due to the choice of Lorenz gauge). For the scalar problem, consider the boundary value problem:
Δφ^scat + k² φ^scat = 0,
φ^scat|_{Γ_i} − v_i = −φ^in|_{Γ_i},
∫_{Γ_i} ∂φ^scat/∂n dA = −∫_{Γ_i} ∂φ^in/∂n dA,   (4.1)

where v_i is an unknown constant (voltage) assigned to component Ω_i. And similarly, for the vector potential, consider the boundary value problem:
ΔA^scat + k² A^scat = 0,
n̂ × A^scat|_Γ = −n̂ × A^in|_Γ,
∇ · A^scat|_{Γ_i} − q_i = −∇ · A^in|_{Γ_i},
∫_{Γ_i} n̂ · A^scat dA = −∫_{Γ_i} n̂ · A^in dA,   (4.2)

where, as above, q_i is an unknown constant assigned to component Ω_i. See [3] for a thorough discussion of the role that these constants play in the representation of the fields. Each of these boundary value problems can be solved by means of a second-kind integral equation using the following representations for the scattered scalar and vector potentials:
where, as above, is an unknown constant assigned to component . See [3] for a thorough discussion of the role that these constants play in the representation of the fields. Each of these boundary value problems can be solved by means of a second-kind integral equation using the following representations for the scattered scalar and vector potentials:
scat ( ) = [ ] ( ) − [ ] ( ), (4.3) scat ( ) = ∇ × [a] ( ) − [ ] ( ) + [ × a] ( ) + ∇ [ ] ( ) ,(4.4)
where we require that α > 0 (but α can otherwise be chosen freely), and where
D_k[σ](x) = ∫_Γ (∂g_k/∂n(y))(x − y) σ(y) dA(y)   (4.5)
is the double layer potential. Imposing the boundary conditions above, and using the fact that ∫_Γ D′_0[σ] dA = 0 (see [3], eq. (A.11)), we obtain the following system of equations for the unknowns σ, a, ρ, the v_i's, and the q_i's, for ℓ = 1, 2, . . . , M:

σ/2 + D_k[σ] − iα S_k[σ] − Σ_{i=1}^{M} v_i χ_i = −φ^in|_Γ,
∫_{Γ_ℓ} [ (D′_k − D′_0)[σ] + iα ( σ/2 − S′_k[σ] ) ] dA = −∫_{Γ_ℓ} ∂φ^in/∂n dA,   (4.6)

(1/2) 𝐚 + L 𝐚 + iα R 𝐚 + Σ_{i=1}^{M} q_i (0, χ_i) = ( −n̂ × A^in|_Γ , −∇ · A^in|_Γ ),   (4.7)

∫_{Γ_ℓ} [ −n̂ · S_k[n̂ρ] + iα n̂ · S_k[n̂ × a] + iα ( −ρ/2 + S′_k[ρ] ) ] dA = −∫_{Γ_ℓ} n̂ · A^in dA,   (4.8)
where χ_i is the indicator function such that χ_i(x) = 1 if x ∈ Γ_i and 0 otherwise, and 𝐚 = (a, ρ). The matrix integral operators L and R above are defined by:
L 𝐚 = ( L₁₁[a] + L₁₂[ρ] , L₂₁[a] + L₂₂[ρ] ),   R 𝐚 = ( R₁₁[a] + R₁₂[ρ] , R₂₁[a] + R₂₂[ρ] ),   (4.9)
where

L₁₁[a] = n̂ × ∇ × S_k[a],   L₁₂[ρ] = −n̂ × S_k[n̂ρ],   (4.10)
L₂₁[a] = 0,   L₂₂[ρ] = D_k[ρ],   (4.11)

and

R₁₁[a] = n̂ × S_k[n̂ × a],   R₁₂[ρ] = n̂ × ∇S_k[ρ],
R₂₁[a] = ∇ · S_k[n̂ × a],   R₂₂[ρ] = −k² S_k[ρ].   (4.12)
The vector integral equation above in (4.7) and (4.8) is not, strictly speaking, a Fredholm equation of the second kind, since R₁₂ and R₂₁ are bounded but not compact operators. Nevertheless, we will show that it has similar properties. The formulation is resonance free and stable at arbitrarily low frequencies for geometries of any genus (see [3] for further detail). The original formulation in [3] contains an additional regularizing operator that we have omitted here for simplicity. Stability does not appear to be compromised in our experiments. The coefficient α is included above to avoid spurious resonances; we typically set α = 1, but for complicated geometries, it may be possible to optimize the choice in order to reduce the total number of iterations. Remark 1. If α = 0, we will refer to the resulting (simpler) integral equation as the resonant DPIE (rDPIE). The spurious resonances are actually the same as those for the MFIE.
Properties of various integral formulations
We summarize the expected properties (based on a mathematical analysis) of the various formulations in the table below. We further describe some of the items in the left-hand column of Table 1:
• A spurious resonance is a frequency where the integral equation is not invertible but the scattering problem is itself well-posed.
• High-density mesh breakdown refers to a significant growth in the numerical condition number of the finite-dimensional linear system to be solved under mesh refinement. Some integral equations are Fredholm equations of the second kind which, in the absence of spurious resonances, have bounded condition numbers independent of the number of degrees of freedom.
• Catastrophic cancellation in scat , scat refers to a loss of precision in computing the scattered fields of interest once the integral equation has been solved (see section 7).
• Second kind integral equations and equations whose system matrices are of the form I + K, where K is the discretization of a bounded operator, tend to converge rapidly using GMRES or BiCGSTAB as an iterative method.
• An equation is stable at low frequency if the condition number does not grow as the frequency tends to zero. This can be the case for surfaces without holes (of genus zero) or more generally (for surfaces of arbitrary genus).
Following a description of discretization schemes, subsequent sections of the paper provide numerical evidence that the properties summarized above have practical consequences.
Surface representation, discretization, and quadrature
Given a flat triangulated surface, it is standard to discretize the EFIE, MFIE, or CFIE using edge-based RWG basis functions [1] in a Galerkin framework; this corresponds to linear current profiles on each triangle. Since the formulation is standard, we will not describe it in further detail. We will also investigate the performance of higher-order non-Galerkin discretizations. In this case, we must also assume that the surface $\Gamma$ is described as a set of triangular patches $\Gamma = \cup_{j=1}^{N_{\rm patches}}\Gamma_j$, where $N_{\rm patches}$ is the number of curved triangular patches. For each $j$, we assume there exists a known parameterization $\mathbf X_j$ such that
$$\mathbf X_j : T \to \Gamma_j \subset \mathbb R^3, \qquad (6.1)$$
where $T$ is the canonical unit triangle:
$$T = \{(u, v) : u \ge 0,\ v \ge 0,\ u + v \le 1\}. \qquad (6.2)$$
Remark 2. Since many computer-aided design systems or meshing algorithms produce only flat triangulations, the surfaces used as examples in this paper are generated using the algorithm of [33]. This results in a surface of the desired form, with the regularity (curvature) of the surface controlled locally, permitting adaptive refinement and resolution of multiscale features.
Given the surface $\Gamma$, described by an atlas of functions $\mathbf X_j$, we also require a suitable set of sampling/quadrature nodes and a suitable representation of smooth functions on each $\Gamma_j$. For this task, we will use Vioreanu-Rokhlin nodes/weights [39] and Koornwinder polynomials as a basis for smooth functions, respectively. The Vioreanu-Rokhlin nodes and weights have been designed so that the quadrature rule
$$\int_T f(u,v)\,du\,dv \approx \sum_{i=1}^{n} w_i\, f(u_i, v_i), \qquad (6.3)$$
with
$$n = (n_{\rm order}+1)(n_{\rm order}+2)/2 \qquad (6.4)$$
nodes exactly integrates all monomials $u^i v^j$ in two variables whose total order satisfies $i + j \le n_{\rm order}$. Furthermore, the Koornwinder polynomials $K_{n,m}$ on the standard triangle are given explicitly by
$$K_{n,m}(u,v) = (1-v)^m\, P^{(0,2m+1)}_{n-m}(1-2v)\, P^{(0,0)}_m\!\left(\frac{2u}{1-v} - 1\right), \qquad (6.5)$$
with $n = 0, 1, 2, \ldots$ and $m = 0, 1, \ldots, n$. Here $P^{(a,b)}_n$ for $n \in \mathbb N$ are the standard Jacobi polynomials, which are orthogonal with respect to the weight function $(1-x)^a(1+x)^b$ on the interval $[-1,1]$ [40]. This is an orthogonal basis that comes equipped with fast and stable recurrence formulas for its evaluation. Moreover, the mapping from samples of functions at Vioreanu-Rokhlin nodes to the corresponding coefficients in the Koornwinder basis is well-conditioned and straightforward to generate. We refer the reader to [39,41,42] for further details. Once the Koornwinder expansion of a function is available, it is a simple matter of evaluation to interpolate that function with high-order accuracy to any other point on the triangle.
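To make the conventions above concrete, the following minimal Python sketch (illustrative only; the paper's implementation is in Fortran) evaluates the Koornwinder basis as reconstructed in (6.5) via SciPy's Jacobi polynomials and checks orthogonality on the unit triangle with a plain tensor Gauss-Legendre rule under a Duffy-type map. The quadrature choice here is an assumption and is not the Vioreanu-Rokhlin rule used in the paper.

```python
import numpy as np
from scipy.special import eval_jacobi

def koornwinder(n, m, u, v):
    """K_{n,m}(u,v) = (1-v)^m P_{n-m}^{(0,2m+1)}(1-2v) P_m^{(0,0)}(2u/(1-v)-1)."""
    w = np.maximum(1.0 - v, 1e-14)  # guard the v -> 1 corner of the triangle
    return (1.0 - v)**m * eval_jacobi(n - m, 0, 2*m + 1, 1.0 - 2.0*v) \
                        * eval_jacobi(m, 0, 0, 2.0*u/w - 1.0)

# 20-point Gauss-Legendre rule on [0,1], tensorized and mapped to T via
# (u, v) = (x(1 - y), y), whose Jacobian is (1 - y).
x, wx = np.polynomial.legendre.leggauss(20)
x, wx = 0.5*(x + 1.0), 0.5*wx
X, Y = np.meshgrid(x, x, indexing="ij")
U, V, W = X*(1.0 - Y), Y, np.outer(wx, wx)*(1.0 - Y)

def inner(n1, m1, n2, m2):
    return np.sum(W*koornwinder(n1, m1, U, V)*koornwinder(n2, m2, U, V))

print(inner(3, 1, 3, 1))  # nonzero squared norm (the basis is not normalized)
print(inner(3, 1, 2, 0))  # ~ 1e-17: distinct basis functions are orthogonal
print(inner(4, 2, 3, 2))  # ~ 1e-17
```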
Additionally, we also require a basis in which to describe tangential vector fields along each patch. To this end, we construct two sets of vector-valued basis functions on each patch $\Gamma_j$ as follows. We first set
$$\boldsymbol\tau_1(u,v) = \frac{\partial\mathbf X_j}{\partial u}, \qquad \boldsymbol\tau_2(u,v) = \frac{\partial\mathbf X_j}{\partial v}, \qquad \mathbf n = \boldsymbol\tau_1\times\boldsymbol\tau_2, \qquad (6.6)$$
and
$$\hat{\boldsymbol\tau}_1(u,v) = \frac{\boldsymbol\tau_1(u,v)}{|\boldsymbol\tau_1(u,v)|}, \qquad \hat{\mathbf n}(u,v) = \frac{\mathbf n(u,v)}{|\mathbf n(u,v)|}, \qquad \hat{\boldsymbol\tau}_2(u,v) = \hat{\mathbf n}(u,v)\times\hat{\boldsymbol\tau}_1(u,v), \qquad (6.7)$$
so that $\hat{\boldsymbol\tau}_1$, $\hat{\boldsymbol\tau}_2$, $\hat{\mathbf n}$ form a pointwise orthonormal set of coordinates along $\Gamma_j$. Then, we set
$$\mathbf U_{n,m}(u,v) = K_{n,m}(u,v)\,\hat{\boldsymbol\tau}_1(u,v), \qquad \mathbf V_{n,m}(u,v) = K_{n,m}(u,v)\,\hat{\boldsymbol\tau}_2(u,v). \qquad (6.8)$$
These basis functions are furthermore orthonormal in the sense that
$$\int_T \mathbf U_{n,m}\cdot\mathbf U_{n',m'} = \delta_{n,n'}\delta_{m,m'}, \qquad \int_T \mathbf V_{n,m}\cdot\mathbf V_{n',m'} = \delta_{n,n'}\delta_{m,m'}, \qquad \int_T \mathbf U_{n,m}\cdot\mathbf V_{n',m'} = 0. \qquad (6.9)$$
In our method, tangential vector fields are represented at each Vioreanu-Rokhlin node $(u_i, v_i)$ via an expansion in the two sets of basis functions $\{\mathbf U_{n,m}, \mathbf V_{n,m}\}$ (and evaluated at additional points on $\Gamma_j$ as needed using the Koornwinder basis).
Remark 3. The reader may have noted that the basis functions used to discretize tangential vector fields, such as the electric current, do not correspond to a div-conforming discretization. Indeed, no continuity of any kind is enforced between adjacent triangles. This makes discretization very straightforward, as it can be done independently for each triangular patch. As we will see in the numerical examples, this choice does not introduce any artifacts, even at second-order accuracy. The robustness of the method is due to the accuracy of the integration method described below and a fundamental fact about second-kind integral equations: when using Nyström discretizations, the order of accuracy of the method is equal to the order of accuracy of the underlying quadrature scheme [43,44]. A Nyström method is one in which the integral equation is converted to a finite-dimensional linear system by merely sampling the kernel and the unknown at a collection of quadrature nodes and approximating the integral operator by a quadrature rule over those same nodes [45].
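The quoted fact about Nyström discretizations can be seen in miniature in the following self-contained Python sketch (an illustrative toy under stated assumptions, not the paper's Maxwell solver): the 2D interior Laplace Dirichlet problem on an ellipse is solved with a double-layer potential and the periodic trapezoid rule, and both the spectral accuracy of the quadrature and the bounded condition number of the second-kind system carry over to the discrete linear system.

```python
import numpy as np

a, b = 1.0, 0.5                                      # ellipse semi-axes (assumed)
x0 = np.array([3.0, 1.0])                            # exterior source point (assumed)
u_exact = lambda p: -np.log(np.linalg.norm(p - x0, axis=-1))/(2*np.pi)

def geometry(n):
    t = 2*np.pi*np.arange(n)/n
    y  = np.stack([ a*np.cos(t), b*np.sin(t)], axis=1)
    dy = np.stack([-a*np.sin(t), b*np.cos(t)], axis=1)
    sp = np.linalg.norm(dy, axis=1)                  # speed |y'(t)|
    nrm = np.stack([dy[:, 1], -dy[:, 0]], axis=1)/sp[:, None]  # outward normal
    return t, y, sp, nrm

def dlp_kernel(x, y, ny):
    # Double-layer kernel K(x, y) = (x - y).n_y / (2 pi |x - y|^2).
    d = x[:, None, :] - y[None, :, :]
    r2 = np.maximum(np.sum(d*d, axis=2), 1e-300)     # guard the diagonal
    return np.sum(d*ny[None, :, :], axis=2)/(2*np.pi*r2)

for n in (20, 40, 80, 160):
    t, y, sp, nrm = geometry(n)
    K = dlp_kernel(y, y, nrm)
    q = a**2*np.sin(t)**2 + b**2*np.cos(t)**2
    np.fill_diagonal(K, -a*b/(4*np.pi*q**1.5))       # smooth limit: -kappa/(4 pi)
    A = K*(2*np.pi*sp/n)[None, :] - 0.5*np.eye(n)    # interior trace: D - I/2
    sigma = np.linalg.solve(A, u_exact(y))           # Nystrom: sample and solve
    xt = np.array([[0.2, 0.1]])                      # interior test point
    u_num = dlp_kernel(xt, y, nrm)[0] @ (sigma*2*np.pi*sp/n)
    print(n, abs(u_num - u_exact(xt[0])), round(np.linalg.cond(A), 1))
```

The error decays at the rate of the (here spectrally accurate) quadrature while the condition number of the second-kind matrix stays bounded as the mesh is refined, mirroring the behavior discussed for the Maxwell formulations.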
Near and far field quadrature
Since the integral operators appearing in all of our representations are non-local, it is convenient to make use of a quadrature scheme that exploits the smoothness of the integrand when the integral is taken over some triangle $\Gamma_j$ (which we will call the source triangle) and the target point is on triangle $\Gamma_i$, with $i \ne j$. Following the discussion in [42], we distinguish the self-interaction (when $i = j$), the near field (when $i \ne j$ but the triangles are adjacent or nearby), and the far field (when $i \ne j$ and the triangles are far apart). For the far field interactions, we use the Vioreanu-Rokhlin quadrature described above (suitable for smooth functions), for which the fast multipole method (FMM) [6,7] can be applied directly to the discrete sum to accelerate the computation. The self-interaction is computed by a specialized high-order quadrature rule due to Bremer and Gimbutas [46]. The near interactions correspond to an integrand which is formally smooth but very sharply peaked at the target. These are, in some sense, the most cumbersome integrals to evaluate. For these, we rely on the method introduced in [42], which uses adaptive quadrature on the source triangle $\Gamma_j$ with a carefully precomputed multiscale hierarchy of interpolants for the underlying density to reduce the cost. (Although the cost is linear in the total number of degrees of freedom, the accurate evaluation of the near field quadratures is the most expensive step in quadrature generation.)
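A hypothetical sketch of the bookkeeping behind this splitting is given below; the distance threshold and the use of bounding-sphere radii are illustrative assumptions, not the criteria used in the paper's code.

```python
# Classify source/target patch pairs into self, near, and far interactions by
# comparing centroid distances with the patch bounding radii.
import numpy as np

def classify_interactions(centroids, radii, factor=2.0):
    """centroids: (N, 3) patch centroids; radii: (N,) bounding-sphere radii."""
    dist = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=2)
    near = dist <= factor*(radii[:, None] + radii[None, :])
    self_ = np.eye(len(radii), dtype=bool)
    return self_, near & ~self_, ~near     # self, near-field, far-field masks

# Example: three patches on a line; the distant pair lands in the far field.
c = np.array([[0.0, 0, 0], [0.3, 0, 0], [3.0, 0, 0]])
r = np.array([0.2, 0.2, 0.2])
s, nr, fr = classify_interactions(c, r)
print(nr.astype(int)); print(fr.astype(int))
```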
Error Estimation
The use of orthogonal basis functions to represent the source densities on each triangle has an additional advantage beyond high-order accuracy itself. Namely, these representations can be used for a posteriori error estimation and as a monitor for identifying regions which need further geometric refinement. The procedure is straightforward: from the samples of the unknown densities on each patch, we obtain the coefficients of the corresponding function approximation in the Koornwinder basis. This basis has the property that a well-resolved function has a rapid decay of its Koornwinder coefficients. A basic heuristic for the local error is to simply examine the norm of the highest-order basis functions. More precisely, let us first consider a scalar quantity, such as the induced charge $\sigma$ on $\Gamma_j$. From the discussion above, using the Nyström method, after solving our integral equation we have the discrete values $\sigma(\mathbf X_j(u_i, v_i)) = \sigma^j_i$ at the Vioreanu-Rokhlin nodes. Let us denote the corresponding Koornwinder approximation by
$$\sigma(\mathbf X_j(u,v)) \approx \sum_{n=0}^{n_{\rm order}}\sum_{m=0}^{n} c^j_{n,m}\, K_{n,m}(u,v),$$
and define the following function as our error monitor on this patch:
$$\sigma^{\rm tail}_j(\mathbf X_j(u,v)) = \sum_{m=0}^{n_{\rm order}} c^j_{n_{\rm order},m}\, K_{n_{\rm order},m}(u,v),$$
which serves as a triangle-by-triangle error estimate. The global absolute and relative errors can then be estimated as
$$\|\sigma_{\rm error}\|_2 \approx \sqrt{\int_\Gamma |\sigma^{\rm tail}(\mathbf r)|^2\,dA} \qquad (6.13)$$
and
$$\frac{\|\sigma_{\rm error}\|_2}{\|\sigma\|_2} \approx \sqrt{\frac{\int_\Gamma |\sigma^{\rm tail}(\mathbf r)|^2\,dA}{\int_\Gamma |\sigma(\mathbf r)|^2\,dA}}, \qquad (6.14)$$
respectively. As we will see below, these error estimates are approximately one order of magnitude larger than the error in the radiated field. This is not surprising, since the field quantity is obtained from the density through the process of integration. Note that the $\ell^2$ norm of the sequence of patchwise estimates $\{\|\sigma^{\rm tail}_j\|_2\}_{j=1}^{N_{\rm patches}}$ equals $\|\sigma^{\rm tail}\|_2$. Plotting the corresponding piecewise constant function on the triangulated surface helps visualize regions with large errors and identifies triangles which require local refinement if the obtained accuracy is not sufficient. The error estimation is analogous for vector densities, such as the electric current.
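The following sketch shows how such a tail-based estimate might be computed on a single patch; it reuses the `koornwinder` routine from the earlier sketch, and the quadrature nodes and weights passed in are assumed (any accurate rule on the unit triangle), not the Vioreanu-Rokhlin rule of the paper.

```python
import numpy as np

def tail_error_estimate(sigma_nodes, uq, vq, wq, n_order):
    # Values -> Koornwinder coefficients via sqrt-weighted least squares
    # (an L^2 projection for unisolvent nodes), then measure the tail block.
    basis = [(n, m) for n in range(n_order + 1) for m in range(n + 1)]
    Phi = np.stack([koornwinder(n, m, uq, vq) for n, m in basis], axis=1)
    s = np.sqrt(wq)
    coef = np.linalg.lstsq(s[:, None]*Phi, s*sigma_nodes, rcond=None)[0]
    norms = np.sqrt(wq @ (Phi*Phi))         # L^2 norms of the (unnormalized) basis
    c = coef*norms                          # coefficients in a normalized basis
    tail = np.array([n == n_order for n, m in basis])
    e_tail, e_all = np.sqrt(np.sum(c[tail]**2)), np.sqrt(np.sum(c**2))
    return e_tail, e_tail/max(e_all, 1e-300)   # cf. (6.13) and (6.14)
```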
Far Field estimation
The far field induced by a given electric or magnetic current can be computed from the Fourier transform of the currents themselves (see [47]). In some of our formulations, such as the auRCSIE and the DPIE, the unknowns are non-physical quantities. One could develop expressions for the far field in terms of these unknowns using standard parallel-ray approximations. This approach, however, has some disadvantages that we will discuss later. A second option is to use a spherical proxy surface that contains the full scatterer and first compute the corresponding electric and magnetic fields on that sphere. The principle of equivalent currents can then be used to compute the field at any point in the far field (or the far field pattern itself). This latter method has some stability advantages, and is worth describing in more detail.
For known electric and magnetic currents along the proxy sphere $S_0$ of radius $R_0$ (with $R_0$ sufficiently large so as to enclose the scatterer), the far field pattern is given by:
$$E_\theta(\hat{\mathbf x}) = -\frac{ik}{4\pi}\big(\eta\,N_\theta(\hat{\mathbf x}) + L_\varphi(\hat{\mathbf x})\big), \qquad E_\varphi(\hat{\mathbf x}) = -\frac{ik}{4\pi}\big(\eta\,N_\varphi(\hat{\mathbf x}) - L_\theta(\hat{\mathbf x})\big),$$
$$H_\varphi(\hat{\mathbf x}) = \frac{1}{\eta}\,E_\theta(\hat{\mathbf x}), \qquad H_\theta(\hat{\mathbf x}) = -\frac{1}{\eta}\,E_\varphi(\hat{\mathbf x}), \qquad (7.1)$$
where $\eta$ is the impedance of free space and
$$\mathbf N(\hat{\mathbf x}) = \int_{S_0} \mathbf J(\mathbf r)\, e^{-ik\hat{\mathbf x}\cdot\mathbf r}\,dA(\mathbf r), \qquad \mathbf L(\hat{\mathbf x}) = \int_{S_0} \mathbf M(\mathbf r)\, e^{-ik\hat{\mathbf x}\cdot\mathbf r}\,dA(\mathbf r). \qquad (7.2)$$
The relevant currents can be computed on $S_0$ from the scattered fields $\mathbf E$, $\mathbf H$ (using the FMM for efficiency) according to the equivalent current principle:
$$\mathbf J = \hat{\mathbf n}\times\mathbf H, \qquad \mathbf M = -\hat{\mathbf n}\times\mathbf E. \qquad (7.3)$$
Here $\hat{\mathbf n}$ is the outward unit normal to the sphere $S_0$. If the scatterer is electrically large, the projection integrals in (7.2) are expensive to evaluate naively by direct quadrature over a sufficiently fine mesh on $S_0$. In that case, the fast Fourier transform (FFT) or its non-uniform variant (NUFFT) can be used to accelerate the calculation [48-50]. Unfortunately, the expressions in (7.2) are unstable at low frequency and subject to catastrophic cancellation. This problem is discussed in [4] and stems from the fact that the magnitude of the far field is $O(k)$ while the integrand is $O(1)$. The stabilization introduced in [4] is based on introducing equivalent electric and magnetic charges. These equivalent charges can easily be obtained from the normal components of the fields $\mathbf E$, $\mathbf H$ on the spherical proxy surface. Numerically stable (and exact) expressions for $\mathbf N$ and $\mathbf L$ are then given by
$$\mathbf N(\hat{\mathbf x}) = \int_{S_0}\Big(\mathbf J(\mathbf r)\big(e^{-ik\hat{\mathbf x}\cdot\mathbf r} - 1\big) - ik\,\mathbf r\,\varrho(\mathbf r)\Big)\,dA(\mathbf r), \qquad \mathbf L(\hat{\mathbf x}) = \int_{S_0}\Big(\mathbf M(\mathbf r)\big(e^{-ik\hat{\mathbf x}\cdot\mathbf r} - 1\big) - ik\,\mathbf r\,\varrho_m(\mathbf r)\Big)\,dA(\mathbf r), \qquad (7.4)$$
where
$$\varrho = \hat{\mathbf n}\cdot\mathbf E, \qquad \varrho_m = \hat{\mathbf n}\cdot\mathbf H. \qquad (7.5)$$
Note that the term $(e^{-ik\hat{\mathbf x}\cdot\mathbf r} - 1)$ is also of order $O(k)$ and can be evaluated without catastrophic cancellation as
$$e^{-ik\hat{\mathbf x}\cdot\mathbf r} - 1 = -2i\, e^{-ik\hat{\mathbf x}\cdot\mathbf r/2}\, \sin\!\left(\frac{k\,\hat{\mathbf x}\cdot\mathbf r}{2}\right). \qquad (7.6)$$
In short, the expressions in (7.2) are slightly more accurate at high frequencies, while the expressions in (7.4) are significantly more accurate and stable at low frequencies. Thus, we recommend the use of (7.2) for scatterers that are larger than 0.5 wavelengths in size and (7.4) otherwise.
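The cancellation and its cure are easy to reproduce numerically; the following sketch (illustrative, not taken from the paper's code) compares the naive and half-angle evaluations of the factor in (7.6) for a deeply subwavelength argument.

```python
import numpy as np

theta = 1e-8                                    # theta = k * (xhat . r), assumed tiny
naive  = np.exp(-1j*theta) - 1.0
stable = -2j*np.exp(-1j*theta/2)*np.sin(theta/2)
ref = -1j*theta - theta**2/2 + 1j*theta**3/6    # Taylor reference, ample accuracy here
print(abs(naive - ref)/abs(ref))    # ~ 5e-9: roughly half the digits are lost
print(abs(stable - ref)/abs(ref))   # ~ 1e-16: full double precision retained
```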
Numerical Examples
In this section, we illustrate the behavior of the integral representations and discretization methods discussed in the preceding sections. For Sections 8.1 and 8.2, the scatterer is either a sphere of radius $R = 1$ m or a smooth version of a rectangular torus; see Figure 1. The toroidal geometry was obtained via the surface smoothing algorithm of [33] applied to a rectangular torus defined as the union of rectangular faces parallel to the coordinate axes. The code was implemented in Fortran and compiled using the GNU Fortran 11.2.0 compiler. We use the fast multipole method implementation from the FMM3D package,⁵ the high-order local quadrature corrections from the fmm3dbie package,⁶ and the high-order mesh generation code from the surface-smoother package.⁷
In each of the examples, unless stated otherwise, the surface is represented using flat triangles and the integral equations are discretized using a Galerkin approach with the Rao-Wilton-Glisson basis and test functions [1,51] for the EFIE, MFIE, and CFIE. On the other hand, for the NRCCIE and DPIE, the surface is represented using a collection of high-order curvilinear triangles, and the integral equations are discretized using a Nyström approach with locally-corrected quadratures.
Accuracy
To test the accuracy of the solvers, we generate an exact solution to the boundary value problem (i.e. the scattering problem) and validate our numerical approximation. Suppose that the boundary data $\mathbf E^{\rm in}$, $\mathbf H^{\rm in}$ is generated using a magnetic dipole located in the interior of the perfect electric conductor. Then, the solution in the exterior of the conductor is given exactly by the field due to the magnetic dipole. Let $\epsilon_E$ and $\epsilon_H$ denote the relative $L^2$ errors in the electric and magnetic fields at a collection of targets in the exterior region, and let $\epsilon = \max(\epsilon_E, \epsilon_H)$. When the conductor is a sphere, we test the accuracy of the computed scattered fields generated by an incident plane wave, where the exact solution in the exterior is given by the Mie series. In a slight abuse of notation, we will use $\epsilon$ to denote this error as well.
Convergence
In Figure 2, we plot the error $\epsilon$ corresponding to scattering from a PEC sphere with radius $R = 1$ m and wavenumber $k = 1$ m$^{-1}$ (the diameter of the sphere is $\lambda/\pi$) due to an incoming linearly polarized plane wave for each of the EFIE, MFIE, CFIE, NRCCIE, and DPIE; results for the EFIE, MFIE, and CFIE are reported using RWG basis functions, and results for the NRCCIE and DPIE are reported for discretization orders $n_{\rm order} = 2, 4, 6, 8$. The errors decrease at the expected rate of $O(h)$ for the EFIE, MFIE, and CFIE, and at the expected rate of $O(h^{n_{\rm order}+1})$ for the NRCCIE and DPIE. Here $h$ is the diameter of a typical triangle in the discretization.
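Rates such as these are typically verified by a log-log fit of error against mesh size; a minimal sketch follows, with synthetic placeholder data (not measurements from the paper).

```python
import numpy as np

def empirical_order(h, err):
    # Fit log(err) = p*log(h) + c; the slope p is the observed convergence order.
    slope, _ = np.polyfit(np.log(h), np.log(err), 1)
    return slope

h = np.array([0.4, 0.2, 0.1, 0.05])
err = 3.0*h**5                       # mimics n_order = 4, i.e. O(h^5) behavior
print(empirical_order(h, err))       # ~ 5.0
```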
Absence of spurious resonances
The exterior scattering problem has a unique solution for any real wavenumber $k$. However, it is well known that the MFIE has spurious resonances, i.e. wavenumbers for which the integral equation is not invertible. On the sphere, these spurious resonances can be computed analytically. To demonstrate the absence of spurious resonances for the NRCCIE and DPIE, we plot the condition number of the discretized integral equations as a function of $k$ in Figure 3. All of the integral equations were discretized using 192 patches and $n_{\rm order} = 2$. The interval $k \in [1.9, 3.5]$ m$^{-1}$ contains one spurious resonant wavenumber of the MFIE on the sphere of radius $R = 1$ m. To further confirm the presence of the spurious resonance, we also plot the condition number for the MFIE using 768 patches and observe that the condition number of the resulting system increases as we obtain a more accurate discretization of the integral equation at the spurious resonance, while there is very little impact on the condition number at the other wavenumbers. When computing the condition numbers of the discretized linear systems, we scale both the unknowns and the boundary data using the square root of the smooth quadrature weights to obtain a better approximation of the integral equation in an $L^2$ sense [52].
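The weight-based rescaling mentioned above amounts to a diagonal similarity transform of the system matrix; a minimal sketch, assuming a generic Nyström matrix A and positive node weights w, is given below.

```python
import numpy as np

def l2_scaled_cond(A, w):
    """A: Nystrom system matrix; w: positive quadrature weights per node."""
    s = np.sqrt(w)
    # diag(s) A diag(1/s): same eigenvalues as A, but the discrete 2-norm of
    # the scaled unknowns/data now mimics the L^2(Gamma) norm.
    B = (s[:, None]*A)/s[None, :]
    return np.linalg.cond(B)
```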
Static limit
In the static limit, the boundary value problems for the electric and magnetic fields completely decouple. The fields computed at finite, but small, wavenumbers converge to the solutions of the boundary value problems for the electrostatic and magnetostatic fields. Since there exists a stable limit for the underlying system of differential equations, it is a desirable feature that the integral equations remain stable in the static limit as well. Integral equation methods tend to have two kinds of failure modes in the static limit: (1) deterioration in the accuracy of the computed solution using a fixed discretization which resolves both the geometry and the boundary data as $k \to 0$; and (2) failure to converge at the expected rate upon mesh refinement for a fixed, but small, $k$. In Figure 4, we plot the error $\epsilon$ as a function of $k$, with $k \in [10^{-10}, 10^{-1}]$ m$^{-1}$, for the MFIE, EFIE, CFIE, NRCCIE, and DPIE. All of the integral equations were discretized using 192 patches; for the NRCCIE and DPIE we use an $n_{\rm order} = 2$ discretization. We note that the CFIE, NRCCIE, and DPIE show no deterioration in accuracy in the limit $k \to 0$; however, for the EFIE, the error increases to $O(1)$ as we decrease $k$. For the MFIE, the error increases like $O(1/k)$ as $k \to 0$.
The nature of the limiting static equations depends on the genus of the conductor and the number of connected components. Thus, the stability of the integral equation may be a function of the topology of the conductor. In Figure 4, we compare the convergence rates for the CFIE, NRCCIE, and DPIE on the smooth torus as we refine the mesh for $k = 1$ m$^{-1}$ and $k = 10^{-10}$ m$^{-1}$. The error in the computed fields converges at the expected rate for all the integral formulations when $k = 1$ m$^{-1}$. On the other hand, for $k = 10^{-10}$ m$^{-1}$, the error in the fields computed via the DPIE continues to converge at the expected rate, while the accuracy deteriorates upon mesh refinement for the CFIE and NRCCIE.
Remark 4. For conductors whose dimensions are extremely small compared to the wavelength of the incident field, one could in principle use the solution to the static problems (possibly including corrections to the Green's function up to $O(k)$) in order to obtain high-fidelity approximations of the corresponding low-frequency solutions. Such approximations are widely used in practice; see [53,54], for example.
A posteriori error estimation
For high-order discretizations, the tail of the Koornwinder expansion on each patch can be used as an estimate for the error in the solution computed via integral equations. Following the discussion in Section 6.2, let $\mathbf J^{\rm tail}$ denote the tail of the Koornwinder expansion of the current $\mathbf J$ computed using the NRCCIE, and consider the following monitor function on patch $\Gamma_j$:
$$E_j = \|\mathbf J^{\rm tail}\|_{L^2(\Gamma_j)}\,/\,\|\mathbf J\|_{L^2(\Gamma)}. \qquad (8.2)$$
The monitor function is piecewise constant on each triangle, is proportional to $\|\mathbf J^{\rm tail}\|_{L^2(\Gamma_j)}$, and its maximum $\epsilon_{\rm tail} = \max_j E_j \approx \|\mathbf J^{\rm tail}\|_2/\|\mathbf J\|_2$ is the expected accuracy in the induced current. Typically, the error obtained with the spectral monitor function is within an order of magnitude of the relative error $\epsilon$ in the computed scattered field, i.e. $0.1 \le \epsilon/\epsilon_{\rm tail} \le 10$.
For the NRCCIE on the sphere with wavenumber $k = 1$ m$^{-1}$, $N_{\rm patches} = 192$, and $n_{\rm order} = 4$, the estimated error from the Koornwinder tails of the current is $\epsilon_{\rm tail} = 1.8\times10^{-4}$, while the error in the field measurements is $\epsilon = 3.2\times10^{-5}$. This behavior is independent of the wavenumber, geometry, order of discretization, and number of patches used to discretize the conductor, and also holds for other high-order discretizations of second-kind integral equations, including the DPIE. Thus, the error monitor function can reliably be used to determine adaptive mesh refinement strategies, and $\epsilon_{\rm tail}$ is a reasonable empirical indicator of the error of the solution on geometries where an analytic solution is not known.
Iterative solver performance
In this section, we compare the performance of the integral equations when coupled to an iterative solver like GMRES. It is a desirable feature for the GMRES residual to decrease at a rate which depends only on the underlying physical problem, e.g. the complexity of the geometry and the boundary data, but is independent of the mesh used to discretize the surface. In Figure 5, we plot the relative GMRES residual as a function of the iteration number for the NRCCIE and EFIE. Both integral equations were discretized with $N_{\rm patches} = 192$ and $N_{\rm patches} = 768$ patches, and second-order patches were used for discretizing the surface in the NRCCIE. The GMRES residual as a function of iteration number is stable under refinement for the NRCCIE, while for the EFIE, the residual decreases at a slower rate upon mesh refinement. This stability in performance for the NRCCIE can be attributed to its second-kind nature, while the increased number of iterations for the EFIE can be attributed to the hypersingular nature of the EFIE operator; this phenomenon is often referred to as high-density mesh breakdown. The iteration count is independent of the discretization order and the number of patches for other second-kind integral equations, such as the DPIE and MFIE, while integral equations with hypersingular kernels like the CFIE also suffer from the high-density mesh breakdown.
Speed
In Table 2, we tabulate the total wall time for computing the solution on the PEC sphere with $k = 1$ m$^{-1}$ for an incoming linearly polarized plane wave using the NRCCIE and DPIE representations with $(n_{\rm order}, \epsilon_{\rm GMRES}) = \{(2, 10^{-7}), (4, 10^{-9}), (6, 10^{-10}), (8, 10^{-13})\}$. The wall time, as expected, scales linearly with $N_{\rm patches}$. For each combination of $(n_{\rm order}, \epsilon_{\rm GMRES})$ we note that the total wall time is the smallest for the NRCCIE, followed by the DPIE. The table suggests that for both contractible and non-contractible geometries, away from the static limit, the NRCCIE would be the optimal choice due to its better CPU time performance, while for non-contractible geometries in the static limit, the DPIE is the optimal choice (since the NRCCIE is unstable in that regime). All CPU timings in the table were obtained on an Intel Xeon Gold 6128 desktop with 24 cores and 192 GB RAM.
Large-scale examples
We next demonstrate the performance of the integral equations on several large-scale examples. We first demonstrate the efficiency of the DPIE on a multi-genus complicated surface in the static limit, followed by a comparison of the CFIE and NRCCIE for computing the far-field pattern from a bent rectangular cavity. Finally, we illustrate the efficacy of NRCCIE for computing the far-field pattern from a multiscale ship geometry.
High-genus object in the static limit
In the following example, we demonstrate the efficacy and stability of computing the far-field pattern from a genus 17 surface (see Figure 6) in the static limit using the DPIE. None of the other integral equations considered in this manuscript are both numerically and mathematically stable in this regime. The incoming field is a plane wave with wavenumber $k = 10^{-10}$ m$^{-1}$. The geometry is contained in a bounding box of size $1.6\times10^{-10}$ wavelengths in each dimension. As noted in Remark 4, one could, in principle, solve a limiting PDE to obtain a high-accuracy approximation of the solution at such low frequencies. However, computing the static solutions requires knowledge of A-cycles and B-cycles on the geometry (i.e. loops through the holes of the surface), which can pose a computational geometry challenge on such complicated high-order surface meshes. The DPIE, on the other hand, can be used directly on the surface triangulation without the need to compute these global loops on the surface. We first compute a reference solution for this geometry where the surface is discretized with $n_{\rm order} = 8$ and $N_{\rm patches} = 3840$. In Figure 6, we plot the induced source on the surface of the conductor using this discretization. In order to estimate the accuracy of the computed solution and demonstrate the efficiency of the error monitor function discussed in Section 8.1.4, we also compute the solution using $n_{\rm order} = 2$ and $N_{\rm patches} = 960$. For this configuration, GMRES required $n_{\rm iter} = 72$ iterations for the vector equation and $n_{\rm iter} = 21$ for the scalar part for the relative residual to drop below $10^{-6}$, and the solution was obtained in 42 s. The tolerance for computing the quadrature corrections was $10^{-4}$. The accuracy of the computed far field pattern (as measured in dB), compared to the far field pattern computed using the reference solution, is $1.5\times10^{-4}$. Another remarkable feature of this calculation is that the DPIE can stably evaluate the far field pattern with values ranging between $[-146, -134]$ dB, which is orders of magnitude smaller than the induced current or the size of the conductor.
Cavity
Next we analyze a cavity in a moderately high frequency regime. The rectangular cavity is open at one end, and is around 16 wavelengths in length along the center line. The closed end of the cavity cannot be seen from the opening; see Figure 7. The incoming field is a linearly polarized plane wave. Due to multiple internal reflections, the physical condition number of the problem is expected to be high, and therefore this problem is a good stress test for high-order methods. The surface of the cavity was discretized using the NRCCIE with $(n_{\rm order}, N_{\rm patches}) = (2, 11392)$, $(4, 2848)$, $(4, 11392)$, and using the CFIE with $N_{\rm patches} = 11392$ and $N_{\rm patches} = 45568$. The reference solution for the far field was computed using the NRCCIE with $N_{\rm patches} = 11392$ patches of order $n_{\rm order} = 8$. The estimated error in the reference solution based on the error monitor function is $\epsilon_{\rm tail} = 6\times10^{-5}$. The dominant contributor to the error in the reference solution was the tolerance used for the fast multipole methods and quadrature corrections, which was set to $5\times10^{-7}$. In Figure 7, we plot the magnitude of the induced current computed using the NRCCIE.
In Table 3, we tabulate the CPU time $t_{\rm sol}$ required to compute the solution and the number of iterations $n_{\rm iter}$ required for the residual to drop below the specified GMRES tolerance $\epsilon_{\rm GMRES} = 10^{-7}$. The precision for computing layer potentials and the FMM was set to $10^{-4}$. We also tabulate the relative $L^2$ error in the far field of the electric field, denoted by $\epsilon_\infty$. In Figure 8, we plot the far field along the directions $\hat{\mathbf x}(\theta) = (\sin\theta, 0, \cos\theta)$ corresponding to the NRCCIE, the CFIE, and the reference solution for $\theta \in [0, 180]$ degrees, where the far field is computed via (7.4).
The table highlights features of the integral equations already observed in the previous sections with respect to the linear scaling of the time required to compute the solution for the NRCCIE, the stability of the number of GMRES iterations required for the NRCCIE, and the growth in the number of iterations required for the CFIE. As can be seen from the plots, the error in the far field measurements corresponding to the CFIE with the fine mesh is still $O(1)$, while the far field computed using the NRCCIE is nearly indistinguishable from the reference solution. The solution obtained using the NRCCIE can be computed around 4 times faster than a less accurate solution computed using the CFIE. The table also demonstrates that it is faster and more accurate to use the NRCCIE with $n_{\rm order} = 4$ and the coarse mesh as opposed to using the NRCCIE with $n_{\rm order} = 2$ and the fine mesh.

Table 3: Total wall time in seconds, iteration count, and error in the far field pattern for the solution on a rectangular cavity of approximately 16 wavelengths along the center line due to an incoming linearly polarized plane wave.
A multiscale ship simulation
Finally, we demonstrate the performance of the NRCCIE on a multiscale ship geometry. The ship is discretized using $N_{\rm patches} = 30752$ patches of order $n_{\rm order} = 4$. The incoming field is a linearly polarized plane wave. Let $R_j$ denote the radius of the smallest bounding sphere containing patch $\Gamma_j$ centered at its centroid. The ratio of the largest to smallest patch size, measured by the enclosing sphere radius, is 21.5. The ship is approximately 42 wavelengths in length, 5.7 in width, and 8.7 in height. The precision for computing the layer potentials and the FMM was set to $10^{-4}$. For this configuration, 289 GMRES iterations were required for the relative residual to drop below $10^{-6}$, and the solution was computed in $9.1\times10^{3}$ s. The estimated error in the solution is $\epsilon_{\rm tail} = 3.8\times10^{-3}$. In Figure 9, we plot the absolute value of the induced current on the surface of the ship.
Conclusions
We have demonstrated the numerical properties of various integral equation methods for solving exterior Maxwell scattering problems, comparing RWG discretizations of the classical formulations (e.g. EFIE, MFIE, CFIE) to high-order Nyström discretizations of more modern integral formulations especially designed to overcome the failure modes of the existing ones (e.g. DPIE, NRCCIE). Furthermore, we have shown that when all aspects of the problem (the geometry, the quadrature, the fast algorithms, etc.) are discretized to high order, high-order accuracy can be achieved at a cost which is less than that required for lower accuracy using standard first-order discretizations. We plan to perform a similar analysis comparing existing integral equation formulations with more modern ones for scattering from piecewise homogeneous dielectric bodies.
Figure 1: A smoothed rectangular torus of genus one.

Figure 2: Relative error in the scattered field of a PEC sphere of radius R = 1 m with an incoming linearly polarized plane wave. We compare the NRCCIE and DPIE integral equations with discretization orders 2, 4, 6, and 8, and the standard CFIE, MFIE, and EFIE integral equations discretized with RWG basis functions.

Figure 3: Condition number of the discretized integral equations for the MFIE, NRCCIE, and DPIE.

Figure 4: (left) Relative error as a function of wavenumber k for the MFIE, EFIE, CFIE, NRCCIE, and DPIE on the unit sphere discretized using N_patches = 192; (right) relative error as a function of the number of patches N_patches for the CFIE, NRCCIE, and DPIE on a smooth torus with k = 1 m⁻¹ and k = 10⁻¹⁰ m⁻¹.

Figure 5: Relative GMRES residual for the NRCCIE and the EFIE.

Figure 6: Induced source on the surface of the genus 17 geometry.

Figure 7: (left) Different views of the cavity; (right) induced current |J| for an incoming plane wave.

Figure 8: Far field produced by the cavity in Figure 7 for an incoming plane wave.

Figure 9: Induced current |J| for an incoming plane wave.
Table 1: Properties of various integral equation formulations

    Property                                  EFIE  MFIE  CFIE  ECCIE  NRCCIE  RCSIE  auRCSIE  DPIE
    Resonance-free                              -     -     ✓     -      ✓       -       ✓      ✓
    No high-density mesh breakdown              -     ✓     -     ✓      ✓       ✓       ✓      ✓
    Second-kind Fredholm equation               -     ✓     -     ✓      ✓       ✓       ✓      -
    Free from catastrophic cancellation
      in E^scat, H^scat                         -     -     -     ✓      ✓       -       ✓      ✓
    Rapid convergence with GMRES, BiCGSTAB      -     ✓     -     ✓      ✓       ✓       ✓      ✓
    Stable at low frequencies (genus zero)      -     -     ✓     ✓      ✓       ✓       ✓      ✓
    Stable at low frequencies (any genus)       -     -     -     -      -       -       -      ✓
Table 2: Total wall time in seconds for the solution on a PEC sphere at k = 1 m⁻¹ due to an incoming linearly polarized plane wave.

                                         N_patches
             n_order   ε_GMRES      192     768    3072   12288
    NRCCIE      2      10⁻⁷        1.43    7.11    20.7    84.9
                4      10⁻⁹        9.68    28.6     143       -
                6      10⁻¹⁰       22.9    83.3       -       -
                8      10⁻¹³       63.2     340       -       -
    DPIE        2      10⁻⁷        3.18    12.5    48.5     199
                4      10⁻⁹        18.9    64.2     296       -
                6      10⁻¹⁰       48.1     179       -       -
                8      10⁻¹³        129     594       -       -
⁵ https://github.com/flatironinstitute/FMM3D
⁶ https://github.com/fastalgorithms/fmm3dbie
⁷ https://github.com/fastalgorithms/surface-smoother
Acknowledgments

The authors would like to thank James Bremer, Charlie Epstein, and Zydrunas Gimbutas for various codes and discussions during the preparation of this work.

References
[1] S. Rao, D. Wilton, and A. Glisson, "Electromagnetic scattering by surfaces of arbitrary shape," IEEE Trans. Antennas Propag., vol. 30, no. 3, pp. 409-418, 1982.
[2] F. Vico, M. Ferrando-Bataller, A. Valero-Nogueira, and A. Berenguer, "A high-order locally corrected Nyström scheme for charge-current integral equations," IEEE Trans. Antennas Propag., vol. 63, no. 4, pp. 1678-1685, 2015.
[3] F. Vico, M. Ferrando, L. Greengard, and Z. Gimbutas, "The decoupled potential integral equation for time-harmonic electromagnetic scattering," Comm. Pure Appl. Math., vol. 69, no. 4, pp. 771-812, 2015.
[4] F. Vico, Z. Gimbutas, L. Greengard, and M. Ferrando-Bataller, "Overcoming low-frequency breakdown of the magnetic field integral equation," IEEE Trans. Antennas Propag., vol. 61, no. 3, pp. 1285-1290, 2013.
[5] F. Vico, L. Greengard, M. Ferrando-Bataller, and E. Antonino-Daviu, "An augmented regularized combined source integral equation for nonconforming meshes," IEEE Trans. Antennas Propag., vol. 67, no. 4, pp. 2513-2521, 2019.
[6] L. Greengard and V. Rokhlin, "A fast algorithm for particle simulations," J. Comput. Phys., vol. 73, no. 2, pp. 325-348, 1987.
[7] L. F. Greengard and J. Huang, "A new version of the fast multipole method for screened Coulomb interactions in three dimensions," J. Comput. Phys., vol. 180, no. 2, pp. 642-658, 2002.
[8] V. Rokhlin, "Diagonal forms of translation operators for the Helmholtz equation in three dimensions," Appl. Comput. Harmonic Anal., vol. 1, pp. 82-93, 1993.
[9] W. Chew, E. Michielssen, J. Song, and J. Jin, Fast and Efficient Algorithms in Computational Electromagnetics. Artech House, Inc., 2001.
[10] H. Cheng, W. Crutchfield, Z. Gimbutas, L. Greengard, J. Ethridge, J. Huang, V. Rokhlin, N. Yarvin, and J. Zhao, "A wideband fast multipole method for the Helmholtz equation in three dimensions," J. Comput. Phys., vol. 216, no. 1, pp. 300-325, 2006.
[11] A. Maue, "On the formulation of a general scattering problem by means of an integral equation," Z. Phys., vol. 126, no. 7, pp. 601-618, 1949.
[12] J. Mautz and R. Harrington, "H-field, E-field, and combined field solutions for bodies of revolution," Syracuse Univ., NY, Dept. of Electrical and Computer Engineering, Tech. Rep., 1977.
[13] W. Wu, A. Glisson, and D. Kajfez, "A study of two numerical solution procedures for the electric field integral equation at low frequency," Appl. Comput. Electromagnetics Soc. J., vol. 10, no. 3, pp. 69-80, 1995.
[14] G. Vecchi, "Loop-star decomposition of basis functions in the discretization of the EFIE," IEEE Trans. Antennas Propag., vol. 47, no. 2, pp. 339-346, 1999.
[15] J. Zhao and W. Chew, "Integral equation solution of Maxwell's equations from zero frequency to microwave frequencies," IEEE Trans. Antennas Propag., vol. 48, no. 10, pp. 1635-1645, 2000.
[16] F. Andriulli, K. Cools, I. Bogaert, and E. Michielssen, "On a well-conditioned electric field integral operator for multiply connected geometries," IEEE Trans. Antennas Propag., vol. 61, no. 4, pp. 2077-2087, 2013.
[17] K. Sertel and J. Volakis, "Incomplete LU preconditioner for FMM implementation," Microwave Opt. Tech. Lett., vol. 26, no. 4, pp. 265-267, 2000.
[18] F. Andriulli, K. Cools, H. Bagci, F. Olyslager, A. Buffa, S. Christiansen, and E. Michielssen, "A multiplicative Calderón preconditioner for the electric field integral equation," IEEE Trans. Antennas Propag., vol. 56, no. 8, pp. 2398-2412, 2008.
[19] F. Andriulli and G. Vecchi, "A Helmholtz-stable fast solution of the electric field integral equation," IEEE Trans. Antennas Propag., vol. 60, no. 5, pp. 2357-2366, 2012.
[20] S. Christiansen and J. Nedelec, "A preconditioner for the electric field integral equation based on Calderón formulas," SIAM J. Numer. Anal., vol. 40, no. 3, pp. 1100-1135, 2002.
[21] H. Contopanagos, B. Dembart, M. Epton, J. Ottusch, V. Rokhlin, J. Visher, and S. Wandzura, "Well-conditioned boundary integral equations for three-dimensional electromagnetic scattering," IEEE Trans. Antennas Propag., vol. 50, no. 12, pp. 1824-1830, 2002.
[22] D. Colton and R. Kress, Inverse Acoustic and Electromagnetic Scattering Theory. Springer, 2013, vol. 93.
[23] S. Borel, D. Levadoux, and F. Alouges, "A new well-conditioned integral formulation for Maxwell equations in three dimensions," IEEE Trans. Antennas Propag., vol. 53, no. 9, pp. 2995-3004, 2005.
[24] O. Bruno, T. Elling, R. Paffenroth, and C. Turc, "Electromagnetic integral equations requiring small numbers of Krylov-subspace iterations," J. Comput. Phys., vol. 228, no. 17, pp. 6169-6183, 2009.
[25] Y. Zhang, T. Cui, W. Chew, and J. Zhao, "Magnetic field integral equation at very low frequencies," IEEE Trans. Antennas Propag., vol. 51, no. 8, pp. 1864-1871, 2003.
[26] M. Taskinen and P. Yla-Oijala, "Current and charge integral equation formulation," IEEE Trans. Antennas Propag., vol. 54, no. 1, pp. 58-67, 2006.
[27] M. Taskinen and D. Vanska, "Current and charge integral equation formulations and Picard's extended Maxwell system," IEEE Trans. Antennas Propag., vol. 55, no. 12, pp. 3495-3503, 2007.
[28] A. Bendali, F. Collino, M. Fares, and B. Steif, "Extension to nonconforming meshes of the combined current and charge integral equation," IEEE Trans. Antennas Propag., vol. 60, no. 10, pp. 4732-4744, Oct. 2012.
[29] C. Epstein and L. Greengard, "Debye sources and the numerical solution of the time harmonic Maxwell equations," Comm. Pure Appl. Math., vol. 63, no. 4, pp. 413-463, 2010.
[30] E. Chernokozhin and A. Boag, "Method of generalized Debye sources for the analysis of electromagnetic scattering by perfectly conducting bodies with piecewise smooth boundaries," IEEE Trans. Antennas Propag., vol. 61, no. 4, pp. 2108-2115, 2013.
[31] Q. Liu, S. Sun, and W. Chew, "A vector potential integral equation method for electromagnetic scattering," in 2015 31st International Review of Progress in Applied Computational Electromagnetics (ACES). IEEE, 2015, pp. 1-2.
[32] J. Li, X. Fu, and B. Shanker, "Decoupled potential integral equations for electromagnetic scattering from dielectric objects," IEEE Trans. Antennas Propag., vol. 67, pp. 1729-1739, 2019.
[33] F. Vico, L. Greengard, M. O'Neil, and M. Rachh, "A fast boundary integral method for high-order multiscale mesh generation," SIAM J. Sci. Comput., vol. 42, pp. A1380-A1401, 2020.
[34] J. D. Jackson, Classical Electrodynamics. New York: John Wiley & Sons, 1975.
[35] C. Papas, Theory of Electromagnetic Wave Propagation. Courier Dover Publications, 1988.
[36] Z. Qian and W. Chew, "Fast full-wave surface integral equation solver for multiscale structure modeling," IEEE Trans. Antennas Propag., vol. 57, no. 11, pp. 3594-3601, 2009.
[37] M. Taskinen, P. Ylä-Oijala, and S. Järvenpää, "Advanced surface integral equation methods in computational electromagnetics," in 2009 International Conference on Electromagnetics in Advanced Applications (ICEAA'09). IEEE, 2009, pp. 369-372.
[38] O. Bruno, T. Elling, and C. Turc, "Well-conditioned high-order algorithms for the solution of three-dimensional surface acoustic scattering problems with Neumann boundary conditions," J. Numer. Meth. Eng., vol. 91, no. 10, 2009.
[39] B. Vioreanu and V. Rokhlin, "Spectra of multiplication operators as a numerical tool," SIAM J. Sci. Comput., vol. 36, no. 1, pp. A267-A288, 2014.
[40] F. W. Olver, D. W. Lozier, R. F. Boisvert, and C. W. Clark, NIST Handbook of Mathematical Functions. Cambridge University Press, 2010.
[41] T. Koornwinder, "Two-variable analogues of the classical orthogonal polynomials," in Theory and Application of Special Functions. Elsevier, 1975, pp. 435-495.
[42] L. Greengard, M. O'Neil, M. Rachh, and F. Vico, "Fast multipole methods for evaluation of layer potentials with locally-corrected quadratures," J. Comput. Phys.: X, vol. 10, p. 100092, 2021.
[43] P. M. Anselone, Collectively Compact Operator Approximation Theory and Applications to Integral Equations. Englewood Cliffs, NJ: Prentice-Hall, 1971.
[44] R. Kress, Linear Integral Equations. New York: Springer, 2014.
[45] K. E. Atkinson, The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, 1997.
[46] J. Bremer and Z. Gimbutas, "On the numerical evaluation of the singular integrals of scattering theory," J. Comput. Phys., vol. 251, pp. 327-343, 2013.
[47] Á. C. Aznar, J. R. Robert, J. M. R. Casals, L. J. Roca, S. B. Boris, and M. F. Bataller, Antenas. Univ. Politèc. de Catalunya, 2004.
[48] A. Boag, "A fast physical optics (FPO) algorithm for high frequency scattering," IEEE Trans. Antennas Propag., vol. 52, no. 1, pp. 197-204, 2004.
[49] L. Greengard and J.-Y. Lee, "Accelerating the non-uniform fast Fourier transform," SIAM Review, vol. 46, no. 3, pp. 443-454, 2004.
[50] J.-Y. Lee and L. Greengard, "The type 3 nonuniform FFT and its applications," J. Comput. Phys., vol. 206, no. 1, pp. 1-5, 2005.
[51] R. E. Hodges and Y. Rahmat-Samii, "The evaluation of MFIE integrals with the use of vector triangle basis functions," Microwave Opt. Tech. Lett., vol. 14, no. 1, pp. 9-14, 1997.
[52] J. Bremer and Z. Gimbutas, "On the numerical evaluation of the singular integrals of scattering theory," J. Comput. Phys., vol. 251, pp. 327-343, 2013.
[53] E. Haber and S. Heldmann, "An octree multigrid method for quasi-static Maxwell's equations with highly discontinuous coefficients," J. Comput. Phys., vol. 223, no. 2, pp. 783-796, 2007.
[54] S. Kapur and D. E. Long, "IES3: A fast integral equation solver for efficient 3-dimensional extraction," in Proceedings of IEEE International Conference on Computer Aided Design, vol. 97, 1997, pp. 448-455.
| [
"https://github.com/flatironinstitute/FMM3D",
"https://github.com/fastalgorithms/fmm3dbie",
"https://github.com/fastalgorithms/surface-smoother"
] |
Relational Parametricity and Separation Logic

Lars Birkedal (birkedal@itu.dk), IT University of Copenhagen, Denmark
Hongseok Yang (hyang@dcs.qmul.ac.uk), Queen Mary, University of London, UK

Logical Methods in Computer Science, vol. 4 (2:6), 2008. Submitted Sep. 21, 2007. doi: 10.2168/LMCS-4(2:6)2008. arXiv: 0805.0783.

Separation logic is a recent extension of Hoare logic for reasoning about programs with references to shared mutable data structures. In this paper, we provide a new interpretation of the logic for a programming language with higher types. Our interpretation is based on Reynolds's relational parametricity, and it provides a formal connection between separation logic and data abstraction.
Introduction
Separation logic [18,13,7] is a Hoare-style program logic, and variants of it have been applied to prove correct interesting pointer algorithms such as copying a dag, disposing a graph, the Schorr-Waite graph algorithm, and Cheney's copying garbage collector. The main advantage of separation logic compared to ordinary Hoare logic is that it facilitates local reasoning, formalized via the so-called frame rule using a connective called separating conjunction. The development of separation logic initially focused on low-level languages with heaps and pointers, although in recent work [14,8] it was shown how to extend separation logic first to languages with a simple kind of procedures [14] and then to languages also with higher-types [8]. Moreover, in [14] a second-order frame rule was proved sound and in [8] a whole range of higher-order frame rules were proved sound for a separation-logic type system.
In [14] and [8] it was explained how second and higher-order frame rules can be used to reason about static imperative modules. The idea is roughly as follows. Suppose that we prove a specification for a client c, depending on a module k,
{P₁} k {Q₁} ⊢ {P} c(k) {Q}.

The proof of the client depends only on the "abstract specification" of the module, which describes the external behavior of k. Suppose further that an actual implementation m of the module satisfies

{P₁ * I} m {Q₁ * I}.

Here I is the internal resource invariant of the module m, describing the internal heap storage used by the module to implement the abstract specification. We can then employ a (higher-order) frame rule on the specification for the client to get

{P₁ * I} k {Q₁ * I} ⊢ {P * I} c(k) {Q * I},

and hence {P * I} c(m) {Q * I}. A key advantage of this approach to modularity is that it facilitates so-called "ownership transfer." For example, if the module is a queue, then the ownership of cells transfers from the client to the module upon insertion into the queue. Moreover, the discipline allows clients to maintain pointers into cells that have changed ownership to the module. See [14] for examples and more explanations of these facts. Note that the higher-order frame rules in essence provide implicit quantification over internal resource invariants. In [5] it is shown how one can employ a higher-order version of separation logic, with explicit quantification of assertion predicates, to reason about dynamic modularity (where there can be several instances of the same abstract data type implemented by an imperative module); see also [15]. The idea is to existentially quantify over the internal resource invariants in a module, so that in the above example, c would depend on a specification for k of the form
∃I. {P 1 * I} k {Q 1 * I}.
As emphasized in the papers mentioned above, note that, both in the case of implicit quantification over internal resource invariants (higher-order frame rules) and in the case of explicit quantification over internal resource invariants (existentials over assertion predicates), reasoning about a client does not depend on the internal resource invariant of possible module implementations. Thus the methodology allows us to formally reason about mutable abstract data types, aka. imperative modules.
However, the semantic models in the papers mentioned above do not allow us to make all the conclusions we would expect from reasoning about mutable abstract data types. In particular, we would expect that clients should behave parametrically in the internal resource invariants: When a client is applied to two different implementations of a mutable abstract data type, it should be the case that the client preserves relations between the internal resource invariants of the two implementations. This is analogous to Reynolds's style relational parametricity for abstract data types with quantification over type variables [17].
To understand this issue more clearly, consider the two implementations of a counter in Figure 1. A counter has three operations: init(i) for initializing the counter, and inc and read for increasing and reading the value of the counter. In the first implementation, init₀(i) takes a heap cell i containing an initial value for the counter, and stores its address i in the internal variable c, thereby setting the value of the counter to the contents of i. The intention is that when a client program calls this initialization routine with cell i, it should transfer the ownership of the cell to the counter; it should not dereference the cell after calling init₀(i). The operation inc₀ increases the value of the transferred cell i, and read₀ returns the value of cell i, by storing it in a pre-determined global variable g. The second implementation is almost identical to the first, except that the value of the counter is negated. Thus, when R is the relation that relates a heap containing cell i and variable c with the same heap with the value of cell i negated, all operations of these two implementations preserve this relation R.
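A rough Python sketch of the two implementations and the relation R follows; it abstracts the paper's heap-based language (the hidden cell is modeled as a mutable record) and anticipates the client program discussed next.

```python
def make_counter(negate):
    state = {"c": None}                       # the module's hidden resource
    def init(i):  state["c"] = -i if negate else i   # ownership transfer of cell i
    def inc():    state["c"] += -1 if negate else 1
    def read():   return -state["c"] if negate else state["c"]
    return init, inc, read, state

def client_body(inc, read):                   # uses only the abstract interface
    inc(); inc(); inc()
    return read()

init0, inc0, read0, s0 = make_counter(False)  # first implementation
init1, inc1, read1, s1 = make_counter(True)   # second (negated) implementation
init0(5); init1(5)
assert client_body(inc0, read0) == client_body(inc1, read1) == 8
assert s0["c"] == -s1["c"]                    # the relation R is preserved
```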
Now suppose that we are given a client program of the form let i=new in i.data:=n; init(i); b(inc, read) whose body b satisfies the following specification in separation logic:
{emp} inc {emp}, {g → -} read {g → -} ⊢ {P } b(inc, read) {Q}
for some P, Q that do not mention cell i. We expect that the body b of the client preserves the relation R between the two implementations, and that the client cannot detect the difference between the two. Our expectation is based on the specification for b, which says that the triple {P} b(inc, read) {Q} can be proved in separation logic, assuming only the "abstract specification" of the inc and read operations, where all the internal resources of the module, such as cell i, are hidden. This provability should prevent b from accessing the internal resources of a counter directly and thus detecting the difference between the two implementations. However, none of the existing models of separation logic can justify our expectation about the client program above. In this paper we provide a new parametric model of separation logic, which captures that clients behave parametrically in internal resource invariants of mutable abstract data types. For instance, our model shows that b(inc, read) preserves the relation R, and thus it behaves in the same way no matter whether we use the first or second implementation of a counter. In the present paper, we will focus on the implicit approach to quantification over internal resource invariants via higher-order frame rules, since it is technically simpler than the explicit approach.¹ Our new model of separation logic is based on two novel ideas. The first is to read specifications in separation logic as relations between two programs. For instance, in our model, the Hoare triple {P} b(inc, read) {Q} describes a relationship between two instantiations [[b(inc, read)]]η₀ and [[b(inc, read)]]η₁ of the client's body b by environments η₀ and η₁. Intuitively, environment ηᵢ defines an implementation of the module operations inc and read, so [[b(inc, read)]]ηᵢ means b is linked with the implementations ηᵢ(inc) and ηᵢ(read). Note that when used with appropriate η₀, η₁ (i.e., ηᵢ that map inc and read to the meaning of incᵢ and readᵢ), the triple expresses how b(inc₀, read₀) is related to b(inc₁, read₁).

¹ The reason is that the implicit quantification of separation logic uses quantification in a very disciplined way so that the usual reading of assertions as sets of heaps can be maintained; if we use quantification without any restrictions, as in [3], it appears that we cannot have the usual reading of assertions as sets of heaps because, then, the rule of consequence is not sound.
The second idea is to parameterize the interpretation by relations on heaps. Mathematically, this means that the interpretation uses a Kripke structure that consists of relations on heaps. The relation parameter describes how the internal resource invariants of two modules are related, and it lets us express the preservation of this relation by client programs. In our counter example, an appropriate parameter is the relation R above. When the triple {P } b(inc, read) {Q} is interpreted with R (and η i corresponding to inc i , read i ), it says, in particular, that b(inc 0 , read 0 ) and b(inc 1 , read 1 ) should preserve the relation R between the internal resources of the two implementations of a counter.
1.1. Related Work. Technically, it has proven to be a very non-trivial problem to define a parametric model for separation logic. One of the main technical challenges in developing a relationally parametric model of separation logic, even for a simple first-order language, is that the standard models of separation logic allow the identity of locations to be observed in the model. This means in particular that allocation of new heap cells is not parametric because the identity of the location of the allocated cell can be observed in the model. (We made this observation in earlier unpublished joint work with Noah Torp-Smith, see [20,Ch. 6].)
This problem of non-parametric memory allocation has also been noticed by recent work on data refinement for heap storage, which exploits semantic ideas from separation logic [10,11]. However, the work on data refinement does not provide a satisfactory solution. Either it avoids the problem by assuming that clients do not allocate cells [10], or its solution has difficulties for handling higher-order procedures and formalizing (observational) equivalences, not refinements, between two implementations of a mutable abstract data type [11].
Our solution to this challenge is to define a more refined semantics of the programming language using FM domain theory, in the style of Benton and Leperchey [4], in which one can name locations but not observe the identity of locations because of the built-in use of permutation of locations. Part of the trick of loc. cit. is to define the semantics in a continuation-passing style so that one can ensure that new locations are suitably fresh with respect to the remainder of the computation. (See Section 4 for more details.) Benton and Leperchey used the FM domain-theoretic model to reason about contextual equivalence and here we extend the approach to give a semantics of separation logic in a continuationpassing style. We relate this new interpretation to the standard direct-style interpretation of separation logic via the so-called observation closure (−) ⊥ ⊥ of a relation, see Section 7.
The other main technical challenge in developing a relationally parametric model of separation logic for reasoning about mutable abstract data types is to devise a model which validates a wide range of higher-order frame rules. Our solution to this challenge is to define an intuitionistic interpretation of the specification logic over a Kripke structure, whose ordering relation intuitively captures the framing-in of resources. Technically, the intuitionistic interpretation, in particular the associated Kripke monotonicity, is used to validate a generalized frame rule. Further, to show that the semantics of the logic does indeed satisfy Kripke monotonicity for the base case of triples, we interpret triples using a universal quantifier, which intuitively quantifies over resources that can possibly be framed in. In the earlier non-parametric model of higher-order frame rules for separation-logic typing in [8] we also made use of a Kripke structure. The difference is that in the present work the elements of the Kripke structure are relations on heaps rather than predicates on heaps because we build a relationally parametric model.
In earlier work, Banerjee and Naumann [1] studied relational parametricity for dynamically allocated heap objects in a Java-like language. Banerjee and Naumann made use of a non-trivial semantic notion of confinement to describe internal resources of a module; here instead we use separation logic, in particular separating conjunction and frame rules, to describe which resources are internal to the module. Our model directly captures that whenever a client has been proved correct in separation logic with respect to an abstract view of a module, then it does not matter how the module has been implemented internally. And, this holds for a higher-order language with higher-order frame rules.
This paper is organized as follows. In Section 2 we describe the programming and assertion languages we consider and in Section 3 we define our version of separation logic. In Section 4 we define the semantics of our programming language in the category of FM-cpos, and describe our relational interpretation of separation logic in Section 5. In Section 6 we present a general abstract construction that provides models of specification logic with higher-order frame rules and show that the semantics of the previous section is in fact a special case of the general construction. Section 7 relates our relational interpretation to the standard interpretation of separation logic, and in Section 8 we present the abstraction theorem that our parametric model validates. We describe examples in Section 9, and finally we conclude and discuss future work in Section 10.
An extended abstract of this paper was presented at the FOSSACS 2007 conference [9]. This paper includes proofs that were missing in the conference version, and describes a general mathematical construction that lies behind our parametric model of separation logic. We also include a new example that illustrates the subtleties of the problems and results.
Programs and Assertions
In this paper, we consider a higher-order language with immutable stack variables. The types and terms of the language are defined as follows:

Types        τ ::= com | val → τ | τ → τ
Expressions  E ::= i | 0 | 1 | −1 | E + E | E − E
Terms        M ::= x | λi. M | M E | λx : τ. M | M M | fix M
                 | if (E=E) then M else M | M ; M | let i=new in M
                 | free(E) | let i=E.f in M | E.f := E   (f ∈ {0, 1})
The language separates expressions E from terms M. Expressions denote heap-independent values, which are either the address of a heap cell or an integer. Expressions are bound to stack variables i, j. On the other hand, terms denote possibly heap-dependent computations, and they are bound to identifiers x, y. The syntax of the language ensures that expressions always terminate, while terms can diverge. The types are used to classify terms only. com denotes commands, val → τ means functions that take an expression parameter, and τ → τ′ denotes functions that take a term parameter. Note that to support two different function types, the language includes two kinds of abstraction and application, one for expression parameters and the other for term parameters. We assume that term parameters are passed by name, and expression parameters are passed by value.
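For instance (an example of ours, not one from the paper), the term λx : com. λi. (x; free(i)) has type com → (val → com): it first receives a command by name and then a cell address by value, so that

(λx : com. λi. (x; free(i))) (g.0 := 1) j

is a well-typed command in any context where g and j are stack variables in ∆.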
To simplify the presentation, we take a simple storage model where each heap cell has only two fields 0 and 1. Command let i=new in M allocates such a binary heap cell, binds the address of the cell to i, and runs M under this binding. The f'th field of this newly allocated cell at address i is read by let j = i.f in N and updated by i.f := E. The cell i is deallocated by free(i).

The language uses typing judgments of the form ∆ ⊢ E( : val) and ∆ | Γ ⊢ M : τ, where ∆ is a finite set of stack variables and Γ is a standard type environment for identifiers x. The typing rules for expressions and terms are shown in Figure 2; the two rules below, for field lookup and field update, are representative:

from ∆, i | Γ ⊢ M : com and ∆ ⊢ E, infer ∆ | Γ ⊢ let i=E.f in M : com   (f ∈ {0, 1})
from ∆ ⊢ E and ∆ ⊢ F, infer ∆ | Γ ⊢ E.f := F : com   (f ∈ {0, 1})

Figure 2: Typing Rules for Expressions and Terms (excerpt)
We use the standard assertions from separation logic to describe properties of the heap:
P ::= E = E | E ≤ E | E → E, E | emp | P * P | P ∧ P | ¬P | ∃i. P.
The points-to predicate E → E 0 , E 1 means that the current heap has only one cell at address E and that the i-th field of the cell has the value E i . The emp predicate denotes the empty heap, and the separating conjunction P * Q means that the current heap can be split into two parts so that P holds for the one and Q holds for the other. The other connectives have the usual meaning from classical logic. All the missing connectives from classical logic are defined as usual.
In the paper, we will use the three abbreviations (E → -), (E → -, E1) and (E → E0, -). The first, E → -, is syntactic sugar for ∃i, j. E → i, j, and denotes heaps with cell E only. E → -, E1 is an abbreviation for ∃i. E → i, E1, and means heaps that contain only cell E and store E1 in the second field of this unique cell E. The last, E → E0, -, is defined similarly.
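As a small worked example (ours): if ρ(i) = 1 and ρ(j) = 2, then the two-cell heap [1→7, 2] • [2→0, 0] satisfies the assertion i → 7, j * j → -, because the heap splits into the cell at address 1, which satisfies i → 7, j, and the cell at address 2, which satisfies j → -. Neither conjunct alone describes the whole heap: the points-to predicates are exact, and * supplies the split.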
Assertions only depend on stack variables i, j, not identifiers x, y. Thus assertions are typed by a judgment ∆ ⊢ P : Assertion. The typing rules for this judgment are completely standard, and thus omitted from this paper.
Separation Logic
Our version of separation logic is the first-order intuitionistic logic extended with Hoare triples and invariant extension. The formulas in the logic are called specifications, and they are defined by the following grammar:
ϕ ::= {P }M {Q} | ϕ ⊗ P | E = E | M = M | ϕ ∧ ϕ | ϕ ∨ ϕ | ϕ ⇒ ϕ | ∀x : τ.ϕ | ∃x : τ.ϕ | ∀i.ϕ | ∃i.ϕ
The formula ϕ ⊗ P means the extension of ϕ by the invariant P . It can be viewed as a syntactic transformation of ϕ that inserts P * − into the pre and post conditions of all triples in ϕ. For instance,
({P}x{Q} ⇒ {P′}M(x){Q′}) ⊗ P0 is equivalent to {P * P0}x{Q * P0} ⇒ {P′ * P0}M(x){Q′ * P0}.
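As another worked instance (ours), consider distributing ⊗ through a conjunction of triples: for ϕ ≡ {emp}inc{emp} ∧ {g → -}read{g → -}, the distribution rules in Figure 3 give

ϕ ⊗ (c → -) ⇔ {emp * c → -}inc{emp * c → -} ∧ {g → - * c → -}read{g → - * c → -},

i.e., the invariant c → - is framed into every triple of ϕ.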
We write Specs for the set of all specifications. Specifications are typed by the judgment ∆ | Γ ⊢ ϕ : Specs, where we overloaded Specs to mean the type for specifications.
The logic includes all the usual proof rules from first-order intuitionistic logic with equality, and a rule for fixed-point induction. In addition, it contains proof rules from separation logic, and higher-order frame rules, expressed in terms of rules for invariant introduction and distribution. Figure 3 shows some of these additional rules and a rule for fixed-point induction. In the figure, we often omit contexts ∆ | Γ for specifications and also conditions about typing.
The rules for Hoare triples are the standard proof rules of separation logic adapted to our language. Note that in the rule of consequence, we use the standard semantics of assertions P, P ′ , Q, Q ′ , in order to express semantic implications between those assertions (of course, standard logical derivability ∆ | P ⊢ P ′ and ∆ | Q ′ ⊢ Q are sufficient conditions). The rules for invariant extension formalize higher-order frame rules, extending the idea in [8]. The generalized higher-order frame rule ϕ ⇒ ϕ ⊗ P adds an invariant P to specification ϕ, and the other rules distribute this added invariant all the way down to the triples. We just show one use of those rules that lead to the second-order frame rule:
∆ | Γ, x : com ⊢ {P}x{Q} ⇒ {P′}M(x){Q′}
∆ | Γ, x : com ⊢ ({P}x{Q} ⇒ {P′}M(x){Q′}) ⊗ P0
∆ | Γ, x : com ⊢ {P}x{Q} ⊗ P0 ⇒ {P′}M(x){Q′} ⊗ P0
∆ | Γ, x : com ⊢ {P * P0}x{Q * P0} ⇒ {P′ * P0}M(x){Q′ * P0}
The last rule is for fixed-point induction, and it relies on the restriction that a specification is of the form γ(fix M ). The grammar for γ guarantees that γ(x) defines an admissible predicate for x, thus ensuring the soundness of fixed-point induction. Moreover, it also guarantees that γ(x) holds when M means ⊥, so allowing us to omit a usual base case, "γ(⊥)," from the rule.
Note that the rules do not include the so-called conjunction rule:
({P }M {Q} ∧ {P ′ }M {Q ′ }) ⇒ {P ∧ P ′ }M {Q ∧ Q ′ }
The omission of this rule is crucial, since our parametricity interpretation does not validate the rule. We discuss the conjunction rule further in Section 10.
Proof Rules for Hoare Triples:

(∀i. {P}M{Q}) ⇒ {∃i. P}M{∃i. Q}   (where i ∉ fv(M))
({P}M{Q} ∧ {P′}M{Q′}) ⇒ {P ∨ P′}M{Q ∨ Q′}
({P ∧ E=F}M{Q} ∧ {P ∧ E≠F}N{Q}) ⇒ {P}if (E=F) then M else N{Q}
({P}M{P0} ∧ {P0}N{Q}) ⇒ {P}M ; N{Q}
(∀i. {P * i → 0, 0}M{Q}) ⇒ {P}let i=new in M{Q}   (where i ∉ fv(P, Q))
(∀i. {P * E → i, E1}M{Q}) ⇒ {∃i. P * E → i, E1}let i=E.0 in M{Q}   (where i ∉ fv(E, Q))
{E → -}free(E){emp}
{E → -, E1}E.0 := F{E → F, E1}
From [[P]]ρ ⊆ [[P′]]ρ and [[Q′]]ρ ⊆ [[Q]]ρ for all ρ ∈ [[∆]], infer ∆ | Γ ⊢ {P′}M{Q′} ⇒ {P}M{Q}   (Consequence)

Proof Rules for Invariant Extension − ⊗ P:

ϕ ⇒ ϕ ⊗ P
{P}M{P′} ⊗ Q ⇔ {P * Q}M{P′ * Q}
(E = F) ⊗ Q ⇔ E = F
(M = N) ⊗ Q ⇔ (M = N)
(ϕ ⊗ P) ⊗ Q ⇔ ϕ ⊗ (P * Q)
(ϕ ⊕ ψ) ⊗ P ⇔ (ϕ ⊗ P) ⊕ (ψ ⊗ P)   (where ⊕ ∈ {⇒, ∧, ∨})
(κx : τ. ϕ) ⊗ P ⇔ κx : τ. ϕ ⊗ P   (where κ ∈ {∀, ∃})
(κi. ϕ) ⊗ P ⇔ κi. ϕ ⊗ P   (where κ ∈ {∀, ∃} and i ∉ fv(P))

Rule for Fixed-Point Induction:

C ::= [ ] | λi.C | C E | λx : τ.C | C M | fix C | C; M
γ ::= {P}C{Q} | γ ∧ γ | ∀x : τ. γ | ∀i. γ

(∀x. γ(x) ⇒ γ(M x)) ⇒ γ(fix M)

where γ(N) is a capture-avoiding insertion of N into the hole [−] in γ.

Figure 3: Sample Proof Rules

Example 3.1. Recall the counter example from the introduction and consider the following simple client

let i=new in i.0 := 5; init(i); inc; read,

whose body consists of inc; read. The client initializes the value of the counter to 5, increases the counter, and reads the value of the counter. In our logic, we can prove that the body of the client satisfies:

∆ | Γ ⊢ ϕ ⇒ {g → -}inc; read{g → -}
where ∆ is a set of stack variables containing g and Γ, ϕ are defined by
Γ def = {inc : com, read : com}, ϕ def = {emp}inc{emp} ∧ {g → -}read{g → -}.
Note that cell i, which is transferred to the counter by init(i), does not appear in any assertion of the specification for the client's body. This implies, correctly, that the client does not dereference the transferred cell i, after calling init(i).
The formal proof of the specification of the body uses the first-order frame rule, and it is given below:
1. ∆ | Γ ⊢ ϕ ⇒ {emp}inc{emp}
2. ∆ | Γ ⊢ ϕ ⇒ ({emp}inc{emp} ⊗ (g → -))
3. ∆ | Γ ⊢ ϕ ⇒ {emp * g → -}inc{emp * g → -}
4. ∆ | Γ ⊢ ϕ ⇒ {g → -}inc{g → -}
5. ∆ | Γ ⊢ ϕ ⇒ {g → -}read{g → -}
6. ∆ | Γ ⊢ ϕ ⇒ {g → -}inc; read{g → -}
The interesting parts of the proof are steps 2, 3, where we use rules for invariant extensions, in order to add the frame axiom g → -into the pre and post conditions of a triple. Note that the addition of this frame axiom starts with a generalized frame rule ϕ ⇒ ϕ ⊗ P , and continues with the rule that moves P inside ϕ. The remaining steps 1, 5, 4, 6 are instances of usual rules for first-order intuitionistic logic or Hoare logic, such as the elimination rule for conjunction and the rule of Consequence.
Semantics of Programming Language
Let Loc be a countably infinite set of locations. The programming language is interpreted in the category of FM-cpos on Loc.
We remind the reader of the basics of FM domain theory. Call a bijection π on Loc a permutation when π(l) = l only for finitely many l, and let perm be the set of all permutations. An FM-set is a pair of a set A and a function · of type perm × A → A, such that (1) id · a = a and π · (π ′ · a) = (π • π ′ ) · a, and (2) every a ∈ A is supported by some finite subset L of Loc, i.e., ∀π ∈ perm. (∀l ∈ L. π(l) = l) =⇒ π · a = a.
It is known that every element a in an FM-set A has a smallest set L that supports a. This smallest set is denoted supp(a). An FM function f from an FM-set A to an FM-set B is a function from A to B such that f (π · a) = π · (f (a)) for all a, π.
An FM-poset is an FM-set A with a partial order ⊑ on A such that a ⊑ b =⇒ π·a ⊑ π·b for all π, a, b. We say that a (ω-)chain {a i } i in FM-poset A is finitely supported iff there is a finite subset L of Loc that supports all elements in the chain. Finally, an FM-cpo is an FM-poset (A, ⊑) for which every finitely-supported chain {a i } i has a least upper bound, and an FM continuous function f from an FM-cpo A to an FM-cpo B is an FM function from A to B that preserves the least upper bounds of all finitely supported chains.
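For a concrete illustration (ours): the one-cell heap h = [l→l′, 3], an element of the FM-cpo Heap defined in Figure 4 below, is acted on by a permutation as π · h = [π(l)→π(l′), 3], and supp(h) = {l, l′}: exactly the permutations moving l or l′ can change h.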
Types are interpreted as pointed FM-cpos, using the categorical structure of the category of FM-cpos, see Figure 4. In the figure, we use the FM-cpo Val of references defined by:
Val def= Loc + Int + {default}   where π · v def= if v ∈ Loc then π(v) else v

and default denotes a default value used for type-incorrect expressions, such as the addition of two locations. The only nonstandard part is the semantics of the command type com, which we define in the continuation passing style following [19, 4]:

O def= {normal, err}⊥   (with π · o = o)
Heap def= Loc ⇀fin Val × Val
Cont def= Heap → O
[[val → τ]] def= Val → [[τ]]
[[τ → τ′]] def= [[τ]] → [[τ′]]
[[com]] def= Heap × Cont → O
[[∆]] def= ∏_{i ∈ ∆} Val
[[Γ]] def= ∏_{x : τ ∈ Γ} [[τ]]

Figure 4: Interpretation of Types and Typing Contexts
Here A × B and A → B are cartesian product and exponential in the category of FM-cpos. And A ⇀ fin B is the FM-cpo of the finite partial functions from A to B whose order and permutation action are defined below:
(1) f ⊑ g def⇔ dom(f) = dom(g) and f(a) ⊑ g(a) for all a ∈ dom(f);
(2) (π · f)(a) def= if a ∈ π(dom(f)) then π · ((f ∘ π⁻¹)(a)) else undefined.

The first FM-cpo O specifies all possible observations, which are normal termination normal, erroneous termination err, or divergence ⊥. The next FM-cpo Heap denotes the set of heaps. It formalizes that a heap contains only finitely many allocated cells and each cell in the heap has two fields. The third FM-cpo Cont represents the set of continuations that consume heaps. Finally, [[com]] is the set of cps-style commands. Those commands take a current heap h and a continuation k, and compute an observation in O (often by computing a final heap h′, and calling the given continuation k with h′).
Note that Heap has the usual heap disjointness predicate h#h ′ , which denotes the disjointness of dom(h) and dom(h ′ ), and the usual partial heap combining operator •, which takes the union of (the graphs of) two disjoint heaps. The # predicate and • operator fit well with FM domain theory, because they preserve all permutations:
h#h ′ ⇐⇒ (π · h)#(π · h ′ ) and π · (h • h ′ ) = (π · h) • (π · h ′ ).
The semantics of typing contexts ∆ and Γ is given by cartesian products:
[[∆]] def= ∏_{i ∈ ∆} Val and [[Γ]] def= ∏_{x : τ ∈ Γ} [[τ]].
The products here are taken over finite families, so they give well-defined FM-cpos. We will use symbols ρ and η to denote environments in [[∆]] and [[Γ]], respectively.
The semantics of expressions and terms is shown in Figures 5 and 6. It is standard, except for the case of allocation, where we make use of the underlying FM domain theory: The interpretation picks a location that is fresh with respect to currently known values (i.e., supp(h, η, ρ)) as well as those that will be used by the continuation (i.e., supp(k)). The cps-style interpretation gives us an explicit handle on which locations are used by the continuation, and the FM domain theory ensures that supp(h, η, ρ, k) is finite (so a new location l can be chosen) and that the choice of l does not matter, as long as l is not in supp(h, η, ρ, k). (Formally, one shows by induction that the semantics is well-defined.) We borrowed this interpretation from Benton and Leperchey [4].
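To make the shape of these cps-style commands concrete, here is a small executable sketch in Haskell (ours, not part of the paper's development; the names Obs, Com, newCell, seqC, freeCell and the representation of heaps as finite maps are our own choices). Note that a concrete program can only pick a location fresh for the current heap; freshness with respect to the continuation's support supp(k) is exactly what the FM domain theory provides and what this sketch cannot express:

    import qualified Data.Map as Map

    type Loc  = Int
    type Heap = Map.Map Loc (Int, Int)   -- each cell has two fields
    data Obs  = Normal | Err | Bot       -- observations: normal, err, divergence
    type Cont = Heap -> Obs              -- continuations consume heaps
    type Com  = Heap -> Cont -> Obs      -- cps-style commands

    -- Allocation: run the body with a location fresh for the current heap,
    -- initialized to (0, 0).
    newCell :: (Loc -> Com) -> Com
    newCell body h k = body l (Map.insert l (0, 0) h) k
      where l = if Map.null h then 0 else fst (Map.findMax h) + 1

    -- Sequencing M ; N builds the continuation of M from N, as in Figure 6.
    seqC :: Com -> Com -> Com
    seqC m n h k = m h (\h' -> n h' k)

    -- Disposal: err on a dangling address, otherwise continue.
    freeCell :: Loc -> Com
    freeCell l h k = if Map.member l h then k (Map.delete l h) else Err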
[[∆ ⊢ E]] : [[∆]] → Val
[[∆, i ⊢ i]]ρ def= ρ(i)
[[∆ ⊢ 0]]ρ def= 0
[[∆ ⊢ E1 + E2]]ρ def= if [[E1]]ρ, [[E2]]ρ ∈ Int then [[E1]]ρ + [[E2]]ρ else default
[[∆ ⊢ E1 − E2]]ρ def= if [[E1]]ρ, [[E2]]ρ ∈ Int then [[E1]]ρ − [[E2]]ρ else default

Figure 5: Interpretation of Expressions

[[∆ | Γ ⊢ M : τ]] : [[∆]] × [[Γ]] → [[τ]]
[[∆ | Γ, x : τ ⊢ x : τ]]ρ,η def= η(x)
[[∆ | Γ ⊢ λi. M : val → τ]]ρ,η def= λv : Val. [[∆, i | Γ ⊢ M : τ]]ρ[i→v],η
[[∆ | Γ ⊢ M E : τ]]ρ,η def= ([[∆ | Γ ⊢ M : val → τ]]ρ,η) [[E]]ρ
[[∆ | Γ ⊢ λx : τ′. M : τ′ → τ]]ρ,η def= λm : [[τ′]]. [[∆ | Γ, x : τ′ ⊢ M : τ]]ρ,η[x→m]
[[∆ | Γ ⊢ M N : τ]]ρ,η def= ([[∆ | Γ ⊢ M : τ′ → τ]]ρ,η) [[∆ | Γ ⊢ N : τ′]]ρ,η
[[∆ | Γ ⊢ fix M : τ]]ρ,η def= leastfix [[∆ | Γ ⊢ M : τ → τ]]ρ,η
[[∆ | Γ ⊢ if (E=F) then M else N : com]]ρ,η def= if [[E]]ρ=[[F]]ρ then [[∆ | Γ ⊢ M : com]]ρ,η else [[∆ | Γ ⊢ N : com]]ρ,η
[[∆ | Γ ⊢ M ; N : com]]ρ,η(h, k) def= let k′ be λh′. [[∆ | Γ ⊢ N : com]]ρ,η(h′, k) in [[∆ | Γ ⊢ M : com]]ρ,η(h, k′)
[[∆ | Γ ⊢ let i=new in M : com]]ρ,η(h, k) def= [[∆, i | Γ ⊢ M : com]]ρ[i→l],η(h • [l→0, 0], k)   (where l ∈ Loc − supp(h, ρ, η, k))
[[∆ | Γ ⊢ free(E) : com]]ρ,η(h, k) def= if [[E]]ρ ∉ dom(h) then err else (k(h′) for h′ s.t. h′ • [[[E]]ρ→h([[E]]ρ)] = h)
[[∆ | Γ ⊢ let i=E.0 in M : com]]ρ,η(h, k) def= if [[E]]ρ ∉ dom(h) then err else let (v, v′) = h([[E]]ρ) in [[∆, i | Γ ⊢ M : com]]ρ[i→v],η(h, k)
[[∆ | Γ ⊢ E.0 := F : com]]ρ,η(h, k) def= if [[E]]ρ ∉ dom(h) then err else (let (v, v′) = h([[E]]ρ) in k(h[[[E]]ρ→([[F]]ρ, v′)]))

Figure 6: Interpretation of Terms
Relational Interpretation of Separation Logic
We now present the main result of this paper, a relational interpretation of separation logic. In this interpretation, a specification means a relation on terms, rather than a set of terms "satisfying" the specification. This relational reading formalizes the intuitive claim that proof rules in separation logic ensure parametricity with respect to the heap.
Our interpretation has two important components that ensure parametricity. The first is a Kripke structure R. The possible worlds of R are finitely supported binary relations r on heaps, and the accessibility relation is the preorder defined by the separating conjunction for relations:
h0 [r * s] h1 def⇔ there exist splittings n0 • m0 = h0 and n1 • m1 = h1 such that n0 [r] n1 and m0 [s] m1,
r ⊑ r′ def⇔ there exists s such that r * s = r′.
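For example (ours): if r relates [l→0, n] with [l→0, −n] for all n ∈ Int, and s = eq({[l′→0, 0]}) for some l′ ≠ l, then [l→0, 2] • [l′→0, 0] [r * s] [l→0, −2] • [l′→0, 0], and r ⊑ r * s: the world r * s extends r with the (here trivial) invariant on the extra cell l′.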
Intuitively, r ⊑ r ′ means that r ′ is a * -extension of r by some s. The Kripke structure R parameterizes our interpretation, and it guarantees that all the logical connectives behave parametrically wrt. relations between internal resource invariants. The second is semantic quadruples, which describe the relationship between two commands. We use the semantic quadruples to interpret Hoare triples relationally. Consider c 0 , c 1 ∈ [[com]] and r, s ∈ R. For each subset D 0 of an FM-cpo D, define eq(D 0 ) to be the partial identity relation on D that equates only the elements in D 0 . A semantic quadruple
[r](c0, c1)[s] holds iff

∀r′ ∈ R. ∀h0, h1 ∈ Heap. ∀k0, k1 ∈ Cont.
(h0 [r * r′] h1 ∧ k0 [s * r′ → eq(G)] k1) =⇒ (c0(h0, k0) [eq(G)] c1(h1, k1)),
where G is the set O − {err} = {normal, ⊥} of good observations, and where k0 [s * r′ → eq(G)] k1 means that k0, k1 map heaps related in s * r′ into the diagonal of G. The above condition indirectly expresses that if the input heaps h0, h1 are r * r′-related, then the output heaps are related by s * r′. Note that the definition quantifies over relations r′ for new heaps, thus implementing relational parametricity. In Section 7, we show how semantic quadruples are related to a more direct way of relating two commands and we also show that the parametricity in the definition of semantic quadruples implies the locality condition in separation logic [18].

The interpretation of specifications, given in Figure 7, satisfies Kripke monotonicity: for all environments ρ ∈ [[∆]] and η0, η1 ∈ [[Γ]] and all worlds r ∈ R,

(ρ, η0, η1, r |=∆|Γ ϕ) ∧ (r ⊑ r′) =⇒ (ρ, η0, η1, r′ |=∆|Γ ϕ).
One way to understand the satisfaction relation is to assume two machines that execute the same set of terms. Each of these machines contains a chip that implements a module with a fixed set of operations. Intuitively, the (ρ, η 0 , η 1 , r) parameter of |= specifies the configurations of those machines: one machine uses (ρ, η 0 ) to bind free stack variables and identifiers of terms, and the other machine uses (ρ, η 1 ) for the same purpose; and the internal resources of the built-in modules in those machines are related by r. The judgment (ρ, η 0 , η 1 , r) |= ∆|Γ ϕ means that if two machines are configured by (ρ, η 0 , η 1 , r), then the meanings of the terms in two machines are ϕ-related. Note that we allow different environments for the Γ context only, not for the ∆ context. This is because we are mainly concerned with parametricity with respect to the heap and only Γ entities, not ∆ entities, depend on the heap. Figure 7 shows the detailed interpretation of specifications. In the figure, we make use of the standard semantics of assertions [18]. We now explain three cases in the definition of |=. The first case is implication. Our interpretation of implication exploits the specific notion of accessibility in R. It is equivalent to the standard Kripke semantics of implication: for all r ′ ∈ R, if r ⊑ r ′ and (ρ, η 0 , η 1 , r ′ ) |= ϕ, then (ρ, η 0 , η 1 , r ′ ) |= ψ, because r ⊑ r ′ iff r ′ = r * s for some s.
(ρ, η0, η1, r) |= {P}M{Q} def⇔ [eq([[P]]ρ) * r]([[M]]ρ,η0, [[M]]ρ,η1)[eq([[Q]]ρ) * r]
(ρ, η0, η1, r) |= ϕ ⊗ P def⇔ (ρ, η0, η1, r * eq([[P]]ρ)) |= ϕ
(ρ, η0, η1, r) |= E = F def⇔ [[E]]ρ = [[F]]ρ
(ρ, η0, η1, r) |= M = N def⇔ [[M]]ρ,η0 = [[N]]ρ,η0 and [[M]]ρ,η1 = [[N]]ρ,η1
(ρ, η0, η1, r) |= ϕ ⇒ ψ def⇔ for all s ∈ R, if (ρ, η0, η1, r * s) |= ϕ, then (ρ, η0, η1, r * s) |= ψ
(ρ, η0, η1, r) |= ∀i. ϕ def⇔ for all v ∈ Val, (ρ[i→v], η0, η1, r) |= ϕ
(ρ, η0, η1, r) |= ∃i. ϕ def⇔ there exists v ∈ Val s.t. (ρ[i→v], η0, η1, r) |= ϕ
(ρ, η0, η1, r) |= ∀x : τ. ϕ def⇔ for all m, n ∈ [[τ]], (ρ, η0[x→m], η1[x→n], r) |= ϕ
(ρ, η0, η1, r) |= ∃x : τ. ϕ def⇔ there exist m, n ∈ [[τ]] s.t. (ρ, η0[x→m], η1[x→n], r) |= ϕ
(ρ, η0, η1, r) |= ϕ ∧ ψ def⇔ (ρ, η0, η1, r) |= ϕ and (ρ, η0, η1, r) |= ψ
(ρ, η0, η1, r) |= ϕ ∨ ψ def⇔ (ρ, η0, η1, r) |= ϕ or (ρ, η0, η1, r) |= ψ

Figure 7: Relational Interpretation of Separation Logic
The second case is quantification. If a stack variable i is quantified, we consider one semantic value, but if an identifier x is quantified, we consider two semantic values. This is again to reflect that in our relational interpretation, we are mainly concerned with heapdependent entities. Thus, we only read quantifiers for heap-dependent entities x relationally.
The last case is invariant extension ϕ ⊗ P . Mathematically, it says that if we extend the r parameter by the partial equality for predicate P , specification ϕ holds. Intuitively, this means that some heap cells not appearing in a specification ϕ satisfy the invariant P .
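As a small worked unfolding (ours) of the first clause in Figure 7: for a variable x : com, (ρ, η0, η1, r) |= {emp}x{emp} amounts to [eq([[emp]]ρ) * r](η0(x), η1(x))[eq([[emp]]ρ) * r], i.e., the two implementations bound to x must map r-related heaps to r-related heaps, and this for every frame r′ that the definition of semantic quadruples additionally quantifies over.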
A specification ∆ | Γ ⊢ ϕ is valid iff (ρ, η0, η1, r) |= ϕ holds for all (ρ, η0, η1, r). A proof rule is sound when it is a valid axiom or an inference rule that concludes a valid specification from valid premises.

Lemma 5.1. The axioms for ⊗ are sound.

Proof. All the axioms for ⊗ have the form ϕ ⇒ ψ or ϕ ⇔ ψ. When proving those axioms, we use the fact that ϕ ⇒ ψ is valid if and only if (ρ, η0, η1, r) |= ϕ implies (ρ, η0, η1, r) |= ψ for all (ρ, η0, η1, r).
First, consider the generalized frame rule ϕ ⇒ ϕ ⊗ P . Suppose that (ρ, η 0 , η 1 , r) |= ϕ. Then, by Kripke monotonicity, (ρ, η 0 , η 1 , r * eq([[P ]] ρ )) |= ϕ. Thus, (ρ, η 0 , η 1 , r) |= ϕ ⊗ P .
Second, consider the distribution rule for triples. We prove the validity of this rule as follows:
(ρ, η0, η1, r) |= {P}M{Q} ⊗ P0
⇐⇒ (ρ, η0, η1, r * eq([[P0]]ρ)) |= {P}M{Q}   (by the semantics of ⊗P0)
⇐⇒ [eq([[P]]ρ) * eq([[P0]]ρ) * r]([[M]]ρ,η0, [[M]]ρ,η1)[eq([[Q]]ρ) * eq([[P0]]ρ) * r]
⇐⇒ [eq([[P * P0]]ρ) * r]([[M]]ρ,η0, [[M]]ρ,η1)[eq([[Q * P0]]ρ) * r]
⇐⇒ (ρ, η0, η1, r) |= {P * P0}M{Q * P0}   (by the semantics of triples).
The second equivalence is by the semantics of triples, and the third equivalence holds because eq maps * for predicates to * for relations. Third, we prove the soundness of the distribution rules for equality. Note that the semantics of E = F and M = N is independent of the heap relation r in (ρ, η 0 , η 1 , r). Thus, once we fix the ρ, η 0 , η 1 components, either E = F and M = N hold for all r, or E = F and M = N hold for no r. Let ϕ be E = F or M = N . From the property of ϕ that we have just pointed out, it follows that
(ρ, η 0 , η 1 , r) |= ϕ ⇐⇒ (ρ, η 0 , η 1 , r * eq([[P ]] ρ )) |= ϕ ⇐⇒ (ρ, η 0 , η 1 , r) |= ϕ ⊗ P.
Finally, consider all the remaining rules, which are distribution rules for logical connectives. All cases can be proved mostly by unrolling and rolling the definition of |=. Here we explain two cases. The first case is the distribution rule for existential quantification of i. We prove that this rule is sound below:
(ρ, η0, η1, r) |= ∃i. ϕ ⊗ P
⇐⇒ there exists v ∈ Val s.t. (ρ[i→v], η0, η1, r) |= ϕ ⊗ P
⇐⇒ there exists v ∈ Val s.t. (ρ[i→v], η0, η1, r * eq([[P]]ρ[i→v])) |= ϕ
⇐⇒ there exists v ∈ Val s.t. (ρ[i→v], η0, η1, r * eq([[P]]ρ)) |= ϕ   (since i ∉ fv(P))
⇐⇒ (ρ, η0, η1, r * eq([[P]]ρ)) |= ∃i. ϕ
⇐⇒ (ρ, η0, η1, r) |= (∃i. ϕ) ⊗ P.
All the equivalences except the third follow by rolling/unrolling the definition of |=. The next case is the rule for implication, which we prove sound as follows:
(ρ, η0, η1, r) |= (ϕ ⇒ ψ) ⊗ P
⇐⇒ (ρ, η0, η1, r * eq([[P]]ρ)) |= ϕ ⇒ ψ
⇐⇒ ∀s. (ρ, η0, η1, r * eq([[P]]ρ) * s) |= ϕ =⇒ (ρ, η0, η1, r * eq([[P]]ρ) * s) |= ψ
⇐⇒ ∀s. (ρ, η0, η1, r * s) |= ϕ ⊗ P =⇒ (ρ, η0, η1, r * s) |= ψ ⊗ P
⇐⇒ (ρ, η0, η1, r) |= (ϕ ⊗ P) ⇒ (ψ ⊗ P).
Again, all the equivalences are obtained by rolling/unrolling the definition of |=.

Theorem 5.2. All the proof rules are sound.

Proof. The interpretation of all the logical connectives is standard, so that the semantics validates all the usual rules from first-order intuitionistic logic with equality. Moreover, by Lemma 5.1, all the rules about ⊗ are sound as well. Thus, it remains to show that the rules about Hoare triples and fixed point induction are sound.
Note that most of the rules about triples and fixed point induction have the form ϕ ⇒ ψ. When proving the soundness of those rules, we use the fact that ϕ ⇒ ψ is valid if and only if (ρ, η 0 , η 1 , r) |= ϕ implies (ρ, η 0 , η 1 , r) |= ψ for all (ρ, η 0 , η 1 , r).
The first case is the rule for memory allocation:
(∀i.{P * i → 0, 0}M {Q}) ⇒ {P }let i=new in M {Q}.
Consider (ρ, η 0 , η 1 , r) satisfying the assumption of the above axiom. We need to prove that (ρ, η 0 , η 1 , r) also satisfies the conclusion, i.e.,
[eq([[P ]] ρ ) * r]([[let i=new in M ]] ρ,η 0 , [[let i=new in M ]] ρ,η 1 )[eq([[Q]] ρ ) * r].
Choose arbitrary h 0 , h 1 ∈ Heap, k 0 , k 1 ∈ Cont , and s ∈ R such that
h 0 [eq([[P ]] ρ ) * r * s]h 1 and k 0 [(eq([[Q]] ρ ) * r * s) → eq(G)]k 1 .
Pick l ∈ Loc − supp(h 0 , h 1 , ρ, η 0 , η 1 , k 0 , k 1 ). Then, the FM domain theory ensures that for
j = 0, 1,

[[let i=new in M]]ρ,ηj(hj, kj) = [[M]]ρ[i→l],ηj(hj • [l→0, 0], kj).   (5.1)

Let ρ′ be ρ[i→l], and let h′j be hj • [l→0, 0]
. We prove the required relationship for let i=new in M as follows:
h0 [eq([[P]]ρ) * r * s] h1 ∧ k0 [(eq([[Q]]ρ) * r * s) → eq(G)] k1
=⇒ h0 [eq([[P]]ρ′) * r * s] h1 ∧ k0 [(eq([[Q]]ρ′) * r * s) → eq(G)] k1
=⇒ h′0 [eq([[P * i → 0, 0]]ρ′) * r * s] h′1 ∧ k0 [(eq([[Q]]ρ′) * r * s) → eq(G)] k1
=⇒ [[M]]ρ′,η0(h′0, k0) [eq(G)] [[M]]ρ′,η1(h′1, k1)
=⇒ [[let i=new in M]]ρ,η0(h0, k0) [eq(G)] [[let i=new in M]]ρ,η1(h1, k1).
The first implication holds, because ρ and ρ ′ are different only for i but i ∈ fv(P, Q). The second implication follows from the definition of h ′ j , and the third implication from the assumption that (ρ, η 0 , η 1 , r) |= ∀i. {P * i → 0, 0}M {Q}. Finally, the last implication holds, because of the equation 5.1.
The second case is the axiom for lookup
(∀i.{P * E → i, E 1 }M {Q}) ⇒ {∃i.P * E → i, E 1 }let i=E.0 in M {Q}.
Consider (ρ, η 0 , η 1 , r) that satisfies (∀i.{P * E → i, E 1 }M {Q}), and pick arbitrary h 0 , h 1 ∈ Heap, k 0 , k 1 ∈ Cont and s ∈ R such that
h 0 eq([[∃i.P * E → i, E 1 ]] ρ ) * r * s h 1 ∧ k 0 eq([[Q]] ρ ) * r * s → eq(G) k 1 .
Let l be [[E]]ρ (which is well-defined since i ∉ fv(E)). By the first conjunct above, l is in dom(h0) ∩ dom(h1), and there exist v and ρ′ such that

v = proj0(h0(l)) = proj0(h1(l)), ρ′ = ρ[i→v], and h0 [eq([[P * E → i, E1]]ρ′) * r * s] h1.
Here proj 0 is the projection of the first component of pairs. The two equalities above about v and ρ ′ imply that for j = 0, 1,
[[let i=E.0 in M ]] ρ,η j (h j , k j ) = [[M ]] ρ ′ ,η j (h j , k j ). (5.2)
We derive the desired relationship about let i=E.0 in M as follows:
k0 [eq([[Q]]ρ) * r * s → eq(G)] k1 ∧ h0 [eq([[P * E → i, E1]]ρ′) * r * s] h1
=⇒ k0 [eq([[Q]]ρ′) * r * s → eq(G)] k1 ∧ h0 [eq([[P * E → i, E1]]ρ′) * r * s] h1
=⇒ [[M]]ρ′,η0(h0, k0) [eq(G)] [[M]]ρ′,η1(h1, k1)
=⇒ [[let i=E.0 in M]]ρ,η0(h0, k0) [eq(G)] [[let i=E.0 in M]]ρ,η1(h1, k1).
The first implication holds because i ∉ fv(Q), the second follows from the fact that (ρ, η0, η1, r) satisfies the assumption of this axiom, and the last implication follows from the equation 5.2.
The third case is the axiom {E → -}free(E){emp}. Choose arbitrary (ρ, η 0 , η 1 , r), h 0 , h 1 ∈ Heap, k 0 , k 1 ∈ Cont , and s ∈ R, such that
h 0 eq([[E → -]] ρ ) * r * s h 1 ∧ k 0 eq([[emp]] ρ ) * r * s → eq(G) k 1 .
By the first conjunct above, there are splittings m 0 • n 0 = h 0 and m 1 • n 1 = h 1 such that m 0 [eq([[E → -]])]m 1 and n 0 [r * s]n 1 . Note that the relationship between m 0 and m 1 implies that [[free(E)]] ρ,η j (h j , k j ) = k j (n j ) for j = 0, 1. Thus, it is sufficient to show that k 0 (n 0 )[eq(G)]k 1 (n 1 ). Note that n 0 and n 1 are already related by r * s, and k 0 and k 1 by eq([[emp]] ρ ) * r * s → eq(G). The conclusion follows from these two relationships, because eq([[emp]] ρ ) * r * s = r * s.
The fourth case is the axiom {E → -, E 1 }E.0 := F {E → F, E 1 }. Choose arbitrary (ρ, η 0 , η 1 , r), h 0 , h 1 ∈ Heap, k 0 , k 1 ∈ Cont , and s ∈ R, such that
h 0 eq([[E → -, E 1 ]] ρ ) * r * s h 1 ∧ k 0 eq([[E → F, E 1 ]] ρ ) * r * s → eq(G) k 1 .
Because of the first conjunct, there are splittings m 0 • n 0 = h 0 and m 1 •
n 1 = h 1 such that m 0 [eq([[E → -, E 1 ]] ρ )]m 1 and n 0 [r * s]n 1 . Let m ′ be the heap [[[E]] ρ →([[F ]] ρ , [[E 1 ]] ρ )]
. Then, we have the following two facts:
(1) (m′ • n0) [eq([[E → F, E1]]ρ) * r * s] (m′ • n1), and
(2) for all j = 0, 1, [[E.0 := F]]ρ,ηj(hj, kj) = kj(m′ • nj).

By the first fact, k0(m′ • n0) [eq(G)] k1(m′ • n1). Now, the second fact gives the required [[E.0 := F]]ρ,η0(h0, k0) [eq(G)] [[E.0 := F]]ρ,η1(h1, k1).
The fifth case is the rule of Consequence. Suppose that [[P]]ρ ⊆ [[P′]]ρ and [[Q′]]ρ ⊆ [[Q]]ρ, and (ρ, η0, η1, r) |= {P′}M{Q′}. Consider h0, h1 ∈ Heap, k0, k1 ∈ Cont, and s ∈ R, such that

h0 [eq([[P]]ρ) * r * s] h1 ∧ k0 [eq([[Q]]ρ) * r * s → eq(G)] k1.

Since eq is monotone and * preserves the subset order for relations,

eq([[P]]ρ) * r * s ⊆ eq([[P′]]ρ) * r * s, and
[eq([[Q]]ρ) * r * s → eq(G)] ⊆ [eq([[Q′]]ρ) * r * s → eq(G)].

Thus, h0 [eq([[P′]]ρ) * r * s] h1 and k0 [eq([[Q′]]ρ) * r * s → eq(G)] k1. These two relationships imply the required [[M]]ρ,η0(h0, k0) [eq(G)] [[M]]ρ,η1(h1, k1), because (ρ, η0, η1, r) satisfies {P′}M{Q′}.

The sixth case is the rule for introducing existential quantification for assertions:

(∀i. {P}M{Q}) ⇒ {∃i. P}M{∃i. Q}   (where i ∉ fv(M)).

Suppose that (ρ, η0, η1, r) satisfies ∀i. {P}M{Q}, and pick h0, h1 ∈ Heap, k0, k1 ∈ Cont, and s ∈ R such that h0 [eq([[∃i. P]]ρ) * r * s] h1 and k0 [eq([[∃i. Q]]ρ) * r * s → eq(G)] k1. Then, there exist v ∈ Val and ρ′ such that

ρ′ = ρ[i→v], h0 [eq([[P]]ρ′) * r * s] h1, and k0 [(eq([[Q]]ρ′) * r * s) → eq(G)] k1.

From what we have just shown, we derive the conclusion as follows:

(h0 [eq([[P]]ρ′) * r * s] h1) ∧ (k0 [(eq([[Q]]ρ′) * r * s) → eq(G)] k1)
=⇒ [[M]]ρ′,η0(h0, k0) [eq(G)] [[M]]ρ′,η1(h1, k1)   (since (ρ, η0, η1, r) |= ∀i. {P}M{Q})
=⇒ [[M]]ρ,η0(h0, k0) [eq(G)] [[M]]ρ,η1(h1, k1)   (since i ∉ fv(M)).
The seventh case is the disjunction rule. Suppose that (ρ, η 0 , η 1 , r) satisfies triples {P }M {Q} and {P ′ }M {Q ′ }. Consider h 0 , h 1 ∈ Heap, s ∈ R, and k 0 , k 1 ∈ Cont, such that
h 0 eq([[P ∨ P ′ ]] ρ ) * r * s h 1 ∧ k 0 eq([[Q ∨ Q ′ ]] ρ ) * r * s → eq(G) k 1 .
By the definition of eq([[P ∨ P ′ ]] ρ ), heaps h 0 and h 1 are related by eq([[P ]] ρ ) * r * s or eq([[P ′ ]] ρ ) * r * s. Without loss of generality, we assume that
h 0 eq([[P ]] ρ ) * r * s h 1 . (5.3)
Since eq is monotone and * preserves the subset order for relations, relation eq([[Q ∨ Q′]]ρ) * r * s → eq(G) is a subset of eq([[Q]]ρ) * r * s → eq(G), and so,

k0 [eq([[Q]]ρ) * r * s → eq(G)] k1.   (5.4)

By our supposition, (ρ, η0, η1, r) satisfies {P}M{Q}. Thus, the relationships 5.3 and 5.4 imply the required [[M]]ρ,η0(h0, k0) [eq(G)] [[M]]ρ,η1(h1, k1).

The eighth case is the rule for the conditional statement. Suppose that (ρ, η0, η1, r) satisfies {P ∧ E = F}M{Q} and {P ∧ E ≠ F}N{Q}. Consider h0, h1 ∈ Heap, s ∈ R, and k0, k1 ∈ Cont, such that

h0 [eq([[P]]ρ) * r * s] h1 ∧ k0 [eq([[Q]]ρ) * r * s → eq(G)] k1.
We do the case analysis depending on whether [[E]]ρ = [[F]]ρ. Suppose that [[E]]ρ = [[F]]ρ. In this case, h0 [eq([[P ∧ E = F]]ρ) * r * s] h1, and

[[if (E=F) then M else N]]ρ,ηj(hj, kj) = [[M]]ρ,ηj(hj, kj) for all j = 0, 1.   (5.5)
Using these facts, we derive the conclusion as follows:
h0 [eq([[P ∧ E = F]]ρ) * r * s] h1 ∧ k0 [eq([[Q]]ρ) * r * s → eq(G)] k1
=⇒ [[M]]ρ,η0(h0, k0) [eq(G)] [[M]]ρ,η1(h1, k1)
=⇒ [[if (E=F) then M else N]]ρ,η0(h0, k0) [eq(G)] [[if (E=F) then M else N]]ρ,η1(h1, k1).
The first implication follows from our assumption that {P ∧ E = F}M{Q} is satisfied by (ρ, η0, η1, r), and the second follows from the equation 5.5. The case [[E]]ρ ≠ [[F]]ρ is symmetric, using {P ∧ E ≠ F}N{Q}.

The ninth case is the rule for sequencing. Suppose that (ρ, η0, η1, r) satisfies {P}M{P0} and {P0}N{Q}, and consider h0, h1 ∈ Heap, s ∈ R, and k0, k1 ∈ Cont, such that

h0 [eq([[P]]ρ) * r * s] h1 ∧ k0 [eq([[Q]]ρ) * r * s → eq(G)] k1.

Let k′j be λh′j. [[N]]ρ,ηj(h′j, kj). Since (ρ, η0, η1, r) |= {P0}N{Q},

k′0 = λh′0. [[N]]ρ,η0(h′0, k0) [eq([[P0]]ρ) * r * s → eq(G)] λh′1. [[N]]ρ,η1(h′1, k1) = k′1.

Since (ρ, η0, η1, r) |= {P}M{P0}, the above relationship between k′0 and k′1 implies [[M]]ρ,η0(h0, k′0) [eq(G)] [[M]]ρ,η1(h1, k′1). This gives the conclusion, because [[M ; N]]ρ,ηj(hj, kj) is equal to [[M]]ρ,ηj(hj, k′j), for all j = 0, 1.
The last case is the rule for fixed point induction. We note two properties of C and γ.
(1) For all ρ, η, if η(x) = ⊥, then [[C(x)]] ρ,η = ⊥.
(2) For all (ρ, η 0 , η 1 , r), the following set is admissible:
{(m 0 , m 1 ) | (ρ, η 0 [x→m 0 ], η 1 [x→m 1 ], r) |= γ(x)}.
These properties can be proved by a straightforward induction on the structure of C and γ. The soundness of the induction rule follows from the second property.
A General Construction
Our Kripke semantics of specifications presented in the previous section is in fact an instance of a general, abstract construction that allows one to interpret a specification logic with higher-order frame rules. In this section, we describe the general construction. The remaining part of the paper can be read and understood without reading this section, in which we assume some basic knowledge of categorical logic (see, e.g., [5] for a quick recap).
Before explaining our construction, we remind the reader of FM-cousins of monoid, preorder, Heyting algebra and complete Heyting algebra. For FM-sets A, B, C, we call an element a0 ∈ A, a function f : A × B → C or a relation r ⊆ A × B equivariant when they preserve the permutation action in the following sense: for all a ∈ A, b ∈ B and π ∈ perm,

(π · a0 = a0) ∧ (π · (f(a, b)) = f(π · a, π · b)) ∧ ((a, b) ∈ r ⇐⇒ (π · a, π · b) ∈ r).

An FM-monoid is an FM-set M with monoid operations (I ∈ M, * : M × M → M) such that I and * are equivariant, and an FM-preorder is an FM-set A with an equivariant preorder ⊑ on A. An FM-Heyting algebra is an FM-poset (A, ⊑) with operations ⊥, ⊤ ∈ A, and ⊔, ⊓, ⇒ : A × A → A, such that all of those operations are equivariant and (A, ⊑, ⊥, ⊤, ⊔, ⊓) forms a Heyting algebra. Finally, an FM-complete Heyting algebra is an FM-Heyting algebra (A, ⊑, ⊥, ⊤, ⊔, ⊓, ⇒) such that every finitely supported subset of A has a least upper bound and a greatest lower bound.

Our construction starts with an FM-monoid (M, I, * ) in which * is commutative. The FM-monoid (M, I, * ) generalizes the set of finitely supported binary relations r on heaps, where the monoid unit I is the singleton relation ([], []) of two empty heaps and the monoid operator * is the separating conjunction for relations. Intuitively, each m in M represents information about the internal resource invariants of modules, and the * operator of M is used to combine two pieces of information that describe disjoint resources. Throughout this section, we assume given a fixed FM-monoid (M, I, * ) with * commutative, and describe a construction over this FM-monoid.
First, we define a preorder ⊑ for M :
m ⊑ n ⇐⇒ ∃m ′ .m * m ′ = n.
Intuitively, m ⊑ n means that n is an extension of m with information about additional disjoint resources.

Lemma 6.1. (M, ⊑) is an FM-preorder.

It is well known that Kripke models of intuitionistic propositional logic are obtained by taking the upwards closed subsets of a preorder and that the upwards closed subsets form a complete Heyting algebra. Thus, our next step is to form such a model over M, but in the world of FM-sets. Hence we construct an FM-complete Heyting algebra L(M) whose underlying set L consists of finitely supported upwards closed subsets of M, and which is ordered by subset inclusion, denoted ⊑L. The Heyting operations for L are defined in the standard way: when M0, M1 ∈ L(M),

⊥ def= ∅    ⊤ def= M    M0 ⊔ M1 def= M0 ∪ M1    M0 ⊓ M1 def= M0 ∩ M1
M0 ⇒ M1 def= {m | ∀m′. (m ⊑ m′ ∧ m′ ∈ M0) =⇒ m′ ∈ M1}.

Lemma 6.2. (L(M), ⊑L, ⊥, ⊤, ⊔, ⊓, ⇒) is an FM-complete Heyting algebra.

The lattice L(M) has two interesting properties, which we used in our semantics of separation logic. The first property is that the ⇒ operator involves quantification over information about disjoint resources:

Lemma 6.3. An element m belongs to M0 ⇒ M1 if and only if ∀m′. m * m′ ∈ M0 =⇒ m * m′ ∈ M1.
The second property is about the operator that frames in information about disjoint resources. We define a binary operator − ⊗ − : L(M) × M → L(M) by

M0 ⊗ m def= {m′ | m′ * m ∈ M0}.

Lemma 6.4. The function − ⊗ − is well-defined, and it satisfies the following three properties:
(1) − ⊗ m commutes with ⇒ and all the existing least upper bounds or greatest lower bounds of subsets of L(M).
(2) (M0 ⊗ m) ⊗ m′ = M0 ⊗ (m * m′) for all M0 ∈ L(M) and m, m′ ∈ M.
(3) M0 is a subset of M0 ⊗ m, for every M0 ∈ L(M).

In our semantics of separation logic, we used this ⊗ operator to interpret invariant extension ϕ ⊗ P, and designed its proof rules, based on the general properties of ⊗ summarized in the above lemma.
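For instance, property (3) follows directly from upwards closure: if m′ ∈ M0, then m′ ⊑ m′ * m by the definition of ⊑, so m′ * m ∈ M0 since M0 is upwards closed, i.e., m′ ∈ M0 ⊗ m.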
Finally, we construct a hyperdoctrine FMSet(−, L(M)), which can be used to interpret the specification logic, including quantifiers and invariant extensions (i.e., ϕ ⊗ P).

Lemma 6.5. FMSet(−, L(M)) satisfies all the axioms for hyperdoctrines, thereby allowing the interpretation of intuitionistic predicate logic.

For each m ∈ M, consider the fibred endo-functor FMSet(−, − ⊗ m) : FMSet(−, L(M)) → FMSet(−, L(M)), which maps a predicate ϕ over X, that is, an equivariant function ϕ from X to L(M), to (− ⊗ m) ∘ ϕ.

Lemma 6.6. The fibred functor FMSet(−, − ⊗ m) preserves ⊥, ⊤, ⊔, ⊓, ⇒ in each fibre and commutes with quantifiers ∃ and ∀.

In summary, the previous two lemmas provide alternative proofs to large parts of Lemma 5.1 and Theorem 5.2. In the proof of the latter theorem in the previous section we omitted the detailed proof of soundness of the rules for predicate logic; it is a consequence of the above Lemma 6.5. Finally, we remark that the general construction actually gives us more than we use in the previous section: First, since we have a hyperdoctrine, we in fact have a model of higher-order specification logic in which one can also model quantification over specifications. Second, L(M) is in fact not only an FM-complete Heyting algebra but an FM-complete BI algebra. This means that we can have * and − * connectives also for specifications. We have not yet made use of these additional facts.
Properties of Semantic Quadruples
In this section, we prove two properties of semantic quadruples. The first clarifies the connection between our new interpretation of Hoare triples and the standard interpretation, and the second shows how our cps-style semantic quadruples are related to a more direct way of relating two commands.
First, we consider the relation between semantic quadruples and Hoare triples. Define an operator cps that cps-transforms a state transformer semantically:
cps : (Heap → (Heap + {err})⊥) → (Heap × Cont → O)
cps(c) def= λ(h, k). if c(h) ∉ {⊥, err} then k(c(h)) else c(h).
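Continuing the Haskell sketch from Section 4 (again ours, purely illustrative; the type Res and its constructor names are our own), the operator cps can be transcribed directly:

    -- Direct-style commands compute (Heap + {err}) lifted with bottom:
    data Res = ResHeap Heap | ResErr | ResBot

    -- cps as defined above: a proper result heap is passed to the
    -- continuation; error and divergence bypass the continuation.
    cps :: (Heap -> Res) -> Com
    cps c h k = case c h of
      ResHeap h' -> k h'
      ResErr     -> Err
      ResBot     -> Bot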
Proposition 7.1. For all p, q ⊆ Heap and all c ∈ Heap → (Heap + {err})⊥, the quadruple [eq(p)](cps(c), cps(c))[eq(q)] holds iff the two conditions below hold:

(1) for every h in p, either c(h) = ⊥ or c(h) ∈ q, hence c(h) cannot be err;
(2) for every h in p and h1 such that h#h1,
    (a) if c(h) = ⊥, then c(h • h1) = ⊥,
    (b) if c(h) ≠ ⊥, then c(h) • h1 is defined and equal to c(h • h1).
Note that the first condition is the usual meaning of Hoare triples, and the second is the locality condition of commands in separation logic restricted to heaps in p [18]. Since the locality condition merely expresses the parametricity of commands with respect to new heaps, the proposition indicates that our interpretation of triples is the usual one enhanced by an additional parametricity requirement.
Proof of Proposition 7.1. (⇒) Pick an arbitrary heap h in p. Let k be the continuation defined by
k(h) def = if (h ∈ q) then ⊥ else err .
Then, k[eq(q) → eq(G)]k and h[eq(p)]h. By the assumption on the validity of the quadruple, cps(c)(h, k)[eq(G)]cps(c)(h, k). By the definition of k, this relationship on cps(c) implies that cps(c)(h, k) = ⊥, which in turn gives
(c(h) = ⊥) ∨ (c(h) ∈ Heap ∧ k(c(h)) = ⊥).
The second disjunct of this disjunction is equivalent to c(h) ∈ q because k(h ′ ) = ⊥ ⇐⇒ h ′ ∈ q. So, the disjunction gives the first condition.
For the second condition, consider h, h 1 such that h ∈ p and h#h 1 . Let r be the relation {([], h 1 )} on heaps, and define three continuations k 0 , k 1 , k 2 as follows:
k0(h′) def= normal,
k1(h′) def= if h′ = c(h) then normal else ⊥,
k2(h′) def= if c(h) ∈ Heap ∧ h′ = c(h) • h1 then normal else ⊥.
By the definition of r and k i , we have that h[eq(p) * r](h • h 1 ), k 0 [eq(q) * r → eq(G)]k 0 , and k 1 [eq(q) * r → eq(G)]k 2 .
To see why the third relationship holds, note
that if h ′ 1 [eq(q) * r]h ′ 2 , then h ′ 1 • h 1 is defined and h ′ 2 = h ′ 1 • h 1 . Thus, h ′ 1 = c(h) holds precisely when c(h) ∈ Heap ∧ h ′ 2 = c(h) • h 1 holds. This implies that k 1 (h ′ 1 ) = normal iff k 2 (h ′ 2 ) = normal .
Now, by the assumption on the validity of the quadruple, we have that
cps(c)(h, k 0 )[eq(G)]cps(c)(h • h 1 , k 0 ) and cps(c)(h, k 1 )[eq(G)]cps(c)(h • h 1 , k 2 ).
The first conjunct about k0 implies that if c(h) = ⊥, then c(h • h1) = ⊥, and the second conjunct about k1, k2 implies that if c(h) ≠ ⊥, then c(h • h1) = c(h) • h1.
(⇐) Consider a relation r on heaps and pick heaps h1, h2 and continuations k1, k2 such that h1 [eq(p) * r] h2 and k1 [eq(q) * r → eq(G)] k2. Then, there exist two splittings h′1 • h′′1 = h1 and h′2 • h′′2 = h2 such that h′1 = h′2 ∈ p and h′′1 [r] h′′2. If c(h1) = ⊥, then c(h′1) = ⊥ by the condition (2-b) of the assumption, and hence c(h2) = ⊥ by the condition (2-a). Thus, in this case, we have cps(c)(h1, k1) = cps(c)(h2, k2) = ⊥, so cps(c)(h1, k1) [eq(G)] cps(c)(h2, k2), as desired. Otherwise, i.e., if c(h1) ≠ ⊥, then c(h′1) ≠ ⊥ by the condition (2-a). Thus, by the condition (2-b), we have that c(h1) = c(h′1) • h′′1 and c(h2) = c(h′1) • h′′2. Since c(h′1) ∈ q by the condition (1), c(h1) = c(h′1) • h′′1 [eq(q) * r] c(h′1) • h′′2 = c(h2)
. This implies cps(c)(h 1 , k 1 )[eq(G)]cps(c)(h 2 , k 2 ), as desired.
Next, we relate our cps-style notion of semantic quadruples to the direct-style alternative. The notion underlying this relationship is the observation closure, denoted (−) ⊥ ⊥ . For each FM-cpo D and relation r ⊆ D × D, we define two relations, r ⊥ on [D → O ] and r ⊥ ⊥ on D, as follows:
k1 [r⊥] k2 def⇔ ∀d1, d2 ∈ D. (d1 [r] d2 =⇒ k1(d1) [eq(G)] k2(d2)),
d1 [r⊥⊥] d2 def⇔ ∀k1, k2 ∈ [D → O]. (k1 [r⊥] k2 =⇒ k1(d1) [eq(G)] k2(d2)).
Operator (−) ⊥ dualizes a relation on D to one on observations on D, and (−) ⊥ ⊥ closes a given relation r under observations.
Proposition 7.2. Let r, s be relations in R, and consider functions c1, c2 from Heap to (Heap + {err})⊥. A quadruple [r](cps(c1), cps(c2))[s] holds, iff

∀(r′, h1, h2). h1 [r * r′] h2 =⇒ (c1(h1) = c2(h2) = ⊥ ∨ c1(h1) [(s * r′)⊥⊥] c2(h2)).
This proposition shows that our semantic quadruples are close to what one might expect at first for relating two commands parametrically. The only difference is that our quadruple always closes the post-relation s * r ′ under observations.
Proof of Proposition 7.2. (⇒) Consider r ′ , h 1 , h 2 such that h 1 [r * r ′ ]h 2 . We first show that
c 1 (h 1 ) = ⊥ ⇐⇒ c 2 (h 2 ) = ⊥.
Let k be the continuation λh ′ .normal . Then, k[s * r ′ → eq(G)]k. By the assumption on the quadruple for cps(c 1 ), cps(c 2 ), we have that cps(c 1 )(h 1 , k)[eq(G)]cps(c 2 )(h 2 , k).
This relationship implies that c 1 (h 1 ) = ⊥ ⇐⇒ c 2 (h 2 ) = ⊥, because c i (h i ) = ⊥ ⇐⇒ cps(c i )(h i , k) = ⊥ by the choice of k.
Next, we prove that if c1(h1) ≠ ⊥ or c2(h2) ≠ ⊥, then c1(h1) [(s * r′)⊥⊥] c2(h2). By what we have just shown, c1(h1) = ⊥ iff c2(h2) = ⊥. We will assume that neither c1(h1) nor c2(h2) is ⊥. Take two continuations k1, k2 such that k1 [(s * r′)⊥] k2, i.e., k1 [s * r′ → eq(G)] k2. Since the quadruple [r](cps(c1), cps(c2))[s] holds by assumption and h1 [r * r′] h2, we have that cps(c1)(h1, k1) [eq(G)] cps(c2)(h2, k2). Since both c1(h1) and c2(h2) are different from ⊥, the relationship is equivalent to
k 1 (c 1 (h 1 ))[eq(G)]k 2 (c 2 (h 2 )).
We have just shown that
c 1 (h 1 )[(s * r ′ ) ⊥ ⊥ ]c 2 (h 2 ).
(⇐) Pick an arbitrary relation r ′ , heaps h 1 , h 2 and continuations k 1 , k 2 such that h 1 [r * r ′ ]h 2 and k 1 [s * r ′ → eq(G)]k 2 (i.e., k 1 [(s * r ′ ) ⊥ ]k 2 .) By the assumption of this if direction,
either c 1 (h 1 ) = c 2 (h 2 ) = ⊥ or c 1 (h 1 )[(s * r ′ ) ⊥ ⊥ ]c 2 (h 2 )
. In the first case,
cps(c 1 )(h 1 , k 1 ) = ⊥ [eq(G)] ⊥ = cps(c 2 )(h 2 , k 2 ),
and in the second case, both c 1 (h 1 ) and c 2 (h 2 ) are in Heap, so that
cps(c 1 )(h 1 , k 1 ) = k 1 (c 1 (h 1 )) [eq(G)] k 2 (c 2 (h 2 )) = cps(c 2 )(h 2 , k 2 ).
The conclusion follows from these two relationships.
Abstraction Theorem
The abstraction theorem below formalizes that well-specified programs (specified in separation logic with implicit quantification over internal resource invariants by frame rules) behave relationally parametrically in internal resource invariants. The easiest way to understand this intuition may be from the corollary following the theorem.
Theorem 8.1 (Abstraction Theorem). If the specification ∆ | Γ ⊢ ϕ is provable in our logic, then it is valid, i.e., (ρ, η0, η1, r) |= ϕ for all (ρ, η0, η1, r).

Some readers might feel that it is too much to call the abstraction theorem a "theorem", since it really is a trivial corollary of the soundness theorem; but that is just as it should be: the semantics was defined to achieve that.

Corollary 8.2. Suppose that ∆ | Γ, x : com ⊢ {P1}x{Q1} ⇒ {P}M{Q} is provable. Then, for all ρ ∈ [[∆]], all c0, c1 ∈ [[com]] and all r ∈ R,

[eq([[P1]]ρ) * r](c0, c1)[eq([[Q1]]ρ) * r] =⇒ [eq([[P]]ρ) * r]([[M]]ρ,[x→c0], [[M]]ρ,[x→c1])[eq([[Q]]ρ) * r].

Proof of Corollary 8.2. Define environments η0, η1 and heap sets p, p1, q, q1 as follows:

η0 = [x→c0], η1 = [x→c1], and (p1, q1, p, q) = ([[P1]]ρ, [[Q1]]ρ, [[P]]ρ, [[Q]]ρ).

By Theorem 8.1, we have, for any r, that (ρ, η0, η1, r) |= {P1}x{Q1} ⇒ {P}M{Q}. From this, we derive the conclusion of the corollary:

(ρ, η0, η1, r) |= {P1}x{Q1} ⇒ {P}M{Q}
=⇒ (∀s ∈ R. (ρ, η0, η1, r * s) |= {P1}x{Q1} =⇒ (ρ, η0, η1, r * s) |= {P}M{Q})
=⇒ ((ρ, η0, η1, r) |= {P1}x{Q1} =⇒ (ρ, η0, η1, r) |= {P}M{Q})
=⇒ ([eq(p1) * r](c0, c1)[eq(q1) * r] =⇒ [eq(p) * r]([[M]]η0, [[M]]η1)[eq(q) * r]).

Intuitively, x corresponds to a module with a single operation, and M a client of the module. This corollary says that if we prove a property of the client M, assuming only an abstract external specification {P1}x{Q1} of the module, the client cannot tell apart two different implementations c0, c1 of the module, as long as c0, c1 have identical external behavior. The four instances of eq in the corollary formalize that the external behaviors of c0, c1 are identical and that the client M behaves the same externally regardless of whether it is used with c0 or c1. The relation r is a simulation relation for internal resource invariants of c0 and c1.
∀j. {∃i. c → i, - * i → -, j} inc0 {∃i. c → i, - * i → -, (j+1)}
∀j. {∃i. c → i, - * i → -, j * g → -} read0 {∃i. c → i, - * i → -, j * g → -, j}
inc0 ≡ let i=c.0 in (let j=i.1 in i.1 := j+1)
read0 ≡ let i=c.0 in (let j=i.1 in g.1 := j)

∀j. {∃i. c → i, - * i → -, j} inc1 {∃i. c → i, - * i → -, (j−1)}
∀j. {∃i. c → i, - * i → -, j * g → -} read1 {∃i. c → i, - * i → -, j * g → -, (−j)}
inc1 ≡ let i=c.0 in (let j=i.1 in i.1 := j−1)
read1 ≡ let i=c.0 in (let j=i.1 in g.1 := −j)

∆ | Γ ⊢ ({emp}inc{emp} ∧ {g → -}read{g → -}) ⇒ {g → -}inc; read{g → -}
(where ∆ ≡ {g, c} and Γ ≡ {inc : com, read : com})

Figure 8: Two Implementations of a Counter and a Simple Client
Examples
Our first example is the two implementations of a counter in the introduction and the simple client (inc; read) in Example 3.1. We remind the reader of the implementations and the specification of the client in Figure 8 (here we use the formally correct 0 and 1 for the fields named data and next in the introduction for readability). The figure also shows the concrete specifications of the implementations. Note that the concrete specifications describe that both implementations use an internal cell c.0 to keep the value of the counter, and that the second implementation stores the negated value of the counter in this internal cell.
Pick a location l ∈ Loc and an environment ρ ∈ [[{c, g}]] with ρ(c) = l, and define f 0 , f 1 , g 0 , g 1 , b 0 , b 1 as follows:
f i def = [[inc i ]] ρ,[] , g i def = [[read i ]] ρ,[] , b i def = [[inc; read]] ρ,[inc→f i ,read→g i ] .
Now, by the Abstraction Theorem, we get that, for all r, if

[eq([[emp]]ρ) * r](f0, f1)[eq([[emp]]ρ) * r] and [eq([[g → -]]ρ) * r](g0, g1)[eq([[g → -]]ρ) * r],

then [eq([[g → -]]ρ) * r](b0, b1)[eq([[g → -]]ρ) * r].
∀i, v. {i → -, v * k → -} put0(i) {k → -, v}
∀j, v. {j → - * k → -, v} get0(j) {j → -, v * k → -, v}
put0 ≡ λi. let v = i.1 in (free(i); k.1 := v)
get0 ≡ λj. let v = k.1 in j.1 := v

∀i, v. {i → -, v * (∃k′. k → k′, - * k′ → -)} put1(i) {∃k′. k → k′, - * k′ → -, v}
∀j, v. {j → - * (∃k′. k → k′, - * k′ → -, v)} get1(j) {j → -, v * (∃k′. k → k′, - * k′ → -, v)}
put1 ≡ λi. let k′=k.0 in (free(k′); k.0:=i)
get1 ≡ λj. let k′=k.0 in let v=k′.1 in j.1:=v

∆ | Γ ⊢ (∀i. {i → -}put(i){emp}) ∧ (∀j. {j → -}get(j){j → -}) ⇒ {j → -}c{j → -}
(where ∆ ≡ {j, k} and Γ ≡ {put : val → com, get : val → com})

c ≡ let i=new in (i.1:=5; put(i); get(j))
c ≡ let i=new in (i.1:=5; put(i); get(j)) Figure 9: Two Implementations of a Buffer and a Simple Client
We now sketch a consequence of this result; for brevity we allow ourselves to be a bit informal. Let r be the following simulation relation between the two implementations:

r def= { (h0, h1) | ∃i ∈ Loc. ∃n ∈ Int. ∃v0, v1, v′0, v′1 ∈ Val. i ≠ l ∧ h0 = [c→i, v0] • [i→v′0, n] ∧ h1 = [c→i, v1] • [i→v′1, −n] }.

Suppose that b0 and b1 are run on heaps related by eq([[g → -]]ρ) * r and that both terminate normally, producing final heaps h′0 and h′1. We then conclude that h′0 will be of the form h′00 • h′01 and that h′1 will be of the form h′10 • h′11 with (h′01, h′11) ∈ r and with (h′00, h′10) ∈ eq([[g → -]]ρ). Thus the relation between the internal resource invariants is maintained and, for the visible part, b0 and b1 both produce the same heap with exactly one cell.
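Concretely (our instantiation of r): for any location i ≠ l, the heaps [c→i, 0] • [i→0, 5] and [c→i, 0] • [i→0, −5] are r-related; these are exactly the internal states of the two counter implementations after initializing the counter value to 5, where the two implementations store 5 and −5, respectively, in the hidden cell i.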
The next example is a buffer of size one, and it illustrates the ownership transfer. Our buffer has operations put and get. Intuitively, put(i) stores the value found at i in the buffer, and get(j) retrieves the value stored in the buffer and stores it at j. We assume the following abstract specifications of this mutable abstract data type:
(∀i. {i → -}put(i){emp}) and (∀j. {j → -}get(j){j → -}).

Figure 9 shows two implementations of the buffer and a client, as well as the concrete specifications for the implementations and the specification for the client. Note that the first implementation just uses one cell for the buffer and that the implementation follows the intuitive description given above. The second implementation uses two cells for the buffer. The additional cell is used to hold the cell pointed to by i itself. Note that this additional cell is transferred from the caller of put1(i), i.e., a client of the buffer. Finally, the specification of the client describes the safety property of c, assuming the abstract specification for the buffer. By the Abstraction Theorem, this provable specification implies that the client behaves the same no matter whether we run it with the first or second implementation of the buffer. To see this, let l be ρ(k) and define a simulation relation r between the two implementations:
r def= { (h0, h1) | ∃l′ ∈ Loc. ∃n, v0, v1, v′1 ∈ Val. l ≠ l′ ∧ h0 = [l→v0, n] ∧ h1 = [l→l′, v1] • [l′→v′1, n] }.

Then, by the Abstraction Theorem, the instantiations of the client c with the two implementations map eq([[j → -]]ρ) * r-related heaps to eq([[j → -]]ρ) * r-related heaps, which means that they behave the same for cell j and preserve the r relation for the internal resource invariants of the two implementations.
Conclusion and Future Work
We have succeeded in defining the first relationally parametric model of separation logic. The model captures the informal idea that well-specified clients of mutable abstract data types should behave parametrically in the internal resource invariants of the abstract data type.
We see our work as a first step towards devising a logic for reasoning about mutable abstract data types, similar in spirit to Abadi and Plotkin's logic for parametricity [16,6]. To this end, we also expect to make use of the ideas of relational separation logic in [21] for reasoning about relations between different programs syntactically. The logic should include a link between separation logic and relational separation logic so that one could get a syntactic representation of the semantic Abstraction Theorem and its corollary presented above.
One can also think of our work as akin to the O'Hearn-Reynolds model for idealized algol based on translation into a relationally parametric polymorphic linear lambda calculus [12]. In loc. cit. O'Hearn and Reynolds show how to provide a better model of stack variables for idealized algol by making a formal connection to parametricity. Here we provide a better model for the more unwieldy world of heap storage by making a formal connection to parametricity.
As mentioned in Section 3, the conjunction rule is not sound in our model. This is a consequence of our interpretation, which "bakes-in" the frame rule by quantifying over all relations r ′ . Indeed, using the characterization given by Proposition 7.2, one sees that for the conjunction rule to hold, we would need something like (r 1 ∧ r 2 ) * r = (r 1 * r) ∧ (r 2 * r) to hold. We "bake-in" the frame rule in order to get a model that validates a wide range of higher-order frame rules and it is known that already for second-order frame rules, the conjunction rule is not sound without some restrictions on the predicates involved [14]. We don't know whether it is possible to develop a parametric model in which the conjunction rule is sound.
Future work further includes developing a parametric model for the higher-order version of separation logic with explicit quantification over internal resource invariants. Finally, we hope that ideas similar to those presented here can be used to develop parametric models for other recent approaches to mutable abstract data types (e.g., [2]).
Figure 1: Counter Modules. Two implementations of a counter:
  init₀(i) ≡ c.next := i
  inc₀ ≡ let i = c.next in let v = i.data in i.data := v+1
  read₀ ≡ let i = c.next in let v = i.data in g.data := v
  init₁(i) ≡ let v = i.data in i.data := −v; c.next := i
  inc₁ ≡ let i = c.next in let v = i.data in i.data := v−1
  read₁ ≡ let i = c.next in let v = i.data in g.data := −v

From {P₁ * I} k {Q₁ * I} one derives {P * I} c(k) {Q * I}, and combining this with the specification {P₁ * I} m {Q₁ * I} for m yields {P * I} c(m) {Q * I}.

Figure 2: Typing Rules for Expressions and Terms. An allocated cell at address i is read by let j = i.f in N and updated by i.f := E; the cell i is deallocated by free(i).

Example 3.1. Recall the counter example from the introduction and consider the following simple client: let i = new in i.0 := 5; init(i); inc; read, whose body consists of inc; read. The client initializes the value of the counter to 5, increases the counter, and reads the value of the counter.

Figure 3: Sample Proof Rules for Hoare Triples.

Figure 4: Interpretation of Types and Typing Contexts.

Figure 5: Interpretation of Expressions.

Figure 6: Interpretation of Terms.

Figure 7: Relational Interpretation of Separation Logic.
Lemma 5.1. The axioms for ⊗ are sound.

Theorem 5.2. All the proof rules are sound.

eq([[P]]ρ) * r * s ⊆ eq([[P′]]ρ) * r * s, and [eq([[Q]]ρ) * r * s → eq(G)] ⊆ [eq([[Q′]]ρ) * r * s → eq(G)]. Thus, h₀ [eq([[P′]]ρ) * r * s] h₁ and k₀ [eq([[Q′]]ρ) * r * s → eq(G)] k₁. These two relationships imply the required [[M]]ρ,η₀(h₀, k₀) eq(G) [[M]]ρ,η₁(h₁, k₁), because (ρ, η₀, η₁, r) satisfies {P′}M{Q′}.

By supposition, (ρ, η₀, η₁, r) satisfies {P}M{Q}. Thus, the relationships (5.3) and (5.4) imply the required [[M]]ρ,η₀(h₀, k₀) eq(G) [[M]]ρ,η₁(h₁, k₁). The eighth case is the rule for the conditional statement. Suppose that (ρ, η₀, η₁, r) satisfies {P ∧ E = F}M{Q} and {P ∧ E ≠ F}N{Q}. Consider h₀, h₁ ∈ Heap, s ∈ R, and k₀, k₁ ∈ Cont, such that
An FM-monoid is an FM-set M with monoid operations (I ∈ M, * : M × M → M ) such that I and * are equivariant, and an FM-preorder is an FM-set A with an equivariant preorder ⊑ on A. An FM-Heyting algebra is an FM-poset (A, ⊑) with operations ⊥, ⊤ ∈ A, and ⊔, ⊓, ⇒: A × A → A, such that all of those operations are equivariant and (A, ⊑, ⊥, ⊤, ⊔, ⊓) forms a Heyting algebra. Finally, an FM-complete Heyting algebra is an FM-Heyting algebra (A, ⊑, ⊥, ⊤, ⊔, ⊓, ⇒) such that every finitely supported subset of A has a least upper bound and a greatest lower bound.
Lemma 6.1. (M, ⊑) is an FM-preorder.

Lemma 6.2. (L(M), ⊑_L, ⊥, ⊤, ⊔, ⊓, ⇒) is an FM-complete Heyting algebra.

Lemma 6.4. The function − ⊗ − is well-defined, and it satisfies the following three properties: (1) − ⊗ m commutes with ⇒ and all the existing least upper bounds or greatest lower bounds of subsets of L(M). (2) (M₀ ⊗ m) ⊗ m′ = M₀ ⊗ (m * m′) for all M₀ ∈ L(M) and m, m′ ∈ M. (3) M₀ is a subset of M₀ ⊗ m, for every M₀ ∈ L(M).

Lemma 6.5. FMSet(−, L(M)) satisfies all the axioms for hyperdoctrines, thereby allowing the interpretation of intuitionistic predicate logic. For each m ∈ M, consider the fibred endo-functor FMSet(−, − ⊗ m) : FMSet(−, L(M)) → FMSet(−, L(M)), which maps a predicate ϕ over X, that is, an equivariant function ϕ from X to L(M), to (− ⊗ m) ∘ ϕ.

Lemma 6.6. The fibred functor FMSet(−, − ⊗ m) preserves ⊥, ⊤, ⊔, ⊓, ⇒ in each fibre and commutes with the quantifiers ∃ and ∀.

Proposition 7.1. For all p, q ⊆ Heap and all c ∈ Heap → (Heap + {err})⊥, the quadruple [eq(p)](cps(c), cps(c))[eq(q)] holds iff the two conditions below hold:

… by condition (2-b) of the assumption, and c(h′₂) = ⊥ by condition (2-a) of the assumption. Thus, in this case, we have cps(c)(h₁, k₁) = cps(c)(h₂, k₂) = ⊥ and cps(c)(h₁, k₁) [eq(G)] cps(c)(h₂, k₂), as desired. Otherwise, i.e., if c(h₁) ≠ ⊥, then c(h′₁) ≠ ⊥ by condition (2-a). Thus, by condition (2-b), we have that c(h…

Proposition 7.2. Let r, s be relations in R. Consider functions c₁, c₂ from Heap to (Heap + {err})⊥. A quadruple [r](cps(c₁), cps(c₂))[s] holds, iff
Figure 8: Two Implementations of a Counter and a Simple Client (with ρ ranging over {g, c} and Γ ≡ {inc : com, read : com}).

Proof of Corollary 8.2. Define environments η₀, η₁ and heap sets p, p₁, q, q₁ as follows: η₀ = [x→c₀], η₁ = [x→c₁], and (p₁, q₁, p, q) = ([[P₁]]ρ, [[Q₁]]ρ, [[P]]ρ, [[Q]]ρ). By Theorem 8.1, we have, for any r, that (ρ, η₀, η₁, r) |= {P₁}x{Q₁} ⇒ {P}M{Q}. From this, we derive the conclusion of the proposition:

(ρ, η₀, η₁, r) |= {P₁}x{Q₁} ⇒ {P}M{Q}
  ⟹ (∀s ∈ R. (ρ, η₀, η₁, r * s) |= {P₁}x{Q₁} ⟹ (ρ, η₀, η₁, r * s) |= {P}M{Q})
  ⟹ ((ρ, η₀, η₁, r) |= {P₁}x{Q₁} ⟹ (ρ, η₀, η₁, r) |= {P}M{Q})
  ⟹ ([eq(p₁) * r](c₀, c₁)[eq(q₁) * r] ⟹ [eq(p) * r]([[M]]η₀, [[M]]η₁)[eq(q) * r]).

[eq([[emp]]ρ) * r](f₀, f₁)[eq([[emp]]ρ) * r] ∧ [eq([[g→-]]ρ) * r](g₀, g₁)[eq([[g→-]]ρ) * r]
  ⇒ [eq([[g→-]]ρ) * r](b₀, b₁)[eq([[g→-]]ρ) * r].   (9.1)

Then one can verify that the antecedent of the implication in (9.1) holds, and thus conclude that [eq([[g→-]]ρ) * r](b₀, b₁)[eq([[g→-]]ρ) * r] holds. Take (h₀, h₁) ∈ eq([[g→-]]ρ) * r, and denote the result of running b₀ on h₀ by h′₀, and the result of running b₁ on h₁ by h′₁.

Pick ρ ∈ [[{j, k}]], and define f₀, f₁, g₀, g₁, c₀, c₁ by f_i ≝ [[put_i]]ρ,[], g_i ≝ [[get_i]]ρ,[], c_i ≝ [[c]]ρ,[put→f_i, get→g_i]. Our Abstraction Theorem gives that, for all r,

(∀v ∈ Val. [eq([[i→-]]ρ[i→v]) * r](f₀(v), f₁(v))[eq([[emp]]ρ[i→v]) * r])
  ∧ (∀v ∈ Val. [eq([[j→-]]ρ[j→v]) * r](g₀(v), g₁(v))[eq([[j→-]]ρ[j→v]) * r])
  ⇒ [eq([[j→-]]ρ) * r](c₀, c₁)[eq([[j→-]]ρ) * r].   (9.2)

For this relation r, one can verify that the antecedent of the implication in (9.2) holds, and thus conclude that [eq([[j→-]]ρ) * r](c₀, c₁)[eq([[j→-]]ρ) * r] holds. This quadruple says, in particular, that c₀ and c₁ map eq([[j→-]]ρ) * r-related heaps to eq([[j→-]]ρ) * r-related heaps.

[r₁](cps(c₁), cps(c₂))[s₁] ∧ [r₂](cps(c₁), cps(c₂))[s₂] ⟹ [r₁ ∧ r₂](cps(c₁), cps(c₂))[s₁ ∧ s₂]
The semantics of the logic is defined by the satisfaction relation |=_{∆|Γ} between [[∆]] × [[Γ]]² × R and Specs, such that |=_{∆|Γ} satisfies Kripke monotonicity:

(∀i. {P}M{Q}) ⇒ {∃i.P}M{∃i.Q}. Consider (ρ, η₀, η₁, r) that satisfies ∀i. {P}M{Q}. We should show that (ρ, η₀, η₁, r) satisfies {∃i.P}M{∃i.Q}, i.e.,

[eq([[∃i.P]]ρ) * r]([[M]]ρ,η₀, [[M]]ρ,η₁)[eq([[∃i.Q]]ρ) * r].

Pick arbitrary h₀, h₁ ∈ Heap, k₀, k₁ ∈ Cont, and s ∈ R such that

h₀ [eq([[∃i.P]]ρ) * r * s] h₁ and k₀ [(eq([[∃i.Q]]ρ) * r * s) → eq(G)] k₁.

By the definition of eq, [[∃i.P]] and [[∃i.Q]], these two conjuncts imply the existence of v and ρ′ such that …

…5 above. The other case [[E]]ρ ≠ [[F]]ρ can be proved similarly, so it is omitted here. The ninth case is the rule for sequential composition. Suppose that (ρ, η₀, η₁, r) satisfies {P}M{P₀} and {P₀}N{Q}. Consider h₀, h₁ ∈ Heap, s ∈ R, and k₀, k₁ ∈ Cont, such that
Theorem 8.1 (Abstraction Theorem). If ∆ | Γ ⊢ ϕ is provable in the logic, then for all (ρ, η₀, η₁, r) ∈ [[∆]] × [[Γ]]² × R, we have that (ρ, η₀, η₁, r) |= ϕ.

Proof. By Theorem 5.2, we get that ∆ | Γ ⊢ ϕ is valid, which is just what the conclusion expresses.

Corollary 8.2. Suppose that ∆ | x : com ⊢ {P₁}x{Q₁} ⇒ {P}M{Q} is provable in the logic. Then for all (ρ, c₀, c₁, r), if [eq([[P₁]]ρ) * r](c₀, c₁)[eq([[Q₁]]ρ) * r] holds, then [eq([[P]]ρ) * r]([[M]][x→c₀], [[M]][x→c₁])[eq([[Q]]ρ) * r] holds as well.
We omit separating implication − * to simplify presentation.
An infinite product of FM-cpos is not necessarily an FM-cpo.
A relation r is finitely supported iff there is L ⊆ fin Loc s.t. for every permutation π, if π(l) = l for all l ∈ L, then ∀h0, h1. h0[r]h1 ⇐⇒ (π · h0)[r](π · h1).
AcknowledgmentsWe would like to thank Nick Benton, Jacob Thamsborg and the anonymous referees for their insightful comments. This work was supported by FUR (FIRST). Yang was supported also by EPSRC.
References

[1] A. Banerjee and D. Naumann. Ownership Confinement Ensures Representation Independence for Object-oriented Programs. Journal of the ACM, 52(6):894-960, 2005.
[2] M. Barnett and D. Naumann. Towards imperative modules: Reasoning about invariants and sharing of mutable state. In Proc. of LICS'04, 2004.
[3] N. Benton. Abstracting Allocation: The New new Thing. In Proc. of CSL'06, 2006.
[4] N. Benton and B. Leperchey. Relational reasoning in a nominal semantics for storage. In Proc. of TLCA'05, pages 88-101, Nara, Japan, 2005.
[5] B. Biering, L. Birkedal, and N. Torp-Smith. BI-hyperdoctrines and higher order separation logic. In Proc. of ESOP'05, pages 233-247, Edinburgh, UK, 2005.
[6] L. Birkedal and R. Møgelberg. Categorical models for Abadi-Plotkin's logic for parametricity. Mathematical Structures in Computer Science, 15:709-772, 2005.
[7] L. Birkedal, N. Torp-Smith, and J. C. Reynolds. Local reasoning about a copying garbage collector. In Proc. of POPL'04, pages 220-231, Venice, Italy, 2004.
[8] L. Birkedal, N. Torp-Smith, and H. Yang. Semantics of separation-logic typing and higher-order frame rules. In Proc. of LICS'05, pages 260-269, 2005.
[9] L. Birkedal and H. Yang. Relational Parametricity and Separation Logic. In Proc. of FOSSACS'07, pages 93-107, 2007.
[10] I. Mijajlović, N. Torp-Smith, and P. O'Hearn. Refinement and separation context. In Proc. of FSTTCS'04, pages 421-433, Chennai, India, 2004.
[11] I. Mijajlović and H. Yang. Data refinements with low-level pointer operations. In Proc. of APLAS'05, pages 19-36, Tsukuba, Japan, 2005.
[12] P. O'Hearn and J. Reynolds. From Algol to polymorphic linear lambda-calculus. Journal of the ACM, 47(1):167-223, 2000.
[13] P. W. O'Hearn, H. Yang, and J. C. Reynolds. Local reasoning about programs that alter data structures. In Proc. of CSL'01, pages 1-19, Paris, France, 2001.
[14] P. W. O'Hearn, H. Yang, and J. C. Reynolds. Separation and information hiding. In Proc. of POPL'04, pages 268-280, Venice, Italy, 2004.
[15] M. Parkinson and G. Bierman. Separation logic and abstraction. In Proc. of POPL'05, pages 247-258, Long Beach, CA, USA, 2005.
[16] G. Plotkin and M. Abadi. A logic for parametric polymorphism. In Proc. of TLCA'93, pages 361-375, Utrecht, Netherlands, 1993.
[17] J. Reynolds. Types, abstraction, and parametric polymorphism. Information Processing, 83:513-523, 1983.
[18] J. C. Reynolds. Separation logic: A logic for shared mutable data structures. In Proc. of LICS'02, pages 55-74, Copenhagen, Denmark, 2002.
[19] M. R. Shinwell and A. M. Pitts. On a monadic semantics for freshness. Theoretical Computer Science, 342:28-55, 2005.
[20] N. Torp-Smith. Advances in Separation Logic - A Study of Logics for Reasoning about Stateful Programs. PhD thesis, IT University of Copenhagen, 2005.
[21] H. Yang. Relational separation logic. Theoretical Computer Science, 2005. To appear.
| [] |
[
"Multiple-Splitting Projection Test for High-Dimensional Mean Vectors",
"Multiple-Splitting Projection Test for High-Dimensional Mean Vectors"
] | [
"Wanjun Liu \nLinkedIn\n\n",
"Xiufan Yu \nUniversity of Notre Dame\n\n",
"Runze Li \nPennsylvania State University\n\n"
] | [
"LinkedIn\n",
"University of Notre Dame\n",
"Pennsylvania State University\n"
] | [] | We propose a multiple-splitting projection test (MPT) for one-sample mean vectors in high-dimensional settings. The idea of projection test is to project high-dimensional samples to a 1-dimensional space using an optimal projection direction such that traditional tests can be carried out with projected samples. However, estimation of the optimal projection direction has not been systematically studied in the literature. In this work, we bridge the gap by proposing a consistent estimation via regularized quadratic optimization. To retain type I error rate, we adopt a data-splitting strategy when constructing test statistics. To mitigate the power loss due to data-splitting, we further propose a test via multiple splits to enhance the testing power. We show that the p-values resulted from multiple splits are exchangeable. Unlike existing methods which tend to conservatively combine dependent p-values, we develop an exact level α test that explicitly utilizes the exchangeability structure to achieve better power.Numerical studies show that the proposed test well retains the type I error rate and is more powerful than state-of-the-art tests. | null | [
"https://arxiv.org/pdf/2110.15480v2.pdf"
] | 240,288,807 | 2110.15480 | 56885ac5d88b7431e9c86b870fad0fd8552fa145 |
Multiple-Splitting Projection Test for High-Dimensional Mean Vectors
April 17, 2022
Wanjun Liu
LinkedIn
Xiufan Yu
University of Notre Dame
Runze Li
Pennsylvania State University
Multiple-Splitting Projection Test for High-Dimensional Mean Vectors
April 17, 2022. Keywords: Exchangeable p-values; High-dimensional mean tests; Multiple data-splitting; Optimal projection direction; Regularized quadratic optimization
We propose a multiple-splitting projection test (MPT) for one-sample mean vectors in high-dimensional settings. The idea of projection test is to project high-dimensional samples to a 1-dimensional space using an optimal projection direction such that traditional tests can be carried out with projected samples. However, estimation of the optimal projection direction has not been systematically studied in the literature. In this work, we bridge the gap by proposing a consistent estimation via regularized quadratic optimization. To retain type I error rate, we adopt a data-splitting strategy when constructing test statistics. To mitigate the power loss due to data-splitting, we further propose a test via multiple splits to enhance the testing power. We show that the p-values resulted from multiple splits are exchangeable. Unlike existing methods which tend to conservatively combine dependent p-values, we develop an exact level α test that explicitly utilizes the exchangeability structure to achieve better power.Numerical studies show that the proposed test well retains the type I error rate and is more powerful than state-of-the-art tests.
Introduction
Hypothesis testing on mean vectors is a fundamental problem in statistical inference theory and attracts considerable interest in numerous scientific applications. For example, neuroscientists make inferences on the average signals of fMRI data to monitor brain activities and diagnose abnormal tissues (Ginestet et al., 2017). Geneticists analyze gene expression levels to understand the mechanism of how genes are related to diseases (Wang et al., 2015). In these applications, the data dimension p is typically comparable with or much larger than sample size n, making traditional tests ineffective or practically infeasible.
In this work, we study the problem of testing whether a population mean µ equals some known vector µ₀ in the high-dimensional regime where p > n. Without loss of generality, we set µ₀ = 0 throughout the paper. To formally formulate the problem, let X = (x₁, . . . , xₙ)⊤ be a random sample from a p-dimensional population x with mean µ and covariance Σ. Of interest is to test

H₀: µ = 0 versus H₁: µ ≠ 0.   (1.1)

The Hotelling's T² test has been well studied when p < n and p is fixed. As p exceeds n, the sample covariance matrix becomes singular and hence T² is not well-defined. Even in the case p < n, the testing power of T² is largely defective if p/n → c ∈ (0, 1) (Bai and Saranadasa, 1996).
Three types of tests have been developed in efforts to handle the high-dimensional challenge. The first type is quadratic-form test, which replaces the singular sample covariance matrix with an invertible matrix (e.g., identity matrix) (Bai and Saranadasa, 1996;Chen et al., 2011;Chen and Qin, 2010). These tests tend to neglect the dependence among covariates and may suffer from low power when covariates are strongly correlated. The second type is known as extreme-type test, which utilizes the extreme value of a sequence of marginal test statistics, see Cai et al. (2014); Zhong et al. (2013). Such extreme-type statistics typically converge to some extreme value distribution and are generally disadvantaged by slow convergence, making it hard to control the type I error when n is small. The third type is projection test (Lopes et al., 2011;Huang, 2015;Liu and Li, 2020;Li and Li, 2021), which maps the high-dimensional sample to a low-dimensional space, and subsequently applies traditional methods (e.g., Hotelling's T 2 ) to the projected sample. Intuitively, the projection procedure seeks to transform the data in such a way that the dimension is reduced, while the statistical distance between H 0 and H 1 is mostly preserved through the transformed distributions.
Recently, Huang (2015) proved that the optimal choice of projection direction is Σ⁻¹µ. To facilitate a data-driven decision regarding the projection direction, Huang (2015) also proposed a projection test based on a data-splitting procedure: half of the sample is employed to estimate the optimal projection direction, while the other half is used to perform the test. However, there are two main drawbacks to this data-splitting projection test. First, a ridge-type estimator is used to estimate the projection direction, and the power analysis relies on the assumption that this estimator is consistent, which is no longer true in high-dimensional settings. Second, the single data-splitting procedure is often criticized because only half of the sample is used to perform the test, which inevitably results in power loss. These two drawbacks reveal two existing unsolved issues with projection tests based on a data-splitting procedure:
1. How to estimate the optimal projection direction with statistical guarantee?
2. How to mitigate the power loss caused by the data-splitting procedure?
In this paper, we propose a multiple-splitting projection test for high-dimensional mean vectors. Our proposed test addresses the aforementioned issues in the following two ways:
(1) the optimal projection direction is estimated via a regularized quadratic optimization such that a consistent estimator is obtained; and (2) a multiple data-splitting procedure is proposed to improve the testing power. The main contributions are threefold.

First, we propose a consistent estimation of the optimal projection direction via nonconvex regularized quadratic programming. Non-asymptotic error bounds are established that hold for all stationary points with high probability. In other words, we do not need to solve for the global solution of the nonconvex optimization problem, as any stationary point enjoys the desired statistical guarantee.

Second, we prove that p-values constructed from a multiple data-splitting procedure are exchangeable. Furthermore, we generalize the exchangeability of the p-values to a more general permutation framework. As an extension, the methodology proposed in this work can be applied to many other statistical inference problems.

Third, an exact level α test is proposed to combine the multiple p-values that explicitly utilizes their exchangeability. Such exchangeability is often neglected in traditional combination approaches. By exploiting it, our test is more powerful than the single-splitting test as well as existing combination approaches. To the best of our knowledge, this is the first work that exploits the exchangeability of p-values and utilizes it in developing high-dimensional hypothesis tests.
The rest of this paper is organized as follows. In Section 2, we introduce a new estimation of the optimal projection direction via regularized quadratic programming. In Section 3, we investigate the dependency structure of p-values resulted from a multiple-splitting procedure and propose an exact level α multiple-splitting projection test. In Section 4, we conduct numerical studies to compare the proposed MPT with existing tests as well as other p-value combination methods. We conclude this paper with discussion on potential applications of this multiple-splitting framework to other statistical inference problems in Section 5.
Estimation of Optimal Projection Direction
In this section, we introduce a consistent estimation of the optimal projection direction for projection tests. Section 2.1 provides a brief introduction to projection tests. Section 2.2 presents the estimator as a stationary point of a regularized quadratic optimization problem and establishes its non-asymptotic error bounds.
Background on Projection Tests
The idea of projection test is to project the high-dimensional vector x ∈ ℝ^p onto a space of low dimension such that traditional tests can be applied. Let P be a p × q full column-rank projection matrix (or a vector if q = 1) with q < n, and define yᵢ = P⊤xᵢ ∈ ℝ^q, i = 1, . . . , n. Under H₀, E(yᵢ) = 0 and Hotelling's T² test can be applied to the q-dimensional projected sample:

T²_P = n x̄⊤P(P⊤Σ̂P)⁻¹P⊤x̄,

where x̄ and Σ̂ are the sample mean and sample covariance matrix. Under H₀, T²_P converges to the χ²_q distribution as n → ∞.
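For concreteness, a minimal sketch of the projected Hotelling test is given below; this is not the authors' implementation, and the random choice of P (in the spirit of Lopes et al. (2011)) is purely illustrative.

```python
import numpy as np
from scipy import stats

def projection_hotelling(X, P):
    """Project the rows of X (n x p) by P (p x q), then run Hotelling's T^2 on the
    q-dimensional projected sample; returns T^2_P and its asymptotic chi^2_q p-value."""
    Y = X @ P
    n, q = Y.shape
    ybar = Y.mean(axis=0)
    S = np.atleast_2d(np.cov(Y, rowvar=False))   # q x q sample covariance of the projections
    T2 = float(n * ybar @ np.linalg.solve(S, ybar))
    return T2, stats.chi2.sf(T2, df=q)

# Illustration with a random projection, q = 5
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 1000))
P = rng.normal(size=(1000, 5))
T2, pval = projection_hotelling(X, P)
```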
The projection test pivots the attention to the question of how to construct the projection matrix P effectively. Various approaches have been developed with respect to different choices of P. A data-dependent method was proposed by Lauter (1996), setting P = d, where d is a p × 1 vector depending on the data only through X⊤X. Lopes et al. (2011) proposed a random projection test in which the entries of P are randomly drawn from the standard normal distribution. More recently, Huang (2015) proved that under the normality assumption, the optimal choice of q is 1 and the optimal projection direction is of the form P = Σ⁻¹µ, in the sense that the power of T²_P is maximized. For non-Gaussian samples, the direction P = Σ⁻¹µ is still asymptotically optimal as long as the sample mean of the projected sample is asymptotically normal.
The estimation of the optimal projection direction has not been systematically studied yet, leaving a gap between theory and practice for projection tests. In what follows, we propose a new estimating procedure such that a consistent estimator is obtained.
Notations: Before proceeding, we first set up some notation. For a vector v = (v_j)_{j=1}^p ∈ ℝ^p, let ‖v‖_k be its ℓ_k norm, k = 1, 2. Its ℓ₀ norm ‖v‖₀ is the number of nonzero entries in v, and its ℓ∞ norm is ‖v‖∞ = max_j |v_j|. For a matrix M = (m_ij)_{i,j=1}^p ∈ ℝ^{p×p}, its elementwise ∞ norm is ‖M‖_max = max_{i,j} |m_ij|. For a set D, |D| denotes its cardinality. We use a ∨ b to denote the larger of a and b.
Estimation via Regularized Quadratic Optimization
In this subsection, we aim to bridge the gap between theoretical analysis and practical implementation regarding the optimal projection direction. The empirical performance of a data-driven projection test relies heavily on the estimation accuracy of the optimal projection direction. However, in high-dimensional settings, there is no statistical guarantee for the ridge-type estimator introduced in Huang (2015).
We propose a new consistent estimator to improve the test performance under the assumption that w* is sparse. Observing that Σ⁻¹µ is the minimizer of (1/2)w⊤Σw − µ⊤w, we propose to estimate w* = Σ⁻¹µ using the following regularized quadratic optimization:

minimize_w  (1/2) w⊤Σ̂w − x̄⊤w + P_λ(w),   (2.1)

where P_λ(w) = Σ_{j=1}^p P_λ(w_j) is a penalty function satisfying the following conditions:
(i) P_λ(0) = 0 and P_λ(t) is symmetric around 0;
(ii) P_λ(t) is differentiable for t ≠ 0 and lim_{t→0+} P′_λ(t) = λ;
(iii) P_λ(t) is a non-decreasing function on t ∈ [0, ∞);
(iv) P_λ(t)/t is a non-increasing function on t ∈ [0, ∞);
(v) there exists γ > 0 such that P_λ(t) + (γ/2)t² is convex.
Such conditions on P_λ are mild (Loh and Wainwright, 2015) and are satisfied by a wide variety of penalties, including the Lasso (Tibshirani, 1996) and nonconvex regularizers such as the SCAD (Fan and Li, 2001) and the MCP (Zhang, 2010). We further assume that the sample covariance matrix Σ̂ satisfies the following restricted strong convexity (RSC) condition:

∆⊤Σ̂∆ ≥ ν‖∆‖₂² − τ√(log p/n)‖∆‖₁   for ∆ ∈ ℝ^p with ‖∆‖₁ ≥ 1,   (2.2)

where ν > 0 is a strictly positive constant and τ ≥ 0 is a non-negative constant. When p < n, Σ̂ is positive definite; one can set τ = 0 and let ν be the smallest eigenvalue of Σ̂. In the high-dimensional setting where p > n, Σ̂ is positive semi-definite and ∆⊤Σ̂∆ ≥ 0 for all ∆ ∈ ℝ^p. Thus the RSC condition (2.2) holds trivially on {∆ : ‖∆‖₁/‖∆‖₂² > c} with c = (ν/τ)√(n/log p). As a result, we only require the RSC condition to hold on the set {∆ : ‖∆‖₁/‖∆‖₂² ≤ c, ‖∆‖₁ ≥ 1}.

The RSC condition (2.2) is imposed on Σ̂ only for ‖∆‖₁ ≥ 1, and it turns out that the condition then actually holds for all ∆ ∈ ℝ^p; see Lemma 1. Such RSC-type conditions are widely used in establishing non-asymptotic error bounds in high-dimensional statistics and are satisfied with high probability under the sub-Gaussianity assumption (Agarwal et al., 2012; Loh and Wainwright, 2015, 2017). Alternatively, the RSC-type condition can be replaced by a similar condition known as the restricted eigenvalue (RE) condition (Bickel et al., 2009; Van De Geer and Bühlmann, 2009).
It is quite challenging to obtain the global solution to the optimization problem (2.1) when a nonconvex penalty P_λ is used. Instead of searching for the global solution, we establish non-asymptotic error bounds for any stationary point ŵ that satisfies the first-order condition

Σ̂ŵ − x̄ + ∇P_λ(ŵ) = 0,   (2.3)

where ∇P_λ denotes the subgradient of P_λ. Condition (2.3) is a necessary condition for ŵ to achieve a local minimum; therefore the set of ŵ satisfying (2.3) includes all local minimizers as well as the global one.
Many efficient algorithms have been developed to attain stationary points even when the objective function is nonconvex. These include local linear approximation (Fan et al., 2014; Wang et al., 2013; Zou and Li, 2008), the composite gradient descent method (Loh and Wainwright, 2015; Nesterov, 2013), and the proximal-gradient method (Wang et al., 2014). In practice, the tuning parameter λ in the penalty function may be chosen by cross-validation or by the high-dimensional BIC criterion proposed in Wang et al. (2013). We impose the following conditions:

(C1) x₁, . . . , xₙ are independent and identically distributed sub-Gaussian vectors.
(C2) The sample covariance matrix Σ̂ satisfies the RSC condition (2.2) with 3γ ≤ 4ν.
(C3) There exists a constant C₁ > 0 such that ‖w*‖₁ ≤ C₁.
Remark 1. Condition (C3) is posited to ensure a good estimation of w*. By the definition of w*, Σ̂w* should be somewhat close to µ. Note that ‖Σ̂w* − µ‖∞ = ‖Σ̂w* − Σw*‖∞ ≤ ‖Σ̂ − Σ‖_max · ‖w*‖₁; a diverging ‖w*‖₁ would amplify the estimation error of Σ̂.
The following theorem establishes the ℓ₁ and ℓ₂ error bounds for all stationary points ŵ under the alternative hypothesis.

Theorem 1. Suppose conditions (C1)-(C3) hold. Let ŵ be any stationary point of problem (2.1) with λ = C√(log p/n) for some large constant C. Then under H₁ (i.e., w* ≠ 0), with probability at least 1 − cp⁻¹ for some absolute constant c, we have

‖ŵ − w*‖₁ = O(s√(log p/n))  and  ‖ŵ − w*‖₂ = O(√(s log p/n)),

where s = ‖w*‖₀ is the number of nonzero entries in w*.
Remark 2. Though inspired by Loh and Wainwright (2015), we would like to clarify the difference between Theorem 1 and their results. The optimization problem in Loh and Wainwright (2015) requires an additional constraint ‖w‖₁ ≤ R for some tuning parameter R, which ensures ‖ŵ‖₁ is bounded by R; R must be chosen carefully so that w* is feasible, and both the penalty parameter λ and the sample size n also depend on R. However, how to choose R is unclear in practice. In our work, we modify the RSC condition by substituting ‖∆‖₂² for ‖∆‖₂ in (2.2) so that the constraint ‖ŵ‖₁ ≤ R is no longer needed.

Note that the error bounds in Theorem 1 hold for all stationary points. In other words, any local solution is guaranteed the desired statistical accuracy, and a global one is unnecessary if it is too challenging to compute. Theorem 1 implies that ŵ is a consistent estimator under H₁ if w* is sparse (more precisely, if s√(log p/n) → 0). Such consistency under H₀ is not guaranteed, as there is no signal in the true parameter w* = 0. Fortunately, with the data-splitting technique, we will see in Section 3 that the size of the proposed projection test is always controlled regardless of the consistency of the estimator ŵ.
Data-Splitting Based Projection Test
In this section, we present the full methodological details of our proposed multiple-splitting projection test (MPT) together with its theoretical properties. After carefully studying the dependency structure of p-values resulting from a multiple-splitting procedure, we introduce a new combination framework that makes use of this structure. Section 3.1 demonstrates a single-splitting projection test using the estimator introduced in the previous section. Section 3.2 studies the exchangeability of the p-values. Section 3.3 provides a brief overview of traditional approaches to combining multiple p-values. Section 3.4 formally presents our combination framework and the proposed MPT.
Single-Splitting Projection Test (SPT)
Data-splitting techniques have a long history in statistical applications and remain attractive in modern statistics (Wasserman and Roeder, 2009; Barber and Candès, 2019). We begin with a single data split. Let D = {x₁, . . . , xₙ} denote the full sample, and partition it into two disjoint sets D₁ = {x₁, . . . , x_{n₁}} and D₂ = {x_{n₁+1}, . . . , xₙ} with |D₁| = n₁ and |D₂| = n₂ = n − n₁. The idea is to use D₁ to estimate the optimal projection direction and D₂ to conduct the test with the projected sample. To be more specific, we estimate the optimal projection direction w* using a stationary point ŵ of the regularized quadratic optimization problem

minimize_w  (1/2) w⊤Σ̂₁w − x̄₁⊤w + P_λ(w),   (3.1)

where x̄₁ and Σ̂₁ are the sample mean and sample covariance matrix computed from D₁. Then we project the observations in D₂ onto a one-dimensional space:

yᵢ = ŵ⊤xᵢ,  i = n₁+1, . . . , n.

The one-sample t-test is readily applied to the projected sample, and the resulting test statistic is

T_ŵ = √n₂ ȳ/s_y,   (3.2)

where ȳ and s²_y are the sample mean and sample variance of {y_{n₁+1}, . . . , yₙ}. Due to the data-splitting, the estimator ŵ is independent of D₂. As a result, the test T_ŵ is an exact one-sample t-test if the xᵢ's are normally distributed, and the p-value of the test T_ŵ is given by

p(T_ŵ) = 2(1 − G_{n₂−1}(|T_ŵ|)),   (3.3)

where G_{n₂−1} is the cdf of the t_{n₂−1} distribution. Without the normality assumption, T_ŵ has an asymptotic standard normal distribution, and the p-value is given by p(T_ŵ) = 2(1 − Φ(|T_ŵ|)). We reject the null hypothesis at significance level α whenever p(T_ŵ) < α.
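A minimal sketch of one SPT replicate follows, reusing the hypothetical estimate_direction above; the equal split n₁ = ⌊n/2⌋ mirrors the κ = 0.5 choice discussed in Remark 3.

```python
import numpy as np
from scipy import stats

def single_split_pvalue(X, lam, rng):
    """One SPT p-value: estimate w on the first half D1, then run a one-sample
    t-test on the second half D2 projected by w, as in (3.2)-(3.3)."""
    n = X.shape[0]
    perm = rng.permutation(n)
    n1 = n // 2
    w = estimate_direction(X[perm[:n1]], lam)    # direction from D1 only
    if not np.any(w):
        return 1.0                               # degenerate direction: no evidence against H0
    y = X[perm[n1:]] @ w                         # projected sample from D2
    t_stat = np.sqrt(len(y)) * y.mean() / y.std(ddof=1)
    return 2 * stats.t.sf(abs(t_stat), df=len(y) - 1)
```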
We refer to T_ŵ as the single-splitting projection test (SPT). Ideally, one would like to use the full sample both to estimate w* and to perform the test, but this makes the limiting distribution challenging to derive since the projection ŵ and the sample are dependent. Thanks to the data-splitting procedure, an exact t-test can be achieved as ŵ is independent of D₂. Furthermore, we would like to point out that the size of the SPT is well controlled regardless of how ŵ is estimated, while a consistent ŵ ensures high power under the alternative. The following theorem demonstrates the asymptotic power of the SPT.
Theorem 2. Suppose that the conditions in Theorem 1 hold. Further assume (1 ∨ ‖µ‖∞) s√(log p/n) → 0 and n₂/n → κ ∈ (0, 1), where n₂ is the sample size of D₂. Let ζ = µ⊤Σ⁻¹µ and let z_{α/2} be the upper α/2 quantile of N(0, 1). Then the asymptotic power of the proposed SPT at a given significance level α is

β(T_ŵ) = Φ(−z_{α/2} + √(nκζ)).

Remark 3. The term ζ can be interpreted as the signal strength of the alternative hypothesis. As long as nµ⊤Σ⁻¹µ → ∞, the proposed SPT has asymptotic power approaching 1. Under the local alternative µ = δ/√n for some fixed δ ≠ 0, the asymptotic power of the SPT is Φ(−z_{α/2} + √(κ δ⊤Σ⁻¹δ)). To achieve high power empirically, we adopt the same strategy as in Huang (2015) and recommend taking n₂ = ⌊κn⌋ with κ ∈ [0.4, 0.6].
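For intuition, the power formula in Theorem 2 is straightforward to evaluate; the sketch below tabulates β(T_ŵ) for a few illustrative signal strengths ζ.

```python
import numpy as np
from scipy import stats

def spt_power(n, kappa, zeta, alpha=0.05):
    """Asymptotic SPT power Phi(-z_{alpha/2} + sqrt(n*kappa*zeta)), zeta = mu' Sigma^{-1} mu."""
    z = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.cdf(-z + np.sqrt(n * kappa * zeta))

for zeta in (0.05, 0.1, 0.2, 0.4):
    print(zeta, round(spt_power(n=100, kappa=0.5, zeta=zeta), 3))
```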
Exchangeability of p-values
To compensate for the power loss due to the data-splitting procedure, we consider a multiple-splitting procedure (formally presented in Section 3.4), which repeats the data-splitting multiple times and aggregates the information in all the p-values to make inference on H₀. More specifically, we consider m data splits for some fixed integer m. Let π_k, k = 1, . . . , m, be random permutations of {1, . . . , n}, and let D_{π_k} = {x_{π_k(1)}, . . . , x_{π_k(n)}} denote the permuted sample. For each k = 1, . . . , m, we apply the SPT to D_{π_k} and obtain the p-value p_k according to (3.3). Before proceeding with the combination of the p-values, we first investigate their dependence structure. The following theorem establishes the exchangeability of the p-values.

Theorem 3. The p-values (p₁, p₂, . . . , p_m) resulting from the multiple-splitting procedure are exchangeable, i.e., (p₁, . . . , p_m) =_d (p_{π(1)}, . . . , p_{π(m)}) for any permutation π of {1, . . . , m}.

The exchangeability of the p-values holds both under H₀ and under H₁. We would like to point out that such an exchangeability structure holds for a general permutation framework. In many statistical problems, with the data-splitting technique, the first half-sample D₁^{π_k} can be used to learn the underlying model (e.g., parameter estimation, variable selection); denote the acquired knowledge by f(D₁^{π_k}). Then, together with the second half-sample D₂^{π_k}, a p-value (or some other statistic) can be derived for a specific inference problem: p_k = g(f(D₁^{π_k}), D₂^{π_k}). With fixed mappings f and g, it can be shown that the p_k's are also exchangeable. For instance, consider the high-dimensional linear regression problem where the interest is to test whether some coefficient, say β_j, equals 0. With data-splitting, one can use D₁^{π_k} to select a set of important covariates Â such that |Â| < n − 1, then fit ordinary least squares with the covariate set Â ∪ {j} and obtain the p-value p_k of a test of whether β_j = 0. Since the p-values are exchangeable, the MPT introduced in Section 3.4 readily combines them.

In fact, such exchangeability holds even without data-splitting. The key to exchangeability lies in the fact that, conditioning on the dataset D, its m permutations D_{π₁}, . . . , D_{π_m} are independent of each other. The following theorem generalizes the results in Theorem 3.

Theorem 4. Let D = {x₁, . . . , xₙ} be a random sample and π₁, . . . , π_m be m permutations of {1, . . . , n}, and let D_{π₁}, . . . , D_{π_m} denote the m permuted samples of D. Let g be a mapping from D_{π_k} to a statistic, T_k = g(D_{π_k}); then T₁, . . . , T_m are exchangeable.

A natural question is how to decide whether H₀ should be rejected based on the m p-values such that the type I error rate is still retained.
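Theorem 4 is easy to visualize empirically: applying one fixed statistic to several independently permuted copies of the same sample produces draws whose pairwise correlations are all (approximately) equal. The sketch below illustrates this with an arbitrary split-based statistic g of our own choosing.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, reps = 50, 5, 2000
T = np.empty((reps, m))
for r in range(reps):
    x = rng.normal(size=n)
    for k in range(m):
        perm = rng.permutation(n)
        T[r, k] = x[perm[: n // 2]].mean()   # g: mean of the first half of the permuted sample
# Off-diagonal correlations concentrate around a single common value rho
print(np.round(np.corrcoef(T, rowvar=False), 2))
```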
Traditional Combination of p-values
A popular strategy to enhance testing power is via the combination of p-values (Romano and DiCiccio, 2019; Yu et al., 2019, 2020, 2022). In fact, combining multiple p-values from a set of hypothesis tests has been widely used in the statistical literature. Let p₁, . . . , p_m denote m valid p-values; that is, under H₀, Pr(p_k ≤ u) = u for 0 < u < 1 and k = 1, . . . , m.

Classical approaches require the independence assumption among p-values; examples include Fisher's method, Pearson's method, Stouffer's method, Tippett's method, and many others. In the meantime, significant efforts have been made to combine dependent p-values. Rüschendorf (1982) proved that twice the average of the p-values remains a valid p-value and proposed an average-based combination test, which rejects H₀ at level α if the average of p₁, . . . , p_m is at most α/2. Romano and DiCiccio (2019) introduced a quantile-based combination test; a special case rejects H₀ at level α if the median of the p-values is at most α/2. Under H₀, Z_k = Φ⁻¹(p_k) ~ N(0, 1), where Φ(·) is the cdf of N(0, 1). Assuming (Z₁, . . . , Z_m)⊤ follows a multivariate normal distribution, Romano and DiCiccio (2019) also proposed a Z-average test based on the sample mean of the Z_k's, which rejects H₀ if |Σ_{k=1}^m Z_k| ≥ m z_{α/2}, where z_{α/2} is the upper α/2-quantile of N(0, 1). More recently, Liu and Xie (2020) introduced a new combination test based on the Cauchy transformation, which is insensitive to dependencies among the p-values; the test rejects H₀ at level α if Σ_{k=1}^m tan{(0.5 − p_k)π} ≥ m c_α, where c_α is the upper α-quantile of the standard Cauchy distribution.
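For reference, minimal implementations of these combination rules (a sketch; the rejection thresholds follow the descriptions above):

```python
import numpy as np
from scipy import stats

def combine(pvals, alpha=0.05):
    """Rejection decisions for the dependent-p-value combinations discussed above."""
    p = np.asarray(pvals)
    m = len(p)
    return {
        "average": p.mean() <= alpha / 2,                    # Ruschendorf (1982)
        "median": np.median(p) <= alpha / 2,                 # Romano and DiCiccio (2019)
        "z-average": abs(stats.norm.ppf(p).sum()) >= m * stats.norm.ppf(1 - alpha / 2),
        "cauchy": np.tan((0.5 - p) * np.pi).sum() >= m * stats.cauchy.ppf(1 - alpha),  # Liu and Xie (2020)
    }
```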
In general, these methods tend to be over-conservative in order to control the type I error rate without taking advantage of any particular dependence structure (e.g., exchangeability). This can be regarded as a trade-off between potential size inflation and possible power loss: traditional combination methods generally ignore the dependency structure and therefore tend to make an unnecessarily large compromise in testing power in order to retain a correct size under a least favorable scenario. In Section 4.1, we use simulation studies to compare the size and power of the proposed MPT with those of traditional combination methods. On the one hand, the numerical studies show that the size of the MPT is slightly below the level α = 0.05, while the sizes of the traditional combination methods are almost 0. On the other hand, the power of the MPT is much higher than that of these traditional combination methods. In summary, compared to the traditional methods, the proposed MPT is less conservative in terms of testing size and exhibits much higher testing power.
Multiple-Splitting Projection Test (MPT)
Based on the m exchangeable p-values {p₁, . . . , p_m}, the question is how to decide whether H₀ should be rejected. To improve upon the traditional methods, we propose a new framework to combine the p-values obtained from multiple splits. The proposed framework takes advantage of the exchangeability structure among these p-values and, as a result, achieves higher testing power than existing commonly used combination approaches.

Let Z_k = Φ⁻¹(p_k), k = 1, . . . , m. Under H₀, Z₁, . . . , Z_m are exchangeable standard normal random variables with correlation corr(Z_i, Z_j) = ρ ≥ 0 for any pair (i, j), i ≠ j, due to exchangeability. Figure 1 depicts the density of (Z₁, Z₂) with m = 2 under different covariance structures (see Section 4 for detailed descriptions of the simulation settings). It shows that (Z₁, Z₂) is clearly exchangeable (symmetric). Under H₀, (Z₁, Z₂) is approximately normally distributed, centered at (0, 0); under H₁, the joint distribution of (Z₁, Z₂) is not normal and its center is far from (0, 0).

Figure 1: Density plots of (Z₁, Z₂) under H₀ and H₁ with the autocorrelation (AR) and compound symmetry (CS) covariance structures when n = 40 and p = 1000.
Let Z̄ be the sample mean of the Z_k's; then E(Z̄) = 0 and Var(Z̄) = (1 + (m−1)ρ)/m under H₀. If (Z₁, . . . , Z_m) is jointly normally distributed, then the standardized statistic of Z̄, M_ρ = Z̄/√((1 + (m−1)ρ)/m) ~ N(0, 1). In general, by the central limit theorem for exchangeable random variables (e.g., see Klass and Teicher (1987)), we know

M_ρ = Z̄/√((1 + (m−1)ρ)/m) →_d N(0, 1).   (3.4)

The correlation ρ is typically unknown and needs to be estimated from the sample; two approaches to estimating ρ are provided later in this subsection. Let ρ̂ denote an estimator of ρ. We first present the asymptotic distribution of M_ρ̂ under H₀.

Theorem 5. Let ρ̂ be a consistent estimator of ρ > 0. Under H₀, we have, as m → ∞,

M_ρ̂ = Z̄/√((1 + (m−1)ρ̂)/m) →_d N(0, 1).   (3.5)

Remark 4. M_ρ converges to the standard normal distribution at rate √m as m → ∞. However, when ρ ≠ 0, M_ρ̂ converges at a slower rate, as the variance of Z̄ is not asymptotically degenerate. Hypothesis testing based on such a slow convergence rate is more likely to fail in controlling the type I error in finite-sample performance.

Remark 4 suggests that the asymptotic distribution in (3.5) does not serve as a good cornerstone for testing H₀: the slow convergence rate and potential size inflation become major concerns for practitioners, and in practice one may have to conduct a large number of splits, which brings extra computational burden. This motivates us to seek an exact level α test to ensure the finite-sample performance. Let c(ρ, m, α/2) be the upper α/2 quantile of the distribution of M_ρ̂ under H₀; it depends on ρ and is very difficult to derive, and so is c(ρ, m, α/2) itself. Instead, we use a critical value c(m, α/2) chosen against the least favorable ρ such that the type I error is controlled regardless of ρ, i.e., c(m, α/2) = sup_{ρ∈(0,1)} c(ρ, m, α/2). We then reject H₀ at level α if

|M_ρ̂| > c(m, α/2).   (3.6)

We refer to the test (3.6) as the multiple-splitting projection test (MPT) and summarize the full methodological details in Algorithm 1. Note that the critical value c(m, α/2) depends on the way ρ is estimated. With the choice c(m, α/2), the MPT is still an exact level α test but no longer a size α test.
Algorithm 1 Multiple-Splitting Projection Test (MPT)
1: Input: dataset D, the number of splits m, n₁, and significance level α
2: Step 1: randomly generate m permutations of {1, . . . , n}, denoted by π_k, k = 1, . . . , m
3: Step 2: obtain multiple p-values
4: for k = 1 to m do
5:   (1) partition the permuted sample D_{π_k} into D₁^{π_k} and D₂^{π_k}, and obtain x̄₁^k, Σ̂₁^k from D₁^{π_k}
6:   (2) estimate ŵ^k using a stationary point of minimize_w (1/2)w⊤Σ̂₁^k w − (x̄₁^k)⊤w + P_λ(w)
7:   (3) project D₂^{π_k} to obtain y_i^k = (ŵ^k)⊤x_{π_k(i)}, i = n₁+1, . . . , n
8:   (4) T_{ŵ^k} = √n₂ ȳ^k/s_y^k, where ȳ^k and (s_y^k)² are the sample mean and variance of {y_{n₁+1}^k, . . . , y_n^k}
9:   (5) compute the p-value p_k = 2(1 − Φ(|T_{ŵ^k}|))
10: end for
11: Step 3: combine the p-values
12:   (1) compute the sample mean Z̄ and variance s²_Z of {Z_k = Φ⁻¹(p_k), k = 1, . . . , m}
13:   (2) compute the test statistic M_ρ̂ = Z̄/√((1 + (m−1)ρ̂)/m)
14: Return: reject H₀ at level α if |M_ρ̂| > c(m, α/2)
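A compact end-to-end sketch of Algorithm 1, reusing the hypothetical single_split_pvalue above. Here ρ is estimated by the simple estimator ρ̂₁ = max(0, 1 − s²_Z) introduced in the next paragraph, and the critical value crit = c(m, α/2) must be taken from Tables 2-3 in Appendix A; it is left as an input.

```python
import numpy as np
from scipy import stats

def mpt(X, lam, m, crit, rng):
    """Multiple-splitting projection test (Algorithm 1): returns True if H0 is rejected."""
    pvals = np.array([single_split_pvalue(X, lam, rng) for _ in range(m)])
    Z = stats.norm.ppf(pvals)                    # Z_k = Phi^{-1}(p_k)
    rho = max(0.0, 1.0 - Z.var(ddof=1))          # rho_hat_1
    M = Z.mean() / np.sqrt((1 + (m - 1) * rho) / m)
    return abs(M) > crit                         # crit = c(m, alpha/2), Follmann and Proschan (2012)
```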
Two estimators of ρ are introduced in Follmann and Proschan (2012). Let s²_Z be the sample variance of the Z_k's. The first estimator is given by ρ̂₁ = max(0, 1 − s²_Z). The second estimator, which is quantile based, is given by

ρ̂₂ = max(0, 1 − (m−1)s²_Z/χ²_{m−1}(1−β)),

where χ²_{m−1}(1−β) is the upper (1−β) quantile of χ²_{m−1}. An appealing approach is to choose β as large as possible so that the test with c(m, α/2) = z_{α/2} remains level α for all ρ. We refer to Follmann and Proschan (2012) for more details; they also provide tables of critical values c(m, α/2) and β for different m, see Tables 2 and 3 in Appendix A. We see that c(m, α/2) increases dramatically as m increases, leading to low power when m is large; hence the quantile approach is preferred when m is relatively large.
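A direct transcription of the two estimators follows; the constant β is the tuning value tabulated in Follmann and Proschan (2012) (Table 3 in Appendix A) and must be supplied by the user.

```python
import numpy as np
from scipy import stats

def rho_hat_1(Z):
    """rho_hat_1 = max(0, 1 - s_Z^2)."""
    return max(0.0, 1.0 - np.var(Z, ddof=1))

def rho_hat_2(Z, beta):
    """Quantile-based rho_hat_2 = max(0, 1 - (m-1) s_Z^2 / chi^2_{m-1}(1-beta)),
    where chi^2_{m-1}(1-beta) is the upper (1-beta) quantile of chi^2_{m-1}."""
    m = len(Z)
    q = stats.chi2.ppf(beta, df=m - 1)           # P(chi2_{m-1} > q) = 1 - beta
    return max(0.0, 1.0 - (m - 1) * np.var(Z, ddof=1) / q)
```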
As for the choice of m, Figure 2 shows how the testing power changes with the number of splits m under settings with different correlation structures, sample sizes, and dimensions. We would like to point out that the proposed MPT is a valid level α test regardless of the dependence structure, since the critical value is chosen so that the type I error is controlled for all dependence structures (i.e., for all ρ). In other words, the MPT controls the type I error regardless of the number of splits m and the data characteristics (e.g., sample size, data dimensionality). The main purpose of multiple splitting (i.e., choosing m > 1) is to mitigate the power loss brought by the single data-splitting procedure and to improve the testing power over the SPT. Under the alternative hypothesis, the testing power from each split depends on the original data characteristics, so theoretically the exact relationship between m and the testing power of the MPT also depends on the data. As shown in the plot, the testing power increases as m increases, but the improvement becomes insignificant when m is relatively large. Considering that a large number of splits increases the computational cost, we recommend m ∈ [30, 60] as a reasonable choice in practice, balancing testing power against computational cost.
Numerical Studies
In this section, we conduct numerical studies to demonstrate the finite-sample performance of the proposed MPT through both Monte Carlo simulation and a real data example.
Monte Carlo Simulation
We compare the proposed MPT with other state-of-the-art tests and p-value combination approaches. In particular, we include the following tests in our experiments:
• Projection tests: our proposed SPT and MPT (with m = 40), the ridge projection test (Ridge) (Huang, 2015), and the random projection test (RPT) (Lopes et al., 2011).

• Combinations of p-values: median-based combination (Median) (Romano and DiCiccio, 2019), average-based combination (Average) (Rüschendorf, 1982), average-based combination using Φ⁻¹(p_k) (Z-average) (Romano and DiCiccio, 2019), and Cauchy combination (Cauchy) (Liu and Xie, 2020).
• Quadratic-form test: CQ test (Chen and Qin, 2010).
• Extreme-type test: CLX test (Cai et al., 2014).
We generate a random sample of size n from N_p(cµ, Σ) with µ = (1⊤₁₀, 0⊤_{p−10})⊤. We set c = 0, 0.5 to examine the size and the power of these tests, respectively. To examine robustness to non-normally distributed data, we also generate random samples from a multivariate t₆-distribution. Let σ_ij be the (i, j) entry of Σ. For r ∈ (0, 1), we consider the following two covariance matrices: (1) compound symmetry (CS) with σ_ij = r if i ≠ j and σ_ij = 1 if i = j, and (2) autocorrelation (AR) with σ_ij = r^{|i−j|}. We vary r from 0.1 to 0.9 with step size 0.1 to examine the impact of correlation on size and power. We set sample size n = 40, 100 and dimension p = 1000.
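The simulation designs are easy to reproduce; a sketch follows. The scaling used to give the multivariate t₆ sample covariance Σ is our assumption, since the paper does not spell it out.

```python
import numpy as np

def make_cov(p, r, structure="AR"):
    """Sigma with sigma_ij = r^{|i-j|} (AR) or sigma_ij = r off-diagonal, 1 on-diagonal (CS)."""
    if structure == "AR":
        idx = np.arange(p)
        return r ** np.abs(idx[:, None] - idx[None, :])
    S = np.full((p, p), r)
    np.fill_diagonal(S, 1.0)
    return S

def sample(n, p, c, r, structure="AR", dist="normal", rng=None):
    """Draw n observations from N_p(c*mu, Sigma) or a multivariate t_6 analogue."""
    rng = rng or np.random.default_rng()
    mu = c * np.concatenate([np.ones(10), np.zeros(p - 10)])
    L = np.linalg.cholesky(make_cov(p, r, structure))
    X = rng.normal(size=(n, p)) @ L.T
    if dist == "t6":
        g = rng.chisquare(df=6, size=(n, 1))
        X = X * np.sqrt(6.0 / g) * np.sqrt(4.0 / 6.0)   # rescaled so Cov = Sigma
    return X + mu
```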
In the above settings, the optimal projection direction Σ⁻¹µ is sparse or approximately sparse. When Σ has the compound symmetry structure, Σ⁻¹ is approximately sparse in the sense that its off-diagonal entries are of order p⁻¹ and dominated by its diagonal entries; the optimal projection direction Σ⁻¹µ is then also approximately sparse, since its first 10 entries dominate the rest. When Σ has the autocorrelation structure, Σ⁻¹ is a 3-sparse matrix, meaning that at most three entries in each row or column are nonzero, and the resulting optimal projection direction Σ⁻¹µ is sparse as well. We set κ = 0.5 when implementing the SPT and the MPT, i.e., half of the sample is used to estimate the projection direction and the other half is used to perform the test. The quantile approach ρ̂₂ is used to estimate the pairwise correlation among the Z_k's. We set the type I error rate to α = 0.05. All simulation results are based on 10,000 independent replications.

Figure 3 reports the size and power of the different tests for normally distributed samples with n = 40. Other conservative combination tests are even less powerful than the SPT, which indicates that such conservative combination methods do not necessarily improve the testing power.
We also examine the finite-sample performance of the proposed MPT when the normality assumption is not satisfied. Figure 4 shows the size and power comparisons of the different tests when the xᵢ's are generated from the multivariate t₆ distribution with the AR and CS covariance structures. The results show a similar pattern to those in the normal settings, providing numerical evidence of the robustness of the MPT to non-Gaussianity. When n = 100, the patterns of size and power are similar to those for n = 40; due to space limits, we relegate the results for n = 100 to the appendix, see Figures 5 and 6 in Appendix C.

The numerical results in this subsection emphasize that the MPT greatly improves the testing power over the SPT thanks to the multiple splits. In brief, our proposed MPT successfully controls the type I error rate and achieves the highest testing power in comparison with all other state-of-the-art level α tests. In addition, the studies reveal that the proposed MPT is quite robust to non-Gaussianity.
Real Data Analysis
We apply the proposed MPT and SPT, together with the other tests introduced above, to a real dataset of high-resolution micro-computed tomography (Percival et al., 2014). This dataset contains skull bone densities of n = 29 mice with genotype "T0A1" in a genetic mutation study. For each mouse, bone density is measured for 16 different areas of its skull at density levels between 130 and 249. In this empirical analysis, we are interested in comparing the bone density patterns of two areas of the skull, namely "Mandible" and "Nasal". We use all density levels between 130 and 249 for our analysis, hence dimension p = 120. Since the two areas come from the same mouse, we first take the difference in bone density between the two areas at the corresponding density level for each observation. Then we normalize the bone density in the sense that (1/29)Σ_{i=1}^{29} X²_{ij} = 1 for all 1 ≤ j ≤ 120. The null hypothesis is that the density patterns of the two skull areas are the same.

To be specific, let x̄ be the sample mean and rᵢ = xᵢ − x̄ be the residual for the i-th subject. Then a new observation zᵢ = δx̄ + rᵢ is constructed for the i-th subject for some δ ∈ [0, 1]. By this construction, a smaller δ leads to a weaker signal strength and makes the test more challenging. This real data example demonstrates that the proposed MPT is more powerful than existing tests and performs well even when the signal is very weak.
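The signal-weakening construction is a one-liner to reproduce (a sketch; the function name is illustrative):

```python
import numpy as np

def weaken_signal(X, delta):
    """Construct z_i = delta * xbar + r_i, r_i = x_i - xbar, for delta in [0, 1];
    smaller delta leaves the noise intact but shrinks the mean signal."""
    xbar = X.mean(axis=0)
    return delta * xbar + (X - xbar)
```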
Discussion
In this work, we study the hypothesis test for one-sample mean vectors in high dimensions.
We first study the question of estimating the optimal projection direction and provide a statistical guarantee. Furthermore, we propose the multiple-splitting projection test, which makes use of the exchangeability of multiple p-values to mitigate the power loss arising from the single data-splitting procedure. The proposed multiple data-splitting framework can easily be extended to the two-sample problem, in which the optimal projection direction is Σ⁻¹(µ₁ − µ₂) (Huang, 2015). In the same spirit, half of the sample can be used to estimate the projection direction while the remaining half is used to perform a two-sample t-test; the resulting multiple p-values can then be combined as in the MPT. As pointed out in Theorem 4, the exchangeability phenomenon holds for a general permutation framework, so this work can be extended to many other statistical inference problems, such as testing the coefficients of a high-dimensional regression model. We hope this insight provides new ideas for researchers in related areas. Another interesting extension is to develop more refined combination methods that better exploit the exchangeability. We leave these interesting questions as future work.
Appendices
The appendices provide additional materials for the main manuscript. Appendix A provides the tables of critical values for the proposed MPT. Appendix B presents technical lemmas and complete proofs of theoretical results. Appendix C reports additional numerical results of size and power comparisons for n " 100 to serve as a complementary to the numerical studies in Section 4.
A Tables of Critical Values
In this section, we present the tables of critical values for the proposed MPT. Follmann and Proschan (2012) derive the critical values c(m, α/2) and the constants β for the tests M_ρ̂₁ and M_ρ̂₂ at level α = 0.05, respectively; we summarize the critical values in Tables 2-3.

B Technical Lemmas and Proofs

B.1 Technical Lemmas

In this subsection, we introduce a few technical lemmas that help establish the theoretical results. Before proceeding, we first introduce some notation for sub-Gaussian and sub-exponential random variables. The sub-Gaussian norm of a sub-Gaussian random variable X is

‖X‖_{ψ₂} = sup_{p≥1} p^{−1/2} (E|X|^p)^{1/p}.

The sub-exponential norm of a sub-exponential random variable X is

‖X‖_{ψ₁} = sup_{p≥1} p^{−1} (E|X|^p)^{1/p}.
Lemma 1. If the RSC condition (2.2) holds, then

∆⊤Σ̂∆ ≥ ν‖∆‖₂² − τ√(log p/n)‖∆‖₁  for all ∆ ∈ ℝ^p.

Proof. For any ∆ with 0 < ‖∆‖₁ < 1, the ℓ₁ norm of ∆/‖∆‖₁ is 1, so ∆/‖∆‖₁ satisfies the RSC condition (2.2). We have

(∆/‖∆‖₁)⊤ Σ̂ (∆/‖∆‖₁) ≥ ν ‖∆‖₂²/‖∆‖₁² − τ√(log p/n) · ‖∆‖₁/‖∆‖₁,

and multiplying both sides by ‖∆‖₁² gives

∆⊤Σ̂∆ ≥ ν‖∆‖₂² − τ√(log p/n)‖∆‖₁².

Since ‖∆‖₁ < 1, we have ‖∆‖₁² ≤ ‖∆‖₁, implying

∆⊤Σ̂∆ ≥ ν‖∆‖₂² − τ√(log p/n)‖∆‖₁.

The proof of Lemma 1 is complete.
Lemma 2. Suppose x₁, . . . , xₙ ∈ ℝ^p are independent and identically distributed sub-Gaussian random vectors with mean µ and covariance Σ. Let x̄ and Σ̂ = (σ̂_ij)_{p×p} be the sample mean and sample covariance matrix. If log p < n, then with probability at least 1 − 2p⁻¹ we have
(i) ‖x̄ − µ‖∞ ≤ C√(log p/n) for some large C;
(ii) ‖Σ̂ − Σ‖_max ≤ C√(log p/n) for some large C.

Proof. Let x̄ = (1/n)Σ_{k=1}^n x_k be the sample mean and Σ̂ = (1/n)Σ_{k=1}^n (x_k − x̄)(x_k − x̄)⊤ be the sample covariance matrix. Without loss of generality, assume E(xᵢ) = 0 and let the sub-Gaussian parameter of xᵢ be σ². Write x_k = (x_{k1}, . . . , x_{kp})⊤; each x_{kj} is a sub-Gaussian random variable with parameter σ², and let K = max_{1≤j≤p} ‖x_{kj}‖_{ψ₂}. Obviously, x̄ is also a sub-Gaussian random vector with parameter σ²/√n. For any t > 0, we have

Pr(‖x̄ − µ‖∞ > t) ≤ 2p exp{−cnt²/K²}.

Taking t = C√(log p/n) for some large C > 0, we have

Pr(‖x̄ − µ‖∞ < C√(log p/n)) ≥ 1 − 2p⁻¹.   (B.1)

The sample covariance Σ̂ can be decomposed as

Σ̂ = (1/n)Σ_{k=1}^n x_k x_k⊤ − x̄x̄⊤.

Hence we know

max_{i,j} |σ̂_ij − σ_ij| ≤ max_{i,j} |(1/n)Σ_{k=1}^n x_{ki}x_{kj} − σ_ij| + max_{i,j} |x̄ᵢx̄ⱼ|.

In addition, we have

‖x_{ki}x_{kj}‖_{ψ₁} ≤ 2‖x_{ki}‖_{ψ₂}‖x_{kj}‖_{ψ₂} ≤ 2K².

Hence ‖x_{ki}x_{kj} − σ_ij‖_{ψ₁} ≤ 4K². According to the tail probability inequality for sub-exponential variables, we have

Pr(max_{i,j} |(1/n)Σ_{k=1}^n x_{ki}x_{kj} − σ_ij| > t) ≤ max(2p² exp{−cnt²/(16K⁴)}, 2p² exp{−cnt/(4K²)}).
} p Σw ‹´x } 8 ď M 1 a log p{n, with M 1 " M 1`M2 C 1 . Take λ " M a log p{n with M " 4 maxtM 1 , τ u, we have } p Σw ‹´x } 8`τ a log p{n ď λ{2. Hence pν´γ{2q} p ∆} 2 2 ď P λ pw ‹ q´P λ p p wq`λ 2 } p ∆} 1 ď P λ pw ‹ q´P λ p p wq`1 2 P λ p p ∆q`γ 4 } p ∆} 2 2 ,
where the second inequality is because λ 2 } p ∆} 1 ď 1 2 P λ p p ∆q`γ 4 } p ∆} 2 2 by Lemma 3(b). By the subadditivity of P λ , we have P λ p p ∆q " P λ p p w´w ‹ q ď P λ p p wq`P λ pw ‹ q. Then pν´γ{2q } p ∆} 2 2 ď P λ pw ‹ q´P λ p p wq`1 2 P λ p p wq`1 2 P λ pw ‹ q`γ 4 } p ∆} 2 2 pν´3γ{4q } p ∆} 2 2 ď 3 2 P λ pw ‹ q´1 2 P λ p p wq p2ν´3γ{2q } p ∆} 2 2 ď 3P λ pw ‹ q´P λ p p wq.
By Lemma 3(c), we have 3λ} p ∆ I } 1´λ } p ∆ I c } 1 ě 3P λ pw ‹ q´P λ p p wq ě 0, where I denotes the index set of the s largest elements of p ∆ in magnitude. Since ν ě 3γ{4, we have 0 ď p2ν´3γ{2q } p ∆} 2 2 ď 3λ} p ∆ I } 1´λ } p ∆ I c } 1 .
As a result, we have } p ∆ I c } 1 ď 3} p ∆ I } 1 and 2ν´3 2 γ˙} p ∆} 2 2 ď 3λ} p ∆ I } 1´λ } p ∆ I c } 1 ď 3λ} p ∆ I } 1 ď 3λ ? s} p ∆ I } 2 , Given p w, we know that y n 1`1 , . . . , y n are i.i.d. random variables with mean µ J p w and variance p w J Σ p w. By central limit theorem and p w J Σ p w´w ‹J Σw ‹ " op1q, we know are independent from each other. Therefore, the resultant statistics T k " gpD π k q are independent and identically distributed conditioning on D. By the de Finete theorem (Aldous, 1985) which states that a mixture of independent and identically distributed sequences are exchangeable, we know is pT 1 , T 2 , . . . q is an exchangeable sequence, and hence pT 1 , . . . , T m q is exchangeable for any finite m.
B.5 Proof of Theorem 5

According to the central limit theorem for exchangeable random variables, we have

M_ρ = Z̄/√((1 + (m−1)ρ)/m) →_d N(0, 1).

Let ρ̂ be a consistent estimator of ρ ≠ 0, i.e., ρ̂ →_p ρ. Hence,

√((1 + (m−1)ρ)/m) / √((1 + (m−1)ρ̂)/m) →_p 1.

As a result, by Slutsky's theorem we know

M_ρ̂ = M_ρ · √((1 + (m−1)ρ)/m)/√((1 + (m−1)ρ̂)/m) →_d N(0, 1).
Figures 5 and 6 report the size and power of the different tests with n = 100 for samples following the multivariate normal distribution and the t₆ distribution, respectively. The pattern of size and power for the different tests is similar to that for n = 40. The proposed MPT controls the type I error rate below the pre-specified significance level α = 0.05, while the CLX test completely fails to control the size. Among the tests that retain the type I error rate, the proposed MPT is the most powerful one for both the CS and AR covariance structures. The studies also reveal that the proposed MPT is quite robust to non-Gaussianity.
popular strategy to enhance testing power is via the combination of p-values (Romano and DiCiccio, 2019; Yu et al., 2019, 2020, 2022). In fact, combining multiple p-values from a set of hypothesis tests has been widely used in statistical literature. Let p 1 , . . . , p m denote m valid p-values. That is, under H 0 ,Prpp k ď uq " u, 0 ă u ă 1 for k " 1, . . . , m.
Figure 2: Testing power versus the number of splits $m$ for the autocorrelation (AR) structure and the compound symmetry (CS) structure with different choices of $n$ and $p$. Different colors correspond to different correlations $r$.
Figure 3: Size and power of different tests for normally distributed samples with $n = 40$. Panels (a) and (b) show size ($c = 0$) under the null hypothesis for the CS and AR structures, respectively. Panels (c) and (d) show power ($c = 0.5$) under the alternative hypothesis for the CS and AR structures, respectively.
Figure 3 reports the size and power of different tests for normally distributed samples with $n = 40$.
Figure 4: Size and power of different tests for samples following the multivariate $t_6$ distribution with $n = 40$. Panels (a) and (b) show size ($c = 0$) under the null hypothesis for the CS and AR structures, respectively. Panels (c) and (d) show power ($c = 0.5$) under the alternative hypothesis for the CS and AR structures, respectively.
The critical values $c(m, \alpha/2)$ and $\beta$ for the tests $M_{\hat{\rho}_1}$ and $M_{\hat{\rho}_2}$ at level $\alpha = 0.05$ are derived accordingly; we summarize the critical values in Tables 2-3.
Figure 5: Size and power of different tests for normally distributed samples with $n = 100$. Panels (a) and (b) show size ($c = 0$) under the null hypothesis for the CS and AR structures, respectively. Panels (c) and (d) show power ($c = 0.5$) under the alternative hypothesis for the CS and AR structures, respectively.

Figure 6: Size and power of different tests for samples following the multivariate $t_6$ distribution with $n = 100$. Panels (a) and (b) show size ($c = 0$) under the null hypothesis for the CS and AR structures, respectively. Panels (c) and (d) show power ($c = 0.5$) under the alternative hypothesis for the CS and AR structures, respectively.
A quantile-based combination test has also been introduced; a special case is to reject $H_0$ at level $\alpha$ if the median of the p-values is less than or equal to $\alpha/2$. Under $H_0$, we know $Z_k = \Phi^{-1}(p_k) \sim N(0, 1)$, where $\Phi(\cdot)$ is the cdf of $N(0, 1)$. Assuming $(Z_1, \ldots, Z_m)^\top$ follows a multivariate normal distribution, Romano and DiCiccio (2019) proposed a Z-average test based on the sample mean of the $Z_k$'s, which rejects $H_0$ if $|\sum_{k=1}^{m} Z_k| \ge m z_{\alpha/2}$, where $z_{\alpha/2}$ is the upper $\alpha/2$-quantile of $N(0, 1)$. More recently, Liu and Xie (2020) introduced a new combination test based on the Cauchy transformation, which is insensitive to dependencies among the p-values; this test rejects $H_0$ when the Cauchy combination statistic exceeds the corresponding upper quantile of the standard Cauchy distribution.
Theorem 5. Let $\hat{\rho}$ be a consistent estimator of $\rho > 0$. Under $H_0$, we have, as $m \to \infty$,
$$M_{\hat{\rho}} = \bar{Z}\big/\sqrt{(1 + (m-1)\hat{\rho})/m} \xrightarrow{d} N(0, 1). \tag{3.5}$$

Remark 4. $M_\rho$ converges to the standard normal distribution at rate $\sqrt{m}$ as $m \to \infty$. However, when $\rho \ne 0$, $M_{\hat{\rho}}$ converges at a slower rate, as the variance of $\bar{Z}$ is not asymptotically degenerate. Hypothesis testing based on such a slow convergence rate is more likely to fail in controlling the type I error in finite-sample performance.

Remark 4 suggests that the asymptotic distribution in (3.5) does not serve as a good cornerstone to test $H_0$. The slow convergence rate and potential size inflation become major concerns for practitioners. In practice, one may conduct a large number of splits, which brings in an extra computational burden. This motivates us to seek an exact level-$\alpha$ test to ensure the finite-sample performance. Let $c(\rho, m, \alpha/2)$ be the upper $\alpha/2$ quantile of the distribution of $M_{\hat{\rho}}$, and reject $H_0$ if $|M_{\hat{\rho}}| > c(\rho, m, \alpha/2)$. Given $\hat{\rho}$, the exact distribution of $M_{\hat{\rho}}$ depends on $\rho$ and is very difficult to derive, and so is $c(\rho, m, \alpha/2)$. Instead, we use a critical value $c(m, \alpha/2)$ that is chosen against the least favorable $\rho$, so that the type I error is controlled regardless of $\rho$, i.e., $c(m, \alpha/2) = \sup_{\rho \in (0,1)} c(\rho, m, \alpha/2)$. Then we reject $H_0$ at level $\alpha$ if
$$|M_{\hat{\rho}}| > c(m, \alpha/2). \tag{3.6}$$
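Combining (3.6) with the tabulated worst-case critical values (Table 2 below), the decision rule reduces to a table lookup. A minimal sketch, assuming the statistic $M_{\hat\rho}$ has already been computed (e.g., with a helper like the `mpt_statistic` sketched earlier):

```python
# Critical values c(m, alpha/2) for alpha = 0.05, transcribed from Table 2.
C_TABLE = {2: 1.988, 3: 2.058, 4: 2.133, 5: 2.204, 10: 2.489,
           20: 2.865, 40: 3.126, 100: 4.115, 1000: 7.17, 10000: 12.66}

def mpt_reject(m_stat, m):
    """Reject H0 at level 0.05 if |M_rho_hat| > c(m, alpha/2), rule (3.6)."""
    if m not in C_TABLE:
        raise ValueError("c(m, alpha/2) is only tabulated for the m in Table 2")
    return abs(m_stat) > C_TABLE[m]

print(mpt_reject(2.7, 10))   # True, since |2.7| > 2.489
```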
In terms of size, the proposed MPT successfully controls the type I error rate below $\alpha$. It is slightly conservative, since the critical value is chosen against the worst-case $\rho$. The sizes of the Cauchy test and the CQ test are slightly inflated. The Median test, the Average test, and the Z-average test are too conservative, and their sizes are very close to 0. The CLX test completely fails to control the type I error rate due to the slow convergence rate of the limiting distribution. As for the power analysis, under the CS structure the MPT outperforms all other tests. The Cauchy test is slightly less powerful than the MPT but more powerful than the other, conservative combination approaches. The power of the CLX test and the CQ test decreases as the correlation increases, since both tests ignore the dependence among variables. In addition, the CLX test and the CQ test require that the largest eigenvalue of $\Sigma$ be upper bounded by some constant, which is not satisfied under the CS structure. Under the AR structure, note that the CLX test cannot control the size at all under $H_0$ (the size can be as large as 0.20). The size inflation makes the testing power artificially high, and hence it is not trustworthy. Excluding the CLX test, the proposed MPT has the best performance, followed by the Cauchy test.
We apply the proposed MPT and SPT together with other tests to the bone density dataset. The decisions, as well as the p-values where applicable (in parentheses), are reported in the first column of Table 1. All tests are able to reject the null hypothesis, implying that the bone density patterns are significantly different. To compare the power of the different tests, we further conduct the tests as we decrease the signal strength in the bone density difference.

Table 1: Decisions on whether the null hypothesis should be rejected at significance level α = 0.05 based on different tests for the bone density dataset with various signal strengths δ. The numbers in parentheses are the p-values, where applicable; the MPT, Cauchy, Median, Average, and Z-average tests report reject/retain decisions only.

    δ        1.0        0.8        0.6       0.4       0.3      0.2      0.18
    SPT      (10^-10)   (10^-9)    (10^-7)   (10^-6)   (10^-4)  (0.042)  (0.246)
    Ridge    (10^-8)    (10^-7)    (10^-5)   (0.001)   (0.014)  (0.146)  (0.387)
    RPT      (10^-9)    (10^-8)    (10^-6)   (10^-4)   (0.010)  (0.203)  (0.347)
    CQ       (0)        (0)        (0)       (10^-4)   (0.081)  (0.772)  (0.945)
    CLX      (0)        (10^-14)   (10^-8)   (0.004)   (0.189)  (0.965)  (0.994)
Table 1 also reports the decisions and p-values (in parentheses) for δ = 1.0, 0.8, 0.6, 0.4, 0.3, 0.2, 0.18. When δ ≥ 0.4, all the tests perform well and reject the null hypothesis at level 0.05. When δ decreases to 0.3, the CQ test, the CLX test, and the average-based combination test start to fail to reject the null hypothesis. When δ = 0.2, the
Table 2: Critical value c(m, α/2) with respect to the number of splits m for the test M_{ρ̂₁} at level α = 0.05.

    m            2      3      4      5      10     20     40     100    1000   10000
    c(m, α/2)    1.988  2.058  2.133  2.204  2.489  2.865  3.126  4.115  7.17   12.66
Table 3: The smallest value β with respect to the number of splits m to control the type I error of the test M_{ρ̂₂} with c(m, α/2) = z_{α/2} at level α = 0.05.
By the choice of $t = C_2\sqrt{\log p/n}$ for some large $C > 0$, we have $\max_{i,j}|\hat{\sigma}_{ij} - \sigma_{ij}| \le C\sqrt{\log p/n}$ with probability at least $1 - 2p^{-1}$, which completes the proof.

Lemma 3 (Loh and Wainwright (2015)). Assume the penalty function $P_\lambda(t)$ satisfies conditions (i)-(v). Then:

(a) $|P_\lambda(t_1) - P_\lambda(t_2)| \le \lambda|t_1 - t_2|$ for any $t_1, t_2 \in \mathbb{R}$.

(b) For any $w \in \mathbb{R}^p$, we have $\lambda\|w\|_1 \le P_\lambda(w) + \frac{\nu}{2}\|w\|_2^2$.

(c) Suppose $\|w^\star\|_0 = s > 0$. Then for any $w \in \mathbb{R}^p$ such that $cP_\lambda(w^\star) - P_\lambda(w) \ge 0$ with $c \ge 1$, we have $cP_\lambda(w^\star) - P_\lambda(w) \le c\lambda\|\delta_I\|_1 - \lambda\|\delta_{I^c}\|_1$, where $\delta = w - w^\star$ and $I$ denotes the index set of the $s$ largest elements of $\delta$ in magnitude.

(d) Define $J_\lambda(t) = \lambda|t| - P_\lambda(t)$. Then the function $J_\lambda(t) - \frac{\mu}{2}t^2 = \lambda|t| - P_\lambda(t) - \frac{\mu}{2}t^2$ is concave and differentiable.

B.2 Proof of Theorem 1

Lemma 1 shows that the RSC condition in (2.2) actually holds for all $\Delta \in \mathbb{R}^p$. Now we are ready to prove Theorem 1. Define $w^\star = \Sigma^{-1}\mu$ and $\hat{\Delta} = \hat{w} - w^\star$. The first-order optimality condition, the RSC condition (2.2), and the convexity of $P_{\lambda,\gamma}(w) = P_\lambda(w) + \frac{\gamma}{2}\|w\|_2^2$ (Lemma 3) together yield the basic inequalities displayed above. By the triangle inequality, we know $\|\hat{\Sigma}w^\star - \bar{x}\|_\infty \le \|\hat{\Sigma}w^\star - \mu\|_\infty + \|\bar{x} - \mu\|_\infty$, and according to Lemma 2 there exist $M_1, M_2 > 0$ bounding these two terms. Then, with probability at least $1 - 2p^{-1}$,
$$\|\hat{\Delta}\|_2 = O\Big(\sqrt{\frac{s\log p}{n}}\Big).$$
The $\ell_1$ norm bound follows immediately from the $\ell_2$ norm bound.

B.3 Proof of Theorem 2

According to Theorem 1, we know $\|\hat{w} - w^\star\|_1 = O(s\sqrt{\log p/n_1}) = o(1)$ with high probability. Let $\bar{x}_2$ and $\hat{\Sigma}_2$ be the sample mean and sample covariance matrix based on $D_2 = \{x_{n_1+1}, \ldots, x_n\}$. Bounding the projected mean on the one hand and the projected variance on the other hand, and combining the two bounds by the triangle inequality, we find that, given $\hat{w}$, the projected observations $y_{n_1+1}, \ldots, y_n$ are i.i.d. random variables with mean $\mu^\top\hat{w}$ and variance $\hat{w}^\top\Sigma\hat{w}$, with $\hat{w}^\top\Sigma\hat{w} - w^{\star\top}\Sigma w^\star = o(1)$. The test statistic of the SPT is $T_{\hat{w}} = \sqrt{n_2}\,\bar{y}/s_y$, and we reject $H_0$ whenever $|T_{\hat{w}}| > z_{\alpha/2}$. By the central limit theorem, the power function of the SPT can be evaluated, and as a result the asymptotic power is
$$\beta(T_{\hat{w}}) = \Phi\Big(-z_{\alpha/2} + \sqrt{n_2\,\mu^\top\Sigma^{-1}\mu}\Big).$$

B.4 Proof of Theorems 3 and 4

Theorem 3 is a direct corollary of Theorem 4 obtained by setting $T_k = p_k$, so we only prove Theorem 4 here. Conditioning on the observed data $D$, we know its random permutations $D^{\pi_1}, D^{\pi_2}, \ldots$ are independent from each other. Therefore, the resultant statistics $T_k = g(D^{\pi_k})$ are independent and identically distributed conditioning on $D$. By the de Finetti theorem (Aldous, 1985), which states that a mixture of independent and identically distributed sequences is exchangeable, we know that $(T_1, T_2, \ldots)$ is an exchangeable sequence, and hence $(T_1, \ldots, T_m)$ is exchangeable for any finite $m$.
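To make the single-split projection test (SPT) of B.3 concrete, the following sketch splits the sample, estimates a projection direction on $D_1$, and applies the z-test $T_{\hat{w}} = \sqrt{n_2}\,\bar{y}/s_y$ to the projected scores on $D_2$. The ridge-regularized plug-in $\hat{w} = (\hat{\Sigma}_1 + \varepsilon I)^{-1}\bar{x}_1$ is a stand-in for the paper's penalized estimator and is an assumption of this sketch, as are the function name and the value of $\varepsilon$.

```python
import numpy as np
from scipy import stats

def spt(x, n1, eps=0.1, alpha=0.05):
    """Single-split projection test of H0: mu = 0.
    x: (n, p) data matrix; the first n1 rows form D1, the rest form D2."""
    d1, d2 = x[:n1], x[n1:]
    # plug-in direction on D1: ridge-regularized stand-in for w* = Sigma^{-1} mu
    xbar1 = d1.mean(axis=0)
    sigma1 = np.cov(d1, rowvar=False)
    w = np.linalg.solve(sigma1 + eps * np.eye(x.shape[1]), xbar1)
    # projected scores on D2 and the statistic T = sqrt(n2) * ybar / s_y
    y = d2 @ w
    n2 = y.size
    t = np.sqrt(n2) * y.mean() / y.std(ddof=1)
    return t, abs(t) > stats.norm.isf(alpha / 2)

rng = np.random.default_rng(1)
x = rng.standard_normal((100, 50)) + 0.3     # shifted mean, so H0 is false
print(spt(x, n1=50))                         # large |T|, reject = True
```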
Agarwal, A., Negahban, S. and Wainwright, M. J. (2012), 'Fast global convergence of gradient methods for high-dimensional statistical recovery', The Annals of Statistics 40(5), 2452-2482.

Aldous, D. J. (1985), Exchangeability and related topics, in 'École d'Été de Probabilités de Saint-Flour XIII-1983', Springer, pp. 1-198.

Bai, Z. and Saranadasa, H. (1996), 'Effect of high dimension: By an example of a two sample problem', Statistica Sinica 6, 311-329.

Barber, R. F. and Candès, E. J. (2019), 'A knockoff filter for high-dimensional selective inference', The Annals of Statistics 47(5), 2504-2537.

Bickel, P. J., Ritov, Y. and Tsybakov, A. B. (2009), 'Simultaneous analysis of lasso and dantzig selector', The Annals of Statistics 37(4), 1705-1732.

Cai, T. T., Liu, W. and Xia, Y. (2014), 'Two-sample test of high dimensional means under dependence', Journal of the Royal Statistical Society: Series B (Statistical Methodology) 76(2), 349-372.

Chen, L. S., Paul, D., Prentice, R. L. and Wang, P. (2011), 'A regularized Hotelling's T² test for pathway analysis in proteomic studies', Journal of the American Statistical Association 106(496), 1345-1360.

Chen, S. X. and Qin, Y.-L. (2010), 'A two-sample test for high-dimensional data with applications to gene-set testing', The Annals of Statistics 38(2), 808-835.

Fan, J. and Li, R. (2001), 'Variable selection via nonconcave penalized likelihood and its oracle properties', Journal of the American Statistical Association 96(456), 1348-1360.

Fan, J., Xue, L. and Zou, H. (2014), 'Strong oracle optimality of folded concave penalized estimation', The Annals of Statistics 42(3), 819-849.

Follmann, D. and Proschan, M. (2012), 'A test of location for exchangeable multivariate normal data with unknown correlation', Journal of Multivariate Analysis 104(1), 115-125.

Ginestet, C. E., Li, J., Balachandran, P., Rosenberg, S. and Kolaczyk, E. D. (2017), 'Hypothesis testing for network data in functional neuroimaging', The Annals of Applied Statistics 11(2), 725-750.

Huang, Y. (2015), 'Projection test for high-dimensional mean vectors with optimal direction', Department of Statistics, The Pennsylvania State University at University Park.

Klass, M. and Teicher, H. (1987), 'The central limit theorem for exchangeable random variables without moments', The Annals of Probability 15(1), 138-153.

Lauter, J. (1996), 'Exact t and F tests for analyzing studies with multiple endpoints', Biometrics 52(3), 964-970.

Li, C. and Li, R. (2021), 'Linear hypothesis testing in linear models with high-dimensional responses', Journal of the American Statistical Association (forthcoming), 1-13.

Liu, W. and Li, R. (2020), Projection test with sparse optimal direction for high-dimensional one sample mean problem, in 'Contemporary Experimental Design, Multivariate Analysis and Data Mining', Springer, pp. 295-309.

Liu, Y. and Xie, J. (2020), 'Cauchy combination test: A powerful test with analytic p-value calculation under arbitrary dependency structures', Journal of the American Statistical Association 115(529), 393-402.

Loh, P.-L. and Wainwright, M. J. (2015), 'Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima', Journal of Machine Learning Research 16(1), 559-616.

Loh, P.-L. and Wainwright, M. J. (2017), 'Support recovery without incoherence: A case for nonconvex regularization', The Annals of Statistics 45(6), 2455-2482.

Lopes, M., Jacob, L. and Wainwright, M. J. (2011), 'A more powerful two-sample test in high dimensions using random projection', Advances in Neural Information Processing Systems, pp. 1206-1214.

Nesterov, Y. (2013), 'Gradient methods for minimizing composite functions', Mathematical Programming 140(1), 125.

Percival, C. J., Huang, Y., Jabs, E. W., Li, R. and Richtsmeier, J. T. (2014), 'Embryonic craniofacial bone volume and bone mineral density in Fgfr2+/P253R and nonmutant mice', Developmental Dynamics 243(4), 541-551.

Romano, J. P. and DiCiccio, C. (2019), Multiple data splitting for testing, Technical report.

Rüschendorf, L. (1982), 'Random variables with maximum sums', Advances in Applied Probability 14(3), 623-632.

Tibshirani, R. (1996), 'Regression shrinkage and selection via the Lasso', Journal of the Royal Statistical Society: Series B (Statistical Methodology) 58(1), 267-288.

Van De Geer, S. A. and Bühlmann, P. (2009), 'On the conditions used to prove oracle results for the lasso', Electronic Journal of Statistics 3, 1360-1392.

Wang, L., Kim, Y. and Li, R. (2013), 'Calibrating non-convex penalized regression in ultrahigh dimension', The Annals of Statistics 41(5), 2505.

Wang, L., Peng, B. and Li, R. (2015), 'A high-dimensional nonparametric multivariate test for mean vector', Journal of the American Statistical Association 110(512), 1658-1669.

Wang, Z., Liu, H. and Zhang, T. (2014), 'Optimal computational and statistical rates of convergence for sparse nonconvex learning problems', The Annals of Statistics 42(6), 2164-2201.

Wasserman, L. and Roeder, K. (2009), 'High dimensional variable selection', Annals of Statistics 37(5A), 2178-2201.
| [] |
[
"arXiv:astro-ph/0609788v1 28 Sep 2006 s-Process Nucleosynthesis in Advanced Burning Phases of Massive Stars",
"arXiv:astro-ph/0609788v1 28 Sep 2006 s-Process Nucleosynthesis in Advanced Burning Phases of Massive Stars"
] | [
"Lih-Sin The \nDepartment of Physics and Astronomy\nClemson University\n29634-0978ClemsonSC\n",
"Mounib F El Eid \nDepartment of Physics and Astronomy\nClemson University\n29634-0978ClemsonSC\n\nDepartment of Physics\nAmerican University of Beirut\nBeirutLebanon\n",
"Bradley S Meyer \nDepartment of Physics and Astronomy\nClemson University\n29634-0978ClemsonSC\n"
] | [
"Department of Physics and Astronomy\nClemson University\n29634-0978ClemsonSC",
"Department of Physics and Astronomy\nClemson University\n29634-0978ClemsonSC",
"Department of Physics\nAmerican University of Beirut\nBeirutLebanon",
"Department of Physics and Astronomy\nClemson University\n29634-0978ClemsonSC"
] | [] | We present a detailed study of s-process nucleosynthesis in massive stars of solar-like initial composition and masses 15, 20, 25, and 30 M ⊙ . We update our previous results of s-process nucleosynthesis during the core He-burning of these stars and then focus on an analysis of the s-process under the physical conditions encountered during shell carbon burning. We show that the recent compilation of the 22 Ne(α, n) 25 Mg rate leads to a remarkable reduction of the efficiency of the s-process during core He-burning. In particular, this rate leads to the lowest overproduction factor of 80 Kr found to date during core He-burning in massive stars. The s-process yields resulting from shell carbon burning turn out to be very sensitive to the structural evolution of the carbon shell. This structure is influenced by the mass fraction of 12 C attained at the end of core helium burning, which in turn is mainly determined by the 12 C(α, γ) 16 O reaction. The still-present uncertainty in the rate of this reaction implies that the s-process in massive stars is also subject to this uncertainty. We identify some isotopes, like 70 Zn and 87 Rb, as signatures of the s-process during shell carbon burning in massive stars. In determining the relative contribution of our s-only stellar yields to the solar abundances, we find it important to take into account the neutron exposure of shell carbon burning. When we analyze our yields with a Salpeter Initial Mass Function, we find that massive stars contribute at least 40% to s-only nuclei with mass A ≤ 87. For s-only nuclei with mass A > 90, massive stars contribute on average ∼7%, except for 152 Gd, 187 Os, and 198 Hg, for which the contributions are ∼14%, ∼13%, and ∼11%, respectively. In §6 we compare the production factor distribution of s-only nuclei from our stellar models with the distribution from the solar abundance. In §7 and §8 we discuss and summarize the main conclusions of this paper.

Stellar Models

The results of the s-process presented in this paper have been obtained by using the stellar models described in our previous paper (EMT04) for stars of masses 15, 20, 25, and 30 M ⊙ with initial solar-like composition. We evolved our models beyond core oxygen burning, and this allows us to investigate the s-process nucleosynthesis not only during core He-burning, but also during the important phase of shell C-burning. The network we have used for the s-process is listed in Table 1 of The et al. (2000, hereafter TEM00). It includes 632 nuclei up to 210 Bi and is sufficiently inclusive that the s-process nucleosynthesis can be studied in detail. The sources of the important nuclear reaction rates for each studied model are summarized in Table 1. The nuclear data have been updated as follows:

1. The nuclear masses were taken from the compilation by Audi & Wapstra (1995).

2. The thermonuclear reaction rates were taken from the compilation of the "NACRE" collaboration (Angulo et al. 1999), and the "Non-Smoker" rates according to Rauscher & Thielemann (2000). Most of the electron capture and β-decay rates (referred to as weak interaction rates) are taken from Tuli (1995). Certain weak interaction rates are temperature and density dependent; these rates were taken from Takahashi & Yokoi (1987). However, we had to extrapolate some of the weak interaction rates (e.g., the β-decay rate of 79 Se) to higher temperatures based on experimental results by Klay & Käppeler (1988). We have also used some of the extrapolated data given by Raiteri et al. (1993) in their Table 2.

3. In our previous evolutionary calculations (EMT04), we investigated the effect of two different 12 C(α, γ) 16 O reaction rates: the NACRE rate and that due to Kunz et al. (2002). The first rate is larger than the second in the temperature range T = (1-4)×10 8 K (Fig. 1 of EMT04), which is relevant to core He-burning in the massive stars under consideration. However, Buchmann (1996) recommends a rate that is close to the rate given by Kunz et al. (2002) at temperatures T 9 ≤ 0.4 but is significantly larger in the temperature range 0.4 ≤ T 9 ≤ 3.0. These different rates lead to several consequences during the late stage of core He-burning and also beyond this phase, as described by EMT04. Here, we summarize some of the consequences that are relevant to the present study of the s-process. | 10.1086/509753 | [
"https://arxiv.org/pdf/astro-ph/0609788v1.pdf"
] | 16,710,606 | astro-ph/0609788 | 8e507489276b82442c5c5badbb3e1820fa4f481e |
s-Process Nucleosynthesis in Advanced Burning Phases of Massive Stars
Lih-Sin The
Department of Physics and Astronomy
Clemson University
29634-0978ClemsonSC
Mounib F El Eid
Department of Physics and Astronomy
Clemson University
29634-0978ClemsonSC
Department of Physics
American University of Beirut
BeirutLebanon
Bradley S Meyer
Department of Physics and Astronomy
Clemson University
29634-0978ClemsonSC
Subject headings: nuclear reactions, nucleosynthesis, abundances; stars: evolution; stars: interiors
Introduction
The s-process nucleosynthesis is the slow neutron-capture process on heavy nuclei, in which the neutron-capture rate is slow relative to the beta-decay rate of the heavy nuclei near the line of beta stability (Burbidge et al. 1957; Cameron 1957). In this scenario, the seeds for synthesizing the heavy nuclei are the iron-group nuclei. It is well known that massive stars of masses above M ≃ 12 M ⊙ are the sites of the so-called "weak component" of s-process nucleosynthesis. The nuclei produced at this site are restricted to the atomic mass range A ≃ 65-90 (Kappeler et al. 1989; Busso et al. 1999). Many papers have explored this weak component of the s-process, but mainly during the core He-burning phase (Prantzos et al. 1987; Langer et al. 1989; Prantzos et al. 1990; Raiteri et al. 1991a; Baraffe et al. 1992; Rayet & Hashimoto 2000; The et al. 2000; El Eid et al. 2004). Few papers have investigated the s-process during the late evolutionary phases of massive stars, especially during the phase of shell carbon burning (for short: shell C-burning) in these stars (Langer et al. 1989; Raiteri et al. 1991b, 1993). The study of the s-process during shell C-burning requires special effort, because this burning episode lasts until the end of a massive star's evolution, as indicated by many works dealing with the advanced burning phases of massive stars (Nomoto & Hashimoto 1988; Chieffi et al. 1998; Limongi et al. 2000; Woosley et al. 2002; El Eid et al. 2004).
The contribution of the carbon shell to the s-process is important because the nuclei synthesized in this site will be eventually ejected largely unmodified during a supernova explosion of the star due to the lack of a neutron source during the explosive burning. Raiteri et al. (1993) argued on the basis of the calculations by Nomoto & Hashimoto (1988) that only stars of mass about 25 M ⊙ are able to eject the s-nuclei synthesized in the C-shell. We further quantify this issue in §5. However, our main goal is to present a detailed study of the s-process during core He-burning and shell C-burning, thereby emphasizing the physical uncertainties influencing the results. We will benefit from our detailed calculations (El Eid et al. 2004, hereafter EMT04), where we have investigated the influence of several important physical quantities on the characteristics of the stellar models during the advanced burning phases of massive stars.
In §2, we summarize the main features of our previous stellar evolution calculations, which we have carried through the end of central oxygen burning. In §3, we present our updated results of the s-process during core He-burning. In §4, we present the characteristics of shell C-burning and discuss the results obtained for the s-process during shell C-burning. In §5 we show the location (as a function of the interior radius) where nuclei are exposed to neutrons. In this section, we also show the effectiveness of neutron captures in producing heavy elements by calculating the total neutron capture in the stellar models. In §6 we compare the production factor distribution of s-only nuclei from our stellar models with the distribution from the solar abundance. In §7 and §8 we discuss and summarize the main conclusions of this paper.

In particular, using two different rates of the 12 C(α, γ) 16 O reaction in a 25 M ⊙ star of initial solar-like composition, we found in the case labeled 25K in Table 1 of EMT04, where the rate is that according to Kunz et al. (2002), that the mass fraction of carbon at the center is X( 12 C) = 0.280 (Table 4 of EMT04) at the end of core He-burning. In the case 25N, where the NACRE rate was adopted, X( 12 C) = 0.236. In the case 25C, where we have used the rate by Caughlan & Fowler (1988, hereafter CF88), X( 12 C) = 0.257. Finally, the case labeled 25NM, where mass loss has been neglected during the evolution, has the lowest value, X( 12 C) = 0.193. This relatively reduced value is due to the higher central temperature achieved during core helium burning (see EMT04 for more details).
The lifetime of the core carbon burning phase (Table 3 of EMT04) is longer for a larger value of X( 12 C), as is the mass of the convective core (indicated by M cc in Table 3 of EMT04). In addition, §4 shows that shell carbon burning is sensitive to the physical conditions achieved during core carbon burning. In particular, it occurs in different regions of the star, as indicated by Figures 4 to 10 of EMT04. A more detailed discussion of the effect of X( 12 C) is available in EMT04. In the present work, we investigate its effect on the s-process itself. Our results are presented in §3 and §4.
4. In our previous calculation of the s-process (TEM00), we adopted a rate for the neutron-capture reaction 16 O(n, γ) 17 O based on a cross section σ 16 = 0.20 µb according to Beer et al. (1992). In our present calculations, however, we have used the new rate for this reaction as obtained by Igashira et al. (1995). These authors included the 434 keV resonance and obtained a cross section σ 16 = 34 µb at T = 30 keV, 170 times larger than the cross section obtained by Beer et al. (1992). The new rate is given by:
< σ > 16 = (kT) −1/2 + 5.88(kT) 1/2 (1)
Since 16 O is a neutron sink, the new rate is expected to reduce the efficiency of the s-process during core He-burning, as has been emphasized by Rayet & Hashimoto (2000). Our results agree with this conclusion and are described in §3.
5. Another difference from our previous work (TEM00) on the s-process concerns the reactions 22 Ne(α, n) 25 Mg and 22 Ne(α, γ) 26 Mg. The first reaction is known to be the main neutron source for the s-process in massive stars, while the second is a competing reaction, since it captures part of the alpha particles. Figure 1 illustrates the rate of the 22 Ne(α, n) 25 Mg reaction: the NACRE rate is lower than that obtained by CF88 in the temperature range below T 8 ≃ 2.4, a regime that comprises most of the core He-burning phase in the stars under consideration.
However, the NACRE rate is larger by a factor of up to three in the temperature range T 8 ≃ 3-6, which is relevant to the more advanced burning phases, in particular core and shell carbon burning. Fig. 1 also displays the rate according to Jaeger et al. (2001), which shows characteristics similar to the NACRE rate, although it is systematically slightly lower. Note that the situation at temperatures below T 8 ∼ 2.0 is controversial; fortunately, this temperature range does not affect our results for the s-process. In §3, we show that the s-process yields depend strongly on the value of the 22 Ne(α, n) 25 Mg reaction rate.
In our evolutionary calculations (EMT04), we have made a special effort to analyze the effects of several important physical quantities on the internal structure of the stellar models. Besides considering the variation in 12 C(α, γ) 16 O as described above, we have investigated the effects of mass loss on the structure of the stellar models.
Mass loss is certainly important for stars more massive than 15 M ⊙ , especially if the star evolves on the Kelvin-Helmholtz time scale to the red giant stage, where mass loss is known to increase significantly (de Jager et al. 1988). As shown in our previous paper (EMT04), a rapid evolution to the red giant branch occurs when the effect of the gradient of molecular weight is taken into account. In this case, the Ledoux criterion for convection inhibits the convective instability in the region of the hydrogen-burning shell. Such evolution is found in the calculation by Woosley et al. (2002) for the 15 and 25 M ⊙ stars. We find that mass loss has an insignificant effect on the s-process during core He-burning. However, as we shall see in §4, the characteristics of shell-carbon burning are influenced, which may affect the s-process in turn.
s-Process in Core He-Burning: updated results
It is worth updating our previous results (TEM00) of the s-process during core helium burning in massive stars, mainly because many reaction rates determining the s-process efficiency have been revised as described in §2. Table 2 presents a comparison of some key physical quantities characterizing the efficiency of the s-process during core He-burning for the stars indicated. Comparing our new results with the former ones, labeled TEM00 in Table 2, we see that the new results show a significantly reduced s-process efficiency. In particular, we obtain in the case of the 25 M ⊙ star (case 25C) an overproduction factor of 618 for 80 Kr compared to 1100 in our former calculations (TEM00). This remarkable decrease by a factor of 1.8 is mainly due to the larger rate of 16 O(n, γ) 17 O used in the present study. Both the present calculation (25C) and that in TEM00 used the same rate for the 22 Ne(α, n) 25 Mg reaction. Therefore neutron capture on 16 O is an efficient sink of neutrons, especially during the advanced stage of core He-burning. It is interesting to note that Rayet & Hashimoto (2000) found a reduction of 1.2 to 1.6 due to this neutron sink. Hence, the statement in our previous work (TEM00) that most of the neutrons captured by 16 O will be returned by the reaction 17 O(α, n) 20 Ne is not fully justified because it was based on one-zone calculations in which the effect of convection was neglected.
In Tables 2 and 3 we show the effect of the lower NACRE rate for the 22 Ne(α, n) 25 Mg reaction (see Fig. 1) as compared with that due to CF88 in the temperature range up to T 8 ≃ 2.4. It is clear that the NACRE rate leads to a significantly reduced s-process efficiency: for example, Table 2 shows that the overproduction of 80 Kr in the 25 M ⊙ star is 174 with the NACRE rate, while it is 618 with the rate of CF88, a reduction by a factor of 3.6. The low overproduction value is rather close to that obtained by Rayet & Hashimoto (2000). In their calculations for an 8 M ⊙ helium star (corresponding roughly to an initial mass of 25 M ⊙ ), they used the same rate as we do for 16 O(n, γ) 17 O and considered different rates for 12 C(α, γ) 16 O. They used the rate given by Drotleff et al. (1993) for the 22 Ne(α,n) 25 Mg reaction, which is quite similar to the NACRE rate shown in Fig. 1. Taking the "adopted" NACRE rate for 12 C(α, γ) 16 O (1.92 times larger than that of CF88 at T 9 = 0.3), they found an overproduction factor of 92 for 80 Kr. However, when they used the lower limit on the NACRE rate for 12 C(α, γ) 16 O (1.16 times that of CF88), the overproduction factor increased to 180. The difference between their results and our 25N model demonstrates the importance of the 22 Ne(α, n) 25 Mg rate. Our results in Table 2 may be considered an update of the results of Rayet & Hashimoto (2000), since our rate for 12 C(α, γ) 16 O is based on a new compilation, as described in §2.
One conclusion we may draw from the discussion above is that the efficiency of the s-process during core He-burning in massive stars depends crucially on the neutron-production reaction 22 Ne(α, n) 25 Mg and on the neutron-sink reaction 16 O(n, γ) 17 O. Both reactions become effective during the late stage of core He-burning, where 16 O becomes abundant. Fortunately, the present uncertainty in the 12 C(α, γ) 16 O rate does not have a significant effect on the s-process efficiency, as indicated by the comparison of the results of the two cases "present work, NACRE" and "present work, (25K)", in which the rate for this reaction is different, as described in §2. The comparisons in Table 2 show that the efficiency of the s-process is remarkably reduced for all the masses considered.
It is worth analyzing our results for the s-process when the NACRE rate is applied for the 22 Ne(α, n) 25 Mg reaction instead of the CF88 rate. Fig. 2 displays several physical quantities as a function of the central helium mass fraction as obtained for the 25 M ⊙ models 25N and 25C during core He-burning. The larger rate by CF88 leads to an increase in the neutron density (see Fig. 2b) at larger helium mass fraction (i.e., at earlier times). Consequently, a higher neutron exposure, τ n (m r ) ≡ ∫ 0 t n n (m r , t ′ ) v th dt ′ (Clayton et al. 1961), is achieved (Fig. 2c), leading to an earlier increase of the overproduction of 80 Kr (Fig. 2d). In other words, despite the higher peak neutron density achieved in the case of the NACRE rate, the s-process is less robust because the neutron exposure is lower due to the shorter time scale until the end of core He-burning. The conclusion of this discussion is that the s-process during He-burning in massive stars is a race against time, since the whole process occurs during the late stage of this phase.
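Since the neutron exposure is simply a time integral of the neutron density times the neutron thermal velocity, it can be accumulated numerically from sampled histories at a fixed mass coordinate. The sketch below uses the thermal-velocity scaling quoted in §4; the sample history values are placeholders for illustration only.

```python
import numpy as np

def neutron_exposure(t, n_n, kT_keV):
    """Neutron exposure tau_n = integral of n_n(t') * v_th(t') dt', in mbarn^-1.

    t      : times in seconds
    n_n    : neutron densities in cm^-3
    kT_keV : thermal energies kT in keV
    Uses v_th = 2.4e8 * sqrt(kT / 30 keV) cm/s, as in the text."""
    v_th = 2.4e8 * np.sqrt(np.asarray(kT_keV) / 30.0)      # cm/s
    tau_cm2 = np.trapz(np.asarray(n_n) * v_th, t)          # cm^-2
    return tau_cm2 * 1e-27                                 # 1 mbarn = 1e-27 cm^2

# Placeholder history: n_n ~ 1e7 cm^-3 at kT ~ 26 keV sustained for ~1e4 yr
t = np.linspace(0.0, 1e4 * 3.156e7, 200)                   # seconds
tau = neutron_exposure(t, np.full_like(t, 1e7), np.full_like(t, 26.0))
print(tau, "mbarn^-1")                                     # ~0.7 mbarn^-1
```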
In Fig. 3 we present the overabundances of heavy nuclei averaged over the convective helium-burning core for models 15N, 20N, 25N, and 30N, and in Tables 3 and 4 we list the overabundances of selected nuclei at the end of core helium burning. The figures show that the larger the stellar mass, the more efficient the neutron capture of the s-process. The figures also show the well-known feature of the weak s-process that the overabundance distribution in the mass range A = 60-90 peaks at 80 Kr.
s-process in shell carbon-burning
At the end of core helium burning, the star begins to contract, and, when its central temperature exceeds ∼5×10 8 K, the neutrino energy loss dominates the energy balance (Woosley et al. 2002). Carbon burning, via the 12 C + 12 C reaction, starts to be effective at a temperature of ∼6×10 8 K and a density of ∼3×10 4 g cm −3 (EMT04). There are three effective reaction channels of 12 C + 12 C fusion, and overall carbon burning converts most of the initial carbon primarily into 16 O, 20 Ne, 23 Na, 24,25,26 Mg, and 28 Si, and, secondarily, into 27 Al and 29,30 Si (Clayton 1983; Woosley et al. 2002).
In our previous evolutionary calculations (EMT04), we found that shell carbon burning depends sensitively on the profile of 12 C resulting at the end of core carbon burning (Woosley et al. 2002; Imbriani et al. 2001; Arnett 1996). This profile depends in turn on how core carbon burning proceeds: in a convective core or in a radiative region. The nature of that burning depends on the central mass fraction of 12 C attained at the end of core helium burning, which is crucially influenced by the still uncertain 12 C(α, γ) 16 O reaction. Thus, the rate of 12 C(α, γ) 16 O has a significant influence on the behavior of shell C-burning, as we have previously shown (EMT04). The results for the s-process presented in the following are expected to depend on this rate as well.
The neutron source of the s-process nucleosynthesis during shell carbon burning is the 22 Ne(α,n) 25 Mg reaction with an initial amount of X( 22 Ne)≃10 −2 , the amount left over from the end of core helium burning. The alpha particles for the 22 Ne(α,n) 25 Mg reaction are the product of the 12 C + 12 C → 20 Ne + α reaction channel. The 22 Ne(α,n) 25 Mg cross section increases by a factor of ∼10 10 from the phase of core helium burning (T 9 ≃0.3) to the phase of shell carbon burning (T 9 ≃ 1.0-1.1). The 22 Ne ( 4 He) mass fraction during the s-process of core helium burning is ∼10 −1 (∼10 −2 ), whereas during the s-process of shell carbon burning, X( 22 Ne)∼10 −2 (X( 4 He)∼10 −9 ). These factors, together with the change in the density from ∼10 3 g cm −3 during core helium burning to ∼10 5 g cm −3 during shell carbon burning, explain the difference in the neutron density during the s-process of core helium burning (n n ∼10 7 cm −3 ) and the s-process of shell carbon burning (n n ∼10 10 cm −3 ).
The consequence of the higher neutron density during shell carbon burning is the opening of the (n,γ) path at some branchings along the s-process path. Particularly important in the weak s-process are the branchings at 79 Se and 85 Kr. To illustrate the consequence, we take the data for 79 Se: at T = 26 keV, < σ n,γ > = 225 mb and the beta-decay half-life t 1/2 = 5.46 yr, while at T = 91 keV, < σ n,γ > = 97.3 mb and t 1/2 = 0.38 yr (Raiteri et al. 1993). The time scale of beta-decay is τ β = t 1/2 /ln(2), and the time scale of neutron capture is τ n = 1/(n n < σv th >), where the neutron thermal velocity is v th = 2.4×10 8 (T/30 keV) 1/2 cm s −1 . Setting τ β = τ n = τ therefore gives, at T = 26 keV, n n = 8.0×10 7 cm −3 , and, at T = 91 keV, n n = 1.4×10 9 cm −3 . This implies that during core helium burning (T ≃ 26 keV), with n n ≃ 10 7 cm −3 (< 8.0×10 7 cm −3 ), most of the 79 Se beta-decays to 79 Br, followed by 79 Br(n,γ) 80 Br(β − ) 80 Kr, producing 80 Kr. However, during shell carbon burning (T ≃ 91 keV), with n n ≃ 10 10 cm −3 (> 1.4×10 9 cm −3 ), the neutron-capture path 79 Se(n,γ) is open and less 80 Kr is produced. The different paths followed by the s-process during these two burning phases produce different ratios of the 80 Kr and 82 Kr abundances.
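The branching argument above reduces to comparing τ β = t 1/2 /ln(2) with τ n = 1/(n n < σ > v th ); the critical neutron density at which the two time scales are equal can be checked directly. The short script below reproduces the two quoted values from the 79 Se data; the function name is illustrative.

```python
import numpy as np

YEAR = 3.156e7        # seconds per year
MBARN = 1e-27         # cm^2 per millibarn

def critical_density(sigma_mb, t_half_yr, kT_keV):
    """Neutron density at which tau_n = tau_beta for a branching nucleus."""
    v_th = 2.4e8 * np.sqrt(kT_keV / 30.0)            # cm/s
    tau_beta = t_half_yr * YEAR / np.log(2.0)        # s
    return 1.0 / (sigma_mb * MBARN * v_th * tau_beta)  # cm^-3

# 79Se data quoted in the text (Raiteri et al. 1993)
print(critical_density(225.0, 5.46, 26.0))   # ~8.0e7 cm^-3 (core He burning)
print(critical_density(97.3, 0.38, 91.0))    # ~1.4e9 cm^-3 (shell C burning)
```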
Shell Carbon Burning and Its s-process Characteristics
Figures 4 to 10 of EMT04 show the convective structure of our stellar models. Particularly relevant for this section are the various locations of the carbon convective shells. In Table 5 we present some properties of the last shell carbon burning phase in our stellar models. In Figure 4 we show the overabundance factors of heavy nuclei averaged over the convective carbon-burning shell for models 15N, 20N, 25N, and 30N, and in Tables 3 and 4 we tabulate some of their values. The nucleosynthesis products from this carbon shell are the parts of the carbon shells that would be ejected in the supernova event. The locations, mass ranges, and durations of the C-shell burning are not well correlated from one stellar model to the others, perhaps because of variations in the mass fraction of 12 C produced at the end of core helium burning, in the temperature and density with stellar mass, and in the neutrino energy loss. However, although the temperature and density of the innermost region of the carbon-burning shell vary quite significantly during the convective carbon burning, their average values are quite similar among the models: the average temperature at the bottom of the convective shell is ∼1.0-1.1×10 9 K, and the average density is ∼1-2×10 5 g cm −3 .
The change of neutron exposure during shell C-burning, ∆τ n , at the bottom of the C-shell convective zones (shown in Table 5) is lower than the central τ n produced during core helium burning (shown in Table 2) by at least a factor of 4. The average neutron exposures over the convective zones, τ n , produced during C-shell burning are also lower than during core helium burning, by a factor of 2 in the 15 M ⊙ model and by a factor of 7 in the 30 M ⊙ model. The ratio of the average number of neutron captures per iron seed over the convective regions, ∆n c , of core helium burning (Table 2) relative to the value produced by the carbon shell (Table 5) is ∼1 for the 15 M ⊙ model, increases to ∼2 for the 20 M ⊙ model, to ∼4-7 for the 25 M ⊙ models, and to ∼7 for the 30 M ⊙ model. This shows the trend of increasing robustness of the s-process in the carbon shell with decreasing stellar mass.
The maximum neutron density n n max at the bottom of the convective C-shell varies from 1.0×10 10 cm −3 to 70×10 10 cm −3 among our models, as shown in Table 5. The large variation is due to the fact that the 22 Ne(α,n) 25 Mg rate varies by a factor of 17 for a 15% change of temperature near T = 1×10 9 K. The difference in the temperatures and densities at the bottom of the convective C-shell among the models is about ∼20%. These physical variables also vary during the C-shell evolution, as a result of the C-shell burning itself or of the inner Ne- and O-shell burning. Interestingly, the higher the temperature and density of the C-shell, the shorter the burning duration; this results in roughly the same neutron exposure despite the higher neutron density.
Shell carbon burning decreases the mass fraction of 22 Ne by at least a factor of 5, from X( 22 Ne) ≃ 10 −2 at the end of core helium burning to X( 22 Ne) ≃ 10 −3 at the end of shell carbon burning. This low value of X( 22 Ne) near the end of core oxygen burning prevents significant changes to the heavy-element abundances during the short time left before the star explodes. During the explosive phase, little subsequent alteration occurs, except in zones that achieve temperatures in excess of T 9 ≈ 2.3. In such zones, the rate of 22 Ne(α,n) 25 Mg increases by a factor of 3×10 4 relative to the rate at T 9 = 1.0, but the dynamic time scale decreases by a factor of 10 7 relative to the time scale during shell carbon burning. Only disintegration reactions are likely to modify the abundances in these zones significantly during the explosion (e.g., Arnould & Goriely 2003).
The overabundance of 88 Sr increases by a factor of ∼2 during shell carbon burning in each model studied, and these overabundances increase monotonically with increasing stellar mass. In contrast, the overabundance of 80 Kr decreases during shell carbon burning for sequences 25N, 25NM, and 30N. In these sequences, a significant fraction of the convective carbon shell has a neutron density larger than ∼1.2×10 9 cm −3 . At this neutron density (and at a temperature T ≈ 30 keV), the neutron-capture rate of 79 Se dominates its beta-decay rate. This allows the s-process flow to bypass 80 Kr, thereby leading to its destruction (see also Raiteri et al. 1991a). We show this in more detail in the next section.
25 M ⊙ s-process abundances
In this section, we describe how the abundances of the s-process products change with time as a massive star evolves through core He-burning and through shell C-burning. We focus on the products of the s-process in the last shell C-burning because these layers will be ejected mostly without further nucleosynthesis processing in a supernova explosion.
In Fig. 5 overabundance factors are shown as a function of time for the important nuclear species produced by the s-process during core He-burning and shell C-burning in a 25 M ⊙ star (evolutionary sequence labeled 25N in Table 1). These curves represent the overabundance factors at a mass coordinate M r =2.26 M ⊙ , that is, at a mass shell inside the convective core during core helium burning but at the bottom of the convective carbon-burning shell in this case (see Fig. 6 in EMT04). The overabundances of all nuclear species shown in Fig. 5 increase during core He-burning except 54,56 Fe, 70 Zn, and 152 Gd, which decrease because of neutron capture. As expected, the pure s-nuclei ( 70 Ge, 76 Se, 80 Kr, 82 Kr, 86 Sr, 87 Sr) are produced in particular during this phase, as summarized in Table 3.
The modification of the s-process products by shell C-burning is most effective during core neon burning and core oxygen burning in case 25N, as shown in Fig. 5 at the time coordinate between 0.0 and -1.0. In this case, the convective carbon-burning shell is most effective as it has settled in the mass range 2.26-4.94 M ⊙ (see Table 5 and Fig. 13). Note that the increase of the overabundance of 87 Rb before the onset of shell C-burning is a result of the β + -decay of 87 Sr, whose rate is sensitive to temperature according to the work of Takahashi & Yokoi (1987).
Figs. 5, 6, and 7 show that the effect of shell C-burning on the overabundance factors is distinct from that due to core He-burning (time coordinate between 4.0 and 5.0) for cases 25N, 25K, and 25C, respectively. The neutron density achieved in case 25N during shell C-burning reaches a peak value of ≃7×10 11 cm −3 (see Fig. 8). Therefore, the branchings at the unstable nuclei 63 Ni, 69 Zn, 79 Se, and 85 Kr become effective. This can be traced in Fig. 5 and Table 3 as follows:
• The decrease of the overabundances of 63 Cu and 65 Cu and the increase of 64 Ni indicate the branching at 63 Ni.

• The branching at 69 Zn is indicated by the increase of the overabundance of 70 Zn and the decrease of the abundance ratio of the germanium isotopes 70 Ge/ 72 Ge. Notice that 70 Zn was destroyed during core He-burning but produced by shell C-burning (see §4.2.1 for more discussion of 70 Zn).

• The branching at 79 Se leads to a modification of the overabundances of the isotopes 80,82 Kr: the overabundance of 80 Kr is diminished by a factor of about 4 compared to its value reached at the end of core He-burning, while the overabundance of 82 Kr increases by a factor of about 1.7.

• The effect of the branching at 85 Kr leads to the increase of the overabundance of the isotope 87 Rb (Raiteri et al. 1993).

• Finally, there is a decrease in the abundances of the isotopes 86,87 Sr (affected by the branchings at 85 Kr and 86 Rb) and an increase of the overabundance of 96 Zr to a value larger than 1 ( 96 Zr is destroyed during core helium burning).
In case 25K, the second stage of convective shell carbon burning comprises an extended mass range of 1.30-4.54 M ⊙ (see Table 5). The overabundance factors are shown in Fig. 6 and are taken at a mass coordinate of 1.38 M ⊙ , at the bottom of the convective carbon-burning shell in this case, with the physical conditions as a function of time shown in Fig. 9. These factors are distinct from those in case 25N in many respects:
• The overabundance of 80 Kr is increased by shell C-burning, in contrast to case 25N. The reason is the lower neutron density achieved during this phase in this sequence (see Fig. 9).

• This lower neutron density explains the lower overabundance factor of 86 Kr (about a factor of 5 lower than in case 25N) and also the relatively higher overabundances of 86,87 Sr. In other words, the nuclear-reaction flow in case 25K does not proceed beyond the Sr isotopes to reach the zirconium region, as it does in case 25N.
In case 25C, the overabundance factors are displayed in Fig. 7. Despite the similarity with Fig. 6, the s-process is generally more efficient in this sequence, in both core He-burning and shell C-burning, due to the adoption of the CF88 rates.
It is worth relating these differences to the evolution of the stellar models. To do so, we compare Figures 8 and 9, which show several physical variables characterizing the properties of the carbon-burning shell in cases 25N and 25K, respectively. We recall that at the end of core helium burning 25N has a central carbon mass fraction X( 12 C) = 0.236, while 25K has X( 12 C) = 0.280. This slight difference has a significant effect on the ensuing evolution beyond core He-burning, as described in detail by EMT04.
We emphasize some points that help in understanding the s-process during shell C-burning. The shell burning proceeds differently in sequences 25N and 25K. The relatively lower value of X( 12 C) in sequence 25N leads to a convective core of mass M CC = 0.36 M ⊙ , while M CC = 0.47 M ⊙ in case 25K. Due to this difference, shell C-burning proceeds in three convective episodes in 25N but in two convective episodes in 25K. This can be seen in Figs. 8 and 9, especially in the behavior of the neutron number density as a function of time. A peak neutron density of ∼7×10 11 cm −3 is achieved in 25N, compared to ∼1.1×10 10 cm −3 in case 25K. The higher neutron density in 25N creates a flow of neutron-capture reactions which reaches the region of zirconium (see Fig. 12). On the other hand, the third convective C-shell phase in 25N lasts ∼0.5 yr, while the second convective C-shell in 25K lasts ∼24 yr, or ∼50 times longer. This longer burning time leads to more depletion of carbon in the C-shell, where X f ( 12 C) = 7.2×10 −3 in 25K compared to X f ( 12 C) = 9.4×10 −2 in 25N.
The neutron density and the duration of neutron exposure in the two models produce a similar neutron exposure, τ n ≃ 0.9 mbarn −1 , at the bottom of the respective convective carbon shells. However, the longer duration and larger mass range of convective carbon-shell burning in 25K (M r = 1.30-4.54 M ⊙ ) than in 25N produce a higher number of neutron captures per iron seed nucleus, n cap , in 25K (see Figs. 8 and 9). In order to explain these values of τ n and n cap , we performed several test calculations of nuclear burning and mixing in the convective zones.
As a reference calculation, we constructed spherical shells with the temperatures, densities, diffusion coefficients, and mass coordinates of model 25K when the convective C-shell is at its maximum extent (M r = 1.30-4.54 M ⊙ ). As the initial composition for all test calculations of C-shell burning, we took the composition of the stellar model at the end of core helium burning, at which point n cap = 3.70. We ran a simultaneous nuclear burning and mixing code (Jordan et al. 2005) for a duration of about 20 yr, so that X f ( 12 C) ≤ 1×10 −3 . In this simultaneous burning and mixing code, the temperature, density, and diffusion coefficient of each zone remained fixed in time for simplicity. The results of this reference calculation show a neutron exposure at the bottom of the convective shell of τ n = 0.20 mbarn −1 and a number of neutrons captured per Fe seed of n cap = 4.74.
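As a schematic illustration of the operator-split "burn plus mix" scheme underlying such test calculations, the sketch below alternates a local carbon-depletion step with an explicit diffusion step on a fixed grid. Every number in it (zone count, diffusion coefficient, burn rates, the neutron-exposure proxy) is an illustrative placeholder in arbitrary units, not a value from our models or from the Jordan et al. (2005) code.

```python
import numpy as np

nz = 20                        # zones across the convective shell
dx = 1.0 / nz                  # normalized width of one zone
D = 0.5                        # normalized diffusion coefficient (placeholder)
dt = 0.4 * dx**2 / D           # explicit-stability-limited time step

x_c12 = np.full(nz, 0.2)       # 12C mass fraction profile (placeholder)
rate = 0.5 * np.linspace(1.0, 0.2, nz)  # burn rate, fastest at the hot bottom
tau_n = 0.0                    # schematic neutron-exposure tally (arb. units)

def mix(x):
    """One explicit diffusion substep modeling convective mixing."""
    xl, xr = np.roll(x, 1), np.roll(x, -1)
    xl[0], xr[-1] = x[0], x[-1]            # zero-flux boundaries
    return x + D * dt / dx**2 * (xl - 2.0 * x + xr)

t = 0.0
while x_c12.mean() > 1e-3:                 # run until 12C is depleted
    x_c12 = x_c12 * np.exp(-rate * dt)     # "burn": local 12C depletion
    # neutron density taken proportional to the local 12C burn rate
    tau_n += rate[0] * x_c12[0] * dt       # exposure tally at the shell bottom
    x_c12 = mix(x_c12)                     # "mix": homogenize the shell
    t += dt

print(t, tau_n)   # burn duration and schematic exposure at the bottom
```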
To understand the effect of the size of the convective zones (since model 25N has a thinner range of convective zones than model 25K), we ran a second calculation in which we reduced the thickness of the convective zones to about 70% of the reference mass range. We kept the other variables the same as in the reference calculation. In this second calculation, we obtained τ_n = 0.07 mbarn^-1 and n_cap = 4.74 by the time X_f(12C) had dropped to 1×10^-3, in about 20 years. This shows that the smaller neutron supply resulting from the reduced size of the convective zones produces a smaller value of the neutron exposure. On the other hand, there was a correspondingly smaller number of seed nuclei, so the number of neutron captures per Fe seed nucleus was the same as in the reference calculation. The relation of this test calculation to the reference one clearly does not mimic the relation between the 25N and 25K results.
To test the effect of the higher temperature in the carbon shell in 25N, we performed another test calculation in which we kept all physical variables the same as in the reference calculation but increased the temperature of each zone by 15%. For this structure, we found τ_n = 0.21 mbarn^-1 and n_cap = 3.85 at time t = 0.04 yr, and τ_n = 0.26 mbarn^-1 and n_cap = 4.86 at time t = 20 yr. This test shows that the structure with higher temperature produces a much higher neutron density and hence reaches the same value of τ_n (≈ 0.2 mbarn^-1) as in the reference calculation, but in only 0.04 years. Because of the shorter time needed to reach τ_n = 0.21 mbarn^-1, this test calculation yields a smaller value of n_cap at the same τ_n. This result indeed mimics the difference between the results of 25N and 25K, and we conclude that the similar values of τ_n in the C-shells of 25N and 25K and the higher value of n_cap in 25K at the end of C-shell burning are mostly due to the higher temperature in 25N than in 25K.
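The interplay between neutron density, burning time, and exposure quoted above can be checked with a one-line estimate: since τ_n = ∫ n_n v_th dt, the mean neutron density implied by reaching a given exposure in a given time is ⟨n_n⟩ ≈ τ_n/(v_th Δt). The sketch below, assuming the standard thermal neutron velocity at kT = 30 keV, reproduces the order of magnitude of the neutron densities involved in the two test structures.

```python
# Mean neutron density implied by a neutron exposure tau_n reached in time dt:
#   <n_n> ~ tau_n / (v_th * dt),  with v_th = sqrt(2 kT / m_n) at kT = 30 keV.
V_TH = 2.4e8       # cm/s, thermal neutron velocity at kT = 30 keV
MB_INV = 1e27      # 1 mbarn^-1 = 1e27 cm^-2
YR = 3.156e7       # s

for label, tau_n, dt_yr in [("hot structure (T +15%)", 0.21, 0.04),
                            ("reference structure",    0.20, 20.0)]:
    n_mean = tau_n * MB_INV / (V_TH * dt_yr * YR)
    print(f"{label}: <n_n> ~ {n_mean:.1e} cm^-3")
# ~7e11 vs ~1e9 cm^-3: the hotter structure needs a far higher neutron
# density to accumulate the same exposure in a time shorter by a factor ~500.
```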
In Figure 10 we show the overabundance distribution of 12C, 16O, 22Ne, 70Zn, 70Ge, 80Kr, and 86Sr (nuclei that are important for s-process nucleosynthesis during shell carbon burning) at the end of core oxygen burning for sequences 25N and 25K, to illustrate Table 3 and the discussion above. The overabundances of the light nuclei 12C, 16O, and 22Ne drop significantly from 92(78), 72(77), and 60(59) at the end of core helium burning to 2.4(31), 61(68), and 3(5) at the end of core oxygen burning for the last carbon-shell region in sequence 25K(25N), respectively. The overabundance of the s-only nuclide 70Ge increases by a factor of ∼2 for both sequences during shell carbon burning. The overabundance of 70Zn at the end of core helium burning is 0.4 for sequences 25N and 25K, but increases significantly during shell carbon burning to a value of 5 for sequence 25K and to a value of 40 for sequence 25N, as shown in the figure.
Our general conclusion of this analysis is that s-process nucleosynthesis occurring in shell C-burning is rather sensitive to the central mass fraction of 12 C attained at the end of core He-burning, and this in turn depends on the rate of 12 C(α, γ) 16 O, a value still under debate, as outlined in §2.
70 Zn
The production of 70 Zn in explosive carbon and neon burning is discussed in Arnett (1996). The source of neutrons for the neutron-capture reactions in explosive burning is the 12C+12C reaction. In explosive C or Ne burning, 70Zn is produced in almost equal amounts to 68Zn (Arnett 1996; Howard et al. 1972), whereas the ratio of 70Zn to 68Zn in the solar abundance is 0.033. An analysis by Arnett (1996) concludes that solar 70Zn is produced in a nuclear burning process with a time scale longer than a typical explosive time scale, which suggests the hydrostatic burning of carbon or neon as the production site of 70Zn. In such burning, the 70Zn overproduction should be a fraction of the 68Zn overproduction, as we find in the present analysis.
The solar abundance of the 70Zn isotope is 0.62% of all Zinc isotopes (Anders & Grevesse 1989). The small fraction of the 70Zn abundance relative to the other Zinc isotopes makes it difficult to detect in the spectra of stellar atmospheres or the interstellar medium. It is possible that hints of 70Zn might be preserved in meteoritic samples. For example, 70Zn isotopic anomalies have been measured in Allende meteorite inclusions. A clear excess of 66Zn and a deficit of 70Zn in FUN inclusions (Volkening & Papanastassiou 1990; Loss & Lugmair 1990) are correlated with excesses of the neutron-rich isotopes 48Ca, 50Ti, 54Cr, and 58Fe. The source of these anomalies was attributed to neutron-rich e-process nucleosynthesis in massive stars (Hartmann et al. 1985), but the current thinking is that these isotopes were produced in rare Type Ia supernovae (e.g., Meyer et al. 1996; Woosley 1997). Since such nucleosynthesis does not produce 70Zn, the correlated deficit of this isotope with excesses in 48Ca, for example, is expected. A more promising cosmochemical sample that might provide evidence of 70Zn production in C-shell s-processing is a presolar grain. It is quite reasonable to expect that some shell carbon burning products might condense or be implanted into grains which, if then preserved in meteorites, would show excesses of 70Zn and 87Rb, along with the other s-process products listed in Table 3, allowing a correlation analysis.
87 Rb
The solar abundance of Rb comprises the isotopes 85Rb and 87Rb, with the 87Rb abundance being 27.8% of the total (Anders & Grevesse 1989). The abundance of the long-lived radioactive 87Rb, which decays to the daughter nucleus 87Sr with a half-life of 4.9×10^9 yr, is often used in radioactive dating of rocks and meteorites (e.g., Cowley 1995; Misawa et al. 2006). It is probably not possible to observe the Rb isotope ratio directly in massive stars; however, the abundances ejected into the interstellar medium or molecular clouds could be measured. Recently Federman et al. (2004) reported the first measurement of the interstellar 85Rb/87Rb isotope ratio from the diffuse gas toward ρ Oph A. They obtained a value of 1.21, which is significantly lower than the solar abundance value of 2.59. A proper understanding of the origin of 87Rb in the diffuse gas will require chemical evolution calculations with mixing of several generations of stars. Correlating the observed 87Rb abundance with other heavy isotopic abundances could reveal interesting insights into carbon-shell nucleosynthesis.
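As an aside, the 87Rb half-life quoted above is what makes the Rb-Sr pair useful as a chronometer. A minimal sketch of the standard single-sample age relation is given below; the isotope ratios are made-up inputs, purely for illustration.

```python
import math

# Rb-Sr model age from the standard decay relation
#   (87Sr/86Sr)_now = (87Sr/86Sr)_0 + (87Rb/86Sr)_now * (exp(lambda*t) - 1)
T_HALF = 4.9e9                                   # yr, 87Rb half-life quoted above
lam = math.log(2.0) / T_HALF                     # decay constant [1/yr]

sr_now, sr_init, rb_sr = 0.7120, 0.6990, 0.90    # illustrative ratios only
t = math.log(1.0 + (sr_now - sr_init) / rb_sr) / lam
print(f"model age: {t / 1e9:.2f} Gyr")
```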
s-process path
In order to see the differences in the s-process paths of core helium burning and shell carbon burning, we performed two one-zone s-process nucleosynthesis calculations, using the central temperature, central density, and central mass fraction X(4He) of core helium burning in sequence 25N, and the temperature, density, and mass fraction X(12C) of the innermost shell of shell carbon burning in sequence 25N, respectively. The initial composition of each calculation is taken from the initial composition of the corresponding burning phase in sequence 25N. For each time step, we compute the reaction flow. For example, for the reaction i + j → k + l, the integrated flow over a time step dt is f_{i+j,k+l} = N_A⟨σv⟩_{ij,kl} ρ Y_i Y_j dt, where Y_i and Y_j are the abundances of species i and j, respectively. The total integrated net flow over all time steps, F_{i+j,k+l} = Σ_n [f_{i+j,k+l} − f_{k+l,i+j}]_n, shows the total flow of the reaction. The net integrated nuclear reaction flows are shown for the case 25N in Fig. 11 and Fig. 12 for the s-process during core He-burning and shell C-burning, respectively. Similar features to those shown in Fig. 11 and Fig. 12 are also reproduced in sequence 25K. The differences in temperature, density, and, therefore, neutron density cause some differences in the branchings of the flows. The (n,γ) branching paths that are opened or enhanced during the third convective C-shell relative to the s-process paths during core helium burning in 25N are at 57,60 Fe, 64 Cu, 69 Zn, 70 Ga, 75 Ge, 76 As, 81 Se, 80,82 Br, 85 Kr, 86,87 Rb, 87,89,90 Sr, 90 Y, and 95 Zr.
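As a concrete illustration of the flow bookkeeping just defined, the short sketch below accumulates the integrated net flow F for a single forward/reverse reaction pair along a temperature-density history. The rate arrays and the history are placeholders, not the rates used in our network.

```python
import numpy as np

def net_flow(t, rho, lam_fwd, lam_rev, Y_i, Y_j, Y_k, Y_l):
    """Integrated net flow F_{i+j,k+l} = sum_n [f_fwd - f_rev]_n, where for
    each step f = N_A<sigma v> * rho * Y_a * Y_b * dt (lam_* = N_A<sigma v>)."""
    F = 0.0
    for n in range(len(t) - 1):
        dt = t[n + 1] - t[n]
        F += (lam_fwd[n] * Y_i[n] * Y_j[n]
              - lam_rev[n] * Y_k[n] * Y_l[n]) * rho[n] * dt
    return F

# illustrative constant-condition history (all values made up)
t = np.linspace(0.0, 1.0e7, 100)   # s
ones = np.ones_like(t)
print(net_flow(t, 1e5 * ones, 1e-2 * ones, 1e-5 * ones,
               1e-9 * ones, 1e-3 * ones, 1e-10 * ones, 1e-6 * ones))
```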
It is interesting to note that Fig. 12 shows that proton-capture reactions occur on nuclei with Z = 26-30 during shell C-burning. The largest (p,γ) flow for Z ≥ 26 is the 58Fe(p,γ) flow. The ratio 58Fe(p,γ)/58Fe(n,γ) ≃ 1.4×10^-3, which clearly shows that proton-capture reactions do not affect the s-process flow.
In these one-zone nucleosynthesis calculations of shell carbon burning, we find the five largest neutron source reactions are the (α,n) reactions on 22Ne, 21Ne, 17O, 13C, and 26Mg, with 21Ne produced through 20Ne(n,γ)21Ne, 17O produced through 16O(n,γ)17O, 13C produced through 12C(p,γ)13N(β+)13C, and 26Mg produced through 12C + 12C → 23Na + p followed by 23Na(α,p)26Mg. The alphas (58%) and protons (42%) are produced by the two channels of carbon burning.
Recently Travaglio et al. (2004), in their study of the Galactic evolution of Sr, Y, and Zr, inferred some hints of a primary s-process in low-metallicity massive stars. These authors suggested that 12C + 12C → 23Mg + n and 26Mg(α,n)29Si during carbon burning could be the neutron source reactions in extremely metal-poor (EMP) massive stars. To analyze this suggestion, we evolved a 25 M⊙ star with initial metallicity [O/H] = [Fe/H] = -4 up to the end of core helium burning and then took this composition as the initial composition of a one-zone calculation of shell carbon burning, using the physical conditions of sequence 25N above. We find the ratios of the α, p, and n channels of the 12C + 12C reaction flow to be 1.0:0.71:0.0, respectively. The largest neutron source reactions are the (α,n) reactions on 13C, 17O, 21Ne, and 22Ne, with ratios of 1.0:0.17:0.06:0.02, respectively. The 13C(α,n)16O reaction is thus the major neutron source during shell carbon burning in the EMP massive star, instead of the 22Ne(α,n)25Mg reaction; the same can also be shown for core helium burning (Baraffe et al. 1992; Rayet & Hashimoto 2000). We find that most of the Sr nuclei (77%) are produced in core helium burning rather than in shell carbon burning. Therefore, carbon burning cannot provide enough neutrons to explain the enhancement of the observed Sr abundances in the EMP stars.
Neutron Exposure and Neutron Capture per Iron Seed Nuclei
Several burning sequences in massive stars produce neutrons through the 22Ne(α,n)25Mg reaction. A simple way to show where nuclei are exposed to these neutrons during the stellar lifetime is to plot the total neutron exposure versus the interior mass coordinate, as shown in Fig. 13 at the end of core oxygen burning (the last model calculated). Here τ_n(M_r) ≡ ∫_0^t n_n(M_r, t′) v_th dt′ is the neutron exposure that would be experienced by a nucleus if it remained at M_r at all times (TEM00). No nucleus has this history because of convective mixing in the star, but Fig. 13 clearly shows where neutrons were liberated during the star's evolution.
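Operationally, τ_n(M_r) is just a time integral accumulated along the evolution at fixed mass coordinate. A minimal sketch, with a made-up neutron-density history, is:

```python
import numpy as np

def neutron_exposure(t, n_n, v_th=2.4e8):
    """tau_n(M_r) = int n_n(M_r, t') v_th dt' at fixed mass coordinate, in
    cm^-2; v_th defaults to the 30 keV thermal neutron velocity, 2.4e8 cm/s."""
    return np.trapz(n_n * v_th, t)

# illustrative history: a 0.5 yr burning episode at a mean n_n of 2e11 cm^-3
t = np.linspace(0.0, 0.5 * 3.156e7, 1000)   # s
n_n = np.full_like(t, 2e11)                 # cm^-3
tau = neutron_exposure(t, n_n)
print(f"tau_n = {tau:.2e} cm^-2 = {tau / 1e27:.2f} mbarn^-1")
```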
The central neutron exposure of each curve in Fig. 13 is mostly due to the neutron exposure during core helium burning (TEM00). Farther out are several peaks of neutron exposures produced during shell carbon burning. As in the core helium burning, the highest neutron exposure occurs at the innermost convective region where the temperature and density are the highest and the neutron-liberating reactions the fastest. It is worth remembering that convection continually replenishes the supply of neutron sources to these zones.
The outermost peak of the neutron exposure curves in Fig. 13 is due to shell helium burning. The width of the peak is due to the outward migration of the innermost part of the helium shell during its evolution.
While the neutron exposure as a function of interior mass, τ_n(M_r), is a good tool to show where nuclei are exposed to neutrons, it is less effective in showing the degree of production of heavy nuclei. As mentioned above, convective mixing moves nuclei into and out of the regions of high neutron density, so no nucleus actually experiences an exposure τ_n(M_r). A direct measure of the global production of heavy elements is N_c = ∫ n_c(M_r) dM_r, the number of neutron captures per iron seed nucleus at different phases of the stellar evolution, integrated over the mass range above the mass cut of 1.5 M⊙ and within the relevant burning zone. Here n_c(M_r) is the number of neutron captures per iron seed nucleus at interior mass M_r during the burning phase. In Table 7 we present N_c for our stellar models. That table shows that in each stellar model the s-process during core helium burning is the dominant producer of ejected s-process heavy elements, followed by the s-process during carbon burning and then the s-process during shell helium burning, except for the case of 15 M⊙, where the mass cut of 1.5 M⊙ is comparable to the maximum size of its convective helium-burning core of 2.22 M⊙.
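The global measure N_c defined here is simply the mass integral of the capture-per-seed profile above the mass cut; a sketch of the bookkeeping, assuming a tabulated n_c(M_r) profile (the profile below is made up), is:

```python
import numpy as np

def captures_per_seed(m_r, n_c, mass_cut=1.5):
    """N_c = int n_c(M_r) dM_r over the burning zone, restricted to
    M_r > mass_cut (the assumed remnant mass, in Msun)."""
    sel = m_r > mass_cut
    return np.trapz(n_c[sel], m_r[sel])

# illustrative profile over the 25K carbon shell, M_r = 1.30-4.54 Msun
m_r = np.linspace(1.30, 4.54, 200)
n_c = 1.3 * np.exp(-(m_r - 1.30))   # made-up falling profile
print(f"N_c = {captures_per_seed(m_r, n_c):.2f}")
```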
If we compare N_c for the different stellar masses in Table 7, we find that the larger the stellar mass, the greater the heavy-element production in each s-process burning phase. It is interesting to note that our 30 M⊙ stellar model produces a larger total amount of heavy elements than the 25 M⊙ stellar model even after weighting by an initial mass function factor. We surmise that the largest weak s-process production occurs for stellar masses around 25-30 M⊙.
Comparison of Stellar Yields with Solar Abundance
Stellar nucleosynthesis yields can be tested with abundance measurements of the interstellar medium, stellar atmospheres, presolar grains in meteorites, or the solar system abundances. In this section, we compare our stellar yields with the solar system abundances. In order to make a proper comparison with solar system abundances, a Galactic chemical evolution calculation, in which time-integrated yield contributions from multiple generations of stars of different metallicities are summed, should be carried out. However, since we only produce a limited mass range of stellar models, and only of initial solar metallicity, a meaningful comparison is made with a simplistic approach: we compare the s-only solar abundances with the sum of the IMF-averaged yields of the s-only nuclei of our stellar models and of the s-only nuclei contribution from the main component. We use the overproduction factors, X/X⊙, for the comparisons.
The overproduction factors of the main component are taken as the ratios of the values in the third and second columns of Table 2 of Arlandini et al. (1999). Our approach is similar to the analysis performed by Arlandini et al. (1999) in decomposing the solar abundance distribution into the s- and r-process components. Arlandini et al. (1999) calculated the r-component residuals by subtracting the s abundances of the arithmetic average of their 1.5 and 3 M⊙ models at Z = Z⊙/2 from the solar abundances. They showed in their Fig. 3 that with their new (n,γ) cross sections of neutron-magic nuclei at N = 82, the agreement between their low-mass asymptotic giant branch s-only yields and the corresponding solar abundances improved significantly.
The IMF-averaged overproduction factors of our stellar models, y_wk, are calculated as

y_wk = [r_{12.5-17.5} × y_15 × 13.5 + r_{17.5-22.5} × y_20 × 18.5 + r_{22.5-27.5} × y_25 × 23.5 + r_{27.5-40.0} × y_30 × 28.5] / (r_{12.5-17.5} × 13.5 + r_{17.5-22.5} × 18.5 + r_{22.5-27.5} × 23.5 + r_{27.5-40.0} × 28.5),

where we assume each stellar model ejects all its material into the interstellar medium except for its 1.5 M⊙ remnant, and that only stars in the mass range 12.5 M⊙ to 40 M⊙ eject weak s-process material. Here y_15, y_20, y_25, and y_30 are the stellar overproduction factors of our 15, 20, 25, and 30 M⊙ models (some are listed in Table 6). The factors r_{12.5-17.5}, r_{17.5-22.5}, r_{22.5-27.5}, and r_{27.5-40.0} are the normalized numbers of stars in the mass ranges 12.5-17.5 M⊙, 17.5-22.5 M⊙, 22.5-27.5 M⊙, and 27.5-40.0 M⊙, respectively, assuming their ratios follow the IMF distribution ξ_0 m^-α with Salpeter's original value α = 1.35.

Our weak s-only nuclei overproduction factors, y_wk, are scaled and summed with the scaled main s-only nuclei overproduction factors to produce the total s-only nuclei overproduction factors, y_s-tot, such that y_s-tot = c_wk × y_wk + c_mn × y_mn. The scale factors c_wk and c_mn are determined by a least-squares fit of the total s-only overproduction factors relative to the solar values of unity. We use the 34 s-only nuclei from 70Ge up to 208Pb in the fit. Their standard deviations are taken from column 4 of Table 2 of Arlandini et al. (1999), where the values were determined by taking into account the uncertainties in cross sections and solar abundances. The best fit of the total s-only nuclei overproduction factors is represented by the solid circles in Fig. 15. In this figure, the s-only overproduction factors of the weak component are from our stellar models (15N, 20N, 25N, and 30N) at the end of core oxygen burning and are represented by the filled diamond symbols, whereas the s-only overproduction factors of the main component are represented by the solid squares. The value of the best-fit χ² is 153.7 with 32 degrees of freedom. This χ² is quite large, suggesting that we may be underestimating the standard deviations (we have not taken into account the error propagation of the cross sections to the yields of our stellar models). Alternatively, the large χ² may be due to our overly simple treatment of chemical evolution or to the fact that our stellar models began with initial solar composition. Nevertheless, we show that the fit of the s-only nuclei of the weak component (A < 90) is as good as the fit of the main component (A > 90). We expect that if the overproductions due to explosive burning and the complete set of nuclei from the main component are included, most of the points in Fig. 15 would lie near the dashed line in the figure (solar values).

Table 8 presents the best-fit overproduction factors of the weak and main components for the s-only nuclei, for nuclei in the mass range 60 < A < 90 with overproduction factor > 0.5, and for some other interesting heavy nuclei. We find that our set of stellar models using NACRE reaction rates produces too many 70Ge and 82Kr nuclei; at maximum the excesses are 14% and 13%, respectively. For s-only nuclei with A ≤ 87, the weak component contributes at least 40% of the solar s-only abundances. For s-only nuclei with A > 90, most of the weak component contributions are between 5.3% and 8.5%, except for 152Gd, 187Os, and 198Hg, which are 14%, 13%, and 11%, respectively.
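For reproducibility, the two numerical steps described above (the Salpeter-weighted IMF average and the two-parameter least-squares fit for c_wk and c_mn) amount to the following sketch. The y_models array uses the Table 6 yields of 70Ge, 80Kr, and 86Sr for the 15N, 20N, 25N, and 30N models; the main-component factors and the standard deviations are illustrative placeholders, not the Arlandini et al. (1999) values.

```python
import numpy as np

# Salpeter-weighted IMF average of the stellar overproduction factors.
def imf_number(m_lo, m_hi, alpha=1.35):
    """Relative number of stars in [m_lo, m_hi] for a Salpeter IMF
    (number per unit mass ~ m^-(alpha+1), so the integral goes as m^-alpha)."""
    return m_lo**(-alpha) - m_hi**(-alpha)

bins = [(12.5, 17.5, 13.5), (17.5, 22.5, 18.5),
        (22.5, 27.5, 23.5), (27.5, 40.0, 28.5)]   # (m_lo, m_hi, ejected mass)
y_models = np.array([[ 3.77,  4.14,  2.47],       # 15N: 70Ge, 80Kr, 86Sr
                     [12.85, 17.93,  7.26],       # 20N
                     [33.15, 11.29,  6.33],       # 25N
                     [47.42, 47.35, 23.10]])      # 30N (values from Table 6)

w = np.array([imf_number(lo, hi) * mej for lo, hi, mej in bins])
y_wk = w @ y_models / w.sum()

# Two-parameter least-squares fit y_tot = c_wk*y_wk + c_mn*y_mn to the
# solar values (unity), weighted by adopted standard deviations sigma.
y_mn = np.array([0.02, 0.03, 0.05])               # placeholder main component
sigma = np.array([0.10, 0.12, 0.15])              # placeholder uncertainties
A = np.stack([y_wk, y_mn], axis=1) / sigma[:, None]
(c_wk, c_mn), *_ = np.linalg.lstsq(A, 1.0 / sigma, rcond=None)
chi2 = float(((A @ [c_wk, c_mn] - 1.0 / sigma) ** 2).sum())
print(c_wk, c_mn, chi2)
```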
In principle, the method presented in this section is similar to the classical approach of fitting the σN_s curve to the solar abundances pioneered by Seeger et al. (1965) and Clayton & Rassbach (1967). Both methods fit the s-only nuclei of the solar abundances. In the classical approach, seed nuclei are exposed to three exponential distributions of neutron exposures (the main, the weak, and the strong components), whereas in the stellar models seed nuclei are exposed to a single exposure in massive stars (Beer 1986; Beer & Macklin 1989) and to an exponential distribution of exposures from repeated thermal pulses in the low-mass AGB stars (Ulrich 1973).
Comparison of s-process Burning Phase and Single Stellar Model
It is interesting to know how good the s-only overproduction factors of each stellar model at the end of core helium burning, at the end of core oxygen burning, and their IMF average are relative to the best overall fit to the s-only solar abundance distribution. In Fig. 16 we present the best-fit overproduction factor distribution of model 30N at the end of core helium burning (panel a), of the IMF average of our stellar models at the end of core helium burning (panel b), of model 25K at the end of core oxygen burning (to include s-process results from core helium and shell carbon burning, panel c), and of the IMF average of our stellar models at the end of core oxygen burning (panel d). In panel d, instead of model 25N for the 25 M⊙ contribution (shown in Fig. 15), we take model 25K to test the effect of the 12C(α,γ)16O reaction in fitting the s-only solar abundances.
Model 30N (Fig. 16a) produces the best s-only solar distribution fit with χ 2 =176 among 15N, 20N, 25N, 25K, and 30N models for yields at the end of core helium burning. Model 30N also produces a better fit than the IMF-averaged of models at the end of core helium burning (χ 2 =205, Fig. 16b), mostly due to the large χ 2 contribution from the 15 and 20 M ⊙ models.
Model 25K (Fig. 16c) produces the best fit to the s-only solar distribution, with χ² = 161, among the 15N, 20N, 25N, 25K, and 30N models for yields at the end of core oxygen burning. We also find that the overproduction factors of the s-only nuclei of the IMF average of models at the end of core oxygen burning (χ² = 153) give a better fit to the s-only solar distribution than the overproduction factors of the IMF average of models at the end of core helium burning (χ² = 206). The differences in χ² between the fits in the panels of Fig. 16, being larger than 8, are quite significant, since the differences only involve 6 data points for the weak component. We conclude that including shell carbon burning s-processing indeed gives a better fit to the s-only solar distribution than using yields from the core helium burning s-process only. Also, IMF averaging is necessary to give a better fit to the solar abundance distribution (as can be seen by comparing the χ² and the spread from minimum to maximum of the overproduction factors of panel c with the distribution of panel d). Furthermore, the overproduction factors X/X⊙ > 0.5 for nuclei with 60 ≤ A ≤ 90 suggest that the solar abundances of nuclei in this mass range are dominantly produced by s-processing in massive stars (see also Table 8).
Summary and Discussion of the Results of the s-process
Tables 3, 4, and 6 summarize our results on the s-process in the massive stars under consideration. In Table 3, we emphasize the following points:
• A comparison between the overabundances obtained at the end of core He-burning in 25N and 25K shows that the reaction 12C(α,γ)16O has only a small influence on the efficiency of the s-process during this phase. In contrast, the efficiency of the s-process during shell C-burning is very sensitive to the mass fraction of carbon left over at the end of core He-burning, which is determined by this reaction.
• The overabundances we have obtained in our case 25C at the end of core He-burning are similar to those calculated by Raiteri et al. (1991b), but our results for shell C-burning differ from those of Raiteri et al. (1991b), since they performed essentially a one-zone calculation at fixed temperature and density.
• Table 4 indicates that the s-process during core He-burning leads to a monotonic increase of the overabundances as a function of stellar mass. However, this does not apply in the case of shell C-burning, because branchings along the s-process path become effective as a result of the higher neutron density encountered during this phase (see Fig. 8). The behavior of the overabundances of 63,65 Cu, 64 Zn, 80 Kr, 86,87 Sr, and 152 Gd illustrates this feature.
• In Table 6, we summarize the stellar yields relative to solar of the listed nuclei, as obtained by integration above 1.5 M⊙ for each stellar mass. Their overproduction factor distributions as a function of mass number A are plotted in Fig. 14. The dependence of these yields on the stellar mass reflects the behavior of the overabundances described above. Relatively high yields are obtained for the pure s-nuclei. We emphasize the remarkable difference, by a factor of 4.2, in the yield of 80 Kr resulting from the sequences 25N and 25K at the end of core oxygen burning. The reason is the destruction of 80 Kr by shell C-burning in the sequence 25N and its production in 25K. This is seen in Fig. 10, where the normalized mass fractions are displayed as a function of the interior mass in the sequences 25N and 25K at the end of core oxygen burning. Note also the difference in the mass fraction of 70 Zn between the two sequences, which we have attributed to the different neutron densities encountered during shell C-burning, as discussed above in §4.
• It is quite remarkable that 152 Gd, produced abundantly (overabundance > 18) during core helium burning in massive stars, is brought back to its solar value at the end of shell carbon burning due to the larger 152 Eu(n,γ) rate during shell carbon burning, which causes the s-process flow to bypass 152 Gd. This result is reasonable, since s-processing in thermally pulsing AGB stars produces enough 152 Gd to account for its solar abundance (Raiteri et al. 1993). A similar case also occurs for 158 Dy, which has an overabundance value larger than 10 in all models studied at the end of core helium burning (Rayet & Hashimoto 2000) but decreases to an overabundance of less than solar after shell carbon burning. Production of 158 Dy occurs in the s-process because 157 Gd, which is stable in the laboratory, can β− decay in stars. The rate of this decay is temperature and density dependent (Takahashi & Yokoi 1987). Interestingly, this rate is lower in the conditions of shell carbon burning than in core helium burning. Moreover, the neutron-capture rate of 157 Gd increases with the higher temperature and density of the carbon shell. These effects cause the s-process flow to bypass 158 Dy during carbon burning.
• The opposite of the 152 Gd case occurs for the isotope 116 Cd, which is destroyed almost completely during core helium burning, to an overabundance of less than 0.002, but is then reproduced to a value close to solar after shell carbon burning due to the higher neutron-capture rate of 115 Cd during shell carbon burning. A case similar to, but less dramatic than, 116 Cd is 96 Zr, which is destroyed during core helium burning to an overabundance ≤ 0.4 and then recovers to an overabundance ≥ 1 at the end of shell carbon burning.
• An interesting feature of the s-process in shell carbon burning is the strong enhancement of 80 Se, 86 Kr, and 87 Rb to an overabundance larger than 10 from a value of ∼1 at the end of core helium burning (see also Raiteri et al. 1993).
• Another significant feature of shell carbon burning is the overabundances of 23 Na and 27 Al. The overabundance of 23 Na is less than 10 at the end of core helium burning; it is enhanced significantly, to a value larger than 230, in all models studied here. A similar result is also obtained for the 27 Al overabundance, which rises from 1.5 to 75.
Conclusions
Our detailed study of the s-process nucleosynthesis resulting from core He-burning and shell C-burning in massive stars, on the basis of updated nuclear data for some relevant reactions, reveals many interesting points, which we summarize in the following.
• The efficiency of s-process nucleosynthesis during core He-burning does not depend on the rate of 12C(α,γ)16O, but it is sensitive to the rates of 22Ne(α,n)25Mg and 16O(n,γ)17O. When we use the updated rates of these two reactions, as described in §2, we find a significantly reduced efficiency of the s-process during core He-burning (see Table 2).
• The s-process in shell C-burning is more complicated and depends on the evolution of the massive star beyond core He-burning. This complexity can be seen from the locations, the number, and the thickness of the carbon convective shells in our models. These in turn are sensitive to the central carbon mass fraction X_12 achieved at the end of core He-burning as a result of the reaction 12C(α,γ)16O, whose rate is not yet adequately determined. If this rate leads to a relatively lower X_12, then the s-process occurs later in time, possibly even after central neon burning. Consequently, the neutron density achieved is high enough (see Fig. 8) to drive the nuclear reaction flow to the Zirconium region. This does not happen when X_12 is higher, as in our case 25K, since the s-process then occurs earlier in time (before central neon ignition) and at lower temperatures and densities, which result in a smaller neutron density. This explains why the nuclear reaction flow essentially stops in the Strontium region.
• Our calculations show that the overabundance of 70 Zn can be used as an indicator of the strength of the nuclear reaction flow through the branchings along the s-process path, especially at 69 Zn. We have also found that 87 Rb is strongly produced during shell carbon burning due to the higher neutron-capture rate of 86 Rb relative to its rate during core helium burning (Raiteri et al. 1993).
• We measure the s-processing in the core helium, shell helium, and shell carbon burning phases in massive stars with N_c = ∫ n_c(M_r) dM_r and show their relative strengths. We show that the relative s-process contribution from shell carbon burning decreases with increasing mass of the star.
• In comparing the yields of s-only nuclei of our stellar models with the solar distribution, we find that it is necessary to include the results of s-processing from shell carbon burning and to mix the yields over the full mass range of massive stars to give a reasonable fit to the solar distribution. For s-only nuclei with mass number A ≤ 87, massive stars contribute at least 40% to the solar s-only abundances. For s-only nuclei with mass number A > 90, massive stars contribute on average ∼7%, except for 152 Gd, 187 Os, and 198 Hg, for which the contributions can be 14%, 13%, and 11%, respectively.
Figure captions

Fig. 2.-Some characteristics of the s-process during core He-burning according to the present work for a 25 M⊙ star in the cases 25N and 25C. τ_n is the neutron exposure experienced by a nucleus that remained at the center of the star at all times: τ_n = ∫ n_n v_th dt, where n_n and v_th are the neutron density and thermal velocity of the neutrons at the center of the star. X/X_i is the ratio of the mass fraction to the mass fraction at the beginning of core helium burning.

Fig. 3.-Overabundance factors of heavy nuclei averaged over the convective helium burning core for models 15N, 20N, 25N, and 30N. The primary nucleosynthesis production process for each nucleus is indicated by the symbol type.

Fig. 5.-Overabundance factors of several nuclei produced as a function of time by the s-process during core helium burning and shell carbon burning in a 25 M⊙ star, model 25N. These factors are taken at a mass coordinate of 2.26 M⊙, specifying the bottom of the convective carbon-burning shell in this case. Core helium burning commences at abscissa value +6.0 and ends at +4.0. The first noticeable change occurs near the end of core helium burning, while the second change (abscissa value between 0.0 and -1.0) is the result of shell carbon burning. An exception is 87 Rb, which increases steadily between the two burning phases due to decay of 87 Sr (see text for explanation).

Fig. 6.-The same as Fig. 4 but for the case 25K in our calculations. The overabundance values are determined at mass coordinate 2.0 M⊙ in this case.

Fig. 7.-The same as Fig. 4 but for the case 25C in our calculations. The overabundance values are taken at mass coordinate 2.547 M⊙.

Fig. 8.-Several physical variables characterizing the carbon-burning shell in the sequence 25N. The panels display snapshots taken at mass coordinate M_r = 2.26 M⊙, which locates the bottom of the carbon-burning shell in this sequence of models. Note the gradual increase of the neutron density following the gradual change of temperature and density. X_0 is the mass fraction at the beginning of shell C-burning. τ_n is the neutron exposure at the shell coordinate seen by a nucleus if it stays at this position at all times. τ_Fe54 is the neutron exposure implied by the mass fraction of 54 Fe: τ_Fe54 = -ln(X_54/X_54^0)/σ_T, where σ_T is the neutron-capture cross section at T = 30 keV and X_54, X_54^0 are the final and initial mass fractions of 54 Fe, respectively. τ_Fe54 is useful as a measure of the neutron exposure averaged over the convective zones.

Fig. 9.-The same as Fig. 8 for the sequence 25K. The quantities are taken at a mass coordinate M_r = 1.38 M⊙, which is the location of the bottom of the carbon-burning shell in this model. Note the gradual increase of the neutron density following the gradual change of temperature and density.

Fig. 10.-Mass fractions of various important nuclear species normalized to solar values versus interior mass. These curves represent snapshots at the end of oxygen burning in a 25 M⊙ star for model sequences 25N and 25K.

Fig. 11.-s-process nuclear reaction flow and final abundance for the s-process during core He-burning in a one-zone nucleosynthesis calculation using the central temperature, density, and 4 He mass fraction tracks of our evolutionary sequence 25N. The thickness of an arrow shows the level of that reaction flow (i.e., Σ_n [N_A⟨σv⟩ ρ y_i y_j dt]_n) relative to the maximum reaction flow within the boundary of the chart. The largest neutron-capture flows within the range of the figure are the 56,57,58 Fe(n,γ) reactions.

Fig. 13.-Neutron exposure versus interior mass for sequences 15N, 20N, 25N, 25K, and 30N taken at the end of core oxygen burning. The curves indicate the history and the location of s-process nuclear burning in the stellar models. The baselines of the curves are due to the s-processing during core helium burning, with their highest values at the central regions of the models. The narrow peaks superimposed on the generally falling curves arise from neutron exposure during different phases of shell carbon burning, except the outermost broad peak, which is due to the neutron exposure in the helium-burning shell (see also Table 5).

Fig. 15.-The overproduction factor distribution of the IMF average of models 15N, 20N, 25N, and 30N that gives the best fit to the s-only nuclei solar distribution after adding the s-only contribution of the main component to the weak component. The χ² of the best fit is 153 with 32 degrees of freedom. The primary nucleosynthesis production process for each nucleus is indicated by the symbol type. The s-only contribution from the main component is shown with solid squares. The s-only contribution from the weak component is shown with solid diamonds, and the total s-only abundance of the weak and main components is shown with solid circles. The dashed line represents the solar abundance distribution. Note that we only include the s-only nuclei of the main component in the plot.

Fig. 16.-Similar to Fig. 15 but for model 30N at the end of core helium burning (χ² = 176, see text; panel a: top left), for the IMF average of models 15N, 20N, 25N, and 30N at the end of core helium burning (χ² = 205; panel b: top right), for model 25K at the end of core oxygen burning (χ² = 161; panel c: bottom left), and for the IMF average of models 15N, 20N, 25K, and 30N at the end of core oxygen burning (χ² = 153; panel d: bottom right). The improvement of the fits going from the s-only products of core helium burning to the products including shell carbon burning shows the importance of including s-process nucleosynthesis from shell carbon burning in fitting the solar abundance. Also, averaging the overproduction factors over the stellar mass range is necessary to fit the solar abundance distribution. The small spread of overproduction factors, X/X⊙ > 0.5 for nuclei with 60 ≤ A ≤ 90, suggests that solar abundance nuclei in this mass range are dominantly produced by s-processing in massive stars (see also Table 8).
Table 1. Rates Used for Important Reactions in the Calculations

Model   12C(α,γ)16O          16O(n,γ)17O              22Ne(α,n)25Mg
15N     NACRE                Igashira et al. (1995)   NACRE
20N     NACRE                Igashira et al. (1995)   NACRE
25C     CF88                 Beer et al. (1992)       CF88
25K     Kunz et al. (2002)   Igashira et al. (1995)   NACRE
25N     NACRE                Igashira et al. (1995)   NACRE
30N     NACRE                Igashira et al. (1995)   NACRE
TEM00   CF88                 BVW                      CF88

Notes to Table 2.-b Mean neutron exposure at 30 keV averaged over the convective core mass. Maximum of the mean neutron density. Final 22 Ne mass fraction. e Final 80 Kr production factor averaged over the maximum convective core mass.
Table 3 (continued). Overabundance factors (X/X⊙) resulting from the s-process calculation of the present work compared to Raiteri et al. (1991b,a), referred to as R(91b) and R(91a).

species   Z   End Core Helium                 C-Shell
              25N  25K  25C  R(91b)           25N  25K  25C  R(91a)
Table 4. Overabundance of some relevant isotopes in the He core and C shell of our stellar models.

Isotope   Z    Core Helium Burning                     Carbon Shell Burning
               15N      20N      25N      30N          15N      20N      25N      30N
23 Na     11     6.14     6.77     6.97     6.98       311.92   342.2    235.02   240.13
27 Al     13     0.89     0.99     1.16     1.31        83.61   110.03   103.11   126.31
37 Cl     17    58.78    69.29    72.12    72.69        57.65    62.22    61.14    58.47
40 K      19   154.13   216.89   267.58   299.45       146.56   190.70   224.21   277.73
50 Ti     22     9.78    13.63    15.84    17.42        11.55    14.81    16.89    18.16
54 Cr     24    13.03    16.15    16.84    16.76        15.09    16.74    16.45    16.15
58 Fe     26    82.03   106.33   104.89    98.54        98.23   102.28    92.78    88.76
59 Co     27    19.68    32.95    36.43    36.18        39.66    46.76    39.25    40.23
61 Ni     28    19.96    39.98    53.82    59.59        39.54    61.78    60.39    68.44
62 Ni     28     9.53    21.10    31.55    37.34        17.74    30.25    34.42    38.32
64 Ni     28    11.10    31.27    56.63    76.70        48.56    95.50   109.32   116.65
63 Cu     29    16.16    37.42    58.32    70.70        33.12    48.55    11.48    45.89
65 Cu     29    23.37    67.45   122.41   165.68        42.18    95.37    83.51   158.52
64 Zn     30     6.79    17.44    29.40    37.72         5.47    14.51     8.61    21.91
66 Zn     30     8.45    28.59    56.97    81.94        21.17    49.73    62.10    96.00
67 Zn     30    10.85    38.69    79.37   116.15        40.82    96.97   136.82   197.35
68 Zn     30     7.42    30.19    70.13   111.04        29.29    76.69   128.42   161.20
70 Zn     30     0.51     0.38     0.38     0.41         2.65     3.87    39.94     6.16
70 Ge     32     9.83    42.61   107.36   178.42        44.89   124.58   217.37   270.10
72 Ge     32     5.96    25.98    72.22   128.22        32.49    89.61   186.75   207.79
73 Ge     32     3.59    15.86    45.12    81.42        29.93    82.63   179.56   196.67
74 Ge     32     2.88    12.09    35.86    67.27        16.50    46.90   100.75   109.58
75 As     33     2.09     8.77    26.29    49.76        14.58    40.43    99.09    96.53
76 Se     34     6.01    24.58    74.75   143.74        31.08    89.80   188.51   209.01
80 Se     34     0.15     0.39     1.21     2.51         4.75     8.09    54.71    27.70
80 Kr     36    15.24    55.70   174.25   352.13        49.58   182.40    45.16   316.64
82 Kr     36     7.83    23.59    73.44   153.31        26.29    93.21   123.66   204.95
86 Kr     36     0.94     1.34     2.57     4.92         3.71     5.88    50.92    16.33
87 Rb     37     0.72     0.84     1.26     2.21        17.83    27.23    55.45    76.23
86 Sr     38    11.99    22.61    57.07   121.10        24.79    61.24    20.71   152.34
87 Sr     38    11.14    20.28    47.27    98.75        22.00    49.36    28.32   150.39
88 Sr     38     3.70     7.53    14.11    25.13         7.89    14.45    20.07    40.14
96 Zr     40     0.42     0.27     0.20     0.16         1.12     1.14    11.30     1.93
116 Cd    48     0.0002   0.001    0.001    0.002        0.55     0.30     2.67     0.47
152 Gd    64    17.65    18.81    22.76    27.04         2.38     3.74     0.09     0.91
158 Dy    66     7.52    10.90    14.91    18.71         0.35     0.14     0.04     0.64
Table 5. Properties of s-processing during Shell Carbon Burning

Model   ΔM_C-shell^a   τ_CB^b   T_9^c   ρ_5^d   Δτ_n^e    Δn_c^f   <Δτ_n>^g   n_max^h   X_22^i     O_80^j   O_88^j,k
        (M⊙)           (yrs)                    (mb^-1)            (mb^-1)    (10^10)   (×10^-3)
15N     1.38-1.98       1.07    1.07    1.81    0.20      1.04     0.038       3.9      3.10        28       7.59
20N     1.24-3.12      21.1     0.97    1.10    0.71      1.22     0.049       1.0      0.88       177      14.0
25C     1.23-4.00      12.0     1.01    0.98    0.80      1.32     0.042       1.3      1.06       911      97.4
25K     1.30-4.54      20.3     1.02    0.91    0.83      1.02     0.036       1.1      0.37       352      27.7
25N     2.26-4.94       0.46    1.15    1.14    1.10      0.47     0.038      70        0.67        45      20.1
25NM    1.21-3.50       4.12    1.08    1.30    0.40      1.14     0.079       1.8      0.81       108      44.7
30N     1.33-5.22       4.06    1.13    1.10    0.56      0.61     0.019       4.3      0.80       311      46.9

a The interior mass range of the last carbon shell.
b The duration of the shell carbon burning.
c The average temperature weighted by its neutron exposure at the bottom of the carbon shell, in 10^9 K.
d The average density weighted by its neutron exposure at the bottom of the carbon shell, in 10^5 g cm^-3.
e The increase of the neutron exposure at the bottom of the carbon shell.
f The increase of the number of neutron captures per iron seed averaged over the convective shell.
g The increase of the neutron exposure averaged over the convective shell.
h The maximum neutron density at the bottom of the carbon shell, in 10^10 cm^-3.
i The mass fraction of 22 Ne at the end of the burning shell.
j The overabundance of the 80 Kr and 88 Sr isotopes relative to their solar abundances.
k For comparison, the overabundances of 88 Sr at the end of core helium burning for models 15N, 20N, 25C, 25K, 25N, 25NM, and 30N are 3.46, 7.17, 43.0, 14.4, 13.4, 19.1, and 23.8, respectively.
Table 6. Stellar yield (X/X⊙, with mass cut at M_r = 1.5 M⊙) of some heavy isotopes at the end of core oxygen burning

Isotope   Z    15N     20N     25N     25K     30N
23 Na     11   14.60   32.61   36.20   37.61   25.01
27 Al     13    4.33   11.12   15.82   19.81   19.61
37 Cl     17    5.44    9.65   11.52   11.67   13.81
40 K      19   13.30   27.89   39.55   44.95   61.42
50 Ti     22    1.68    2.66    3.45    3.42    4.51
54 Cr     24    2.06    3.10    3.60    3.64    4.37
58 Fe     26    7.04   13.78   16.19   16.39   21.15
59 Co     27    3.04    6.09    7.39    7.44    7.79
61 Ni     28    3.44    7.75   10.33   11.27   13.90
62 Ni     28    2.05    4.17    7.52    7.32    9.65
64 Ni     28    3.60   10.07   17.11   17.39   22.81
63 Cu     29    2.62    6.32    3.42    9.43    9.61
65 Cu     29    3.61   11.14   15.31   22.05   28.26
64 Zn     30    1.38    2.61    2.52    4.31    5.29
66 Zn     30    2.31    6.01   12.21   13.97   18.29
67 Zn     30    3.32   10.40   20.88   23.59   31.57
68 Zn     30    2.78    8.36   21.39   19.91   31.93
70 Zn     30    1.05    1.22    5.74    2.00    4.92
70 Ge     32    3.77   12.85   33.15   32.93   47.42
72 Ge     32    2.96    9.34   32.52   27.23   45.71
73 Ge     32    2.55    8.43   25.08   21.78   31.35
74 Ge     32    1.95    5.25   16.12   13.13   25.41
75 As     33    1.76    4.64   16.23   11.54   13.94
76 Se     34    3.04    9.40   29.33   29.18   34.34
80 Se     34    1.20    1.61    7.92    3.21    8.46
80 Kr     36    4.14   17.93   11.29   47.72   47.35
82 Kr     36    2.76    9.72   20.82   28.41   35.43
86 Kr     36    1.24    1.45    8.10    2.72    8.21
87 Rb     37    1.69    3.28   10.55    7.58   15.93
86 Sr     38    2.47    7.26    6.33   19.91   23.10
87 Sr     38    2.22    5.74    6.53   15.79   22.89
88 Sr     38    1.40    2.36    4.40    5.18    8.33
96 Zr     40    0.98    0.98    2.18    1.02    1.30
116 Cd    48    0.92    0.86    1.12    0.85    1.02
152 Gd    64    3.66    3.15    1.78    3.34    2.44
158 Dy    66    1.06    1.02    0.99    0.99    2.07
Table 7. Measuring s-process Production of Core Helium, Shell Helium, and Shell Carbon Burning in Massive Stars

Model   N_c^a (He-core)   N_c (He-shell)   N_c (C-shell)   N_c (Total)
15N        0.75              0.24             0.90            1.90
20N        5.01              0.27             2.27            7.54
25C       19.3               0.54             4.96           24.8
25K       13.3               0.28             3.39           17.0
25N       12.9               0.32             2.48           15.7
25NM      14.5               0.36             2.28           17.1
30N       23.9               0.72             4.33           29.0

a N_c = ∫ n_c(M_r) dM_r, where n_c is the number of neutron captures per iron seed nucleus, and the integration is over the mass range (in units of M⊙) of the relevant convective burning regions and above the mass cut, 1.5 M⊙.
Table 8. s-process Contribution to Solar Abundances

Isotope   Z    Weak Comp. (25N)   Weak Comp. (25K)   Main Comp.   Total (25N)^a   Total (25K)^b
23 Na     11   1.21               1.14               ···          ···             ···
27 Al     13   0.57               0.56               ···          ···             ···
37 Cl     17   0.46               0.43               ···          ···             ···
40 K      19   1.63               1.56               ···          ···             ···
50 Ti     22   0.14               0.13               ···          ···             ···
54 Cr     24   0.15               0.14               ···          ···             ···
58 Fe     26   0.67               0.62               ···          ···             ···
59 Co     27   0.28               0.26               ···          ···             ···
61 Ni     28   0.40               0.38               ···          ···             ···
62 Ni     28   0.26               0.24               ···          ···             ···
64 Ni     28   0.60               0.56               ···          ···             ···
63 Cu     29   0.
124 Te    52   0.06               0.05               0.91         0.94            0.94
128 Xe    54   0.06               0.06               0.82         0.86            0.86
130 Xe    54   0.07               0.07               0.83         0.88            0.88
134 Ba    56   0.07               0.07               0.98         1.03            1.03
136 Ba    56   0.07               0.07               1.00         1.05            1.05
142 Nd    60   0.06               0.06               0.92         0.96            0.96
148 Sm    62   0.05               0.05               0.97         0.99            0.99
150 Sm    62   0.06               0.06               1.00         1.03            1.03
152 Gd    64   0.14               0.14               0.88         1.00            1.00
154 Gd    64   0.06               0.06               0.95         0.99            0.99
158 Dy    66   0.06               0.06               ···          ···             ···
160 Dy    66   0.07               0.06               0.87         0.92            0.92
164 Er    68   0.08               0.08               0.83         0.88            0.88
170 Yb    70   0.08               0.08               1.01         1.07            1.07
176 Lu    71   0.08               0.07               1.25         1.30            1.30
176 Hf    72   0.07               0.07               0.96         1.01            1.01
186 Os    76   0.06               0.05               0.97         1.00            1.00
187 Os    76   0.13               0.12               0.82         0.92            0.92
192 Pt    78   0.08               0.08               0.98         1.04            1.04
198 Hg    80   0.11               0.10               1.02         1.10            1.10
204 Pb    82   0.08               0.07               0.94         1.00            0.99
208 Pb    82   0.05               0.05               0.34         0.39            0.39

a The values of the main component are scaled by a factor of 0.976 in order to produce the best fit to the s-only solar abundances.
b The values of the main component are scaled by a factor of 0.974 in order to produce the best fit to the s-only solar abundances.
References

Anders, E. & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
Angulo, C., Arnould, M., Rayet, M., Descouvemont, P., Baye, D., Leclercq-Willain, C., Coc, A., Barhoumi, S., Aguer, P., Rolfs, C., Kunz, R., Hammer, J. W., Mayer, A., Paradellis, T., Kossionides, S., Chronidou, C., Spyrou, K., degl'Innocenti, S., Fiorentini, G., Ricci, B., Zavatarelli, S., Providencia, C., Wolters, H., Soares, J., Grama, C., Rahighi, J., Shotter, A., & Lamehi Rachti, M. 1999, Nuclear Physics A, 656, 3
Arlandini, C., Käppeler, F., Wisshak, K., Gallino, R., Lugaro, M., Busso, M., & Straniero, O. 1999, ApJ, 525, 886
Arnett, D. 1996, Supernovae and Nucleosynthesis: An Investigation of the History of Matter, from the Big Bang to the Present (Princeton: Princeton University Press)
Arnould, M. & Goriely, S. 2003, Phys. Rep., 384, 1
Audi, G. & Wapstra, A. H. 1995, Nuclear Physics A, 595, 409
Baraffe, I., El Eid, M. F., & Prantzos, N. 1992, A&A, 258, 357
Beer, H. 1986, in Advances in Nuclear Astrophysics, ed. E. Vangioni-Flam, J. Audouze, M. Casse, J.-P. Chieze, & J. Tran Thanh Van, 375-383
Beer, H. & Macklin, R. L. 1989, ApJ, 339, 962
Beer, H., Voss, F., & Winters, R. R. 1992, ApJS, 80, 403
Buchmann, L. 1996, ApJ, 468, L127
Burbidge, E. M., Burbidge, G. R., Fowler, W. A., & Hoyle, F. 1957, Rev. Mod. Phys., 29, 547
Busso, M., Gallino, R., & Wasserburg, G. J. 1999, ARA&A, 37, 239
Cameron, A. G. W. 1957, PASP, 69, 201
Caughlan, G. R. & Fowler, W. A. 1988, Atomic Data and Nuclear Data Tables, 40, 283
Chieffi, A., Limongi, M., & Straniero, O. 1998, ApJ, 502, 737
Clayton, D. D. 1983, Principles of Stellar Evolution and Nucleosynthesis (Chicago: University of Chicago Press)
Clayton, D. D., Fowler, W. A., Hull, T. E., & Zimmerman, B. A. 1961, Ann. Phys., 12, 331
Clayton, D. D. & Rassbach, M. E. 1967, ApJ, 148, 69
Cowley, C. R. 1995, An Introduction to Cosmochemistry (Cambridge; New York: Cambridge University Press)
de Jager, C., Nieuwenhuijzen, H., & van der Hucht, K. A. 1988, A&AS, 72, 259
Drotleff, H. W., Denker, A., Knee, H., Soine, M., Wolf, G., Hammer, J. W., Greife, U., Rolfs, C., & Trautvetter, H. P. 1993, ApJ, 414, 735
El Eid, M. F., Meyer, B. S., & The, L.-S. 2004, ApJ, 611, 396
Federman, S. R., Knauth, D. C., & Lambert, D. L. 2004, ApJ, 603, L105
Hartmann, D., Woosley, S. E., & El Eid, M. F. 1985, ApJ, 297, 837
Howard, W. M., Arnett, W. D., Clayton, D. D., & Woosley, S. E. 1972, ApJ, 175, 201
Igashira, M., Nagai, Y., Masuda, K., Ohsaki, T., & Kitazawa, H. 1995, ApJ, 441, L89
Imbriani, G., Limongi, M., Gialanella, L., Terrasi, F., Straniero, O., & Chieffi, A. 2001, ApJ, 558, 903
Jaeger, M., Kunz, R., Mayer, A., Hammer, J. W., Staudt, G., Kratz, K. L., & Pfeiffer, B. 2001, Phys. Rev. Lett., 87, 202501
Jordan, G. C., Meyer, B. S., & D'Azevedo, E. 2005, in Open Issues in Supernovae, ed. A. Mezzacappa & G. M. Fuller (Singapore: World Scientific)
Käppeler, F., Beer, H., & Wisshak, K. 1989, Reports on Progress in Physics, 52, 945
Klay, N. & Käppeler, F. 1988, Phys. Rev. C, 38, 295
Kunz, R., Fey, M., Jaeger, M., Mayer, A., Hammer, J. W., Staudt, G., Harissopulos, S., & Paradellis, T. 2002, ApJ, 567, 643
Langer, N., Arcoragi, J.-P., & Arnould, M. 1989, A&A, 210, 187
Limongi, M., Straniero, O., & Chieffi, A. 2000, ApJS, 129, 625
Loss, R. D. & Lugmair, G. W. 1990, ApJ, 360, L59
Meyer, B. S., Krishnan, T. D., & Clayton, D. D. 1996, ApJ, 462, 825
Misawa, K., Shih, C.-Y., Reese, Y., Bogard, D. D., & Nyquist, L. E. 2006, Earth and Planetary Science Letters, 246, 90
Nomoto, K. & Hashimoto, M. 1988, Phys. Rep., 163, 11
Prantzos, N., Arnould, M., & Arcoragi, J.-P. 1987, ApJ, 315, 209
Prantzos, N., Hashimoto, M., & Nomoto, K. 1990, A&A, 234, 211
Raiteri, C. M., Busso, M., Picchio, G., & Gallino, R. 1991a, ApJ, 371, 665
Raiteri, C. M., Busso, M., Picchio, G., Gallino, R., & Pulone, L. 1991b, ApJ, 367, 228
Raiteri, C. M., Gallino, R., Busso, M., Neuberger, D., & Kaeppeler, F. 1993, ApJ, 419, 207
Rauscher, T. & Thielemann, F. 2000, Atomic Data and Nuclear Data Tables, 75, 1
Rayet, M. & Hashimoto, M. 2000, A&A, 354, 740
Seeger, P. A., Fowler, W. A., & Clayton, D. D. 1965, ApJS, 11, 121
Takahashi, K. & Yokoi, K. 1987, Atomic Data and Nuclear Data Tables, 36, 375
The, L.-S., El Eid, M. F., & Meyer, B. S. 2000, ApJ, 533, 998
Travaglio, C., Gallino, R., Arnone, E., Cowan, J., Jordan, F., & Sneden, C. 2004, ApJ, 601, 864
Tuli, J. 1995, Nuclear Wallet Cards (Brookhaven: Brookhaven National Laboratory)
Ulrich, R. K. 1973, in Explosive Nucleosynthesis, 139-167
Volkening, J. & Papanastassiou, D. A. 1990, ApJ, 358, L29
Woosley, S. E. 1997, ApJ, 476, 801
Woosley, S. E., Heger, A., & Weaver, T. A. 2002, Rev. Mod. Phys., 74, 1015
Fig. 14.-The overproduction factor distribution averaged over the ejecta of models 15N, 20N, 25N, and 30N.
| [] |
[
"Gauge Invariance and Finite Counterterms in Chiral Gauge Theories",
"Gauge Invariance and Finite Counterterms in Chiral Gauge Theories",
"Gauge Invariance and Finite Counterterms in Chiral Gauge Theories",
"Gauge Invariance and Finite Counterterms in Chiral Gauge Theories"
] | [
"Claudia Cornella \nPRISMA + Cluster of Excellence & MITP\nJohannes Gutenberg University\n55099MainzGermany\n",
"Ferruccio Feruglio \nINFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly\n",
"Luca Vecchi \nINFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly\n",
"Claudia Cornella \nPRISMA + Cluster of Excellence & MITP\nJohannes Gutenberg University\n55099MainzGermany\n",
"Ferruccio Feruglio \nINFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly\n",
"Luca Vecchi \nINFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly\n"
] | [
"PRISMA + Cluster of Excellence & MITP\nJohannes Gutenberg University\n55099MainzGermany",
"INFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly",
"INFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly",
"PRISMA + Cluster of Excellence & MITP\nJohannes Gutenberg University\n55099MainzGermany",
"INFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly",
"INFN\nSezione di Padova\nVia Marzolo 8I-35131PaduaItaly"
] | [] | We derive the finite one-loop counterterm required to restore the Ward Identities broken by the regularization scheme in chiral gauge theories. Our result is an analytic expression applicable to a wide class of regularizations satisfying a few general properties. We adopt the background field method, which ensures background gauge invariance in the quantized theory, and focus on renormalizable chiral theories with arbitrary gauge group and fermions in general representations. Our approach can be extended to theories involving scalars, such as the Standard Model, or to non-renormalizable theories, such as the SMEFT. As a concrete application, we work out the finite counterterm at one loop in the Standard Model, within dimensional regularization and the Breitenlohner-Maison-'t Hooft-Veltman prescription for γ 5 . | 10.1007/jhep02(2023)244 | [
"https://export.arxiv.org/pdf/2205.10381v1.pdf"
] | 248,986,421 | 2205.10381 | d53f915c920ab26c79f93bf00c0ab41ac8331c8a |
Gauge Invariance and Finite Counterterms in Chiral Gauge Theories
20 May 2022
Claudia Cornella
PRISMA + Cluster of Excellence & MITP
Johannes Gutenberg University
55099MainzGermany
Ferruccio Feruglio
INFN
Sezione di Padova
Via Marzolo 8I-35131PaduaItaly
Luca Vecchi
INFN
Sezione di Padova
Via Marzolo 8I-35131PaduaItaly
Gauge Invariance and Finite Counterterms in Chiral Gauge Theories
20 May 2022
We derive the finite one-loop counterterm required to restore the Ward Identities broken by the regularization scheme in chiral gauge theories. Our result is an analytic expression applicable to a wide class of regularizations satisfying a few general properties. We adopt the background field method, which ensures background gauge invariance in the quantized theory, and focus on renormalizable chiral theories with arbitrary gauge group and fermions in general representations. Our approach can be extended to theories involving scalars, such as the Standard Model, or to non-renormalizable theories, such as the SMEFT. As a concrete application, we work out the finite counterterm at one loop in the Standard Model, within dimensional regularization and the Breitenlohner-Maison-'t Hooft-Veltman prescription for γ 5 .
Introduction
Chiral gauge theories play a key role in the description of fundamental interactions. For example, the Standard Model (SM) of strong and electroweak interactions exhibits a chiral fermion content with respect to the gauge group SU(3)×SU(2)×U (1). While there are good reasons to believe that the SM is incomplete in several respects, the absence of confirmed signals of new physics suggests charting possible SM extensions in terms of effective chiral gauge theories, such as the SM effective field theory (SMEFT).
Quantization and renormalization of chiral gauge theories, defined by their symmetry and field content, are well-understood today. In particular, the framework of algebraic renormalization [1][2][3][4][5][6][7][8][9], relying on general properties of perturbative quantum field theories such as the Power Counting Theorem [10,11] and the Quantum Action Principle [12][13][14][15], allows to show how symmetries (local or rigid) are preserved 1 in perturbation theory. The great advantage of algebraic renormalization is its independence from the particular regularization used.
A regularization scheme should nonetheless be specified for practical computational purposes. The most convenient choice is provided by schemes preserving as many symmetries as possible of the underlying theory. However, the very existence of gauge anomalies prevents adopting a scheme where chiral gauge symmetries are maintained. Even when the field content is anomaly-free, any consistent regulator leads to a breaking of gauge invariance, which manifests itself in the amplitudes evaluated in perturbation theory. 2 Such amplitudes are required to satisfy the Ward Identities (WI) arising from the gauge symmetry of the theory. However, these identities are spoiled by contributions introduced by the regularization procedure. To remove the unwanted terms, different approaches are possible. The most elementary one is to disregard the undesired contributions, thus enforcing the WI by hand. This procedure has the disadvantage of requiring the identification of the correct set of WI amplitude by amplitude. Moreover, since the resulting subtraction is defined up to gauge-invariant contributions, independently for each process, ambiguities may arise when comparing different processes.
In a more comprehensive approach we can analyze (and repair) the breaking of gauge invariance induced by the regularization procedure directly at the level of the effective action, the generating functional of the one-particle irreducible (1PI) Green's functions, thus effectively handling all possible amplitudes at once [8,9,[19][20][21][22][23]. Owing to symmetries, the effective action is bound to satisfy WI in the form of functional identities. 3 These identities are violated in perturbation theory by terms that are severely constrained. In particular, the Quantum Action Principle requires such terms to be finite local polynomials in the fields and their derivatives, of bounded dimensionality, order by order in perturbation theory. Moreover, if the theory is anomaly free, they are trivial solutions to the Wess-Zumino (WZ) consistency conditions [24]. As a consequence, they can be expressed as gauge (or BRST) variations of integrated local polynomials that provide viable counterterms to recover the WI.
Each regularization scheme, combined with a subtraction procedure to remove divergences, requires its own set of WI-restoring finite counterterms. In fact, the above strategy has already been pursued in the context of dimensionally regularized (DR) [25,26] chiral gauge theories.
The specific cases that have been analyzed feature charged fermions of a single chirality [20][21][22][23]. To the best of our knowledge, however, a general procedure allowing to identify the whole set of counterterms, independently from the adopted regularization scheme and for arbitrary chiral fermion charges and general (non-simple) gauge group, has not yet been formulated. In this work we discuss this general problem and show how it can be solved in the one-loop approximation. Explicit general expressions for WI-restoring counterterms, adaptable to a wide class of chosen regularization schemes, can be of great utility for automated computations, such as those carried out today within the SMEFT [27][28][29]. As described in Section 2, in this paper we deal with a renormalizable chiral gauge theory depending on gauge bosons and fermions only, though there is no obstacle in extending our method to theories involving scalars, such as the SM, or to nonrenormalizable theories, such as the SMEFT. Indeed we consider this work as the first step of an approach meant to cover a wider range of applications. We assume an arbitrary regularization scheme, required to satisfy a few very general requirements, such as the Quantum Action Principle, Lorentz invariance, and gauge invariance in the limit where the theory is vector-like. Our treatment of fermions is completely general: we include fermions of both chiralities, which can transform under arbitrary representations of the gauge group, the latter being associated with a general (non-simple) compact Lie algebra. Only physical fields (apart from ghosts) are present. In this sense our approach is minimal.
We find it useful to quantize the theory within the background field method and to adopt the background field gauge fixing [30][31][32][33][34]. The latter preserves gauge invariance at the level of background fields, up to anomalies and regularization effects. The effective action is therefore bound to be a gauge-invariant functional of the background fields. As a consequence of the Quantum Action Principle, the gauge variation of the one-loop effective action (evaluated in perturbation theory within a given regularization) is a four-dimensional, Lorentz-invariant, finite local polynomial in the fields and their derivatives, that vanishes when the theory is vector-like. Moreover, by treating CP and P as spurious symmetries 4 , the gauge variation of the one-loop effective action turns out to be P-even and CP-odd.
It is then straightforward to expand such gauge variation in a basis of local operators with the desired symmetry properties. This expansion is characterized by a redundant set of coefficients. We can lift this redundancy by requiring the gauge variation of the one-loop effective action to satisfy the WZ consistency conditions, which hold for any gauge theory, whether anomalous or not. This request translates into a set of linear equations relating the coefficients of the expansion and reduces the initial set of coefficients to an irreducible one. As shown in Section 3, these first steps allow to parametrize in the most general and nonredundant way the gauge variation of the effective action at the one-loop order, independently from the adopted regularization. Similarly, we can build the most general parametrization of the one-loop finite counterterm necessary to restore the WI as a linear combination of integrated local operators with the correct symmetry properties. We finally require that, up to gauge anomalies, the gauge variation of the finite counterterm reproduces the gauge variation of the effective action. This allows to uniquely determine the parameters describing the counterterm in terms of those entering the variation of the effective action. As expected, we find that restoring the WI by means of a finite counterterm is always possible as long as the fermion field content is non-anomalous. We stress that, for non-anomalous theories, our result unambiguously determines the counterterm that reestablishes gauge invariance, for the entire class of regularizations satisfying the properties outlined above.
Nowadays the most widely used regularization in practical calculations is dimensional regularization. Within DR, only the Breitenlohner-Maison/'t Hooft-Veltman (BMHV) scheme [35] has been shown to provide a consistent treatment of γ 5 at all orders in perturbation theory. In Section 4 we derive explicit expressions for the gauge variation of the effective action and the necessary counterterm at one loop, using DR and the BMHV scheme, which has already been implemented in tools for automated computations, such as FeynCalc or Tracer. Our formalism allows to determine the full set of counterterms needed to cast one-loop results in a fully gauge-invariant form. The calculation is performed via path integral techniques and checked diagrammatically. The outcome is of course consistent with the general results of Section 3.
A paradigmatic example of chiral gauge theory is the Standard Model. Indeed, to illustrate our results, in Section 5 we work out the counterterms needed at one loop using DR and the BMHV scheme, in the limit of vanishing Yukawa couplings.
This paper is structured as follows. In Section 2 we recall the classical and effective action for a chiral gauge theory and discuss three important ingredients of algebraic renormalization, namely the Ward Identities, the Wess-Zumino conditions, and the Quantum Action principle. In Section 3 we put these to use to determine the gauge variation of the effective action and the WI-restoring counterterm at the one-loop order for any regularization scheme respecting the Quantum Action Principle, Lorentz invariance, hermiticity of the action, vectorial gauge symmetry, and P and CP. Section 4 is dedicated to deriving the gauge variation of the effective action and the WI-restoring counterterm at one loop for the specific case of Dimensional Regularization. Finally, in Section 5 we apply our results to the SM. In the Appendices, we provide some auxiliary expressions used in Sections 3 and 4. Appendix A contains results relevant to the general solution of the WZ conditions of Section 3. Appendix B provides details about the computation in Section 4.
The theory
We consider a theory based on a compact gauge group $G$, with gauge fields $A^a_\mu$ ($a = 1 \ldots \dim(G)$), and fully antisymmetric structure constants $f^{abc}$. In general the gauge group is the direct product of $N_G$ simple groups, $G = \prod_{\mathcal{G}} G_{\mathcal{G}}$ (with $\mathcal{G} = 1, \ldots, N_G$), possibly including U(1) factors. In this case the index $a$ runs over the adjoint representation of each simple group, and similarly $f^{abc}$ is the direct sum of the structure constants $f^{abc}_{\mathcal{G}}$ of each $G_{\mathcal{G}}$. Throughout sections 1, 2 and 3, Lorentz indices run from 0 to 3 and are denoted by Greek letters $\mu$, $\nu$, etc. In Section 4, when using DR to exemplify our results, this notation will be slightly modified.
The matter content consists of two sets of massless chiral fermions, $f_L$ and $f_R$, transforming under $G$ according to representations characterized by hermitian generators $T^a_L$ and $T^a_R$:

$$[T^a_X, T^b_X] = i f^{abc}\, T^c_X\,, \qquad X = L, R\,. \quad (2.1)$$
We are interested in chiral gauge theories, where $T^a_L$ and $T^a_R$ describe inequivalent representations. An example is provided by theories where $T^a_{L(R)} = 0$ and $T^a_{R(L)}$ is nontrivial, as in the case of the SU(2) component of the Standard Model gauge group. Yet, our formalism encompasses all possible (chiral as well as vector-like) gauge theories with fermions.
In general, the representations described by $T^a_L$ and $T^a_R$ are reducible and their decomposition in irreducible representations contains trivial components. We exploit this possibility to describe the generators $T^a_L$ and $T^a_R$ using matrices of the same dimension. As a concrete example, consider hypercharge in the Standard Model. Its action on left-handed fermions can be described via a single generator acting on eight left-handed spinors per generation (six in the quark sector and two in the lepton sector). Its right-handed analogue instead acts non-trivially only on seven right-handed spinors per generation (six quarks and one lepton). Nevertheless, we can formally extend the matrix describing the right-handed generator by one trivial row and column per generation, to match the dimensionality of the left-handed one. Similarly, the multiplet $f_R$ may be extended to include a dummy degree of freedom, a right-handed neutrino, which however does not play any role in our discussion and can be safely set to zero.
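To make the padding construction concrete, here is a minimal sketch (our addition, not part of the paper) of the hypercharge example in Python, assuming the normalization $Q = T_3 + Y$; the matrices and the dummy $\nu_R$ entry are purely illustrative.

```python
# Illustrative sketch: hypercharge generators of equal dimension for one SM
# generation, with the right-handed multiplet padded by a dummy nu_R entry.
# Convention assumed here: Q = T3 + Y, so Y(q_L) = 1/6, Y(l_L) = -1/2, etc.
import numpy as np

# Left-handed: quark doublet (3 colors x 2 components, Y = 1/6), lepton doublet (Y = -1/2)
Y_L = np.diag([1/6]*6 + [-1/2]*2)

# Right-handed: u_R (Y = 2/3, 3 colors), d_R (Y = -1/3, 3 colors), e_R (Y = -1),
# plus one trivial row/column for the dummy nu_R (Y = 0)
Y_R = np.diag([2/3]*3 + [-1/3]*3 + [-1] + [0])

assert Y_L.shape == Y_R.shape  # equal dimension, as required in the text
```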
While our focus is on theories with matter and gauge fields, fundamental scalars can be discussed along similar lines. This extension is left for future work.
Classical action before regularization
The most general renormalizable bare action describing the dynamics of a set of fermionic fields $f$ charged under the gauge group $G$ is:

$$S[A, f_X, \bar f_X] = \int d^4x\, (\mathcal{L}_{YM} + \mathcal{L}_{Fermions})\,, \quad (2.2)$$

where $X = L, R$, $\mathcal{L}_{YM}$ is the usual Yang-Mills Lagrangian, and $\mathcal{L}_{Fermions}$ includes kinetic terms and gauge interactions of the fermions. Since we allow the gauge group to be the direct product of simple groups, $G = \prod_{\mathcal{G}} G_{\mathcal{G}}$, the kinetic term of the gauge fields is controlled by a diagonal matrix $G_{ab} = \sum_{\mathcal{G}} \delta^{ab}_{\mathcal{G}}/g^2_{\mathcal{G}}$, where $g_{\mathcal{G}}$ and $\delta^{ab}_{\mathcal{G}}$ are the gauge coupling and the identity in the adjoint representation of $G_{\mathcal{G}}$, respectively. Explicitly, we write:

$$\mathcal{L}_{YM} = -\frac{1}{4}\, G_{ab}\, F^a_{\mu\nu} F^{b\,\mu\nu}\,, \quad (2.3)$$
$$\mathcal{L}_{Fermions} = \bar f_L\, i\slashed{D}\, f_L + \bar f_R\, i\slashed{D}\, f_R\,, \quad (2.4)$$
where the left- and right-handed fermions are defined as

$$f_L = P_L f\,, \qquad f_R = P_R f\,, \quad (2.5)$$

with $P_L = \frac{1}{2}(1-\gamma_5)$ and $P_R = \frac{1}{2}(1+\gamma_5)$ the hermitian chirality projectors, satisfying $P^2_{L(R)} = P_{L(R)}$ and $P_L + P_R = 1$. The field strength of the gauge fields and the fermion covariant derivatives are defined for $X = L, R$ as

$$F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu - f^{abc} A^b_\mu A^c_\nu\,, \qquad D_\mu f_X = (\partial_\mu + i A^a_\mu T^a_X) f_X\,, \quad (2.6)$$
and $\slashed{D} = \gamma^\mu D_\mu$. 5 The bare action is left invariant by the continuous local gauge transformations:

$$\delta_\alpha A^{a\mu} = \partial^\mu \alpha^a + f^{abc}\alpha^b A^{c\mu}\,, \qquad \delta_\alpha f_X = -i\alpha^a T^a_X f_X\,, \quad (2.7)$$

$\alpha^a$ being an infinitesimal gauge parameter. Given an arbitrary functional $F[A, f_X, \bar f_X]$ of the fermions and the gauge fields, we can write its gauge variation as

$$\delta_\alpha F[A, f_X, \bar f_X] \equiv \int d^4x\, \alpha^a(x)\, L_a(x)\, F[A, f_X, \bar f_X]\,, \quad (2.8)$$

where the differential operator $L_a$ is

$$L_a(x) = -\partial_\mu \frac{\delta}{\delta A_{a\mu}(x)} + f^{abc} A_{b\mu}(x)\frac{\delta}{\delta A_{c\mu}(x)} + \sum_{X=L,R}\left(-i\,\frac{\overleftarrow{\delta}}{\delta f_X(x)}\, T^a_X f_X(x) + i \bar f_X(x)\, T^a_X \frac{\delta}{\delta \bar f_X(x)}\right). \quad (2.9)$$

With this notation, the gauge invariance of the action, and similarly of any gauge-invariant functional, reads $\delta_\alpha S[A, f_X, \bar f_X] = 0$. Because this holds for any value of the gauge parameters, it is equivalent to the local relation

$$L_a(x)\, S[A, f_X, \bar f_X] = 0\,. \quad (2.10)$$

In the following, we will refer to the identity $L_a(x) F[A, f_X, \bar f_X] = 0$ as the Ward Identity for the functional $F[A, f_X, \bar f_X]$.
From the algebra (2.1) of the gauge group, it follows that any functional $F[A, f_X, \bar f_X]$ of the fields and their derivatives satisfies the Wess-Zumino consistency conditions [24]:

$$[L_a(y), L_b(x)]\, F[A, f_X, \bar f_X] = -\delta^{(4)}(x-y)\, f^{abc}\, L_c(x)\, F[A, f_X, \bar f_X]\,. \quad (2.11)$$

If $F[A, f_X, \bar f_X]$ is gauge invariant, both sides of Eq. (2.11) vanish and the consistency conditions are trivially satisfied.

The bare action is formally invariant under CP, provided the generators are treated as spurions transforming appropriately [39]. On the other hand, P is not a symmetry unless the theory is vector-like. Nevertheless, we can always define a generalized, spurious P symmetry that leaves the bare action invariant. Such a generalized P formally acts on the fields as ordinary P and on the generators, viewed as spurions, in an appropriate way. The resulting combined action reproduces ordinary P in any P-invariant theory, but is formally conserved even in theories that do not respect P, like chiral theories. Actually, in order to fully exploit the selection rules associated to both discrete symmetries we find it convenient to define both CP and P as spurious transformations, acting on the gauge and fermion fields as

$$x^\mu \xrightarrow{CP} x_\mu\,, \qquad x^\mu \xrightarrow{P} x^\mu_P = x_\mu\,, \qquad \partial^\mu \xrightarrow{CP} \partial_\mu\,, \qquad \partial^\mu \xrightarrow{P} \partial_\mu\,,$$
$$A^{a\mu}(x) \xrightarrow{CP} -A^a_\mu(x_P)\,, \qquad A^{a\mu}(x) \xrightarrow{P} A^a_\mu(x_P)\,,$$
$$f_{L,R}(x) \xrightarrow{CP} C f^*_{L,R}(x_P)\,, \qquad f_{L,R}(x) \xrightarrow{P} \gamma^0 f_{R,L}(x_P)\,, \quad (2.12)$$

where $C$ denotes the well-known charge conjugation matrix, and on the generators as

$$T^a_{L(R)} \xrightarrow{CP} T^{a\,T}_{L(R)}\,, \qquad T^a_{L(R)} \xrightarrow{P} T^a_{R(L)}\,. \quad (2.13)$$

We emphasize that the latter relation implies that the structure constants transform as

$$f^{abc} \xrightarrow{CP} -f^{abc}\,, \qquad f^{abc} \xrightarrow{P} f^{abc}\,. \quad (2.14)$$

The transformations in Eqs. (2.12)-(2.13) are formally symmetries of any theory defined by a classical action of the type (2.2). This restricts the structure of the counterterms needed to enforce the WI of the theory, provided one adopts a regularization respecting these symmetries. As a final remark, we note that the operator $L_a$ is CP-odd and P-even. Indeed, the CP and P transformations of Eq. (2.7), together with Eq. (2.13), demand that $\alpha^a$ be formally treated as a CP-odd and P-even spurion. Thus, Eq. (2.8) implies that $L_a$ is CP-odd and P-even.
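As a concrete illustration of Eq. (2.14) (our addition, not part of the paper), one can check numerically that transposed generators close on the same algebra with structure constants of flipped sign, consistent with the spurious CP transformation $T^a \to T^{a\,T}$; the SU(2) example below is a sketch.

```python
# Minimal numerical check: if [T^a, T^b] = i f^{abc} T^c, then the transposed
# generators obey [T^{aT}, T^{bT}] = i (-f^{abc}) T^{cT}, i.e. f -> -f under CP.
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
T = [m / 2 for m in s]                      # SU(2): f^{abc} = eps^{abc}
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1

for a in range(3):
    for b in range(3):
        comm  = T[a] @ T[b] - T[b] @ T[a]
        commT = T[a].T @ T[b].T - T[b].T @ T[a].T
        rhs   = 1j * sum(eps[a, b, c] * T[c]    for c in range(3))
        rhsT  = 1j * sum(-eps[a, b, c] * T[c].T for c in range(3))
        assert np.allclose(comm, rhs) and np.allclose(commT, rhsT)
```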
Regularization: the need of local counterterms
Going beyond the tree level, a regularization is needed. It is well known that in chiral gauge theories there is no consistent regularization procedure capable of preserving gauge invariance at the quantum level. This fact is at the origin of physical anomalies [17,18]. The absence of gauge anomalies is guaranteed if the fermion content of the theory satisfies the well-known condition [40]:

$$D^{abc} = \mathrm{tr}\!\left(T^a_L \{T^b_L, T^c_L\}\right) - \mathrm{tr}\!\left(T^a_R \{T^b_R, T^c_R\}\right) = 0\,. \quad (2.15)$$
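For a single U(1) factor, Eq. (2.15) reduces to the cubic sum rule $\sum Y_L^3 - \sum Y_R^3 = 0$. The following sketch (our addition) verifies it for the hypercharge assignment of one SM generation introduced above.

```python
# Sketch: checking condition (2.15) for U(1)_Y, where tr(Y{Y,Y}) = 2 tr(Y^3),
# so D vanishes iff sum(Y^3) over left-handed fermions equals that over
# right-handed ones. The SM assignment of one generation passes the test.
import numpy as np

Y_L = np.diag([1/6]*6 + [-1/2]*2)               # q_L (3 colors x 2), l_L
Y_R = np.diag([2/3]*3 + [-1/3]*3 + [-1] + [0])  # u_R, d_R, e_R, dummy nu_R

D = np.trace(Y_L @ (Y_L @ Y_L + Y_L @ Y_L)) - np.trace(Y_R @ (Y_R @ Y_R + Y_R @ Y_R))
assert abs(D) < 1e-12   # the SM hypercharges are anomaly free
```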
Yet, even if this condition holds, amplitudes computed in perturbation theory do not generally satisfy the WI. This is because the regularization procedure introduces scheme-dependent contributions to amplitudes beyond those removed by Eq. (2.15). Such sources of spurious, unphysical breaking of gauge invariance can always be removed by adding appropriate local counterterms to the classical action in Eq. (2.2). 6 Our analysis provides a general characterization of the counterterms required at the one-loop level in a chiral gauge theory, which applies to a large class of regularization schemes. Explicit expressions for such counterterms are then derived using dimensional regularization (DR). Let us explain our plan in some detail. The quantization of a gauge theory requires the introduction of a gauge-fixing term and a Faddeev-Popov term. Independently from the chosen regularization, these terms necessarily break the original gauge invariance, leaving the classical action invariant under BRST transformations. As a result, the effective 1PI action, as well as all Green's functions of the theory, no longer obey linear Ward Identities of the type shown in Eq. (2.10), but rather non-linear Slavnov-Taylor identities. Whenever a non-symmetric regulator is adopted, the identification of the counterterm that must be added to the bare action in order to restore the ST identities is unavoidably complicated by the non-linearity of such identities, as well as by the involved structure of the BRST symmetry [20,22].
Here we follow a different path and quantize the theory with the background field method [30][31][32][33][34]. Concretely, within the background field method the 1PI effective action is obtained by re-writing any field, including ghosts, as the sum of a classical background $\phi$ plus a quantum fluctuation $\tilde\phi$, and then integrating over the quantum fluctuations including only one-particle irreducible diagrams. In particular, the regularized 1PI effective action can be written as

$$e^{i\Gamma^{\rm reg}[\phi]} = \int_{\rm 1PI} \mathcal{D}\tilde\phi\; e^{iS^{\rm reg}_{\rm full}[\phi+\tilde\phi]}\,, \quad (2.16)$$

where $S^{\rm reg}_{\rm full} \equiv S^{\rm reg} + S^{\rm reg}_{\rm g.f.} + S^{\rm reg}_{\rm ghost}$ is the sum of the regularized action, an appropriate gauge-fixing term, and the associated ghost action. The gauge-fixing Lagrangian is chosen to be

$$\mathcal{L}_{\rm g.f.}[\phi+\tilde\phi] = -\frac{1}{2\xi}\, f^a f^a\,, \quad (2.17)$$

where

$$f^a = \partial_\mu \tilde A^\mu_a - f^{abc} A_{b\mu} \tilde A^\mu_c\,. \quad (2.18)$$

The gauge-fixing action $S^{\rm reg}_{\rm g.f.}$ serves its standard purpose of breaking gauge invariance. In particular, it is not invariant under gauge transformations of the quantum field. Yet $S^{\rm reg}_{\rm g.f.}$ (and, as a consequence, $S^{\rm reg}_{\rm ghost}$) is manifestly invariant under background gauge transformations. The latter act as a standard gauge transformation on the background $A^\mu_a$, and as a linear redefinition of the integration variable $\tilde A^\mu_a$. For all fields transforming linearly under the original gauge symmetry, both the quantum fluctuation and the classical background transform exactly as the original field, and the distinction between standard and background transformations is not relevant. As mentioned above, the invariance of the gauge-fixed action under background gauge transformations is the main advantage of the background field method. If one introduces sources for the quantum fields only, all generating functionals are also manifestly background-gauge invariant and satisfy linear Ward Identities as in Eq. (2.10), up to the regularization-dependent effects mentioned earlier. In particular, the background gauge symmetry, along with (2.15), guarantees that the unique source of violation of the WI is the regularization procedure. The linearity of such relations significantly simplifies the search for the WI-restoring counterterms because, as opposed to the non-linear Slavnov-Taylor equations, the linear WI relate only Green's functions of the same order in perturbation theory [8].
In our treatment we adopt a regularization scheme preserving the vectorial gauge transformations, four-dimensional Lorentz invariance, the generalized P and CP symmetries defined in Eqs. (2.12) and (2.13), and the Quantum Action Principle [5,12,13,16]. As a starting point, we assume that a consistent subtraction procedure is defined, making it possible to evaluate the renormalized functional $\Gamma[\phi]$ from $\Gamma^{\rm reg}[\phi]$. At this stage we do not need to specify either how this subtraction is performed or which renormalization conditions are imposed; we will do so in Section 4, when performing explicit calculations within DR. Here we simply assume that this subtraction renders $\Gamma[\phi]$ finite order by order in perturbation theory. As we now show, the proof that finite counterterms can be added such that $\Gamma[\phi]$ satisfies the WI proceeds by induction. Suppose that we have successfully identified an action $\Gamma[\phi]$ that satisfies the WI of the theory up to loop order $n-1$ (included):

$$L_a(x)\,\Gamma[\phi]\big|^{(k)} = 0\,, \qquad k \le n-1\,, \quad (2.19)$$

where $\Gamma[\phi]|^{(k)}$ stands for the $k$-th order in the loop expansion of $\Gamma[\phi]$. Although in general the WI will be broken at order $n$, the Quantum Action Principle guarantees that

$$L_a(x)\,\Gamma[\phi]\big|^{(n)} = (\Delta_a \cdot \Gamma)(x) = \Delta_a(x)\big|^{(n)} + O(\hbar^{n+1})\,. \quad (2.20)$$

Here $\Delta_a \cdot \Gamma$ is the generating functional of the amputated 1PI Green's functions with one insertion of a local polynomial in the fields, $\Delta_a|^{(n)}$, formally of order $\hbar^n$. 7 In the rest of the paper, the expressions $L_a(x)\Gamma|^{(n)}$ and $\Delta_a|^{(n)}$ will be used interchangeably. By power counting it follows that $\Delta_a|^{(n)}$ is a dimension-four polynomial. According to our assumptions, it should be CP-odd and P-invariant as well as invariant under the four-dimensional Lorentz symmetry, and should vanish when $T^a_L = T^a_R$. Moreover $\Delta_a|^{(n)}$ must satisfy the WZ consistency conditions (2.11):

$$L_a(y)\, \Delta_b(x)\big|^{(n)} - L_b(x)\, \Delta_a(y)\big|^{(n)} = -\delta^{(4)}(x-y)\, f^{abc}\, \Delta_c(x)\big|^{(n)}\,. \quad (2.21)$$

Theories complying with the criterion (2.15) have no anomalies, and the most general solution of Eq. (2.21) at order $\hbar^n$ is:

$$\Delta_a(x)\big|^{(n)} = -L_a(x)\, S_{\rm ct}[\phi]\big|^{(n)}\,, \quad (2.22)$$

where $S_{\rm ct}[\phi]|^{(n)} = \int d^4y\, \mathcal{L}_{\rm ct}(y)|^{(n)}$ is an integrated local polynomial of order $\hbar^n$ in the fields and their derivatives, invariant under the four-dimensional Lorentz group, CP and P, and the vectorial gauge symmetry. We can next define:

$$\Gamma_{\rm inv}[\phi]\big|^{(n)} = \Gamma[\phi]\big|^{(n)} + S_{\rm ct}[\phi]\big|^{(n)}\,, \quad (2.23)$$

obtaining:

$$L_a(x)\,\Gamma_{\rm inv}[\phi]\big|^{(n)} = O(\hbar^{n+1})\,. \quad (2.24)$$

7 In the last step of Eq. (2.20) we used the fact that at tree-level the only non-vanishing correlator functions involving $\Delta_a|^{(n)}$ are those that contain precisely the fields appearing in $\Delta_a|^{(n)}$, and the corresponding contribution to the one-particle irreducible action reads $\Delta_a$, where by a slight abuse of notation the latter is now interpreted as being a functional of the background fields.
The spurious noninvariant contributions induced by the regularization procedure are now removed, and gauge invariance is restored at order $O(\hbar^n)$. After adding the $(n+1)$-loop contributions and implementing the subtraction procedure, we get a new functional $\Gamma[\phi]|^{(n+1)}$ and we can repeat the above steps to enforce the WI at $O(\hbar^{n+1})$. One of our main results is the determination of the counterterm within the DR scheme at the one-loop order. We will see that DR can be made to comply with our symmetry requirements; in particular, it satisfies the Quantum Action Principle [35]. It is important to stress that the explicit form of the gauge variation of the effective action, as well as the counterterm, does depend on the regularization scheme. Yet, as we show in the following section, several important features can be deduced solely from the general considerations presented in the previous paragraph and apply to all regularization schemes that preserve Lorentz invariance, hermiticity of the action, vectorial gauge transformations as well as generalized P and CP. Explicit results for DR will be presented in Section 4.
One-loop analysis for generic regularization schemes
As discussed above, whenever the theory is anomaly free the WI can be restored order by order by adding a counterterm to the classical action. The goal of this section is to determine the structure of the gauge variation of the effective action and the counterterm at the one-loop order, i.e. $\Delta_a|^{(1)}$ and $S_{\rm ct}[\phi]|^{(1)}$, for any regularization scheme respecting: i) the Quantum Action Principle, ii) four-dimensional Lorentz invariance and hermiticity of the action, iii) the vectorial gauge symmetry, iv) the generalized P and CP symmetries of Eqs. (2.12), (2.13).

As we show in the following, these rather general hypotheses significantly constrain the form of $\Delta_a|^{(1)}$ and $S_{\rm ct}[\phi]|^{(1)}$.
A basis for the gauge variation and the counterterm
We start by providing a convenient representation for both $\Delta_a|^{(1)}$ and $S_{\rm ct}[\phi]|^{(1)}$. As discussed above, the former is a finite local polynomial of dimension four in the gauge and fermionic fields, and their derivatives. 8 We can thus expand it in a basis of monomials involving only gauge and fermion fields:

$$\Delta_a(x)\big|^{(1)} = \sum_{k=0}^{14} C^k_{aA}\, I^k_A(x)\,. \quad (3.1)$$

Table 3.1: Basis of monomials $I^k_A$ with their CP and P properties (for the fermionic monomials the CP- and P-transformed operators are displayed, with $\bar X$ denoting the opposite chirality, $\bar L = R$, $\bar R = L$):

  $I^0_a = \Box\,\partial_\mu A^{a\mu}$  (CP: $-$, P: $+$)
  $I^1_{ab} = \epsilon^{\mu\nu\alpha\beta}(\partial_\alpha A_{a\mu})(\partial_\beta A_{b\nu})$  (CP: $-$, P: $-$)
  $I^2_{ab} = A_{a\mu}(\partial^\mu \partial^\nu - g^{\mu\nu}\Box) A_{b\nu}$  (CP: $+$, P: $+$)
  $I^3_{ab} = A_{a\mu}\,\Box A^\mu_b$  (CP: $+$, P: $+$)
  $I^4_{ab} = (\partial_\nu A_{a\mu})(\partial^\nu A^\mu_b)$  (CP: $+$, P: $+$)
  $I^5_{ab} = (\partial_\nu A_{a\mu})(\partial^\mu A^\nu_b)$  (CP: $+$, P: $+$)
  $I^6_{ab} = (\partial_\mu A^{a\mu})(\partial_\nu A^{b\nu})$  (CP: $+$, P: $+$)
  $I^7_{abd} = (\partial_\mu A^\mu_a)\, A_{b\nu} A^\nu_d$  (CP: $-$, P: $+$)
  $I^8_{abd} = (\partial_\mu A_{a\nu})\, A^\mu_b A^\nu_d$  (CP: $-$, P: $+$)
  $I^9_{abd} = \epsilon^{\mu\nu\alpha\beta}(\partial_\beta A_{a\mu})\, A_{b\nu} A_{d\alpha}$  (CP: $+$, P: $-$)
  $I^{10}_{abde} = A_{a\mu} A^\mu_b\, A_{d\nu} A^\nu_e$  (CP: $+$, P: $+$)
  $I^{11}_{abde} = \epsilon^{\mu\nu\rho\sigma} A_{a\mu} A_{b\nu} A_{d\rho} A_{e\sigma}$  (CP: $-$, P: $-$)
  $I^{12}_{Xij} = \bar f_{Xi}\,\overrightarrow{\slashed\partial} f_{Xj}$  (CP: $-\bar f_{Xj}\,\overleftarrow{\slashed\partial} f_{Xi}$, P: $\bar f_{\bar Xi}\,\overrightarrow{\slashed\partial} f_{\bar Xj}$)
  $I^{13}_{Xij} = \bar f_{Xi}\,\overleftarrow{\slashed\partial} f_{Xj}$  (CP: $-\bar f_{Xj}\,\overrightarrow{\slashed\partial} f_{Xi}$, P: $\bar f_{\bar Xi}\,\overleftarrow{\slashed\partial} f_{\bar Xj}$)
  $I^{14}_{Xaij} = \bar f_{Xi}\,\slashed A_a f_{Xj}$  (CP: $+\bar f_{Xj}\,\slashed A_a f_{Xi}$, P: $\bar f_{\bar Xi}\,\slashed A_a f_{\bar Xj}$)
where a sum over $X = L, R$ is understood. The monomials $I^k_A$, where the label $A$ collectively denotes the relevant set of indices, are collected in Table 3.1, along with their CP and P properties. The resulting basis coincides with the one already identified in Ref. [20]. Observable quantities are basis-independent, thus any other choice of basis would be equally good. The symmetry properties of the $I^k_A$ imply:

$$C^1_{abc} = C^1_{a(bc)}\,, \quad C^4_{abc} = C^4_{a(bc)}\,, \quad C^5_{abc} = C^5_{a(bc)}\,, \quad C^6_{abc} = C^6_{a(bc)}\,,$$
$$C^7_{abcd} = C^7_{ab(cd)}\,, \quad C^9_{abcd} = C^9_{ab[cd]}\,, \quad C^{10}_{abcde} = C^{10}_{a(bc)(de)} = C^{10}_{a(de)(bc)}\,, \quad C^{11}_{abcde} = C^{11}_{a[bcde]}\,, \quad (3.2)$$

where $(a_1 \ldots a_n)$ and $[a_1 \ldots a_n]$ denote symmetrization and antisymmetrization over the indices inside the parenthesis. For example, $C^1_{a(bc)} = (C^1_{abc} + C^1_{acb})/2$ and $C^9_{ab[cd]} = (C^9_{abcd} - C^9_{abdc})/2$, whereas $C^{11}_{a[bcde]}$ involves the anti-symmetrization of the four indices $bcde$.

The decomposition of Eq. (3.1) is general, and applies to any regularization scheme satisfying the properties i)-iv). We can further constrain this parametrization by observing that the effective action must fulfill the WZ conditions, hence its variation $\Delta_a|^{(1)}$ must satisfy Eq. (2.21). Plugging the decomposition (3.1) in (2.21), a set of relations among the coefficients $C^k_{aA}$ is obtained. We collectively denote them as

$$WZ[C^k_{aA}] = 0\,, \quad (3.3)$$
and provide their explicit expressions in Appendix A.1. It is worth stressing that the mutual dependence among the coefficients implied by Eq. (3.3) is not related to the linear dependence among the elements of the chosen basis, but is rather a consequence of the Lie algebra satisfied by the group generators. Also the polynomial $\mathcal{L}_{\rm ct}$ defining the counterterm $S_{\rm ct}[\phi]|^{(1)}$ can be expanded in a basis. At variance with the elements $I^k_A$, which always occur unintegrated, $\mathcal{L}_{\rm ct}$ is integrated over spacetime. Since monomials related by integration by parts do not produce independent terms in $S_{\rm ct}[\phi]|^{(1)}$, we can expand $\mathcal{L}_{\rm ct}$ in a basis consisting in a subset of the one introduced above:

$$S_{\rm ct}[\phi]\big|^{(1)} = \int d^4y\, \mathcal{L}_{\rm ct}(y) = \int d^4y \sum_{j=1}^{8} \xi^j_B\, \bar I^j_B(y)\,, \quad (3.4)$$

where the label $B$ collectively denotes the relevant set of indices and a sum over $X = L, R$ is understood. The monomials $\bar I^j_B$ and their CP and P properties are displayed in Table 3.2.

Table 3.2: Basis of counterterm monomials $\bar I^j_B$, the associated coefficients, and their CP and P properties (for the fermionic entries the CP- and P-transformed coefficients are shown):

  $\bar I^1_{ghl} = (\partial_\nu A^\mu_g)\, A^\nu_h A_{l\mu}$, coefficient $\xi^1_{ghl}$  (CP: $-$, P: $+$)
  $\bar I^2_{gh} = A_{g\mu}\,\Box A^\mu_h$, coefficient $\xi^2_{gh}$  (CP: $+$, P: $+$)
  $\bar I^3_{gh} = A_{g\mu}\,\partial^\mu\partial^\nu A_{h\nu}$, coefficient $\xi^3_{gh}$  (CP: $+$, P: $+$)
  $\bar I^4_{ghl} = \epsilon^{\mu\nu\rho\sigma} A_{g\mu} A_{h\nu} (\partial_\rho A_{l\sigma})$, coefficient $\xi^4_{[gh]l}$  (CP: $+$, P: $-$)
  $\bar I^5_{ghlm} = \epsilon^{\mu\nu\rho\sigma} A_{g\mu} A_{h\nu} A_{l\rho} A_{m\sigma}$, coefficient $\xi^5_{[ghlm]}$  (CP: $-$, P: $-$)
  $\bar I^6_{ghlm} = A_{g\mu} A^\mu_h\, A_{l\nu} A^\nu_m$, coefficient $\xi^6_{(gh)(lm)}$  (CP: $+$, P: $+$)
  $\bar I^7_{Xij} = \bar f_{Xi}\,\overrightarrow{\slashed\partial} f_{Xj}$, coefficient $\xi^7_{Xij}$  (CP: $\xi^7_{Xji}$, P: $\xi^7_{\bar Xij}$)
  $\bar I^8_{Xaij} = \bar f_{Xi}\,\slashed A_a f_{Xj}$, coefficient $\xi^8_{Xaij}$  (CP: $\xi^8_{Xaji}$, P: $\xi^8_{\bar Xaij}$)

Exchanging the gauge indices we deduce the following constraints on the coefficients:

$$\xi^4_{ghl} = \xi^4_{[gh]l}\,, \qquad \xi^5_{ghlm} = \xi^5_{[ghlm]}\,, \qquad \xi^6_{(gh)(lm)} = \xi^6_{(lm)(gh)}\,. \quad (3.5)$$
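As an aside (our addition, not part of the paper), the (anti)symmetrizations appearing in Eqs. (3.2) and (3.5) are elementary tensor operations; a minimal sketch:

```python
# Sketch: the index (anti)symmetrizations of Eqs. (3.2)/(3.5) as operations on
# coefficient tensors, e.g. C^1_{a(bc)} and C^9_{ab[cd]}, for a toy dim(G)=3.
import numpy as np

rng = np.random.default_rng(0)
C1 = rng.standard_normal((3, 3, 3))      # C^1_{abc}
C9 = rng.standard_normal((3, 3, 3, 3))   # C^9_{abcd}

C1_sym  = (C1 + C1.transpose(0, 2, 1)) / 2        # a(bc): symmetrize b,c
C9_asym = (C9 - C9.transpose(0, 1, 3, 2)) / 2     # ab[cd]: antisymmetrize c,d

assert np.allclose(C1_sym, C1_sym.transpose(0, 2, 1))
assert np.allclose(C9_asym, -C9_asym.transpose(0, 1, 3, 2))
```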
By computing the gauge variation of $S_{\rm ct}[\phi]|^{(1)}$, we find:

$$L_a(x)\, S_{\rm ct}[\phi]\big|^{(1)} = -(\xi^2_{ba} + \xi^2_{ab} + \xi^3_{ba} + \xi^3_{ab})\, I^0_b(x) + 2\xi^4_{[ab]c}\, I^1_{bc}(x)$$
$$+ \left[\xi^1_{abc} + \xi^1_{acb} - \xi^1_{cba} + (\xi^3_{cd} + \xi^3_{dc}) f_{dab}\right] I^2_{bc}(x)$$
$$+ \left[(\xi^1_{abc} + \xi^1_{acb} - \xi^1_{cba} - \xi^1_{cab}) + (\xi^2_{cd} + \xi^2_{dc} + \xi^3_{cd} + \xi^3_{dc}) f_{dab}\right] I^3_{bc}(x)$$
$$- \xi^1_{cab}\, I^4_{bc}(x) + (\xi^1_{abc} - \xi^1_{cba})\, I^5_{bc}(x) + \xi^1_{abc}\, I^6_{bc}(x)$$
$$- (f_{ace}\,\xi^1_{ebd} + 4\,\xi^6_{(ab)(cd)})\, I^7_{bcd}(x) + (f_{ade}\,\xi^1_{bce} - f_{ade}\,\xi^1_{ecb} + f_{ace}\,\xi^1_{bed} - 8\,\xi^6_{(ac)(bd)})\, I^8_{bcd}(x)$$
$$+ 4 f_{abg}\,\xi^6_{(gc)(de)}\, I^{10}_{bcde}(x) + \left[12\,\xi^5_{[abcd]} + 2 f_{ace}(\xi^4_{[de]b} - \xi^4_{[bd]e})\right] I^9_{bcd}(x) + 4 f_{abg}\,\xi^5_{[gcde]}\, I^{11}_{bcde}(x)$$
$$+ i(T^a_X \xi^7_X + i\xi^8_{Xa})_{ij}\, I^{12}_{Xij}(x) + i(\xi^7_X T^a_X + i\xi^8_{Xa})_{ij}\, I^{13}_{Xij}(x)$$
$$+ i(T^a_X \xi^8_{Xb} - \xi^8_{Xb} T^a_X - i f_{abc}\,\xi^8_{Xc})_{ij}\, I^{14}_{Xbij}(x) = \sum_{k=0}^{14} \hat C^k_{aA}(\xi)\, I^k_A(x)\,. \quad (3.6)$$

Again, a sum over $X = L, R$ is understood. Explicit expressions for the coefficients $\hat C^k_{aA}$ as a function of the coefficients $\xi^j_B$ appearing in the counterterm are provided in Table 3.3. Note that, since Eq. (3.6) describes a gauge variation, the $\hat C^k_{aA}$ automatically satisfy the WZ conditions.
Using (3.6) and (3.1), the gauge variation of the sum of the 1-loop effective action and the counterterm can be written as

$$\Delta_a(x)\big|^{(1)} + L_a(x)\, S_{\rm ct}[\phi]\big|^{(1)} = \sum_{k=0}^{14}\left[C^k_{aA} + \hat C^k_{aA}(\xi)\right] I^k_A(x)\,. \quad (3.7)$$
In an anomaly-free theory, the WI can be enforced by requiring the right-hand side of this equation to vanish. If instead the fermion content of the theory is anomalous, we can generalize this requirement by splitting the gauge variation of the effective action into two contributions, only one of which can be removed by a counterterm. The remaining piece represents the anomaly. Since the anomaly of a gauge theory is an equivalence class, where two elements related by adding an integrated local polynomial of the fields and derivatives are equivalent, such a separation is ambiguous unless we pick up a specific representative element $\mathcal{A}_a$ in the class. When this choice is made, we can write:

$$\sum_{k=0}^{14}\left[C^k_{aA} + \hat C^k_{aA}(\xi)\right] I^k_A(x) = \mathcal{A}_a(x)\,. \quad (3.8)$$
This defines our master equation. In practice, it is a set of linear equations that determine the counterterm coefficients $\xi^j_B$ as a function of the coefficients $C^k_{aA}$ describing the breaking of gauge invariance induced by the regularization. If the theory is anomaly free, Eq. (3.8) simplifies to:

$$C^k_{aA} + \hat C^k_{aA}(\xi) = 0\,. \quad (3.9)$$
Even in an anomalous theory, Eq. (3.9) can be enforced for a convenient subset of coefficients by appropriately choosing the representative element $\mathcal{A}_a$. For instance, one can always choose $\mathcal{A}_a$ to be a combination of P-violating operators. Here we show how this well-known fact can be deduced in full generality from the WZ conditions.

Table 3.3: Coefficients appearing in the gauge variation of the general counterterm, $L_a \Gamma_{\rm ct}$, once it is decomposed in the basis of Table 3.1, with their transformation properties under CP and P. For the fermionic coefficients $\hat C^{12}_{aX}$, $\hat C^{13}_{aX}$ and $\hat C^{14}_{abX}$, we display their CP- and P-transformed, with $\bar L(\bar R) = R(L)$:

  $\hat C^0_{ab} = -(\xi^2_{ba} + \xi^2_{ab} + \xi^3_{ba} + \xi^3_{ab})$  (CP: $+$, P: $+$)
  $\hat C^1_{a(bc)} = \xi^4_{[ab]c} + \xi^4_{[ac]b}$  (CP: $+$, P: $-$)
  $\hat C^2_{abc} = (\xi^1_{abc} + \xi^1_{acb} - \xi^1_{cba}) + (\xi^3_{cd} + \xi^3_{dc}) f_{dab}$  (CP: $-$, P: $+$)
  $\hat C^3_{abc} = (\xi^1_{abc} + \xi^1_{acb} - \xi^1_{cba} - \xi^1_{cab}) + (\xi^2_{cd} + \xi^2_{dc} + \xi^3_{cd} + \xi^3_{dc}) f_{dab}$  (CP: $-$, P: $+$)
  $\hat C^4_{a(bc)} = -\frac12(\xi^1_{cab} + \xi^1_{bac})$  (CP: $-$, P: $+$)
  $\hat C^5_{a(bc)} = \frac12(\xi^1_{abc} - \xi^1_{cba} + \xi^1_{acb} - \xi^1_{bca})$  (CP: $-$, P: $+$)
  $\hat C^6_{a(bc)} = \frac12(\xi^1_{abc} + \xi^1_{acb})$  (CP: $-$, P: $+$)
  $\hat C^7_{ab(cd)} = -\frac12(f_{ace}\,\xi^1_{ebd} + f_{ade}\,\xi^1_{ebc}) - 4\,\xi^6_{(ab)(cd)}$  (CP: $+$, P: $+$)
  $\hat C^8_{abcd} = f_{ade}\,\xi^1_{bce} - f_{ade}\,\xi^1_{ecb} + f_{ace}\,\xi^1_{bed} - 8\,\xi^6_{(ac)(bd)}$  (CP: $+$, P: $+$)
  $\hat C^9_{ab[cd]} = 12\,\xi^5_{[abcd]} + f_{ace}(\xi^4_{[de]b} - \xi^4_{[bd]e}) - f_{ade}(\xi^4_{[ce]b} - \xi^4_{[bc]e})$  (CP: $-$, P: $-$)
  $\hat C^{10}_{a(bc)(de)} = f_{abg}\,\xi^6_{(gc)(de)} + f_{acg}\,\xi^6_{(gb)(de)} + f_{adg}\,\xi^6_{(ge)(bc)} + f_{aeg}\,\xi^6_{(gd)(bc)}$  (CP: $-$, P: $+$)
  $\hat C^{11}_{a[bcde]} = f_{abg}\,\xi^5_{[gcde]} - f_{acg}\,\xi^5_{[gbde]} + f_{adg}\,\xi^5_{[gbce]} - f_{aeg}\,\xi^5_{[gbcd]}$  (CP: $+$, P: $-$)
  $\hat C^{12}_{aX} = i(T^a_X \xi^7_X + i\xi^8_{Xa})$  (CP: $\hat C^{13\,T}_{aX}$, P: $\hat C^{12}_{a\bar X}$)
  $\hat C^{13}_{aX} = i(\xi^7_X T^a_X + i\xi^8_{Xa})$  (CP: $\hat C^{12\,T}_{aX}$, P: $\hat C^{13}_{a\bar X}$)
  $\hat C^{14}_{abX} = i(T^a_X \xi^8_{Xb} - \xi^8_{Xb} T^a_X - i f_{abc}\,\xi^8_{Xc})$  (CP: $-\hat C^{14\,T}_{abX}$, P: $\hat C^{14}_{ab\bar X}$)
Solution to the master equation
We now wish to simultaneously solve the WZ conditions (3.3) and the master equation (3.8).
To this end, we first determine the most general form of the C k aA satisfying (3.3), and then find the counterterm coefficients ξ j B such that (3.8) is fulfilled. We do not need to specify the regularization scheme, which is only required to satisfy the general assumptions spelled out at the beginning of Section 3. An explicit determination of the coefficients C k aA and of the corresponding counterterms ξ j B is performed in Sec. 4.3 using DR. Our task is considerably facilitated by the observation that both C k aA and ξ j B have definite transformation properties under CP and P. For the C k aA these properties can be deduced from Eq. (3.1), recalling that ∆ a is CP-odd and P-even and that the operators I k A transform as shown in Table 3.1. Similarly, the transformations of ξ j B under CP and P, displayed in Table 3.2, can be deduced from Eq. (3.4), wWWhere each side is invariant under both CP and P. For consistency the coefficientsĈ k aA and C k aA must transform in the same way (see Table 3.3). Since gauge transformations do not mix operators with fermions with those containing only bosons, we can treat them independently. We start by solving the set of equations (3.3) and (3.8) involving purely bosonic operators and then discuss the fermionic sector.
Table 3.4: Combinations of traces with definite transformation properties under CP and P; $\bar X_i$ denotes the flipped chirality ($\bar L = R$, $\bar R = L$):

  $(T^{a_1\ldots a_n}_{X_1\ldots X_n} + T^{a_n\ldots a_1}_{X_n\ldots X_1}) + (T^{a_1\ldots a_n}_{\bar X_1\ldots \bar X_n} + T^{a_n\ldots a_1}_{\bar X_n\ldots \bar X_1})$  (CP: $+$, P: $+$)
  $(T^{a_1\ldots a_n}_{X_1\ldots X_n} + T^{a_n\ldots a_1}_{X_n\ldots X_1}) - (T^{a_1\ldots a_n}_{\bar X_1\ldots \bar X_n} + T^{a_n\ldots a_1}_{\bar X_n\ldots \bar X_1})$  (CP: $+$, P: $-$)
  $(T^{a_1\ldots a_n}_{X_1\ldots X_n} - T^{a_n\ldots a_1}_{X_n\ldots X_1}) + (T^{a_1\ldots a_n}_{\bar X_1\ldots \bar X_n} - T^{a_n\ldots a_1}_{\bar X_n\ldots \bar X_1})$  (CP: $-$, P: $+$)
  $(T^{a_1\ldots a_n}_{X_1\ldots X_n} - T^{a_n\ldots a_1}_{X_n\ldots X_1}) - (T^{a_1\ldots a_n}_{\bar X_1\ldots \bar X_n} - T^{a_n\ldots a_1}_{\bar X_n\ldots \bar X_1})$  (CP: $-$, P: $-$)
Bosonic sector
The coefficients associated to the bosonic operators are $C^{k=0-11}_{aA}$ and $\xi^{j=1-6}_B$. In this sector the Wess-Zumino conditions (3.3) and the master equation (3.8) split into two decoupled sets of equations, according to the parity of the operators involved. The P-even and P-odd sets are defined by $k = 0, 2-8, 10$ (in short: $k \in$ P-even) and $k = 1, 9, 11$ (in short: $k \in$ P-odd), respectively. The WZ conditions in the P-even and P-odd sectors are given in Eqs. (A.1) and (A.2). The master equation (3.8) involves the counterterm coefficients $\xi^{j=1,2,3,6}_B$ in the P-even sector, and $\xi^{j=4,5}_B$ in the P-odd sector. At the one-loop order the coefficients $C^k_{aA}$ and $\xi^j_B$ can be written as linear combinations of single traces of the generators: 9

$$C^k_{a_1\ldots a_n} = c^k_{X_1\ldots X_n}\, T^{a_1\ldots a_n}_{X_1\ldots X_n}\,, \qquad \xi^j_{a_1\ldots a_n} = \chi^j_{X_1\ldots X_n}\, T^{a_1\ldots a_n}_{X_1\ldots X_n}\,, \quad (3.10)$$

where

$$T^{a_1\ldots a_n}_{X_1\ldots X_n} = \mathrm{tr}(T^{a_1}_{X_1}\cdots T^{a_n}_{X_n})\,, \quad (3.11)$$

and $c^k_{X_1\ldots X_n}$ and $\chi^j_{X_1\ldots X_n}$ are numerical coefficients. Given the assumptions iii) and iv) stated at the beginning of the section and the decompositions in (3.1) and (3.4), the coefficients $C^k_{aA}$ and $\xi^j_B$ must have the following properties:

1. They transform under CP and P as indicated in Tables 3.2 and 3.3.
2. Under exchange of $a_1 \ldots a_n$ they behave as indicated in Table 3.3.
3. $C^k_{aA}$ and $\hat C^k_{aA}(\xi)$ vanish for vector-like theories, i.e. if $T^a_L = T^a_R$.

This strongly restricts the form of $C^k_{aA}$ and $\xi^j_B$. In particular, the first requirement implies that the traces of Eq. (3.10) can only appear in the combinations with definite transformation properties under CP and P listed in Table 3.4. Once the remaining conditions are imposed, we are left with a general, regularization-independent parametrization of the $C^k_{aA}$ and $\xi^j_B$ at the one-loop order. For example, for elements $I^k_A$ linear or quadratic in the gauge fields, the coefficients $C^k_{aA}$ read:
$$C^0_{ab} = c^0\,(T^{ab}_{LL} + T^{ab}_{RR} - T^{ab}_{LR} - T^{ab}_{RL})\,,$$
$$C^1_{a(bc)} = c^1_{LLL}\,(T^{abc}_{LLL} + T^{acb}_{LLL} - T^{abc}_{RRR} - T^{acb}_{RRR}) + c^1_{RLL}\,(T^{abc}_{RLL} + T^{acb}_{RLL} - T^{abc}_{LRR} - T^{acb}_{LRR})$$
$$\qquad + c^1_{LLR}\,(T^{abc}_{LLR} + T^{acb}_{LRL} - T^{abc}_{RRL} - T^{acb}_{RLR} + T^{abc}_{LRL} + T^{acb}_{LLR} - T^{abc}_{RLR} - T^{acb}_{RRL})\,, \quad (3.12)$$
$$C^{k=2,3}_{abc} = c^k_{LLL}\,(T^{abc}_{LLL} - T^{acb}_{LLL} + T^{abc}_{RRR} - T^{acb}_{RRR} - T^{abc}_{LRL} + T^{acb}_{LLR} - T^{abc}_{RLR} + T^{acb}_{RRL})$$
$$\qquad + c^k_{LLR}\,(T^{abc}_{LLR} - T^{acb}_{LRL} + T^{abc}_{RRL} - T^{acb}_{RLR} - T^{abc}_{LRL} + T^{acb}_{LLR} - T^{abc}_{RLR} + T^{acb}_{RRL})$$
$$\qquad + c^k_{RLL}\,(T^{abc}_{RLL} - T^{acb}_{RLL} + T^{abc}_{LRR} - T^{acb}_{LRR} - T^{abc}_{LRL} + T^{acb}_{LLR} - T^{abc}_{RLR} + T^{acb}_{RRL})\,,$$
$$C^{k=4,5,6}_{a(bc)} = c^k\,(T^{abc}_{LLR} - T^{abc}_{LRL} + T^{abc}_{RRL} - T^{abc}_{RLR} + T^{acb}_{LLR} - T^{acb}_{LRL} + T^{acb}_{RRL} - T^{acb}_{RLR})\,.$$

9 At higher loops also products of traces can appear.
The parametrization for the remaining $C^k_{aA}$ can be found in Appendix A.2. Analogously, the $\xi^j_B$ can be parametrized as:

$$\xi^1_{abc} = \chi^1_{LLL}\,(T^{abc}_{LLL} - T^{acb}_{LLL} + T^{abc}_{RRR} - T^{acb}_{RRR}) + \chi^1_{LLR}\,(T^{abc}_{LLR} - T^{acb}_{LRL} + T^{abc}_{RRL} - T^{acb}_{RLR})$$
$$\qquad + \chi^1_{LRL}\,(T^{abc}_{LRL} - T^{acb}_{LLR} + T^{abc}_{RLR} - T^{acb}_{RRL}) + \chi^1_{RLL}\,(T^{abc}_{RLL} - T^{acb}_{RLL} + T^{abc}_{LRR} - T^{acb}_{LRR})\,,$$
$$\xi^{j=2,3}_{ab} = \chi^j_{LL}\,(T^{ab}_{LL} + T^{ab}_{RR}) + \chi^j_{LR}\,(T^{ab}_{LR} - T^{ab}_{RL})\,,$$
$$\xi^4_{[ab]c} = \chi^4\,(T^{abc}_{LRL} + T^{abc}_{LRR} - T^{abc}_{RLL} - T^{abc}_{RLR} - T^{bac}_{LRL} - T^{bac}_{LRR} + T^{bac}_{RLL} + T^{bac}_{RLR})\,,$$
$$\xi^5_{abcd} = \chi^5_{LRLR}\,\big(-T^{abcd}_{RLRL} + T^{abdc}_{RLRL} + T^{acbd}_{RLRL} + T^{bacd}_{RLRL} - T^{bcad}_{RLRL} + T^{bcda}_{RLRL} - T^{cabd}_{RLRL} + T^{cbad}_{RLRL} - T^{cbda}_{RLRL} - T^{dbac}_{RLRL} + T^{dbca}_{RLRL} - T^{dcba}_{RLRL}\big)$$
$$\qquad + \chi^5_{LLLR}\,\big(T^{abcd}_{LLLR} - T^{abcd}_{RLLL} - T^{abcd}_{RLRR} + T^{abcd}_{RRLR} + T^{abdc}_{LLRL} + T^{abdc}_{RLLL} + T^{abdc}_{RLRR} - T^{abdc}_{RRLR} - T^{acbd}_{LLLR} + T^{acbd}_{RLLL} + T^{acbd}_{RLRR} - T^{acbd}_{RRLR} - T^{acdb}_{RLRR} - T^{adbc}_{RLRR} + T^{adcb}_{RLRR} + T^{bacd}_{RLLL} - T^{bcad}_{RLLL} + T^{bcda}_{RLLL} - T^{cabd}_{RLLL} - T^{cabd}_{RLRR} + T^{cadb}_{RLRR} + T^{cbad}_{RLLL} - T^{cbad}_{RRLR} - T^{cbda}_{RLLL} + T^{cbda}_{RRLR} - T^{cdab}_{RLRR} + T^{cdba}_{RLRR} - T^{dabc}_{LLLR} + T^{dabc}_{RLRR} + T^{dacb}_{LLLR} - T^{dacb}_{RLRR} + T^{dbac}_{LLLR} - T^{dbac}_{LLRL} - T^{dbac}_{RLLL} - T^{dbac}_{RLRR} + T^{dbac}_{RRLR} - T^{dbca}_{LLLR} + T^{dbca}_{LLRL} + T^{dbca}_{RLLL} + T^{dbca}_{RLRR} - T^{dbca}_{RRLR} - T^{dcab}_{LLLR} + T^{dcab}_{RLRR} + T^{dcba}_{LLLR} - T^{dcba}_{LLRL} - T^{dcba}_{RLLL} - T^{dcba}_{RLRR} + T^{dcba}_{RRLR}\big)\,,$$
$$\xi^6_{abcd} = \chi^6_{LLLL}\,\big(T^{abcd}_{LLLL} + T^{cabd}_{LLLL} + T^{cbad}_{LLLL} + T^{dbac}_{LLLL} + T^{cabd}_{RRRR} + T^{cbad}_{RRRR} + T^{dabc}_{RRRR} + T^{dbac}_{RRRR}\big)$$
$$\qquad + \chi^6_{RLLL}\,\big(T^{abcd}_{RLLL} + T^{abdc}_{RLLL} + T^{bacd}_{RLLL} + T^{bacd}_{RRLR} + T^{badc}_{RLLL} + T^{badc}_{RRLR} + T^{bcda}_{RLRR} + T^{bdca}_{RLRR} + T^{cabd}_{LLLR} + T^{cabd}_{RLLL} + T^{cabd}_{RLRR} + T^{cabd}_{RRLR} + T^{cbad}_{LLLR} + T^{cbad}_{RLLL} + T^{cbad}_{RLRR} + T^{cbad}_{RRLR} + T^{cdab}_{RLRR} + T^{cdba}_{RLRR} + T^{dabc}_{LLLR} + T^{dabc}_{LLRL} + T^{dabc}_{RLLL} + T^{dabc}_{RLRR} + T^{dabc}_{RRLR} + T^{dbac}_{LLLR} + T^{dbac}_{LLRL} + T^{dbac}_{RLLL} + T^{dbac}_{RLRR} + T^{dbac}_{RRLR} + T^{dcab}_{LLLR} + T^{dcab}_{RLRR} + T^{dcba}_{LLLR} + T^{dcba}_{RLRR}\big)$$
$$\qquad + \chi^6_{LLRR}\,\big(T^{abcd}_{LLRR} + T^{cdba}_{LLRR} + T^{dcba}_{LLRR} + T^{cdab}_{LLRR} + T^{dcab}_{LLRR} + T^{abdc}_{LLRR} + T^{badc}_{LLRR} + T^{bacd}_{LLRR}\big)$$
$$\qquad + \chi^6_{LRLR}\,\big(T^{abcd}_{LRLR} + T^{bcda}_{LRLR} + T^{bdca}_{LRLR} + T^{dbac}_{LRLR} + T^{cbad}_{LRLR} + T^{dcab}_{LRLR} + T^{dcba}_{LRLR} + T^{cdba}_{LRLR}\big)$$
$$\qquad + \chi^6_{RLLR}\,\big(T^{abcd}_{RLLR} + T^{abdc}_{RLLR} + T^{bacd}_{RLLR} + T^{badc}_{RLLR} + T^{cdab}_{RLLR} + T^{cdba}_{RLLR} + T^{dcab}_{RLLR} + T^{dcba}_{RLLR}\big)$$
$$\qquad + \tilde\chi^6_{LLLL}\,\big(T^{cadb}_{LLLL} + T^{dacb}_{LLLL} + T^{cadb}_{RRRR} + T^{dacb}_{RRRR}\big)$$
$$\qquad + \tilde\chi^6_{RLLL}\,\big(T^{cadb}_{RLLL} + T^{cadb}_{LLRL} + T^{acbd}_{RLLL} + T^{bcad}_{RLLL} + T^{bcad}_{RLRR} + T^{bdac}_{RLRR} + T^{cadb}_{RLRR} + T^{cadb}_{RRLR} + T^{cbda}_{RLRR} + T^{dacb}_{LLLR} + T^{dacb}_{LLRL} + T^{dacb}_{RLLL} + T^{dacb}_{RLRR} + T^{dacb}_{RRLR} + T^{dbca}_{LLLR} + T^{dbca}_{RLRR}\big)$$
$$\qquad + \tilde\chi^6_{RLLR}\,\big(T^{cadb}_{RLLR} + T^{acbd}_{RLLR} + T^{adbc}_{RLLR} + T^{bcad}_{RLLR} + T^{bdac}_{RLLR} + T^{cbda}_{RLLR} + T^{dacb}_{RLLR} + T^{dbca}_{RLLR}\big)$$
$$\qquad + \tilde\chi^6_{LRLR}\,\big(T^{cadb}_{LRLR} + T^{cbda}_{LRLR} + T^{adbc}_{LRLR} + T^{acbd}_{LRLR}\big)\,,$$

where the tilde distinguishes the second, independent family of $\chi^6$ coefficients.
Given the length of these expressions, we also provide the parametrizations of all C k A and ξ j B coefficients in a Mathematica notebook attached to the arXiv preprint of this article as an ancillary file.
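For readers wishing to evaluate these structures for a concrete model, the single traces of Eq. (3.11) are straightforward to compute numerically; the helper below is our own illustrative sketch (the function name and the toy SU(2) example are not from the paper).

```python
# Sketch: evaluating T^{a1...an}_{X1...Xn} = tr(T^{a1}_{X1} ... T^{an}_{Xn})
# for given generator matrices, as in Eq. (3.11).
import numpy as np

def trace_basis(gens, chiralities, indices):
    """tr(T^{a1}_{X1} ... T^{an}_{Xn}) for gens = {'L': TL, 'R': TR}."""
    dim = gens[chiralities[0]][0].shape[0]
    M = np.eye(dim, dtype=complex)
    for X, a in zip(chiralities, indices):
        M = M @ gens[X][a]
    return np.trace(M)

# toy example: SU(2) doublet on the left, trivial (singlet) on the right
s = [np.array([[0, 1], [1, 0]]), np.array([[0, -1j], [1j, 0]]), np.array([[1, 0], [0, -1]])]
gens = {'L': [m / 2 for m in s], 'R': [np.zeros((2, 2))] * 3}
print(trace_basis(gens, 'LLL', (0, 1, 2)))   # tr(T^1_L T^2_L T^3_L) = i/4
```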
With our parametrization, the $C^{k=0-11}_{aA}$ automatically vanish for $T^a_L = T^a_R$. For the same to hold for their hatted counterparts, the counterterm coefficients must obey four additional conditions:

$$\chi^2_{LL} + \chi^2_{LR} + \chi^3_{LL} + \chi^3_{LR} = 0\,,$$
$$\chi^1_{LLL} + \chi^1_{RLL} + \chi^1_{LLR} + \chi^1_{LRL} - 2i(\chi^2_{LL} + \chi^2_{LR}) = 0\,,$$
$$\chi^2_{LL} + \chi^2_{LR} + 4(\chi^6_{LLLL} + 4\chi^6_{RLLL} + \chi^6_{LLRR} + \chi^6_{LRLR} + \chi^6_{RLLR}) = 0\,,$$
$$\chi^2_{LL} + \chi^2_{LR} - 2(\tilde\chi^6_{LLLL} + 4\tilde\chi^6_{RLLL} + 2\tilde\chi^6_{RLLR} + \tilde\chi^6_{LRLR}) = 0\,. \quad (3.13)$$
These allow us to express four coefficients, e.g. $\chi^1_{LRL}$, $\chi^3_{LR}$, $\chi^6_{RLLR}$ and $\tilde\chi^6_{LRLR}$, as a function of the others. We therefore conclude that the $C^k_{aA}$ are described by a total of 61 real parameters in the P-even sector and 27 in the P-odd one, while the $\xi^j_B$ (hence the $\hat C^k_{aA}(\xi)$) depend on 13 real parameters in the P-even sector and 4 in the P-odd one. Note that at this stage the parameters describing $C^k_{aA}$ are still redundant, because, as mentioned above, the various $C^k_{aA}$ are related by the WZ conditions $WZ[C^k_{aA}] = 0$. In contrast to this, the $\hat C^k_{aA}(\xi)$ automatically satisfy $WZ[\hat C^k_{aA}] = 0$, hence there are no further restrictions on the $\xi^j_B$. In order to remove the redundancy in the above parametrization of $C^k_{aA}$, we proceed to solve the constraints $WZ[C^k_{aA}] = 0$.
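Operationally, eliminating the four dependent coefficients amounts to solving a small linear system; the sketch below (our addition) does this symbolically, with variable names following our transcription of Eq. (3.13) (in particular, chi6t_* stands for the tilde-decorated $\tilde\chi^6$ coefficients).

```python
# Sketch: solving the linear conditions (3.13) for the four dependent
# counterterm coefficients chosen in the text.
import sympy as sp

(x1LLL, x1RLL, x1LLR, x1LRL, x2LL, x2LR, x3LL, x3LR,
 x6LLLL, x6RLLL, x6LLRR, x6LRLR, x6RLLR,
 x6tLLLL, x6tRLLL, x6tLRLR, x6tRLLR) = sp.symbols(
    'x1LLL x1RLL x1LLR x1LRL x2LL x2LR x3LL x3LR '
    'x6LLLL x6RLLL x6LLRR x6LRLR x6RLLR x6tLLLL x6tRLLL x6tLRLR x6tRLLR')

eqs = [x2LL + x2LR + x3LL + x3LR,
       x1LLL + x1RLL + x1LLR + x1LRL - 2*sp.I*(x2LL + x2LR),
       x2LL + x2LR + 4*(x6LLLL + 4*x6RLLL + x6LLRR + x6LRLR + x6RLLR),
       x2LL + x2LR - 2*(x6tLLLL + 4*x6tRLLL + 2*x6tRLLR + x6tLRLR)]

sol = sp.solve(eqs, [x1LRL, x3LR, x6RLLR, x6tLRLR], dict=True)[0]
print(sol)   # the four dependent coefficients in terms of the others
```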
P-even sector
We start from the P-even sector. Plugging the parametrizations for $C^{k\in P\text{-even}}_{aA}$ into Eq. (A.1), we obtain 49 independent conditions on the coefficients entering the parametrizations (see Appendix A.3.1 for the full expressions). This leaves us with $61 - 49 = 12$ free parameters, which we choose to be:

$$c^0\,,\ c^2_{LLL}\,,\ c^2_{RLL}\,,\ c^4\,,\ c^6\,,\ c^7_{LLLR}\,,\ c^7_{LRLR}\,,\ c^7_{LLRR}\,,\ c^7_{LRRL}\,,\ \tilde c^7_{LLLR}\,,\ \tilde c^7_{LRLR}\,,\ \tilde c^7_{LLRR}\,. \quad (3.14)$$
From the conditions in A.3.1 we can also conclude that, independently from the choice of free parameters, the conditions $WZ[C^k_{aA}] = 0$ fully determine $C^{10}_{abcd}$ as a combination of other coefficients in the P-even sector. Making use of the expressions for $C^{k\in P\text{-even}}_{aA}$ in terms of the parameters in (3.14), we can solve the homogeneous master equation (3.9). The solution

$$\chi^1_{LLL} = i c^0 - c^2_{LLL} - 2i\chi^2_{LL}\,, \qquad \chi^1_{LLR} = -c^2_{RLL}\,, \qquad \chi^1_{RLL} = 2c^4 - c^2_{RLL}\,,$$
$$\chi^2_{LR} = -\frac12 c^0 + i c^4 + i c^6 - \frac{i}{2} c^2_{LLL} - \frac{3i}{2} c^2_{RLL}\,, \qquad \chi^3_{LL} = \frac12 c^0 - \chi^2_{LL}\,,$$
$$\chi^6_{RLLL} = \frac14 c^7_{LLLR}\,, \qquad \chi^6_{LLRR} = \frac14 c^7_{LLRR}\,, \qquad \chi^6_{LRLR} = \frac14 c^7_{LRLR}\,, \quad (3.15)$$
$$\chi^6_{LLLL} = \frac18\left(c^0 - 2i c^6 + i c^2_{LLL} + 2i c^2_{RLL} - 8c^7_{LLLR} - 2c^7_{LLRR} - 2c^7_{LRLR} - 2c^7_{LRRL} - 2\chi^2_{LL}\right)\,,$$
$$\tilde\chi^6_{LLLL} = \frac14\left(-c^0 + 2i c^6 - i c^2_{LLL} - 4\tilde c^7_{LLLR} - 2\tilde c^7_{LLRR} - \tilde c^7_{LRLR} + 2\chi^2_{LL}\right)\,,$$
$$\tilde\chi^6_{RLLL} = \frac14 \tilde c^7_{LLLR} - \frac{i}{2} c^2_{RLL}\,, \qquad \tilde\chi^6_{RLLR} = \frac{i}{4} c^4 - \frac{i}{2} c^2_{RLL} + \tilde c^7_{LLRR}\,,$$

determines the P-even counterterms $\xi^{1,2,3,6}_B$ and explicitly shows the absence of anomalies in this sector. 10 In other words, in the P-even sector the gauge variation of the effective action can always be compensated by a counterterm.
The conditions (3.15) fix only 12 out of the 13 available counterterm coefficients. The residual freedom amounts to the possibility of adding to $\mathcal{L}_{\rm ct}$ the gauge-invariant counterterm:

$$\mathcal{L}_{\rm ct} \supset \chi^2_{LL}\left[\bar I^2_{ab} - \bar I^3_{ab} - 2 f_{ceb}\,\bar I^1_{eca} + \frac12 f_{dga} f_{ecb}\,\bar I^6_{egcd}\right](T^{ab}_{LL} + T^{ab}_{RR}) = -\frac{\chi^2_{LL}}{2}\, F^a_{\mu\nu} F^{b\,\mu\nu}\,(T^{ab}_{LL} + T^{ab}_{RR})\,. \quad (3.16)$$

This term is manifestly gauge invariant because $T^{ab}_{LL}$ and $T^{ab}_{RR}$ can be written as the direct sum of identities in the adjoint representations of the gauge group, each multiplied by a representation-dependent Casimir.
P-odd sector
We now repeat the same procedure in the P-odd sector. Plugging the parametrizations for $C^{k\in P\text{-odd}}_{aA}$ into Eq. (A.2) we obtain 23 conditions, which we list in Appendix A.3.2. Hence, only $27 - 23 = 4$ out of the 27 coefficients appearing in $C^{k\in P\text{-odd}}_{aA}$ are truly independent. We choose 11:

$$c^1_{LLL}\,, \quad c^1_{LLR}\,, \quad c^9_{LRLR}\,, \quad c^9_{LLLR}\,. \quad (3.17)$$

On the other hand, the P-odd counterterms depend only on three parameters: $\chi^4$, $\chi^5_{LLLR}$ and $\chi^5_{LRLR}$. We can use them to remove $c^1_{LLR}$, $c^9_{LRLR}$ and $c^9_{LLLR}$ by choosing 12:

$$\chi^4 = \frac{c^1_{LLR}}{2}\,, \qquad \chi^5_{LLLR} = -\frac{c^9_{LLLR}}{12}\,, \qquad \chi^5_{LRLR} = -\frac{c^9_{LRLR}}{12}\,. \quad (3.18)$$
The extra coefficient, $c^1_{LLL}$, is related to the anomaly. In fact, by combining the parametrizations for $C^{k\in P\text{-odd}}_{aA}$, the constraints from the WZ conditions in Eq. (A.2) and the counterterm choice in (3.18), we get

$$\sum_{k\in P\text{-odd}}\left[C^k_{aA} + \hat C^k_{aA}(\xi)\right] I^k_A(x) = c^1_{LLL}\left\{2 T^{a(bc)}_{LLL}\, I^1_{bc} - i\left(T^{ab[cd]}_{LLLL} + T^{a[c|b|d]}_{LLLL} + T^{ba[cd]}_{LLLL}\right) I^9_{b[cd]}\right\} - (L \to R)\,, \quad (3.19)$$

where the vertical bars indicate that the indices between them are not antisymmetrized. Since the $C^{k\in P\text{-even}}_{aA}$ satisfy the homogeneous equation (3.9), the right-hand side of (3.19) can be identified with $\mathcal{A}_a$. Using the explicit expressions for $I^1_{bc}$ and $I^9_{b[cd]}$, we can write it as

$$\mathcal{A}_a = -c^1_{LLL}\,\epsilon^{\mu\nu\rho\sigma}\left[\partial_\mu A^b_\nu\, \partial_\rho A^e_\sigma - \frac{i}{4}\, A^b_\nu A^c_\rho A^d_\sigma\,(i f^{cde})\right] \mathrm{tr}\!\left(T^a_L\{T^b_L, T^e_L\} - T^a_R\{T^b_R, T^e_R\}\right)\,. \quad (3.20)$$

Because there is no freedom left in choosing the counterterms, the condition $\mathcal{A}_a = 0$ can only be satisfied by imposing Eq. (2.15).

11 Note that, as in the P-even sector, the WZ conditions fully determine $C^{11}_{abcd}$ as a combination of the coefficients entering $C^{1,9}_{abcd}$.
12 As in the previous section, all counterterms are fixed by using the master equation for $k = 1, 9$; $C^{11}_{abcd} + \hat C^{11}_{abcd}(\xi) = 0$ is automatically satisfied.
Fermionic sector
We now turn to the fermionic sector, where it is convenient to first focus on the coefficients $C^{12}_{aX}$ and $C^{13}_{aX}$. Both are matrices in flavor space that can be parametrized in terms of strings of generators. At the one-loop order such strings are not completely generic, since the relevant diagrams are the ones depicted in Fig. 3.1, from which we infer the patterns:

$$C^{12}_{aX} = a_1\, T^a_X T^b_X T^b_X + a_2\, T^b_X T^b_X T^a_X + a_3\, f^{abc}\, T^b_X T^c_X + a_{4Y}\, T^b_X T^a_Y T^b_X\,,$$
$$C^{13}_{aX} = b_1\, T^a_X T^b_X T^b_X + b_2\, T^b_X T^b_X T^a_X + b_3\, f^{abc}\, T^b_X T^c_X + b_{4Y}\, T^b_X T^a_Y T^b_X\,, \quad (3.21)$$
where a sum over $Y = L, R$ is understood. The Lie algebra guarantees that the combination $T^a_X T^a_X$ satisfies $[T^a_X T^a_X, T^b_X] = 0$ for any $T^b_X$, while $f^{abc}\, T^b_X T^c_X$ is proportional to $T^a_X$. Without losing generality, we can thus write:

$$C^{12}_{aX} = a_{1X}\, T^a_X + a_{4Y}\, T^b_X T^a_Y T^b_X\,, \qquad C^{13}_{aX} = b_{1X}\, T^a_X + b_{4Y}\, T^b_X T^a_Y T^b_X\,, \quad (3.22)$$

where the matrices $a_{1X}$ and $b_{1X}$ commute with all generators $T^a_X$.
where the matrices a 1X and b 1X commute with all generators T a X . We can further refine the parametrization of C 12 aX and C 13 aX by imposing invariance under CP. On the one side we have
C 12 aX CP −→ C 13 T aX = b 1X T aT X + b 4Y T bT X T aT Y T bT X . (3.23)
On the other side we recall that under CP T a X CP −→ T aT X and we obtain
C 12 aX CP −→ a 1X T aT X + a 4Y T bT X T aT Y T bT X . (3.24)
The two ways lead to the same result provided a 1X = b 1X and a 4Y = b 4Y , resulting in
C 12 aX = C 13 aX = a 1X T a X + a 4Y T b X T a Y T b X ,
holding at least at one-loop order. Moreover, by making use of C 12 cX = C 13 cX , from the WZ consistency conditions (see Appendix A), we can express C 14 abX in terms of C 13 cX :
C 14 abX = i(C 13 bX T a X − T a X C 13 bX + if abc C 13 cX ) . (3.25)
Therefore the independent coefficients relevant for the one-loop parametrization of the gauge variation in the fermionic sector are provided by the matrix C 13 cX . We now show that, for any choice of C 13 cX , the homogeneous equation (3.9) can always be solved, thus proving the absence of anomalies in this sector of the theory. When k = 12, 13, 14, Eq. (3.9) gives:
T a X ξ 7 X + iξ 8 Xa = iC 13 aX , ξ 7 X T a X + iξ 8 Xa = iC 13 aX , T a X ξ 8 Xb − ξ 8 Xb T a X − if abc ξ 8 Xc = −(C 13 bX T a X − T a X C 13 bX + if abc C 13 cX ) .
(3.26)
By combining the first two equations we see that $\xi^7_X$ should commute with all generators $T^a_X$:

$$\xi^7_X T^a_X - T^a_X \xi^7_X = 0\,. \quad (3.27)$$

The third equation is automatically satisfied once we eliminate $\xi^8_{Xa}$ in favour of $\xi^7_X$ and $C^{13}_{aX}$. As a consequence, (3.26) only determines one combination of $\xi^7_X$ and $\xi^8_{Xa}$:

$$\bar\xi^8_{Xa} \equiv -i\xi^7_X T^a_X + \xi^8_{Xa} = C^{13}_{aX}\,. \quad (3.28)$$

By expressing the searched-for counterterm in terms of $\xi^7_X$ and $\bar\xi^8_{Xa}$, we get:

$$\bar f_X\, \xi^7_X \left(\slashed\partial + i T^a_X \slashed A_a\right) f_X + \bar f_X\, \bar\xi^8_{aX}\, \slashed A_a f_X\,. \quad (3.29)$$

Since the matrix $\xi^7_X$ commutes with all generators, and thus with all gauge transformations, in the above expression the first term is gauge invariant and can be safely dropped because it does not affect (2.22). We end up with

$$\bar f_X\, \bar\xi^8_{aX}\, \slashed A_a f_X\,, \quad (3.30)$$

as the unique non-trivial counterterm, where $\bar\xi^8_{aX}$ is given in Eq. (3.28).
One-loop analysis in Dimensional Regularization
In this section we present explicit one-loop results for the variation of the effective action and the WI-restoring counterterms in DR, using the BMHV prescription for γ 5 . First, we introduce the conventional dimensionally regularized action, and then we perform the explicit one-loop computation.
Classical action in DR
In DR Lorentz indices are analytically extended from $d = 4$ to $d = 4 - 2\epsilon$ complex dimensions. In this respect it is necessary to slightly modify the notation we used so far. In the present section (only), vector Lorentz indices like $\mu, \nu$ run from 0 to $d$, and split into a four-dimensional set denoted by $\bar\mu, \bar\nu$ and a $(d-4)$-dimensional (evanescent) one labeled $\hat\mu, \hat\nu$. As we will discuss more extensively in Section 4.2, the gauge transformation is however taken to be purely four-dimensional in nature. Explicitly, the operator $L_a(x)$ in DR is defined as:

$$L_a(x) = -\partial_{\bar\mu} \frac{\delta}{\delta A_{a\bar\mu}(x)} + f^{abc} A_{b\bar\mu}(x) \frac{\delta}{\delta A_{c\bar\mu}(x)} + \sum_{X=L,R}\left(-i\,\frac{\overleftarrow{\delta}}{\delta f_X(x)}\, T^a_X f_X(x) + i \bar f_X(x)\, T^a_X \frac{\delta}{\delta \bar f_X(x)}\right). \quad (4.1)$$

The operators $\{I^k_A(x)\}$ and $\{\bar I^j_B(y)\}$ of Tables 3.1 and 3.2 are strictly four-dimensional.
The operators {I k A (x)} and {I j B (y)} of tables 3.1 and 3.2 are strictly four-dimensional. The spurious breaking of gauge invariance in DR arises because chiral fermions cannot be defined for arbitrary d. Indeed, as is well known, it is impossible to define a d-dimensional Clifford algebra
{γ µ , γ ν } = 2g µν ,(4.2)
and a chirality matrix γ 5 that commutes with all d−dimensional Lorentz generators. More specifically, there is no d−dimensional definition of γ 5 obeying all the familiar four-dimensional properties, namely i) {γ µ , γ 5 } = 0, ii) tr(γ µ γ ν γ ρ γ σ γ 5 ) = 4i µνρσ , and iii) cyclicity of the trace. Several treatments of γ 5 retaining i) have been put forward, see for example Refs. [47][48][49][50]. Unfortunately, none of them has been proven to be consistent to all orders. Here we adhere to the BMHV prescription, which has been rigorously established to all orders in perturbation theory [35][36][37][38]. In this approach the conditions ii) and iii) are preserved while i) is relaxed.
In particular, the matrix $\gamma_5$ is taken to be an intrinsically four-dimensional object, and the other $\gamma^\mu$ matrices are split into a four- and a $(d-4)$-dimensional part, denoted by $\gamma^{\bar\mu}$ and $\gamma^{\hat\mu}$, respectively:

$$\gamma^\mu = \gamma^{\bar\mu} + \gamma^{\hat\mu}\,. \quad (4.3)$$

An algebraically consistent scheme is then obtained by requiring:

$$\{\gamma_5, \gamma^{\bar\mu}\} = 0\,, \qquad [\gamma_5, \gamma^{\hat\mu}] = 0\,. \quad (4.4)$$
Eq. (4.4) makes it impossible for $\gamma_5$ to commute with all the $d$-dimensional Lorentz generators. Hence the notion of chirality is lost and, as we will see, a spurious (or genuine) violation of gauge invariance is bound to emerge. We now proceed to introduce the dimensionally regularized version of the classical action in Eq. (2.2). While the regularization of Feynman diagrams via DR requires an extension of the kinetic terms to $d$ dimensions, the treatment of the interaction terms is, to a large extent, arbitrary: the only requirement is that they must reduce to those in (2.2) for $d \to 4$. This leaves open the possibility of defining a large class of regularization schemes. For the bosonic Lagrangian $\mathcal{L}_{YM}$, a natural choice is to promote it entirely to $d$ dimensions following the recipe outlined above, i.e. replacing $\mathcal{L}_{YM} \to \mathcal{L}^{(d)}_{YM}$. While this choice is obviously not unique, it is by far the most convenient, because it preserves all the symmetries of the unregularized theory. For this reason, it will be adopted in the following. Also the fermionic contribution $\mathcal{L}_{Fermions}$ allows for several independent analytic continuations. There is however a fundamental distinction with respect to the bosonic action: because of the absence of $d$-dimensional chirality, there is no way to define a regularized fermionic action that respects chiral gauge invariance. Here we choose the following regularized fermion Lagrangian:

$$\mathcal{L}^{(d)}_{Fermions} = i\bar f \gamma^\mu \partial_\mu f - A^a_\mu\, \bar f\left(P_R \gamma^\mu P_L\, T^a_L + P_L \gamma^\mu P_R\, T^a_R\right) f = i\bar f \gamma^\mu \partial_\mu f - A^a_{\bar\mu}\, \bar f\left(P_R \gamma^{\bar\mu} P_L\, T^a_L + P_L \gamma^{\bar\mu} P_R\, T^a_R\right) f\,, \quad (4.5)$$

with $P_{L,R}$ being the $d$-dimensional versions of the operators introduced around Eq. (2.5) for the (unregularized) four-dimensional theory. Even for arbitrary $d$, $P_{L,R}$ represent hermitian projectors that can be employed to define what we will call $d$-dimensional left- and right-handed fermions, precisely as in (2.5). The crucial difference is that the fermionic kinetic term (which, consistently with DR, is $d$-dimensional) introduces $f_L \leftrightarrow f_R$ transitions, whereas the interaction is purely four-dimensional and does not mediate such regularization-dependent transitions. In conclusion, the $d$-dimensional action that replaces (2.2) is taken to be:

$$S^{(d)}[A, f_X, \bar f_X] = \int d^dx\, \left(\mathcal{L}^{(d)}_{YM} + \mathcal{L}^{(d)}_{Fermions}\right)\,. \quad (4.6)$$
Because this definition of $S^{(d)}$ is effectively part of the regularization scheme, all scheme-dependent quantities (including the counterterms derived below) depend on it, and will generally differ if another $S^{(d)}$ is adopted (see also Ref. [22]). Since the regularized Yang-Mills Lagrangian defined above is widely used in the literature, most of the scheme-dependence (within DR) stems from the fermionic Lagrangian. We will further comment on such scheme-dependence in Section 4.3.1. For now let us just stress that any alternative interaction scheme, such as those defined by $-A^a_\mu \bar f(\gamma^\mu P_L T^a_L + \gamma^\mu P_R T^a_R)f$ or $-A^a_\mu \bar f(P_R \gamma^\mu T^a_L + P_L \gamma^\mu T^a_R)f$, differs from ours because of the addition of evanescent terms.
The choice in Eq. (4.5) is motivated by minimality of the resulting gauge variation which, as we will see below, is the central quantity in computing the variation of the 1PI effective action, $\Delta_a|^{(1)}$. In practice, (4.5) minimizes the number of diagrams to be computed in order to identify the WI-restoring counterterms. Perhaps even more importantly, (4.5) preserves P, CP, the vectorial gauge group (see below) and hermiticity of the action, which allow us to perform intermediate checks during the calculations. We also emphasize that, at variance with other approaches [20,22], our regularization does not require the introduction of additional fermions. The fermion content of our theory is exactly the same as in the four-dimensional theory. This makes our results directly applicable to theories of interest, like the SM.
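The chirality-conserving nature of the interaction in Eq. (4.5) can be made explicit in the four physical dimensions, where $P_R \gamma^\mu P_L = \gamma^\mu P_L$; the check below is our illustrative addition (only the evanescent $\gamma^{\hat\mu}$, not representable here, would mix left and right).

```python
# Sketch: in d = 4 the projector sandwich P_R gamma^mu P_L equals gamma^mu P_L,
# so the interaction of Eq. (4.5) never mediates L <-> R transitions.
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]]); sy = np.array([[0, -1j], [1j, 0]]); sz = np.array([[1, 0], [0, -1]])
g = [np.block([[I2, 0*I2], [0*I2, -I2]])] + \
    [np.block([[0*I2, s], [-s, 0*I2]]) for s in (sx, sy, sz)]
g5 = 1j * g[0] @ g[1] @ g[2] @ g[3]
PL, PR = (np.eye(4) - g5) / 2, (np.eye(4) + g5) / 2

for mu in range(4):
    assert np.allclose(PR @ g[mu] @ PL, g[mu] @ PL)   # no L<->R mixing in 4d
```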
Breaking of gauge invariance in DR: general considerations
Having introduced the regularized action, the general results of Section 2.2 can be invoked to identify the WI-restoring counterterm $S_{\rm ct}|^{(1)}$. To make contact with the notation of Section 2.2 we observe that the quantity (4.6) represents the tree-level regularized action, $S^{(d)} \equiv \Gamma^{\rm reg}|^{(0)}$, whereas more generally $\Gamma^{\rm reg}|^{(n)} = \Gamma^{(d)}|^{(n)}$.
At a given perturbative order, the gauge variation of $\Gamma^{(d)}[\phi]|^{(n)}$ contains both purely 4-dimensional as well as evanescent terms. The evanescent terms are defined as those contributions that are proportional to $d-4$ components of the fields, or contain space-time derivatives in the $(d-4)$-dimensional coordinates. Such contributions to the effective action cannot describe physical processes because the latter are genuinely 4-dimensional. Physical processes are obtained by differentiating the effective action with respect to the 4-dimensional components of the background fields, assumed to carry purely 4-dimensional external momenta. For this reason evanescent contributions to $\Gamma^{(d)}$ do not have any physical significance.
To avoid any confusion we emphasize that this statement refers to the 1PI effective action, as opposed to the classical action. Evanescent terms actually appear in the classical action, are essential to the regularization procedure and in fact are at the origin of anomalies. Explicitly, performing 4-dimensional transformations of the fermionic and bosonic fields one finds that 4-dimensional gauge invariance is indeed explicitly broken by the regularized action (4.6):

$$L_a(x)\, S^{(d)} = L_a(x)\, S^{(d)}_{Fermions} = -\left[\bar f_L \gamma^{\hat\mu} T^a_L (\partial_{\hat\mu} f_R) + \bar f_R \gamma^{\hat\mu} T^a_R (\partial_{\hat\mu} f_L) + (\partial_{\hat\mu}\bar f_L)\gamma^{\hat\mu} T^a_R f_R + (\partial_{\hat\mu}\bar f_R)\gamma^{\hat\mu} T^a_L f_L\right](x) = O({\rm Eva})\,, \quad (4.7)$$

where $O({\rm Eva})$ indicates that this is an evanescent quantity, because it is controlled by terms of the type $\bar f_X \gamma^{\hat\mu} f_{Y\neq X}$ which do not exist in $d = 4$. As already anticipated earlier, the fundamental reason why $\delta_\alpha S^{(d)}[A, f_X, \bar f_X]$ is not exactly zero is that the $d$-dimensional kinetic term characterizing DR necessarily mediates $f_L \leftrightarrow f_R$ transitions. 13 More specifically, the mixed terms $f^\dagger_L f_R$, $f^\dagger_R f_L$ are not gauge invariant unless the gauge transformation is vector-like, i.e. our regularization (4.6) explicitly violates gauge invariance unless $T^a_L = T^a_R$. When $T^a_L = T^a_R$ the gauge variation in Eq. (4.7) reduces to a total derivative with respect to the $d-4$ coordinates, which can be safely ignored. Only in this case does our DR scheme not break the physical, four-dimensional gauge invariance. Note that our choice of $S^{(d)}_{Fermions}$ minimizes the breaking because the four-dimensional nature of the interaction conserves chirality: any other interaction scheme would feature additional terms on the right-hand side of Eq. (4.7).

In DR, gauge invariance is explicitly lost already at tree-level whenever $T^a_L \neq T^a_R$, i.e. whenever the theory is chiral. Any choice of $\mathcal{L}^{(d)}_{Fermions}$ would suffer from the same drawback. The dimensionally regularized classical action (4.6) is nevertheless invariant under the spurious P and CP transformation laws of Eqs. (2.12) and (2.13), as its four-dimensional sibling. 14 The associated selection rules will be heavily exploited in the calculations of the following sections. There is another sacred principle that appears to be violated by (4.6): the fermion interaction does not respect $d$-dimensional Lorentz transformations. However, this violation does not have tangible consequences, because the symmetry principle of physical relevance is the four-dimensional Lorentz group, not its $d$-dimensional extension. Indeed, Eq. (4.6) preserves four-dimensional Lorentz invariance provided all the $d-4$ index components, e.g. those of $\gamma^{\hat\mu}$, are viewed as scalars of SO(1,3). As a result, DR does not require the introduction of counterterms to enforce the Ward Identities associated with physical Lorentz invariance. With this in mind, by an abuse of terminology, we will keep referring to (4.6) as to the regularized "$d$-dimensional action". The reader should note that the situation is radically different when considering the breaking of gauge invariance, since Eq. (4.7) reveals that (4.6) does not respect even the (physically relevant) four-dimensional version of (2.7), where the gauge parameters $\alpha^a$ are assumed to depend only on the coordinates $x^{\bar\mu}$. The very existence of WIs associated to four-dimensional gauge invariance demands the addition of local counterterms to (4.6).
As anticipated earlier, evanescent contributions to the 1PI effective action are unphysical. In particular, the breaking (4.7) has no effect in the tree approximation, since this is an evanescent quantity that does not exist when → 0; said differently, the operatorial version of (4.7) does not have any tree matrix element with (four-dimensional) physical states. For example, tree matrix elements off L γ µ ∂ µ f R =f L γμ∂μf R depend on the unphysical momentum along the d − 4 directions, and similarly for all other terms. However, when going beyond the tree level in the perturbative expansion, the evanescent terms in the classical action may get multiplied by singular integrals, resulting in non-evanescent contributions to the 1PI action that spoil the Ward Identities. This is the origin of the spurious breaking terms that force us to introduce counterterms.
An explicit expression for ∆ a in DR can be derived order by order in perturbation theory. As anticipated in Eq. (2.16), the regularized 1PI effective action in the background field method can be written as:
e iΓ (d) [φ] = 1PI Dφ e iS (d) full [φ+φ] (4.8) where S (d) full ≡ S (d) + S (d) g.f. + S (d)
ghost is the sum of the d−dimensional action (4.6), an invariant gauge-fixing term 15 :
S g.f. [φ +φ] = d d x − 1 2ξ f a f a ,(4.9)
and the associated ghost action. It is a remarkable property of DR that the non-invariance of the d−dimensional action S (d) , see Eq. (4.7), represents the only source of gauge-symmetry breaking. In particular, under a gauge transformation the measure of the dimensionally-regularized path integral remains invariant because any local transformation of the field is associated to a Jacobian J of the form ln detJ = δ (d) (0) d d x f (x), with some function f (x) that depends on the transformation parameters, and in DR δ (d) (0) identically vanishes, implying that J = 1. Any potential anomaly in local field transformations in DR must therefore come from the noninvariance of the classical action. In particular, the gauge variation of the 1PI effective action reads
L a Γ (d) [φ] = 1PI Dφ e iS (d) full [φ+φ] L a S (d) Fermions [φ +φ] 1PI Dφ e iS (d) full [φ+φ]
.
(4.10)
Thus, the spurious gauge symmetry breaking terms arise from the one-particle irreducible vacuum correlation functions of the gauge variation of the classical fermionic action. This is the regularized version of the Quantum Action Principle of (2.20). According to Eq. (2.22), the WI-restoring counterterm S ct | (1) is determined by the variation of renormalized 1PI effective action. We should therefore discuss how this is connected to the variation of the regularized 1PI action in (4.10). To appreciate this it is necessary to introduce a renormalization scheme.
In general, there are two types of contributions to the regularized 1PI effective action: (finite as well as divergent) evanescent terms and (finite as well as divergent) non-evanescent terms. In formulas, we may write
Γ (d) | (1) = Γ fin | (1) + 1 Γ div | (1) + Γ fin | (1) + 1 Γ div | (1) ,(4.11)
where a bar/hat identifies the non-evanescent/evanescent contributions. 16 In this paper we adopt a popular (minimal) subtraction scheme according to which the renormalized effective action is defined by subtracting all divergent terms, both the evanescent and non-evanescent ones, so that it reduces to the sum of finite evanescent and finite non-evanescent terms analogously to the tree-level expression S (d) = Γ| (0) :
Γ| (1) ≡ lim d→4 Γ fin | (1) + Γ fin | (1) . (4.12)
The formal 4-dimensional limit is carried out by discarding Γ fin | (1) and sending all fields and momenta in Γ fin | (1) to d = 4. The gauge variation (2.20) of the renormalized effective action
hence coincides with ∆ a | (1) = L a Γ| (1) = L a Γ fin | (1) . (4.13)
This is the quantity that determines S ct | (1) . Similarly to Γ (d) | (1) , the gauge variation L a Γ (d) | (1) of the regularized action is in general the sum of evanescent terms and non-evanescent terms. In evaluating (4.10) we find two contributions:
L a Γ (d) | (1) = ∆ fin a | (1) + ∆ fin a | (1) + 1 ∆ div a | (1) , (4.14)
namely a (finite) 4-dimensional one and an (finite plus divergent) evanescent one. Crucially, the action of L a on any finite term remains finite, and similarly the action of L a on a divergent term remains divergent. Furthermore, L a cannot turn an evanescent term into a non-evanescent one. These considerations imply that 17
∆ a | (1) = ∆ fin a | (1) . (4.15)
This represents an important simplifying result for us: in a 1-loop calculation, and with the subtraction scheme illustrated above, the variation of the renormalized 1PI action is fully determined by the finite 4-dimensional part of (4.10). This is the only contribution necessary to identify the corresponding counterterm S ct | (1) .
In the next subsection, we will present an explicit one-loop calculation of (4.10). Because the focus of our paper is S ct | (1) , the result summarized in Eq. (4.15) ensures that in that calculation we can safely neglect the divergent evanescent terms in L a Γ (d) | (1) . Yet, were we interested in carrying out a 2-loop computation of S ct , an explicit expression of the 1-loop counterterms necessary to subtract the divergences from Γ (d) | (1) would also be needed.
Breaking of gauge invariance in DR: one-loop calculation
There are several important simplifications that occur in the computation of (4.10) at the one-loop order. First, we only need the expansion of S (d) full [φ +φ] up to quadratic order in the quantum fluctuationsφ. Second, since by definition the effective action (2.16) includes only one-particle irreducible diagrams, terms linear in the quantum fluctuations do not contribute and can be discarded. Furthermore, as we will see shortly, ghosts do not play any role at the order of interest. In particular, we can safely switch off both their classical backgrounds and their quantum fluctuations. As a consequence, the only relevant degrees of freedom in our analysis are the gauge and the fermionic fields, along with their quantum fluctuations.
The central player in our calculation is the fermionic action. Upon performing the shift A a µ → A a µ +à a µ , the covariant derivative becomes i / D → i / D − γμà ā µ (P L T a L + P R T a R ). Expanding up to quadratic order we obtain
S (d) Fermions [φ +φ] = d d xf i / Df (4.16) + d d xf i / Df + S (d) F + O(φ,φ 3 ),
where we defined 17) and, as promised, we neglected terms linear and cubic in the fluctuations. The first term in (4.16) represents the classical fermionic action, and can be factored out of the path integral up to quadratic order, because (4.17) is linear in the background fermionic fields, and L a Γ (d) [φ] is a dimension-four local operator that contains at most two powers of such fields. Furthermore, the linear term in e iS (d)
S (d) F ≡ d d x −Ã ā µf γμ(P L T a L + P R T a R )f −Ã ā µf γμ(P L T a L + P R T a R )f ,(4.F = 1 + iS (d) F − 1 2 [S (d)
F ] 2 + · · · does not contribute, because no 1PI diagram can be built out of it. We then conclude that the one-loop approximation of (4.10) reads
δ α Γ (d) [A, f X ,f X ] (1) = δ α S (d) (4.18) + Ω|δ α d d xf i / Df |Ω A − 1 2 Ω|T S (d) F 2 δ α d d xf i / Df |Ω A ,
where the time-ordered Green-functions are vacuum to vacuum correlators in the background gauge A a µ :
Ω|T {O(x)O(y)} |Ω A ≡ 1PI Dφ e i d d xf i / Df +iS (d) gauge [A+Ã] O(x)O(y) 1PI Dφ e i d d xf i / Df +iS (d) gauge [A+Ã]
, (4.19) and we introduced the compact notation S (d)
Gauge ≡ S (d) YM + S (d) g.f. + S (d)
ghost . The quantity δ α S (d) in (4.18) describes the classical effect (4.7) and can be ignored because finite evanescent. The second and third terms instead induce contributions that do not vanish for → 0, because divergent 1/ one-loop effects turn them into finite non-evanescent. In four dimensions the one-loop gauge variation reads
δ α Γ (4) (1) = δ α Γ (4) Gauge + δ α Γ (4) Fermions , (4.20)
where we introduced the notation
δ α Γ (d) Gauge (1) = Ω|δ α d d xf i / Df |Ω A , (4.21) δ α Γ (d) Fermions (1) = − 1 2 Ω|T S (d) F 2 δ α d d xf i / Df |Ω A . (4.22)
The term (4.21) arises from a singlef loop and only depends on the background gauge fields. At one loop the gauge bosons in these diagrams are necessarily non-dynamical, i.e. the gauge field is a purely classical background. The term (4.22) instead receives contributions from diagrams with both virtual fermions and gauge bosons, and its explicit form depends on the fermionic background. It is easy to see that at one loop ghosts can be neglected. Indeed, one-loop diagrams contributing to either (4.21) or (4.22) cannot simultaneously involve virtual ghosts and the necessary virtual fermions. We can therefore safely neglect ghosts, keeping in mind that they should not be ignored when performing calculations beyond the one-loop approximation.
Bosonic sector
The gauge variations in Eqs. (4.21) and (4.22) become significantly more compact when expressed in terms of vector and axial combinations of the gauge fields. These are defined, along with the associated generators, as
V µ = T a V A a µ , T a V = 1 2 (T a R + T a L ) , (4.23) A µ = T a A A a µ , T a A = 1 2 (T a R − T a L ).
We therefore prefer to temporarily switch notation from T L,R to T V,A . To avoid confusion we restrict this change of notation to this section. Another useful quantity is
T a = T a V + T a A γ 5 = P L T a L + P R T a R . (4.24)
For clarity, we stress that the matrices T L , T R do not live in orthogonal spaces and therefore do not commute in general. As a result neither T a V nor T a A usually form an algebra. Yet, orthogonality of the chirality projectors always implies [T a , T b ] = if abc T c . 18 To familiarize with the new notation let us begin by re-writing the first term in (4.18):
δ α S (d) = δ α dxf i / Df (4.26) = d d x α af T a A / D, γ 5 f + ∂μα af T a γμf = Eva.
It is easy to see that this expression correctly reproduces Eq. (4.7) after integration by parts. A similar quantity, with the replacement f →f , is needed to compute the two remaining contributions. We find
δ α Γ (d) Gauge (1) = d d x Ω| α af T a A / D, γ 5 f + ∂μα af T a γμf |Ω A (4.27) = −Tr α a T a A / D, γ 5 1 / D − Tr ∂μα a T a γμ 1 / D ,
where the minus sign in the second line arises due to Fermi statistics. The trace "Tr" differs from the Dirac trace "tr" because it acts on the Dirac indices as well as space-time, i.e. Tr
[O] = d d x x|tr[O]|x .
As a non-trivial consistency check of (4.27), we note that this quantity arises from a single fermion loop with gauge bosons evaluated on their classical backgrounds. In this approximation the 1PI effective action reads −idet[ / D] and its variation may alternatively be given by
−iTr[ / D −1 δ α / D]. An explicit computation gives δ α (i / D) = −[ / D, α a T a V ] − [ / D, α a T a A ]γ 5 + ∂μα a T a γμ, so that −iTr[ / D −1 δ α / D] = Tr 1 / D [ / D, α a T a V ] + 1 / D [ / D, α a T a A ]γ 5 − 1 / D ∂μα a T a γμ (4.28)
18 More explicitly, the reader might want to verify that
[T a V , T b V ] = 1 2 if abc T c V + 1 4 [T a R , T b L ] + 1 4 [T a L , T b R ] , (4.25) [T a A , T b A ] = 1 2 if abc T c V − 1 4 [T a R , T b L ] − 1 4 [T a L , T b R ] , [T a V , T b A ] = 1 2 if abc T c A − 1 4 [T a R , T b L ] + 1 4 [T a L , T b R ]. = Tr α a T a V − 1 / D α a T a V / D + α a T a A γ 5 − 1 / D α a T a A / Dγ 5 − ∂μα a T a γμ 1 / D = −Tr α a T a A / D, γ 5 1 / D + ∂μα a T a γμ 1 / D ,
where we used Tr[α a T a A γ 5 ] = 0 and the cyclicity of the trace. The above expression exactly agrees with (4.27), as it should. Incidentally, this consistency check also provides an indirect proof of the gauge invariance of the dimensionally-regularized path integral measure (see discussion above Eq. (4.10)).
The trace in Eq. (4.27) may be computed diagrammatically or via other methods. Any of these would lead to the same result because the expression has been already regularized and is unambiguous. In the following, we employ the heat kernel method. The following result was first obtained in Ref. [45] via this same method. We think that a re-derivation makes our work more complete and self-contained, and at the same time might help clarify a few non-trivial steps of the computation. Now, the second term in the second line of Eq. (4.27) is evanescent and can be safely discarded as argued around (4.15). The first term in Eq. (4.27) is however not entirely negligible. In Appendix B we show that the first trace in the second line of Eq.(4.27) may be expressed in terms of the heat kernel coefficient a 2 plus divergent evanescent terms. Neglecting all evanescent terms, using (B.5) and explicitly evaluating a 2 (x, x) via (B.9), after a tedious but straightforward computation we arrive at 19
lim d→4 L a Γ (d) Gauge (1) = i 8π 2 tr [T a A γ 5 a 2 (x, x)] ≡ −tr T a A (a 2 (x) + a / 2 (x)) ,(4.29)
with
a 2 = µναβ 16π 2 V µν V αβ + 1 3 A µν A αβ − 8 3 i (A α A β V µν + A α V µν A β + V µν A α A β ) − 32 3 A µ A ν A α A β , a / 2 = 1 16π 2 4 3 D V ν D V ν D V µ A µ + 8 3 i[A µ , D V ν V µν ] − 2 3 i[A µν , V µν ] (4.30) + 1 16π 2 −8A µ (D V ν A ν )A µ − 8 3 D V µ A ν + D V ν A µ , A µ A ν + 4 3 D V µ A µ , A ν A ν .
In the above expression we introduced the field strengths of the vector and axial components of the four-dimensional gauge fields:
V µν = ∂ µ V ν − ∂ ν V µ + i[V µ , V ν ] + i[A µ , A ν ] A µν = ∂ µ A ν − ∂ ν A µ + i[V µ , A ν ] + i[A µ , V ν ]. (4.31)
Our result (4.29) agrees with Ref. [45], where a different convention for the gauge vectors was adopted. Interestingly, note that the one-loop variation δ α Γ (4) | Gauge is completely independent from the definition of the interaction in the regularized fermionic action (4.5). Any alternative regularization of the interaction would differ by evanescent terms involvingμ-components of the vector fields, and these would not affect the four-dimensional limit of (4.27). The mixed fermion-boson loops appearing in (4.22) are instead sensitive to such definitions and below will be evaluated for our choice (4.5). The gauge variation in Equations (4.29) and (4.30) satisfies all the desired properties. First, since in our convention the generators T a V,A are hermitian, the factors of i in (4.30) guarantee that δ α Γ (4) is hermitian. Second, the vector-like component of the gauge symmetry, defined by T a A = 0 (or, equivalently, by T a L = T a R ), is manifestly conserved, consistently with what is anticipated below Eq. (4.7). Third, expressions (4.29) and (4.30) are consistent with L a Γ (4) being CP-odd and P-even, see below Eq. (2.13). In particular, a / 2 is P-odd because it contains an odd number of axial vectors, whereas a 2 is P-odd because it contains an even number of axial vectors contracted with the Levi-Civita tensor. Finally, the expression (4.29) satisfies the WZ conditions, as we will discuss below.
The operators in a 2 , a / 2 form a complete set of P-odd, Lorentz-singlet, dimension-four local functions of the vectors and their derivatives compatible with vector gauge invariance. As expected, this operator basis is in one-to-one correspondence with the one presented in Table 3.1. We can therefore equally decompose Eq. 4.29 as we did in Section 3 (see Eq. 3.1 and text around it). The corresponding coefficients C k A are collected in Section 4.3.3.
Fermionic sector
The mixed fermion-gauge contribution to δ α Γ (4) may be calculated directly from its definition (4.22) (S (d) F is given in (4.17)):
L c Γ (d) Fermions (1) (4.32) = −2 Ω|T d d y 1 Ã ā ρf γρT af y 1 f T c A γμγ 5 ∂μf x d d y 2 Ã b σf γσT b f y 2 |Ω A + Eva ,
where we used the variation (4.26), where / D, γ 5 = 2γμγ 5 ∂μ, as well as the definition (4.24). The numerical factor in front is a multiplicity factor due to the presence of two possible con-
tractions with S (d) F 2 .
The full result of our computation will be presented below. Here, for brevity, we discuss explicitly only the derivation of terms containing two background fermions and a derivative. The remaining ones are of the formf / Af , involving background fermions and a background gauge field, and can be obtained analogously.
In the evaluation of terms containing no gauge fields the average in (4.32) can be interpreted as a vacuum to vacuum transition. We find
L c Γ (d) Fermions (1) (4.33) ⊃ 2 d d k 1 (2π) d d d k 2 (2π) d e i(k 1 −k 2 )x d d q (2π) d f (k 1 )γρT a ( / q + / k 1 ) (q + k 1 ) 2 T c A (/ q +/ k 2 )γ 5 ( / q + / k 2 ) (q + k 2 ) 2 γσT a f (k 2 ) G aa gρσ − (1 − ξ) qρqσ q 2 q 2 = − 2G aa 16π 2 1 + ξ − 1 6 d d k 1 (2π) d d d k 2 (2π) d e i(k 1 −k 2 )xf (k 1 )γ 5 i ( / k 1 − / k 2 ) T a T c A T a f (k 2 ) = − 2G aa 16π 2 1 + ξ − 1 6 f γ 5 − → / ∂ + ← − / ∂ T a T c A T a f,
where we made use of the shorthand notation in Eq. (4.24). The couplings G aa arise from the gauge propagator because the kinetic term in (2.3) is non-canonical. This contribution can be expressed as in Eq. 3.1. The resulting coefficients C 12,13 , along with those associated with thef / Af terms, are collected in the next section, together with those of the purely bosonic operators.
Collecting the results
The one-loop results derived in Section 4.3.2 and 4.3.1 can all be written in the form (3.1). The corresponding coefficients C k pA are: 20 Note that these C k A do not automatically satisfy the symmetry properties of Eq. (3.2) and need to be (anti)symmetrized accordingly. We nevertheless prefer to report the results without (anti)symmetrization to avoid complicating these already unwieldy expressions.
C 0 pa = 1 16π 2 tr T p A − 4 3 T a A (4.34) C 1 pab = 1 16π 2 tr T p A 4 T a V T b V + 1 3 T a A T b A C 2 pab = 1 16π 2 tr T p A − 8 3 i [T a A , T b V ] + [T a V , T b A ] C 3 pab = 1 16π 2 tr T p A 4 3 i [T a A , T b V ]−4i [T a V , T b A ] C 4 iab = 1 16π 2 tr T p A −4i [T a V , T b A ] C 5 pab = − 1 3 C 4 pab C 6 pab = 1 3 C 4 pab C 7 pabc = 1 16π 2 tr T p A 8 3 [T b A , [T c A , T a A ]] + 8 T b A T a A T c A − 4 3 T a A , T b A T c A + 4 3 [T a V , [T b V , T c A ]] + 4 3 [T b V , [T c V , T a A ]] + 8 3 [T b A , [T c V , T a V ]] C 8 pabc = 1 16π 2 tr T p A 8 3 [T b V , [T a V , T c A ]] + 8 3 [T b V , [T c V , T a A ]] + 8 3 [T c A , [T a A , T b A ]] + 8 3 T a A , T b A , T c A + 16 3 [T c A , [T a V , T b V ]] + 8 3 [T b A , [T c V , T a V ]] − 4 3 [T a A , [T b A , T c A ]] + 4 3 [T a V , [T b V , T c A ]] + 4 3 [T a V , [T b A , T c V ]] − 4 3 [T a A , [T b V , T c V ]] C 9 pabc = 1 16π 2 trT p A +4i T a V , T b V T c V + T b A T c A + 2 3 i T a A , [T b V , T c A ] + [T b A , T c V ] − 16 3 i T b A T c A T a V + T b A T a V T c A + T a V T b A T c A C 10 pabcd = 1 16π 2 tr T p A + 8 3 i [T a A , [T c V , [T b A , T d A ]]]− 2 3 i [[T a V , T c A ], [T b A , T d A ]]− 2 3 i [[T a A , T c V ], [T b A , T d A ]] + 8i T a A [T c V , T d A ]T b A + 8 3 i [T a V , T c A ], T b A , T d A − 4 3 i [T a V , T b A ], T c A T d A + 4 3 i [T a V , [T b V , [T c V , T d A ]]]+ 8 3 i [T a A , [T c V , [T b V , T d V ]]] − 2 3 i [[T a V , T c A ], [T b V , T d V ]]− 2 3 i [[T a A , T c V ], [T b V , T d V ]] C 11 pabcd = 1 16π 2 tr T p A 4 T a V T b V + T a A T b A T c V T d V + T c A T d A + 1 3 [T a V , T b A ] + [T a A , T b V ] [T c V , T d A ] + [T c A , T d V ] − 16 3 T c A T d A (T a V T b V + T a A T b A ) + T c A (T a V T b V + T a A T b A )T d A + (T a V T b V + T a A T b A )T c A T d A + 32 3 T a A T b A T c A T d A , C 12 pLij = − 1 16π 2 5 + ξ 6 [T a L (T p R − T p L )T a L ] ij C 12 pRij = + 1 16π 2 5 + ξ 6 [T a R (T p R − T p L )T a R ] ij C 13 pLij = C 12 pLij C 13 pRij = C 12 pRij C 14 pLaij = + i 16π 2 5 + ξ 6 {[T p L , T m L T a R T m L ] − if pan T m L T n R T m L } ij C 14 pRaij = − i 16π 2 5 + ξ 6 {[T p R , T m R T a L T m R ] − if pan T m R T n L T m R } ij , where tr T p A {· · · } is short for tr[T p A {· · · }].
The results collected in Eq. (4.34) pass a number of highly non-trivial consistency checks. To start, the coefficients C 0−6 A and C 12,13,14 A have been independently computed diagrammatically for ξ = 1. The Feynman diagrams exactly reproduce the coefficients in Eq. (4.34). Furthermore, we explicitly verified that the C k A in Eq. 4.34, after being properly (anti)symmetrized, satisfy the WZ conditions in A.1. We also computed the corresponding values of the coefficients c k introduced in Eq. (3.10) (see Table 4.1) and checked that these satisfy the constraints in A.3, as they should.
Counterterms
The explicit form of the gauge variation of the effective action induced by DR at one loop, for the specific renormalization scheme of Section 4.2, is given by the sum of (4.29) and the fermionic operators discussed in Section 4.3.2, see (4.20). Its 4-dimensional limit is unambiguous, and so does the counterterm S ct | (1) = d 4 x L ct | (1) in (2.22).
We can now write explicitly the counterterm necessary to restore gauge invariance in our renormalization scheme, under the hypothesis that (2.15) is satisfied. Using the definitions in Eqs. (4.24) and (4.23), we find, up to gauge-invariant contributions:
C k A c k XY Z C 0 c 0 = − 1 3 C 1 c 1 LLL = − 1 3 , c 1 LLR = − 1 6 , c 1 RLL = 1 3 C 2 c 2 LLL = − 2i 3 , c 2 LLR = 0, c 2 RLL = 2i 3 C 3 c 3 LLL = − i 3 , c 3 LLR = 2i 3 , c 3 RLL = i 3 C 4 c 4 = i 2 C 5 c 5 = − i 6 C 6 c 6 = i 6 C 7 c 7 LLLL = c 7 LLRL = c 7 LRRR = c 7 LLRR = c 7 LRRL = c 7 LRLR = 1 6 c 7 LLLL = c 7 LLLR = c 7 LLLR = c 7 LRRR = c 7 LRLR = c 7 LLRL = − 1 6 c 7 LRLL = c 7 LLRR = 0 C 8 c 8 LLLL = c 8 LLLL = c 8 LLRL = c 8 LLLR = c 8 LLLR = c 8 LLRL = c 8 LRLR = c 8 LLRR = 1 3 c 8 LLLL = c 8 LLRL = c 8 LRLL = c 8 LRRL = c 8 LRLR = c 8 LRLR = c 8 LRRL = − 1 3 c 8 LLRR = −c 8 LRLL = 1, c 8 LRLL = c 8 LLRR = 0, c 8 LLLR = −c 8 LRRL = − 2 3 C 9 c 9 LLLL = c 9 LLRL = c 9 LLLR = c 9 LRRR = c 9 LRLR = c 9 LRRL = − i 6 c 9 LRLL = c 9 LLRR = 0 c 9 LLLL = c 9 LRRR = c 9 LRLR = c 9 LLRL = −c 9 LLLR = −c 9 LLRR = − iL ct | (1) = µναβ 16π 2 Tr 8 3 ∂ µ V ν {V α , A β } + 4iV µ V ν V α A β + 4 3 iV µ A ν A α A β (4.35) + 1 16π 2 Tr − 4 3 (D V µ A ν ) 2 + 2(D V µ A µ ) 2 − 4 3 [A µ , A ν ] 2 + 4 3 (A µ A ν ) 2 + A 2 µν − 2 16π 2 1 + ξ − 1 6 G aaf γ 5 γ µ T a A µ T a f.
We emphasize that in our notation (see Eq. (2.3)) a further rescaling A a µ → g G δ G ab A b µ is needed to canonically normalize the kinetic term for the gauge bosons.
The counterterm is non-gauge-invariant by definition, see (2.22), but respects P, CP, as well as Lorentz invariance. 21 In addition, being proportional to the axial vector component, it manifestly vanishes for T a A = 0, namely for T a L = T a R , consistently with the fact that our regularization does not break vector-like gauge symmetries.
The first, second, and third lines of (4.35) can be found independently from each other because they do not mix under gauge transformations. The counterterm in the second line, which does not contain the Levi-Civita tensor, can be identified starting from the most general Lagrangian constructed with dimension-four vector operators invariant under P and covariant under the vector transformations. This requirement identifies all operators in the second line of (4.35) plus of course, V 2 µν + A 2 µν , which is irrelevant to our analysis because invariant under the full gauge symmetry group and is in one to one correspondence to the term in Eq. (3.16). The coefficients of the operators selected via this procedure are finally derived by requiring the gauge variation cancels the part of ∆ a | (1) controlled by a / 2 . 22 This fixes all coefficients but the one of V 2 µν + A 2 µν , coherently to what was found in Section 3. There are only two independent dimension-four operators with Levi-Civita that are invariant under P and built out of combinations that are manifestly singlet of the vector transformations; these are µναβ A µν V αβ and µναβ A µν A α A β . However, using the Bianchi identity one finds that both of them are total derivatives. To arrive at (4.35) we have to relax the assumption that the building blocks be manifestly invariant, and instead simply demand that the gauge variation vanishes for T a A = 0 (plus as usual invariance under the truly conserved symmetries P and CP as well as hermiticity). This less stringent request leaves us with the three independent operators shown in the first line of (4.35) (the complex i follows from hermiticity and invariance under CP). The numerical coefficients may then be obtained demanding that their variation exactly cancel the part of the anomaly controlled by a 2 whenever (2.15) holds.
ξ k B χ k XY Z ξ 1 χ 1 LLL = i 3 − 2iχ 2 LL , χ 1 LLR = − 2i 3 , χ 1 RLL = i 3 ξ 2 χ 2 LR = 1 6 ξ 3 χ 3 LL = − 1 6 − χ 2 LL ξ 4 χ 4 = 1 6 ξ 5 χ 5 LLLR = χ 5 LRLR = i 72 ξ 6 χ 6 LLLL = 1 12 − χ 2 LL 4 , χ 6 RLLL = χ 6 LRLR = −χ 6 RLLL = − 1 24 χ 6 LLRR = χ 6 RLLR = 0, χ 6 LLLL = − 1 8 + χ 2 LL 2
Finally, the last line of (4.35) is determined requiring its variation exactly compensates the fermion-dependent part of ∆ a . The most general set of 2-fermion operators would also include a gauge-invariant combination, but that cannot play any role in restoring the WIs and has not been included in (4.35).
The result in Eq. (4.35) is a particular case of the general counterterm derived in Section 3, obtained for the choice χ 2 LL = −1/(96π 2 ). To verify this one may use the explicit values of the c k in Table 4.1 and plug them in (3.15), (3.18), obtaining the χ k in Table 4.2. Substituting these in (3.4) one reproduces exactly the bosonic terms in (4.35). Analogously, plugging the expressions of C 12 cX = C 13 cX shown in (4.34) into (3.28) and (3.30), we arrive at the last line of (4.35). This is a strong cross check of the validity or our results.
An explicit example: counterterms in the SM
As an application of the formalism developed in this paper, we derive the WI-restoring counterterms for the SM gauge group SU(3) c × SU(2) L × U(1) Y , using DR and the BMHV scheme for γ 5 . Since our calculations do not include scalar loops, the results of this section apply to the SM in the limit of vanishing Yukawa couplings. We postpone to future work the derivation of the additional counterterms such couplings would require.
Before regularization, the SM gauge bosons and their interactions with the SM fermions are described by the classical action in Eq. 2.2. The gluon and electroweak gauge fields may be collected in a 12-dimensional tensor
A a µ = G a µ for a = 1, . . . , 8 W a µ
for a = 9, 10, 11 B µ for a = 12 (5.1) and their gauge couplings in a 12-dimensional tensor given by G aa = g 2 c (for a = 1, · · · 8), G aa = g 2 (for a = 9, 10, 11), and G aa = g 2 (for a = 12). For each fermion family, f L and f R can be written as vectors with eight components, f L = (u L , d L , ν L , e L ) and f R = (u R , d R , 0, e R ), with the quarks carrying color index. The generators T a L,R are eight-dimensional matrices. For example, the hypercharge generators explicitly read
T 12 L = 1 6 1 3 1 6 1 3 − 1 2 − 1 2 , T 12 R = 2 3 1 3 − 1 3 1 3 0 −1 , (5.2)
where 1 3 is the 3 × 3 identity matrix in color space. Analogous expressions may be derived for all other generators.
Having specified these conventions, we can compute the counterterm Lagrangian in Eq. (4.35). Before presenting the result it is useful to anticipate a few features. The vector component of the SM group contains color, which forms an algebra on its own. Invariance under SU(3) c implies that the counterterm can only depend on gluons via their field strength and covariant derivatives. Yet, it is straightforward to verify that the fully bosonic part of (4.35) cannot involve gluons, the reason being that all color-invariant dimension-4 operators are automatically fully gauge-invariant. Similarly, the gluons cannot appear in the fermionic part of the counterterm, since they live in the vectorial components V µ .
Having established that Eq. (4.35) can only depend on the electroweak gauge bosons we can proceed by presenting its explicit form. To make the invariance under the vector U(1) em manifest it is convenient to express (4.35) in terms of W ± µ , Z µ and the photon A µ , defined as usual (in the canonically normalized basis) by:
W ± µ = W 1 µ ∓ iW 2 µ √ 2 , Z µ = −s w B µ + c w W 2 µ , A µ = c w B µ + s w W 2 µ ,(5.3)
with c w and s w cosine and sine of the weak angle, i.e. c w = g/ g 2 + g 2 , s w = g / g 2 + g 2 . The complete result, after an integration by parts and having canonically normalized the gauge fields, reads
L ct = g 2 16π 2 2 3 D µ W − ν D µ W +ν + 1 3c 2 w ∂ µ Z ν ∂ µ Z ν − ig 2 e 8π 2 F µν W + µ W − ν (5.4) − ig 3 48π 2 c w (−4 + 6s 2 w )D µ W − µ W + ν Z ν + (8 − 6s 2 w )D µ W − ν W + µ Z ν +(−4 + 2s 2 w )D µ W − ν Z µ W + ν − h.c. + g 4 16π 2 (W + µ W −µ ) 2 − 5 6 W + µ W +µ W − ν W −ν + 1 24c 4 w (Z µ Z µ ) 2 + (−5 + 8s 2 w ) 3c 2 w W + µ W − ν Z µ Z ν + (11 − 16s 2 w + 4s 4 w ) 6c 2 w W + µ W −µ Z ν Z ν (5.5) − g 3 16π 2 9 − t 2 w 36 √ 2 ū L γ µ W + µ d L +d L γ µ W − µ u L + 9 − t 2 w 72c w ū L γ µ Z µ u L −d L γ µ Z µ d L + 1 − t 2 w 4 √ 2 ν L γ µ W + µ e L +ē L γ µ W − µ ν L + 1 − t 2 w 8c w [ν L γ µ Z µ ν L −ē L γ µ Z µ e L ] + 2t 2 w 9 √ 2 ū R γ µ W + µ d R +d R γ µ W − µ u R − t 2 w 18c w 4ū R γ µ Z µ u R −d R γ µ Z µ d R + t 2 w 2c wē R γ µ Z µ e R .
In this expression D µ W ± ν = (∂ µ ± ieA µ )W ± ν denotes the QED-covariant derivative and t w = s w /c w . As required by invariance under U(1) em , the dependence of the counterterm on the photon field occurs only via the field strength and the covariant derivative. Interestingly, the bosonic counterterm involving the Levi-Civita tensor, shown in the first line of Eq. (4.35), exactly vanishes. This turns out to be a special property of the electroweak gauge group and can be traced back to the peculiarity of the SU(2) algebra.
Outlook
Any consistent regularization scheme induces an apparent violation of gauge invariance in non-anomalous chiral gauge theories. This violation shows up in amplitudes evaluated in perturbation theory and can be removed by the inclusion of finite counterterms. In this context, renormalization is more sophisticated than in a vector-like gauge theory. Two steps can be distinguished in the subtraction procedure. A first one is required to remove infinities. At a given order in perturbation theory, this can be done by adding a set of local divergent counterterms. At this stage, the theory delivers finite results, but the corresponding amplitudes do not preserve gauge invariance in general. Indeed, the latter is broken by finite terms that can be systematically deleted by adding local finite counterterms.
The two steps can be reiterated at each order of perturbation theory and can be implemented directly at the level of the generating functional of the 1PI Green's functions of the theory. Starting satisfies the WI of the theory. Of course, such a separation of the subtraction procedure into two moves is purely conventional. What matters is the overall combination Γ ct [φ] + S ct [φ], which can be split into the sum of a divergent term and a finite one in infinitely many ways. In practical computation, however, the two above steps appear to be very convenient and have been adopted in our approach.
The main result of this work is a general analytic expression of the finite one-loop counterterm S ct [φ] for a renormalizable chiral gauge theory including gauge bosons and fermions transforming in arbitrary representations of the gauge group. A very appealing feature of this result is that the counterterm S ct [φ] is determined for any possible consistent regulator belonging to a wide class. We only require that the chosen regularization scheme obeys the Quantum Action Principle, preserves Lorentz invariance in four dimensions, and gauge invariance when the theory is vector-like. The physical information is entirely encoded in the gauge variation L a Γ[φ]. 23 This can be expressed as a linear combination of local operators of dimension four, whose coefficients can be determined by a one-loop computation for each given regularization scheme. The counterterm S ct [φ] automatically follows from the knowledge of these coefficients.
We started by quantizing the theory with the Background Field Method and by choosing the Background Field Gauge, which guarantees the gauge invariance of the functional Γ inv [φ] at the level of background fields. In this respect, we differ from previous approaches, where the theory is quantized with the help of a traditional gauge fixing that breaks the gauge symmetry down to the rigid BRST invariance. The WI of the functional Γ inv [φ] resulting from the Background Field Gauge are easier to deal with compared to the non-linear Slavnov-Taylor identities consequences of the BRST invariance: they simply read L a Γ inv [φ] = 0.
A key ingredient of our derivation is the non-redundant parametrization of the gauge variation L a Γ[φ] at the one-loop order, which has been established independently from the adopted regularization by exploiting several properties of the theory. The Quantum Action Principle guarantees that, order by order in perturbation theory, L a Γ[φ] is a finite local polynomial in the fields and their derivatives preserving the symmetries of the regulator. Last but not least, the WZ consistency conditions greatly reduce the number of independent coefficients needed to describe L a Γ[φ]. Similar considerations restrict the form of the sought-after counterterm S ct [φ]. Its analytic expression can be fully determined in complete generality -up to gauge-invariant contributions -from the equality L a (Γ[φ] + S ct [φ]) = 0.
One of the most widely used regularization in practical computation is DR and an important part of our work has been devoted to specifying our general results to such a scheme. Within a path-integral formalism, we have computed the gauge variation of the whole one-loop renormalized functional Γ[φ] in the BMHV scheme. The result was also reproduced in several parts via a diagrammatic computation. The full set of one-loop finite counterterms in DR for the class of theories under investigation has been obtained and is compactly summarized in Eq. (4.35).
To exemplify our result, we have computed the one-loop finite counterterm for the SM in the limit of vanishing Yukawa couplings, when DR and the BMHV scheme for γ 5 are chosen. This can be seen as a first step toward the automation of one-loop computations in an even more general class of theories such as chiral gauge theories including a scalar sector, like the SM, or non-renormalizable ones, such as the SMEFT. The need for local counterterms restoring gauge invariance in SMEFT one-loop computations have already been emphasized [27][28][29] and we are confident that our approach, suitably generalized, can represent a useful tool in this context.
A General solution of the Wess-Zumino conditions
A.1 Wess-Zumino consistency conditions in terms of the C k A A.1.1 Bosonic sector P-even sector c 9 LRLR (−T acbd LRLR + T acbd RLRL + T adbc LRLR − T adbc RLRL )+ c 9 LLRR (−T acbd LLRR − T acbd LRRL + T acbd RLLR + T acbd RRLL + T adbc LLRR + T adbc LRRL − T adbc RLLR − T adbc RRLL )+ c 9 LLRL (−T acbd LLRL + T acbd RRLR + T adbc LLRL − T adbc RRLR )+ c 9 LLLR (−T acbd LLLR − T acbd LRLL + T acbd RLRR + T acbd RRRL + T adbc LLLR + T adbc LRLL − T adbc RLRR − T adbc RRRL )+ c 9 LLLL (−T acbd LLLL + T acbd RRRR + T adbc LLLL − T adbc RRRR )+ c 9 LRRR (T abcd LRRR − T abcd RLLL − T abdc LRRR + T abdc RLLL + T acdb LRRR − T acdb RLLL − T adcb LRRR + T adcb RLLL )+ c 9 LLRR (T abcd LLRR − T abcd RRLL − T abdc LLRR + T abdc RRLL + T acdb LRRL − T acdb RLLR − T adcb LRRL + T adcb RLLR )+ c 9 LRLR (T abcd LRLR − T abcd RLRL − T abdc LRLR + T abdc RLRL + T acdb LRLR − T acdb RLRL − T adcb LRLR + T adcb RLRL )+ c 9 LLLR (T abcd LLLR − T abcd RRRL − T abdc LLLR + T abdc RRRL + T acdb LRLL − T acdb RLRR − T adcb LRLL + T adcb RLRR )+ c 9 LRRL (T abcd LRRL − T abcd RLLR − T abdc LRRL + T abdc RLLR + T acdb LLRR − T acdb RRLL − T adcb LLRR + T adcb RRLL )+ c 9 LLRL (T abcd LLRL − T abcd RRLR − T abdc LLRL + T abdc RRLR + T acdb LLRL − T acdb RRLR − T adcb LLRL + T adcb RRLR )+ c 9 LRLL (T abcd LRLL − T abcd RLRR − T abdc LRLL + T abdc RLRR + T acdb LLLR − T acdb RRRL − T adcb LLLR + T adcb RRRL )+ c 9 LLLL (T abcd LLLL − T abcd RRRR − T abdc LLLL + T abdc RRRR + T acdb LLLL − T acdb RRRR − T adcb LLLL + T adcb RRRR ) ,
C 0 [pc] = 0 C 3 cbp + f cbe C 0 pe symm. in pc = 0 (C 3 pbc + f pbe C 0 ce ) + (C 4 cpb + C 5 cpb ) − (C 4 pcb + C 5 pcb ) = 0 2(C 3 pbc + f pbe C 0 ce ) + (C 2 cpb + C 2 pcb ) − 2(C 4 pcb + C 5 pcb ) − 4C 6 pcb = 0 (C 3 pbc + f pbe C 0 ce ) − (C 2 cpb + C 2 pcb ) + C 3 cpb + C 3 pcb − 2(C 4 pcb + C 5 pcb ) = 0 (C 3 pbc + f pbe C 0 ce ) + (C 3 pcb + f pce C 0 eb ) − 2(C 4 pcb + C 5 pcb ) − 2C 6 pcb = 0 C 3 cbe f pde − C 3 pde f cbe − C 2 cbe f pde + C 2 pde f cbe symm. in bd = 2C 7 [pc](bd) C 2 cbe f pde − C 2 pde f cbe symm. in bd = 2C 8 [pc](bd) C 2 cbe f pde − C 2 pde f cbe + 2C 5 c(ed) f pbe + 2C 5 p(ed) f cbe + 2C 8 (c|db|p) − 2C 8 pc(db) = 0 2(C 3 cbe − C 2 cbe )f pde − 2(C 3 pde − C 2 pde )f cbe + 2C 4 c(ed) f pbe + 2C 4 p(ed) f cbe − 4C 7 pc(db) + 2C 8 (c|d|p)b = 0 C 2 cbe f pde − C 2 pde f cbe + 2C 6 c(ed) f pbe + 2C 6 p(ed) f cbe + 2C 7 cd(pb) + 2C 7 pd(cb) − 2C 8 pc(db) = 0 (A.1) C 2 ced f pbe + C 2 cbe f pde − 2C 2 p(ed) f cbe + 2C 5 p(ed) f cbe + 2C 6 p(ed) f cbe + 2C 7 pd(cb) − 2C 8 pc(db) + C 8 pdbc = −f pce C 2 ebd (C 3 ced − C 2 ced )f pbe + (C 3 cbe − C 2 cbe )f pde − 2(C 3 p(ed) − C 2 p(ed) )f cbe + 2C 4 p(ed) f cbe − 2C 7 pc(db) + C 8 pdcb = −(C 3 ebd − C 2 ebd )f pce 2C 4 c(eb) f pde − 2C 7 pc(bd) + C 8 pbcd symm. in bd = −C 4 e(bd) f pce 2C 5 c(eb) f pde + C 8 pdbc − C 8 pcbd symm. in bd = −C 5 e(bd) f pce 2C 6 c(eb) f pde + 2C 7 pd(bc) − C 8 pcbd symm. in bd = −C 6 e(bd) f pce f phe C 7 ce(bd) + f che C 7 pe(bd) + f pbe C 8 cehd + f cbe C 8 pehd + 4C 10 cphbd + 4C 10 pchbd symm. in bd = 0 f pde C 7 ce(bh) + 2f phe C 7 cd(be) − 2f che C 7 pd(be) + 4C 10 pcdbh + f che C 8 pedb symm. in bh = −f pce C 7 ed(bh) 2f che C 7 pe(bd) + f pde C 8 cehb + f phe C 8 cdeb + C 8 cdhe f pbe + C 8 pehd f cbe − f che C 8 pdeb − f cbe C 8 pdhe + 8C 10 pchbd = −f pce C 8 edhb 4f pha C 10 cabdf − 4f cha C 10 pabdf symm. in hb, df, hb ↔ df = −f pce C 10 ehbdf P-odd sector f pde C 1 c(eb) + f cde C 1 p(eb) + C 9 pb[cd] + C 9 cb[pd] = 0 (2f pde C 1 ceb + 2C 9 pb[cd] ) symm. in bd = −f pce C 1 e(db) (4C 11 c[pbdf ] + 4C 11 p[cbdf ] + f pde C 9 ce[bf ] + f cde C 9 pe[bf ] ) antisymm. in bdf = 0 2f pf e C 9 cb[ed] − f cf e C 9 pe[bd] − f cde C 9 pe[f b] − 2f cf e C 9 pb[ed] antisymm. in df + 12C 11 p[cbdf ] + f pbe C 9 ce[f d] = −f pce C 9 eb[f d] 4f phb C 11 c[badf ] − 4f chb C 11 p[badf ] antisymm. in adfh = −f pce C 11 e[af dh] (A.2) A.1.2 Fermionic sector − iC 14 pcX − C 12 cX T p X + T p X C 12 cX − T c X C 12 pX + T c X C 13 pX − if pcb C 12 bX = 0 − iC 14 pcX − C 13 cX T p X + T p X C 13 cX + C 13 pX T c X − C 12 pX T c X − if pcb C 13 bX = 0 − if pqb C 14 cbX + if cqb C 14 pbX + C 14 cqX T p X − T p X C 14 cqX + C 14 pqX T c X − T c X C 14 pqX − if pcb C 14 bqX = 0 − iC 14 cpX − iC 14 pcX − C 12 cX T p X + T p X C 13 cX − C 12 pX T c X + T c X C 13 pX = 0 (A.3) A.2 Parametrizations for the remaining C k A C 7 abcd = c 7 LRRR (−T abcd LLLL + T abcd LRRR + T abcd RLLL − T abcd RRRR − T abdc LLLL + T abdc LRRR + T abdc RLLL − T abdc RRRR (A.4) − T acdb LLLL + T acdb LRRR + T acdb RLLL − T acdb RRRR − T adcb LLLL + T adcb LRRR + T adcb RLLL − T adcb RRRR )+ c 7 LLRR (−T abcd LLLL + T abcd LLRR + T abcd RRLL − T
, i, j being flavour indices, we explicitly display their CP-and P-transformed versions, withL(R) = R(L).
Figure 3 . 1 :
31Diagrams contributing to C 12 aX I 12 X + C 13 aX I 13 X . The chirality X is determined by the external fieldsf X and f X . A dot indicates the action of the operator L a of Eq. (2.9).
( 4 .
410) because it involves no quantum fluctuations. The second line of Eq. (4.16) consists of the sum of two terms: a non-gauge-invariant one,f i / Df , which represents the original fermionic Lagrangian with the fermionic field replaced by its quantum fluctuation and the covariant derivative containing only the background gauge field, plus a genuinely four-dimensional gaugeinvariant piece we called S (d) F . At one-loop accuracy it is sufficient to expand e iS (d) F
LLLR − 2ic 4 + 2ic 6 + ic 2 RLL − 2c 7 LRLL + c 8 LLRL − c 8 LRLL
is gauge invariant, these equations are trivially satisfied, since both sides vanish identically. If instead F [A, f X ,f X ] is not gauge invariant, Eq. (2.11) becomes a non-trivial constraint, which will play an important role in our analysis.A chiral gauge theory featuring only gauge bosons and fermions is always invariant under CP, provided CP transformations are conveniently defined
Table 3 . 1 :
31Basis of local, dimension-four operators depending on gauge bosons, fermions and their derivatives entering the decomposition of ∆ a |(1) . Lorentz indices µ, ν,. . .run from 0 to 3. Also shown
Table 3 .
32: Basis of local, dimension-four operators, depending on gauge bosons, fermions and their
derivatives relevant to build the counterterm S ct . Lorentz indices µ, ν,. . . run from 0 to 3. Also
shown are the corresponding coefficients and their transformation properties under CP and P. For the
fermion bilinears we explicitly display their CP-and P-transformed, withL(R) = R(L).
(T a 1 ...an X 1 ...Xn + T an...a 1 Xn...X 1 ) + (T a 1 ...añ X 1 ...Xn + T an...a 1trace combination
CP P
Xn...X 1
Table 3 . 4 :
34Combinations of single traces eigenstates of CP and P.
Table 4 .
41: Explicit results at one loop in DR for the coefficients c k entering the C k
A parametrization
introduced in Section 3.1, in units of 1/(16π 2 ).
Table 4 .
42: Explicit results at one loop in DR for the coefficients χ k entering the ξ j
B parametrization
introduced in Section 3.1, in units of 1/(16π 2 ).
from the regularized functional Γ reg [φ], divergencies are canceled by the local counterterm Γ ct [φ], such that Γ[φ] = Γ reg [φ] + Γ ct [φ] produces finite results. By further adding the local finite counterterm S ct [φ], we finally get the functional Γ inv [φ] = Γ[φ] + S ct [φ] that
abcd RRRR − T abdc LLLL + T abdc LLRR + T abdc RRLL − T abdc RRRR − T acdb LLLL + T acdb LRRL + T acdb RLLR − T acdb RRRR − T adcb LLLL + T adcb LRRL + T adcb RLLR − T adcb RRRR )+ c 7 LRLR (−T abcd LLLL + T abcd LRLR + T abcd RLRL − T abcd RRRR − T abdc LLLL + T abdc LRLR + T abdc RLRL − T abdcRRRR − T acdb LLLL + T acdb LRLR + T acdb RLRL − T acdb RRRR − T adcb LLLL + T adcb LRLR + T adcb RLRL − T adcb RRRR )+ c 7 LLLR (−T abcd LLLL + T abcd LLLR + T abcd RRRL − T abcd RRRR − T abdc LLLL + T abdc LLLR + T abdc RRRL − T abdc RRRR − T acdb LLLL + T acdb LRLL + T acdb RLRR − T acdb RRRR − T adcb LLLL + T adcb LRLL + T adcb RLRR − T adcb RRRR )+ c 7 LRRL (−T abcd LLLL + T abcd LRRL + T abcd RLLR − T abcd RRRR − T abdc LLLL + T abdc LRRL + T abdc RLLR − T abdc RRRR − T acdb LLLL + T acdb LLRR + T acdb RRLL − T acdb RRRR − T adcb LLLL + T adcb LLRR + T adcb RRLL − T adcb RRRR )+ c 7 LLRL (−T abcd LLLL + T abcd LLRL + T abcd RRLR − T abcd RRRR − T abdc LLLL + T abdc LLRL + T abdc RRLR − T abdc RRRR − T acdb LLLL + T acdb LLRL + T acdb RRLR − T acdb RRRR − T adcb LLLL + T adcb LLRL + T adcb RRLR − T adcb RRRR )+ c 7 LRLL (−T abcd LLLL + T abcd LRLL + T abcd RLRR − T abcd RRRR − T abdc LLLL + T abdc LRLL + T abdc RLRR − T abdc RRRR − T acdb LLLL + T acdb LLLR + T acdb RRRL − T acdb RRRR − T adcb LLLL + T adcb LLLR + T adcb RRRL − T adcb RRRR ) c 7 LLRR (−2T acbd LLLL + T acbd LLRR + T acbd LRRL + T acbd RLLR + T acbd RRLL − 2T acbd RRRR − 2T adbc LLLL + T adbc LLRR + T adbc LRRL + T adbc RLLR + T adbc RRLL − 2T adbc RRRR )+ c 7 LLLR (−2T acbd LLLL + T acbd LLLR + T acbd LRLL + T acbd RLRR + T acbd RRRL − 2T acbd RRRR − 2T adbc LLLL + T adbc LLLR + T adbc LRLL + T adbc RLRR + T adbc RRRL − 2T adbc RRRR )+ c 7 LRRR (−T acbd LLLL + T acbd LRRR + T acbd RLLL − T acbd RRRR − T adbc LLLL + T adbc LRRR + T adbc RLLL − T adbc RRRR )+ c 7 LRLR (−T acbd LLLL + T acbd LRLR + T acbd RLRL − T acbd RRRR − T adbc LLLL + T adbc LRLR + T adbc RLRL − T adbc RRRR )+ c 7 LLRL (−T acbd LLLL + T acbd LLRL + T acbd RRLR − T acbd RRRR − T adbc LLLL + T adbc LLRL + T adbc RRLR − T adbc RRRR ) , LRLR (T abcd LRLR − T abcd LRRR − T abcd RLLL + T abcd RLRL + T adcb LRLR − T adcb LRRR − T adcb RLLL + T adcb RLRL )+ c 8 LLLR (T abcd LLLR − T abcd LRRR − T abcd RLLL + T abcd RRRL + T adcb LRLL − T adcb LRRR − T adcb RLLL + T adcb RLRR )+ c 8 LRRL (T abcd LRRL − T abcd LRRR − T abcd RLLL + T abcd RLLR + T adcb LLRR − T adcb LRRR − T adcb RLLL + T adcb RRLL )+ c 8 LLRL (T abcd LLRL − T abcd LRRR − T abcd RLLL + T abcd RRLR + T adcb LLRL − T adcb LRRR − T adcb RLLL + T adcb RRLR )+ c 8 LRLL (T abcd LRLL − T abcd LRRR − T abcd RLLL + T abcd RLRR + T adcb LLLR − T adcb LRRR − T adcb RLLL + T adcb RRRL )+ c 8 LLLL (T abcd LLLL − T abcd LRRR − T abcd RLLL + T abcd RRRR + T adcb LLLL − T adcb LRRR − T adcb RLLL + T adcb RRRR )+ c 8 LRRL (T abdc LLRR − T abdc LRRR − T abdc RLLL + T abdc RRLL + T acdb LRRL − T acdb LRRR − T acdb RLLL + T acdb RLLR )+ c 8 LRLR (T abdc LRLR − T abdc LRRR − T abdc RLLL + T abdc RLRL + T acdb LRLR − T acdb LRRR − T acdb RLLL + T acdb RLRL )+ c 8 LRLL (T abdc LLLR − T abdc LRRR − T abdc RLLL + T abdc RRRL + T acdb LRLL − T acdb LRRR − T acdb RLLL + T acdb RLRR )+ c 8 LLRR (T abdc LRRL − T abdc LRRR − T abdc RLLL + T abdc RLLR + T acdb LLRR − T acdb LRRR − T acdb RLLL + T acdb RRLL )+ c 8 LLRL (T abdc LLRL − T abdc LRRR − T abdc RLLL + T abdc RRLR + T acdb LLRL − T acdb LRRR − T acdb RLLL + T acdb RRLR )+ c 8 LLLR (T abdc LRLL − T abdc LRRR − T abdc RLLL + T abdc RLRR + T acdb LLLR − T acdb LRRR − T acdb RLLL 
+ T acdb RRRL )+ c 8 LLLL (T abdc LLLL − T abdc LRRR − T abdc RLLL + T abdc RRRR + T acdb LLLL − T acdb LRRR − T acdb RLLL + T acdb RRRR )+ c 8 LRRL (T acbd LLRR − T acbd LRRR − T acbd RLLL + T acbd RRLL + T adbc LRRL − T adbc LRRR − T adbc RLLL + T adbc RLLR )+ c 8 LRLR (T acbd LRLR − T acbd LRRR − T acbd RLLL + T acbd RLRL + T adbc LRLR − T adbc LRRR − T adbc RLLL + T adbc RLRL )+ c 8 LRLL (T acbd LLLR − T acbd LRRR − T acbd RLLL + T acbd RRRL + T adbc LRLL − T adbc LRRR − T adbc RLLL + T adbc RLRR )+ c 8 LLRR (T acbd LRRL − T acbd LRRR − T acbd RLLL + T acbd RLLR + T adbc LLRR − T adbc LRRR − T adbc RLLL + T adbc RRLL )+ c 8 LLRL (T acbd LLRL − T acbd LRRR − T acbd RLLL + T acbd RRLR + T adbc LLRL − T adbc LRRR − T adbc RLLL + T adbc RRLR )+ c 8 LLLR (T acbd LRLL − T acbd LRRR − T acbd RLLL + T acbd RLRR + T adbc LLLR − T adbc LRRR − T adbc RLLL + T adbc RRRL )+ c 8 LLLL (T acbd LLLL − T acbd LRRR − T acbd RLLL + T acbd RRRR + T adbc LLLL − T adbc LRRR − T adbc RLLL + T adbc RRRR ) ,C 9 abcd = c 9 LRRR (−T acbd LRRR + T acbd RLLL + T adbc LRRR − T adbc RLLL )+ (A.6)C 8
abcd = c 8
LLRR (T abcd
LLRR − T abcd
LRRR − T abcd
RLLL + T abcd
RRLL + T adcb
LRRL − T adcb
LRRR − T adcb
RLLL + T adcb
RLLR )+
(A.5)
c 8
C 10 abcde = c 10 1 (T abdec RRLRR − T abdec RRRLR + T abedc RRLRR − T abedc RRRLR − T adbce LLLRL + T adbce LLRLL + T adbce RRLRR − T adbce RRRLR − (A.7) T adcbe LLLRL + T adcbe LLRLL + T adcbe RRLRR − T adcbe RRRLR + T bcdae RLLLL + T cbdae RLLLL + T cdeba RLRRR + T cedba RLRRR + T dbace LLLLR − T dbace RLLLL + T dcabe LLLLR − T dcabe RLLLL − T debac RLRRR + T ebacd LLLLR − T ebacd RLLLL − T ebcda LLRLL + T ebcda RLRRR − T ebcda RRLRR + T ecabd LLLLR − T ecabd RLLLL − T ecbda LLRLL + T ecbda RLRRR − T ecbda RRLRR − T edbac RLRRR )+ c 10 2 (T abdce LLLRL − T abdce LLRLL − T abdce RRLRR + T abdce RRRLR − T abecd LLRLL − T abecd RRLRR + T abecd RRRLR + T acdbe LLLRL − T acdbe LLRLL − T acdbe RRLRR + T acdbe RRRLR − T acebd LLRLL − T acebd RRLRR + T acebd RRRLR + T bdace RLLLL − T bdcae RLLLL + T dbaecA.3 Solution of Wess-Zumino conditions in the bosonic sectorA.3.1 P-even
LLRL − 2ic 4 + 2Ic 2 RLL − 4c 7 LLLR + c 8 LRLL = −2ic 4 + 4c 7 LLLR − c 8c 8
LLLL − 2ic 6 + 4c 7
LLLR + 2c 7
LLRR + 4c 7
LRLL + 2c 7
LRLR + 2c 7
LRRL
c 8
LRRL = −4ic 6 + 2ic 2
RLL − 4c 7
LLRL − c 8
LLLL − c 8
LLLR − c 8
LLRR − c 8
LRLL − c 8
LRLR
c 8
LLLL + 4ic 6 − 2ic 2
RLL + 4c 7
LLLR + 4c 7
LLRL + 4c 7
LLRR + 2c 7
LRLR = 0
c 8
LLLR
c 8
LLRR + 2ic 4 − 4c 7
LRRL + c 8
LRRL = 0
c 8
LRLL − 4c 7
LLLR + c 8
LLRL = 0
c 8
LRLR = 2c 7
LRLR
c 8
LRRL − 4c 7
LLRR + c 8
LLRR = 0
c 8
LLLR
c 8
LRLR − 4c 7
LRLR + c 8
LRLR = 0
2c 8
LLLR = −2ic 2
RLL + 4c 7
LLLR
c 8
LRRL = −8ic 6 + 4c 7
LLLR + 2c 7
LLRR + 2c 7
LRLL + 2c 7
LRLR + 2c 7
LRRL + 4c 7
LLLR
− 2c 8
LLRL − c 8
LLLR − 2c 8
LLRL − c 8
LLRR − c 8
LRLR
4c 7
LLRL = −8ic 6 + 2ic 2
RLL + 4c 7
LLLR
2c 7
LRLL = 2ic 6 − ic 2
RLL + 2c 7
LLLR
c 8
LLLR = 2ic 4 − ic 2
RLL + 2c 7
LLLR
c 8
LLRL + 4ic 6 − 2c 7
LLLR − 2c 7
LLLR + c 8
LLRL
c 8
LRLR = 2c 7
LRLR
c 8
LLRR = −2ic 4 + 2Ic 6 + 2c 7
LLRR − 2c 7
LLRR + c 8
LLRR
c 8
LLRL = −4ic 6 + ic 2
RLL + 2c 7
LLLR
c 8
LLRR = 2ic 4 − ic 2
RLL + 2c 7
LLRR
Or violated by anomalies[17,18].2 In a path-integral formulation, the breaking of gauge invariance can come from the non-invariance of either the classical action or the integration measure or both.3 These are the non-linear Slavnov-Taylor (ST) identities associated to the rigid BRST symmetry of the quantized theory, or else WI related to ordinary gauge invariance if the Background Field Method and the Background Field Gauge are adopted.
Formal invariance under CP and P is achieved if the generators of the group behave as spurions with well-defined transformation properties, as described below.
Note the conventional sign of the vector field in the covariant derivative.
No counterterm can repair the breaking of gauge invariance induced by a violation of Eq. (2.15).
We neglect a possible dependence of ∆ a | (1) on ghosts. As will be discussed in Section 4, these do not contribute to ∆ a at the one-loop level.
In particular, all counterterms are fixed by using the master equation for k ∈ P − even \ {10}. C 10 abcd + C 10 abcd (ξ) = 0 is automatically satisfied.
At the root of these transitions is that the projectors P L,R do not commute with the Jμμ generators of the d−dimensional Lorentz group, which is respected by the kinetic term (see Eq. (4.4)). Hence, Lorentz transformations mix f L , f R , as opposed to what happens in d = 4.
This is a consequence of the properties of the charge conjugation matrix C in d-dimensions (see Eqs. (2.15) and (2.16) of Ref.[22]).15 f a is the d−dimensional version of the expression in Eq. (2.18).
There is some ambiguity in this expression because via space-time integration by parts it is possible to convert a divergent evanescent operator into a finite non-evanescent one. Such ambiguity is however absent at the level of momentum-space Feynman diagrams. The expression in Eq. (4.11) is to be understood as a collection of momentum-space correlators, where no space-time integration by parts is performed.
Incidentally, (4.14) also implies that the divergent 4-dimensional terms Γ div | (1) are gauge-invariant.
Note that the first trace includes both gauge and Lorentz indices, whereas the second only the gauge indices are summed over.
The coefficients C 12,13,14 of the fermionic operators are written in terms of the T L,R generators because they only carry gauge indices. On the contrary, T V,A also involve Lorentz indices, which are fully contracted in the definitions of I 12,13,14 .
Possible gauge-invariant operators may be added to S ct . However, these would have no role in restoring the WIs. Rather, they would correspond to renormalizations of the couplings of the theory.
In deriving the variation it is useful to note that D V µ satisfies the Leibniz rule.
The dependence of S ct [φ] on the subtraction procedure is specified by L a Γ[φ] = L a (Γ reg [φ] + Γ ct [φ]).
IR-divergences at large t are cutoff by the factor e −εt in H 0 (x, y).
AcknowledgementsThe research of C.C. was supported by the Cluster of Excellence Precision Physics, Fundamental Interactions, and Structure of Matter (PRISMA + -EXC 2118/1) within the German Excellence Strategy (project ID 39083149).11 3= c 11 7 = c 11 10 = c 11 8 = c 11 2 = c 11 9 = 0 12c 1 4 + ic 9 LLLR = 0 12c 1 6 + ic 9 LRLR = 0 12c 1 5 − ic 9 LLLR = 0B Heat kernelThe heat kernel method was pioneered by Schwinger and then developed by De Witt and Seeley. For a lucid review, we refer the reader to Ref.[51]. This method allows us to write the matrix element of / D −2 in position space asThe solution H(x, y; t), which is referred to as the heat kernel, can be calculated perturbatively in the limit t → 0. We write the ansatz H(x, y; t) = H 0 (x, y; t)U (x, y; t), with the "free" solution H 0 being the solution of (B.1) with / D 2 replaced by ∂ 2 , namelyandU (x, y; t) = n=0,1,2,··· a n (x, y)(it) n . (B.3)The heat kernel coefficients a n (x, y) are smooth in the limit y → x, and satisfy the boundary condition a 0 (x, y) = 1, i.e. H(x, y; 0) = δ (d) (x − y). The parameter ε > 0 follows from the iε prescription in the Feynman propagator and should not be confused with = (4 − d)/2.In the second equality of (B.4) we merely used the definition of heat kernel, and in the third applied the derivatives. In the fourth step we took advantage of the fact that all terms with a single derivative of H 0 vanish in the limit y → x. The non-vanishing contributions come from the second derivative lim y→x ∂ µ ∂ ν H 0 = +ig µν H 0 /(2t) as well as evanescent terms proportional to DνU . These latter can be neglected, as explained around Eq. (4.15). The last equality follows from the identity γμγμ = (d − 4). Finally, the integral in dt can be performed explicitly for any order n of the perturbative expansion of U , and is proportional to Γ(d/2 − n).24There is a unique contribution that survives the d → 4 limit. This emerges from an UV divergence t → 0 that results in a factor Γ(d/2 − n) ∼ 1/(d/2 − n). The latter can exactly compensate the where the 4-dimensional limit is formally defined such that lim d→4 Eva = 0 for all evanescent operators. The 4−dimensional limit of the heat-kernel coefficients a n (x, x) can be obtained recursively. We first observe that / D 2 = D µ D µ + X, with D µ = ∂ µ + iP µ , and where P µ , X explicitly readThe field strengths of the vector and axial components were previously introduced in (4.31). Now, the heat kernel, defined in (B.1), satisfies i d dt H(x, y; t) = / D 2 x H(x, y; t). Inserting the ansatz H = H 0 U this becomes idU/dt = −i(x − y) µ D µ U/t + [D 2 + X]U . Equating order by order in t n gives the recursive relations (x − y) µ D µ x a 0 (x, y) = 0, (B.7) [n + 1 + (x − y) µ D µ x ]a n+1 (x, y) = −[D 2x + X]a n (x, y) (n > 0).The first definition, along with lim y→x a 0 (x, y) = 1 defines a 0 (x, y). We are interested in a 2 (x, x) = − 1 2 lim y→x [D 2x + X]a 1 (x, y), but to find its explicit expression we need a 1 (x, y) and its second derivative: x + X]a 0 (x, y)The first relation follows directly from the second equation in (B.7). Differentiating twice the same relation with n = 0 with respect to D we obtain the other one. Similarly, differentiating the first relation in (B.7) we derive lim y→x D α a 0 (x, y) = lim y→x D 2 a 0 (x, y) = 0. This leads us towhere P µν = ∂ µ P ν − ∂ ν P µ + i[P µ , P ν ]. In evaluating a 2 we used the linearity of the derivative, namely D µ [Xa 0 ] = [D µ X]a 0 + X[D µ a 0 ]. 
The relation $D^2_x D^2_x a_0(x, y) = \frac{1}{2} [D_\mu, D_\nu][D^\mu, D^\nu]\, a_0(x, y)$ is proven by differentiating the first equation in (B.7) four times and contracting with the metric tensor. Because $[D_\mu, D_\nu]$ is not a differential operator, the limit $y \to x$ can be performed trivially and (B.9) follows.
References

[1] R. Stora, Lagrangian field theory, contribution to Les Houches Summer School on Theoretical Physics, pp. 1-80, 21st conference in the Les Houches Summer School series.
[2] C. Becchi, A. Rouet and R. Stora, Commun. Math. Phys. 42 (1975), 127-162, doi:10.1007/BF01614158.
[3] C. Becchi, A. Rouet and R. Stora, Annals Phys. 98 (1976), 287-321, doi:10.1016/0003-4916(76)90156-1.
[4] C. Becchi, A. Rouet and R. Stora, Renormalizable Theories with Symmetry Breaking, in Field Quantization and Statistical Physics, In Memory of Bernard Jouvet, p. 3, E. Tirapegui (Ed.), D. Reidel Publishing Company, Dordrecht, Holland (1981), ISBN 90-277-1128-3.
[5] O. Piguet and S. P. Sorella, Lect. Notes Phys. Monogr. 28 (1995), 1-134, doi:10.1007/978-3-540-49192-7.
[6] E. Kraus, Annals Phys. 262 (1998), 155-259, doi:10.1006/aphy.1997.5746 [arXiv:hep-th/9709154].
[7] R. Ferrari and P. A. Grassi, Phys. Rev. D 60 (1999), 065010, doi:10.1103/PhysRevD.60.065010 [arXiv:hep-th/9807191].
[8] P. A. Grassi, T. Hurth and M. Steinhauser, Annals Phys. 288 (2001), 197-248, doi:10.1006/aphy.2001.6117 [arXiv:hep-ph/9907426].
[9] P. A. Grassi, T. Hurth and M. Steinhauser, Nucl. Phys. B 610 (2001), 215-250, doi:10.1016/S0550-3213(01)00303-0 [arXiv:hep-ph/0102005].
[10] S. Weinberg, Phys. Rev. 118 (1960), 838-849, doi:10.1103/PhysRev.118.838.
[11] W. Zimmermann, Commun. Math. Phys. 11 (1968), 1-8, doi:10.1007/BF01654298.
[12] J. H. Lowenstein, Commun. Math. Phys. 24 (1971), 1-21, doi:10.1007/BF01907030.
[13] Y. M. P. Lam, Phys. Rev. D 6 (1972), 2145-2161, doi:10.1103/PhysRevD.6.2145.
[14] T. E. Clark and J. H. Lowenstein, Nucl. Phys. B 113 (1976), 109-134, doi:10.1016/0550-3213(76)90457-0.
[15] F. Brennecke and M. Duetsch, doi:10.1007/978-3-7643-8736-5_11 [arXiv:0801.1408 [hep-th]].
[16] O. Piguet and A. Rouet, Phys. Rept. 76 (1981), 1, doi:10.1016/0370-1573(81)90066-1.
[17] S. L. Adler, "Axial vector vertex in spinor electrodynamics," Phys. Rev. 177 (1969), 2426-2438.
[18] J. S. Bell and R. Jackiw, "A PCAC puzzle: $\pi^0 \to \gamma\gamma$ in the $\sigma$ model," Nuovo Cim. A 60 (1969), 47-61.
[19] G. Bonneau, Int. J. Mod. Phys. A 5 (1990), 3831-3860, doi:10.1142/S0217751X90001641.
[20] C. P. Martin and D. Sanchez-Ruiz, Nucl. Phys. B 572 (2000), 387-477, doi:10.1016/S0550-3213(99)00453-8 [arXiv:hep-th/9905076].
[21] D. Sanchez-Ruiz, Phys. Rev. D 68 (2003), 025009, doi:10.1103/PhysRevD.68.025009 [arXiv:hep-th/0209023].
[22] H. Bélusca-Maïto, A. Ilakovac, M. Mador-Božinović and D. Stöckinger, JHEP 08 (2020) no.08, 024, doi:10.1007/JHEP08(2020)024 [arXiv:2004.14398 [hep-ph]].
[23] H. Bélusca-Maïto, A. Ilakovac, P. Kühler, M. Mador-Božinović and D. Stöckinger, JHEP 11 (2021), 159, doi:10.1007/JHEP11(2021)159 [arXiv:2109.11042 [hep-ph]].
[24] J. Wess and B. Zumino, "Consequences of anomalous Ward identities," Phys. Lett. B 37 (1971), 95-97.
[25] C. G. Bollini and J. J. Giambiagi, Nuovo Cim. B 12 (1972), 20-26, doi:10.1007/BF02895558.
[26] G. 't Hooft and M. J. G. Veltman, Nucl. Phys. B 44 (1972), 189-213, doi:10.1016/0550-3213(72)90279-9.
[27] Q. Bonnefoy, L. Di Luzio, C. Grojean, A. Paul and A. N. Rossia, JHEP 05 (2021), 153, doi:10.1007/JHEP05(2021)153 [arXiv:2012.07740 [hep-ph]].
[28] F. Feruglio, JHEP 03 (2021), 128, doi:10.1007/JHEP03(2021)128 [arXiv:2012.13989 [hep-ph]].
[29] G. Passarino, Acta Phys. Polon. B 52 (2021) no.6-7, 533, doi:10.5506/APhysPolB.52.533 [arXiv:2104.13569 [hep-ph]].
[30] H. Kluberg-Stern and J. B. Zuber, Phys. Rev. D 12 (1975), 467-481, doi:10.1103/PhysRevD.12.467.
[31] H. Kluberg-Stern and J. B. Zuber, Phys. Rev. D 12 (1975), 482-488, doi:10.1103/PhysRevD.12.482.
[32] L. F. Abbott, Acta Phys. Polon. B 13 (1982), 33, CERN-TH-3113.
[33] S. Ichinose and M. Omote, Nucl. Phys. B 203 (1982), 221-267, doi:10.1016/0550-3213(82)90029-3.
[34] D. M. Capper and A. MacLean, Nucl. Phys. B 203 (1982), 413-422, doi:10.1016/0550-3213(82)90321-2.
[35] P. Breitenlohner and D. Maison, Commun. Math. Phys. 52 (1977), 11-38, doi:10.1007/BF01609069.
[36] P. Breitenlohner and D. Maison, MPI-PAE-PTH-26-75.
[37] P. Breitenlohner and D. Maison, Commun. Math. Phys. 52 (1977), 39, doi:10.1007/BF01609070.
[38] P. Breitenlohner and D. Maison, Commun. Math. Phys. 52 (1977), 55, doi:10.1007/BF01609071.
[39] W. Grimus and M. N. Rebelo, Phys. Rept. 281 (1997), 239-308, doi:10.1016/S0370-1573(96)00030-0 [arXiv:hep-ph/9506272].
[40] H. Georgi and S. L. Glashow, Phys. Rev. D 6 (1972), 429, doi:10.1103/PhysRevD.6.429.
[41] G. Durieux, J. Gu, E. Vryonidou and C. Zhang, "Probing top-quark couplings indirectly at Higgs factories," Chin. Phys. C 42 (2018) no.12, 123107 [arXiv:1809.03520 [hep-ph]].
[42] C. Degrande, G. Durieux, F. Maltoni, K. Mimasu, E. Vryonidou and C. Zhang, "Automated one-loop computations in the SMEFT," [arXiv:2008.11743 [hep-ph]].
[43] J. G. Korner, D. Kreimer and K. Schilcher, Z. Phys. C 54 (1992), 503-512, doi:10.1007/BF01559471.
[44] H. Nicolai and P. K. Townsend, Phys. Lett. B 93 (1980), 111-115, doi:10.1016/0370-2693(80)90106-9.
[45] A. P. Balachandran, G. Marmo, V. P. Nair and C. G. Trahern, Phys. Rev. D 25 (1982), 2713, doi:10.1103/PhysRevD.25.2713.
[46] M. S. Chanowitz, M. Furman and I. Hinchliffe, Nucl. Phys. B 159 (1979), 225-243, doi:10.1016/0550-3213(79)90333-X.
[47] M. S. Chanowitz, M. Furman and I. Hinchliffe, Nucl. Phys. B 159 (1979), 225-243, doi:10.1016/0550-3213(79)90333-X.
[48] D. Kreimer, Phys. Lett. B 237 (1990), 59-62, doi:10.1016/0370-2693(90)90461-E.
[49] J. G. Korner, D. Kreimer and K. Schilcher, Z. Phys. C 54 (1992), 503-512, doi:10.1007/BF01559471.
[50] F. Jegerlehner, Eur. Phys. J. C 18 (2001), 673-679, doi:10.1007/s100520100573 [arXiv:hep-th/0005255].
[51] B. S. DeWitt, "Dynamical Theory of Groups and Fields," Gordon and Breach.
| [] |
[
"Conformal Prediction Intervals with Temporal Dependence",
"Conformal Prediction Intervals with Temporal Dependence"
] | [
"Zhen Lin zhenlin4@illinois.edu \nUniversity of Illinois at Urbana-Champaign\n\n",
"Shubhendu Trivedi shubhendu@csail.mit.edu ",
"Jimeng Sun \nUniversity of Illinois at Urbana-Champaign\n\n\nCarle's Illinois College of Medicine\nUniversity of Illinois at Urbana-Champaign\n\n",
"Jimeng@illinois Edu "
] | [
"University of Illinois at Urbana-Champaign\n",
"University of Illinois at Urbana-Champaign\n",
"Carle's Illinois College of Medicine\nUniversity of Illinois at Urbana-Champaign\n"
] | [] | Cross-sectional prediction is common in many domains such as healthcare, including forecasting tasks using electronic health records, where different patients form a cross-section. We focus on the task of constructing valid prediction intervals (PIs) in time series regression with a cross-section. A prediction interval is considered valid if it covers the true response with (a pre-specified) high probability. We first distinguish between two notions of validity in such a setting: cross-sectional and longitudinal. Cross-sectional validity is concerned with validity across the cross-section of the time series data, while longitudinal validity accounts for the temporal dimension. Coverage guarantees along both these dimensions are ideally desirable; however, we show that distribution-free longitudinal validity is theoretically impossible. Despite this limitation, we propose Conformal Prediction with Temporal Dependence (CPTD), a procedure that is able to maintain strict cross-sectional validity while improving longitudinal coverage. CPTD is post-hoc and light-weight, and can easily be used in conjunction with any prediction model as long as a calibration set is available. We focus on neural networks due to their ability to model complicated data such as diagnosis codes for time series regression, and perform extensive experimental validation to verify the efficacy of our approach. We find that CPTD outperforms baselines on a variety of datasets by improving longitudinal coverage and often providing more efficient (narrower) PIs. Our code is available at https://github.com/zlin7/CPTD. * During the initiation and pursuance of this research, the author's primary affiliation was MIT. | 10.48550/arxiv.2205.12940 | [
"https://export.arxiv.org/pdf/2205.12940v3.pdf"
] | 249,062,578 | 2205.12940 | fa1dfdac997c62692d670c3cd9169c7871ed6a01 |
Conformal Prediction Intervals with Temporal Dependence
Zhen Lin zhenlin4@illinois.edu
University of Illinois at Urbana-Champaign
Shubhendu Trivedi shubhendu@csail.mit.edu
Jimeng Sun
University of Illinois at Urbana-Champaign
Carle's Illinois College of Medicine
University of Illinois at Urbana-Champaign
jimeng@illinois.edu
Under review as submission to TMLR. Reviewed on OpenReview: https://openreview.net/forum?id=8QoxXTDcsH
Cross-sectional prediction is common in many domains such as healthcare, including forecasting tasks using electronic health records, where different patients form a cross-section. We focus on the task of constructing valid prediction intervals (PIs) in time series regression with a cross-section. A prediction interval is considered valid if it covers the true response with (a pre-specified) high probability. We first distinguish between two notions of validity in such a setting: cross-sectional and longitudinal. Cross-sectional validity is concerned with validity across the cross-section of the time series data, while longitudinal validity accounts for the temporal dimension. Coverage guarantees along both these dimensions are ideally desirable; however, we show that distribution-free longitudinal validity is theoretically impossible. Despite this limitation, we propose Conformal Prediction with Temporal Dependence (CPTD), a procedure that is able to maintain strict cross-sectional validity while improving longitudinal coverage. CPTD is post-hoc and light-weight, and can easily be used in conjunction with any prediction model as long as a calibration set is available. We focus on neural networks due to their ability to model complicated data such as diagnosis codes for time series regression, and perform extensive experimental validation to verify the efficacy of our approach. We find that CPTD outperforms baselines on a variety of datasets by improving longitudinal coverage and often providing more efficient (narrower) PIs. Our code is available at https://github.com/zlin7/CPTD. * During the initiation and pursuance of this research, the author's primary affiliation was MIT.
Introduction
Suppose we are given $N$ independent and identically distributed (i.i.d.) or exchangeable time series (TS), denoted $\{S_i\}_{i=1}^{N}$. Assume that each $S_i$ is sampled from an arbitrary distribution $P_S$ and consists of temporally-dependent observations $S_i = [Z_{i,1}, \ldots, Z_{i,t}, \ldots, Z_{i,T}]$. Each $Z_{i,t}$ is a pair $(X_{i,t}, Y_{i,t})$ comprising covariates $X_{i,t} \in \mathbb{R}^d$ and the response $Y_{i,t} \in \mathbb{R}$. Given data $\{Z_{N+1,t'}\}_{t'=1}^{t}$ until time $t$ for a new time series $S_{N+1}$, the time series regression problem amounts to predicting the response $Y_{N+1,t+1}$ at an unknown time $t+1$. An illustrative example is predicting the white blood cell count (WBCC) of a patient after she is administered an antibiotic. In such a case, $X_{i,t}$ could include covariates such as the weight or blood pressure of the $i$-th patient $t$ days after the antibiotic is given, and $Y_{i,t}$ the WBCC of this patient.

Figure 1: The figure illustrates cross-sectional validity vs. longitudinal validity, which can be seen as inter- and intra-time series coverage guarantees. The black curves are predictions by the model, and the shaded blue bands denote the PIs. Red crosses are the ground-truth $y$ not covered by PIs, while blue dots are the ground-truth $y$ which are covered. Ideally, we want a small number of red crosses (i.e., misses) that are randomly distributed across samples (cross-sectionally valid) and along the time dimension within each TS (longitudinally valid). The leftmost illustration (A) features PIs that are not valid in either sense, i.e., $Y$ is never covered, neither across time series nor across time within a single time series. (B) shows a scenario with cross-sectional validity: for any $t$, the majority of TS are covered. It is however longitudinally invalid, because the PI of some TS has zero coverage. (C, right) shows both cross-sectional and longitudinal validity.
While obtaining accurate point forecasts is often of interest, our chief concern is in quantifying the uncertainty of each prediction by constructing valid prediction intervals (PIs). More precisely, we want to obtain an interval estimate $\hat{C}_{i,t} \subseteq \mathbb{R}$ that covers $Y_{i,t}$ with a pre-selected high probability $(1 - \alpha)$. Such a $\hat{C}_{i,t}$ is generated by an interval estimator $\hat{C}_{\cdot,\cdot}$ utilizing available training data. We focus on scenarios with both cross-sectional and time series aspects, such as electronic health record data (as in Stankevičiūtė et al. (2021)), where different patients together form a cross-section. In such a setting there are two distinct notions of validity: cross-sectional validity and longitudinal validity. These notions are illustrated in Figure 1. Cross-sectional validity is a type of inter-time series coverage requirement, whereas longitudinal validity focuses on coverage along the temporal dimension in an individual time series. An effective uncertainty quantification method should ideally incorporate both notions satisfactorily.
In general, conformal prediction, owing to its distribution-free and model-agnostic nature, has gradually seen wider adoption for complicated models such as neural networks (Fisch et al., 2021;Lin et al., 2021;Zhang et al., 2021;Cortés-Ciriano & Bender, 2019;Angelopoulos et al., 2022). In the time series context, recent research effort, including Gibbs & Candes (2021); Zaffran et al. (2022); Xu & Xie (2021), has focused on obtaining PIs using variants of conformal prediction. However, these works invariably only consider the target TS, ignoring cross-sectional information along with the attendant notion of coverage. Moreover, such methods typically provide no longitudinal validity without strong distributional assumptions. The work of Stankevičiūtė et al. (2021), which also uses conformal prediction, is the only method that operates in the cross-sectional setting. However, Stankevičiūtė et al. (2021) ends up ignoring the temporal information while constructing PIs at different steps. On a different tack, popular (approximately) Bayesian methods such as Chen et al. (2014); Welling & Teh (2011);Neal (1992); Louizos & Welling (2017); Kingma & Welling (2014); Gal & Ghahramani (2016); Lakshminarayanan et al. (2017); Wilson & Izmailov (2020) could also be adapted to time series contexts (Fortunato et al., 2017;Caceres et al., 2021). However, such methods require changing the underlying regression model and typically provide no coverage guarantees.
A method to construct valid PIs that can handle both aforementioned notions of validity simultaneously, while preferably also being light-weight and post-hoc, is missing from the literature. In this paper, we fill this gap by resorting to the framework of conformal prediction. Our contributions are summarized as follows:
• We first dissect coverage guarantees in the cross-sectional time series setting to shed light on both crosssectional and longitudinal validity. We show that longitudinal coverage is impossible to achieve in a distribution-free manner.
• Despite the impossibility of distribution-free longitudinal validity, we propose a general and effective procedure (dubbed Conformal Prediction with Temporal Dependence, or CPTD for short) to incorporate temporal information in conformal prediction for time series, with a focus on improving longitudinal coverage.
• We theoretically establish the cross-sectional validity of the prediction intervals obtained by our procedure.
• Through extensive experimentation, we show that CPTD is able to maintain cross-sectional validity while improving longitudinal coverage.
Related Works
Work most related to ours falls along a few closely related axes. We summarize some such work below to contextualize our contributions.
Bayesian Uncertainty Quantification is a popular line of research in uncertainty quantification for neural networks. While the posterior computation is almost always intractable, various approximations have been proposed, including variants of Bayesian learning based on Markov Chain Monte Carlo (Chen et al. (2014); Welling & Teh (2011); Neal (1992)), variational inference methods (Louizos & Welling (2017); Kingma & Welling (2014)), and Monte-Carlo Dropout (Gal & Ghahramani (2016)). Another popular uncertainty quantification method sometimes considered approximately Bayesian is Deep Ensemble (Lakshminarayanan et al. (2017)). Bayesian methods have also been extended to RNNs (Fortunato et al. (2017); Caceres et al. (2021)). The credible intervals provided by approximate Bayesian methods, however, do not provide frequentist coverage guarantees. Moreover, modifications to the network structure (such as the introduction of many Dropout layers), which could be considered additional constraints, could hurt model performance. In contrast to such methods, CPTD comes with provable coverage guarantees. More specifically, it is cross-sectionally valid and improves longitudinal coverage. CPTD is also post-hoc, and does not interfere with the base neural network.
Quantile Prediction methods directly generate a prediction interval for each data point, instead of providing a point estimate. Such methods typically predict two scalars, representing the upper and lower bound for the PIs, with a pre-specified coverage level 1 − α. The loss for point estimation (such as MSE) is thus replaced with the "pinball"/quantile loss Steinwart & Christmann (2011); Koenker & Bassett (1978), which takes α as a parameter. Recent works applied quantile prediction to time series forecasting settings via direct prediction by an RNN Wen et al. (2017) or by combining RNN and linear splines to predict quantiles in a nonparametric manner Gasthaus et al. (2019). Such methods still do not provide provable coverage guarantees, and can suffer from the issue of quantile crossing as in the case of Wen et al. (2017).
Conformal Prediction (CP):
Pioneered by Vovk et al. (2005), conformal prediction (CP) provides methods to construct prediction intervals or regions that are guaranteed to cover the true response with a probability ≥ 1 − α, under the exchangeability assumption. Recently, CP has seen wider attention and has been heavily explored in deep learning (Lin et al. (2021); Angelopoulos et al. (2021); Stankevičiūtė et al. (2021)) due to its distribution-free nature, which makes it suitable for constructing valid PIs for complicated models like deep neural networks. It is worth noting that although most CP methods apply to point-estimators, methods like conformalized quantile regression Romano et al. (2019) can also be applied to quantile estimators like Wen et al. (2017). CPTD is a conformal prediction method, but in the cross-sectional time series setting.
Exchangeable Time Series and Cross-Sectional Validity:
The work that is most relevant to ours is Stankevičiūtė et al. (2021), which directly applies (split) conformal prediction (Vovk et al. (2005)) assuming cross-sectional exchangeability of the time series. It however studies only the multi-horizon prediction setting, completely ignoring longitudinal validity. Although (cross-sectionally) valid, Stankevičiūtė et al. (2021) leads to unbalanced coverage (i.e., some TS receive poor coverage longitudinally while others high) and inefficient PIs. To the best of our knowledge, no other works explore cross-sectional exchangeability in the context of time series forecasting.

Long and Single Time Series and Longitudinal Validity: Gibbs & Candes (2021) propose a distribution-free conformal prediction method called ACI, which uses the realized residuals as conformal scores, and adapts the $\alpha$ at each time based on the average coverage rate of recent PIs. To achieve distribution-free marginal validity, ACI has to (often) create non-informative infinitely-wide PIs, which is reasonable given the intrinsic difficulty stemming from the lack of exchangeability. Such methods also do not apply to our setting, because they typically require a very long window to estimate the error distribution for a particular time series as a burn-in period. Furthermore, they do not provide a way to leverage the rich information from the cross-section.
We now proceed to first discuss some preliminaries that will be required to describe CPTD in detail.
Preliminaries
Given a target coverage level $1 - \alpha$, we want to construct PIs that will cover the true response $Y$ for a specific time series, and at a specific time step, with probability at least $1 - \alpha$. However, we have not specified what kind of probability (and thus validity) we are referring to. In this section, we will formally define cross-sectional and longitudinal validity, both important in our setting (see Figure 1 for an illustration). However, before doing so, we will first state the basic exchangeability assumption, a staple of the conformal prediction literature.

Definition 1. (The Exchangeability Assumption; Vovk et al. (2005)) A sequence of random variables $Z_1, Z_2, \ldots, Z_n$ is exchangeable if the joint probability density distribution does not change under any permutation applied to the subscript. That is, for any permutation $\pi \in S_n$, and every measurable set $E \subseteq \mathcal{Z}^n$:

$$\mathbb{P}\{(Z_1, Z_2, \ldots, Z_n) \in E\} = \mathbb{P}\{(Z_{\pi(1)}, Z_{\pi(2)}, \ldots, Z_{\pi(n)}) \in E\}, \qquad (1)$$

where each $Z_i \in \mathcal{Z}$ (the corresponding measurable space for the random variable $Z_i$).
Note that exchangeability is a weaker assumption than the "independent and identically distributed" (i.i.d.) assumption. We extend the definition to a sequence of random time series:

Definition 2. A sequence of random time series $S_1, S_2, \ldots, S_n$ is exchangeable if, for any permutation $\pi \in S_n$ and every measurable set $E$, $\mathbb{P}\{(S_1, S_2, \ldots, S_n) \in E\} = \mathbb{P}\{(S_{\pi(1)}, S_{\pi(2)}, \ldots, S_{\pi(n)}) \in E\}$.

It should be clear that the exchangeability is "inter"-time series. Such an assumption could be reasonable in many settings of interest, for instance, collecting electronic health record time series for different patients from a hospital. Notice that Def. 2 reduces to Def. 1 when we have only one specific value of $t$. Throughout this paper, we will assume $S_1, \ldots, S_{N+1}$ are exchangeable time series.
Cross-sectional Validity
The first type of validity of PIs is what we refer to as cross-sectional validity. This validity is widely discussed in non-time series regression settings, often referred to as just "validity" or "coverage guarantee" (e.g., in Barber et al. (2020)), but is rarely discussed in the context of time series regression. Cross-sectional validity refers to the type of coverage guarantee where the probability of coverage is taken over the cross-section, i.e., across different points. The formal definition is as follows:
Definition 3. Prediction interval estimator $\hat{C}_{\cdot,\cdot}$ is $(1-\alpha)$ cross-sectionally valid if, for any $t+1$,

$$\mathbb{P}_{S_{N+1} \sim P_S}\{Y_{N+1,t+1} \in \hat{C}_{N+1,t+1}\} \geq 1 - \alpha. \qquad (2)$$
We will sometimes use an additional subscript $\alpha$ for $\hat{C}$ (i.e., $\hat{C}_\alpha$) to emphasize the target coverage level. As a reminder, the estimator $\hat{C}_{\cdot,\cdot}$ denotes the model used to generate a specific PI (a subset of $\mathbb{R}$) for each $i$ and $t$. Using an example similar to one used earlier: suppose we want to predict the WBCC of a patient after the observation of some symptoms. In the first visit, there is really no time series information that can be used. Thus, the only type of coverage guarantee can only be cross-sectional. In simple terms, we could construct a cross-sectionally valid PI and say that if we keep sampling new patients and construct the PI using the same procedure, about $\geq 1-\alpha$ of the patients' initial WBCC values will fall in the corresponding PI.

It might be worth a small digression here to note that the validity in Def. 3 is marginal. That is, the PI will cover an "average patient" with probability $\geq 1 - \alpha$. If we only consider patients from a minority group, the probability of coverage could be much lower, even if $\hat{C}$ is (cross-sectionally) valid. We direct interested readers to Barber et al. (2020) for a more thoroughgoing discussion.
Longitudinal Validity
Following on the above example, in later visits of a particular patient, we would ideally like to construct valid PIs that also consider information from previous visits. That is, we would like to use information already revealed to us for improved coverage, regardless of the patient. As might be apparent, this already moves beyond the purview of cross-sectional validity and leads to the notion of longitudinal validity:
Definition 4. Prediction interval estimator $\hat{C}_{\cdot,\cdot}$ is $(1-\alpha)$ longitudinally valid if for almost every time series $S_{N+1} \sim P_S$ there exists a $T_0$ such that:

$$t > T_0 \implies \mathbb{P}_{Y_{N+1,t} \mid S_{N+1,:t-1}}\{Y_{N+1,t} \in \hat{C}_{N+1,t}\} \geq 1 - \alpha. \qquad (3)$$
We impose a threshold $T_0$ because it should be clear that there is no temporal information that we can use for small $t$, such as for $Y_{N+1,0}$. Here, the event $A$ being true for "almost every" $S_{N+1}$ means that the probability of occurrence of $A$ is one under $P_S$. Note that the crucial difference between cross-sectional validity and longitudinal validity is that the latter is similar to a "conditional validity", indicating a coverage guarantee conditional on a specific time series. Although highly desirable, it should be clear that this is a much stronger type of coverage. In fact, we can show that distribution-free longitudinal validity is impossible to achieve without using (many) infinitely-wide PIs that contain little information. We do so by adapting results on conditional validity, such as those in Lei & Wasserman (2014); Barber et al. (2020). We formally state our impossibility claim in the following theorem:
Theorem 3.1. (Impossibility of distribution-free finite-sample longitudinal validity) For any $P_S$ with no atom, suppose $\hat{C}_\alpha$ is a $(1-\alpha)$ longitudinally valid estimator as defined in Def. 4. Then, for almost all $S_{N+1}$ that we fix,

$$\mathbb{E}[\lambda(\hat{C}_\alpha(X_{N+1,t+1}, S_{N+1,:t}))] = \infty, \qquad (4)$$

where $\lambda(\cdot)$ denotes the Lebesgue measure. The expectation is over the randomness of the calibration set.
At a high level, we will construct a distribution very close to $P_S$ except in a small region with low probability mass. We will however require the distribution of $Y$ in this new distribution to spread out on $\mathbb{R}$. Therefore, a distribution-free $\hat{C}_\alpha$ is required to be (arbitrarily) wide as we take the limit. The actual proof is deferred to the Appendix.
Remarks: Theorem 3.1 suggests that for continuous distributions, any longitudinally valid PI estimator can only give infinitely-wide (trivial) PIs all the time. This impossibility is due to the lack of exchangeability along the time dimension. In the case of cross-sectional validity, we condition on one particular time step, but still have the room to leverage the fact that we have exchangeable patient records to construct the PI (using conformal prediction; see Section 4). In the case of longitudinal validity, we condition on a particular patient. However, we cannot make any exchangeability assumption along the time dimension. Indeed, such an assumption would defeat the purpose of time series modeling, besides the fact that we cannot see the future before making a prediction for the past.
We should also note that Theorem 3.1 does not preclude the use of temporal information in a meaningful way. In fact, the main contribution of this paper is to incorporate temporal information to improve longitudinal coverage while maintaining cross-sectional validity.
Conformal Prediction with Temporal Dependence (CPTD)
Conformal Prediction
For the task of generating valid prediction intervals, conformal prediction (CP) is a basket of powerful tools with minimal assumptions on the underlying distribution. In this paper we will focus on the case of inductive conformal prediction (Papadopoulos et al., 2002; Lei et al., 2015) (now often referred to as "split conformal"), which is relatively light-weight, and thus more suitable and widely used for tasks that require training deep neural networks (Lin et al., 2021; Kivaranovic et al., 2020; Matiz & Barner, 2019). For this section only, suppose we are only interested in PIs for $Y_{\cdot,0}$. We denote $Z_i = (X_{i,0}, Y_{i,0})$ and drop the $t$ subscript in $X_{\cdot,t}$ and $Y_{\cdot,t}$. In split conformal prediction, if we want to construct a PI for a particular $Y_i$, we would first split our training data $\{Z_i\}_{i=1}^{N}$ into a proper training set and a calibration set (Papadopoulos et al., 2002). The proper training set is used to fit a (nonconformity) score function $V$. We could begin with one of the simplest such scoring functions: $V(z) = |y - \hat{y}|$, where $\hat{y}$ is predicted by a function $\hat{\mu}(\cdot)$ fitted on the proper training set.
For ease of exposition and to keep notation simpler, we will assume any estimator like $\hat{\mu}$ has already been learned, and use $\{Z_i\}_{i=1}^{N}$ to denote the calibration set only. The crucial assumption for conformal prediction is that $\{Z_i\}_{i=1}^{N+1}$ are exchangeable. We could construct the PI for $Y_{N+1}$ by having

$$\hat{C}_{\alpha,N+1}(X_{N+1}) = [\hat{\mu}(X_{N+1}) - w,\ \hat{\mu}(X_{N+1}) + w], \qquad (5)$$

where

$$w = Q\left(\frac{\lceil (1-\alpha)(N+1) \rceil}{N+1},\ \Big\{\underbrace{|y_i - \hat{y}_i|}_{v_i := V(z_i)}\Big\}_{i=1}^{N} \cup \{\infty\}\right), \qquad (6)$$

where $Q(\beta, \cdot)$ denotes the $\beta$-quantile of $\cdot$. The $\lceil\cdot\rceil$ operation ensures validity with a finite $N$ with discrete quantiles. To simplify our discussion, we will also assume that there is no tie amongst the $\{v_i\}_{i=1}^{N+1}$ with probability 1, ensuring that there is no ambiguity for $Q$. This is a reasonable assumption for regression tasks (e.g., Lei et al. (2018)).
If the exchangeability assumption holds, then we have the following coverage guarantee (Vovk et al. (2005); Barber et al. (2022)):

$$\mathbb{P}_{Z_{N+1}}\{Y_{N+1} \in \hat{C}_{\alpha,N+1}(X_{N+1})\} \geq 1 - \alpha. \qquad (7)$$
Because $Y_{N+1}$ is unknown, we typically replace $V(Z_{N+1})$ in Eq. 6 with $\infty$, which can only lead to a larger $w$ and is thus a conservative estimate that still preserves validity. The output of $V(\cdot)$ is called the nonconformity score. The absolute residual used above is one of the most popular nonconformity scores, e.g., used in Stankevičiūtė et al. (2021); Lin et al. (2021); Xu & Xie (2021); Barber et al. (2021).
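To make Eqs. (5) and (6) concrete, below is a minimal sketch of the split conformal construction in Python (this is not the released implementation; the function and array names are placeholders, and the point predictions are assumed to come from a pre-fitted $\hat{\mu}$):

```python
import numpy as np

def split_conformal_interval(y_hat_test, y_cal, y_hat_cal, alpha=0.1):
    """Sketch of Eqs. (5)-(6): a PI from absolute-residual nonconformity scores.

    y_cal, y_hat_cal: labels and point predictions on the calibration set.
    """
    n = len(y_cal)
    scores = np.abs(np.asarray(y_cal) - np.asarray(y_hat_cal))  # v_i = |y_i - y_hat_i|
    scores = np.append(scores, np.inf)    # conservative stand-in for the test score
    k = int(np.ceil((1 - alpha) * (n + 1)))
    w = np.sort(scores)[k - 1]            # the ceil((1-alpha)(N+1))-th smallest score
    return y_hat_test - w, y_hat_test + w

# e.g., lo, hi = split_conformal_interval(2.3, y_cal, y_hat_cal, alpha=0.1)
```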
Temporally-informed Nonconformity Scores (CPTD-M)
In this section we will describe a first attempt to improve longitudinal coverage, with a focus on the underlying intuition of the more general idea. Directly applying the split conformal method from above (as in Stankevičiūtė et al. (2021)) ensures cross-sectional validity, but comes with an important limitation. In a sense, when a test point is queried on a calibration set, the nonconformity scores are supposed to be uniform in ranking. It is implied that the point estimates cannot be improved, for instance, when we use the absolute residual as the nonconformity score. In our task, suppose the prediction errors for a patient always rank amongst the top 5% using the calibration set up to time $t$. Even if we started assuming that this is an "average" patient, we might revise our belief and issue wider PIs going forward, or our model may suffer consistent under-coverage for this patient. These considerations motivate the need for temporally-informed nonconformity scores. We hope to improve the nonconformity score used at time $t+1$ by incorporating temporal information thus far, making it more uniformly distributed (in ranking), so that whether $Y_{i,t}$ is covered at different $t$ is less dependent on previous cases.
We propose to compute a normalizer $\hat{m}_{N+1,t+1}$ for each $t$, and use the following nonconformity score:

$$V_{N+1,t+1}(\hat{y}, y;\ S_{N+1,:t}) = \frac{|\hat{y} - y|}{\hat{m}_{N+1,t+1}}, \qquad (8)$$
where $S_{\cdot,:t}$ denotes the first $t$ observations of $S_\cdot$. The idea is that if we expect the average magnitude of prediction errors for a patient to be high, we could divide it by a large $\hat{m}$ to bring the nonconformity scores of all patients back to a similar distribution. This is heavily inspired by a popular nonconformity score in non-time-series settings, the "normalized" residual (Lei et al., 2018; Bellotti, 2020; Papadopoulos et al., 2002), where $V(z) = \frac{|y - \hat{y}|}{\hat{\epsilon}(x)}$ and the error predictor $\hat{\epsilon}$ can be any function fit on the proper training set. We use a simple mean absolute difference normalization strategy, or MAD-normalization in short, for $\hat{m}$:

$$\hat{m}^{M}_{i,t+1} := \frac{1}{t}\sum_{t'=1}^{t} |y_{i,t'} - \hat{y}_{i,t'}|. \qquad (9)$$
The superscript M stands for MAD. One could potentially replace this simple average with an exponentially weighted moving average.
Note that one crucial difference between our $\hat{m}$ and the error-prediction normalizer $\hat{\epsilon}$ lies in the source of information used. This source for $\hat{\epsilon}$ is mostly the proper training set, which means it faces the issue of over-fitting. This is especially problematic if there is distributional shift, for example, when a hospital deploys a model trained on a larger cohort of patients from a different database, but continues to use its own patients as a small calibration set (which is more similar to any patient it might admit in the future). In our setting, however, conditioning on the point estimator, $\hat{m}$ does not depend on the proper training set at all. As we will see in the experiments (Section 5), $\hat{m}$ is more robust than an error predictor trained on the proper training set. We refer to this method as CPTD-M.
Once we have $\hat{m}_{N+1,t+1}$, the PI is constructed in the following way:

$$\hat{C}^{\text{CPTD-M}}_{N+1,t+1} := [\hat{y} - \hat{v} \cdot \hat{m}_{N+1,t+1},\ \hat{y} + \hat{v} \cdot \hat{m}_{N+1,t+1}], \qquad (10)$$

$$\hat{v} := Q\left(\frac{\lceil (1-\alpha)(N+1) \rceil}{N+1},\ \left\{\frac{|y_{i,t+1} - \hat{y}_{i,t+1}|}{\hat{m}_{i,t+1}}\right\}_{i=1}^{N} \cup \{\infty\}\right). \qquad (11)$$
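A minimal sketch of the CPTD-M construction (Eqs. (8)-(11)) follows; the residual arrays are hypothetical inputs holding the point estimator's absolute errors, with the step-$(t+1)$ calibration residuals supplied separately:

```python
import numpy as np

def cptd_m_interval(y_hat_next, past_res_cal, past_res_test, next_res_cal, alpha=0.1):
    """Sketch of the CPTD-M PI for step t+1.

    past_res_cal:  (N, t) past absolute residuals of the N calibration TS.
    past_res_test: (t,)   past absolute residuals of the test TS.
    next_res_cal:  (N,)   absolute residuals of the calibration TS at step t+1.
    """
    m_cal = past_res_cal.mean(axis=1)       # Eq. (9) for each calibration TS
    m_test = past_res_test.mean()           # Eq. (9) for the test TS
    scores = np.append(next_res_cal / m_cal, np.inf)   # normalized scores, Eq. (8)
    k = int(np.ceil((1 - alpha) * (len(next_res_cal) + 1)))
    v = np.sort(scores)[k - 1]              # Eq. (11)
    return y_hat_next - v * m_test, y_hat_next + v * m_test   # Eq. (10)
```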
Temporally-and-cross-sectionally-informed Nonconformity Scores (CPTD-R)
In the previous section, we gave an example of incorporating temporal information into the nonconformity score. However, we still have not fully leveraged the cross-sectional data in the calibration set. In fact, even in the non-time series setting, the nonconformity score $v_i$ is not constrained to depend only on $Z_i$. All we need for the conformal PI to be valid is that the nonconformity scores $\{V_i\}_{i=1}^{N+1}$ themselves (as random variables) are exchangeable when $\{Z_i\}_{i=1}^{N+1}$ are exchangeable. This means that the nonconformity score can be much more complicated and take a form such as $v_i = V(Z_i; \{Z_j\}_{j=1}^{N+1})$, depending on the un-ordered set of all $\{Z_j\}_{j=1}^{N+1}$. It might be hard to imagine why and how one could adopt a complicated version of the nonconformity score for the non-time series case, but it is natural when we also have the longitudinal dimension. Suppose we are to construct a PI for $Y_{N+1,t+1}$ using conformal prediction; the nonconformity scores can depend on both $S_{N+1,:t}$ and the unordered data $S_{:t'} := \{S_{1,:t'}, \ldots, S_{N+1,:t'}\}$ for any $t' \leq t+1$. To be precise, the nonconformity score $V_{N+1,t+1}$ could take the following general form:
$$V_{N+1,t+1}(\hat{y}, y) = f(\hat{y}, y;\ S_{N+1,:t+1},\ g(S_{1,:t+1}, \ldots, S_{N+1,:t+1})), \qquad (12)$$

where $g$ satisfies the following property:

$$\forall\ \text{permutation } \pi,\quad g(S_{\pi(1),:t+1}, \ldots, S_{\pi(N+1),:t+1}) = g(S_{1,:t+1}, \ldots, S_{N+1,:t+1}). \qquad (13)$$
Here, we propose Ratio-to-Median-Residual normalization ($\hat{C}^{\text{CPTD-R}}$) as a simple example. First off, notice that while MAD-normalization can adapt to the scale of errors, it is less robust when there is heteroskedasticity along the longitudinal dimension; $\hat{m}$ will be influenced by the noisiest step $t' < t+1$. To cope with this issue, we could base $\hat{m}_{i,t+1}$ on the ranks, which are often more robust to outliers. Specifically, at $t+1$, we first compute the (cross-sectional) median absolute errors in the past:
$$\forall s \leq t,\quad m_s := \operatorname{median}_i\{|r_{i,s}|\}\ \text{ where } r_{i,s} = y_{i,s} - \hat{y}_{i,s}. \qquad (14)$$
Then, for each $i \in [N+1]$, and each $t$, we compute the expanding mean of the median-normalized residual:

$$\overline{nr}_{i,t} := \frac{1}{t}\sum_{s=1}^{t} \frac{|r_{i,s}|}{m_s}. \qquad (15)$$
$\overline{nr}_{i,t}$ can be viewed as an estimate of the relative non-conformity of $S_i$ up to time $t$. Thus, if we have a guess of the rank for $|r_{i,t+1}|$, denoted as $\hat{q}_{i,t+1}$, we could look up the corresponding quantile as:
$$\hat{m}^{R}_{i,t+1} := Q(\hat{q}_{i,t+1},\ \{\overline{nr}_{j,t}\}_{j=1}^{N+1}). \qquad (16)$$
Following the notation in Eq. 12, the output of $g$, $g(S_{1,:t+1}, \ldots, S_{N+1,:t+1})$, is simply $Q(\cdot, \{\overline{nr}_{j,t}\}_{j=1}^{N+1})$. To obtain $\hat{q}_{i,t+1}$, we can use the following rule (an expanding mean with a prior):
$$\hat{q}_{i,t+1} \leftarrow \frac{0.5\lambda + \sum_{s=1}^{t} \hat{F}_s(|r_{i,s}|)}{t + \lambda}, \qquad (17)$$
where $\hat{F}_s$ is the empirical CDF over $\{|r_{i,s}|\}_{i=1}^{N+1}$. For example, $\hat{F}_s(\max_i\{|r_{i,s}|\}) = 1$. Here we use $\lambda = 1$, which means our "prior" rank-percentile of 0.5 has the same weight as any actual observation. The full algorithm to compute $\hat{m}^R$ is presented in Alg. 1.
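Since Alg. 1 is not reproduced here, the following is a rough sketch of the $\hat{m}^R$ computation (Eqs. (14)-(17)); the input layout is an assumption, and `np.quantile` interpolates where the paper's $Q$ is a discrete quantile:

```python
import numpy as np

def cptd_r_normalizers(abs_res, lam=1.0):
    """Sketch: m_hat^R_{i,t+1} for all N+1 series from their residuals up to t.

    abs_res: (N+1, t) array of |r_{i,s}|, with the test TS included as one row.
    """
    n_rows, t = abs_res.shape
    m_s = np.median(abs_res, axis=0)                     # Eq. (14)
    nr_bar = (abs_res / m_s).mean(axis=1)                # Eq. (15)
    ranks = np.argsort(np.argsort(abs_res, axis=0), axis=0)  # 0..N within each column
    F = (ranks + 1) / n_rows                             # F_s(|r_{i,s}|); max is 1
    q_hat = (0.5 * lam + F.sum(axis=1)) / (t + lam)      # Eq. (17)
    return np.quantile(nr_bar, q_hat)                    # Eq. (16), one value per i
```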
With all these nuts and bolts in place, we can construct the PI $\hat{C}^{\text{CPTD-R}}$ as usual by using Eq. 10 and Eq. 11. The dependence on the $(t+1)$-th observation is simply dropped to avoid plugging in hypothetical values for $y_{N+1,t+1}$ (it could still be incorporated by performing "full" or transductive conformal prediction, which is typically much more expensive). $\hat{C}^{\text{CPTD-R}}$ is somewhat complicated partially because we hope to exemplify how to let $g$ depend on the cross-section, but it also tends to produce more efficient PIs empirically (see Section 5).
We dub our general method CPTD (Conformal Prediction with Temporal Dependence); it includes both CPTD-M and CPTD-R.
Theoretical Guarantees
To formally state that CPTD provides us with cross-sectional validity, we first need a basic lemma:

Lemma 4.1. If $S_1, \ldots, S_{N+1}$ are exchangeable time series, then $\forall t$, $[V^{\text{CPTD-M}}_{1,t+1}, \ldots, V^{\text{CPTD-M}}_{N+1,t+1}]$ and $[V^{\text{CPTD-R}}_{1,t+1}, \ldots, V^{\text{CPTD-R}}_{N+1,t+1}]$ are both exchangeable sequences of random variables.

The validity for both our variants follows as a direct consequence:

Theorem 4.2. $\hat{C}^{\text{CPTD-M}}_{N+1,t+1}$ and $\hat{C}^{\text{CPTD-R}}_{N+1,t+1}$ are both $(1-\alpha)$ cross-sectionally valid.
All proofs are deferred to the Appendix.
Additional Remarks: Since the methods proposed are only cross-sectionally valid, a natural question might arise for some readers: what do we gain over directly using split conformal, by going through the above trouble? First, we expect that the average coverage rate for the least-covered time series will be higher. Imagine the scenario where the absolute errors are highly temporally dependent. In such a case, CPTD-M and CPTD-R will try to capture the average scale of the errors, so that an extreme TS will not always fall out of the PIs (as long as such extremeness is somewhat predictable). This should be viewed as improved longitudinal coverage (despite the lack of guarantee). Secondly, we might observe improved efficiency: the PIs might be narrower on average.
Finally, one could replace $\hat{m}$ in Section 4.2 and Section 4.3, making use of a different function $g$ (note that CPTD-M could also be viewed as having a constant $g$ that ignores the input) that could potentially be more suitable for a target dataset. As long as the new $g$ satisfies Eq. 13, the cross-sectional validity will still hold. We want to emphasize that CPTD should be viewed as a general proposal to leverage both longitudinal and cross-sectional information to adjust the nonconformity score, for improved longitudinal coverage. Our focus is not on optimizing $\hat{m}$. However, in our experiments, we found that both CPTD-M and CPTD-R already perform well despite the simple choice of $\hat{m}$.
Experiments
Through a set of experiments, we will first verify the validity of both CPTD-M and CPTD-R, as well as the efficiency (average width of the PIs). Then, more importantly, we will verify our assumption that ignoring the temporal dependence will lead to some TS being consistently under/over-covered, and that CPTD-M and CPTD-R improve the longitudinal coverage by appropriately adjusting the nonconformity scores with additional information.

Datasets: We test our methods and baselines on a variety of datasets, including:
• MIMIC: Electronic health records data for White Blood Cell Count (WBCC) prediction (Johnson et al. (2016); Goldberger et al. (2000); Johnson et al. (2019)). The cross-section is across different patients.
• Insurance: Health insurance claim amount prediction using data from a healthcare data analytic company in North America. The cross-section is across different patients.
• COVID19: COVID-19 case prediction in the United Kingdom (UK) (COVID). The cross-section is along different regions in the UK.
• EEG: Electroencephalography trajectory prediction after visual stimuli (UCI EEG). The cross-section comprises different trials and different subjects.
• Load: Utility (electricity) load forecasting (Hong et al. (2016)). The original data consist of one TS of hourly data for 9 years. We split the data by date, with different days treated as the cross-section.

MIMIC, COVID19 and EEG are used in Stankevičiūtė et al. (2021) and we follow the setup closely. Note that for Load, we perform a strict temporal split (test data is preceded by calibration data, which is preceded by the training data), which means the exchangeability is broken. We also include a Load-R (random) version that preserves the exchangeability by ignoring the temporal order in data splitting. A summary of each dataset is in Table 2.

Baselines: We use the following state-of-the-art baselines for PI construction in time series forecasting: conformal forecasting RNN (CFRNN; Stankevičiūtė et al. (2021)), a direct application of split-conformal prediction (the authors suggest performing Bonferroni correction to jointly cover the entire horizon of all $T$ steps; this however means that if $T$ ($H$ in Stankevičiūtė et al. (2021)) is greater than $\alpha(N+1)$, all PIs are infinitely wide (Barber et al. (2022)); the authors performed an incorrect split-conformal experiment, which is why the COVID19 dataset still has finite width in Stankevičiūtė et al. (2021)); Quantile RNN (QRNN; Wen et al. (2017)); RNN with Monte-Carlo Dropout (DP-RNN; Gal & Ghahramani (2016)); Conformalized Quantile Regression with QRNN (CQRNN; Romano et al. (2019)); and Locally Adaptive split conformal prediction (LASplit; Lei et al. (2018)), which uses a normalized absolute error as the nonconformity score (we follow the implementation in Romano et al. (2019)). Among the baselines, CQRNN and LASplit are existing conformal prediction methods extended to cross-sectional time series forecasting by us, and QRNN and DP-RNN are not conformal methods (and not valid).

Evaluation Metrics and Experiment Setup: We repeat each experiment 20 times, and report the mean and standard deviation of:
• Average coverage rate: $\frac{1}{M}\sum_{i=1}^{M} C_i$, where $C_i = \frac{1}{T}\sum_{t=1}^{T} \mathbf{1}\{Y_{i,t} \in \hat{C}_{\alpha,i,t}\}$.
• Tail coverage rate: $\frac{1}{|L|}\sum_{j: C_j \in L} C_j$, where $L := \{C_j : C_j < Q(0.1, \{C_j\}_{j=1}^{M})\}$. In other words, we look at the average coverage rate of the least-covered time series. We wish it to be as high as possible.
• Average PI width: $\frac{1}{MT}\sum_{i=1}^{M}\sum_{t=1}^{T} \mu(\hat{C}_{\alpha,i,t})$, where $\mu(\cdot)$ is the width/length of $\cdot$.

In the above, $M$ denotes the size of the test set. All metrics here consider the last 20 steps. The target is $\alpha = 0.1$ (corresponding to 90% PIs). We use the same LSTM architecture as Stankevičiūtė et al. (2021) with minor changes in the number of epochs or learning rate, except for the Insurance dataset, where we introduce additional embedding training modules to encode hundreds of discrete diagnosis and procedure codes. In the Appendix, we include results for Linear Regression instead of LSTM.
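A small sketch of how these three metrics could be computed from arrays of PIs is given below; the array shapes and names are assumptions for illustration:

```python
import numpy as np

def coverage_metrics(y, lo, hi, tail_frac=0.1):
    """y, lo, hi: (M, T) arrays of labels and PI bounds (e.g., the last 20 steps)."""
    covered = (y >= lo) & (y <= hi)            # 1{Y_{i,t} in the PI}
    C = covered.mean(axis=1)                   # longitudinal coverage C_i per TS
    avg_cov = C.mean()                         # average coverage rate
    tail = C[C < np.quantile(C, tail_frac)]    # the least-covered time series
    tail_cov = tail.mean() if len(tail) else C.min()
    avg_width = (hi - lo).mean()               # average PI width
    return avg_cov, tail_cov, avg_width
```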
Results
We first report the mean coverage rates in Table 3. We see that in terms of average coverage rate, all conformal methods are valid (90% coverage) for the exchangeable datasets. In the case of Load, since we did not enforce exchangeability during the sample splitting, there is clear (minor) under-coverage for all conformal methods. However, CPTD is still slightly better than baselines, potentially because it can leverage information from the calibration set better. The benefits of CPTD, however, are best illustrated in Tables 4 and 5. In terms of efficiency (width), CPTD-R generally provides the most efficient valid PIs. The improvement is generally not large, but still significant. We note that for MIMIC, directly predicting quantiles (QRNN and CQRNN) provides more efficient PIs, which might be due to an asymmetric distribution of the prediction errors by the point-estimator. Designing temporally adjusted nonconformity scores for quantile regression (potentially based on Romano et al. (2019)) will be an interesting direction for future research. Finally, if we examine the tail coverage in Table 5, we see that both CPTD-R and CPTD-M consistently outperform baselines (with the mean width rescaled to the same). Table 5 suggests improved longitudinal coverage, which is the major focus of our paper. This can also be observed in Figure 2. The results suggest CPTD significantly improves longitudinal coverage with the temporally-adjusted nonconformity scores.
Conclusions
This paper introduces CPTD, a simple algorithm for constructing prediction intervals for the task of time series forecasting with a cross-section. CPTD is the first algorithm that can improve longitudinal coverage while maintaining a strict cross-sectional coverage guarantee. As a conformal prediction method, CPTD derives its cross-sectional validity from the empirical distribution of nonconformity scores on the calibration set.
To construct prediction intervals for $Y_{N+1,t+1}$, we propose CPTD-M, which leverages only the temporal information for the time series of interest ($S_{N+1}$), and CPTD-R, which exemplifies how to use the entire calibration set to improve temporal coverage. Our experiments confirm that both CPTD-M and CPTD-R significantly outperform state-of-the-art baselines by a wide margin. Moreover, CPTD could easily be applied to any model and data distribution. We hope CPTD will inspire future research in uncertainty quantification for time series forecasting with a cross-section.
Margaux Zaffran, Aymeric Dieuleveut, Olivier Féron, Yannig Goude, and Julie Josse. Adaptive conformal predictions for time series, 2022. URL https://arxiv.org/abs/2202.07282.
Jin Zhang, Ulf Norinder, and Fredrik Svensson. Deep learning-based conformal prediction of toxicity. Journal of chemical information and modeling, 2021.
A Proofs
A.1 Proof for Theorem 3.1
A.1.1 Lemmas
We first present an established result on the impossibility of (non-degenerate) finite-sample distribution-free conditional coverage guarantee from Lei & Wasserman (2014):
Lemma A.1. Let $\mathbb{P}$ be the joint distribution of two random variables $(X, Y)$. Suppose $\hat{C}_N$ is conditionally valid, as defined by the following:

$$\mathbb{P}\{Y_{N+1} \in C_N(x) \mid X_{N+1} = x\} \geq 1 - \alpha\ \text{ for all } \mathbb{P} \text{ and almost all } x. \qquad (18)$$

Then, for any $\mathbb{P}$ and any $x_0$ that is not an atom of $\mathbb{P}$:

$$\mathbb{P}\Big\{\lim_{\delta \to 0}\ \operatorname{ess\,sup}_{\|x_0 - x\| \leq \delta} L(C_N(x)) = \infty\Big\} = 1. \qquad (19)$$

The subscript $N$ means $\hat{C}_N$ depends on a calibration set of size $N$, and $L(\cdot)$ is the Lebesgue measure.
A slightly stronger statement of Lemma A.1 is given by:

Lemma A.2. Let $\mathbb{P}$ be the joint distribution of two random variables $(X, Y)$. Let $\mathbb{P}_U$ be the distribution of an additional random variable $U$ that $\hat{C}$ could use. Denote the joint distribution of $(X, Y, U)$ as $\mathbb{P}^+ = \mathbb{P} \times \mathbb{P}_U$, and $X^+ := (X, U)$. Suppose $\hat{C}_N$ is conditionally valid with respect to the original $\mathbb{P}$, as defined by the following:

$$\mathbb{P}\{Y_{N+1} \in C_N(X^+_{N+1}) \mid X_{N+1} = x\} \geq 1 - \alpha\ \text{ for all } \mathbb{P}^+ \text{ and almost all } x. \qquad (20)$$

Then, for any $\mathbb{P}^+$ and any $x_0$ that is not an atom of $\mathbb{P}$:

$$\mathbb{P}\Big\{\lim_{\delta \to 0}\ \operatorname{ess\,sup}_{\|x_0 - x\| \leq \delta} L(C_N(x, U_{N+1})) = \infty\Big\} = 1. \qquad (21)$$
Below is a proof mostly following Lei & Wasserman (2014). First, we define $\epsilon_N$ and the total variation distance $TV$ as in the proof of Lemma 1 in Lei & Wasserman (2014). For any pair of distributions $\mathbb{P}$ and $\mathbb{Q}$, we define the total variation distance between them as

$$TV(\mathbb{P}, \mathbb{Q}) = \sup_A |\mathbb{P}(A) - \mathbb{Q}(A)|. \qquad (22)$$

For any $\epsilon > 0$, define $\epsilon_N = 2\big(1 - (1 - \frac{\epsilon^2}{8})^{1/N}\big)$, and we will have (Lei & Wasserman (2014))

$$TV(\mathbb{P}, \mathbb{Q}) \leq \epsilon_N \implies TV(\mathbb{P}^N, \mathbb{Q}^N) \leq \epsilon. \qquad (23)$$

Fix $\epsilon > 0$. Let $x_0$ be a non-atom and choose $\delta$ such that $\mathbb{P}_X\{B(x_0, \delta)\} < \epsilon_N$. Fix $B > 0$ and let $B_0 = \frac{B}{2(1-\alpha)}$. Given $\mathbb{P}^+$, define another distribution $\mathbb{Q}^+$ by

$$\mathbb{Q}^+(A) = \mathbb{P}^+(A \cap S^c) + \mathbb{U}(A \cap S), \qquad (24)$$

where $S = \{(x, y, u) : x \in B(x_0, \delta)\}$, and $\mathbb{U}$ has total mass $\mathbb{P}^+(S)$ and is uniform on $\{(x, y, u) : x \in B(x_0, \delta),\ |y| < B_0,\ u \in B(0, C)\}$. (We will see that the only thing that matters is that $Y$ is uniform in this small region.) Note that $TV(\mathbb{P}^+, \mathbb{Q}^+) \leq \epsilon_N$, which means $TV(\mathbb{P}^{+N}, \mathbb{Q}^{+N}) \leq \epsilon$.

For all $x \in B(x_0, \delta)$ and all $u \in B(0, C)$, $\int_{C(x,u)} q^+(y|x)\,dy \geq 1 - \alpha$ implies $\operatorname{leb}(C(x, u)) \geq 2(1-\alpha)B_0 = B$. Therefore, $\mathbb{Q}^{+N}\{\operatorname{ess\,sup}_{x \in B(x_0, \delta)} \operatorname{leb}(C(x, U)) \geq B\} = 1$. Therefore,

$$\mathbb{P}^{+N}\Big\{\operatorname{ess\,sup}_{x \in B(x_0, \delta)} \operatorname{leb}(C(x, U)) \geq B\Big\} \geq \mathbb{Q}^{+N}\Big\{\operatorname{ess\,sup}_{x \in B(x_0, \delta)} \operatorname{leb}(C(x, U)) \geq B\Big\} - \epsilon = 1 - \epsilon. \qquad (25)$$

Lemma A.2 follows as $\epsilon$ and $B$ are arbitrary.
A.1.2 Main Proofs
Proof. Now, for any $t$, we could view all previous observations (including $Y$) as the new "$X$", and $X_{i,t}$ as the "$U$". That is:

$$X_i := [Z_{i,0}, Z_{i,1}, \ldots, Z_{i,t-1}], \qquad (26)$$
$$U_i := X_{i,t}, \qquad (27)$$
$$X^+_i := [X_i, U_i] \sim \mathbb{P}^+. \qquad (28)$$

Then, the question is whether we could use the new $X^+_{N+1}$ and $\{(X^+_j, Y_{j,t})\}_{j=1}^{N}$ to create a prediction interval $C$ such that

$$\mathbb{P}_{Y_{N+1,t} \mid X_{N+1}}\{Y_{N+1,t} \in \hat{C}\} \geq 1 - \alpha. \qquad (29)$$

Lemma A.2 tells us that if $\hat{C}$ satisfies such a coverage guarantee (note the conditioning on $X_{N+1}$), we have, for all $x_0$,

$$\mathbb{P}\Big\{\lim_{\delta \to 0}\ \operatorname{ess\,sup}_{\|x_0 - x\| \leq \delta} \operatorname{leb}(\hat{C}(x, U_{N+1})) = \infty\Big\} = 1 \qquad (30)$$
$$\implies \mathbb{E}_{X^+_{N+1}}[\operatorname{leb}(\hat{C}(X^+_{N+1}))] = \infty. \qquad (31)$$
A.2 Proof for Lemma 4.1
Proof. We will show the general case that any nonconformity scores $V_{1,t}, \ldots, V_{N+1,t}$ generated by Eq. 12 and Eq. 13 are exchangeable if $S_1, \ldots, S_{N+1}$ are, because $V^{\text{CPTD-M}}$ and $V^{\text{CPTD-R}}$ are special cases of nonconformity scores generated this way.

First of all, if we denote $V_{:,t}$ as a vector, it is clearly a row-permutation-equivariant function (denoted as $G$) of the matrix $S_{:,:t}$, whose $i$-th row (out of $N+1$) is $[Z_{i,1}, \ldots, Z_{i,t}]$. Formally, for any permutation $\pi$ of $N+1$ elements, $V_{\pi,t} = G(S_{\pi,:t})$. For each measurable subset $E$ of $\mathcal{V}^{N+1}$, define $E_S \subset \mathcal{Z}^{N+1}$ as

$$E_S := \{s_{:,:t} : G(s_{:,:t}) \in E\}. \qquad (32)$$

Because $S_1, \ldots, S_{N+1}$ are exchangeable, for any permutation $\pi$, we have

$$\mathbb{P}\{V_{:,t} \in E\} = \mathbb{P}\{S_{:,:t} \in E_S\} = \mathbb{P}\{S_{\pi,:t} \in E_S\} = \mathbb{P}\{V_{\pi,t} \in E\}. \qquad (33)$$
A.3 Proof for Theorem 4.2
Proof. This follows from Lemma 4.1 immediately; we provide a brief sketch here. With Lemma 4.1, we know that the random variable

$$o_{N+1} := \frac{|\{i \in [N] : v_{i,t+1} \leq v_{N+1,t+1}\}| + 1}{N+1} \qquad (34)$$

follows a uniform distribution on $\{\frac{i}{N+1}\}_{i=1}^{N+1}$. (Again we assume the probability of having a tie is zero, which means there is always a strict ordering.) Since

$$o_{N+1} \leq \frac{\lceil(1-\alpha)(N+1)\rceil}{N+1} \implies v_{N+1,t+1} \leq Q\Big(\frac{\lceil(1-\alpha)(N+1)\rceil}{N+1},\ \{v_{i,t+1}\}_{i=1}^{N+1}\Big) \implies Y_{N+1,t+1} \in \hat{C}_{N+1,t+1}, \qquad (35)$$

we have

$$\mathbb{P}\{Y_{N+1,t+1} \in \hat{C}_{N+1,t+1}\} \geq \mathbb{P}\Big\{o_{N+1} \leq \frac{\lceil(1-\alpha)(N+1)\rceil}{N+1}\Big\} \geq 1 - \alpha. \qquad (36)$$
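As a quick sanity check of the argument above, the following simulation (with arbitrary sizes) verifies empirically that the rank statistic of Eq. (34) is uniform when the scores are exchangeable; i.i.d. scores are used as a special case of exchangeability:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 19, 200_000
counts = np.zeros(N + 1)
for _ in range(trials):
    v = rng.standard_normal(N + 1)       # exchangeable (i.i.d.) scores
    rank = np.sum(v[:N] <= v[N]) + 1     # numerator of Eq. (34)
    counts[rank - 1] += 1
print(counts / trials)                   # each entry is roughly 1/(N+1) = 0.05
```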
B Why does normalization help?
In this section, we will consider some simple scenarios and show why CPTD-M and CPTD-R can improve longitudinal coverage. While these scenarios are over-simplifications of the reality, we use them mainly to illustrate the main ideas behind CPTD more rigorously.
Suppose the error is both time-dependent and heteroskedastic in the cross-section. Formally, suppose the error $R_{i,t} := |Y_{i,t} - \hat{\mu}(X_{i,t}; S_{i,:t-1})|$ is a random variable that factors out into a product $R_{i,t} = S_i E_t$, where the marginal distribution of $E_t$ is $P_{E_t}$. We impose a mild assumption that $\mathbb{P}\{E_t = 0\} = 0$ for simplicity of discussion. That is, there is an intrinsic "error scale" for each time series. Note this assumption is not overly simplistic either: while the assumption in many related works (e.g., Barber et al. (2022); Xu & Xie (2021)) is that of mild distributional shift, we allow an arbitrary distribution for $P_{E_t}$.
The case for CPTD-M: Denote the percentile of $\frac{|y_{i,T} - \hat{y}_{i,T}|}{\hat{m}^M_{i,T}}$ among $\big\{\frac{|y_{i,T} - \hat{y}_{i,T}|}{\hat{m}^M_{i,T}}\big\}_{i=1}^{N+1}$ as $\hat{q}^M_{i,T}$. We will examine the following probability:

$$\mathbb{P}\{Y_{N+1,T} \in \hat{C}^{\text{CPTD-M}}_{\alpha,N+1} \mid F_S(S_{N+1}) \geq \beta\} \qquad (37)$$
$$= \mathbb{P}\Big\{\hat{q}^M_{N+1,T} \leq \frac{\lceil(1-\alpha)(N+1)\rceil}{N+1}\ \Big|\ F_S(S_{N+1}) \geq \beta\Big\} \qquad (38)$$
for a large $\beta$ such as 0.99. This can be thought of as a worst-case coverage rate. For $T > 1$, we define a new random variable $\tilde{E}_T = E_T / \big(\frac{1}{T-1}\sum_{t=1}^{T-1} E_t\big)$ (which is defined with probability one). It is clear that $\frac{R_{i,T}}{\hat{m}^M_{i,T}} \sim P_{\tilde{E}_T}$ for all $i$, as the scale $S_i$ cancels out. As a result,

$$\mathbb{P}\Big\{\hat{q}^{M}_{N+1,T} \leq \frac{\lceil(1-\alpha)(N+1)\rceil}{N+1}\ \Big|\ F_S(S_{N+1}) \geq \beta\Big\} = \frac{\lceil(1-\alpha)(N+1)\rceil}{N+1}$$

for any $\beta$.
The case for CPTD-R: Now, suppose we also have heteroskedasticity along the longitudinal dimension. For simplicity of discussion, we assume $E_t = C_t\, e^{\mathcal{N}(0,1)}$, where $C_t$ is a non-random scalar for each $t$. Denote the percentile of the nonconformity scores for CPTD-R as $\hat{q}^R_{i,T}$, and that for the basic split conformal as $\hat{q}_{i,T}$. While the previous discussion still holds, consider a slightly different quantity than the above:

$$\mathbb{P}\Big\{\hat{q}^M_{N+1,T} \leq \frac{\lceil(1-\alpha)(N+1)\rceil}{N+1}\ \Big|\ \hat{q}_{N+1,1} < 1 - \beta\Big\} \qquad (39)$$

What would happen if, say, $C_1 \gg \big(\sup_{i,j} \frac{S_j}{S_i}\big)\sum_{t=2}^{T} C_t$? Essentially, if we use CPTD-M, $\hat{m}^M$ is dominated by the randomness of $E_1$. Therefore, $\hat{m}^M$ will be too small as long as $\hat{q}_{N+1,1}$ is very small, even if the target normalization constant $S_{N+1}$ is large. This is however not an issue for CPTD-R, because $C_t$ is always cancelled out.
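To illustrate this failure mode concretely, here is a small simulation sketch of the toy model; the constants $C_t$, the scale distribution, and the sample sizes are arbitrary illustrative choices. The MAD normalizer is swamped by the single large-scale step, while the median-normalized quantity of Eq. (15) recovers the per-TS scale $S_i$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 500, 10
C = np.array([100.0] + [1.0] * (T - 1))          # C_1 dominates the other steps
S = rng.uniform(0.5, 2.0, size=N)                # per-TS error scales
R = S[:, None] * C[None, :] * np.exp(rng.standard_normal((N, T)))  # R_{i,t} = S_i E_t

m_mad = R[:, :T - 1].mean(axis=1)                # m_hat^M from steps 1..T-1
nr_bar = (R[:, :T - 1] / np.median(R[:, :T - 1], axis=0)).mean(axis=1)
print(np.corrcoef(m_mad, S)[0, 1])               # weaker: swamped by the step-1 noise
print(np.corrcoef(nr_bar, S)[0, 1])              # stronger: C_t cancels in each column
```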
C Additional Experimental Details
C.1 Full TS
In Table 6 we show the same metrics as in the main text, but on the entire time series (instead of the last 20 steps).
C.2 Tail coverage rate over time
We plot the average longitudinal coverage rate for the bottom 10% of TS up to $t$, for each $t \leq T$, in Figure 3. For clarity, we only include CFRNN and CPTD (including CPTD-M and CPTD-R). Ideal is the ideal scenario where the event of coverage is temporally independent for each time series; that is, each $\mathbf{1}\{Y_{N+1,t} \in \hat{C}\}$ follows a Bernoulli distribution with $p = 1 - \alpha$ (independently). We did not perform re-scaling in this plot because Ideal corresponds to an average coverage rate of $1 - \alpha$, which is why CPTD-M (which typically generates wider PIs) shows better coverage than CPTD-R. We can see that there are still gaps between CPTD and Ideal, which leaves room for improvement in future work. It is also interesting that most of the gain in coverage seems to happen at the beginning; that is, CPTD-R and CPTD-M adapt to the "extreme" TSs in a few steps, and maintain the gain.

Table 6: Mean coverage, mean PI width, and tail coverage (re-scaled to the same mean PI width) using the full time series. Valid mean coverage and the best of tail coverage and mean PI width are in bold. The conclusion is the same as in the main text: CPTD greatly improves the longitudinal coverage for the least-covered TSs, and maintains very efficient PIs (width).
C.3 Cross-sectional coverage over time
In Figure 4, we plot the (cross-sectional) mean coverage rate at different $t$. We can see that all conformal methods exhibit cross-sectional validity as expected, whereas the coverage rate for non-conformal methods varies greatly through time.
C.4 Normalization Quality
In this section we compare the rank (Spearman) correlation between different quantities and the realized residuals. The quantities to consider are $\hat{m}^M$ for CPTD-M, the predicted residual for LASplit, and the width of the PI predicted by QRNN (CQRNN). The correlation can be considered a measure of the normalization quality: if the correlation is high, the distribution of the rank of the normalized residual/nonconformity score will be closer to a uniform distribution, which mitigates the under-coverage of "outlier patients". For each $t$, $corr_t := \mathrm{SpearmanCorrelation}_i(q_{i,t}, |y_{i,t} - \hat{y}_{i,t}|)$, where $q_{i,t}$ should be interpreted as $\hat{m}^M$, the predicted residual, or the PI width mentioned above. The (pooled) mean and standard deviation across all $t$ and seed pairs are reported in Table 7.
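The per-step correlation itself is straightforward to compute; a minimal sketch (the array layout and names are our assumptions) is:

```python
import numpy as np
from scipy.stats import spearmanr

def normalization_quality(q, residuals):
    """corr_t = SpearmanCorrelation_i(q[:, t], residuals[:, t]) for each t.
    q holds the normalizer (or PI width) and residuals holds
    |y_{i,t} - yhat_{i,t}|; both are (N, T) arrays."""
    corrs = []
    for t in range(q.shape[1]):
        rho, _ = spearmanr(q[:, t], residuals[:, t])
        corrs.append(rho)
    return np.array(corrs)
```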
As we can see, both QRNN and LASplit typically have lower correlations at test time. This is simply because training errors are typically lower than test errors (not over-fitting). LASplit is especially fragile to this because the effect is two-fold, affecting both the base point estimator and the error predictor. A similar argument can be found in Romano et al. (2019), explaining why CQR is better than LASplit in terms of efficiency. On the contrary, the correlation for CPTD-M is typically higher on the test set, benefiting from the same effect: with a simple expanding mean as $\hat{m}$, the point estimator's error on the test set is easier to predict than on the training set. Note also that QRNN does not actually issue a point estimate, so $\hat{y}_{i,t}$ is replaced with the middle of the PI. This means the level of correlation itself is probably not comparable, but the change from training to testing is still informative.
C.5 Linear Regression in the place of RNN
We perform additional experiments replacing the base LSTM with a linear regression model. This model consists of T sub-models, one for each $t$ (using data up to $t-1$ as input). The results are presented in Table 8.

Table 8: Mean coverage, mean PI width, and tail coverage (re-scaled to the same mean PI width) with a linear regression point estimator. Valid mean coverage and the best of tail coverage and mean PI width are in bold.
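A minimal sketch of this per-step linear regression point estimator (the $t = 0$ fallback to the training mean is our assumption; the text does not specify it):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_per_step_models(Y):
    """T sub-models, the t-th regressing Y[:, t] on the observations up
    to t-1, as described in C.5. Y: (N, T) training targets."""
    N, T = Y.shape
    models = [Y[:, 0].mean()]                    # constant predictor for t = 0
    for t in range(1, T):
        models.append(LinearRegression().fit(Y[:, :t], Y[:, t]))
    return models
```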
Figure 1: The figure illustrates cross-sectional validity vs.
used in Stankevičiūtė et al. (2021); Lin et al. (2021); Xu & Xie (2021); Barber et al. (2021).
Lemma 4.1. If $S_1, \ldots, S_{N+1}$ are exchangeable time series, then $\forall t$, $[V^{CPTD-M}_{1,t+1}, \ldots, V^{CPTD-M}_{N+1,t+1}]$ and $[V^{CPTD-R}_{1,t+1}, \ldots, V^{CPTD-R}_{N+1,t+1}]$ are both exchangeable sequences of random variables.
Baselines: We use the following state-of-the-art baselines for PI construction in time series forecasting: Conformal forecasting RNN (CFRNN; Stankevičiūtė et al. (2021)), a direct application of split-conformal prediction; Quantile RNN (QRNN; Wen et al. (2017)); RNN with Monte-Carlo Dropout (DP-RNN; Gal & Ghahramani (2016)); Conformalized Quantile Regression with QRNN (CQRNN; Romano et al. (2019)); and Locally adaptive split conformal prediction (LASplit; Lei et al. (2018)), which uses a normalized absolute error as the nonconformity score (we follow the implementation in Romano et al. (2019)). Among the baselines, CQRNN and LASplit are existing conformal prediction methods extended to cross-sectional time series forecasting by us, and QRNN and DP-RNN are not conformal methods (and thus not valid).
Figure 3: Tail Coverage Rate as a function of time. We plot the mean of 20 experiments. Ideal refers to simulated coverage events that have no temporal dependence. The X-axis is $t$, and the Y-axis is the longitudinal mean tail coverage rate up to time $t$. CPTD-M and CPTD-R typically adapt to the overall nonconformity of the TS in a few steps and maintain the advantage afterwards. There is, however, still a gap between CPTD and Ideal.
Figure 4: Mean coverage rate at different $t$. Conformal methods exhibit cross-sectional validity as expected. On the other hand, the coverage rate for non-conformal methods varies greatly through time.
Definition 2. (The Exchangeable Time Series Assumption) Given time series $S_1, S_2, \ldots, S_n$ where $S_i = [Z_{i,1}, \ldots, Z_{i,T}, \ldots]$, we denote $Z_{i,\{t_j\}_{j=1}^m}$ as the random variable comprised of the tuple $(Z_{i,t_1}, \ldots, Z_{i,t_m})$. Time series $S_1, S_2, \ldots, S_n$ are exchangeable if, for any finitely many $t_1 < \cdots < t_m$, the random variables $Z_{1,\{t_j\}_{j=1}^m}, \ldots, Z_{n,\{t_j\}_{j=1}^m}$ are exchangeable.
Table 1: Notations used in this paper.
Algorithm 1 Ratio-to-median-residual Normalization (CPTD-R)
Input: $\{y_{i,s}\}_{i\in[N], s\in[t]}$: responses on the calibration set and the test TS up to $t$. $\{\hat{y}_{i,s}\}_{i\in[N+1], s\in[t+1]}$: predictions on the calibration set and the test TS up to $t+1$.
Output: $\{\hat{m}_{i,t+1}\}$: normalization factors for the nonconformity scores at $t+1$.
Procedures:
1. $\forall i \in [N+1], s \in [t]$, compute $r_{i,s} \leftarrow |y_{i,s} - \hat{y}_{i,s}|$, and $m_s \leftarrow \mathrm{median}_i\{|r_{i,s}|\}$.
2. $\forall i \in [N+1]$, estimate the overall rank $\hat{q}_{i,t+1}$ using Eq. 17.
3. Compute the empirical distribution of the median-normalized residuals $\{nr_{i,t}\}_{i=1}^{N+1}$ using Eq. 15.
4. $\forall i \in [N+1]$, look up the normalizer $\hat{m}^R_{i,t+1}$ using Eq. 16.
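The following sketch mirrors the steps of Algorithm 1. Since Eqs. 15-17 are not restated in this appendix, the rank estimate below (average past rank) and the quantile look-up are plausible stand-ins, not the paper's exact formulas:

```python
import numpy as np

def cptd_r_normalizers(y, y_hat, t):
    """Sketch of Algorithm 1 under stated assumptions.
    y, y_hat: (N+1, >=t) arrays; the last row is the test TS.
    Returns normalizers m_hat[i] for the nonconformity scores at t+1."""
    n = y.shape[0]
    r = np.abs(y[:, :t] - y_hat[:, :t])                  # residuals r_{i,s}
    m = np.median(r, axis=0, keepdims=True)              # per-step medians m_s
    nr = r / m                                           # median-normalized residuals
    # Estimated overall rank of each TS (stand-in for Eq. 17):
    ranks = np.argsort(np.argsort(nr, axis=0), axis=0)
    q_hat = ranks.mean(axis=1) / (n - 1)
    # Empirical distribution of normalized residuals (stand-in for Eq. 15):
    pool = np.sort(nr[:, -1])
    # Look up the normalizer at the estimated rank (stand-in for Eq. 16):
    idx = np.clip(np.round(q_hat * (n - 1)).astype(int), 0, n - 1)
    return pool[idx]
```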
Theorem 4.2. $\hat{C}^{CPTD-M}_{N+1,t+1}$
We follow Stankevičiūtė et al. (2021) and use an LSTM (Hochreiter & Schmidhuber, 1997) as the base time series regression model (mean estimator) for all methods. We use ADAM (Kingma & Ba, 2015) as the optimizer with a learning rate of $10^{-3}$, and MSE loss. The LSTM has one layer and a hidden size of 32, and is trained with 200, 1000, 100, 500 and 1000 epochs on MIMIC, COVID19, EEG, Insurance and Load, respectively. For QRNN, we replace the MSE loss with the quantile loss. Except for QRNN, CQRNN and DPRNN, all methods share the same base LSTM point estimator. For the residual predictor for LASplit, we follow Romano et al. (2019) and change the target from $y$ to $|y - \hat{y}|$.
Table 2: Size of each dataset, and the length of the time series. Note that the Insurance dataset has up to 14 diagnosis codes, up to 17 CPT codes, and 3 other features. If we use one-hot encoding for the discrete codes, Insurance has 14*201+17*101+3 features instead. All results presented in this paper measure the last 20 steps, while the full results are in the Appendix.

Properties       | MIMIC       | Insurance    | COVID19    | EEG         | Load/Load-R
# train/cal/test | 192/100/100 | 2393/500/500 | 200/100/80 | 300/100/200 | 1198/200/700
T (length)       | 30          | 30           | 30         | 63          | 24
# features       | 25          | 34*          | 1          | 1           | 26

Figure 2: We show the coverage rate for the bottom 10% of the time series for EEG (left) and Load (right). All methods are re-scaled to the same mean PI width for a fair comparison. The Y-axis is the average coverage rate. The X-axis denotes the percentile among all test time series, with 0.00 meaning the least-covered time series. The band is an empirical 80% confidence band. CPTD significantly improves the longitudinal coverage rate, especially for the least-covered time series.
Table 3: Average coverage rate for each time series. Empirically valid methods are in bold (with
Table 4: Mean of PI width. The most efficient (and valid) method is in bold, including methods not significantly worse than the best one. For Load, we show the most efficient conformal method. CPTD-R generally provides the most efficient PIs.

Width ↓   | CPTD-R      | CPTD-M      | Split (CFRNN) | CQRNN       | LASplit     | QRNN        | DPRNN
MIMIC     | 1.696±0.163 | 1.876±0.209 | 1.759±0.166   | 1.560±0.140 | 1.872±0.185 | 1.407±0.130 | 0.584±0.027
Insurance | 2.594±0.051 | 2.723±0.054 | 2.690±0.057   | 2.613±0.050 | 2.694±0.067 | 2.314±0.034 | 0.585±0.044
COVID19   | 0.713±0.027 | 0.824±0.102 | 0.737±0.033   | 0.827±0.082 | 0.737±0.038 | 0.805±0.082 | 0.515±0.048
EEG       | 1.275±0.046 | 1.301±0.049 | 1.301±0.056   | 1.436±0.078 | 1.294±0.035 | 1.319±0.042 | 0.414±0.020
Load      | 0.200±0.004 | 0.230±0.005 | 0.209±0.004   | 0.216±0.005 | 0.213±0.005 | 0.168±0.005 | 0.569±0.008
Load-R    | 0.178±0.003 | 0.200±0.007 | 0.178±0.004   | 0.187±0.004 | 0.181±0.005 | 0.164±0.005 | 0.534±0.012
Table 5: The tail coverage rate (mean coverage rate for the least-covered 10% time series). For a fair comparison, we re-scaled all methods to have the same mean PI width (as CFRNN). Unlike the average coverage rate, we want the tail coverage rate to be as high as possible. The best method is in bold, with the second-best underscored. Generally, both CPTD methods significantly outperform the baselines, providing better longitudinal coverage.

Tail Coverage ↑ | CPTD-R     | CPTD-M     | Split (CFRNN) | CQRNN      | LASplit    | QRNN       | DPRNN
MIMIC           | 69.20±4.18 | 69.10±3.95 | 64.10±5.32    | 73.55±3.49 | 62.93±6.47 | 73.25±3.48 | 65.60±5.22
Insurance       | 71.13±1.92 | 72.49±1.32 | 66.03±2.15    | 68.28±2.94 | 68.22±2.29 | 64.72±2.52 | 47.82±2.92
COVID19         | 70.22±5.05 | 70.47±2.47 | 63.78±6.74    | 59.75±6.30 | 67.34±4.41 | 59.81±6.61 | 52.56±6.60
EEG             | 67.30±4.34 | 71.09±3.73 | 64.35±4.23    | 57.02±6.01 | 66.88±2.03 | 57.07±3.41 | 51.06±2.92
Load            | 70.58±0.98 | 68.85±0.94 | 58.80±1.43    | 59.65±1.62 | 59.62±1.33 | 59.87±2.06 | 29.56±1.91
Load-R          | 73.03±1.46 | 71.36±1.36 | 68.69±1.96    | 69.61±1.33 | 69.83±2.03 | 69.42±1.85 | 31.92±2.19
Table 7: Normalization quality, as measured by the (cross-sectional) rank correlation between the realized residual and $\hat{m}^M$ for CPTD-M, the PI width for QRNN, or the predicted residual for LASplit.

                 | Train                 | Test
Rank Correlation | CPTD-M  QRNN  LASplit | CPTD-M  QRNN  LASplit
The authors of Stankevičiūtė et al. (2021) did not use the term "cross-sectional", but this is exactly what they mean.
Throughout the paper, "validity" and "coverage guarantee" are used interchangeably, i.e. a "valid" PI is synonymous with a PI with a "coverage guarantee".
A point $s$ is an atom of $P_S$ if there exists $\epsilon > 0$ such that $P_S\{\{s' : d(s', s) < \delta\}\} > \epsilon$ for any $\delta > 0$. $d(\cdot, \cdot)$ denotes the Euclidean distance.
While obvious, more discussion on this can be found in Guan (2021).
Acknowledgments
This work was supported by NSF awards SCH-2014438 and IIS-1838042 and NIH award R01 1R01NS107291-01. ST was partially supported by the NSF under grant No. DMS-1439786.
Anastasios Nikolas Angelopoulos, Stephen Bates, Michael Jordan, and Jitendra Malik. Uncertainty sets for image classifiers using conformal prediction. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=eNdiU_DbM9.
Anastasios Nikolas Angelopoulos, Amit Kohli, Stephen Bates, Michael I. Jordan, Jitendra Malik, Thayer Alshaabi, Srigokul Upadhyayula, and Yaniv Romano. Image-to-image regression with distribution-free uncertainty quantification and applications in imaging. ArXiv, abs/2202.05265, 2022.
Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. The limits of distribution-free conditional predictive inference. arXiv, abs/1903.04684, 2020. URL https://arxiv.org/abs/1903.04684.
Rina Foygel Barber, Emmanuel J. Candès, Aaditya Ramdas, and Ryan J. Tibshirani. Predictive inference with the jackknife+. The Annals of Statistics, 49(1):486-507, 2021. doi: 10.1214/20-AOS1965. URL https://doi.org/10.1214/20-AOS1965.
Rina Foygel Barber, Emmanuel J. Candes, Aaditya Ramdas, and Ryan J. Tibshirani. Conformal prediction beyond exchangeability, 2022. URL https://arxiv.org/abs/2202.13415.
Stephen Bates, A. Angelopoulos, Lihua Lei, Jitendra Malik, and Michael I. Jordan. Distribution-free, risk-controlling prediction sets. J. ACM, 68:43:1-43:34, 2021.
Anthony Bellotti. Constructing normalized nonconformity measures based on maximizing predictive efficiency. In Alexander Gammerman, Vladimir Vovk, Zhiyuan Luo, Evgueni Smirnov, and Giovanni Cherubin (eds.), Proceedings of the Ninth Symposium on Conformal and Probabilistic Prediction and Applications, volume 128 of Proceedings of Machine Learning Research, pp. 41-54. PMLR, 2020. URL http://proceedings.mlr.press/v128/bellotti20a.html.
Jose Caceres, Danilo Gonzalez, Taotao Zhou, and Enrique Lopez Droguett. A probabilistic bayesian recurrent neural network for remaining useful life prognostics considering epistemic and aleatory uncertainties. Structural Control and Health Monitoring, 28(10):e2811, 2021. doi: 10.1002/stc.2811. URL https://onlinelibrary.wiley.com/doi/abs/10.1002/stc.2811.
Tianqi Chen, Emily Fox, and Carlos Guestrin. Stochastic gradient hamiltonian monte carlo. In Eric P. Xing and Tony Jebara (eds.), Proceedings of the 31st International Conference on Machine Learning, Proceedings of Machine Learning Research, pp. 1683-1691, Bejing, China, 2014. PMLR. URL https://proceedings.mlr.press/v32/cheni14.html.
Isidro Cortés-Ciriano and Andreas Bender. Concepts and applications of conformal prediction in computational drug discovery. ArXiv, abs/1908.03569, 2019.
COVID. Coronavirus (covid-19) in the uk. https://coronavirus.data.gov.uk/, 2022. Accessed: 2022-04-14.
Adam Fisch, Tal Schuster, Tommi Jaakkola, and Regina Barzilay. Efficient conformal prediction via cascaded inference with expanded admission. In ICLR, 2021.
Meire Fortunato, Charles Blundell, and Oriol Vinyals. Bayesian recurrent neural networks. CoRR, abs/1704.02798, 2017. URL http://arxiv.org/abs/1704.02798.
Yarin Gal and Zoubin Ghahramani. Dropout as a Bayesian approximation: Representing model uncertainty in deep learning. In 33rd International Conference on Machine Learning, ICML 2016, 2016.
Jan Gasthaus, Konstantinos Benidis, Yuyang Wang, Syama Sundar Rangapuram, David Salinas, Valentin Flunkert, and Tim Januschowski. Probabilistic forecasting with spline quantile function rnns. In Kamalika Chaudhuri and Masashi Sugiyama (eds.), Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, pp. 1901-1910. PMLR, 2019. URL https://proceedings.mlr.press/v89/gasthaus19a.html.
Isaac Gibbs and Emmanuel Candes. Adaptive conformal inference under distribution shift. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=6vaActvpcp3.
A. L. Goldberger, L. A. Amaral, L. Glass, J. M. Hausdorff, P. C. Ivanov, R. G. Mark, J. E. Mietus, G. B. Moody, C. K. Peng, and H. E. Stanley. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation, 2000. doi: 10.1161/01.cir.101.23.e215.
Leying Guan. Localized conformal prediction: A generalized inference framework for conformal prediction, 2021. URL https://arxiv.org/abs/2106.08460.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Comput., 9(8):1735-1780, 1997. doi: 10.1162/neco.1997.9.8.1735. URL https://doi.org/10.1162/neco.1997.9.8.1735.
Tao Hong, Pierre Pinson, Shu Fan, Hamidreza Zareipour, Alberto Troccoli, and Rob J. Hyndman. Probabilistic energy forecasting: Global energy forecasting competition 2014 and beyond. International Journal of Forecasting, 32(3):896-913, 2016. doi: 10.1016/j.ijforecast.2016.02.001. URL https://www.sciencedirect.com/science/article/pii/S0169207016000133.
A. Johnson, T. Pollard, and R. Mark. Mimic-iii clinical database demo (version 1.4), 2019.
Alistair E.W. Johnson, Tom J. Pollard, Lu Shen, Li-wei H. Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G. Mark. MIMIC-III, a freely accessible critical care database. Scientific Data, 3(1):160035, 2016. doi: 10.1038/sdata.2016.35. URL https://doi.org/10.1038/sdata.2016.35.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun (eds.), 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. URL http://arxiv.org/abs/1412.6980.
Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In Yoshua Bengio and Yann LeCun (eds.), 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14-16, 2014, Conference Track Proceedings, 2014. URL http://arxiv.org/abs/1312.6114.
Danijel Kivaranovic, Kory D. Johnson, and Hannes Leeb. Adaptive, distribution-free prediction intervals for deep networks. In Silvia Chiappa and Roberto Calandra (eds.), Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pp. 4346-4356. PMLR, 2020. URL https://proceedings.mlr.press/v108/kivaranovic20a.html.
Roger Koenker and Gilbert Bassett. Regression quantiles. Econometrica, 46(1):33-50, 1978. URL http://www.jstor.org/stable/1913643.
Balaji Lakshminarayanan, Alexander Pritzel, and Charles Blundell. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in Neural Information Processing Systems, 2017.
Jing Lei and Larry Wasserman. Distribution-free prediction bands for non-parametric regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 76(1):71-96, 2014. doi: 10.1111/rssb.12021. URL https://rss.onlinelibrary.wiley.com/doi/abs/10.1111/rssb.12021.
Jing Lei, Alessandro Rinaldo, and Larry Wasserman. A conformal prediction approach to explore functional data. Annals of Mathematics and Artificial Intelligence, 74(1):29-43, 2015. doi: 10.1007/s10472-013-9366-6. URL https://doi.org/10.1007/s10472-013-9366-6.
Jing Lei, Max G'Sell, Alessandro Rinaldo, Ryan J. Tibshirani, and Larry Wasserman. Distribution-free predictive inference for regression. Journal of the American Statistical Association, 2018. doi: 10.1080/01621459.2017.1307116.
Zhen Lin, Shubhendu Trivedi, and Jimeng Sun. Locally valid and discriminative prediction intervals for deep learning models. In M. Ranzato, A. Beygelzimer, Y. Dauphin, P.S. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 8378-8391. Curran Associates, Inc., 2021. URL https://proceedings.neurips.cc/paper/2021/file/46c7cb50b373877fb2f8d5c4517bb969-Paper.pdf.
Christos Louizos and Max Welling. Multiplicative normalizing flows for variational Bayesian neural networks. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pp. 2218-2227. PMLR, 2017. URL https://proceedings.mlr.press/v70/louizos17a.html.
Sergio Matiz and Kenneth E. Barner. Inductive conformal predictor for convolutional neural networks: Applications to active learning for image classification. Pattern Recognition, 90:172-182, 2019. doi: 10.1016/j.patcog.2019.01.035. URL https://www.sciencedirect.com/science/article/pii/S003132031930055X.
Radford Neal. Bayesian learning via stochastic dynamics. In S. Hanson, J. Cowan, and C. Giles (eds.), Advances in Neural Information Processing Systems, volume 5. Morgan-Kaufmann, 1992. URL https://proceedings.neurips.cc/paper/1992/file/f29c21d4897f78948b91f03172341b7b-Paper.pdf.
Harris Papadopoulos, Kostas Proedrou, Volodya Vovk, and Alex Gammerman. Inductive confidence machines for regression. In Tapio Elomaa, Heikki Mannila, and Hannu Toivonen (eds.), Machine Learning: ECML 2002, pp. 345-356, Berlin, Heidelberg, 2002. Springer Berlin Heidelberg.
Yaniv Romano, Evan Patterson, and Emmanuel Candes. Conformalized quantile regression. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (eds.), Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL https://proceedings.neurips.cc/paper/2019/file/5103c3584b063c431bd1268e9b5e76fb-Paper.pdf.
Kamilė Stankevičiūtė, Ahmed Alaa, and Mihaela van der Schaar. Conformal time-series forecasting. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan (eds.), Advances in Neural Information Processing Systems, 2021. URL https://openreview.net/forum?id=Rx9dBZaV_IP.
Ingo Steinwart and Andreas Christmann. Estimating conditional quantiles with the help of the pinball loss. Bernoulli, 17(1):211-225, 2011. doi: 10.3150/10-BEJ267. URL https://doi.org/10.3150/10-BEJ267.
UCI EEG. Eeg database. https://archive.ics.uci.edu/ml/datasets/EEG+Database, 1999. Accessed: 2022-04-23.
Vladimir Vovk, Alexander Gammerman, and Glenn Shafer. Algorithmic learning in a random world. Springer US, 2005. doi: 10.1007/b106715.
Max Welling and Yee Whye Teh. Bayesian learning via stochastic gradient langevin dynamics. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, 2011.
Ruofeng Wen, Kari Torkkola, Balakrishnan Narayanaswamy, and Dhruv Madeka. A multi-horizon quantile recurrent forecaster, 2017. URL https://arxiv.org/abs/1711.11053.
Andrew Gordon Wilson and Pavel Izmailov. Bayesian deep learning and a probabilistic perspective of generalization. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/322f62469c5e3c7dc3e58f5a4d1ea399-Abstract.html.
Chen Xu and Yao Xie. Conformal prediction interval for dynamic time-series. In Marina Meila and Tong Zhang (eds.), Proceedings of the 38th International Conference on Machine Learning, volume 139 of Proceedings of Machine Learning Research, pp. 11559-11569. PMLR, 2021. URL https://proceedings.mlr.press/v139/xu21h
| [
"https://github.com/zlin7/CPTD."
] |
[
"NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation",
"NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation"
] | [
"Sungdong Kim sungdong.kim@navercorp.com ",
"Minsuk Chang minsuk.chang@navercorp.com ",
"Sang-Woo Lee sang.woo.lee@navercorp.com ",
"Naver Ai Lab ",
"Naver Clova "
] | [] | [
"Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing"
] | We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation. NeuralWOZ has two pipelined models, Collector and Labeler. Collector generates dialogues from (1) user's goal instructions, which are the user context and task constraints in natural language, and (2) system's API call results, which is a list of possible query responses for user requests from the given knowledge base. Labeler annotates the generated dialogue by formulating the annotation as a multiple-choice problem, in which the candidate labels are extracted from goal instructions and API call results. We demonstrate the effectiveness of the proposed method in the zero-shot domain transfer learning for dialogue state tracking. In the evaluation, the synthetic dialogue corpus generated from NeuralWOZ achieves a new state-of-theart with improvements of 4.4% point joint goal accuracy on average across domains, and improvements of 5.7% point of zero-shot coverage against the MultiWOZ 2.1 dataset. 1 | 10.18653/v1/2021.acl-long.287 | [
"https://www.aclanthology.org/2021.acl-long.287.pdf"
] | 235,254,478 | 2105.14454 | cac6f85e645d82b77d1e34b8827da960a161bae2 |
NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation
August 1-6, 2021
Sungdong Kim sungdong.kim@navercorp.com
Minsuk Chang minsuk.chang@navercorp.com
Sang-Woo Lee sang.woo.lee@navercorp.com
Naver Ai Lab
Naver Clova
NeuralWOZ: Learning to Collect Task-Oriented Dialogue via Model-Based Simulation
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing
the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingAugust 1-6, 20213704
We propose NeuralWOZ, a novel dialogue collection framework that uses model-based dialogue simulation. NeuralWOZ has two pipelined models, Collector and Labeler. Collector generates dialogues from (1) user's goal instructions, which are the user context and task constraints in natural language, and (2) system's API call results, which is a list of possible query responses for user requests from the given knowledge base. Labeler annotates the generated dialogue by formulating the annotation as a multiple-choice problem, in which the candidate labels are extracted from goal instructions and API call results. We demonstrate the effectiveness of the proposed method in the zero-shot domain transfer learning for dialogue state tracking. In the evaluation, the synthetic dialogue corpus generated from NeuralWOZ achieves a new state-of-theart with improvements of 4.4% point joint goal accuracy on average across domains, and improvements of 5.7% point of zero-shot coverage against the MultiWOZ 2.1 dataset. 1
Introduction
For a task-oriented dialogue system to be scalable, the dialogue system needs to be able to quickly adapt and expand to new scenarios and domains. However, collecting and annotating an expanding dataset is not only labor-intensive, but the cost and effort are also proportional to the size and variety of the unseen scenarios.
Footnote 1: The code is available at github.com/naver-ai/neuralwoz.

Figure 1: Overview of NeuralWOZ. NeuralWOZ takes the goal instruction for the user side (U) and API call results for the system side (S) to synthesize a dialogue. First, it generates the dialogue from the inputs and then labels the dialogue state ($B_t$) and active domain ($Domain_t$) for each turn $t$ of the dialogue.

There are three types of dialogue system expansions. (1) The simplest expansion is the addition of new instances in the knowledge base (KB) under the identical schema. For example, the addition of newly opened restaurants in the KB of the restaurant domain falls under this category. (2) A slightly more complicated expansion involves modifications to the KB schema, and possibly the related
instances. For example, additions of new constraint types to access the KB due to changes in the needs of the user often require a restructuring of the KB. If a dialogue system built with only restaurant search in mind observes user requests not only about "restaurant location" but also about "traffic information" for navigating, the system now needs a new knowledge base including the additional domain.
(3) The most complex expansion is one that expands across multiple domains. For example, imagine an already built dialogue system that supports restaurant and hotel reservation domains, but now needs to expand to points of interest or other domains. It is difficult to expand to a new domain without collecting new data instances and building a new knowledge base if the schemas of the source (restaurant and hotel in this case) and target domain (point of interest) look different.
To support the development of scalable dialogue systems, we propose NeuralWOZ, a model-based dialogue collection framework. NeuralWOZ uses goal instructions and KB instances for synthetic dialogue generation. NeuralWOZ mimics the mechanism of a Wizard-of-Oz setup (Kelley, 1984; Dahlbäck et al., 1993), and Figure 1 illustrates our approach. NeuralWOZ has two neural components, Collector and Labeler. Collector generates a dialogue by using the given goal instruction and candidate relevant API call results from the KB as input. Labeler annotates the generated dialogue with appropriate labels by using the schema structure of the dialogue domain as meta information. More specifically, Labeler selects the labels from candidate labels which can be obtained from the goal instruction and the API call results. As a result, NeuralWOZ is able to generate a dialogue corpus without training data of the target domain.
We evaluate our method on the zero-shot domain transfer task (Campagna et al., 2020) to demonstrate the ability to generate a corpus for unseen domains, when no prior training data exists. In the dialogue state tracking (DST) task on MultiWOZ 2.1 (Eric et al., 2019), the synthetic data generated with NeuralWOZ achieves 4.4% point higher joint goal accuracy and 5.7% point higher zero-shot coverage than the existing baseline. Additionally, we examine the few-shot and full data augmentation tasks using both training data and synthetic data. We also illustrate how to collect synthetic data beyond the MultiWOZ domains, and discuss the effectiveness of the proposed approach as a data collection strategy.
Our contributions are as follows:
• NeuralWOZ, a novel method for generating dialogue corpus using goal instruction and knowledge base information
• New state-of-the-art performance on the zero-shot domain transfer task

• Analysis results highlighting the potential synergy of using the data generated from NeuralWOZ together with human-annotated data

2 Related Works
Wizard-of-Oz
Wizard-of-Oz (WOZ) is a widely used approach for constructing dialogue data (Henderson et al., 2014a,b; El Asri et al., 2017; Eric and Manning, 2017; Budzianowski et al., 2018). It works by facilitating a role play between two people. The "user" utilizes a goal instruction that describes the context of the task and the details of the request, while the "system" has access to a knowledge base and query results from the knowledge base. They take turns to converse: the user makes requests one by one following the instructions, and the system responds according to the knowledge base and labels the user's utterances.
Synthetic Dialogue Generation
Other studies on dialogue datasets use user simulator-based data collection approaches (Schatzmann et al., 2007; Li et al., 2017; Bordes et al., 2017; Shah et al., 2018; Zhao and Eskenazi, 2018; Campagna et al., 2020). They define domain schemas, rules, and dialogue templates to simulate user behavior under certain goals. The ingredients of the simulation are designed by developers, and the dialogues are realized by predefined mapping rules or paraphrasing by crowdworkers. If a training corpus for the target domain exists, neural models that synthetically generate dialogues can augment the training corpus (Hou et al., 2018; Yoo et al., 2019). For example, Yoo et al. (2020) introduce the Variational Hierarchical Dialog Autoencoder (VHDA), where hierarchical latent variables exist for speaker identity, user's request, dialog state, and utterance. They show the effectiveness of their model on single-domain DST tasks. SimulatedChat (Mohapatra et al., 2020) also uses goal instructions for dialogue augmentation. Although it does not solve the zero-shot learning task with domain expansion in mind, we run auxiliary experiments to compare with NeuralWOZ, and the results are in Appendix D.
Zero-shot Domain Transfer
In zero-shot domain transfer tasks, there is no data for target domain, but there exists plenty of data for other domains similar to target domain. Solving the problem of domain expansion of dialogue systems can be quite naturally reducted to solving zero-shot domain transfer. conduct a landmark study on the zero-shot DST. They Figure 2: Illustration of Collector and Labeler. Collector takes goal instruction G and API call results A as the input, and outputs dialogue D T which consists of T turns. The state candidate C is prepopulated from the G and A as a full set for labeling. Finally, Labeler takes its value's subset O Si and question q for each slot type S i and dialogue context D t from Collector, and chooses answerõ from the O Si . suggest a model, Transferable Dialogue State Generator (TRADE), which is robust to a new domain where few or no training data for the domain exists. Kumar et al. (2020) and Li et al. (2021) follow the same experimental setup, and we also compare NeuralWOZ in the same experiment setup. Abstract Transaction Dialogue Model (ATDM) (Campagna et al., 2020), another method for synthesizing dialogue data, is another baseline for zero-shot domain transfer tasks we adopt. They use rules, abstract state transition, and templates to synthesize the dialogue, which is then fed into a model-based zero-shot learner. They achieved state-of-the-art in the task using the synthetic data on SUMBT , a pretrained BERT (Devlin et al., 2019) based DST model.
NeuralWOZ
In this section, we describe the components of NeuralWOZ in detail, and how they interact with each other. Figure 2 illustrates the input and output of the two modules in NeuralWOZ. The synthetic corpus, which Collector and Labeler create, is used for the training of the DST baselines, TRADE and SUMBT, in our experiments.
Problem Statement
Domain Schema In task-oriented dialogues, there are two slot types: informable and requestable slots (Henderson et al., 2014a; Budzianowski et al., 2018). The informable slots are the task constraints used to find relevant information from user requests, for example, "restaurant-pricerange", "restaurant-food", "restaurant-name", and "restaurant-book people" in Figure 1. The requestable slots are the additional details of user requests, like "reference number" and "address" in Figure 1. Each slot $S$ can have a corresponding value $V$ in a scenario. In multi-domain scenarios, each domain has a knowledge base $KB$, which consists of slot-value pairs corresponding to its domain schema. The API call results in Figure 1 are examples of the KB instances of the restaurant domain.
Goal Instruction The goal instruction, $G$, is a natural language text describing the constraints on user behavior in the dialogue $D$, including informable and requestable slots. The paragraph consisting of four sentences at the top of Figure 1 is an example. We define the set of informable slot-value pairs explicitly expressed in $G$ as
$$C_G = \{(S^G_i, V^G_i) \mid 1 \le i \le |C_G|,\ S^G_i \in \text{informable}\}.$$
("restaurant-pricerange", "expensive") and ("restaurant-food", "british") are examples of the elements of $C_G$ (Figure 1).
API Call Results The API call results, $A$, are the corresponding query results of $C_G$ from the $KB$. We formally define $A = \{a_i \mid 1 \le i \le |A|,\ a_i \in KB\}$. Each $a_i$ is associated with its domain, $domain^{a_i}$, and with slot-value pairs $C_{a_i} = \{(S^{a_i}_k, V^{a_i}_k) \mid 1 \le k \le |C_{a_i}|\}$. A slot $S^{a_i}_k$ can be either an informable or a requestable slot. For example, the restaurant instance "graffiti" in Figure 1 is a query result from ("restaurant-pricerange", "expensive") and ("restaurant-food", "british") described in the goal instruction.
State Candidate We define the informable slot-value pairs that are not explicit in $G$ but accessible through $A$ in $D$ as
$$C_A = \{(S^A_i, V^A_i) \mid 1 \le i \le |C_A|,\ S^A_i \in \text{informable}\}.$$
It contains all informable slot-value pairs from $C_{a_1}$ to $C_{a_{|A|}}$. The elements of $C_A$ are likely to be uttered in summaries of current states or recommendations of KB instances by the system side in $D$. The system utterance of the second turn in Figure 1 is an example ("I recommend graffiti."). In this case, the slot-value pair ("restaurant-name", "graffiti") can be obtained from $A$, not from $G$. Finally, the state candidate $C$ is the union of $C_G$ and $C_A$. It is a full set of the dialogue state for the dialogue $D$ given $G$ and $A$. Thus, it can be used as the label candidates for dialogue state tracking annotation.
Collector
Collector is a sequence-to-sequence model, which takes a goal instruction $G$ and API call results $A$ as the input and generates a dialogue $D_T$. The generated dialogue $D_T = (r_1, u_1, \ldots, r_T, u_T)$ is the sequence of system responses $r$ and user utterances $u$, represented by $N$ tokens $(w_1, \ldots, w_N)$:²
$$p(D_T \mid G, A) = \prod_{i=1}^{N} p(w_i \mid w_{<i}, G, A).$$
We denote the input of Collector as <s> ⊕ G ⊕ </s> ⊕ A, where ⊕ is the concatenation operation. The <s> and </s> are special tokens to indicate start and separator, respectively. The tokenized natural language description of $G$ is directly used as the tokens. For $A$, we take the concatenation of each $a_i$ ($a_1 ⊕ \cdots ⊕ a_{|A|}$).³ Each $a_i$ is flattened to the token sequence <domain> ⊕ $domain^{a_i}$ ⊕ <slot> ⊕ $S^{a_i}_1$ ⊕ $V^{a_i}_1$ ⊕ ⋯ ⊕ <slot> ⊕ $S^{a_i}_{|C_{a_i}|}$ ⊕ $V^{a_i}_{|C_{a_i}|}$, where <domain> and <slot> are additional separator tokens. The objective function of Collector is
$$\mathcal{L}_C = -\frac{1}{M_C} \sum_{j=1}^{M_C} \sum_{i=1}^{N_j} \log p(w^j_i \mid w^j_{<i}, G^j, A^j).$$
Our Collector model uses the transformer architecture (Vaswani et al., 2017) initialized with pretrained BART (Lewis et al., 2020). Collector is trained using the negative log-likelihood loss, where $M_C$ is the number of training instances for Collector and $N_j$ is the target length of the $j$-th instance. Following Lewis et al. (2020), label smoothing is used during training with a smoothing parameter of 0.1.

² Following Hosseini-Asl et al. (2020), we also utilize role-specific special tokens <system> and <user> for $r$ and $u$, respectively.
³ We limit $|A|$ to a maximum of 3.
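A small sketch of the input flattening described above (the dictionary format of the API call results is a hypothetical convenience, not the paper's data format):

```python
def build_collector_input(goal_instruction, api_results):
    """Flatten goal instruction G and API call results A into the
    Collector input: <s> G </s> <domain> d <slot> s v <slot> s v ..."""
    parts = ["<s>", goal_instruction, "</s>"]
    for a in api_results[:3]:                 # |A| is limited to at most 3
        parts += ["<domain>", a["domain"]]
        for slot, value in a["slot_values"]:
            parts += ["<slot>", slot, value]
    return " ".join(parts)

print(build_collector_input(
    "I am looking for an expensive british restaurant.",
    [{"domain": "restaurant",
      "slot_values": [("restaurant-pricerange", "expensive"),
                      ("restaurant-food", "british"),
                      ("restaurant-name", "graffiti")]}]))
```

In practice the concatenation happens at the token level with BART's tokenizer and registered special tokens; the plain string above is an illustrative simplification.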
Labeler
We formulate labeling as a multiple-choice problem. Specifically, Labeler takes a dialogue context $D_t = (r_1, u_1, \ldots, r_t, u_t)$, a question $q$, and a set of answer options $O = \{o_1, o_2, \ldots, o_{|O|}\}$, and selects one answer $\tilde{o} \in O$. Labeler encodes the inputs for each $o_i$ separately, and $s_{o_i} \in \mathbb{R}$ is the corresponding logit score from the encoding. Finally, the logit score is normalized via the softmax function over the answer option set $O$:
$$p(o_i \mid D_t, q, O) = \frac{\exp(s_{o_i})}{\sum_{j=1}^{|O|} \exp(s_{o_j})}, \qquad s_{o_i} = \mathrm{Labeler}(D_t, q, o_i), \ \forall i.$$
The input of Labeler is a concatenation of $D_t$, $q$, and $o_i$ with special tokens: <s> ⊕ $D_t$ ⊕ </s> ⊕ $q$ ⊕ </s> ⊕ $o_i$ ⊕ </s>. For labeling the dialogue states of $D_t$, we use the slot description of each corresponding slot type $S_i$ as the question, for example, "what is area or place of hotel?" for "hotel-area" in Figure 2. We populate the corresponding answer options $O_{S_i} = \{V_j \mid (S_j, V_j) \in C,\ S_j = S_i\}$ from the state candidate set $C$. There are two special values: Dontcare, to indicate the user has no preference, and None, to indicate the user is yet to specify a value for this slot (Henderson et al., 2014a; Budzianowski et al., 2018). We include these values in $O_{S_i}$. For labeling the active domain of $D_t$, which is the domain at the $t$-th turn of $D_t$, we define a domain question, for example "what is the domain or topic of current turn?", as $q$ and use a predefined domain set $O_{domain}$ as the answer options. In MultiWOZ, $O_{domain}$ = {"Attraction", "Hotel", "Restaurant", "Taxi", "Train"}.

Our Labeler model employs a pretrained RoBERTa model (Liu et al., 2019) as the initial weights. Dialogue state and domain labeling are trained jointly in the multiple-choice setting. A preliminary result shows that the imbalanced class problem is significant in the dialogue state labels: most of the ground-truth answers are None for a given question. Therefore, we revise the negative log-likelihood objective to weight the other (not-None) answers, multiplying the log-likelihood by a constant $\beta$ when the answer of a training instance is not None. The objective function of Labeler is
$$\mathcal{L}_L = -\frac{1}{M_L} \sum_{j=1}^{M_L} \sum_{t=1}^{T} \sum_{i=1}^{N_q} L^j_{t,i}, \qquad L^j_{t,i} = \begin{cases} \beta \log p(\tilde{o}^j_{t,i} \mid D^j_t, q^j_i, O^j_i) & \text{if } \tilde{o}^j_{t,i} \ne None, \\ \log p(\tilde{o}^j_{t,i} \mid D^j_t, q^j_i, O^j_i) & \text{otherwise}, \end{cases}$$
where $\tilde{o}^j_{t,i}$ denotes the answer of the $i$-th question for the $j$-th training dialogue at turn $t$, $N_q$ is the number of questions, and $M_L$ is the number of training dialogues for Labeler. We empirically set $\beta$ to a constant 5.
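A minimal PyTorch sketch of the multiple-choice scoring and the weighted objective (the encoder below is a stand-in for the pretrained RoBERTa encoder; names and the pooling are our assumptions):

```python
import torch
import torch.nn as nn

class LabelerHead(nn.Module):
    """Scores each "<s> D_t </s> q </s> o_i </s>" concatenation separately
    and normalizes the logits over the answer option set O."""
    def __init__(self, encoder, hidden_size):
        super().__init__()
        self.encoder = encoder                     # maps ids -> pooled vector
        self.scorer = nn.Linear(hidden_size, 1)    # s_{o_i} in R

    def forward(self, option_inputs):
        # option_inputs: (num_options, seq_len) token ids, one row per option.
        pooled = self.encoder(option_inputs)           # (num_options, hidden)
        logits = self.scorer(pooled).squeeze(-1)       # (num_options,)
        return torch.log_softmax(logits, dim=-1)       # normalize over O

def labeler_loss(log_probs, answer_idx, is_none, beta=5.0):
    """Weighted NLL: the log-likelihood of a not-None answer is scaled
    by beta (= 5 in the paper) to counter the class imbalance."""
    return -(1.0 if is_none else beta) * log_probs[answer_idx]

# toy usage with a bag-of-embeddings encoder standing in for RoBERTa
vocab, hidden = 100, 16
emb = nn.Embedding(vocab, hidden)
head = LabelerHead(lambda ids: emb(ids).mean(dim=1), hidden)
log_probs = head(torch.randint(vocab, (4, 12)))        # 4 answer options
loss = labeler_loss(log_probs, answer_idx=2, is_none=False)
```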
Synthesizing a Dialogue
We first define a goal template, a delexicalized version of $G$ obtained by changing each value $V^G_i$ expressed in the instruction to its slot $S^G_i$. For example, the "expensive" and "british" of the goal instruction in Figure 1 are replaced with "restaurant-pricerange" and "restaurant-food", respectively. As a result, domain transitions in the template become convenient.

First, a goal template is sampled from a pre-defined set of goal templates. API call results $A$, which correspond to the domain transitions in the template, are randomly selected from the $KB$. In particular, we constrain the sampling space of $A$ when consecutive scenarios among the domains share slot values. For example, the sampled API call results for the restaurant and hotel domains should share the value of "area" to support the following instruction: "I am looking for a hotel nearby the restaurant". The template and $A$ are aligned to become $G_A$; in other words, each value for $S^G_i$ in the template is assigned using the corresponding values in $A$. Then, Collector generates a dialogue $D$, of which the total number of turns is $T$, given $G_A$ and $A$. More details are in Appendix A. Nucleus sampling (Holtzman et al., 2020) is used for the generation.

We denote the dialogue state and active domain at turn $t$ as $B_t$ and $domain_t$, respectively. $B_t = \{(S_j, V_{j,t}) \mid 1 \le j \le J\}$ has $J$ predefined slots and their values at turn $t$. This means Labeler is asked $J$ questions (from the slot descriptions) plus 1 (the domain question) regarding the dialogue context $D_t$ from Collector. Finally, the output of Labeler is a set of (dialogue context, dialogue state, active domain) triples by turn: $\{(D_1, B_1, domain_1), \ldots, (D_T, B_T, domain_T)\}$.
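A sketch of the generation step, assuming the HuggingFace Transformers interface for BART; the vanilla facebook/bart-large checkpoint below stands in for the fine-tuned Collector, and the maximum length is an assumption not stated in the text:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
collector = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

collector_input = ("<s> I am looking for an expensive british restaurant. </s> "
                   "<domain> restaurant <slot> restaurant-food british")
inputs = tokenizer(collector_input, return_tensors="pt")
out = collector.generate(
    **inputs,
    do_sample=True,     # nucleus sampling (Holtzman et al., 2020)
    top_p=0.98,         # one of the top-p values reported in Section 4.3
    temperature=0.9,    # one of the temperature values reported there
    max_length=512,     # assumption: the maximum length is not specified
)
dialogue = tokenizer.decode(out[0], skip_special_tokens=True)
```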
Experimental Setups
Dataset
We use the MultiWOZ 2.1 (Eric et al., 2019) dataset for our experiments. It is one of the largest publicly available multi-domain dialogue datasets and contains 7 domains related to travel (attraction, hotel, restaurant, taxi, train, police, hospital), including about 10,000 dialogues. The MultiWOZ data is created using WOZ, so it includes a goal instruction for each dialogue and a domain-related knowledge base as well. We first train our NeuralWOZ using the goal instructions and the knowledge bases. Then we evaluate our method on dialogue state tracking, with and without synthesized data from NeuralWOZ, using five domains (attraction, restaurant, hotel, taxi, train) in our baselines, and follow the same preprocessing steps as Wu et al. (2019); Campagna et al. (2020).
Training NeuralWOZ
We use the pretrained BART-Large (Lewis et al., 2020) for Collector and RoBERTa-Base (Liu et al., 2019) for Labeler. They share the same byte-level BPE vocabulary (Sennrich et al., 2016) introduced by Radford et al. (2019). We train the pipelined models using the Adam optimizer (Kingma and Ba, 2017) with a learning rate of 1e-5, 1,000 warm-up steps, and a batch size of 32. The number of training epochs is set to 30 and 10 for Collector and Labeler, respectively.

For the training phase of Labeler, we use a state candidate set from the ground-truth dialogue states $B_{1:T}$ for each dialogue, unlike the synthesizing phase where the options are obtained from the goal instruction and API call results. We also evaluate the performance of Labeler itself as in the training phase with validation data (Table 5). Before training Labeler on the MultiWOZ 2.1 dataset, we pretrain Labeler on DREAM (Sun et al., 2019) to boost Labeler's performance. This is similar to the coarse-tuning in Jin et al. (2019). The same hyperparameter setting is used for the pretraining.

For the zero-shot domain transfer task, we exclude dialogues that contain the target domain from the training data. Our implementation is based on Transformers (Wolf et al., 2020). The best performing models, Collector and Labeler, are selected by evaluation results on the validation set.
Synthetic Data Generation
We synthesize 5,000 dialogues for every target domain for both the zero-shot and few-shot experiments⁹, and 1,000 dialogues for full data augmentation. For the zero-shot experiment, since training data are unavailable for a target domain, we only use goal templates that contain the target domain scenario in the validation set, similar to Campagna et al. (2020). We use nucleus sampling in Collector with the top-p ratio in the range {0.92, 0.98} and the temperature in the range {0.7, 0.9, 1.0}. It takes about two hours to synthesize 5,000 dialogues using one V100 GPU. More statistics are in Appendix B.
Baselines
We compare NeuralWOZ with baseline methods for both zero-shot learning and data augmentation using MultiWOZ 2.1 in our experiments. We use a baseline zero-shot learning scheme which does not use synthetic data. For data augmentation, we use ATDM and VHDA.

⁹ In Campagna et al. (2020), the average number of synthesized dialogues over domains is 10,140.
ATDM refers to a rule-based synthetic data augmentation method for zero-shot learning suggested by Campagna et al. (2020). It defines rules, including state transitions and templates, for simulating dialogues and creates about 10,000 synthetic dialogues per each of the five domains in the MultiWOZ dataset. Campagna et al. (2020) feed the synthetic dialogues into zero-shot learner models to perform the zero-shot transfer task for dialogue state tracking. We also employ TRADE and SUMBT as baseline zero-shot learners for fair comparisons with ATDM.

VHDA refers to a model-based generation method using a hierarchical variational autoencoder (Yoo et al., 2020). It generates dialogues sequentially, incorporating information about the speaker, the goal of the speaker, turn-level dialogue acts, and the utterance. Yoo et al. (2020) augment about 1,000 dialogues for the restaurant and hotel domains in the MultiWOZ dataset. For a fair comparison, we use TRADE as the baseline model for the full data augmentation experiments. We also compare ours with VHDA in the single-domain augmentation setting, following their report.
Experimental Results
We use both joint goal accuracy (JGA) and slot accuracy (SA) as the performance measurements. JGA checks whether all slot values predicted at each turn exactly match the ground-truth values, and SA is the slot-wise accuracy of partial matches against the ground-truth values. Especially for the zero- and few-shot settings, we follow the previous setup (Campagna et al., 2020). Following Campagna et al. (2020), the zero-shot learner model should be trained on data excluding the target domain, and tested on the target domain. We also add the synthesized data from our NeuralWOZ, which is trained in the same way, i.e., the leave-one-out setup, to the training data in the experiment.
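Both metrics are simple to compute from per-turn slot-value maps; a minimal sketch (treating missing slots as "none" is our assumption) is:

```python
def joint_goal_accuracy(preds, golds):
    """preds, golds: lists of per-turn dicts mapping slot -> value.
    A turn counts as correct only if every slot value matches exactly."""
    correct = sum(p == g for p, g in zip(preds, golds))
    return correct / len(golds)

def slot_accuracy(preds, golds, slots):
    """Slot-wise partial match: fraction of (turn, slot) pairs predicted
    correctly, with missing slots treated as 'none'."""
    total, correct = 0, 0
    for p, g in zip(preds, golds):
        for s in slots:
            total += 1
            correct += p.get(s, "none") == g.get(s, "none")
    return correct / total
```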
Zero-Shot Domain Transfer Learning
Our method achieves a new state-of-the-art in zero-shot domain transfer learning for dialogue state tracking on the MultiWOZ 2.1 dataset (Table 1). Except for the hotel domain, the performance over all target domains is significantly better than that of the previous state-of-the-art method. We discuss the lower performance in the hotel domain in the analysis section. Following the work of Campagna et al. (2020), we also measure zero-shot coverage, which refers to the accuracy ratio between zero-shot learning on the target domain and the fully trained model including the target domain. Our NeuralWOZ achieves 66.9% and 79.2% zero-shot coverage on TRADE and SUMBT, respectively, outperforming the previous state of the art, ATDM, which achieves 61.2% and 73.5%, respectively.
Data Augmentation on Full Data Setting
For full data augmentation, our synthesized data come from a fully trained model including all five domains in this setting. Table 2 shows that our model still consistently outperforms in full data augmentation for multi-domain dialogue state tracking. Specifically, NeuralWOZ performs 2.8% points better on the joint goal accuracy of TRADE than ATDM. Our augmentation improves the performance by 1.6% points, while ATDM degrades it.
We also compare NeuralWOZ with VHDA, a previous model-based data augmentation method for dialogue state tracking (Yoo et al., 2020). Since VHDA only considers single-domain simulation, we use single-domain dialogues in the hotel and restaurant domains for the evaluation. Table 3 shows that our method still performs better than VHDA in this setting. NeuralWOZ achieves more than twice the joint goal accuracy gain of VHDA.

Intrinsic Evaluation of NeuralWOZ

Table 4 shows the intrinsic evaluation results for the two components (Collector and Labeler) of NeuralWOZ on the validation set of MultiWOZ 2.1. We evaluate each component using perplexity for Collector and joint goal accuracy for Labeler, respectively. Note that the joint goal accuracy is achieved by using the state candidate set, prepopulated as the multiple-choice options from the ground truth $B_{1:T}$, as at the training time of Labeler. It can be seen as using meta information, since its purpose is accurate annotation rather than dialogue state tracking itself. We also report results obtained by excluding the target domain from the full dataset to simulate the zero-shot environment. Surprisingly, the data synthesized by our method performs effectively even though the annotation by Labeler is not perfect. We conduct further analysis of the responsibility of each model in the following section.

6 Analysis

6.1 Error Analysis

Figure 3 shows the slot accuracy for each slot type in the hotel domain, which is our weakest domain. Unlike the other four domains, only the hotel domain has two boolean-type slots, "parking" and "internet", which can have only "yes" or "no" as their value. Since these have an abstract property for tracking, Labeler's labeling performance tends to be limited in this domain. However, it is noticeable that our accuracy on the booking-related slots (book stay, book people, book day) is much higher than ATDM's. Moreover, the model using synthetic data from ATDM totally fails to track the "book stay" slot. In the synthesizing procedure of Campagna et al. (2020), they create the data with a simple substitution of a domain noun phrase when two domains have similar slots. For example, "find me a restaurant in the city center" can be replaced with "find me a hotel in the city center", since the restaurant and hotel domains share the "area" slot. We presume this is why they outperform on slots like "pricerange" and "area".
Few-shot Learning
We further investigate how our method complements human-annotated data. Figure 4 illustrates that NeuralWOZ yields a consistent gain in the few-shot domain transfer setting. While the performance with ATDM saturates as the few-shot ratio increases, the performance with NeuralWOZ improves continuously. We obtain about a 5.8% point improvement over the case without synthetic data when using 10% of the human-annotated data for the target domain. This implies our method could be used even more effectively together with human-annotated data in a real scenario.
Ablation Study
We investigate whether Collector or Labeler is more responsible for the quality of the synthesized data. Table 5 shows ablation results where each model of NeuralWOZ is trained on data including or withholding the hotel domain. Except for the training data of each model, the pipelined models are trained and dialogues are synthesized in the same way. Then, we train a TRADE model using the synthesized data and evaluate it on the hotel domain as in the zero-shot setting. The performance gain from a Collector trained including the target domain is 4.3% points, whereas the gain from Labeler is only 0.8% points. This implies the generation quality of Collector is more responsible for the performance of the zero-shot learner than the annotation accuracy of Labeler.

Qualitative Analysis

It is harder to generalize when the schema structure of the target domain is different from that of the source domains. Other examples can be found in Appendix C. We would like to extend NeuralWOZ to more challenging expansion scenarios like these in future work.
Comparison on End-to-End Task
To show that our framework can be used for other dialogue tasks, we test our data augmentation method on the end-to-end task in MultiWOZ 2.1. We describe the result in Appendix D with discussion.
In the full data setting, our method achieves 17.46 BLEU, 75.1 Inform rate, 64.6 Success rate, and an 87.31 Combined score, showing a performance gain from using the synthetic data. Appendix D also includes the comparison with and discussion of SimulatedChat (Mohapatra et al., 2020).
Conclusion
We propose NeuralWOZ, a novel dialogue collection framework, and we show that our method achieves state-of-the-art performance on the zero-shot domain transfer task. We find that the dialogue corpus from NeuralWOZ is synergetic with human-annotated data. Finally, further analysis shows that NeuralWOZ can be applied to scaling dialogue systems. We believe NeuralWOZ will spark further research into dialogue system environments where expansion target domains are distant from the source domains.
A Goal Instruction Sampling for Synthesizing in NeuralWOZ

C Additional Qualitative Examples

Figure 7 shows further examples from NeuralWOZ. The left subfigure shows a dialogue synthesized by NeuralWOZ in the restaurant domain, which is a seen domain with the same schema as the restaurant domain in the MultiWOZ dataset; however, "spicy club" is an unseen instance newly added to the schema for the synthesizing. The right subfigure shows another synthetic dialogue in the restaurant domain, a seen domain but with a different schema from the restaurant domain in the MultiWOZ dataset; it describes an in-car navigation scenario borrowed from the KVret dataset (Eric and Manning, 2017). Adapting to an unseen scenario is a non-trivial problem, even within the same domain.
D Additional Explanation on Comparison in End-to-End Task
To compare our model with that of Mohapatra et al. (2020), we conduct the end-to-end task experiments of that previous work. Table 8 shows the result. Although the performance of the baseline implementation differs, the trend of performance improvement is comparable to that reported for SimulatedChat. The two studies also differ in terms of modeling. In our method, all utterances in the dialogue are first collected by Collector based on the goal instruction and KB information. After that, Labeler selects annotations from candidate labels, which can be induced from the goal instruction and KB information. SimulatedChat, on the other hand, creates utterance and label sequentially with knowledge base access for each turn; thus, each generated utterance is affected by the utterances and labels generated in the previous turn.
In detail, the two methods also differ in terms of complexity. SimulatedChat creates a model for each domain separately, and for each domain it creates five neural modules: a user response generator, a user response selector, an agent query generator, an agent response generator, and an agent response selector. This results in 25 neural models for data augmentation in the MultiWOZ experiments. In contrast, NeuralWOZ needs only two neural models for data augmentation: Collector and Labeler.
Another notable difference is that SimulatedChat does not generate multi-domain data in a natural way. The strategy of creating a model for each domain not only makes it difficult to transfer knowledge to a new domain, but also makes it difficult to create multi-domain data: in SimulatedChat, a dialogue is created for each domain and the results are then concatenated. Our model can properly reflect the information of all domains included in the goal instruction when generating synthetic dialogues, regardless of the number of domains.
E Other Experiment Details
The number of parameters of our models is 406M for Collector and 124M for Labeler. Both models are trained on two V100 GPUs with mixed-precision floating point arithmetic; training takes about 4 hours (10 epochs) and 24 hours (30 epochs), respectively. We optimize the hyperparameters of each model, learning rate {1e-5, 2e-5, 3e-5} and batch size {16, 32, 64}, based on greedy search. We set the maximum sequence length to 768 for Collector and 512 for Labeler.
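The greedy search could be organized as in the following sketch; train_and_evaluate is a hypothetical stand-in for a full training-plus-validation run, not part of our released code:

# Minimal sketch of a greedy hyperparameter search over learning rate
# and batch size. `train_and_evaluate` is a hypothetical helper returning
# a validation score (e.g., perplexity for Collector or joint goal
# accuracy for Labeler).

def greedy_search(train_and_evaluate, higher_is_better=True):
    best = {"lr": 1e-5, "batch_size": 16}
    # Tune one hyperparameter at a time, keeping the others fixed at
    # their current best values.
    for name, candidates in [("lr", [1e-5, 2e-5, 3e-5]),
                             ("batch_size", [16, 32, 64])]:
        scores = {}
        for value in candidates:
            config = dict(best, **{name: value})
            scores[value] = train_and_evaluate(**config)
        best[name] = (max if higher_is_better else min)(scores, key=scores.get)
    return best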
For the main experiments, we fix hyperparameter settings of TRADE (learning rate 1e-4 and batch size 32) and SUMBT (learning rate 5e-5 and batch size 4) same with previous works. We use the script of Campagna et al. (2020) for converting the TRADE's data format to the SUMBT's.
For the GPT2-based (Radford et al., 2019) model for the end2end task, we re-implement a model similar to SimpleTOD (Hosseini-Asl et al., 2020) but without using actions. It generates dialogue context, dialogue state, database results, and system response in an autoregressive manner. We also use the special tokens of SimpleTOD (without the special tokens for actions). We follow the preprocessing procedure for the end2end task, including the delexicalization suggested by Budzianowski et al. (2018). We use a batch size of 8 and a learning rate of 5e-5. Note that we also train NeuralWOZ using 30% of the training data and synthesize 5,000 dialogues for the end2end experiments. However, we could not find the detailed experimental setup of Mohapatra et al. (2020), including hyperparameters, the seed for each portion of the training data, and evaluation, so the comparison is not entirely fair.
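The autoregressive serialization we re-implement can be sketched as below. The special-token strings are illustrative placeholders in the spirit of SimpleTOD, not its verbatim vocabulary:

def build_training_sequence(context, belief_state, db_result, response):
    """Serialize one turn into a single string for a GPT2-style LM.

    The model learns to generate the belief state, database results, and
    the delexicalized system response, conditioned on the dialogue
    context. The token names below are illustrative placeholders.
    """
    return " ".join([
        "<context>", context, "</context>",
        "<belief>", belief_state, "</belief>",
        "<db>", db_result, "</db>",
        "<response>", response, "</response>",
    ])

seq = build_training_sequence(
    "user: i need a cheap hotel in the north",
    "hotel pricerange cheap, hotel area north",
    "hotel 2 matches",
    "i found [value_count] hotels in the [value_area] .",
)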
Figure 3: Breakdown of accuracy by slot of the hotel domain in the zero-shot experiments when using synthetic data. The analysis is conducted based on TRADE.
Figure 5 is a qualitative example generated by NeuralWOZ. It shows that NeuralWOZ can generate an unseen movie domain which has a different schema from traveling, the meta domain of the MultiWOZ dataset, even if it is trained only on MultiWOZ.

Figure 5: Unseen domain dialogue generation from NeuralWOZ. The movie domain is an example; it has a very different domain schema from the domains in the MultiWOZ dataset.
Table 1: Experimental results of zero-shot domain transfer on the test set of MultiWOZ 2.1. Joint goal accuracy / slot accuracy are reported. Wu indicates the original zero-shot scheme of TRADE suggested by Wu et al. (2019) and reproduced by Campagna et al. (2020). Campagna indicates a revised version of the original by Campagna et al. (2020). The + indicates the synthesized dialogue is used together for the training.
the training data for both Collector and Labeler. This means we train our pipelines for every target domain separately. We use the same seed data for training as Campagna et al. (2020) did in the few-shot setting. All our implementations are conducted on the NAVER Smart Machine Learning (NSML) platform (Sung et al., 2017; Kim et al., 2018) using huggingface's transformers library.
Table 2: Full data augmentation on multi-domain DST. Joint goal accuracy / slot accuracy are reported.
Table 3: Full data augmentation on single-domain DST. Joint goal accuracy / slot accuracy are reported. TRADE is used for evaluation.

Domain           Collector ↓   Labeler ↑
Full             5.0           86.8
w/o Hotel        5.4           79.2
w/o Restaurant   5.3           81.3
w/o Attraction   5.3           83.4
w/o Train        5.6           83.2
w/o Taxi         5.2           83.1

Table 4: Intrinsic evaluation results of NeuralWOZ on the validation set of MultiWOZ 2.1. Perplexity (Collector) and joint goal accuracy (Labeler) are used for measurement, respectively. The "w/o" means the domain is excluded from the full data. Different from the zero-shot experiments, the joint goal accuracy is computed by regarding all five domains.
Table 5: Result of responsibility analysis. We compare the performances of each model with and without the hotel domain in the training data.
Figure 6: An example of sampling goal instruction G_A using goal template G and randomly selected API call results A.

B Data Statistics
Domain       Slots                                                                                       # of Dialogues (Train/Valid/Test)   # of Turns (Train/Valid/Test)
Attraction   area, name, type                                                                            2,717 / 401 / 395                   8,073 / 1,220 / 1,256
Hotel        price range, type, parking, book stay, book day, book people, area, stars, internet, name   3,381 / 416 / 394                   14,793 / 1,781 / 1,756
Restaurant   food, price range, area, name, book time, book day, book people                             3,813 / 438 / 437                   15,367 / 1,708 / 1,726
Taxi         leave at, destination, departure, arrive by                                                 1,654 / 207 / 195                   4,618 / 690 / 654
Train        destination, day, departure, arrive by, book people, leave at                               3,103 / 484 / 494                   12,133 / 1,972 / 1,976

Table 6: Data statistics of MultiWOZ 2.1.
Table 7: Statistics of the synthesized data used in NeuralWOZ for the zero-shot and full augmentation experiments.

Figure 7: Qualitative examples of synthesized dialogues from NeuralWOZ in the restaurant domain.

Model                                           Belief State   BLEU    Inform   Success   Combined
DAMD (Zhang et al., 2020)                       Oracle         17.3    80.3     65.1      90
SimpleTOD (Hosseini-Asl et al., 2020)           Oracle         16.22   85.1     73.5      95.52
GPT2 (Mohapatra et al., 2020)                   Oracle         15.95   72.8     63.7      84.2
GPT2 + SimulatedChat (Mohapatra et al., 2020)   Oracle         15.06   80.4     62.2      86.36
GPT2 (ours)                                     Oracle         17.27   77.1     67.8      89.72
GPT2 + NeuralWOZ (ours)                         Oracle         17.69   78.1     67.6      90.54
DAMD (Zhang et al., 2020)                       Generated      18.0    72.4     57.7      83.05
SimpleTOD (Hosseini-Asl et al., 2020)           Generated      14.99   83.4     67.1      90.24
GPT2 (Mohapatra et al., 2020)                   Generated      15.94   66.2     55.4      76.74
GPT2 + SimulatedChat (Mohapatra et al., 2020)   Generated      14.62   72.5     53.7      77.72
GPT2 (ours)                                     Generated      17.38   74.6     64.4      86.88
GPT2 + NeuralWOZ (ours)                         Generated      17.46   75.1     64.6      87.31

Table 8: Performance of the end-to-end task model.
The number of None values in the training data is about 10 times larger than the number of other values.
In Budzianowski et al. (2018), they also use templates like ours when allocating goal instructions to the user in the Wizard-of-Oz setup. Booking-related slots, e.g., the number of people, time, day, etc., are randomly sampled for their values since they are independent of A.
https://github.com/budzianowski/multiwoz

DREAM is a multiple-choice question answering dataset in dialogue and includes about 84% non-extractive answers.
Acknowledgments

We thank Sohee Yang, Gyuwan Kim, Jung-Woo Ha, and other members of NAVER AI for their valuable comments. We also thank the participants who helped with our preliminary experiments for building the data collection protocol.
References

Antoine Bordes, Y-Lan Boureau, and Jason Weston. 2017. Learning end-to-end goal-oriented dialog.

Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Iñigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - a large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5016-5026, Brussels, Belgium. Association for Computational Linguistics.

Giovanni Campagna, Agata Foryciarz, Mehrad Moradshahi, and Monica Lam. 2020. Zero-shot transfer learning with synthesized data for multi-domain dialogue state tracking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 122-132, Online. Association for Computational Linguistics.

Nils Dahlbäck, Arne Jönsson, and Lars Ahrenberg. 1993. Wizard of Oz studies: why and how. In Proceedings of the 1st International Conference on Intelligent User Interfaces, pages 193-200.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: a corpus for adding memory to goal-oriented dialogue systems. In Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, pages 207-219, Saarbrücken, Germany. Association for Computational Linguistics.

Mihail Eric, Rahul Goel, Shachi Paul, Adarsh Kumar, Abhishek Sethi, Peter Ku, Anuj Kumar Goyal, Sanchit Agarwal, Shuyang Gao, and Dilek Hakkani-Tur. 2019. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. arXiv preprint arXiv:1907.01669.

Mihail Eric and Christopher D. Manning. 2017. Key-value retrieval networks for task-oriented dialogue.

Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014a. The second dialog state tracking challenge. In Proceedings of the 15th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), pages 263-272, Philadelphia, PA, U.S.A. Association for Computational Linguistics.

Matthew Henderson, Blaise Thomson, and Jason D. Williams. 2014b. The third dialog state tracking challenge. In 2014 IEEE Spoken Language Technology Workshop (SLT), pages 324-329. IEEE.

Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2020. The curious case of neural text degeneration.

Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue.

Yutai Hou, Yijia Liu, Wanxiang Che, and Ting Liu. 2018. Sequence-to-sequence data augmentation for dialogue language understanding. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1234-1245, Santa Fe, New Mexico, USA. Association for Computational Linguistics.

Di Jin, Shuyang Gao, Jiun-Yu Kao, Tagyoung Chung, and Dilek Hakkani-Tur. 2019. MMM: Multi-stage multi-task learning for multi-choice reading comprehension.

John F. Kelley. 1984. An iterative design methodology for user-friendly natural language office information applications. ACM Transactions on Information Systems (TOIS), 2(1):26-41.

Hanjoo Kim, Minkyu Kim, Dongjoo Seo, Jinwoong Kim, Heungseok Park, Soeun Park, Hyunwoo Jo, KyungHyun Kim, Youngil Yang, Youngkwan Kim, et al. 2018. NSML: Meet the MLaaS platform with a real-world case study. arXiv preprint arXiv:1810.09957.

Diederik P. Kingma and Jimmy Ba. 2017. Adam: A method for stochastic optimization.

Adarsh Kumar, Peter Ku, Anuj Kumar Goyal, Angeliki Metallinou, and Dilek Hakkani-Tur. 2020. MA-DST: Multi-attention based scalable dialog state tracking.

Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478-5483, Florence, Italy. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Shuyang Li, Jin Cao, Mukund Sridhar, Henghui Zhu, Shang-Wen Li, Wael Hamza, and Julian McAuley. 2021. Zero-shot generalization in dialog state tracking through generative question answering. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1063-1074, Online. Association for Computational Linguistics.

Xiujun Li, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2017. A user simulator for task-completion dialogues.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.

Biswesh Mohapatra, Gaurav Pandey, Danish Contractor, and Sachindra Joshi. 2020. Simulated chats for task-oriented dialog: Learning to generate conversations from instructions. arXiv preprint arXiv:2010.10216.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Jost Schatzmann, Blaise Thomson, Karl Weilhammer, Hui Ye, and Steve Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Companion Volume, Short Papers, pages 149-152, Rochester, New York. Association for Computational Linguistics.

Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.

Pararth Shah, Dilek Hakkani-Tür, Gokhan Tür, Abhinav Rastogi, Ankur Bapna, Neha Nayak, and Larry Heck. 2018. Building a conversational agent overnight with dialogue self-play. arXiv preprint arXiv:1801.04871.

Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge dataset and models for dialogue-based reading comprehension.

Nako Sung, Minkyu Kim, Hyunwoo Jo, Youngil Yang, Jingwoong Kim, Leonard Lausen, Youngkwan Kim, Gayoung Lee, Donghyun Kwak, Jung-Woo Ha, et al. 2017. NSML: A machine learning platform that enables you to focus on your models. arXiv preprint arXiv:1712.05902.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Chien-Sheng Wu, Andrea Madotto, Ehsan Hosseini-Asl, Caiming Xiong, Richard Socher, and Pascale Fung. 2019. Transferable multi-domain state generator for task-oriented dialogue systems. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 808-819, Florence, Italy. Association for Computational Linguistics.

Kang Min Yoo, Hanbit Lee, Franck Dernoncourt, Trung Bui, Walter Chang, and Sang-goo Lee. 2020. Variational hierarchical dialog autoencoder for dialog state tracking data augmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 3406-3425, Online. Association for Computational Linguistics.

Kang Min Yoo, Youhyun Shin, and Sang-goo Lee. 2019. Data augmentation for spoken language understanding via joint variational generation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7402-7409.

Yichi Zhang, Zhijian Ou, and Zhou Yu. 2020. Task-oriented dialog systems that consider multiple appropriate responses under the same context. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 9604-9611.

Tiancheng Zhao and Maxine Eskenazi. 2018. Zero-shot dialog generation with cross-domain latent actions. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 1-10.
| [
"https://github.com/budzianowski/multiwoz"
] |
[] | [
"Judith G Cohen ",
"Andrew Mcwilliam ",
"Stephen Shectman ",
"Ian Thompson ",
"Norbert Christlieb ",
"Jorge Melendez ",
"Solange Ramirez ",
"Amber Swensson ",
"Franz-Josef Zickgraf "
] | [] | [] We have carried out a detailed abundance analysis using high dispersion spectra from HIRES at Keck for a sample of 16 carbon stars found among candidate extremely metal-poor (EMP) stars from the Hamburg/ESO Survey. We find that the Fe-metallicities for the cooler C-stars (T eff ∼5100 K) have been underestimated by a factor of ∼10 by the standard HES survey tools. The results presented here provided crucial supporting data used by Cohen et al. (2006a) to derive the frequency of C-stars among EMP stars. C-enhancement in these EMP C-stars appears to be independent of Fe-metallicity and approximately constant at ∼1/5 the solar ǫ(C). The C-enhancement shows some evidence of decreasing with decreasing T eff (increasing luminosity), presumably due to mixing and dredge-up of C-depleted material. The mostly low 12 C/ 13 C ratios (∼4) and the high N abundances in many of these stars suggest that material which has been through proton burning via the CN cycle comprises most of the stellar envelope. C-enhancement in this sample is associated with strong enrichment of heavy nuclei beyond the Fe-peak for 12 of the 16 stars. The remaining C-stars from the HES, which tend to be the most Fe-metal poor, show no evidence for enhancement of the heavy elements. Very high enhancements of lead are detected in some of the C-stars with highly enhanced Ba. The strong lead lines, the high Ba/Eu ratios, and the high ratios of abundances of the diagnostic elements in the first and second s-process peaks demonstrate that the s-process is responsible for the enhancement of the heavy elements for the majority of the C-stars in our sample. The low 12 C/ 13 C ratios and large C and N enhancements of the EMP C-stars are more extreme than those of intrinsic AGB C-stars of near solar Fe-metallicity, but closer to the composition of CH stars. Our subsample of EMP C-stars without s-process enhancement is reminiscent of the R-type C-stars in the solar neighborhood; thus we expect that they are formed by similar mechanisms. We suggest that both the s-process rich and Ba-normal C-stars result from phenomena associated with mass transfer in binary systems. This leads directly to the progression from C-stars to CH stars and then to Ba stars as the Fe-metallicity increases. | 10.1086/504597 | [
"https://arxiv.org/pdf/astro-ph/0603582v1.pdf"
] | 15,347,378 | astro-ph/0603582 | 1f952d1a75a3152e2d5409beb83c5c284dddaf3d |
Carbon Stars in the Hamburg/ESO Survey: Abundances

Judith G. Cohen, Andrew McWilliam, Stephen Shectman, Ian Thompson, Norbert Christlieb, Jorge Melendez, Solange Ramirez, Amber Swensson, and Franz-Josef Zickgraf

Subject headings: halo stars; carbon enhancement; stars: abundances
We have carried out a detailed abundance analysis using high dispersion spectra from HIRES at Keck for a sample of 16 carbon stars found among candidate extremely metal-poor (EMP) stars from the Hamburg/ESO Survey. We find that the Fe-metallicities for the cooler C-stars (T eff ∼5100 K) have been underestimated by a factor of ∼10 by the standard HES survey tools. The results presented here provided crucial supporting data used by Cohen et al. (2006a) to derive the frequency of C-stars among EMP stars.

C-enhancement in these EMP C-stars appears to be independent of Fe-metallicity and approximately constant at ∼1/5 the solar ǫ(C). The C-enhancement shows some evidence of decreasing with decreasing T eff (increasing luminosity), presumably due to mixing and dredge-up of C-depleted material. The mostly low 12 C/ 13 C ratios (∼4) and the high N abundances in many of these stars suggest that material which has been through proton burning via the CN cycle comprises most of the stellar envelope.

C-enhancement in this sample is associated with strong enrichment of heavy nuclei beyond the Fe-peak for 12 of the 16 stars. The remaining C-stars from the HES, which tend to be the most Fe-metal poor, show no evidence for enhancement of the heavy elements. Very high enhancements of lead are detected in some of the C-stars with highly enhanced Ba. The strong lead lines, the high Ba/Eu ratios, and the high ratios of abundances of the diagnostic elements in the first and second s-process peaks demonstrate that the s-process is responsible for the enhancement of the heavy elements for the majority of the C-stars in our sample.

The low 12 C/ 13 C ratios and large C and N enhancements of the EMP C-stars are more extreme than those of intrinsic AGB C-stars of near solar Fe-metallicity, but closer to the composition of CH stars. Our subsample of EMP C-stars without s-process enhancement is reminiscent of the R-type C-stars in the solar neighborhood; thus we expect that they are formed by similar mechanisms.

We suggest that both the s-process rich and Ba-normal C-stars result from phenomena associated with mass transfer in binary systems. This leads directly to the progression from C-stars to CH stars and then to Ba stars as the Fe-metallicity increases.
Introduction
We are engaged in a large scale project to find extremely metal-poor (henceforth EMP) stars, characterized by [Fe/H] ≤ −3.0 dex, by exploiting the Hamburg/ESO Survey (HES) database. The HES is an objective prism survey from which it is possible to efficiently select a variety of interesting stellar objects, among them EMP stars (Christlieb 2003). Renewed interest in EMP carbon-rich halo stars has been driven by the discovery of a number of very metal-poor, carbon-rich objects with diverse additional peculiarities, particularly s-process and/or r-process enrichment; by the discovery of the most iron-poor star known, HE0107−5240, at [Fe/H] = −5.3, which is also very C-rich, recently surpassed by HE1327−2326, with similar characteristics at [Fe/H] ∼ −5.6 dex (Frebel et al. 2005); and by the known C-rich binary M dwarf G77-61, established by Plez & Cohen (2005) to have [Fe/H] ∼ −4 dex.
Broadly speaking, when ǫ(O) exceeds ǫ(C) in cool stars, the oxide molecules (CO, TiO, etc.) dominate in the outer layers of the stellar atmosphere. (This is the normal condition for solar abundance ratios.) However, if ǫ(C) is larger than ǫ(O), then after the formation of CO extra C remains rather than extra O, and carbon compounds such as C 2 , CH and CN will dominate. The strong bands of C 2 are then prominent in the optical spectrum of such stars, if they are cool enough; hence the origin of the name carbon stars (C-stars). Our operational definition of a C-star is one whose spectrum shows the blue-degraded band of C 2 at 5160Å, which is the most prominent band of this molecule within the wavelength range of the spectra discussed here. If no C 2 bands are detected, but [C/Fe] > 1 dex, we denote a star to be C-enhanced. The strength of the C 2 bands will be a function of T eff , ǫ(C), and to a lesser extent, log(g), [Fe/H] and ǫ(O).
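As a minimal numerical sketch of this criterion (the solar C value is the one adopted later in this paper; the solar O value is an assumed Anders & Grevesse 1989 number, since we generally lack O measurements for these stars):

LOG_EPS_C_SUN = 8.59   # solar C adopted in this paper
LOG_EPS_O_SUN = 8.93   # assumed solar O (Anders & Grevesse 1989)

def is_carbon_star(feh, c_fe, o_fe):
    """True if eps(C) > eps(O), i.e. free carbon remains after CO formation."""
    log_eps_c = LOG_EPS_C_SUN + feh + c_fe
    log_eps_o = LOG_EPS_O_SUN + feh + o_fe
    return log_eps_c > log_eps_o

# A hypothetical EMP giant with [Fe/H] = -2.5, [C/Fe] = +2.0 and
# [O/Fe] = +0.5 has eps(C)/eps(O) ~ 10**(8.09 - 6.93) ~ 14, i.e. a C-star.
print(is_carbon_star(-2.5, 2.0, 0.5))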
The purpose of the present paper is to carry out detailed chemical abundance analyses of a sample of 16 EMP C-stars selected from the HES. This provides a broad database to establish the Fe-metallicity for EMP C-stars. The results presented here provided crucial supporting data used by Cohen et al. (2006a) to derive the frequency of C-stars among EMP stars. We use the abundance ratios derived here for EMP C-stars to discuss the origin of the C-star phenomenon among EMP stars, which we attribute in toto to phenomena associated with binary systems.
After a description of the stellar sample in §2, readers who are not interested in the details of the abundance analyses should proceed to §4, then §4.1, then skip to §5.
We favor scenarios of C-star formation among the EMP halo stars resulting from the evolution of binary systems, including mass transfer. The evidence supporting this is described in §5. §5.5 compares the derived abundances of the EMP C-stars to those of various types of near-solar-[Fe/H] disk C-stars. The implications of our hypothesis for C-star formation among the EMP stars as applied to stars of higher and lower Fe-metallicity are described in §6. A brief summary concludes the paper.
The Stellar Sample
The normal procedures outlined by Christlieb (2003) to isolate extremely metal-poor (EMP) stars from the candidate lists produced by the HES were followed. In brief, candidate EMP stars were selected from the HES. This was followed by vetting via moderate resolution spectroscopy at 5-m class telescopes to eliminate the numerous higher abundance interlopers. The follow up spectra for the stars discussed here were obtained either with the Double Spectrograph (Oke & Gunn 1982) at the Hale Telescope at Palomar Mountain or with the Boller and Chivens spectrograph on the Baade and Clay Telescopes at the Las Campanas Observatory during the period from 2001 to the present.
These follow-up spectra are used to determine an estimate of the metallicity of the star, which is much more accurate than can be derived from the low resolution objective prism spectra of the HES itself. This is accomplished via a combination of the strength of absorption in Hδ (determining T eff ) and in the Ca II line at 3933Å (the KP index), which determines [Fe/H] once T eff , and hence log(g), are specified. The calibration of Hδ plus KP index to [Fe/H] is ultimately based on the results from high resolution abundance studies of standard stars; but such calibrations implicitly assume that the relation between these line indices and [Fe/H] is the same for both program and standard stars. We denote the resulting metallicity value as [Fe/H](HES). The specific algorithm adopted by the HES is identical to that used until recently by the HK Survey of Beers, Preston & Shectman (1985) and Beers, Preston & Shectman (1992).

Stars were chosen for observation at high resolution with HIRES (Vogt et al. 1994) at the Keck I Telescope primarily on the basis of low predicted metallicity; all stars with [Fe/H](HES) ≤ −2.9 dex were put on the HIRES observing list, as well as selected other stars of interest. This paper is dedicated to an exploration of those of these stars which turned out from their moderate resolution spectra to be C-stars. A more complete discussion of the selection of our C-star sample from the HES and the frequency of C-stars within this sample will be given in Cohen et al. (2006c).

We present here detailed abundance analyses for 15 C-stars from the HES observed at the Keck I telescope. One of these is a newly discovered short period double lined spectroscopic binary. We denote this group plus the dwarf C-star HE0007-1832 discussed in Cohen et al. (2004) as the primary sample. The augmented sample also includes two C-enhanced dwarfs selected from the HES and analyzed in the same way by our group in our previously published papers, HE0024-2523, discussed in Cohen et al. (2002), Carretta et al. (2002), and in great detail in Lucatello et al. (2003), and HE2148-1247, discussed in Cohen et al. (2003), both of which show highly enhanced lead in their spectra, plus a third C-enhanced star whose analysis will be presented in Cohen et al. (2006b).
Throughout this paper we ignore the two known ultra-metal poor stars HE0107-5240 ) and HE1327-2326 (Frebel et al. 2005). More than 7,000 EMP candidates were searched to turn up these two stars, and there are no stars in the Galaxy known to us with −5.2 ≤ [Fe/H] ≤ −4.3 dex. Although both of the known ultra-metal-poor stars are C-stars with extremely large C-enhancements, we are not certain that they represent a continuation towards lower Fe-metallicities of the stars discussed here, and hence we have chosen to not consider them here.
Stellar Parameters
In order to determine stellar parameters for these stars, particularly the cooler ones, ideally we would compute a special set of model atmospheres with the abundances, particularly those of CNO, set to values appropriate for each star. This would ensure a proper accounting of the molecular absorption. We have not done this. Instead we have followed our normal procedures, described in Cohen et al. (2002), of matching observed broad band photometry V-I, V-J, and V-K to predicted grids of synthetic colors by Houdashelt, Bell & Sweigart (2000). Cohen et al. (2002) demonstrate that there is good agreement between the Kurucz and MARCS temperature scales. They find that the V−K relations of Alonso, Arribas & Martinez-Roger (1996, 1999) extrapolated to EMP stars give T eff values ∼100 K cooler than those adopted here for giants, while the T eff deduced from V−K for stars near the main sequence turnoff are in good agreement.
We then rely on an appropriate 12 Gyr isochrone from the grid of Yi et al. (2002) to obtain the surface gravity for each star. The resulting stellar parameters, which have been derived with no reference to the spectra themselves, are given in Table 2. By using the colors with the larger wavelength baselines, V-I, V-J and V-K, to determine our T eff values, avoiding B-V and J-K, we achieve consistency to within ±150 K between the T eff determinations from each of these three colors for all stars.

We have noticed that the B-V colors of the HES C-stars appear too red. This behavior is expected, since the flux in the B band is reduced much more by molecular bands in C-stars than is the flux in the V band. B-V colors thus tend to give T eff values that are too low, presumably because molecular absorption in one or both of the filter bandpasses alters a color which, because of its small wavelength coverage, is relatively insensitive to T eff even under the best of circumstances. This problem with B-V colors was pointed out by, among others, Preston & Sneden (2001). J-K is not very sensitive to T eff , changing by only 0.02 mag for a ∆T eff of 100 K. Given that many of the HES stars are sufficiently faint that the errors in their 2MASS photometry exceed 0.05 mag at K, we avoid the use of J-K colors here.
The IR photometry we use is taken from 2MASS (Skrutskie et al. 1997; Cutri et al. 2003). We have obtained new photometry at V and I for many of the stars in our sample, using ANDICAM images taken for this purpose over the past year via a service observing queue on the 1.3m telescope at CTIO operated by the SMARTS consortium. ANDICAM is a dual channel camera constructed by the Ohio State University instrument group. Our ANDICAM program requires photometric conditions, and additional standard star fields, charged to our ANDICAM allocation through NOAO, are always taken for us.
Our new ANDICAM photometry for our sample of C-stars from the HES, as well as other relevant observational data for these stars, is presented in Table 1. Table 2 gives the resulting stellar parameters for these stars.
The uncertainty in log(g) arising from our 150 K uncertainty in T eff depends on the slope of the relationship between T eff and log(g) along the adopted isochrone. For stars close to the main sequence turnoff and for subgiants, this is small, and the uncertainty in log(g) is 0.1 dex. However, for stars along the RGB, it reaches 0.4 dex.
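Schematically, the procedure reduces to two one-dimensional interpolations: T eff from a synthetic color grid, then log(g) from the isochrone. A sketch with made-up grid values (the real Houdashelt et al. and Yi et al. grids are much denser and multi-color):

import numpy as np

# Hypothetical slice of a synthetic color grid: V-K color vs Teff.
grid_teff = np.array([4750.0, 5000.0, 5250.0, 5500.0])
grid_vk   = np.array([2.45, 2.20, 1.98, 1.80])   # illustrative values only

# Hypothetical 12 Gyr isochrone sampled as (Teff, log g) along the RGB.
iso_teff = np.array([4750.0, 5000.0, 5250.0, 5500.0])
iso_logg = np.array([1.4, 2.0, 2.6, 3.2])        # illustrative values only

def photometric_parameters(vk_observed):
    # np.interp needs a monotonically increasing x axis, so interpolate
    # Teff(V-K) with the color axis reversed (V-K decreases as Teff rises).
    teff = np.interp(vk_observed, grid_vk[::-1], grid_teff[::-1])
    logg = np.interp(teff, iso_teff, iso_logg)
    return teff, logg

print(photometric_parameters(2.10))  # -> roughly (5114 K, 2.3)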
HIRES Observations and Abundance Analysis
Observations with HIRES at the Keck I Telescope were obtained during several runs from Sep 2001 to June 2005. The weather conditions varied from night to night. A spectral resolution of 45,000 was achieved using a 0.86 arcsec wide slit projecting to 3 pixels in the HIRES focal plane CCD detector. For those stars presented here with V > 15 mag, a spectral resolution of 34,000 was used, with the exception of HE1410−0004, which was observed at the higher spectral resolution. The spectra cover the region from 3840 to 5330Å with no gaps between orders for λ < 5000Å, and only small gaps thereafter. Each exposure was broken up into 1200 sec segments to expedite removal of cosmic rays. The goal was to achieve a SNR of 100 per spectral resolution element in the continuum at 4500Å; a few spectra have lower SNR. This SNR calculation utilizes only Poisson statistics, ignoring issues of cosmic ray removal, night sky subtraction, flattening, etc. The observations were carried out with the slit length aligned to the parallactic angle.
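Under pure Poisson statistics the SNR per resolution element is simply the square root of the detected photon count, as in this one-line sketch:

import math

def poisson_snr(n_photons: float) -> float:
    """SNR per resolution element if photon noise is the only error source."""
    return math.sqrt(n_photons)

# Reaching SNR = 100 in the continuum requires ~10^4 detected photons
# per spectral resolution element (3 pixels at the chosen slit width).
print(poisson_snr(1.0e4))  # -> 100.0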
The recently installed upgraded HIRES detector designed and built by the Lick Observatory engineering staff, led by S. Vogt, was used for three C-stars observed in 2005: HE1410−0004, HE1443+0113, and HE1434−1442. HIRES-R was used for the first star, and HIRES-B for the other two. We thus obtain, among other desirable things, more complete spectral coverage, reaching in a single exposure from 4020 to 7800Å with HIRES-R and from 3200 to 5900Å with HIRES-B for the instrument configurations we use. Note that only for one star in the present sample does the included spectral range reach beyond 6000Å. Details of the HIRES exposures, including the exposure times and the SNR per spectral resolution element in the continuum, are given in Table 1.
This set of HIRES data was reduced using a combination of Figaro scripts and the software package MAKEE. Insofar as possible, both the spectral reduction and the abundance analyses presented here are identical to the procedures described in our earlier paper on EMP dwarfs from the HES.
Equivalent Widths and Abundance Analysis
The search for absorption features present in our HIRES data and the measurement of their equivalent widths (W λ ) were done automatically with a FORTRAN code, EWDET, developed for a globular cluster project. Details of this code and its features are given in Ramírez et al. (2001). The strong molecular bands made it impossible to use the full spectral range; selected regions were eliminated prior to searching for absorption features. This also applied to the radial velocity determination procedure we use, described in Cohen et al. (2004). Extensive hand checking of the W λ for blending by molecular features was necessary, as line profiles were frequently distorted by blends due to strong molecular blanketing or by low S/N. The spectrum of HE1410+0213 is so severely affected by its very strong molecular bands that only the region beyond 5160Å (plus a few strong lines near 4920Å) could be used. For HE1443+0113 only one exposure is available, which had to be terminated at 550 sec due to deteriorating weather conditions. It has a very low SNR, and only the strongest features could be measured, i.e. CH, the Na doublet, the Mg triplet, a few Fe I lines, and two Ba II lines. The W λ for this spectrum are more uncertain than those of the others presented here.
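EWDET itself is a FORTRAN code, but the underlying operation of fitting a line profile and converting it to an equivalent width can be sketched as follows, assuming a Gaussian profile on a normalized continuum:

import numpy as np
from scipy.optimize import curve_fit

def gaussian_absorption(wl, depth, center, sigma):
    """Normalized-continuum absorption line profile."""
    return 1.0 - depth * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

def equivalent_width(wl, flux, center_guess):
    """Fit a Gaussian and return W_lambda in the same units as `wl`.

    For a Gaussian, W = depth * sigma * sqrt(2*pi).
    """
    p0 = [0.5, center_guess, 0.1]
    (depth, center, sigma), _ = curve_fit(gaussian_absorption, wl, flux, p0=p0)
    return depth * abs(sigma) * np.sqrt(2.0 * np.pi)

# Synthetic test: a line of known equivalent width (~0.125 A = 125 mA).
wl = np.linspace(4920.0, 4922.0, 200)
flux = gaussian_absorption(wl, depth=0.5, center=4921.0, sigma=0.1)
print(equivalent_width(wl, flux, 4921.0))  # ~0.125 A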
The atomic data and the list of unblended lines used (ignoring those in the regions cut out due to the strong molecular bands) are identical to those of Cohen et al. (2004). We adopt logǫ(Fe) = 7.45 dex for iron, following the revisions in the solar photospheric abundances suggested by Asplund et al. (2000), Prochaska et al. (2000) and Holweger (2001). Abundances were determined from equivalent widths, except for C, N (where we synthesized the region of the CN bandhead near 3885Å), and Pb. For C, we synthesized the region of the CH band near 4320Å, which is considerably weaker than the main bandhead of the G band near 4305Å, and hence still usable even in these C-stars. For the coolest C-stars with the strongest bands, even this region off the main bandhead is close to saturation. The solar abundances we adopt are those of Anders & Grevesse (1989), slightly updated as described in Cohen et al. (2004). A synthesis using our line list of CH and CN features combined with the Kurucz (1993) solar model matches the solar FTS spectrum of Wallace, Hinkle & Livingston (1998) with our initially adopted C and N solar abundances, logǫ(C) = 8.59 dex and logǫ(N) = 7.93 dex. These are close to those of Grevesse & Sauval (1998), but larger than those of Asplund et al. (2004, 2005), which are 0.2 dex smaller for C and 0.13 dex smaller for N. Once the C and N abundances were determined for a star, we synthesized the region of the 4057Å Pb I line to derive the Pb abundance.
The equivalent widths and atomic parameters used in the analysis of the primary sample of 16 C-stars selected as EMP candidates from the HES are tabulated in Table 3, 4 and 5. W λ for the additional redder lines seen only in the three C-stars observed with the upgraded HIRES detector are given in Table 6. Occasionally, for crucial elements where no line was securely detected in a star, we tabulate upper limits to W λ .
As in our previous work, we use the HFS components from Prochaska et al. (2000) for the lines of Sc II, Mn I, and Co I that we utilize here. For Ba II, we adopt the HFS from McWilliam (1998). We use the laboratory spectroscopy of Lawler, Bonvallet & Sneden (2001a) and Lawler et al. (2001b) to calculate the HFS patterns for La II and for Eu II. We adopt the isotopic and HFS shifts for the 4057Å line of Pb I given by Van Eck et al. (2003); see that paper for references to the laboratory and theoretical atomic physics. McWilliam et al. (1995b) give the HFS pattern for the Na D lines. Although the difference between log(ǫ(Na)) derived from the full HFS pattern and from just using two lines to represent the doublet is small, <0.08 dex, we use the full HFS pattern for these lines. A synthesis incorporating the list of hyperfine and isotopic components given by Hobbs, Thorburn & Rebull (1999) was used for the Li I resonance line, for which an upper limit to W λ was measured in one star. Spectral syntheses are carried out for each of the features with HFS to match the observed W λ and thus derive the abundance of the relevant species. For Pb, because of the strong blending by CH features, the spectral synthesis used to determine the Pb abundance included lines of 12 CH, 13 CH, Pb and other metals.
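Generically, matching the observed W λ by varying ǫ(X) in successive syntheses is a one-dimensional root find. In the sketch below, synthesize_wlambda is a hypothetical stand-in for a call to a spectrum synthesis code such as MOOG:

from scipy.optimize import brentq

def abundance_from_wlambda(w_observed, synthesize_wlambda,
                           log_eps_lo=-2.0, log_eps_hi=4.0):
    """Solve for log eps(X) such that the synthesized equivalent width of a
    feature (including its HFS components) matches the measured one.

    `synthesize_wlambda(log_eps)` is a hypothetical wrapper around a
    spectrum-synthesis call; W grows monotonically with abundance here.
    """
    return brentq(lambda log_eps: synthesize_wlambda(log_eps) - w_observed,
                  log_eps_lo, log_eps_hi)

# Toy monotonic curve of growth standing in for a real synthesis:
toy = lambda log_eps: 10.0 * 10 ** (0.5 * log_eps)   # mA
print(abundance_from_wlambda(50.0, toy))             # -> ~1.40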
Recall that the amount of C 2 , CH, and CN formed is dependent upon the amount of free carbon present (i.e. the amount not locked-up in CO), and that in general we do not have measurements of the O abundance in these EMP C-stars. Thus our derived C abundances are dependent on the choice made for the O abundance through molecular formation and equilibrium.
The abundance analysis is carried out using a current version of the LTE spectral synthesis program MOOG (Sneden 1973). We employ the grid of stellar atmospheres from Kurucz (1993) without convective overshoot, when available. For each star we compute the abundances of the observed species from the measured W λ values using the four stellar atmosphere models from this grid closest in T eff and log(g) to the star's parameters, and then interpolate the resulting abundances to the appropriate T eff and log(g) given in Table 2.
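The interpolation among the four bracketing models amounts to a bilinear interpolation of the derived abundances in (T eff , log g); a minimal sketch with hypothetical numbers:

def bilinear(teff, logg, t0, t1, g0, g1, a00, a01, a10, a11):
    """Interpolate abundances derived from the four bracketing models.

    a00 is the abundance from the (t0, g0) model, a01 from (t0, g1),
    a10 from (t1, g0), and a11 from (t1, g1).
    """
    x = (teff - t0) / (t1 - t0)
    y = (logg - g0) / (g1 - g0)
    return (a00 * (1 - x) * (1 - y) + a10 * x * (1 - y)
            + a01 * (1 - x) * y + a11 * x * y)

# Star at Teff = 5170 K, log g = 2.4, bracketed by 5000/5250 K and
# log g = 2.0/2.5 models; the four abundance values are illustrative.
print(bilinear(5170.0, 2.4, 5000.0, 5250.0, 2.0, 2.5,
               4.62, 4.58, 4.70, 4.66))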
Our HIRES spectra show that HE0012−1441 is a double lined spectroscopic binary. Since it is rather faint, spectra were taken on each of three consecutive nights with the intention of summing them to reach a high SNR. Comparison of the summed spectra for each of the three nights revealed the presence of double lines as well as obvious differences in the velocity separation of the two components over the timespan of 48 hours, as is shown in Fig. 1. The v r of the primary decreased by 6 km s −1 over that timespan. The separation of the two components was largest on the last night, when it reached 28 km s −1 . Thus, this binary must have a relatively short period, and is probably similar to HE0024-2523 (Carretta et al. 2002; Lucatello et al. 2003).

Only the sum of the three 1200 sec HIRES exposures from the third night was used to determine W λ ; this night had the largest velocity separation, and hence was the easiest from which to measure the W λ of the primary component. This, of course, reduces the SNR of the spectrum below that expected on the basis of the total integration time, and below the desired value. We have assumed that the secondary, which contributes perhaps 1/5 of the total W λ for selected lines, does not seriously affect the colors used to determine T eff , which may not be a valid assumption. Furthermore, the lines from the secondary appear to be wider than those of the primary, suggesting a faint cool dwarf as the secondary star. (The secondary is too luminous to be a white dwarf with age ∼10 Gyr.)

For this star only, the W λ were not used directly; the W λ listed in Table 3 for this star are for guidance only. Instead the abundance was determined by matching the observed line profile for each spectral feature with the predicted one, varying ǫ(X). This ensured proper treatment of the lines partially blended by the second component in the binary system. A luminosity ratio of 4 between the two components throughout the relevant wavelength range of the HIRES spectra was assumed in determining the W λ for this star.
The microturbulent velocity (v t ) of a star can be determined spectroscopically by requiring the abundance to be independent of the strength of the lines. However, there are fewer usable Fe I lines in the complex spectra of these C-stars than in stars with normal C and N and the same stellar parameters, due to the rejection of large regions of the spectrum where the molecular features are strongest. Furthermore, the measurement uncertainties of the remaining lines are larger, again due to possible molecular contamination and difficulties with continuum determination that do not occur in EMP stars with normal C and N. Based on our as yet unpublished analyses of a large sample of EMP giants from the HES, we set v t to 1.6 to 1.8 km s −1 , depending on T eff . We checked in each case that a plot of derived Fe I abundance as a function of W λ looked reasonable, but did not try to iterate on v t to achieve a perfectly constant Fe I abundance.
The abundances presented here could be improved. Spectral syntheses could be used for additional elements. A better determination of T eff and of v t could be attempted. However, Table 7 demonstrates that the results achieved here are reasonably good. This table gives the slope of a linear fit to the derived Fe I abundance from each observed line as a function of EP, W λ /λ, and line wavelength. Assuming a perfect analysis, these slopes should all be zero.
The full range in EP for the observed Fe I lines is only 3 eV. The mean slope of the derived Fe I abundance with EP for the 11 C-stars with entries in Table 7 is +0.02 dex/eV with σ = 0.05 dex/eV. We need to demonstrate that this mean and σ are consistent with our known uncertainties. The value of 0.05 dex/eV found for σ corresponds to a change in T eff of 250 K, somewhat larger than our adopted uncertainty in T eff of 150 K discussed in §2.1. However, the random component of the uncertainty in the Fe abundance derived from a single Fe I line in a single star, due primarily to errors in the gf value assigned to the line, is at least 0.2 dex. This inflates the σ of the measured slopes of Fe I abundance versus EP beyond that expected purely from the adopted T eff uncertainty. To support this assertion we note that the correlation coefficients for the relationship within each star are low (|r| < 0.25 in all cases).
The slopes for the Fe I abundance versus reduced equivalent width for the same set of 11 stars have a mean of −0.04 dex with σ = 0.12 dex. The spread in this slope is completely consistent with our adopted uncertainty in v t of 0.2 km s −1 . The set of correlation coefficients are low (|r |< 0.35 in all cases) here also.
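As a worked illustration of these diagnostics, the short script below fits the slope and correlation coefficient of derived abundance against EP; the same machinery applies to W λ /λ and wavelength. The line data are invented for the example, not taken from Table 7.

```python
import numpy as np
from scipy.stats import linregress

# EP (eV) and derived log eps(Fe) for the Fe I lines of one star
# (illustrative values only, not from Table 7)
ep    = np.array([0.9, 1.5, 2.2, 2.4, 3.0, 3.6])
abund = np.array([4.92, 4.97, 4.88, 5.01, 4.95, 4.99])

fit = linregress(ep, abund)
print(f"slope = {fit.slope:+.3f} dex/eV, r = {fit.rvalue:+.2f}")

# Per the sensitivity quoted in the text, a slope of +/-0.05 dex/eV
# corresponds to a Teff offset of roughly +/-250 K, so slopes and
# correlations this small are consistent with the adopted errors.
```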
The results for the abundances of typically 20 species in each star (only 9 in HE1410+0213, and only five for HE1443+0113) are given in Tables 8 to 10. We tabulate both logǫ(X) and [X/Fe]; our adopted solar abundances can be inferred directly from these tables. The 12 C/ 13 C ratios determined from the CH and the C 2 bands are given in Table 12. Table 13 gives the changes in the deduced abundances for small changes in T eff , log(g), v t , W λ , and in the [Fe/H] of the model atmosphere used for an EMP giant with T eff ∼5200 K. The last column gives expected random uncertainties for [X/Fe] appropriate for a single star, combining in quadrature the uncertainties in [X/Fe] resulting from the errors in stellar parameters established in §2.1, i.e. an uncertainty of ±150 K in T eff , of ±0.4 dex in log(g), of ±0.5 dex in the metallicity assumed in the model atmosphere used for the analysis, of ±0.2 km s −1 for v t , and a contribution representing the errors in the measured equivalent widths. This last term is set at 20% (approximately equivalent to 0.08 dex abundance uncertainty, but dependent upon line strength) for a single detected line (which may be an underestimate for the complex spectra of the C-stars), and is scaled based on the number of detected lines. The contribution of the various terms, particularly that of log(g), which will be smaller for hotter stars, may vary somewhat with T eff . Systematic uncertainties, such as might arise from errors in the scale of the transition probabilities for an element, are not included in the entries in Table 13. Random errors in the gf value for a particular line are not relevant to this calculation provided that the same line list is used throughout.
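A minimal sketch of the quadrature combination just described, assuming (our reading of "scaled based on the number of detected lines") that the equivalent-width term scales as $1/\sqrt{N_{\rm lines}}$:

$$\sigma^2[{\rm X/Fe}] \;\approx\; \sigma^2_{T_{\rm eff}}(\pm 150\,{\rm K}) + \sigma^2_{\log g}(\pm 0.4\,{\rm dex}) + \sigma^2_{\rm [M/H]}(\pm 0.5\,{\rm dex}) + \sigma^2_{v_t}(\pm 0.2\,{\rm km\,s^{-1}}) + \frac{\sigma^2_{W_\lambda}}{N_{\rm lines}},$$

with $\sigma_{W_\lambda} \approx 0.08$ dex for a single detected line.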
The 12 C/ 13 C Ratios
We have measured the isotopic ratio 12 C/ 13 C for the C-stars from our sample with the highest SNR spectra using the line list for the 4300Å region of the G band of CH as described in Cohen et al. (2003). Spectral syntheses of the features of 13 CH at 4211.3, 4213.1, 4219.2, and 4221.8Å were used. We have verified for three stars whose HDS spectra were supplied by W. Aoki that our line list combined with our standard analysis procedures gives 12 C/ 13 C ratios derived from CH features differing from those derived by Aoki et al. (2001) or Aoki et al. (2002a) by 15% or less.
Spectrum synthesis for the C 2 bands was carried out based on the C 2 line list of Querci, Querci & Kunde (1971) and Querci, Querci & Tsuji (1974), as updated and supported on the web site of U. Jørgensen. The dissociation potential for C 2 was taken as 6.30 eV (Urdahl, Bao & Jackson 1991). The isotopic line shift depends on the ratio of the reduced mass of a diatomic molecule AB, m A m B /(m A + m B ), for its two isotopic variants. This ratio is 1.04 for C 2 and only 1.007 for CH when considering 12 C versus 13 C. Thus, as has been known for a long time, isotopic effects are considerably easier to detect in certain C 2 bands than in those of CH. For the G band of CH, one must study detailed profiles of individual lines within the band, which are often blends of multiple components of 12 CH or 13 CH. The situation for C 2 is very different. The strongest C 2 band within our spectral range is the (0,0) Swan band at 5160Å, which has a very small isotopic shift. However, the (1,0) bandhead for 12 C 13 C at 4744Å is separated from that of 12 C 12 C at 4737Å by ∼7Å, which is easily resolved even on moderate resolution spectra. The 13 C 13 C bandhead is ∼8Å further to the red at 4752Å; it can be glimpsed in the C-stars in our sample with the smallest 12 C/ 13 C ratios. Plates 26 and 29 of Keenan & McNeil (1976) show examples of spectra of C-stars with high and low 12 C/ 13 C ratios in this spectral region. Figure 2 illustrates the ease of separating the bandheads of 12 C 12 C and 12 C 13 C with the present much higher resolution data. Any uncertainty in the band electronic oscillator strength does not affect the determination of the 12 C/ 13 C ratio.
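The reduced-mass arithmetic behind these two ratios is easily checked; the snippet below is purely illustrative, using standard atomic masses in amu.

```python
# Reduced masses mu = mA*mB/(mA+mB) for the isotopologues, verifying
# the isotopic-shift ratios quoted in the text.
m12, m13, mH = 12.000, 13.0034, 1.0078

def mu(a, b):
    return a * b / (a + b)

print(mu(m12, m13) / mu(m12, m12))  # C2: ~1.04, as quoted
print(mu(m13, mH) / mu(m12, mH))    # CH: ~1.006 (quoted as 1.007)
```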
Because 12 C 13 C is a heteronuclear molecule, and 13 C has a non-zero nuclear spin, it has a different number of states than does 12 C 12 C, affecting the partition function as well as the number of transitions in a band. Since the spectrum synthesis program we use, MOOG, does not distinguish between isotopic molecular species, it is necessary to reduce the gf value of each 12 C 13 C line by a factor of 2 to account for the partition function difference with 12 C 12 C; 13 C 13 C lines would require a factor of 4 reduction (e.g. see Amoit 1983). It has been suggested that this factor of 2 may be too simplistic, and that a factor of 4 should be used instead to correct the gf values for the 12 C 13 C lines; if true, our derived 13 C abundances will need to be reduced by a factor of two. We note also that the band oscillator strengths for the isotopic species may not be exactly equal, due to wavefunction differences.
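In practice this correction is just a constant shift in log gf; the sketch below shows the arithmetic (the sample log gf value is our own illustration, not MOOG's actual input format).

```python
import math

# MOOG does not track isotopologues separately, so the 12C13C gf values
# are divided by 2 (13C13C by 4) before synthesis.  In log gf:
dloggf_1213 = -math.log10(2.0)   # -0.301 dex
dloggf_1313 = -math.log10(4.0)   # -0.602 dex

loggf_line = -1.250              # illustrative 12C13C line
print(loggf_line + dloggf_1213)  # value actually used: -1.551
```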
We performed a sanity check on our isotopic C 2 band line lists by synthesizing the 12 C 13 C and 12 C 12 C (1,0) bandheads. In this test we adopted a carbon abundance low enough that the bandheads were unsaturated. If we set 12 C/ 13 C = 1, and synthesize the spectrum in the region of the bandhead of the (1,0) Swan C 2 band, the 12 C 13 C and 12 C 12 C bandheads should then be roughly equal in strength, because although the 12 C 13 C isotopic bandhead has twice as many lines, its partition function is a factor of two larger. The ratio of absorption at the appropriate resulting bandheads in the synthesized spectrum is within 15% of unity, as expected.
Ionization Equilibrium and non-LTE
Since we have not used the high resolution spectra themselves to determine T eff or log(g), the ionization equilibrium is a stringent test of our analysis and procedures, including the determination of T eff and of log(g), as well as the assumption of LTE. For the 16 candidate EMP C-stars from the HES we analyze here, the Fe ionization equilibrium is shown in Fig. 3; we obtain a mean for logǫ(Fe:Fe II) − logǫ(Fe:Fe I) of −0.07 dex, with a 1σ rms scatter about the mean of 0.16 dex. This is an extremely good ionization equilibrium for stars with such complex spectra, and it demonstrates the validity of our determination of stellar parameters from photometry and isochrones. The ionization equilibrium for Ti is almost as good, with a mean of +0.08 dex, σ = 0.28 dex. The dispersion falls to 0.22 dex (and the mean becomes −0.03 dex) if one outlier with extremely weak Ti I lines is eliminated.
The Fe abundances derived from the neutral and ionized lines shift out of equilibrium by ∼0.25 dex for a 250 K change in T eff in this temperature regime (see Table 13). Our adopted uncertainty in T eff is ±150 K and the resulting uncertainty in log(g) is discussed in §2.1. Table 13 demonstrates these two factors alone can give rise to the dispersion observed among the sample stars in the Fe ionization equilibrium.
Following Cohen et al. (2004), we implement a non-LTE correction to logǫ(Al) of +0.60 dex for the lines of the Al I doublet near 3950Å (Baumüller & Gehren 1997). Only the 3961Å line can be used for most of these C-stars; the other line of this doublet is blended with molecular features. The 3905Å line of Si I, the only suitable line of this element in the wavelength range covered in most of our spectra, is also heavily blended with CH lines; we do not use it. Si abundances have been determined only for the small number of C-stars with the redder wavelength coverage achieved with the new HIRES detector, where unblended Si I lines near 5800Å become available. We use a non-LTE correction for Na abundances from the 5889, 5895Å doublet of −0.20 dex following Baumüller, Butler & Gehren (1998) and Takeda et al. (2003). Only one star (HE1410−0004) has a spectrum which reaches any of the O I features, yielding an upper limit to the 6300Å forbidden line and a marginal detection of the strongest line in the triplet at 7772Å. Kisselman (2001) pointed out the need for non-LTE corrections for the IR triplet line, and we use those calculated by Takeda (2003). We adopt a non-LTE correction for O for this star of −0.2 dex.
Comparison with Previous High Dispersion Analyses
Two of the 16 C-stars studied here are rediscoveries of stars found in the HK Survey (Beers, Preston & Shectman 1985, 1992), and have been previously observed at high dispersion. HE0058-0244 (CS 22183-015) was analyzed by Johnson & Bolte (2002). They relied on stellar parameters determined from the spectra themselves; their adopted T eff (5200±100 K) is 400 K lower than our value, and their log(g) is correspondingly 1 dex higher to preserve ionization equilibrium. The difference in their derived [Fe/H], which is 0.35 dex lower than our value, is due entirely to the differences in the adopted stellar parameters. We attempt to compare [X/Fe], modifying their values to our adopted T eff and log(g) using the sensitivity table (Table 13). With these corrections, which in some cases are large, we find pretty good agreement (within 0.25 dex), except for [Y/Fe] and [La/Fe], where our abundances are 0.4 dex lower than theirs. Norris, Ryan & Beers (1997), Bonifacio et al. (1998), Preston & Sneden (2001) and recently Aoki et al. (2002b) observed HE2356-0410 (CS 22957-027). The first two groups used B-V to establish T eff ; they both use a value 350 K cooler than that we adopted here. Preston & Sneden (2001) use a hybrid method with B-V corrected for molecular absorption to determine T eff , while Aoki et al. (2002b) used B-V and V-K for this purpose, ending up with a value for T eff only 100 K lower than ours. These differences in adopted stellar parameters directly produce the differences in derived metallicity: the first two analyses yield [Fe/H] values 0.3 dex lower than adopted here, while that of the last is only 0.05 dex lower. Although there is overall good agreement for the C abundances, the derived [N/Fe] ranges over 0.9 dex among the five analyses.
We compare our derived abundance ratios with those of Aoki et al. (2002b), as the stellar parameters adopted in these two analyses are similar. Their 12 C/ 13 C ratio is 8±2, in reasonable agreement with our value of 4.0±1.3 (see Table 12). Nevertheless, large differences occur in the abundance ratios of several species between our analysis and that of Aoki et al. (2002b), and large differences also occur in abundance ratios for many species in comparing our results with the other earlier analyses.
To isolate the cause of the large differences between the various analyses of these C-stars, we have compared our measured W λ with those published, when available. For HE0058-0244, the measured W λ for the 12 weak lines in common (mean W λ of 27.7 mÅ) tabulated by Johnson & Bolte (2002) for n-capture elements agree with ours with a mean difference of 1.1 mÅ and a σ of 3.2 mÅ. The only strong line in common is the 4554Å line of Ba II, with measured W λ of 177.5 and 166.2 mÅ in the two studies. For HE2356-0410 we have 18 lines in common with those tabulated by Norris, Ryan & Beers (1997). The W λ again agree well, with a mean difference of 0.3 mÅ and a σ of 12.2 mÅ. (The set of lines in common in this case are in general stronger lines, with a mean W λ of 62 mÅ.) The agreement with the W λ for this star tabulated by Bonifacio et al. (1998) is also very good, with σ of 6.1 mÅ.
Thus the differences in deduced abundances between the analysis presented here and those previously published for these two C-stars are not due to differences in measured W λ. They must arise from the choices made for the stellar parameters and in the details of the abundance analyses. In spite of these discrepancies, the overall characteristics of the abundance distribution in these two C-stars are inferred to be identical by each of the analyses. All five groups, for example, agree that HE2356-0410 has an extremely large enhancement of C, and has a very low [Ba/Fe]. The deviations from "normal" EMP stars are, both in general and for this particular star, very large, larger than the errors made by any of the independent analyses.
We previously published an analysis of the dwarf C-star HE0143-0441 in Cohen et al. (2004). The analysis presented here supersedes that one; the adopted T eff is 130 K cooler due to acquisition of better optical photometry in the interim and the W λ have also been rechecked for molecular blends since our earlier effort. The resulting [Fe/H] is 0.14 dex smaller than that of our previous work. The abundance ratios [X/Fe] derived from our two analyses are in good agreement, except for N. It appears there was a typo in the entry for logǫ(N) in Table 5 of Cohen et al. (2004) which is corrected in Table 8 here.
S. Lucatello's PhD thesis (Lucatello et al. 2006) will present a detailed abundance analysis for five of the C-stars analyzed here. That analysis should be definitive, with extensive use of spectral syntheses and maximum care in all aspects. The Si abundance should be recoverable with such syntheses, and a careful synthesis of the region of the 3961Å line of Al I would improve the Al abundances presented here.
Comments on Individual Elements
Iron
We confirm that our [Fe/H] determinations are largely free of molecular contamination by looking at the derived Fe-abundance in regions where molecular bands are absent as compared to those where they are (weakly) present. Regions where the molecular bands are strong in the spectrum of a sample star were ignored. Every star in our sample was checked to make sure that the Fe I abundance deduced from lines redward of 5160Å to the end of our spectral coverage, a region within which there are no molecular features, was the same as that for lines to the blue. For only two stars did a possible systematic difference appear, and it was only 0.1 dex, with the redder lines giving slightly lower Fe abundances. This supports the validity of our Fe-abundances.
The [Fe/H] values derived here are in some cases considerably higher than those predicted by the algorithm used on the moderate resolution HES follow-up spectra. Fig. 4 shows ∆[Fe/H], the difference between [Fe/H] as determined from a detailed abundance analysis of high dispersion spectra versus that from the application of the Beers et al. (1999) algorithm to the moderate resolution spectra. Initially, both for the HES and for the HK Survey, the B−V color was used to indicate T eff . Such a procedure is very convenient for the HES, for example, as rough colors can be measured directly from the objective prism spectra. This procedure, however, is a disaster for C-stars, as the B bandpass is much more affected by molecular absorption from CH and CN than is the V bandpass. Spuriously red B−V colors lead to spuriously low deduced T eff , which in turn leads to spuriously low deduced Fe-abundances. In practice this affects all C-stars cooler than 6000 K, and almost certainly some even hotter than that. The literature is full of references to C-star abundance analyses where the resulting high resolution [Fe/H] grossly (by ∼1 dex) exceeds [Fe/H](HK); see, for example, Norris, Ryan & Beers (1997) or Hill et al. (2000), for which the relevant [Fe/H](HK) are given in Barbuy et al. (1997). The origin of this problem was realized several years ago by several groups; see, for example, Preston & Sneden (2001). Both the HES and the HK Survey then switched to using the strength of absorption at Hδ as a T eff indicator.
However, there is still a problem for the cooler C-stars, as is shown in Fig. 4. The five C-stars HE0212−0557, HE1031−0020, HE1434−1442, HE1443+0113 and HE1509−0806 show ∆[Fe/H] ∼1 dex. Something is still wrong, but now only the cooler giants, T eff ∼ 5100 K, and not all of them, are affected. As was shown by Cohen et al. (2006a), the problem is the molecular absorption in the specific bandpasses used, particularly in the red continuum bandpass for the Hδ index. For the most extreme C-stars in our sample, the HP2 index measuring the Hδ absorption defined and used by the HK Survey becomes negative (i.e. implies that Hδ is in emission), which is not the case when one examines high dispersion spectra. This again leads to spuriously low T eff estimates and hence to spuriously low deduced Fe-abundances from the moderate resolution follow-up HES or HK Survey spectra. Both CN and CH contribute to the absorption there, with that of CN dominating at solar metallicity in the relevant T eff range. At the low Fe-metallicities considered here, the relative contribution of CN and of CH will depend primarily on T eff , with the C/N ratio and Fe-abundance also playing a role. The five C-stars with large ∆[Fe/H] (those discrepant in Fig. 4) are the five stars with the strongest absorption over the specific spectral region of interest (4144 to 4164Å).
In the regime of KP and HP2 corresponding to EMP giants, a change in HP2 of 0.5Å can produce a change in predicted [Fe/H](HES) of 0.5 dex. The filter bandpass of HP2 is 12Å wide. Thus, a 0.5Å error in the measured HP2 index corresponds to a 4% error in the continuum level. Looking at the spectra of the coolest C-stars in the 4000−4200Å region shown in Fig. 2 of Cohen et al. (2006a), in the relevant region for the feature and sideband bandpasses of HP2, it is difficult to see how an underestimate of the continuum level of this size will not occur.
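The continuum-error arithmetic here is simply the index error divided by the bandpass width:

$$\frac{\Delta C}{C} \;\approx\; \frac{\delta({\rm HP2})}{W_{\rm band}} \;=\; \frac{0.5\,\text{Å}}{12\,\text{Å}} \;\approx\; 4\%.$$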
Thus, the algorithm adopted by the HES, and until recently the HK Survey (Rossi et al. 2005), to deduce an Fe-metallicity from the low dispersion spectra systematically underestimates [Fe/H](HES) by a factor of ∼10 for certain cool C-stars (T eff ≲ 5100 K). The important implications of this are: the overestimate of the frequency of C-stars among EMP stars, and the overestimate of the yield of EMP stars in the HES and, by implication, the HK Survey. These issues are discussed briefly in Cohen et al. (2006a) and will be discussed at length in Cohen et al. (2006c).
We demonstrate that the systematic [Fe/H] underestimate for EMP C-stars does not arise from the random uncertainty in the measurement of the HP2 indices. Comparisons of HP2 indices measured from moderate dispersion spectra for 57 stars, most of which are C-normal, observed on different runs at the P200 or at both the P200 and Magellan telescopes, show a mean difference in measured HP2 indices of 0.18Å, with an rms dispersion about the mean of 0.65Å; details will be presented in Cohen et al. (2006c). Also note that the moderate resolution spectra of the 5 C-stars which show large ∆[Fe/H](HES) are from four different runs with the Double Spectrograph on the Hale Telescope.
The slight overlap of high and low ∆[Fe/H](HES) values at the boundary in T eff where this effect becomes important (∼5200 K) can be explained as resulting from observational uncertainties; recall that our adopted uncertainty in T eff for these C-stars is 150 K. Furthermore, this effect depends on the C abundance, the C/N ratio, and to a smaller extent [Fe/H], although the primary dependence is on T eff . Clearly, while using Hδ is better than using B-V as a T eff indicator, it has its limitations, particularly for cool C-rich giants, as shown here. Using a V-K color is better. J-K is not useful for the faint stars found in the HES; the errors of the 2MASS database are too large compared to the sensitivity of J-K to T eff which is, as discussed in §2.1, small. This statement may not hold for the HK Survey, where the stars are in the mean significantly brighter than the HES, and hence the 2MASS errors are much smaller.
C and N
Since a band of CN is used to derive the N abundance, the N abundance is linked to the choice of C. Systematic errors not included in Table 13 in the C and N abundances are possible in the case of unusually large oxygen abundance, because the CN and CH densities depend upon the amount of free carbon left given CO formation.
HE1150-0428 has extremely strong CN bands; the bandheads at 3885, 3875 and 3865Å are all present, and the first two of these reach maximum absorption of ∼85% of the continuum. The continuum was very hard to define in this region of the spectrum of this star. Combining that with saturation issues, the N abundance of this star is not well determined; appropriate errors might be log[ǫ(N)] = 7.15 (+0.5,−0.3) dex.
The determination of 12 C/ 13 C from the C 2 and CH bands is described in §3.2. Fig. 5 displays the measured absorption of the stellar continuum at the 12 C 12 C and 12 C 13 C bandheads for the C-stars in our sample; see also Figure 2. The deduced isotopic ratios for C derived from these two bands are shown as a function of T eff in Fig. 6. 12 C/ 13 C is easy to determine from the 4740Å C 2 band, and the many values given in Table 12 demonstrate that the 12 C/ 13 C ratio is low, with a typical value of 4. The isotopic ratios for our sample of C-stars as determined from C 2 bands, ignoring the lower limits, are consistent to within 1.5σ with a constant value of 12 C/ 13 C of ∼3.5. Similar values have been found among luminous moderately metal poor field giant stars with normal C-abundances and with luminosities near the tip of the RGB by Carretta, Gratton, Sneden & Bragaglia (2000). The hottest stars in the sample yield only lower limits to 12 C/ 13 C using either of the molecular features.
More measurements of 12 C/ 13 C ratios at the extremes of the range of T eff would be required to search for any trend with T eff .
Barium
In many cases, the Ba II lines are very strong, and the resulting derived Ba abundances must be regarded as quite uncertain. Their HFS corrections are sometimes large and vary considerably with W λ . The HFS corrections calculated by McWilliam (1998) which we adopt are for a r-process isotopic distribution. We have rescaled them for the s-process Ba isotopic distribution; this in general reduces the deduced Ba abundance by ∼0.1 dex.
Lead
There is only one usable Pb I line in the spectral region we cover. This line, at 4057.8Å, is badly blended by CH features in these C-stars. Our spectral synthesis for this feature uses the isotopic and HFS pattern for Pb described in §3.1, as well as features of 12 CH, 13 CH and various atomic species. A reasonable uncertainty for our Pb measurements based on spectral synthesis is ±0.3 dex. Non-detections, in cases where there is no problem indicated by notes in Tables 8 to 11, correspond to upper limits of log[ǫ(Pb)] = +1.5 dex.
Use of Strong Lines
It is desirable in carrying out a detailed abundance analysis to use only absorption lines with W λ less than ∼170 mÅ to keep the errors as small as possible. Stronger lines will be formed in the outermost layers of the stellar atmosphere, where the T (τ ) relationship is more uncertain, and where LTE is less likely to prevail. Hence the W λ predicted from a model atmosphere for such strong lines are more uncertain, as is the derived abundance of the species from which the line originates. However, the wavelength coverage of our spectra, almost all of which were taken prior to the HIRES detector upgrade, is restricted, and CH, CN and C 2 molecular bands in the spectra of these stars further cut down the useful wavelength range. Some elements have very few detectable lines of any state of ionization within the allowed region. In a few cases only strong lines are available, while in others one or two weak lines are sometimes present together with the strong ones, at least for a few of the C-stars in our sample.
Examination of Tables 3 to 6 reveals the elements of concern. The Na I D lines are too strong for reliable abundance analysis in the spectrum of our coolest C-star, HE1443+0113, and are the only lines detected of that species in the only available HIRES spectrum of that star, which has low SNR. Two lines of the Mg triplet at 5170Å are always detected (the third is blended and not used) and are sometimes stronger than 170 mÅ, but often one or more of the weaker subordinate Mg I lines are seen as well. The Sr II line at 4077Å is the only one measured in many of the sample stars, as the 4215Å line is often swamped by CN. In the most s-process rich cool C-stars, this line exceeds the W λ cutoff suggested above; unfortunately there are no other detectable Sr lines in the available wavelength region. The Ba II lines at 4554 and 4934Å are extremely strong, far beyond the limit in W λ suggested above, in several of the cooler Ba-rich C-stars. But in many of these, the weaker 4130Å line is seen as well, and in the one star with a HIRES-R spectrum, the weaker 5854, 6141 and 6496Å Ba II lines are picked up as well. Caution is necessary for these particular elements, but we believe that the magnitude of the potential errors is sufficiently small that the fundamental conclusions of our work are not affected.

In the figures and discussion that follow we supplement the 16 HES C-stars analyzed here with additional Fe-poor C-stars from the literature; these are listed in Table 14. This produces a total sample of 27 Fe-poor C-stars and three EMP C-enhanced dwarfs.
Abundance Ratios
C/H Ratios
The dashed horizontal line in the upper panel of Fig. 7 indicates a constant C/H ratio of 20% of the solar value independent of [Fe/H]. This constant ǫ(C), which we denote as ǫ 0 (C), is a reasonable fit to all the available data, given the uncertainties. The inferred ǫ(C) reach a maximum value of ∼1/3 Solar, consistent with ǫ 0 (C). Fig. 7 shows that EMP C-stars, even though they are of very low [Fe/H], can, by whatever processes are relevant, achieve C-enrichment up to near the Solar abundance, but not beyond it. This is also true of the two known ultra-metal-poor stars (Frebel et al. 2005). A particularly instructive case is the dwarf C-star G77-61, an M dwarf in a binary system. Since this star is so cool compared to the C-stars studied here, it has a much more complex spectrum with very strong molecular features. As part of a recent study by Plez & Cohen (2005) a search was made for a detectable feature of O in the optical spectrum of this star. However, given the very strong molecular bands in this M dwarf, none could be found even in high precision Keck/HIRES spectra covering the full optical spectral regime from 0.4 to 1.0 µm. Thus in the abundance analysis for G77-61 carried out by Plez & Cohen (2005), it was assumed that O was enhanced by +0.3 dex (i.e.
[O/Fe] = +0.3 dex). The resulting enhancement of C was found to be [C/Fe] +2.6 dex. A recently obtained Keck high resolution near IR spectrum yielded a detection of CO, and hence enabled determination of the O abundance. Plez, Cohen & Melendez (2006) found an unexpectedly high O-enhancement, [O/Fe] about +2.2 dex, much higher than the previously assumed value. With the original value for ǫ(C), this star would not be an extreme C-star, which its spectrum clearly demonstrates it to be. The new higher O abundance thus in turn led to a revised [C/Fe] value of +3.2 dex. The values plotted in the figures for this star (which is included in the additional sample from the literature) are these updated values.
In this context it is important to note that we also in general lack a determination of the O abundance for the C-stars in our sample (although clearly near IR spectra of the CO bands would yield such), and have assumed [O/Fe] to be the maximum of +0.5 dex or ([C/Fe] -0.8 dex) in calculating the molecular equilibria for all the C-stars analyzed here. Only one star in our sample has a measured O abundance; HE1410−0004 has [C/Fe] +2.0 dex, [O/Fe] + 1.2 dex, and C/O = 5. This O abundance is in accord with the assumption we have chosen to make regarding [O/Fe] when no O abundance in available. If the O abundance in this star is in fact even lower, which it might be given the marginal detection of the strongest line of the 7770Å IR triplet, the molecular equilibrium for CH and for C 2 would not change significantly. We expect the largest change in the deduced C abundance (i.e. the largest shift in the molecular equilibrium of CH and C 2 ) for C-stars as the O abundance is increased from that of C-normal stars to occur when ǫ(O) is only slightly less than ǫ(C). (Recall that ǫ(O) must be less than ǫ(C) since these are C-stars.) For changes in ǫ(O) from the nominal value for HE1410−0004 given in Table 10 not exceeding a factor of 4, the change in ǫ(C) deduced from the CH band in this star is modest, less than ±0.15 dex.
The interpretation of the CH band strengths as a measure of the C abundance in the sample C-stars is straightforward, ignoring the issue of the linkage to the assumed O abundance discussed above. With regard to C 2 , we look again at the upper panel of Fig. 5 (a plot of the absorption at several bandheads of C 2 versus T eff ). Although C 2 band strengths were not used to determine the C abundance, spectral syntheses in the region of the 5160Å bandhead with the fixed CNO abundances log[ǫ(C,N,O)] = 7.56, 6.55, and 7.13 dex (a C/O ratio of 2.7) (for the value of f 00 , the band oscillator strength, adopted by Querci and collaborators) were used to predict the depth of absorption at the 5160Å bandhead. The T eff , log(g) pairs were chosen to follow the isochrone for an age of 12 Gyr with [Fe/H] −2.5 dex. The result is shown as the solid curve in the upper panel of the figure, and clearly indicates that the increasing absorption at the C 2 bandhead as T eff decreases is due to the shift in the molecular equilibrium with T eff . Additional curves in this figure are shown for a C/O ratio of 1.0 and of 1/2.7, keeping ǫ(O) fixed, as would occur in a star to which C-rich material is added. The rapid decline in the strength of absorption at the C 2 bandhead is obvious and is due largely to the dependence of ǫ(C 2 ) on the square of ǫ(C I). Our ability to match the observed strength of the C 2 bandhead in our sample of C-stars shown in Fig. 5 by varying only T eff is consistent with the key result from analysis of the G band of CH that an approximately constant ǫ(C) is a satisfactory fit to the existing data on highly C-enhanced stars.
There are no stars in the upper right area of Fig. 5. This is, in terms of C 2 band detectability, an allowed area. Thus sufficiently strong bands of C 2 , equivalent to sufficiently large C-enhancements, do not exist in real stars with T eff ∼6200 K, given their higher continuum flux. The required very large C-enhancements in such hot stars must substantially exceed the constant ǫ 0 (C) deduced from the CH analysis. The maximum C 2 band strength, presumably that corresponding to ǫ 0 (C), is very weak among the hotter stars in our sample (the main sequence turnoff region stars), and so stars with lower C-enhancements will simply have no detectable C 2 . One might wonder why no stars appear in the lower left corner of this plot, where weaker C 2 features could easily be detected. This appears to be a consequence of the fact that a C-star must have ǫ(C) > ǫ(O), otherwise oxides will dominate the molecular equilibrium. At the solar composition, ǫ(C)/ǫ(O) is about 1/2. Normal-C EMP unevolved and hence unmixed stars (i.e. low luminosity giants or dwarfs) have [O/Fe] about +0.7 dex, while they have [C/Fe] about +0.4 dex (see, e.g. for the giants Spite et al. 2005). A C-enhancement of a factor of four for a normal-C unmixed EMP star will lead to ǫ(C) = ǫ(O), and that required to produce a C-star must be slightly higher. The C 2 becomes stronger as the C-enhancement increases above the minimum required to produce a C-star. We suggest that the duration of this phase of C-enhancement is short compared to the age of the EMP C-star, and that this phase did not in general occur recently as compared to the timescale for mixing, making this region of Fig. 5 unpopulated.
Among more highly evolved EMP and VMP C-stars, we would expect to see some evidence for depletion of C at the stellar surface as a result of mixing and dredge up, which will depend on the mass included in the mixing region. We use T eff as a surrogate for evolutionary stage, as the star cools as it moves up the RGB; the M dwarf and EMP star G77-61 is plotted as though its T eff were 6000 K to place it at the proper position corresponding to its evolutionary state in this figure. Fig. 8 displays log[ǫ(C)] as a function of T eff ; the 11 additional Fe-poor C-stars from the literature are included. There is a suggestion in this figure of decreasing ǫ(C) as T eff decreases, i.e. as the star moves up the giant branch, reminiscent of mixing and dredge up phenomena studied among EMP giants by Spite et al. (2005) and among globular cluster giants by Cohen, Briley & Stetson (2005). The slope of a linear fit to the data in this figure is statistically different from 0.0 at more than the 3σ level. The existence of such a correlation, should further work demonstrate conclusively that it is real, would again suggest that the C-enhancement could not have occurred recently; sufficient time for C-depletion and mixing in the giant EMP C-stars is required.
It is interesting to note that the highest value of 12 C/ 13 C we measured was obtained using the G band of CH for the hottest and least luminous (and presumably least evolved) of the C-stars with a high signal-to-noise ratio HIRES spectrum. Ryan et al. (2006) compiled 12 C/ 13 C ratios for Fe-poor C-rich stars from the literature. Their compilation also supports the suggestion that there is a general trend of declining 12 C/ 13 C with increasing luminosity. This trend, which needs further confirmation, together with the generally low 12 C/ 13 C ratios, is reproduced by the models of Boothroyd & Sackmann (1999) as a consequence of deep mixing and "cool bottom processing" after the first and second dredge up in low mass red giants. They establish that the latter increases dramatically as [Fe/H] decreases. Additional determinations of 12 C/ 13 C for EMP C-stars from the C 2 bandhead at 4740Å will be straightforward, and are now underway.

Abundance Ratios for Other Elements

Table 15 gives statistics for selected abundance ratios for the sample of 16 C-stars from the HES analyzed here; upper limits are ignored, and only these 16 stars are used to compute the statistical measures given in Table 15. The mean abundance ratios for various elements are compared with those obtained by Cohen et al. (2004) for a large sample of EMP dwarfs, and in some cases to those from the First Stars project at the VLT for EMP giants (Cayrel et al. 2004; Spite et al. 2005).
The median [C/Fe] is +1.9 dex, with a small dispersion (0.3 dex) about the mean. The lower limit of ǫ(C) is defined by the requirement that the star be a C-star to be included in the present sample, but the upper bound is not constrained; it is determined by the stellar characteristics themselves. N is also highly enhanced, with a median [N/Fe] of +1.7 dex, only slightly below the median C-enhancement. The scatter is perhaps slightly larger than that seen for ǫ(C) in Fig. 8. The lower panel of Fig. 7 shows [C/N] as a function of [Fe/H]. The mean is somewhat higher than the Solar value, but there is no obvious trend of C/N with [Fe/H]. Among the giants, there is a suggestion that ǫ(N) increases and [C/N] decreases as T eff decreases and luminosity along the giant branch increases, but the scatter is large and this may not be statistically significant.
We include in our analysis two of the Mg triplet lines, which lie in a region free of molecular features. Hence the Mg abundance should be reliable. The median abundance ratio [Mg/Fe] of our C-star sample agrees well with that of the EMP dwarfs from Cohen et al. (2004), but the range of derived [Mg/Fe] is quite large (a factor of 10). The highest value, [Mg/Fe] = +1.04 dex (for HE0336+0113), is comparable to that of the small number of other extremely Mg enhanced C-rich stars known, i.e. CS 29498-043 discussed by Aoki et al. (2002b) and BS 14934-002 (Aoki et al. 2005). The lowest value of [Mg/Fe] among the C-stars in our sample (+0.04 dex, for HE0212-0557) is comparable to the lowest seen among VMP and EMP stars (see, e.g. the compilation in Fig. 5 of Aoki et al. 2005). [Mg/Fe] almost certainly shows a real range from star-to-star among EMP stars.
The abundance of Ti should be well determined as there are many strong Ti II lines in the spectra of these C-stars, some of which lie in regions completely free of molecular contamination. Cr benefits from the strong line at 5206Å, again a region unaffected by molecular features. It is thus gratifying that the [Ti/Fe] and [Cr/Fe] abundance ratios among the C-stars from the HES show relatively small dispersion, with mean values in good agreement with the results for EMP dwarfs from Cohen et al. (2004). The remaining elements up to the Fe-peak suffer from a paucity of unblended lines with strengths sufficiently large for a reliable abundance analysis.
We find that 12 of our C-stars show an enhancement of Ba (see Fig. 9) and of other s-process neutron capture heavy elements approximately equal to that of C. The other four show [Ba/C] ≤ −1.6 dex, i.e. a strong C enhancement with normal heavy elements, as contrasted to enhancement of both C and the s-process elements in the majority of the C-stars. In the full sample of 27 C-stars and three C-enhanced dwarfs, 6 stars do not show a strong Ba enhancement, while the remaining ∼80% of the full sample do show a strong Ba-enhancement. Fig. 10 shows the [Ba/C] ratio for our sample of HES EMP stars. There is a strong suggestion that the stars with low [Ba/C] ratios are the most Fe-metal-poor of the sample. The bifurcation into s-normal and highly C-enhanced stars is not an artifact of relying on the Fe-abundances, which are decoupled from the C-abundances.
We can examine whether the process that produces highly enhanced C in these C-stars also leads to abnormalities in the abundances of other elements beyond those established above, i.e. CNO and the heavy elements beyond the Fe-peak. We define ∆(X) as the difference between the median [X/Fe] in our C-star sample with HIRES abundance analyses and that found for C-normal EMP dwarfs and giants. From the values given in Table 15, for elements from Na to Fe we find only two with | ∆(X) | > 0.25 dex. These are Al (∆(Al) = +0.36 dex) and Mn (∆(Mn) = +0.38 dex). There is only one reliable line for Al I (at 3961Å) and only two for Mn I (two of the three lines of the 4030Å triplet, ignoring a few very weak lines of Mn which are only rarely detected in the HIRES spectra of these C-stars) and each of these is located in regions of strong CH absorption. It is likely that there is still some contamination of the atomic features by molecular ones that we were not successful in removing. With this caveat, we thus conclude that the C-star phenomenon in EMP stars is confined to the elements CNO and to the elements heavier than the Fe-peak. The abundance ratios [X/Fe] of elements from Na to Fe for which we can detect suitable lines are normal.
Evidence that s-process Neutron Capture Dominates Among the EMP C-stars
We discuss here the evidence that enhancement of the neutron capture elements seen in EMP C-stars arises from the s-process, with no substantial/detectable contribution from the r-process. When we look at the elements beyond the Fe-peak, we notice that the median and the mean [Eu/Ba] (both about −0.8 dex) closely correspond to that characteristic of the main component of the solar s-process given by Arlandini et al. (1999). The detection of large amounts of lead is another clue that the s-process is responsible. The median value of [Pb/Ba] (+0.79 dex, with σ about the mean of 0.34 dex) is close to that of other s-process dominated stars: Sivarani et al. (2004) has compiled all the data for Pb in such stars available to date (their Table 5 and Figure 11).
Additional abundance ratios give clues to the detailed behavior of the s-process. For example, we find a smaller range in [Y/Fe] and in [Sr/Fe] than in [Ba/Fe], which shows a range of a factor of 1000; this is consistent with metal-poor s-processing in AGB stars. Busso, Gallino & Wasserburg (1999), for example, predict the s-process enhancement will be relatively larger for the second peak elements than for the lighter s-process nuclei in stars with lower Fe-metallicity. A recent extensive theoretical discussion of the nucleosynthesis of Sr, Y and Zr was given by Travaglio et al. (2004).
Ignoring the upper limits, σ[Y/Sr] and σ[La/Ba] are small (0.32 and 0.26 dex respectively), confirming previous work suggesting that within each of the peaks, the s-process element ratios for the Ba-rich EMP C-stars are approximately constant for elements within that particular peak, while the variation from star-to-star of the ratio of the strength of the various peaks is much larger. Aoki et al. (2005) also present relevant data for a sample of 18 very metal-poor stars supporting this. Fig. 10 shows [Ba/C] as a function of Fe-metallicity for this sample of C and C-enhanced stars. Just as was seen in Fig. 9, 12 of the C-stars from the HES that we have analyzed show an enhancement of Ba (and of the other s-process neutron capture heavy elements) approximately equal to that of C. The other four show [Ba/C] ≤ −1.2 dex, i.e. a strong C enhancement, with more normal heavy elements. Including 10 additional C-stars compiled from the literature, 25 of the 30 stars in the full sample of EMP/VMP C-rich stars (83%) show highly enhanced Ba, while 1/6 have [Ba/C] ≤ −1.2 dex. It is clear from the evidence described above that the s-process is responsible for the enhancement of the heavy neutron-capture elements in these C-stars, when they are highly enhanced. We note the Ba-poor C-stars that are cooler than T eff = 5700 K have the same low 12 C/ 13 C ratios as do the Ba-rich C-stars.
The Ba-poor C-stars
We first consider whether the Ba in the Ba-poor stars is from the s or the r-process. One might argue for the former, claiming that Ba is in fact enhanced even in the Ba-poor stars. The influence of the very low Fe-metallicity on the heavy neutron capture rates might give rise to a very low s-process production, with the r-process making no or an even lower contribution. However, Fig. 9 shows that [Ba/Fe] in the Ba-poor stars is consistent with that observed among the C-normal stars from the HES that we have analyzed to date. We know that the Ba in C-normal EMP stars must be largely produced in the r-process based on their [Ba/Eu] and [La/Eu] ratios (e.g. McWilliam et al. 1995b, McWilliam 1997, 1998a, Simmerer 2004). Thus we infer that the Ba in the Ba-poor EMP C-stars has its origin in the r-process as well.
At first sight, the existence of two more or less distinct classes of EMP C-stars suggests that two distinct processes are required to produce the C-stars which are Ba-enhanced and those that are not Ba-rich. Nucleosynthesis within an intermediate mass AGB star can reproduce the first set of characteristics. If the mass of the EMP C-stars is assumed to be the turnoff mass of the halo with an age of ∼12 Gyr, near 0.8 M ⊙ , they are not massive enough to produce s-process elements at any time (e.g. see the review by Busso et al. 2004). Also, their T eff are too warm and the luminosities are too low for our C-stars to be AGB stars. Thus intrinsic nucleosynthesis production and transport to the stellar surface of large amounts of C is not possible for such unevolved stars.
We suppose instead that the EMP C-stars are the former secondaries of binary systems across which mass transfer has occurred. This is the mechanism originally suggested for the CH stars by McClure (1985), which also have enhanced C and Ba and low Fe-metallicities (e.g. Wallerstein & Greenstein 1964; Vanture 1992), although with ǫ(Fe) still a factor of 50 to 100 times higher than the EMP C-stars discussed here, so the apparent enhancements are not as large in the CH stars. McClure (1984) demonstrated via radial velocity monitoring that the CH stars are members of binary systems.

What about the 1/6 of the C-rich stars without heavy element enhancements? We suggest that there is no need to resort to intrinsic production or any other additional mechanism; in our view, essentially all of these stars could be produced by mass transfer and other phenomena in binary systems. There are several possibilities for explaining these stars within the context of our hypothesis that all EMP C-stars are or were binaries. We can ascribe the differing enhancement of the s-process elements from C-star to C-star within our sample to some dependence in the nucleosynthetic yields involving, for example, the initial [Fe/H] or mass of the original primary star. At the lowest metallicities, Busso, Gallino & Wasserburg (1999) (see especially their Fig. 12) predict that when n(Fe seed) becomes very small, there are so many neutrons available for each seed nucleus that the s-process runs to completion, with lead the main product, and very little Ba enhancement. Lead is the third s-process peak, and ǫ(Pb) is considerably higher in the Sun than that of its neighbors in the periodic table. Any heavier elements produced, which are all unstable except for Bi at atomic number 83, decay to lead. Although the prediction of Busso, Gallino & Wasserburg (1999) for the Fe-metallicity at which the peak Ba s-process production occurs in AGB stars may be slightly too high, their Fig. 12 shows a drop of more than a factor of 100 for the predicted [Ba/Fe] enhancement as [Fe/H] drops 1 dex lower than that at which maximum Ba production occurs.
We attempt to estimate the expected Pb abundance for an EMP C-star, assuming the s-process runs to completion, to see if it is detectable. The highest [Ba/Fe] seen among the C-stars in our sample (see Table 15) has ǫ(Ba) approximately at the solar value for a C-star with [Fe/H] −2.3 dex. We make the reasonable assumption that s-process production is proportional to the number of Fe seed nuclei, and assume that all the s-process elements in the Sun, from Ba to Pb, end up as lead. But all the intervening elements have very low s-process abundances; see, e.g., the s-process solar abundances for the heavy elements tabulated by Burris et al. (2000). Thus for a [Fe/H] −3.5 dex star, we predict ǫ(Pb) to be +1.5 dex. This Pb abundance, which is roughly 2.5 times the solar Pb abundance, is a very high Pb abundance for such a low Fe-metallicity star. However, it is, as discussed in §4.4, extremely difficult to detect in a highly C-enhanced (recall that ǫ 0 (C) ∼1/5 solar) star with strong molecular bands, given that the strongest Pb I line at optical wavelengths is weak and located in a thicket of CH features. Thus verification of this idea through an abundance determination extending to the third s-process peak will be very difficult in practice. We do, however, expect in this case the Ba-poor EMP C-stars to be predominantly those of the lowest Fe-metallicity, which does appear to be the case in our sample (see Figures 9 and 10), in the somewhat smaller sample of Ryan et al. (2006), as well as in that of W. Aoki (private communication).
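One way to formalize this scaling argument (our own schematic bookkeeping, not a formula from the papers cited) is

$$\log \epsilon({\rm Pb})_{\rm pred} \;\approx\; \log_{10}\!\Big[\sum_{Z({\rm Ba})}^{Z({\rm Pb})} N_{s,\odot}(Z)\Big] + {\rm [Fe/H]} + \Delta_s,$$

where N s,⊙ (Z) is the solar s-process inventory (e.g. from Burris et al. 2000) and Δ s is the per-seed overproduction factor calibrated on the most Ba-rich star (ǫ(Ba) ≈ ǫ ⊙ (Ba) at [Fe/H] ≈ −2.3, i.e. Δ s ≈ +2.3 dex); the precise value of the prediction depends on the inventory adopted.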
Another possible explanation for the absence of s-process enhancements in some of our EMP C-stars is that the neutron flux is strongly reduced in the AGB star, either due to low temperatures in the intershell region, or because the 13 C pocket fails to be injected into the intershell region of the AGB star, thus restricting the 13 C(α,n) 16 O reaction. This n-producing reaction competes with the reaction 13 C(p, γ) 14 N. At lower T, the latter may dominate, which would reduce the production of neutrons available to create s-process elements. The circumstances which might lead to lower T are not clear; perhaps lower Fe-metallicity is in some way the dominant factor. In the absence of the neutron flux the s-process cannot operate with vigor, thus producing the Ba-poor stars.
We view the trend for the Ba-poor C-stars to be among the most Fe-poor as a fundamental clue to the mechanism(s) involved in producing the Ba-poor C-stars. Any differences in the luminosity distribution of the two groups of C-stars might provide other useful clues for identifying the mechanisms involved. Fig. 11 shows a T eff , log(g) diagram for our sample of C-rich stars. Also shown there is the entire sample of EMP candidates from the HES for which we have carried out detailed abundance analyses to date. Our sample is selected from the HES and stars are chosen for HIRES observations and subsequent abundance analyses solely on the basis of apparent low [Fe/H]. Thus the distribution of stars, both C-rich and C-normal, along the locus they follow in the HR-diagram must represent some folding of the volume surveyed by the HES given the luminosity at each evolutionary stage, the IMF for EMP stars, and perhaps selection biases within the HES. The additional C-stars from the literature are not shown in this figure as they come from various sources and the selection criteria imposed for high resolution studies are not clear. Fig. 11 suggests that the C-stars of both types are concentrated towards high luminosities, and are relatively rare among the turnoff region stars. We ascribe this to a selection effect, as the G band of CH becomes weaker and harder to detect for such hot stars, even if the C-enhancement is very large. The C 2 bands become even weaker under such circumstances. Such hot stars can only be picked out as highly C-enhanced from a high resolution study. Fig. 5 demonstrates the weakness of the C 2 band in the hot turnoff stars. Low SNR moderate resolution spectra are inadequate to securely detect such weak bands. This is the case for the Ba-poor but C-rich star HE0007-1832 from our sample (this and the other Ba-poor C-rich stars are marked in the figure, as are the known binaries), which is a dwarf C-star whose analysis was published in Cohen et al. (2004). The somewhat hotter main sequence turnoff at a fixed age for lower metallicity stars (T eff at the turnoff becomes hotter by 150 K when the Fe-metallicity decreases from −2.2 to −3.2 dex) makes the CH and C 2 bands in the lowest metallicity stars near the main sequence turnoff even weaker and harder to detect. Ryan et al. (2006), in a very recent paper discussing the origin of the two classes of C-enhanced metal-poor stars described above, postulate two distinct mechanisms, with mass transfer in an AGB phase of a binary system giving rise to the Ba-rich and C-rich stars, while the Ba-poor, C-rich stars are assigned a completely different origin. However, the discussion given above indicates that there are several plausible scenarios for producing the Ba-poor EMP C-stars within the framework of the binary hypothesis adopted here. We do not find any reason at present to exclude them from also being formed via phenomena involving binary systems.
The path to resolve the origin of the Ba-poor EMP C-stars, which is in our view the only remaining area of considerable uncertainty in our scenario, is difficult. It requires assembling a larger sample of such stars, searching with exquisite high resolution spectra for the presence of Pb, and extensive radial velocity monitoring of these stars.
Comparison with Disk C-Stars
A comparison of the properties of the EMP C-stars with those having Fe-metallicity near solar is of interest. Wallerstein & Knapp (1998) present a review of the luminosities and abundances of the latter. Intrinsic C-stars, stars which produce C internally and then dredge it up to the stellar surface, are AGB stars with luminosities much higher than those of the EMP C-stars in our sample. Lambert et al. (1986) have analyzed such luminous cool disk C-stars; their T eff is considerably lower than that of the stars studied here. Their sample has [Fe/H] ∼ −0.3 dex, and shows only modest C-enhancements (less than a factor of 2, far smaller than the factor of ∼100 seen in our sample), with no enhancement of N, and with 12 C/ 13 C typically large, 30 to 70. The 12 C/ 13 C in these intrinsic C-stars suggests the addition of pure 12 C from He burning, with quite different abundance ratios among the CNO elements and also quite different 12 C/ 13 C ratios than those seen among the much more Fe-poor C-stars studied here. The difference between the C/N ratios may arise if the former primary of the binary EMP C-stars in our sample had, in the mean, a different stellar mass when it was on the AGB than is typical of disk solar Fe-metallicity AGB stars, so as to produce different abundance ratios. Higher mass AGB stars produce higher C/N ratios. The predictions from the models of Boothroyd & Sackmann (1999) are also relevant here, in that a dependence of the nuclear reaction rates and hence the internal production ratios on [Fe/H] might also contribute to these differences.
It is now possible to investigate the abundances of C and N for luminous AGB C-stars in the LMC and the SMC. Preliminary results by Marigo et al. (2003), Matsuura et al. (2005) and Van Loon et al. (2005) suggest that the differences in abundance ratios between these more Fe-poor luminous AGB stars and Galactic disk intrinsic C-stars are small. There is, however, a well known decrease in mean luminosity and increase in the C-star to late M giant ratio as [Fe/H] decreases from the Galaxy to the LMC and then to the SMC, first discussed by Blanco, McCarthy & Blanco (1980). This presumably arises as a smaller amount of C (of intrinsic origin; these are luminous AGB stars) needs to be added to a very metal-poor star with a fixed [O/Fe] ratio to reach ǫ(C) = ǫ(O) and so produce a C-star as [Fe/H] decreases.
The early R-stars (a type of C-star) are much closer in some of their properties to the Ba-poor EMP C-stars found in the HES that are studied here. Dominy (1985) studied their chemical compositions and evolutionary state. (See also the review of Wallerstein & Knapp 1998.) The R-stars are of lower luminosity than the intrinsic AGB C-stars, with M bol ∼ −0.3 mag, L/L ⊙ ∼ 100, and, with T eff ∼ 4600 K, are warmer than AGB C-stars. Their space density is too high for them to be stars in the He shell burning phase of evolution (Scalo & Miller 1979). They have [Fe/H] ∼ solar, with moderate C enhancements (∼ +0.7 dex), and somewhat smaller N enhancements, but have ǫ(O) at the solar value. They, like the EMP C-stars, have low 12 C/ 13 C ratios. The R-stars do not in general show enhancements of the s-process elements. McClure (1997) has demonstrated, via extensive radial velocity monitoring, that they do not appear to be binaries; he suggested that they are coalesced binaries.
Among the various families of high Fe-metallicity C-stars, there appears to be a correlation that the stars with highest 12 C/ 13 C are those which have strong s-process enhancements, while those with the lowest 12 C/ 13 C have little or no enhancement of the elements past the Fe-peak. This correlation may be due to the variation with T in the rate of the reaction 13 C(α, n) 16 O, which provides the neutrons required for the s-process to occur, as compared to that of the reaction 13 C(p, γ) 14 N, which suppresses the production of neutrons from 13 C burning, or perhaps to some property of the 13 C pocket.
Implications of the Mass Transfer Scenario for EMP C-stars
We explore here the consequences of our assumption that mass transfer in binary systems produces all C-stars at all [Fe/H] whose luminosities are so low that they cannot be intrinsic C-stars. The stars being discussed here are very metal poor, so that by adding a small amount of processed material through binary mass transfer, a large change in surface abundances of the secondary star can be produced, which will lead to much more obvious changes in the star's spectral characteristics than would occur at solar metallicity. Furthermore the efficiency of the complex process of binary mass transfer depends on the mass of the primary star, which affects the mass loss rate, being higher for higher AGB luminosities, i.e. higher mass of primary, within certain limits. dM/dt may also depend on the metallicity if the mass loss is driven by radiation pressure on dust grains. For a given dM/dt of the AGB star, the accretion rate onto the secondary is a function of the binary separation and other orbital properties. The net result may be a highly variable efficiency for fixed initial [Fe/H] and the initial masses of the two components of the binary system.
A key result presented above is the approximately constant C/H ratio, ǫ(C) = ǫ 0 (C), in the photospheres of the C-stars in our sample, which we derive from our analysis of their CH and C 2 bands. This is presumably a consequence of the primary nature of C production in AGB stars. We assume this constant level extends to higher Fe-metallicity, although a slight upward trend as [Fe/H] increases cannot be ruled out at this point (see Fig. 7). We consider adding this constant ǫ 0 (C) to stars of both higher and lower Fe-metallicity than those studied here. As [Fe/H] rises, the impact of adding additional C (accompanied by additional H as well) is diluted. If we assume that C-normal EMP stars have [C/Fe] ∼ +0.3 dex and [O/Fe] ∼ +0.7 dex, and that the stellar photosphere of the star we currently observe consists of equal amounts of its initial material and of material accreted from its AGB companion, then at [Fe/H] ∼ −1.4 dex the star, with its additional C, will just achieve ǫ(C) = ǫ(O). This threshold falls to −2.0 dex if the final photosphere contains only 20% accreted material. More Fe-rich C-normal stars cannot become C-stars through the mass transfer process under our assumptions unless the accreted material comprises more than 50% of the stellar photosphere.
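The dilution argument can be stated compactly. The following is a minimal formalization under the assumptions above (our notation: f is the fraction of the present photosphere contributed by accreted AGB material; any O carried along with the accreted gas is neglected):

    % C-star condition after binary mass transfer (LaTeX):
    \begin{align*}
      \epsilon(\mathrm{C})_{\mathrm{obs}} &= (1-f)\,\epsilon_{\mathrm{init}}(\mathrm{C}) + f\,\epsilon_{0}(\mathrm{C}),
        \qquad \epsilon_{\mathrm{init}}(\mathrm{C}) = \epsilon_{\odot}(\mathrm{C})\,10^{[\mathrm{Fe/H}]+0.3},\\
      \epsilon(\mathrm{C})_{\mathrm{obs}} &\geq \epsilon(\mathrm{O}),
        \qquad\qquad \epsilon(\mathrm{O}) = \epsilon_{\odot}(\mathrm{O})\,10^{[\mathrm{Fe/H}]+0.7}.
    \end{align*}

Evaluated with the adopted solar C and O abundances, this condition yields the thresholds quoted above: [Fe/H] ∼ −1.4 dex for f = 0.5 and −2.0 dex for f = 0.2.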
In this scenario we thus expect for higher Fe-metallicities to see stars which are C-rich, but without C 2 bands. These presumably correspond to the CH stars. They occur in the right Fe-metallicity range, and essentially all of them were shown by McClure (1984) (see also McClure & Woodsworth 1990) to be binaries. The frequency of C-stars in the HES as a function of [Fe/H] to be given in Cohen et al. (2006c) provides further support for this hypothesis. At still higher Fe-metallicities, the C-enhancement becomes too small to be noticeable. However, s-process production is to first order a secondary process proportional to the number of Fe seed nuclei (i.e. to [Fe/H]). Thus s-enhancement (i.e. the high levels of [s/Fe]) will still be present at high Fe-metallicity, although the details of the nucleosynthesis may shift the relative production of the s-process nuclei towards the first peak at Sr (see, e.g. Busso et al. 1999). Such stars presumably correspond to the Ba stars, which are of higher Fe-metallicity than the CH stars. According to McClure & Woodsworth (1990) (see also Luck & Bond 1991), the Ba stars are another example of the same phenomenon of mass-transfer in binary systems.
The situation at lower Fe-metallicities was explored in §5.4. We expect, as described earlier, the s-process to run through to lead, which will be extremely difficult to detect, with very low production of the more easily detected s-process elements such as Sr, Ba, La, etc.
The low 12 C/ 13 C ratios seen in these EMP C-stars, both Ba-enhanced and Ba-poor, provide another important clue. They, combined with the high C/N ratios, suggest that a two-phase process is required. First, mass transfer across the binary system from a low Fe-metallicity AGB star with intrinsic production of C (and hence a high 12 C/ 13 C ratio) occurs. This is then followed by a phase of mixing combined with "cold bottom burning" as described by Boothroyd & Sackmann (1999) to produce the observed C/N and 12 C/ 13 C ratios. (See Carretta, Gratton, Sneden & Bragaglia 2000 for a description of the consequences of this mixing process in more metal-rich C-normal field stars.) Since the degree of C-depletion appears to depend on the luminosity of the C-star we observe today, that part of the processing cannot have occurred in the donor star of the binary.
Binarity
We have suggested that all EMP C-stars (i.e. those with −4 ≲ [Fe/H] ≲ −2 dex) are the original secondary stars of binary systems in which mass transfer occurred. We have further suggested that this mass transfer from an AGB primary can produce the abundance anomalies we see among the EMP C-stars, specifically the high enhancement of s-process elements among ∼85% of these C-stars. Those VMP/EMP C-stars with low or no s-process enhancement are cases where some factor, most likely the low Fe-metallicity of the primary, prevented comparable enrichment in the easily detectable heavy neutron-capture element Ba, even though ample carbon was still produced, mixed to the primary's surface, and transferred to the secondary star.
We consider here whether the statistics of binary detection among very metal-poor C-stars can support our hypothesis that all of these C-stars were once binaries. We expect most or all of them to still be binaries with (invisible) white dwarf companions. The HES C-stars of our sample are themselves not suitable for this purpose. They were only recently discovered to be interesting stars, and most have only been observed for a single epoch. They are in general faint for high dispersion spectroscopic analysis. There were no radial velocity monitoring programs for such stars until very recently. Even so, we have already found three confirmed binaries in our samples of candidate EMP stars from the HES.
So we look instead at the sample of additional C-stars from the literature. These stars are in general brighter than the HES C-stars in our sample, and they have been known as interesting objects for timescales of several years to a decade, giving more opportunity for radial velocity monitoring. Table 14 indicates which of these are known binaries and gives their periods and v r amplitudes. Four of these 11 C-stars are confirmed binaries, consistent with the very preliminary results of the v r monitoring program of Tsangarides, Ryan & Beers (2004) for s-process enhanced C-stars.
Although the sample is small, considering the lack of suitable long-term radial velocity monitoring programs, the length of the typical period, the small velocity amplitudes, the faintness of the stars, and the relatively short time they have been known to be interesting, we find our detection rate for binaries among very metal-poor and EMP C-stars to be consistent with all such stars being binaries; Monte Carlo simulations by Lucatello et al. (2004) support this. There is as yet insufficient v r monitoring data for the small fraction of C-enhanced stars without s-process enhancement to assess their binarity.
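The consistency of 4 confirmed binaries out of 11 with a 100% binary fraction can be illustrated with a simple counting argument (a minimal sketch, not the Lucatello et al. (2004) Monte Carlo: we assume every star is a binary and assign each the same hypothetical probability p of having been confirmed, given the long periods, small v r amplitudes, and sparse monitoring):

    from math import comb

    def binom_pmf(k: int, n: int, p: float) -> float:
        """P(K = k) for K ~ Binomial(n, p)."""
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    n_stars, n_confirmed = 11, 4  # literature C-stars; confirmed binaries (Table 14)
    for p in (0.2, 0.3, 0.4, 0.5):  # assumed per-star confirmation probability
        print(f"p = {p:.1f}: P(exactly 4 of 11 confirmed) = "
              f"{binom_pmf(n_confirmed, n_stars, p):.2f}")

For any modest confirmation efficiency (p ≈ 0.3-0.5) the probability of confirming exactly 4 of 11 is of order 20%, so the observed detection rate places no strain on the hypothesis that all such stars are binaries.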
Summary
We have studied a sample of 16 C-stars from the EMP candidate lists of the HES using high dispersion spectra from HIRES at Keck and new optical photometry. We have carried out a detailed abundance analysis using a T eff scale based on V-I, V-J and V-K colors, while avoiding the effects of the molecular bands as much as possible. Earlier T eff scale problems affecting the Fe-metallicity deduced for EMP stars as hot as 6000 K by the HES (and, until recently, the HK Survey) were solved by changing from B−V to Hδ as a T eff indicator. Our results provide a broad database to establish the Fe-metallicity for EMP C-stars. We find that the Fe-metallicities for the cooler C-stars (T eff ∼5100 K) are still being underestimated by a factor of ∼10 by the current standard HES (and until very recently HK) survey tools. This is due to strong molecular absorption primarily in the red continuum bandpass of the HP2 index which measures the strength of Hδ and acts as an indicator of T eff . The results presented here provided crucial supporting data used by Cohen et al. (2006a) to derive the frequency of C-stars among EMP stars.
Carbon abundances in these very metal-poor stars appear to be constant, independent of Fe-metallicity, at about 1/5 the solar value. The C-abundances show marginal evidence of decreasing with decreasing T eff or increasing luminosity, presumably due to mixing and dredge-up of C-depleted material. Such C-depletion is seen among "normal" halo field giants over a wide range of metallicity for sufficiently evolved stars with luminosities brighter than that of the RGB bump, which is high on the red giant branch. N is also highly enhanced in the EMP C-stars. Among the elements studied here, abundance anomalies in these stars appear to be confined to CNO and to those heavier than the Fe-peak.
C-enhancement in this sample is associated with strong enhancement of s-process heavy nuclei for 12 of the 16 stars, with [C/Ba] about −0.1 dex with small scatter. The remaining four C-stars from the HES show no evidence for enhancement of the heavy elements, with Ba providing the strongest constraint, [Ba/Fe] ≤ +0.20 dex for each of the four stars. When 11 additional C-stars, mostly from the HK Survey, with recently published detailed abundance analyses are added, the same separation is seen, with ∼85% of the stars having [C/Ba] almost Solar.
Very high enhancements of lead are detected in some of the C-stars with highly enhanced Ba. The ratio Ba/Eu, the high Pb abundances, and the high ratios of diagnostic elements in the second to the first s-process peak for C-stars in our sample demonstrate that the s-process is responsible for the enhancement of the heavy elements for most of the C-stars in our sample. The mostly low 12 C/ 13 C ratios inferred from both the G band of CH and the 4740Å band of C 2 , where the isotope ratio is particularly easy to measure, as well as the high N-enhancements suggest that the bulk of the stellar envelope of these stars has been processed through the CN cycle of proton burning. Our data for the Ba-rich C-stars supports the suggestion that the abundance ratios for elements within a given s-process peak are to first order constant, while the ratio of the strength of the various peaks shows larger star-to-star variations.
The similarities and differences of the properties of the EMP C-stars to those of various types of near solar [Fe/H] disk C-stars are discussed. In particular, the early R stars show low 12 C/ 13 C ratios and no excess of the heavy elements, reminiscent of the Ba-poor EMP C-stars found (at a low rate) in our sample.
The abundance ratios we derive are used to discuss the origin of the C-rich stars among EMP stars. We suggest that both the s-process rich and Ba-normal C-stars result from phenomena associated with binary stars. The Ba-rich EMP C-stars presumably formed as secondaries in a mass transfer binary system with an AGB primary. This was followed by proton burning at moderate T to reduce 12 C/ 13 C and increase the C/N ratio. The implications of this hypothesis for stars of both higher and lower Fe-metallicity than those in the present sample are discussed. Several possible origins for the small minority of Ba-poor EMP C-stars are suggested. In the most metal-poor stars, Busso, Gallino & Wasserburg (1999) predict that the s-process runs to completion through the Ba-peak to the heaviest stable element, lead, leaving little or no apparent Ba-excess. Heavier elements (all unstable except Bi) mostly decay to lead as well. The predicted ǫ(Pb) in a [Fe/H] −3.5 dex star, while very high for a star with such a low Fe-metallicity, will be very difficult to detect. Another possibility for explaining the Ba-poor EMP C-stars is a possible lack of neutrons due to 13 C burning via 13 C(p, γ) 14 N instead of via 13 C(α, n) 16 O. The former dominates at lower T , while the latter provides the neutrons required for the s-process to occur. If either of these suggestions is correct, the Ba-poor C-stars should have lower Fe-metallicities in the mean than the Ba-rich C-stars, which does appear to be the case in our sample. The frequency of known binaries among the samples appears consistent with our hypothesis for the origin of EMP C-stars given the lack of long term radial velocity monitoring programs, the long periods, the low velocity amplitudes, and other characteristics of the stars.
We thus see no reason at present to exclude the scenario adopted here, that all the EMP C-stars are formed via phenomena involving binary systems. For old stars of low Fe-metallicity, several mechanisms described above may lead to C-stars with little or no sprocess enhancement, such as is occasionally seen in our sample. For old stars in binary mass transfer systems of higher [Fe/H] than those considered here, a progression with increasing [Fe/H] from C-stars to CH stars and finally to Ba stars is predicted for a constant donor ǫ 0 (C), which successfully reproduces several key observed characteristics of the behavior of C-rich stars in the Galaxy.
The entire Keck/HIRES user community owes a huge debt to Jerry Nelson, Gerry Smith, Steve Vogt, and many other people who have worked to make the Keck Telescope and HIRES a reality and to operate and maintain the Keck Observatory. We are grateful to the W. M. Keck Foundation for the vision to fund the construction of the W. M. Keck Observatory. We are grateful to W. Aoki for providing his HDS/Subaru spectra of selected C-stars to verify our 13 CH line list. We thank G. Wasserburg for helpful discussions and moral support. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. JGC is grateful for partial support from NSF grant AST-0205951. She is grateful for funds from the Ernest Fullam Award of the Dudley Observatory for help in initiating this work. N.C. and F.J.Z. acknowledge support from Deutsche Forschungsgemeinschaft through grant Re 353/44. N.C. is also supported by a Henri Chretien International Research Grant administered by the American Astronomical Society.
Vanture, A. D., 1992, AJ, 104, 1997
Vogt, S. E., et al., 1994, SPIE, 2198
Yi, S., Demarque, P., Kim, Y.-C., Lee, Y.-W., Ree, C., Lejeune, Th. & Barnes, S., 2001, ApJS, 136, 417
This preprint was prepared with the AAS LaTeX macros v5.2.
[Displaced equivalent-width entries (Ca I through Pb I), apparently the continuation of Table 3, were fused into this block by the extraction and are not reproduced here; the table footnotes are restored with Table 3 below.]
[Further displaced equivalent-width entries, apparently the continuations of Tables 4 and 5, together with fragments of an abundance table, were fused into this block by the extraction and are not reproduced here.]
b Pb I 4057 line is swamped by molecular features.
c No Pb abundance as the large negative v r moves the 4057 Å Pb I line onto a CCD defect.
[Further abundance-table fragments and displaced figure-caption text were fused into this block by the extraction; the caption text is restored in the figure captions below.]
We have two indicators in the present work for the carbon abundances, the strength of the bands of CH and of C 2. The upper panel of Fig. 7 shows ǫ(C) inferred from the G band of CH as a function of [Fe/H](HIRES) for the full sample of 16 C-stars and the three EMP C-enhanced stars with [C/Fe] > 1.0 dex from our work. Eleven additional very metal-poor C-stars, mostly from the HK Survey, with recent analyses from the literature, are indicated as small open circles in this figure as well as in Fig. 8 and in Fig. 10. The details for the additional stars are given in Table 14.
Marsteller et al. (2004) also have reached similar conclusions using the HK Survey sample. The most metal-poor star shown in this figure is G77-61, with [Fe/H] about −4.0 dex.
McClure (1984) (see also McClure & Woodsworth 1990) established that essentially all CH stars are members of binary systems. The higher metallicity Ba stars appear to be another example of the same phenomenon (McClure & Woodsworth 1990); Bohm-Vitense et al. (2000) have established from UV HST spectra the presence of white dwarf companions for several of these stars.
Fig. 1.- Upper panel: the spectrum of the spectroscopic binary HE0012−1441 from the night of 09/30/2002 in the region of the Mg triplet. The arrows indicate two lines from the secondary star. Lower panel: the same, but using the data summed over 3 nights from the Sep. 2002 HIRES run. Note the difference in the line profiles of the Mg triplet lines (and some of the weaker lines as well). Note also that the lines from the secondary star are noticeably broader than those from the primary. The dashed vertical lines guide the eye to indicate the changing relative v r of the two components over a timespan of 48 hours.

Fig. 2.- The HIRES spectra of three C-stars from our sample in the region of 4740 Å. The bandheads of 12 C 12 C, 12 C 13 C and 13 C 13 C are indicated. The vertical range is the same for each panel. The derived 12 C/ 13 C ratio for these three stars ranges from 3.5 to 6, identical to within the observational errors of ±30%.

Fig. 3.- The Fe ionization equilibrium for the 16 VMP and EMP C-stars in the present sample. The C-stars are indicated as large stars, the C-enhanced stars as open circles, and the C-normal stars from our published and unpublished analyses as filled circles. The three known spectroscopic binaries in this sample are circled. (One of these is an apparently C-normal star (HE0218−2736) which is too hot and too metal-poor to show any Fe II lines, hence does not appear in this figure.)

Fig. 4.- The difference between [Fe/H](HES) and [Fe/H](HIRES) is shown as a function of T eff for the C-stars (upper panel) and for the C-normal stars (lower panel) for those EMP candidates from the HES with analyses based on Keck/HIRES spectra. The symbols are those of Fig. 3. The vertical dashed line separates the giants from the dwarfs, while the horizontal dashed lines represent the mean ∆ for the C-normal giants and for the C-normal dwarfs. A typical error is indicated for a single star.

Fig. 5.- The fractional absorption at several C 2 bandheads measured from our Keck/HIRES spectra is shown for our sample of C-stars from the HES as a function of T eff. The symbols are those of Fig. 3. Upper panel: the fractional absorption at 5165.0 Å (the 0,0 bandhead of the Swan system) is shown. The solid curve indicates the predicted behavior for log[ǫ(C,N,O)] = 7.56, 6.55, and 7.13 dex (C/O 2.7), the dashed curve is that for C/O = 1 and the dot-dashed curve for C/O 1/2.7, keeping ǫ(O) fixed. Lower panel: the 4737 Å bandhead of 12 C 12 C is shown as open circles, while the 4744 Å bandhead of 12 C 13 C is shown as filled circles. Vertical lines connect the two values for each C-star. All C-stars in our sample hotter than 5500 K only have upper limits for the latter.

Fig. 6.- 12 C/ 13 C ratios measured from the C 2 (1,0) Swan band and from the G band of CH are shown as a function of T eff for the C-stars in our sample. All C-stars with T eff > 5700 K have only lower limits to the 12 C/ 13 C ratio from the present spectra.

Fig. 7.- Upper panel: logǫ(C) is shown as a function of [Fe/H] for C-stars (marked by large stars) and the C-enhanced stars (indicated by filled circles) with detailed abundance analyses from the HES by our group. The augmented sample of very metal-poor C-stars from the literature (see Table 14 for details) is shown as small open circles. Known spectroscopic binaries are circled. The dashed horizontal line indicates a fixed ǫ(C) of 20% that of the Sun. The sloping line indicates the locus of [C/Fe] = +1.7 dex. Lower panel: the same for [C/N]. The horizontal line indicates the Solar ratio. A typical error bar is indicated for a single star in each panel.

Fig. 8.- log[ǫ(C)] is shown as a function of T eff for C-stars (large stars) and C-enhanced stars (large filled circles) from our sample. The augmented sample of very metal-poor C-stars with recent detailed abundance analyses from the literature (see Table 14 for details) is shown as small open circles. Known spectroscopic binaries are circled. The solid horizontal line represents the Solar ratio, while the horizontal dashed line represents 20% of Solar. G77-61 is plotted as a dwarf with T eff = 6000 K.

Fig. 9.- The abundance ratio [Ba/Fe] as a function of [Fe/H] for HES EMP C-stars (large stars) and C-enhanced stars (large filled circles) with detailed abundance analyses. All C-normal stars from the HES analyzed to date by us are shown as small open circles. The additional C-stars from the literature are not shown. Known spectroscopic binaries are circled.

Fig. 10.- The abundance ratio [Ba/C] as a function of [Fe/H] for HES EMP C and C-enhanced stars with detailed abundance analyses and for the augmented sample of very metal-poor C-stars from the literature. The symbols are those used in Fig. 8; known spectroscopic binaries are circled. A typical error bar is indicated for a single star.

Fig. 11.- The HR diagram (T eff versus log(g)) is shown for our sample of 16 C-stars and 3 C-enhanced stars, denoted by filled stars for the former and by open circles for the latter. The small filled circles indicate all the other EMP candidates from the HES that we have analyzed to date. The five Ba-poor, C-rich stars from our sample are enclosed in squares. The three known binaries are circled. The additional C-stars from the literature are not shown. A typical error bar is indicated for a single star.
(…) Our [C/Fe] is 0.2 dex lower than theirs, while our derived [N/Fe] is a similar amount larger, as it must be to compensate in order to fit the CN band strength. The abundance ratios for all the species in common agree fairly well, with [Al/Fe], [Ca/Fe], [Sr/Fe] and [Ba/Fe] showing the largest differences, −0.42 (see footnote 7), +0.58, −0.42 and +0.45 dex respectively, for the values of [X/Fe] derived here minus those of (…)
Table 1. The Sample of C-Stars Selected as EMP Candidates from the HES, Mostly from the Palomar Sample

  ID             Coords. (J2000)        V a      I a      Julian Date Obs.  v r b      [Fe/H](HES)
                                        (mag)    (mag)    (−2450000)        (km s−1)   (dex)

Obs. with HIRES:
  HE0007−1832    00 09 52.8 −18 16 12   15.462   14.831   f                 f          f
  HE0012−1441    00 15 27.1 −14 24 37   16.358   15.704   2547.8627         +11 g      −2.61
  HE0058−0244 c  01 00 53.0 −02 28 20   13.727   12.933   2179.0096         −68.4      −2.81
  HE0143−0441    01 45 37.8 −04 26 43   16.382   15.724   2547.0190         +121.8     −2.94
  HE0212−0557    02 15 02.5 −05 43 23   14.70    ···      2544.9791         −230.6     −3.45
  HE0336+0113    03 38 52.8 +01 23 08   14.955   14.110   2179.0586         +66.6      −2.51
  HE1031−0020    10 34 24.1 −00 36 09   14.296   13.338   3152.7297         +69.7      −3.70
  HE1150−0428    11 53 06.6 −04 45 03   14.909   14.007   3152.7602         +46.6      −3.22
  HE1410−0004    14 13 04.6 −00 18 33   15.494   14.712   3534.7447         +214.6     −2.65
  HE1410+0213    14 13 06.5 +01 59 21   13.25    ···      2396.9707         +80 e      −3.17
  HE1434−1442    14 37 26.6 −14 54 59   15.34    ···      3488.9757         +146.9 e   −3.42
  HE1443+0113    14 46 16.4 +01 01 10   15.78    ···      3589.7671         −1.1       −3.13
  HE1509−0806    15 11 41.5 −08 17 41   14.796   13.807   2421.9292         −169.9     −3.92
  HE2158−0348    22 00 40.0 −03 34 12   15.707   14.735   2178.7352         +67.6      −2.77
  HE2232−0603    22 34 47.4 −05 48 17   16.513   15.738   2178.8255         −61.2      −1.99
  HE2353−1758 h  23 56 12.9 −17 42 03   15.491   14.558   3641.8915         +38.5      −2.60
  HE2356−0410 d  23 59 13.1 −03 53 49   13.622   12.710   2179.8396         −61.5      −3.22

a Our photometry from ANDICAM images.
b Heliocentric v r.
c Rediscovery of CS22183−015.
d Rediscovery of CS22957−027.
e v r only from the Mg triplet lines.
f See Cohen et al. (2004).
g Double-lined spectroscopic binary, v r variable.
h HIRES spectrum obtained too late for analysis here. C-star with very strong Ba II lines.
Table 2. Stellar Parameters and Observations

  ID             T eff   log(g)  v t     Exp. Time     S/N a  Source/Follow Up
                 (K)     (dex)   (km/s)  (sec: HIRES)

  HE0012−1441    5730 d  3.5     1.6     10,800        75     Magellan
  HE0058−0244    5620    3.4     1.6     4800          100    Magellan
  HE0143−0441    6240    3.7     1.6     9600          80     P200
  HE0212−0557    5075    2.15    1.8     6000          100    P200
  HE0336+0113    5700    3.5     1.6     12,200        100    Magellan
  HE1031−0020    5080    2.2     1.6     2400          80     P200
  HE1150−0428    5200    2.55    1.6     2400          70     P200
  HE1410−0004    5605    3.5     1.6     2400 b        60     P200
  HE1410+0213    4985    2.0     1.8     2700          80     P200
  HE1434−1442    5420    3.15    1.6     6000 c        73     P200
  HE1443+0113    4945    1.95    1.8     550 c,e       low    P200
  HE1509−0806    5185    2.5     1.6     2400          70     P200
  HE2158−0348    5215    2.5     1.6     14,400        100    Magellan
  HE2232−0603    5750    3.5     1.6     18,000        90     Magellan
  HE2356−0410    5205    2.5     1.8     6000          100    Magellan

a Signal to noise ratio in the continuum near 4570 Å per 4 pixel spectral resolution element.
b HIRES-R spectrum with new mosaic CCD detector.
c HIRES-B spectrum with new mosaic CCD detector.
d Assumes second component does not affect the observed colors.
e Only one short exposure available, stopped by fog.
Table 3. Equivalent Widths for the First 5 Stars of the Primary Sample of C-Stars from the HES

  Line λ   Species  EP    Log(gf)  HE0012−1441 a  HE0058−0244  HE0143−0441  HE0212−0557  HE0336+0113
  (Å)               (eV)  (dex)    (mÅ)           (mÅ)         (mÅ)         (mÅ)         (mÅ)

  4057.52  Mg I   4.34  −1.200     34.0     19.2     21.2     ···      34.4
  4703.00  Mg I   4.34  −0.670     100.7    ···      ···      ···      ···
  5172.70  Mg I   2.71  −0.380     258.9    144.7    163.5    183.5    210.9
  5183.62  Mg I   2.72  −0.160     357.5    172.1    187.8    ···      340.9
  3961.52  Al I   0.00  −0.340     ···      113.2    77.8     150.0    117.1
  3905.53  Si I   1.91  −1.040     [values lost in extraction]
  4486.91  Ce II  0.30  −0.360     ···      7.2      ···      ···      30.1
  4562.37  Ce II  0.48   0.330     ···      28.2     18.0     95.0     29.1
  4628.16  Ce II  0.52   0.260     ···      [remaining values lost in extraction]

[The intervening rows of this table (Ca I through Pb I) survive only in a fused block earlier in the extraction and are not reproduced here.]

a W λ given as a guide; synthesis used throughout for this binary star. See text.
b Synthesis used to derive Pb abundance; W λ given as a guide to line strength.
Table 4. Equivalent Widths for the Next 5 Stars of the Primary Sample of C-Stars from the HES

  Line λ   Species  EP    Log(gf)  HE1031−0020  HE1150−0428  HE1410+0213  HE1410−0004  HE1434−1442
  (Å)               (eV)  (dex)    (mÅ)         (mÅ)         (mÅ)         (mÅ)         (mÅ)

  4057.52  Mg I   4.34  −1.200     18.0     ···      ···      13.9     ···
  4703.00  Mg I   4.34  −0.670     ···      ···      ···      31.4     ···
  5172.70  Mg I   2.71  −0.380     160.3    115.2    218.6    128.6    191.7
  5183.62  Mg I   2.72  −0.160     189.2    130.4    246.3    142.9    241.6
  3961.52  Al I   0.00  −0.340     174.5    ···      ···      ···      ···
  4435.69  Ca I   1.89  −0.520     55.0     32.4     ···      ···      ···
  4454.79  Ca I   1.90   0.260     110.0    74.0     ···      ···      ···
  4578.56  Ca I   2.52  −0.558     36.0     9.0      ···      ···      ···
  3924.53  Ti I   0.02  −0.940     32.4     ···      ···      ···      ···
  3958.22  Ti I   0.05  −0.160     47.4     23.4     ···      ···      ···
  3998.64  Ti I   0.05  −0.050     33.4     25.2     ···      ···      ···
  4533.25  Ti I   0.85   0.480     23.0     11.0     ···      8.4      27.3
  4534.78  Ti I   0.84   0.280     31.0     11.0     ···      ···      27.6
  4555.49  Ti I   0.85  −0.490     11.7     [remaining values lost in extraction]
  4494.57  Fe I   2.20  −1.140     54.0     ···      ···      21.2     63.5
  4531.16  Fe I   1.49  −2.150     54.0     30.0     ···      9.7      54.3
  4602.95  Fe I   1.49  −2.220     41.7     18.5     ···      11.6     ···
  4654.50  Fe I   1.56  −2.780     30.0     ···      ···      ···      ···
  4871.33  Fe I   2.86  −0.360     67.8     32.5     ···      18.2     68.1
  4872.14  Fe I   2.88  −0.570     ···      ···      ···      22.3     ···
  4890.75  Fe I   2.87  −0.424     ···      ···      ···      ···      61.0
  4891.50  Fe I   2.85  −0.110     83.0     52.0     77.6     41.5     78.0
  4919.00  Fe I   2.86  −0.340     66.0     17.3     80.7     16.9     55.0
  4920.51  Fe I   2.83   0.150     74.3     27.4     95.8     35.2     83.5
  4957.61  Fe I   2.81   0.230     94.0     57.2     ···      69.3     94.7
  5166.28  Fe I   0.00  −4.200     29.5     ···      ···      ···      ···
  5171.61  Fe I   1.48  −1.790     58.1     26.0     115.0    26.6     ···
  5192.35  Fe I   3.00  −0.420     36.7     13.0     83.8     ···      ···
  5194.95  Fe I   1.56  −2.090     33.8     ···      92.0     11.7     ···
  5198.72  Fe I   2.22  −2.140     13.0     ···      39.6     ···      ···
  5216.28  Fe I   1.61  −2.150     34.7     ···      85.4     ···      42.5
  5217.40  Fe I   3.21  −1.070     20.9     ···      44.7     ···      19.0
  5227.19  Fe I   1.56  −1.350     77.2     45.2     132.9    41.9     87.9
  5232.95  Fe I   2.94  −0.100     57.2     30.0     98.4     28.6     65.8
  5269.55  Fe I   0.86  −1.320     99.7     80.2     163.3    64.8     106.6
  5324.19  Fe I   3.21  −0.100     39.8     ···      ···      ···      52.5
  5393.18  Fe I   3.24  −0.720     ···      ···      ···      ···      26.7
  5405.78  Fe I   0.99  −1.840     ···      ···      ···      49.5     86.2
  4491.40  Fe II  2.84  −2.600     7.0      ···      ···      ···      ···
  4508.30  Fe II  2.84  −2.280     13.8     ···      ···      5.9      ···
  4555.89  Fe II  2.82  −2.170     18.1     12.0     ···      11.7     27.0
  4583.84  Fe II  2.81  −2.020     37.2     20.0     ···      16.9     42.6
  4923.93  Fe II  2.88  −1.320     82.4     41.7     85.0     37.9     75.2

[Further rows of this table survive only in a fused block earlier in the extraction and are not reproduced here.]
Table 5. Equivalent Widths for the Last 5 Stars of the Primary Sample of C-Stars from the HES

  Line λ   Species  EP    Log(gf)  HE1443+0113  HE1509−0806  HE2158−0348  HE2232−0603  HE2356−0410
  (Å)               (eV)  (dex)    (mÅ)         (mÅ)         (mÅ)         (mÅ)         (mÅ)

  4057.52  Mg I   4.34  −1.200     ···      ···      29.0     72.0     18.0
  4167.28  Mg I   4.34  −1.000     ···      ···      ···      100.0    ···
  4703.00  Mg I   4.34  −0.670     ···      ···      ···      105.4    ···
  5172.70  Mg I   2.71  −0.380     251.3    167.0    187.1    275.9    123.2
  5183.62  Mg I   2.72  −0.160     333.8    211.0    237.5    356.6    141.9
  3961.52  Al I   0.00  −0.340     ···      119.0    129.1    140.0    111.3
  4435.69  Ca I   1.89  −0.520     ···      29.0     56.0     60.3     26.7
  4454.79  Ca I   1.90   0.260     ···      74.0     94.0     95.1     79.6
  4670.41  Sc II  1.36  −0.580     ···      ···      ···      12.5     9.0
  3958.22  Ti I   0.05  −0.160     ···      ···      50.0     62.3     21.0
  3998.64  Ti I   0.05  −0.050     ···      ···      36.3     65.8     29.0
  4518.03  Ti I   0.83  −0.230     ···      ···      ···      16.8     ···
  4533.25  Ti I   0.85   0.480     ···      ···      35.0     42.0     16.0
  4534.78  Ti I   0.84   0.280     ···      16.0     32.9     33.4     9.0
  4548.77  Ti I   0.83  −0.350     ···      ···      ···      17.6     ···
  4681.92  Ti I   0.05  −1.070     ···      ···      ···      21.2     ···
  4981.74  Ti I   0.85   0.500     ···      32.0     ···      40.4     25.0
  4999.51  Ti I   0.83   0.250     ···      18.0     49.3     48.8     18.0
  5022.87  Ti I   0.83  −0.430     ···      ···      ···      9.4      ···
  5173.75  Ti I   0.00  −1.120     ···      9.0      ···      22.4     ···
  5210.39  Ti I   0.05  −0.880     ···      10.0     28.1     22.5     ···
  3900.54  Ti II  1.13  −0.450     ···      124.0    ···      80.9     ···
  3987.61  Ti II  0.61  −2.730     ···      22.4     ···      ···      ···
  4012.39  Ti II  0.57  −1.610     ···      ···      ···      ···      39.5
  4028.35  Ti II  1.89  −0.870     ···      75.0     ···      53.4     ···
  4417.72  Ti II  1.16  −1.160     ···      ···      ···      73.7     ···
  4443.81  Ti II  1.08  −0.700     ···      107.2    94.0     87.1     71.9
  4468.51  Ti II  1.13  −0.600     ···      110.0    108.0    90.4     89.7
  4501.28  Ti II  1.12  −0.760     ···      106.2    80.0     82.3     68.5
  [rows between Ti II and Fe II lost in extraction]
  4508.30  Fe II  2.84  −2.280     ···      18.0     28.7     28.6     19.1
  4555.89  Fe II  2.82  −2.170     ···      25.0     28.0     41.3     15.6
  4583.84  Fe II  2.81  −2.020     ···      ···      39.6     50.4     33.7
  4923.93  Fe II  2.88  −1.320     ···      87.7     66.0     68.8     52.2
  5018.45  Fe II  2.89  −1.220     ···      ···      81.0     86.9     64.2
  5197.58  Fe II  3.23  −2.230     ···      16.0     13.0     25.4     5.0
  5234.63  Fe II  3.22  −2.220     ···      16.0     19.1     26.4     5.0
  3842.05  Co I   0.92  −0.763     ···      ···      ···      25.0     ···
  3845.46  Co I   0.92   0.009     ···      ···      ···      62.8     ···
  4121.31  Co I   0.92  −0.315     ···      42.0     40.8     46.9     36.0
  3858.30  Ni I   0.42  −0.967     ···      ···      ···      73.8     ···
  4810.54  Zn I   4.08  −0.170     ···      ···      ···      ···      9.0
  4077.71  Sr II  0.00   0.170     ···      226.9    158.6    190.0    72.2
  3950.36  Y II   0.10  −0.490     ···      ···      59.5     45.5     <9.0
  4883.69  Y II   1.08   0.070     ···      48.0     57.7     46.0     <9.0
  5087.43  Y II   [remaining values lost in extraction]
  4120.84  Ce II  0.32  −0.240     ···      45.0     47.2     ···      ···
  4486.91  Ce II  0.30  −0.360     ···      ···      35.2     ···      ···
  4562.37  Ce II  0.48   0.330     ···      32.0     57.5     38.0     <9.0
  4628.16  Ce II  0.52   0.260     ···      36       [remaining values lost in extraction]
Table 6. Equivalent Widths for Redder Lines in the Spectra of Three C-Stars

  Line λ   Species  EP    Log(gf)  HE1410−0004  HE1434−1442  HE1443+0113
  (Å)               (eV)  (dex)    (W λ, mÅ)    (W λ, mÅ)    (W λ, mÅ)

  6707.76  Li I   0.00   0.178     <10.0    ···      ···
  6300.30  O I    0.00  −9.78      <6.0     ···      ···
  7771.94  O I    9.15   0.369     7.0      ···      ···
  5688.19  Na I   2.10  −0.420     7.0      ···      ···
  5889.95  Na I   0.00   0.110     148.5    178.8    363.0
  5895.92  Na I   0.00  −0.190     127.0    176.5    210.0
  5528.41  Mg I   4.34  −0.480     29.5     71.8     ···
  5711.09  Mg I   4.34  −0.167     6.0      ···      ···
  6696.02  Al I   3.14  −1.34      4.2      ···      ···
  5690.43  Si I   4.93  −1.870     ···      6.0      ···
  5948.54  Si I   5.08  −1.230     ···      8.0      ···
  7698.97  K I    0.00  −0.168     12.6     ···      ···
  5588.75  Ca I   2.52   0.437     11.1     36.3     ···
  5590.11  Ca I   2.52  −0.710     ···      9.9      ···
  5594.46  Ca I   2.52  −0.050     6.9      33.1     ···
  5601.28  Ca I   2.52  −0.438     ···      22.0     ···
  5857.45  Ca I   2.93   0.230     ···      16.8     ···
  6162.17  Ca I   1.90  −0.090     17.3     ···      ···
  6493.78  Ca I   2.52   0.140     5.6      ···      ···
  7148.15  Ca I   2.71   0.218     9.8      ···      ···
  5526.79  Sc II  1.77   0.130     ···      17.5     ···
  5657.90  Sc II  1.51  −0.500     ···      11.7     ···
  5424.08  Fe I   4.32   0.510     35.2     52.6     88.0
  5434.53  Fe I   1.01  −2.130     ···      75.4     107.0
  5497.52  Fe I   1.01  −2.830     10.4     27.2     ···
  5506.79  Fe I   0.99  −2.790     ···      35.7     ···
  5569.61  Fe I   3.42  −0.486     9.1      ···      ···
  5586.76  Fe I   3.37  −0.140     10.4     ···      ···
  6137.69  Fe I   2.59  −1.350     11.2     ···      ···
  6430.84  Fe I   2.18  −1.950     6.9      ···      ···
  6494.98  Fe I   2.40  −1.240     13.0     ···      ···
  5425.26  Fe II  3.00  −3.240     ···      11.0     ···
  5534.85  Fe II  3.25  −2.640     ···      6.3      ···
  5530.79  Co I   1.71  −2.060     ···      <9.0     ···
  5846.99  Ni I   1.68  −3.210     ···      <4.0     ···
  4722.16  Zn I   4.03  −0.390     <8.0     ···      ···
  5853.70  Ba II  0.60  −1.010     19.0     73.5     121.0
  6141.70  Ba II  0.70  −0.070     61.1     ···      ···
  6496.90  Ba II  0.60  −0.380     57.4     ···      ···
  4959.12  Nd II  0.06  −0.800     ···      32.0     ···
  5249.58  Nd II  0.98   0.200     ···      16.5     ···
  6645.11  Eu II  1.38   0.120     <6.0     ···      ···
Table 7. Fit Fe I Slopes with EP, Equivalent Width and Wavelength

  Star ID        ∆[X/Fe]/∆(EP) a   ∆[X/Fe]/∆[W λ /λ]   ∆[X/Fe]/∆λ
                 (dex/eV)          (dex)               (10−4 dex/Å)

  HE0012−1441    b                 b                   b
  HE0058−0244    −0.013            −0.198              1.02
  HE0143−0441    −0.026            0.024               0.65
  HE0212−0557    0.088             0.207               −5.33
  HE0336+0113    0.075             −0.121              0.98
  HE1031−0020    0.039             −0.117              2.02
  HE1150−0428    −0.066            −0.027              0.14
  HE1410−0004    −0.014            0.001               −0.08
  HE1410+0213    c                 c                   c
  HE1434−1442    0.056             0.006               −1.38
  HE1443+0113    d                 d                   d
  HE1509−0806    e                 e                   e
  HE2158−0348    0.068             −0.248              1.63
  HE2232−0603    0.057             −0.013              1.87
  HE2356−0410    −0.036            0.006               −0.83

a Typical range of EP is 3 eV.
b There were only 14 measured Fe I lines in this star.
c There were only 12 detected Fe I lines in this star.
d There were only 10 measured Fe I lines in this star.
e There were only 17 detected Fe I lines in this star.
Table 8. Abundances for the First Four EMP C-Stars From the HES

Columns per star: [X/Fe] (dex), logǫ(X) (dex), No. Lines, σ (dex).

  Species   HE0012−1441             HE0058−0244             HE0143−0441               HE0212−0557

  C−CH      1.59   7.66   1   ···   1.92   7.76   1   ···   1.98    8.26    1   ···   1.74   8.06  1  ···
  N−CN      0.64   6.05   1   ···   1.77   6.95   1   ···   1.73 d  7.35 d  1   ···   1.09   6.75  1  ···
  MgI       0.91   5.93   4   0.16  0.54   5.33   3   0.19  0.63    5.86    3   0.08  0.04   5.32  1  ···
  AlI       ···    ···    ··· ···   0.34   4.06   1   ···   −0.22   3.94    1   ···   0.01   4.21  1  ···
  CaI       0.42   4.26   3   0.11  0.96   4.57   2   0.20  0.43    4.48    2   0.17  0.14   4.23  1  ···
  ScII      0.42   4.10   1   ···   1.12   4.62   3   0.22  1.03    4.09    3   0.04  ···    ···   ··· ···
  TiI       0.39   2.70   5   0.19  0.46   2.60   11  0.31  0.49    2.17    5   0.12  0.29   3.12  2  0.04
  TiII      0.11   2.42   7   0.13  0.80   2.93   15  0.37  0.50    2.19    7   0.45  0.36   3.19  1  ···
  CrI       −0.26  2.73   1   ···   −0.38  2.43   1   ···   −0.70   1.67    1   ···   −0.55  2.96  1  ···
  MnI       −0.21  2.50   2   0.14  0.03   2.56   5   0.45  [remaining entries lost in extraction]

[The rows below Mn I were lost in extraction.]

b Pb I 4057 line is swamped by molecular features.
c No Pb abundance as the large negative v r moves the 4057 Å Pb I line onto a CCD defect.
d This value supersedes that given in Cohen et al. (2004).
Table 11. Abundances for the Last Three EMP C-Stars From the HES

Columns per star: [X/Fe] (dex), logǫ(X) (dex), No. Lines, σ (dex).

  Species   HE2158−0348               HE2232−0603               HE2356−0410

  C−CH      1.87     7.76  1   ···    1.22     7.96  1   ···    2.14     7.66  1   ···
  N−CN      1.52     6.75  1   ···    0.47     6.55  1   ···    1.89     6.75  1   ···
  MgI       0.68     5.52  3   0.18   0.85     6.54  5   0.32   0.36     4.83  3   0.50
  AlI       0.47     4.24  1   ···    0.26     4.88  1   ···    0.25     3.65  1   ···
  CaI       0.82     4.48  2   0.05   0.35     4.86  2   0.15   0.71     4.01  2   0.13
  ScII      ···      ···   ··· ···    0.05     1.31  1   ···    0.48     0.51  1   ···
  TiI       0.43     2.72  6   0.24   0.35     3.50  13  0.21   0.25     2.17  6   0.13
  TiII      0.65     2.94  8   0.28   0.11     3.25  11  0.17   0.33     2.25  8   0.23
  CrI       −0.66    2.31  1   ···    −0.06    3.76  3   0.25   −0.36    2.24  1   ···
  MnI       −0.19    2.50  2   0.01   −0.41    3.14  3   0.34   −0.10    2.22  4   0.28
  FeI       −2.70 a  4.75  28  0.26   −1.85 a  5.60  38  0.23   −3.07 a  4.38  28  0.21
  FeII      −0.03    4.72  8   0.12   −0.28    5.32  7   0.13   −0.06    4.32  8   0.15
  CoI       −0.06    2.16  1   ···    0.02     3.09  3   0.06   0.18     2.03  1   ···
  NiI       [remaining entries lost in extraction]

a This is [Fe/H].
13Abundance Changes for Small Changes in Stellar Parametersa Fe] b (T eff −150K) (log(g)−0.4 dex) Model[Fe/H]-0.5 (v t -0.2 km s −1 ) W λ Unc.Species
∆[X/Fe]
∆[X/Fe]
∆[X/Fe]
∆[X/Fe]
∆[X/Fe]
∆[X/1 Star
(dex)
(dex)
(dex)
(dex)
(dex)
(1σ)(dex)
C(CH)
−0.18
0.14
−0.04
−0.02
· · ·
0.23
N(CN) f
0.04
0.09
0.00
0.00
· · ·
0.26 g
MgI
0.00
0.11
0.01
0.03
0.05
0.13
AlI
−0.02
0.10
0.02
0.10
0.08
0.16
CaI
0.04
0.03
0.01
0.06
0.06
0.10
TiI
−0.01
−0.01
−0.01
0.02
0.03
0.04
TiII
0.08
−0.14
−0.01
0.11
0.03
0.20
TiII h
−0.06
0.03
0.01
0.11
0.03
0.13
CrI
0.00
−0.01
−0.01
0.02
0.08
0.08
MnI c
−0.06
0.05
0.02
0.16
0.06
0.19
FeI d
−0.16 d
0.03 d
0.13 d
0.08 d
0.02 d
0.22 d
FeII
0.14
−0.17
−0.02
0.03
0.03
0.23
CoI c
−0.02
−0.01
−0.01
0.03
0.08
0.09
SrII
0.00
−0.02
0.00
0.08
0.08
0.11
SrII h
−0.14
0.15
0.02
0.08
0.08
0.24
YII
0.07
−0.15
−0.02
0.06
0.05
0.19
YII h
−0.07
0.02
0.00
0.06
0.05
0.11
ZrII
0.08
−0.15
−0.02
0.11
0.06
0.21
ZrII h
−0.07
0.02
0.00
0.11
0.06
0.14
BaII c
−0.04
0.03
0.00
0.04
0.06
0.09
BaII ch
−0.19
0.20
0.02
0.04
0.06
0.28
LaII c
0.06
−0.14
−0.02
0.09
0.04
0.19
LaII ch
−0.08
0.02
0.00
0.09
0.04
0.13
CeII
0.06
−0.15
−0.02
0.05
0.04
0.18
CeII h
−0.08
0.02
0.00
0.05
0.04
0.11
NdII
0.04
−0.14
−0.03
0.04
0.06
0.17
NdII h
−0.10
0.02
−0.01
0.04
0.06
0.13
SmII
0.06
−0.15
−0.02
0.06
0.08
0.19
SmII h
−0.08
0.02
0.00
0.06
0.08
0.13
EuII c
0.05
−0.15
−0.02
0.03
0.06
0.18
EuII ch
−0.09
0.02
0.00
0.03
0.06
0.11
Table 13 -
13Continued Fe] b (T eff −150K) (log(g)−0.4 dex) Model[Fe/H]-0.5 (v t -0.2 km s −1 ) W λ Unc.Computed from the line list and the stellar parameters of HE2158−0348 with respect to Fe I for all species. 1σ uncertainty in [X/Fe] for a single (typical) star including the 5 sources of uncertainties. Treated as individual absorption lines without HFS corrections. Change in log[ǫ(Fe)]. 4057Å feature treated as a single Pb I line. Assumes logǫ(C) varies as for CH. This includes the uncertainty in ǫ(C), as ǫ(N) is derived from lines of CN. Changes computed with respect to Fe II.Species
∆[X/Fe]
∆[X/Fe]
∆[X/Fe]
∆[X/Fe]
∆[X/Fe]
∆[X/1 Star
(dex)
(dex)
(dex)
(dex)
(dex)
(1σ)(dex)
PbI e
−0.05
0.12
0.02
0.14
0.08
0.21
a b c d e f g h
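The final column of Table 13 is consistent with combining the five error contributions in quadrature; this is our reading of the table, as the combination rule is not stated explicitly in the notes. A minimal check on three rows:

    from math import sqrt

    # Five error contributions per species (Table 13) and the tabulated 1-sigma total.
    rows = [
        ("Mg I",  (0.00, 0.11, 0.01, 0.03, 0.05), 0.13),
        ("Al I",  (-0.02, 0.10, 0.02, 0.10, 0.08), 0.16),
        ("Ti II", (0.08, -0.14, -0.01, 0.11, 0.03), 0.20),
    ]
    for species, terms, total in rows:
        rss = sqrt(sum(t * t for t in terms))  # root-sum-square (quadrature)
        print(f"{species}: quadrature = {rss:.3f}, tabulated = {total:.2f}")

The root-sum-square values (0.125, 0.165, 0.198) reproduce the tabulated totals to within rounding.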
Table 15. Statistics for Selected Abundance Ratios for the Primary Sample of 16 C-Stars From the HES a

  Species      N stars b  Min. [X/Y]  Max. [X/Y]  Median [X/Y]  σ[X/Y]  EMP Dwarfs c
                          (dex)       (dex)       (dex)         (dex)   (dex)

  [C/Fe]       16   1.22    2.52    1.93    0.31   0.2 f
  [N/Fe]       14   0.47    2.52    1.75    0.59   0.0 f
  [Na/Fe]      3    0.03    0.48    0.37    0.23   0.41 g
  [Mg/Fe]      12   0.04    1.04    0.55    0.27   0.56
  [Al/Fe] d    10   −0.55   0.88    0.27    0.39   −0.09
  [Ca/Fe]      14   0.11    1.12    0.42    0.34   0.31
  [ScII/Fe] d  5    0.05    0.67    0.48    0.26   0.24
  [TiI/Fe]     14   0.03    0.57    0.36    0.15   0.36
  [TiII/Fe]    15   0.11    0.96    0.36    0.26   0.36
  [Cr/Fe]      14   −0.70   −0.06   −0.32   0.21   −0.23
  [Mn/Fe]      12   −0.72   0.03    −0.31   0.23   −0.59
  [FeII/FeI]   15   −0.41   0.17    −0.04   0.16   0.00
  [Co/Fe] d    8    −0.06   0.67    0.16    0.24   0.42
  [NiI]        4    −0.65   0.02    −0.32   0.27   −0.02
  [Sr/Fe]      12   −0.98   1.68    0.32    0.69   −0.19
  [Y/Fe]       11   −0.01   1.40    0.55    0.39   ···
  [Ba/Fe]      16   −0.78   2.63    1.30    1.01   −0.20
  [C/Ba]       16   −0.44   2.98    0.43    1.15   ···
  [C/Ba] e     12   −0.44   0.93    0.15    0.49   ···
  [Sr/Ba]      12   −2.23   0.21    −0.94   0.69   ···
  [Sr/Ba] e    8    −2.23   −0.85   −1.11   0.51   ···
  [Y/Ba]       9    −1.73   −0.72   −1.21   0.35   ···
  [Y/Sr]       8    −0.28   0.60    −0.03   0.32   ···
  [La/Ba]      8    −0.71   0.10    −0.32   0.26   ···
  [Eu/Ba]      4    −1.47   −0.34   −0.82   0.46   ···
  [Pb/Ba]      7    0.14    1.21    0.79    0.34   ···

b Upper limits are ignored.
c We include the dwarf C-star HE0007−1832 from Cohen et al. (2004).
[The texts of the remaining footnotes (a, d-g) were lost in extraction.]
Based in part on observations obtained at the W. M. Keck Observatory, which is operated jointly by the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration.
2 Palomar Observatory, Mail Stop 105-24, California Institute of Technology, Pasadena, CA 91125; jlc@astro.caltech.edu, aswenson@caltech.edu
3 Carnegie Observatories of Washington, 813 Santa Barbara Street, Pasadena, CA 91101; andy, ian, shec@ociw.edu
4 Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, D-21029 Hamburg, Germany; nchristlieb, fzickgraf@hs.uni-hamburg.de
5 Spitzer Science Center, Mail Stop 100-22, California Institute of Technology, Pasadena, CA 91125; solange@ipac.caltech.edu
The standard nomenclature is adopted; the abundance of element X is given by ǫ(X) = N(X)/N(H) on a scale where N(H) = 10^12 H atoms. Then [X/H] = log10[N(X)/N(H)] − log10[N(X)/N(H)]⊙, and similarly for [X/Fe].
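As a worked instance of this notation, consider C in HE2356−0410 (Table 11); the solar value logǫ⊙(C) ≈ 8.59 used below is our inference from the tabulated entries, consistent with the Holweger (2001) scale:

    % [C/Fe] = log eps(C) - log eps_sun(C) - [Fe/H]
    \[
      [\mathrm{C}/\mathrm{Fe}]
        = \log\epsilon(\mathrm{C}) - \log\epsilon_{\odot}(\mathrm{C}) - [\mathrm{Fe}/\mathrm{H}]
        = 7.66 - 8.59 - (-3.07) = +2.14 .
    \]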
See http://www.astronomy.ohio-state.edu/ANDICAM and http://www.astro.yale.edu/smarts.
MAKEE was developed by T.A. Barlow specifically for reduction of Keck HIRES data. It is freely available on the world wide web at the Keck Observatory home page, http://www2.keck.hawaii.edu:3636/.
We ignore contributions from any issues that vary as a function of T eff that may not be included in our analysis, such as non-LTE effects, which might contribute to the measured slopes and their rms dispersion. Large contributions to the σ of the measured slopes from such terms can be excluded.
http://stella.nbi.dk/pub/scan
6 Notes on the web site of U. Jørgensen (http://stella.nbi.dk/pub/scan) suggest that this argument may (…)
The difference in adopted non-LTE correction for the lines of the 3950Å doublet of Al I has been removed.
There is a minor caveat regarding the issue of internal consistency of the gf values between the various Mg lines discussed in Cohen et al. (2004), but this is a small effect, ∼0.2 dex at most.
It must be admitted that all the HIRES spectra of C-stars in hand as of Aug. 2005 have been analyzed, but not all the spectra of C-normal stars in hand have been analyzed yet. This bias only affects the relative ratio of C-rich to C-normal stars in Fig. 11, but not their distribution along the locus.
a Synthesis used to derive Pb abundance; W λ given as a guide to line strength.
a This is [Fe/H].
a Sources: 1. Plez & Cohen (2005), Plez, Cohen & Melendez (2006); 2. Dearborn et al. (1986); 3. Aoki et al. (2002b); 4. Aoki et al. (2003); 5. Johnson & Bolte (2004); 6. Sivarani et al. (2004); 7. Preston & Sneden (2000); 8. Preston & Sneden (2001); 9. Hill et al. (2000).
[Residue of the detailed abundance tables for HE1410−0004, HE1434−1442, and HE1443+0113 (Li I through Na I entries, fused in extraction) is not reproduced here.]

Table 12. 12 C/ 13 C Ratios for EMP C-Stars From the HES

  ID            12 C/ 13 C (C 2 ) a   12 C/ 13 C (CH)

  HE0007−1832   >2.0                  ···
  HE0012−1441   >3.0                  ···
  HE0058−0244   3.5                   8 - 10
  HE0143−0441   >4.0                  ···
  HE0212−0557   4.0                   3 - 4
  HE0336+0113   2.5                   7.5
  HE1031−0020   5.0                   ···
  HE1150−0428   4.0                   ···
  HE1410+0213   2.0                   2.5
  HE1410−0004   >3.0                  ···
  HE1434−1442   5.0                   ···
  HE1443+0113   5.0                   ···
  HE1509−0806   4.0                   ···
  HE2158−0348   6.0                   3 - 5
  HE2232−0603   >6.0                  ≥30
  HE2356−0410   4.0                   3 - 5

a The uncertainty in the deduced 12 C/ 13 C ratios is 30% of the isotopic ratio.

[Residue of Table 14 (additional very metal-poor C-stars from the literature): only fragments survive, including entries for CS29498−043, CS30301−015, and CS31062−050.]
References

Alonso, A., Arribas, S. & Martinez-Roger, C., 1996, A&A, 313, 873
Alonso, A., Arribas, S. & Martinez-Roger, C., 1999, A&AS, 140, 261
Amiot, C., 1983, ApJS, 52, 329
Anders, E. & Grevesse, N., 1989, Geochim. Cosmochim. Acta, 53, 197
Aoki, W., et al., 2001, ApJ, 561, 346
Aoki, W., Norris, J. E., Ryan, S. G., Beers, T. C. & Ando, H., 2002a, ApJ, 567, 1166
Aoki, W., Norris, J. E., Ryan, S. G., Beers, T. C. & Ando, H., 2002b, ApJ, 576, L141
Aoki, W., Ryan, S. G., Norris, J. E., Beers, T. C., Ando, H. & Tsangarides, S., 2002c, ApJ, 580, 1149
Aoki, W., et al., 2005, ApJ, 632, 611
Arlandini, C., Käppeler, F., Wisshak, K., Gallino, R., Lugaro, M., Busso, M. & Straniero, O., 1999, ApJ, 525, 886
Asplund, M., Nordlund, A., Trampedach, R., Allende Prieto, C. & Stein, R. F., 2000, A&A, 359, 729
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C. & Kiselman, D., 2004, A&A, 417, 751
Asplund, M., Grevesse, N., Sauval, A. J., Allende Prieto, C. & Blomme, R., 2005, A&A, 431, 693
Barbuy, B., et al., 1997, A&A, 317, L63
Baumüller, D. G. & Gehren, T., 1997, A&A, 325, 108
Baumüller, D. G., Butler, K. & Gehren, T., 1998, A&A, 338, 637
Beers, T. C., Preston, G. W. & Shectman, S., 1985, AJ, 90, 2089
Beers, T. C., Preston, G. W. & Shectman, S., 1992, AJ, 103, 1987
Beers, T. C., Rossi, S., Norris, J. E., Ryan, S. & Shefler, T., 1999, AJ, 117, 981
Beers, T. C. & Christlieb, N., 2005, ARA&A, 43, 531
Bessell, M. S., Christlieb, N. & Gustafsson, B., 2004, ApJ, 612, L61
Blanco, V. M., McCarthy, M. F. & Blanco, B. M., 1980, ApJ, 242, 938
Bohm-Vitense, E., Carpenter, K., Robinson, R., Ake, T. & Brown, J., 2000, ApJ, 533, 969
Bonifacio, P., Molaro, P., Beers, T. C. & Vladilo, G., 1998, A&A, 332, 672
Boothroyd, A. I. & Sackmann, I. J., 1999, ApJ, 510, 232
Burris, D. L., Pilachowski, C. A., Armandroff, T. E., Sneden, C., Cowan, J. J. & Roe, H., 2000, ApJ, 544, 302
Busso, M., Gallino, R. & Wasserburg, G. J., 1999, ARA&A, 37, 239
Busso, M., Straniero, O., Gallino, R. & Abis, C., 2004, in Origin and Evolution of the Elements, eds. A. McWilliam & M. Rauch (Pasadena: CUP), p. 67
Carretta, E., Gratton, R. G., Sneden, C. & Bragaglia, A., 2000, A&A, 354, 169
Carretta, E., Gratton, R. G., Cohen, J. G., Beers, T. C. & Christlieb, N., 2002, AJ, 124, 481
Cayrel, R., et al., 2004, A&A, 416, 1117
Christlieb, N., Green, P. J., Wisotzki, L. & Reimers, D., 2001, A&A, 375, 366
Christlieb, N., 2003, Rev. Mod. Astron., 16, 191
Christlieb, N., Gustafsson, B., Korn, A. J., Barklem, P. S., Beers, T. C., Bessell, M. S., Karlsson, T. & Mizuno-Wiedner, M., 2004, ApJ, 603, 708
Cohen, J. G., Christlieb, N., Beers, T. C., Gratton, R. G. & Carretta, E., 2002, AJ, 124, 470
Cohen, J. G., Christlieb, N., Qian, Y. Z. & Wasserburg, G. J., 2003, ApJ, 588, 1082
Cohen, J. G., Christlieb, N., McWilliam, A., Shectman, S., Thompson, I., Wasserburg, G. J., Ivans, I., Dehn, M., Karlsson, T. & Melendez, J., 2004, ApJ, 612, 1107
Cohen, J. G., Briley, M. M. & Stetson, P. B., 2005, AJ, 130, 1177
Cohen, J. G., et al., 2006a, ApJ, 633, L109
Cohen, J. G., et al., 2006b, in preparation
Cohen, J. G., et al., 2006c, in preparation
Cutri, R. M., et al., 2003, Explanatory Supplement to the 2MASS All-Sky Data Release, http://www.ipac.caltech.edu/2mass/releases/allsky/doc/explsup.html
Dearborn, D. S. P., Liebert, J., Aaronson, M., Dahn, C. C., Harrington, R., Mould, J. & Greenstein, J. L., 1986, ApJ, 300, 314
Dominy, J. F., 1984, ApJS, 55, 27
Dominy, J. F., 1985, PASP, 97, 104
Frebel, A., et al., 2005, Nature, 434, 871
Grevesse, N. & Sauval, A. J., 1998, Space Science Reviews, 85, 161
Hill, V., et al., 2000, A&A, 353, 557
Hobbs, L. M., Thorburn, J. A. & Rebull, L. M., 1999, ApJ, 523, 797
Holweger, H., 2001, in Solar and Galactic Composition, ed. R. F. Wimmer-Schweingruber, AIP Conf. Proceedings (see astro-ph/0107426)
Houdashelt, M. L., Bell, R. A. & Sweigart, A. V., 2000, AJ, 119, 1448
Johnson, J. J. & Bolte, M., 2002, ApJ, 579, L87
Johnson, J. J. & Bolte, M., 2004, ApJ, 605, 462
Keenan, P. C. & McNeil, R. C., 1976, An Atlas of Spectra of the Cooler Stars: Types G, K, M, S and C (Columbus: Ohio State University Press)
Kiselman, D., 2001, New Astronomy Review, 45, 559
Kurucz, R. L., 1993, ATLAS9 Stellar Atmosphere Programs and 2 km/s Grid (Kurucz CD-ROM No. 13)
Lambert, D. L., Gustafsson, B., Eriksson, K. & Hinkle, K. H., 1986, ApJS, 62, 373
Lawler, J. E., Bonvallet, G. & Sneden, C., 2001, ApJ, 556, 452
Lawler, J. E., Wickliffe, M. E., Den Hartog, E. A. & Sneden, C., 2001, ApJ, 563, 1075
Lucatello, S., Gratton, R., Cohen, J. G., Beers, T. C., Christlieb, N., Carretta, E. & Ramírez, S., 2003, AJ, 125, 875
. R E Luck, H E Bond, ApJS. 77515Luck, R. E. & Bond, H. E., 1991, ApJS, 77, 515
. S Lucatello, S Tsagarides, T C Beers, E Carretta, R G Gratton, S G Ryan, ApJ. 625825Lucatello, S., Tsagarides, S., Beers, T. C., Carretta, E., Gratton, R. G. & Ryan, S. G., 2005, ApJ, 625, 825
. S Lucatello, A&A. 434691Lucatello, S. et al., 2006, in preparation Matsuura, M. et al., 2005, A&A, 434, 691
. P Marigo, J Bernard-Salas, S R Pottasch, A G G M Tielens, P R Wesselius, A&A. 409619Marigo, P., Bernard-Salas, J., Pottasch, S. R., Tielens, A. G. G. M. & Wesselius, P. R., 2003, A&A, 409, 619
B Marsteller, T C Beers, S Rossi, N Christlieb, M Bessell, J Rhee, Astro-ph/0408380Eighth Nuclei in the Cosmos. Marsteller, B., Beers, T. C., Rossi, S., Christlieb, N., Bessell, M. & Rhee, J., 2004, in Eighth Nuclei in the Cosmos, Nuclear Physics A (Astro-ph/0408380)
. R D Mcclure, ApJ. 28031McClure, R. D., 1984, ApJ, 280, L31
. R D Mcclure, JRASC. 79277McClure, R. D., 1985, JRASC, 79, 277
. R D Mcclure, A W Woodsworth, ApJ. 352709McClure, R. D. & Woodsworth, A. W., 1990, ApJ, 352, 709
. R D Mcclure, PASP. 109256McClure, R. D., 1997, PASP, 109, 256
. A Mcwilliam, G W Preston, C Sneden, S Shectman, AJ. 1092736McWilliam, A., Preston, G. W., Sneden, C. & Shectman, S., 1995, AJ, 109, 2736
. A Mcwilliam, G W Preston, C Sneden, L Searle, AJ. 1092757McWilliam, A., Preston, G. W., Sneden, C. & Searle, L., 1995, AJ, 109, 2757
. A Mcwilliam, ARA&A. 35503McWilliam, A., 1997, ARA&A, 35, 503
. A Mcwilliam, AJ. 1151640McWilliam, A., 1998, AJ, 115, 1640
. J E Norris, S Ryan, T C Beers, ApJ. 489169Norris, J.E., Ryan, S. & Beers, T.C., 1997, ApJ, 489, L169
. J B Oke, J E Gunn, PASP. 94586Oke, J. B. & Gunn, J. E., 1982, PASP, 94, 586
. B Plez, J G Cohen, A&A. 4341117Plez, B. & Cohen, J. G., 2005, A&A, 434, 1117
B Plez, J G Cohen, J Melendez, From Lithium to Uranium: Elemental Tracers of Early Cosmic Evolution, IAU Symp. 228. V. Hill, P. francois & f. Primas Preston, G. W. & Sneden, C.1201014Plez, B., Cohen, J. G. & Melendez, J., 2006, From Lithium to Uranium: Elemental Tracers of Early Cosmic Evolution, IAU Symp. 228, ed. V. Hill, P. francois & f. Primas Preston, G. W. & Sneden, C., 2000, AJ, 120, 1014
. G W Preston, C Sneden, AJ. 1221545Preston, G. W. & Sneden, C., 2001, AJ, 122, 1545
. J X Prochascka, S O Naumov, B W Carney, A Mcwilliam, A M Wolfe, ApJ. 1202513Prochascka, J. X., Naumov, S. O., Carney, B. W., McWilliam, A., & Wolfe, A. M., 2000, ApJ, 120, 2513
. F Querci, M Querci, V G Kunde, A&A. 15256Querci, F., Querci, M. & Kunde, V. G., 1971, A&A, 15, 256
. F Querci, M Querci, T Tsuji, A&A. 31265Querci, F., Querci, M. & Tsuji, T., 1971, A&A, 31, 265
. S V Ramírez, J G Cohen, J Buss, M M Briley, AJ. 1221429Ramírez, S. V., Cohen, J. G., Buss, J., & Briley, M. M., 2001, AJ, 122, 1429
S Rossi, T C Beers, C Sneden, The Galactic Halo. 165264Rossi, S., Beers, T. C. & Sneden, C., 1999, in The Galactic Halo, ASP Conf. Series, 165, 264
. S G Ryan, W Aoki, J E Norris, T C Beers, Astro-ph/0508475ApJ. in pressRyan, S. G., Aoki, W., Norris, J. E. & Beers, T. C., 2006, ApJ, in press (Astro-ph/0508475)
. J M Scalo, G E Miller, ApJ. 233596Scalo, J. M. & Miller, G. E., 1979, ApJ, 233, 596
. J Simmerer, C Sneden, J J Cowan, J Collier, V M Woolf, J E Lawler, ApJ. 6171091Simmerer, J., Sneden, C., Cowan, J. J., Collier, J., Woolf, V. M. & Lawler, J. E., 2004, ApJ, 617, 1091
. T Sivarani, A&A. 4131073Sivarani, T. et al., 2004, A&A, 413, 1073
M F Skrutskie, S E Schneider, R Stiening, S E Strom, M D Weinberg, C Beichman, T Chester, The Impact of Large Scale Near-IR Sky Surveys. Dordrecht187Skrutskie, M. F., Schneider, S.E., Stiening, R., Strom, S.E., Weinberg, M.D., Beichman, C., Chester, T. et al., 1997, in The Impact of Large Scale Near-IR Sky Surveys, ed. F.Garzon et al. (Dordrecht: Kluwer), p. 187
. C ; M Sneden, A&A. 430655Univ. of Texas SpitePh.D. thesisSneden, C., 1973, Ph.D. thesis, Univ. of Texas Spite, M. et al., 2005, A&A, 430, 655
. Y Takeda, A&A. 402343Takeda, Y., 2003, A&A, 402, 343
. Y Takeda, G Zhao, M Takad-Hidai, Y Q Chen, Y Saito, H W Zhang, Chinese Jrl. Astro. 3316Takeda, Y., Zhao, G., Takad-Hidai, M., Chen, Y. Q., Saito, Y. & Zhang, H. W., 2003, Chinese Jrl. Astro., 3, 316
. C Travaglio, R Gallino, E Arnone, J Cowan, F Jordan, C Sneden, ApJ. 601864Travaglio, C., Gallino, R., Arnone, E., Cowan, J., Jordan, F. & Sneden, C., 2004, ApJ, 601, 864
. S Tsangarides, S G Ryan, T C Beers, Mem. Soc. Astronom. Ital. 75772Tsangarides, S. Ryan, S.G. & Beers, T.C., 2004, Mem. Soc. Astronom. Ital., 75, 772
. R S Urdahl, Y Bao, W M Jackson, Chem. Phys. Lett. 178425Urdahl, R. S., Bao, Y. & Jackson, W. M., 1991, Chem. Phys. Lett., 178, 425
. L Wallace, K Hinkle, W Livingston, 98-001Technical ReportWallace, L., Hinkle, K. & Livingston, W., 1998, 1998, N.S.O. Technical Report 98-001, http://ftp.noao.edu.fts/visatl/README
. G Wallerstein, J L Greenstein, ApJ. 1391163Wallerstein, G. & Greenstein, J. L., 1964, ApJ, 139, 1163
. G Wallerstein, G R Knapp, ARA&A. 36369Wallerstein, G. & Knapp, G. R., 1998, ARA&A, 36, 369
. S Van Eck, S Goriely, A Jorissen, B Plez, A&A. 404291Van Eck, S., Goriely, S., Jorissen, A. & Plez, B., 2003, A&A, 404, 291
. J T Van Loon, S Stanimirovic, A Evans, E Muller, Astro- ph/0511118MNRAS. in pressVan Loon, J. T., Stanimirovic, S., Evans, A. & Muller, E., 2005, MNRAS, in press, Astro- ph/0511118
. Data From Cohen, c Data from Cohen et al. (2004).
| [] |
[
"Population Interference In Panel Experiments",
"Population Interference In Panel Experiments"
] | [
"Kevin Wu Han",
"Iavor Bojinov",
"Guillaume Basse"
] | [
"Department of Statistics\nStanford University",
"Harvard Business School",
"Department of MS&E and Department of Statistics\nStanford University"
] | [] | The phenomenon of population interference, where a treatment assigned to one experimental unit affects another experimental unit's outcome, has received considerable attention in standard randomized experiments. The complications produced by population interference in this setting are now readily recognized, and partial remedies are well known. Less understood is the impact of population interference in panel experiments where treatment is sequentially randomized in the population, and the outcomes are observed at each time step. This paper proposes a general framework for studying population interference in panel experiments and presents new finite population estimation and inference results. Our findings suggest that, under mild assumptions, the addition of a temporal dimension to an experiment alleviates some of the challenges of population interference for certain estimands. In contrast, we show that the presence of carryover effects - that is, when past treatments may affect future outcomes - exacerbates the problem. Our results are illustrated through both an empirical analysis and an extensive simulation study. | 10.2139/ssrn.3802304 | [
"https://export.arxiv.org/pdf/2103.00553v2.pdf"
] | 232075775 | 2103.00553 | e2e5487d5966dabf38f47e891c9e1552faec78bc |
Population Interference In Panel Experiments
June 8, 2023
Kevin Wu Han
Department of Statistics
Stanford University

Iavor Bojinov
Harvard Business School

Guillaume Basse
Department of MS&E and Department of Statistics
Stanford University
Population Interference In Panel Experiments
June 8, 2023

Keywords: Finite-population Inference, Randomization Distribution, Potential Outcomes, Dynamic Causal Effects
The phenomenon of population interference, where a treatment assigned to one experimental unit affects another experimental unit's outcome, has received considerable attention in standard randomized experiments. The complications produced by population interference in this setting are now readily recognized, and partial remedies are well known. Less understood is the impact of population interference in panel experiments where treatment is sequentially randomized in the population, and the outcomes are observed at each time step. This paper proposes a general framework for studying population interference in panel experiments and presents new finite population estimation and inference results. Our findings suggest that, under mild assumptions, the addition of a temporal dimension to an experiment alleviates some of the challenges of population interference for certain estimands. In contrast, we show that the presence of carryover effects -that is, when past treatments may affect future outcomes -exacerbates the problem. Our results are illustrated through both an empirical analysis and an extensive simulation study.
Introduction
When researchers estimate causal effects from randomized experiments, they almost always make assumptions that restrict the number of counterfactual outcomes to simplify the subsequent inference. In standard experiments, where units are randomly assigned to either a treatment or control, researchers commonly assume that one unit's assignment does not affect another unit's response;
this is usually referred to as no interference (Cox 1958, Chapter 2). In panel experiments, where units are exposed to different interventions over time, in addition to no interference, researchers regularly assume that the observed outcomes are not impacted by past assignments; this is often called the no carryover assumption (Cox 1958, Chapter 13). Although these two assumptions are useful, there are numerous empirical examples where they are violated.
This mismatch between practical applications and theoretical assumptions has catalyzed a growing body of literature dedicated to studying relaxations of these stringent conditions for either standard or panel experiments, but not both.
In standard experiments without evoking the no interference assumption, each unit's outcome depends on the assignments received by all other experimental units. Allowing for such arbitrary population interference (we use the term population interference to emphasize that the interference occurs across units) makes causal inference challenging (Basse & Airoldi 2018). In practice, researchers look for an underlying structure that limits the scope of interference. For example, when studying electoral participation during a special election in 2009 in Chicago, Sinclair et al. (2012) assumed that interference occurred within-household but not across; more broadly, this type of interference has been found in many other applications, including education (Hong & Raudenbush (2006); Rosenbaum (2007)), economics (Sobel (2006); Manski (2013)) and public health (Halloran & Struchiner (1995)). Inference in this setting is challenging because interference increases the number of potential outcomes and makes observations dependent. Aronow & Samii (2017) introduce a general framework for studying causal inference with interference: they introduce the concept of exposure mapping, define useful estimands, and construct asymptotically valid confidence intervals based on the Horvitz-Thompson estimator.

The literature on panel experiments has similarly shifted towards relaxing the no carryover effects assumption that precludes outcomes from being impacted by past assignments. For example, in the most extreme case, Bojinov & Shephard (2019) allow for arbitrary carryover effects when studying whether algorithms or humans are better at executing large financial orders; more generally, these types of relaxations have also been studied in economics (Angrist & Kuersteiner 2011, Rambachan & Shephard 2019), epidemiology (Robins 1986, Robins et al. 1999a), public health (Boruvka et al. 2018), and political science (Blackwell & Glynn 2018). Similarly to relaxing the no interference assumption, removing the no carryover assumption enables researchers to develop and explore a richer class of causal estimands that capture both the contemporaneous and delayed causal effects (Bojinov & Shephard 2019). The latter is particularly important for technology companies seeking to understand the long-term impact of their interventions (Basse, Ding & Toulis 2019, Hohnhold et al. 2015). Similarly to the population interference setting, researchers use analogous Horvitz-Thompson type estimators to analyze experiments with carryover effects.
An apparent gap in the literature is an understanding of whether the possibility of running a panel experiment alleviates the challenges posed by population interference or makes them worse. This is particularly important for researchers wishing to run field experiments for two reasons.
First, it is often the case that researchers are constrained in the maximal number of experimental units they can recruit, for instance, because of costs or limits in the total population. Second, population interference often leads to increased uncertainty that can be reduced only by increasing the sample size. Panel experiments can alleviate both issues, as it is often cheaper to change an experimental unit's treatment than to recruit more subjects, and uncertainty tends to decrease as the sample size and the number of time periods increase. However, what happens when there are both population interference and carryover effects is unclear.
To address this gap, we introduce a unifying framework for studying panel experiments with population interference. We begin by focusing on panel experiments with population interference but no carryover effects (Section 3). Here, we provide asymptotically valid confidence intervals for estimands defined at specific time periods and estimands that average contrasts over multiple time periods. We also introduce a novel class of assumptions enabling us to leverage past data to improve inference at a given time. Together, our results show that using panel experiments when there is population interference allows us to achieve valid inference under much weaker conditions on the population interference and even drop all restrictions for large time horizons. These results should be particularly encouraging for researchers wishing to run field experiments when the number of experimental subjects is constrained, as is often the case in Economics (for example, Andreoni & Samuelson (2006)) and Management Science (for example, Choudhury et al. (2021)).
We then tackle the most general setting featuring both population and temporal interference (Section 4). Under additional assumptions, we derive a general central limit theorem, which fails to provide the same clear benefit because of the data complexity caused by carryover effects.
We also state asymptotic results for a restricted type of mixed interference that generalizes the usual stratified interference to panel experiments and provides a blueprint for deriving additional results in specific contexts. Here we show a clear benefit: incorporating a temporal dimension allows us to relax the main restriction on the maximal cluster size needed to obtain valid inference. For researchers, these results are slightly less encouraging but, nevertheless, provide an essential next step in understanding how to leverage panel experiments in real-world settings.
Finally, Section 2 details our setup by introducing the potential outcomes framework, our causal estimands and corresponding estimators, and the randomization-based framework that we leverage for all our results. We conclude with simulations (Section 5), empirical applications (Section 6), and a discussion (Section 7). The Appendix contains a detailed discussion of inference under population interference for standard experiments, all proofs, and additional simulations.
Setup
Assignments
Consider a randomized experiment occurring over T periods on a finite population of n experimental units. At each time step t ∈ {1, · · · , T }, unit i ∈ {1, · · · , n} can be assigned to treatment (W i,t = 1) or control (W i,t = 0); extensions to non-binary treatments are straightforward. We denote by W i,1:t = (W i,1 , W i,2 , · · · , W i,t ) the assignment path up to time t for unit i, W 1:n,t the assignment vector for all n units at time step t and W 1:n,1:t ∈ {0, 1} n×t the assignment matrix. Hence, for each i and t, W i,1:t is a vector of length t, W 1:n,t is a vector of length n and W 1:n,1:t is a matrix of dimension n × t.
We define an assignment mechanism (or design) to be the probability distribution of the assignment matrix P(W 1:n,1:T ).
For example, in a Bernoulli design, the assignment mechanism is independent across time and units such that P(W i,t = 1) = p for all i, t. More generally, we will often work with assignments that are temporally independent.
Definition 1 (Temporally independent assignment mechanism). We say an assignment mechanism is temporally independent if $W_{1:n,t}$ and $W_{1:n,t'}$ are independent for any $t \neq t'$.

Following much of the literature on analyzing complex experiments, we adopt the randomization-based approach to inference, in which the assignment mechanism is the only source of randomness (Kempthorne 1955); see Abadie et al. (2020) for a recent review. Throughout, we use lower cases w with the appropriate subscript for realizations of the assignment matrix W.
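As a concrete illustration (our own sketch, not code from the paper), the following Python snippet samples an assignment matrix under the Bernoulli design described above; the dimensions and probability are arbitrary choices.

import numpy as np

def sample_bernoulli_design(n, T, p=0.5, seed=0):
    # Sample an n x T assignment matrix W with P(W[i, t] = 1) = p,
    # independently across units and time steps (a Bernoulli design).
    rng = np.random.default_rng(seed)
    return rng.binomial(1, p, size=(n, T))

W = sample_bernoulli_design(n=100, T=20, p=0.5)
# Temporal independence holds by construction: each column W[:, t]
# is drawn independently of every other column W[:, t'].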
Potential outcomes and exposure mappings
The goal of causal inference is to study how an intervention impacts an outcome of interest. Following the potential outcomes formulation, for panel experiments without any assumptions, each unit i at time t has $2^{nT}$ potential outcomes corresponding to the total number of distinct realizations of the assignment matrix, denoted by $Y_{i,t}(w_{1:n,1:T})$. For simplicity, we assume that the potential outcomes are one-dimensional, although it is straightforward to relax this assumption.
In randomized experiments, where we control the assignment mechanism, the outcomes at time t are not impacted by future assignments that have yet to be revealed to the units (Bojinov & Shephard 2019). This assumption drastically reduces the total number of potential outcomes and will be implicitly made throughout this paper. (The assumption, known as non-anticipating potential outcomes (Bojinov & Shephard 2019), can be violated if experimental units are told what their future assignments will be and modify their present behavior as a result; for instance, shoppers who expect to receive a considerable discount on a subsequent day may curtail their spending until they receive the discount.) We now formally define the no carryover effects and no population interference assumptions.
Definition 2 (No carryover effect and population interference). We say that there is no carryover effect if and only if
$$Y_{i,t}(w_{1:n,1:t}) = Y_{i,t}(w'_{1:n,1:t}) \quad \text{whenever} \quad w_{1:n,t} = w'_{1:n,t},$$
and we say that there is no population interference if and only if
$$Y_{i,t}(w_{1:n,1:t}) = Y_{i,t}(w'_{1:n,1:t}) \quad \text{whenever} \quad w_{i,1:t} = w'_{i,1:t}.$$
If we make both assumptions, inference is relatively straightforward. However, if we only assume that there is population interference and no carryover effects, inference is still impossible without any assumptions on the population interference structure (Basse & Airoldi 2018). One way forward is to assume that the outcomes of unit i depend only on the treatments assigned to a subset of the population. This intuition extends more generally to the assertion that the outcome of unit i at time t depends on a low-dimensional representation of $w_{1:n,1:t}$. Formally, for each unique i, t pair we define the exposure mapping $f_{i,t}: \{0,1\}^{n \times t} \to \Delta$, where $\Delta$ is the set of possible exposures (Aronow & Samii 2017); to make exposure mappings useful, we assume the cardinality of $\Delta$ is (substantially) smaller than $n \times t$. On the other hand, if we assume that there is no population interference, we would again need additional assumptions to obtain valid randomization-based inference.
Defining exposure mappings in this flexible manner allows us to unify and transparently consider restrictions on the population interference and the duration of the carryover effect. Throughout this paper, we restrict our focus to properly specified time-invariant exposure mappings, which are formally defined below.
Assumption 1 (Properly specified time-invariant exposure mapping). The exposure mappings are properly specified if, for all pairs $i \in \{1,\dots,n\}$ and $t \in \{1,\dots,T\}$, and any two assignment matrices $w_{1:n,1:t}$ and $w'_{1:n,1:t}$,
$$Y_{i,t}(w_{1:n,1:t}) = Y_{i,t}(w'_{1:n,1:t}) \quad \text{whenever} \quad f_{i,t}(w_{1:n,1:t}) = f_{i,t}(w'_{1:n,1:t}).$$
For $p \in \{1,\dots,T\}$, we say the exposure mappings are p-time-invariant if for any $t, t' \in \{p,\dots,T\}$ and any unit i, $f_{i,t}(w_{1:n,1:t}) = f_{i,t'}(w_{1:n,1:t'})$ whenever $w_{1:n,t-p+1:t} = w_{1:n,t'-p+1:t'}$.
The exposure mappings are time-invariant if they are p-time-invariant for some $p \in \{1,\dots,T\}$. We say the exposure mappings are properly specified time-invariant exposure mappings if they are both properly specified and time-invariant.
Properly specified exposure mappings can be thought of as defining "effective treatments," allowing us to write:
$$Y_{i,t}(w_{1:n,1:t}) = Y_{i,t}(f_{i,t}(w_{1:n,1:t})) = Y_{i,t}(h_{i,t}),$$
where $h_{i,t} = f_{i,t}(w_{1:n,1:t}) \in \Delta$. Time-invariant exposure mappings constrain the relationship between experimental units to be invariant over time. Specifically, they do not allow the exposure mappings to change across time. For example, if at time t = 1 the outcomes depend on the fraction of treated neighbors in the graph, then it cannot be the case that at time t = 2 the outcomes depend on the number of treated neighbors in the graph. We will see why such an invariance assumption is necessary in the next section when we define causal effects. Of course, the validity of Assumption 1 depends on the exact definition of the exposure mapping and should be informed by the empirical context.
Throughout this paper, we consider a special class of exposure mappings that restrict the outcomes of unit i to depend only on the assignments of a predefined subset of units that we refer to as i's neighborhood and index by $N_i$; note that the index set is not dynamically changing over time. For example, for units connected through a social network, $N_i$ indexes the set of nodes connected to i by an edge; for units organized in households, $N_i$ indexes the set of units that live in the same household as i; and for units located in space, $N_i$ indexes the set of units who are at most a certain distance away from unit i.
Definition 3 (Locally Effective Assignments (LEA)). We say the assignments and exposure mappings are locally effective if the exposure mappings are p-time-invariant for some $p \in \{1,\dots,T\}$ and
$$f_{i,t}(w_{1:n,1:t}) = f_{i,t}(w_{N_i,\,t-p+1:t}),$$
with the convention that $w_{N_i,\,t-p+1:t} = w_{N_i,\,1:t}$ for $t - p + 1 \leq 0$.
Although LEA imposes further structure, it still provides a great deal of flexibility as it incorporates all notions of traditional population interference and temporal carryover effects as special cases. For example, fixing p = 1 makes the exposure values depend only on current assignments, which is equivalent to usual population interference. On the other hand, fixing N i = {i} is equivalent to the no interference assumption imposed on panel experiments in Bojinov et al. (2021). Of course, these special cases are interesting and extensively studied, but our general formulation's real benefit is to consider scenarios where there is both population interference and carryover effects.
Example 1 (Example of Locally Effective Assignments). We consider an example where the exposure values depend on past assignments. In particular, let
$$f_{i,t}(w_{1:n,1:t}) = (w_{i,t-1},\, w_{i,t},\, u_{i,t-1},\, u_{i,t}), \quad \text{where} \quad u_{i,t-1} = \frac{1}{|N_i|}\sum_{j \in N_i} w_{j,t-1} \quad \text{and} \quad u_{i,t} = \frac{1}{|N_i|}\sum_{j \in N_i} w_{j,t};$$
we use $|A|$ to denote the cardinality of the set A. Hence, one unit's assignment and the fraction of treated neighbors at the previous time step matter as well. This is a special case of LEA with p = 2. In this example, the exposure mappings are 2-time-invariant: for $t, t' \geq 2$, if $w_{1:n,(t-1):t} = w_{1:n,(t'-1):t'}$ then $f_{i,t}(w_{1:n,1:t}) = f_{i,t'}(w_{1:n,1:t'})$.
One limitation of the LEA(p) assumption is that it cannot directly capture long-range decaying dependence on past assignments or population interference beyond a unit's neighborhood. Such long-range decaying dependence on time is common in the econometrics literature (Judson & Owen 1999, Wooldridge 2010). For example, if we consider the following parametric model (Wooldridge 2010):
$$Y_{i,t} = \rho Y_{i,t-1} + \beta W_{i,t} + \epsilon_{i,t}, \quad \text{where } \rho \in (-1,1) \text{ and } \mathbb{E}[\epsilon_{i,t} \mid Y_{i,1},\dots,Y_{i,t},\, W_{i,1},\dots,W_{i,t}] = 0,$$
the current outcomes exhibit an infinitely long, decaying dependence on past assignments. To capture this, we would need to set p = T in the LEA(p) causal effect, which, although possible, is unlikely to be practically useful as we would not be able to estimate this estimand with any reasonable precision. Despite this limitation, the LEA(p) assumption allows for a great deal of flexibility as it does not require imposing modeling assumptions on the outcomes and is still useful in many real-world situations.
Finally, population interference beyond local interference has also been studied in the econometrics literature (Manski 1993, Bramoullé et al. 2009, Leung 2022. We leave the extensions of our work to this setting as future work.
Causal effects
Causal effects, within the potential outcomes framework, are defined as contrasts of each unit's potential outcomes under alternate assignments (Imbens & Rubin 2015). As the number of possible contrasts grows exponentially with the number of distinct potential outcomes, we focus on two important special cases.
The first - which is well-defined regardless of the interference structure - compares the difference in the potential outcomes across two extreme scenarios: assigning every unit to treatment, $W_{1:n,1:t} = 1_{1:n,1:t}$, as opposed to control, $W_{1:n,1:t} = 0_{1:n,1:t}$.
Definition 4 (Total effect at time t). The total effect at time t is
$$\tau^{TE}_t = \frac{1}{n}\sum_{i=1}^{n} Y_{i,t}(1_{1:n,1:t}) - \frac{1}{n}\sum_{i=1}^{n} Y_{i,t}(0_{1:n,1:t}).$$
Our total effect at time t corresponds to the Global Average Treatment Effect sometimes used in single time experiments (Ugander & Yin 2020). In the absence of interference and carryover effects, the total effect at time t reduces to the usual average treatment effect at time t.
The second-which requires Assumption 1-provides a much richer class of causal effects with important practical applications. The TEC estimand is the generalization of the usual exposure contrast estimands (Aronow & Samii 2017) to the panel experiment setting. Hereafter, the letter k will always represent values in ∆.
Definition 5 (Temporal exposure contrast (TEC)). For any time step t and exposure values $k, k' \in \Delta$, we define the temporal exposure contrast between k and k' to be
$$\tau^{k,k'}_t = \frac{1}{n}\sum_{i=1}^{n} Y_{i,t}(k) - \frac{1}{n}\sum_{i=1}^{n} Y_{i,t}(k').$$
Here, we implicitly assume that for every unit, the potential outcome is well defined for all values of k ∈ ∆. This assumption precludes situations where the range of the exposure mappings depends on N i . Further, note that if there exist carryover effects, then TEC may not be well-defined for the first few time steps. In this case, we may assume that all units are in the control group prior to the first time step in the panel experiment.
In panel experiments, researchers are often less interested in the idiosyncratic effects at each individual time step than in effects aggregated across time; consider, for example, a technology company experimenting on its platform over several weeks. Such companies are rarely interested in the effect at time t - for instance, on Tuesday 2-3 pm - but instead want to understand the average performance throughout the experiment.
Definition 6 (Average total effect). The average total effect is
$$\bar{\tau}^{TE} = \frac{1}{T}\sum_{t=1}^{T} \tau^{TE}_t.$$
Similar to the total effect, in many applications, we are interested in the TEC's temporal average.
Definition 7 (Average temporal exposure contrast (ATEC)). For any exposure values $k, k' \in \Delta$, we define the average temporal exposure contrast between k and k' to be
$$\bar{\tau}^{k,k'} = \frac{1}{T}\sum_{t=1}^{T} \tau^{k,k'}_t.$$
Without assuming that the exposure mappings are time-invariant, the definition of the ATEC becomes more cumbersome, as an exposure $k \in \Delta$ may be in the image of $f_{i,t}$ for some t but not in the image of $f_{i,t'}$. That is, $Y_{i,t}(k)$ might be well-defined while $Y_{i,t'}(k)$ is not, which makes taking temporal averages difficult.
Of course, our causal estimands are not exhaustive, and there are many other causal estimands of interest. For example, there is a vast literature in econometrics and statistics studying estimation and inference of spillover effects under either different designs or different model assumptions (Robins 1986, Robins et al. 1999b, Leung 2020, Bramoullé et al. 2020, Vazquez-Bare 2022).
Estimation and inference
The observed data
For any choice of exposure mappings $\{f_{i,t}\}$, the observed assignment path $W_{1:n,1:t}$ induces the exposure $H_{i,t} = f_{i,t}(W_{1:n,1:t})$ for each i and t; in particular, the assignment mechanism $\mathbb{P}(W_{1:n,1:t})$ induces a distribution for the exposures $\mathbb{P}(H_{i,t})$ for each i and t. Under Assumption 1 (we additionally assume that each unit fully complies with the assignment, leaving the relaxation of this assumption as future work), the observed outcome $Y_{i,t}$ for unit i at time t can therefore be written:
$$Y_{i,t} = \sum_{k \in \Delta} \mathbb{1}(H_{i,t} = k)\, Y_{i,t}(k), \quad \forall i \in \{1,\dots,n\},\ \forall t \in \{1,\dots,T\}.$$
We use these observed data to estimate the causal effects defined in Section 2.3.
Estimation
For the different interference structures studied in the following sections, we will rely on Horvitz-Thompson estimators (Horvitz & Thompson 1952), or variations thereof; e.g., to estimate $\tau^{k,k'}_t$, we will use:
$$\hat{\tau}^{k,k'}_t = \frac{1}{n}\sum_{i=1}^{n} \frac{\mathbb{1}(H_{i,t}=k)}{\mathbb{P}(H_{i,t}=k)}\, Y_{i,t} - \frac{1}{n}\sum_{i=1}^{n} \frac{\mathbb{1}(H_{i,t}=k')}{\mathbb{P}(H_{i,t}=k')}\, Y_{i,t}. \quad (1)$$
Taking the temporal average of (1) provides a natural estimator of $\bar{\tau}^{k,k'}$,
$$\bar{\hat{\tau}}^{k,k'} = \frac{1}{T}\sum_{t=1}^{T} \hat{\tau}^{k,k'}_t. \quad (2)$$
Similarly, if we let $h^1_{i,t} := f_{i,t}(1_{1:n,1:t})$ and $h^0_{i,t} := f_{i,t}(0_{1:n,1:t})$, then we can estimate the total effect at time t (c.f. Definition 4) with the following estimator:
$$\hat{\tau}^{TE}_t = \frac{1}{n}\sum_{i=1}^{n} \frac{\mathbb{1}(H_{i,t}=h^1_{i,t})}{\mathbb{P}(H_{i,t}=h^1_{i,t})}\, Y_{i,t} - \frac{1}{n}\sum_{i=1}^{n} \frac{\mathbb{1}(H_{i,t}=h^0_{i,t})}{\mathbb{P}(H_{i,t}=h^0_{i,t})}\, Y_{i,t}. \quad (3)$$
Again, we have a natural estimator of the average total effect induced by the above estimator:
$$\bar{\hat{\tau}}^{TE} = \frac{1}{T}\sum_{t=1}^{T} \hat{\tau}^{TE}_t. \quad (4)$$
The properties of these estimators are discussed in detail in the rest of this manuscript.
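For concreteness, the following sketch (ours; it assumes exposures are coded as integers and that the exposure probabilities have already been computed from the design) implements the Horvitz-Thompson estimator (1) and its temporal average (2):

import numpy as np

def ht_contrast(H_t, Y_t, pi_k, pi_kp, k, kp):
    # Estimator (1): inverse-probability-weighted contrast between
    # exposures k and k' at a single time step.
    # H_t, Y_t: length-n arrays of realized exposures and outcomes.
    # pi_k, pi_kp: length-n arrays with P(H_{i,t} = k), P(H_{i,t} = k').
    n = len(Y_t)
    term_k = np.sum((H_t == k) * Y_t / pi_k) / n
    term_kp = np.sum((H_t == kp) * Y_t / pi_kp) / n
    return term_k - term_kp

def ht_atec(H, Y, pi_k, pi_kp, k, kp):
    # Estimator (2): average the per-period estimators over t = 1..T.
    T = Y.shape[1]
    return np.mean([ht_contrast(H[:, t], Y[:, t], pi_k[:, t], pi_kp[:, t], k, kp)
                    for t in range(T)])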
Randomization-based inference
Throughout this paper, we adopt the randomization-based (sometimes called design-based) framework - that is, we consider the potential outcomes as fixed, with the assignment being the only source of randomness. Equivalently, we can view the randomization-based framework as conditioning on the full set of potential outcomes and only using the randomness in the assignment for inference.
This framework has seen a recent uptake in causal inference (Lin 2013, Li & Ding 2017, Li et al. 2019, Basse & Bojinov 2020) and has become standard for analyzing experiments with population interference (Aronow & Samii 2017, Sävje et al. 2017, Basse & Feller 2018, Chin 2018) and unbounded carryover effects (Bojinov & Shephard 2019, Rambachan & Shephard 2019). An additional benefit to adopting a randomization-based approach in the context of population interference is that it explicitly removes the challenges posed by correlations between the potential outcomes and the social relationships, as the potential outcomes are fixed (unknown) constants.
There are two dominant inferential strategies within the randomization framework. The first is to use Fisher (conditional) randomization tests for sharp null hypotheses of no exposure effects, or for pairwise null hypotheses contrasting two exposures. While these tests deliver p-values that are exact and non-asymptotic, they are challenging to run with complex exposure mappings. The second, which we focus on in this paper, is to construct confidence intervals based on the asymptotic distribution of our estimators, which can be used for testing whether there is an effect on average. Intuitively, the asymptotic distribution represents a sequence of hypothetical randomized experiments in which either the number of units increases, the number of time steps increases, or both (Li & Ding 2017). Within each step, we apply the analogous assignment mechanism, obtain the observed data, and compute our proposed estimator to estimate the causal effect of interest (Aronow & Samii 2017, Chin 2018).
Under the randomization framework, it is easy to show that the Horvitz-Thompson estimators $\hat{\tau}^{k,k'}_t$, $\bar{\hat{\tau}}^{k,k'}$, $\hat{\tau}^{TE}_t$ and $\bar{\hat{\tau}}^{TE}$ are unbiased for $\tau^{k,k'}_t$, $\bar{\tau}^{k,k'}$, $\tau^{TE}_t$ and $\bar{\tau}^{TE}$, respectively (see, for example, Bojinov & Shephard (2019) and Aronow & Samii (2017) for explicit proofs); obtaining central limit theorems in this setting, however, is notoriously challenging. In the next two sections, we develop such results for the above four estimators under different experiment assumptions.
Panel experiments with population interference and no carryover effects
Panel experiments are particularly helpful when there is population interference but no carryover effects - a setting we refer to as pure population interference. This situation commonly occurs when the treatment has a relatively short-lived effect; for example, as is the case for most digital experiments on networks and platforms (see the discussion in Kohavi et al. (2020), Bojinov & Gupta (2022)). In this setting, inference for the TEC at a fixed point in time is equivalent to the standard experimental setup (see Appendix A for a full discussion of this setting, including a new central limit theorem that illustrates a fundamental trade-off between the interference structure and the design of the experiment).
We can leverage time and move beyond the standard experimental setup in two ways. First, we can focus on inference for the ATEC that captures the average effect across units and time. Here, we show that varying the treatment over time allows us to handle settings with more expansive population interference. Second, we revisit the TEC and provide a variance reduction technique that leverages a stability assumption that limits the change in the potential outcomes across time for the same unit receiving the same treatment. Together, our results demonstrate the potential of leveraging panel experiments when there are no carryover effects.
Average temporal exposure contrast
There are three distinct asymptotic regimes when considering inference for the ATEC and its natural estimator $\bar{\hat{\tau}}^{k,k'}$: (1) T fixed and $n \to \infty$; (2) $T \to \infty$ and $n \to \infty$; (3) $T \to \infty$ and n fixed.
An important insight from this section is that inference in these three regimes requires different constraints on the population interference mechanism. Roughly speaking, the larger T is relative to n, the more interference we can tolerate.
Assumptions
For most of our central limit theorem results, we require a notion of the dependency graph for a collection of random variables. We define the dependency graph $G_{n,t}$ for $H_{1,t},\dots,H_{n,t}$ to be the graph with vertices $V_t = \{1,\dots,n\}$ and edges $E_t$ such that $(i,j) \in E_t$ if and only if $H_{i,t}$ and $H_{j,t}$ are not independent. The graph $G_{n,t}$ models the dependency relationship among the n random variables $H_{1,t},\dots,H_{n,t}$. Notice that the dependency graph depends both on the interference structure and on the assignment mechanism. Finally, denote by $d_{n,t}$ the maximal number of dependent exposure values for any unit i at time step t and let $d_n = \limsup_{t\to\infty} d_{n,t}$, with the convention that for fixed T, $d_n = \max\{d_{n,1},\dots,d_{n,T}\}$. See Appendix A.2 for the derivation of $d_{n,t}$ in several specific contexts.
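For intuition, $d_{n,t}$ can be read off the dependency graph as one plus its maximum degree (counting the unit itself is a convention we adopt here). A minimal sketch (ours) with an edge-list representation:

def max_dependent_exposures(edges, n):
    # d_{n,t}: the maximal number of exposure values any single H_{i,t}
    # is dependent on, i.e. 1 + the maximum degree of the dependency
    # graph on vertices {0, ..., n-1}.
    degree = [0] * n
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    return 1 + max(degree)

# Household interference with two disjoint households of size 3:
# within a household, all exposures are mutually dependent.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]
print(max_dependent_exposures(edges, n=6))  # 3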
Throughout this subsection, we work exclusively with temporally independent assignment mechanisms, see Definition 1. Our central limit theorems also require three additional assumptions. The first two assumptions bound the potential outcomes and the inverse probabilities of exposure.
Assumption 2 (Uniformly bounded potential outcomes). Assume that all the potential outcomes are uniformly bounded, i.e., $|Y_{i,t}(k)| \leq M$ for some M and for all $i \in \{1,\dots,n\}$, $k \in \Delta$, and $t \geq 1$.
Assumption 3 (Overlap). Assume all the exposure probabilities are bounded away from 0 and 1, i.e., there exists $\eta > 0$ such that for all $i \in \{1,\dots,n\}$, $k \in \Delta$, and $t \geq 1$, $0 < \eta \leq \pi_{i,t}(k) \leq 1 - \eta < 1$, where $\pi_{i,t}(k) = \mathbb{P}(H_{i,t} = k)$.
Assumptions 2 and 3 are standard in the causal inference literature (Aronow & Samii (2017);
Leung (2022)). Assumption 2 holds in most practical applications as realizations of the outcome variables are almost always bounded. Assumption 3 is necessary as vanishing exposure probabilities make the causal question ill-defined as we cannot observe the associated potential outcomes.
The next assumption rules out the existence of a pathological subsequence $n_k$ along which the limiting variance of our estimator is zero.
Assumption 4 (Nondegenerate asymptotic variance). Assume that $\liminf_{n\to\infty} \mathrm{Var}(\sqrt{n}\,\hat{\tau}^{k,k'}_t) > 0$ for any t.
As a consequence of this assumption, for each t, there exists a constant c > 0 such that $\mathrm{Var}(\sqrt{n}\,\hat{\tau}^{k,k'}_t) \geq c$ for all sufficiently large n. This type of assumption seems unavoidable, even in settings without interference (see, e.g., Corollary 1 in Guo & Basse (2021), and the subsequent discussion).
Three central limit theorems
We now state and discuss our three new central limit theorems for the ATEC under population interference but no carryover effects.
Theorem 1. Suppose we have pure population interference and a temporally independent assignment mechanism. Then for any T, under Assumptions 1-4 and the condition that $d_n = o(n^{1/4})$, we have
$$\frac{\sqrt{nT}\,\big(\bar{\hat{\tau}}^{k,k'} - \bar{\tau}^{k,k'}\big)}{\sqrt{\frac{1}{T}\sum_{t=1}^{T} \sigma^2_{n,t}}} \xrightarrow{d} N(0,1), \quad \text{as } n \to \infty,$$
where $\sigma^2_{n,t} = \mathrm{Var}(\sqrt{n}\,\hat{\tau}^{k,k'}_t)$.
This first theorem states a central limit theorem for the regime where T is fixed and $n \to \infty$, making it relevant for applications where n is much larger than T. The assumption that $d_n = o(n^{1/4})$ quantifies the dependence among observations due to interference. If we compare this result to the analogous CLT in the non-temporal experimental setup with interference (Theorem 7 in Appendix A), we have the same requirement, namely $d_n = o(n^{1/4})$. Intuitively, this is because this asymptotic regime is closest to the standard setting with no temporal dimension: any finite number of time periods T is negligible compared with infinitely many observations n.
At the other extreme, we consider the regime where $T \to \infty$ and n is fixed:
Theorem 2. Suppose we have pure population interference, a temporally independent assignment mechanism, and Assumptions 1, 2, 3 are satisfied. Let $\sigma^2_{n,t} = \mathrm{Var}(\sqrt{n}\,\hat{\tau}^{k,k'}_t)$, and further assume that $\frac{1}{T}\sum_{t=1}^{T} \sigma^2_{n,t}$ is bounded away from 0 for any T. We then have
$$\frac{\sqrt{nT}\,\big(\bar{\hat{\tau}}^{k,k'} - \bar{\tau}^{k,k'}\big)}{\sqrt{\frac{1}{T}\sum_{t=1}^{T} \sigma^2_{n,t}}} \xrightarrow{d} N(0,1), \quad \text{as } T \to \infty.$$
This central limit theorem makes no assumption whatsoever on the interference mechanism, beyond assuming that there are no carryover effects: in particular, we allow a unit's outcome to depend on any other unit's assignment. This perhaps surprising fact sheds some light on the nature of inference for the ATEC, and how it differs from the TEC. Intuitively, a central limit theorem requires enough "nearly independent" observations: this means that even if at any time step t the observations are all correlated, we can still have infinitely many independent observations if (1) observations are uncorrelated across time and (2) we observe infinitely many time periods.
The next theorem formalizes this intuition, by making the trade-off between the growth rates of T and d n explicit:
Theorem 3. Suppose we have pure population interference, a temporally independent assignment mechanism, and Assumptions 1-4 are satisfied. Then for T = T(n) such that either
$$\frac{n}{T} \to 0 \quad (5)$$
or
$$\frac{\min\{d_n^2,\, n\}}{\sqrt{nT}} \to 0 \quad (6)$$
holds, we have
$$\frac{\sqrt{nT}\,\big(\bar{\hat{\tau}}^{k,k'} - \bar{\tau}^{k,k'}\big)}{\sqrt{\frac{1}{T}\sum_{t=1}^{T} \sigma^2_{n,t}}} \xrightarrow{d} N(0,1), \quad \text{as } n \to \infty,$$
where $\sigma^2_{n,t} = \mathrm{Var}(\sqrt{n}\,\hat{\tau}^{k,k'}_t)$.
Condition (5) is actually a special case of condition (6): if we do not impose any assumptions on the interference, $\min\{d_n^2, n\}$ is just n, so we need $n/\sqrt{nT} \to 0$, which is equivalent to $n/T \to 0$. Condition (6) gives us more subtle control over the rate of growth required of T for any given level of interference. For instance, while for finite T we would require $d_n = o(n^{1/4})$, if T grows as $T(n) = \sqrt{n}$ we only require $d_n = o(n^{1/2})$. As with the previous theorem, the intuition behind this result is that as $d_n$ becomes larger, the number of "nearly independent" observations at each time point shrinks; this must be counterbalanced by an increase in the number of temporal observations, i.e., an increase in the rate of T = T(n).
Variance bounds and estimation
Unfortunately, as is typical in finite population causal inference, $\mathrm{Var}(\hat{\tau}^{k,k'}_t)$ contains terms that are products of potential outcomes that can never be simultaneously observed in a single experiment, making it non-identifiable (Basse & Bojinov 2020). Instead, researchers derive an upper bound on the variance and compute unbiased estimates of this bound, allowing them to conduct conservative inference (i.e., derive confidence intervals with higher coverage than the nominal level). Without making assumptions on the assignment mechanism, we can obtain a simple bound by replacing all non-observable products of potential outcomes with the sum of their squares; we denote the estimate of the bound by $\widehat{\mathrm{Var}}(\sqrt{n}\,\hat{\tau}^{k,k'}_t)$. The specific expression can be found in the following proposition:
Proposition 1 (Estimator of variance). Let
$$
\begin{aligned}
\widehat{\mathrm{Var}}(\sqrt{n}\,\hat{\tau}^{k,k'}_t) = \frac{1}{n}\Bigg[
&\sum_{i=1}^{n} \mathbb{1}(H_{i,t}=k)\,(1-\pi_i(k))\left(\frac{Y_{i,t}}{\pi_i(k)}\right)^2
+ \sum_{i=1}^{n} \sum_{j\neq i,\,\pi_{ij}(k)=0}\left[\frac{\mathbb{1}(H_{i,t}=k)\,Y^2_{i,t}}{2\pi_i(k)} + \frac{\mathbb{1}(H_{j,t}=k)\,Y^2_{j,t}}{2\pi_j(k)}\right] \\
&+ \sum_{i=1}^{n} \sum_{j\neq i,\,\pi_{ij}(k)>0} \mathbb{1}(H_{i,t}=k)\,\mathbb{1}(H_{j,t}=k)\,\frac{\pi_{ij}(k)-\pi_i(k)\pi_j(k)}{\pi_{ij}(k)}\,\frac{Y_{i,t}}{\pi_i(k)}\,\frac{Y_{j,t}}{\pi_j(k)} \\
&+ \sum_{i=1}^{n} \mathbb{1}(H_{i,t}=k')\,(1-\pi_i(k'))\left(\frac{Y_{i,t}}{\pi_i(k')}\right)^2
+ \sum_{i=1}^{n} \sum_{j\neq i,\,\pi_{ij}(k')=0}\left[\frac{\mathbb{1}(H_{i,t}=k')\,Y^2_{i,t}}{2\pi_i(k')} + \frac{\mathbb{1}(H_{j,t}=k')\,Y^2_{j,t}}{2\pi_j(k')}\right] \\
&+ \sum_{i=1}^{n} \sum_{j\neq i,\,\pi_{ij}(k')>0} \mathbb{1}(H_{i,t}=k')\,\mathbb{1}(H_{j,t}=k')\,\frac{\pi_{ij}(k')-\pi_i(k')\pi_j(k')}{\pi_{ij}(k')}\,\frac{Y_{i,t}}{\pi_i(k')}\,\frac{Y_{j,t}}{\pi_j(k')} \\
&- 2\sum_{i=1}^{n} \sum_{j\neq i,\,\pi_{ij}(k,k')>0} \big(\pi_{ij}(k,k')-\pi_i(k)\pi_j(k')\big)\,\frac{\mathbb{1}(H_{i,t}=k)\,\mathbb{1}(H_{j,t}=k')}{\pi_{ij}(k,k')}\,\frac{Y_{i,t}}{\pi_i(k)}\,\frac{Y_{j,t}}{\pi_j(k')} \\
&+ 2\sum_{i=1}^{n} \sum_{j\neq i,\,\pi_{ij}(k,k')=0}\left[\frac{\mathbb{1}(H_{i,t}=k)\,Y^2_{i,t}}{2\pi_i(k)} + \frac{\mathbb{1}(H_{j,t}=k')\,Y^2_{j,t}}{2\pi_j(k')}\right]\Bigg],
\end{aligned}
$$
where $\pi_i(k) = \mathbb{P}(H_{i,t}=k)$, $\pi_{ij}(k) = \mathbb{P}(H_{i,t}=k,\,H_{j,t}=k)$, and $\pi_{ij}(k,k') = \mathbb{P}(H_{i,t}=k,\,H_{j,t}=k')$; we drop the subscript t from the probabilities to ease notation. Then
$$\mathbb{E}\big[\widehat{\mathrm{Var}}(\sqrt{n}\,\hat{\tau}^{k,k'}_t)\big] \geq \mathrm{Var}(\sqrt{n}\,\hat{\tau}^{k,k'}_t).$$
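To make the structure of this bound concrete, the sketch below (ours; inputs are hypothetical) implements the k-only block of the estimator, i.e. the first three sums; the k' block is identical with k replaced by k', and the cross terms follow the same pattern:

import numpy as np

def var_bound_single_exposure(H_t, Y_t, pi, pi_joint, k):
    # First three sums of the conservative bound in Proposition 1.
    # pi: length-n array, pi[i] = P(H_{i,t} = k).
    # pi_joint: n x n array, pi_joint[i, j] = P(H_{i,t} = k, H_{j,t} = k).
    n = len(Y_t)
    ind = (H_t == k).astype(float)
    total = np.sum(ind * (1.0 - pi) * (Y_t / pi) ** 2)
    for i in range(n):
        for j in range(n):
            if j == i:
                continue
            if pi_joint[i, j] == 0:
                # Units never jointly exposed to k: bound the
                # non-identifiable covariance by a sum of squares.
                total += ind[i] * Y_t[i] ** 2 / (2 * pi[i]) \
                       + ind[j] * Y_t[j] ** 2 / (2 * pi[j])
            else:
                total += ind[i] * ind[j] \
                    * (pi_joint[i, j] - pi[i] * pi[j]) / pi_joint[i, j] \
                    * (Y_t[i] / pi[i]) * (Y_t[j] / pi[j])
    return total / n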
With the above proposition and central limit theorems, inference proceeds as follows:
Proposition 2. Suppose Theorem 1 or 3 holds. Then for any $\delta > 0$,
$$\mathbb{P}\left(\bar{\tau}^{k,k'} \in \left[\bar{\hat{\tau}}^{k,k'} - \frac{z_{1-\alpha/2}}{\sqrt{1-\delta}}\sqrt{\frac{1}{T^2}\sum_{t=1}^{T}\widehat{\mathrm{Var}}(\hat{\tau}^{k,k'}_t)},\ \bar{\hat{\tau}}^{k,k'} + \frac{z_{1-\alpha/2}}{\sqrt{1-\delta}}\sqrt{\frac{1}{T^2}\sum_{t=1}^{T}\widehat{\mathrm{Var}}(\hat{\tau}^{k,k'}_t)}\right]\right) \geq 1-\alpha$$
for large n. Moreover, suppose Theorem 2 holds; then for any $\delta > 0$ the same interval covers $\bar{\tau}^{k,k'}$ with probability at least $1-\alpha$ for large T.
The proof of the above proposition builds on the proof of Proposition 7 in Appendix A.
The variance bound and inference for the regime when T → ∞ is identical to what is given in Bojinov & Shephard (2019) and is therefore omitted.
Shrinkage estimator under stability assumption
We now focus on deriving a better estimator of the TEC at a fixed point in time. Suppose a unit receives the same treatment for two consecutive periods. If the potential outcomes are similar across time, then we can borrow information from past outcomes to reduce the variance of our estimate of the TEC. Intuitively, this section provides a bias-variance trade-off, where we introduce some bias in our inference for a potentially substantial reduction in the variance. For a specific treatment, the "similar across time" assumption can be formalized as follows.
Assumption 5 (Weak stability of potential outcomes). We say the potential outcome matrix $Y_{i,t}$, $i = 1,\dots,n$, $t = 1,\dots,T$, is $\epsilon$-weakly stable if for each unit i and exposure value k, we have
$$|Y_{i,t}(k) - Y_{i,t+1}(k)| \leq \epsilon, \quad \forall t \in \{1,\dots,T-1\}.$$
If we further assume that $\epsilon = 0$, we then say that the potential outcome matrix is strongly stable.
All results in this section easily generalize to the case where the uniform bound ϵ is replaced by a time dependent bound ϵ t . Throughout, we focus on the estimation of the total effect at time t as an example to illustrate how we can leverage temporal information under weak stability.
Under pure population interference and time-invariant exposure mappings,
$$\tau^{TE}_t = \frac{1}{n}\sum_{i=1}^{n} Y_{i,t}(h^1_i) - \frac{1}{n}\sum_{i=1}^{n} Y_{i,t}(h^0_i), \quad (7)$$
where $h^1_i = f_i(1_{1:n})$ and $h^0_i = f_i(0_{1:n})$.
To build some intuition, we first describe how to leverage a single past time period, t' = t - 1, to improve estimation at time t. The idea is that by considering a convex combination $\hat{\tau}^c_t = \alpha\hat{\tau}^{TE}_t + (1-\alpha)\hat{\tau}^{TE}_{t-1}$, for some $\alpha \in [0,1]$, as an estimator of $\tau^{TE}_t$, we can trade a small amount of bias for a reduction in variance.

Algorithm 1: Algorithm to estimate $\epsilon$
1: Initialize $\hat{\epsilon} = 0$.
2: For t = 1 to T - 1:
   (a) For $i = 1, 2, \dots, n$ compute $h_{i,t}$ and $h_{i,t+1}$.
   (b) If $h_{i,t} = h_{i,t+1}$ (the unit's exposure is unchanged), compute $\epsilon_{i,t} = |y_{i,t} - y_{i,t+1}|$.
   (c) If $\epsilon_{i,t} > \hat{\epsilon}$, set $\hat{\epsilon} = \epsilon_{i,t}$.
3: Output $\hat{\epsilon}$.
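A direct translation of Algorithm 1 into Python (our sketch; the array layout is an assumption):

import numpy as np

def estimate_epsilon(H, Y):
    # Algorithm 1: estimate the stability constant epsilon.
    # H, Y: n x T arrays of realized exposures and observed outcomes.
    # Whenever a unit keeps the same exposure in consecutive periods,
    # the absolute change in its outcome lower-bounds epsilon.
    n, T = Y.shape
    eps_hat = 0.0
    for t in range(T - 1):
        same = H[:, t] == H[:, t + 1]            # steps 2(a)-(b)
        if np.any(same):
            diffs = np.abs(Y[same, t] - Y[same, t + 1])
            eps_hat = max(eps_hat, diffs.max())  # step 2(c)
    return eps_hat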
In other words, if the current variance is larger, then by choosing some $\alpha$ the convex combination type estimator gives us a better estimator in terms of MSE. Moreover, if we know the difference is bigger than $4\epsilon^2$, we know that $\alpha = \frac{1}{2}$ is sufficient. If we further assume that the assignment mechanism is temporally independent (Definition 1), then $\mathrm{Cov}(\hat{\tau}^{TE}_t, \hat{\tau}^{TE}_{t-1}) = 0$; hence we have the following result.
Proposition 5. Suppose that the assignment mechanism is temporally independent. Then there exists some $\alpha \in (0,1)$ such that $\hat{\tau}^c_t = \alpha\hat{\tau}^{TE}_t + (1-\alpha)\hat{\tau}^{TE}_{t-1}$ has lower MSE than $\hat{\tau}^{TE}_t$. The optimal $\alpha$ is given by
$$\alpha = 1 - \frac{\mathrm{Var}(\hat{\tau}^{TE}_t)}{4\epsilon^2 + \mathrm{Var}(\hat{\tau}^{TE}_t) + \mathrm{Var}(\hat{\tau}^{TE}_{t-1})}.$$
Under the $\epsilon$-stability assumption, Algorithm 1 provides a data-dependent approach to estimate $\epsilon$ and allows us to obtain an estimate $\hat{\alpha}$ of the weight parameter $\alpha$,
$$\hat{\alpha} = 1 - \frac{\widehat{\mathrm{Var}}(\hat{\tau}^{TE}_t)}{4\hat{\epsilon}^2 + \widehat{\mathrm{Var}}(\hat{\tau}^{TE}_t) + \widehat{\mathrm{Var}}(\hat{\tau}^{TE}_{t-1})}.$$
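Putting the pieces together, a minimal sketch (ours) of the resulting plug-in shrinkage estimator, which substitutes the Algorithm 1 estimate of epsilon and variance estimates into the optimal-alpha formula:

def shrinkage_estimate(tau_t, tau_prev, var_t, var_prev, eps_hat):
    # Convex-combination estimator under temporally independent
    # assignments; alpha follows Proposition 5 with plug-in estimates.
    alpha = 1.0 - var_t / (4 * eps_hat ** 2 + var_t + var_prev)
    return alpha * tau_t + (1 - alpha) * tau_prev, alpha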
We can easily generalize the above discussion to a version of this estimator that combines $\hat{\tau}^{TE}_t$ and $\hat{\tau}^{TE}_{t'}$ for arbitrary $t' < t$. The above results trivially generalize and are therefore omitted for brevity.
Based on the variance estimator above, we propose two ways to construct confidence intervals.
The first one ignores the bias of $\hat{\tau}^c_t$ and uses a Gaussian confidence interval. The second one takes advantage of Chebyshev's inequality and incorporates the bias. Specifically, note that
$$\mathbb{P}\big(|\hat{\tau}^c_t - (\mathbb{E}[\hat{\tau}^c_t] - \tau^{TE}_t) - \tau^{TE}_t| \geq \epsilon\big) \leq \frac{\mathrm{Var}(\hat{\tau}^c_t)}{\epsilon^2},$$
hence for all $\delta > 0$,
$$\mathbb{P}\big(\tau^{TE}_t \in \big[\hat{\tau}^c_t - (\mathbb{E}[\hat{\tau}^c_t] - \tau^{TE}_t) - \epsilon,\ \hat{\tau}^c_t - (\mathbb{E}[\hat{\tau}^c_t] - \tau^{TE}_t) + \epsilon\big]\big) \geq 1 - \delta \quad \text{for} \quad \epsilon = \sqrt{\frac{\mathrm{Var}(\hat{\tau}^c_t)}{\delta}}.$$
Let $b(\hat{\tau}^c_t) = \mathbb{E}[\hat{\tau}^c_t] - \tau^{TE}_t = (1-\alpha)(\tau^{TE}_{t-1} - \tau^{TE}_t)$ be the bias of our convex combination estimator. If we estimate $b(\hat{\tau}^c_t)$ by $\hat{b}(\hat{\tau}^c_t) = (1-\hat{\alpha})(\hat{\tau}^{TE}_{t-1} - \hat{\tau}^{TE}_t)$, then we can use the following interval as an approximate $(1-\delta)$-level confidence interval for $\tau^{TE}_t$:
$$\left[\hat{\tau}^c_t - \hat{b}(\hat{\tau}^c_t) - \sqrt{\frac{\widehat{\mathrm{Var}}(\hat{\tau}^c_t)}{\delta}},\ \hat{\tau}^c_t - \hat{b}(\hat{\tau}^c_t) + \sqrt{\frac{\widehat{\mathrm{Var}}(\hat{\tau}^c_t)}{\delta}}\right].$$
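A sketch of this second (Chebyshev-based) interval (ours; it takes the point estimates and a variance estimate as given):

import numpy as np

def chebyshev_ci(tau_c, tau_t, tau_prev, alpha_hat, var_c, delta=0.05):
    # Approximate (1 - delta)-level interval for the total effect at
    # time t, centered at the bias-corrected convex estimator.
    bias_hat = (1 - alpha_hat) * (tau_prev - tau_t)
    half_width = np.sqrt(var_c / delta)
    center = tau_c - bias_hat
    return center - half_width, center + half_width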
We explore empirically the coverage of the above approximate confidence intervals with a simulation study in Section 5. The approach we have described in this section naturally extends to using the k - 1 previous time steps, yielding the weighted combination estimator:
$$\hat{\tau}^c_t = \alpha_1\hat{\tau}^{TE}_t + \alpha_2\hat{\tau}^{TE}_{t-1} + \cdots + \alpha_k\hat{\tau}^{TE}_{t-k+1}, \quad \text{with nonnegative weights } \alpha_1,\dots,\alpha_k \text{ summing to one}.$$
Panel experiments with mixed interference

Theorem 4. Assume we have a temporally independent assignment mechanism. Under Assumptions 1-4 and the condition that $d_n = o(n^{1/4})$, we have
$$\frac{\sqrt{n}\,\big(\hat{\tau}^{k,k'}_t - \tau^{k,k'}_t\big)}{\mathrm{Var}\big(\sqrt{n}\,\hat{\tau}^{k,k'}_t\big)^{1/2}} \xrightarrow{d} N(0,1)$$
as $n \to \infty$.
The difference with the pure population setting, discussed in Appendix A, is not mathematical but conceptual: in the mixed setting, the exposures involve the assignments over previous time steps. Consequently, there are generally many more exposures than in the pure population setting, and each unit has a lower probability of receiving each. This leads to Horvitz-Thompson estimators with a much larger variance.
For the average temporal exposure contrast, the difference between population interference and mixed interference is starker. The main difficulty is that mixed interference breaks the temporal independence that powered the results of Section 3.1.
Nevertheless, we can still establish a general central limit theorem, albeit under some additional assumptions. In particular, we require a restriction on the rate at which the variance shrinks.
Assumption 6. Assume that $\liminf_{n\to\infty} \mathrm{Var}(\sqrt{nT}\,\bar{\hat{\tau}}^{k,k'}) \geq \epsilon > 0$ for some $\epsilon$.
This technical assumption rules out the pathological case that the variance vanishes as n → ∞.
To state our theorem, we also need to introduce the notion of an s-dependent sequence.
Definition 8 (s-dependent sequences). We say that $\{H_{i,t}\}_{i=1}^{n}$ is an s-dependent sequence of random variables if and only if for any index sets $I, J \subseteq \{1,\dots,n\}$, the collections $\{H_{i,t}\}_{i \in I}$ and $\{H_{j,t}\}_{j \in J}$ are independent so long as $\min_{j \in J} j - \max_{i \in I} i > s$.
Intuitively, this assumption limits the dependence across units.
Theorem 5. Assume we have a temporally independent assignment mechanism. Under Assumptions 1-4 and Assumption 6, suppose $\{H_{i,t}\}_{i=1}^{n}$ is an s-dependent sequence of random variables for each fixed t and the LEA(p) assumption is satisfied with some finite p. If s, n, T are such that $s^5 T^4 = o(n^{1-\alpha})$ for some $0 < \alpha < 1$, then we have that
$$\frac{\sqrt{nT}\,\big(\bar{\hat{\tau}}^{k,k'} - \bar{\tau}^{k,k'}\big)}{\sqrt{\mathrm{Var}\big(\sqrt{nT}\,\bar{\hat{\tau}}^{k,k'}\big)}} \xrightarrow{d} N(0,1)$$
as $n \to \infty$.
The above theorem requires general assumptions on the exposure values as well as on the asymptotic variance, though independence of assignments across units is not required. However, it is somewhat difficult to apply this result to practical settings. Therefore, we now focus on a specific setting to illustrate the type of results that can be derived under mixed interference.
Stratified interference
Consider the following natural temporal extension of the stratified interference setting (Hudgens & Halloran (2008); Basse & Feller (2018)):
$$f_{i,t}(w_{1:n,1:t}) = f\big(w_{i,t-1},\, w_{i,t},\, \{w_{j,t-1}\}_{j \in N_i,\, j \neq i},\, \{w_{j,t}\}_{j \in N_i,\, j \neq i}\big),$$
where $N_i$ is the group to which unit i belongs. For convenience, we fix each group to be of size r; this ensures that each unit is associated with exactly the same set of exposure values, so that the exposure contrast between two arbitrary exposure values is well-defined.
Theorem 6. With the above setting and temporally independent assignments, under Assumptions 1-4 and Assumption 6, suppose n, r, T are such that $r = o((nT)^{1/4})$. Then we have that
$$\frac{\sqrt{nrT}\,\big(\bar{\hat{\tau}}^{k,k'} - \bar{\tau}^{k,k'}\big)}{\sqrt{\mathrm{Var}\big(\sqrt{nrT}\,\bar{\hat{\tau}}^{k,k'}\big)}} \xrightarrow{d} N(0,1)$$
as $n \to \infty$.
The theorem holds for heterogeneous group sizes as long as $\max_i r_i = o((nT)^{1/4})$, where $r_i = |N_i|$ is the size of the group unit i belongs to. To do inference, we consider a specific example of stratified interference:
$$f_{i,t}(w_{1:n,1:t}) = \Big(w_{i,t-1},\, w_{i,t},\, \sum_{j \in N_i,\, j \neq i} w_{j,t-1},\, \sum_{j \in N_i,\, j \neq i} w_{j,t}\Big).$$
We focus on the Bernoulli design where each unit is independently assigned to treatment with probability $\frac{1}{2}$. We consider the exposures $k = (1, 1, r-1, r-1)$ and $k' = (0, 0, 0, 0)$. Such an exposure contrast is exactly the same as the total effect, since essentially we are comparing the world in which everyone receives treatment to the world in which everyone receives control. Notice that in this case r cannot be infinite, as otherwise the overlap assumption would be violated. To ease notation, we index each unit i by a tuple (l, q), meaning that unit i is the q-th unit in the l-th group; we note in passing that $0 < C_1 \leq Y_{(l,q),t}(k) \leq C_2$ for all l, q, t, k, for some constants $C_1, C_2$, is sufficient for Assumption 6.
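Under this design, the exposure probabilities have a simple closed form; the sketch below (ours) computes them via the binomial pmf and makes explicit why the overlap assumption rules out letting r grow without bound:

from math import comb

def exposure_prob(k_tuple, r, p=0.5):
    # P(H_{(l,q),t} = k) under i.i.d. Bernoulli(p) assignments, for the
    # exposure (w_{i,t-1}, w_{i,t}, number of treated groupmates at t-1,
    # number of treated groupmates at t) with group size r.
    w_prev, w_curr, s_prev, s_curr = k_tuple
    own = (p if w_prev else 1 - p) * (p if w_curr else 1 - p)
    group = (comb(r - 1, s_prev) * p**s_prev * (1 - p)**(r - 1 - s_prev)
             * comb(r - 1, s_curr) * p**s_curr * (1 - p)**(r - 1 - s_curr))
    return own * group

r = 4
print(exposure_prob((1, 1, r - 1, r - 1), r))  # 2**(-2*r) = 0.00390625
print(exposure_prob((0, 0, 0, 0), r))          # also 2**(-2*r) when p = 1/2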
Proposition 6. Assuming the above setup, let $X_{n,t} = \sqrt{n}\,(\hat{\tau}^{k,k'}_t - \tau^{k,k'}_t)$. Then we can estimate the asymptotic variance by
$$\hat{B}_n^2 = \sum_{t=1}^{T} \widehat{\mathrm{Var}}(X_{n,t}) + 2\sum_{t=1}^{T-1} \widehat{\mathrm{Cov}}(X_{n,t}, X_{n,t+1}),$$
where
$$
\begin{aligned}
\widehat{\mathrm{Var}}(X_{n,t}) = \frac{1}{nrT}\Bigg[
&\sum_{l=1}^{n}\sum_{q=1}^{r} (2^{2r}-1)\,\frac{\mathbb{1}(H_{(l,q),t}=k)\,Y^2_{(l,q),t}}{\mathbb{P}(H_{(l,q),t}=k)}
+ \sum_{l=1}^{n}\sum_{q=1}^{r} (2^{2r}-1)\,\frac{\mathbb{1}(H_{(l,q),t}=k')\,Y^2_{(l,q),t}}{\mathbb{P}(H_{(l,q),t}=k')} \\
&+ \sum_{l=1}^{n}\sum_{q=1}^{r} \left(\frac{\mathbb{1}(H_{(l,q),t}=k)\,Y^2_{(l,q),t}}{\mathbb{P}(H_{(l,q),t}=k)} + \frac{\mathbb{1}(H_{(l,q),t}=k')\,Y^2_{(l,q),t}}{\mathbb{P}(H_{(l,q),t}=k')}\right) \\
&+ \sum_{l=1}^{n}\sum_{q_1=1}^{r}\sum_{q_2 \neq q_1} (2^{2r}-1)\left(\frac{\mathbb{1}(H_{(l,q_1),t}=k,\, H_{(l,q_2),t}=k)\,Y_{(l,q_1),t}\,Y_{(l,q_2),t}}{\mathbb{P}(H_{(l,q_1),t}=k,\, H_{(l,q_2),t}=k)} + \frac{\mathbb{1}(H_{(l,q_1),t}=k',\, H_{(l,q_2),t}=k')\,Y_{(l,q_1),t}\,Y_{(l,q_2),t}}{\mathbb{P}(H_{(l,q_1),t}=k',\, H_{(l,q_2),t}=k')}\right) \\
&+ \sum_{l=1}^{n}\sum_{q_1=1}^{r}\sum_{q_2 \neq q_1} \frac{\mathbb{1}(H_{(l,q_1),t}=k)\,Y^2_{(l,q_1),t}}{\mathbb{P}(H_{(l,q_1),t}=k)}
+ \sum_{l=1}^{n}\sum_{q_1=1}^{r}\sum_{q_2 \neq q_1} \frac{\mathbb{1}(H_{(l,q_2),t}=k')\,Y^2_{(l,q_2),t}}{\mathbb{P}(H_{(l,q_2),t}=k')}
\Bigg] \quad (9)
\end{aligned}
$$
and
$$
\begin{aligned}
\widehat{\mathrm{Cov}}(X_{n,t}, X_{n,t+1}) = \frac{1}{nrT}\Bigg[
\sum_{l=1}^{n}\sum_{q_1=1}^{r}\sum_{q_2=1}^{r}
&(2^{r}-1)\,\frac{\mathbb{1}(H_{(l,q_1),t}=k,\, H_{(l,q_2),t+1}=k)\,Y_{(l,q_1),t}\,Y_{(l,q_2),t+1}}{\mathbb{P}(H_{(l,q_1),t}=k,\, H_{(l,q_2),t+1}=k)} \\
&+ (2^{r}-1)\,\frac{\mathbb{1}(H_{(l,q_1),t}=k',\, H_{(l,q_2),t+1}=k')\,Y_{(l,q_1),t}\,Y_{(l,q_2),t+1}}{\mathbb{P}(H_{(l,q_1),t}=k',\, H_{(l,q_2),t+1}=k')} \\
&+ \frac{\mathbb{1}(H_{(l,q_1),t}=k')\,Y^2_{(l,q_1),t}}{\mathbb{P}(H_{(l,q_1),t}=k')} + \frac{\mathbb{1}(H_{(l,q_2),t+1}=k)\,Y^2_{(l,q_2),t+1}}{\mathbb{P}(H_{(l,q_2),t+1}=k)} \\
&+ \frac{\mathbb{1}(H_{(l,q_1),t}=k)\,Y^2_{(l,q_1),t}}{\mathbb{P}(H_{(l,q_1),t}=k)} + \frac{\mathbb{1}(H_{(l,q_2),t+1}=k')\,Y^2_{(l,q_2),t+1}}{\mathbb{P}(H_{(l,q_2),t+1}=k')}
\Bigg]. \quad (10)
\end{aligned}
$$
The expression of the variance is immediate from the setup, (B.22) and (B.23). The estimator is obtained by replacing the non-identifiable terms with an upper bound and estimating the upper bound accordingly.
The difficulty in doing inference in a more general setting comes from the fact that it is hard to give an explicit expression for the variance. Since there is dependence across time, the variance of $\hat{\bar\tau}^{k,k'}$ also involves covariances between Horvitz-Thompson estimators at different times. Hence, in this case, we need additional assumptions on the assignment mechanism in order to express the variance term explicitly.
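For the specific contrast above, the exposure probabilities are available in closed form under the Bernoulli design: the event $H_{i,t} = k$ (or $k'$) is determined by $2r$ independent fair coin flips, so $P(H_{i,t}=k) = P(H_{i,t}=k') = 2^{-2r}$, which is where the $(2^{2r}-1)$ factors above come from. The following Python sketch (our own illustration; inputs are hypothetical arrays) computes the corresponding Horvitz-Thompson ATEC point estimate.

```python
import numpy as np

def ht_atec(y, w, groups, r):
    """Horvitz-Thompson estimate of the contrast between k = (1,1,r-1,r-1)
    and k' = (0,0,0,0), averaged over units and time steps t >= 2.
    Under the Bernoulli(1/2) design, P(H_{i,t}=k) = P(H_{i,t}=k') = 2**(-2r).
    """
    n, T = w.shape
    p = 2.0 ** (-2 * r)
    terms = []
    for t in range(1, T):
        for i in range(n):
            mates = (groups == groups[i])
            mates[i] = False
            s_lag, s_cur = w[mates, t - 1].sum(), w[mates, t].sum()
            hit_k = (w[i, t-1] == 1 and w[i, t] == 1
                     and s_lag == r - 1 and s_cur == r - 1)
            hit_kp = (w[i, t-1] == 0 and w[i, t] == 0
                      and s_lag == 0 and s_cur == 0)
            terms.append((int(hit_k) - int(hit_kp)) * y[i, t] / p)
    return np.mean(terms)
```

Because $2^{-2r}$ decays quickly in $r$, most terms in the sum are zero in practice, which is exactly why $r$ cannot grow too fast relative to $nT$.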
Simulations
We now use a simulation to explore some of our theoretical results. Section 5.1 explores some of the finite sample properties of our central limit theorems in different realistic settings. Section 5.2 explores empirically some properties of the convex combination estimator proposed in Section 3.2; in particular, we show that confidence intervals based on normal approximations behave well in our simulations, making them a reasonable candidate for practical use.
Simulations for central limit theorems
We first explore the finite sample behavior of our central limit theorems. To make our simulations relevant, we consider a version of the popular stratified interference setting (Duflo & Saez 2003; Basse & Feller 2018), in which individuals are nested in groups of varying sizes, and interference may occur within but not across groups. Specifically, we consider the exposure mapping $f_{i,t}(w_{1:n,t}) = (w_{i,t}, u_{i,t})$, where at time $t$, $u_{i,t} = 1$ if unit $i$ has at least one treated neighbor and $u_{i,t} = 0$ otherwise, so each unit may receive one of four exposures: $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$. Throughout, we consider a two-stage design whereby each group is assigned independently with probability $\tfrac12$ to a high-exposure or low-exposure arm, and then each unit is assigned to treatment independently with probability $0.9$ in high-exposure groups and $0.1$ in low-exposure groups. We focus on the central limit theorems for ATEC. Theorem 3 establishes asymptotic results for ATEC under less constraining assumptions on the interference mechanism than for TEC (Theorem 7 in Appendix A). To illustrate this point, we consider the stratified interference setting and assume that the size of each group is bounded by $n^{1/3}$. In this case, $d_n = n^{1/3}$ and hence $T = \sqrt{n}$ suffices for Theorem 3; compared to $d_n = o(n^{1/4})$ in the cross-sectional setting, we can accommodate larger groups. We consider the exposure mapping $f_{i,t}(w_{1:n}) = (w_{i,t}, u_{i,t})$, where $u_{i,t} = 0$ if fewer than 25% of the neighbors are treated; $u_{i,t} = 1$ if between 25% and 50% are treated; $u_{i,t} = 2$ if between 50% and 75% are treated; and $u_{i,t} = 3$ if more than 75% are treated. We generate the potential outcome for unit $i$ at time step $t$ according to $N(3w_{i,t} + 2u_{i,t} + 5 + \epsilon_{i,t},\, 1)$, where $\epsilon_{i,t}$ is uniform on $\{-1, 1\}$. Figures 1 and 2 show that $n = 1000$ suffices for a good approximation. Moreover, the coverage of our 95% confidence interval is 95.4%.
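A minimal Python sketch of this data-generating process follows (our own illustration; tie-breaking at the quartile boundaries and the handling of singleton groups are our choices, and group sizes are left to the caller).

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_once(group_sizes, T):
    """One replication of the two-stage design described above (a sketch)."""
    n = sum(group_sizes)
    labels = np.repeat(np.arange(len(group_sizes)), group_sizes)
    w = np.empty((n, T), dtype=int)
    for t in range(T):
        arm = rng.random(len(group_sizes)) < 0.5     # high vs. low exposure arm
        p_unit = np.where(arm[labels], 0.9, 0.1)
        w[:, t] = rng.random(n) < p_unit
    # quartile exposure u_{i,t}: share of treated neighbours within the group
    u = np.empty((n, T), dtype=int)
    for i in range(n):
        mates = (labels == labels[i])
        mates[i] = False
        share = w[mates].mean(axis=0) if mates.any() else np.zeros(T)
        u[i] = np.minimum((share / 0.25).astype(int), 3)
    eps = rng.choice([-1.0, 1.0], size=(n, T))
    y = rng.normal(3 * w + 2 * u + 5 + eps, 1.0)     # potential outcomes
    return w, u, y
```

Repeating this simulation and standardizing the resulting ATEC estimates reproduces the normal approximation assessed in Figures 1 and 2.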
Estimation under the stability assumption
In Section 3.2, we showed that with an appropriate choice of weights, the family of convex combination estimators outperforms the Horvitz-Thompson estimator. We illustrate this with a simulation study and show that our proposed confidence intervals perform well in our simulated setting.
Estimation under stability assumption for total effects
We consider a social network generated according to an Erdős-Rényi model, in which the units are assigned to treatment or control following a Bernoulli(1/2) design at each time step. We assume a local, pure population form of interference, summarized by the following exposure mappings:
$$f_i(w_{1:n,t}) = \Bigl(w_{i,t},\ \frac{1}{|N_i|}\sum_{j\in N_i} w_{j,t}\Bigr) \qquad (11)$$
where N i is the neighborhood of the i-th unit; that is, we assume that only direct neighbors affect one's potential outcomes. For each unit i, we generate the potential outcomes at t = 1 randomly from N (10, 1). Then, for each time t > 1, we generate the potential outcome Y i,t (k) uniformly from the interval (Y i,t−1 (k) − ϵ, Y i,t−1 (k) + ϵ), so ϵ-stability holds. Throughout our simulations, we assume that T = 20 and we are interested in the total effect at time step t = 20. We compare the performance of the standard Horvitz-Thompson estimator and the performance of the convex combination estimator for estimating the total effect τ T E T at time t = T = 20, varying both the population size n and the number of time steps k used in the convex combination. We estimate ϵ using Algorithm 1 described in Section 3.2; we use Proposition 5 to estimate α when k = 2, and solve the optimization problem introduced in Appendix C for k ≥ 3.
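A minimal Python sketch of this setup (our own illustration; it covers only the network and the $\epsilon$-stable outcome paths, not the estimation of $\epsilon$ via Algorithm 1):

```python
import numpy as np

rng = np.random.default_rng(1)

def er_adjacency(n, p):
    """Symmetric Erdos-Renyi adjacency matrix with no self-loops."""
    upper = np.triu(rng.random((n, n)) < p, k=1)
    return (upper | upper.T).astype(int)

def stable_outcomes(n, T, eps):
    """epsilon-stable potential-outcome paths for the two exposures entering
    the total effect: everyone treated vs. everyone in control."""
    y = np.empty((2, n, T))
    y[:, :, 0] = rng.normal(10.0, 1.0, size=(2, n))
    for t in range(1, T):
        y[:, :, t] = rng.uniform(y[:, :, t - 1] - eps, y[:, :, t - 1] + eps)
    return y  # y[0] = Y_{i,t}(treated world), y[1] = Y_{i,t}(control world)
```

Only the two extreme exposure paths are generated because the total effect compares the all-treated and all-control worlds; other exposure values would need their own paths.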
We first fix $\epsilon = 3$ and vary the sample size. To make each unit have the same expected number of neighbors, we scale the probability $p$ in the Erdős-Rényi model accordingly. For each $n$, we fix the graph and generate 100 realizations of assignments. Table 1 shows the root mean squared errors for three estimators of the total effect: the usual Horvitz-Thompson estimator, the convex combination type estimator with $k = 2$, and the convex combination estimator with $k = 5$. We see that the convex combination type estimators effectively reduce the mean squared error; moreover, when $n$ is relatively small, the reduction is significant.
Coverage of two approximate confidence intervals
Recall that in Section 3.2, we gave two approximate confidence intervals for $\tau^{TE}_t$ based on our convex combination estimator $\hat\tau^c_t$ and variance estimator. We now provide coverage results for these two approximate confidence intervals. We assume a social network generated from the Erdős-Rényi model with $n = 100$ and $p = 0.05$. We fix the stability parameter $\epsilon$ to be 3 and generate the data in the same way as in the previous section. To calculate the coverage, we generate 1000 realizations of the assignments and construct approximate confidence intervals accordingly. Table 2 shows that the two approximate confidence intervals provide reasonable coverage across the three different social networks. Although the Gaussian confidence interval ignores the bias of $\hat\tau^c_t$, it tends to provide better coverage than the confidence intervals obtained from the Chebyshev approach. Moreover, the Gaussian intervals tend to be shorter, making them practically more useful. Appendix D provides an additional table showing the average lengths of the confidence intervals in Table 2.
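For reference, the two interval constructions can be sketched as follows (our reading; the paper's intervals may additionally inflate the half-width to account for the estimated bias bound, which this sketch omits).

```python
import numpy as np

def gaussian_ci(tau_hat, var_hat, level=0.95):
    """Normal-approximation interval: tau_hat +/- z_{1-alpha/2} * sqrt(var_hat)."""
    z = {0.90: 1.6449, 0.95: 1.9600, 0.99: 2.5758}[level]
    half = z * np.sqrt(var_hat)
    return tau_hat - half, tau_hat + half

def chebyshev_ci(tau_hat, var_hat, level=0.95):
    """Chebyshev interval: P(|X - mu| >= c*sd) <= 1/c^2, so c = 1/sqrt(alpha)."""
    c = 1.0 / np.sqrt(1.0 - level)
    half = c * np.sqrt(var_hat)
    return tau_hat - half, tau_hat + half
```

Because $1/\sqrt{0.05} \approx 4.47$ while $z_{0.975} \approx 1.96$, the Chebyshev intervals are mechanically wider, consistent with the lengths reported in Appendix D.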
Two real data examples
We now apply our methods to two empirical applications. In the first application, we use the convex combination estimator to analyze a panel experiment and show it reduces the variance and leads to more reliable estimates of the temporal exposure contrast. In the second application, we run a semi-synthetic experiment on a social network to demonstrate the necessity of our assumptions for the validity of the analysis and provide further empirical evidence of the advantage of the convex combination estimator.
Rational cooperation
The panel experiment we analyze is from Andreoni & Samuelson (2006). The authors test a game-theoretic model of "rational cooperation" through a panel experiment. Specifically, in each experiment session, they recruited 22 subjects to play 20 twice-played prisoners' dilemmas, ensuring that no player would meet the same partner twice. The twice-played prisoners' dilemma consists of two periods with different pay-off structures, as shown in Table 3. The parameters x 1 , x 2 satisfy
$x_1, x_2 \geq 0$ and $x_1 + x_2 = 10$.

Period one:
            C              D
  C   (3x1, 3x1)      (0, 4x1)
  D   (4x1, 0)        (x1, x1)

Period two:
            C              D
  C   (3x2, 3x2)      (0, 4x2)
  D   (4x2, 0)        (x2, x2)

Table 3: Payoff structure in the experiment conducted by Andreoni & Samuelson (2006). The choice C denotes "cooperate" and the choice D "defect."
Let $\lambda = \frac{x_1}{x_1 + x_2}$. In each round of the experiment, 22 subjects were grouped into 11 pairs, and each pair was randomly assigned a $\lambda \sim \mathrm{Unif}\{0, 0.1, \dots, 0.9, 1\}$. The outcomes were the total payoffs. Since there are five sessions in total, we have 110 subjects and 2200 outcomes. We use this panel experiment to illustrate that the convex combination estimator effectively reduces the estimates' variance and thus produces more reliable estimates. To this end, following Bojinov et al. (2021), we define treatment to be $\lambda > 0.6$ and control to be $\lambda \leq 0.6$. This results in a panel experiment with binary treatments and a Bernoulli design with treated probability $\tfrac{5}{11}$. Under this setup, we generally expect a positive treatment effect, as the payoffs are more concentrated in period two.
We next build a social network among all subjects in the experiment. If two players played each other in the first few rounds, they should have some influence on each other in later rounds. Hence, we consider any players that played each other in the first five rounds of the game as connected, and we use the remaining 15 rounds as our experimental data. So, for our panel experiment, we have $n = 110$ and $T = 15$. As Bojinov et al. (2021) showed little evidence of carryover effects, we assume there is only population interference. We then use the exposure model in (11), and the temporal exposure contrast we are interested in is the exposure contrast between $(0, \leq 0.2)$ and $(1, \geq 0.8)$ for each time step. We now report the Horvitz-Thompson estimates of the temporal exposure contrast for the last 10 time steps, together with the estimates from the 2-step and 5-step convex combination estimators. Table 4 shows the results.
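A Python sketch of this data preparation (our own; the layout of the pairing data is hypothetical, and Andreoni & Samuelson's raw data may be organized differently):

```python
import numpy as np

def build_network_and_treatment(pairings, lam):
    """pairings: (rounds, n_pairs, 2) array of player indices per round;
    lam: (rounds, n_pairs) array of lambda draws. Hypothetical data layout."""
    n = pairings.max() + 1
    A = np.zeros((n, n), dtype=int)
    for rnd in range(5):                      # connect partners from rounds 1-5
        for a, b in pairings[rnd]:
            A[a, b] = A[b, a] = 1
    w = np.zeros((n, pairings.shape[0] - 5), dtype=int)
    for t, rnd in enumerate(range(5, pairings.shape[0])):
        for (a, b), l in zip(pairings[rnd], lam[rnd]):
            w[a, t] = w[b, t] = int(l > 0.6)  # treatment: lambda > 0.6
    return A, w
```

From `A` and `w`, the exposures of equation (11) follow by averaging each unit's neighbors' assignments at each time step.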
In general, we do not expect the temporal exposure contrasts to differ across time steps, since all 15 rounds of games were played together in one session. As the table shows, the convex combination estimator leads to estimates with much smaller variance. Note that the estimates from the 2-step and 5-step convex combination estimators are similar, illustrating that the choice of $k$ is not crucial, since the estimator itself takes care of it. Moreover, as we pointed out earlier, we would expect a positive exposure contrast, and the estimates from the convex combination estimator are more reliable in the sense that they shrink towards zero when the Horvitz-Thompson estimator gives a negative value (this is possible since we only have $n = 110$ subjects, which is a small number).

Table 4: Estimates of the temporal exposure contrasts (time steps $T = 6$ through $T = 15$, with variances) from the panel experiment in Andreoni & Samuelson (2006).
Facebook network semi-synthetic experiment
We now describe a semi-synthetic experiment using the Swarthmore College social network from the Facebook 100 dataset (Traud et al. 2012). All networks in this dataset are complete online friendship networks for one hundred colleges and universities, collected from a single-day snapshot of Facebook in September 2005. The network we use has 1657 nodes and 61049 edges. We use this network as the graph that describes population interference among units and generate assignment vectors using a Bernoulli design with success probability $\tfrac12$. We first show the mean squared error reduction from using the convex combination estimator to estimate the temporal exposure contrast between $(0, 0)$ and $(1, 1)$ at $T = 20$. Let $\rho_{i,t} = \frac{1}{|N_i|}\sum_{j\in N_i} w_{j,t}$; we assume the following exposure mappings:
$$f_i(w_{1:n,t}) = (w_{i,t},\, \bar\rho_{i,t}), \quad \text{where } \bar\rho_{i,t} = \begin{cases} 0 & \text{if } \rho_{i,t} \leq 0.3, \\ \rho_{i,t} & \text{if } 0.3 < \rho_{i,t} \leq 0.7, \\ 1 & \text{if } \rho_{i,t} > 0.7. \end{cases}$$
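In code, the coarsened exposure can be computed directly from the adjacency matrix; the following Python sketch is our own illustration (the guard for isolated units is our choice).

```python
import numpy as np

def coarsened_exposure(A, w_t):
    """rho_{i,t} = share of treated neighbours; coarsened as in the display above.
    A : (n, n) adjacency matrix, w_t : length-n assignment vector at time t."""
    deg = A.sum(axis=1)
    rho = (A @ w_t) / np.maximum(deg, 1)          # guard against isolated units
    rho_bar = np.where(rho <= 0.3, 0.0, np.where(rho > 0.7, 1.0, rho))
    return np.stack([w_t, rho_bar], axis=1)       # f_i(w) = (w_{i,t}, rho_bar_{i,t})
```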
We now construct a panel experiment with $T = 20$. We generate the outcomes at each time step according to a model that is linear in $w_{i,t}$ and $\bar\rho_{i,t}$, and add a time-varying component $\epsilon_t$ that is uniformly distributed on $[-0.5, 0.5]$. Table 5 shows the empirical bias, the variance, and the root mean squared error (RMSE) of the estimates of the temporal exposure contrast at time step $T = 20$ using the Horvitz-Thompson estimator, the 2-step convex combination estimator, and the 5-step convex combination estimator. As expected, the convex combination estimator reduces the RMSE significantly. Although the biases seem large compared to the Horvitz-Thompson estimator, as we mentioned previously, we can also control the amount of bias we tolerate, which implicitly accounts for the time effect.
The maximal degree of the network is 577, which is far greater than $\sqrt{n}$. We demonstrate this empirically through a semi-synthetic experiment. Let
$$f_i(w_{1:n,t}) = (w_{i,t},\, \bar\rho_{i,t}), \quad \text{where } \bar\rho_{i,t} = \begin{cases} 0 & \text{if } \rho_{i,t} \leq 0.35, \\ 1 & \text{if } 0.35 < \rho_{i,t} \leq 0.5, \\ 2 & \text{if } 0.5 < \rho_{i,t} \leq 0.65, \\ 3 & \text{if } \rho_{i,t} > 0.65. \end{cases}$$
We are interested in the average temporal exposure contrast between $(1, 3)$ and $(0, 0)$. Since the network is dense, with an average degree of 73.69, we expect the Horvitz-Thompson estimator to be inaccurate: units with many neighbors are unlikely to attain the exposure values $(1, 3)$ or $(0, 0)$. Figures 3 and 4 show the histograms of the Horvitz-Thompson estimates for $T = 20$ and $T = 100$, respectively. Here, we calculate the ATEC for 10,000 realizations, and since the computation of the variance estimate is time-consuming, we do not rescale the estimates. Figure 3 shows that when $T = 20$ the histogram is far from normally distributed. Figure 4 shows that when $T = 100$, although the data are much closer to being normally distributed, they still are not. Also, note that the centers of these two histograms are away from 0; as we stated above, since some of the neighborhoods are extremely large, we cannot observe the exposure values we would need for units with large neighborhoods. This illustrates the necessity of condition (6): reliable inference requires more experiments if the network is dense. We also report the coverage using a Gaussian confidence interval: for both $T = 20$ and $T = 100$, the empirical coverage of the naive Gaussian confidence interval is around 80%.
Conclusion
In this paper, we developed estimation and inference results for panel experiments with population interference. Our work shows that the added temporal dimension in experiments with population interference may either help or hurt our ability to do inference. In the absence of carryover effects, causal effects averaged across time can be estimated more easily than at single time points. Here we also introduced a variance reduction technique that incorporates a novel stability assumption, and showed that if the potential outcomes are well behaved, a simple convex combination type estimator can effectively reduce the mean squared error. In the presence of carryover effects, we can still obtain novel central limit theorems; however, these require additional assumptions and are not as flexible. Finally, our results also explicitly capture the role of the growth rate of the dependency graph of the exposure values, allowing us to consider the trade-off between the assignment mechanism and the interference structure.
Many interesting avenues of investigation around interference in panel experiments have been
left unexplored in this manuscript and will be the object of future work. First, our results mostly consider the temporally independent assignment mechanism: this is, of course, limiting as it removes all dynamic designs, but it does present a useful benchmark. Second, while our simulations show that our convex combination estimators seem to behave well, our formal results under this new stability assumption are still limited. Third, we do not explicitly discuss hypothesis testing.
Although our results do provide a way to test specific hypotheses by inverting the confidence intervals, there is a literature that discusses how to conduct a Fisher randomization test for the sharp null hypothesis in panel experiments.
A Standard population interference
This appendix focuses on estimating TEC under population interference and assumes that either the experiment was conducted over a single time point or that there are no carryover effects. In both cases, we drop the subscript t for the remainder of the section. Our setup is now equivalent to the one studied in Liu & Hudgens (2014), Aronow & Samii (2017), Chin (2018) and Leung (2022).
Our Horvitz-Thompson type estimator $\hat\tau^{k,k'}$ now simplifies to
$$\hat\tau^{k,k'} = \frac{1}{n}\sum_{i=1}^{n}\Biggl(\frac{\mathbb{1}(H_i = k)}{\pi_i(k)}\,Y_i(k) - \frac{\mathbb{1}(H_i = k')}{\pi_i(k')}\,Y_i(k')\Biggr), \qquad \text{(A.1)}$$
where $\pi_i(k) = P(H_i = k)$ and $\pi_i(k') = P(H_i = k')$. Aronow & Samii (2017) showed that if the potential outcomes and inverse exposure probabilities are bounded, and the number of dependent pairs of $H_i$'s is of order $o(n^2)$, then the estimator $\hat\tau^{k,k'}$ is consistent:
$$\hat\tau^{k,k'} - \tau^{k,k'} \xrightarrow{P} 0.$$
In addition, the authors provided an asymptotically conservative confidence interval ofτ k,k ′ and implicitly outlined a version of a central limit theorem in the proof. However, the conditions stated in their derivations were sufficient but not necessary. Below, we establish a central limit theorem forτ k,k ′ under weaker conditions and provide a detailed proof that builds on recent results by Chin (2018). We then illustrate the trade-offs between the strength of the interference structure assumption and the assignment mechanism's flexibility.
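For concreteness, here is a minimal Python sketch of the estimator in (A.1) (our own illustration; exposures are encoded as integer codes so that elementwise comparison is valid, and the probability arrays are assumed to be supplied by the design).

```python
import numpy as np

def horvitz_thompson(y, H, pi_k, pi_kp, k, kp):
    """Generic Horvitz-Thompson exposure-contrast estimator, as in (A.1).

    y, H        : length-n arrays of observed outcomes and realised exposures
    pi_k, pi_kp : length-n arrays of P(H_i = k) and P(H_i = k')
    k, kp       : the two exposure codes being contrasted
    """
    hit_k, hit_kp = (H == k), (H == kp)
    return np.mean(hit_k * y / pi_k - hit_kp * y / pi_kp)
```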
A.1 A central limit theorem
We can now state the following central limit theorem for temporal exposure contrast.
Theorem 7. Under Assumptions 2-4 and the condition that $d_n = o(n^{1/4})$, we have
$$\frac{\sqrt{n}\,(\hat\tau^{k,k'} - \tau^{k,k'})}{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})^{1/2}} \xrightarrow{d} N(0,1)$$
as n → ∞.
The assumption that $d_n = o(n^{1/4})$ quantifies the dependence among observations due to interference: here $d_n$ is the maximal degree of the dependency graph of the exposure values, which equals the maximal number of exposure values dependent on any one unit's exposure. Note that this differs from the definition of $d_n$ in the main paper, where it is defined as a limit or a maximum over $t$.
Theorem 7 strengthens the result of Aronow & Samii (2017) in two ways. First, our Assumption 4 weakens Condition 6 of Aronow & Samii (2017), which requires the convergence of Var( √ nτ k,k ′ t ).
Second, we allow for a wider range of dependence among exposure values ($d_n = o(n^{1/4})$, compared to $d_n = O(1)$ in Aronow & Samii (2017)). The proof of this theorem relies on recent results in Chin (2018).
A.2 Design and interference structure: a trade-off
Intuitively, Theorem 7 asserts that asymptotic normality holds so long as the dependency relations among the $H_i$'s are moderate. However, since $H_i = f_i(W_{1:n})$ is determined by both the function $f_i$ and the assignment $W$, the dependence structure among the $H_i$'s, and therefore the value of $d_n$, depends on both the exposure specification and the assignment mechanism.
This suggests that there exists a trade-off between the strength of the dependence in the W i 's induced by the assignment mechanism and the dependence induced by the interference structure.
The less restricted the interference structure, the more restricted the assignment mechanism must be; conversely, the more restricted the interference structure, the more flexible one can be with the design. We illustrate these insights with three special cases of Theorem 7, applied to popular settings. We should also note that our condition on $d_n$ is not a necessary condition for the central limit theorem: for example, if $f_i(W_{1:n}) = W_i$ (i.e., there is no interference) and $W$ follows a completely randomized design, then the central limit theorem still holds (see Theorem 1 in Ding (2017)). The discussion here mainly illustrates the entanglement between the assignment mechanism and the interference structure from a general perspective.
Example 2. Suppose that the interference structure among $n$ units is adequately described by a social network $A_n$, and assume that the exposure mapping is of the form $f_i(W_{1:n}) = f_i(W_{N_i})$; that is, only the neighbors' assignments matter. Let $\delta_n$ be the maximal number of neighbors a unit can have in the network $A_n$ (which is distinct from the dependency graph). If $\delta_n = o(n^{1/8})$ and the $W_i$'s are independent (i.e., the design is Bernoulli), then $d_n = o(n^{1/4})$, as required by Theorem 7.
This first example explores one extreme end of the trade-off, in which the assignment mechanism is maximally restricted -the W i 's are independent -which allows for a comparatively large amount of interference.
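Given a network, the dependency degree $d_n$ implied by a Bernoulli design can be computed directly: $H_i$ and $H_j$ are dependent exactly when the closed neighborhoods of $i$ and $j$ intersect, so $d_n \leq \delta_n^2 + \delta_n$. A minimal Python sketch (our own illustration):

```python
import numpy as np

def dependency_degree(A):
    """Max degree of the dependency graph of the exposures when H_i depends on
    W over {i} union N_i and the design is Bernoulli: H_i and H_j are
    dependent iff their closed neighbourhoods share at least one node."""
    n = A.shape[0]
    closed = A.astype(bool) | np.eye(n, dtype=bool)
    overlap = closed.astype(int) @ closed.astype(int)   # shared members count
    dep = overlap > 0
    return int((dep.sum(axis=1) - 1).max())             # exclude i itself
```

Running this on a sparse graph confirms the bound $d_n \leq \delta_n^2 + \delta_n$, which is what makes $\delta_n = o(n^{1/8})$ sufficient for $d_n = o(n^{1/4})$.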
Example 3. We consider the graph cluster randomization approach (Ugander et al. 2013), in which we group units into clusters and randomize at the cluster level. Following the notation in Ugander et al. (2013), we let the vertices be partitioned into $n_c$ clusters $C_1, \cdots, C_{n_c}$. The graph cluster randomization approach assigns either treatment or control to all the units in each cluster. Suppose a unit's potential outcomes depend only on the assignments of its neighbors. Let $\delta_n$ be the maximal number of neighbors a unit can have and $c_n$ the maximal cluster size. Then $d_n = o(n^{1/4})$ provided $\delta_n^2 + \delta_n c_n = o(n^{1/4})$.
Example 4. Another commonly studied scenario is "household" interference (Basse & Feller 2018; Duflo & Saez 2003). In household interference, we assume that each unit belongs to a "household" and its potential outcomes depend only on the assignments of the units within the "household". Suppose we have a two-stage design such that we first assign each household to the treatment group or control group independently, and then assign treatments to units in each household depending on the assignment of their associated household. Let $r_n$ be the maximal size of the "household"; then $d_n = o(n^{1/4})$ for $r_n = o(n^{1/4})$.

Table 6 summarizes the above three examples. In Example 2, to have a general network interference setting with the maximum possible number of neighbors for each unit, we constrain the design to be the Bernoulli design. Further limiting the interference, as in Example 4 where the interference is restricted within households, allows a more complex two-stage design. In the same spirit, Example 3 shows that for a highly dependent design, we need an even stronger condition on the interference structure, indicated by a stronger rate condition on $\delta_n$. In general, a weaker assumption on the interference structure induces a more complex dependence graph for the exposures, which in turn reduces our flexibility in the choice of design.
A.3 Inference
The central limit theorem stated in Theorem 7 serves as our basis for inference.
Proposition 7. Under all the assumptions of Theorem 7, for any $\delta > 0$ we have
$$P\Biggl(\frac{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})}{\mathrm{Var}(\hat\tau^{k,k'})} \geq 1 - \delta\Biggr) \to 1,$$
where $\mathrm{Var}(\hat\tau^{k,k'}) = n^{-1}\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})$. Therefore, we can construct an asymptotically conservative confidence interval based on the variance estimator: for any $\delta > 0$,
$$P\Biggl(\tau^{k,k'} \in \Bigl[\hat\tau^{k,k'} - \frac{z_{1-\frac{\alpha}{2}}}{\sqrt{1-\delta}}\sqrt{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})},\ \hat\tau^{k,k'} + \frac{z_{1-\frac{\alpha}{2}}}{\sqrt{1-\delta}}\sqrt{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})}\Bigr]\Biggr) \geq 1 - \alpha \quad \text{for large } n.
$$
The estimator $\widehat{\mathrm{Var}}(\sqrt{n}\,\hat\tau^{k,k'})$ is the same as the one given in Proposition 2. Once again, this result strengthens that of Aronow & Samii (2017), both by removing the requirement that $n\,\mathrm{Var}(\hat\tau^{k,k'})$ converge and by relaxing the constraint on the interference mechanism. Note that here $\delta > 0$ is arbitrary; we present detailed simulations in Section 5 with $\delta = 0.04$.
B Proofs and additional discussions
To begin with, we provide technical tools that we will use in our proofs. We first state a lemma from Ross (2011):
Lemma 1. Let $X_1, \cdots, X_n$ be a collection of random variables such that $E[X_i^4] < \infty$ and $E[X_i] = 0$. Let $\sigma^2 = \mathrm{Var}(\sum_i X_i)$ and $S = \sum_i X_i$. Let $d$ be the maximal degree of the dependency graph of $(X_1, \cdots, X_n)$. Then, for constants $C_1$ and $C_2$ that do not depend on $n$, $d$ or $\sigma^2$,
$$d_W(S/\sigma) \leq \frac{C_1 d^{3/2}}{\sigma^2}\Biggl(\sum_{i=1}^{n} E\bigl[X_i^4\bigr]\Biggr)^{1/2} + \frac{C_2 d^2}{\sigma^3}\sum_{i=1}^{n} E|X_i|^3, \qquad \text{(B.1)}$$
where $d_W(S/\sigma)$ is the Wasserstein distance between $S/\sigma$ and a standard Gaussian.
Second, we provide the expression for the variance of $\hat\tau^{k,k'}$:

Lemma 2 (Variance of the Horvitz-Thompson estimator; Aronow & Samii (2017)). We have
$$\begin{aligned}
\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'}) ={}& \frac{1}{n}\sum_{i=1}^{n} \pi_i(k)(1-\pi_i(k))\Bigl(\frac{Y_i(k)}{\pi_i(k)}\Bigr)^2 + \frac{1}{n}\sum_{i=1}^{n} \pi_i(k')(1-\pi_i(k'))\Bigl(\frac{Y_i(k')}{\pi_i(k')}\Bigr)^2 + \frac{2}{n}\sum_{i=1}^{n} Y_i(k)Y_i(k') \\
&+ \frac{1}{n}\sum_{i=1}^{n}\sum_{j\neq i}\Biggl(\bigl[\pi_{ij}(k) - \pi_i(k)\pi_j(k)\bigr]\frac{Y_i(k)}{\pi_i(k)}\frac{Y_j(k)}{\pi_j(k)} + \bigl[\pi_{ij}(k') - \pi_i(k')\pi_j(k')\bigr]\frac{Y_i(k')}{\pi_i(k')}\frac{Y_j(k')}{\pi_j(k')}\Biggr) \\
&- \frac{2}{n}\sum_{i=1}^{n}\sum_{j\neq i}\bigl[\pi_{ij}(k,k') - \pi_i(k)\pi_j(k')\bigr]\frac{Y_i(k)}{\pi_i(k)}\frac{Y_j(k')}{\pi_j(k')},
\end{aligned}$$
where $\pi_{ij}(k) = P(H_i = k \text{ and } H_j = k)$ and $\pi_{ij}(k,k') = P(H_i = k,\, H_j = k')$.
Proof of Theorem 7. Note that $\hat\tau^{k,k'} = \sum_{i=1}^{n}\tilde\tau_i$, where
$$\tilde\tau_i = \frac{1}{n}\Biggl(\frac{\mathbb{1}(H_i = k)}{\pi_i(k)}Y_i(k) - \frac{\mathbb{1}(H_i = k')}{\pi_i(k')}Y_i(k')\Biggr) \quad \text{and} \quad E[\tilde\tau_i] = \frac{1}{n}\bigl(Y_i(k) - Y_i(k')\bigr);$$
hence, if we let $X_i = \sqrt{n}\,(\tilde\tau_i - E[\tilde\tau_i])$, then $\sqrt{n}\,(\hat\tau^{k,k'} - \tau^{k,k'}) = \sum_{i=1}^{n} X_i = S$. By Assumptions 2 and 3, $X_i = O_p(n^{-1/2})$; hence there exist constants $C_1$ and $C_2$ such that, for sufficiently large $n$, both $(\sum_{i=1}^{n} E[X_i^4])^{1/2} \leq C_1 n^{-1/2}$ and $\sum_{i=1}^{n} E|X_i|^3 \leq C_2 n^{-1/2}$ hold. Moreover, by Assumption 4,
$$\sigma^2 = \mathrm{Var}\Bigl(\sum_i X_i\Bigr) = n\,\mathrm{Var}(\hat\tau^{k,k'})$$
is at least $O(1)$. Note that $X_i$ is a function of $H_i$; hence $X_i$ and $X_j$ are dependent if and only if $H_i$ and $H_j$ are dependent. Since $d_n = o(n^{1/4})$, the maximal degree of the dependency graph of the $X_i$'s is $o(n^{1/4})$. Now we apply Lemma 1: since $\sigma^2$ is at least $O(1)$, we get
$$\text{RHS of (B.1)} = o(n^{-1/8}) + o(1) \to 0.$$
We're done.
Remark 1. In fact, with the tools in Leung (2022), we can prove this theorem with a weaker condition on $d_n$: $d_n = O(\log n)$.

Proof of Example 2. Note that $H_i$ is a function of $W_i$ and of the $W_j$'s for $j$ a neighbor of $i$. If $H_i$ and $H_j$ are dependent, it must be the case that $(\{i\} \cup N_i) \cap (\{j\} \cup N_j)$ is nonempty, since we have the Bernoulli design. Hence, for each fixed unit $i$, there are at most $\delta_n^2 + \delta_n$ units $j$ such that the above intersection is nonempty.

Proof of Example 4. We use the same reasoning as in the above proof. The only change is that now each unit belongs to a group and the units in the group are connected. Therefore, for each fixed unit $i$, all the units outside the group have no effect on unit $i$. As a result, we can have $r_n = o(n^{1/4})$.
Proof of Example 3. Since we no longer have a Bernoulli design, $W_i$ and $W_j$ may be dependent. Hence, besides the case where $(\{i\} \cup N_i) \cap (\{j\} \cup N_j)$ is nonempty, there is another case that makes $H_i$ and $H_j$ dependent: a neighbor of $i$ is in the same cluster as a neighbor of $j$. For this case, there are at most $\delta_n c_n$ such $j$'s for a fixed unit $i$. Hence, in total, there are at most $\delta_n^2 + \delta_n c_n$ units $j$ such that $H_i$ and $H_j$ are dependent.
Proof of Proposition 7. We first prove the first part of the proposition. The proof is based on A.7 in Aronow & Samii (2017). To start, for any $(i,j) \in \{1,\cdots,n\} \times \{1,\cdots,n\}$, define $e_{ij} = 1$ if $H_i$ and $H_j$ are dependent and $0$ otherwise. Let $a_{ij}(H_i, H_j)$ be the sum of the elements of $\widehat{\mathrm{Var}}(\hat\tau^{k,k'})$ that incorporate $i$ and $j$; then
$$\mathrm{Var}\bigl(\widehat{\mathrm{Var}}(\hat\tau^{k,k'})\bigr) \leq n^{-4}\,\mathrm{Var}\Biggl(\sum_{i=1}^{n}\sum_{j=1}^{n} e_{ij}\, a_{ij}(H_i,H_j)\Biggr) = n^{-4}\sum_{i=1}^{n}\sum_{j=1}^{n}\sum_{k=1}^{n}\sum_{l=1}^{n} \mathrm{Cov}\bigl[e_{ij}a_{ij}(H_i,H_j),\, e_{kl}a_{kl}(H_k,H_l)\bigr].$$
Note that $\mathrm{Cov}[e_{ij}a_{ij}(H_i,H_j),\, e_{kl}a_{kl}(H_k,H_l)]$ is nonzero only if $e_{ij} = 1$, $e_{kl} = 1$, and at least one of $e_{ik}, e_{il}, e_{jk}, e_{jl}$ is $1$. In total, there are at most $4nd_n^3$ tuples $(i,j,k,l)$ satisfying this condition. By Assumptions 2 and 3, each covariance term is bounded, so $\mathrm{Var}(\widehat{\mathrm{Var}}(\hat\tau^{k,k'})) = o(n^{-4} \times n \times n^{3/4}) \to 0$ as $n \to \infty$. Then, by Chebyshev's inequality,
$$\widehat{\mathrm{Var}}(\sqrt{n}\,\hat\tau^{k,k'}) - E\bigl[\widehat{\mathrm{Var}}(\sqrt{n}\,\hat\tau^{k,k'})\bigr] = o_p(1).$$
Since $E[\widehat{\mathrm{Var}}(\hat\tau^{k,k'})] \geq \mathrm{Var}(\hat\tau^{k,k'})$,
$$P\Biggl(\frac{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})}{\mathrm{Var}(\hat\tau^{k,k'})} \geq 1 - \delta\Biggr) \to 1$$
for any $\delta > 0$. Now we can prove the second part of the proposition. We have
$$\begin{aligned}
\text{LHS} &= P\Biggl(\frac{\sqrt{n}\,|\hat\tau^{k,k'} - \tau^{k,k'}|}{\sqrt{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}} \leq \frac{z_{1-\frac{\alpha}{2}}}{\sqrt{1-\delta}}\sqrt{\frac{\widehat{\mathrm{Var}}(\sqrt{n}\,\hat\tau^{k,k'})}{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}}\Biggr) \\
&\geq P\Biggl(\frac{\sqrt{n}\,|\hat\tau^{k,k'} - \tau^{k,k'}|}{\sqrt{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}} \leq \frac{z_{1-\frac{\alpha}{2}}}{\sqrt{1-\delta}}\sqrt{\frac{\widehat{\mathrm{Var}}(\sqrt{n}\,\hat\tau^{k,k'})}{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}} \ \text{ and } \ \frac{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})}{\mathrm{Var}(\hat\tau^{k,k'})} \geq 1-\delta\Biggr) \\
&\geq P\Biggl(\frac{\sqrt{n}\,|\hat\tau^{k,k'} - \tau^{k,k'}|}{\sqrt{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}} \leq z_{1-\frac{\alpha}{2}} \ \text{ and } \ \frac{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})}{\mathrm{Var}(\hat\tau^{k,k'})} \geq 1-\delta\Biggr) \\
&= P\Biggl(\frac{\sqrt{n}\,|\hat\tau^{k,k'} - \tau^{k,k'}|}{\sqrt{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}} \leq z_{1-\frac{\alpha}{2}}\Biggr) - P\Biggl(\frac{\sqrt{n}\,|\hat\tau^{k,k'} - \tau^{k,k'}|}{\sqrt{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}} \leq z_{1-\frac{\alpha}{2}} \ \text{ and } \ \frac{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})}{\mathrm{Var}(\hat\tau^{k,k'})} < 1-\delta\Biggr). \quad \text{(B.2)}
\end{aligned}$$
Now,
$$\text{(B.2)} \geq P\Biggl(\frac{\sqrt{n}\,|\hat\tau^{k,k'} - \tau^{k,k'}|}{\sqrt{\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'})}} \leq z_{1-\frac{\alpha}{2}}\Biggr) - P\Biggl(\frac{\widehat{\mathrm{Var}}(\hat\tau^{k,k'})}{\mathrm{Var}(\hat\tau^{k,k'})} < 1-\delta\Biggr) \to 1 - \alpha$$
as n → ∞ by the first part and Theorem 7.
Proof of Theorem 1. We use a characteristic function argument. We first note that
$$\frac{\sqrt{nT}\,(\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'})}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}}} = \frac{\sqrt{nT}\,\bigl(\frac{1}{T}\sum_{t=1}^{T}\hat\tau^{k,k'}_t - \frac{1}{T}\sum_{t=1}^{T}\tau^{k,k'}_t\bigr)}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}}} = \frac{\frac{1}{\sqrt{T}}\sum_{t=1}^{T}\sqrt{n}\,(\hat\tau^{k,k'}_t - \tau^{k,k'}_t)}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}}} = \frac{\frac{1}{\sqrt{T}}\sum_{t=1}^{T}X_{n,t}}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}}},$$
where $X_{n,t} = \sqrt{n}\,(\hat\tau^{k,k'}_t - \tau^{k,k'}_t)$. Now,
$$E\exp\Biggl(i\lambda\,\frac{\sqrt{nT}\,(\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'})}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}}}\Biggr) = \prod_{t=1}^{T} E\exp\Biggl(i\,\frac{\lambda\sigma_{n,t}}{\sqrt{\sum_{t=1}^{T}\sigma^2_{n,t}}}\cdot\frac{X_{n,t}}{\sigma_{n,t}}\Biggr) = \prod_{t=1}^{T}\phi_{\frac{X_{n,t}}{\sigma_{n,t}}}\Biggl(\frac{\lambda\sigma_{n,t}}{\sqrt{\sum_{t=1}^{T}\sigma^2_{n,t}}}\Biggr). \qquad \text{(B.3)-(B.4)}$$
The second equality follows from our assumption that assignment vectors are independent across time; $\phi_X$ denotes the characteristic function of a random variable $X$. Pick $\epsilon > 0$. To conclude the proof, we note that $\phi_{X_{n,t}/\sigma_{n,t}}(\theta) \to e^{-\theta^2/2}$ for any $t \in \{1,\cdots,T\}$. Moreover, for each $t$, the convergence is actually uniform on any bounded interval. Therefore, for any $t \in \{1,\cdots,T\}$, $\phi_{X_{n,t}/\sigma_{n,t}}(\theta) \to e^{-\theta^2/2}$ uniformly on $(0,1)$. Note that $\lambda\sigma_{n,t}/\sqrt{\sum_{t=1}^{T}\sigma^2_{n,t}} \in (0,1)$, so for any $t$, $\exists N_t \in \mathbb{N}$ such that for any $n \geq N_t$,
$$\Biggl|\phi_{\frac{X_{n,t}}{\sigma_{n,t}}}\Biggl(\frac{\lambda\sigma_{n,t}}{\sqrt{\sum_{t=1}^{T}\sigma^2_{n,t}}}\Biggr) - \exp\Biggl(-\frac{1}{2}\,\frac{\lambda^2\sigma^2_{n,t}}{\sum_{t=1}^{T}\sigma^2_{n,t}}\Biggr)\Biggr| = |\epsilon_t| \leq \frac{1}{2^K}.$$
Let $N = \max\{N_1,\cdots,N_T\}$; then, for all $n \geq N$ and for all $t \in \{1,\cdots,T\}$, the above bound holds, where $K$ is any large number we want. Now,
$$\text{(B.4)} = \prod_{t=1}^{T}\Biggl[\exp\Biggl(-\frac{1}{2}\,\frac{\lambda^2\sigma^2_{n,t}}{\sum_{t=1}^{T}\sigma^2_{n,t}}\Biggr) + \epsilon_t\Biggr] = \exp\Bigl(-\frac{1}{2}\lambda^2\Bigr) + R(\epsilon_t),$$
where $R(\epsilon_t)$ is a remainder term that is a sum of monomials in the $\epsilon_t$'s. Note that each $\exp\bigl(-\frac{1}{2}\lambda^2\sigma^2_{n,t}/\sum_t\sigma^2_{n,t}\bigr)$ is bounded by $1$, so by making $K$ sufficiently large we can make $R(\epsilon_t)$ arbitrarily small. Pick such a $K$; then, for sufficiently large $n$,
$$\Biggl|E\exp\Biggl(i\lambda\,\frac{\sqrt{nT}\,(\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'})}{\sqrt{\frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}}}\Biggr) - \exp\Bigl(-\frac{1}{2}\lambda^2\Bigr)\Biggr| \leq \epsilon.$$
Hence, by a standard characteristic function argument, we complete the proof of the theorem.
To prove Theorem 3, we first state the following version of Lindeberg-Feller central limit theorem.
Lemma 3 (Lindeberg-Feller CLT). Let $\{k_n\}_{n\geq 1}$ be a sequence of positive integers increasing to infinity. For each $n$, let $\{X_{n,i}\}_{1\leq i\leq k_n}$ be a collection of independent random variables. Let $\mu_{n,i} := E(X_{n,i})$ and
$$s_n^2 := \sum_{i=1}^{k_n}\mathrm{Var}(X_{n,i}).$$
Suppose that for any $\epsilon > 0$,
$$\lim_{n\to\infty}\frac{1}{s_n^2}\sum_{i=1}^{k_n} E\bigl[(X_{n,i}-\mu_{n,i})^2;\ |X_{n,i}-\mu_{n,i}| \geq \epsilon s_n\bigr] = 0. \qquad \text{(B.5)}$$
Then the random variable
$$\frac{\sum_{i=1}^{k_n}(X_{n,i}-\mu_{n,i})}{s_n} \xrightarrow{d} N(0,1)$$
as n → ∞.
Proof of Theorem 3. We first prove the theorem under condition (5). We note that
$$\sqrt{nT}\,(\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'}) = \sum_{t=1}^{T}\sqrt{\frac{n}{T}}\,(\hat\tau^{k,k'}_t - \tau^{k,k'}_t).$$
Let $X_{n,t} = \sqrt{n/T}\,\hat\tau^{k,k'}_t$; then $\mu_{n,t} = \sqrt{n/T}\,\tau^{k,k'}_t$, so the numerator is exactly $\sum_{t=1}^{T}(X_{n,t} - \mu_{n,t})$. Moreover, for any $n$, $X_{n,1},\cdots,X_{n,T}$ are independent by the pure population interference assumption. Now,
$$s_n^2 = \sum_{t=1}^{T}\mathrm{Var}(X_{n,t}) = \sum_{t=1}^{T}\mathrm{Var}\Bigl(\sqrt{\tfrac{n}{T}}\,\hat\tau^{k,k'}_t\Bigr) = \frac{1}{T}\sum_{t=1}^{T}\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'}_t) = \frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}.$$
Hence, to finish the proof, we only need to check that (B.5) is satisfied. Notice that for any $\epsilon > 0$,
$$|X_{n,t} - \mu_{n,t}| \geq \epsilon s_n \iff \sqrt{\tfrac{n}{T}}\,\bigl|\hat\tau^{k,k'}_t - \tau^{k,k'}_t\bigr| \geq \epsilon\sqrt{\tfrac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}} \iff \bigl|\hat\tau^{k,k'}_t - \tau^{k,k'}_t\bigr| \geq \epsilon\sqrt{\tfrac{1}{n}\sum_{t=1}^{T}\sigma^2_{n,t}}.$$
By Assumption 4, $\sigma^2_{n,t} \geq c$ for some $c > 0$ and all large $n$. Hence
$$\epsilon\sqrt{\frac{1}{n}\sum_{t=1}^{T}\sigma^2_{n,t}} \geq \epsilon\sqrt{\frac{T}{n}\,c} \to \infty.$$
Note that by Assumptions 2 and 3, $\hat\tau^{k,k'}_t - \tau^{k,k'}_t$ is uniformly bounded; hence, for sufficiently large $n$, $|\hat\tau^{k,k'}_t - \tau^{k,k'}_t| < \epsilon\sqrt{\tfrac{1}{n}\sum_{t=1}^{T}\sigma^2_{n,t}}$ for all $t$. Therefore, for sufficiently large $n$,
$$\frac{1}{s_n^2}\sum_{t=1}^{T} E\bigl[(X_{n,t}-\mu_{n,t})^2;\ |X_{n,t}-\mu_{n,t}| \geq \epsilon s_n\bigr] = 0.$$
As a result, (B.5) is satisfied, and we are done. The proof of this theorem under condition (6) is exactly the same as in the single time step case, once we notice that the numerator is just a sum of $nT$ mean-zero dependent random variables.
To prove Theorem 2, we need the following version of the Lyapunov central limit theorem.

Lemma 4 (Lyapunov CLT). Let $\{X_n\}_{n=1}^{\infty}$ be a sequence of independent random variables. Let $\mu_i := E(X_i)$ and
$$s_n^2 = \sum_{i=1}^{n}\mathrm{Var}(X_i).$$
If for some $\delta > 0$,
$$\lim_{n\to\infty}\frac{1}{s_n^{2+\delta}}\sum_{i=1}^{n} E|X_i - \mu_i|^{2+\delta} = 0, \qquad \text{(B.6)}$$
then the random variable
$$\frac{\sum_{i=1}^{n}(X_i - \mu_i)}{s_n} \xrightarrow{d} N(0,1).$$
Proof of Theorem 2. This time, we let $X_t = \sqrt{n/T}\,\hat\tau^{k,k'}_t$, so the numerator is $\sum_{t=1}^{T}(X_t - \mu_t)$. Since we have pure population interference, $\{X_t\}_{t=1}^{\infty}$ are independent. Now,
$$s_T^2 = \sum_{t=1}^{T}\mathrm{Var}(X_t) = \frac{1}{T}\sum_{t=1}^{T}\sigma^2_{n,t}.$$
Hence, we only need to check (B.6). We have
$$\lim_{T\to\infty}\frac{1}{s_T^{2+\delta}}\sum_{t=1}^{T} E|X_t - \mu_t|^{2+\delta} = \lim_{T\to\infty}\frac{1}{s_T^{2+\delta}}\Bigl(\frac{n}{T}\Bigr)^{1+\frac{\delta}{2}}\sum_{t=1}^{T} E\bigl|\hat\tau^{k,k'}_t - \tau^{k,k'}_t\bigr|^{2+\delta}.$$
By Assumptions 2 and 3, $\exists M > 0$ such that $|\hat\tau^{k,k'}_t - \tau^{k,k'}_t| \leq M$ for all $t$. Hence,
$$\frac{1}{s_T^{2+\delta}}\Bigl(\frac{n}{T}\Bigr)^{1+\frac{\delta}{2}}\, T\, M^{2+\delta} = \frac{1}{s_T^{2+\delta}}\,\frac{n^{1+\frac{\delta}{2}}}{T^{\frac{\delta}{2}}}\, M^{2+\delta}.$$
If $T \to \infty$, this quantity tends to $0$. Therefore, (B.6) is satisfied. We're done.
Proof of Proposition 2. Now we can prove the second part of the proposition. We have
$$\begin{aligned}
\text{LHS} &= P\Biggl(\frac{\sqrt{nT}\,|\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'}|}{\sqrt{\frac{1}{T}\sum_t\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'}_t)}} \leq \frac{z_{1-\frac{\alpha}{2}}}{\sqrt{1-\delta}}\sqrt{\frac{\frac{1}{T}\sum_t\widehat{\mathrm{Var}}(\sqrt{n}\,\hat\tau^{k,k'}_t)}{\frac{1}{T}\sum_t\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'}_t)}}\Biggr) \\
&\geq P\Biggl(\frac{\sqrt{nT}\,|\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'}|}{\sqrt{\frac{1}{T}\sum_t\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'}_t)}} \leq z_{1-\frac{\alpha}{2}} \ \text{ and } \ \frac{\frac{1}{T}\sum_t\widehat{\mathrm{Var}}(\hat\tau^{k,k'}_t)}{\frac{1}{T}\sum_t\mathrm{Var}(\hat\tau^{k,k'}_t)} \geq 1-\delta\Biggr) \qquad \text{(B.7)} \\
&\geq P\Biggl(\frac{\sqrt{nT}\,|\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'}|}{\sqrt{\frac{1}{T}\sum_t\mathrm{Var}(\sqrt{n}\,\hat\tau^{k,k'}_t)}} \leq z_{1-\frac{\alpha}{2}}\Biggr) - P\Biggl(\frac{\frac{1}{T}\sum_t\widehat{\mathrm{Var}}(\hat\tau^{k,k'}_t)}{\frac{1}{T}\sum_t\mathrm{Var}(\hat\tau^{k,k'}_t)} < 1-\delta\Biggr).
\end{aligned}$$
So if we can show that $P\Bigl(\frac{\frac{1}{T}\sum_t\widehat{\mathrm{Var}}(\hat\tau^{k,k'}_t)}{\frac{1}{T}\sum_t\mathrm{Var}(\hat\tau^{k,k'}_t)} < 1-\delta\Bigr) \to 0$, then we are done. Notice that
$$\mathrm{Var}\Biggl(\frac{1}{T}\sum_{t=1}^{T}\widehat{\mathrm{Var}}(\hat\tau^{k,k'}_t)\Biggr) = \frac{1}{T^2}\sum_{t=1}^{T}\mathrm{Var}\bigl(\widehat{\mathrm{Var}}(\hat\tau^{k,k'}_t)\bigr). \qquad \text{(B.8)}$$
Remark 2. Note that, following the exact same derivation, we have
$$|\tau^{TE}_t - \tau^{TE}_{t'}| \leq 2|t - t'|\,\epsilon. \qquad \text{(B.9)}$$
Proposition 8 (Variance and covariance of Horvitz-Thompson type estimators). For each $i \in \{1,\cdots,n\}$ and $t \in \{1,\cdots,T\}$, let $P(H_{i,t} = h^1_i) = \pi^1_{i,t}$, $P(H_{i,t} = h^0_i) = \pi^0_{i,t}$, $P(H_{j,t} = h^1_j) = \pi^1_{j,t}$ and $P(H_{j,t} = h^0_j) = \pi^0_{j,t}$. Moreover, for each $i \neq j$ and $t$, let $P(H_{i,t} = h^1_i,\, H_{j,t} = h^1_j) = \pi^{1,1}_{i,j,t}$, $P(H_{i,t} = h^0_i,\, H_{j,t} = h^1_j) = \pi^{0,1}_{i,j,t}$, $P(H_{i,t} = h^1_i,\, H_{j,t} = h^0_j) = \pi^{1,0}_{i,j,t}$ and $P(H_{i,t} = h^0_i,\, H_{j,t} = h^0_j) = \pi^{0,0}_{i,j,t}$. Then
$$\begin{aligned}
\mathrm{Var}(\hat\tau^{TE}_t) ={}& \frac{1}{n^2}\sum_{i=1}^{n}\Biggl[\frac{Y^2_{i,t}(h^1_i)(1-\pi^1_{i,t})}{\pi^1_{i,t}} + \frac{Y^2_{i,t}(h^0_i)(1-\pi^0_{i,t})}{\pi^0_{i,t}} + 2Y_{i,t}(h^1_i)Y_{i,t}(h^0_i)\Biggr] \\
&+ \frac{2}{n^2}\sum_{1\leq i<j\leq n}\Biggl[\frac{Y_{i,t}(h^1_i)Y_{j,t}(h^1_j)\bigl(\pi^{1,1}_{i,j,t}-\pi^1_{i,t}\pi^1_{j,t}\bigr)}{\pi^1_{i,t}\pi^1_{j,t}} + \frac{Y_{i,t}(h^0_i)Y_{j,t}(h^0_j)\bigl(\pi^{0,0}_{i,j,t}-\pi^0_{i,t}\pi^0_{j,t}\bigr)}{\pi^0_{i,t}\pi^0_{j,t}} \\
&\qquad\qquad - \frac{Y_{i,t}(h^1_i)Y_{j,t}(h^0_j)\bigl(\pi^{1,0}_{i,j,t}-\pi^1_{i,t}\pi^0_{j,t}\bigr)}{\pi^1_{i,t}\pi^0_{j,t}} - \frac{Y_{i,t}(h^0_i)Y_{j,t}(h^1_j)\bigl(\pi^{0,1}_{i,j,t}-\pi^0_{i,t}\pi^1_{j,t}\bigr)}{\pi^0_{i,t}\pi^1_{j,t}}\Biggr].
\end{aligned}$$
Proof of Theorem 4. This should be exactly the same as our proof of Theorem 7.

Proof of Proposition 3. We have
$$E[\hat\tau^c_t] - \tau^{TE}_t = \alpha\,\tau^{TE}_t + (1-\alpha)\,\tau^{TE}_{t-1} - \tau^{TE}_t = (1-\alpha)\bigl(\tau^{TE}_{t-1} - \tau^{TE}_t\bigr);$$
the second equality follows from the unbiasedness of $\hat\tau^{TE}_t$ and $\hat\tau^{TE}_{t-1}$. To further bound the bias, we need to bound $|\tau^{TE}_{t-1} - \tau^{TE}_t|$. We do this below:
$$\bigl|\tau^{TE}_{t-1} - \tau^{TE}_t\bigr| \leq \frac{1}{n}\sum_{i=1}^{n}\Bigl(\bigl|Y_{i,t-1}(h^1_i) - Y_{i,t}(h^1_i)\bigr| + \bigl|Y_{i,t-1}(h^0_i) - Y_{i,t}(h^0_i)\bigr|\Bigr) \leq 2\epsilon$$
by our $\epsilon$-weak-stability assumption. Hence, $|E[\hat\tau^c_t] - \tau^{TE}_t| \leq 2(1-\alpha)\epsilon$.

Proof of Proposition 8. This can be done by direct calculations.

Proof of Proposition 4. We would like to achieve a reduction in MSE by using $\hat\tau^c_t$. By the bias-variance decomposition, and noting that $\hat\tau^{TE}_t$ is unbiased, this boils down to
$$\mathrm{Var}(\hat\tau^c_t) + \bigl|E[\hat\tau^c_t] - \tau^{TE}_t\bigr|^2 \leq \mathrm{Var}(\hat\tau^{TE}_t).$$
By Proposition 3, it suffices to have
$$\mathrm{Var}(\hat\tau^c_t) + 4(1-\alpha)^2\epsilon^2 \leq \mathrm{Var}(\hat\tau^{TE}_t),$$
which is further equivalent to
$$\alpha^2\,\mathrm{Var}(\hat\tau^{TE}_t) + (1-\alpha)^2\,\mathrm{Var}(\hat\tau^{TE}_{t-1}) + 2\alpha(1-\alpha)\,\mathrm{Cov}(\hat\tau^{TE}_t,\hat\tau^{TE}_{t-1}) + 4(1-\alpha)^2\epsilon^2 \leq \mathrm{Var}(\hat\tau^{TE}_t). \qquad \text{(B.12)}$$
Rewriting (B.12), we have
$$\bigl[4\epsilon^2 + \mathrm{Var}(\hat\tau^{TE}_t) + \mathrm{Var}(\hat\tau^{TE}_{t-1}) - 2\,\mathrm{Cov}(\hat\tau^{TE}_t,\hat\tau^{TE}_{t-1})\bigr]\alpha^2 - \bigl[8\epsilon^2 + 2\,\mathrm{Var}(\hat\tau^{TE}_{t-1}) - 2\,\mathrm{Cov}(\hat\tau^{TE}_t,\hat\tau^{TE}_{t-1})\bigr]\alpha + 4\epsilon^2 + \mathrm{Var}(\hat\tau^{TE}_{t-1}) - \mathrm{Var}(\hat\tau^{TE}_t) \leq 0. \qquad \text{(B.13)}$$
Now we look at the left-hand side of (B.13), which is quadratic in $\alpha$. To ease notation, let $A = \mathrm{Var}(\hat\tau^{TE}_t)$, $B = \mathrm{Var}(\hat\tau^{TE}_{t-1})$ and $C = \mathrm{Cov}(\hat\tau^{TE}_t,\hat\tau^{TE}_{t-1})$. It is easy to see that the left-hand side achieves its minimum at $\alpha = \delta = 1 - \frac{2(A-C)}{8\epsilon^2 + 2A + 2B - 4C}$ and equals $0$ at $\alpha = 1$. So if $\delta < 1$, then for some $\alpha \in (0,1)$ we have a reduction in MSE. Moreover, if $\delta < \frac12$, we know that $\alpha = \frac12$ also yields a smaller MSE, by the properties of quadratic functions. Simple algebra shows that $\delta < \frac12$ is equivalent to $A - B > 4\epsilon^2$.
Proposition 9 (Estimators of the variance). We define two estimators of the variance:
$$\begin{aligned}
\widehat{\mathrm{Var}}_u(\hat\tau^{TE}_t) = \frac{1}{n^2}\sum_{i=1}^{n}\Biggl[&\mathbb{1}(H_{i,t}=h^1_i)(1-\pi^1_{i,t})\Bigl(\frac{Y_{i,t}}{\pi^1_{i,t}}\Bigr)^2 + \mathbb{1}(H_{i,t}=h^0_i)(1-\pi^0_{i,t})\Bigl(\frac{Y_{i,t}}{\pi^0_{i,t}}\Bigr)^2 + \frac{Y^2_{i,t}}{\pi^1_{i,t}}\mathbb{1}(H_{i,t}=h^1_i) + \frac{Y^2_{i,t}}{\pi^0_{i,t}}\mathbb{1}(H_{i,t}=h^0_i)\Biggr] \\
+ \frac{2}{n^2}\sum_{1\leq i<j\leq n}\Biggl[&\mathbb{1}(\pi^{1,1}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^1_i)\mathbb{1}(H_{j,t}=h^1_j)\,\frac{(\pi^{1,1}_{i,j,t}-\pi^1_{i,t}\pi^1_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^1_{i,t}\pi^1_{j,t}\pi^{1,1}_{i,j,t}} \\
&- \mathbb{1}(\pi^{0,1}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^0_i)\mathbb{1}(H_{j,t}=h^1_j)\,\frac{(\pi^{0,1}_{i,j,t}-\pi^0_{i,t}\pi^1_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^0_{i,t}\pi^1_{j,t}\pi^{0,1}_{i,j,t}} + \mathbb{1}(\pi^{0,1}_{i,j,t}= 0)\Biggl(\frac{\mathbb{1}(H_{i,t}=h^0_i)Y^2_{i,t}}{2\pi^0_{i,t}} + \frac{\mathbb{1}(H_{j,t}=h^1_j)Y^2_{j,t}}{2\pi^1_{j,t}}\Biggr) \\
&- \mathbb{1}(\pi^{1,0}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^1_i)\mathbb{1}(H_{j,t}=h^0_j)\,\frac{(\pi^{1,0}_{i,j,t}-\pi^1_{i,t}\pi^0_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^1_{i,t}\pi^0_{j,t}\pi^{1,0}_{i,j,t}} + \mathbb{1}(\pi^{1,0}_{i,j,t}= 0)\Biggl(\frac{\mathbb{1}(H_{i,t}=h^1_i)Y^2_{i,t}}{2\pi^1_{i,t}} + \frac{\mathbb{1}(H_{j,t}=h^0_j)Y^2_{j,t}}{2\pi^0_{j,t}}\Biggr) \\
&+ \mathbb{1}(\pi^{0,0}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^0_i)\mathbb{1}(H_{j,t}=h^0_j)\,\frac{(\pi^{0,0}_{i,j,t}-\pi^0_{i,t}\pi^0_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^0_{i,t}\pi^0_{j,t}\pi^{0,0}_{i,j,t}}\Biggr] \qquad \text{(B.14)}
\end{aligned}$$
and
$$\begin{aligned}
\widehat{\mathrm{Var}}_d(\hat\tau^{TE}_t) = \frac{1}{n^2}\sum_{i=1}^{n}\Biggl[&\mathbb{1}(H_{i,t}=h^1_i)(1-\pi^1_{i,t})\Bigl(\frac{Y_{i,t}}{\pi^1_{i,t}}\Bigr)^2 + \mathbb{1}(H_{i,t}=h^0_i)(1-\pi^0_{i,t})\Bigl(\frac{Y_{i,t}}{\pi^0_{i,t}}\Bigr)^2\Biggr] \\
+ \frac{2}{n^2}\sum_{1\leq i<j\leq n}\Biggl[&\mathbb{1}(\pi^{1,1}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^1_i)\mathbb{1}(H_{j,t}=h^1_j)\,\frac{(\pi^{1,1}_{i,j,t}-\pi^1_{i,t}\pi^1_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^1_{i,t}\pi^1_{j,t}\pi^{1,1}_{i,j,t}} - \mathbb{1}(\pi^{1,1}_{i,j,t}= 0)\Biggl(\frac{\mathbb{1}(H_{i,t}=h^1_i)Y^2_{i,t}}{2\pi^1_{i,t}} + \frac{\mathbb{1}(H_{j,t}=h^1_j)Y^2_{j,t}}{2\pi^1_{j,t}}\Biggr) \\
&- \mathbb{1}(\pi^{0,1}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^0_i)\mathbb{1}(H_{j,t}=h^1_j)\,\frac{(\pi^{0,1}_{i,j,t}-\pi^0_{i,t}\pi^1_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^0_{i,t}\pi^1_{j,t}\pi^{0,1}_{i,j,t}} - \mathbb{1}(\pi^{1,0}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^1_i)\mathbb{1}(H_{j,t}=h^0_j)\,\frac{(\pi^{1,0}_{i,j,t}-\pi^1_{i,t}\pi^0_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^1_{i,t}\pi^0_{j,t}\pi^{1,0}_{i,j,t}} \\
&+ \mathbb{1}(\pi^{0,0}_{i,j,t}\neq 0)\,\mathbb{1}(H_{i,t}=h^0_i)\mathbb{1}(H_{j,t}=h^0_j)\,\frac{(\pi^{0,0}_{i,j,t}-\pi^0_{i,t}\pi^0_{j,t})\,Y_{i,t}Y_{j,t}}{\pi^0_{i,t}\pi^0_{j,t}\pi^{0,0}_{i,j,t}} - \mathbb{1}(\pi^{0,0}_{i,j,t}= 0)\Biggl(\frac{\mathbb{1}(H_{i,t}=h^0_i)Y^2_{i,t}}{2\pi^0_{i,t}} + \frac{\mathbb{1}(H_{j,t}=h^0_j)Y^2_{j,t}}{2\pi^0_{j,t}}\Biggr)\Biggr]. \qquad \text{(B.15)}
\end{aligned}$$
Assuming all the potential outcomes are non-negative, we then have that $E[\widehat{\mathrm{Var}}_u(\hat\tau^{TE}_t)] \geq \mathrm{Var}(\hat\tau^{TE}_t)$ and $E[\widehat{\mathrm{Var}}_d(\hat\tau^{TE}_t)] \leq \mathrm{Var}(\hat\tau^{TE}_t)$.

Proving Theorem 5 relies on results on central limit theorems for $m$-dependent sequences. We need the following result as a lemma.

Lemma 5. Let $\{X_{n,i}\}$ be a triangular array of mean-zero random variables. For each $n = 1, 2, \cdots$, let $d = d_n$, and suppose $X_{n,1},\cdots,X_{n,d}$ is an $m$-dependent sequence of random variables for some $m \in \mathbb{N}$. Define $B^2_{n,k,a} = \mathrm{Var}\bigl(\sum_{i=a}^{a+k-1} X_{n,i}\bigr)$ and $B^2_n = \mathrm{Var}\bigl(\sum_{i=1}^{d} X_{n,i}\bigr)$. Assume the following conditions hold: for some $\delta > 0$, $-1 \leq \gamma < 1$ and $g = g_n > 2m$ such that $m/g \to 0$,
$$E|X_{n,i}|^{2+\delta} \leq \Delta_n \ \text{ for all } i, \qquad \text{(B.16)}$$
$$B^2_{n,k,a}/k^{1+\gamma} \leq K_n \ \text{ for all } a \text{ and all } k \geq m, \qquad \text{(B.17)}$$
$$B^2_n/d \geq L_n, \qquad \text{(B.18)}$$
and
$$\frac{K_n}{L_n}\cdot\frac{m}{g} \to 0, \qquad \frac{K_n}{L_n}\cdot\Bigl(\frac{m}{g}\Bigr)^{(1-\gamma)/2} \to 0, \qquad \Delta_n L_n^{-(2+\delta)/2}\, g^{\delta/2+(1-\gamma)(2+\delta)/2}\, d^{-\delta/2}\Bigl(\frac{m}{g}\Bigr)^{(1-\gamma)(2+\delta)/2} \to 0.$$
Then, $B_n^{-1}(X_{n,1} + \cdots + X_{n,d}) \xrightarrow{d} N(0,1)$.

Proof. This is essentially Theorem 2.1 in Romano & Wolf (2000). We replace their original conditions 4, 5 and 6 by the last three conditions above. In fact, the last three conditions are what is needed to establish the theorem, and conditions 4, 5 and 6 in Theorem 2.1 of Romano & Wolf (2000) are sufficient conditions. Now, we are ready to prove Theorem 5.
Proof of Theorem 5. We define
$$\tilde\tau_{i,t} = \frac{\mathbb{1}(H_{i,t}=k)}{P(H_{i,t}=k)}\,Y_{i,t} - \frac{\mathbb{1}(H_{i,t}=k')}{P(H_{i,t}=k')}\,Y_{i,t} = \frac{\mathbb{1}(H_{i,t}=k)}{P(H_{i,t}=k)}\,Y_{i,t}(k) - \frac{\mathbb{1}(H_{i,t}=k')}{P(H_{i,t}=k')}\,Y_{i,t}(k').$$
Then the ATEC estimator can be written as $\hat{\bar\tau}^{k,k'} = \sum_{i=1}^{n}\sum_{t=1}^{T}\frac{1}{nT}\tilde\tau_{i,t}$. Similarly, we define $\tau_{i,t} = Y_{i,t}(k) - Y_{i,t}(k')$, the true individual exposure contrast. Now,
$$\sqrt{nT}\,(\hat{\bar\tau}^{k,k'} - \bar\tau^{k,k'}) = \sum_{i=1}^{n}\sum_{t=1}^{T}\frac{1}{\sqrt{nT}}\,(\tilde\tau_{i,t} - \tau_{i,t}).$$
To proceed, we let $X_{n,i,t} = \frac{1}{\sqrt{nT}}(\tilde\tau_{i,t} - \tau_{i,t})$, and we view $\{X_{n,i,t}\}$ as a single sequence of random variables by enumerating the $X_{n,i,t}$ in the order $X_{n,1,1},\cdots,X_{n,1,T},X_{n,2,1},\cdots,X_{n,2,T},\cdots,X_{n,n,T}$. In the language of the lemma, $d = nT$. Since $\{H_{i,t}\}_{i=1}^n$ is an $s$-dependent sequence of random variables and $X_{n,i,t}$ is a function of $H_{i,t}$, the sequence $\{X_{n,i,t}\}$ is $sT$-dependent; in other words, $m = sT$ in the above lemma. Note that $|X_{n,i,t}| \leq C_1/\sqrt{nT}$ by the uniform boundedness of the potential outcomes; hence, for any $\delta > 0$, $\Delta_n = C_2(nT)^{-1-\delta/2}$. Now we calculate $B^2_{n,k,a}$ and $B^2_n$, starting with $B^2_{n,k,a}$. For all $(i_1,t_1)$ and $k \geq m$, let $(i_2,t_2)$ be the index such that, in the above ordering, there are exactly $k$ indices from $(i_1,t_1)$ to $(i_2,t_2)$. Then
$$B^2_{n,k,a} = \frac{1}{nT}\Biggl[\sum_{(u,v)}\mathrm{Var}(\tilde\tau_{u,v}) + 2\sum_{(u,v)\neq(p,q)}\mathrm{Cov}(\tilde\tau_{u,v},\tilde\tau_{p,q})\Biggr],$$
where the sums run over the $k$ indices from $(i_1,t_1)$ to $(i_2,t_2)$. Since $k \geq m = sT$, at most $mk$ covariance terms are non-zero. Given the uniform boundedness of the potential outcomes and overlap, all the variance and covariance terms are bounded above by constants $M_1 > 0$ and $M_2 > 0$, respectively. Hence,
$$B^2_{n,k,a} \leq \frac{1}{nT}\,(kM_1 + 2mkM_2) \leq M_3\,\frac{mk}{nT} = M_3\,\frac{sk}{n},$$
and therefore $B^2_{n,k,a}/k \leq M_3\,s/n = K_n$. Now we look at $B^2_n$. By Assumption 6, $\mathrm{Var}(\sqrt{nT}\,\hat{\bar\tau}^{k,k'}) \geq \epsilon > 0$; hence, for sufficiently large $n$, $B^2_n = \mathrm{Var}(\sqrt{nT}\,\hat{\bar\tau}^{k,k'}) \geq \epsilon > 0$ and $B^2_n/d = B^2_n/(nT) \geq \epsilon/(nT) = L_n$. We let $\gamma = 0$, $\delta = 2$ and pick $g = g_n = s^3T^3n^{\alpha}$; with such $g$, $m/g$ obviously goes to $0$. Now,
$$\frac{K_n}{L_n}\cdot\frac{m}{g} = \frac{M_3 sT}{\epsilon}\cdot\frac{1}{s^2T^2n^{\alpha}} \to 0, \qquad \frac{K_n}{L_n}\cdot\Bigl(\frac{m}{g}\Bigr)^{(1-\gamma)/2} = \frac{M_3 sT}{\epsilon}\cdot\frac{1}{sT n^{0.5\alpha}} \to 0,$$
and
$$\Delta_n L_n^{-(2+\delta)/2}\, g^{\delta/2+(1-\gamma)(2+\delta)/2}\, d^{-\delta/2}\Bigl(\frac{m}{g}\Bigr)^{(1-\gamma)(2+\delta)/2} = C_2(nT)^{-1-\delta/2}\,\epsilon^{-(2+\delta)/2}\,(nT)^{(2+\delta)/2}\,g^{\delta/2}\,(nT)^{-\delta/2}\,(sT)^{1+\delta/2} = C_4\,gs^2T/n = C_4\,s^5T^4n^{\alpha}/n$$
when $\delta = 2$. Since $s^5T^4 = o(n^{1-\alpha})$, we have $s^5T^4n^{\alpha} = o(n)$, and hence this last quantity tends to $0$. Having checked all the conditions, by Lemma 5, we are done.

Proof of Theorem 6. As in the above proof, we check that the six conditions in Lemma 5 are satisfied with $\gamma = 0$ and $\delta = 2$. Note that since now $X_{n,i,t}$ and $X_{n,j,t'}$ are correlated only if $i$ and $j$ are in the same group, we can reorder the $X_{n,i,t}$'s as follows:
$$X_{n,1,1},\cdots,X_{n,r,1},\,X_{n,1,2},\cdots,X_{n,r,2},\,\cdots,\,X_{n,r,T},\,X_{n,r+1,1},\cdots,X_{n,nr,T}.$$
This sequence is $(2r)$-dependent, i.e., $m = 2r$ and $s = r$. Then $K_n = M_4/(nT)$ and $L_n = \epsilon/(nrT)$, hence $K_n/L_n = M_5 r$. Pick $g = g_n = (nT)^{3/4}$, so that $g \to \infty$; then, with $r = o((nT)^{1/4})$, both $r^2/g \to 0$ and $r^3/g \to 0$. Now,
$$\frac{K_n}{L_n}\cdot\frac{m}{g} = M_5 r\cdot\frac{2r}{g} \to 0, \qquad \frac{K_n}{L_n}\cdot\Bigl(\frac{m}{g}\Bigr)^{1/2} = M_5 r\cdot\Bigl(\frac{2r}{g}\Bigr)^{1/2} \to 0,$$
and
$$\Delta_n L_n^{-2}\, g^{3}\, d^{-1}\Bigl(\frac{m}{g}\Bigr)^{2} = M_6\,\frac{rg}{nT} = \frac{o(nT)}{nT} \to 0.$$
Hence all the conditions are satisfied. It is also easy to see that, instead of just $2$ time steps, any finite $p$ time steps would work.
Proof of Proposition 6. Let $X_{n,t} = \sqrt{\tfrac{nr}{T}}\,(\hat\tau^{k,k'}_t - \tau^{k,k'}_t)$. The key ingredients are the following two expressions:
$$\begin{aligned}
\mathrm{Var}(X_{n,t}) = \frac{1}{nrT}\Biggl[&\sum_{l=1}^{n}\sum_{q=1}^{r}(2^{2r}-1)\,Y_{(l,q),t}(k)^2 + \sum_{l=1}^{n}\sum_{q=1}^{r}(2^{2r}-1)\,Y_{(l,q),t}(k')^2 + 2\sum_{l=1}^{n}\sum_{q=1}^{r}Y_{(l,q),t}(k)\,Y_{(l,q),t}(k') \\
&+ \sum_{l=1}^{n}\sum_{q_1=1}^{r}\sum_{q_2\neq q_1}\Bigl((2^{2r}-1)\,Y_{(l,q_1),t}(k)\,Y_{(l,q_2),t}(k) + (2^{2r}-1)\,Y_{(l,q_1),t}(k')\,Y_{(l,q_2),t}(k')\Bigr) \\
&+ 2\sum_{l=1}^{n}\sum_{q_1=1}^{r}\sum_{q_2\neq q_1}Y_{(l,q_1),t}(k)\,Y_{(l,q_2),t}(k')\Biggr] \qquad \text{(B.22)}
\end{aligned}$$
and
$$\begin{aligned}
\mathrm{Cov}(X_{n,t}, X_{n,t+1}) = \frac{1}{nrT}\sum_{l=1}^{n}\sum_{q_1=1}^{r}\sum_{q_2=1}^{r}\Bigl[&(2^{r}-1)\,Y_{(l,q_1),t}(k)\,Y_{(l,q_2),t+1}(k) + (2^{r}-1)\,Y_{(l,q_1),t}(k')\,Y_{(l,q_2),t+1}(k') \\
&+ Y_{(l,q_1),t}(k')\,Y_{(l,q_2),t+1}(k) + Y_{(l,q_1),t}(k)\,Y_{(l,q_2),t+1}(k')\Bigr]. \qquad \text{(B.23)}
\end{aligned}$$
Summing (B.22) over $t$ and (B.23) over consecutive pairs gives the expression for $B^2_n$; the estimator is obtained by replacing the non-identifiable terms with the corresponding upper bounds.

C $k$-steps convex estimator

The approach we have described in Section 3.2 naturally extends to using the $k-1$ previous time steps, yielding the weighted combination estimator
$$\hat\tau^c_t = \alpha_1\hat\tau^{TE}_{t-k+1} + \cdots + \alpha_k\hat\tau^{TE}_t,$$
which exhibits the following absolute bias bound.

Proposition 11 (Bound on the bias of $\hat\tau^c_t$).
$$\bigl|E[\hat\tau^c_t] - \tau^{TE}_t\bigr| \leq 2\bigl[(k-1)\alpha_1 + (k-2)\alpha_2 + \cdots + \alpha_{k-1}\bigr]\epsilon.$$

As in the previous section, we can estimate $\alpha_1,\cdots,\alpha_k$ by solving the following convex optimization problem:
$$\arg\min_{\alpha_1,\cdots,\alpha_k}\ \sum_{j=1}^{k}\alpha_j^2\,\widehat{\mathrm{Var}}(\hat\tau^{TE}_{t-k+j}) + 4\bigl[(k-1)\alpha_1 + \cdots + \alpha_{k-1}\bigr]^2\hat\epsilon^2 \quad \text{subject to} \quad \alpha_1 + \cdots + \alpha_k = 1,$$
where $\widehat{\mathrm{Var}}(\hat\tau^{TE}_{t-k+1}),\cdots,\widehat{\mathrm{Var}}(\hat\tau^{TE}_t)$ are estimators of the associated variance terms, provided in Appendix B. This then suggests the plug-in estimator
$$\hat\tau^c_t = \hat\alpha_1\hat\tau^{TE}_{t-k+1} + \cdots + \hat\alpha_k\hat\tau^{TE}_t.$$
We can assert stronger control over the bias of $\hat\tau^c$ by incorporating an additional constraint into the optimization problem. Numerical solutions for either optimization problem are straightforward to obtain using standard numerical solvers. A variance estimator and a confidence interval for $\tau^{TE}_t$ can be constructed in exactly the same way as in the case $k = 2$.

Proof of Proposition 11. This follows from (B.9), the triangle inequality and the unbiasedness of each $\hat\tau^{TE}_s$.

We first give the optimization problem for the general case in which assignments may be correlated across time:
$$\arg\min_{\alpha_1,\cdots,\alpha_k}\ \sum_{j=1}^{k}\alpha_j^2\,\widehat{\mathrm{Var}}(\hat\tau^{TE}_{t-k+j}) + 2\sum_{1\leq i<j\leq k}\alpha_i\alpha_j\,\widehat{\mathrm{Cov}}(\hat\tau^{TE}_{t-k+i},\hat\tau^{TE}_{t-k+j}) + 4\bigl[(k-1)\alpha_1 + \cdots + \alpha_{k-1}\bigr]^2\hat\epsilon^2$$
subject to $\alpha_1 + \cdots + \alpha_k = 1$, where $\widehat{\mathrm{Var}}(\cdot)$ and $\widehat{\mathrm{Cov}}(\cdot,\cdot)$ can be any of the estimators in Propositions 9 and 10. Moreover, if the assignments are independent across time, then $\mathrm{Cov}(\hat\tau^{TE}_{t-k+i},\hat\tau^{TE}_{t-k+j}) = 0$ and we obtain the simpler optimization problem stated above.

Derivation of the optimization problem. We first calculate the variance:
$$\mathrm{Var}(\hat\tau^c_t) = \mathrm{Var}\bigl(\alpha_1\hat\tau^{TE}_{t-k+1} + \cdots + \alpha_k\hat\tau^{TE}_t\bigr) = \sum_{j=1}^{k}\alpha_j^2\,\mathrm{Var}(\hat\tau^{TE}_{t-k+j}) + 2\sum_{1\leq i<j\leq k}\alpha_i\alpha_j\,\mathrm{Cov}(\hat\tau^{TE}_{t-k+i},\hat\tau^{TE}_{t-k+j}).$$
If we want a smaller MSE from using $\hat\tau^c_t$, we need
$$\mathrm{Var}(\hat\tau^c_t) + \bigl|E[\hat\tau^c_t] - \tau^{TE}_t\bigr|^2 \leq \mathrm{Var}(\hat\tau^{TE}_t).$$
By Proposition 11, it suffices to have
$$\sum_{j=1}^{k}\alpha_j^2\,\mathrm{Var}(\hat\tau^{TE}_{t-k+j}) + 2\sum_{1\leq i<j\leq k}\alpha_i\alpha_j\,\mathrm{Cov}(\hat\tau^{TE}_{t-k+i},\hat\tau^{TE}_{t-k+j}) + 4\bigl[(k-1)\alpha_1 + \cdots + \alpha_{k-1}\bigr]^2\epsilon^2 \leq \mathrm{Var}(\hat\tau^{TE}_t). \qquad \text{(C.1)}$$
Now, the left-hand side of (C.1) is convex in $\alpha_1,\cdots,\alpha_k$.

D Additional simulation results

D.1 Simulation results for estimation under the stability assumption

D.1.1 Parameters for the Erdős-Rényi model

For the simulation study in Section 5.2.1, we use $p = 0.1$ for $n = 50$ and then scale the probability $p$ accordingly for larger $n$, so that each unit has the same expected number of neighbors.
D.1.2 The effect of the estimated stability parameter
Recall that our $\hat\epsilon$ is only a lower bound for the true $\epsilon$ and hence may underestimate it. To investigate how our estimate of $\epsilon$ affects the results, we fix $n = 50$ and generate the social network according to the Erdős-Rényi model with $p = 0.1$. We generate 500 realizations of assignments and plug in $\hat\epsilon$, $1.5\hat\epsilon$, $2\hat\epsilon$, $2.5\hat\epsilon$ and $3\hat\epsilon$ for the three kinds of estimators considered above. Table 7 shows the results. We see that the convex combination type estimator with $k = 2$ is not sensitive to the estimate of $\epsilon$, while the convex combination type estimator with $k = 5$ is. Even if we use $3\hat\epsilon$, the two convex combination type estimators still show better performance in terms of root mean squared error.

D.1.3 The effect of the number of time steps

Finally, we investigate how $k$ affects the results. We generate three different social networks, and for each one we plot the root mean squared errors from using 1 time step (i.e., the Horvitz-Thompson type estimator) up to 20 time steps (i.e., using all time steps to estimate the total effect at time step 20). From Figure 5 we can see that the RMSE curves stay flat after a certain value of $k$. Hence, we do not need to worry about using too many time steps, as the optimization problem intrinsically picks the right $k$.
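A sketch of this weight-selection step follows (our reading of the programme in Appendix C; we add a nonnegativity constraint to enforce a proper convex combination, and all names are ours).

```python
import numpy as np
from scipy.optimize import minimize

def combination_weights(var_hats, eps_hat):
    """Weights alpha_1..alpha_k for the convex combination estimator:
    minimise sum_j alpha_j^2 * VarHat_j + 4 * (sum_j (k-j)*alpha_j)^2 * eps_hat^2
    subject to the weights summing to one."""
    var_hats = np.asarray(var_hats, dtype=float)
    k = len(var_hats)
    lags = np.arange(k - 1, -1, -1)                   # coefficients (k-1, ..., 1, 0)

    def objective(a):
        return np.sum(a**2 * var_hats) + 4.0 * (lags @ a)**2 * eps_hat**2

    res = minimize(objective, np.full(k, 1.0 / k), method="SLSQP",
                   bounds=[(0.0, 1.0)] * k,
                   constraints={"type": "eq", "fun": lambda a: a.sum() - 1.0})
    return res.x
```

Because the bias penalty grows with the lag coefficients $(k-j)$, the solver automatically down-weights distant time steps, which is the mechanism behind the flat RMSE curves in Figure 5.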
D.1.4 Lengths of approximate confidence intervals

Table 8 shows the average lengths of the approximate confidence intervals. As expected, the Gaussian confidence intervals are shorter.
Figure 1: Histogram, n = 1000.
Figure 2: Q-Q normal plot, n = 1000.
Figure 3: Histogram, T = 20.
Figure 4: Histogram, T = 100.
Figure 5: Root mean squared errors (RMSE) for $\hat\tau^{TE}_{20}$ and $\hat\tau^c_{20}$.
Table 2: Coverage of two approximate confidence intervals for $\tau^{TE}_t$ with $k = 2$.
Table 5: RMSE for different estimators of the temporal exposure contrast at T = 20.
Table 6: Trade-off between design and interference.
Table 7: Root mean squared errors (RMSE) for $\hat\tau^{TE}_{20}$, $\hat\tau^c_{20}$ with $k = 2$, and $\hat\tau^c_{20}$ with $k = 5$.
Confidence Interval                                        Network 1   Network 2   Network 3
Gaussian CI with variance estimated by Var_d                 27.38       26.62       27.02
Gaussian CI with variance estimated by Var_u                 34.04       32.34       33.33
Chebyshev CI with variance estimated by Var_d                62.47       60.75       61.66
Chebyshev CI with variance estimated by Var_u                77.67       73.79       76.04

Table 8: Lengths of two approximate confidence intervals for $\tau^{TE}_t$ with $k = 2$.
Abadie, A., Athey, S., Imbens, G. W. & Wooldridge, J. M. (2020), 'Sampling-based versus design-based uncertainty in regression analysis', Econometrica 88(1), 265-296.
Andreoni, J. & Samuelson, L. (2006), 'Building rational cooperation', Journal of Economic Theory 127(1), 117-154.
Angrist, J. D. & Kuersteiner, G. M. (2011), 'Causal effects of monetary shocks: Semiparametric conditional independence tests with a multinomial propensity score', Review of Economics and Statistics 93(3), 725-747.
Aronow, P. M. & Samii, C. (2017), 'Estimating average causal effects under general interference, with application to a social network experiment', Annals of Applied Statistics 11(4), 1912-1947.
Athey, S., Eckles, D. & Imbens, G. W. (2018), 'Exact p-values for network interference', Journal of the American Statistical Association 113(521), 230-240.
Basse, G. & Bojinov, I. (2020), 'A general theory of identification', arXiv preprint arXiv:2002.06041.
Basse, G., Ding, Y. & Toulis, P. (2019), 'Minimax designs for causal effects in temporal experiments with treatment habituation', arXiv preprint arXiv:1908.03531.
Basse, G. & Feller, A. (2018), 'Analyzing two-stage experiments in the presence of interference', Journal of the American Statistical Association 113(521), 41-55.
Basse, G., Feller, A. & Toulis, P. (2019), 'Randomization tests of causal effects under interference', Biometrika 106(2), 487-494.
Basse, G. W. & Airoldi, E. M. (2018), 'Limitations of design-based causal inference and A/B testing under arbitrary and network interference', Sociological Methodology 48(1), 136-151.
Blackwell, M. & Glynn, A. N. (2018), 'How to make causal inferences with time-series cross-sectional data under selection on observables', American Political Science Review 112(4), 1067-1082.
Bojinov, I. & Gupta, S. (2022), 'Online experimentation: Benefits, operational and methodological challenges, and scaling guide', Harvard Data Science Review 4(3).
Bojinov, I., Rambachan, A. & Shephard, N. (2021), 'Panel experiments and dynamic causal effects: A finite population perspective', Quantitative Economics 12, 1171-1196.
Bojinov, I. & Shephard, N. (2019), 'Time series experiments and causal estimands: Exact randomization tests and trading', Journal of the American Statistical Association 114(528), 1665-1682.
Bojinov, I., Simchi-Levi, D. & Zhao, J. (2022), 'Design and analysis of switchback experiments', Management Science.
Boruvka, A., Almirall, D., Witkiewitz, K. & Murphy, S. A. (2018), 'Assessing time-varying causal effect moderation in mobile health', Journal of the American Statistical Association 113(523), 1112-1121.
Bramoullé, Y., Djebbari, H. & Fortin, B. (2020), 'Peer effects in networks: A survey', Annual Review of Economics 12, 603-629.
Bramoullé, Y., Djebbari, H. & Fortin, B. (2009), 'Identification of peer effects through social networks', Journal of Econometrics 150(1), 41-55.
Chamandy, N. (2016), 'Experimentation in a ridesharing marketplace', Lyft Engineering. URL: https://eng.lyft.com/experimentation-in-a-ridesharing-marketplace-b39db027a66e
Chin, A. (2018), 'Central limit theorems via Stein's method for randomized experiments under interference', arXiv preprint arXiv:1804.03105.
Choudhury, P., Lane, J. N. & Bojinov, I. (2021), 'Virtual water coolers: A field experiment on the role of virtual interactions on organizational newcomer performance'.
Cox, D. R. (1958), Planning of Experiments, Wiley, New York.
Ding, P. (2017), 'A paradox from randomization-based causal inference', Statistical Science 32(3), 331-345.
Duflo, E. & Saez, E. (2003), 'The role of information and social interactions in retirement plan decisions: Evidence from a randomized experiment', The Quarterly Journal of Economics 118(3), 815-842.
Farronato, C., MacCormack, A. & Mehta, S. (2018), 'Innovation at Uber: The launch of Express Pool', Harvard Business School Case 620-062.
Guo, K. & Basse, G. (2021), 'The generalized Oaxaca-Blinder estimator', Journal of the American Statistical Association, 1-13.
Halloran, M. E. & Struchiner, C. J. (1995), 'Causal inference in infectious diseases', Epidemiology 6(2), 142-151.
Evaluating kindergarten retention policy. G Hong, S W Raudenbush, Journal of the American Statistical Association. 101475Hong, G. & Raudenbush, S. W. (2006), 'Evaluating kindergarten retention policy', Journal of the American Statistical Association 101(475), 901-910.
A generalization of sampling without replacement from a finite universe. D G Horvitz, D J Thompson, Journal of the American Statistical Association. 47260Horvitz, D. G. & Thompson, D. J. (1952), 'A generalization of sampling without replacement from a finite universe', Journal of the American Statistical Association 47(260), 663-685.
Switchback experiments under geometric mixing. Y Hu, S Wager, arXiv:2209.00197arXiv preprintHu, Y. & Wager, S. (2022), 'Switchback experiments under geometric mixing', arXiv preprint arXiv:2209.00197 .
Toward causal inference with interference. M G Hudgens, M E Halloran, 19081744Journal of the American Statistical Association. 103482Hudgens, M. G. & Halloran, M. E. (2008), 'Toward causal inference with interference', Journal of the American Statistical Association 103(482), 832-842. PMID: 19081744.
G W Imbens, D B Rubin, Causal inference in statistics, social, and biomedical sciences. Cambridge University PressImbens, G. W. & Rubin, D. B. (2015), Causal inference in statistics, social, and biomedical sciences, Cambridge University Press.
Estimating dynamic panel data models: a guide for macroeconomists. R A Judson, A L Owen, Economics Letters. 651Judson, R. A. & Owen, A. L. (1999), 'Estimating dynamic panel data models: a guide for macroe- conomists', Economics Letters 65(1), 9-15.
The randomization theory of experimental inference. O Kempthorne, Journal of the American Statistical Association. 50271Kempthorne, O. (1955), 'The randomization theory of experimental inference', Journal of the Amer- ican Statistical Association 50(271), 946-967.
Trustworthy online controlled experiments: A practical guide to a/b testing. R Kohavi, D Tang, Y Xu, Cambridge University PressKohavi, R., Tang, D. & Xu, Y. (2020), Trustworthy online controlled experiments: A practical guide to a/b testing, Cambridge University Press.
Treatment and spillover effects under network interference. M P Leung, Review of Economics and Statistics. 1022Leung, M. P. (2020), 'Treatment and spillover effects under network interference', Review of Eco- nomics and Statistics 102(2), 368-380.
Causal inference under approximate neighborhood interference. M P Leung, Econometrica. 901Leung, M. P. (2022), 'Causal inference under approximate neighborhood interference', Economet- rica 90(1), 267-293.
General forms of finite population central limit theorems with applications to causal inference. X Li, P Ding, Journal of the American Statistical Association. 112520Li, X. & Ding, P. (2017), 'General forms of finite population central limit theorems with applications to causal inference', Journal of the American Statistical Association 112(520), 1759-1769.
Randomization inference for peer effects. X Li, P Ding, Q Lin, D Yang, J S Liu, Journal of the American Statistical Association. 114528Li, X., Ding, P., Lin, Q., Yang, D. & Liu, J. S. (2019), 'Randomization inference for peer effects', Journal of the American Statistical Association 114(528), 1651-1664.
Agnostic notes on regression adjustments to experimental data: Reexamining freedman's critique. W Lin, Annals of Applied Statistics. 71Lin, W. (2013), 'Agnostic notes on regression adjustments to experimental data: Reexamining freedman's critique', Annals of Applied Statistics 7(1), 295-318.
Large sample randomization inference of causal effects in the presence of interference. L Liu, M G Hudgens, Journal of the American Statistical Association. 109505Liu, L. & Hudgens, M. G. (2014), 'Large sample randomization inference of causal effects in the presence of interference', Journal of the American Statistical Association 109(505), 288-301.
Identification of Endogenous Social Effects: The Reflection Problem. C F Manski, The Review of Economic Studies. 603Manski, C. F. (1993), 'Identification of Endogenous Social Effects: The Reflection Problem', The Review of Economic Studies 60(3), 531-542.
Identification of treatment response with social interactions. C F Manski, The Econometrics Journal. 161Manski, C. F. (2013), 'Identification of treatment response with social interactions', The Econo- metrics Journal 16(1), S1-S23.
A graph-theoretic approach to randomization tests of causal effects under general interference. D Puelz, G Basse, A Feller, P Toulis, arXiv:1910.10862arXiv preprintPuelz, D., Basse, G., Feller, A. & Toulis, P. (2019), 'A graph-theoretic approach to randomization tests of causal effects under general interference', arXiv preprint arXiv:1910.10862 .
Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function. A Rambachan, N Shephard, arXiv:1903.01637arXiv preprintRambachan, A. & Shephard, N. (2019), 'Econometric analysis of potential outcomes time series: instruments, shocks, linearity and the causal response function', arXiv preprint arXiv:1903.01637 .
A new approach to causal inference in mortality studies with a sustained exposure period-application to control of the healthy worker survivor effect. J Robins, Mathematical modelling. 79Robins, J. (1986), 'A new approach to causal inference in mortality studies with a sustained expo- sure period-application to control of the healthy worker survivor effect', Mathematical modelling 7(9-12), 1393-1512.
Estimation of the causal effect of a time-varying exposure on the marginal mean of a repeated binary outcome. J M Robins, S Greenland, F.-C Hu, Journal of the American Statistical Association. 94447Robins, J. M., Greenland, S. & Hu, F.-C. (1999a), 'Estimation of the causal effect of a time-varying exposure on the marginal mean of a repeated binary outcome', Journal of the American Statistical Association 94(447), 687-700.
Estimation of the causal effect of a time-varying exposure on the marginal mean of a repeated binary outcome. J M Robins, S Greenland, F.-C Hu, Journal of the American Statistical Association. 94447Robins, J. M., Greenland, S. & Hu, F.-C. (1999b), 'Estimation of the causal effect of a time-varying exposure on the marginal mean of a repeated binary outcome', Journal of the American Statistical Association 94(447), 687-700.
A more general central limit theorem for m-dependent random variables with unbounded m. J P Romano, M Wolf, Statistics & Probability Letters. 472Romano, J. P. & Wolf, M. (2000), 'A more general central limit theorem for m-dependent random variables with unbounded m', Statistics & Probability Letters 47(2), 115 -124.
Interference between units in randomized experiments. P R Rosenbaum, Journal of the American Statistical Association. 102477Rosenbaum, P. R. (2007), 'Interference between units in randomized experiments', Journal of the American Statistical Association 102(477), 191-200.
Fundamentals of stein's method. N Ross, Probability Surveys. 8Ross, N. (2011), 'Fundamentals of stein's method', Probability Surveys 8, 210-293.
Average treatment effects in the presence of unknown interference. F Sävje, P M Aronow, M G Hudgens, arXiv:1711.06399Sävje, F., Aronow, P. M. & Hudgens, M. G. (2017), 'Average treatment effects in the presence of unknown interference', arXiv e-prints p. arXiv:1711.06399.
Detecting spillover effects: Design and analysis of multilevel experiments. B Sinclair, M Mcconnell, D P Green, American Journal of Political Science. 564Sinclair, B., McConnell, M. & Green, D. P. (2012), 'Detecting spillover effects: Design and analysis of multilevel experiments', American Journal of Political Science 56(4), 1055-1069.
What do randomized studies of housing mobility demonstrate?. M E Sobel, Journal of the American Statistical Association. 101476Sobel, M. E. (2006), 'What do randomized studies of housing mobility demonstrate?', Journal of the American Statistical Association 101(476), 1398-1407.
Control using predictions as covariates in switchback experiments. Y Tang, C Huang, D Kastelman, J Bauman, Tang, Y., Huang, C., Kastelman, D. & Bauman, J. (2020), 'Control using predictions as covariates in switchback experiments'.
Social structure of facebook networks. A L Traud, P J Mucha, M A Porter, Physica A: Statistical Mechanics and its Applications. 39116Traud, A. L., Mucha, P. J. & Porter, M. A. (2012), 'Social structure of facebook networks', Physica A: Statistical Mechanics and its Applications 391(16), 4165-4180.
Graph cluster randomization: Network exposure to multiple universes. J Ugander, B Karrer, L Backstrom, J Kleinberg, Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining', KDD '13. the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining', KDD '13New York, NY, USAAssociation for Computing MachineryUgander, J., Karrer, B., Backstrom, L. & Kleinberg, J. (2013), Graph cluster randomization: Network exposure to multiple universes, in 'Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining', KDD '13, Association for Computing Machinery, New York, NY, USA, p. 329-337.
Randomized Graph Cluster Randomization. J Ugander, H Yin, arXiv:2009.02297Ugander, J. & Yin, H. (2020), 'Randomized Graph Cluster Randomization', arXiv e-prints p. arXiv:2009.02297.
Identification and estimation of spillover effects in randomized experiments. G Vazquez-Bare, Journal of Econometrics. Vazquez-Bare, G. (2022), 'Identification and estimation of spillover effects in randomized experi- ments', Journal of Econometrics .
Econometric Analysis of Cross Section and Panel Data. J M Wooldridge, The MIT PressWooldridge, J. M. (2010), Econometric Analysis of Cross Section and Panel Data, The MIT Press.
Optimal experimental design for staggered rollouts. R Xiong, S Athey, M Bayati, G W Imbens, Available at SSRN 3483934Xiong, R., Athey, S., Bayati, M. & Imbens, G. W. (2019), 'Optimal experimental design for stag- gered rollouts', Available at SSRN 3483934 .
| [] |
Compressed Geometric Arrays for Point Cloud Processing

Hoda Roodak, Mahdi Nazm Bojnordi

The ever-increasing demand for 3D modeling in emerging immersive applications has made point clouds an essential class of data for 3D image and video processing. Tree-based structures are commonly used for representing point clouds, with pointers realizing the connections between nodes. Tree-based structures suffer significantly from irregular access patterns for large point clouds, and the memory access indirection in such structures is disruptive to bandwidth efficiency and performance. In this paper, we propose a point cloud representation format based on compressed geometric arrays (CGA). We then examine new methods for point cloud processing based on CGA. The proposed format enables higher bandwidth efficiency by eliminating memory access indirections (i.e., pointer chasing at the nodes of a tree), thereby improving the efficiency of point cloud processing. Our experimental results show that using CGA for point cloud operations achieves 1328× speedup, 1321× better bandwidth utilization, and a 54% reduction in the volume of transferred data as compared to the state-of-the-art tree-based format from the point cloud library (PCL).

Index Terms-point cloud representations, point cloud operations, spatial/temporal coding, memory management
I. INTRODUCTION
Point cloud is a set of geometric points with x, y, and z coordinates sampled from the three-dimensional (3D) space. Point cloud data is usually generated by computer graphics or acquired by light detection and ranging (LiDAR) scanners to represent 3D objects for various applications, e.g., medical imaging, architecture, 3D printing, manufacturing, 3D gaming, and virtual reality (VR). Point clouds store detailed information about the physical space, which may lead to datasets in excess of terabytes. As a result, memory bandwidth and capacity become critical to high-performance point cloud processing. Organizing points in a data structure that supports fast point operations such as insertion, deletion, and search is a must for efficient point cloud processing [1]. Tree and grid are two frequently used data types for representing points [2], [3]. Existing methods exploit either type in various tasks, including storing, analyzing, and visualizing point clouds. A tree structure follows a node-based representation where each node implements a relationship within a subset of points. In a grid, the point cloud data is projected onto a graphical data structure to form a network of 2D images or 3D voxels [4]. Tree is the most commonly used data structure for point cloud representation. For example, the point cloud library (PCL) is an open-source project for point cloud processing based on the k-dimensional tree (kd-tree) and octree [5]. Pointers are required to represent the hierarchical connections among the tree nodes, one for each of the children. The large number of pointers used in a tree results in a heavy load of memory indirections that slow down point cloud processing.
As an alternative to tree-based structures, we propose compressed geometric arrays (CGA), which enable fast lookups, a small memory footprint, and efficient memory bandwidth utilization. This paper is an extension of our recent work on geometric arrays for point cloud processing [6]. Here, we examine new methods for point cloud processing based on CGA. First, we provide an introduction to various point operations such as cloud merging, projection, nearest neighbor (NN) search, and point cloud compression. Then, we investigate the challenges and opportunities of using octrees in point cloud operations. Next, we explain the CGA representation format and its application to these point operations. Finally, we perform a set of experiments on various point clouds generated by LiDAR scanners and computer graphics tools. Our experimental results indicate that CGA can improve execution time and bandwidth utilization significantly. Compared to the PCL library, CGA achieves more than 1000× speedup for the merge, projection, and NN search operations. When used for spatio-temporal compression in MPEG G-PCC, CGA improves both the quality and the compression ratio by 13-30%.
II. BASIC POINT CLOUD OPERATIONS
In this section, we provide the necessary background in point cloud processing and compression.
A. Merging Point Clouds
Point cloud merging is widely used for 3D surface reconstruction in computer vision, computer graphics, and reverse engineering [7]. Merging structured point clouds is different from appending the points of one point cloud to another: iterative tree or graph traversals are necessary to merge point clouds, which often demand significant memory bandwidth. Point cloud merging is usually used when several views of the same object or scene are collected from different angles and positions. As each view may have a different coordinate system, a conversion of the views to a common coordinate system is necessary [8]. Typically, the views are obtained from multiple 3D scanners or a single scanner positioned at different locations with specific orientations. Each view is represented as a point cloud that may have overlapping areas with other views. Merging point clouds consists of several stages. First, the 3D shape of an object is acquired and stored in multiple data sets. Then, the data sets are merged to a common coordinate system according to their acquisition directions [7], [9]. For merging, we consider one point cloud as the base and identify the overlapping points between the base and the other point clouds according to their x, y, and z coordinates. For the overlapping points, we either keep the attributes of the base or an average of the attributes. Given two point sets $S_0$ and $S_1$, the following equation denotes a merge operation.
$$S_0 \cup S_1 = \{\, s_i \mid s_i \in S_0 \lor s_i \in S_1 \,\} \qquad (1)$$
In this equation, $s_i$ represents a point in the 3D space with coordinates $s_i^x$, $s_i^y$, and $s_i^z$. Fig. 1 illustrates the result of merging various views of an example point cloud frame, called Soldier [10].
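To make Eq. (1) concrete, the following is a minimal Python sketch of the merge, assuming numpy and an illustrative layout in which each point is a row of coordinates followed by color attributes (this layout is an assumption, not the format used by PCL or CGA):

```python
import numpy as np

def merge(S0, S1):
    """Merge two clouds per Eq. (1); S0, S1 are (N, 6) arrays of [x, y, z, r, g, b]."""
    merged = np.concatenate([S0, S1])
    # np.unique returns the index of the first occurrence of each coordinate
    # triple, so overlapping points keep the attributes of the base cloud S0
    _, keep = np.unique(merged[:, :3], axis=0, return_index=True)
    return merged[np.sort(keep)]
```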
B. 3D Surface Reconstruction
For 3D surface reconstruction, different views are first ported to a common coordinate system. Then, the initial rotation and transformation of each view are computed to find an optimal alignment among the views. For this purpose, at least three pairs of corresponding points from every two point clouds are needed to calculate a transformation between the views. For example, the following Euclidean metric (λ) may be used to measure the similarity of every two points in space [11].
$$\lambda(s_0, s_1) = \frac{\sqrt{(s_0^x - s_1^x)^2 + (s_0^y - s_1^y)^2 + (s_0^z - s_1^z)^2}}{\sqrt{(s_0^x)^2 + (s_1^x)^2 + (s_0^y)^2 + (s_1^y)^2 + (s_0^z)^2 + (s_1^z)^2}} \qquad (2)$$

In this equation, $s_0$ and $s_1$ are two points from the same or different point clouds. A threshold similarity assessment is often performed to determine the closest points based on $\lambda$. Similar to point merging, 3D surface reconstruction requires significant memory bandwidth.
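As a small illustration, the metric of Eq. (2) can be sketched in a few lines of Python (numpy assumed):

```python
import numpy as np

def similarity(s0, s1):
    """Normalized Euclidean similarity of Eq. (2) between two 3D points."""
    s0, s1 = np.asarray(s0, dtype=float), np.asarray(s1, dtype=float)
    return np.linalg.norm(s0 - s1) / np.sqrt(np.sum(s0**2) + np.sum(s1**2))
```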
C. Point Cloud Projection
Many point cloud processing applications, such as object detection and tracking in autonomous vehicles [12], require pre-processing steps that involve projecting point clouds to 2D images. Unlike 3D point clouds, 2D images follow a regular pattern for organizing pixels in a dense matrix. Such regularity has proved beneficial for extracting useful 2D perceptual information from 3D data, which can facilitate object detection, classification, and convolutional neural networks (CNNs) [12]. Projection from the 3D space to 2D is usually an orthogonal or perspective mapping.
1) Orthogonal Projection:
Orthogonal projection is generally used for representing a 3D object with three or more 2D views. In each view, the object is viewed along parallel lines that are perpendicular to the plane of that view. For example, a typical orthographic projection of a house consists of a top, a front, and a side view. Fig. 2 shows the orthogonal projections of an example point cloud frame from different directions. For example, consider a point $s = (s^x, s^y, s^z)$ in the 3D space projected onto a 2D point $t = (t^x, t^y)$ along the z-axis. The coordinates of $t$ can be calculated as follows [13].
$$\begin{pmatrix} t^x \\ t^y \end{pmatrix} = \begin{pmatrix} m_x & 0 & 0 \\ 0 & m_y & 0 \end{pmatrix} \begin{pmatrix} s^x \\ s^y \\ s^z \end{pmatrix} \qquad (3)$$
In this example, $m_x$ and $m_y$ set the projection angle relative to the laser scanning direction. For instance, $m_x = m_y = 1$ sets the projection angle to 90 degrees from the laser scanning direction.
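A minimal sketch of Eq. (3), assuming numpy and an (N, 3) array of points (an illustrative layout rather than a PCL structure):

```python
import numpy as np

def orthogonal_project(points, mx=1.0, my=1.0):
    """Project (N, 3) points onto a 2D view along the z-axis per Eq. (3)."""
    M = np.array([[mx, 0.0, 0.0],
                  [0.0, my, 0.0]])
    return points @ M.T  # (N, 3) x (3, 2) -> (N, 2)
```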
2) Perspective Projection: Perspective projection linearly maps 3D objects onto a 2D view such that distant objects appear smaller than nearer ones. In a typical perspective projection, distances, angles, and parallelism are not preserved [14]. For example, the perspective projection of a 3D point $s_0$ with coordinates $(s^x, s^y, s^z)$ to a point $s'_0$ with coordinates $(s'^x, s'^y, s'^z)$ is computed by the following equation.
$$\begin{pmatrix} s'^x \\ s'^y \\ s'^z \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{1}{D} & 0 \end{pmatrix} \begin{pmatrix} s^x \\ s^y \\ s^z \\ 1 \end{pmatrix} \qquad (4)$$
In this projection, the projection plane is considered at −D on the z-axis. The 2D coordinates are computed by the following equations.
$$s'^x = \frac{s^x \cdot D}{s^z}, \qquad s'^y = \frac{s^y \cdot D}{s^z}, \qquad s'^z = D \qquad (5)$$
Note that in a perspective projection, one of the $s'^x$, $s'^y$, or $s'^z$ components may be omitted to form a 2D view [14]. Moreover, vertex processing, modeling, and viewing transformations may be required before the projection. For instance, to view three faces of a cuboid, one rotation is applied before the perspective projection. For a rotation by an angle $\theta$ around the y-axis, the following matrix operations are necessary.

$$\begin{pmatrix} s'^x \\ s'^y \\ s'^z \\ 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & \frac{1}{D} & 0 \end{pmatrix} \begin{pmatrix} \cos(\theta) & 0 & -\sin(\theta) & 0 \\ 0 & 1 & 0 & 0 \\ \sin(\theta) & 0 & \cos(\theta) & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} s^x \\ s^y \\ s^z \\ 1 \end{pmatrix} \qquad (6)$$

In all, point cloud projection requires visiting every point to compute its projected coordinates. In a tree structure, this data stream is generated by traversing the tree nodes, which results in irregular access patterns and low bandwidth utilization.
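For illustration, the following sketch chains Eqs. (4)-(6): a rotation by $\theta$ around the y-axis followed by the perspective projection with the image plane at distance D (numpy assumed; homogeneous coordinates are normalized at the end):

```python
import numpy as np

def perspective_project(points, theta, D):
    """Rotate (N, 3) points around the y-axis, then project per Eqs. (4)-(6)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, 0, -s, 0],       # rotation by theta around the y-axis
                  [0, 1,  0, 0],
                  [s, 0,  c, 0],
                  [0, 0,  0, 1]], dtype=float)
    P = np.array([[1, 0, 0,       0],  # perspective matrix of Eq. (4)
                  [0, 1, 0,       0],
                  [0, 0, 1,       0],
                  [0, 0, 1.0 / D, 0]], dtype=float)
    h = np.c_[points, np.ones(len(points))]  # homogeneous (N, 4)
    proj = h @ (P @ R).T                     # rotate first, then project
    return proj[:, :2] / proj[:, 3:4]        # s'x = sx*D/sz, s'y = sy*D/sz
```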
D. Point Cloud NN Search
Consider a 3D space S and a query point $s_0 \in S$. An NN search selects the point of S that is closer to $s_0$ than any other point in S. Several metrics exist for measuring the distance between two points $s_0$ and $s_1$ in space. For example, the Euclidean distance ($\varepsilon$) is defined by the following equation.
$$\varepsilon(s_0, s_1) = \sqrt{(s_0^x - s_1^x)^2 + (s_0^y - s_1^y)^2 + (s_0^z - s_1^z)^2} \qquad (7)$$
Given a point set S with n points and a query point $s_i$ ($s_i \in S$), NN search finds a subset C ($C \subset S$) containing the nearest points to $s_i$, such that for all $s_0 \in C$ and $s_1 \in S \setminus C$ (the set whose elements belong to S but not C):

$$\varepsilon(s_i, s_0) \le \varepsilon(s_i, s_1) \qquad (8)$$

Fig. 3 shows the result of an example NN search that finds the 400 nearest points. Most NN methods are based on recursive subdivisions of the 3D space [15]. For example, octree relies on uniform subdivisions [15], while kd-tree uses a non-uniform subdivision approach [15]. A kd-tree based NN algorithm is described in Algorithm 1.
Algorithm 1 NN Search
Input:
  A dataset S
  A query point q
  Tree root: the root of a kd-tree
  A non-leaf node in the kd-tree divides the space into two parts:
    left subtree: points to the left of this space
    right subtree: points to the right of this space
  Superior: a node that has a child is called the child's superior
Output: the k closest points s_1, s_2, ..., s_k ∈ S to q

for each query point q do
  Start at the tree root
  // Traverse the kd-tree to the subtree that q belongs to
  Find the leaf and store it as the "current best point"
  // Traverse upward
  for each superior node of the "current best point" do
    if the superior point is closer to q than the current best point then
      select it as the current best point
    end if
    // Check the subtree on the other side
    if there is a closer point to q in this subtree then
      traverse this subtree to a leaf
    end if
  end for
  // The nearest neighbor point is found
end for
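In practice, the traversal of Algorithm 1 is available off the shelf; a minimal sketch using SciPy's kd-tree (an assumption about the toolchain, not part of the proposed method) looks as follows:

```python
import numpy as np
from scipy.spatial import cKDTree

points = np.random.rand(10_000, 3)      # hypothetical (x, y, z) samples
query = np.array([0.5, 0.5, 0.5])

tree = cKDTree(points)                  # builds the kd-tree once
dists, idxs = tree.query(query, k=400)  # 400 nearest neighbors, as in Fig. 3
nearest = points[idxs]
```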
E. Point Cloud Compression
Generally, point clouds are of two types: static and streaming. Static point clouds are suitable for representing one object, such as a building or a human face. In contrast, streaming point clouds are used to represent the progress of an event in time. For compressing such data, the redundancies among neighboring frames may be extracted to enhance the quality of the compressed data [16]. Most of these methods are based on an octree formation of the 3D space in which the point cloud is encoded in terms of occupied octree cells. Fig. 4 shows the most important components of 3D geometric point cloud compression, which combines features from common 3D octree-based point cloud compression with the hybrid video coding used in traditional codecs such as H.264 and HEVC, including block-based motion compensation [17].
1) Constructing Octree: An octree is defined as a tree in which each node comprises up to eight children. The octree of a 3D space is created by recursively dividing it into octants. The G-PCC encoder recursively divides the bounding box aligned to the point cloud into eight children, and only non-empty children continue to be divided. An 8-bit code is used to represent each octree subdivision: a 1-bit flag denotes whether a child cell is empty or not, with '1' indicating a non-empty child cell and '0' an empty child cell. The cells are traversed in a fixed order, and the flag bits of all child cells are collected to obtain an 8-bit code, called the occupancy code, as shown in Fig. 5. Starting from the root node, each octree level contributes an 8-bit occupancy code per subdivided node, and only non-empty nodes are divided further. The decoder only needs the occupancy codes to reconstruct an octree.
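A minimal sketch of deriving one occupancy code, assuming numpy; the fixed child order (x, then y, then z relative to the cell center) is an assumption for illustration:

```python
import numpy as np

def occupancy_code(points, center):
    """Return an 8-bit code: bit i is set iff octant i of the cell is non-empty."""
    x = (points[:, 0] >= center[0]).astype(int)
    y = (points[:, 1] >= center[1]).astype(int)
    z = (points[:, 2] >= center[2]).astype(int)
    octants = (x << 2) | (y << 1) | z  # octant index of every point
    code = 0
    for o in np.unique(octants):
        code |= 1 << int(o)
    return code

pts = np.array([[0.1, 0.9, 0.2], [0.8, 0.8, 0.8]])
print(f"{occupancy_code(pts, center=(0.5, 0.5, 0.5)):08b}")  # prints 10000100
```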
2) Spatial Coding: Spatial coding consists of three steps [17]. a) Bounding Box Alignment: A box with a lower corner at $(s^x_{min}, s^y_{min}, s^z_{min})$ and an upper corner at $(s^x_{max}, s^y_{max}, s^z_{max})$ is called a bounding box. This box is used as the root of the octree and may change from one frame to another. The spatio-temporal locality of point coordinates may vary between consecutive frames; therefore, the geometric correspondence of points between consecutive frames may be lost, which makes temporal prediction hard. Bounding box alignment is a technique for adjusting the size of the bounding box so that it includes all the points [17].
b) Coding of the Occupancy Codes: Starting from the root, each octree subdivision results in an 8-bit occupancy code, as shown in Fig. 5. Since only non-empty nodes are subdivided, the decoder only needs the occupancy codes to reconstruct the point cloud, so efficient coding of the occupancy codes is crucial. c) Color Encoder: In addition to the geometry information, the color attributes must be coded in point cloud compression. To encode the color attributes efficiently, the redundancy between them should be extracted. For this purpose, the octree is traversed in depth-first order, in which all descendants of the root are first visited recursively and then the root is visited. The color attributes are then written into 8×8 blocks. Since consecutive pixels in the depth-first traversal are often co-located and therefore correlated with each other, the redundancy between them can be extracted using the discrete cosine transform (DCT) and differential pulse code modulation (DPCM). After applying the DCT, the DC component is large and varied, but often close to the previous block's value; DPCM therefore encodes the difference between the DC value of each 8×8 block and that of the previous block. Algorithm 2 shows the steps of color coding [17].
Algorithm 2 Color coding in point cloud compression
Start at the root of the octree
Traverse the octree in depth-first order
// first visit all descendants of the root recursively, then visit the root
for all visited points do
  Extract the color attributes
end for
Write all extracted color attributes into 8×8 blocks
// in the depth-first traversal, consecutive pixels are often co-located
// and therefore correlated with each other
Apply the DCT to the 8×8 blocks
Use DPCM to encode the difference between the DC value of each 8×8 block and that of the previous block
Send the DPCM residuals
Send the AC values

d) Temporal Prediction: The temporal prediction algorithm uses the spatial coding method in combination with a prediction scheme. The data in a predicted frame (P-frame) is encoded in two parts: the points that could not be predictively coded, and the points that can be predicted well from the previous frame. For the first part, the spatial coding method is used; for the second part, cubes of appropriate size (8×8×8, 16×16×16, or 32×32×32) are generated. Each cube is then traversed to find a corresponding cube in the reference frame (I-frame). The color variance of the points in the cube is also checked, and temporal prediction is only performed in areas of the point cloud with low color/texture variance, since temporal prediction may produce visible artifacts in image regions with high texture variance. The prediction is performed by computing a rigid transform between the matching cubes. The rigid transform is a 4×4 matrix composed of a 3×3 rotation matrix and a 3×1 translation vector. This computation is based on the iterative closest point (ICP) algorithm and only takes the geometric coordinates into account. If the ICP algorithm converges, the predictor, i.e., the rigid transform and the position of the cube, is coded; otherwise the cube is coded using spatial coding. Using the matching cube in the I-frame, the motion vector is computed. Finally, a color offset is coded to compensate for color differences between the corresponding cubes, e.g., due to brightness differences. Algorithm 3 shows the predictive point cloud coding algorithm [17].
Algorithm 3 Temporal point cloud coding
Generate the cubes of the P-frame and the I-frame
for each cube_P in the P-frame do
  Search for the corresponding cube_I in the I-frame
  if cube_I is not found then
    use spatial coding for cube_P
  else
    check the color variance of the cube_P points
    if the color variance < threshold then
      compute a rigid transform between cube_P and cube_I using ICP
      if ICP converged then
        code the rigid transform
        code the position of cube_P
      else
        use spatial coding for cube_P
      end if
    else
      use spatial coding for cube_P
    end if
    code the color offset between cube_P and cube_I
  end if
end for
III. DESIGN CHALLENGES AND OPPORTUNITIES
Traversing an octree suffers from costly memory indirections due to pointer chasing at the tree nodes. Memory access indirection is counterproductive to the bandwidth efficiency and performance of point cloud processing due to the lack of locality in memory accesses. In the literature, there are several implementations of octrees [18].
A. Octree Implementation 1) Standard Representation: In this representation, each tree node includes eight pointers. The leaf nodes have no children; therefore, two node types are used to form a tree: one for the inner nodes and one for the leaves. Moreover, a flag must be used to distinguish between an inner node and a leaf. Some octree implementations add pointers to the parent node for bottom-up traversals.
2) Block Representation: To save memory, it is possible for each node to store only one pointer to a block of eight children, instead of eight individual pointers. However, all eight children must be allocated when a leaf node is subdivided, and it is not possible to allocate child nodes on demand.
3) Sibling-child Representation: In this method, each node stores two pointers: one for the next sibling, which is the next child of the node's parent, and one for the first child of the node. In comparison to the standard representation, this representation needs less memory for pointers.
In the standard representation, the best case is when the first child is accessed and only one indirection is required. In the worst case, where the last child is accessed, eight indirections are required. The block representation requires fewer indirections than the standard representation. The sibling-child representation requires two pointers per node instead of eight, so it consumes less memory than the standard representation. When the child nodes are accessed sequentially, the number of indirections in this representation is the same as in the standard representation; but in the worst case, where nodes are accessed randomly, the number of indirections is much higher [18]. Fig. 6 shows the difference between the various representations of an octree.
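The three layouts can be sketched as follows; Python dataclasses stand in for what would typically be C++ structs with raw pointers (the field names are illustrative):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StandardNode:       # eight child pointers per node
    children: List[Optional["StandardNode"]] = field(
        default_factory=lambda: [None] * 8)
    is_leaf: bool = True  # flag distinguishing inner nodes from leaves

@dataclass
class BlockNode:          # one pointer to a block of eight children
    child_block: Optional[List["BlockNode"]] = None  # all eight allocated at once

@dataclass
class SiblingChildNode:   # two pointers per node
    first_child: Optional["SiblingChildNode"] = None
    next_sibling: Optional["SiblingChildNode"] = None  # next child of the parent
```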
B. Performance Analysis
To investigate the computational complexity, memory bandwidth utilization, and volume of data transfer, we use four computer-generated sequences that capture human bodies in motion, i.e., Soldier, Longdress, Loot, and Redandblack [10], and two LiDAR sequences [19]. The point cloud processes, i.e., point cloud merging, 3D to 2D projection, NN search, and point cloud compression, are implemented on a system with an Intel(R) Core(TM) i7-8750H CPU @ 2.5 GHz (2.6 GHz) and 8.00 GB RAM. We use the octree structure of the PCL library [5] for point cloud representation.

PCL is a C++ library for 3D point cloud processing in which most mathematical operations are implemented with and based on Eigen, an open-source library for linear algebra. Considering the efficiency and performance of modern CPUs, PCL provides support for OpenMP [20] and the Intel Threading Building Blocks (TBB) library [21] for multi-core parallelization. To make the algorithms more efficient, the modules in PCL pass data around using shared pointers, avoiding the need to re-copy data already present in the system, so the primary data structures in PCL are fully optimized [5]. PCL includes various algorithms for point cloud processing, such as filtering, surface reconstruction, and segmentation [5]. PCL, the best-known open-source library for 3D point cloud processing, is released under the terms of the BSD license, which makes it free for commercial and research use [22]. PCL has two space-partitioning data structures, kd-tree and octree, both of which use a tree structure to store the points. Based on these benefits, the octree structure of this library is used in this paper as the baseline structure.

The processing time, bandwidth utilization, and volume of data transfer are measured for the computer-generated and LiDAR sequences and are shown in Fig. 7 and Fig. 8, respectively. The x-axis shows the number of points in each sequence. We vary the number of points by downsizing the point clouds to three different versions in addition to their original sequences, using the pcdownsample function of MATLAB. In this function, the points within a 3D box of a specific size (Gridstep) are merged into a single point in the output sequence. Gridstep, the input of the pcdownsample function, is specified as a numeric value; it should be small when there are numerous points in the 3D space, so that fine-grained grids are constructed. The color attributes of the points within a 3D box are averaged accordingly. This method preserves the shape of the point cloud.

The processing time, memory bandwidth utilization, and the amount of transferred data for the different operations are then measured using the Intel VTune Profiler [23]. Memory bandwidth utilization is defined as the LLC (last-level cache) miss count divided by the processing time. The LLC is the last, and longest-latency, level in the memory hierarchy before main memory (DRAM); any memory request missing there must be serviced by local or remote DRAM, with significant latency. The LLC miss count metric gives the total number of demand loads that missed the LLC. The high bandwidth consumption shown in Fig. 7 and Fig. 8 indicates the need to improve bandwidth utilization for the various point cloud processes.
IV. RELATED WORK
A. Data Structures for Point Clouds
Numerous data structures have been proposed for storing point cloud data in the literature. The windowed priority queue is a data structure for point clouds that holds references to the first dimension of the points; the intervals of points are then identified by the windowed priority queue. This approach focuses on indexing the data for fast retrieval and can be generalized to all dimensions of the point data by instantiating the windowed priority queue repeatedly [24]. Octree is another point cloud representation, a tree in which each node has up to eight children and represents the volume formed by a rectangular cuboid [2]. The internal nodes of an octree represent 3D regions (cuboids), and empty leaf nodes indicate that no point exists in the regions they represent. When storing a point cloud, the maximum depth is defined as a stopping rule for occupied volumes. In an octree, most information about the inner nodes is derived from the information of neighboring nodes; for example, the depth of each node is calculated as the depth of its parent plus one, and parent pointers are pushed onto a stack to remember the parents of each node. Hence, information about a node that can be obtained by traversing the tree is not saved in memory. In addition, one byte is added to each node, in which each bit corresponds to one octant of the node, so it is not necessary to store eight children per node, making memory consumption more efficient. The data structure proposed in [3] extends the Voronoi diagram and Delaunay triangulation [25] to an environment of 3D voxels, but it is not clear how this approach can be scaled efficiently to handle large point cloud data. The kd-tree provided in PCL is a space-partitioning data structure for organizing points in a k-dimensional space. It splits the space into two parts at each level: some or half of the points may be stored in the left subtree and the rest in the right subtree. Splitting stops when the number of points in a node reaches a specified value [26]. The main drawback of this structure is that its performance degrades rapidly if new points must be added to the kd-tree after its construction [27]. Some implementations of octrees store redundant information for each node; for instance, pointers to neighbor and parent nodes are saved for fast tracking of nodes in search operations. In other implementations, the position and size of each node and eight pointers to child nodes are stored, so the subdivision can be stopped quickly to reduce the required number of nodes [2]. Serialized pointer-free octrees are the most memory-efficient structure, but the access time in this structure is O(n), where n is the size of the data (number of points/cells); this implementation is useful in bandwidth-intensive applications [2].
B. Point Cloud Processing
Machine learning on point clouds has been attracting increasing attention in recent years. Fundamental tasks known from 2D images, including 3D shape classification, 3D object detection and tracking, and 3D point cloud segmentation, also exist for point clouds [28]. The 3D shape classification methods are divided into multi-view and volumetric-based methods according to the input data type. In the multi-view methods, the 3D shape is first projected to multiple views to extract view-wise features; these features are then used for shape classification. Finding the view-wise features is the main challenge for these methods. The volumetric-based methods voxelize a point cloud into a 3D grid to apply a 3D convolutional neural network for shape classification. A 3D object detector takes the input point cloud of a scene and generates a 3D bounding box around each detected object. These methods can be divided into two categories: region proposal-based and single-shot methods. The proposal-based methods first extract several possible regions (proposals) containing objects; the region-wise features are then used to determine the category label of each region. The single-shot methods predict the categories and regress the predicted bounding boxes of objects directly using a single-stage network [28]. 3D object tracking uses the locations of an object in the first frame to estimate its location in subsequent frames. The geometric information in point clouds can overcome the main drawbacks of object detection in 2D images, such as occlusion, illumination, and scale variation [28]. Automated driving has developed rapidly in recent years. It requires vision-based SLAM (simultaneous localization and mapping) systems, but a static environment is a prerequisite for these systems to work properly, which greatly limits the use of SLAM in real 3D environments [29]. In addition, a mono-camera perception system cannot provide the reliable 3D geometry that is essential for autonomous driving. Therefore, autonomous vehicles are usually equipped with a suite of sensors to ensure accurate environmental perception, and camera-LiDAR fusion is becoming an emerging research area. Processing the data gathered from such sensors poses a key challenge: in terms of data structure, the point cloud is irregular, orderless, and continuous, whereas the image is regular, ordered, and discrete, which makes point cloud processing a challenging task [30]. Virtual and augmented reality created from point clouds is another attractive application of point cloud data; the large volumes of data collected with LiDAR and RGB sensors have to be transformed into representations that fulfill the computational requirements of VR and AR setups [31]. This paper proposes to use compressed geometric arrays for point cloud representation in various point cloud processes. Our simulation results show that with this representation format, sparse point cloud data in particular can be represented in a way that suits computation-intensive point cloud processes; using CGA, the computational complexity of the mentioned applications can be reduced.
V. COMPRESSED GEOMETRIC ARRAYS
We propose an efficient array-based data representation for 3D point clouds that reduces the amount of stored data and increases memory bandwidth efficiency. Our approach minimizes indirection by using geometric arrays to represent the point cloud data, since indexing into an array is much faster and simpler. Reducing the number of indirections yields much higher memory bandwidth utilization.
To process a point cloud, the x, y, and z coordinates of the points must be traversed to find the existing points. For instance, in the merging process, in which two octrees are merged, the second octree is traversed to find each point, and the first octree is then traversed to find the appropriate place to add this point. Fig. 9 shows an example of the CGA data layout for six points (i.e., $p_0$ to $p_5$) with different coordinates. We have six arrays in this format:

• An $X_{index}$ vector shows the x coordinates of the points in the $Y_{index}$ array.
• An $X_{pointer}$ vector stores the cumulative number of existing points with the x coordinate equal to the corresponding entry of the $X_{index}$ vector. It is defined by the recursive relation below:
  - $X_{pointer}[0] = 0$
  - $X_{pointer}[i+1] = X_{pointer}[i] + n_i$, where $n_i$ is the number of distinct y values among the existing points whose x coordinate equals $X_{index}[i]$

The proposed data structure is called a compressed geometric array because duplicated coordinates are not stored as many times as they are repeated: only the number of times the x and y coordinates repeat is saved in the $X_{pointer}$ and $Y_{pointer}$ arrays, respectively. Therefore, this data structure requires less memory space and is much more compact than storing all the coordinates. CGA is an extension of the geometric arrays (G-Arrays) for point cloud processing proposed in [6]. CGA adds the $X_{index}$ array, which holds the x coordinates of the points; the $X_{pointer}$ vector then stores the cumulative number of existing points per entry of $X_{index}$. In G-Arrays, where the $X_{index}$ array did not exist, one cell of the $X_{pointer}$ array was assigned to each value between the minimum and maximum x coordinates; if no point existed with a specific coordinate, the corresponding cell of the $X_{pointer}$ array was zero. Due to the sparsity of points in 3D point clouds, many cells of the $X_{pointer}$ array would be zero, polluting the memory. In the CGA format, only the existing x coordinates are stored in the $X_{index}$ array, so the corresponding $X_{pointer}$ array is much more compact.
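A minimal sketch of constructing the coordinate arrays of a CGA, assuming numpy and integer coordinates; attribute arrays are omitted, and the helper name build_cga is illustrative. The layout generalizes the compressed sparse row (CSR) idea to three dimensions:

```python
import numpy as np

def build_cga(points):
    """points: (N, 3) integer coordinates; returns the five coordinate arrays."""
    # sort lexicographically by x, then y, then z
    pts = points[np.lexsort((points[:, 2], points[:, 1], points[:, 0]))]
    X_index, Y_index, Z_index = [], [], []
    X_pointer, Y_pointer = [0], [0]
    for x in np.unique(pts[:, 0]):
        X_index.append(int(x))
        px = pts[pts[:, 0] == x]
        ys = np.unique(px[:, 1])
        for y in ys:
            Y_index.append(int(y))
            zs = px[px[:, 1] == y][:, 2]
            Z_index.extend(int(z) for z in zs)
            Y_pointer.append(Y_pointer[-1] + len(zs))  # points under this (x, y)
        X_pointer.append(X_pointer[-1] + len(ys))      # distinct y values under x
    return X_index, X_pointer, Y_index, Y_pointer, Z_index
```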
It should be noted that the contribution of this paper over our previous work in [6] is that here we examine new methods for point cloud processing, including cloud merging, 3D to 2D projection, nearest neighbor (NN) search, and point cloud compression, based on CGA.
A. Point Cloud Merging
Consider merging two point clouds stored in separate CGAs, as shown in Fig. 10. To merge these two point clouds, we concatenate the fields (e.g., dimensions and attributes) of the two point clouds.
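A minimal sketch of this merge, reusing the build_cga helper from the sketch above together with an illustrative decode_cga helper; decoding both structures to point lists, deduplicating, and rebuilding keeps the merged CGA sorted and compressed:

```python
import numpy as np

def decode_cga(cga):
    """Expand a CGA back into an (N, 3) array of points."""
    X_index, X_pointer, Y_index, Y_pointer, Z_index = cga
    pts = []
    for xi, x in enumerate(X_index):
        for yi in range(X_pointer[xi], X_pointer[xi + 1]):
            for zi in range(Y_pointer[yi], Y_pointer[yi + 1]):
                pts.append((x, Y_index[yi], Z_index[zi]))
    return np.array(pts)

def merge_cga(cga_a, cga_b):
    """Merge two CGAs; overlapping coordinates are kept once, per Eq. (1)."""
    merged = np.unique(np.concatenate([decode_cga(cga_a), decode_cga(cga_b)]), axis=0)
    return build_cga(merged)  # build_cga from the earlier sketch
```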
B. Point Cloud 3D to 2D Projection
Point cloud projection implements a single-view projection of a 3D point cloud, resulting in a 2D plane. For this purpose, each point of the point cloud must be accessed and processed using the proper projection matrix, i.e., orthogonal or perspective projection, as explained in Section II-C. We therefore access the points stored in the CGA as explained in Section V-A. Then, the orthogonal or perspective projection matrices described in Section II-C are applied to each point $s_0 = (s_0^x, s_0^y, s_0^z)$ extracted from the CGA to find the projected point.
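Combining the earlier sketches, a single-view projection over a CGA reduces to decoding the arrays and applying a projection matrix (the helper names come from the sketches above and remain illustrative):

```python
cga = build_cga(points)                             # points: (N, 3) integers
projected_2d = orthogonal_project(decode_cga(cga))  # Eq. (3) over every point
```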
C. Point Cloud NN Search
We develop a CGA-based NN algorithm for point cloud processing, which finds the nearest point to a given query point for the compression method described in Section V-D. To find the nearest point to a query point, we first look for a point with exactly the same coordinates as the query point. Using the lookup method explained in Section V-A, we determine whether such a point exists. If not, the $X_{index}$ array is searched for the x coordinate nearest to that of the query point. Then, following the corresponding pointers in the $X_{pointer}$ and $Y_{pointer}$ arrays, the y and z coordinates nearest to those of the query point are identified. Next, we calculate the Euclidean distance (i.e., $\epsilon$) between the found point and the query point. Any existing point whose x, y, and z coordinates each differ from the corresponding coordinates of the query point by less than $\epsilon$ is a candidate for the nearest point. The candidate x coordinates (i.e., $s_i^x$) are identified in $X_{index}$ by calculating the distance between each element of $X_{index}$ and the query point's x coordinate. Note that $X_{pointer}[s_i^x + 1] - X_{pointer}[s_i^x]$ gives the number of existing points in the candidate list. Following the corresponding pointers in the $X_{pointer}$ and $Y_{pointer}$ arrays, the corresponding y and z coordinates are found; if their distances from the query point's coordinates are less than $\epsilon$, these points are candidates for the nearest point. Finally, among these candidate points, the point with the minimum Euclidean distance is selected. To evaluate the proposed NN search algorithm, a batch of NN queries using random query points in the bounding box is performed.
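A minimal sketch of this search, assuming the coordinate arrays produced by the build_cga sketch; the first candidate's distance serves as the bound $\epsilon$ that prunes the per-axis candidate scan:

```python
import numpy as np

def cga_nn(cga, q):
    """Nearest point to query q over a CGA built by build_cga."""
    X_index, X_pointer, Y_index, Y_pointer, Z_index = cga
    q = np.asarray(q, dtype=float)
    X = np.asarray(X_index, dtype=float)
    # 1) follow the closest x, then y, then z to get an initial candidate
    xi = int(np.argmin(np.abs(X - q[0])))
    y0, y1 = X_pointer[xi], X_pointer[xi + 1]
    yi = y0 + int(np.argmin(np.abs(np.asarray(Y_index[y0:y1]) - q[1])))
    zs = np.asarray(Z_index[Y_pointer[yi]:Y_pointer[yi + 1]], dtype=float)
    best = np.array([X[xi], Y_index[yi], zs[np.argmin(np.abs(zs - q[2]))]])
    eps = best_d = np.linalg.norm(best - q)
    # 2) any point closer than eps must lie within eps of q along every axis
    for cx in np.nonzero(np.abs(X - q[0]) <= eps)[0]:
        for cy in range(X_pointer[cx], X_pointer[cx + 1]):
            if abs(Y_index[cy] - q[1]) > eps:
                continue
            for cz in range(Y_pointer[cy], Y_pointer[cy + 1]):
                p = np.array([X[cx], Y_index[cy], Z_index[cz]], dtype=float)
                d = np.linalg.norm(p - q)
                if d < best_d:
                    best, best_d = p, d
    return best
```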
D. Point Cloud Compression with CGA
The proposed method for spatio-temporal compression of point clouds using CGA is presented in Fig. 11. The first frame is coded spatially, based on the prior work on MPEG G-PCC [32], and is used as the reference for the second frame. We employ the second path of Fig. 11 to treat all other frames as predicted frames.
1) Clustering Points:
The clustering step is proposed to find the motion vectors as the first step of the proposed temporal prediction technique. With clustering, parts of consecutive frames with high similarity are most likely placed in corresponding clusters of the reference and predicted frames. Since the motion estimation process looks for the most similar point in the reference frame for each point of the predicted frame, searching only the corresponding cluster in the reference frame is enough to find the most similar point. Without loss of generality, the k-means algorithm is used in this paper for clustering the reference and predicted frames. The right number of clusters in the k-means algorithm is a trade-off between the number of distance computations and the quality of clustering. Based on the minimum and maximum values of the x, y, and z coordinates, we consider eight initial center points at the corners of each frame.
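A minimal sketch of this clustering step, assuming scikit-learn; the eight initial centroids are placed at the corners of the frame's bounding box as described above, and the helper name cluster_frame is illustrative:

```python
from itertools import product

import numpy as np
from sklearn.cluster import KMeans

def cluster_frame(points):
    """Cluster an (N, 3) frame into 8 clusters seeded at the bounding-box corners."""
    lo, hi = points.min(axis=0), points.max(axis=0)
    corners = np.array(list(product(*zip(lo, hi))), dtype=float)  # 8 corners
    km = KMeans(n_clusters=8, init=corners, n_init=1).fit(points)
    return km.labels_, km.cluster_centers_
```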
2) Estimating Cluster Motion: In temporal prediction, the movement of textures or objects across consecutive frames is described by the motion estimation process. When the best matching block for a predicted block is found in the reference frame, the corresponding movement between the reference and predicted blocks is represented by a motion vector. In the proposed approach, the centroids of the clusters computed in the previous step are used to estimate coarse-grained motion vectors, as shown in Fig. 12. The change in the centroids of corresponding clusters between the reference and predicted frames is a good estimate of the movement of the points, since each cluster contains points with similar coordinates. The motion vectors are therefore computed from the centroids and coded as part of the output bitstream.

3) Generating Point Residual: After computing the motion vectors, a reconstructed frame is generated from the predicted frame and the estimated motion vectors: the points of each cluster are displaced according to the cluster's motion vector. The reconstruction error frame, often referred to as the residual frame, contains the information needed to correct the error of the motion estimation process. For this purpose, the difference between the attributes of corresponding points in the reference and reconstructed frames is computed. The main issue, however, is how to find the point in the reference frame corresponding to each point of the reconstructed frame. The geometry of points in point clouds, unlike computer-generated or natural 2D videos, has limited spatio-temporal locality: unlike in 2D video frames, there is no guarantee that all points exist in a 3D point cloud frame, so finding the corresponding point in the reference frame for each specific point in the current frame is not trivial. We use our proposed CGA format to remove this spatio-temporal redundancy and achieve a higher compression ratio: the reference frame is represented in CGA so that its points can be searched quickly. A matching criterion such as mean squared error (MSE) or sum of absolute differences (SAD) can be used to find the corresponding point; SAD is preferable for VLSI implementation because of its simple computational steps, so we use SAD for choosing the best corresponding point. After searching, the candidate with the smallest SAD is chosen as the best matching reference for each point of the reconstructed frame. Then, the residual frame is created by calculating the difference between the RGB values of the reconstructed and reference frame points.
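To make steps 2) and 3) concrete, the following sketch estimates per-cluster motion vectors from the centroids and forms a color residual via a SAD search over the reference geometry; cluster correspondence by index and the brute-force SAD scan are simplifying assumptions (the actual scheme searches the CGA of the reference frame):

```python
import numpy as np

def cluster_motion_vectors(centers_ref, centers_pred):
    """Coarse-grained MV per cluster: displacement of corresponding centroids."""
    return np.asarray(centers_ref) - np.asarray(centers_pred)

def color_residual(pred_pts, pred_rgb, ref_pts, ref_rgb, labels, mvs):
    """Motion-compensate the predicted frame, then code per-point RGB residuals."""
    rec_pts = pred_pts + mvs[labels]  # reconstructed (motion-compensated) frame
    res = np.empty((len(rec_pts), 3), dtype=int)
    for i, p in enumerate(rec_pts):
        # smallest sum of absolute differences over the reference geometry
        j = int(np.argmin(np.abs(ref_pts - p).sum(axis=1)))
        res[i] = pred_rgb[i].astype(int) - ref_rgb[j].astype(int)
    return res
```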
VI. EVALUATION
A. Experimental Setup
To evaluate the various point cloud processes, i.e., point cloud merging, 3D to 2D projection, NN search, and point cloud compression, a system with an Intel(R) Core(TM) i7-8750H CPU @ 2.5 GHz (2.6 GHz), 8.00 GB RAM, and Microsoft Windows 10 is used. For the proposed method, the proposed CGA is used to implement the mentioned point cloud processes. As the baseline, these point cloud processes are implemented using the octree data structure of the PCL library. PCL is split into a series of modular libraries, such as the pcl_octree library, which provides efficient methods for creating a hierarchical tree data structure from point cloud data. This library enables spatial partitioning, downsampling, and other operations on the point data set. Each octree node has either eight children or no children; the root node defines a cubic bounding box that includes all points, and at every tree level this space is subdivided by a factor of 2 [5]. Since PCL is released under the terms of the BSD license and is free for commercial and research use, we have used the pcl_octree library as the baseline to implement the various point cloud processes.
B. Dataset
The Moving Picture Experts Group (MPEG) has suggested coding solutions for various categories of point clouds: 1) LIDAR point clouds for dynamically acquired data, 2) surface point clouds for static point cloud data, and 3) video-based point clouds for dynamic content. According to these categories, the video-based point cloud compression (V-PCC) and geometry-based point cloud compression (G-PCC) standards have been developed [33], [34], [35]. Video-based point cloud coding focuses on point sets with a relatively uniform distribution of points, while LIDAR and surface point clouds are appropriate for sparser distributions. In this paper, LIDAR and surface point clouds and the corresponding coding standard (G-PCC) are used in our experiments. As surface point clouds, the 8i Voxelized Surface Light Field (8iVSLF) dataset [36] is used. For each point cloud in the 8iVSLF dataset, the full body of a human subject was captured by 39 synchronized RGB cameras at 30 frames per second (fps). In our experimental results, Soldier, Longdress, RedandBlack, and Loot are from the 8iVSLF dataset. The other two point cloud sequences, GT Madame and Urbane scene [37], are LIDAR point clouds. GT Madame contains two PLY files, each with 10 million points, collected from rue Madame, a street in France. The Urbane scene dataset consists of around 2 km of mobile laser scanning point clouds acquired in two cities. Each point cloud consists of a list of x, y, and z geometric coordinates and the corresponding attributes.
C. Simulation Results
In this section, we assess the performance of CGA for point cloud processing, i.e., merging, projection, NN search, and compression of 3D point clouds. The point cloud operations are implemented as explained in Section V. For the baseline method, we consider the octree structure from the PCL library [5]. First, we compare the time required for octree and CGA construction for various point cloud sequences, as shown in Table I. The results show that, in terms of construction time, CGA is superior to the octree data structure.
We measure performance in terms of processing time, memory bandwidth, and the amount of data transferred between the CPU and memory. We compute the speedup, memory bandwidth, and amount of transferred data of CGA over PCL for the computer-generated and LiDAR sequences, as shown in Fig. 13 and Fig. 14, respectively. The x-axis shows the number of points for each sequence. The results indicate that CGA can replace octrees in point cloud processing to improve the processing time significantly.
The significant improvement in processing time for the LiDAR point clouds compared to the computer-generated ones is explained by the fact that the LiDAR data used in this paper has no color attributes and its geometric values are much smaller than those of the computer-generated sequences, so the amount of data to be processed is much smaller. Table II shows six points from a computer-generated and a LiDAR point cloud sequence for clarification.

Fig. 13. Speedup, memory bandwidth utilization, and the volume of transferred data of CGA over PCL for computer-generated point cloud sequences.

The normalized bandwidth utilization of CGA over PCL for the computer-generated and LiDAR sequences, shown in Fig. 13 and Fig. 14, demonstrates the effectiveness of the proposed CGA approach in achieving much higher bandwidth utilization than PCL. The existence of peaks and valleys depends on the type of point cloud data and the hardware configuration. The simulations are done on real hardware, and the number of cache misses affects the bandwidth utilization considerably. As explained in Section III-B, the number of points is varied by downsizing each point cloud to three different versions; the simulations are then run on these downsized versions as well as on the original sequences. In the original sequences, with the maximum number of points, there is a high correlation between the coordinates and the color attributes of adjacent points, which greatly reduces the number of cache misses during point cloud processing. We then used the downsampling process to reduce the number of points of each cloud to half of the initial points. Due to the cache size of the system (384 KB), significant volumes of points do not fit in the cache; in addition, the correlation between the coordinates and the color attributes of the points is reduced considerably by downsampling, so the number of cache misses during the point cloud operations increases significantly. The peaks in Fig. 13 and Fig. 14 show the bandwidth utilization for these downsized versions. For the other downsized versions (the valleys in Fig. 13 and Fig. 14), the total number of points is so small that most of them fit in the cache, so the number of cache misses during the point cloud operations is reduced considerably.

One of the main reasons for the superior performance of CGA over PCL is the reduced data transfer during point cloud processing. The amount of data transferred by CGA, normalized to that of PCL, is shown for the computer-generated and LiDAR sequences in Fig. 13 and Fig. 14, respectively; it is improved significantly compared to PCL.

We also use CGA for temporal encoding of point clouds. In our proposed approach, the frames of the sequences are coded sequentially. Advanced compression methods for point clouds try to remove the spatial and temporal redundancy within a point cloud. For the temporal encoding of a frame (the predicted frame), instead of directly encoding the raw values, the encoder finds points similar to the points of the predicted frame in a previously encoded frame (the reference frame).
Temporal prediction relies on estimating motion between blocks of consecutive frames. In this process, given a specific block A in the predicted frame and its matching block B in the reference frame, a motion vector (MV) is defined as the vector connecting the top-left coordinates of A and B. The extracted motion vector and a residual block, defined as the difference between A and B, are then encoded and sent to the decoder. Next, the reconstruction error frame, often referred to as the residual frame, must be found; it contains the information needed to correct the error of the motion estimation process. For this purpose, the difference between the attributes of corresponding points in the reference and reconstructed frames must be computed. The main issue in point cloud compression is how to find, for each point of the reconstructed block, the corresponding point in the reference frame. The geometry of points in point clouds, unlike natural 2D videos, has limited spatio-temporal locality; in other words, there is no guarantee that a given point exists in a 3D point cloud frame. So finding the point in the reference frame corresponding to a specific point in the current frame is not trivial. We need to store the points in a searchable data structure. The data structure may be a sparse matrix, because the point cloud does not necessarily include all possible points in the 3D space, and storing the non-existing points would pollute the memory. Even in its sparse representation, a typical point cloud comprises millions of points, which imposes significant pressure on memory capacity and bandwidth.
Our proposed CGA format enables faster lookups than octrees when removing the spatio-temporal redundancy during the compression process. The results in Table III indicate that CGA can replace octrees in point cloud compression to enable significantly faster point lookups. For the baseline method, all frames are coded using their best parameter sets in the MPEG G-PCC open-source library [17]. We also measure performance in terms of output quality and compression ratio. Several objective quality metrics are available in the literature to measure the quality of encoded point clouds. The point-to-point metric measures quality using point-based distances: for each point in the decoded frame, the nearest neighbor in the original point cloud frame is found, and a distance-based mean squared error (MSE) is computed over all pairs of points. The point-to-plane metric is often used to better evaluate output quality, since point cloud points represent the surfaces of objects in a visual scene; this metric approximates the underlying surface at each point by a plane and yields smaller errors for points that are closer to the surface [39]. We study various quantization parameters (QP = 11, 10, and 9), which determine the number of bits representing each component of the points. Fig. 15 shows the rate-distortion comparison of the baseline and our proposed method.
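As an illustration, the point-to-point metric can be computed along the following lines (a brute-force C++ sketch of ours; in practice the nearest-neighbor step is performed with CGA or octree lookups, and the peak value used for PSNR is a convention we assume here, e.g. the bounding-box diagonal):

    #include <algorithm>
    #include <cmath>
    #include <limits>
    #include <vector>

    struct Pt { double x, y, z; };

    // Point-to-point geometric MSE: for every decoded point, the squared distance
    // to its nearest neighbor in the original frame, averaged over all points.
    double pointToPointMSE(const std::vector<Pt>& decoded, const std::vector<Pt>& original) {
        double acc = 0.0;
        for (const Pt& d : decoded) {
            double best = std::numeric_limits<double>::max();
            for (const Pt& o : original) {  // brute force; replace with a CGA/octree NN search
                const double dx = d.x - o.x, dy = d.y - o.y, dz = d.z - o.z;
                best = std::min(best, dx * dx + dy * dy + dz * dz);
            }
            acc += best;
        }
        return acc / static_cast<double>(decoded.size());
    }

    // PSNR in dB for a peak value p (e.g. the bounding-box diagonal).
    double psnr(double mse, double p) { return 10.0 * std::log10(p * p / mse); }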
VII. ANALYSIS OF THE TIME AND SPACE COMPLEXITY

For the various point cloud operations, the constructed point cloud must be traversed to access its points. For octree construction, the three-dimensional space between the minimum and maximum coordinates of the points is recursively subdivided into eight octants, and an octant is further subdivided only when it contains points; the final subdivisions are the leaf nodes that store points. Each subdivision halves the space in each dimension, so the maximum depth of the octree is log N, where the space has dimensions N x M x L with N ≥ M, L. Hence, using the octree structure, accessing the points of a point cloud takes O(K log N), where K is the number of points. In CGA, to access the points of a point cloud, the X_index array is traversed to reach the x coordinates of the points; then, following the corresponding pointers in the X_pointer and Y_pointer arrays for each x coordinate, the matching y and z coordinates are identified in O(1). So the time complexity of accessing all points in CGA is O(K), where K is the number of points. In this way, the time complexity of the various point cloud processes using CGA is much lower than with the octree structure. A C++ sketch of the CGA layout and its point lookup is given below.

The only available data structure in the literature that uses pointers similarly to our proposed CGA is the windowed priority queue (WinPQ) [24], which holds references to points sorted in one dimension. Intervals of points are extracted from the WinPQ; each interval contains all points within a specific one-dimensional window, and the interval can be updated by moving the window in discrete steps. The one-dimensional WinPQ can be used to scan higher-dimensional point cloud data by instantiating it repeatedly within nested loops, so for three-dimensional point clouds the time complexity of point cloud operations using WinPQ is O(K^3), where K is the number of points. WinPQ is therefore much more costly than our proposed CGA.

We also evaluate the performance of CGA compared to octrees in terms of memory consumption. For this purpose, an octree and a CGA are constructed to store the points of the computer-generated point clouds. Table IV shows the memory saving of the CGA structure over the octree.
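The following minimal C++ sketch shows the CGA layout and the point access procedure described above. The array names follow the paper's Value, Z_index, Y_index, Y_pointer, and X_pointer arrays; indexing X_pointer directly by the x coordinate is our reading of the access description, and the code is illustrative rather than the actual implementation:

    #include <cstdint>
    #include <vector>

    struct CGA {
        std::vector<int> x_pointer;       // cumulative #points per x coordinate
        std::vector<int> y_index;         // y coordinate of each point, grouped by x
        std::vector<int> y_pointer;       // cumulative #points per (x, y) group
        std::vector<int> z_index;         // z coordinate of each point
        std::vector<std::uint32_t> value; // packed attribute (e.g. RGB) per point

        // Returns the index of point (x, y, z) in `value`, or -1 if it is absent.
        // Each stage scans only a short pointer-delimited range, so a full
        // traversal of the cloud costs O(K) for K points, versus O(K log N)
        // for an octree over an N x M x L space.
        int find(int x, int y, int z) const {
            if (x < 0 || x + 1 >= static_cast<int>(x_pointer.size())) return -1;
            for (int k = x_pointer[x]; k < x_pointer[x + 1]; ++k) {
                if (y_index[k] != y) continue;
                for (int j = y_pointer[k]; j < y_pointer[k + 1]; ++j)
                    if (z_index[j] == z) return j;
            }
            return -1;
        }
    };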
VIII. CONCLUSION
In this paper, we proposed a new data structure for processing point clouds, called CGA. We examined various point cloud operations, including merging, projection, NN search, and point cloud compression. Our simulation results on a set of computer-generated and LiDAR point cloud sequences show that the proposed format provides significant speedups, better bandwidth utilization, and less transferred data compared to the state-of-the-art tree-based structures from the PCL library.
Fig. 1. Point cloud merging example.
Fig. 2. Orthogonal projection from 3D to 2D, 0 degrees (orange) and 90 degrees (blue).
Fig. 3. Point cloud 400-nearest search example.
Fig. 4. Schematic of point cloud compression [17].
Fig. 5. Illustrative example of an octree and its occupancy code, based on the prior work [17].
Fig. 6. Various octree structures, based on the prior work [18].
Fig. 7. Processing time, bandwidth utilization, and the volume of data transfer for different point cloud processes and various computer-generated point cloud sequences using the PCL library.
Fig. 8. Processing time, bandwidth utilization, and the volume of data transfer for different point cloud processes and various LiDAR point cloud sequences using the PCL library.
Fig. 9. Presentation of six points in a 3D space and the points in the proposed geometric arrays.

The proposed geometric arrays are organized as follows:
• A Value array stores the attribute values of the existing points in the point cloud.
• A Z_index vector holds the z coordinates of the corresponding points in the Value array.
• A Y_index vector holds the y coordinates of the points in the Value array.
• A Y_pointer vector stores the cumulative number of existing points whose y coordinate equals Y_index[i]. It is defined by the following recursive relation:
  - Y_pointer[0] = 0
  - Y_pointer[i+1] = Y_pointer[i] + (number of existing points whose y coordinate equals Y_index[i])

To merge two point clouds, one can create a new CGA, access the points of each input CGA, and add them to the new one. To access a specific point s_0 = (s_x0, s_y0, s_z0) in CGA, we first determine whether this point exists. For this purpose, it is enough to refer to the X_index array: if X_pointer[s_x0 + 1] − X_pointer[s_x0] is non-zero, there exists at least one point with x coordinate equal to s_x0. The pointers X_pointer[s_x0] and X_pointer[s_x0 + 1] then delimit the candidate locations for the corresponding y coordinates in the Y_index array. By searching the cells with indices between these two pointers in the Y_index array, the point with x and y coordinates equal to s_x0 and s_y0 can be found. The index k of the point found is then used in the Y_pointer array to find the corresponding z coordinate: Y_pointer[k + 1] − Y_pointer[k] gives the number of points with x and y coordinates equal to s_x0 and s_y0, respectively, and the pointers Y_pointer[k] and Y_pointer[k + 1] delimit the candidate locations for the corresponding z coordinate in the Z_index array.

Fig. 10. Illustrative example of merging two CGA-based point clouds.
Fig. 11. Our proposed point cloud spatio-temporal encoding based on CGA.
Fig. 12. Finding motion vectors based on the clustering of points in the reference and predicted frames.
Fig. 14. Speedup, memory bandwidth utilization, and the volume of transferred data of CGA over PCL for LiDAR point cloud sequences.
Fig. 15. PSNR versus compression ratio for various quantization parameters (QP) for the baseline method and our compression scheme using CGA.

H. Roodaki is with K. N. Toosi University of Technology, Tehran, Iran (email: hroodaki@kntu.ac.ir). Mahdi Nazm Bojnordi is with the School of Computing, University of Utah, Utah, United States (email: bojnordi@cs.utah.edu).
TABLE I
COMPARISON OF THE REQUIRED TIME FOR OCTREE AND CGA CONSTRUCTION

Point cloud sequence | Octree  | CGA
Soldier              | 3.93 s  | 3.87 s
Longdress            | 2.68 s  | 1.90 s
Loot                 | 2.94 s  | 2.49 s
RedandBlack          | 2.75 s  | 2.28 s
TABLE II
THE x, y, AND z COORDINATES AND COLOR ATTRIBUTES OF COMPUTER-GENERATED AND LIDAR POINT CLOUD SEQUENCES

Soldier (computer-generated)       GT Madame (LiDAR)
x   y   z   red  green  blue       x    y   z
82  73  67  19   19     19         127  14  159
82  73  67  7    7      7          127  15  158
82  73  67  17   17     17         127  15  159
82  73  67  18   18     18         127  17  155
79  72  67  11   11     11         127  19  153
83  77  73  15   15     15         127  15  157
TABLE III
PERFORMANCE GAINS OF THE CGA OVER THE OCTREE IN THE COMPRESSION PROCESS

Point cloud sequence | Lookup speedup over octree during compression
Soldier              | 16.59x
Longdress            | 15.97x
RedandBlack          | 23.58x
Loot                 | 54.78x
TABLE IV
MEMORY SAVING OF THE CGA STRUCTURE OVER THE OCTREE

Sequence name | Memory saving over octree
Soldier       | 1.44x
Longdress     | 1.43x
RedandBlack   | 1.42x
Loot          | 1.40x
J.-F. Lalonde, N. Vandapel, and M. Hebert. "Data Structures for Efficient Dynamic Processing in 3-D", The International Journal of Robotics Research (IJRR), vol. 26, no. 8, pp. 777-796, 2007.
J. Elseberg, D. Borrmann, and A. Nuchter. "Efficient processing of large 3D point cloud", 2011 XXIII International Symposium on Information, Communication and Automation Technologies, New York, NY, USA, 2011.
J. Gao and R. Gupta. "Efficient proximity search for 3-d cuboids", Computational Science and Its Applications, 2003.
E. Che. An Efficient Point Cloud Processing Framework for Terrestrial and Mobile Lidar Data Via Reconstructing the Scan Pattern Grid, Oregon State University, Corvallis, OR 97331, United States, 2018.
R.B. Rusu and S. Cousins. "3D is here: Point Cloud Library (PCL)", IEEE International Conference on Robotics and Automation (ICRA), Shanghai, China, 2011.
H. Roodaki, M. Dehyadegari, and M. Nazm Bojnordi. "G-Arrays: Geometric Arrays for Efficient Point Cloud Processing", International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Toronto, Ontario, Canada, 2021.
V. Matiukas and D. Miniotas. "Point Cloud Merging for Complete 3D Surface Reconstruction", Elektronika ir Elektrotechnika, vol. 113, no. 7, pp. 73-79, 2011.
J.C. Fernandez, A. Singhania, J. Caceres, K.C. Slatton, M. Starek, and R. Kuma. "An overview of lidar point cloud processing software", GEM Center Report Number Rep-2007-12-001, 2007.
H.-Y. Lin, M. Subbarao, and S.-Y. Park. "Complete 3D model reconstruction from multiple views", Proceedings of SPIE 4567, Machine Vision and Three-Dimensional Imaging Systems for Inspection and Metrology II, 2002.
E. d'Eon, B. Harrison, T. Myers, and P.A. Chou. "An Effective Way to Represent Octrees", ISO/IEC JTC1/SC29 WG11 (MPEG) input document m42914, 2018.
E. Holowko, J. Wojsz, R. Sitnik, and M. Karaszewski. "Color-Based Algorithm for Automatic Merging of Multiview 3D Point Clouds", ACM Journal on Computing and Cultural Heritage, vol. 7, no. 3, pp. 1-21, 2014.
Y. Lyu, X. Huang, and Z. Zhang. "Learning to Segment 3D Point Clouds in 2D Image Space", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020.
Y.K. Cho and M. Gai. "Projection-Recognition-Projection Method for Automatic Object Recognition and Registration for Dynamic Heavy Equipment Operations", Journal of Computing in Civil Engineering, vol. 28, no. 5, 2014.
Q.H. Truong. "Knowledge-based 3D point clouds processing", Université de Bourgogne, NNT: 2013DIJOS045, tel-00977434, 2013.
B.H. Drost and S. Ilic. "Almost constant-time 3D nearest-neighbor lookup using implicit octrees", Machine Vision and Applications, vol. 29, pp. 299-311, 2018.
X. Sun, H. Ma, Y. Sun, and M. Liu. "A Novel Point Cloud Compression Algorithm Based on Clustering", IEEE Robotics and Automation Letters, vol. 4, pp. 2132-2139, 2019.
R. Mekuria, K. Blom, and P. Cesar. "Design, Implementation, and Evaluation of a Point Cloud Codec for Tele-Immersive Video", IEEE Transactions on Circuits and Systems for Video Technology, vol. 27, no. 4, pp. 828-842, 2017.
I. Gargantini. "An Effective Way to Represent Octrees", Communications of the ACM, vol. 25, no. 12, pp. 905-910, 1982.
B. Vallet, M. Brédif, A. Serna, B. Marcotegui, and N. Paparoditis. "TerraMobilita/iQmulus urban point cloud analysis benchmark", Computers and Graphics, Elsevier, vol. 49, pp. 126-133, 2015.
J. Reinders. Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism, O'Reilly, 2007.
M. Miknis, R. Davies, P. Plassmann, and A. Ware. "Near real-time point cloud processing using the PCL", 2015 International Conference on Systems, Signals and Image Processing (IWSSIP), 2015.
"Intel VTune Profiler", https://software.intel.com/content/www/us/en/develop/documentation/vtune-help/top.html, accessed October 2021.
J.R. Lersch and B.N. Webb. "Structural-surface extraction from 3D laser radar point clouds", Proc. SPIE 5412, Laser Radar Technology and Applications IX, pp. 126-133, 2004.
A. Okabe and A. Suzuki. "Locational optimization problems solved through Voronoi diagrams", European Journal of Operational Research, vol. 98, no. 3, pp. 445-456, 1997.
Z. Xiao and W. Huang. "Kd-tree Based Nonuniform Simplification of 3D Point Cloud", 2009 Third International Conference on Genetic and Evolutionary Computing, Guilin, 2009.
H. Samet. The Design and Analysis of Spatial Data Structures, Addison-Wesley, 1989.
Y. Guo, H. Wang, Q. Hu, H. Liu, L. Liu, and M. Bennamoun. "Deep Learning for 3D Point Clouds: A Survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020.
S.-W. Lee, C.-M. Hsu, M.-C. Lee, Y.-T. Fu, F. Atas, and A. Tsai. "Fast Point Cloud Feature Extraction for Real-time SLAM", 2019 International Automatic Control Conference (CACS), Keelung, Taiwan, 2019.
Y. Fan, Q. Zhang, S. Liu, Y. Tang, X. Jing, J. Yao, and H. Han. "Semantic SLAM With More Accurate Point Cloud Map in Dynamic Environments", IEEE Access, vol. 8, pp. 112237-112252, 2020.
D. Borrmann, A. Nuechter, and T. Wiemann. "Large-Scale 3D Point Cloud Processing for Mixed and Augmented Reality", IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), Munich, Germany, 2018.
MPEG-G-PCC. MPEG geometric point cloud compression, https://github.com/MPEGGroup/mpeg-pcc-tmc13, 2021.
S. Schwarz, M. Preda, V. Baroncini, M. Budagavi, P. Cesar, P.A. Chou, R.A. Cohen, M. Krivokuca, S. Lasserre, A. Li, J. Llach, K. Mammou, R. Mekuria, O. Nakagami, E. Siahaan, A. Tabatabai, A.M. Tourapis, and V. Zakharchenko. "Emerging MPEG Standards for Point Cloud Compression", IEEE Journal on Emerging and Selected Topics in Circuits and Systems, vol. 9, no. 1, pp. 133-148, 2019.
E.S. Jang, M. Preda, K. Mammou, A.M. Tourapis, J. Kim, D.B. Graziosi, S. Rhyu, and M. Budagavi. "Video-Based Point-Cloud-Compression Standard in MPEG: From Evidence Collection to Committee Draft [Standards in a Nutshell]", IEEE Signal Processing Magazine, vol. 36, no. 3, pp. 118-123, 2019.
D. Graziosi, O. Nakagami, S. Kuma, A. Zaghetto, T. Suzuki, and A. Tabatabai. "An overview of ongoing point cloud compression standardization activities: video-based (V-PCC) and geometry-based (G-PCC)", APSIPA Transactions on Signal and Information Processing, vol. 9, 2020.
M. Krivokuca, P.A. Chou, and P. Savill. "8i Voxelized Surface Light Field (8iVSLF) Dataset", ISO/IEC JTC1/SC29 WG11 (MPEG) input document m42914, Ljubljana, 2018.
X. Roynard, J.-E. Deschaud, and F. Goulette. "Paris-Lille-3D: A large and high-quality ground-truth urban point cloud dataset for automatic segmentation and classification", International Journal of Robotics Research, vol. 37, no. 6, pp. 545-557, 2018.
D. Tian, H. Ochimizu, C. Feng, R. Cohen, and A. Vetro. "Geometric distortion metrics for point cloud compression", IEEE International Conference on Image Processing (ICIP), Beijing, China, 2017.
A GROMOV-WITTEN THEORY FOR SIMPLE NORMAL-CROSSING PAIRS WITHOUT LOG GEOMETRY

HSIAN-HUA TSENG AND FENGLONG YOU

Department of Mathematics, Ohio State University, 100 Math Tower, 231 West 18th Ave., Columbus, OH 43210, USA
Department of Mathematics, University of Oslo, PO Box 1053 Blindern, 0316 Oslo, Norway

January 2023. arXiv:2008.04844. DOI: 10.1007/s00220-023-04656-2.
ABSTRACT. We define a new Gromov-Witten theory relative to simple normal-crossing divisors as a limit of the Gromov-Witten theory of multi-root stacks. Several structural properties are proved, including relative quantum cohomology, Givental formalism, Virasoro constraints (genus zero), and a partial cohomological field theory. Furthermore, we use the degree zero part of the relative quantum cohomology to provide an alternative mirror construction to that of Gross-Siebert [19] and to prove the Frobenius structure conjecture of Gross-Hacking-Keel [16] in our setting.

Date: January 31, 2023.
For $r_1, \dots, r_n \in \mathbb{N}$ pairwise coprime, the multi-root stack $X_{D,\vec{r}} := X_{(D_1,r_1),\dots,(D_n,r_n)}$, defined in Definition 17, is smooth. The first result of this paper shows that the Gromov-Witten theory of $X_{D,\vec{r}}$ is a polynomial in $r_1, \dots, r_n$; see Corollary 18 in Section 3. This is achieved by certain polynomiality results for root stacks associated to a pair $(\mathcal{X}, \mathcal{D})$ of a Deligne-Mumford stack $\mathcal{X}$ and a smooth divisor $\mathcal{D} \subset \mathcal{X}$.

Theorem 1. For $r$ sufficiently large, genus 0 Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$ are independent of $r$. Genus $g > 0$ Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$ are polynomials in $r$. Furthermore, the constant term of the polynomial is the corresponding relative Gromov-Witten invariant of $(\mathcal{X}, \mathcal{D})$.

We refer the readers to Theorems 9 and 10 in Section 2 for the precise statements. Taking the constant terms yields a theory canonically attached to the pair $(X, D)$. See Definition 20 in Section 3 for the precise definition of this new theory.

We may view this new theory formally as the Gromov-Witten theory of the infinite root stack $X_{D,\infty}$ associated to $(X, D)$, as constructed in [31], because in genus 0 we show that the Gromov-Witten theory of $X_{D,\vec{r}}$ is independent of $r_1, \dots, r_n$, and taking constant terms is the same as taking the large $r_i$ limit.
Question 2. Can one define Gromov-Witten theory of infinite root stacks directly?
Naturally, one can expect such a definition to coincide with the constant terms of Gromov-Witten theory of finite root stacks. By [31], the infinite root stack structure determines the logarithmic structure. It is natural to expect that infinite root stack Gromov-Witten theory should determine logarithmic Gromov-Witten theory.
1.2. Logarithmic theory. Our new theory has some advantages:

(1) Negative contact orders are naturally included. A relative marking with positive contact order $k > 0$ along a divisor $D_i$ corresponds to an orbifold marking with $\mathrm{age}(N_{D_i/X_{D,\vec{r}}})$ equal to $k/r_i$ for $r_i \gg 1$. On the other hand, a relative marking with negative contact order $k < 0$ along a divisor $D_i$ comes from an orbifold marking with $\mathrm{age}(N_{D_i/X_{D,\vec{r}}})$ equal to $1 + k/r_i$ for $r_i \gg 1$. Roughly speaking, if we have negative contact order with a divisor $D_i$ at a marking, then the irreducible component of the curve containing this marking should map into $D_i$. When $D$ is irreducible, we recover relative Gromov-Witten theory with negative contact orders defined in [13] and [14], which is a generalization of the usual relative Gromov-Witten theory of [25], [20], [26] and [27].

(2) It enjoys very nice properties. In particular, we highlight the following.
• In genus zero, we have
  - the topological recursion relation (TRR) (Section 4.2),
  - the Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) equation (Section 4.2),
  - the relative quantum cohomology ring (Section 4.3),
  - Givental formalism (Section 5),
  - Virasoro constraints (Section 6).
• In all genera, we have
  - string, dilaton, and divisor equations (Section 4.2),
  - a partial CohFT (Section 8).

(3) It is quite computable. It has already been proved in [38] that one can construct an I-function for the Gromov-Witten theory of $X_{D,\infty}$. Therefore, the Givental formalism that we develop in Section 5 provides a necessary foundation for [38] to state a mirror theorem for $X_{D,\infty}$ (see Theorem 31). The mirror theorem allows us to compute genus zero invariants of $X_{D,\infty}$ in various cases; some examples and applications were given in [38]. Therefore, one may expect that Gromov-Witten invariants of infinite root stacks are more accessible (than log Gromov-Witten invariants) in terms of computation, as many sophisticated techniques of traditional Gromov-Witten theory are available.
We may view our new theory as a logarithmic Gromov-Witten theory of $(X, D)$. As such, it is natural to ask:

Question 3. How is the new theory related to the (punctured) logarithmic Gromov-Witten theory of Abramovich-Chen-Gross-Siebert defined in [17], [10], [2], [5]?

In [38], we showed by explicit computations that the two theories are equal in some cases. When $D$ is irreducible, the main results of [1] and [36] imply that the two theories are the same for invariants without punctured points. As pointed out by Dhruv Ranganathan, the two theories are not equal in general: for example, logarithmic invariants are invariant under birational transformations [4], but orbifold invariants are not. However, it is perhaps reasonable to expect that our new theory and the punctured logarithmic Gromov-Witten theory are equivalent in some way, and it would be interesting to find the precise relation between them. Then one could compute punctured invariants through the corresponding invariants of $X_{D,\infty}$. Recently, the birational invariance of orbifold invariants has been investigated in [8] and [41].

Another interesting question is:

Question 4 (R. Pandharipande). Does the new theory have a degeneration formula?
When $D$ is irreducible and there are no punctured points, it is proved in [36] that our theory is the relative Gromov-Witten theory of [26], which admits a degeneration formula [27]. A degeneration formula for logarithmic Gromov-Witten theory can be found in [3], [24] and [30].

1.3. Mirror constructions. In [18] and [19], Gross-Siebert constructed mirrors to a log Calabi-Yau pair $(X, D)$ and a maximally unipotent degeneration $X \to S$ of log Calabi-Yau manifolds. The mirrors are constructed from the degree 0 part of the relative quantum cohomology ring $QH^0(X, D)$.
A key ingredient is the punctured Gromov-Witten theory which is used to describe the structure constants for the product rule.
We construct a relative quantum cohomology ring for the pair $(X, D)$ in Section 4 using Gromov-Witten invariants of $X_{D,\infty}$. The associativity of the relative quantum cohomology follows from the WDVV equation. Restricting to the degree 0 part of the relative quantum cohomology ring, $QH^0(X_{D,\infty})$, there is a product structure naturally coming from the restriction of the relative quantum product. Similar to [19], the associativity is not expected to be preserved under this restriction. We show in Section 7 that associativity does hold under some assumptions. More precisely, we have the following theorem.
Theorem 5 (=Theorem 37). When $K_X + D$ is nef or anti-nef, the structure constants $N^{\mathrm{orb},\beta}_{p_1,p_2,-r}$ define, via (7.5), a commutative, associative $S_I$-algebra structure on $R_I$ with unit given by $\vartheta_0$, where $S_I$ and $R_I$ are defined in (7.3) and (7.4) respectively; the structure constants are defined in (7.2).

Remark 6. Theorem 5 is [19, Theorem 1.9], which is a main theorem of [19], if we replace the structure constants by the corresponding punctured Gromov-Witten invariants. It is worth noting that in our setting the proof of associativity is substantially shorter. Gross-Siebert also proved the case when $(X, D)$ is (non-minimal) log Calabi-Yau in [19, Theorem 1.12], which avoids issues with the existence of minimal models. We plan to study this case in the future.
Furthermore, we show that the Frobenius structure conjecture of Gross-Hacking-Keel [16] holds in our setting.
Theorem 7 (=Theorem 38). When $K_X + D$ is nef or anti-nef, the Frobenius structure conjecture (see Conjecture 35) holds for $QH^0(X_{D,\infty})$.
In Section 7.3, we use the algebra in Theorem 5 to construct mirrors following the Gross-Siebert program (see [18] and [19]). Naturally, one can ask:

Question 8. How are the resulting mirrors related to mirrors from other constructions?
One can expect that the resulting mirrors are closely related to, if not the same as, the Gross-Siebert mirrors. One piece of evidence is given in [38, Section 6], where we obtained a mirror identity between quantum periods of Fano varieties and classical periods of their mirror Landau-Ginzburg potentials by replacing log invariants with formal invariants of infinite root stacks.
1.4. Acknowledgement. We thank Mark Gross, Rahul Pandharipande, Dhruv Ranganathan, and Helge Ruddat for valuable comments and suggestions.
H.-H. T. is supported in part by a Simons Foundation collaboration grant. F. Y. is supported by a postdoctoral fellowship funded by NSERC and the Department of Mathematical Sciences at the University of Alberta.
2. POLYNOMIALITY
In this section, we generalize the main results of [36], [13] and [14] to the case when the target $\mathcal{X}$ is a Deligne-Mumford stack instead of a variety. In the next section, we will use these results to prove the polynomiality of the Gromov-Witten theory of multi-root stacks.
2.1. Set-up. Let $\mathcal{X}$ be a smooth proper Deligne-Mumford stack over $\mathbb{C}$ with projective coarse moduli space. Let $\mathcal{D} \subset \mathcal{X}$ be a smooth irreducible divisor. Assume that $r \in \mathbb{N}$ is coprime to the order of every stabilizer of $\mathcal{X}$. Then the stack of $r$-th roots along $\mathcal{D}$, $\mathcal{X}_{\mathcal{D},r}$, is smooth, and we consider its Gromov-Witten theory.
Given an effective curve class $\beta \in H_2(\mathcal{X}, \mathbb{Q})$, let $\vec{k} = (k_1, \dots, k_m) \in (\mathbb{Q}^\times)^m$ be a vector satisfying
$$\sum_{j=1}^{m} k_j = \int_\beta [\mathcal{D}].$$
The numbers of positive and negative entries of $\vec{k}$ are denoted by $m_+$ and $m_-$ respectively, so $m = m_+ + m_-$.
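For instance (an illustrative example of ours, not taken from the paper): if $\int_\beta [\mathcal{D}] = 3$, then $\vec{k} = (2, 2, -1)$ is an admissible contact-order vector with $m_+ = 2$ and $m_- = 1$:
$$\sum_{j=1}^{m} k_j = 2 + 2 + (-1) = 3 = \int_\beta [\mathcal{D}], \qquad m = m_+ + m_- = 3.$$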
We assume that $r$ is sufficiently large. We consider the moduli space $\overline{M}_{g,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta)$ of $(m+n)$-pointed, genus $g$, degree $\beta \in H_2(\mathcal{X}, \mathbb{Q})$ orbifold stable maps to $\mathcal{X}_{\mathcal{D},r}$, where the $j$-th marking is an orbifold marking with $\mathrm{age}(N_{\mathcal{D}/\mathcal{X}})$ equal to $k_j/r$ if $k_j > 0$, and equal to $1 + k_j/r$ if $k_j < 0$; there are $n$ extra markings that map to $\overline{I}\mathcal{X}$, the rigidified inertia stack of $\mathcal{X}$. We consider the forgetful map
$$\tau_{\mathrm{orb}}: \overline{M}_{g,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta) \to \overline{M}_{g,m+n}(\mathcal{X}, \beta) \times_{(\overline{I}\mathcal{X})^m} (\overline{I}\mathcal{D})^m.$$

We first consider the case $m_- = 0$, namely, there are only positive contact orders. In this case, we write $\overline{M}_{g,\vec{k},n}(\mathcal{X}/\mathcal{D}, \beta)$ for the corresponding moduli space of relative orbifold stable maps to $(\mathcal{X}, \mathcal{D})$, where the contact orders are given by $\vec{k}$. We consider the forgetful map
$$\tau_{\mathrm{rel}}: \overline{M}_{g,\vec{k},n}(\mathcal{X}/\mathcal{D}, \beta) \to \overline{M}_{g,m+n}(\mathcal{X}, \beta) \times_{(\overline{I}\mathcal{X})^m} (\overline{I}\mathcal{D})^m.$$
Theorem 9. For $m_- = 0$ and $r$ sufficiently large, genus 0 Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$ are independent of $r$. Genus $g > 0$ Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$ are polynomials in $r$. Furthermore, the constant term of the polynomial is the corresponding relative Gromov-Witten invariant of $(\mathcal{X}, \mathcal{D})$.

More precisely, we have the following results at the cycle level:
$$\left[(\tau_{\mathrm{orb}})_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta)\right]^{\mathrm{vir}}\right]_{r^0} = (\tau_{\mathrm{rel}})_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{X}/\mathcal{D}, \beta)\right]^{\mathrm{vir}},$$
and $(\tau_{\mathrm{orb}})_*\left[\overline{M}_{0,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta)\right]^{\mathrm{vir}}$ is independent of $r$, where $[\cdots]_{r^0}$ denotes the constant term of a polynomial in $r$.
Theorem 10. For $m_- > 0$ and $r$ sufficiently large, after multiplying by $r^{m_-}$, genus 0 Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$ are independent of $r$, and genus $g > 0$ Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$ are polynomials in $r$. More precisely,
$$r^{m_-}(\tau_{\mathrm{orb}})_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta)\right]^{\mathrm{vir}}$$
is a polynomial in $r$, and
$$r^{m_-}(\tau_{\mathrm{orb}})_*\left[\overline{M}_{0,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta)\right]^{\mathrm{vir}}$$
is independent of $r$.
Remark 11. The degree of this polynomial can be studied using the method of [37]. One can show that the degree of this polynomial is bounded by 2g − 1 for g ≥ 1. Since we do not use such a result, we leave the proof to the interested readers.
Remark 12. Theorem 10 generalizes the main results of [13] and [14] to the orbifold case, namely when $\mathcal{X}$ is a Deligne-Mumford stack instead of a variety. Therefore, we can also define relative Gromov-Witten theory of $(\mathcal{X}, \mathcal{D})$ with negative contact orders as a limit of orbifold Gromov-Witten theory of $\mathcal{X}_{\mathcal{D},r}$. Similar to [13] and [14], with some extra work, we can define relative Gromov-Witten theory of $(\mathcal{X}, \mathcal{D})$ with negative contact orders purely in terms of relative Gromov-Witten theory of $(\mathcal{X}, \mathcal{D})$ with positive contact orders and the rubber theory of $\mathcal{D}$.

2.2. Proof of Theorem 9. As in [36], to analyze the $r$-dependence of Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$, we use the degeneration formula to reduce to a local model. We also refer to [14, Section 4.2] for some details.
2.2.1. Degeneration. Let $p: \mathfrak{X} \to \mathbb{A}^1$ be the deformation to the normal cone of $\mathcal{D} \subset \mathcal{X}$. The special fiber $p^{-1}(0)$ is $\mathcal{X}$ and
$$\mathcal{Y} := \mathbb{P}(N_{\mathcal{D}/\mathcal{X}} \oplus \mathcal{O}_{\mathcal{X}})$$
glued together by identifying $\mathcal{D} \subset \mathcal{X}$ with
$$\mathcal{D}_\infty := \mathbb{P}(N_{\mathcal{D}/\mathcal{X}}) \subset \mathbb{P}(N_{\mathcal{D}/\mathcal{X}} \oplus \mathcal{O}_{\mathcal{X}}).$$
The other fibers $p^{-1}(t \neq 0)$ are isomorphic to $\mathcal{X}$. There is a divisor $\mathfrak{D} \subset \mathfrak{X}$ whose restriction to $p^{-1}(t \neq 0)$ is $\mathcal{D}$ and whose restriction to $p^{-1}(0)$ is
$$\mathcal{D}_0 := \mathbb{P}(\mathcal{O}_{\mathcal{X}}) \subset \mathbb{P}(N_{\mathcal{D}/\mathcal{X}} \oplus \mathcal{O}_{\mathcal{X}}).$$
The $r$-th root stack of $\mathfrak{X}$ along $\mathfrak{D}$, $\mathfrak{X}_{\mathfrak{D},r}$, is a flat degeneration of $\mathcal{X}_{\mathcal{D},r}$ to
$$\mathcal{X} \cup_{\mathcal{D}=\mathcal{D}_\infty} \mathbb{P}(N_{\mathcal{D}/\mathcal{X}} \oplus \mathcal{O}_{\mathcal{X}})_{\mathcal{D}_0,r}.$$
The degeneration formula for orbifold Gromov-Witten theory [6] applied to $\mathfrak{X}_{\mathfrak{D},r}$ expresses Gromov-Witten invariants of $\mathcal{X}_{\mathcal{D},r}$ in terms of (disconnected) relative Gromov-Witten invariants of $(\mathcal{X}, \mathcal{D})$ and $(\mathbb{P}(N_{\mathcal{D}/\mathcal{X}} \oplus \mathcal{O}_{\mathcal{X}})_{\mathcal{D}_0,r}, \mathcal{D}_\infty)$. The sum in the degeneration formula ranges over the intersection profiles along $\mathcal{D}$. Since $(\mathcal{X}, \mathcal{D})$ is independent of $r$, the $r$-dependence must come from the orbifold-relative Gromov-Witten invariants of $(\mathcal{Y}_{\mathcal{D}_0,r} = \mathbb{P}(N_{\mathcal{D}/\mathcal{X}} \oplus \mathcal{O}_{\mathcal{X}})_{\mathcal{D}_0,r}, \mathcal{D}_\infty)$. Therefore, we just need to compute
$$(\tau')_*\left[\overline{M}_{g,\vec{k},n,\vec{\mu}}(\mathcal{Y}_{\mathcal{D}_0,r}/\mathcal{D}_\infty, \beta)\right]^{\mathrm{vir}}, \tag{2.1}$$
where $\vec{\mu} \in (\mathbb{Z}_{>0})^{|\vec{\mu}|}$ records the contact orders at $\mathcal{D}_\infty$ and $\tau'$ is the forgetful map
$$\tau': \overline{M}_{g,\vec{k},n,\vec{\mu}}(\mathcal{Y}_{\mathcal{D}_0,r}/\mathcal{D}_\infty, \beta) \to \overline{M}_{g,m+n+|\vec{\mu}|}(\mathcal{D}, \beta).$$
2.2.2. Localization. The orbifold-relative Gromov-Witten theory of $(\mathcal{Y}_{\mathcal{D}_0,r}, \mathcal{D}_\infty)$ may be studied using virtual localization with respect to the $\mathbb{C}^*$-action that scales the fibers of $\mathcal{Y}_{\mathcal{D}_0,r} \to \mathcal{D}$.

When $\mathcal{D}$ is a scheme and $r$ is sufficiently large, the virtual localization formula has been written in detail in [22] and [36]. In the present case the formula is completely analogous. We write $\sqrt[r]{L/\mathcal{D}}$ for the stack of $r$-th roots of the line bundle $L$ over $\mathcal{D}$; recall that $\sqrt[r]{L/\mathcal{D}}$ is a gerbe over $\mathcal{D}$ banded by $\mu_r$. The virtual localization formula expresses (2.1) as a sum over decorated graphs. For the purpose of analyzing the $r$-dependence, we only need to note that $r$ appears only in the contribution from stable vertices $v$ over $\mathcal{D}_0$, given by the following expression capped with the virtual class $[\overline{M}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v))]^{\mathrm{vir}}$:
$$\prod_{e\in E(v)}\frac{|G_{(e,v)}|}{r_{(e,v)}}\cdot\frac{r_{(e,v)}}{d_e}\cdot\frac{1}{t + \mathrm{ev}_e^* c_1(L) - d_e\bar\psi_{(e,v)}} \cdot \sum_{i=0}^{\infty}(t/r)^{g(v)-1+|E(v)|-i}\, c_i(-R^\bullet\pi_*\mathcal{L}) \tag{2.2}$$
$$= t^{-1}\prod_{e\in E(v)}|G'_{(e,v)}|\cdot\frac{1}{d_e}\cdot\frac{1}{1 + (\mathrm{ev}_e^* c_1(L) - d_e\bar\psi_{(e,v)})/t} \cdot \sum_{i=0}^{\infty} t^{g(v)-i}\, r^{i-g(v)+1}\, c_i(-R^\bullet\pi_*\mathcal{L})$$
$$= t^{-1}\prod_{e\in E(v)}|G'_{(e,v)}|\cdot\frac{1}{d_e}\cdot\frac{1}{1 + (\mathrm{ev}_e^* c_1(L) - d_e\bar\psi_{(e,v)})/t} \cdot \sum_{i=0}^{\infty} (tr)^{g(v)-i}\, r^{2i-2g(v)+1}\, c_i(-R^\bullet\pi_*\mathcal{L}),$$
where
• $g(v)$ is the genus of the vertex $v$ over $\mathcal{D}_0$ in a localization graph,
• $n(v)$ is the number of marked points of the vertex $v$,
• $\beta(v)$ is the degree assigned to the vertex $v$,
• $t$ is the equivariant parameter,
• $L = N_{\mathcal{D}/\mathcal{X}}$,
• $\pi: \mathcal{C}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v)) \to \overline{M}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v))$ is the universal curve and $\mathcal{L} \to \mathcal{C}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v))$ is the universal $r$-th root,
• $d_e$ is the degree of the edge $e \in E(v)$,
• $\mathrm{ev}_e$ is the evaluation map at the node corresponding to $e$,
• $\bar\psi_{(e,v)}$ is the descendant class at the marked point corresponding to the pair $(e, v)$,
• $G_{(e,v)}$ is the stabilizer group associated to the vertex $v$ and the edge $e$; it is a $\mu_r$-extension of $G'_{(e,v)}$, so $|G_{(e,v)}| = r|G'_{(e,v)}|$, and the group $G'_{(e,v)}$ is independent of $r$,
• $r_{(e,v)}$ is the order of the orbifold structure at the node indexed by $(e, v)$.

Moreover, if the target expands over $\mathcal{D}_\infty$, the vertex contribution over $\mathcal{D}_\infty$ is
$$\prod_{e\in E(v)}\frac{|G_{(e,v)}|}{r_{(e,v)}}\cdot\frac{\prod_{e\in E(\Gamma)} d_e\, r_{(e,v)}}{t + \psi_\infty}, \tag{2.3}$$
which contributes only negative powers of $t$. The edge contribution is trivial when $r$ is sufficiently large.

To obtain genus $g$ Gromov-Witten invariants of $(\mathcal{Y}_{\mathcal{D}_0,r}, \mathcal{D}_\infty)$, we must take the non-equivariant limit, i.e. the $t^0$-coefficient of the localization formula.

If the genus $g = 0$, then $g(v) = 0$, and we note that (2.2) and (2.3) contain only negative powers of $t$. It follows from the arguments of [14, Lemma 4.8] that the $t^0$-coefficient is 0 unless $\overline{M}_{0,\vec{k},n,\vec{\mu}}(\mathcal{Y}_{\mathcal{D}_0,r}/\mathcal{D}_\infty, \beta)$ is unstable (genus zero, two markings and curve class zero). Then the degeneration formula simplifies to
$$(\tau_{\mathrm{orb}})_*\left[\overline{M}_{0,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta)\right]^{\mathrm{vir}} = (\tau_{\mathrm{rel}})_*\left[\overline{M}_{0,\vec{k},n}(\mathcal{X}/\mathcal{D}, \beta)\right]^{\mathrm{vir}}.$$
Now we assume $g > 0$.
Proposition 14. For $r$ sufficiently large and $i \geq 0$, the class
$$r^{2i-2g(v)+1}\,\tau'_*\left(c_i(-R^\bullet\pi_*\mathcal{L}) \cap [\overline{M}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v))]^{\mathrm{vir}}\right)$$
is a polynomial in $r$. Here
$$\tau': \overline{M}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v)) \to \overline{M}_{g(v),n(v)}(\mathcal{D}, \beta(v))$$
is the natural map to the moduli space of stable maps to $\mathcal{D}$.

The proof of Proposition 14 will be given in Section 2.2.3. Here, we complete the proof of the theorem. The polynomiality follows immediately from Proposition 14. By formula (2.2) and Proposition 14, the $t^0 r^0$-coefficient of the localization contribution of $(\tau')_*\left[\overline{M}_{g,\vec{k},n,\vec{\mu}}(\mathcal{Y}_{\mathcal{D}_0,r}/\mathcal{D}_\infty, \beta)\right]^{\mathrm{vir}}$ is 0 unless $\overline{M}_{g,\vec{k},n,\vec{\mu}}(\mathcal{Y}_{\mathcal{D}_0,r}/\mathcal{D}_\infty, \beta)$ is unstable. Then the $r^0$-coefficient of the degeneration formula simplifies to
$$\left[(\tau_{\mathrm{orb}})_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{X}_{\mathcal{D},r}, \beta)\right]^{\mathrm{vir}}\right]_{r^0} = (\tau_{\mathrm{rel}})_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{X}/\mathcal{D}, \beta)\right]^{\mathrm{vir}}.$$
2.2.3. Proof of Proposition 14. The Chern character $\mathrm{ch}(R^\bullet\pi_*\mathcal{L})$ can be calculated explicitly using Toen's Grothendieck-Riemann-Roch formula; see [34]. In general, let $Z$ be a smooth proper Deligne-Mumford stack over $\mathbb{C}$ with projective coarse moduli space, and let $V$ be a line bundle on $Z$. Consider the universal family $\pi: \mathcal{C} \to \overline{M}_{g,n}(Z, \beta)$, $f: \mathcal{C} \to Z$.

A formula for the Chern character $\mathrm{ch}(R^\bullet\pi_* f^*V) \cap [\overline{M}_{g,n}(Z, \beta)]^{\mathrm{vir}}$ is calculated in [34]. For simplicity, in what follows we omit the capping with virtual classes. With this understood, the formula reads
$$\mathrm{ch}(R^\bullet\pi_* f^*V) = \pi_*\big(\mathrm{ch}(f^*V)\,\mathrm{Td}^\vee(L_{n+1})\big) - \sum_{i=1}^{n}\sum_{m\geq 1}\frac{\mathrm{ev}_i^* A_m}{m!}\,\psi_i^{m-1} + \frac{1}{2}(\pi\circ\iota)_*\sum_{m\geq 2}\frac{1}{m!}\,r_{\mathrm{node}}^2\,\mathrm{ev}_{\mathrm{node}}^* A_m\,\frac{\psi_+^{m-1} + (-1)^m\psi_-^{m-1}}{\psi_+ + \psi_-}, \tag{2.4}$$
where
(1) $\mathrm{Td}$ is the Todd class;
(2) on the component $Z_i$ of the inertia stack $IZ$, $A_m$ is $B_m(\mathrm{age}_{Z_i}(p_i^*V))\,\mathrm{ch}(p_i^*V) = B_m(\mathrm{age}_{Z_i}(p_i^*V))\,p_i^*(e^{c_1(V)})$, where $p_i: Z_i \to Z$ is the natural projection and the $B_m(x)$ are the Bernoulli polynomials defined by
$$\frac{t e^{tx}}{e^t - 1} = \sum_{m\geq 0} B_m(x)\,\frac{t^m}{m!}$$
(the first few values are listed after this enumeration);
(3) $\iota$ is the inclusion of the nodal locus into the universal curve $\mathcal{C}$;
(4) $r_{\mathrm{node}}$ is the order of the orbifold structure at the node;
(5) $\mathrm{ev}_{\mathrm{node}}$ is the evaluation map at the node;
(6) $\psi_\pm$ are the $\psi$-classes associated to the two branches of the node.
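For concreteness, the first few Bernoulli polynomials produced by this generating function are:
$$B_0(x) = 1, \qquad B_1(x) = x - \tfrac{1}{2}, \qquad B_2(x) = x^2 - x + \tfrac{1}{6}, \qquad B_3(x) = x^3 - \tfrac{3}{2}x^2 + \tfrac{1}{2}x.$$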
We want to apply the formula to the case $Z = \sqrt[r]{L/\mathcal{D}}$, the stack of $r$-th roots of the line bundle $L = N_{\mathcal{D}/\mathcal{X}}$ over $\mathcal{D}$, with $V$ the universal $r$-th root line bundle on $Z$. For this purpose, we need to discuss how to choose orbifold structures induced from $Z$ at marked points and nodes.

If a point $p \in \mathcal{D}$ has stabilizer group $G$, then its inverse image $q \in Z$ has stabilizer group $G(r)$, which is a cyclic extension of $G$:
$$1 \to \mu_r \to G(r) \to G \to 1.$$
An orbifold structure at a point mapping to $q$ is a conjugacy class of $G(r)$. If the induced orbifold structure at the point (which maps to $p$) is chosen, then this conjugacy class in $G(r)$ can be identified with an element of $\mu_r$. We refer to [35, Section 3.2] for more details.

For the $j$-th marked point from $\overline{M}_{g,\vec{k},n}(\mathcal{Y}, \beta)$, the orbifold structure is chosen so that the age of $V$ at this marked point is $k_j/r$ if $k_j \geq 0$ and $1 + k_j/r$ if $k_j < 0$. For the other marked points, which are formed by splitting nodes in $\mathbb{C}^*$-fixed stable maps, the orbifold structures are determined by the Galois covers attached at these points. For a node, the orbifold structure is chosen by selecting $w \in \{0, \dots, r-1\}$ such that the age of $V$ at this node is $(\mathrm{age}_{\mathrm{node}} L + w)/r$.
ch(R • π * f * V ) =π * (ch(f * V )T d ∨ (L n+1 )) − n(v) j=1 α j + 1 2 (π • ι) * r 2 node β node ,(2.5)
where
α j := m≥1 ev * j A m m! ψ m−1 j β node := m≥2 1 m! ev * node A m ψ m−1 + + (−1) m ψ m−1 − ψ + + ψ − ,
and n(v) is the number of marked points at the vertex v. So
ch m (R • π * f * V ) =π * (ch(f * V )T d ∨ (L n+1 )) m − n(v) j=1 (α j ) m + 1 2 ((π • ι) * r 2 node β node ) m . (2.6) Using c(−E • ) = exp( m≥1 (−1) m (m − 1)!ch m (E • )), we obtain a formula for c(−R • π * f * V ) ∩ [M g(v),n(v) ( r L/D, β(v))] vir .
Using the fact that the pushforward via $\tau'$ has virtual degree $r^{2g-1}$ on genus $g$ stable map moduli, as calculated in [32], we can get a formula for $\tau'_*\big(c(-R^\bullet\pi_* f^*V) \cap [\overline{M}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v))]^{\mathrm{vir}}\big)$:
$$\sum_{\Gamma\in G_{g,n,\beta}(\mathcal{D})}\;\sum_{\chi\in\Gamma(\mathcal{D}),\, w\in W_{\Gamma,\chi,r}} \frac{r^{2g(v)-1-h^1(\Gamma)}}{|\mathrm{Aut}(\Gamma)|}\,(j_{\Gamma,\chi})_*\Bigg[\prod_{v\in V(\Gamma)} \exp\Big(\sum_{m\geq 1}(-1)^m(m-1)!\,\pi_*\big(\mathrm{ch}(f^*V)\,\mathrm{Td}^\vee(L_{n+1})\big)_m\Big)$$
$$\cdot\prod_{j=1}^{n(v)} \exp\Big(\sum_{m\geq 1}(-1)^{m-1}(m-1)!\,(\alpha_j)_m\Big)\cdot\prod_{(h,h')\in E(\Gamma)} \frac{1-\exp\Big(\sum_{m\geq 1}(-1)^m(m-1)!\,(\beta_{\mathrm{node}})_m(\psi_h+\psi_{h'})\Big)}{\psi_h+\psi_{h'}}\Bigg] \cap [\overline{M}_{g(v),n(v)}(\mathcal{D}, \beta(v))]^{\mathrm{vir}}. \tag{2.7}$$
Here the sum is over the set of $\mathcal{D}$-valued stable graphs, denoted by $G_{g,n,\beta}(\mathcal{D})$ as in [22]; and $\chi \in \Gamma(\mathcal{D})$ is a map that assigns to each half-edge a component of the inertia stack of $\mathcal{D}$, corresponding to assigning orbifold structures. Note that:
(1) For $(h, h') \in E(\Gamma)$, $\chi(h)$ and $\chi(h')$ are opposite.
(2) For $v \in V(\Gamma)$, we have
$$\int_{\beta_v} c_1(L) - \sum_{h\in H(v)} \mathrm{age}_{\chi(h)} L \in \mathbb{Z}.$$
This is a consequence of Riemann-Roch for orbifold curves.

We have used the equality
$$|E(\Gamma)| + \sum_{v\in V(\Gamma)} (2g_v - 1) = 2g(v) - 1 - h^1(\Gamma)$$
for the prestable graph $\Gamma$ to get the factor $r^{2g(v)-1-h^1(\Gamma)}$ in the formula. The map
$$j_{\Gamma,\chi}: \overline{M}_{\Gamma,\chi} \to \overline{M}_{g(v),n(v)}(\mathcal{D}, \beta(v))$$
is the natural map from the component indexed by $\Gamma$ and $\chi$ into the moduli of stable maps to $\mathcal{D}$.
Finally, $W_{\Gamma,\chi,r}$ is the collection of $r$-twistings, i.e. assignments $h \mapsto w(h) \in \{0, \dots, r-1\}$ such that:
(1) For $j \in L(\Gamma)$, we have $w(j) \equiv k_j - \mathrm{age}_{X_{i_j}} L \bmod r$, so that the age of $V$ at the marked point $j$ is $k_j/r$ for $k_j \geq 0$ or $1 + k_j/r$ for $k_j < 0$.
(2) For $(h, h') \in E(\Gamma)$: if $\mathrm{age}_{\chi(h)} L = 0$, then $w(h) + w(h') \equiv 0 \bmod r$; if $\mathrm{age}_{\chi(h)} L \neq 0$, then $w(h) + w(h') \equiv -1 \bmod r$. These conditions ensure that $(\mathrm{age}_{\chi(h)} L + w(h))/r = 1 - (\mathrm{age}_{\chi(h')} L + w(h'))/r$.
(3) For $v \in V(\Gamma)$, we have
$$\sum_{h\in H(v)} w(h) \equiv \int_{\beta_v} c_1(L) - \sum_{h\in H(v)} \mathrm{age}_{\chi(h)} L \bmod r.$$
This follows from the lifting analysis of [32].
Fix $\Gamma$ and $\chi$ in (2.7). It follows from the description of $A_m$ that the summands in (2.7) are polynomials in $w \in W_{\Gamma,\chi,r}$. Pixton's polynomiality [21, Appendix A] applies to show that $\tau'_*\big(c_i(-R^\bullet\pi_* f^*V) \cap [\overline{M}_{g(v),n(v)}(\sqrt[r]{L/\mathcal{D}}, \beta(v))]^{\mathrm{vir}}\big)$ is a Laurent polynomial in $r$. Following [21, Proposition 5], we can identify the lowest $r$ terms:
(1) after the summation over $r$-twistings, the lowest possible power of $r$ is $r^{h^1(\Gamma)-2i}$;
(2) the formula has a factor $r^{2g(v)-1-h^1(\Gamma)}$;
(3) finally, there is the prefactor $r^{2i-2g(v)+1}$.
Taken together, this shows that the lowest power of $r$ is $r^0$. This completes the proof.
2.3. Proof of Theorem 10. The proof of Theorem 10 is similar to the proof of Theorem 9, but requires a more refined polynomiality than Proposition 14.

Let $\overline{M}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta)$ be the moduli space of orbifold stable maps to $\sqrt[r]{L/\mathcal{D}}$, where $\vec{a}$ is a vector of ages. Let $\pi: \mathcal{C}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta) \to \overline{M}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta)$ be the universal curve and $\mathcal{L} \to \mathcal{C}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta)$ the universal $r$-th root. We consider the forgetful map
$$\tau': \overline{M}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta) \to \overline{M}_{g,l(\vec{a})}(\mathcal{D}, \beta)$$
that forgets the $r$-th root construction.
Proposition 15. For $r$ sufficiently large and $i \geq 0$, the class
$$r^{i-g(v)+1}\,\tau'_*\left(c_i(-R^\bullet\pi_*\mathcal{L}) \cap [\overline{M}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta)]^{\mathrm{vir}}\right)$$
is a polynomial in $r$, and it is constant in $r$ when $g(v) = 0$, where $\tau'$ is the map to the moduli space of stable maps to $\mathcal{D}$.

The proof of Proposition 15 is similar to the proofs in [13, Appendix A] and [14, Section 4]. We briefly explain the idea here. First of all, in the proof of Theorem 9, we showed that, for sufficiently large $r$, the class $(\tau')_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r}, \beta)\right]^{\mathrm{vir}}$ is a polynomial in $r$, and it is constant in $r$ when $g = 0$. The equivariant version of this is also true by considering equivariant theory as a limit of non-equivariant theory (see, for example, [14, Section 4.3]). The proposition then follows by taking localization residues.
Proof of Proposition 15. Recall that the class $(\tau')_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r}, \beta)\right]^{\mathrm{vir}}$ is a polynomial in $r$ and is constant in $r$ when $g = 0$. The first step is to prove the same statement for families over a base. Let $\pi: E \to B$ be a smooth morphism between two smooth algebraic varieties. Suppose that $E$ is also a $\mathbb{C}^*$-torsor over $B$. Let
$$\mathcal{Y}_{\mathcal{D}_0,r} \times_{\mathbb{C}^*} E = (\mathcal{Y}_{\mathcal{D}_0,r} \times E)/\mathbb{C}^*,$$
where $\mathbb{C}^*$ acts on both factors. We consider the moduli space $\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r} \times_{\mathbb{C}^*} E, \beta)$ of orbifold stable maps to $\mathcal{Y}_{\mathcal{D}_0,r} \times_{\mathbb{C}^*} E$, where the curve class $\beta$ is a fiber class (it projects to $0$ on $B$). Let
$$\left[\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r} \times_{\mathbb{C}^*} E, \beta)\right]^{\mathrm{vir}}_\pi$$
be the virtual cycle relative to the base $B$. Let
$$\tau'_E: \overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r} \times_{\mathbb{C}^*} E, \beta) \to \overline{M}_{g,m+n}(\mathcal{Y} \times_{\mathbb{C}^*} E, \beta)$$
be the forgetful map that forgets the $r$-th root construction. Then
$$(\tau'_E)_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r} \times_{\mathbb{C}^*} E, \beta)\right]^{\mathrm{vir}}_\pi \tag{2.8}$$
is a polynomial in $r$ and is constant in $r$ if $g = 0$. The proof is parallel to the proof of Proposition 14, as explained in [14, Section 4.2].
The next step is to prove that the equivariant cycle class
$$\tau'_*\left[\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r}, \beta)\right]^{\mathrm{vir,eq}} \tag{2.9}$$
is a polynomial in $r$ and is constant in $r$ when $g = 0$. We follow the proof of [14, Section 4.3].
The idea is that equivariant theory can be considered as a limit of non-equivariant theory. By [11, Section 2.2], the $i$-th Chow group of a space $X$ under an algebraic group $G$ can be defined as follows. Let $V$ be an $l$-dimensional representation of $G$ and let $U \subset V$ be an equivariant open set on which $G$ acts freely and whose complement has codimension more than $\dim X - i$. Then the $i$-th equivariant Chow group is defined as
$$A_i^G(X) = A_{i+l-\dim G}\big((X \times U)/G\big). \tag{2.10}$$
To apply this to our case, we let $G = \mathbb{C}^*$ and $E = U = \mathbb{C}^N - \{0\}$, where $N$ is a sufficiently large integer. Then $(X \times E)/\mathbb{C}^*$ is an $X$-fibration over $B = U/G = \mathbb{P}^{N-1}$. Note that
$$\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r} \times_{\mathbb{C}^*} E, \beta) \cong \left(\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r}, \beta) \times E\right)/\mathbb{C}^*$$
as moduli spaces. For suitable $N$, (2.10) identifies the equivariant Chow group with a non-equivariant model. Therefore, the equivariant cycle (2.9) is identified with the non-equivariant cycle (2.8) under (2.10), and hence the equivariant class (2.9) is also a polynomial in $r$ and is constant in $r$ when $g = 0$.
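As a sanity check of (2.10) under these choices (an example of ours, not from the paper): take $X = \mathrm{pt}$, $G = \mathbb{C}^*$, $V = \mathbb{C}^N$ the standard representation, and $U = \mathbb{C}^N - \{0\}$, so $l = N$ and $\dim G = 1$. Then
$$A_i^{\mathbb{C}^*}(\mathrm{pt}) = A_{i+N-1}\big((\mathrm{pt} \times U)/\mathbb{C}^*\big) = A_{i+N-1}(\mathbb{P}^{N-1}),$$
so for $N$ sufficiently large the groups in degrees $i = 0, -1, -2, \dots$ are spanned by powers of the hyperplane class, recovering the familiar equivariant Chow ring of a point, a polynomial ring in the equivariant parameter $t$.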
The last step is to consider localization residues of $\overline{M}_{g,\vec{k},n}(\mathcal{Y}_{\mathcal{D}_0,r}, \beta)$. We consider the decorated graph with one vertex over $\mathcal{D}_0$ such that the markings and edges are associated with the vector of ages $\vec{a}$. The localization residue is a polynomial in $r$ and is constant when $g = 0$. Then the cycle
$$\tau'_*\left(\sum_{i=0}^{\infty}\left(\frac{t}{r}\right)^{g-i-1} c_i(-R^\bullet\pi_*\mathcal{L}) \cap [\overline{M}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta)]^{\mathrm{vir}}\right),$$
coming from the localization residue, is a polynomial in $r$ and is constant when $g = 0$. This is the conclusion of [14, Theorem 4.1] for $\mathcal{Y}$ a smooth Deligne-Mumford stack. As a consequence (see also [14, Corollary 4.2]), the cycle
$$\tau'_*\left(r^{i-g+1}\, c_i(-R^\bullet\pi_*\mathcal{L}) \cap [\overline{M}_{g,\vec{a}}(\sqrt[r]{L/\mathcal{D}}, \beta)]^{\mathrm{vir}}\right)$$
is a polynomial in $r$ and is constant when $g = 0$. This concludes the proof of the proposition.
Proof of Theorem 10. The proof is similar to the proof of Theorem 9, with the help of Proposition 15. The degeneration formula again reduces the proof to local models. The localization computation is similar to the computation in Section 2.2.2, except that the $r$-dependence appears in the following form as the vertex contribution over $D_0$:
$$\prod_{e\in E(v)}\frac{|G_{(e,v)}|}{r_{(e,v)}}\cdot\frac{r_{(e,v)}d_e}{t+\operatorname{ev}_e^*c_1(L)-d_e\bar\psi_{(e,v)}}\cdot\sum_{i=0}^{\infty}(t/r)^{g(v)-1+|E(v)|-i+m_-(v)}c_i(-R^\bullet\pi_*\mathcal L)$$
$$=\prod_{e\in E(v)}|G'_{(e,v)}|\cdot\frac{1}{d_e}\cdot\frac{1}{1+(\operatorname{ev}_e^*c_1(L)-d_e\bar\psi_{(e,v)})/t}\cdot\sum_{i=0}^{\infty}t^{g(v)-i+m_-(v)-1}\,r^{\,i-g(v)+1-m_-(v)}c_i(-R^\bullet\pi_*\mathcal L)$$
$$=r^{-m_-(v)}\prod_{e\in E(v)}|G'_{(e,v)}|\cdot\frac{1}{d_e}\cdot\frac{1}{1+(\operatorname{ev}_e^*c_1(L)-d_e\bar\psi_{(e,v)})/t}\cdot\sum_{i=0}^{\infty}t^{g(v)-i+m_-(v)-1}\,r^{\,i-g(v)+1}c_i(-R^\bullet\pi_*\mathcal L),$$
where $m_-(v)$ is the number of large-age markings attached to the vertex $v$ over $D_0$. Multiplying by $r^{m_-}$, the polynomiality then follows from Proposition 15. This completes the proof of Theorem 10.
Theorem 10 implies that we can define relative Gromov-Witten invariants of an orbifold pair (X , D) with negative contact orders as follows.
Definition 16. Let X be a smooth proper Deligne-Mumford stack over C with projective coarse moduli space. Let D ⊂ X be a smooth irreducible divisor. The virtual cycle for the relative Gromov-Witten theory of the pair (X , D) with negative contact orders is defined as follows:
$$\left[\overline M_{g,\vec k,n}(\mathcal X/\mathcal D,\beta)\right]^{vir}:=r^{m_-}\left[(\tau_{orb})_*\left[\overline M_{g,\vec k,n}(\mathcal X_{\mathcal D,r},\beta)\right]^{vir}\right]_{r^0}\in A_*\left(\overline M_{g,m+n}(\mathcal X,\beta)\times_{(I\mathcal X)^{m}}(I\mathcal D)^{m}\right).$$
GROMOV-WITTEN THEORY OF MULTI-ROOT STACKS AND ITS LIMIT
Let $X$ be a smooth projective variety² over $\mathbb C$ and let $D_1,\ldots,D_n\subset X$ be smooth irreducible divisors. Suppose $D:=D_1+\cdots+D_n$ is simple normal crossing.

Definition 17. For $\vec r=(r_1,\ldots,r_n)\in\mathbb N^n$, the multi-root stack
$$X_{D,\vec r}:=X_{(D_1,r_1),\ldots,(D_n,r_n)}$$
is the stack whose objects over a scheme $S$ consist of the data
$$f:S\to X,\quad\{M_i:\text{line bundle on }S\},\quad\{s_i\in H^0(M_i)\},\quad\{\phi_i:M_i^{\otimes r_i}\xrightarrow{\ \sim\ }f^*\mathcal O_X(D_i)\}$$
such that $\phi_i(s_i^{r_i})=f^*\sigma_i$ for $i=1,\ldots,n$, where $\sigma_i$ is the tautological section of $\mathcal O_X(D_i)$.

If $r_1,\ldots,r_n$ are pairwise coprime, then $X_{D,\vec r}$ is smooth and has a well-defined Gromov-Witten theory.
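For orientation, here is a small remark of our own (following the standard root stack construction of Cadman and of Abramovich-Graber-Vistoli, and not part of the original text): when $n=1$, Definition 17 reduces to the usual $r$-th root stack along a single smooth divisor,
$$X_{D,\vec r}=X_{(D_1,r_1)}=\sqrt[r_1]{D_1/X},$$
whose objects over $S$ consist of a map $f:S\to X$ together with a single line bundle $M$, a section $s\in H^0(M)$, and an isomorphism $\phi:M^{\otimes r_1}\xrightarrow{\sim}f^*\mathcal O_X(D_1)$ matching $s^{r_1}$ with the pullback of the section cutting out $D_1$.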
For each $i=1,\ldots,n$, we can view $X_{D,\vec r}$ as
$$\left(X_{(D_1,r_1),\ldots,\widehat{(D_i,r_i)},\ldots,(D_n,r_n)}\right)_{(D_i,r_i)},$$
where the hat means that the $i$-th pair is omitted. Therefore Theorem 9, applied to $X_{D,\vec r}$, implies polynomiality in each $r_i$, hence proves [38, Conjecture 1.2]:
Corollary 18. For $r_1,\ldots,r_n$ sufficiently large, the genus $0$ Gromov-Witten theory of $X_{D,\vec r}$, after multiplying by suitable powers of the $r_i$, is independent of $r_1,\ldots,r_n$. The higher genus Gromov-Witten theory of $X_{D,\vec r}$, after multiplying by suitable powers of the $r_i$, is a polynomial in $r_1,\ldots,r_n$.
We may view the $r_1^0\cdots r_n^0$ term of the Gromov-Witten theory of $X_{D,\vec r}$ as formally giving a Gromov-Witten theory of the infinite root stack $X_{D,\infty}$, which provides a virtual count of curves with tangency conditions along a simple normal crossing divisor. This can be viewed as analogous to logarithmic Gromov-Witten theory of the pair $(X,D)$. We will now state Corollary 18 more precisely and define the formal Gromov-Witten theory of $X_{D,\infty}$.

Notation 19. We will use "relative marking" and "orbifold marking" interchangeably. Terms like "contact order" and "tangency condition" will also be used interchangeably. In Section 2, we treated relative markings and interior markings separately. Here, it is more convenient to treat them all together. Therefore, the notation for the rest of the paper will be slightly different from the notation in Section 2. We will use $n$ to denote the number of irreducible components of the divisor $D$ and use $m$ to denote the number of markings (including both relative and interior markings).
Let $\vec s=(s_1,\ldots,s_n)\in\mathbb Z^n$; the vector $\vec s$ is used to record contact orders, and both positive and negative contact orders are allowed. We define
$$I_{\vec s}:=\{i:s_i\neq 0\}\subseteq\{1,\ldots,n\}.$$
For any index set $I\subseteq\{1,\ldots,n\}$, we define
$$D_I:=\cap_{i\in I}D_i.$$
Note that $D_I$ can be disconnected. In particular, we set $D_\emptyset:=X$.

Consider vectors $\vec s^{\,j}=(s^j_1,\ldots,s^j_n)\in\mathbb Z^n$, for $j=1,2,\ldots,m$, recording the contact order of the $j$-th marking with each divisor $D_i$, $i\in\{1,\ldots,n\}$. For sufficiently large³ $\vec r$, we consider the moduli space $\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\vec r},\beta)$ of genus $g$, degree $\beta\in H_2(X)$, $m$-pointed, orbifold stable maps to $X_{D,\vec r}$ with orbifold conditions specified by $\{\vec s^{\,j}\}_{j=1}^m$. Note that the $j$-th marking maps to the twisted sector $D_{I_{\vec s^{\,j}}}$ with age
$$\sum_{i:s^j_i>0}\frac{s^j_i}{r_i}+\sum_{i:s^j_i<0}\left(1+\frac{s^j_i}{r_i}\right).$$
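As a concrete illustration (our own worked example, not from the original text), consider $n=2$, $\vec r=(5,7)$ and a marking with contact-order vector $\vec s^{\,j}=(2,-1)$. The marking then lies in the twisted sector over $D_1\cap D_2$ with age
$$\frac{2}{5}+\left(1+\frac{-1}{7}\right)=\frac{2}{5}+\frac{6}{7}=\frac{44}{35},$$
so a negative contact order $s^j_i<0$ contributes an age component close to $1$ for large $r_i$, while a positive contact order contributes a component close to $0$.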
There are evaluation maps
$$\operatorname{ev}_j:\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\vec r},\beta)\to D_{I_{\vec s^{\,j}}},\quad\text{for }j\in\{1,\ldots,m\}.$$
Let
• $\gamma_j\in H^*(D_{I_{\vec s^{\,j}}})$, for $j\in\{1,2,\ldots,m\}$;
• $a_j\in\mathbb Z_{\geq 0}$, for $j\in\{1,2,\ldots,m\}$.
Gromov-Witten invariants of $X_{D,\vec r}$ are defined as follows:
$$\left\langle\gamma_1\bar\psi^{a_1},\ldots,\gamma_m\bar\psi^{a_m}\right\rangle^{X_{D,\vec r}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}:=\int_{\left[\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\vec r},\beta)\right]^{vir}}\operatorname{ev}_1^*(\gamma_1)\bar\psi_1^{a_1}\cdots\operatorname{ev}_m^*(\gamma_m)\bar\psi_m^{a_m}.$$
We define $s_{i,-}:=\#\{j:s^j_i<0\}$, for $i=1,2,\ldots,n$. Let
$$\tau:\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\vec r},\beta)\to\overline M_{g,m}(X,\beta)\times_{X^m}D_{I_{\vec s^{\,1}}}\times\cdots\times D_{I_{\vec s^{\,m}}}$$
be the forgetful map.
By Theorem 10, the cycle class
$$\prod_{i=1}^n r_i^{s_{i,-}}\,\tau_*\left[\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\vec r},\beta)\right]^{vir}$$
is a polynomial in the $r_i$ when $\vec r$ is sufficiently large. We denote the constant term of this polynomial by
$$\left[\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\infty},\beta)\right]^{vir}:=\lim_{\vec r\to\infty}\left[\prod_{i=1}^n r_i^{s_{i,-}}\,\tau_*\left[\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\vec r},\beta)\right]^{vir}\right]_{\prod_{i=1}^n r_i^0}.$$
It is considered as the virtual cycle of the formal Gromov-Witten theory of the infinite root stack $X_{D,\infty}$.
Recall that there are evaluation maps
$$\operatorname{ev}_j:\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\vec r},\beta)\to D_{I_{\vec s^{\,j}}},\quad\text{for }j\in\{1,\ldots,m\}.$$
We define evaluation maps
$$\underline{\operatorname{ev}}_j:\overline M_{g,m}(X,\beta)\times_{X^m}D_{I_{\vec s^{\,1}}}\times\cdots\times D_{I_{\vec s^{\,m}}}\to D_{I_{\vec s^{\,j}}}$$
such that $\underline{\operatorname{ev}}_j\circ\tau=\operatorname{ev}_j$, for $j\in\{1,\ldots,m\}$.
The formal Gromov-Witten invariants of X D,∞ can be defined as follows.
Definition 20. Let
• $\gamma_j\in H^*(D_{I_{\vec s^{\,j}}})$, for $j\in\{1,2,\ldots,m\}$;
• $a_j\in\mathbb Z_{\geq 0}$, for $j\in\{1,2,\ldots,m\}$.
The formal Gromov-Witten invariants of $X_{D,\infty}$ are defined as
$$\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}:=\int_{\left[\overline M_{g,\{\vec s^{\,j}\}_{j=1}^m}(X_{D,\infty},\beta)\right]^{vir}}\underline{\operatorname{ev}}_1^*(\gamma_1)\bar\psi_1^{a_1}\cdots\underline{\operatorname{ev}}_m^*(\gamma_m)\bar\psi_m^{a_m}.$$
In other words,
$$\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}:=\left[\prod_{i=1}^n r_i^{s_{i,-}}\left\langle\gamma_1\bar\psi^{a_1},\ldots,\gamma_m\bar\psi^{a_m}\right\rangle^{X_{D,\vec r}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}\right]_{\prod_{i=1}^n r_i^0}$$
for sufficiently large $\vec r$.
Note that the $\bar\psi$-classes are pullbacks of the $\psi$-classes on the moduli space $\overline M_{g,m}(X,\beta)$ of stable maps to $X$.
Remark 21. When $D$ is irreducible, the formal Gromov-Witten theory of $X_{D,\infty}$ coincides with relative Gromov-Witten theory (possibly with negative contact orders) as defined in [13] and [14]. Relative Gromov-Witten theory in [13] and [14] can also be defined using the usual relative Gromov-Witten theory of J. Li [26], [27] and the rubber theory of $D$. When $D$ is simple normal crossing, it is also possible to define the formal Gromov-Witten theory of $X_{D,\infty}$ in terms of the usual relative Gromov-Witten theory and the rubber theories of the $D_i$, but this would be more complicated and the combinatorics more involved than in [13] and [14].
RELATIVE QUANTUM COHOMOLOGY
In this section, we introduce quantum cohomology for X D,∞ . We will call it relative quantum cohomology of (X, D) because we consider the formal Gromov-Witten theory of X D,∞ as a Gromov-Witten theory of X relative to the simple normal crossing divisor D.
4.1. The state space. We briefly described the state space for the formal Gromov-Witten theory of infinite root stacks in [38, Section 4]. In this section, we provide a more detailed discussion of it and its ring structure.

Following the description in [13, Section 7.1], we formally define the state space for the Gromov-Witten theory of $X_{D,\infty}$ as the limit of the state spaces of the $X_{D,\vec r}$:
$$H:=\bigoplus_{\vec s\in\mathbb Z^n}H_{\vec s},$$
where $H_{\vec s}:=H^*(D_{I_{\vec s}})$. Note that
• $H_{\vec 0}:=H^*(D_\emptyset):=H^*(X)$;
• if $\cap_{i:s_i\neq 0}D_i=\emptyset$, then $H_{\vec s}=0$.
Each $H_{\vec s}$ naturally embeds into $H$. For an element $\gamma\in H_{\vec s}$, we write $[\gamma]_{\vec s}$ for its image in $H$. The pairing on $H$,
$$(-,-):H\times H\to\mathbb C,$$
is defined as follows: for $[\alpha]_{\vec s}$ and $[\beta]_{\vec s'}$, define
$$([\alpha]_{\vec s},[\beta]_{\vec s'})=\int_{D_{I_{\vec s}}}\alpha\cup\beta,\quad\text{if }\vec s=-\vec s'.$$
The pairing on the rest of the classes is generated by linearity. Recall that $D_\emptyset=X$; therefore
$$([\alpha]_{\vec 0},[\beta]_{\vec 0})=\int_X\alpha\cup\beta.$$
We choose a basis $\{T_{I,k}\}_k$ for $H^*(D_I)$. When $I=\emptyset$, we also simply write $\{T_k\}_k$ for a basis of $H^*(X)$. Then we can define a basis of $H$ as follows:
$$\widetilde T_{\vec s,k}=[T_{I_{\vec s},k}]_{\vec s}.$$
Let $\{T^k_I\}$ be the dual basis of $\{T_{I,k}\}$ under the Poincaré pairing of $H^*(D_I)$, and define $\widetilde T^k_{\vec s}=[T^k_{I_{\vec s}}]_{\vec s}$. Then $\{\widetilde T^k_{\vec s}\}$ form a dual basis of $\{\widetilde T_{\vec s,k}\}$ under the pairing of $H$; note that the dual of $\widetilde T_{\vec s,k}$ is $\widetilde T^k_{-\vec s}$.

Definition 22. For $[\alpha],[\beta]\in H$, the product $[\alpha]\cdot[\beta]$ is defined as follows: for $[\gamma]\in H$,
$$([\alpha]\cdot[\beta],[\gamma]):=\left\langle[\alpha],[\beta],[\gamma]\right\rangle^{X_{D,\infty}}_{0,3,0},$$
where the right-hand side is the genus zero, degree zero invariant of $X_{D,\infty}$ with three marked points. Similar to [13], the product structure can be written down explicitly by computing the genus zero, degree zero 3-pointed invariants.
Note that the ring $H$ is multi-graded. There are gradings with respect to contact orders $\vec s$:
$$\deg_i([\alpha]_{\vec s})=s_i.\tag{4.2}$$
There is one more grading for the cohomological degree of the class. Suppose $\alpha\in H_{\vec s}$ is a cohomology class of real degree $d$. Then we define
$$\deg_0([\alpha]_{\vec s})=d/2+\#\{i:s_i<0\}.\tag{4.3}$$
Note that there is a shift of the degree in (4.3). It already appears in [13, Section 7.1] when $D$ is irreducible. One can simply think of the degree (4.3) as a limit of the orbifold degree (shifted by ages).
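To make the shift in (4.3) concrete, here is a small worked computation (ours, not from the original text), using that for a simple normal crossing divisor each nonempty stratum $D_I$ has codimension $|I|$. For a point class $[\mathrm{pt}]_{-\vec r}$ with $\vec r\in\mathbb Z^n_{\geq 0}$, so that all contact orders $-r_i$ are non-positive:
$$\deg_0([\mathrm{pt}]_{-\vec r})=\dim_{\mathbb C}D_{I_{\vec r}}+\#\{i:r_i>0\}=\left(\dim_{\mathbb C}X-\#\{i:r_i\neq 0\}\right)+\#\{i:r_i>0\}=\dim_{\mathbb C}X.$$
This is exactly the computation used later in the proof of Lemma 36.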
Let $[\gamma_j]_{\vec s^{\,j}}\in H$ and $a_j\in\mathbb Z_{\geq 0}$, for $j\in\{1,\ldots,m\}$, where $\vec s^{\,j}=(s^j_1,\ldots,s^j_n)\in\mathbb Z^n$. Recall that the formal Gromov-Witten invariant of $X_{D,\infty}$ is denoted by
$$\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}.\tag{4.4}$$
The invariant (4.4) is zero unless it satisfies the virtual dimension constraint
$$(1-g)(\dim_{\mathbb C}X-3)+m+\int_\beta c_1(T_X)-\int_\beta[D]=\sum_{j=1}^m\deg_0([\gamma_j]_{\vec s^{\,j}})+\sum_{j=1}^m a_j.\tag{4.5}$$
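As a quick sanity check (our own worked instance, anticipating Lemma 36), take $g=0$, $m=3$ and insertions $[1]_{p_1},[1]_{p_2},[\mathrm{pt}]_{-\vec r}$, where $p_1,p_2,\vec r$ have only non-negative entries and $a_1=a_2=a_3=0$. Then $\deg_0([1]_{p_i})=0$ and, using the computation of $\deg_0([\mathrm{pt}]_{-\vec r})$ above, (4.5) reads
$$\dim_{\mathbb C}X-3+3-\int_\beta[K_X+D]=\dim_{\mathbb C}X,$$
forcing $\int_\beta[K_X+D]=0$ for the invariant to be nonzero.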
We will also denote the invariant (4.4) by $\langle\cdots\rangle^{X_{D,\infty}}_{g,m,\beta}$ if the contact order information is clear from the insertions. Sometimes, we will abbreviate it to $\langle\cdots\rangle$ for simplicity.

4.2. Universal equations. Absolute Gromov-Witten invariants are known to satisfy the following universal equations: string equation, divisor equation, dilaton equation, topological recursion relation (TRR) and Witten-Dijkgraaf-Verlinde-Verlinde (WDVV) equation (see [34] for universal equations for orbifold Gromov-Witten invariants). It was proved in [13] that relative Gromov-Witten invariants also satisfy these universal equations. Our definition of the formal Gromov-Witten invariants of infinite root stacks is taken as the limit of orbifold Gromov-Witten invariants of finite root stacks. It is straightforward to show that these universal equations are preserved under the limit. Therefore, we have the following universal equations for the formal Gromov-Witten invariants of infinite root stacks. Let $\vec s^{\,0}=\vec 0$.

Proposition 23 (String equation).
$$\left\langle[1]_{\vec 0},[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=0}^m,\beta}=\sum_{j=1}^m\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j-1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}.\tag{4.6}$$

Proposition 24 (Divisor equation). For $\gamma\in H^2(X)$,
$$\left\langle[\gamma]_{\vec 0},[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=0}^m,\beta}=\left(\int_\beta\gamma\right)\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}$$
$$+\sum_{j=1}^m\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_j\cdot\gamma]_{\vec s^{\,j}}\bar\psi^{a_j-1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}.$$

Proposition 25 (Dilaton equation).
$$\left\langle\bar\psi[1]_{\vec 0},[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=0}^m,\beta}=(2g-2+m)\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\ldots,[\gamma_m]_{\vec s^{\,m}}\bar\psi^{a_m}\right\rangle^{X_{D,\infty}}_{g,\{\vec s^{\,j}\}_{j=1}^m,\beta}.$$

Proposition 26 (TRR). In genus zero,
$$\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1+1},[\gamma_2]_{\vec s^{\,2}}\bar\psi^{a_2},[\gamma_3]_{\vec s^{\,3}}\bar\psi^{a_3},\prod_{j=4}^m[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j}\right\rangle^{X_{D,\infty}}_{0,\{\vec s^{\,j}\}_{j=1}^m,\beta}\tag{4.7}$$
$$=\sum\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},\prod_{j\in S_1}[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j},\widetilde T_{\vec s,k}\right\rangle^{X_{D,\infty}}_{0,\{\vec s^{\,j}\}_{j\in S_1\cup\{1\}},\vec s,\beta_1}\cdot\left\langle\widetilde T^k_{-\vec s},[\gamma_2]_{\vec s^{\,2}}\bar\psi^{a_2},[\gamma_3]_{\vec s^{\,3}}\bar\psi^{a_3},\prod_{j\in S_2}[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j}\right\rangle^{X_{D,\infty}}_{0,-\vec s,\{\vec s^{\,j}\}_{j\in S_2\cup\{2,3\}},\beta_2},$$
where the sum is over all splittings $\beta_1+\beta_2=\beta$, all indices $\vec s,k$ of the basis, and all splittings of disjoint sets $S_1,S_2$ with $S_1\cup S_2=\{4,\ldots,m\}$. Note that the right-hand side is a finite sum.
Proposition 27 (WDVV). In genus zero,
$$\sum\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},[\gamma_2]_{\vec s^{\,2}}\bar\psi^{a_2},\prod_{j\in S_1}[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j},\widetilde T_{\vec s,k}\right\rangle^{X_{D,\infty}}_{0,\{\vec s^{\,j}\}_{j\in S_1\cup\{1,2\}},\vec s,\beta_1}\cdot\left\langle\widetilde T^k_{-\vec s},[\gamma_3]_{\vec s^{\,3}}\bar\psi^{a_3},[\gamma_4]_{\vec s^{\,4}}\bar\psi^{a_4},\prod_{j\in S_2}[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j}\right\rangle^{X_{D,\infty}}_{0,-\vec s,\{\vec s^{\,j}\}_{j\in S_2\cup\{3,4\}},\beta_2}\tag{4.8}$$
$$=\sum\left\langle[\gamma_1]_{\vec s^{\,1}}\bar\psi^{a_1},[\gamma_3]_{\vec s^{\,3}}\bar\psi^{a_3},\prod_{j\in S_1}[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j},\widetilde T_{\vec s,k}\right\rangle^{X_{D,\infty}}_{0,\{\vec s^{\,j}\}_{j\in S_1\cup\{1,3\}},\vec s,\beta_1}\cdot\left\langle\widetilde T^k_{-\vec s},[\gamma_2]_{\vec s^{\,2}}\bar\psi^{a_2},[\gamma_4]_{\vec s^{\,4}}\bar\psi^{a_4},\prod_{j\in S_2}[\gamma_j]_{\vec s^{\,j}}\bar\psi^{a_j}\right\rangle^{X_{D,\infty}}_{0,-\vec s,\{\vec s^{\,j}\}_{j\in S_2\cup\{2,4\}},\beta_2},$$
where each sum is over all splittings $\beta_1+\beta_2=\beta$, all indices $\vec s,k$ of the basis, and all splittings of disjoint sets $S_1,S_2$ with $S_1\cup S_2=\{5,\ldots,m\}$. Note that both sides are finite sums.
Remark 28. Just as the WDVV equation for absolute Gromov-Witten theory implies the associativity of quantum cohomology, the WDVV equation for the formal Gromov-Witten theory of infinite root stacks implies the associativity of relative quantum cohomology. Note that in [19], extensive arguments are required to prove the associativity of (the degree zero part of) relative quantum cohomology, while in our case we obtain the associativity for free. Since we do not know the relation between the invariants considered here and the punctured invariants of [19] and [5], it is not known whether our approach provides an easier proof of the associativity in [19].
The compatibility between this new theory and the Gross-Siebert program will be discussed in Section 7.

4.3. Relative quantum cohomology ring. Let $t=\sum_{\vec s,k}t_{\vec s,k}\widetilde T_{\vec s,k}$, where the $t_{\vec s,k}$ are formal variables. Let $\mathbb C[[NE(X)]]$ be the Novikov ring, where $q$ is the Novikov variable and $NE(X)$ is the cone of effective curve classes in $X$. We denote the formal power series ring with variables $t_{\vec s,k}$ by $\mathbb C[[NE(X)]][[\{t_{\vec s,k}\}]]$. Note that there are infinitely many variables, so we will work in a completion of this ring. Consider the ideals
$$I_p=\left(\{t_{\vec s,k}\}_{|s_i|\geq p,\ \forall i}\right)$$
for $p\geq 0$. These ideals form a chain
$$I_0\supset I_1\supset I_2\supset\cdots.$$
Now we have the completion
$$\widehat{\mathbb C[[NE(X)]][[\{t_{\vec s,k}\}]]}=\varprojlim\ \mathbb C[[NE(X)]][[\{t_{\vec s,k}\}]]/I_p.$$
The genus-zero potential for the Gromov-Witten theory of infinite root stacks is defined to be
$$\Phi_0(t)=\sum_{m\geq 3}\sum_\beta\frac{1}{m!}\left\langle t,\cdots,t\right\rangle^{X_{D,\infty}}_{0,m,\beta}q^\beta\in\widehat{\mathbb C[[NE(X)]][[\{t_{\vec s,k}\}]]}.$$
Note that $\Phi_0$ is a formal function in the variables $\{t_{\vec s,k}\}$. To define a ring structure on $\widehat{\mathbb C[[NE(X)]][[\{t_{\vec s,k}\}]]}$, we define the quantum product $\star$ by
$$\widetilde T_{\vec s^{\,1},k_1}\star\widetilde T_{\vec s^{\,2},k_2}=\sum_{\vec s^{\,3},k_3}\frac{\partial^3\Phi_0}{\partial t_{\vec s^{\,1},k_1}\partial t_{\vec s^{\,2},k_2}\partial t_{\vec s^{\,3},k_3}}\widetilde T^{k_3}_{-\vec s^{\,3}}.$$
Recall that $\widetilde T_{\vec s^{\,3},k_3}$ and $\widetilde T^{k_3}_{-\vec s^{\,3}}$ are dual to each other under the pairing. One can also define the small relative quantum cohomology ring by setting $t_{\vec s,k}=0$ whenever $\vec s\neq\vec 0$ or $T_{\vec 0,k}\notin H^0(X)\oplus H^2(X)\subset H_{\vec 0}$ in the formal function
$$\frac{\partial^3\Phi_0}{\partial t_{\vec s^{\,1},k_1}\partial t_{\vec s^{\,2},k_2}\partial t_{\vec s^{\,3},k_3}}.$$
The small relative quantum product is denoted by $\star_{sm}$. The small relative quantum cohomology ring is denoted by $QH(X_{D,\infty})$.
Similar to absolute Gromov-Witten theory, under the specialization $q=0$ and $t=0$ we obtain the product structure of the state space in Section 4.1:
$$\widetilde T_{\vec s^{\,1},k_1}\star_{q=0,t=0}\widetilde T_{\vec s^{\,2},k_2}=\sum_{\vec s^{\,3},k_3}\left\langle\widetilde T_{\vec s^{\,1},k_1},\widetilde T_{\vec s^{\,2},k_2},\widetilde T_{\vec s^{\,3},k_3}\right\rangle^{X_{D,\infty}}_{0,3,0}\widetilde T^{k_3}_{-\vec s^{\,3}}.$$
The relative quantum cohomology ring is a multi-graded ring. Similar to [13, Section 7.3], the gradings are defined as extensions of $\deg_i$ in (4.2) and $\deg_0$ in (4.3). Furthermore, we define
$$\deg^{(i)}(q^\beta)=\int_\beta D_i,\qquad\deg^{(i)}(t_{\vec s,k})=-s_i,\qquad\text{for }i\in\{1,\ldots,n\},$$
$$\deg^{(0)}(q^\beta)=\int_\beta c_1(T_X(-\log D)),\qquad\deg^{(0)}(t_{\vec s,k})=1-\deg^{(0)}(\widetilde T_{\vec s,k}).$$
GIVENTAL FORMALISM
In this section, we set up Givental formalism for genus zero formal Gromov-Witten theory of the infinite root stack X D,∞ following [15]. A mirror theorem for infinite root stacks has already been proved in [38]. This section provides the necessary foundation for [38].
Consider the space
$$\mathcal H=H\otimes_{\mathbb C}\mathbb C[[NE(X)]]((z^{-1})),$$
where $((z^{-1}))$ means formal Laurent series in $z^{-1}$. There is a $\mathbb C[[NE(X)]]$-valued symplectic form
$$\Omega(f,g)=\operatorname{Res}_{z=0}(f(-z),g(z))dz,\qquad\text{for }f,g\in\mathcal H,$$
where the pairing $(f(-z),g(z))$ takes values in $\mathbb C[[NE(X)]]((z^{-1}))$ and is induced by the pairing on $H$. Consider the polarization
$$\mathcal H=\mathcal H_+\oplus\mathcal H_-,\qquad\mathcal H_+=H\otimes_{\mathbb C}\mathbb C[[NE(X)]][z],\qquad\mathcal H_-=z^{-1}H\otimes_{\mathbb C}\mathbb C[[NE(X)]][[z^{-1}]].$$
There is a natural symplectic identification between $\mathcal H_+\oplus\mathcal H_-$ and the cotangent bundle $T^*\mathcal H_+$.
For $l\geq 0$, we write $t_l=\sum_{\vec s,k}t_{l;\vec s,k}\widetilde T_{\vec s,k}$, where the $t_{l;\vec s,k}$ are formal variables. Also write
$$\mathbf t(z)=\sum_{l=0}^\infty t_l z^l.$$
The genus $g$ descendant Gromov-Witten potential of $X_{D,\infty}$ is defined as
$$\mathcal F^g_{X_{D,\infty}}(\mathbf t(z))=\sum_\beta\sum_{m=0}^\infty\frac{q^\beta}{m!}\left\langle\mathbf t(\bar\psi),\ldots,\mathbf t(\bar\psi)\right\rangle^{X_{D,\infty}}_{g,m,\beta}.$$
The total descendant Gromov-Witten potential is defined as
$$\mathcal D_{X_{D,\infty}}(\mathbf t):=\exp\left(\sum_{g\geq 0}\hbar^{g-1}\mathcal F^g_{X_{D,\infty}}(\mathbf t)\right).$$
Following [15], we define the dilaton-shifted coordinates of $\mathcal H_+$:
$$\mathbf q(z)=q_0+q_1z+q_2z^2+\cdots=-z+t_0+t_1z+t_2z^2+\cdots,$$
$$\mathbf p(z)=p_0z^{-1}+p_1z^{-2}+\cdots=\sum_{l\leq -1}\sum_{\vec s,k}p_{l;\vec s,k}\widetilde T^k_{-\vec s}z^l.$$
Coordinates $\mathbf p(z)$ in $\mathcal H_-$ are chosen so that $\mathbf q,\mathbf p$ form Darboux coordinates. One can consider the graph of the differential $d\mathcal F^0_{X_{D,\infty}}$:
$$\mathcal L_{X_{D,\infty}}:=\left\{(\mathbf p,\mathbf q)\ \middle|\ \mathbf p=d_{\mathbf q}\mathcal F^0_{X_{D,\infty}}\right\}\subset\mathcal H=T^*\mathcal H_+.$$
Equivalently, a (formal) point of $\mathcal L_{X_{D,\infty}}$ can be explicitly written as
$$-z+\mathbf t(z)+\sum_\beta\sum_m\sum_{\vec s,k}\frac{q^\beta}{m!}\left\langle\frac{\widetilde T_{\vec s,k}}{-z-\bar\psi},\mathbf t(\bar\psi),\ldots,\mathbf t(\bar\psi)\right\rangle^{X_{D,\infty}}_{0,m+1,\beta}\widetilde T^k_{-\vec s}.$$
Following [15, Theorem 1] (see also [34, Theorem 3.1.1] for orbifold Gromov-Witten theory), the string equation, the dilaton equation and the topological recursion relations imply the following property.

Proposition 29. $\mathcal L_{X_{D,\infty}}$ is the formal germ of a Lagrangian cone with vertex at the origin, such that each tangent space $T$ to the cone is tangent to the cone exactly along $zT$.
Following [7], the set of tangent spaces T to the cone L satisfying Proposition 29 carries a canonical Frobenius structure. We refer to [15] for more details.
Definition 30. We define the $J$-function $J_{X_{D,\infty}}(t,z)$ as follows:
$$J_{X_{D,\infty}}(t,z)=z+t+\sum_{m\geq 1,\ \beta\in NE(X)}\sum_{\vec s,k}\frac{q^\beta}{m!}\left\langle\frac{\widetilde T_{\vec s,k}}{-z-\bar\psi},t,\ldots,t\right\rangle^{X_{D,\infty}}_{0,m+1,\beta}\widetilde T^k_{-\vec s}.$$
The $J$-function is a formal power series in the coordinates $t_{\vec s,k}$ of $t=\sum t_{\vec s,k}\widetilde T_{\vec s,k}\in H$, taking values in $\mathcal H$. The point of $\mathcal L_{X_{D,\infty}}$ above $-z+t\in\mathcal H_+$ is $J_{X_{D,\infty}}(t,-z)$. In other words, $J_{X_{D,\infty}}(t,-z)$ is the intersection of $\mathcal L_{X_{D,\infty}}$ with $(-z+t)+\mathcal H_-$.
The $I$-function $I_{X_{D,\infty}}$ for $X_{D,\infty}$ is constructed in [38, Section 4] as a hypergeometric modification of the $J$-function of $X$. Using the Givental formalism that we have just developed, a mirror theorem for the infinite root stack $X_{D,\infty}$ can be stated as follows.

Theorem 31. Let $X$ be a smooth projective variety. Let $D:=D_1+D_2+\cdots+D_n$ be a simple normal crossing divisor with each $D_i\subset X$ smooth, irreducible and nef. The $I$-function $I_{X_{D,\infty}}$, defined in [38, Section 4], of the infinite root stack $X_{D,\infty}$ lies in Givental's Lagrangian cone $\mathcal L_{X_{D,\infty}}$ of $X_{D,\infty}$.

Remark 32. The $I$-function considered in [38, Section 4] is taken as a limit of the $I$-functions for finite root stacks. Theorem 31 holds for both the non-extended and the extended $I$-function. When $D$ is a smooth divisor, Theorem 31 is simply [12, Theorem 1.4] for the non-extended $I$-function and [12, Theorem 1.5] for the extended $I$-function of the smooth pair $(X,D)$.
VIRASORO CONSTRAINTS
Givental formalism implies Virasoro constraints for genus zero Gromov-Witten invariants of infinite root stacks. We briefly describe it in this section.
Given a class $[\alpha]_{\vec s}\in H$ with $\alpha\in H^{p,q}(D_{I_{\vec s}})$ (when $\vec s=\vec 0$, we use the convention $D_{I_{\vec 0}}=D_\emptyset=X$), we define two operators $\rho,\mu$ as follows:
$$\rho([\alpha]_{\vec s})=[\alpha\cup c_1(T_X(-\log D))]_{\vec s},\qquad\mu([\alpha]_{\vec s})=\left[\left(\dim_{\mathbb C}(X)/2-p-\#\{i:s_i<0\}\right)\alpha\right]_{\vec s}.$$
Then we define the following transformations:
$$l_{-1}=z^{-1},\qquad l_0=z\frac{d}{dz}+\frac12+\mu+\frac{\rho}{z},\qquad l_m=l_0(zl_0)^m,\quad m\geq 1.$$
Recall that an operator $A:\mathcal H\to\mathcal H$ is called infinitesimal symplectic if it satisfies $\Omega(A(f),g)+\Omega(f,A(g))=0$ for all $f,g\in\mathcal H$. One can check that the $l_m$ are infinitesimal symplectic. Furthermore, the operators $l_m$ satisfy the commutation relations
$$\{l_m,l_n\}=(n-m)l_{m+n},$$
where $\{-,-\}$ is the Poisson bracket.
Following [15], an infinitesimal symplectic transformation $A$ gives rise to a vector field on $\mathcal H$ in the following way. The tangent space of $\mathcal H$ at a point $f\in\mathcal H$ can be naturally identified with $\mathcal H$ itself. One obtains a tangent vector field on $\mathcal H$ by assigning the vector $A(f)\in T_f\mathcal H$ to the point $f$. The following proposition follows from [15, Theorem 6].

Proposition 33. The vector fields defined by the operators $l_m$, $m=1,2,\ldots$, are tangent to the Lagrangian cone $\mathcal L$.

Therefore, the $l_m$ are associated with Hamiltonian functions on $\mathcal L$:
$$f\mapsto\frac12\Omega(l_mf,f).$$
We define the quantization of quadratic monomials using the following standard rules:
$$(q_{l;\vec s,k}q_{l';\vec s',k'})^\wedge=\frac{q_{l;\vec s,k}q_{l';\vec s',k'}}{\hbar},\qquad(q_{l;\vec s,k}p_{l';\vec s',k'})^\wedge=q_{l;\vec s,k}\frac{\partial}{\partial q_{l';\vec s',k'}},\qquad(p_{l;\vec s,k}p_{l';\vec s',k'})^\wedge=\hbar\frac{\partial^2}{\partial q_{l;\vec s,k}\partial q_{l';\vec s',k'}}.$$
Hence, we obtain a sequence of quantized operators
$$L_m=\widehat{l_m}.$$
Then the following genus zero Virasoro constraints follow from the fact that $l_m$ is infinitesimal symplectic together with Proposition 33.

Proposition 34. For $m\geq -1$, we have the identity
$$\left[e^{-\mathcal F^0(t)/\hbar}L_m e^{\mathcal F^0(t)/\hbar}\right]_{\hbar^{-1}}=0,$$
where $[\cdots]_{\hbar^{-1}}$ means taking the $\hbar^{-1}$-coefficient.
INTRINSIC MIRROR SYMMETRY
In this section, we apply invariants of X D,∞ and relative quantum cohomology QH(X D,∞ ) to study the intrinsic mirror symmetry of the Gross-Siebert program in our setting.
The Frobenius structure conjecture for log pairs (X, D) was stated in the first arXiv version of [16]. The Frobenius structure conjecture predicts that there is a commutative associative algebra associated to the pair (X, D) and the spectrum of the algebra is mirror to (X, D). The conjecture was proved in [19] by explicitly defining all structure constants in terms of punctured Gromov-Witten invariants. It was proved for cluster log pairs in [29] and for affine log Calabi-Yau varieties containing a torus in [23]. Our construction will also provide a commutative associative algebra associated to log pairs (X, D) when D is a simple normal crossing divisor. We briefly review the conjecture and explain how our construction can be used to study the conjecture as well as the mirror construction in the Gross-Siebert program [18] and [19] in our setting.
Let $D=D_1+\cdots+D_n$ and let $S$ be the dual intersection complex of $D$. That is, $S$ is the simplicial complex with vertices $v_1,\ldots,v_n$ and simplices $\langle v_{i_1},\ldots,v_{i_p}\rangle$ corresponding to non-empty intersections $D_{i_1}\cap\cdots\cap D_{i_p}$. Let $B$ denote the cone over $S$ and let $\Sigma$ be the induced simplicial fan in $B$. Let $B(\mathbb Z)$ be the set of integer points of $B$. Let $QH^0_{log}(X,D)$ be the degree $0$ subalgebra of the relative quantum cohomology ring $QH^*_{log}(X,D)$. There is a bijection between points $p\in B(\mathbb Z)$ and prime fundamental classes $\vartheta_p\in QH^0_{log}(X,D)$. Suppose we are given points $p_1,\ldots,p_m\in B_0(\mathbb Z)$, where $B_0=B\setminus\{0\}$. Each $p_i$ can be written as a linear combination of primitive generators $v_{ij}$ of rays in $\Sigma$:
$$p_i=\sum_j m_{ij}v_{ij},$$
where the ray generated by $v_{ij}$ corresponds to a divisor $D_{ij}$. We assume $(K_X+D)$ is nef or anti-nef. For $m\geq 2$, using the results of [17] and [2], one can define the associated log Gromov-Witten invariant $N^\beta_{p_1,\ldots,p_m,0}$ (7.1) via the moduli stack $\overline M_{0,m+1}(X/D,\beta)$ of logarithmic stable maps, which provides a compactification of the space of stable maps
$$f:(C,x_0,x_1,\ldots,x_m)\to X$$
such that $f_*[C]=\beta$, the curve $C$ meets $D_{ij}$ at $x_i$ with contact order $m_{ij}$ for each $i,j$, and $C$ has contact order zero with $D$ at $x_0$. Note that no punctured invariants are involved at this point. The Frobenius structure conjecture can be partially rephrased as follows.

Conjecture 35. The coefficient of $\vartheta_0$ in the product $\vartheta_{p_1}\star\cdots\star\vartheta_{p_m}$ is $\sum_{\beta\in H_2(X)}N^\beta_{p_1,\ldots,p_m,0}q^\beta$.

Conjecture 35 will be rephrased in our language in the following sections.

7.1. The mirror algebra. Let $QH^0(X_{D,\infty})$ be the degree zero part of the relative quantum cohomology ring $QH(X_{D,\infty})$ of Section 4.3, where degree zero refers to the grading in (4.3). For a cohomology class $[\alpha]_{\vec s}\in H_{\vec s}$ of real degree $d$ to be of degree zero, we need
$$\deg_0([\alpha]_{\vec s})=d/2+\#\{i:s_i<0\}=0.$$
Therefore, we must have $d=0$ and $\#\{i:s_i<0\}=0$.
Hence, we have a canonical basis of $QH^0(X_{D,\infty})$ given by the identity classes of the $H_{\vec s}$ with $s_i\geq 0$ for all $i\in\{1,\ldots,n\}$, and there is a bijection between such classes and the integer points of $B(\mathbb Z)$. Hence there is a bijection between this canonical basis of $QH^0(X_{D,\infty})$, denoted by $[1]_p$, and the prime fundamental classes $\vartheta_p\in QH^0_{log}(X,D)$. We can also use the theta functions $\vartheta_p$ as the canonical basis of $QH^0(X_{D,\infty})$, and write
$$QH^0(X_{D,\infty})=\bigoplus_{p\in B(\mathbb Z)}\mathbb C[[NE(X)]]\vartheta_p$$
as a free $\mathbb C[[NE(X)]]$-module.
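For orientation, here is a small example of the cone $B$ (our own illustration, using the notation above). Take $n=2$ with $D_1\cap D_2\neq\emptyset$. Then the dual intersection complex $S$ is the $1$-simplex with vertices $v_1,v_2$, the cone $B$ over $S$ may be identified with $\mathbb R^2_{\geq 0}$, and
$$B(\mathbb Z)\cong\mathbb Z^2_{\geq 0}=\{(a,b):a,b\in\mathbb Z_{\geq 0}\},$$
so the theta functions $\vartheta_{(a,b)}$ are indexed by pairs of non-negative contact orders with $D_1$ and $D_2$.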
One can replace the log invariant $N^\beta_{p_1,\ldots,p_m,0}$ defined in (7.1) by the corresponding invariant of $X_{D,\infty}$ (with the same input data), denoted by $N^{orb,\beta}_{p_1,\ldots,p_m,0}$. The product $\vartheta_{p_1}\star\vartheta_{p_2}$ is simply replaced by the restriction of the small relative quantum product $[1]_{p_1}\star_{sm}[1]_{p_2}$ to $QH^0(X_{D,\infty})$.
We denote this product by $\vartheta_{p_1}\star_{orb}\vartheta_{p_2}$. The structure constant $N^{orb,\beta}_{p_1,p_2,-r}$ is defined as the invariant of $X_{D,\infty}$ with two "inputs" with positive contact orders given by $p_1,p_2\in B(\mathbb Z)$, one "output" with negative contact order given by $-r$, where $r\in B(\mathbb Z)$, and a point constraint at the punctured point. Namely,
$$N^{orb,\beta}_{p_1,p_2,-r}=\left\langle[1]_{p_1},[1]_{p_2},[\mathrm{pt}]_{-r}\right\rangle^{X_{D,\infty}}_{0,3,\beta}.\tag{7.2}$$
The corresponding punctured invariants are the structure constants considered in [19] (where the notation is $N^\beta_{p_1,p_2,r}$, which is slightly different from ours). In the next lemma (see also [18, Lemma 2.1] for the corresponding lemma for punctured invariants), we show that the virtual dimension constraint implies that $N^{orb,\beta}_{p_1,p_2,-r}=0$ unless $\int_\beta[K_X+D]=0$; similarly for $N^{orb,\beta}_{p_1,\ldots,p_m,0}$, which will appear in Theorem 38.

Lemma 36. For $p_1,p_2,r\in B(\mathbb Z)$,
$$N^{orb,\beta}_{p_1,p_2,-r}=0\quad\text{if}\quad\int_\beta[K_X+D]\neq 0.$$

Proof. Since $r\in B(\mathbb Z)$, the contact orders at the third marking, represented by $-r$, are non-positive with each divisor $D_i$. The definition of $\deg_0$ in (4.3) then implies that $\deg_0([\mathrm{pt}]_{-r})=\dim_{\mathbb C}X$.
The virtual dimension constraint (4.5) is
$$\dim_{\mathbb C}X-3+3-\int_\beta[K_X+D]=\deg_0([\mathrm{pt}]_{-r}),$$
i.e. $\int_\beta[K_X+D]=0$.
Note that the restriction of the quantum product may involve infinite sums. For the finiteness of the product rule, we follow [19]. Let $P\subset H_2(X)$ be a finitely generated submonoid containing all effective curve classes, such that the group of invertible elements $P^\times$ of $P$ coincides with the torsion part of $H_2(X)$. Let $I\subset P$ be a monoid ideal such that $P\setminus I$ is finite. That is,
$$S_I:=\mathbb C[P]/I\tag{7.3}$$
is Artinian. Then one can define
$$R_I:=\bigoplus_{p\in B(\mathbb Z)}S_I\vartheta_p,\tag{7.4}$$
which is a free $S_I$-module.
Replacing punctured invariants by orbifold invariants, we write the product as
$$\vartheta_{p_1}\star_{orb}\vartheta_{p_2}=\sum_{\beta\in P\setminus I,\ r\in B(\mathbb Z)}N^{orb,\beta}_{p_1,p_2,-r}q^\beta\vartheta_r.\tag{7.5}$$
Theorem 37. When $(K_X+D)$ is nef or anti-nef, the structure constants $N^{orb,\beta}_{p_1,p_2,-r}$ define a commutative, associative $S_I$-algebra structure on $R_I$, with unit given by $\vartheta_0$.

We will refer to $R_I$ as the mirror algebra.
Proof. The finiteness of the product rule follows directly from the definition of the structure constants $N^{orb,\beta}_{p_1,p_2,-r}$ and the fact that $P\setminus I$ is finite. The commutativity is straightforward: it follows from the fact that the structure constants are Gromov-Witten invariants of $X_{D,\infty}$, which satisfy
$$N^{orb,\beta}_{p_1,p_2,-r}=N^{orb,\beta}_{p_2,p_1,-r}.$$
The fact that the class $\vartheta_0$ is the unit can be rephrased in terms of the invariants $N^{orb,\beta}_{0,p,-r}$ as follows: for $p\in B(\mathbb Z)$,
$$N^{orb,\beta}_{0,p,-r}=\begin{cases}0 & \text{if }\beta\neq 0\text{ or }p\neq r,\\ 1 & \text{if }\beta=0\text{ and }p=r.\end{cases}$$
But this is a direct consequence of the string equation (4.6).
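To spell out that last step (our own expansion of the argument, using the string equation (4.6) and the degree zero pairing): since $[1]_{\vec 0}$ carries no $\bar\psi$-class, the string equation gives
$$N^{orb,\beta}_{0,p,-r}=\left\langle[1]_{\vec 0},[1]_p,[\mathrm{pt}]_{-r}\right\rangle^{X_{D,\infty}}_{0,3,\beta}=0\quad\text{for }\beta\neq 0,$$
because every term on the right-hand side of (4.6) would carry a $\bar\psi$-exponent $a_j-1=-1$; and for $\beta=0$ the three-point invariant reduces to the pairing $([1]_p,[\mathrm{pt}]_{-r})$, which is $1$ if $p=r$ and $0$ otherwise.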
The associativity of the relative quantum product follows from the WDVV equation (4.8). However, as mentioned in [19], the product rule considered here is only a truncation (restriction) of the actual product rule for relative quantum cohomology, so the associativity is not preserved in general. This is where the assumption that $\pm(K_X+D)$ is nef comes in: under this assumption, we will show that the associativity is preserved.
For the associativity, we need to prove that
$$(\vartheta_{p_1}\star_{orb}\vartheta_{p_2})\star_{orb}\vartheta_{p_3}=\vartheta_{p_1}\star_{orb}(\vartheta_{p_2}\star_{orb}\vartheta_{p_3}).$$
Since
$$(\vartheta_{p_1}\star_{orb}\vartheta_{p_2})\star_{orb}\vartheta_{p_3}=\left(\sum_{\beta_1\in P\setminus I,\ s\in B(\mathbb Z)}N^{orb,\beta_1}_{p_1,p_2,-s}q^{\beta_1}\vartheta_s\right)\star_{orb}\vartheta_{p_3}=\sum_{\beta_1,\beta_2\in P\setminus I,\ s,r\in B(\mathbb Z)}N^{orb,\beta_1}_{p_1,p_2,-s}N^{orb,\beta_2}_{s,p_3,-r}q^{\beta_1+\beta_2}\vartheta_r$$
and
$$\vartheta_{p_1}\star_{orb}(\vartheta_{p_2}\star_{orb}\vartheta_{p_3})=\vartheta_{p_1}\star_{orb}\left(\sum_{\beta_1\in P\setminus I,\ s\in B(\mathbb Z)}N^{orb,\beta_1}_{p_2,p_3,-s}q^{\beta_1}\vartheta_s\right)=\sum_{\beta_1,\beta_2\in P\setminus I,\ s,r\in B(\mathbb Z)}N^{orb,\beta_1}_{p_2,p_3,-s}N^{orb,\beta_2}_{s,p_1,-r}q^{\beta_1+\beta_2}\vartheta_r,$$
we just need to prove
$$\sum_{\substack{\beta_1+\beta_2=\beta\in P\setminus I\\ s\in B(\mathbb Z)}}N^{orb,\beta_1}_{p_1,p_2,-s}N^{orb,\beta_2}_{s,p_3,-r}=\sum_{\substack{\beta_1+\beta_2=\beta\in P\setminus I\\ s\in B(\mathbb Z)}}N^{orb,\beta_1}_{p_2,p_3,-s}N^{orb,\beta_2}_{s,p_1,-r},\tag{7.6}$$
where each sum is over all possible splittings $\beta_1+\beta_2=\beta$ and all $s\in B(\mathbb Z)$. However, this is not yet the WDVV equation (4.8)! The WDVV equation takes the following form, with extra terms in each sum; we use the bracket notation to write it down:
$$\sum_{\substack{\beta_1+\beta_2=\beta\in H_2(X)\\ \vec s\in\mathbb Z^n,\ k}}\left\langle[1]_{p_1},[1]_{p_2},\widetilde T_{-\vec s,k}\right\rangle_{0,3,\beta_1}\left\langle\widetilde T^k_{\vec s},[1]_{p_3},[\mathrm{pt}]_{-r}\right\rangle_{0,3,\beta_2}\tag{7.7}$$
$$=\sum_{\substack{\beta_1+\beta_2=\beta\in H_2(X)\\ \vec s\in\mathbb Z^n,\ k}}\left\langle[1]_{p_2},[1]_{p_3},\widetilde T_{-\vec s,k}\right\rangle_{0,3,\beta_1}\left\langle\widetilde T^k_{\vec s},[1]_{p_1},[\mathrm{pt}]_{-r}\right\rangle_{0,3,\beta_2},$$
where $p_1,p_2,p_3,r\in B(\mathbb Z)$, and each sum is over all splittings $\beta_1+\beta_2=\beta$ and all indices $\vec s,k$ of the basis. We will see that the extra terms in the WDVV equation vanish under the assumption that $\pm(K_X+D)$ is nef.
When $-K_X-D$ is nef, we consider the invariant $\left\langle[1]_{p_1},[1]_{p_2},\widetilde T_{-\vec s,k}\right\rangle_{0,3,\beta_1}$ in (7.7). The virtual dimension constraint (4.5) becomes
$$\dim_{\mathbb C}X-3+3+\int_{\beta_1}[-K_X-D]=\deg_0(\widetilde T_{-\vec s,k}),\quad\text{i.e.}\quad\dim_{\mathbb C}X+\int_{\beta_1}[-K_X-D]=\deg_0(\widetilde T_{-\vec s,k}).\tag{7.8}$$
Let $\deg([\alpha])$ denote the real degree of $\alpha\in H^*(D_I)$ for $I\subseteq\{1,\ldots,n\}$. Recall that
$$\deg_0(\widetilde T_{-\vec s,k})=\deg(\widetilde T_{-\vec s,k})/2+\#\{i:-s_i<0\}.$$
Since
$$\deg(\widetilde T_{-\vec s,k})/2\leq\dim_{\mathbb C}D_{I_{\vec s}}\leq\dim_{\mathbb C}X-\#\{i:-s_i\neq 0\},$$
we have
$$\deg_0(\widetilde T_{-\vec s,k})\leq\dim_{\mathbb C}X-\#\{i:-s_i\neq 0\}+\#\{i:-s_i<0\}=\dim_{\mathbb C}X-\#\{i:-s_i>0\}.$$
Therefore, if $\#\{i:-s_i>0\}>0$, we must have
$$\deg_0(\widetilde T_{-\vec s,k})<\dim_{\mathbb C}X.$$
On the other hand, $-K_X-D$ nef implies that
$$\dim_{\mathbb C}X+\int_{\beta_1}[-K_X-D]\geq\dim_{\mathbb C}X.$$
Hence, the virtual dimension constraint (7.8) does not hold unless $\#\{i:-s_i>0\}=0$; in other words, $-s_i\leq 0$ for all $i\in\{1,\ldots,n\}$. Furthermore, we must have
$$\widetilde T_{-\vec s,k}=[\mathrm{pt}]_{-s},\quad\text{for some }s\in B(\mathbb Z).$$
This implies that the LHS of (7.6) equals the LHS of (7.7) modulo $I$. The same argument shows that the RHS of (7.6) equals the RHS of (7.7) modulo $I$. This completes the case when $-K_X-D$ is nef.
When $K_X+D$ is nef, we consider the invariant $\left\langle\widetilde T^k_{\vec s},[1]_{p_3},[\mathrm{pt}]_{-r}\right\rangle_{0,3,\beta_2}$ in (7.7). The virtual dimension constraint (4.5) becomes
$$\dim_{\mathbb C}X-3+3-\int_{\beta_2}[K_X+D]=\deg_0(\widetilde T^k_{\vec s})+\deg_0([\mathrm{pt}]_{-r}).$$
Since $r\in B(\mathbb Z)$, the contact orders represented by $-r$ are non-positive, and the definition of $\deg_0$ in (4.3) implies that $\deg_0([\mathrm{pt}]_{-r})=\dim_{\mathbb C}X$. Then $-\int_{\beta_2}[K_X+D]\leq 0$ implies that $\deg_0(\widetilde T^k_{\vec s})\leq 0$. Therefore, we must have
$$\deg_0(\widetilde T^k_{\vec s})=\deg(\widetilde T^k_{\vec s})/2+\#\{i:s_i<0\}=0.$$
Hence $\#\{i:s_i<0\}=0$ and
$$\widetilde T^k_{\vec s}=[1]_s,\quad\text{for some }s\in B(\mathbb Z).$$
So the LHS of (7.6) equals the LHS of (7.7) modulo $I$. The same argument shows that the RHS of (7.6) equals the RHS of (7.7) modulo $I$. This completes the case when $K_X+D$ is nef, and hence the proof of the theorem.
7.2. The Frobenius structure conjecture.
Theorem 38. When $(K_X+D)$ is nef or anti-nef, Conjecture 35 holds for $QH^0(X_{D,\infty})$.

Proof. The case $m=2$ follows directly from the definition of our structure constants $N^{orb,\beta}_{p_1,p_2,0}$. The case $m\geq 3$ can be proved using the TRR (4.7).
We need to show that $\sum_{\beta\in H_2(X)}N^{orb,\beta}_{p_1,\ldots,p_m,0}q^\beta$ coincides with the coefficient of $\vartheta_0$ in the product $\vartheta_{p_1}\star_{orb}\cdots\star_{orb}\vartheta_{p_m}$. Recall that
$$N^{orb,\beta}_{p_1,\ldots,p_m,0}:=\left\langle[1]_{p_1},\ldots,[1]_{p_m},[\mathrm{pt}]_0\bar\psi^{m-2}\right\rangle^{X_{D,\infty}}_{0,m+1,\beta}.$$
Similar to absolute Gromov-Witten theory, the TRR (4.7) can be used to remove the descendant class $\bar\psi$. We have
$$N^{orb,\beta}_{p_1,\ldots,p_m,0}=\sum\left\langle[\mathrm{pt}]_0\bar\psi^{m-3},\prod_{j\in S_1}[1]_{p_j},\widetilde T_{\vec s,k}\right\rangle\left\langle\widetilde T^k_{-\vec s},[1]_{p_1},[1]_{p_2},\prod_{j\in S_2}[1]_{p_j}\right\rangle,\tag{7.9}$$
where the sum is over all splittings $\beta_1+\beta_2=\beta$, all indices $\vec s,k$ of the basis, and all splittings of disjoint sets $S_1,S_2$ with $S_1\cup S_2=\{3,\ldots,m\}$. We will show that some terms in (7.9) vanish and that the right-hand side of (7.9) coincides with the coefficient of $\vartheta_0$ of the product.

When $-K_X-D$ is nef, we consider the invariant $\left\langle\widetilde T^k_{-\vec s},[1]_{p_1},[1]_{p_2},\prod_{j\in S_2}[1]_{p_j}\right\rangle$ in (7.9). The virtual dimension constraint (4.5) is
$$\dim_{\mathbb C}X+|S_2|+\int_{\beta_2}[-K_X-D]=\deg_0(\widetilde T^k_{-\vec s}).\tag{7.10}$$
Note that
$$\deg_0(\widetilde T^k_{-\vec s})=\deg(\widetilde T^k_{-\vec s})/2+\#\{i:-s_i<0\}\leq\dim_{\mathbb C}X-\#\{i:-s_i\neq 0\}+\#\{i:-s_i<0\}=\dim_{\mathbb C}X-\#\{i:-s_i>0\}\leq\dim_{\mathbb C}X.$$
On the other hand, $-K_X-D$ nef implies
$$\dim_{\mathbb C}X+|S_2|+\int_{\beta_2}[-K_X-D]\geq\dim_{\mathbb C}X.$$
Therefore, the equality (7.10) does not hold unless
$$|S_2|=0,\quad\int_{\beta_2}[-K_X-D]=0,\quad\#\{i:-s_i>0\}=0\quad\text{and}\quad\widetilde T^k_{-\vec s}=[\mathrm{pt}]_{-s}\ \text{for some }s\in B(\mathbb Z).$$
Therefore (7.9) becomes
$$N^{orb,\beta}_{p_1,\ldots,p_m,0}=\sum_{\substack{\beta_1+\beta_2=\beta\in H_2(X)\\ s\in B(\mathbb Z)}}\left\langle[\mathrm{pt}]_0\bar\psi^{m-3},[1]_{p_3},\ldots,[1]_{p_m},[1]_s\right\rangle\left\langle[\mathrm{pt}]_{-s},[1]_{p_1},[1]_{p_2}\right\rangle=\sum_{\substack{\beta_1+\beta_2=\beta\in H_2(X)\\ s\in B(\mathbb Z)}}N^{orb,\beta_1}_{s,p_3,\ldots,p_m,0}N^{orb,\beta_2}_{p_1,p_2,-s}.$$
Repeating this process $(m-3)$ times, we get
$$N^{orb,\beta}_{p_1,\ldots,p_m,0}=\sum_{\substack{\sum_{i=1}^{m-1}\beta_i=\beta\in H_2(X)\\ s_i\in B(\mathbb Z)}}N^{orb,\beta_1}_{p_1,p_2,-s_1}N^{orb,\beta_2}_{s_1,p_3,-s_2}\cdots N^{orb,\beta_{m-1}}_{s_{m-2},p_m,0}.$$
The right-hand side is precisely the coefficient of $\vartheta_0$ of $\vartheta_{p_1}\star_{orb}\cdots\star_{orb}\vartheta_{p_m}$ by definition. This completes the case when $-K_X-D$ is nef.
When $K_X+D$ is nef, we consider the invariant $\left\langle[\mathrm{pt}]_0\bar\psi^{m-3},\prod_{j\in S_1}[1]_{p_j},\widetilde T_{\vec s,k}\right\rangle$ in (7.9). The virtual dimension constraint (4.5) is
$$\dim_{\mathbb C}X-3+2+|S_1|+\int_{\beta_1}[-K_X-D]=\dim_{\mathbb C}X+m-3+\deg_0(\widetilde T_{\vec s,k}).\tag{7.11}$$
Since $|S_1|\leq m-2$ and $K_X+D$ is nef, we have
$$\dim_{\mathbb C}X-3+2+|S_1|+\int_{\beta_1}[-K_X-D]\leq\dim_{\mathbb C}X-1+m-2=\dim_{\mathbb C}X+m-3.$$
On the other hand,
$$\dim_{\mathbb C}X+m-3+\deg_0(\widetilde T_{\vec s,k})\geq\dim_{\mathbb C}X+m-3.$$
Therefore, the equality (7.11) does not hold unless
$$|S_1|=m-2,\quad\int_{\beta_1}[-K_X-D]=0,\quad\#\{i:s_i<0\}=0\quad\text{and}\quad\widetilde T_{\vec s,k}=[1]_s\ \text{for some }s\in B(\mathbb Z).$$
Hence (7.9) becomes
$$N^{orb,\beta}_{p_1,\ldots,p_m,0}=\sum_{\substack{\beta_1+\beta_2=\beta\in H_2(X)\\ s\in B(\mathbb Z)}}\left\langle[\mathrm{pt}]_0\bar\psi^{m-3},[1]_{p_3},\ldots,[1]_{p_m},[1]_s\right\rangle\left\langle[\mathrm{pt}]_{-s},[1]_{p_1},[1]_{p_2}\right\rangle=\sum_{\substack{\beta_1+\beta_2=\beta\in H_2(X)\\ s\in B(\mathbb Z)}}N^{orb,\beta_1}_{s,p_3,\ldots,p_m,0}N^{orb,\beta_2}_{p_1,p_2,-s}.$$
We again repeat this process $(m-3)$ times to obtain
$$N^{orb,\beta}_{p_1,\ldots,p_m,0}=\sum_{\substack{\sum_{i=1}^{m-1}\beta_i=\beta\in H_2(X)\\ s_i\in B(\mathbb Z)}}N^{orb,\beta_1}_{p_1,p_2,-s_1}N^{orb,\beta_2}_{s_1,p_3,-s_2}\cdots N^{orb,\beta_{m-1}}_{s_{m-2},p_m,0},$$
where the right-hand side is precisely the coefficient of $\vartheta_0$ of $\vartheta_{p_1}\star_{orb}\cdots\star_{orb}\vartheta_{p_m}$. This completes the proof of the case when $K_X+D$ is nef, and hence the proof of the theorem.
7.3. Mirror construction.
With the mirror algebra R I , one can construct the mirror following the Gross-Siebert program. We will follow the construction in [18] and [19].
Let $(X,D)$ be a log Calabi-Yau pair and let $B$ be pure-dimensional with $\dim_{\mathbb R}B=\dim_{\mathbb C}X$. One can define families of schemes
$$\operatorname{Spec}R_I\to\operatorname{Spec}S_I.$$
Taking the direct limit of these families of schemes, one obtains a formal flat family of affine schemes
$$\check X\to\operatorname{Spf}\widehat{\mathbb C[P]},\tag{7.12}$$
where $\widehat{\mathbb C[P]}$ is the completion of $\mathbb C[P]$ with respect to the maximal ideal generated by $P\setminus P^\times$. The family (7.12) can be viewed as the mirror family to $X\setminus D$.
Next, we consider mirrors to a degeneration of Calabi-Yau manifolds $g:X\to S$, so that $D=g^{-1}(0)$ set-theoretically. One can define the ring
$$R=\bigoplus_{p\in B(\mathbb Z)}\widehat{\mathbb C[P]}\,\vartheta_p.$$
The multiplication is always a finite sum, as mentioned in [19, Construction 1.19]. Furthermore, $R$ carries an associative $\widehat{\mathbb C[P]}$-algebra structure with a natural grading. When $\dim_{\mathbb R}B=\dim_{\mathbb C}X$, the mirror family is defined to be the flat family
$$X=\operatorname{Proj}R\to\operatorname{Spec}\widehat{\mathbb C[P]}.$$
Remark 39. [19] actually describes the mirrors in a more general setting. One can also try to construct mirrors following [19] in that generality, but with invariants of $X_{D,\infty}$. We do not repeat these constructions here and refer readers to [19] for more details. An interesting question is whether our construction agrees with the construction in [19]; we plan to study this question in the future.
A PARTIAL COHOMOLOGICAL FIELD THEORY
In this section, we show that the formal Gromov-Witten theory of infinite root stacks forms a partial cohomological field theory (partial CohFT) in the sense of [28]. This generalizes the result of [14, Section 3.5] to infinite root stacks with simple normal crossing divisors. We first provide a brief review of CohFTs.
Let $\overline M_{g,m}$ be the moduli space of genus $g$, $m$-pointed stable curves. We assume that $2g-2+m>0$. There are several canonical morphisms between moduli spaces of stable curves.
• Forgetful morphisms $\pi:\overline M_{g,m+1}\to\overline M_{g,m}$, obtained by forgetting the last marking of $(m+1)$-pointed, genus $g$ curves in $\overline M_{g,m+1}$.
• Morphisms gluing loops, $\rho_l:\overline M_{g,m+2}\to\overline M_{g+1,m}$, obtained by identifying the last two markings of the $(m+2)$-pointed, genus $g$ curves in $\overline M_{g,m+2}$.
• Morphisms gluing trees, $\rho_t:\overline M_{g_1,m_1+1}\times\overline M_{g_2,m_2+1}\to\overline M_{g_1+g_2,m_1+m_2}$, obtained by identifying the last markings of separate pointed curves in $\overline M_{g_1,m_1+1}\times\overline M_{g_2,m_2+1}$.
The state space $\mathcal H$ is a graded vector space with a non-degenerate pairing $\langle-,-\rangle$ and a distinguished element $\mathbf 1\in\mathcal H$. Given a basis $\{e_i\}$, let $\eta_{jk}=\langle e_j,e_k\rangle$ and $(\eta^{jk})=(\eta_{jk})^{-1}$.
A cohomological field theory (CohFT) is a collection of homomorphisms $\Omega_{g,m}:\mathcal H^{\otimes m}\to H^*(\overline M_{g,m},\mathbb Q)$ satisfying the following axioms:
• The element $\Omega_{g,m}$ is invariant under the natural action of the symmetric group $S_m$.
• For all $\alpha_i\in\mathcal H$, $\Omega_{g,m}$ satisfies
$$\Omega_{g,m+1}(\alpha_1,\ldots,\alpha_m,\mathbf 1)=\pi^*\Omega_{g,m}(\alpha_1,\ldots,\alpha_m).$$
• The splitting axiom:
$$\rho_t^*\Omega_{g_1+g_2,m_1+m_2}(\alpha_1,\ldots,\alpha_{m_1+m_2})=\sum_{j,k}\eta^{jk}\Omega_{g_1,m_1+1}(\alpha_1,\ldots,\alpha_{m_1},e_j)\otimes\Omega_{g_2,m_2+1}(\alpha_{m_1+1},\ldots,\alpha_{m_1+m_2},e_k),$$
for all $\alpha_i\in\mathcal H$.
• The loop axiom:
$$\rho_l^*\Omega_{g+1,m}(\alpha_1,\ldots,\alpha_m)=\sum_{j,k}\eta^{jk}\Omega_{g,m+2}(\alpha_1,\ldots,\alpha_m,e_j,e_k),$$
for all $\alpha_i\in\mathcal H$. In addition, the equality $\Omega_{0,3}(v_1,v_2,\mathbf 1)=\langle v_1,v_2\rangle$ holds for all $v_1,v_2\in\mathcal H$.

Definition 40 ([28], Definition 2.7). If the collection $\{\Omega_{g,m}\}$ satisfies all the axioms except for the loop axiom, we call it a partial CohFT. We also refer to [9, Section 3] for more discussion of infinite-rank partial CohFTs.

Recall that, for the Gromov-Witten theory of infinite root stacks, the ring of insertions is the state space $H$ defined in Section 4.1.

Definition 41. Given elements $[\alpha_1],\ldots,[\alpha_m]\in H$, the Gromov-Witten class for infinite root stacks is defined as
$$\Omega^{X_{D,\infty}}_{g,m,\beta}([\alpha_1],\ldots,[\alpha_m])=\pi_*\left(\prod_{j=1}^m\operatorname{ev}_j^*([\alpha_j])\cap\left[\overline M_{g,m}(X_{D,\infty},\beta)\right]^{vir}\right)\in H^*(\overline M_{g,m},\mathbb Q),$$
where the contact orders are specified by the insertions. We then define the class
$$\Omega^{X_{D,\infty}}_{g,m}([\alpha_1],\ldots,[\alpha_m])=\sum_\beta\Omega^{X_{D,\infty}}_{g,m,\beta}([\alpha_1],\ldots,[\alpha_m])q^\beta.$$
Here
$$\pi:\overline M_{g,m}(X,\beta)\times_{X^m}D_{I_{\vec s^{\,1}}}\times\cdots\times D_{I_{\vec s^{\,m}}}\to\overline M_{g,m}$$
is the forgetful map. It is straightforward to check that $\Omega^{X_{D,\infty}}_{g,m}$ satisfies the first two axioms of a CohFT. The proof of the splitting axiom is parallel to the proof of [14, Theorem 3.16]. Therefore, we conclude:

Theorem 42. $\Omega^{X_{D,\infty}}_{g,m}$ forms a partial CohFT.

It is already known from [14] that the loop axiom does not hold for relative Gromov-Witten theory; therefore, it does not hold for the formal Gromov-Witten theory of infinite root stacks either. It would be interesting to find a replacement for the loop axiom. Some results in this direction have been proved in [40], by studying orbifold Gromov-Witten invariants of finite root stacks with mid-ages.
¹ The arguments easily extend to the case where the $D_i$ are disjoint, showing that the two theories are the same in this case, too.
² The main results of this paper also hold when $X$ is a smooth projective Deligne-Mumford stack. For simplicity, we only consider the case when $X$ is a smooth projective variety.
³ By sufficiently large $\vec r$, we mean that the $r_i$ are sufficiently large for all $i\in\{1,\ldots,n\}$.
REFERENCES

[1] D. Abramovich, C. Cadman, J. Wise, Relative and orbifold Gromov-Witten invariants, Algebr. Geom. 4 (2017), no. 4, 472-500.
[2] D. Abramovich, Q. Chen, Stable logarithmic maps to Deligne-Faltings pairs II, Asian J. Math. 18 (2014), no. 3, 465-488.
[3] D. Abramovich, Q. Chen, M. Gross, B. Siebert, Decomposition of degenerate Gromov-Witten invariants, Compos. Math. 156 (2020), no. 10, 2020-2075.
[4] D. Abramovich, J. Wise, Birational invariance in logarithmic Gromov-Witten theory, Compos. Math. 154 (2018), no. 3, 595-620.
[5] D. Abramovich, Q. Chen, M. Gross, B. Siebert, Punctured logarithmic maps, arXiv:2009.07720.
[6] D. Abramovich, B. Fantechi, Orbifold techniques in degeneration formulas, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 16 (2016), no. 2, 519-579.
[7] S. Barannikov, Quantum periods. I. Semi-infinite variations of Hodge structures, Internat. Math. Res. Notices 2001, no. 23, 1243-1264.
[8] L. Battistella, N. Nabijou, D. Ranganathan, Gromov-Witten theory via roots and logarithms, arXiv:2203.17224.
[9] A. Buryak, P. Rossi, Quadratic double ramification integrals and the noncommutative KdV hierarchy, Bull. Lond. Math. Soc. 53 (2021), no. 3, 843-854.
[10] Q. Chen, Stable logarithmic maps to Deligne-Faltings pairs I, Ann. of Math. (2) 180 (2014), no. 2, 455-521.
[11] D. Edidin, W. Graham, Equivariant intersection theory, Invent. Math. 131 (1998), no. 3, 595-634.
[12] H. Fan, H.-H. Tseng, F. You, Mirror theorems for root stacks and relative pairs, Sel. Math. New Ser. 25 (2019), Paper No. 54.
[13] H. Fan, L. Wu, F. You, Structures in genus-zero relative Gromov-Witten theory, J. Topol. 13 (2020), no. 1, 269-307.
[14] H. Fan, L. Wu, F. You, Higher genus relative Gromov-Witten theory and double ramification cycles, J. Lond. Math. Soc. (2) 103 (2021), no. 4, 1547-1576.
[15] A. Givental, Symplectic geometry of Frobenius structures, in: Frobenius manifolds, 91-112, Aspects Math., E36, Friedr. Vieweg, Wiesbaden, 2004.
[16] M. Gross, P. Hacking, S. Keel, Mirror symmetry for log Calabi-Yau surfaces I, Publ. Math. Inst. Hautes Études Sci. 122 (2015), 65-168.
[17] M. Gross, B. Siebert, Logarithmic Gromov-Witten invariants, J. Amer. Math. Soc. 26 (2013), no. 2, 451-510.
[18] M. Gross, B. Siebert, Intrinsic mirror symmetry and punctured Gromov-Witten invariants, in: Algebraic geometry: Salt Lake City 2015, 199-230, Proc. Sympos. Pure Math., 97.2, Amer. Math. Soc., Providence, RI, 2018.
[19] M. Gross, B. Siebert, Intrinsic mirror symmetry, arXiv:1909.07649.
[20] E. Ionel, T. Parker, Relative Gromov-Witten invariants, Ann. of Math. (2) 157 (2003), no. 1, 45-96.
[21] F. Janda, R. Pandharipande, A. Pixton, D. Zvonkine, Double ramification cycles on the moduli spaces of curves, Publ. Math. Inst. Hautes Études Sci. 125 (2017), 221-266.
[22] F. Janda, R. Pandharipande, A. Pixton, D. Zvonkine, Double ramification cycles with target varieties, J. Topol. 13 (2020), no. 4, 1725-1766.
[23] S. Keel, T.Y. Yu, The Frobenius structure theorem for affine log Calabi-Yau varieties containing a torus, arXiv:1908.09861.
[24] B. Kim, H. Lho, H. Ruddat, The degeneration formula for stable log maps, arXiv:1803.04210.
[25] A.-M. Li, Y. Ruan, Symplectic surgery and Gromov-Witten invariants of Calabi-Yau 3-folds, Invent. Math. 145 (2001), no. 1, 151-218.
[26] J. Li, Stable morphisms to singular schemes and relative stable morphisms, J. Differential Geom. 57 (2001), no. 3, 509-578.
[27] J. Li, A degeneration formula of GW-invariants, J. Differential Geom. 60 (2002), no. 2, 199-293.
[28] S.-Q. Liu, Y. Ruan, Y. Zhang, BCFG Drinfeld-Sokolov hierarchies and FJRW-theory, Invent. Math. 201 (2015), no. 2, 711-772.
[29] T. Mandel, Theta bases and log Gromov-Witten invariants of cluster varieties, Trans. Amer. Math. Soc. 374 (2021), no. 8, 5433-5471.
[30] D. Ranganathan, Logarithmic Gromov-Witten theory with expansions, Algebr. Geom. 9 (2022), no. 6, 714-761.
[31] M. Talpo, A. Vistoli, Infinite root stacks and quasi-coherent sheaves on logarithmic schemes, Proc. Lond. Math. Soc. (3) 116 (2018), no. 5, 1187-1243.
[32] X. Tang, H.-H. Tseng, A quantum Leray-Hirsch theorem for banded gerbes, J. Differential Geom. 119 (2021), no. 3, 459-511.
[33] X. Tang, H.-H. Tseng, On gerbe duality and relative Gromov-Witten theory, Adv. Theor. Math. Phys. 25 (2021), no. 8, 2171-2177.
[34] H.-H. Tseng, Orbifold quantum Riemann-Roch, Lefschetz and Serre, Geom. Topol. 14 (2010), no. 1, 1-81.
[35] H.-H. Tseng, F. You, Double ramification cycles on the moduli spaces of admissible covers, arXiv:1606.03770.
[36] H.-H. Tseng, F. You, Higher genus relative and orbifold Gromov-Witten invariants, Geom. Topol. 24 (2020), no. 6, 2749-2779.
[37] H.-H. Tseng, F. You, On the polynomiality of orbifold Gromov-Witten theory of root stacks, Math. Z. 300 (2022), no. 1, 235-246.
[38] H.-H. Tseng, F. You, A mirror theorem for multi-root stacks and applications, Selecta Math. (N.S.) 29 (2023), no. 1, Paper No. 6, 33 pp.
[39] F. You, Relative Gromov-Witten invariants and the enumerative meaning of mirror maps for toric Calabi-Yau orbifolds, Trans. Amer. Math. Soc. 373 (2020), no. 11, 8259-8288.
[40] F. You, Gromov-Witten invariants of root stacks with mid-ages and the loop axiom, Adv. Math. 386 (2021), Paper No. 107811, 25 pp.
[41] F. You, Relative quantum cohomology under birational transformations, arXiv:2204.00509.
[
"Adversarial Learning for Counterfactual Fairness",
"Adversarial Learning for Counterfactual Fairness"
] | [
"Vincent Grari vincent.grari@lip6.fr ",
"Sylvain Lamprier sylvain.lamprier@lip6.fr ",
"Marcin Detyniecki marcin.detyniecki@axa.com ",
"\nSorbonne Université LIP6/CNRS Paris\nSorbonneFrance\n",
"\nAXA REV Research Paris\nUniversité LIP6/CNRS\nParisFrance, France\n"
] | [
"Sorbonne Université LIP6/CNRS Paris\nSorbonneFrance",
"AXA REV Research Paris\nUniversité LIP6/CNRS\nParisFrance, France"
] | [] | In recent years, fairness has become an important topic in the machine learning research community. In particular, counterfactual fairness aims at building prediction models which ensure fairness at the most individual level. Rather than globally considering equity over the entire population, the idea is to imagine what any individual would look like with a variation of a given attribute of interest, such as a different gender or race for instance. Existing approaches rely on Variational Auto-encoding of individuals, using Maximum Mean Discrepancy (MMD) penalization to limit the statistical dependence of inferred representations with their corresponding sensitive attributes. This enables the simulation of counterfactual samples used for training the target fair model, the goal being to produce similar outcomes for every alternate version of any individual. In this work, we propose to rely on an adversarial neural learning approach, that enables more powerful inference than with MMD penalties, and is particularly better fitted for the continuous setting, where values of sensitive attributes cannot be exhaustively enumerated. Experiments show significant improvements in term of counterfactual fairness for both the discrete and the continuous settings. | 10.1007/s10994-022-06206-8 | [
"https://arxiv.org/pdf/2008.13122v1.pdf"
] | 221,377,252 | 2008.13122 | 291a3d55e2ce1f7dfb3ac7fb63f8c634c2b126e0 |
Adversarial Learning for Counterfactual Fairness
Vincent Grari vincent.grari@lip6.fr
Sylvain Lamprier sylvain.lamprier@lip6.fr
Marcin Detyniecki marcin.detyniecki@axa.com
Sorbonne Université LIP6/CNRS Paris
SorbonneFrance
AXA REV Research Paris
Université LIP6/CNRS
ParisFrance, France
Adversarial Learning for Counterfactual Fairness
Index Terms—Counterfactual Fairness, Adversarial Neural Network, Causal Inference
In recent years, fairness has become an important topic in the machine learning research community. In particular, counterfactual fairness aims at building prediction models which ensure fairness at the most individual level. Rather than globally considering equity over the entire population, the idea is to imagine what any individual would look like with a variation of a given attribute of interest, such as a different gender or race for instance. Existing approaches rely on Variational Auto-encoding of individuals, using Maximum Mean Discrepancy (MMD) penalization to limit the statistical dependence of inferred representations with their corresponding sensitive attributes. This enables the simulation of counterfactual samples used for training the target fair model, the goal being to produce similar outcomes for every alternate version of any individual. In this work, we propose to rely on an adversarial neural learning approach, that enables more powerful inference than with MMD penalties, and is particularly better fitted for the continuous setting, where values of sensitive attributes cannot be exhaustively enumerated. Experiments show significant improvements in term of counterfactual fairness for both the discrete and the continuous settings.
I. INTRODUCTION
Machine learning models play an increasingly important role in our daily lives and can have significant implications for citizens, in applications such as loan approval, recidivism scoring, credit rating, etc. However, the data used for training the models can reflect sensitive biases that exist in our society, and without a careful design the models can perpetuate or even reinforce these biases [1]. Many incidents of this kind have been reported in recent years. An infamous example is the case of a tool for criminal risk prediction (COMPAS), which showed strong discrimination against black defendants [2]. A fair predictive model provides outcomes that do not contain any prejudice or favoritism toward an individual or a group based on a set of sensitive characteristics. One of the difficulties in achieving a non-discriminatory model is that it is not simply a matter of removing protected attributes from the training data [3]. This concept, known as fairness through unawareness, is highly insufficient, because any other non-sensitive attribute might indirectly contain significant sensitive information. The recent fair machine learning research field has emerged to tackle this problem.
A link to the online repository will be provided upon acceptance
As of now, a large majority of works in the field have focused on group fairness metrics, which assess a form of conditional independence between the three following features: the sensitive attribute A, the true outcome feature Y, and the model prediction Ŷ. For example, one of the best-known objectives, Demographic parity, ensures that the output prediction does not depend on the sensitive feature [4,5]. However, predictive models trained to be fair with respect to such group metrics may induce dramatic consequences for some individuals. For example, in an extreme case, a person may be refused a position only because of belonging to a privileged group, regardless of their merit within the group. To tackle such issues, the recent field of Counterfactual fairness [6] proposes to assess fairness at the individual level, by leveraging causal inference to ensure that some sensitive attributes are not the cause of a change in prediction. It is argued to lead to a more intuitive, powerful, and less error-prone way of reasoning about fairness [7]. The idea is to imagine what any individual would look like with a variation of a given attribute of interest, such as a different gender or race for instance, in order to ensure similar outcomes for every alternate version of the same individual. While plenty of methods have been proposed recently to tackle this challenge for discrete variables, to the best of our knowledge no approach addresses the continuous case. The existing approaches may not hold when, for instance, the sensitive attribute is the age or the weight of an individual.
The main contributions of this paper are:
• We propose an adversarial approach for confounding variable inference, which allows the generation of accurate counterfactuals in both discrete and continuous sensitive settings (while existing approaches are limited to the discrete case); • Based on this, we define an approach for counterfactual fairness tolerant to continuous features, notably via a dynamic sampling method that focuses on individualized hard locations of the sensitive space; • We demonstrate empirically that our algorithm can mitigate counterfactual fairness Section 2 first gives details for counterfactual fairness, which we believe are essential for a good understanding of our contributions. Then, in section III, we detail our approach in two main steps. Section IV evaluates performances for both the discrete and the continuous settings.
II. BACKGROUND
Recently, there has been a dramatic rise of interest in fair machine learning within the academic community. Many questions have been raised, such as: How should fairness be defined [6,8,9,10]? How can the sensitive bias be mitigated [5,11,12,13,14,15,16,17,18,19,20]? How can a high prediction accuracy be kept while remaining fair in a complex real-world scenario [21,22]? To answer these questions, three main families of fairness approaches exist in the literature. While pre-processing [13,14,15] and post-processing [9,19] approaches respectively act on the input or the output of a classically trained predictor, in-processing approaches mitigate the undesired bias directly during the training phase [5,11,16,17,18]. In this paper we focus on in-processing fairness, which is the most powerful framework for settings where acting on the training process is an option.
Throughout this document, the aim is to learn a predictive function h_θ from training data consisting of m examples (x_i, a_i, y_i), i = 1, ..., m, where x_i ∈ ℝ^p is the p-sized feature vector X of the i-th example, a_i ∈ Ω_A the value of its sensitive attribute, and y_i its label to be predicted. Depending on the setting, the domain Ω_A of the sensitive attribute A can be either a discrete or a continuous set. The outcome Y is also either binary or continuous. The objective is to ensure some individual fairness guarantees on the outcomes of the predictor Ŷ = h_θ(X, A), by way of counterfactual fairness. The remainder of this section presents classical and counterfactual fairness metrics, as well as existing methods for counterfactual fairness.
A. Fairness definitions and metrics
The vast majority of fairness research has focused on two metrics that have become very popular in the field: demographic parity [10] and equalized odds [9]. Both consider fairness globally, by focusing on equity between groups of people defined according to one or several high-level sensitive attributes. The demographic parity metric compares the average prediction for each sensitive demographic group. For instance, in the binary discrete case, it comes down to ensuring that P(Ŷ = 1 | A = 0) = P(Ŷ = 1 | A = 1). The underlying idea is that each sensitive demographic group must have the same chance of a positive outcome. The equalized odds metric instead compares true positive and false positive rates between sensitive groups: P(Ŷ = 1 | A = 0, Y = y) = P(Ŷ = 1 | A = 1, Y = y), ∀y ∈ {0, 1}. The notion of fairness here is that the chances of being correctly or incorrectly classified as positive should be equal for every group. However, these metrics, which correspond to averages over each sensitive group, are known to lead to arbitrary individual-level fairness deviations, with a high outcome variance within groups [20].
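For concreteness, the following minimal sketch (our own illustration, not code from any cited work) computes the empirical gaps behind these two group metrics for binary predictions and a binary sensitive attribute:

```python
import numpy as np

def demographic_parity_gap(y_hat, a):
    """|P(Y_hat = 1 | A = 0) - P(Y_hat = 1 | A = 1)| from 0/1 arrays."""
    return abs(y_hat[a == 0].mean() - y_hat[a == 1].mean())

def equalized_odds_gap(y_hat, y, a):
    """Worst gap, over y in {0, 1}, between the two groups' positive-prediction rates."""
    gaps = []
    for label in (0, 1):
        rate_0 = y_hat[(a == 0) & (y == label)].mean()
        rate_1 = y_hat[(a == 1) & (y == label)].mean()
        gaps.append(abs(rate_0 - rate_1))
    return max(gaps)
```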
In the continuous setting, some recent works proposed to consider non-linear correlation metrics between the predicted outcome Ŷ and the sensitive attribute A, such as the Hirschfeld-Gebelein-Rényi maximal correlation (HGR) defined, for two jointly distributed random variables U ∈ 𝒰 and V ∈ 𝒱, as:

$$\mathrm{HGR}(U, V) = \sup_{\substack{f:\,\mathcal{U}\to\mathbb{R},\; g:\,\mathcal{V}\to\mathbb{R} \\ \mathbb{E}(f(U)) = \mathbb{E}(g(V)) = 0 \\ \mathbb{E}(f^2(U)) = \mathbb{E}(g^2(V)) = 1}} \rho\big(f(U), g(V)\big) \tag{1}$$
where ρ is the Pearson linear correlation coefficient and f, g are measurable functions. The HGR coefficient is equal to 0 if the two random variables are independent; if they are strictly dependent, the value is 1. Applied to fairness [12,23], it can be used to measure objectives similar to those defined for the discrete setting, such as demographic parity, which can be measured via HGR(Ŷ, A) (this accounts for the violation level of the constraint P(Ŷ|A) = P(Ŷ)).
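The neural estimation of [12], which we also use in Section IV, approximates this supremum by training two small networks f and g to maximize the Pearson correlation of their standardized outputs. The sketch below is illustrative: the network sizes, learning rate and number of steps are our own arbitrary choices, not those of [12].

```python
import torch
import torch.nn as nn

def standardize(z, eps=1e-8):
    # Center and scale to zero mean and (approximately) unit variance.
    return (z - z.mean()) / (z.std() + eps)

def hgr_estimate(u, v, steps=500, hidden=16, lr=1e-2):
    """u, v: float tensors of shape (n, 1). Returns a lower bound on HGR(U, V)."""
    f = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    g = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=lr)
    for _ in range(steps):
        corr = (standardize(f(u)) * standardize(g(v))).mean()
        loss = -corr                      # gradient ascent on the Pearson correlation
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return float((standardize(f(u)) * standardize(g(v))).mean().abs())
```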
However, even such approaches in the continuous setting only consider fairness globally and can lead to particularly unfair decisions at the individual level. For example, a fair algorithm can choose to accept a high MSE for the outcome of a given person if this allows the distribution P(Ŷ|A) to get closer to P(Ŷ). The penalization can be arbitrarily high for a given kind of individual profile compared to any other equivalent one, depending only on where the learning process converged. In this sense, global fairness is unfair.
To tackle this problem, counterfactual fairness has recently been introduced to quantify fairness in the most individual sense [6]. The idea is to consider that a decision is fair for an individual if it coincides with the one that would have been taken in a counterfactual world in which the values of their sensitive attributes were different. It leverages the earlier work of [24], which introduced a causal framework to learn from biased data by exploring the relationship between sensitive features and data. With recent developments in deep learning, some novel approaches [25,26,27] argue that this leads to a less error-prone decision-making model, by improving the approximation of the causal inference in the presence of unobserved confounders.
Definition 1 (Counterfactual demographic parity [6]): A predictive function h_θ is considered counterfactually fair for a causal world G if, for any x ∈ X, ∀y ∈ Y and ∀(a, a′) ∈ Ω_A² with a ≠ a′:

$$p(\hat{Y}_{A\leftarrow a} = y \mid X = x, A = a) = p(\hat{Y}_{A\leftarrow a'} = y \mid X = x, A = a)$$

where Ŷ_{A←a′} = h_θ(X̂_{A←a′}, a′) is the outcome of the predictive function h_θ for any transformation X̂_{A←a′} of input X, resulting from setting a′ as its sensitive attribute value, according to the causal graph G. Following Definition 1, an algorithm is considered counterfactually fair in terms of demographic parity if the predictions are equal for each individual in the factual causal world where A = a and in any counterfactual world where A = a′. It therefore compares the predictions of the same individual with an alternate version of him/herself. A similar extension can be made to adapt the equalized odds objective to the counterfactual framework [26]. Learning transformations X̂_{A←a′} for a given causal graph is at the heart of counterfactual fairness, as described in the next subsection. In this paper, we focus on the classical causal graph depicted in Fig. 1, often used in the counterfactual fairness literature [6,7,26], which applies to most applications. For more specific tasks, note further that our approach could easily be adapted to different graphs, such as those explored in [6]. In this causal graph, both the input X and the outcome Y depend only on the sensitive attribute A and a latent variable U, which represents all the relevant knowledge not dependent on the sensitive feature A. In that setting, the knowledge of U can be used during training to simulate various versions of the same individual, corresponding to different values of A, in order to obtain a predictive function h_θ which respects the fairness objective of Definition 1. For any training individual, U has to be inferred since only X, A and Y are observed. This inference must however ensure that no dependence is created between U and A (no arrow from U to A in the graph of Fig. 1); otherwise the generation of proper alternative versions of X and Y for any value of A would be compromised.
A classic way to achieve a counterfactually fair model is to proceed in two distinct main steps, causal inference and model learning [26,28], which are described below.
1) Step 1: Counterfactual inference: The goal is to define a way to generate counterfactual versions of original individuals. As discussed above, this is usually done via approximate Bayesian inference, according to a pre-defined causal graph.
The initial idea for performing inference was to assume, under strong hypotheses, a non-deterministic structural model with specific distributions for all the causal links [6]. In this setting, the posterior distribution of U was estimated using the probabilistic programming language Stan [29]. Then, leveraging recent developments in approximate inference with deep learning, many works [7,25,26,27] proposed to use variational autoencoding (VAE) methods [30] to generalize this first model and capture more complex, non-linear, dependencies in the causal graph.
Following the formulation of VAE, it would be possible to directly optimize the classical lower bound (ELBO) [30] on the training set D, by minimizing:
$$\mathcal{L}_{ELBO} = -\,\mathbb{E}_{(x,y,a)\sim\mathcal{D},\; u\sim q_\phi(u|x,y,a)}\big[\log p_\theta(x, y \mid u, a)\big] + D_{KL}\big(q_\phi(u|x, y, a)\,\|\,p(u)\big) \tag{2}$$
where D_KL denotes the Kullback-Leibler divergence of the posterior q_φ(u|x, y, a) from a prior p(u), typically a standard Gaussian distribution N(0, I). The posterior q_φ(u|x, y, a) is represented by a deep neural network with parameters φ, which typically outputs the mean µ_φ and the variance σ_φ of a diagonal Gaussian distribution N(µ_φ, σ_φ I). The likelihood term factorizes as p_θ(x, y|u, a) = p_θ(x|u, a) p_θ(y|u, a), where both factors are defined as neural networks with parameters θ. Being pulled toward a standard prior, the posterior is expected to remove probability mass from any features of the latent representation U that are not involved in the reconstruction of X and Y. Since A is given together with U as input to the likelihoods, all the information about A should be removed from the posterior distribution of U.
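A minimal sketch of this ELBO for the causal graph of Fig. 1 is given below, assuming a Gaussian posterior, a Gaussian likelihood for X with unit variance (so the reconstruction term reduces to a squared error up to constants) and a Bernoulli likelihood for Y; all layer shapes are illustrative assumptions rather than the architectures used in our experiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, dx, du=5):
        super().__init__()
        self.enc = nn.Linear(dx + 2, 2 * du)   # q_phi(u|x,y,a): mean and log-variance
        self.dec_x = nn.Linear(du + 1, dx)     # p_theta(x|u,a)
        self.dec_y = nn.Linear(du + 1, 1)      # p_theta(y|u,a), Bernoulli logit

    def forward(self, x, y, a):
        mu, logvar = self.enc(torch.cat([x, y, a], dim=1)).chunk(2, dim=1)
        u = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparametrization trick
        x_rec = self.dec_x(torch.cat([u, a], dim=1))
        y_logit = self.dec_y(torch.cat([u, a], dim=1))
        rec = F.mse_loss(x_rec, x) + F.binary_cross_entropy_with_logits(y_logit, y)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return rec + kl                        # negative ELBO of Eq. 2, to be minimized
```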
However, many state-of-the-art algorithms [7,25,26,27] show that the level of independence between the latent space U and the sensitive variable A achieved by this classical ELBO optimization is insufficient: some information from A leaks into the inferred U. In order to ensure a high level of independence, a specific TARNet [31] architecture can be employed [25], or a penalization term can be added to the loss function. For example, [7,26] add a Maximum Mean Discrepancy (MMD) [32] constraint. The MMD term can be used to push all the aggregated posteriors toward the prior distribution [26]: L_MMD(q_φ(u|A = a_k) ∥ p(u)) for all a_k ∈ Ω_A (referred to as MMD wrt P(U) in the following). Alternatively, the constraint can directly enforce the matching between pairs of posteriors [7]: L_MMD(q_φ(u|A = a_k) ∥ q_φ(u|A = a)) for all a_k ∈ Ω_A, with a standing for the original sensitive value of the considered individual (referred to as MMD wrt U_a in the following). Notice that while this additional term can improve independence, it can also encourage the model to ignore the latent confounders U by being too restrictive. One possible approach to address this issue is to apply weights λ (hyperparameters) to control the relative importance of the different terms. In addition, we employ in this paper a variant of the ELBO optimization as done in [26], where the D_KL(q_φ(u|x, y, a) ∥ p(u)) term is replaced by an MMD term L_MMD(q_φ(u) ∥ p(u)) between the aggregated posterior q_φ(u) and the prior. This has been shown to be more powerful than the classical D_KL for ELBO optimization in [33], as the latter can prove too restrictive (the uninformative latent code problem) [34,35,36] and can also tend to overfit the data (variance over-estimation in feature space). Finally, the inference for counterfactual fairness can be optimized by minimizing [26]:
$$\mathcal{L}_{CE\text{-}VAE} = -\,\mathbb{E}_{(x,y,a)\sim\mathcal{D},\; u\sim q_\phi(u|x,y,a)}\big[\lambda_x \log p_\theta(x|u,a) + \lambda_y \log p_\theta(y|u,a)\big] + \lambda_{MMD}\, \mathcal{L}_{MMD}\big(q_\phi(u)\,\|\,p(u)\big) + \lambda_{ADV}\, \frac{1}{m_a} \sum_{a_k \in \Omega_A} \mathcal{L}_{MMD}\big(q_\phi(u|a = a_k)\,\|\,p(u)\big) \tag{3}$$
where λ_x, λ_y, λ_MMD, λ_ADV are scalar hyperparameters and m_a = |Ω_A|. The additional MMD objective can be interpreted as minimizing the distance between all moments of each aggregated latent code distribution and the prior distribution, in order to remove most of the sensitive dependency from the code generator. It requires, however, a careful design of the kernel used for the MMD computations (typically a zero-mean isotropic Gaussian). Note that we chose to present all models with a generic inference scheme q(U|X, Y, A), while most approaches from the literature only consider q(U|X, A). The use of Y as input is allowed since U is only used during training, for generating the counterfactual samples used to learn the predictive model in step 2. Various inference schemes are considered in our experiments (Section IV).
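For reference, the following sketch shows a standard (biased) estimate of the squared MMD with the Gaussian RBF kernel used for the L_MMD terms above; the bandwidth is an arbitrary illustrative choice.

```python
import torch

def rbf_kernel(x, y, bandwidth=1.0):
    # Gaussian RBF kernel matrix between two samples of shape (n, d) and (m, d).
    d2 = torch.cdist(x, y).pow(2)
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd(x, y, bandwidth=1.0):
    """Biased estimate of MMD^2 between samples x ~ P and y ~ Q."""
    return (rbf_kernel(x, x, bandwidth).mean()
            - 2 * rbf_kernel(x, y, bandwidth).mean()
            + rbf_kernel(y, y, bandwidth).mean())
```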
2) Step 2: Counterfactual predictive model: Once the causal model is learned, the goal is to use it to learn a fair predictive function h_θ, by leveraging the ability of the model to generate alternative versions of each training individual. The global loss function is usually composed of the traditional predictor loss l(h_θ(x_i, a_i), y_i) (e.g. cross-entropy for instance i) and a counterfactual unfairness estimation term L_CF(θ):
$$\mathcal{L} = \frac{1}{m} \sum_{i=1}^{m} l(h_\theta(x_i), y_i) + \lambda\, \mathcal{L}_{CF}(\theta) \tag{4}$$
where λ is a hyperparameter which controls the impact of the counterfactual loss in the optimization. The counterfactual loss L_CF(θ) considers differences of predictions for alternative versions of any individual. For example, [28] considers the following Monte-Carlo estimate from S samples for each individual i and each value a_k ∈ Ω_A:
$$\mathcal{L}_{CF}(\theta) = \frac{1}{m} \sum_{i=1}^{m} \frac{1}{m_a} \sum_{a_k \in \Omega_A} \frac{1}{S} \sum_{s=1}^{S} \Delta^{i,s}_{a_k} \tag{5}$$

where $\Delta^{i,s}_{a_k} = \Delta\big(h_\theta(\hat{x}^{s}_{i,A\leftarrow a_i}, a_i),\, h_\theta(\hat{x}^{s}_{i,A\leftarrow a_k}, a_k)\big)$, with Δ a loss function that compares two predictions and x̂^s_{i,A←a} denoting the s-th sample from the causal model for the i-th individual of the training set and the sensitive attribute value a. Following the causal model learned at step 1, x̂^s_{i,A←a} is obtained by first inferring a sample u from q_φ(u|x_i, a_i, y_i) and then sampling x̂^s_{i,A←a} from p_θ(x|u, a) with the counterfactual (or factual) attribute value a. Depending on the task, Δ can take various forms. For binary classification, it can correspond to a logit pairing loss as done in [26]:
$$\Delta(z, z') = \big(\sigma^{-1}(z) - \sigma^{-1}(z')\big)^2,$$

where σ^{-1} is the logit function. For continuous outcomes, it can simply correspond to a mean squared difference.
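Both instantiations of Δ are straightforward to implement; the sketch below assumes that binary predictions are probabilities in (0, 1):

```python
import torch

def delta_logit_pairing(z, z_prime, eps=1e-6):
    """Binary case: squared difference of logits, as in [26]."""
    def logit(p):
        p = p.clamp(eps, 1 - eps)          # avoid infinities at 0 and 1
        return torch.log(p / (1 - p))
    return (logit(z) - logit(z_prime)) ** 2

def delta_squared(z, z_prime):
    """Continuous case: squared difference of outcomes."""
    return (z - z_prime) ** 2
```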
3) Discussion: So far, state-of-the-art approaches have focused specifically on categorical variables A. Unfortunately, the classical methodology for counterfactual fairness described above cannot be directly generalized to continuous sensitive attributes, because the two steps involve enumerations of the discrete counterfactual modalities a_k in the set Ω_A. Particularly in step 1, sampling A from a uniform distribution to approximate the expectation E_{a∼p(A)} L_MMD(q_φ(u|A = a) ∥ p(u)) is not an option, since this requires a good estimate of q_φ(u|A = a) for any a ∈ Ω_A, which is difficult in the continuous case. While such a posterior can be obtained for discrete sensitive attributes (at least when |Ω_A| ≪ m) by aggregating the posteriors q_φ(u|x_i, a_i, y_i) over training samples i such that a_i = a, such a simple aggregation over filtered samples is not possible for continuous attributes. Moreover, existing approaches based on MMD costs require inferring codes U from a distribution that takes A as input, in order to be able to obtain the required aggregated distributions via:
$$q_\phi(u|a) = \mathbb{E}_{p_{data}(x,y|a)}\big[q_\phi(u|x, y, a)\big].$$
Omitting A from the conditioning of the generator would correspond to assuming the mutual independence of U and A given X and Y, which is usually wrong. On the other hand, passing A to the generator of U can encourage their mutual dependency in some settings, as we observe in our experiments.
III. ADVERSARIAL LEARNING FOR COUNTERFACTUAL FAIRNESS
In this section we revisit the two steps described above, using adversarial learning rather than MMD costs to ensure counterfactual fairness. Our contribution covers a broad range of scenarios, where the sensitive attribute A and the outcome value Y can each be either discrete or continuous.
1) Step 1: Counterfactual inference: To avoid the comparison of distributions for each possible sensitive value, which is particularly problematic in the continuous setting, we propose to employ an adversarial learning framework, which avoids the enumeration of possible values in Ω_A. We follow an approach similar to the adversarial autoencoders proposed in [37], but where the real/fake discriminator is replaced by a sensitive value predictor. The idea is to prevent any adversarial function from being able to decode A from the code U inferred by the encoder q_φ, which ensures the mutual independence of A and U. This defines a two-player adversarial game, as in GANs [38], where the goal is to find parameters φ which minimize the loss for reconstructing X and Y, while maximizing the reconstruction loss of A according to the best decoder p_ψ(A|U):
$$\arg\min_{\theta,\phi} \max_{\psi}\; \mathcal{L}_{ADV}(\theta, \phi, \psi) \tag{6}$$
with, for the graphical causal model from figure 1:
$$\mathcal{L}_{ADV}(\theta, \phi, \psi) = -\,\mathbb{E}_{(x,y,a)\sim\mathcal{D},\; u\sim q_\phi(u|x,y,a)}\big[\lambda_x \log p_\theta(x|u,a) + \lambda_y \log p_\theta(y|u,a)\big] + \lambda_{MMD}\, \mathcal{L}_{MMD}\big(q_\phi(u)\,\|\,p(u)\big) + \lambda_{ADV}\, \mathbb{E}_{(x,y,a)\sim\mathcal{D},\; u\sim q_\phi(u|x,y,a)}\big[\log p_\psi(a|u)\big] \tag{7}$$
where λ_x, λ_y, λ_MMD, λ_ADV are scalar hyperparameters. Compared to the existing approaches presented in the previous section, the difference is the last term, which corresponds to the expectation of the log-likelihood of A given U according to the decoder with parameters ψ. This decoder corresponds to a neural network which outputs the parameters of the distribution of A given U (i.e., the logits of a categorical distribution in the discrete case, the mean and log-variance of a diagonal Gaussian in the continuous case).
All parameters are learned jointly. Figure 2 gives the full architecture of our variational adversarial inference for the causal model of Figure 1. It depicts the neural network encoder q_φ(U|X, Y, A), which generates a latent code U from the inputs X, Y and A. A neural network decoder p_θ(X, Y|U, A) reconstructs the original X and Y from both U and A. The adversarial network p_ψ tries to reconstruct the sensitive attribute A from the confounder U. As classically done in adversarial learning, we alternate steps of adversarial maximization and steps of global loss minimization (one gradient descent iteration on the same batch of data at each step). Optimization uses the reparametrization trick [30] to handle stochastic optimization.
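The following sketch illustrates one alternation of the two-player game of Eqs. 6-7, reusing the CVAE and mmd sketches given earlier. The adversary is assumed here to model a one-dimensional continuous A as a Gaussian with fixed unit variance, so its negative log-likelihood reduces to a squared error up to constants; this simplification, like all shapes and hyperparameters, is our own assumption.

```python
import torch
import torch.nn.functional as F

def adv_inference_step(cvae, adv, opt_model, opt_adv, x, y, a, lambdas):
    lx, ly, lmmd, ladv = lambdas
    # Adversary maximization step: learn to recover a from a detached code u.
    mu, logvar = cvae.enc(torch.cat([x, y, a], dim=1)).chunk(2, dim=1)
    u = (mu + torch.randn_like(mu) * (0.5 * logvar).exp()).detach()
    adv_nll = 0.5 * (adv(u) - a).pow(2).mean()   # Gaussian NLL up to constants
    opt_adv.zero_grad(); adv_nll.backward(); opt_adv.step()
    # Global minimization step on the same batch (Eq. 7).
    mu, logvar = cvae.enc(torch.cat([x, y, a], dim=1)).chunk(2, dim=1)
    u = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
    rec = lx * F.mse_loss(cvae.dec_x(torch.cat([u, a], dim=1)), x) \
        + ly * F.binary_cross_entropy_with_logits(cvae.dec_y(torch.cat([u, a], dim=1)), y)
    prior = torch.randn_like(u)                  # samples from p(u) = N(0, I)
    # Minimizing -ladv * squared error pushes the encoder to make A unrecoverable.
    loss = rec + lmmd * mmd(u, prior) - ladv * 0.5 * (adv(u) - a).pow(2).mean()
    opt_model.zero_grad(); loss.backward(); opt_model.step()
    return float(loss)
```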
2) Step 2: Counterfactual predictive model: As described in Section II, the counterfactual fairness of the predictive model learned at step 2 is ensured by comparing, for each training individual, counterfactual predictions Ŷ_{A←a} for all a ∈ Ω_A. For the discrete case (i.e., A is a categorical variable), we keep this process in our experiments. However, for the continuous setting (i.e., A is for instance generated from a Gaussian), such an approach must be adapted, due to the infinite set Ω_A. In that case, we can consider a sampling distribution P(A) to formulate the following loss, which can be optimized via Monte-Carlo sampling and stochastic gradient descent (SGD):
$$\mathcal{L}_{CF}(\theta) = \frac{1}{m} \sum_{i=1}^{m} l(h_\theta(x_i), y_i) + \lambda\, \mathbb{E}_{\substack{u\sim P(u|x_i,a_i,y_i),\; \hat{x}\sim P(x|u,a_i), \\ a'\sim P(A),\; \hat{x}'\sim P(x|u,a')}}\big[(h_\theta(\hat{x}) - h_\theta(\hat{x}'))^2\big] \tag{8}$$
This formulation is equivalent to the one of Eq. 5 for continuous outcomes Ŷ (thus considering a least-squares cost as Δ) and continuous attributes A (thus using the sampling distribution P(A) rather than considering every possible a ∈ Ω_A).
Note that using a non-uniform sampling distribution P(A) would focus the penalization near the mass of the distribution. This prevents using the prior of A estimated from the training set, since this would tend to reproduce inequity between individuals: counterfactual predictions for rare values of A would be barely taken into account during training. We therefore consider a uniform P(A) in our experiments for the continuous setting when using the L_CF(θ) objective at step 2.
However, in the specific case of high-dimensional sensitive attributes A, using a uniform sampling distribution P(A) could prove particularly inefficient. The risk is that a high number of counterfactual samples fall in areas that are easy for the learning process, while some difficult areas, where important work for fairness has to be performed, remain insufficiently visited.
To tackle this problem, we propose to allow the learning process to dynamically focus on the most useful areas of Ω_A for each individual. During learning, we consider an adversarial process which is in charge of moving the sampling distribution P(A) so that the counterfactual loss is the highest. This allows the learning process to select useful counterfactuals for ensuring fairness. Who can do more can do less: dynamically focusing on the hardest areas allows one to expect fairness everywhere. Again, we face a two-player adversarial game, formulated as follows:
$$\arg\min_{\theta}\, \arg\max_{\phi}\; \mathcal{L}_{DynCF}(\theta, \phi) \tag{9}$$
with:
$$\mathcal{L}_{DynCF}(\theta, \phi) = \frac{1}{m} \sum_{i=1}^{m} l(h_\theta(x_i), y_i) + \lambda\, \mathbb{E}_{\substack{u\sim P(u|x_i,a_i,y_i),\; \hat{x}\sim P(x|u,a_i), \\ a'\sim P_\phi(a|u),\; \hat{x}'\sim P(x|u,a')}}\big[(h_\theta(\hat{x}) - h_\theta(\hat{x}'))^2\big] \tag{10}$$
Compared to Eq. 8, this formulation considers an adversarial sampling distribution P_φ(A|U) rather than a uniform static distribution P(A). It takes the form of a neural network that outputs the parameters of the sampling distribution for a given individual representation U. In our experiments we use a diagonal logit-Normal distribution sigmoid(N(µ_φ(u), σ²_φ(u) I)), where µ_φ(u) and σ²_φ(u) stand for the mean and variance parameters provided by the network for the latent code u. Samples from this distribution are then projected onto the support Ω_A via a linear mapping depending on the shape of the set. Passing U as input to the network allows the process to define different distributions for different codes: depending on the individual profile, the unfair areas are not always the same. This also limits the risk that the adversarial process gets stuck in sub-optima of the sensitive manifold. As done for adversarial learning in step 1, all parameters are learned jointly, by alternating steps of adversarial maximization and steps of global loss minimization. The reparametrization trick [30] is also used for the adversarial optimization of P_φ(A|U).
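A sketch of such a sampler is shown below; the hidden width is arbitrary, and the support Ω_A is assumed to be an interval [a_min, a_max] so that the final linear mapping is a simple rescaling of the logit-Normal sample.

```python
import torch
import torch.nn as nn

class DynamicSampler(nn.Module):
    """Adversarial sampling distribution P_phi(a'|u) of Eq. 10 (illustrative)."""
    def __init__(self, du, da, a_min=0.0, a_max=1.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(du, 32), nn.ReLU(), nn.Linear(32, 2 * da))
        self.a_min, self.a_max = a_min, a_max

    def forward(self, u):
        mu, logvar = self.net(u).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparametrization trick
        a = torch.sigmoid(z)                                  # logit-Normal sample in (0, 1)
        return self.a_min + (self.a_max - self.a_min) * a    # linear projection onto Omega_A
```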
IV. EXPERIMENTS
We empirically evaluate the performance of our contribution on 6 real-world data sets. For the discrete scenario, and specifically the binary case (Y ∈ {0, 1}, A ∈ {0, 1}), we use 3 popular data sets: the Adult UCI income data set [39] with gender as sensitive attribute (male or female), the COMPAS data set [2] with race as sensitive attribute (Caucasian or not Caucasian) and the Bank data set [40] with age as sensitive attribute (age between 30 and 60 years, or not). For the continuous setting (Y and A are continuous), we use the 3 following data sets: the US Census data set [41] with gender rate as sensitive attribute, encoded as the percentage of women in the census tract, the Motor data set [42] with the driver's age as sensitive attribute and the Crime data set [39] with the ratio of an ethnic group per population as sensitive attribute.
In addition to the 6 real-world data sets, we consider a synthetic scenario, which allows us to perform a further analysis of the relative performance of the approaches. The synthetic scenario concerns a pricing algorithm for a fictional car insurance policy, which follows the causal graph of Figure 1. We simulate both a binary and a continuous data set from this scenario. The main advantage of these synthetic scenarios is that it is possible to obtain "ground truth" counterfactuals for each code U, generated using the true relationships of the generation model while varying A uniformly in Ω_A. This will allow us to evaluate the counterfactual fairness of the models without depending on a given inference process for the evaluation metric, by relying on prediction differences between these true counterfactuals and the original individual. The objective of this scenario is to achieve a counterfactually fair predictor which estimates the average cost history of insurance customers. We suppose 5 unobserved variables (Aggressiveness, Inattention, Restlessness, Recklessness and Overreaction) which correspond to a 5-dimensional confounder U. The input X is composed of four explicit variables X1, ..., X4, which stand for vehicle age, speed average, horsepower and average kilometers per year, respectively. We consider the policyholder's age as the sensitive attribute A. The input X and the average cost variable Y are sampled from U and A as depicted in Figure 1. We propose both a binary and a continuous version of this scenario; for both of them, 5000 individuals are sampled. The distributions used for the continuous setting of this synthetic scenario are given below:
$$U \sim \mathcal{N}\!\left(\begin{pmatrix} 0 \\ 0.5 \\ 1 \\ 1.5 \\ 2 \end{pmatrix},\; \mathrm{diag}(1, 4, 2, 3, 2)\right)$$
$$X_1 \sim \mathcal{N}(7 + 0.1\,A + U_1 + U_2 + U_3,\; 1); \qquad X_2 \sim \mathcal{N}(80 + A + U_2^2,\; 10);$$
$$X_3 \sim \mathcal{N}(200 + 5A + 5U_3,\; 20); \qquad X_4 \sim \mathcal{N}(10^4 + 5A + U_4 + U_5,\; 1000);$$
$$X = [X_1, X_2, X_3, X_4]; \qquad A \sim \mathcal{N}(45, 5); \qquad Y \sim \mathcal{N}\Big(2\big(7A + 20 \textstyle\sum_j U_j\big),\; 0.1\Big)$$
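For reproducibility, the generator below samples this continuous scenario with NumPy. Since the text does not specify whether the second parameter of each N(·, ·) is a variance or a standard deviation, we treat it as a standard deviation (except for U, whose diagonal covariance is explicit); this is an assumption of the sketch.

```python
import numpy as np

def sample_synthetic(n=5000, seed=0):
    rng = np.random.default_rng(seed)
    mean = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
    std = np.sqrt(np.array([1.0, 4.0, 2.0, 3.0, 2.0]))   # diag(1, 4, 2, 3, 2)
    U = rng.normal(mean, std, size=(n, 5))
    A = rng.normal(45.0, 5.0, size=n)
    X1 = rng.normal(7 + 0.1 * A + U[:, 0] + U[:, 1] + U[:, 2], 1.0)
    X2 = rng.normal(80 + A + U[:, 1] ** 2, 10.0)
    X3 = rng.normal(200 + 5 * A + 5 * U[:, 2], 20.0)
    X4 = rng.normal(1e4 + 5 * A + U[:, 3] + U[:, 4], 1000.0)
    Y = rng.normal(2 * (7 * A + 20 * U.sum(axis=1)), 0.1)
    X = np.stack([X1, X2, X3, X4], axis=1)
    return X, A, Y, U
```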
A. Step 1: Counterfactual Inference
In this section, we report experiments performed to assess our adversarial approach for counterfactual inference (step 1 of the previous section). We compare our adversarial approach with two versions of the approach of Eq. 3, each using one of the two MMD constraints, MMD wrt P(U) or MMD wrt U_a, as presented in Section II (step 1). Note that these approaches are not applicable to continuous data sets, as discussed at the end of Section II. For every approach, we compare three different inference schemes for U: q_φ(u|x, y, a), q_φ(u|x, y) and q_φ(u|x, a). As a baseline, we also use a classical variational autoencoder inference without the counterfactual independence constraint (i.e., Eq. 3 without the last term).
All hyperparameters for every approach have been tuned by 5-fold cross-validation. For instance, for the US Census data set and our approach, the encoder q_φ architecture is an MLP of 3 hidden layers with 128, 64 and 32 units respectively, with ReLU activations. On this data set, the decoder p_θ is an MLP with a single hidden layer of 64 units with a ReLU activation function, and the output consists of one single output node with linear activation to reconstruct Y and 37 units to reconstruct X (the number of features). The adversarial neural network p_ψ is an MLP of two hidden layers with 32 and 16 units respectively. For the binary data sets, a sigmoid is applied on the outputs of the decoders for A and Y. For both MMD constraints we used a Gaussian radial basis function kernel. For all data sets, the prior distribution p(U) considered for training the models is a five-dimensional standard Gaussian.
In order to evaluate the level of dependence between the latent space U and the sensitive variable A, we compare the different approaches using the neural estimation of the HGR correlation coefficient given in [12]. This coefficient, shown above in Eq. 1, assesses the level of non-linear dependency between two jointly distributed random variables. The estimator is trained for each data set and each approach on the train set, comparing the observed variables A with the corresponding inferred codes U.
For all data sets, we repeat five experiments by randomly sampling two subsets, 80% for the training set and 20% for the test set. We report the average reconstruction loss for X and Y on the test set, as well as the HGR between the inferred test codes and the corresponding sensitive attributes. The results of our experiments can be found in Table I for the discrete case and Table II for the continuous case. For all of them, we attempted, via the different hyperparameters (λ_x, λ_y, λ_MMD, λ_ADV), to obtain the lowest dependence measure while keeping the reconstruction loss for X and Y as small as possible.
As expected, the baseline without the independence constraint achieves the best X and Y reconstruction losses, but it is also the most biased one, with the worst dependence in terms of HGR on most data sets. Comparing the different constraints in the discrete case, the adversarial approach globally achieves the best results, with the lowest HGR while maintaining a reasonable reconstruction of X and Y. It is unclear which of the two MMD constraints performs better. We observe that the best results in terms of independence are obtained without the sensitive variable given as input to the inference network (inference only with X and Y). Note however that for the MMD constraints, this setting implies making the wrong assumption of independence of U w.r.t. A given X and Y for the estimation of the constraint (as discussed at the end of Section II). This is not the case for our adversarial approach, which obtains particularly good results in this setting on the discrete data sets. On the continuous data sets, our approach succeeds in maintaining reasonable reconstruction losses for important gains in terms of HGR compared to the classical VAE approach (without constraint). Interestingly, on these data sets, our approach obtains slightly better results when using the full information (X, Y and A) as input to the inference network. We explain this by the fact that removing the influence of a binary input is harder than that of a smoother continuous one, while it can provide useful information for generating relevant codes.
B. Step 2: Counterfactual predictive model
This section reports experiments involving the training procedure of step 2 as described in Section III. The goal of these experiments is threefold: 1. assess the impact of the adversarial inference on the target task of counterfactual fairness; 2. compare our two proposals for counterfactual bias mitigation (i.e., using a uniform distribution or a dynamic adversarial one for sampling counterfactual sensitive values); and 3. assess the impact of the control parameter λ from Eq. 9.
The predictive model used in our experiments is an MLP with 3 hidden layers. The adversarial network P_φ from Eq. 10 is an MLP with 2 hidden layers and ReLU activations. For all our experiments, a single counterfactual per individual is sampled at each iteration during the training of the models. Optimization is performed using Adam.
Tables III and IV report results for the discrete and the continuous case respectively. The inference column refers to the inference process used to sample the counterfactuals with which the predictive model is learned. For each setting, we use the best configuration from Tables I and II. The mitigation column refers to the type of counterfactual mitigation used for the results: no mitigation or L_CF (Eq. 5) for the discrete case; no mitigation, L_CF (Eq. 8) or L_DynCF (Eq. 10) for the continuous setting. Results are reported in terms of accuracy (for the discrete case) or MSE (for the continuous case) and of counterfactual fairness (CF). The CF measure is defined, for the m_test individuals of the test set, as:
$$CF = \frac{1}{m_{test}} \sum_{i=1}^{m_{test}} \mathbb{E}_{(\hat{x}', a') \sim C(i)}\big[\Delta\big(h_\theta(x_i, a_i),\, h_\theta(\hat{x}', a')\big)\big] \tag{11}$$
where C(i) is the set of counterfactual samples for the i-th individual of the test set. This corresponds to counterfactuals sampled with the adversarial inference process defined at step 1 (with the best configuration reported in Tables I and II). As discussed above, the synthetic data sets allow one to rely on "true" counterfactuals for the computation of counterfactual fairness, rather than relying on an inference process which may include some bias. For these data sets, we thus also report an additional RealCF metric, defined as in Eq. 11, but using counterfactuals sampled from the true codes used to generate the test data. For both CF and RealCF, and for every i from the test set, |C(i)| equals 1 in the binary settings and 1000 in the continuous one. Δ is a cost function between two predictions: the logit pairing cost for the binary case (more details in Section II, step 2) and a simple squared difference for the continuous setting.
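Computing the CF measure is then a direct translation of Eq. 11; the sketch below uses the squared-difference Δ of the continuous setting, with h a trained predictor and cf[i] the list of counterfactual (x′, a′) pairs for test individual i (both assumed given).

```python
import numpy as np

def cf_measure(h, x_test, a_test, cf):
    """CF of Eq. 11 with a squared-difference Delta (continuous setting)."""
    total = 0.0
    for i in range(len(x_test)):
        base = h(x_test[i], a_test[i])
        total += np.mean([(base - h(xc, ac)) ** 2 for (xc, ac) in cf[i]])
    return total / len(x_test)
```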
Results from both tables first confirm the good behavior of our inference model from step 1, which yields substantially better results than the other inference processes for both the discrete and the continuous settings. Our adversarial counterfactual inference framework produces codes that can easily be used to generate relevant counterfactual individuals. For this observation, the most important results are those given for the synthetic scenarios, for which the RealCF metric shows good results for our method; this metric is highly reliable, since it relies on counterfactuals sampled from the true codes of the individuals.
Secondly, the results of Table IV show that, even in the continuous setting where the enumeration of all values of Ω_A is not possible, it is possible to define counterfactual mitigation methods such as our approaches L_CF and L_DynCF. These two methods, used in conjunction with our adversarial inference, give significantly better results than no mitigation on every data set. Interestingly, we also observe that L_DynCF improves over L_CF, which shows the relevance of the proposed dynamic sampling process. Furthermore, note that we can reasonably expect even larger gains over L_CF on data with higher-dimensional sensitive attributes.
To illustrate the impact of the hyperparameter λ on the prediction accuracy (MSE) and the counterfactual fairness estimation (CF), we plot results for 10 different values of λ (5 runs each) in Figure 4 for the Crime data set. It clearly confirms that higher values of λ produce fairer predictions, while a value near 0 allows one to focus only on optimizing the predictor loss. This is also observable in Fig. 3, which plots counterfactual predictions for a specific instance i from the test set. Higher values of λ clearly produce more stable counterfactual predictions.
In Figure 5, we consider the distribution of the considered counterfactual samples w.r.t. the sensitive variable A, for the uniform sampling strategy from P(A) and the dynamic strategy as defined in Eq. 9. This is done on the Motor data set and for a specific randomly sampled instance i with sensitive attribute a_i = 75, at a given point of the optimization, far before convergence (the model is clearly unfair at this point). The blue points are the counterfactual fairness estimations (h_θ(x̂_{i,A←a_i}, a_i) − h_θ(x̂_{i,A←a′}, a′)) for each counterfactual value a′ (1000 points) sampled from the uniform distribution P(A). The red points are the counterfactual fairness estimations for counterfactuals corresponding to a′ values (30 points) sampled from our dynamic distribution P_φ(a′|u) = N(µ_φ(u), σ²_φ(u) I), where φ are the parameters of the adversarial network which optimizes the best mean and variance for each latent code u (µ_φ(u) and σ²_φ(u)). Being optimized to maximize the error at each gradient step, the red points are sampled at lower values of A, where the error is the most important. More importantly, very few points are sampled in the easy area, near the true sensitive value of i, which is 75. This demonstrates the good behavior of our dynamic sampling process.

Figure 5: Dynamic sampling visualization for a randomly sampled individual whose age A is 75. Red points are sampled counterfactuals from the dynamic distribution P_φ(a′|u), with u the inferred confounder for this individual.
V. CONCLUSION
We developed a new adversarial learning approach for counterfactual fairness. To the best of our knowledge, this is the first such method that can be applied to continuous sensitive attributes. The method proved to be very efficient for different dependence metrics on various artificial and real-world data sets, for both the discrete and the continuous settings. Finally, our proposal is applicable to any causal graph to achieve generic counterfactual fairness. As future work, it might be interesting to consider a generalization of our proposal to path-specific [7] counterfactual fairness in the continuous case.
Figure 1: Graphical causal model. The unobserved confounder U has an effect on both X and Y.

Figure 2: Architecture of our counterfactual inference process. Blue arrows represent the back-propagated gradients for the minimization of the global objective. The red one corresponds to the gradients for the adversarial optimization. Circles are observed variables, squares are samples from the neural distributions.

Figure 3: Impact of λ (Crime data set) on a specific instance i. Blue points are counterfactual predictions h_θ(x̂_{i,A←a′}) from 1000 points A←a′ generated randomly. The red cross represents the prediction h_θ(x̂_{i,A←a}) for the real A = a of instance i.

Figure 4: Impact of the hyperparameter λ (Crime data set).
Table I: Inference results in the discrete case

                                  Adult UCI               Compas                  Bank                    Synthetic Scenario
                                  Loss X  Loss Y  HGR     Loss X  Loss Y  HGR     Loss X  Loss Y  HGR     Loss X  Loss Y  HGR
({x, y, a})
No Constraint, q(u|x, y, a)       0.0781  0.0006  0.6984  0.0278  0.0041  0.6952  0.0963  0.0001  0.5988  0.2681  0.0085  0.9725
Adv. Constraint, q(u|x, y, a)     0.1091  0.0009  0.5453  0.0254  0.0020  0.2693  0.2038  0.0005  0.3423  0.2669  0.0721  0.4167
MMD wrt P(U), q(u|x, y, a)        0.1286  0.0012  0.7017  0.0252  0.0029  0.6565  0.2002  0.0002  0.4521  0.2535  0.0839  0.6623
MMD wrt U_a, q(u|x, y, a)         0.0938  0.0009  0.7181  0.0259  0.0098  0.8892  0.1263  0.0003  0.5188  0.2762  0.0351  0.5697
({x, y})
No Constraint, q(u|x, y)          0.0786  0.0008  0.6077  0.0274  0.0133  0.3817  0.0957  0.0001  0.4989  0.2577  0.0022  0.6418
Adv. Constraint, q(u|x, y)        0.1272  0.0329  0.1811  0.0245  0.0013  0.1728  0.1858  0.0073  0.2476  0.2649  0.1015  0.4521
MMD wrt P(U), q(u|x, y)           0.1287  0.0016  0.6092  0.0259  0.0055  0.4470  0.1898  0.0003  0.3716  0.2567  0.0885  0.6868
MMD wrt U_a, q(u|x, y)            0.0872  0.0013  0.6852  0.0266  0.0094  0.3109  0.1415  0.0003  0.3929  0.2674  0.0553  0.4473
({x, a})
No Constraint, q(u|x, a)          0.0982  0.3534  0.6689  0.0288  0.8246  0.3726  0.1391  0.2101  0.5572  0.2686  0.0128  0.7040
Adv. Constraint, q(u|x, a)        0.0995  0.3462  0.5259  0.0271  0.6889  0.4344  0.1880  0.2110  0.3061  0.2589  0.0980  0.4264
MMD wrt P(U), q(u|x, a)           0.1308  0.3559  0.3586  0.0288  0.7611  0.4365  0.2141  0.2129  0.3386  0.2506  0.1176  0.6298
MMD wrt U_a, q(u|x, a)            0.0940  0.3603  0.5811  0.0278  0.7314  0.3345  0.1485  0.2135  0.5536  0.2584  0.1076  0.4692
Table II: Inference results in the continuous case

                       US Census               Motor                   Crime                   Synthetic Scenario
                       Loss X  Loss Y  HGR     Loss X  Loss Y  HGR     Loss X  Loss Y  HGR     Loss X  Loss Y  HGR
No Cons. q(u|x, y, a)  0.1685  0.0019  0.5709  0.2526  0.0024  0.9023  0.4558  0.0016  0.9059  0.6788  0.0076  0.9523
No Cons. q(u|x, y)     0.1690  0.0005  0.4163  0.3068  0.0034  0.9479  0.4523  0.0018  0.8998  0.6495  0.0003  0.6227
No Cons. q(u|x, a)     0.1726  0.2886  0.8252  0.3377  0.9381  0.9728  0.4634  0.3999  0.9076  0.6751  0.4554  0.8650
Adv q(u|x, y, a)       0.1617  0.0004  0.3079  0.4702  0.0035  0.2941  0.4865  0.0701  0.5268  0.6804  0.0088  0.2280
Adv q(u|x, y)          0.1663  0.0009  0.2980  0.3694  0.0057  0.3314  0.4835  0.0571  0.6024  0.6633  0.1196  0.3175
Adv q(u|x, a)          0.1828  0.2891  0.3285  0.4706  0.9878  0.2478  0.4904  0.3933  0.5810  0.6862  0.8819  0.5148
Table III: Counterfactual fairness results for the discrete case

Inference           Mitigation  Adult UCI          Compas             Bank               Synthetic Scenario
                                Accuracy  CF       Accuracy  CF       Accuracy  CF       Accuracy  CF      Real CF
Without Constraint  None        84.22%    0.0096   67.12%    0.0102   90.64%    0.0369   99.49%    0.1087  0.1810
                    L_CF        83.28%    0.0008   66.20%    0.0051   90.46%    0.0024   95.89%    0.0757  0.1327
MMD                 None        84.22%    0.0116   67.12%    0.0076   90.64%    0.0469   99.49%    0.1074  0.1775
                    L_CF        83.84%    0.0024   65.91%    0.0041   90.64%    0.0043   99.29%    0.0893  0.1557
Adversarial         None        84.22%    0.0114   67.12%    0.0118   90.64%    0.0376   99.49%    0.1426  0.1838
                    L_CF        83.74%    0.0002   66.73%    0.0001   90.60%    0.000    93.19%    0.0001  0.0014
Table IV: Counterfactual fairness results for the continuous case

Inference           Mitigation  US Census        Motor            Crime            Synthetic Scenario
                                MSE     CF       MSE     CF       MSE     CF       MSE     CF      Real CF
Adversarial         None        0.274   0.0615   0.938   0.0285   0.412   0.7412   0.454   0.2490  1.1248
                    L_CF        0.289   0.0009   0.941   0.0009   0.452   0.0154   0.572   0.0014  0.2013
                    L_DynCF     0.290   0.0008   0.940   0.0005   0.445   0.0076   0.568   0.0013  0.2000
Without Constraint  None        0.274   0.0433   0.938   0.0271   0.381   0.7219   0.454   0.2919  1.1338
                    L_CF        0.307   0.0010   0.939   0.0021   0.407   0.2938   0.531   0.1968  0.3303
                    L_DynCF     0.310   0.0008   0.942   0.0016   0.418   0.2881   0.546   0.1743  0.3188
REFERENCES

[1] T. Bolukbasi, K.-W. Chang, J. Y. Zou, V. Saligrama, and A. T. Kalai, "Man is to computer programmer as woman is to homemaker? Debiasing word embeddings," in Advances in Neural Information Processing Systems, 2016, pp. 4349-4357.
[2] J. Angwin, J. Larson, S. Mattu, and L. Kirchner, "Machine bias," ProPublica, May 23, 2016.
[3] D. Pedreshi, S. Ruggieri, and F. Turini, "Discrimination-aware data mining," in KDD'08, 2008, pp. 560-568.
[4] T. Calders, F. Kamiran, and M. Pechenizkiy, "Building classifiers with independency constraints," in ICDM Workshops, IEEE, 2009, pp. 13-18.
[5] M. B. Zafar, I. Valera, M. G. Rodriguez, and K. P. Gummadi, "Fairness constraints: Mechanisms for fair classification," arXiv preprint arXiv:1507.05259, 2015.
[6] M. J. Kusner, J. Loftus, C. Russell, and R. Silva, "Counterfactual fairness," in Advances in Neural Information Processing Systems, 2017, pp. 4066-4076.
[7] S. Chiappa, "Path-specific counterfactual fairness," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, 2019, pp. 7801-7808.
[8] J. H. Hinnefeld, P. Cooman, N. Mammo, and R. Deese, "Evaluating fairness metrics in the presence of dataset bias," arXiv preprint arXiv:1809.09245, 2018.
[9] M. Hardt, E. Price, and N. Srebro, "Equality of opportunity in supervised learning," in Advances in Neural Information Processing Systems, 2016, pp. 3315-3323.
[10] C. Dwork, M. Hardt, T. Pitassi, O. Reingold, and R. Zemel, "Fairness through awareness," in ITCS'12, 2012, pp. 214-226.
[11] B. H. Zhang, B. Lemoine, and M. Mitchell, "Mitigating unwanted biases with adversarial learning," in AAAI'18, 2018, pp. 335-340.
[12] V. Grari, B. Ruf, S. Lamprier, and M. Detyniecki, "Fairness-aware neural Rényi minimization for continuous features," arXiv:1911.04929, 2019.
[13] F. Kamiran and T. Calders, "Data preprocessing techniques for classification without discrimination," Knowledge and Information Systems, vol. 33, no. 1, pp. 1-33, 2012.
[14] R. K. Bellamy, K. Dey, M. Hind, S. C. Hoffman, S. Houde, K. Kannan, P. Lohia, J. Martino, S. Mehta, A. Mojsilovic et al., "AI Fairness 360: An extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias," arXiv preprint arXiv:1810.01943, 2018.
[15] F. P. Calmon, D. Wei, K. N. Ramamurthy, and K. R. Varshney, "Optimized data pre-processing for discrimination prevention," arXiv preprint arXiv:1704.03354, 2017.
[16] L. E. Celis, L. Huang, V. Keswani, and N. K. Vishnoi, "Classification with fairness constraints: A meta-algorithm with provable guarantees," in Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, pp. 319-328.
[17] C. Wadsworth, F. Vera, and C. Piech, "Achieving fairness through adversarial learning: an application to recidivism prediction," arXiv:1807.00199, 2018.
[18] G. Louppe, M. Kagan, and K. Cranmer, "Learning to pivot with adversarial networks," in Advances in Neural Information Processing Systems, 2017, pp. 981-990.
[19] J. Chen, N. Kallus, X. Mao, G. Svacha, and M. Udell, "Fairness under unawareness: Assessing disparity when protected class is unobserved," in Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, pp. 339-348.
[20] M. Kearns, S. Neel, A. Roth, and Z. S. Wu, "Preventing fairness gerrymandering: Auditing and learning for subgroup fairness," arXiv preprint arXiv:1711.05144, 2017.
[21] V. Grari, B. Ruf, S. Lamprier, and M. Detyniecki, "Fair adversarial gradient tree boosting," in ICDM'19, 2019, pp. 1060-1065.
[22] T. Adel, I. Valera, Z. Ghahramani, and A. Weller, "One-network adversarial fairness," in AAAI'19, vol. 33, 2019, pp. 2412-2420.
[23] J. Mary, C. Calauzènes, and N. E. Karoui, "Fairness-aware learning for continuous attributes and treatments," in ICML'19, 2019, pp. 4382-4391.
[24] J. Pearl et al., "Causal inference in statistics: An overview," Statistics Surveys, vol. 3, pp. 96-146, 2009.
[25] D. Madras, E. Creager, T. Pitassi, and R. Zemel, "Fairness through causal awareness: Learning causal latent-variable models for biased data," in Proceedings of the Conference on Fairness, Accountability, and Transparency, 2019, pp. 349-358.
[26] S. Pfohl, T. Duan, D. Y. Ding, and N. H. Shah, "Counterfactual reasoning for fair clinical risk prediction," arXiv preprint arXiv:1907.06260, 2019.
[27] C. Louizos, U. Shalit, J. M. Mooij, D. Sontag, R. Zemel, and M. Welling, "Causal effect inference with deep latent-variable models," in Advances in Neural Information Processing Systems, 2017, pp. 6446-6456.
[28] C. Russell, M. J. Kusner, J. Loftus, and R. Silva, "When worlds collide: integrating different counterfactual assumptions in fairness," in Advances in Neural Information Processing Systems, 2017, pp. 6414-6423.
[29] Stan Development Team et al., "RStan: the R interface to Stan," R package version, vol. 2, no. 1, 2016.
[30] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[31] U. Shalit, F. D. Johansson, and D. Sontag, "Estimating individual treatment effect: generalization bounds and algorithms," in ICML'17, 2017, pp. 3076-3085.
[32] A. Gretton, K. M. Borgwardt, M. J. Rasch, B. Schölkopf, and A. Smola, "A kernel two-sample test," Journal of Machine Learning Research, vol. 13, pp. 723-773, 2012.
[33] S. Zhao, J. Song, and S. Ermon, "InfoVAE: Information maximizing variational autoencoders," arXiv preprint arXiv:1706.02262, 2017.
[34] X. Chen, D. P. Kingma, T. Salimans, Y. Duan, P. Dhariwal, J. Schulman, I. Sutskever, and P. Abbeel, "Variational lossy autoencoder," arXiv preprint arXiv:1611.02731, 2016.
[35] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio, "Generating sentences from a continuous space," arXiv:1511.06349, 2015.
[36] C. K. Sønderby, T. Raiko, L. Maaløe, S. K. Sønderby, and O. Winther, "Ladder variational autoencoders," in NIPS'16, 2016, pp. 3738-3746.
[37] A. Makhzani, J. Shlens, N. Jaitly, and I. J. Goodfellow, "Adversarial autoencoders," CoRR, vol. abs/1511.05644, 2015. [Online]. Available: http://arxiv.org/abs/1511.05644
[38] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial networks," pp. 1-9, 2014.
[39] D. Dua and C. Graff, "UCI Machine Learning Repository," http://archive.ics.uci.edu/ml, 2017.
[40] S. Moro, P. Cortez, and P. Rita, "A data-driven approach to predict the success of bank telemarketing," Decision Support Systems, vol. 62, 2014.
[41] US Census Bureau, "US census demographic data," https://data.census.gov/cedsci/, online; accessed 03 April 2019.
[42] The Institute of Actuaries of France, "Pricing game 2015," https://freakonometrics.hypotheses.org/20191, online; accessed 14 August 2019.
[43] C. Louizos, K. Swersky, Y. Li, M. Welling, and R. Zemel, "The variational fair autoencoder," arXiv preprint arXiv:1511.00830, 2015.
[44] L. Neuberg, "Causality: models, reasoning, and inference, by Judea Pearl, Cambridge University Press, 2000," Econometric Theory, vol. 19, pp. 675-685, 2003.
[45] I.-C. Yeh and C.-h. Lien, "The comparisons of data mining techniques for the predictive accuracy of probability of default of credit card clients," Expert Systems with Applications, vol. 36, no. 2, pp. 2473-2480, 2009.
| [] |
[
"MAX CUT in Weighted Random Intersection Graphs and Discrepancy of Sparse Random Set Systems",
"MAX CUT in Weighted Random Intersection Graphs and Discrepancy of Sparse Random Set Systems"
] | [
"Sotiris Nikoletseas ",
"Christoforos Raptopoulos ",
"Paul Spirakis ",
"\nComputer Engineering & Informatics Department\nComputer Technology Institute\nComputer Engineering & Informatics Department\nUniversity of Patras\nGreece, Greece\n",
"\nDepartment of Computer Science\nUniversity of Patras\nGreece\n",
"\nComputer Engineering & Informatics Department\nUniversity of Liverpool\nUK\n",
"\nComputer Technology Institute\nUniversity of Patras\nGreece, Greece\n"
] | [
"Computer Engineering & Informatics Department\nComputer Technology Institute\nComputer Engineering & Informatics Department\nUniversity of Patras\nGreece, Greece",
"Department of Computer Science\nUniversity of Patras\nGreece",
"Computer Engineering & Informatics Department\nUniversity of Liverpool\nUK",
"Computer Technology Institute\nUniversity of Patras\nGreece, Greece"
] | [] | Let V be a set of n vertices, M a set of m labels, and let R be an m × n matrix of independent Bernoulli random variables with probability of success p; columns of R are incidence vectors of label sets assigned to vertices. A random instance G(V, E, R T R) of the weighted random intersection graph model is constructed by drawing an edge with weight equal to the number of common labels (namely [R T R]v,u) between any two vertices u, v for which this weight is strictly larger than 0. In this paper we study the average case analysis of Weighted Max Cut, assuming the input is a weighted random intersection graph, i.e. given G(V, E, R T R) we wish to find a partition of V into two sets so that the total weight of the edges having exactly one endpoint in each set is maximized.In particular, we initially prove that the weight of a maximum cut of G(V, E, R T R) is concentrated around its expected value, and then show that, when the number of labels is much smaller than the number of vertices (in particular, m = n α , α < 1), a random partition of the vertices achieves asymptotically optimal cut weight with high probability. Furthermore, in the case n = m and constant average degree (i.e. p = Θ(1) n ), we show that with high probability, a majority type randomized algorithm outputs a cut with weight that is larger than the weight of a random cut by a multiplicative constant strictly larger than 1. Then, we formally prove a connection between the computational problem of finding a (weighted) maximum cut in G(V, E, R T R) and the problem of finding a 2-coloring that achieves minimum discrepancy for a set system Σ with incidence matrix R (i.e. minimum imbalance over all sets in Σ). We exploit this connection by proposing a (weak) bipartization algorithm for the case m = n, p = Θ(1) n that, when it terminates, its output can be used to find a 2-coloring with minimum discrepancy in a set system with incidence matrix R. In fact, with high probability, the latter 2-coloring corresponds to a bipartition with maximum cut-weight in G(V, E, R T R). Finally, we prove that our (weak) bipartization algorithm terminates in polynomial time, with high probability, at least when p = c n , c < 1. | 10.1007/s00453-023-01121-3 | [
"https://arxiv.org/pdf/2009.01567v2.pdf"
] | 221,470,273 | 2009.01567 | e9ae9999674852c4329656014f9adb51adf13afb |
MAX CUT in Weighted Random Intersection Graphs and Discrepancy of Sparse Random Set Systems
Sotiris Nikoletseas
Christoforos Raptopoulos
Paul Spirakis
Computer Engineering & Informatics Department
Computer Technology Institute
Computer Engineering & Informatics Department
University of Patras
Greece, Greece
Department of Computer Science
University of Patras
Greece
Computer Engineering & Informatics Department
University of Liverpool
UK
Computer Technology Institute
University of Patras
Greece, Greece
MAX CUT in Weighted Random Intersection Graphs and Discrepancy of Sparse Random Set Systems
10.4230/LIPIcs. 2012 ACM Subject Classification: Mathematics of computing → Random graphs. Keywords and phrases: Random Intersection Graphs, Maximum Cut, Discrepancy.
Let V be a set of n vertices, M a set of m labels, and let R be an m × n matrix of independent Bernoulli random variables with probability of success p; columns of R are incidence vectors of label sets assigned to vertices. A random instance G(V, E, R T R) of the weighted random intersection graph model is constructed by drawing an edge with weight equal to the number of common labels (namely [R T R]v,u) between any two vertices u, v for which this weight is strictly larger than 0. In this paper we study the average case analysis of Weighted Max Cut, assuming the input is a weighted random intersection graph, i.e. given G(V, E, R T R) we wish to find a partition of V into two sets so that the total weight of the edges having exactly one endpoint in each set is maximized.In particular, we initially prove that the weight of a maximum cut of G(V, E, R T R) is concentrated around its expected value, and then show that, when the number of labels is much smaller than the number of vertices (in particular, m = n α , α < 1), a random partition of the vertices achieves asymptotically optimal cut weight with high probability. Furthermore, in the case n = m and constant average degree (i.e. p = Θ(1) n ), we show that with high probability, a majority type randomized algorithm outputs a cut with weight that is larger than the weight of a random cut by a multiplicative constant strictly larger than 1. Then, we formally prove a connection between the computational problem of finding a (weighted) maximum cut in G(V, E, R T R) and the problem of finding a 2-coloring that achieves minimum discrepancy for a set system Σ with incidence matrix R (i.e. minimum imbalance over all sets in Σ). We exploit this connection by proposing a (weak) bipartization algorithm for the case m = n, p = Θ(1) n that, when it terminates, its output can be used to find a 2-coloring with minimum discrepancy in a set system with incidence matrix R. In fact, with high probability, the latter 2-coloring corresponds to a bipartition with maximum cut-weight in G(V, E, R T R). Finally, we prove that our (weak) bipartization algorithm terminates in polynomial time, with high probability, at least when p = c n , c < 1.
Introduction
Given an undirected graph G(V, E), the Max Cut problem asks for a partition of the vertices of G into two sets, such that the number of edges with exactly one endpoint in each set of the partition is maximized. This problem can be naturally generalized for weighted (undirected) graphs. A weighted graph is denoted by G(V, E, W), where V is the set of vertices, E is the set of edges and W is a weight matrix, which specifies a weight $W_{i,j} = w_{i,j}$ for each pair of vertices $i, j$. In particular, we assume that $W_{i,j} = 0$ for each pair $\{i, j\} \notin E$.
▶ Definition 1 (Weighted Max Cut). Given a weighted graph G(V, E, W), find a partition of V into two (disjoint) subsets A, B, so as to maximize the cumulative weight of the edges of G having one endpoint in A and the other in B.
Weighted Max Cut is fundamental in theoretical computer science and is relevant in various graph layout and embedding problems [10]. Furthermore, it also has many practical applications, including infrastructure cost and circuit layout optimization in network and VLSI design [19], minimizing the Hamiltonian of a spin glass model in statistical physics [3], and data clustering [18]. In the worst case Max Cut (and also Weighted Max Cut) is APX-hard, meaning that there is no polynomial-time approximation scheme that finds a solution that is arbitrarily close to the optimum, unless P = NP [17].
The average case analysis of Max Cut, namely the case where the input graph is chosen at random from a probabilistic space of graphs, is also of considerable interest and is further motivated by the desire to justify and understand why various graph partitioning heuristics work well in practical applications. In most research works the input graphs are drawn from the Erdős-Rényi random graphs model $G_{n,m}$, i.e. random instances are drawn equiprobably from the set of simple undirected graphs on $n$ vertices and $m$ edges, where $m$ is a linear function of $n$ (see also [13, 7] for the average case analysis of Max Cut and its generalizations with respect to other random graph models). One of the earliest results in this area is that Max Cut undergoes a phase transition on $G_{n,\gamma n}$ at $\gamma = \frac{1}{2}$ [8], in that the difference between the number of edges of the graph and the Max-Cut size is $O(1)$, for $\gamma < \frac{1}{2}$, while it is $\Omega(n)$, when $\gamma > \frac{1}{2}$. For large values of $\gamma$, it was proved in [4] that the maximum cut size of $G_{n,\gamma n}$ normalized by the number of vertices $n$ reaches an absolute limit in probability as $n \to \infty$, but it was not until recently that the latter limit was established and expressed analytically in [9], using the interpolation method; in particular, it was shown to be asymptotically equal to $\big(\frac{\gamma}{2} + P^*\sqrt{\frac{\gamma}{2}}\big)n$, where $P^* \approx 0.7632$. We note however that these results are existential, and thus do not lead to an efficient approximation scheme for finding a tight approximation of the maximum cut with large enough probability when the input graph is drawn from $G_{n,\gamma n}$. An efficient approximation scheme in this case was designed in [8], and it was proved that, with high probability, this scheme constructs a cut with at least $\big(\frac{\gamma}{2} + 0.37613\sqrt{\gamma}\big)n = \big(1 + 0.75226\frac{1}{\sqrt{\gamma}}\big)\frac{\gamma}{2}n$ edges, noting that $\frac{\gamma}{2}n$ is the size of a random cut (in which each vertex is placed independently and equiprobably in one of the two sets of the partition). Whether there exists an efficient approximation scheme that can close the gap between the approximation guarantee of [8] and the limit of [9] remains an open problem.
In this paper, we study the average case analysis of Weighted Max Cut when input graphs are drawn from the generalization of another well-established model of random graphs, namely the weighted random intersection graphs model (the unweighted version of the model was initially defined in [15]). In this model, edges are formed through the intersection of the label sets assigned to each vertex, and edge weights are equal to the number of common labels between the edge's endpoints.
▶ Definition 2 (Weighted random intersection graph). Consider a universe $M = \{1, 2, \ldots, m\}$ of labels and a set of $n$ vertices $V$. We define the $m \times n$ representation matrix $R$ whose entries are independent Bernoulli random variables with probability of success $p$. For $\ell \in M$ and $v \in V$, we say that vertex $v$ has chosen label $\ell$ iff $R_{\ell,v} = 1$. Furthermore, we draw an edge with weight $[R^TR]_{v,u}$ between any two vertices $u, v$ for which this weight is strictly larger than 0. The weighted graph $G = (V, E, R^TR)$ is then a random instance of the weighted random intersection graphs model $G_{n,m,p}$.
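The sampling procedure of Definition 2 is straightforward to reproduce; the following is a minimal NumPy sketch (the function name and parameters are our own, not part of the paper):

```python
import numpy as np

def sample_weighted_rig(n, m, p, seed=None):
    """Sample a weighted random intersection graph from G_{n,m,p}.

    Returns the m x n representation matrix R of independent Bernoulli(p)
    entries, and the weight matrix W = R^T R; W[u, v] counts the labels
    shared by u and v (diagonal entries equal |S_v| and play no role in cuts).
    """
    rng = np.random.default_rng(seed)
    R = (rng.random((m, n)) < p).astype(int)
    W = R.T @ R
    return R, W

# Example: a sparse instance with m = n and p = c/n, c = 2.
R, W = sample_weighted_rig(n=12, m=12, p=2.0 / 12, seed=0)
```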
Random intersection graphs are relevant to and capture quite nicely social networking; vertices are the individual actors and labels correspond to specific types of interdependency. Other applications include oblivious resource sharing in a (general) distributed setting, efficient and secure communication in sensor networks [20], interactions of mobile agents traversing the web etc. (see e.g. the survey papers [6,16] for further motivation and recent research related to random intersection graphs). In all these settings, weighted random intersection graphs, in particular, also capture the strength of connections between actors (e.g. in a social network, individuals having several characteristics in common have more intimate relationships than those that share only a few common characteristics). One of the most celebrated results in this area is equivalence (measured in terms of total variation distance) of random intersection graphs and Erdős-Rényi random graphs when the number of labels satisfies m = n α , α > 6 [12]. This bound on the number of labels was improved in [22], by showing equivalence of sharp threshold functions among the two models for α ≥ 3. Similarity of the two models has been proved even for smaller values of α (e.g. for any α > 1) in the form of various translation results (see e.g. Theorem 1 in [21]), suggesting that some algorithmic ideas developed for Erdős-Rényi random graphs also work for random intersection graphs (and also weighted random intersection graphs).
In view of this, in the present paper we study the average case analysis of Weighted Max Cut under the weighted random intersection graphs model, for the range m = n α , α ≤ 1 for two main reasons: First, the average case analysis of Max Cut has not been considered in the literature so far when the input is a drawn from the random intersection graphs model, and thus the asymptotic behaviour of the maximum cut remains unknown especially for the range of values where random intersection graphs and Erdős-Rényi random graphs differ the most. Furthermore, studying a model where we can implicitly control its intersection number (indeed m is an obvious upper bound on the number of cliques that can cover all edges of the graph) may help understand algorithmic bottlenecks for finding maximum cuts in Erdős-Rényi random graphs.
Second, we note that the representation matrix $R$ of a weighted random intersection graph can be used to define a random set system $\Sigma$ consisting of $m$ sets $\Sigma = \{L_1, \ldots, L_m\}$, where $L_\ell$ is the set of vertices that have chosen label $\ell$; we say that $R$ is the incidence matrix of $\Sigma$. Therefore, there is a natural connection between Weighted Max Cut and the discrepancy of such random set systems, which we formalize in this paper. In particular, given a set system $\Sigma$ with incidence matrix $R$, its discrepancy is defined as $disc(\Sigma) = \min_{x\in\{\pm 1\}^n} \max_{L\in\Sigma} \big|\sum_{v\in L} x_v\big| = \min_{x\in\{\pm 1\}^n} \|Rx\|_\infty$, i.e. it is the minimum imbalance of all sets in $\Sigma$ over all 2-colorings $x$. Recent work on the discrepancy of random rectangular matrices defined as above [1] has shown that, when the number of labels (sets) $m$ satisfies $n \ge 0.73\, m \log m$, the discrepancy of $\Sigma$ is at most 1 with high probability. The proof of the main result in [1] is based on a conditional second moment method combined with Stein's method of exchangeable pairs, and improves upon a Fourier analytic result of [14], and also upon previous results in [11], [20]. The design of an efficient algorithm that can find a 2-coloring having discrepancy $O(1)$ in this range still remains an open problem. Approximation algorithms for a similar model for random set systems were designed and analyzed in [2]; however, the algorithmic ideas there do not apply in our case.
Our Contribution
In this paper, we introduce the model of weighted random intersection graphs and we study the average case analysis of Weighted Max Cut through the prism of Discrepancy of random set systems. We formalize the connection between these two combinatorial problems for the case of arbitrary weighted intersection graphs in Corollary 4. We prove that, given a weighted intersection graph G = (V, E, R T R) with representation matrix R, and a set system with incidence matrix R, such that disc(Σ) ≤ 1, a 2-coloring has maximum cut weight in G if and only if it achieves minimum discrepancy in Σ. In particular, Corollary 4 applies in the range of values considered in [1] (i.e. n ≥ 0.73m log m), and thus any algorithm that finds a maximum cut in G(V, E, R T R) with large enough probability can also be used to find a 2-coloring with minimum discrepancy in a set system Σ with incidence matrix R, with the same probability of success.
We then consider weighted random intersection graphs in the case $m = n^\alpha$, $\alpha \le 1$, and we prove that the maximum cut weight of a random instance $G(V, E, R^TR)$ of $G_{n,m,p}$ concentrates around its expected value (see Theorem 5). In particular, with high probability (whp, i.e. with probability tending to 1 as $n \to \infty$) over the choices of $R$, $\text{Max-Cut}(G) \sim \mathbb{E}_R[\text{Max-Cut}(G)]$, where $\mathbb{E}_R$ denotes expectation with respect to $R$. The proof is based on the Efron-Stein inequality for upper bounding the variance of the maximum cut. As a consequence of our concentration result, we prove in Theorem 6 that, in the case $\alpha < 1$, a random 2-coloring (i.e. bipartition) $x^{(rand)}$, in which each vertex chooses its color independently and equiprobably, has cut weight asymptotically equal to $\text{Max-Cut}(G)$, with high probability over the choices of $x^{(rand)}$ and $R$.
The latter result on random cuts allows us to focus the analysis of our randomized algorithms of Section 4 on the case $m = n$ (i.e. $\alpha = 1$) and $p = \frac{c}{n}$, for some constant $c$ (see also the discussion at the end of subsection 3.1), where the assumptions of Theorem 6 do not hold. It is worth noting that, in this range of values, the expected weight of a fixed edge in a weighted random intersection graph is equal to $mp^2 = \Theta(1/n)$, and thus we hope that our work here will serve as an intermediate step towards understanding when algorithmic bottlenecks for Max Cut appear in sparse random graphs (especially Erdős-Rényi random graphs) with respect to the intersection number. In particular, we analyze a Majority Cut Algorithm 1 that extends the algorithmic idea of [8] to weighted intersection graphs as follows: vertices are colored sequentially (each color +1 or −1 corresponding to a different set in the partition of the vertices), and the $t$-th vertex is colored opposite to the sign of $\sum_{i\in[t-1]} [R^TR]_{i,t} x_i$, namely the total available weight of its incident edges, taking into account colors of adjacent vertices. Our average case analysis of the Majority Cut Algorithm shows that, when $m = n$ and $p = \frac{c}{n}$, for large constant $c$, with high probability over the choices of $R$, the expected weight of the constructed cut is at least $1 + \beta$ times larger than the expected weight of a random cut, for some constant $\beta = \beta(c) \ge \sqrt{\frac{16}{27\pi c^3}} - o(1)$. The fact that the lower bound on $\beta$ is inversely proportional to $c^{3/2}$ was to be expected, because, as $p$ increases, the approximation of the maximum cut that we get from the weight of a random cut improves (see also the discussion at the end of subsection 3.1).
In subsection 4.2 we propose a framework for finding maximum cuts in weighted random intersection graphs for $m = n$ and $p = \frac{c}{n}$, for constant $c$, by exploiting the connection between Weighted Max Cut and the problem of discrepancy minimization in random set systems. In particular, we design a Weak Bipartization Algorithm 2, that takes as input an intersection graph with representation matrix $R$ and outputs a subgraph that is "almost" bipartite. In fact, the input intersection graph is treated as a multigraph composed by overlapping cliques formed by the label sets $L_\ell = \{v : R_{\ell,v} = 1\}$, $\ell \in M$. The algorithm attempts to destroy all odd cycles of the input (except from odd cycles that are formed by labels with only two vertices) by replacing each clique induced by some label set $L_\ell$ by a random maximal matching. In Theorem 11 we prove that, with high probability over the choices of $R$, if the Weak Bipartization Algorithm terminates, then its output can be used to construct a 2-coloring that has minimum discrepancy in a set system with incidence matrix $R$, which also gives a maximum cut in $G(V, E, R^TR)$. It is worth noting that this does not follow from Corollary 4, because a random set system with incidence matrix $R$ has discrepancy larger than 1 with (at least) constant probability when $m = n$ and $p = \frac{c}{n}$. Our proof relies on a structural property of closed 0-strong vertex-label sequences (loosely defined as closed walks of edges formed by distinct labels) in the weighted random intersection graph $G(V, E, R^TR)$ (Lemma 8). Finally, in Theorem 12, we prove that our Weak Bipartization Algorithm terminates in polynomial time, with high probability, if the constant $c$ is strictly less than 1. Therefore, there is a polynomial time algorithm for finding weighted maximum cuts, with high probability, when the input is drawn from $G_{n,n,\frac{c}{n}}$, with $c < 1$. We believe that this part of our work may also be of interest regarding the design of efficient algorithms for finding minimum discrepancy colorings in random set systems.
Due to lack of space, some of the proofs are given in a clearly marked Appendix, to be read at the discretion of the program committee.
Notation and preliminary results
We denote weighted undirected graphs by G(V, E, W); in particular, V = V (G) (resp. E = E(G)) is the set of vertices (resp. set of edges) and W = W(G) is the weight matrix, i.e. W i,j = w i,j is the weight of (undirected) edge {i, j} ∈ E. We allow W to have non-zero diagonal entries, as these do not affect cut weights. We also denote the number of vertices by n, and we use the notation [n] = {1, 2, . . . , n}. We also use this notation to define parts of matrices, for example W [n],1 denotes the first column of the weight matrix. A bipartition of the sets of vertices is a partition of V into two sets A, B such that A ∩ B = ∅ and A ∪ B = V . Bipartitions correspond to 2-colorings, which we denote by vectors x such that
x i = +1 if i ∈ A and x i = −1 if i ∈ B.
Given a weighted graph $G(V, E, W)$, we denote by $\text{Cut}(G, x)$ the weight of a cut defined by a bipartition $x$, namely
$$\text{Cut}(G, x) = \sum_{\{i,j\}\in E:\ i\in A,\ j\in B} w_{i,j} = \frac{1}{4}\sum_{\{i,j\}\in E} w_{i,j}(x_i - x_j)^2.$$
The maximum cut of $G$ is $\text{Max-Cut}(G) = \max_{x\in\{-1,+1\}^n} \text{Cut}(G, x)$.
For a weighted random intersection graph G(V, E, R T R) with representation matrix R, we denote by S v the set of labels chosen by vertex v ∈ V , i.e. S v = {ℓ : R ℓ,v = 1}. Furthermore, we denote by L ℓ the set of vertices having chosen label ℓ, i.e. L ℓ = {v : R ℓ,v = 1}. Using this notation, the weight of an edge {v, u} ∈ E is |S v ∩ S u |; notice also that this is equal to 0 when {v, u} / ∈ E. We also note here that we may also think of a weighted random intersection graph as a simple weighted graph where, for any pair of vertices v, u, there are |S v ∩ S u | simple edges between them.
A set system $\Sigma$ defined on a set $V$ is a family of sets $\Sigma = \{L_1, L_2, \ldots, L_m\}$, where $L_\ell \subseteq V$, $\ell \in [m]$. The incidence matrix of $\Sigma$ is an $m \times n$ matrix $R = R(\Sigma)$, where, for any $\ell \in [m]$, $v \in [n]$, $R_{\ell,v} = 1$ if $v \in L_\ell$ and 0 otherwise. The discrepancy of $\Sigma$ with respect to a 2-coloring $x$ of the vertices in $V$ is $disc(\Sigma, x) = \max_{\ell\in[m]} \big|\sum_{v\in V} R_{\ell,v} x_v\big| = \|Rx\|_\infty$. The discrepancy of $\Sigma$ is $disc(\Sigma) = \min_{x\in\{-1,+1\}^n} disc(\Sigma, x)$.
It is well-known that the cut size of a bipartition of the set of vertices of a graph $G(V, E)$ into sets $A$ and $B$ is given by $\frac{1}{4}\sum_{\{i,j\}\in E}(x_i - x_j)^2$, where $x_i = +1$ if $i \in A$ and $x_i = -1$ if $i \in B$.
This can be naturally generalized for multigraphs and also for weighted graphs. In particular, the Max-Cut size of a weighted graph $G(V, E, W)$ is given by
$$\text{Max-Cut}(G) = \max_{x\in\{-1,+1\}^n} \frac{1}{4}\sum_{\{i,j\}\in E} W_{i,j}(x_i - x_j)^2. \tag{1}$$
In particular, we get the following Corollary (refer to Section A of the Appendix for the proof):
▶ Corollary 3. Let $G(V, E, R^TR)$ be a weighted intersection graph with representation matrix $R$. Then, for any $x \in \{-1,+1\}^n$,
$$\text{Cut}(G, x) = \frac{1}{4}\Big(\sum_{i,j\in[n]^2} [R^TR]_{i,j} - \|Rx\|^2\Big) \tag{2}$$
and so
$$\text{Max-Cut}(G) = \frac{1}{4}\Big(\sum_{i,j\in[n]^2} [R^TR]_{i,j} - \min_{x\in\{-1,+1\}^n} \|Rx\|^2\Big), \tag{3}$$
where $\|\cdot\|$ denotes the 2-norm. In particular, the expectation of the size of a random cut, where each entry of $x$ is independently and equiprobably either +1 or −1, is equal to
$$\mathbb{E}_x[\text{Cut}(G, x)] = \frac{1}{4}\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j},$$
where E x denotes expectation with respect to x.
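Equation (2) is easy to sanity-check numerically against the edge-by-edge definition of the cut weight; the sketch below (our own naming) does exactly that:

```python
import numpy as np

def cut_direct(W, x):
    """Cut(G, x) from the definition: an edge {i, j} crossing the cut has
    (x_i - x_j)^2 = 4; the ordered double sum visits every pair twice, hence
    the overall factor 1/8 (diagonal terms vanish since (x_i - x_i)^2 = 0)."""
    d = np.subtract.outer(x, x)
    return 0.125 * float(np.sum(W * d ** 2))

def cut_identity(R, x):
    """Cut(G, x) via equation (2): (1/4)(sum_{i,j} [R^T R]_{i,j} - ||R x||^2)."""
    W = R.T @ R
    return 0.25 * (float(W.sum()) - float(np.sum((R @ x) ** 2)))

rng = np.random.default_rng(0)
R = (rng.random((8, 10)) < 0.3).astype(int)
x = rng.choice((-1, 1), size=10)
assert abs(cut_direct(R.T @ R, x) - cut_identity(R, x)) < 1e-9
```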
Since $\sum_{i,j\in[n]^2} [R^TR]_{i,j}$ is fixed for any given representation matrix $R$, the above Corollary implies that, to find a bipartition of the vertex set $V$ that corresponds to a maximum cut, we need to find an $n$-dimensional vector in $\arg\min_{x\in\{-1,+1\}^n} \|Rx\|^2$. We thus get the following (refer to Section B of the Appendix for the proof):
▶ Corollary 4. Let $G(V, E, R^TR)$ be a weighted intersection graph with representation matrix $R$ and $\Sigma$ a set system with incidence matrix $R$. If $disc(\Sigma) \le 1$, then $x^* \in \arg\min_{x\in\{-1,+1\}^n} \|Rx\|^2$ if and only if $x^* \in \arg\min_{x\in\{-1,+1\}^n} disc(\Sigma, x)$. In particular, if the minimum discrepancy of $\Sigma$ is at most 1, a bipartition corresponds to a maximum cut iff it achieves minimum discrepancy.
Notice that the above result is not necessarily true when $disc(\Sigma) > 1$, since the minimum of $\|Rx\|$ could be achieved by 2-colorings with larger discrepancy than the optimal.
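For small n, the correspondence of Corollary 4 can be observed directly by exhaustive search; the following brute-force sketch (feasible only for tiny instances, names are ours) returns both minimizers so they can be compared whenever the reported minimum discrepancy is at most 1:

```python
import itertools
import numpy as np

def brute_force_minimizers(R):
    """Exhaustively search x in {-1,+1}^n for a minimizer of ||R x||^2 (a
    maximum cut of G(V, E, R^T R), by equation (3)) and a minimizer of
    ||R x||_inf (a minimum-discrepancy 2-coloring).  Exponential in n."""
    n = R.shape[1]
    best_norm, best_disc = np.inf, np.inf
    x_cut = x_disc = None
    for signs in itertools.product((-1, 1), repeat=n):
        y = R @ np.array(signs)
        norm2, d = int(y @ y), int(np.max(np.abs(y)))
        if norm2 < best_norm:
            best_norm, x_cut = norm2, signs
        if d < best_disc:
            best_disc, x_disc = d, signs
    return x_cut, x_disc, best_disc

# When best_disc <= 1, Corollary 4 says x_cut also achieves minimum
# discrepancy (and x_disc also achieves the maximum cut weight).
```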
Range of values for p
Concerning the success probability $p$, we note that, when $p = o\big(\sqrt{1/(nm)}\big)$, direct application of the results of [5] suggest that $G(V, E, R^TR)$ is chordal with high probability, but in fact the same proofs reveal that a stronger property holds, namely that there is no closed vertex-label sequence (refer to the precise definition in subsection 4.2) having distinct labels. Therefore, in this case, finding a bipartition with maximum cut weight is straightforward: indeed, one way to construct a maximum cut is to run our Weak Bipartization Algorithm 2 from subsection 4.2, and then to apply Theorem 11 (noting that the Weak Bipartization termination condition trivially holds, since the set $C_{odd}(G^{(b)})$ defined in subsection 4.2 is empty). Furthermore, even though we consider weighted graphs, we will also assume that $mp^2 = O(1)$, noting that, otherwise, $G(V, E, R^TR)$ will be almost complete with high probability (indeed, the unconditional edge existence probability is $1 - (1-p^2)^m$, which tends to 1 for $mp^2 = \omega(1)$). In particular, we will assume that $C_1\sqrt{\frac{1}{nm}} \le p \le C_2\sqrt{\frac{1}{m}}$, for arbitrary positive constants $C_1, C_2$; $C_1$ can be as small as possible, and $C_2$ can be as large as possible, provided $C_2\sqrt{\frac{1}{m}} \le 1$. We note that, when $p$ is asymptotically equal to the upper bound $C_2\sqrt{\frac{1}{m}}$, there is no constant weight upper bound that holds with high probability, whereas, when $p$ is asymptotically equal to the lower bound $C_1\sqrt{\frac{1}{nm}}$, all weights in the graph are bounded by a small constant with high probability. Our results in Section 3 assume this range of values for $p$, and thus graph instances may contain edges with large (but constant) weights. On the other hand, in the analysis of our randomized algorithms in Section 4, we assume $n = m$ and $p = \Theta\big(\frac{1}{n}\big)$; this range of values gives sparse graph instances (even though the distribution is different from sparse Erdős-Rényi random graphs).
Concentration of Max-Cut
In this section we prove that the size of the maximum cut in a weighted random intersection graph concentrates around its expected value. We note, however, that the following Theorem does not provide an explicit formula for the expected value of the maximum cut.

▶ Theorem 5. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model with $m = n^\alpha$, $\alpha \le 1$, and $C_1\sqrt{\frac{1}{nm}} \le p \le 1$, for arbitrary positive constant $C_1$, and let $R$ be its representation matrix. Then $\text{Max-Cut}(G) \sim \mathbb{E}_R[\text{Max-Cut}(G)]$ with high probability, where $\mathbb{E}_R$ denotes expectation with respect to $R$, i.e. $\text{Max-Cut}(G)$ concentrates around its expected value.

Proof. Let $G = G(V, E, R^TR)$ be a weighted random intersection graph, and let $D$ denote the (random) diagonal matrix containing all diagonal elements of $R^TR$. In particular, equation (3) of Corollary 3 can be written as
$$\text{Max-Cut}(G) = \frac{1}{4}\Big(\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j} - \min_{x\in\{-1,+1\}^n} x^T\big(R^TR - D\big)x\Big).$$

Furthermore, for any given $R$, notice that, if we select each element of $x$ independently and equiprobably from $\{-1, +1\}$, then $\mathbb{E}_x\big[x^T\big(R^TR - D\big)x\big] = 0$, where $\mathbb{E}_x$ denotes expectation with respect to $x$. Therefore, by the probabilistic method, $\min_{x\in\{-1,+1\}^n} x^T\big(R^TR - D\big)x \le 0$, implying the following bound:
$$\frac{1}{4}\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j} \le \text{Max-Cut}(G) \le \frac{1}{2}\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j}, \tag{4}$$
where the second inequality follows trivially by observing that $\frac{1}{2}\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j}$ equals the sum of the weights of all edges.

By linearity,
$$\mathbb{E}_R\Big[\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j}\Big] = \mathbb{E}_R\Big[\sum_{i\ne j,\ i,j\in[n]} \sum_{\ell\in[m]} R_{\ell,i}R_{\ell,j}\Big] = n(n-1)mp^2 = \Theta(n^2mp^2),$$
which goes to infinity as $n \to \infty$, because $np = \Omega(\sqrt{n/m}) = \Omega(1)$ in the range of parameters that we consider. In particular, by (4), we have
$$\mathbb{E}_R[\text{Max-Cut}(G)] = \Theta(n^2mp^2). \tag{5}$$

By Chebyshev's inequality, for any $\epsilon > 0$, we have
$$\Pr\big(|\text{Max-Cut}(G) - \mathbb{E}_R[\text{Max-Cut}(G)]| \ge \epsilon n^2mp^2\big) \le \frac{\text{Var}_R(\text{Max-Cut}(G))}{\epsilon^2 n^4m^2p^4}, \tag{6}$$
where $\text{Var}_R$ denotes variance with respect to $R$. To bound the variance on the right hand side of the above inequality, we use the Efron-Stein inequality. In particular, we write $\text{Max-Cut}(G) := f(R)$, i.e. we view $\text{Max-Cut}(G)$ as a function of the label choices. For $\ell \in [m]$, $i \in [n]$, we also write $R^{(\ell,i)}$ for the matrix $R$ where entry $(\ell, i)$ has been replaced by an independent, identically distributed (i.i.d.) copy of $R_{\ell,i}$, which we denote by $R'_{\ell,i}$. By the Efron-Stein inequality, we have
$$\text{Var}_R(\text{Max-Cut}(G)) \le \frac{1}{2}\sum_{\ell\in[m],\ i\in[n]} \mathbb{E}\Big[\big(f(R) - f\big(R^{(\ell,i)}\big)\big)^2\Big]. \tag{7}$$

Notice now that, given all entries of $R$ except $R_{\ell,i}$, the probability that $f(R)$ is different from $f\big(R^{(\ell,i)}\big)$ is at most $\Pr(R_{\ell,i} \ne R'_{\ell,i}) = 2p(1-p)$. Furthermore, if $L_\ell\backslash\{i\}$ is the set of vertices different from $i$ which have selected $\ell$, we then have that $\big(f(R) - f\big(R^{(\ell,i)}\big)\big)^2 \le |L_\ell\backslash\{i\}|^2$, because the intersection graph with representation matrix $R$ differs by at most $|L_\ell\backslash\{i\}|$ edges from the intersection graph with representation matrix $R^{(\ell,i)}$. Also note that, by definition, $|L_\ell\backslash\{i\}|$ follows the Binomial distribution $B(n-1, p)$. In particular, $\mathbb{E}\big[|L_\ell\backslash\{i\}|^2\big] = (n-1)p(np - 2p + 1)$, implying $\mathbb{E}\big[\big(f(R) - f\big(R^{(\ell,i)}\big)\big)^2\big] \le 2p(1-p)(n-1)p(np - 2p + 1)$, for any fixed $\ell \in [m]$, $i \in [n]$.

Putting this all together, (7) becomes
$$\text{Var}_R(\text{Max-Cut}(G)) \le \frac{1}{2}\sum_{\ell\in[m],\ i\in[n]} 2p(1-p)(n-1)p(np - 2p + 1) = nmp(1-p)(n-1)p(np - 2p + 1) = O(n^3mp^3). \tag{8}$$

Therefore, by (6), we get
$$\Pr\big(|\text{Max-Cut}(G) - \mathbb{E}_R[\text{Max-Cut}(G)]| \ge \epsilon n^2mp^2\big) \le \frac{O(n^3mp^3)}{\epsilon^2 n^4m^2p^4} = O\Big(\frac{1}{\epsilon^2 nmp}\Big),$$
which goes to 0 in the range of values that we consider. Together with (5), the above bound proves that $\text{Max-Cut}(G)$ is concentrated around its expected value. ◀
Max-Cut for small number of labels
Using Theorem 5, we can now show that, in the case $m = n^\alpha$, $\alpha < 1$, and $p = O(1/\sqrt{m})$, a random cut has asymptotically the same weight as $\text{Max-Cut}(G)$, where $G = G(V, E, R^TR)$ is a random instance of $G_{n,m,p}$. In particular, let $x^{(rand)}$ be constructed as follows: for each $i \in [n]$, set $x^{(rand)}_i = -1$ independently with probability $\frac{1}{2}$, and $x^{(rand)}_i = +1$ otherwise. The proof details of the following Theorem can be found in Section C of the Appendix. In view of equation (3), the main idea is to prove that, with high probability over random $x$ and $R$, $\|Rx\|^2$ is asymptotically smaller than the expectation of the weight of the cut defined by $x^{(rand)}$, in which case the theorem follows by concentration of $\text{Max-Cut}(G)$ around its expected value (Theorem 5), and straightforward bounds on $\text{Max-Cut}(G)$.

▶ Theorem 6. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model with $m = n^\alpha$, $\alpha < 1$, and $C_1\sqrt{\frac{1}{nm}} \le p \le C_2\sqrt{\frac{1}{m}}$, for arbitrary positive constants $C_1, C_2$, and let $R$ be its representation matrix. Then the cut weight of the random 2-coloring $x^{(rand)}$ satisfies $\text{Cut}(G, x^{(rand)}) = (1 - o(1))\text{Max-Cut}(G)$ with high probability over the choices of $x^{(rand)}$, $R$.
We note that the same analysis also holds when $n = m$ and $p$ is sufficiently large (e.g. $p = \omega(\frac{\ln n}{n})$); more details can be found at the end of Section C of the Appendix. In view of this, in the following sections we will only assume $m = n$ (i.e. $\alpha = 1$) and also $p = \frac{c}{n}$, for some positive constant $c$. Besides avoiding complicated formulae for $p$, the reason behind this assumption is that, in this range of values, the expected weight of a fixed edge in $G(V, E, R^TR)$ is equal to $mp^2 = \Theta(1/n)$, and thus we hope that our work will serve as an intermediate step towards understanding algorithmic bottlenecks for finding maximum cuts in Erdős-Rényi random graphs $G_{n,c/n}$ with respect to their intersection number.
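The claim of Theorem 6 is easy to probe empirically: for m = n^α with α < 1, the weight of a single random cut already lands close to the expectation from Corollary 3. A small illustrative sketch (all parameter choices below are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 400, 20                       # m = n^alpha with alpha well below 1
p = 1.0 / np.sqrt(n * m)             # within C_1 sqrt(1/(nm)) <= p <= C_2 sqrt(1/m)
R = (rng.random((m, n)) < p).astype(int)
W = R.T @ R

x = rng.choice((-1, 1), size=n)      # the random 2-coloring x^(rand)
random_cut = 0.25 * (W.sum() - np.sum((R @ x) ** 2))   # equation (2)
expected = 0.25 * (W.sum() - np.trace(W))              # (1/4) sum_{i != j} [R^T R]_{i,j}
print(random_cut, expected)          # typically close; by (4) Max-Cut is at most 2 * expected
```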
Algorithmic results (randomized algorithms)
The Majority Cut Algorithm
In the following algorithm, the 2-coloring representing the bipartition of a cut is constructed as follows: initially, a small constant fraction ϵ of vertices are randomly placed in the two partitions, and then in each subsequent step, one of the remaining vertices is placed in the partition that maximizes the weight of incident edges with endpoints in the opposite partition.
Algorithm 1 Majority Cut
Input: $G(V, E, R^TR)$ and its representation matrix $R \in \{0, 1\}^{m\times n}$
Output: Large cut 2-coloring $x \in \{-1, +1\}^n$
1 Let $v_1, \ldots, v_n$ be an arbitrary ordering of vertices;
2 for $t = 1$ to $\epsilon n$ do
3   Set $x_t$ to either $-1$ or $+1$ independently with equal probability;
4 for $t = \epsilon n + 1$ to $n$ do
5   if $\sum_{i\in[t-1]} [R^TR]_{i,t} x_i \ge 0$ then
6     $x_t = -1$;
7   else $x_t = +1$;

Clearly the Majority Algorithm runs in polynomial time in $n, m$. Furthermore, the following Theorem provides a lower bound on the expected weight of the cut constructed by the algorithm in the case $m = n$, $p = \frac{c}{n}$, for large constant $c$, and $\epsilon \to 0$. The full proof details can be found in Section D of the Appendix.

▶ Theorem 7. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model, with $m = n$, and $p = \frac{c}{n}$, for large positive constant $c$, and let $R$ be its representation matrix. Then, with high probability over the choices of $R$, the majority algorithm constructs a cut with expected weight at least $(1 + \beta)\frac{1}{4}\mathbb{E}\big[\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j}\big]$, where $\beta = \beta(c) \ge \sqrt{\frac{16}{27\pi c^3}} - o(1)$ is a constant, i.e. at least $1 + \beta$ times larger than the expected weight of a random cut.

Proof sketch. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model, with $m = n$, and $p = \frac{c}{n}$, for some large enough constant $c$. For $t \in [n]$, let $M_t$ denote the constructed cut size just after the consideration of a vertex $v_t$, for some $t \ge \epsilon n + 1$. By equation (3) for $n = t$, and since the values $x_1, \ldots, x_{t-1}$ are already decided in previous steps, we have $M_t = \frac{1}{4}\big(\sum_{i,j\in[t]^2} [R^TR]_{i,j} - \min_{x_t\in\{-1,+1\}} \|R_{[m],[t]}x_{[t]}\|^2\big)$, and after careful calculation we get the recurrence
$$M_t = M_{t-1} + \frac{1}{2}\sum_{i\in[t-1]} [R^TR]_{i,t} + \frac{1}{2}|Z_t|, \quad\text{where}\quad Z_t = Z_t(x, R) = \sum_{i\in[t-1]} [R^TR]_{i,t}x_i = \sum_{\ell\in[m]} R_{\ell,t}\sum_{i\in[t-1]} R_{\ell,i}x_i.$$
Observe that, in the latter recursive equation, the term $\frac{1}{2}\sum_{i\in[t-1]} [R^TR]_{i,t}$ corresponds to the expected increment of the constructed cut if the $t$-th vertex chose its color uniformly at random. Therefore, lower bounding the expectation of $\frac{1}{2}|Z_t|$ will tell us how much better the Majority Algorithm does when considering the $t$-th vertex.
Towards this end, we note that, given $x_{[t-1]} = \{x_i, i \in [t-1]\}$ and $R_{[m],[t-1]} = \{R_{\ell,i}, \ell \in [m], i \in [t-1]\}$, $Z_t$ is the sum of $m$ independent random variables, since the Bernoulli random variables $R_{\ell,t}$, $\ell \in [m]$, are independent, for any given $t$ (note that the conditioning is essential for independence, otherwise the inner sums in the definition of $Z_t$ would also depend on the $x_i$'s, which are not random when $i$ is large). By using a domination argument, we can then prove that
$$\mathbb{E}\big[|Z_t| \,\big|\, x_{[t-1]}, R_{[m],[t-1]}\big] \ge \mathrm{MD}(Z^B_t),$$
where $Z^B_t$ is a certain Binomial random variable (formally defined in the full proof), and $\mathrm{MD}(\cdot)$ is the mean absolute difference of (two independent copies of) $Z^B_t$, namely $\mathrm{MD}(Z^B_t) = \mathbb{E}\big[|Z^B_t - Z'^B_t|\big]$.

Even though we are aware of no simple closed formula for $\mathrm{MD}(Z^B_t)$, we resort to Gaussian approximation of $Z^B_t - Z'^B_t$ through the Berry-Esseen Theorem, ultimately showing that $|Z^B_t - Z'^B_t|$ follows approximately the folded normal distribution. In particular, we show that $\mathrm{MD}(Z^B_t) \ge \sqrt{\frac{c(t-1)}{3\pi n}} - o(1)$, and since the right hand side is independent of $x_{[t-1]}, R_{[m],[t-1]}$, we get the same lower bound on the expectation of $|Z_t|$, namely, $\mathbb{E}[|Z_t|] \ge \sqrt{\frac{c(t-1)}{3\pi n}} - o(1)$. Summing over all $t \ge \epsilon n + 1$, we get $\sum_{t\ge\epsilon n+1} \mathbb{E}[|Z_t|] \ge \frac{2}{3}\sqrt{\frac{c}{3\pi}}\big(1 - \epsilon^{3/2}\big)n - o(n)$, and the result follows by noting that the expected weight of a random cut is equal to $\frac{1}{4}n(n-1)mp^2 = \frac{c^2}{4}n + o(n)$, and taking $\epsilon \to 0$.
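A direct implementation of Algorithm 1 takes only a few lines; the sketch below (our own code, following the pseudocode above) operates on the weight matrix W = R^T R:

```python
import numpy as np

def majority_cut(W, eps=0.05, seed=None):
    """Majority Cut (Algorithm 1): color the first eps*n vertices uniformly at
    random; every subsequent vertex v_t gets the color opposite to the sign of
    sum_{i < t} [R^T R]_{i,t} x_i, i.e. it joins the side maximizing the weight
    of its edges that cross the cut built so far."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    x = np.zeros(n, dtype=int)
    t0 = max(1, int(eps * n))
    x[:t0] = rng.choice((-1, 1), size=t0)
    for t in range(t0, n):
        z = int(W[:t, t] @ x[:t])   # signed weight toward already-colored vertices
        x[t] = -1 if z >= 0 else 1
    return x
```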
Intersection graph (weak) bipartization
Notice that we can view a weighted intersection graph $G(V, E, R^TR)$ as a multigraph, composed by $m$ (possibly) overlapping cliques corresponding to the sets of vertices having chosen a certain label, namely $L_\ell = \{v : R_{\ell,v} = 1\}$, $\ell \in [m]$. In particular, let $K^{(\ell)}$ denote the clique induced by label $\ell$. Then $G = \cup^+_{\ell\in[m]} K^{(\ell)}$, where $\cup^+$ denotes union that keeps multiple edges. In this section, we present an algorithm that takes as input an intersection graph $G$ given as a union of overlapping cliques and outputs a subgraph that is "almost" bipartite.
To facilitate the presentation of our algorithm, we first give some useful definitions. A closed vertex-label sequence is a sequence of alternating vertices and labels starting and ending at the same vertex, namely $\sigma := v_1, \ell_1, v_2, \ell_2, \ldots, v_k, \ell_k, v_{k+1} = v_1$, where the size of the sequence $k = |\sigma|$ is the number of its labels, $v_i \in V$, $\ell_i \in M$, and $\{v_i, v_{i+1}\} \subseteq L_{\ell_i}$, for all $i \in [k]$ (i.e. $v_i$ is connected to $v_{i+1}$ in the intersection graph). We will also say that label $\ell$ is strong if $|L_\ell| \ge 3$, otherwise it is weak. For a given closed vertex-label sequence $\sigma$, and any integer $\lambda \in [|\sigma|]$, we will say that $\sigma$ is $\lambda$-strong if $|L_{\ell_i}| \ge 3$, for $\lambda$ indices $i \in [|\sigma|]$. The structural Lemma below is useful for our analysis (see Section E of the Appendix for the proof).

▶ Lemma 8. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model, with $m = n$, and $p = \frac{c}{n}$, for some constant $c > 0$. With high probability over the choices of $R$, 0-strong closed vertex-label sequences in $G$ do not have labels in common.²

The following definition is essential for the presentation of our algorithm.
▶ Definition 9. Given a weighted intersection graph $G = G(V, E, R^TR)$ and a subgraph $G^{(b)} \subseteq G$, let $C_{odd}(G^{(b)})$ be the set of odd length closed vertex-label sequences $\sigma := v_1, \ell_1, v_2, \ell_2, \ldots, v_k, \ell_k, v_{k+1} = v_1$ that additionally satisfy the following:
(a) $\sigma$ has distinct vertices (except the first and the last) and distinct labels.
(b) $v_i$ is connected to $v_{i+1}$ in $G^{(b)}$, for all $i \in [|\sigma|]$.
(c) $\sigma$ is $\lambda$-strong, for some $\lambda > 0$.
Algorithm 2 initially replaces each clique K (ℓ) by a random maximal matching M (ℓ) , and thus gets a subgraph G (b) ⊆ G. If C odd (G (b) ) is not empty, then the algorithm selects σ ∈ C odd (G (b) ) and a strong label ℓ ∈ σ, and then replaces M (ℓ) in G (b) by a new random matching of K (ℓ) . The algorithm repeats until all odd cycles are destroyed (or runs forever trying to do so).
Algorithm 2 Intersection Graph Weak Bipartization
Input: Weighted intersection graph $G = \cup^+_{\ell\in[m]} K^{(\ell)}$
Output: A subgraph $G^{(b)}$ of $G$ that has only 0-strong odd cycles
1 for each $\ell \in [m]$ do
2   Let $M^{(\ell)}$ be a random maximal matching of $K^{(\ell)}$;
3 Set $G^{(b)} = \cup^+_{\ell\in[m]} M^{(\ell)}$;
4 while $C_{odd}(G^{(b)}) \ne \emptyset$ do
5   Let $\sigma \in C_{odd}(G^{(b)})$ and $\ell$ a label in $\sigma$ with $|L_\ell| \ge 3$;
6   Replace the part of $G^{(b)}$ corresponding to $\ell$ by a new random maximal matching $M^{(\ell)}$;
7 return $G^{(b)}$;
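The following Python sketch mirrors Algorithm 2 under two simplifications that we state explicitly: odd cycles are searched heuristically via the fundamental cycles of a DFS over G^(b) (rather than enumerating C_odd(G^(b)) in the strict sense of Definition 9), and any strong label lying on such a cycle is rematched. All names are our own.

```python
import random
import sys
from collections import defaultdict

def random_maximal_matching(verts, rng):
    """A random maximal matching of the clique K^(l) on `verts`: shuffle, pair up."""
    verts = list(verts)
    rng.shuffle(verts)
    return [(verts[i], verts[i + 1]) for i in range(0, len(verts) - 1, 2)]

def find_rematch_label(adj, label_sets):
    """DFS over the multigraph G^(b).  A back-edge closing an odd cycle yields the
    cycle's labels from the DFS stack; return a strong label (|L_l| >= 3) lying on
    such a cycle, or None if every odd cycle found is 0-strong."""
    sys.setrecursionlimit(1 << 20)
    depth, on_path, stack_labels = {}, {}, []

    def dfs(u):
        for v, l in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                stack_labels.append(l)
                on_path[v] = len(stack_labels)
                found = dfs(v)
                if found is not None:
                    return found
                stack_labels.pop()
                del on_path[v]
            elif v in on_path and (depth[u] - depth[v]) % 2 == 0 and v != u:
                cycle = stack_labels[on_path[v]:] + [l]   # odd cycle v -> ... -> u -> v
                for lab in cycle:
                    if len(label_sets[lab]) >= 3:
                        return lab
        return None

    for s in list(adj):
        if s not in depth:
            depth[s], on_path[s] = 0, len(stack_labels)
            found = dfs(s)
            if found is not None:
                return found
            del on_path[s]
    return None

def weak_bipartization(label_sets, seed=0, max_rounds=100_000):
    """Sketch of Algorithm 2: replace each clique by a random maximal matching,
    then rematch strong labels lying on odd cycles until only 0-strong odd
    cycles remain.  `label_sets[l]` is the vertex set L_l."""
    rng = random.Random(seed)
    matchings = {l: random_maximal_matching(L, rng)
                 for l, L in enumerate(label_sets) if len(L) >= 2}
    for _ in range(max_rounds):
        adj = defaultdict(list)
        for l, M in matchings.items():
            for u, v in M:
                adj[u].append((v, l))
                adj[v].append((u, l))
        bad = find_rematch_label(adj, label_sets)
        if bad is None:
            return matchings       # only 0-strong odd cycles remain
        matchings[bad] = random_maximal_matching(label_sets[bad], rng)
    raise RuntimeError("weak bipartization did not terminate within max_rounds")
```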
The following results are the main technical tools that justify the use of the Weak Bipartization Algorithm for Weighted Max Cut. The proof details for Lemma 10 can be found in Section F of the Appendix.

▶ Lemma 10. If $C_{odd}(G^{(b)})$ is empty, then $G^{(b)}$ may only have 0-strong odd cycles.

▶ Theorem 11. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model, with $n = m$ and $p = \frac{c}{n}$, where $c > 0$ is a constant, and let $R$ be its representation matrix. Let also $\Sigma$ be a set system with incidence matrix $R$. With high probability over the choices of $R$, if Algorithm 2 for weak bipartization terminates on input $G$, its output can be used to construct a 2-coloring $x^{(disc)} \in \arg\min_{x\in\{\pm 1\}^n} disc(\Sigma, x)$, which also gives a maximum cut in $G$, i.e. $x^{(disc)} \in \arg\max_{x\in\{\pm 1\}^n} \text{Cut}(G, x)$.
Proof. By construction, the output of Algorithm 2, namely $G^{(b)}$, has only 0-strong odd cycles. Furthermore, by Lemma 8 these cycles correspond to vertex-label sequences that are label-disjoint. Let $H$ denote the subgraph of $G^{(b)}$ in which we have destroyed all 0-strong odd cycles by deleting a single (arbitrary) edge $e_C$ from each 0-strong odd cycle $C$ (keeping all other edges intact), and notice that $e_C$ corresponds to a weak label. In particular, $H$ is a bipartite multi-graph and thus its vertices can be partitioned into two independent sets $A, B$ constructed as follows: In each connected component of $H$, start with an arbitrary vertex $v$ and include in $A$ (resp. in $B$) the set of vertices reachable from $v$ that are at an even (resp. odd) distance from $v$. Since $H$ is bipartite, it does not have odd cycles, and thus this construction is well-defined, i.e. no vertex can be placed in both $A$ and $B$.
We now define $x^{(disc)}$ by setting $x^{(disc)}_i = +1$ if $i \in A$ and $x^{(disc)}_i = -1$ if $i \in B$. Let $M_0$ denote the set of weak labels corresponding to the edges removed from $G^{(b)}$ in the construction of $H$. We first note that, for each $\ell_C \in M_0$ corresponding to the removal of an edge $e_C$, we have
$$\Big|\sum_{i\in L_{\ell_C}} x^{(disc)}_i\Big| = 2.$$
Indeed, since $e_C$ belongs to an odd cycle in $G^{(b)}$, its endpoints are at even distance in $H$, which means that either they both belong to $A$ or they both belong to $B$. Therefore, their corresponding entries of $x^{(disc)}$ have the same sign, and so (taking into account that the endpoints of $e_C$ are the only vertices in $L_{\ell_C}$), we have $\big|\sum_{i\in L_{\ell_C}} x^{(disc)}_i\big| = 2$.

Second, we show that, for all the other labels $\ell \in [m]\backslash M_0$, $\big|\sum_{i\in L_\ell} x^{(disc)}_i\big|$ will be equal to 1 if $|L_\ell|$ is odd and 0 otherwise. For any label $\ell \in [m]\backslash M_0$, let $M^{(\ell)}$ denote the part of $G^{(b)}$ corresponding to a maximal matching of $K^{(\ell)}$, and note that all edges of $M^{(\ell)}$ are contained in $H$. Since $H$ is bipartite, no edge in $M^{(\ell)}$ can have both its endpoints in either $A$ or $B$. Therefore, by construction, the contribution of entries of $x^{(disc)}$ corresponding to endpoints of edges in $M^{(\ell)}$ to the sum $\sum_{i\in L_\ell} x^{(disc)}_i$ is 0. In particular, if $|L_\ell|$ is even, then $M^{(\ell)}$ is a perfect matching and $\sum_{i\in L_\ell} x^{(disc)}_i = 0$; otherwise (i.e. if $|L_\ell|$ is odd) there is a single vertex not matched in $M^{(\ell)}$ and $\big|\sum_{i\in L_\ell} x^{(disc)}_i\big| = 1$.

To complete the proof of the theorem, we need to show that $\text{Cut}(G, x^{(disc)})$ is maximum. By Corollary 3, this is equivalent to proving that $\|Rx^{(disc)}\| \le \|Rx\|$ for all $x \in \{-1,+1\}^n$. Suppose that there is some $x^{(min)} \in \{-1,+1\}^n$ such that $\|Rx^{(disc)}\| > \|Rx^{(min)}\|$. As mentioned above, for all $\ell \in [m]\backslash M_0$, we have $|[Rx^{(disc)}]_\ell| \le 1$, and so $|[Rx^{(disc)}]_\ell| \le |[Rx^{(min)}]_\ell|$. Therefore, the only labels where $x^{(min)}$ could do better are those corresponding to edges $e_C$ that are removed from $G^{(b)}$ in the construction of $H$, i.e. $\ell_C \in M_0$, for which we have $|[Rx^{(disc)}]_{\ell_C}| = 2$. However, any such edge $e_C$ belongs to an odd cycle $C$, and thus any 2-coloring of the vertices of $C$ will force at least one of the 0-strong labels corresponding to edges of $C$ to be monochromatic. Taking into account the fact that, by Lemma 8, with high probability over the choices of $R$, all 0-strong odd cycles correspond to vertex-label sequences that are label-disjoint, we conclude that $\|Rx^{(disc)}\| \le \|Rx^{(min)}\|$, which completes the proof. ◀
The fact that Theorem 11 is not an immediate consequence of Corollary 4 follows from the observation that a random set system with incidence matrix $R$ has discrepancy larger than 1 with (at least) constant probability when $m = n$ and $p = \frac{c}{n}$. Indeed, by a straightforward counting argument, we can see that the expected number of 0-strong odd cycles is at least constant. Furthermore, in any 2-coloring of the vertices at least one of the weak labels forming edges in a 0-strong odd cycle will be monochromatic. Therefore, with at least constant probability, for any $x \in \{-1, +1\}^n$, there exists a weak label $\ell$, such that $x_ix_j = 1$ for the two vertices $i, j \in L_\ell$, implying that $disc(L_\ell) = 2$.
We close this section by a result indicating that the conditional statement of Theorem 11
is not void, namely there is a range of values for c where the Weak Bipartization Algorithm terminates in polynomial time.
▶ Theorem 12. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model, with $n = m$ and $p = \frac{c}{n}$, where $0 < c < 1$ is a constant, and let $R$ be its representation matrix. With high probability over the choices of $R$, Algorithm 2 for weak bipartization terminates on input $G$ in $O\big((n + \sum_{\ell\in[m]} |L_\ell|)\log n\big)$ polynomial time.
The proof of the above theorem uses the following structural Lemma regarding the expected number of closed vertex label sequences.
▶ Lemma 13. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model. Let also $C_k$ denote the number of distinct closed vertex-label sequences of size $k$ in $G$. Then
$$\mathbb{E}[C_k] = \frac{1}{k}\,\frac{n!}{(n-k)!}\,\frac{m!}{(m-k)!}\,p^{2k}. \tag{9}$$
In particular, when $m = n \to \infty$, $p = \frac{c}{n}$, $c > 0$, and $k \ge 3$, we have $\mathbb{E}[C_k] \le \frac{e}{2\pi}c^{2k}$.
Proof. Notice that there are $\frac{1}{k}\frac{n!}{(n-k)!}$ ways to arrange $k$ out of $n$ vertices in a cycle. Furthermore, in each such arrangement, there are $\frac{m!}{(m-k)!}$ ways to place $k$ out of $m$ labels so that there is exactly one label between each pair of vertices. Since labels in any given arrangement must be selected by both its adjacent vertices, (9) follows by linearity of expectation.
Setting $m = n$ and $p = \frac{c}{n}$, and using the inequalities $\sqrt{2\pi}\, n^{n+\frac{1}{2}}e^{-n} \le n! \le e\, n^{n+\frac{1}{2}}e^{-n}$, we get
$$\mathbb{E}[C_k] = \frac{1}{k}\Big(\frac{n!}{(n-k)!}\Big)^2\frac{c^{2k}}{n^{2k}} \le \frac{1}{k}\cdot\frac{e^2\, n^{2n+1}e^{-2n}}{2\pi (n-k)^{2n-2k+1}e^{-2(n-k)}}\cdot\frac{c^{2k}}{n^{2k}}.$$
When $n$ goes to $\infty$ and $k \ge 3$, the above is at most $\frac{e}{2\pi}c^{2k}$, as needed. ◀
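Equation (9) and the constant in Lemma 13 can also be checked numerically; the following small sketch (parameter values are illustrative assumptions) evaluates the exact expectation and the claimed bound:

```python
import math

def expected_closed_sequences(n, m, p, k):
    """Exact expectation from equation (9):
    E[C_k] = (1/k) * n!/(n-k)! * m!/(m-k)! * p^(2k)."""
    return math.perm(n, k) * math.perm(m, k) * p ** (2 * k) / k

# For m = n, p = c/n, Lemma 13 bounds E[C_k] by (e / (2*pi)) * c^(2k) when k >= 3.
n, c, k = 10_000, 0.9, 5
exact = expected_closed_sequences(n, n, c / n, k)
bound = (math.e / (2 * math.pi)) * c ** (2 * k)
print(exact, bound)   # exact is roughly c^(2k)/k, well below the bound
```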
We are now ready for the proof of the Theorem.
Proof of Theorem 12. We will prove that, when $m = n \to \infty$, $p = \frac{c}{n}$, $c < 1$, and $k \ge 3$, with high probability, there are no closed vertex-label sequences that have labels in common. To this end, recalling Definition 9 for $C_{odd}(G^{(b)})$, we provide upper bounds on the following events:
$$A \overset{def}{=} \{\exists k \ge \log n : C_k \ge 1\},\qquad B \overset{def}{=} \{|C_{odd}(G^{(b)})| \ge \log n\},\qquad C \overset{def}{=} \{\exists \sigma \ne \sigma' \in C_{odd}(G^{(b)}) : \exists \ell \in \sigma,\ \ell \in \sigma'\}.$$
By the union bound, Markov's inequality and Lemma 13, we get that, whp all closed vertex-label sequences have less than $\log n$ labels:
$$\Pr(A) \le \sum_{k\ge\log n} \mathbb{E}[C_k] \le \sum_{k\ge\log n} \frac{e}{2\pi}c^{2k} = \frac{e}{2\pi}\cdot\frac{c^{2\log n}}{1 - c^2} = O\big(c^{2\log n}\big) = o(1), \tag{10}$$
where the last equality follows since $c < 1$ is a constant. Furthermore, by Markov's inequality and Lemma 13, and noting that any closed vertex-label sequence in $C_{odd}(G^{(b)})$ must have at least $k \ge 3$ labels, we get that, whp there are less than $\log n$ closed vertex-label sequences in $C_{odd}(G^{(b)})$:
$$\Pr(B) \le \frac{1}{\log n}\sum_{k\ge 3} \mathbb{E}[C_k] \le \frac{1}{\log n}\sum_{k\ge 3} \frac{e}{2\pi}c^{2k} = \frac{1}{\log n}\cdot\frac{e}{2\pi}\cdot\frac{c^6}{1 - c^2} = O\Big(\frac{1}{\log n}\Big). \tag{11}$$
To bound $\Pr(C)$, fix a closed vertex-label sequence $\sigma$, and let $|\sigma| \ge 3$ be the number of its labels. Notice that the existence of another closed vertex-label sequence that has labels in common with $\sigma$ implies the existence of a vertex-label sequence $\bar\sigma$ that starts with either a vertex or a label from $\sigma$, ends with either a vertex or a label from $\sigma$, and has at least one label or at least one vertex that does not belong to $\sigma$. Let $|\bar\sigma|$ denote the number of labels of $\bar\sigma$ that do not belong to $\sigma$. Then the number of different vertex-label sequences $\bar\sigma$ that start and end in labels from $\sigma$ is at most $|\sigma|^2 n^{|\bar\sigma|+1} m^{|\bar\sigma|}$; indeed $\bar\sigma$ in this case has $|\bar\sigma|$ labels and $|\bar\sigma| + 1$ vertices that do not belong to $\sigma$. Therefore, by independence, each such sequence $\bar\sigma$ has probability $p^{2|\bar\sigma|+2}$ to appear. Similarly, the number of different vertex-label sequences $\bar\sigma$ that start and end in vertices from $\sigma$ is at most $|\sigma|^2 n^{|\bar\sigma|-1} m^{|\bar\sigma|}$ and each one has probability $p^{2|\bar\sigma|}$ to appear. Finally, the number of different vertex-label sequences $\bar\sigma$ that start in a vertex from $\sigma$ and end in a label from $\sigma$ (notice that this also covers the case where $\bar\sigma$ starts in a label from $\sigma$ and ends in a vertex from $\sigma$) is at most $|\sigma|^2 n^{|\bar\sigma|} m^{|\bar\sigma|}$ and each one has probability $p^{2|\bar\sigma|+1}$ to appear. Overall, for a given sequence $\sigma$, the expected number of sequences $\bar\sigma$ described above that additionally satisfy $|\bar\sigma| < \log n$ is at most
$$\sum_{k=0}^{\log n - 1} |\sigma|^2 n^{k+1} m^k p^{2k+2} + \sum_{k=1}^{\log n - 1} |\sigma|^2 n^{k-1} m^k p^{2k} + \sum_{k=1}^{\log n - 1} |\sigma|^2 n^k m^k p^{2k+1} \le \frac{c|\sigma|^2 \log n}{n}, \tag{12}$$
where in the last inequality we used the fact that $m = n$, $p = \frac{c}{n}$ and $c < 1$.
Since the existence of a sequence $\bar\sigma$ for $\sigma$ that additionally satisfies $|\bar\sigma| \ge \log n$ implies event $A$, and on the other hand the existence of more than $\log n$ different sequences $\sigma \in C_{odd}(G^{(b)})$ implies event $B$, by Markov's inequality and (12), we get
$$\Pr(C) \le \Pr(A) + \Pr(B) + \frac{c(\log n)^4}{n} = O\big(c^{2\log n}\big) + O\Big(\frac{1}{\log n}\Big) + O\Big(\frac{(\log n)^4}{n}\Big) = O\Big(\frac{1}{\log n}\Big).$$
We have thus proved that, with high probability over the choices of $R$, closed vertex-label sequences in $C_{odd}(G^{(b)})$ are label disjoint, as needed.
In view of this, the proof of the Theorem follows by noting that, since closed vertex-label sequences in $C_{odd}(G^{(b)})$ are label disjoint, steps 5 and 6 within the while loop of the Weak Bipartization Algorithm will be executed exactly once for each sequence in $C_{odd}(G^{(b)})$, where $G^{(b)}$ is defined in step 3 of the algorithm; indeed, once a closed vertex-label sequence $\sigma \in C_{odd}(G^{(b)})$ is destroyed in step 6, no new closed vertex-label sequence is created. In fact, once $\sigma$ is destroyed we can remove the corresponding labels and edges from $G^{(b)}$, as these will no longer belong to other closed vertex-label sequences. Furthermore, to find a closed vertex-label sequence in $C_{odd}(G^{(b)})$, it suffices to find an odd cycle in $G^{(b)}$, which can be done by running DFS, requiring $O(n + \sum_{\ell\in[m]} |L_\ell|)$ time, because $G^{(b)}$ has at most $\sum_{\ell\in[m]} |L_\ell|$ edges. Finally, by (11), we have $|C_{odd}(G^{(b)})| < \log n$ with high probability, and so the running time of the Weak Bipartization Algorithm is $O\big((n + \sum_{\ell\in[m]} |L_\ell|)\log n\big)$, which concludes the proof of Theorem 12. ◀
Discussion and some open problems
In this paper, we introduced the model of weighted random intersection graphs and we studied the average case analysis of Weighted Max Cut through the prism of discrepancy of random set systems. In particular, in the first part of the paper, we proved concentration of the weight of a maximum cut of $G(V, E, R^TR)$ around its expected value, and we used it to show that, with high probability, the weight of a random cut is asymptotically equal to the maximum cut weight of the input graph, when $m = n^\alpha$, $\alpha < 1$. On the other hand, in the case where the number of labels is equal to the number of vertices (i.e. $m = n$), we proved that a majority algorithm gives a cut with weight that is larger than the weight of a random cut by at least a constant factor, when $p = \frac{c}{n}$ and $c$ is large. In the second part of the paper, we highlighted a connection between Weighted Max Cut of sparse weighted random intersection graphs and Discrepancy of sparse random set systems, formalized through our Weak Bipartization Algorithm and its analysis. We demonstrated how our proposed framework can be used to find optimal solutions for these problems, with high probability, in special cases of sparse inputs ($m = n$, $p = \frac{c}{n}$, $c < 1$). One of the main problems left open in our work concerns the termination of our Weak Bipartization Algorithm for large values of $c$. We conjecture the following:

▶ Conjecture 14. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model, with $m = n$, and $p = \frac{c}{n}$, for some constant $c \ge 1$. With high probability over the choices of $R$, on input $G$, Algorithm 2 for weak bipartization terminates in polynomial time.

We also leave the problem of determining whether Algorithm 2 terminates in polynomial time, in the case $m = n$ and $p = \omega(1/n)$, as an open question for future research. Towards strengthening the connection between Weighted Max Cut under the $G_{n,m,p}$ model, and Discrepancy in random set systems, we conjecture the following:

▶ Conjecture 15. Let $G(V, E, R^TR)$ be a random instance of the $G_{n,m,p}$ model, with $m = n^\alpha$, $\alpha \le 1$ and $mp^2 = O(1)$, and let $R$ be its representation matrix. Let also $\Sigma$ be a set system with incidence matrix $R$. Then, with high probability over the choices of $R$, there exists $x^{(disc)} \in \arg\min_{x\in\{-1,+1\}^n} disc(\Sigma, x)$, such that $\text{Cut}(G, x^{(disc)})$ is asymptotically equal to $\text{Max-Cut}(G)$.
A Proof of Corollary 3
We first prove the following Lemma, by straightforward calculation from equation (1):
▶ Lemma 16. Let $G(V, E, W)$ be a weighted graph such that $W$ is symmetric and $W_{i,j} = 0$ if $\{i, j\} \notin E$. Then
$$\text{Max-Cut}(G) = \frac{1}{4}\Big(\sum_{i,j\in[n]^2} W_{i,j} - \min_{x\in\{-1,+1\}^n} x^TWx\Big). \tag{13}$$
Proof. For any $x \in \{-1, +1\}^n$, we write
$$\sum_{i,j\in[n]^2} W_{i,j} - x^TWx = \sum_{i,j\in[n]^2} W_{i,j} - \sum_{i,j\in[n]^2} W_{i,j}x_ix_j = \frac{1}{2}\sum_{i,j\in[n]^2} W_{i,j}\big(x_i^2 + x_j^2 - 2x_ix_j\big) = \frac{1}{2}\sum_{i,j\in[n]^2} W_{i,j}(x_i - x_j)^2 = \sum_{\{i,j\}\in E} W_{i,j}(x_i - x_j)^2.$$
By (1), this completes the proof. ◀

Proof of Corollary 3. Notice that diagonal entries of the weight matrix in (13) cancel out, and so, for any $x \in \{-1, +1\}^n$, we have
$$\sum_{i,j\in[n]^2} [R^TR]_{i,j} - \|Rx\|^2 = \sum_{i\ne j,\ i,j\in[n]^2} [R^TR]_{i,j} - \sum_{i\ne j,\ i,j\in[n]^2} [R^TR]_{i,j}x_ix_j.$$
Taking expectations with respect to $x$, the contribution of the second sum in the above expression equals 0, which completes the proof. ◀
B Proof of Corollary 4
Proof. Since $disc(\Sigma, x^*) \le 1$, each component of $Rx^*$ has absolute value either 0 or 1. In particular, for any $\ell \in [m]$, $|[Rx^*]_\ell|$ is 0 if the number of ones in the $\ell$-th row is even and it is equal to 1 otherwise. This is the best one can hope for, since sets with an odd number of elements cannot have discrepancy less than 1. Therefore, $\|Rx^*\|$ is also the minimum possible. In particular, this implies that, in the case $disc(\Sigma, x^*) \le 1$, any 2-coloring that achieves minimum discrepancy gives a bipartition that corresponds to a maximum cut and vice versa. ◀
C Proof of Theorem 6
Proof. Let G = G(V, E, R T R) be a weighted random intersection graph. By equation (2) of Corollary 3, for any x ∈ {−1, +1} n , we have:
$$\text{Cut}(G, x) = \frac{1}{4}\Big(\sum_{i,j\in[n]^2} [R^TR]_{i,j} - \|Rx\|^2\Big).$$
Taking expectations with respect to random $x$ and $R$, we get
$$\mathbb{E}_{x,R}[\text{Cut}(G, x)] = \frac{1}{4}\,\mathbb{E}_R\Big[\sum_{i,j\in[n]^2} [R^TR]_{i,j} - \sum_{i\in[n]} [R^TR]_{i,i}\Big] = \frac{1}{4}\,\mathbb{E}_R\Big[\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j}\Big] = \frac{1}{4}n(n-1)mp^2. \tag{14}$$
To prove the Theorem, we will show that, with high probability over random $x$ and $R$, we have
$$\|Rx\|^2 = o\Big(\mathbb{E}_R\Big[\frac{1}{4}\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j}\Big]\Big) = o(n^2mp^2),$$
in which case the theorem follows by concentration of $\text{Max-Cut}(G)$ around its expected value (Theorem 5), and the fact that $\text{Max-Cut}(G) \ge \frac{1}{4}\sum_{i\ne j,\ i,j\in[n]} [R^TR]_{i,j}$.

To this end, fix $\ell \in [m]$ and consider the random variable counting the number of ones in the $\ell$-th row of $R$, namely $Y_\ell = \sum_{i\in[n]} R_{\ell,i}$. By the multiplicative Chernoff bound, for any $\delta > 0$,
$$\Pr\big(Y_\ell > (1+\delta)np\big) \le \Big(\frac{e^\delta}{(1+\delta)^{1+\delta}}\Big)^{np}.$$
Since $np \ge C_1\sqrt{\frac{n}{m}} = C_1 n^{\frac{1-\alpha}{2}}$, taking any $\delta \ge 2$, we get
$$\Pr(Y_\ell > 3np) \le \Big(\frac{e^2}{27}\Big)^{np} = o\Big(\frac{1}{m}\Big). \tag{15}$$
Therefore, by the union bound,
$$\Pr(\exists \ell \in [m] : Y_\ell > 3np) = o(1), \tag{16}$$
implying that all rows of $R$ have at most $3np$ non-zero elements with high probability. Fix now $\ell$ and consider the random variable corresponding to the $\ell$-th entry of $Rx$, namely $Z_\ell = \sum_{i\in[n]} R_{\ell,i}x_i$. In particular, given $Y_\ell$, notice that $Z_\ell$ is equal to the sum of $Y_\ell$ independent random variables $x_i \in \{-1, +1\}$, for $i$ such that $R_{\ell,i} = 1$. Therefore, since $\mathbb{E}_x[Z_\ell] = \mathbb{E}_x[Z_\ell|Y_\ell] = 0$, by Hoeffding's inequality, for any $\lambda \ge 0$,
$$\Pr(|Z_\ell| > \lambda \mid Y_\ell) \le e^{-\frac{\lambda^2}{2Y_\ell}}.$$
Therefore, by the union bound, and taking $\lambda \ge \sqrt{6np\ln n}$,
$$\Pr(\exists \ell \in [m] : |Z_\ell| > \lambda) \le \Pr(\exists \ell \in [m] : Y_\ell > 3np) + m e^{-\frac{\lambda^2}{6np}} = o(1) + \frac{m}{n} = o(1), \tag{17}$$
implying that all entries of $Rx$ have absolute value at most $\sqrt{6np\ln n}$ with high probability over the choices of $x$ and $R$. Consequently, with high probability over the choices of $x$ and $R$, we have $\|Rx\|^2 \le 6mnp\ln n$, which is $o(n^2mp^2)$, since $np = \omega(\ln n)$ in the range of parameters considered in this theorem. This completes the proof. ◀
We note that the same analysis also holds when $n = m$ and $p$ is sufficiently large (e.g. $p = \omega(\frac{\ln n}{n})$). In particular, similar probability bounds hold in equations (15), (16) and (17), for the same choices of $\delta \ge 2$ and $\lambda \ge \sqrt{6np\ln n}$, implying that $\|Rx\|^2 \le 6mnp\ln n = o(n^2mp^2)$ with high probability.
D Proof of Theorem 7
Proof. Let G(V, E, R T R) (i.e. the input to the Majority Cut Algorithm 1) be a random instance of the G n,m,p model, with m = n, and p = c n , for some large enough constant c. For t ∈ [n], let M t denote the constructed cut size just after the consideration of a vertex v t , for some t ≥ ϵn + 1. In particular, by equation (3) for n = t, and since the values x 1 , . . . , x t−1 are already decided in previous steps, we have
$$M_t = \frac{1}{4}\Big(\sum_{i,j\in[t]^2} [R^TR]_{i,j} - \min_{x_t\in\{-1,+1\}} \big\|R_{[m],[t]}x_{[t]}\big\|^2\Big) \tag{18}$$
The first of the above terms is
$$\frac{1}{4}\sum_{i,j\in[t]^2} [R^TR]_{i,j} = \frac{1}{4}\Big(\sum_{i,j\in[t-1]^2} [R^TR]_{i,j} + 2\sum_{i\in[t-1]} [R^TR]_{i,t} + [R^TR]_{t,t}\Big) \tag{19}$$
and the second term is
$$-\frac{1}{4}\min_{x_t\in\{-1,+1\}}\big\|R_{[m],[t]}x_{[t]}\big\|^2 = -\frac{1}{4}\min_{x_t\in\{-1,+1\}}\Big\|R_{[m],t}x_t + \sum_{i\in[t-1]}R_{[m],i}x_i\Big\|^2 = -\frac{1}{4}\min_{x_t\in\{-1,+1\}}\sum_{i,j\in[t]^2}[R^TR]_{i,j}x_ix_j = -\frac{1}{4}\Big(\sum_{i,j\in[t-1]^2}[R^TR]_{i,j}x_ix_j + 2\min_{x_t\in\{-1,+1\}}\sum_{i\in[t-1]}[R^TR]_{i,t}x_ix_t + [R^TR]_{t,t}\Big). \tag{20}$$
By (18), (19) and (20), we have
$$M_t = M_{t-1} + \frac{1}{2}\sum_{i\in[t-1]}[R^TR]_{i,t} - \frac{1}{2}\min_{x_t\in\{-1,+1\}}\sum_{i\in[t-1]}[R^TR]_{i,t}x_ix_t = M_{t-1} + \frac{1}{2}\sum_{i\in[t-1]}[R^TR]_{i,t} + \frac{1}{2}\Big|\sum_{i\in[t-1]}[R^TR]_{i,t}x_i\Big|. \tag{21}$$
Define now the random variable
$$Z_t = Z_t(x, R) = \sum_{i\in[t-1]} [R^TR]_{i,t}x_i = \sum_{\ell\in[m]} R_{\ell,t}\sum_{i\in[t-1]} R_{\ell,i}x_i,$$
so that $M_t = M_{t-1} + \frac{1}{2}\sum_{i\in[t-1]} [R^TR]_{i,t} + \frac{1}{2}|Z_t|$.
Observe that, in the latter recursive equation, the term $\frac{1}{2}\sum_{i\in[t-1]} [R^TR]_{i,t}$ corresponds to the expected increment of the constructed cut if the $t$-th vertex chose its color uniformly at random. Therefore, lower bounding the expectation of $\frac{1}{2}|Z_t|$ will tell us how much better the Majority Algorithm does when considering the $t$-th vertex.
Towards this end, we first note that, given $x_{[t-1]} = \{x_i, i \in [t-1]\}$ and $R_{[m],[t-1]} = \{R_{\ell,i}, \ell \in [m], i \in [t-1]\}$, $Z_t$ is the sum of $m$ independent random variables, since the Bernoulli random variables $R_{\ell,t}$, $\ell \in [m]$, are independent, for any given $t$ (note that the conditioning is essential for independence, otherwise the inner sums in the definition of $Z_t$ would also depend on the $x_i$'s, which are not random when $i$ is large). Furthermore, $\mathbb{E}[Z_t \mid x_{[t-1]}, R_{[m],[t-1]}] = p\sum_{\ell\in[m]}\sum_{i\in[t-1]} R_{\ell,i}x_i$ and $\mathrm{Var}(Z_t \mid x_{[t-1]}, R_{[m],[t-1]}) = p(1-p)\sum_{\ell\in[m]}\big(\sum_{i\in[t-1]} R_{\ell,i}x_i\big)^2$. Given $x_{[t-1]}$ and $R_{[m],[t-1]}$, define the sets $A^+_t = \{\ell \in [m] : \sum_{i\in[t-1]} R_{\ell,i}x_i > 0\}$ and $A^-_t = \{\ell \in [m] : \sum_{i\in[t-1]} R_{\ell,i}x_i < 0\}$. In particular, given $x_{[t-1]}$ and $R_{[m],[t-1]}$, $Z_t$ can be written as
$$Z_t = \sum_{\ell\in A^+_t} R_{\ell,t}\sum_{i\in[t-1]} R_{\ell,i}x_i - \sum_{\ell\in A^-_t} R_{\ell,t}\Big|\sum_{i\in[t-1]} R_{\ell,i}x_i\Big|, \tag{22}$$
where $R_{\ell,t}$, $\ell \in A^+_t \cup A^-_t$, are independent Bernoulli random variables with success probability $p$.
It is a matter of careful calculation to show that $\mathbb{E}\big[|Z_t| \mid x_{[t-1]}, R_{[m],[t-1]}\big]$ is smallest when the conditional expectation $\mathbb{E}\big[Z_t \mid x_{[t-1]}, R_{[m],[t-1]}\big]$ is 0, which happens when the sum of positive factors for the Bernoulli random variables in the definition of $Z_t$ is equal to the sum of negative ones, namely $\sum_{\ell\in A^+_t}\sum_{i\in[t-1]} R_{\ell,i}x_i = \big|\sum_{\ell\in A^-_t}\sum_{i\in[t-1]} R_{\ell,i}x_i\big|$. Furthermore, we note that $\mathbb{E}\big[|Z_t| \mid x_{[t-1]}, R_{[m],[t-1]}\big]$ does not increase if we replace $\sum_{\ell\in A^+_t} R_{\ell,t}\sum_{i\in[t-1]} R_{\ell,i}x_i$ and $\sum_{\ell\in A^-_t} R_{\ell,t}\big|\sum_{i\in[t-1]} R_{\ell,i}x_i\big|$ in the expression (22) for $Z_t$ by independent binomial random variables $Z^+_t \sim B\big(\sum_{\ell\in A^+_t}\sum_{i\in[t-1]} R_{\ell,i}x_i,\ p\big)$ and $Z^-_t \sim B\big(\big|\sum_{\ell\in A^-_t}\sum_{i\in[t-1]} R_{\ell,i}x_i\big|,\ p\big)$, respectively.³ In view of the above, if $Z^B_t$ is a random variable which, given $x_{[t-1]}$ and $R_{[m],[t-1]}$, follows the Binomial distribution $B(N_t, p)$, where
$$N_t \overset{def}{=} \max\Big(\sum_{\ell\in A^+_t}\sum_{i\in[t-1]} R_{\ell,i}x_i,\ \Big|\sum_{\ell\in A^-_t}\sum_{i\in[t-1]} R_{\ell,i}x_i\Big|\Big), \tag{23}$$
then
$$\mathbb{E}\big[|Z_t| \,\big|\, x_{[t-1]}, R_{[m],[t-1]}\big] \ge \mathrm{MD}(Z^B_t), \tag{24}$$
where $\mathrm{MD}(\cdot)$ is the mean absolute difference of (two independent copies of) $Z^B_t$. In particular, $\mathrm{MD}(Z^B_t) = \mathbb{E}\big[|Z^B_t - Z'^B_t|\big]$,
where $Z^B_t, Z'^B_t$ are independent random variables following $B(N_t, p)$. Unfortunately, we are aware of no simple closed formula for $\mathrm{MD}(Z^B_t)$, and so we resort to Gaussian approximation through the Berry-Esseen Theorem.
³ This property follows inductively, by noting that, if $X = \sum_{i=1}^{k} a_iX_i - \sum_{i=k}^{N} a_iX_i$ and $X' = \sum_{i=1}^{k-1} a_iX_i + (a_k - 1)X_k + X'_k - \sum_{i=k}^{N} a_iX_i$, where $k, N, a_i \in \mathbb{N}^+$, $i \in [N]$, and $X_i$, $i \in [N]$, $X'_k$ are independent, identically distributed Bernoulli random variables, then $\mathbb{E}[|X|] \ge \mathbb{E}[|X'|]$. Indeed, notice that the independence of $X_k, X'_k$ implies that these random variables work against each other (with respect to the absolute value) at least half of the time.
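For intuition, the mean absolute difference of a Binomial can be computed exactly from its pmf and compared against the folded-normal value that the Gaussian approximation suggests; a small sketch assuming SciPy is available (all names are ours):

```python
import numpy as np
from scipy.stats import binom

def mean_abs_difference(N, p):
    """MD(Z) = E|Z - Z'| for independent Z, Z' ~ B(N, p), computed exactly
    from the pmf.  The Gaussian approximation of D = Z - Z' (with Var(D) =
    2*N*p*(1-p)) gives the folded-normal mean sqrt(2*Var(D)/pi)."""
    k = np.arange(N + 1)
    pmf = binom.pmf(k, N, p)
    return float(pmf @ np.abs(np.subtract.outer(k, k)) @ pmf)

N, p = 500, 0.01
print(mean_abs_difference(N, p), np.sqrt(4 * N * p * (1 - p) / np.pi))
```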
▶ Theorem (Berry-Esseen Theorem [23]). Let $X_1, X_2, \ldots$ be independent, identically distributed random variables, with $\mathbb{E}[X_i] = 0$, $\mathbb{E}[X_i^2] = \sigma^2 > 0$, and $\mathbb{E}[|X_i|^3] = \rho < \infty$. For $N > 0$, let $F_N(\cdot)$ be the cumulative distribution function of $\frac{X_1 + \cdots + X_N}{\sigma\sqrt{N}}$, and let $\Phi(\cdot)$ be the cumulative distribution function of the standard normal distribution. Then $\sup_{x\in\mathbb{R}} |F_N(x) - \Phi(x)| \le \frac{C\rho}{\sigma^3\sqrt{N}}$, for some absolute constant $C$.
We conjecture that the structural property of Lemma 8 also holds if we replace 0-strong with λ-strong, for any constant λ, but this stronger version is not necessary for our analysis.
[23] I. Shevtsova. On the absolute constants in the Berry-Esseen type inequalities for identically distributed summands. arXiv:1111.6554 [math.PR].
In our case, we write Z^B_t = Σ_{i∈[N_t]} Z^B_{t,i}, where the Z^B_{t,i} (and, analogously, the Z′^B_{t,i}) are independent Bernoulli random variables with success probability p, for any i ∈ [N_t]. In particular, we have E[Z^B_{t,i}] = p and Var(Z^B_{t,i}) = p(1 − p). Therefore, by the Berry-Esseen Theorem, given x_{[t−1]} and R_{[m],[t−1]}, the standardized Z^B_t is close to a standard normal in the supremum norm, up to an error of O(1/√(p(1 − p) N_t)). Notice that the latter approximation error bound becomes o(1) if N_t = Θ(n), p = c/n and c → ∞. Therefore, we next show that, with high probability over the choices of R, N_t = Θ(n), for any t ≥ ϵn + 1, where ϵ is the constant used in the Majority Algorithm. In particular, even though we cannot control the variables x_i in the definition of N_t, we will find a lower bound that holds with high probability by using the random variable
\[
Y_t = \Big| \Big\{ \ell \in [m] : \sum_{i \in [t-1]} R_{\ell,i} \text{ is odd} \Big\} \Big|
\]
and employing the following inequality:
\[
N_t \ge \frac{Y_t}{2} . \tag{25}
\]
Indeed, (25) holds because, for any ℓ counted in Y_t, the sum Σ_{i∈[t−1]} R_{ℓ,i} x_i is odd, and hence nonzero, no matter what value the x_i's have. Therefore, Σ_{i∈[t−1]} R_{ℓ,i} x_i will contribute at least 1 to one of the two terms in the maximum from the right side of (23), and thus (25) follows. Notice now that, for any fixed ℓ and t ≥ ϵn + 1,
\[
\Pr\Big( \sum_{i \in [t-1]} R_{\ell,i} \text{ is odd} \Big) = \frac{1 - (1 - 2p)^{t-1}}{2} \ge \frac{1 - e^{-2c\epsilon}}{2} \ge \frac{1}{3} ,
\]
where in the last inequality we set p = c/n. Taking c → ∞, the latter bound becomes 1/2 − o(1). Therefore, by independence of the entries of R, Y_t stochastically dominates a binomial random variable B(t − 1, 1/3). Furthermore, by the multiplicative Chernoff (upper) bound, for any δ > 0,
\[
\Pr\big( Y_t \le (1 - \delta)\, E[Y_t] \big) \le e^{-\delta^2 E[Y_t]/2} .
\]
Taking δ = 1/2 and noting that t ≥ ϵn + 1, we have
\[
\Pr\Big( Y_t \le \frac{t-1}{6} \Big) \le e^{-\frac{t-1}{24}} \le e^{-\frac{\epsilon n}{24}} ,
\]
which is o(1/n), for any constant ϵ > 0. By the union bound, with high probability Y_t ≥ (t − 1)/6 simultaneously for all t ≥ ϵn + 1. By inequality (25), we thus have that, with high probability over the choices of R, N_t ≥ (t − 1)/12 ≥ ϵn/12, for all t ≥ ϵn + 1, as needed. Combining the above, by the Berry-Esseen Theorem, given x_{[t−1]} and R_{[m],[t−1]}, the conditional distribution of Z^B_t is approximately Gaussian with vanishing error; this completes the proof. ◀

Proof of Corollary 3. Notice that diagonal entries of the weight matrix in (13) cancel out, and so, for any x ∈ {−1, +1}^n, we have
\[
\mathrm{Cut}(x) = \frac{1}{4} \sum_{i \ne j,\; i,j \in [n]} (R^T R)_{i,j}\, (1 - x_i x_j) ,
\]
in which case the theorem follows by concentration of Max-Cut(G) around its expected value (Theorem 5), and the fact that Max-Cut(G) ≥ (1/4) Σ_{i≠j, i,j∈[n]} (R^T R)_{i,j}. To this end, fix ℓ ∈ [m] and consider the random variable counting the number of ones in the ℓ-th row of R, namely Y_ℓ = Σ_{i∈[n]} R_{ℓ,i}. By the multiplicative Chernoff bound, for any δ > 0, Y_ℓ is tightly concentrated around its expectation pn.

E Proof of Lemma 8

Proof. We will use the first moment method and so we need to prove that the expectation of the number of pairs of distinct 0-strong closed vertex-label sequences in G that have at least one label in common goes to 0. To this end, for j ∈ [min(k, k′) − 1], let A_j(k, k′) denote the number of such sequences σ, σ′, with k = |σ|, k′ = |σ′|, that have j labels in common. In particular, for integers k, k′, let σ := v_1, ℓ_1, v_2, ℓ_2, ..., v_k, ℓ_k, v_{k+1} = v_1, and let σ′ be defined analogously. Notice that any such fixed pair σ, σ′ has the same probability to appear, namely p^{2(k+k′−j)} (1 − p)^{(n−2)(k+k′−j)}; indeed, p^{2k} (1 − p)^{(n−2)k} is the probability that σ appears (recall that σ has k labels and it is 0-strong, i.e. each label is only selected by two vertices) and p^{2(k′−j)} (1 − p)^{(n−2)(k′−j)} is the probability that σ′ appears given that σ has appeared. Furthermore, the number of such pairs of sequences is dominated by the number of sequences that overlap in j consecutive labels (e.g. the first j), which is at most n^k m^k n^{k′−j−1} m^{k′−j} (notice that j common labels implies that there are at least j + 1 common vertices). Overall, since n = m and p = c/n, the expectation of A_j(k, k′) can be bounded accordingly. Since n → ∞ and p = c/n, by elementary calculus we have that c² (1 − p)^{n−2} is bounded by a constant (which depends only on c) strictly less than 1. Therefore, the above expectation is at most e^{−(ln n − Θ(1))(k+k′−j)}. Therefore, summing over all choices of k, k′ ∈ [n] and j ∈ [min(k, k′) − 1], we get that the expected number of pairs of distinct 0-strong closed vertex-label sequences that have at least one label in common is at most Σ_{k,k′} Σ_j e^{−(ln n − Θ(1))(k+k′−j)} = o(1). ◀

F Proof of Lemma 10

Proof. For the sake of contradiction, assume C_odd(G^{(b)}) = ∅, but G^{(b)} = ∪⁺_{ℓ∈[m]} M^{(ℓ)} has an odd cycle C_k that is not 0-strong and has minimum length. Notice that C_k corresponds to a closed vertex-label sequence, say σ.
Furthermore, by assumption, conditions (b) and (c) of Definition 9 are satisfied by σ (indeed {v_i, v_{i+1}} ∈ M^{(ℓ_i)}, for all i ∈ [k], and σ is λ-strong, for some λ > 0). Therefore, the only reason for which σ does not belong to C_odd(G^{(b)}) is that condition (a) of Definition 9 is not satisfied, i.e. there are distinct indices i > i′ ∈ [k] such that ℓ_i = ℓ_{i′}. Clearly, such indices are not consecutive (i.e. i′ ≠ i + 1), because ℓ_i is strong and step 6 of our algorithm implies that M^{(ℓ_i)} is a matching of K^{(ℓ_i)}. But then either the vertex-label sequence v_1, ..., v_i, ℓ_i, v_{i′+1}, ℓ_{i′+1}, v_{i′+2}, ..., v_{k+1} = v_1 or the vertex-label sequence v_{i+1}, ℓ_{i+1}, v_{i+2}, ..., v_{i′}, ℓ_i, v_{i+1} corresponds to a shorter odd cycle, which is a contradiction to the minimality of C_k. ◀
[1] D. Altschuler and J. Niles-Weed. The discrepancy of random rectangular matrices. CoRR abs/2101.04036, 2021.
[2] N. Bansal and R. Meka. On the discrepancy of random low degree set systems. In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) 2019, pages 2557-2564.
[3] F. Barahona, M. Grötschel, M. Jünger, and G. Reinelt. An application of combinatorial optimization to statistical physics and circuit layout design. Operations Research, 36(3):493-513, 1988.
[4] M. Bayati, D. Gamarnik, and P. Tetali. Combinatorial approach to the interpolation method and scaling limits in sparse random graphs. Annals of Probability, 41:4080-4115, 2013.
[5] M. Behrisch, A. Taraz, and M. Ueckerdt. Coloring random intersection graphs and complex networks. SIAM Journal on Discrete Mathematics, 23(1):288-299, 2009.
[6] M. Bloznelis, E. Godehardt, J. Jaworski, V. Kurauskas, and K. Rybarczyk. Recent progress in complex network analysis: Models of random intersection graphs. In Studies in Classification, Data Analysis, and Knowledge Organization, pages 69-78. Springer, 2015.
[7] A. Coja-Oghlan, C. Moore, and V. Sanwalani. MAX k-CUT and approximating the chromatic number of random graphs. Random Structures & Algorithms, 28(3):289-322, 2006.
[8] D. Coppersmith, D. Gamarnik, M. Hajiaghayi, and G. Sorkin. Random MAX SAT, random MAX CUT, and their phase transitions. Random Structures & Algorithms, 24(4):502-545, 2004.
[9] A. Dembo, A. Montanari, and S. Sen. Extremal cuts of sparse random graphs. Annals of Probability, 45(2):1190-1217, 2017.
[10] J. Díaz, J. Petit, and M. Serna. A survey on graph layout problems. ACM Computing Surveys, 34:313-356, 2002.
[11] E. Ezra and S. Lovett. On the Beck-Fiala conjecture for random set systems. In Proceedings of Approximation, Randomization, and Combinatorial Optimization: Algorithms and Techniques (APPROX-RANDOM) 2016, pages 29:1-29:10.
[12] J. Fill, E. Scheinerman, and K. Singer-Cohen. Random intersection graphs when m = ω(n): An equivalence theorem relating the evolution of the G(n, m, p) and G(n, p) models. Random Structures & Algorithms, 16(2):156-176, 2000.
[13] D. Gamarnik and Q. Li. On the max-cut of sparse random graphs. Random Structures & Algorithms, 52(2):219-262, 2018.
[14] R. Hoberg and T. Rothvoss. A Fourier-analytic approach for the discrepancy of random set systems. In Proceedings of the 30th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) 2019, pages 2547-2556.
[15] M. Karoński, E. Scheinerman, and K. Singer-Cohen. On random intersection graphs: The subgraph problem. Combinatorics, Probability and Computing, 8:131-159, 1999.
[16] S. Nikoletseas, C. Raptopoulos, and P. Spirakis. Efficient approximation algorithms in random intersection graphs. In Handbook of Approximation Algorithms and Metaheuristics (2nd edition). Chapman and Hall/CRC, 2018.
[17] C. Papadimitriou and M. Yannakakis. Optimization, approximation, and complexity classes. Journal of Computer and System Sciences, 43(3):425-440, 1991.
[18] J. Poland and T. Zeugmann. Clustering pairwise distances with missing data: Maximum cuts versus normalized cuts. In Lecture Notes in Computer Science, volume 4265, pages 197-208, 2006.
[19] S. Poljak and Z. Tuza. Maximum cuts and largest bipartite subgraphs. In DIMACS Series in Discrete Mathematics and Theoretical Computer Science, volume 20, pages 181-244. American Mathematical Society, Providence, R.I., 1995.
[20] A. Potukuchi. Discrepancy in random hypergraph models. CoRR abs/1811.01491, 2018.
[21] C. Raptopoulos and P. Spirakis. Simple and efficient greedy algorithms for Hamilton cycles in random intersection graphs. In Proceedings of the 16th International Symposium on Algorithms and Computation (ISAAC) 2005, pages 493-504.
[22] K. Rybarczyk. Equivalence of a random intersection graph and G(n, p). Random Structures & Algorithms, 38(1-2):205-234, 2011.
[23] I. Shevtsova. On the absolute constants in the Berry-Esseen type inequalities for identically distributed summands. arXiv:1111.6554 [math.PR], 2011.
[
"Efficient Prompting via Dynamic In-Context Learning",
"Efficient Prompting via Dynamic In-Context Learning"
] | [
"Wangchunshu Zhou wangchunshu.zhou@inf.ethz.ch ",
"Yuchen Eleanor Jiang yuchen.jiang@inf.ethz.ch ",
"Ryan Cotterell ryan.cotterell@inf.ethz.ch ",
"Mrinmaya Sachan mrinmaya.sachan@inf.ethz.ch ",
"Eth Zürich "
] | [] | [] | The primary way of building AI applications is shifting from training specialist models to prompting generalist models. A common practice for prompting generalist models, often referred to as in-context learning, is to append a few examples (demonstrations) to the prompt to help the model better understand the task. While effective, in-context learning can be inefficient because it makes the input prompt much longer, consuming valuable space in the context window and leading to larger computational costs. In this paper, we propose DYNAICL, a recipe for efficient prompting with black-box generalist models that dynamically allocate in-context examples according to the input complexity and the computational budget. To achieve this, we train a meta controller that predicts the number of in-context examples suitable for the generalist model to make a good prediction based on the performance-efficiency trade-off for a specific input. We then dynamically allocate the number of demonstrations for an input according to predictions from the meta controller and the given computation budget. Experimental results show that dynamic example allocation helps achieve a better performance-efficiency trade-off in two practical settings where computational resources or the required performance is constrained. Specifically, DYNAICL saves up to 46% token budget compared to the common practice that allocates the same number of in-context examples to each input. We also find that a meta controller trained on a certain backbone model and tasks can successfully generalize to unseen models and tasks. | null | [
"https://export.arxiv.org/pdf/2305.11170v1.pdf"
] | 258,762,345 | 2305.11170 | 8ce6ad6d8a73757309d3b9f525cf15cb68e32397 |
Efficient Prompting via Dynamic In-Context Learning
Wangchunshu Zhou wangchunshu.zhou@inf.ethz.ch
Yuchen Eleanor Jiang yuchen.jiang@inf.ethz.ch
Ryan Cotterell ryan.cotterell@inf.ethz.ch
Mrinmaya Sachan mrinmaya.sachan@inf.ethz.ch
Eth Zürich
Efficient Prompting via Dynamic In-Context Learning
The primary way of building AI applications is shifting from training specialist models to prompting generalist models. A common practice for prompting generalist models, often referred to as in-context learning, is to append a few examples (demonstrations) to the prompt to help the model better understand the task. While effective, in-context learning can be inefficient because it makes the input prompt much longer, consuming valuable space in the context window and leading to larger computational costs. In this paper, we propose DYNAICL, a recipe for efficient prompting with black-box generalist models that dynamically allocate in-context examples according to the input complexity and the computational budget. To achieve this, we train a meta controller that predicts the number of in-context examples suitable for the generalist model to make a good prediction based on the performance-efficiency trade-off for a specific input. We then dynamically allocate the number of demonstrations for an input according to predictions from the meta controller and the given computation budget. Experimental results show that dynamic example allocation helps achieve a better performance-efficiency trade-off in two practical settings where computational resources or the required performance is constrained. Specifically, DYNAICL saves up to 46% token budget compared to the common practice that allocates the same number of in-context examples to each input. We also find that a meta controller trained on a certain backbone model and tasks can successfully generalize to unseen models and tasks.
Introduction
The field of Artificial Intelligence is witnessing a major paradigm shift from training and deploying multiple specialist models for specific tasks to pre-training a generalist model (e.g., a large language model (LLM)) and prompting it for different tasks [1][2][3][4][5][6][7][8]. While prompting is an elegant and effective way to utilize generalist models, the issue of computational inefficiency remains a major limitation. We identify two key sources of the computational inefficiency of prompting generalist models: model size and sample size. The former is arguably a prerequisite for generalist models to solve all kinds of tasks via prompting, and there already exist a number of model compression techniques [9][10][11][12] that aim to reduce the size of generalist models. One obvious limitation of these approaches is that they all require users to have access to the model parameters, which may not be the case in the era of generalist models. For instance, some state-of-the-art generalist models, such as ChatGPT, Bard, PaLM, and Claude, are pre-trained by corporations and kept closed-source for commercial reasons.
In this paper, we instead focus on reducing sample size, a relatively new perspective for improving the efficiency of black-box generalist models whose parameters are unavailable to users. This direction received little attention in the era of specialist models, where inputs and outputs were clearly defined and left little redundancy to remove. This is no longer true in the context of prompting generalist models such as LLMs, because there are many different ways to prompt a model, resulting in prompts of different lengths. We identify the main factor influencing the prompt length to be the use of in-context learning and the number of in-context examples (demonstrations) in the prompt. Specifically, in-context learning [3] refers to the practice of adding a few exemplar input-output pairs that are related to the input, which helps the generalist model better understand and solve the problem. Although it is still unclear exactly how in-context examples help a generalist model [13][14][15], it is evident that more complex samples require more in-context examples for a generalist model to acquire contextual understanding, whereas simpler samples may be solvable without relying on in-context learning at all. This is confirmed by our preliminary study, which also finds that assigning more in-context examples to simple samples occasionally confuses the generalist model and turns its prediction from correct to erroneous. This suggests that the current practice of allocating a fixed number of in-context examples to all inputs is sub-optimal.
To this end, we propose Dynamic In-Context Learning (DYNAICL), a dynamic computation framework for prompting generalist models. DYNAICL is conceptually similar to previous work on input adaptive computation for specialist models [16][17][18][19][20][21]. The main difference is that DYNAICL aims to dynamically adjust the size of the input while previous work focuses on adjusting the complexity of the model. This results in a major advantage of DYNAICL: it only operates on inputs, thus is disentangled with model architectures or parameters, and suits an increasingly common scenario in the era of generalist models where the users do not have access to the model's parameters. To achieve this, we train a meta controller that predicts the number of in-context examples suitable for the generalist model to make a good performance-efficiency trade-off given a specific input. The meta controller can be instantiated with a smaller pre-trained model (e.g., FLAN-T5 [22]) and multi-task fine-tuned with the combination of supervised learning with a novel data synthesis algorithm and reinforcement learning with rewards based on performance-efficiency trade-off. Then at test time, we can dynamically allocate the number of demonstrations for an input according to both the predictions from the meta controller and the computation budget. We illustrate the procedure of efficient prompting with DYNAICL in Figure 1.
We test the effectiveness of DYNAICL in the context of prompting LLMs, currently the predominant use case for generalist models. We experiment with ChatGPT as the generalist model and train a meta controller on a subset of the FLAN datasets collection [23]. We evaluate DYNAICL in two practical settings where either the computational resources or the performance is constrained. We find that, compared with the common practice of uniformly allocating in-context examples, DYNAICL achieves an average absolute performance improvement of 2.6% within a given computational budget, or reaches a given performance level with up to 46% less compute (in terms of total token consumption), across 8 tasks. We also find that a meta controller trained on certain tasks with a certain generalist model (i.e., ChatGPT) generalizes well to unseen tasks (even with different output formats) and other generalist models (e.g., LLAMA [8]). To the best of our knowledge, our work is among the first approaches that can accelerate a black-box generalist model without access to its parameters.
Methodology
Background: Prompting and In-Context Learning
We first recall some basics of prompting and in-context learning. Prompting refers to the process of providing a prompt, which typically contains a description of the task and the task input, to a generalist model in order to guide its response generation. Formally, let G be a generalist model and P be a prompt. Then, the output O is given by O = G(P). Prompting relies on the generalist model's ability to understand and follow abstract instructions, which sometimes leads to unsatisfactory empirical performance, especially for hard tasks that require complex reasoning.
On the other hand, in-context learning leverages the ability of a generalist model to adapt to new information provided within the input context. Formally, given N labeled examples {(x_i, y_i)}, i = 1, ..., N, and a hand-crafted template T, in-context learning first verbalizes each input-output pair with the template, resulting in demonstrations d_i = T(x_i, y_i). Then the generalist model takes the concatenation of the original prompt and the demonstrations to generate the output:
\[
O = G(P \oplus d_1 \oplus d_2 \oplus \cdots \oplus d_N) , \tag{1}
\]
where ⊕ denotes the concatenation of token sequences.
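As a minimal illustration of this concatenation (our own sketch; the template below is an assumption, not the paper's exact prompt format):

```python
def verbalize(template: str, x: str, y: str) -> str:
    # d_i = T(x_i, y_i): render one labeled example as a demonstration.
    return template.format(input=x, output=y)

def build_prompt(task_prompt: str, examples: list[tuple[str, str]],
                 template: str = "Input: {input}\nOutput: {output}") -> str:
    # O = G(P ⊕ d_1 ⊕ ... ⊕ d_N): the demonstrations and the original prompt
    # are concatenated into a single token sequence.
    demos = "\n\n".join(verbalize(template, x, y) for x, y in examples)
    return f"{demos}\n\n{task_prompt}" if demos else task_prompt
```

With N = 0 this degenerates to plain prompting, O = G(P).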
Meta Controller
Architecture and Input/Output Formats: The meta controller C can be modeled by any sequence generation model, including both encoder-decoder models and decoder-only models. We use an instruction-tuned model such as FLAN-T5 as the backbone for the meta controller to facilitate training. As illustrated in Figure 1, it receives a task instruction and an input, which is identical to most instruction tuning literature [24,22,25]. But instead of generating the corresponding outputs like instruction-tuned models, our meta controller is trained to generate the number of in-context examples suitable for the input to achieve the best performance-efficiency trade-off, which we denote as k. This process can be expressed by k = C(P). The output can thus be read as the meta controller's estimate of how confident the generalist model will be on the given input. This is related to, but distinct from, prior work on model calibration [26,27], which addresses a model's confidence in its own predictions.
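A minimal sketch of how such a controller could be queried, assuming the FLAN-T5 backbone described above; the decoding setup and the fallback for non-numeric outputs are our assumptions:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
controller = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

def predict_num_examples(instruction: str, task_input: str, max_k: int = 20) -> int:
    # k = C(P): the controller reads the instruction and the input and decodes
    # the suitable number of in-context examples as plain text (e.g. "3").
    prompt = f"{instruction}\n{task_input}"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = controller.generate(ids, max_new_tokens=2)
    text = tok.decode(out[0], skip_special_tokens=True).strip()
    try:
        return min(max(int(text), 0), max_k)
    except ValueError:          # an untrained backbone may emit non-numeric text
        return max_k // 2
```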
Training We then present our two-stage training framework for the meta controller. In the first stage, we train the meta controller to predict the minimum number of in-context examples for the generalist model to produce a good output. "A good output" can have different definitions for different tasks. For example, it can be defined as predicting the correct label with a high probability for classification tasks and generating outputs similar to the ground truth for generation tasks. In this paper, we consider only classification tasks following [28,29]. To synthesize training data for supervised training, we propose a simple and intuitive data generation method. Specifically, for a prompt P , we consider the minimum number of in-context examples k * for it to be the number that makes the generalist model's expected accuracy exceed a certain (hand-crafted) threshold t:
\[
k^{*} = \min\Big\{ k \in \mathbb{N} \;:\; \mathbb{E}_{(x_{i_1}, y_{i_1}), \ldots, (x_{i_k}, y_{i_k}) \sim D_k} \Big[ \mathrm{Acc}\big( G\big(P,\; T(x_{i_1}, y_{i_1}) \oplus \cdots \oplus T(x_{i_k}, y_{i_k})\big) \big) \Big] > t \Big\} \tag{2}
\]
where D_k denotes all subsets of the training data of size k.
We synthesize (P, k * ) pairs on a mixture of instruction-tuning datasets from the FLAN collection and train the meta controller with maximum likelihood estimation.
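A sketch of this data-generation step (ours): the expectation in equation (2) is replaced by a Monte Carlo estimate over random k-subsets of demonstrations; the `generalist` callable, the template, and the default threshold are assumptions:

```python
import random

TEMPLATE = "Input: {input}\nOutput: {output}"

def estimate_acc(generalist, prompt, gold, train_pool, k, n_samples=8) -> float:
    # Monte Carlo estimate of E_{D_k}[Acc(G(P, T(x_1,y_1) ⊕ ... ⊕ T(x_k,y_k)))].
    hits = 0
    for _ in range(n_samples):
        demos = random.sample(train_pool, k)
        prefix = "\n\n".join(TEMPLATE.format(input=x, output=y) for x, y in demos)
        full = f"{prefix}\n\n{prompt}" if prefix else prompt
        hits += int(generalist(full).strip() == gold)
    return hits / n_samples

def synthesize_kstar(generalist, prompt, gold, train_pool,
                     t: float = 0.8, max_k: int = 10) -> int:
    # k* of equation (2): the smallest k whose estimated accuracy exceeds t.
    for k in range(max_k + 1):
        if estimate_acc(generalist, prompt, gold, train_pool, k) > t:
            return k
    return max_k  # fallback when no k within the budget clears the threshold
```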
After the first stage, the meta controller can already predict a reasonable number of in-context examples for a prompt. However, we may want it to better satisfy a certain performance-efficiency trade-off in a more fine-grained way. To this end, we propose to fine-tune the meta controller with reinforcement learning using a reward reflecting the performance-efficiency trade-off. In particular, we define the reward R to be a linear interpolation of the expected performance (defined as accuracy in case of classification task), and the efficiency, defined as the number of in-context examples k:
\[
R(G, P, k) = \mathbb{E}_{(x_{i_1}, y_{i_1}), \ldots, (x_{i_k}, y_{i_k}) \sim D_k} \Big[ \mathrm{Acc}\big( G\big(P,\; T(x_{i_1}, y_{i_1}) \oplus \cdots \oplus T(x_{i_k}, y_{i_k})\big) \big) \Big] + \alpha \cdot k \tag{3}
\]
where α is the weight controlling whether the controller should lean towards better performance or efficiency. The meta controller C is then fine-tuned with policy gradient:
\[
\nabla_{\theta} J(\theta) = \mathbb{E}_{P \sim \mathcal{P},\, k \sim C(k \mid P, \theta)} \big[ \nabla_{\theta} \log C(k \mid P, \theta)\, \big( R(G, P, k) - b \big) \big] \tag{4}
\]
where 𝒫 is the set of prompts from a mixture of instruction tuning datasets, b is a baseline calculated as the moving average of previous rewards, and C(k|P, θ) denotes the predicted probability mass of k from the meta controller C for a prompt P.
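A sketch of one fine-tuning step (ours, under several assumptions): the reward reuses `estimate_acc` from the sketch above, with a negative α so that extra demonstrations are penalized; `tok` and `controller` are the FLAN-T5 objects from the earlier sketch; and the controller emits k as a short digit string.

```python
import torch

def icl_reward(generalist, prompt, gold, train_pool, k, alpha=-0.02) -> float:
    # R(G, P, k) = E[Acc] + alpha * k, as in equation (3).
    return estimate_acc(generalist, prompt, gold, train_pool, k) + alpha * k

def reinforce_step(controller, tok, optimizer, prompt: str,
                   baseline: float, reward_fn) -> float:
    # One policy-gradient update following equation (4); reward_fn(k) -> float.
    enc = tok(prompt, return_tensors="pt")
    sampled = controller.generate(**enc, max_new_tokens=2, do_sample=True)
    k_text = tok.decode(sampled[0], skip_special_tokens=True).strip()
    k = int(k_text) if k_text.isdigit() else 0
    # Recompute log C(k | P, theta) with gradients enabled: the seq2seq loss is
    # the mean token NLL of the target, so multiply by the target length.
    labels = tok(str(k), return_tensors="pt").input_ids
    log_prob = -controller(**enc, labels=labels).loss * labels.shape[1]
    loss = -(reward_fn(k) - baseline) * log_prob
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```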
The training framework can be easily adapted for generation tasks by changing the accuracy metric to some generation metrics such as BLEU [30] or BERTScore [31], and doing some normalization to make it compatible with classification tasks. We leave this for future work.
Dynamic In-Context Example Allocation
After training, the meta controller predicts the number of in-context examples for a specific input. This is a naive version of DYNAICL. However, in practice one may have a different computation budget. Therefore it is often desirable to normalize the predictions from the meta controller and dynamically adjust the actual number of in-context examples according to the computation budget.
In this work, we propose a simple recipe for dynamic in-context example allocation. Assume we have a budget of N tokens¹ for K samples. The uniform baseline allocates N/(K · L) in-context examples to each sample, where L is the average length of an example. DYNAICL instead allocates E in-context examples to an input P following:
\[
E(P) = \Big[ \beta \cdot \frac{C(P)}{\bar{C}} \cdot \frac{N}{K \cdot L} \Big] \tag{5}
\]
where C(P) is the prediction from the meta controller, [·] denotes the rounding operator, C̄ is the averaged prediction over all samples, and β is the token saving ratio ranging from 0 to 1.
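A direct sketch of this allocation rule (ours; function and parameter names are hypothetical):

```python
def allocate_examples(predictions: list[int], budget_tokens: int,
                      avg_example_len: float, beta: float = 1.0) -> list[int]:
    # Equation (5): scale each controller prediction C(P) by the per-sample
    # budget N/(K*L) relative to the mean prediction C-bar, then round.
    K = len(predictions)
    c_bar = sum(predictions) / K
    per_sample = budget_tokens / (K * avg_example_len)
    return [round(beta * (c / c_bar) * per_sample) for c in predictions]
```

When all predictions are equal, this reduces to the uniform baseline of [β · N/(K · L)] examples per sample.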
Experiments
In this section, we test the empirical effectiveness of DYNAICL by experimenting on some NLP tasks with ChatGPT, a popular large language model, as the generalist model. We first describe the experimental settings. Then we begin with a preliminary study about the impact of the number of in-context examples to motivate our approach. After that, we evaluate DYNAICL by answering two research questions for two realistic settings:
• RQ1: To what extent can DYNAICL improve the performance of a generalist model with fixed computational budgets?
• RQ2: To what extent can DYNAICL reduce the computational cost or token consumption for a generalist model to achieve a fixed target performance?
Experimental Settings
Models We consider ChatGPT as the generalist model for training the meta controller and the main experiments. We use LLAMA-65B as an unseen generalist model for evaluating the generalization ability of the meta controller. We use FLAN-T5-large, which has less than 1B parameters, to initialize the meta controller. We also test with FLAN-T5-base in the analysis.
Tasks We use a subset in the FLAN collection containing 30+ classification tasks to train the meta controller. For evaluation, we test DYNAICL on both seen and unseen tasks, which are explicitly excluded from the training data for the meta controller. To be specific, we use SST-2 [32], AGNews [33], RTE [34][35][36][37], CB [38], ARC-E [39], ARC-C [39], MRPC [40], and COPA [41] as the seen tasks, and PIQA [42], OpenBookQA [43], CommonsenseQA [44], TriviaQA [45], Natural Questions [46], and Web Questions [47] as unseen tasks. It is noteworthy that TriviaQA, Natural Questions, and Web Questions are not classification tasks but a trained meta controller can still be used despite being trained only on classification tasks. This is because its input format (i.e., instruction + input) is agnostic to the type of the task.
Training Details We follow Wei et al. [22] and fine-tune the meta controller for 30k/5k gradient steps with a batch size of 8,192 tokens using the Adafactor Optimizer [48] with a learning rate of 3e-5/1e-5, for the first/second training stage, respectively.
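For concreteness, a sketch of the corresponding optimizer setup (ours); the Adafactor flags for using a fixed learning rate are assumptions:

```python
from transformers.optimization import Adafactor

def make_optimizer(model, stage: int):
    # Stage 1: 30k steps at lr 3e-5; stage 2: 5k steps at lr 1e-5 (batch size
    # of 8,192 tokens), following the setup described above.
    lr = 3e-5 if stage == 1 else 1e-5
    return Adafactor(model.parameters(), lr=lr, relative_step=False,
                     scale_parameter=False, warmup_init=False)
```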
Baselines We mainly compare DYNAICL with the uniform baseline that allocates the same number of in-context examples to each sample, and the random baseline that randomly samples the number of in-context examples from a Gaussian distribution. We only compare these two naive baselines because there is no prior work in this direction, and popular methods for efficient NLP cannot be applied in this setting.
Preliminary Study: How Much Do More In-Context Examples Help?
We first conduct a preliminary study investigating the role of adding more in-context examples to the prompt for different samples. We first test whether most samples for a task require a similar number of in-context examples for a generalist model to generate a good output. We plot the distribution of the number of in-context examples that suffice for making the correct prediction for samples from the CommonsenseQA dataset that cannot be answered correctly by zero-shot inference with ChatGPT or LLAMA-65B but can be solved with in-context learning for up to 10 shots. As shown in Figure 2, different samples require very different numbers of in-context examples. Some hard examples require 10 in-context examples for a generalist model to make the correct prediction, while most examples require only one in-context example or can be solved with zero-shot inference. This observation confirms the necessity of dynamically allocating in-context examples according to sample difficulty. Moreover, we can see that ChatGPT and LLAMA-65B share similar trends in the figure. This suggests that a meta controller trained with one generalist model may be able to generalize to other generalist models, which is later confirmed in our analysis.
Then we further analyze the effect of scaling to more in-context examples. As shown in Figure 3, the effectiveness of adding more in-context examples to the prompt is amortized when there are already a few (e.g., 5) in-context examples. This also supports our motivation that only a few samples require many in-context examples, and uniformly allocating an equal number of in-context examples to all samples is a waste of tokens and computation. More interestingly, we find that it can sometimes be harmful to include more in-context examples for a sample that the generalist model can already solve correctly: a non-negligible fraction of samples' predictions change from correct to incorrect after more in-context examples are added. This further confirms the potential of DYNAICL to achieve better performance while consuming fewer tokens.
Main Results
We first compare the performance of DYNAICL with the baselines in Table 1. We can see that DYNAICL leads to an average performance improvement of 2.6% and 1.4% over the uniform baseline with budgets of 5 and 10 in-context examples per sample, respectively. This confirms that DYNAICL leads to improved performance with fixed budgets. We also plot the trend of average performance on seen tasks under different token-saving ratios in Figure 4 (a). We can see that DYNAICL leads to consistent improvements across all budgets, and the improvements are larger when the computation/token budget is more limited. We then show the extent to which DYNAICL can save tokens when achieving a fixed target performance in Figure 4 (b). We can see that DYNAICL consistently requires fewer tokens to match the performance achieved by the uniform baseline with a given budget. Specifically, DYNAICL only consumes 108 tokens on average to match the performance of the common practice with 200 tokens on average. This confirms that DYNAICL can effectively reduce token/computation consumption for achieving a fixed target performance.
Analysis
We then conduct an analysis investigating the impact of the different components of DYNAICL, as well as its ability to generalize to tasks and generalist models that are unseen when training the meta controller.
Ablation Study We first analyze the impact of the two training stages, the size of the meta controller, and the number of tasks the meta controller is trained with. The results are shown in Table 2. We find that both training stages contribute to the performance of DYNAICL, and that the first stage is more important. We think this is because the first training stage provides an important starting point for the second stage using reinforcement learning. We also find that DYNAICL with a smaller meta controller, or with a meta controller trained on fewer tasks, still achieves competitive performance.
Generalization on Unseen Tasks
We then test how well DYNAICL generalizes to unseen tasks. The results are shown in Table 3. We find that DYNAICL consistently leads to performance improvements across all 6 unseen tasks. Notably, DYNAICL also leads to substantial improvements on Natural Questions and Web Questions, which are generative question answering datasets that are very different from the text classification tasks seen during training. This confirms that DYNAICL can generalize well to tasks that are not used to train the meta controller.

Table 2: Ablation study results. "-first stage" and "-second stage" denote the ablated variants where the meta controller is not trained with the first or second stage, respectively. "w/ smaller model" and "w/ fewer tasks" denote the ablated variants where the meta controller is parameterized with FLAN-T5-Base and where it is trained with 50% fewer training tasks.
Generalization on Unseen Generalist Models
We also test whether DYNAICL can generalize to generalist models that are not used for training the meta controller, by applying the meta controller trained with ChatGPT to LLAMA-65B as the generalist model. Results in Figure 5 (a) show that DYNAICL still saves a substantial number of tokens for achieving the same performance as the uniform baseline, even when tested with a different generalist model. This confirms that DYNAICL can generalize well to generalist models that are not used to train the meta controller.
Distribution of In-context Examples Count
We then plot the distribution of samples according to the number of in-context examples allocated to them, to better understand the meta controller. As shown in Figure 5 (b), with a target budget of 5 in-context examples, a large portion of samples are allocated exactly 5 in-context examples in DYNAICL. This indicates that most samples are predicted to need a number of in-context examples similar to the averaged prediction. We also find that more samples are assigned fewer than 5 in-context examples, while a few hard samples are assigned more. We present a qualitative study of different samples and the corresponding number of in-context examples allocated to them in the Appendix.

Computation Cost of the Meta Controller

Finally, it is noteworthy that the meta controller does add some computational cost and latency overhead to the overall prompting procedure. However, since the meta controller can use a very small backbone such as T5-large or T5-base, its computation cost is negligible compared to that of a generalist model. To be specific, the computational cost (in terms of FLOPs) of a T5-large based meta controller for a sample of 50 tokens is less than 0.1% of the change in computation cost when changing the input from 200 tokens to 199 tokens, or less than 0.0005% of the computational cost saved by reducing one in-context example from the prompt. Similarly, since the meta controller only needs to predict 1 or 2 tokens, the latency overhead accounts for only 0.1% to 0.2% of the latency of calling the GPT-3.5-turbo API, while reducing one in-context example leads to a speedup of around 10%. In sum, we believe the computational and latency overhead from the meta controller is almost negligible.
Related Works
Generalist Models, Prompting, and In-context Learning
Training a generalist model that can solve a wide range of tasks without task-specific training has been a long-standing goal in the field of artificial intelligence. One pioneering work dates back to Collobert and Weston [49], who attempted to solve all NLP tasks with a shared architecture using multi-task learning. This idea was further developed by decaNLP [50], which proposes to convert all NLP tasks to a question answering format. T5 [51] then improves this paradigm by using a text-to-text format to unify all NLP tasks, which is more general and friendly to scaling. Finally, GPT-3 [3] shows that by scaling model size, training data, and training FLOPs, a large language model can serve as a generalist model that solves many tasks when given a prompt that simply describes the task and the input. It also shows that the zero-shot ability of a large language model can be further improved by adding a few input-output demonstrations to the prompt to help the model better understand the task. Since then, a large body of work has been done on improving and understanding prompting and in-context learning with large language models. For instance, Schick and Schütze [52] show that small encoder models can also be prompted. Min et al. [13] show that in-context examples mainly help a generalist model learn the output label space and the distribution of input text. Kadavath et al. [27] prove that generalist models are well calibrated and can be trained to model their confidence level. Hao et al. [28] and Li et al. [29] show that in-context learning with many examples improves the overall performance of a generalist model. Recently, the paradigm of prompting generalist models has been successfully transferred to modalities other than language. For example, Zhou et al. [53] and Li et al. [54] explored prompting vision models. It is foreseeable that prompting generalist models will become the de-facto paradigm for most domains in artificial intelligence.
Efficient Deep Learning
Improving the performance-efficiency trade-off of deep learning models is a very interesting research problem that has been attracting more and more attention since the rise of large generalist models [55,56]. A large body of work has been done on improving the speed and efficiency of large language models, including both static methods such as knowledge distillation [57,58,9], pruning [59,10], quantization [60,61,11], and module replacing [12], and dynamic methods such as adaptive computation [17], early exiting [18][19][20], and model cascades [62,63].
However, most of the aforementioned methods require access to the model parameters, which may not be possible in the era of generalist models, since most state-of-the-art generalist models such as ChatGPT and PaLM are closed-source. One potential exception is the model cascade, which first sends the input to a cheaper model and optionally forwards it to a more powerful model if the previous model is not confident enough. This method, however, also faces latency issues because harder samples are computed by multiple generalist models sequentially. Modarressi et al. [64] proposed dynamic input reduction methods to reduce the length of the input. However, their approach requires access to the model parameters and needs to modify the model architecture, with additional training. Concurrently with our work, Mu et al. [65] propose to train gist tokens to replace long-form prompts and show promising results on prompt compression. However, this approach is still limited to white-box settings where the model parameters are available, and it also compromises interpretability. Our approach instead focuses on the black-box setting and can be applied to any closed-source generalist model.
Limitations
As for technical limitations, the main limitation of this work is that we only test DYNAICL on NLP tasks with LLMs as the backbone, while it may also be interesting to test on other modalities, such as vision tasks with multi-modal generalist models. This is because the main experiments were conducted before multi-modal instruction-following models such as LLaVA came out. We leave this for future work. Another limitation is that we only train the meta controller with text classification datasets. We explain how the meta controller can be trained on generation tasks at the end of Section 2.2. We also experiment with some generative question answering datasets and show that DYNAICL trained only on classification tasks can successfully transfer to these tasks. Finally, the dynamic in-context example allocation algorithm is quite naive. Potential improvements may be made using more sophisticated planning or optimization algorithms. We also leave this for future work.
As for social impact, this work aims to reduce the token/computation consumption of prompting generalist models. It will probably have a positive environmental impact and is unlikely to have any negative social impact.
Conclusions
This paper introduces DYNAICL, a framework for efficiently prompting generalist models. We propose to train a meta controller that predicts the suitable number of in-context examples for a specific sample with a two-stage training framework. During inference, DYNAICL dynamically allocates different numbers of in-context examples to samples according to the predictions from the meta controller and the given computational budget. Our experiments show that DYNAICL consistently leads to better performance-efficiency trade-offs across tasks and generalist models that are either seen or unseen when training the meta controller. This work is among the first to investigate how to prompt generalist models more efficiently even when they are only available through API calls, and it will hopefully shed light on this direction.
Figure 1: Overview of the DYNAICL framework. Given a set of samples and a token/computation budget, a meta controller first predicts the number of in-context examples suitable for each sample. The predictions are then normalized and adjusted according to the budget. We then append the corresponding number of in-context examples to the original prompt. The prompts are then fed into a generalist model to generate predictions.

Figure 2: Distribution of the number of in-context examples that suffice for making the correct prediction for samples that cannot be answered correctly by zero-shot inference with generalist models but can be solved with in-context learning for up to 10 shots. The generalist models we consider are ChatGPT and LLAMA-65B, and the dataset is CSQA.

Figure 3: The impact of adding more in-context examples. ∆ Accuracy denotes the change of accuracy after adding more in-context examples. The two curves denote the percentages of examples whose predictions change from incorrect to correct and from correct to incorrect, respectively, after adding more in-context examples. We use ChatGPT as the generalist model and TriviaQA as the dataset.

Figure 4: Performance of DYNAICL in settings where either the computation (token) budget is fixed (a) or the target performance is fixed (b). (a) Performance comparison between DYNAICL and the uniform baseline under different token saving ratios. The token saving ratio is defined as the ratio between actual token usage and the token usage of using 20 in-context examples per sample. The accuracy is averaged across all seen test datasets. The dashed line is the zero-shot performance. (b) Token saving ratio of DYNAICL compared to the uniform baseline under performance constraints, which are defined by the performance of the uniform baseline with different token budgets. Each point (x, y) on the line indicates that, on average, DYNAICL needs to use y tokens to match the performance of the uniform baseline with x tokens.

Figure 5: Analysis of the generalization ability of DYNAICL on unseen generalist models and the distribution of samples according to the number of in-context examples allocated to them. (a) Token saving ratio of DYNAICL compared to the uniform baseline under different performance constraints on seen tasks; DYNAICL is trained with ChatGPT but tested with LLAMA-65B. (b) Distribution of samples (on seen tasks) according to the number of in-context examples allocated to them; the computational budget is fixed to 5 in-context examples per sample.
Table 1: Main results on seen tasks during meta controller training. The total computation/token budget is the same inside each group. DYNAICL consistently outperforms all baselines across all tasks under different computation/token budgets.

| Models | SST-2 | AGNews | RTE | CB | ARC-E | ARC-C | MRPC | COPA | Avg. Acc |
|---|---|---|---|---|---|---|---|---|---|
| zero-shot | | | | | | | | | |
| ChatGPT | 88.5 | 84.5 | 84.5 | 89.5 | 85.1 | 61.0 | 88.4 | 67.2 | 81.1 |
| Budget: 5-shots on average | | | | | | | | | |
| Uniform | 93.2 | 87.9 | 86.1 | 91.1 | 88.3 | 64.8 | 90.4 | 88.2 | 86.2 |
| Random | 93.0 | 87.7 | 86.1 | 91.0 | 88.1 | 65.0 | 90.4 | 89.4 | 86.3 |
| DYNAICL | 95.3 | 90.2 | 88.1 | 92.9 | 90.5 | 68.4 | 91.8 | 93.0 | 88.8 |
| Budget: 10-shots on average | | | | | | | | | |
| Uniform | 95.8 | 90.9 | 88.5 | 93.1 | 90.8 | 68.3 | 92.0 | 93.4 | 89.1 |
| Random | 95.9 | 90.7 | 88.4 | 93.3 | 90.8 | 68.2 | 92.1 | 92.8 | 88.9 |
| DYNAICL | 96.7 | 92.5 | 90.0 | 94.1 | 91.9 | 70.0 | 93.1 | 95.8 | 90.5 |
Table 3: Analysis of the generalization ability of DYNAICL on datasets that are unseen when training the meta controller. Tasks with the (EM) suffix are generative question answering tasks, for which we use exact match as the metric. DYNAICL still consistently outperforms the baseline across all tasks.
¹ We consider the budget in terms of the token count because this is the typical scenario for using commercial generalist models such as ChatGPT. We omit the token consumption for the original input for simplicity.
[1] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2018.
[2] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[3] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc., 2020.
[4] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.
[5] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Gray, et al. Training language models to follow instructions with human feedback. In Advances in Neural Information Processing Systems, 2022.
[6] OpenAI. GPT-4 technical report, 2023.
[7] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. OPT: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
[8] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. LLaMA: Open and efficient foundation language models, 2023.
[9] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, 2020.
[10] Paul Michel, Omer Levy, and Graham Neubig. Are sixteen heads really better than one? In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[11] Tim Dettmers, Mike Lewis, Younes Belkada, and Luke Zettlemoyer. LLM.int8(): 8-bit matrix multiplication for transformers at scale. In Advances in Neural Information Processing Systems, 2022.
[12] Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu Wei, and Ming Zhou. BERT-of-Theseus: Compressing BERT by progressive module replacing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7859-7869, 2020.
[13] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11048-11064, 2022.
[14] Kang Min Yoo, Junyeob Kim, Hyuhng Joon Kim, Hyunsoo Cho, Hwiyeol Jo, Sang-Woo Lee, Sang-goo Lee, and Taeuk Kim. Ground-truth labels matter: A deeper look into input-label demonstrations. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2422-2437, 2022.
[15] Damai Dai, Yutao Sun, Li Dong, Yaru Hao, Zhifang Sui, and Furu Wei. Why can GPT learn in-context? Language models secretly perform gradient descent as meta-optimizers, 2022.
[16] Yizeng Han, Gao Huang, Shiji Song, Le Yang, Honghui Wang, and Yulin Wang. Dynamic neural networks: A survey, 2021.
[17] Alex Graves. Adaptive computation time for recurrent neural networks, 2017.
[18] Surat Teerapittayanon, Bradley McDanel, and H. T. Kung. BranchyNet: Fast inference via early exiting from deep neural networks. In ICPR, pages 2464-2469. IEEE, 2016.
[19] Roy Schwartz, Gabriel Stanovsky, Swabha Swayamdipta, Jesse Dodge, and Noah A. Smith. The right tool for the job: Matching model and instance complexities. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6640-6651, 2020.
[20] Wangchunshu Zhou, Canwen Xu, Tao Ge, Julian J. McAuley, Ke Xu, and Furu Wei. BERT loses patience: Fast and robust inference with early exit. In NeurIPS, 2020.
[21] Gao Huang, Yulin Wang, Kangchen Lv, Haojun Jiang, Wenhui Huang, Pengfei Qi, and Shiji Song. Glance and focus networks for dynamic visual recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4605-4621, 2023.
[22] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2022.
[23] Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V. Le, Barret Zoph, Jason Wei, and Adam Roberts. The Flan collection: Designing data and methods for effective instruction tuning, 2023.
[24] Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, et al. Multitask prompted training enables zero-shot task generalization. In International Conference on Learning Representations, 2022.
[25] Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca, 2023.
On calibration of modern neural networks. Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q Weinberger, ICML. PMLR70Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q. Weinberger. On calibration of modern neural networks. In ICML, volume 70 of Proceedings of Machine Learning Research, pages 1321-1330. PMLR, 2017.
. Saurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova Dassarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olahand Jared Kaplan. Language models (mostly) know what they knowSaurav Kadavath, Tom Conerly, Amanda Askell, Tom Henighan, Dawn Drain, Ethan Perez, Nicholas Schiefer, Zac Hatfield-Dodds, Nova DasSarma, Eli Tran-Johnson, Scott Johnston, Sheer El-Showk, Andy Jones, Nelson Elhage, Tristan Hume, Anna Chen, Yuntao Bai, Sam Bowman, Stanislav Fort, Deep Ganguli, Danny Hernandez, Josh Jacobson, Jackson Kernion, Shauna Kravec, Liane Lovitt, Kamal Ndousse, Catherine Olsson, Sam Ringer, Dario Amodei, Tom Brown, Jack Clark, Nicholas Joseph, Ben Mann, Sam McCandlish, Chris Olah, and Jared Kaplan. Language models (mostly) know what they know, 2022.
. Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, Furu Wei, Structured prompting: Scaling in-context learning to 1,000 examplesYaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, and Furu Wei. Structured prompting: Scaling in-context learning to 1,000 examples, 2022.
In-context learning with many demonstration examples. Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, Lingpeng Kong, Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, and Lingpeng Kong. In-context learning with many demonstration examples, 2023.
Bleu: a method for automatic evaluation of machine translation. Kishore Papineni, Salim Roukos, Todd Ward, Wei-Jing Zhu, 10.3115/1073083.1073135Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. the 40th Annual Meeting of the Association for Computational LinguisticsPhiladelphia, Pennsylvania, USAAssociation for Computational LinguisticsKishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA, July 2002. Association for Computational Linguistics. doi: 10.3115/1073083.1073135. URL https: //aclanthology.org/P02-1040.
Bertscore: Evaluating text generation with BERT. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, Yoav Artzi, ICLR. OpenReview.net. Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. Bertscore: Evaluating text generation with BERT. In ICLR. OpenReview.net, 2020.
Recursive deep models for semantic compositionality over a sentiment treebank. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D Manning, Andrew Ng, Christopher Potts, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingSeattle, Washington, USAAssociation for Computational LinguisticsRichard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA, October 2013. Association for Computational Linguistics. URL https://aclanthology.org/D13-1170.
Character-level convolutional networks for text classification. Xiang Zhang, Junbo Jake Zhao, Yann Lecun, NIPS. Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In NIPS, 2015.
The pascal recognising textual entailment challenge. Oren Ido Dagan, Bernardo Glickman, Magnini, Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment: First PASCAL Machine Learning Challenges Workshop, MLCW 2005. Southampton, UKSpringerRevised Selected PapersIdo Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment: First PASCAL Machine Learning Chal- lenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers, pages 177-190. Springer, 2006.
The second pascal recognising textual entailment challenge. Ido R Bar Haim, Bill Dagan, Lisa Dolan, Danilo Ferro, Bernardo Giampiccolo, Idan Magnini, Szpektor, Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment. the Second PASCAL Challenges Workshop on Recognising Textual Entailment7R Bar Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. The second pascal recognising textual entailment challenge. In Proceedings of the Second PASCAL Challenges Workshop on Recognising Textual Entailment, volume 7, 2006.
The third pascal recognizing textual entailment challenge. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, William B Dolan, Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing. the ACL-PASCAL workshop on textual entailment and paraphrasingDanilo Giampiccolo, Bernardo Magnini, Ido Dagan, and William B Dolan. The third pascal recognizing textual entailment challenge. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 1-9, 2007.
The fifth pascal recognizing textual entailment challenge. Luisa Bentivogli, Peter Clark, Ido Dagan, Danilo Giampiccolo, TAC. CiteseerLuisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. The fifth pascal recognizing textual entailment challenge. In TAC. Citeseer, 2009.
The commitmentbank: Investigating projection in naturally occurring discourse. Marie-Catherine De Marneffe, Mandy Simons, Judith Tonhauser, proceedings of Sinn und Bedeutung. Sinn und Bedeutung23Marie-Catherine De Marneffe, Mandy Simons, and Judith Tonhauser. The commitmentbank: Investigating projection in naturally occurring discourse. In proceedings of Sinn und Bedeutung, volume 23, pages 107-124, 2019.
Think you have solved question answering? try arc, the ai2 reasoning challenge. Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, Oyvind Tafjord, Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? try arc, the ai2 reasoning challenge, 2018.
Automatically constructing a corpus of sentential paraphrases. B William, Chris Dolan, Brockett, Proceedings of the Third International Workshop on Paraphrasing (IWP2005). the Third International Workshop on Paraphrasing (IWP2005)William B. Dolan and Chris Brockett. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), 2005. URL https://aclanthology.org/I05-5002.
Choice of plausible alternatives: An evaluation of commonsense causal reasoning. Melissa Roemmele, Andrew S Cosmin Adrian Bejan, Gordon, AAAI spring symposium: logical formalizations of commonsense reasoning. Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. Choice of plausible alterna- tives: An evaluation of commonsense causal reasoning. In AAAI spring symposium: logical formalizations of commonsense reasoning, pages 90-95, 2011.
PIQA: reasoning about physical commonsense in natural language. Yonatan Bisk, Rowan Zellers, Jianfeng Ronan Le Bras, Yejin Gao, Choi, AAAI. AAAI PressYonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. PIQA: reasoning about physical commonsense in natural language. In AAAI, pages 7432-7439. AAAI Press, 2020.
Can a suit of armor conduct electricity? a new dataset for open book question answering. Todor Mihaylov, Peter Clark, Tushar Khot, Ashish Sabharwal, 10.18653/v1/D18-1260Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsTodor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium, October-November 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1260. URL https://aclanthology.org/D18-1260.
CommonsenseQA: A question answering challenge targeting commonsense knowledge. Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant, 10.18653/v1/N19-1421Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaAssociation for Computational Linguistics1Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4149- 4158, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1421. URL https://aclanthology.org/N19-1421.
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. Mandar Joshi, Eunsol Choi, Daniel Weld, Luke Zettlemoyer, 10.18653/v1/P17-1147Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaAssociation for Computational Linguistics1Long Papers)Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611, Vancouver, Canada, July 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1147. URL https://aclanthology.org/P17-1147.
Natural questions: A benchmark for question answering research. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M Dai, Jakob Uszkoreit, Quoc Le, Slav Petrov, 10.1162/tacl_a_00276Transactions of the Association for Computational Linguistics. 7Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452-466, 2019. doi: 10.1162/tacl_a_00276. URL https://aclanthology.org/Q19-1026.
Semantic parsing on Freebase from question-answer pairs. Jonathan Berant, Andrew Chou, Roy Frostig, Percy Liang, Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. the 2013 Conference on Empirical Methods in Natural Language ProcessingSeattle, Washington, USAAssociation for Computational LinguisticsJonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533-1544, Seattle, Washington, USA, October 2013. Association for Computational Linguistics. URL https://aclanthology.org/D13-1160.
Adafactor: Adaptive learning rates with sublinear memory cost. Noam Shazeer, Mitchell Stern, Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost, 2018.
A unified architecture for natural language processing: Deep neural networks with multitask learning. Ronan Collobert, Jason Weston, 10.1145/1390156.1390177Proceedings of the 25th International Conference on Machine Learning, ICML '08. the 25th International Conference on Machine Learning, ICML '08New York, NY, USAAssociation for Computing MachineryRonan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML '08, page 160-167, New York, NY, USA, 2008. Association for Computing Machinery. ISBN 9781605582054. doi: 10.1145/1390156.1390177. URL https://doi.org/10.1145/1390156.1390177.
The natural language decathlon: Multitask learning as question answering. Bryan Mccann, Nitish Shirish Keskar, Caiming Xiong, Richard Socher, Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. The natural language decathlon: Multitask learning as question answering, 2018.
Exploring the limits of transfer learning with a unified text-to-text transformer. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, J Peter, Liu, J. Mach. Learn. Res. 21140Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J Liu, et al. Exploring the limits of transfer learning with a unified text-to-text transformer. J. Mach. Learn. Res., 21(140):1-67, 2020.
It's not just size that matters: Small language models are also few-shot learners. Timo Schick, Hinrich Schütze, 10.18653/v1/2021.naacl-main.185Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnlineAssociation for Computational LinguisticsTimo Schick and Hinrich Schütze. It's not just size that matters: Small language models are also few-shot learners. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2339- 2352, Online, June 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021. naacl-main.185. URL https://aclanthology.org/2021.naacl-main.185.
Learning to prompt for vision-language models. Kaiyang Zhou, Jingkang Yang, Chen Change Loy, Ziwei Liu, Int. J. Comput. Vis. 1309Kaiyang Zhou, Jingkang Yang, Chen Change Loy, and Ziwei Liu. Learning to prompt for vision-language models. Int. J. Comput. Vis., 130(9):2337-2348, 2022.
BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. Junnan Li, Dongxu Li, Silvio Savarese, Steven C H Hoi, abs/2301.12597CoRRJunnan Li, Dongxu Li, Silvio Savarese, and Steven C. H. Hoi. BLIP-2: bootstrapping language-image pre-training with frozen image encoders and large language models. CoRR, abs/2301.12597, 2023.
A survey on green deep learning. Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, Lei Li, Jingjing Xu, Wangchunshu Zhou, Zhiyi Fu, Hao Zhou, and Lei Li. A survey on green deep learning, 2021.
. Roy Schwartz, Jesse Dodge, Noah A Smith, Oren Etzioni Green, A I , Commun. ACM. 6312Roy Schwartz, Jesse Dodge, Noah A. Smith, and Oren Etzioni. Green AI. Commun. ACM, 63 (12):54-63, 2020.
Distilling the knowledge in a neural network. Geoffrey Hinton, Oriol Vinyals, Jeff Dean, Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network, 2015.
Fitnets: Hints for thin deep nets. Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, Yoshua Bengio, Adriana Romero, Nicolas Ballas, Samira Ebrahimi Kahou, Antoine Chassang, Carlo Gatta, and Yoshua Bengio. Fitnets: Hints for thin deep nets, 2015.
Optimal brain damage. Yann Lecun, John Denker, Sara Solla, Advances in Neural Information Processing Systems. D. TouretzkyMorgan-Kaufmann2Yann LeCun, John Denker, and Sara Solla. Optimal brain damage. In D. Touret- zky, editor, Advances in Neural Information Processing Systems, volume 2. Morgan- Kaufmann, 1989. URL https://proceedings.neurips.cc/paper_files/paper/1989/ file/6c9882bbac1c7093bd25041881277658-Paper.pdf.
Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. Song Han, Huizi Mao, William J Dally, ICLR. Song Han, Huizi Mao, and William J. Dally. Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding. In ICLR, 2016.
. Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, W Michael, Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma, Zhewei Yao, Amir Gholami, Michael W.
Q-BERT: hessian based ultra low precision quantization of BERT. Kurt Mahoney, Keutzer, AAAI. AAAI PressMahoney, and Kurt Keutzer. Q-BERT: hessian based ultra low precision quantization of BERT. In AAAI, pages 8815-8821. AAAI Press, 2020.
CascadeBERT: Accelerating inference of pre-trained language models via calibrated complete models cascade. Lei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, Xu Sun, 10.18653/v1/2021.findings-emnlp.43Findings of the Association for Computational Linguistics: EMNLP 2021. Punta Cana, Dominican RepublicAssociation for Computational LinguisticsLei Li, Yankai Lin, Deli Chen, Shuhuai Ren, Peng Li, Jie Zhou, and Xu Sun. CascadeBERT: Accelerating inference of pre-trained language models via calibrated complete models cascade. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 475-486, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguis- tics. doi: 10.18653/v1/2021.findings-emnlp.43. URL https://aclanthology.org/2021. findings-emnlp.43.
Model cascading: Towards jointly improving efficiency and accuracy of NLP systems. Neeraj Varshney, Chitta Baral, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. the 2022 Conference on Empirical Methods in Natural Language ProcessingAbu Dhabi, United Arab EmiratesAssociation for Computational LinguisticsNeeraj Varshney and Chitta Baral. Model cascading: Towards jointly improving efficiency and accuracy of NLP systems. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11007-11021, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL https://aclanthology. org/2022.emnlp-main.756.
AdapLeR: Speeding up inference by adaptive length reduction. Ali Modarressi, Hosein Mohebbi, Mohammad Taher Pilehvar, 10.18653/v1/2022.acl-long.1Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. the 60th Annual Meeting of the Association for Computational LinguisticsDublin, IrelandAssociation for Computational Linguistics1Long Papers)Ali Modarressi, Hosein Mohebbi, and Mohammad Taher Pilehvar. AdapLeR: Speeding up inference by adaptive length reduction. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1-15, Dublin, Ireland, May 2022. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long. 1. URL https://aclanthology.org/2022.acl-long.1.
Learning to compress prompts with gist tokens. Jesse Mu, Lisa Xiang, Noah Li, Goodman, Jesse Mu, Xiang Lisa Li, and Noah Goodman. Learning to compress prompts with gist tokens, 2023.
| [
"https://github.com/tatsu-lab/stanford_alpaca,"
] |
[
"Applications of UWB Technology",
"Applications of UWB Technology"
] | [
"Sana Ullah \nGraduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea\n",
"MuradAli + \nGraduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea\n",
"MdHussain Ψ Asdaque itsasdaque@hotmail.com \nGraduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea\n",
"Kyung Sup Kwak kskwak@inha.ac.kr \nGraduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea\n"
] | [
"Graduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea",
"Graduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea",
"Graduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea",
"Graduate School of IT and Telecommunications\nSchool of Public Policy and Management\nInha University\n253 Yonghyun-Dong, Nam-Gu, 87 Hoegiro Dondaemun402-751, 130-868Incheon, SeoulSouth Korea. + KDI, South Korea"
] | [] | Recent advances in wideband impulse technology, low power communication along with unlicensed band have enabled ultra wide band (UWB) as a leading technology for future wireless applications. This paper outlines the applications of emerging UWB technology in a private and commercial sector. We further talk about UWB technology for a wireless body area network (WBAN). | null | [
"https://arxiv.org/pdf/0911.1681v3.pdf"
] | 9,502,326 | 0911.1681 | 6997eb1b6be780a23160d2de2eb4d1ee2de51005 |
Applications of UWB Technology
Sana Ullah, Murad Ali +, Md. Asdaque Hussain Ψ (itsasdaque@hotmail.com), Kyung Sup Kwak (kskwak@inha.ac.kr)

Graduate School of IT and Telecommunications, Inha University, 253 Yonghyun-Dong, Nam-Gu, Incheon 402-751, South Korea
+ School of Public Policy and Management, KDI, 87 Hoegiro, Dongdaemun-Gu, Seoul 130-868, South Korea
Recent advances in wideband impulse technology, low power communication along with unlicensed band have enabled ultra wide band (UWB) as a leading technology for future wireless applications. This paper outlines the applications of emerging UWB technology in a private and commercial sector. We further talk about UWB technology for a wireless body area network (WBAN).
I. INTRODUCTION
There have been tremendous research efforts to apply ultra wide band (UWB) technology to the military and government sectors. Some of these efforts are already accomplished, and some are intended for the future. The applications are mainly categorized into three parts: communications and sensors, position location and tracking, and radar. This paper presents a brief discussion of the aforementioned applications and is organised as follows. Section 2 outlines the application of UWB technology in communications and sensors. Section 3 presents a discussion of position location and tracking. Section 4 discusses radar. Section 5 presents the application of UWB in a wireless body area network (WBAN). The final section presents conclusions.
II. COMMUNICATIONS AND SENSORS
A. Low Data Rate
The power spectral density (PSD) of UWB signals is extremely low, which enables UWB systems to operate in the same spectrum as narrowband technology without causing undue interference. Today's market solutions for indoor applications are based on infrared or ultrasonic approaches. The line-of-sight propagation required by infrared technology cannot be guaranteed at all times, and infrared is also affected by shadows and light-related interference; ultrasonic signals propagate with limited penetration. UWB technology is less affected by shadowing and allows transmission through objects. The innovative communication method of UWB at low data rates offers numerous benefits to government and private sectors. For instance, the wireless connection of computer peripherals such as a mouse, monitor, keyboard, joystick and printer can utilize UWB technology. UWB allows multiple devices to operate without interference at the same time in the same space. It can be used as a communication link in a sensor network, and it can create a security bubble around a specific area to ensure security. It is also a strong candidate to support a variety of WBAN applications. A network of UWB sensors such as electrocardiogram (ECG), oxygen saturation (SpO2) and electromyography (EMG) sensors can be used to develop a proactive and smart healthcare system. This can benefit patients in chronic conditions and provides long-term health monitoring. In a UWB system, the transmitter is often kept simple and most of the complexity is shifted towards the receiver, which permits extremely low energy consumption and thus extends battery life.
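To put the low-PSD argument in numbers: under the FCC indoor UWB mask defined in the rulemaking cited in the references, emissions are limited to -41.3 dBm/MHz, so even across the full 3.1-10.6 GHz band the total radiated power stays well below a milliwatt. A minimal sketch (Python; the flat in-band mask value is the only input and is taken from the FCC Part 15 rules, the rest is plain unit conversion):

# Total EIRP permitted under the FCC indoor UWB mask,
# assuming the in-band limit of -41.3 dBm/MHz is fully used.
psd_dbm_per_mhz = -41.3
bandwidth_mhz = (10.6 - 3.1) * 1000            # 3.1-10.6 GHz band -> 7500 MHz

psd_mw_per_mhz = 10 ** (psd_dbm_per_mhz / 10)  # convert dBm to mW per MHz
total_power_mw = psd_mw_per_mhz * bandwidth_mhz

print(f"Total allowed EIRP: {total_power_mw:.3f} mW")  # ~0.556 mW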
Designing a RAKE receiver for low-power devices is a complicated issue; energy detection receivers are the most straightforward approach to building simple receivers [1]. A RAKE receiver for on-body networks is presented in [2], which shows that 1 or 2 fingers are sufficient to collect 50% to 80% of the maximum energy for links with a distance of 15 cm, independent of placement on the body. Different energy management schemes may also assist in extending battery life. Positioning with previously unattained precision, tracking and distance-measurement techniques, as well as high node densities, are also possible thanks to the large operating bandwidth [3]. The global positioning system (GPS) is often available in low data rate applications and requires new solutions; reducing protocol overhead can decrease the energy consumption of the complex GPS transceivers and extend battery life. Although low data rate applications using alternative PHY concepts are currently discussed in IEEE 802.15.4a [4], tremendous research efforts are still required to bring those systems to real-world applications.
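The observation from [2], that one or two RAKE fingers already collect 50% to 80% of the maximum energy on short on-body links, can be illustrated with a toy channel model. The exponential power-delay profile and its decay constant below are illustrative assumptions, not measured on-body parameters:

import numpy as np

# Toy power-delay profile: tap k carries energy ~ exp(-k / tau).
# A RAKE receiver with n fingers combines the n strongest taps.
tau = 1.5                                 # decay constant in taps (assumption)
taps = np.exp(-np.arange(20) / tau)
taps /= taps.sum()                        # normalise total energy to 1

for fingers in (1, 2, 4, 8):
    print(f"{fingers} finger(s): {taps[:fingers].sum():.0%} of total energy")

With this decay constant, one finger captures roughly half the energy and two fingers about three quarters, in line with the range quoted from [2].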
B. High Data Rate
The unique applications of UWB systems in different scenarios have drawn much attention from the start, since many UWB applications address existing market needs for high data rate services. Demand for high-density multimedia applications is increasing, which calls for innovative methods to better utilize the available bandwidth. A UWB system has the property of filling the available bandwidth as demand increases. Receiver design and robustness against jamming are the main challenges for high-rate applications [5]. Large high-resolution video screens can benefit from UWB; these devices stream video content wirelessly from a video source to a wall-mounted screen. Various high data rate applications include internet access and multimedia services, wireless peripheral interfaces and location-based services. Regardless of the environment, very high data rates (>1 Gbit/sec) have to be provided. The use of a very large bandwidth at lower spectral efficiency makes a UWB system a suitable candidate for high-speed internet access and multimedia applications. Conventional narrowband systems with high spectral efficiency may not be suitable for low-cost and low-power devices such as PDAs or other handheld devices. Standardized wireless interconnection is highly desirable to replace cables and proprietary plugs [6]. The interconnectivity of various devices such as laptops and mobile phones is increasingly important for battery-powered devices.
C. Home Network Applications
Home networking is a crucial factor in creating a pervasive home network environment. The wireless connectivity of different home electronic systems removes wiring clutter in the living room. This is particularly important when we consider the bit rate needed for high definition television, which is in excess of 30 Mbps over a distance of at least a few meters [6]. In IEEE 1394, an attempt has been made to integrate entertainment, consumer electronics and computing within a home environment. It provides an isochronous mode where data delivery is guaranteed at a constant transmission speed, which is important for real-time applications such as video broadcasts. The required data rates and services for different devices are given in Table I. IEEE 1394 also provides an asynchronous mode where data delivery is guaranteed but no guarantee is made about the time of arrival of the data [7]. The isochronous data can be transferred using UWB technology. A method which allows IEEE 1394 equipment to transfer isochronous data over a UWB wireless communication network is presented in [8]: a connection management protocol (CMP) and an IEEE 1394-over-UWB bridge module exchange isochronous data through the IEEE 1394-over-UWB network.
III. POSITION LOCATION AND TRACKING
Position location and tracking have a wide range of benefits, such as locating a patient in a critical condition, finding hikers injured in a remote area, tracking cars, and managing a variety of goods in a big shopping mall. For active RF tracking and positioning applications, short-pulse UWB techniques offer distinct advantages in precision time-of-flight measurement, multipath immunity for leading-edge detection, and low prime power requirements for extended-operation RF identification (RFID) tags [10]. The purpose of supporting human-space interaction is to identify the persons and objects the user aims at, and to identify the user's target task. Knowing where a person is, we can determine what or whom this person is near and finally hypothesise what the user is aiming at [11]. This human-space interaction could improve quality of life when used in a WBAN. In a WBAN, a number of intelligent sensors are used to gather a patient's data, which is forwarded to a PDA and from there to a remote server. In case of a critical condition such as an arrhythmic disturbance, the correct identification of the patient's location could assist medical experts in treatment.
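The precision advantage of short-pulse ranging follows directly from the bandwidth: the nominal two-way range resolution of a pulse of bandwidth B is roughly c/(2B). A quick numeric check, with the bandwidth values chosen purely for illustration:

C_LIGHT = 3.0e8                                # speed of light in m/s

for bandwidth_hz in (20e6, 500e6, 7.5e9):      # narrowband vs. typical UWB
    resolution_m = C_LIGHT / (2 * bandwidth_hz)
    print(f"B = {bandwidth_hz/1e9:5.2f} GHz -> range resolution ~ "
          f"{resolution_m*100:.1f} cm")

A 500 MHz pulse resolves about 30 cm, while a 7.5 GHz UWB signal resolves about 2 cm, which is why UWB is attractive for indoor positioning.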
IV. RADAR
Short-pulse UWB techniques enable several radar capabilities, such as higher range-measurement accuracy and range resolution, enhanced target recognition, increased immunity to co-located radar transmissions, increased detection probability for certain classes of targets, and the ability to detect very slowly moving or stationary targets [10]. UWB is a leading technology candidate for micro air vehicle (MAV) applications [12]. The millions of ultra-wideband pulses created per second are capable of high penetration in a wide range of materials, such as building materials, concrete block, plastic and wood.
V. ULTRA WIDEBAND TECHNOLOGY IN WBAN
A WBAN consists of miniaturised, low-power, noninvasive or invasive wireless biosensors, which are seamlessly placed on or implanted in the human body in order to provide a smart and adaptable healthcare system. Each tiny biosensor is capable of processing its own task and communicates with a network coordinator or a PDA. The network coordinator sends the patient's information to a remote server for diagnosis and prescription. A WBAN requires the resolution of many technical issues and challenges, such as interoperability, QoS, scalability, the design of low-power RF data paths, privacy and security, low-power communication protocols, information infrastructure and the data integrity of the patient's medical records. The average power consumption of a radio interface in a WBAN must be reduced below 100 µW. Moreover, a WBAN is a one-hop star topology where the power budget of the miniaturised sensor nodes is limited while the network coordinator has an ample power budget. Accordingly, most of the complexity is shifted to the network coordinator. The emerging UWB technology promises to satisfy the average power consumption requirement of the radio interface (100 µW), which cannot be achieved using narrowband radio communication, and thereby increases the operating period of the sensors. In a UWB system, the considerable complexity on the receiver side enables the development of ultra-low-power, low-complexity UWB transmitters for uplink communication, making UWB a strong candidate for a WBAN. The difficulty of detecting the noise-like signals and the robustness of UWB signals offer high security and reliability for medical applications [5,16]. Existing technological growth has facilitated research in promoting UWB technology for a WBAN. The influence of the human body on the UWB channel is investigated in [13], where results on path loss and delay spread are reported. The behavior of a UWB antenna in a WBAN has also been presented in [14]. Moreover, a UWB antenna for a WBAN operating in close vicinity to biological tissue is proposed in [15]; this antenna can be used for WBAN applications between 3 GHz and 6 GHz.
A low-complexity UWB transmitter presented in [16] adopts a pulse-based UWB scheme where a strong duty cycle is produced by restricting the operation of the transmitter to pulse transmission. This pulse-based UWB scheme allows the system to operate in burst mode with a minimal duty cycle, thus reducing the baseline power consumption. A low-power UWB transmitter for a WBAN requires the calibration of the PSD inside the Federal Communications Commission (FCC) mask for indoor applications. The calibration process is a challenging task due to the discrepancies between higher and lower frequencies. Two calibration circuits are used: one to calibrate the spectrum inside the FCC mask and one to calibrate the bandwidth. The pulse generator presented in Fig. 1 activates a triangular pulse generator and a ring oscillator simultaneously. During pulse transmission, the ring oscillator is activated by a gating circuit, thus avoiding extra power consumption. A triangular pulse is obtained at the output when the triangular signal is multiplied by the carrier produced by the oscillator.
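The benefit of restricting the transmitter to pulse transmission can be estimated from the average power P_avg ~ E_pulse x PRF; the energy-per-pulse and pulse-repetition-frequency values below are illustrative assumptions rather than the figures reported in [16]:

# Average power of a duty-cycled impulse-radio transmitter,
# neglecting baseline (leakage) power between pulses.
energy_per_pulse_j = 50e-12      # 50 pJ per pulse (assumption)
prf_hz = 1e6                     # 1 million pulses per second (assumption)

p_avg_uw = energy_per_pulse_j * prf_hz * 1e6   # watts -> microwatts
print(f"Average TX power: {p_avg_uw:.0f} uW")  # 50 uW, within the 100 uW budget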
Finite-difference time-domain (FDTD) simulations are used to model UWB propagation around the human body, where the path loss depends on the distance and increases with a large fading variance. A narrowband implementation is compared in terms of power consumption using a WBAN channel model. Simulation results showed that the path loss near the body is higher than the path loss in free space. Moreover, it was concluded that the performance of a UWB transmitter for a WBAN is better for the best channels, while for average channels a narrowband implementation is a good solution [16]. Considerable research efforts are required both at the algorithmic and circuit level to make UWB a key technology for WBAN applications.
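The qualitative conclusion, a higher path loss near the body than in free space, can be mimicked with a simple log-distance model; the near-body exponent and reference losses used here are illustrative assumptions only, not the FDTD results of [16]:

import numpy as np

def path_loss_db(d_m, pl0_db, n, d0_m=0.1):
    """Log-distance model: PL(d) = PL0 + 10 * n * log10(d / d0)."""
    return pl0_db + 10.0 * n * np.log10(d_m / d0_m)

for d in (0.1, 0.2, 0.4):                              # typical on-body distances
    free_space = path_loss_db(d, pl0_db=44.0, n=2.0)   # free-space exponent 2
    near_body = path_loss_db(d, pl0_db=50.0, n=3.5)    # assumed near-body exponent
    print(f"d = {d:.1f} m: free space {free_space:.1f} dB, "
          f"near body {near_body:.1f} dB")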
VI. CONCLUSIONS
UWB is a leading technology for wireless applications, including numerous WBAN applications. In this paper, we discussed the current and future applications of the emerging UWB technology in the private and commercial sectors. We believe that UWB technology can satisfy the energy consumption requirements of a WBAN. Our future work includes the investigation of UWB technology for a non-invasive WBAN.
Fig. 1. Block diagram of the pulse generator.
TABLE I. CONTENTS AND REQUIREMENTS FOR HOME NETWORKING AND COMPUTING [9]

Service       | Data Rate (Mbps) | Real Time Feature
Digital Video | 32               | Yes
DVD, TV       | 2-16             | Yes
Audio         | 1.5              | Yes
PC            | 32               | No
Internet      | >10              | No
Other         | <1               | No
Federal Communications Commission (FCC), FCC NOI: Rules Regarding Ultra-Wideband Transmission Systems, ET Docket No. 98-153, Sept. 1, 1998.
T. Zasowski, F. Althaus, M. Stager, A. Wittneben, and G. Troster, UWB for Non-invasive Wireless Body Area Networks: Channel Measurements and Results, IEEE Conference on Ultra Wideband Systems and Technologies (UWBST 2003), Reston, Virginia, USA, Nov. 2003.
B. Allen, Ultra Wideband Wireless Sensor Networks, IEE UWB Symposium, June 2004.
B. Allen, T. Brown, K. Schwieger, E. Zimmermann, W. Q. Malik, D. J. Edwards, L. Ouvry, and I. Oppermann, Ultra Wideband: Applications, Technology and Future Perspectives, in Proc. Int. Workshop Convergent Tech., Oulu, Finland, June 2005.
B. Allen, M. Ghavami, A. Armogida, and A. H. Aghvami, The Holy Grail of Wire Replacement, IEE Communications Engineer, Oct/Nov 2003.
S. H. Park and S. H. Lee, Isochronous Data Transfer between AV Devices Using Pseudo CMP Protocol in IEEE 1394 over UWB Network, IEICE Trans. Commun., vol. E90-B, no. 12, December 2007.
B. Allen, White Paper: Ultra Wideband, Technology and Future Perspectives, v3.0, March 2005.
R. J. Fontana, Recent System Applications of Short-Pulse Ultra-Wideband (UWB) Technology, IEEE Trans. Microwave Theory and Tech., vol. 52, no. 9, September 2004.
T. Manesis and N. Avouris, Survey of Position Location Techniques in Mobile Systems, MobileHCI, Salzburg, September 2005.
R. J. Fontana, E. A. Richley, et al., An Ultra Wideband Radar for Micro Air Vehicle Applications, 2002 IEEE Conference on Ultra Wideband Systems and Technologies, May 2002.
Y. P. Zhang and Q. Li, Performance of UWB Impulse Radio With Planar Monopoles Over On-Human-Body Propagation Channel for Wireless Body Area Networks, IEEE Transactions on Antennas and Propagation, vol. 55, no. 10, pp. 2907-2914, Oct. 2007.
K. Y. Yazdandoost and R. Kohno, UWB Antenna for Wireless Body Area Network, Asia-Pacific Microwave Conference (APMC 2006), pp. 1647-1652, Dec. 2006.
M. Klemm, I. Z. Kovacs, et al., Comparison of Directional and Omnidirectional UWB Antennas for Wireless Body Area Network Applications, 18th International Conference on Applied Electromagnetics and Communications (ICECom 2005), pp. 1-4, 2005.
J. Ryckaert, C. Desset, A. Fort, M. Badaroglu, V. De Heyn, P. Wambacq, G. Van der Plas, S. Donnay, B. Van Poucke, and B. Gyselinckx, Ultra-Wide-Band Transmitter for Low-Power Wireless Body Area Networks: Design and Evaluation, IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 52, no. 12, pp. 2515-2525, Dec. 2005.
| [] |
[
"EFT, Decoupling, Higgs Mixing and All That Jazz",
"EFT, Decoupling, Higgs Mixing and All That Jazz"
] | [
"Upalaparna Banerjee \nIndian Institute of Technology Kanpur\nUttar Pradesh208016KalyanpurKanpurIndia\n",
"Joydeep Chakrabortty joydeep@iitk.ac.in \nIndian Institute of Technology Kanpur\nUttar Pradesh208016KalyanpurKanpurIndia\n",
"Christoph Englert christoph.englert@glasgow.ac.uk \nSchool of Physics & Astronomy\nUniversity of Glasgow\nG12 8QQGlasgowUnited Kingdom\n",
"Wrishik Naskar w.naskar.1@research.gla.ac.uk \nSchool of Physics & Astronomy\nUniversity of Glasgow\nG12 8QQGlasgowUnited Kingdom\n",
"Shakeel Ur Rahaman \nIndian Institute of Technology Kanpur\nUttar Pradesh208016KalyanpurKanpurIndia\n",
"Michael Spannowsky michael.spannowsky@durham.ac.uk \nInstitute for Particle Physics Phenomenology\nDepartment of Physics\nDurham University\nDH1 3LEDurhamUnited Kingdom\n"
] | [
"Indian Institute of Technology Kanpur\nUttar Pradesh208016KalyanpurKanpurIndia",
"Indian Institute of Technology Kanpur\nUttar Pradesh208016KalyanpurKanpurIndia",
"School of Physics & Astronomy\nUniversity of Glasgow\nG12 8QQGlasgowUnited Kingdom",
"School of Physics & Astronomy\nUniversity of Glasgow\nG12 8QQGlasgowUnited Kingdom",
"Indian Institute of Technology Kanpur\nUttar Pradesh208016KalyanpurKanpurIndia",
"Institute for Particle Physics Phenomenology\nDepartment of Physics\nDurham University\nDH1 3LEDurhamUnited Kingdom"
] | [] | The effective field theory (EFT) framework is a precise approximation procedure when the inherent assumptions of a large-scale separation between the Standard Model (SM) and new interactions alongside perturbativity are realised. Constraints from available data might not automatically guarantee these circumstances when contrasted with UV scenarios that the EFT analysis wishes to inform. From an EFT perspective, achieving sufficient precision in navigating the alignment or decoupling limits of beyond-the-SM scenarios can necessitate moving beyond the SM's leading, dimension six EFT deformation. Using the example of Higgs boson mixing, we demonstrate the importance of higher-dimensional terms in the EFT expansion. We analyse the relevance of virtual EFT corrections and dimension eight contributions for well-determined electroweak precision observables. We find that when moving away from the decoupling limit, additional terms in the EFT expansion quickly become relevant. This demonstrates the necessity to move beyond dimension six interactions for any scenario that contains Higgs boson mixing. | null | [
"https://export.arxiv.org/pdf/2303.05224v1.pdf"
] | 257,427,628 | 2303.05224 | 4395ec9ce6b5b5a9dfc48dc12a3a6a85992b0e5c |
EFT, Decoupling, Higgs Mixing and All That Jazz
9 Mar 2023
Upalaparna Banerjee (a), Joydeep Chakrabortty (a) (joydeep@iitk.ac.in), Christoph Englert (b) (christoph.englert@glasgow.ac.uk), Wrishik Naskar (b) (w.naskar.1@research.gla.ac.uk), Shakeel Ur Rahaman (a), Michael Spannowsky (c) (michael.spannowsky@durham.ac.uk)

(a) Indian Institute of Technology Kanpur, Kalyanpur, Kanpur 208016, Uttar Pradesh, India
(b) School of Physics & Astronomy, University of Glasgow, Glasgow G12 8QQ, United Kingdom
(c) Institute for Particle Physics Phenomenology, Department of Physics, Durham University, Durham DH1 3LE, United Kingdom

IPPP/23/14
The effective field theory (EFT) framework is a precise approximation procedure when the inherent assumptions of a large-scale separation between the Standard Model (SM) and new interactions alongside perturbativity are realised. Constraints from available data might not automatically guarantee these circumstances when contrasted with UV scenarios that the EFT analysis wishes to inform. From an EFT perspective, achieving sufficient precision in navigating the alignment or decoupling limits of beyond-the-SM scenarios can necessitate moving beyond the SM's leading, dimension six EFT deformation. Using the example of Higgs boson mixing, we demonstrate the importance of higher-dimensional terms in the EFT expansion. We analyse the relevance of virtual EFT corrections and dimension eight contributions for well-determined electroweak precision observables. We find that when moving away from the decoupling limit, additional terms in the EFT expansion quickly become relevant. This demonstrates the necessity to move beyond dimension six interactions for any scenario that contains Higgs boson mixing.
Introduction
Effective field theory (EFT) [1] is a formidable tool for communicating sensitivity to beyond-the-Standard-Model (BSM) physics in times when particle physics data seemingly points towards a large-scale separation of new states relative to the Standard Model (SM) degrees of freedom. The extension of the SM by effective interactions relevant to the high-energy frontier of, e.g., the Large Hadron Collider (LHC), i.e. the Standard Model Effective Field Theory (SMEFT) at dimension six [2], has received a lot of theoretical attention and improvement alongside its application in a series of experimental investigations. Matching calculations [3][4][5][6][7][8][9][10][11][12][13][14] that coarse-grain ultra-violet (UV) BSM scenarios into the EFT provide the technical framework to marry concrete scenarios of new interactions with the generic EFT analysis of particle data. The latter is typically plagued by considerable uncertainties, both experimentally and theoretically. Even optimistic extrapolations of specific processes to the LHC's high luminosity (HL) phase can imply a significant tension between the EFT limit setting and the intrinsic viability criteria that underpin it when trying to inform the UV scenarios' parameter spaces: EFT cut-offs need to be lowered into domains that can be directly resolved at the LHC. This can be at odds with the perturbativity of the obtained constraints (and hence limits the reliability of the fixed-order matching).
The obvious way out of this conundrum is to include higher-dimensional terms in the EFT expansion. Dimension eight interactions have increasingly moved into the focus of the theory community [15][16][17]. From a practical point of view, this prompts the question of where in the EFT expansion phenomenologically-minded practitioners can confidently stop. Unfortunately, the answer is as process- and model-dependent as matching a UV-ignorant EFT to a concrete UV scenario. The phenomenological task is therefore to develop theory-guided intuition using representative scenarios that transparently capture the key issues. The purpose of this note is to contribute to this evolving discussion using (custodial iso-singlet) Higgs boson mixing as an example. This scenario has seen much attention from the EFT perspective, as the number of degrees of freedom and free parameters is relatively small, thus enabling a transparent connection of the EFT and the UV theory beyond the leading order of the EFT approach (see, e.g., Refs. [3,18,19]). Higgs mixing also arises in many BSM theories. We focus on electroweak precision observables, as these are well constrained by collider data, thus enabling us to navigate cut-offs and Wilson coefficients of the effective theory under experimental circumstances where precise predictions and matching are very relevant. This work is structured as follows: In Sec. 2, we first discuss the oblique corrections and their relation to the polarisation functions to make this work self-contained; Sec. 2.1 gives a quick discussion of the oblique corrections in the singlet scenario (see also [20][21][22][23][24][25]), with formulae provided in the appendix. We then focus on the oblique parameters for this case in dimension six and dimension eight SMEFT in Sec. 2.2. We detail the comparison in Sec. 3 with a view towards perturbative unitarity. Finally, we provide conclusions in Sec. 4.
Electroweak Precision Observables
Extensions of the SM with modified Higgs sectors can be constrained through electroweak precision measurements. A famous subset of these measurements, which was instrumental in discovering the Higgs boson, is captured by the so-called oblique corrections parametrized by the Peskin-Takeuchi parameters [26] (see also [27]). These parameters, S, T, U, are chiefly extracted from Drell-Yan-like production during the LEP era using global fits, e.g., Refs. [28,29]. Defining the off-shell two-point gauge boson functions for the SM gauge bosons as
\langle V^\mu(p)\, V'^\nu(-p) \rangle = \Pi_{VV'}(p^2)\, g^{\mu\nu} + \Sigma_{VV'}(p^2)\, p^\mu p^\nu \,, (2.1)
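the oblique parameters take a standard form in terms of these polarisation functions and their derivatives (denoted by primes) at vanishing momentum transfer. We quote them here following the conventions of Ref. [26]; the exact normalisation can differ between references:

S = \frac{4 s_W^2 c_W^2}{\alpha} \left[ \Pi'_{ZZ}(0) - \frac{c_W^2 - s_W^2}{s_W c_W}\, \Pi'_{Z\gamma}(0) - \Pi'_{\gamma\gamma}(0) \right] ,
T = \frac{1}{\alpha} \left[ \frac{\Pi_{WW}(0)}{M_W^2} - \frac{\Pi_{ZZ}(0)}{M_Z^2} \right] , (2.2)
U = \frac{4 s_W^2}{\alpha} \left[ \Pi'_{WW}(0) - c_W^2\, \Pi'_{ZZ}(0) - 2 s_W c_W\, \Pi'_{Z\gamma}(0) - s_W^2\, \Pi'_{\gamma\gamma}(0) \right] .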
Here, s_W and c_W denote the sine and cosine of the Weinberg angle, \alpha is the fine-structure constant, and M_i stands for the gauge boson masses. Constraints on new physics can then be formulated by examining the difference of these parameters from the best SM fit point. To draw a comparison between the full theory and the EFT, we restrict our analysis to one-loop order. In the next subsection, we provide the contributions to the oblique parameters from the full theory.
SM extended by a real singlet scalar
The most general scalar potential for the SM Higgs sector extended by a real singlet scalar field S is (ignoring tadpole terms, which are fixed through the minimisation conditions)

V(H, S) = -\mu^2\, H^\dagger H + \frac{1}{2} m_S^2\, S^2 + \eta_S\, (H^\dagger H)\, S + k_S\, (H^\dagger H)\, S^2 + \lambda_h\, (H^\dagger H)^2 + \frac{1}{4!}\, \lambda_S\, S^4 \,, (2.3)
with H being the SM Higgs doublet, which obtains a vacuum expectation value (vev) v \simeq 246 GeV. H can then be expanded around the vev:

H = \frac{1}{\sqrt{2}} \begin{pmatrix} \sqrt{2}\, G^+ \\ v + h + i\,\eta \end{pmatrix} . (2.4)
The presence of the mixing term \eta_S (H^\dagger H) S in the potential of Eq. (2.3) mixes h and S, and the mass eigenstates (h, s) follow from the gauge eigenstates through an orthogonal rotation,

\begin{pmatrix} h \\ s \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} h \\ S \end{pmatrix}_{\rm gauge} . (2.5)

Here, \cos\theta can be written in terms of the Lagrangian parameters as
\cos^2\theta = \frac{1}{2} \left[ 1 + \left( 1 + \frac{4\, v^2 \eta_S^2}{(M_S^2 - M_h^2)^2 - 4\, v^2 \eta_S^2} \right)^{-1/2} \right] = 1 - \left( \frac{v\, \eta_S}{M_S^2 - M_h^2} \right)^2 - \left( \frac{v\, \eta_S}{M_S^2 - M_h^2} \right)^4 + \dots \,. (2.6)
The eigenvalues corresponding to the mass eigenstates of Eq. (2.5) are the masses of the scalars in the theory, i.e., M_h = 125 GeV and a free parameter M_S, respectively. These mass eigenvalues can be expressed in terms of the Lagrangian parameters and the Higgs vev v as

M_h^2,\, M_S^2 = \frac{1}{2} \left[ m_S^2 + m_h^2 \mp \sqrt{4\, v^2 \eta_S^2 + (m_S^2 - m_h^2)^2} \right] , (2.7)

where m_h^2 = 2 \lambda_h v^2. For the computation of the oblique parameters, we only consider the radiative corrections from the scalar-involved diagrams shown in Fig. 1, since the remaining diagrams give the same contribution in the BSM theory and in the SM and therefore drop out of the deviation. The explicit expressions are given in appendix A.1.
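As a numerical cross-check of Eqs. (2.6) and (2.7), the 2x2 scalar mass matrix following from Eq. (2.3) can be diagonalised directly; the parameter point below is an arbitrary test input, not a fit result:

import numpy as np

# Scalar mass matrix in the gauge basis (h, S), cf. Eq. (2.3):
# diagonal entries 2*lambda_h*v^2 and m_S^2, off-diagonal v*eta_S.
v, lam_h = 246.0, 0.129          # GeV; illustrative, SM-like quartic
m_s, eta_s = 800.0, 100.0        # GeV (test input, |eta_S| < M_S)

m2 = np.array([[2 * lam_h * v**2, v * eta_s],
               [v * eta_s,        m_s**2  ]])

eigval, eigvec = np.linalg.eigh(m2)     # eigenvalues in ascending order
M_h, M_S = np.sqrt(eigval)              # mass eigenvalues, Eq. (2.7)
cos_theta = abs(eigvec[0, 0])           # h-component of the light state

# leading behaviour of the expansion in Eq. (2.6)
cos_theta_exp = np.sqrt(1 - (v * eta_s / (M_S**2 - M_h**2))**2)

print(f"M_h = {M_h:.2f} GeV, M_S = {M_S:.2f} GeV")
print(f"cos(theta): exact {cos_theta:.6f}, expanded {cos_theta_exp:.6f}")

For this test point the exact and expanded values of cos θ agree to better than one part in 10^5, confirming the small-mixing expansion.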
[Figure 1. Relevant Feynman diagrams with scalars in the loop that have been considered to compute the oblique corrections. Here φ ∈ (h, s) when the one-loop correction in the full theory is computed.]
Equation (2.5) clearly shows that the light (heavy) scalar couplings to the SM particles are suppressed by a factor of cos θ (sin θ). Therefore, the contributions to the gauge boson self-energies are modified by a factor of cos²θ or sin²θ, depending on which neutral scalar is coupled to. We then express the mixing angle in terms of the BSM parameters of the potential. Limits are imposed on the independent BSM parameters (in our case, just η_S) and on the mass of the heavy scalar M_S using the constraints from the GFitter data [31], as shown in Fig. 2.

[Figure 2. Constraints on η_S and the heavy scalar mass M_S from GFitter [31]. An additional unitarity bound is imposed, which sets the limit on η_S as |η_S| ≲ M_S (see appendix B).]
Real Singlet Model from SMEFT perspective
To investigate how well the effective theory replicates the minute signatures of the singlet extension of the SM described in Sec. 2.1 or, in turn, adjudge the significance of the higherorder effective corrections, we extend the effective series with relevant operator structures -4 -till dimension eight:
L = L SM + i C (6) i Λ 2 O (6) i + j C (8) j Λ 4 O (8) j . (2.8)
Here, the Wilson coefficients C i parametrize the strengths of the operators O i that are produced after integrating out the heavy real singlet scalar (for a complete matching of such operators at dimension six, see Refs. [17]). We have chosen the cut-off scale Λ to be m s = M S . To validate EFT, as we will be working with small mixing, thus the parameter space of our interest will satisfy the following equivalence M S m s . Figure 3. Tree-level correction to the gauge boson propagators due to the presence of effective operators.
(a) V µ V ν φ (b) V µ V ν φ G 0 /G ± (c) V µ V ν φ V V µ V ν (f) H W W V V H, G (g) H W W V V V (h) H W W V V
In particular, this implies that in the case of effective theory, there may be tree-level electroweak corrections, as shown in Fig. 3, to Eq. (2.1) from the effective operators that may emerge in the process of integrating out heavy fields from the UV diagram and(or) through the renormalization group running of effective operators generated by integrating out treelevel heavy propagator at the cut-off scale. These contributions depend on the renormalization scale and play an essential role in our further computation, see also [3]. Depending on whether the operators that could contribute to the dominant (when considered in a model-independent way) tree-level correction, as shown in Fig. 3, are generated at one-loop itself, the contributions from the operators generated at the tree level, which can modify the interactions at the one-loop, can become significant.
We categorically list the effective corrections to S, T up to one loop:
• Tree-level correction: Expanding the Lagrangian with dimension six and dimension eight operators can induce corrections to the transverse tree-level vector boson propagators (Π V V ) itself, which in turn modifies S, T parameters [15]
S eft,tree = 4 c W s W v 2 α C (6) HW B + 2 c W s W v 4 α C (8) HW B , T eft,tree = − v 2 2 α C (6) HD − v 4 2 α C (8)
HD,2 .
(2.9)
The expressions for modifying the individual Π V V functions are given in the appendix A.2. The dimension six operators contributing to Eq. (2.9) are generated at one-loop while integrating out the heavy field. The matching expressions for these coefficients are given in Tab. 1. We have also computed the one-loop matching for the dimension eight operators involved in Eq. (2.9) and noticed that these do not receive any correction while integrating out complete heavy loop diagrams. On the other hand, these coefficients receive contributions from removing the redundant structures at dimension six, as discussed in Ref. [17]. Since the latter corresponds to a two-loop suppressed sub-leading contribution, we neglect the associated effects in our analysis.
• One-loop insertion of operators: One-loop corrections to the oblique parameters are essential for the tree-level generated operators, for they provide a similar contribution as the operators that are produced at one-loop contributing to the tree-level Operator Op. Structure Wilson coeffs. Table 1. Relevant operators that produce tree and one-loop corrections to the gauge boson selfenergy. The structures in blue first appear at tree-level correction, whereas the rest of the operators contribute at one-loop first.
O (6) H (H † H) (H † H) − η 2 S 2 M 4 S O (6) HW B (H † τ I H)W I µν B µν g W g Y η 2 S 128 M 4 S π 2 O (6) HD (H † D µ H) * (H † D µ H) − 7g 2 Y η 2 S 288 M 4 S π 2
propagator corrections shown in Eq. (2.9) for a model-dependent analysis. In our case, such a contribution arises from the operators O
L h,kin = 1 2 1 − 2 v 2 C (6) H + v 4 4 C (8) HD,1 (∂ µ h) 2 , (2.10) which can be removed by redefining the field h → Z h h with Z h = 1 − v 2 C (6) H + v 4 8 C (8) HD,1 . (2.11)
This implies that while computing the higher order corrections for EFT, we need to recall that φ = h → Z h h in Fig. 1. This also accounts for suitable modifications in the vertices, involving Higgs and Goldstone in Fig. 1. This correction, up to O(1/Λ 4 ) capturing the effects from both (dimension six) 2 and dimension eight terms, is incorporated by replacing the cos 2 θ with Z 2 h and setting sin 2 θ to zero in the expressions shown in appendix A.1.
• RGE improved correction: It is important to include the running effects to the Wilson coefficients which arise at tree-level. T eft,tree in Eq. (2.9) at dimension six receives such an additional contribution from the operator O
HD . Contributions arising from the running of the coefficient of the operator O
H [32,33] 16π 2 d C
HD (µ) d ln µ = 20 3 g 2 Y C(6)
H , =⇒ C
HD | RGE = − 5 g 2 Y η 2 S 24 π 2 M 4 S log M Z M S . (2.12)(6)
So the total contribution to C HD at the EW scale is:
C (6) HD (M Z ) = − 7g 2 Y η 2 S 288 M 4 S π 2 − 5 g 2 Y η 2 S 24 π 2 M 4 S log M Z M S . (2.13) -6 -
The part of the beta function (cf. Eq. (2.12)) for the dimension eight Wilson coefficient C
HD,2 and C (8) HW B stems from
16π 2 d C (8) HD,2 (µ) d ln µ = 40 3 g 2 Y (C (6) H ) 2 + 10 3 g 2 Y C(8)
HD,1 + C
H 4 D 4 ,3 − 11 24 g 4 Y − 79 48 g 2 Y g 2 W + 3g 2 Y λ h =⇒ C (8) HD,2 | RGE = 1 16π 2 10 g 2 Y η 4 S 3 M 8 S + 10 3 g 2 Y 4η 2 S k S M 6 S − 8λ S η 2 S k S M 6 S + 2η 2 S M 6 S − 11 24 g 4 Y − 79 48 g 2 Y g 2 W + 3g 2 Y λ h log M Z M S ,(8)
14)
16π 2 d C (8) HW B (µ) d ln µ = C(8)H 4 D 4 ,3 − 11 48 g 3 Y g W − 29 24 g Y g 3 W + g Y g W λ h =⇒ C (8) HW B | RGE = η 2 S 8π 2 M 6 S − 11 48 g 3 Y g W − 29 24 g Y g 3 W + g Y g W λ h log M Z M S ,(2.15)
with g Y = e/c W , g W = e/s W . In addition to contributions from dimension six effective operators, we also compute the contribution to dimension eight operators from the equations of motion of the dimension six operators and the RGE-improved corrections due to dimension six operators, see further Refs. [15,16,34,35]. We note that the correction to ∆ T and ∆ S due to the dimension eight inclusion is of the order of the deviations. Thus, dimension eight interactions may be crucial to bringing EFT predictions close to the full theory for given measured constraints.
Full theory vs EFT
In this section, we compare a full theory and its effective version captured in SMEFT. We carefully investigate how the inclusion of higher mass dimensional operators, suppressed by the mass of the heavy integrated-out field, in the EFT expansion affects the computation of our chosen observable ( T , S). For this, we categorize the EFT contribution into three parts. To start with, we discuss the dimension six (d 6 ) part, which contributes at O(M −2 S ) containing linear dimension six Wilson coefficients (WCs). Here, we include the cumulative effects of field redefinition and radiative corrections on the oblique parameters. Then, we consider the corrections from the dimension eight (d 8 ) operators that are linear functions of dimension eight WCs at the O(M −4 S ). We also include the dimension eight equivalent contributions (referred to as (d 6 ) 2 ), at O(M −4 S ), from dimension six operators which are quadratic functions of dimension six WCs. This takes care of the radiative generation of dimension eight operators from dimension six ones, see Eq. (2.14) and the expansion of Z 2 h . We list all those operators that contribute in different orders:
• d 6 : C (6) HD , C(6)
HW B , C Table 2. Relevant operators that produce tree and one-loop corrections to the gauge boson selfenergy. The structures in blue first appear at tree-level correction, whereas the rest of the operators contribute at one-loop first. We can give two plots separately. The first one shows how large cos theta can be achieved for wide ranges of m. The second one is the allowed range of theta that can be achieved by minimizing η S .
H 4 D 4 ,3 , C(8)(H † H) 2 (D µ H † D µ H) 4η 2 S k S M 6 S − 8λ S η 2 S k S M 6 S O (8) H 4 D 4 ,3 (D µ H † D µ H)(D ν H † D ν H) 2η 2 S M 6 S
• (d 6 ) 2 : (C
(6) H ) 2 .
We investigate the departure of the truncated-EFT computation at dimension six from the full theory calculations and the role that the Higgs mixing plays in matching these two. The mixing can be expressed as a function of the trilinear coupling η S and the heavy cut-off scale M S , for allowed η S values, the decoupling can be quantified through the difference of the two theories. In Fig. 4(a), we show the lines for the constant mixing angles that allow a single η S value for each choice of the cut-off. We also impose the constraint from the perturbative unitarity that rules out a specific region in the η S − M S plane, in turn putting a lower-bound for the mixing for each value of the cut-off M S , that can be seen in Fig. 4(b).
Intuitively, adding higher and higher order terms in EFT expansion would take the EFT closer to the full theory. This concept is illustrated through the T parameter in Fig. 5. Here, we consider three different types of contributions. Firstly, the leading order terms in the expansion, i.e., d 6 ones. Then, we add d 8 contributions and finally, the (d 6 ) 2 ones. In passing, we want to highlight that though the d 8 term adds positively to the difference between the full theory and EFT, the further addition of (d 6 ) 2 , the equivalent of d 8 ones, allows us to capture the complete contribution at O(M −4 S ). Ultimately, it reflects that going to higher order in EFT expansion reduces the gap between full theory and EFT, especially for a relatively large mixing. We perform similar analyses for S parameter in Fig. 6. We draw a similar conclusion as the previous one, and that makes our conclusion more generic.
In Fig. 7, we have calculated the difference between full theory and EFT in calculating the T parameter for three different heavy mass scales. In each subfigure, we have shown that if we lower the value of η S for a fixed mass, the value of cos θ increases. As the cos θ reaches unity, the full theory and EFT are in excellent agreement, which is expected as the new physics contribution vanishes. It is also evident that for a fixed η S , once we go for higher masses the difference also decreases. This illustrates the interplay among the coupling η S , the heavy mass scale M S , and mixing parameter cos θ. One can tune the value of these parameters so that EFT can be a good explanation for the full theory. Doing the same kind of investigation for the S parameter in Fig. 8 further emphasises the idea.
Summary and Conclusions
Effective Field Theory is a powerful tool to look for deviations from the SM expectation in a theoretically well-motivated way. In a modern sense, it enables us to extend good quantum field theoretic properties to generic departures from the SM interactions, with potential relevance for UV complete scenarios depending on the accuracy with which constraints can be formulated. Along these lines, a set of particularly well-motivated observables are the oblique corrections as a subset of relevant electroweak corrections. In this work, we have analysed these observables from their dimension six and eight points of view with a critical perspective on how accurately EFT methods describe the full theory in the singlet extension scenarios where decoupling and alignment limits are exceptionally transparent. As expected, EFT approximates the full theory well in regions where it is valid. However, moving away from the alignment/decoupling limit, the relevance of the higher-dimensional terms in the EFT expansion quickly becomes relevant. Although these ranges are currently not probed by the experiments, it demonstrates the need to include higher-dimensional corrections (chiefly squared dimension six terms) to well-approximate the full theory. Furthermore, as Higgs boson mixing is a feature in almost all BSM theories with a non-minimal Higgs sector, this shows the necessity to go beyond dimension six interactions when data is very precise or when we want to inform a potential UV scenario accurately.
Feynman gauge) are then [36] Π ZZ (
p 2 ) = − M 4 W B 0 p 2 , M 2 h , M 2 Z 4π 2 c 4 W v 2 + M 2 W B 00 p 2 , M 2 h , M 2 Z 4π 2 c 2 W v 2 − M 2 h M 2 W 1 − log M 2 h µ 2 16π 2 c 2 W v 2 cos 2 θ + − M 4 W B 0 p 2 , M 2 S , M 2 Z 4π 2 c 4 W v 2 + M 2 W B 00 p 2 , M 2 S , M 2 Z 4π 2 c 2 W v 2 − M 2 S M 2 W 1 − log M 2 S µ 2 16π 2 c 2 W v 2 sin 2 θ, (A.1) Π W W (p 2 ) = − M 4 W B 0 p 2 , M 2 h , M 2 W 4π 2 v 2 + M 2 W B 00 p 2 , M 2 h , M 2 W 4π 2 v 2 − M 2 h M 2 W 1 − log M 2 h µ 2 16π 2 v 2 cos 2 θ + − M 4 W B 0 p 2 , M 2 S , M 2 W 4π 2 v 2 + M 2 W B 00 p 2 , M 2 S , M 2 W 4π 2 v 2 − M 2 S M 2 W 1 − log M 2 S µ 2 16π 2 v 2 sin 2 θ, (A.2) Π γγ (p 2 ) = Π γZ (p 2 ) = 0 , (A.3)
where, the Passarino-Veltman functions [37] (see also [38,39]) A 0 , B 0 and B 00 capture the scalar one-loop dynamics (the vev is fixed via v = 2M W s W /e). We have cross checked these results numerically against previous results [22,23].
A.2 Modification due to the corresponding EFT at tree-level
We note down the tree-level correction to the gauge boson propagators as shown in Fig. 3 due to the presence of effective operators.
Π (EFT) W W (p 2 ) = g 2 W v 6 16 C (8) HD,1 − g 2 W v 6 16 C (8) HD,2 + p 2 v 4 C (8) HW + 2p 2 v 2 C (6) HW , (A.4) Π (EFT) ZZ (p 2 ) = c 2 W g 2 W v 6 16 C (8) HD,1 + c 2 W g 2 W v 6 16 C (8) HD,2 + c 2 W g 2 W v 4 8 C (6) HD + c 2 W p 2 v 4 C (8) HW +2c 2 W p 2 v 2 C (6) HW + c W s W g W g Y v 6 8 C(8)HD,1 + c W s W g W g Y v 6 8 C (8) HD,2 + c W s W g W g Y v 4 4 C(6)HD + p 2 c W s W v 4 C(8)HW B + 2p 2 c W s W v 2 C (6) HW B + s 2 W g 2 Y v 6 C (8) HD,1 16 + s 2 W g 2 Y v 6 C (8) HD,2 16 + s 2 W g 2 Y v 4 C (6) HD 8 +p 2 s 2 W v 4 C (8) HB + 2p 2 s 2 W v 2 C(6)
HB , (A.5)
Π (EFT) γγ (p 2 ) = c 2 W g 2 Y v 6 16 C (8) HD,1 + c 2 W g 2 Y v 6 16 C (8) HD,2 + c 2 W g 2 Y v 4 8 C (6) HD + p 2 c 2 W v 4 C (8) HB +2p 2 c 2 W v 2 C(6)HB − c W s W g W g Y v 6 8 C(8)HD,1 − c W s W g W g Y v 6 8 C (8) HD,2 − g W g Y c W s W v 4 4 C (6) HD − p 2 c W s W v 4 C (8) HW B − 2p 2 c W s W v 2 C (6) HW B + s 2 W g 2 W v 6 16 C (8) HD,1 + s 2 W g 2 W v 6 16 C (8) HD,2 + s 2 W g 2 W v 6 8 C (6) HD +p 2 s 2 W v 4 C (8) HW + 2p 2 s 2 W v 2 C (6) HW , (A.6) Π (EFT) γZ (p 2 ) = − c 2 W g W g Y v 6 16 C (8) HD,1 − c 2 W g W g Y v 6 16 C (8) HD,2 − c 2 W g W g Y v 4 8 C (6) HD − p 2 c 2 W v 4 2 C (8) HW B − p 2 c 2 W v 2 C (6) HW B + c W s W g 2 W v 6 16 C (8) HD,1 + c W s W g 2 W v 6 16 C (8) HD,2 + c W s W g 2 W v 4 8 C (6) HD − c W s W g 2 Y v 6 16 C (8) HD,1 − c W s W g 2 Y v 6 16 C (8) HD,2 − c W s W g 2 Y v 4 8 C (6) HD − p 2 c W s W v 4 C (8) HB + p 2 c W s W v 4 C (8) HW − 2p 2 c W s W v 2 C (6) HB +2p 2 c W s W v 2 C (6) HW + s 2 W g W g Y v 6 16 C(8)
HD,1 +
s 2 W g W g Y v 6 16 C(8)
HD,2 +
s 2 W g W g Y v 4 8 C (6) HD + p 2 s W v 4 2 C(8)
HW B + p 2 s 2 W v 2 C The couplings are given by g W = e/s W , g Y = e/c W .
B Unitarity Constraints
Unitarity provides a suitable tool to gauge whether the matching is indeed for perturbative choices of the UV model parameters. Perturbativity, in one way or another, is implicitly assumed in analysing any collider data and this extends to the electroweak precision constraints as well. To this end, we consider the partial wave constraints that can be derived from longitudinal gauge boson scattering to identify the regions of validity this way. The zeroth partial wave relevant for this is given for scattering i 1 i 2 → f 1 f 2 (see Ref. [40]) suppressing factors of 1/ √ 2 for identical particles in the initial i or final state f . √ s denotes the centre-of-mass energy, and θ is the scattering angle in this frame for the 2 → 2 scattering process described by the amplitude ∼ M. Furthermore, β(x, y, z) = x 2 + y 2 + z 2 − 2xy − 2yz − 2xz . of which we use the first one to obtain the constraints in Sec. 2. The presence of large η S ∼ M S leads to unitarity violation through S contributions to hh → hh via the s, t, u channels as well as large values of M S for cos 2 θ = 1 in longitudinal gauge boson scattering [41]. Numerical investigation shows that for our choice close to the alignment limits longitudinal unitarity constraints are not as relevant as hh scattering constraints. Assuming perturbative unitarity up to a cut-off scale ∼ M S requires η S M S . This reflects the fact that when a dimensionful coupling (i.e. a mass scale) such as η S becomes comparable to a UV cut-off (M S in the EFT description), we enter strong coupling. This is also visible from the expansion of phenomenologically relevant quantities such as Eq. (2.6), which scales ∼ vη S /M 2 S in the EFT regime M S M h .
(2.3), results in different mass eigenstates that are a mixture of h that is the neutral component of H, and S, related by the mixing angle θ: h s = cos θ − sin θ sin θ cos
Figure 2 .
295% and 68% confidence interval bounds on the BSM trilinear coupling η S and the Mass of the heavy scalar (M S ) obtained from the full-theory calculation setting constraints from
HD, 1 .
1The explicit forms of their structures are given in Tabs. 1 and 2, respectively. These operators modify the canonical form of the kinetic term for the Higgs field
Figure 4 .
4(a) shows the variation of the trilinear coupling with respect to heavy scalar mass. The orange and yellow dots denote η S values to reproduce values of the respective cos 2 θ choices. In (b) we show how the mixing angle is a function of the heavy mass for fixed values of the trilinear coupling η S . In both plots, the gray-shaded region respects perturbative unitarity (see appendix B).
= M = 700 GeV η s = M = 1 TeV η s = M = 5 TeV
Figure 5 .
5Impact of individual contributions on ∆ T = | T Full − T eft | for three benchmark choices of η s = M S , having maximal allowed mixing. ∆ T , computed upto O(M −2 S ), i.e., truncating effective Lagrangian at mass dimension six, receives a positive contribution from dimension eight operators. Though, the total contributions upto order O(M −2 S ) reduces ∆ T signifying inclusion of dimension eight operators brings the effective theory prediction relatively closer to that from the full theory.
= M = 700 GeV η s = M = 1 TeV η s = M = 5 TeV
Figure 6 .
6Impact of individual contributions on ∆ S = | S Full − S eft | for three benchmark choices of η s = M S . We note that adding the RG evolution contribution from dimension eight operators improves the effective theory to replicate the full theory.
Figure 7 .
7Difference between full theory and the EFT computation for T parameter at different heavy field mass scales. The mass scales are chosen to be (a) 700 GeV, (b) 1 TeV and (c) 5 TeV. The values for η S are chosen such that they satisfy the unitarity bounds.
Figure 8 .
8Difference between full theory and the EFT computation for S parameter at different heavy field mass scales. The mass scales are chosen to be (a) 700 GeV, (b) 1 TeV and (c) 5 TeV. The values for η S are chosen such that they satisfy the unitarity bounds.
lim s→∞ β 2 /s = 1. Unitarity of the S matrix then translates for f = i to the conditions |Re a 0
HW B ;Operator
Op. Structure
Wilson coeffs.
O
(8)
HD,1
Note that we employ the normalization of Peskin and Takeuchi, although the hatted quantities are typically defined in the normalization of[30]. This is to avoid confusion between the oblique corrections and the singlet field introduced below.
A Gauge boson two-point functionsA.1 Modification due to a singlet scalar extensionWe note down the modifications to the gauge boson two-point functions due to the presence of a new heavier scalar degree of freedom. Here, only the contributions from the scalarinvolved diagrams are presented. The BSM contribution to the two-point functions (in
Phenomenological Lagrangians. S Weinberg, 10.1016/0378-4371(79)90223-1Physica A. 96S. Weinberg, Phenomenological Lagrangians, Physica A 96 (1979) 327-340.
Dimension-Six Terms in the Standard Model Lagrangian. B Grzadkowski, M Iskrzynski, M Misiak, J Rosiek, 10.1007/JHEP10(2010)0851008.4884JHEP. 1085B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek, Dimension-Six Terms in the Standard Model Lagrangian, JHEP 10 (2010) 085, [1008.4884].
Complete one-loop matching for a singlet scalar in the Standard Model EFT. M Jiang, N Craig, Y.-Y Li, D Sutherland, 10.1007/JHEP02(2019)0311811.08878JHEP. 0231M. Jiang, N. Craig, Y.-Y. Li and D. Sutherland, Complete one-loop matching for a singlet scalar in the Standard Model EFT, JHEP 02 (2019) 031, [1811.08878].
Singlet night in Feynman-ville: one-loop matching of a real scalar. U Haisch, M Ruhdorfer, E Salvioni, E Venturini, A Weiler, 10.1007/JHEP04(2020)164JHEP. 041642003.05936U. Haisch, M. Ruhdorfer, E. Salvioni, E. Venturini and A. Weiler, Singlet night in Feynman-ville: one-loop matching of a real scalar, JHEP 04 (2020) 164, [2003.05936].
Matching scalar leptoquarks to the SMEFT at one loop. V Gherardi, D Marzocca, E Venturini, 10.1007/JHEP07(2020)225JHEP. 072252003.12525V. Gherardi, D. Marzocca and E. Venturini, Matching scalar leptoquarks to the SMEFT at one loop, JHEP 07 (2020) 225, [2003.12525].
Neutrino seesaw models at one-loop matching: discrimination by effective operators. Y Du, X.-X Li, J.-H Yu, 10.1007/JHEP09(2022)2072201.04646JHEP. 09207Y. Du, X.-X. Li and J.-H. Yu, Neutrino seesaw models at one-loop matching: discrimination by effective operators, JHEP 09 (2022) 207, [2201.04646].
Standard Model EFT and Extended Scalar Sectors. S Dawson, C W Murphy, 10.1103/PhysRevD.96.0150411704.07851Phys. Rev. D. 9615041S. Dawson and C. W. Murphy, Standard Model EFT and Extended Scalar Sectors, Phys. Rev. D 96 (2017) 015041, [1704.07851].
Complete one-loop matching of the type-I seesaw model onto the Standard Model effective field theory. D Zhang, S Zhou, 10.1007/JHEP09(2021)1632107.12133JHEP. 09163D. Zhang and S. Zhou, Complete one-loop matching of the type-I seesaw model onto the Standard Model effective field theory, JHEP 09 (2021) 163, [2107.12133].
Integrating out heavy fields in the path integral using the background-field method: general formalism. S Dittmaier, S Schuhmacher, M Stahlhofen, 10.1140/epjc/s10052-021-09587-72102.12020Eur. Phys. J. C. 81826S. Dittmaier, S. Schuhmacher and M. Stahlhofen, Integrating out heavy fields in the path integral using the background-field method: general formalism, Eur. Phys. J. C 81 (2021) 826, [2102.12020].
One-loop matching of the type-II seesaw model onto the Standard Model effective field theory. X Li, D Zhang, S Zhou, 10.1007/JHEP04(2022)0382201.05082JHEP. 0438X. Li, D. Zhang and S. Zhou, One-loop matching of the type-II seesaw model onto the Standard Model effective field theory, JHEP 04 (2022) 038, [2201.05082].
D Zhang, 2208.07869Complete One-loop Structure of the Type-(I+II) Seesaw Effective Field Theory. D. Zhang, Complete One-loop Structure of the Type-(I+II) Seesaw Effective Field Theory, 2208.07869.
Matchmakereft: automated tree-level and one-loop matching. A Carmona, A Lazopoulos, P Olgoso, J Santiago, 10.21468/SciPostPhys.12.6.1982112.10787SciPost Phys. 12198A. Carmona, A. Lazopoulos, P. Olgoso and J. Santiago, Matchmakereft: automated tree-level and one-loop matching, SciPost Phys. 12 (2022) 198, [2112.10787].
STrEAMlining EFT Matching. T Cohen, X Lu, Z Zhang, 10.21468/SciPostPhys.10.5.0982012.07851SciPost Phys. 1098T. Cohen, X. Lu and Z. Zhang, STrEAMlining EFT Matching, SciPost Phys. 10 (2021) 098, [2012.07851].
CoDEx: Wilson coefficient calculator connecting SMEFT to UV theory. S Das Bakshi, J Chakrabortty, S K Patra, 10.1140/epjc/s10052-018-6444-21808.04403Eur. Phys. J. C. 7921S. Das Bakshi, J. Chakrabortty and S. K. Patra, CoDEx: Wilson coefficient calculator connecting SMEFT to UV theory, Eur. Phys. J. C 79 (2019) 21, [1808.04403].
C W Murphy, Dimension-8 Operators in the Standard Model Effective Field Theory. 59C. W. Murphy, Dimension-8 Operators in the Standard Model Effective Field Theory, 2005.00059.
Complete set of dimension-eight operators in the standard model effective field theory. H.-L Li, Z Ren, J Shu, M.-L Xiao, J.-H Yu, Y.-H Zheng, 10.1103/PhysRevD.104.015026Phys. Rev. D. 104150262005.00008H.-L. Li, Z. Ren, J. Shu, M.-L. Xiao, J.-H. Yu and Y.-H. Zheng, Complete set of dimension-eight operators in the standard model effective field theory, Phys. Rev. D 104 (2021) 015026, [2005.00008].
Integrating out heavy scalars with modified EOMs: matching computation of dimension-eight SMEFT coefficients. U Banerjee, J Chakrabortty, C Englert, S U Rahaman, M Spannowsky, 2210.14761U. Banerjee, J. Chakrabortty, C. Englert, S. U. Rahaman and M. Spannowsky, Integrating out heavy scalars with modified EOMs: matching computation of dimension-eight SMEFT coefficients, 2210.14761.
Role of dimension-eight operators in an EFT for the 2HDM. S Dawson, D Fontes, S Homiller, M Sullivan, 10.1103/PhysRevD.106.0550122205.01561Phys. Rev. D. 10655012S. Dawson, D. Fontes, S. Homiller and M. Sullivan, Role of dimension-eight operators in an EFT for the 2HDM, Phys. Rev. D 106 (2022) 055012, [2205.01561].
Uncovering the High Scale Higgs Singlet Model. S Dawson, P P Giardino, S Homiller, 10.1103/PhysRevD.103.0750162102.02823Phys. Rev. D. 10375016S. Dawson, P. P. Giardino and S. Homiller, Uncovering the High Scale Higgs Singlet Model, Phys. Rev. D 103 (2021) 075016, [2102.02823].
Higgs singlet extension parameter space in the light of the LHC discovery. G M Pruna, T Robens, 10.1103/PhysRevD.88.1150121303.1150Phys. Rev. D. 88115012G. M. Pruna and T. Robens, Higgs singlet extension parameter space in the light of the LHC discovery, Phys. Rev. D 88 (2013) 115012, [1303.1150].
Influence of strongly coupled, hidden scalars on Higgs signals. T Binoth, J J Van Der, Bij, 10.1007/s002880050442hep-ph/9608245Z. Phys. C. 75T. Binoth and J. J. van der Bij, Influence of strongly coupled, hidden scalars on Higgs signals, Z. Phys. C 75 (1997) 17-25, [hep-ph/9608245].
Narrow trans-TeV Higgs bosons and H -> hh decays: Two LHC search paths for a hidden sector Higgs boson. M Bowen, Y Cui, J D Wells, 10.1088/1126-6708/2007/03/036hep-ph/0701035JHEP. 0336M. Bowen, Y. Cui and J. D. Wells, Narrow trans-TeV Higgs bosons and H -> hh decays: Two LHC search paths for a hidden sector Higgs boson, JHEP 03 (2007) 036, [hep-ph/0701035].
Exploring the Higgs portal. C Englert, T Plehn, D Zerwas, P M Zerwas, 10.1016/j.physletb.2011.08.0021106.3097Phys. Lett. B. 703C. Englert, T. Plehn, D. Zerwas and P. M. Zerwas, Exploring the Higgs portal, Phys. Lett. B 703 (2011) 298-305, [1106.3097].
Exploring the Higgs Portal with 10/fb at the LHC. B Batell, S Gori, L.-T Wang, 10.1007/JHEP06(2012)1721112.5180JHEP. 06172B. Batell, S. Gori and L.-T. Wang, Exploring the Higgs Portal with 10/fb at the LHC, JHEP 06 (2012) 172, [1112.5180].
Effective limits on single scalar extensions in the light of recent LHC data. S Anisha, Das, S Bakshi, A Banerjee, J Biekötter, S Chakrabortty, Kumar Patra, 2111.05876Anisha, S. Das Bakshi, S. Banerjee, A. Biekötter, J. Chakrabortty, S. Kumar Patra et al., Effective limits on single scalar extensions in the light of recent LHC data, 2111.05876.
Estimation of oblique electroweak corrections. M E Peskin, T Takeuchi, 10.1103/PhysRevD.46.381Phys. Rev. D. 46M. E. Peskin and T. Takeuchi, Estimation of oblique electroweak corrections, Phys. Rev. D 46 (Jul, 1992) 381-409.
G Altarelli, R Barbieri, 10.1016/0370-2693(91)91378-9Vacuum polarization effects of new physics on electroweak processes. 253G. Altarelli and R. Barbieri, Vacuum polarization effects of new physics on electroweak processes, Physics Letters B 253 (1991) 161-167.
ZFITTER v.6.21: A Semianalytical program for fermion pair production in e + e − annihilation. D Y Bardin, P Christova, M Jack, L Kalinovskaya, A Olchevski, S Riemann, 10.1016/S0010-4655(00)00152-1hep-ph/9908433Comput. Phys. Commun. 133D. Y. Bardin, P. Christova, M. Jack, L. Kalinovskaya, A. Olchevski, S. Riemann et al., ZFITTER v.6.21: A Semianalytical program for fermion pair production in e + e − annihilation, Comput. Phys. Commun. 133 (2001) 229-395, [hep-ph/9908433].
Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter. H Flacher, M Goebel, J Haller, A Hocker, K Monig, J Stelzer, 10.1140/epjc/s10052-009-0966-6Eur. Phys. J. C. 600811.0009H. Flacher, M. Goebel, J. Haller, A. Hocker, K. Monig and J. Stelzer, Revisiting the Global Electroweak Fit of the Standard Model and Beyond with Gfitter, Eur. Phys. J. C 60 (2009) 543-583, [0811.0009].
Electroweak symmetry breaking after LEP-1 and LEP-2. R Barbieri, A Pomarol, R Rattazzi, A Strumia, 10.1016/j.nuclphysb.2004.10.014hep-ph/0405040Nucl. Phys. B. 703R. Barbieri, A. Pomarol, R. Rattazzi and A. Strumia, Electroweak symmetry breaking after LEP-1 and LEP-2, Nucl. Phys. B 703 (2004) 127-146, [hep-ph/0405040].
Update of the global electroweak fit and constraints on two-Higgs-doublet models. J Haller, A Hoecker, R Kogler, K Mönig, T Peiffer, J Stelzer, 10.1140/epjc/s10052-018-6131-31803.01853Eur. Phys. J. C. 78675J. Haller, A. Hoecker, R. Kogler, K. Mönig, T. Peiffer and J. Stelzer, Update of the global electroweak fit and constraints on two-Higgs-doublet models, Eur. Phys. J. C 78 (2018) 675, [1803.01853].
Renormalization Group Evolution of the Standard Model Dimension Six Operators I: Formalism and lambda Dependence. E E Jenkins, A V Manohar, M Trott, 10.1007/JHEP10(2013)0871308.2627JHEP. 1087E. E. Jenkins, A. V. Manohar and M. Trott, Renormalization Group Evolution of the Standard Model Dimension Six Operators I: Formalism and lambda Dependence, JHEP 10 (2013) 087, [1308.2627].
Renormalization Group Evolution of the Standard Model Dimension Six Operators III: Gauge Coupling Dependence and Phenomenology. R Alonso, E E Jenkins, A V Manohar, M Trott, 10.1007/JHEP04(2014)159JHEP. 041591312.2014R. Alonso, E. E. Jenkins, A. V. Manohar and M. Trott, Renormalization Group Evolution of the Standard Model Dimension Six Operators III: Gauge Coupling Dependence and Phenomenology, JHEP 04 (2014) 159, [1312.2014].
Towards the renormalisation of the Standard Model effective field theory to dimension eight: Bosonic interactions I. M Chala, G Guedes, M Ramos, J Santiago, 10.21468/SciPostPhys.11.3.0652106.05291SciPost Phys. 1165M. Chala, G. Guedes, M. Ramos and J. Santiago, Towards the renormalisation of the Standard Model effective field theory to dimension eight: Bosonic interactions I, SciPost Phys. 11 (2021) 065, [2106.05291].
Towards the renormalisation of the Standard Model effective field theory to dimension eight: bosonic interactions II. S Das Bakshi, M Chala, A Díaz-Carmona, G Guedes, 10.1140/epjp/s13360-022-03194-52205.03301Eur. Phys. J. Plus. 137973S. Das Bakshi, M. Chala, A. Díaz-Carmona and G. Guedes, Towards the renormalisation of the Standard Model effective field theory to dimension eight: bosonic interactions II, Eur. Phys. J. Plus 137 (2022) 973, [2205.03301].
Generating Feynman diagrams and amplitudes with FeynArts 3. T Hahn, 10.1016/S0010-4655(01)00290-9hep-ph/0012260Comput. Phys. Commun. 140T. Hahn, Generating Feynman diagrams and amplitudes with FeynArts 3, Comput. Phys. Commun. 140 (2001) 418-431, [hep-ph/0012260].
One Loop Corrections for e+ e-Annihilation Into mu+ mu-in the Weinberg Model. G Passarino, M J G Veltman, 10.1016/0550-3213(79)90234-7Nucl. Phys. B. 160G. Passarino and M. J. G. Veltman, One Loop Corrections for e+ e-Annihilation Into mu+ mu-in the Weinberg Model, Nucl. Phys. B 160 (1979) 151-207.
Techniques for calculation of electroweak radiative corrections at the one loop level and results for W physics at LEP-200. A Denner, 10.1002/prop.2190410402Fortsch. Phys. 410709.1075A. Denner, Techniques for calculation of electroweak radiative corrections at the one loop level and results for W physics at LEP-200, Fortsch. Phys. 41 (1993) 307-420, [0709.1075].
Electroweak Radiative Corrections for Collider Physics. A Denner, S Dittmaier, 10.1016/j.physrep.2020.04.0011912.06823Phys. Rept. 864A. Denner and S. Dittmaier, Electroweak Radiative Corrections for Collider Physics, Phys. Rept. 864 (2020) 1-163, [1912.06823].
On the General Theory of Collisions for Particles with Spin. M Jacob, G C Wick, 10.1016/0003-4916(59)90051-XAnnals Phys. 7M. Jacob and G. C. Wick, On the General Theory of Collisions for Particles with Spin, Annals Phys. 7 (1959) 404-428.
Weak Interactions at Very High-Energies: The Role of the Higgs Boson Mass. B W Lee, C Quigg, H B Thacker, 10.1103/PhysRevD.16.1519Phys. Rev. D. 161519B. W. Lee, C. Quigg and H. B. Thacker, Weak Interactions at Very High-Energies: The Role of the Higgs Boson Mass, Phys. Rev. D 16 (1977) 1519.
| [] |
[
"COUNTEREXAMPLE TO THE LAPTEV-SAFRONOV CONJECTURE",
"COUNTEREXAMPLE TO THE LAPTEV-SAFRONOV CONJECTURE",
"COUNTEREXAMPLE TO THE LAPTEV-SAFRONOV CONJECTURE",
"COUNTEREXAMPLE TO THE LAPTEV-SAFRONOV CONJECTURE"
] | [
"Sabine Bögli ",
"Jean-Claude Cuenin ",
"Sabine Bögli ",
"Jean-Claude Cuenin "
] | [] | [] | We prove that the Laptev-Safronov conjecture (Comm. Math. Phys. 2009) is false in the range that is not covered by Frank's positive result (Bull. Lond. Math. Soc. 2011). The simple counterexample is adaptable to a large class of Schrödinger type operators, for which we also prove new sharp upper bounds. | 10.1007/s00220-022-04546-z | [
"https://export.arxiv.org/pdf/2109.06135v1.pdf"
] | 237,491,585 | 2109.06135 | 29cad701b88540873115319940749f939bff1796 |
COUNTEREXAMPLE TO THE LAPTEV-SAFRONOV CONJECTURE
13 Sep 2021
Sabine Bögli
Jean-Claude Cuenin
COUNTEREXAMPLE TO THE LAPTEV-SAFRONOV CONJECTURE
13 Sep 2021
We prove that the Laptev-Safronov conjecture (Comm. Math. Phys. 2009) is false in the range that is not covered by Frank's positive result (Bull. Lond. Math. Soc. 2011). The simple counterexample is adaptable to a large class of Schrödinger type operators, for which we also prove new sharp upper bounds.
Introduction
Consider a Schrödinger operator H V = −∆ + V on L 2 (R d ) with a complexvalued potential V . The Laptev-Safronov conjecture [27] stipulates that in d ≥ 2 dimensions any non-positive eigenvalue z of H V satisfies the bound
|z| γ ≤ D γ,d R d |V (x)| γ+ d 2 dx(1)
for 0 < γ ≤ d/2, and with D γ,d independent of V and z. It is known that the condition γ ≤ d/2 is necessary, see [4]. The inequality (1) is known to be true if d = 1 and γ = 1/2 or if d ≥ 2 and γ ≤ 1/2. The one-dimensional bound (with the sharp constant D 1/2,1 = 1/2) is due to Abramov-Aslanyan-Davies [1], and the higher dimensional bound is due to Frank [12]. Originally, these bounds were stated for z ∈ C \ R + , but it was later realized by Frank and Simon [14] that embedded eigenvalues z ∈ R + can also be accommodated. In fact, Frank and Simon "almost disproved" the Laptev-Safronov conjecture by constructing a counterexample (based on an earlier example of Ionescu and Jerison [23]) that prohibits (1) for z ∈ R + whenever d ≥ 2 and γ > 1/2. Here we prove that the Laptev-Safronov conjecture is false in the form originally stated. In the following we write q = γ + d/2. For any ε > 0, let χ ε be the indicator function of
{x = (x 1 , x ′ ) ∈ R × R d−1 : |x 1 | < ε −1 , |x ′ | < ε −1/2 }.
We construct potentials V ε , ε > 0, with |V ε | ≤ εχ ε and such that z ε = 1 + iε is an eigenvalue of H Vε . This allows us to disprove the conjecture. 1 Actually, our counterexample shows more, namely that the following substitute of (1) for "long-range" potentials (i.e. q > (d + 1)/2), due to Frank [13],
dist(z, R + ) q− d+1 2 |z| 1 2 ≤ C q,d V q q ,(2)
is sharp (in the sense that the exponent of dist(z, R + ) cannot be made smaller while preserving scale-invariance).
Theorem 2. Let d ≥ 2 and q ≥ (d + 1)/2. Then
lim inf ε→0 dist(z ε , R + ) q− d+1 2 |z ε | 1 2 V ε q q > 0.
In addition, the same example saturates the recent bound of the second author [8, Th. 1.1], which states that
|z| 1 2 ≤ C d sup y∈R d R d |V (x)| d+1 2 exp(−E|x − y|)dx,(3)
with E = Im √ z > 0, and generalizes the one-dimensional analog of Davies and Nath [9] to higher dimensions. Note that estimating the right hand side of (3) from above by the same expression with E = 0 results in the endpoint estimate (1) with γ = 1/2 (or equivalently q = (d + 1)/2). Hence the endpoint case in Theorem 2 already implies that (3) is optimal in some sense. The exponential factor in (3) effectively localizes the integration to a ball B(y, C/E). Moreover, the right hand side of (3) may be much smaller than that of (2). There are estimates similar to (3) for any q ∈ (d/2, (d + 1)/2], see [8]. We do not state these here but remark that our counterexample also shows that an analog of [8, Th. 1] cannot hold for q > (d + 1)/2. For brevity, we denote the right hand side of (3) by F V (E).
F V (L Im √ z ε ) ≥ C ′ d L.
These three theorems show that the bounds (1),(2),(3) provide a rather complete picture of sharp eigenvalue inequalities for Schrödinger operators with complex potentials. Some refinements for singular potentials are known, see e.g. [12], [28], [29], [5], but we focus here on the long-range aspects of the potential. This is reflected by the fact that the construction of our counterexample is local in Fourier space, similar to the examples for embedded eigenvalues in [7], where a connection between the aforementioned Ionescu-Jerison example and the "Knapp example" in Fourier restriction theory (see e.g. [36], [15] , [30], [11] for textbook presentations) was made. The examples in [7] are based on superpositions of infinitely many Knapp wavepackets, while our example here is based on a single such wavepacket.
As in [7], the locality in Fourier space affords the flexibility to adapt the counterexample to a large class of Schrödinger type operators of the form
H V = h 0 (D) + V (x),(4)
where h 0 is a tempered distribution on R d , smooth in a neighborhood of some point ξ 0 ∈ R d and such that λ := h 0 (ξ 0 ) is a regular value of h 0 . This means that the isoenergy surface
S λ = {ξ ∈ R d : h 0 (ξ) = λ}(5)
is a smooth nonempty hyersurface near ξ 0 . Here h 0 (D)f = F −1 (h 0f ) is the Fourier multiplier corresponding to h 0 and F −1 is the inverse Fourier transform. It is well known that upper bounds for the resolvent (H 0 − z) −1 , for z close to λ, crucially depend on curvature properties of S λ , see e.g. [17], [6], [38]. For the Laplacian H 0 = −∆, i.e. h 0 (ξ) = ξ 2 , the surface S λ = √ λS d−1 has everywhere nonvanishing Gauss curvature if λ > 0. This fact lies at the heart of the Stein-Tomas theorem as well as the uniform resolvent estimates of Kenig-Ruiz-Sogge [24] that are behind the upper bound (1) for γ ≤ 1/2. We will prove generalizations of (2), (3) in Section 5 for operators of the form (4) (we actually allow V to be a pseudodifferential operator). Our counterexamples show that these upper bounds are sharp. To simplify the exposition we state the result for the fractional Laplacian H 0 = (−∆) s . We remark that part (i) of the following theorem was already proved in [6, Th. 6.1] (see also [22] for related resolvent estimates).
Theorem 4. Let d ≥ 1, s > 0 and q ≥ q s , where q s := d/s if s < d, 1+ if s = d, 1 if s > d.
Let
H V = (−∆) s/2 + V .
Then then any eigenvalue z ∈ C of H V satisfies the following.
(i) If q ≤ (d + 1)/2, then |z| q− d s ≤ D d,s,q V q q . (ii) If q > (d + 1)/2, then dist(z, R + ) q− d+1 2 |z| d+1 2 − d s ≤ D d,s,q V q q . (iii) For any natural number N , |z| d+1 2 − d s ≤ C d,s,N sup y∈R d R d (1 + | Im z(x − y)|) −N |V (x)| d+1 2 dx.
The estimate in (iii) corresponds to (3). Using explicit formulas for the resolvent kernel of the fractional Laplacian in terms of special functions one could probably replace the rapid decay by an exponential one. However, our proof only uses stationary phase estimates and works for more general constant coefficient operators H 0 . In practice, the difference between (3) and (iii) is not significant; only the decay scale | Im z| −1 is. Observe that if s < 2d/(d + 1) (this condition appears in [6,22]), then one is always in the long-range case (ii) since (d + 1)/2 < q s . The proof of (ii), (iii) could be obtained by closely following the arguments in [13] and [8], respectively. However, our main point here is to show that all the statements of Theorem 4 follow from the general results of Propositions 20 and 24 below in the special case h 0 (ξ) = |ξ| s .
As a further consequence of our counterexample to the Laptev-Safronov conjecture, one can modify the construction of [4, Th. 1], valid for q > d, to q > (d + 1)/2. Here, σ p (H V ) denotes the set of eigenvalues.
Theorem 5. Let d ≥ 2, q > (d + 1)/2 and ε > 0. Then there exists V ∈ L ∞ (R d ) ∩ L q (R d ) with max{ V ∞ , V q } ≤ ε such that σ p (H V )\R + accumulates at every point in R + .
To conclude the introduction we give some comments on the idea behind the counterexample to the Laptev-Safronov conjecture. A key difference to the constructions in [14,23,7] (for embedded eigenvalues) is that the potential here is not explicit, but depends on an (unknown) eigenfunction of a compact operator K (see the proof of Lemma 9 for details). In [14,23,7] one starts with a putative eigenfunction of H V and determines V from the eigenvalue equation. The strategy we adopt here more closely follows the standard approach to prove upper bounds, the so-called Birman-Schwinger principle [3], [34]. In its simplest form, this principle says that z is an eigenvalue of H V if and only if −1 is an eigenvalue of the compact operator |V |(H 0 − z) −1 √ V . A simplified sketch of the proof of (1) for d ≥ 2 and γ ≤ 1/2 then goes as follows,
1 ≤ |V |(H 0 − z) −1 √ V ≤ V q (H 0 − z) −1 p→p ′ ≤ (D γ,d |z| −γ ) 1/q V q ,(6)
where p −1 + (p ′ ) −1 = q −1 and we recall that q = γ + d/2. The second inequality above is simply Hölder's inequality, while the last inequality follows from the work of Kenig-Ruiz-Sogge [24] (and is due to Frank [12] in its rescaled version). This inequality is closely related to the Stein-Tomas theorem for the Fourier restriction operator
F S f :=f ↾ S, where S = √ λS d−1 if z = λ + iε. The Knapp example shows that p = 2(d+1)/(d+3) (corresponding to q = (d+1)/2) is the best (largest) possible exponent in the inequality f L 2 (S d−1 ) ≤ C p f L p (R d ) . The same is true for the p → p ′ resolvent estimate since Im(H 0 − (λ + iε)) −1 = − ε (H 0 − λ) 2 + ε 2(7)
and the right hand side converges to a constant times F * S F S , as ε → 0. Similarly to the previous argument, the proof of (2) for q > (d + 1)/2 (γ > 1/2) in [13] uses the non-uniform bound
(H 0 − z) −1 p→p ′ dist(z, R + ) d+1 2q −1 (|z| = 1),
which is also sharp [26,Prop. 1.3]. In our construction the potential V is adapted to the Knapp example, making the second (Hölder) and third inequality in (6) optimal simultanously. The only possible loss of optimality thus comes from the first inequality, and this may happen if the spectral radius of |V |(H 0 − z) −1 √ V is much smaller than its norm. We avoid this problem by working with (7) instead of the full resolvent. It turns out that one can redefine V (without making it larger in L q norm) in such a way that z becomes an eigenvalue of H V .
Organization of the paper. In Section 2 we construct the counterexample to the Laptev-Safronov conjecture and prove Theorems 1-3 and 5. In Section 3 we give an alternative (non compactly supported) counterexample that is a perturbation of the Frank-Simon example for embedded eigenvalues. In Section 4 we prove an almost sharp quantitative lower bound on the norm of the compact operator K (which implies an upper bound on the potential) and generalize the counterexample to generalized Schrödinger operators of the form (4). Corresponding upper bounds for such operators (in particular, a proof of Theorem 4) are collected in Section 5.
Notation. For a, b ≥ 0 the statement a b means that a ≤ Cb for some universal constant C. The expression a ≍ b means a b and b a. If the estimate depends on a parameter τ , we indicate this by writing a τ b. In particular, if τ = N , we always mean that the estimate is true for any natural number N , with a constant depending on N . The expression a b κ+ (κ ∈ R) means a δ b κ+δ for any δ > 0, and similarly for κ−. The dependence on the dimension and on other fixed quantities is always suppressed. An assumption a ≪ b means that there is a small constant c such that if a ≤ cb, then the ensuing conclusion holds. We also use c as a generic positive constant in estimates involving exponentials, as in (3). The big oh notation a = O(b) means |a| b (here we are not assuming a ≥ 0). For an integral operator K on R d we denote by K r→s its L r → L s operator norm. If r, s = 2, then we just write K . Similarly, we denote the L r norm of a function f by f r and write f if r = 2. We denote by σ(T ) = {z ∈ C : T − z not boundedly invertible} the spectrum of a linear operator T . We write
x = (1 + x 2 ) 1/2 , where x 2 = x · x for x ∈ R d .
If nothing else is indicated, integrals are always understood to be over R d . The indicator function of a set A is denoted by 1 A . If we speak of a bump function we mean a smooth, compactly supported, real-valued function with values in [0, 1].
Counterexample to the conjecture
The counterexample to the Laptev-Safronov conjecture is based on Lemmas 9 and 11 below. In the following we use the notation
δ λ,ε (H 0 ) := ε (H 0 − λ) 2 + ε 2 ,
where λ ∈ R and ε > 0. We abbreviate this by
δ ε (H 0 ) if λ = 1. Let H 0 = h 0 (x, D) be a self-adjoint, elliptic pseudodifferential operator on L 2 (R d ) with domain D(H 0 ) := {u ∈ L 2 (R d ) : H 0 u ∈ L 2 (R d )}.
Here h(x, D) is the Kohn-Nirenberg quantization of a symbol h ∈ S m ̺,δ (the standard Hörmander classes, see e.g. [39]) where 0 ≤ δ < ̺ ≤ 1 and m > 0. Hence,
h 0 (x, D)u(x) = (2π) −d R d e ix·ξ h(x, ξ)û(ξ)dξ,
whereû is the Fourier transform of a Schwartz function u. We will write H 0 ∈ OP S m ̺,δ . We assume that H 0 is elliptic, i.e. |h 0 (x, ξ)| |ξ| m for |ξ| ≥ C. By [39, Prop. 5.5], we have D(H 0 ) = H m (R d ). We also assume that H 0 is real, i.e. commutes with complex conjugation. On the symbol level, this means that h 0 (x, ξ) = h 0 (x, −ξ). Note that this is the case for the Laplacian, for which h 0 (ξ) = ξ 2 .
Let U ⊂ R d be a nonempty open, precompact set, and let χ = 1 U be its indicator function. In the following we will consider the operator K := χδ λ,ε (H 0 )χ.
Lemma 6. K is compact. Proof. Let Λ = (1 − ∆) 1/2 and write K = χΛ −m (Λ m δ λ,ε (H 0 )Λ m )Λ −m χ.
By the Kato-Seiler-Simon inequality [35,Theorem 4.1], χΛ −m is in the Schatten class S p for any p > d/m (and p ≥ 2). In particular, it is compact, and so is its adjoint Λ −m χ. It remains to show that Λ m δ λ,ε (H 0 )Λ m is L 2 bounded. By Beal's theorem [2, Theorem 3.2] it follows that δ λ,ε (H 0 ) ∈ OP S −2m ̺,δ , and by the L 2 boundedness of zero order pseudodifferential operators (see e.g. [39,Theorem 5.3] for the symbol classes considered here), Λ m δ λ,ε (H 0 )Λ m is bounded.
We next state an analog of the Birman-Schwinger principle for the operator K. The proof is a straightforward verification. Here we do not need to assume χ = 1 U .
Lemma 7. Let µ ∈ C \ {0}. Then µ is an eigenvalue of K if and only if the operator (H 0 − λ) 2 + ε 2 − (ε/µ)χ 2 has nontrivial kernel. Moreover, δ λ,ε (H 0 )χ : ker(K − µ) → ker((H 0 − λ) 2 + ε 2 − (ε/µ)χ 2 )
is a linear isomorphism with inverse µ −1 χ.
The following lemma is a standard elliptic regularity result, but we provide a proof for completeness.
Lemma 8. Let µ ∈ C \ {0}. Then ker(K − µ) ⊂ C ∞ (U ). Proof. By Lemma 7 it suffices to prove that if u ∈ D(H 2 0 ), (H 0 − λ) 2 u + ε 2 u − (ε/µ)χ 2 u = 0, then u ∈ C ∞ (U )
. It is clear that the above equation takes the form P u = f with P ∈ OP S 2m ̺,δ elliptic and u, f ∈ L 2 (R d ). Let Q ∈ OP S 2m ̺,δ be a parametrix for P (see e.g. [39, Ch. 1, Sect. 4]). Then, modulo smooth functions, we have QP u = u and hence u = Qf ∈ H 2m (R d ) by [39,Prop. 5.5]. To bootstrap this, we localize near a point x 0 ∈ U and let χ j be a sequence of bump functions in U such that χ j = 1 near x 0 and χ j χ j−1 = χ j . Then, again modulo smooth functions, [39,Prop. 5.5] and since the commutator reduces the order by ̺ − δ, see [39, (3.24)]). Applying the previous elliptic regularity estimate successively yields
u j = χ j u satisfies P u j = f j with f j = [P, χ j ]u = [P, χ j ]u j−1 ∈ H j(̺−δ) (R d ) (again byu j ∈ H 2m+j(̺−δ) (R d ). By Sobolev embedding, u j ∈ C k (R d ) for 2m + j(̺ − δ) > d/2 + k. This shows that u is smooth at x 0 . Lemma 9. There exists V ∈ L ∞ (R d ) such that z = λ + iε is an eigenvalue of H V and |V | ≤ K −1 χ.
Proof. Since K is a nonnegative compact operator, its largest eigenvalue equals K . Hence, there is a nontrivial φ ∈ L 2 (R d ) such that Kφ = K φ. Since K is real we may and will assume that φ is real-valued. Using this together with the identity
δ λ,ε (H 0 ) = Im(H 0 − z) −1 = 1 2i ((H 0 − z) −1 − (H 0 − z) −1 ),
the eigenvalue equation takes the form
(H 0 − z)ψ = K −1 χ Im ψ, where ψ := (H 0 − z) −1 φ and where we used that χφ = φ since χ 2 = χ.
Here, Im ψ denotes the imaginary part of a function, in distinction to the meaning of Im above for the imaginary part of an operator. Let N := {x ∈ U : ψ(x) = 0} be the nodal set of ψ, and set V :
= − K −1 1 U\N Im ψ ψ .
Note that the nodal set is well defined since φ is smooth in U , by Lemma 8, and hence ψ is smooth in U , by the pseudolocal property [39, page 6]. Here we are again using Beal's theorem to assert that (H 0 − z) −1 is a pseudodifferential operator. Then
(H 0 − z)ψ + V ψ = K −1 1 N Im ψ = 0
and V satisfies the claimed bound.
Remark 10. In the case of the Laplacian, H 0 = −∆, the set U \ N has positive Lebesgue measure. This can be most easily seen a posteriori: Since φ is nontrivial, ψ is nontrivial as well. If U \ N had zero measure, then we would have −∆ψ = zψ, but this has no H 2 solution, as a consequence of Rellich's theorem [32].
The L q -norm of V is estimated by
V q ≤ K −1 χ q ,(8)
so it remains to estimate K from below and χ q from above.
To avoid technicalities at this stage we restrict attention to the Laplacian H 0 = −∆, i.e. h 0 (ξ) = ξ 2 . Without loss of generality (scaling) we restrict ourselves to λ = 1. For ε > 0 let T ε be an ε −1 × ε −1/2 tube centred at the origin, with long side pointing in the x 1 direction, i.e.
T ε = {|x 1 | < ε −1 , |x ′ | < ε −1/2 }.(9)
Let χ ε be the indicator function of the tube T ε . We then have the following (qualitative) lower bound for K ε = χ ε δ ε (H 0 )χ ε . In Lemma 17 below we will prove a quantitative (almost optimal) lower bound.
Lemma 11. Let H 0 = −∆, λ = 1 and χ ε as above. For 0 < ε ≪ 1 the operator norm ε K ε is bounded below by a positive constant, independent of ε.
Proof. We conjugate K ε by e ix1 and rescale (y 1 , y ′ ) = (εx 1 , ε 1/2 x ′ ). The resulting operator is isospectral to εK ε and given by
K ′ ε = χ 1 δ 1 (H ′ ε )χ 1(10)
where χ 1 is the indicator function of T 1 and H ′ ε = −2i∂ y1 − ∆ y ′ − ε∂ 2 y1 . In the limit ε → 0 the operator K ′ ε converges strongly to K ′ 0 . Since this is not the zero operator, there exists an L 2 -normalized function f such that K ′ 0 f > 0. Now the strong convergence implies that for 0
< ε ≪ 1 we have K ′ ε f ≥ K ′ 0 f /2. This implies that ε K ε = K ′ ε ≥ K ′ ε f ≥ K ′ 0 f /2
and thus proves the claim.
Remark 12. Laptev and Safronov based their conjecture on the famous Wignervon Neumann example [40] (see also [31]), which is a potential decaying like 1/|x| with embedded eigenvalue at λ = 1. The potential is in L q for any q > d, corresponding to γ > d/2 in (1). Had they been aware of the Ionescu-Jerison example [23] they might have conjectured the smaller range γ ≤ 1/2, for which (1) indeed holds [12]. To prove the weaker statement with q > d in Theorem 1 one can replace the tube (9) by the ball {|x| < ε −1 }. Then the operator (10) (under the scaling y = εx) is independent of ε and becomes
K ′ = χ 1 δ 1 (−∆ y )χ 1 .
One can repeat the argument in the proof of Lemma 11 (i.e. K ′ = 0) and combine the result with Lemma 9 to conclude Theorem 1 for q > d since now V ε q ε 1−d/q . Remark 13. A straightforward adaptation of the proof of Lemma 11 yields the same conclusion for the Laplacian H 0 = −∆ g where g is a short range perturbation of the Euclidean metric, g ij (x) − δ ij (x) = O(|x| −2− ) as |x| → ∞. This shows that the results of Guillarmou, Hassell and Krupchyk [19,Theorem 4] are optimal for such metrics. The upper bounds in [19] are proved for the more general case of nontrapping asymptotically conic manifolds of dimension d ≥ 3. Of course, scaling is not available in this situation, and the mentioned modification of Lemma 11 only shows optimality for λ in a bounded interval.
2.1. Proofs of Theorems 1-3. Combining Lemmas 9 and 11 we obtain a sequence of potentials V ε and a sequence z ε = 1 + iε of eigenvalues of −∆ + V ε , 0 < ε ≪ 1, such that
V ε q ε 1− d+1 2q .
This follows from (8) and the fact that χ ε q = |T ε | 1/q (where | · | denotes Lebesgue measure). Theorems 1 and 2 follow immediately since |z ε | ≍ 1, dist(z, R + ) = ε. Theorem 3 follows from the same counterexample. We use in addition that, in the limit ε → 0, Im √ z ε /ε → 1/2, and then the substitution ( Proof. If λ = 1, we use Lemmas 9 and 11 to obtain a sequence of potentials V ε and a sequence z ε = 1+iε of eigenvalues of −∆+V ε , 0 < ε ≪ 1, such that V ε q ε 1− d+1 2q and V ε ∞ ε. Then the claim follows by taking ε sufficiently small. If λ = 1, the claim follows by scaling to the previous case.
x 1 , x ′ ) = (ε −1 y 1 , ε −1/2 y ′ ) yields F V (L Im √ z ε ) ε d+1 2 Tε exp(−L Im √ z ε |x|)dx ≍ T1 exp(−L|y 1 |/2)dy ≍ L −1 .
Now the proof of Theorem 5 is completely analogous to the one of [4, Th. 1], using Lemma 14 instead of [4,Lem. 1]. Note that in [4] the eigenvalues are constructed in the lower complex half-plane whereas here we constructed eigenvalues in the upper complex half-plane; one could take the adjoint operator to transform one case into the other one.
Perturbation of embedded eigenvalues
In this section we provide an alternative counterexample to the Laptev-Safronov conjecture that is closer to that suggested by Frank and Simon [14]. We continue to make the same assumptions on H 0 and U as in Lemma 9. Proof. We set X = (H 0 − λ) and write εδ λ,ε (H 0 ) = 1 − X 2 /(X 2 + ε 2 ). By the Peter Paul inequality, ε X/(X 2 + ε 2 ) ≤ 1/2. Since Xf ≤ ε/2, this yields X 2 /(X 2 + ε 2 )f ≤ 1/4, and hence (using X 2 /(X 2 + ε 2 ) ≤ 1 and χ ≤ 1)
χX 2 /(X 2 + ε 2 )χf ≤ X 2 /(X 2 + ε 2 )f + X 2 /(X 2 + ε 2 ) (1 − χ)f ≤ 1 2 .
We have also used (1 − χ)f ≤ 1/4 in the last inequality. Using this once more, we obtain
ε χδ λ,ε (H 0 )χf ≥ χf − 1 2 ≥ 1 4 .
Thus ε K ≥ 1/4 and Lemma 9 implies the claim.
Remark 16. Recall that the self-adjointness of H 0 implies that for every λ ∈ σ(H 0 ) there exists a normalized sequence (f n ) n∈N ⊂ D(H 0 ) with (H 0 − λ)f n → 0. Thus we can always find a normalized function for which (H 0 − λ)f is as small as we want. However, we need to make sure that U is so large that the assumption (1 − χ)f ≤ 1/4 is satisfied.
3.1. Alternative counterexample to Laptev-Safronov conjecture. Let q > (d+1)/2. Consider the sequence of potentials V n ∈ C ∞ (R d ), n ∈ N, in [14, Theorem 2.1] where λ = 1 is an embedded eigenvalue of −∆ + V n for each n. The potentials satisfy |V n (x)| (n + |x 1 | + |x ′ | 2 ) −1 . In particular, V n q → 0 and V n ∞ → 0 as n → ∞. Now fix n ∈ N. Denote by f n a normalized eigenfunction corresponding to the embedded eigenvalue. Let U n ⊂ R d be a compact subset that is so large that (1 − 1 Un )f n ≤ 1/4. Then, by Proposition 15, for all ε > 0 there exist potentials W n,ε ∈ L ∞ (R d ) such that z ε = 1 + iε ∈ σ p (−∆ + V n + W n,ε ) with |W n,ε | ≤ 4ε1 Un . Let (ε n ) n be such that ε n = o(|U n | −1/q ) as n → ∞. Then W n,εn q ≤ 4ε n |U n | 1/q = o(1).
This disproves the Laptev-Safronov conjecture since z εn → 1, V n + W n,εn q → 0 in the limit n → ∞. Note that this construction is similar to the one in the previous section, as one can take ε n = 1/n and U n = T c0/n for a small positive constant c 0 . However, due to the additional V n , the potentials here don't have compact supports.
Quantitative lower bounds
The aim of this section is to optimize the lower bound on ε K ε in Lemma 11. Since the proof relied on soft arguments it did not provide a quantitative lower bound. The trivial upper bound is ε K ε ≤ 1. By stretching the tube T ε in (9), we are able to prove an almost sharp lower bound. More precisely, let χ ε be the indicator function of T ε/M , where M ≫ 1.
Lemma 17. Let χ ε be as above. Then
ε K ε ≥ 1 − O(M −2+ ).(11)
Proof. To prove the lower bound in (11) we first write
K ε = δ ε (H 0 ) − (1 − χ ε )δ ε (H 0 ) − χ ε δ ε (H 0 )(1 − χ ε ).
We will treat δ ε (H 0 ) as a main term and the other terms as errors. Since χ ε ∞ ≤ 1, 0, 2)) is a nonnegative bump function equal to 1 on B(0, 1), and c 0 is a small positive constant, to be chosen later. We mention that f is known as a 'Knapp example' in harmonic analysis, see e.g. [11,Example 1.8]. For ξ in the support off ε , we have |ξ 2 − 1| = O(c 0 ε), which means that εδ ε (ξ) ≥ 1 − O(c 2 0 ) there. Here and in the following δ ε (ξ) = ε (ξ 2 −1) 2 +ε 2 . By Plancherel, we conclude that
K ε f ≥ δ ε (H 0 )f − (1 − χ ε )δ ε (H 0 )f − δ ε (H 0 )(1 − χ ε )f for any f ∈ L 2 (R d ). We takê f ε (ξ) := η 0 ((c 0 ε) −1 (ξ 1 − 1), (c 0 ε) −1/2 ξ ′ ), where ξ = (ξ 1 , ξ ′ ) ∈ R × R d−1 , η 0 ∈ C ∞ 0 (B(ε δ ε (H 0 )f ε ≥ (1 − O(c 2 0 )) f ε .(12)
Note that εδ ε (ξ) ≤ 1 and is smooth on the scale of f ε ; in other words, we can write
(δ εfε )(ξ) = ε −1 η((c 0 ε) −1 (ξ 1 − 1), (c 0 ε) −1/2 ξ ′ )
for some bump function η, similar to η 0 . More precisely, by η we really mean a family of such bump functions, with smooth norms bounded uniformly in ε. Thus, both f ε and δ ε (H 0 )f ε are Schwartz functions decaying rapidly away from T c0ε ; in particular,
(1 − χ ε )f ε N (c 0 M ) −N f ε , (1 − χ ε )δ ε (H 0 )f ε N ε −1 (c 0 M ) −N f ε ,
where we used that f ε 2 ≍ (c 0 ε) d+1 2
and χ ε = 1 on T ε/M . Together with (12) and the fact that εδ ε (ξ) ≤ 1, this yields
ε K ε f ε ≥ (1 − O(c 2 0 ) − O N ((c 0 M ) −N )) f ε .
Choosing c 0 = M −1+ and taking N sufficiently large yields the lower bound in (11).
Remark 18. In view of ε K ε ≤ 1 the bound (11) is optimal in the limit M → ∞. If we choose e.g. M = log(1/ε), then we obtain from (11) that for all q > (d + 1)/2, lim ε→0 ε K ε = 1, lim ε→0 V ε q = 0.
Lower bounds for constant-coefficient operators.
Here we supplement Lemma 11 with quantitative bounds. We consider more general constant-coefficient operators than the Laplacian, that is we allow H 0 = h 0 (D). For instance, for the fractional Laplacian (Theorem 4) we have h 0 (ξ) = |ξ| s . For the lower bound we only assume that h 0 is a tempered distribution, smooth in a neighborhood of some ξ 0 ∈ R d and such that λ := h 0 (ξ 0 ) is a regular value of h 0 . If the hypersurface {h 0 (ξ) = λ} has everywhere nonvanishing Gauss curvature, then local versions of (1), (2) hold (see Section 5). If the Gauss curvature vanishes at some point, then the upper bound will be worse, but the following example still provides a lower bound. One could improve the example if more is known about the local geometry of the isoenergy surface.
The following example is very close to that of Lemma 17 and is based on the factorization
h 0 (ξ) − λ = e(ξ)(ξ 1 − a(ξ ′ )),
which holds locally near ξ 0 , with e nonvanishing there. We suppress the dependence of e, a on λ. By a linear change of coordinates we assume, as we may, that a(0) = 0 and ∂ ξ ′ a(0) = 0. Then a(ξ ′ ) = O(|ξ ′ | 2 ). We take a Knapp example f whose Fourier support is contained in the cap
κ ε := {|ξ 1 | < c 0 ε, |ξ ′ | < (c 0 ε) 1/2 }.
Clearly, κ ε is contained in an ε neighborhood of the isoenergy surface {ξ 1 = a(ξ ′ )}, hence
εδ ε (ξ) = ε 2 (h 0 (ξ) − λ) 2 + ε 2 ≥ 1 − O(c 2 0 ).
Now the same argument as for the Laplacian shows that (11) holds, for exactly the same function χ ε .
Upper bounds
In this section we prove a generalization of the bounds (1), (2) for Schrödinger type operators of the form
H V = h 0 (D) + V (x, D).(13)
We first consider the classical Schrödinger operator
H V = −∆ + V (x)
to explain what types of estimates we will prove. By homogeneity, the estimates (1), (2) may by reduced to |z| = 1 and by elliptic regularity to a small neighborhood of z = 1. Hence these bounds can collectively be expressed as
dist(z, σ(H 0 )) (q−(d+1)/2)+ V q q(14)
for q > d/2 (or q = 1 if d = 1), while (3) reads as 1 sup
y∈R d R d |V (x)| d+1 2 exp(−c| Im z||x − y|)dx(15)
for some constant c > 0 (we used that Im √ z ≍ | Im z| for |z − 1| small). Here, σ(H 0 ) denotes the spectrum of H 0 . The bounds (14)- (15) are universal in the sense that they are essentially independent of the specific form of h 0 , in a sense that we will make precise now.
In the following, we always assume that the spectral parameter is z = λ+iε, with λ ∈ σ(H 0 ) and |ε| ≤ 1. Then dist(z, σ(H 0 )) = |ε|. One could use the Phragmén-Lindelöf maximum principle to extend the results to the region |ε| ≥ 1 (see e.g. [6, Appendix A], [19], [33]), but we will not pursue this.
We assume that h 0 is a tempered distribution that is smooth near the foliation (see (5) for the definition of S λ )
S = λ∈I S λ = {ξ ∈ R d : h 0 (ξ) ∈ I}.(16)
Here I ⊂ R is a fixed compact subset of the set {λ ∈ R : ∇h 0 (ξ) = 0 for all ξ ∈ R d such that h 0 (ξ) = λ} of regular values of h 0 . We assume that S λ is compact and has everywhere nonvanishing curvature for each λ ∈ I. The following lemma was proved e.g. in [6,Lemma 3.3]. It is closely related to the Stein-Tomas theorem for the Fourier restriction operator.
Lemma 19. Let η be a bump function. Then
sup λ∈I |ε|≤1 η(D)[h 0 (D) − (λ + iε)] −1 p→p ′ 1.
The setup (13) generalizes (4) in that we allow V (x, D) to be a pseudodifferential operator. We assume that its Kohn-Nirenberg symbol V (x, ξ) is smooth in the fibre variable ξ, but we don't assume smoothness in x. More precisely, assume that
C q,Ω,N (V ) := |α|≤N sup ξ∈Ω ∂ α ξ V (·, ξ) q < ∞(17)
for some sufficiently large N (N > d would suffice) and some pre-compact subset Ω ⊂ R d such that S ⋐ Ω. The condition (17) is the natural generalization of V ∈ L q (R d ) and reduces to the latter if V is a potential. The order of V (x, ξ) will not play a role here since we will always localize in Fourier space. In fact, we place ourselves in the following abstract setting: Let η, η be bump functions supported on Ω such that η 2 + η 2 = 1 in a neighborhood of S. Define
H η V := h 0 (D) + η(D)V η(D) : Ran P → Ran P, H η V := h 0 (D) + η(D)V η(D) : Ran P → Ran P , where P = 1 Ω (D), P = 1 − P .
In order to avoid imposing global conditions on h 0 and V in ξ we make the following assumption:
I ⊂ ̺(H η V ), c q,I := sup λ∈I, |ε|≤1 [H η V − (λ + iε)] −1 p→p ′ < ∞.(18)
Here p is uniquely determined by q −1 = p −1 − (p ′ ) −1 , ̺(·) denotes the resolvent set and · p→p ′ denotes the L p → L p ′ norm. In most applications, (18) can easily be proved by standard elliptic estimates (see Subsection 5.1 for an example). This usually requires that q ≥ q 0 for some q 0 ≥ 1 depending on h 0 . We will ignore this and simply use (18) as a black box assumption. Hence, we only assume that q ≥ 1 in the following. In the proofs of Theorems 20-24 below we will make use of the smooth Feshbach-Schur map [18]. The latter is defined by (
H V − z) → F η (z), where F η (z) := H η V − z − ηV η[H η V − z] −1 ηV η : Ran P → Ran P.
Here and in the following we abbreviate η = η(D) and η = η(D). Theorem 1 in [18] (with V = Ran P there) asserts that H V − z is invertible if and only if F η (z) is invertible, and that
[H V − z] −1 = Q[F η (z)] −1 Q # + η[H η V − z] −1 η, where Q := η − η[H η V − z] −1 ηV η : Ran P → L 2 (R d ), Q # := η − ηV η[H η V − z] −1 η : L 2 (R d ) → Ran P. Proposition 20.
Assume H V is as above and that (17), (18) hold. Then dist(z, σ(H 0 )) (q−(d+1)/2)+ C q,Ω,N (V ) q (19) holds for every eigenvalue z = λ + iε of H V , λ ∈ I, |ε| ≤ 1, with implicit constant depending on h 0 , d, q, I, |Ω|, but not on z, V .
Proof. By compactness of I, it suffices to prove (19) at Re z = λ for a single λ ∈ I. We first consider the case q ≤ (d + 1)/2. Then (19) is equivalent to the statement that if C q,Ω,N (V ) is sufficiently small, then z is not an eigenvalue of H V . We will show that H V − z is invertible using the smooth Feshbach-Schur map. We write
F η (z) = h 0 (D) − z + η V z η, V z := V − V η[H η V − z] −1 ηV.(20)
By Lemma 21 below and (18),
η V z p ′ →p ηV p ′ →p + [H η V − z] −1 p→p ′ ηV p ′ →p ηV p ′ →p |Ω| C q,η V z η[h 0 (D) − z] −1 p→p ≤ η V z p ′ →p η[h 0 (D) − z] −1 p→p ′ D I,q,Ω,N ,
and hence, by a geometric series argument, that F η (z) is boundedly invertible in
L p (R d ) if C q,Ω,N (V ) ≪ 1.
Since the spectrum is independent of p (see e.g. [10, Th. 14.3.10] 1 ) it follows that 0 is not in the L 2 spectrum of F η (z), or equivalently, that z is not in the spectrum of H V .
To prove the claim for q > (d + 1)/2 we follow [13] and interpolate the bound of Lemma 19 (for q = (d + 1)/2) with the trivial estimate
[h 0 (D) − z] −1 2→2 ≤ dist(z, σ(H 0 )) −1 , which yields η[h 0 (D) − z] −1 p→p ′ |ε| d+1 2q −1 .
Repeating the previous argument, we find that z cannot be an eigenvalue if the quantity b := D I,q,Ω,N |ε| d+1 2q −1 is too small; in other words, if z is an eigenvalue, then we must have b 1. If we set a := C q,Ω,N (V ), then this means that we must have |ε| 1− d+1 2q a + c q,I a 2 . Since we are assuming |ε| ≤ 1, this is always satisfied if a ≥ 1/c q,I . If a ≤ 1/c q,I , then the condition becomes |ε| 1− d+1 2q a, in which case (19) holds.
In the above proof we used the following generalization of Hölder's inequality.
Lemma 21. Assume that V satisfies (17) and η is a bump function supported on Ω. Then, for N sufficiently large,
V (x, D)η(D) p ′ →p + η(D)V (x, D) p ′ →p |Ω|C q,Ω,N (V ) whenever q −1 = p −1 − (p ′ ) −1 .
Proof. By duality it suffices to estimate the first summand on the left. The kernel of V (x, D)η(D) (recall that we use the Kohn-Nirenberg quantization) is given by
k(x, x − y) = (2π) −d R d e i(x−y)·ξ V (x, ξ)η(ξ)dξ. 1
The statement there is given in one dimension. However, the proof only uses general facts about L p spaces, valid in any dimension.
Due to the cutoff η the integral is restricted to ξ ∈ Ω. Integration by parts shows that
|k(x, u)| N u −N Ω |α|≤N |∂ α ξ V (x, ξ)|dξ,
and (17), together with Minkowski's inequality, then provides the estimate
k(x, u) L q x N u −N |Ω|C q,Ω,N (V ).(21)
Changing variables from y to u = x − y and using Minkowski's and Hölder's inequality, we get
k(x, x − y)f (y)dy L p x ≤ k(x, u)f (x − u) L p x du ≤ f p ′ k(x, u) L q x du
and (21) yields the claimed inequality.
Remark 22.
If V is a potential, then of course V η(D) p ′ →p V q , i.e. the factor |Ω| can be dispensed with. From this point of view, a more natural norm than (17) would e.g. be
sup ξ∈Ω V (·, ξ) q + 1≤|α|≤N Ω ∂ α ξ V (·, ξ) q dξ.
However, we view Ω as fixed, which justifies our use of the simpler norm (17).
We now establish a more precise version of Lemma 19. In the following we denote R η,ζ λ,ε = η(D)[h 0 (D) − (λ + iε)] −ζ and also write R η,ζ λ,ε (x − y) for its kernel. Lemma 23. Let η be a bump function and ζ ∈ C, 0 ≤ Re ζ ≤ (d+1)/2, ε ∈ [−1, 1]. Then we have the kernel bound
sup λ∈I |R η,ζ λ,ε (x)| N e C| Im ζ| 2 x − d−1 2 +Re ζ εx −N(22)
for some C > 0.
Proof. Again, it suffices to prove this for a fixed λ. We absorb λ into the symbol, i.e. we consider p(ξ) = h 0 (ξ) − λ. By a partition of unity and a linear change of coordinates we may assume that, locally near an arbitrary point of Ω, either p = 0 or ∂p/∂ξ 1 > 0. In the case p = 0 we get the stronger bound
|R η,ζ λ,ε (x)| N x − y −N .
We turn to the case ∂p/∂ξ 1 > 0 and consider ζ = 1 first (i.e. the resolvent). By the implicit function theorem, {p(ξ) = 0} is then the graph of a smooth function ξ 1 = a(ξ ′ ), and we have the factorization
(p(ξ) − iε) −1 = (ξ 1 − a(ξ ′ ) − iεq(ξ)) −1 q(ξ)(23)
where q(ξ) = (ξ 1 − a(ξ ′ ))/p(ξ) > 0, see e.g. [20,Section 14.2], [6,Lemma 3.3] and [37, Section 3.1]. There it is sufficient to work with the limiting distributions corresponding to ε = 0±, which would yield (22) in this case. Here we need to keep ε fixed to get the desired decay for nonzero ε. In the following we assume that ε > 0; the case ε < 0 is similar. The factorization (23) does not work well for this since q depends on ξ 1 . To remedy this problem, we follow the approach of Koch-Tataru [25], albeit in the much simpler setting of constant coefficients. Lemma 3.8 in [25] provides the alternative factorization
e(ξ)(p(ξ) − iε) = ξ 1 + a(ξ ′ ) + iεb(ξ ′ ),(24)
where e is elliptic (e = 0) and a, b are real-valued. This is a version of the Malgrange preparation theorem [21,Th. 7.5.5] or the classical Weierstrass preparation theorem [21,Th. 7.5.1] in the analytic case. We appeal to [25] because it makes the dependence on ε explicit. Note that the imaginary part b is now independent of ξ 1 . The symbols e, a, b can be found by iteratively solving a system of algebraic equations and using Borel resummation of the resulting formal series (see [25,Lemma 3.9 and 3.10]). Moreover, a, b have asymptotic expansions in powers of ε, while e has an asymptotic expansions in powers of ε and ξ 1 . We will only need the first term b 1 in the expansion of b. Changing variables ξ → ξ 1 + a(ξ ′ ) we are reduced to p(ξ) = ξ 1 + iεq(ξ) for some real-valued function q. By the proof of [25, Lemma 3.9] we have b 1 = 1/(1 + q 2 1 ), where q 1 = ∂ ξ1 q| ξ1=0 . Therefore, b ≥ c on the closure of Ω for some constant c > 0 (we used compactness and the smallness of ε). Since we have constant coefficients, the simple parametrix (5.5) in [25], with (operator-valued) kernel K(x 1 − y 1 ), given by
K(x 1 ) = 1 x1<0 e εx1b(D ′ ) e −ix1a(D ′ ) ,(25)
is exact, i.e. (D 1 + a(D ′ ) + iεb(D ′ ))K is the identity (we denote both the operator and the kernel by K here). By the stationary phase estimate (for complex-valued phase functions) [21,Th. 7.7.5],
|K(x)| x − d−1 2 e −c|εx| + O N ( x −N ),
Using the factorization (24) and extending 1/e globally as a Schwartz function, we obtain (22) in the case ζ = 1. The case ζ = 1 requires only minor modifications. The kernel in (25) is replaced by
K ζ (x 1 ) = χ ζ−1 − (x 1 )e εx1b(D ′ ) e −ix1a(D ′ ) ,
where χ w − (τ ) := 1 τ <0 |τ | w /Γ(w + 1), w ∈ C, where Γ is the usual Gamma function. Then (D 1 + a(D ′ ) + iεb(D ′ )) −ζ K ζ is the identity. This follows immediately by applying the inverse Fourier transformation to the following identity (see [21], specifically the explanation after Example 7.1.17)
F τ → e −δτ χ ζ + (τ (ξ) = e −iπ(ζ+1)/2 (ξ − iδ) −ζ−1 , δ > 0, ζ ∈ C.
Again, by stationary phase,
|K ζ (x)| e C| Im ζ| 2 ( x − d−1 2 +Re ζ e −c|εx| + O N ( x −N ))
for 0 ≤ Re ζ ≤ (d + 1)/2. The growth estimate in | Im ζ| comes from a standard estimate on the Gamma function (see e.g. [16,Appendix A.7]).
To state an analog of the estimate (15) for H V in (13) we assume
C ε,q,Ω,N (V ) := |α|≤N sup ξ∈Ω sup y∈R d ε(x − y) −N ∂ α ξ V (x, ξ) L q x < ∞(26)
for some sufficiently large N (again, N > d would work). The norm (26) is the analog of the right hand side of (15). We also replace (18) by the new black box assumption
I ⊂ ̺(H η V ), c q,I,V := sup λ∈I, |ε|≤1 ηV η[H η V − (λ + iε)] −1 p→p < ∞,(27)
where we recall that q −1 = p −1 − (p ′ ) −1 . Note that, in contrast to (18), the potential still appears in (27) and thus we need a p → p norm here. In many applications of interest (for instance, in the proof of Theorem 4), (27) can be estimated perturbatively in terms of H η 0 , with an effective constant c q,I,V in (27), i.e. a constant only depending on C ε,q,Ω,N , but not on V itself (see Subsection 5.1).
Proposition 24. Assume that (26), (27) hold for some q ≤ (d + 1)/2. Then every eigenvalue z = λ + iε of H V , λ ∈ I, |ε| ≤ 1, satisfies 1 C ε,q,Ω,N (V ) (28) with implicit constant depending on h 0 , d, q, I, |Ω|, but not on z, V .
Proof. We again use the smooth Feshbach-Schur map. Thus the claim (28) is equivalent to the statement
C ε,q,Ω,N (V ) ≪ λ 1 =⇒ F η (z) boundedly invertible,
where F η (z) is given by (20). Again, by p-independence of the spectrum it suffices to prove invertibility in L p , with q −1 = p −1 − (p ′ ) −1 , and this would follow (by geometric series) from η V z η[h 0 (D) − z] −1 p→p < 1, and this in turn would follow from
V η[h 0 (D) − z] −1 p→p C ε,q,Ω,N (V )(29)
since then, by (20), (27), (29),
η V z η[h 0 (D) − z] −1 p→p (1 + ηV η[H η V − z] −1 η p→p )C ε,q,Ω,N (V ) (1 + c q,I,V )C ε,q,Ω,N (V ).
To estimate (29), let k(x, x − y) be the kernel of V η 1 (where η 1 is a bump function like η, but with η 1 η = η) and let R η λ,ε (x − y) be the kernel of η[h 0 (D) − z] −1 . Then the kernel of K :
= V η[h 0 (D) − z] −1 is K(x, y) = R d k(x, x − u)R η λ,ε (u − y)du.
As a warmup, we consider first the easiest case where d = 1 and V is a potential. Then k(x, x − y) = V (x)δ(x − y), and Lemma 23 (with ζ = 1) yields
K 1→1 ≤ sup y R d |K(x, y)|dx N sup y R d |V (x)| ε(x − y) −N dx.(30)
Comparing to (26), the right hand side is bounded by C ε,1,Ω,N (V ), and hence if the latter is small, then K 1→1 < 1. When V is no longer required to be a potential (but still in d = 1), then the previous estimate is replaced by
sup y R d |K(x, y)|dx sup y |k(x, u)| ε(x − u − y) −N dxdu,(31)
where we first used the change of variables u → x − u and then Fubini. We insert 1 = u N u −N and estimate the double integral by
C N sup y,u R d |k(x, u)| u N ε(x − u − y) −N dx
where C N = u −N du. Then (21) and (26), together with the first inequality in (30), yield K 1→1 C ε,1,Ω,N (V ). Moving on to the general, higher-dimensional case, we use Stein interpolation on the analytic family K ζ := V ζ η[h 0 (D) − z] −ζ to prove (29). For Re ζ = 0, we have the trivial bound K ζ 2→2 e c| Im ζ| (Re ζ = 0).
For Re ζ = q, we use the estimate of Lemma 23 to get (31) for K ζ , i.e. K ζ 1→1 e c| Im ζ| C ε,q,Ω,N (V ) q (Re ζ = q).
Interpolating the last two estimates gives K 1 p→p C ε,q,Ω,N (V ), which is just (29), i.e. what we needed to prove. 5.1. Proof of Theorem 4. By scaling, it suffices to prove the bounds for |z| = 1 only. We only give a proof in the case Re z 1, which is the most difficult one. Note that the factor Im z(x − y) is dimensionless as is should be. Hence we can take I = [1 − δ, 1 + δ] in (16) and find that S = {1 − δ ′ ≤ |ξ| ≤ 1 + δ ′ } for some small positive constants δ, δ ′ . Recall that Ω ⊂ R d was chosen such that S ⋐ Ω (see the paragraph after (17)). This implies that |h 0 (ξ) − z| ≥ C Ω + ξ s for all ξ ∈ R d \ Ω and hence, by Sobolev embedding,
[H η 0 − z] −1 p→p ′ [H η 0 − z] −1 H − σ 2 →H σ 2 C σ s −1 Ω
for σ/d ≥ p −1 − (p ′ ) −1 = q −1 , where we denoted the L 2 based Sobolev space of order σ by H σ and used Plancherel in the last inequality. We choose σ = d/q s . If C σ s −1 Ω V q ≪ 1, then (18) holds by Hölder's inequality and a geometric series argument, and hence Proposition 20 yields (i), (ii).
Moving on to the proof of (iii), the claim would follow from Proposition 24 if we could show (27). For brevity, we restrict our attention to the case s < d. Precisely, we will show that c q,I,V < ∞ if
sup y∈R d R d x − y −N |V (x)| q dx ≪ 1,(32)
where N ≫ 1 and q ≥ d/s. Let us abbreviate the constant c q,I,V by c V . We also set c 0 := V η[H η V − z] −1 p→p (by compactness we can fix z). Without loss of generality assume that the inverse Fourier transform of η is normalized in L 1 , so that η(D) is an isometry in L p (and similarly for η). If we could prove c 0 < 1, then a geometric series argument would yield c ≤ c 0 /(1 − c 0 ) and we would be done. The next lemma establishes c 0 < 1. Clearly, we may bound the exponential from above by x − y −N for any N , which we will do. In view of the elementary estimate
x − y −N ∞ j=0 2 −N j 1 |x−y|≤2 j
it would suffice to prove the following bound on c j := V Λ s,j p→p , where Λ s,j has kernel |x − y| s−d 1 |x−y|≤2 j : (B(u,2 j+1 )) .
c j 2 j(d+s−d/q) sup u∈R d V L q
By homogeneity it suffices to prove this for j = 0. Using |V (x)| ≤ u∈Z d |V u (x)|, with V u (x) = V (x)1 |x−u|≤2 , we estimate
| V Λ s,0 f, g | ≤ u∈Z d |V u (x)||g u (x)||f (y)||x − y| s−d 1 |x−y|≤1 dxdy.
Note that, by the triangle inequality, we can insert 1 |y−u|≤3 for free into the integral. Then, by Young's inequality (or by the Hardy-Littlewood-Sobolev inequality if q = d/s) and Hölder (once for integrals and once for sums),
| V Λ s,0 f, g | ≤ u∈Z d V L q (B(u,2)) f L p (B(u,3)) g L p ′ (B(u,2)) ≤ sup u∈Z d V L q (B(u,2)) u∈Z d f p L p (B(u,3)) 1/p u∈Z d g p ′ L p ′ (B(u,2)) 1/p ′ sup u∈R d V L q (B(u,2)) f p g p ′ .
This completes the proof.
Theorem 1 .
1Let d ≥ 2 and q > (Date: September 14, 2021. 2020 Mathematics Subject Classification. 35P15, 31Q12.
Theorem 3 .
3Let d ≥ 2. There exists C ′ d > 0 such that for all L
2. 2 .
2Proof of Theorem 5. As explained in[4, Rem. 1], a counterexample to the Laptev-Safronov conjecture for a q > (d+1)/2 allows one to modify the construction in [4, Th. 1] to hold for this particular q. The only modification in the proof is to find a class of potentials satisfying the claim of [4, Lem. 1], now for q > (d + 1)/2, which is done in the following result.Lemma 14. Let d ≥ 2, λ ∈ R + and q > (d + 1)/2. For any ε 0 , δ 0 , r 0 > 0 there exists V ∈ L ∞ (R d ) ∩ L q (R d ) with V q < ε 0 , V ∞ < δ 0 and such that there exists a non-real eigenvalue of H V in the ball B(λ, r 0 ).
Proposition 15 .
15Let λ ∈ σ(H 0 ) and f ∈ D(H 0 ) with f = 1. Assume that (1 − χ)f ≤ 1/4. Then, for any ε ≥ 2 (H 0 − λ)f , there exists V ∈ L ∞ (R d ) such that z = λ + iε is an eigenvalue of H 0 + V and |V | ≤ 4εχ. In particular, if λ is an eigenvalue and (H 0 − λ)f = 0, then the conclusion holds for all ε ≥ 0.
Ω,N (V ) + c q,I C q,Ω,N (V ) 2 =: D I,q,Ω,N , where the L p boundedness of η, η is a consequence of the Mikhlin multiplier theorem[16, Th. 5.2.7]. By Lemma 19 we conclude that
Lemma 25 .
25Let s < d. If (32) holds with q ≥ d/s, then c 0 < 1. Proof. By the Mikhlin multiplier theorem [16, Th. 5.2.7] it suffices to prove this for V Λ −s p→p in place of c 0 , where we recall that Λ = (1−∆). Standard estimates for Bessel potentials (see e.g. [15, Prop. 6.1.5]) yield Λ −s (x − y) |x − y| s−d e −|x−y|/2 .
Acknowledgements. The authors wish to thank Ari Laptev and Rupert Frank for many illuminating discussions.
Bounds on complex eigenvalues and resonances. A A Abramov, A Aslanyan, E B Davies, J. Phys. A. 341A. A. Abramov, A. Aslanyan, and E. B. Davies. Bounds on complex eigenvalues and reso- nances. J. Phys. A, 34(1):57-72, 2001.
Characterization of pseudodifferential operators and applications. R Beals, Duke Math. J. 441R. Beals. Characterization of pseudodifferential operators and applications. Duke Math. J., 44(1):45-57, 1977.
On the spectrum of singular boundary-value problems. M Š Birman, Mat. Sb. 5597M.Š. Birman. On the spectrum of singular boundary-value problems. Mat. Sb. (N.S.), 55 (97):125-174, 1961.
Schrödinger operator with non-zero accumulation points of complex eigenvalues. S Bögli, Comm. Math. Phys. 3522S. Bögli. Schrödinger operator with non-zero accumulation points of complex eigenvalues. Comm. Math. Phys., 352(2):629-639, 2017.
Eigenvalue bounds and spectral stability of lamé operators with complex potentials. B Cassano, L Cossetti, L Fanelli, B. Cassano, L. Cossetti, and L. Fanelli. Eigenvalue bounds and spectral stability of lamé operators with complex potentials, 2021.
Eigenvalue bounds for Dirac and fractional Schrödinger operators with complex potentials. J.-C Cuenin, J. Funct. Anal. 2727J.-C. Cuenin. Eigenvalue bounds for Dirac and fractional Schrödinger operators with complex potentials. J. Funct. Anal., 272(7):2987-3018, 2017.
Embedded eigenvalues of generalized Schrödinger operators. J.-C Cuenin, J. Spectr. Theory. 102J.-C. Cuenin. Embedded eigenvalues of generalized Schrödinger operators. J. Spectr. Theory, 10(2):415-437, 2020.
Improved eigenvalue bounds for Schrödinger operators with slowly decaying potentials. J.-C Cuenin, Comm. Math. Phys. 3763J.-C. Cuenin. Improved eigenvalue bounds for Schrödinger operators with slowly decaying potentials. Comm. Math. Phys., 376(3):2147-2160, 2020.
On the occasion of the 65th birthday of Professor Michael Eastham. E B Davies, Jiban Nath, J. Comput. Appl. Math. 1481Schrödinger operators with slowly decaying potentialsE. B. Davies and Jiban Nath. Schrödinger operators with slowly decaying potentials. J. Comput. Appl. Math., 148(1):1-28, 2002. On the occasion of the 65th birthday of Professor Michael Eastham.
Linear operators and their spectra. E B Davies, of Cambridge Studies in Advanced Mathematics. CambridgeCambridge University Press106E. B. Davies. Linear operators and their spectra, volume 106 of Cambridge Studies in Ad- vanced Mathematics. Cambridge University Press, Cambridge, 2007.
Fourier restriction, decoupling, and applications. C Demeter, of Cambridge Studies in Advanced Mathematics. CambridgeCambridge University Press184C. Demeter. Fourier restriction, decoupling, and applications, volume 184 of Cambridge Stud- ies in Advanced Mathematics. Cambridge University Press, Cambridge, 2020.
Eigenvalue bounds for Schrödinger operators with complex potentials. R L Frank, Bull. Lond. Math. Soc. 434R. L. Frank. Eigenvalue bounds for Schrödinger operators with complex potentials. Bull. Lond. Math. Soc., 43(4):745-750, 2011.
Eigenvalue bounds for Schrödinger operators with complex potentials. R L Frank, III. Trans. Amer. Math. Soc. 3701R. L. Frank. Eigenvalue bounds for Schrödinger operators with complex potentials. III. Trans. Amer. Math. Soc., 370(1):219-240, 2018.
Eigenvalue bounds for Schrödinger operators with complex potentials. R L Frank, B Simon, II. J. Spectr. Theory. 73R. L. Frank and B. Simon. Eigenvalue bounds for Schrödinger operators with complex po- tentials. II. J. Spectr. Theory, 7(3):633-658, 2017.
Modern Fourier analysis. L Grafakos, Graduate Texts in Mathematics. 250Springerthird editionL. Grafakos. Modern Fourier analysis, volume 250 of Graduate Texts in Mathematics. Springer, New York, third edition, 2014.
Classical Fourier analysis. L Grafakos, Graduate Texts in Mathematics. 249Springerthird editionL. Grafakos. Classical Fourier analysis, volume 249 of Graduate Texts in Mathematics. Springer, New York, third edition, 2014.
Principal curvature and harmonic analysis. A Greenleaf, Indiana Univ. Math. J. 304A. Greenleaf. Principal curvature and harmonic analysis. Indiana Univ. Math. J., 30(4):519- 537, 1981.
On the smooth Feshbach-Schur map. M Griesemer, D Hasler, J. Funct. Anal. 2549M. Griesemer and D. Hasler. On the smooth Feshbach-Schur map. J. Funct. Anal., 254(9):2329-2335, 2008.
Eigenvalue bounds for non-self-adjoint Schrödinger operators with nontrapping metrics. C Guillarmou, Andrew Hassell, Katya Krupchyk, Anal. PDE. 136C. Guillarmou, Andrew Hassell, and Katya Krupchyk. Eigenvalue bounds for non-self-adjoint Schrödinger operators with nontrapping metrics. Anal. PDE, 13(6):1633-1670, 2020.
The analysis of linear partial differential operators. L Hörmander, Grundlehren der Mathematischen Wissenschaften. IIFundamental Principles of Mathematical SciencesL. Hörmander. The analysis of linear partial differential operators. II, volume 257 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathemati- cal Sciences].
Differential operators with constant coefficients. Springer-Verlag, BerlinSpringer-Verlag, Berlin, 1983. Differential operators with constant coefficients.
The analysis of linear partial differential operators. I, volume 256 of Grundlehren der Mathematischen Wissenschaften. L Hörmander, Fundamental Principles of Mathematical SciencesL. Hörmander. The analysis of linear partial differential operators. I, volume 256 of Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathemati- cal Sciences].
Distribution theory and Fourier analysis. Springer-Verlag, Berlinsecond editionSpringer-Verlag, Berlin, second edition, 1990. Distribution theory and Fourier analysis.
Remarks on L p -limiting absorption principle of Schrödinger operators and applications to spectral multiplier theorems. S Huang, X Yao, Q Zheng, Forum Math. 301S. Huang, X. Yao, and Q. Zheng. Remarks on L p -limiting absorption principle of Schrödinger operators and applications to spectral multiplier theorems. Forum Math., 30(1):43-55, 2018.
On the absence of positive eigenvalues of Schrödinger operators with rough potentials. A D Ionescu, D Jerison, Geom. Funct. Anal. 135A. D. Ionescu and D. Jerison. On the absence of positive eigenvalues of Schrödinger operators with rough potentials. Geom. Funct. Anal., 13(5):1029-1081, 2003.
Uniform Sobolev inequalities and unique continuation for second order constant coefficient differential operators. C E Kenig, A Ruiz, C D Sogge, Duke Math. J. 552C. E. Kenig, A. Ruiz, and C. D. Sogge. Uniform Sobolev inequalities and unique continuation for second order constant coefficient differential operators. Duke Math. J., 55(2):329-347, 1987.
Dispersive estimates for principally normal pseudodifferential operators. H Koch, D Tataru, Comm. Pure Appl. Math. 582H. Koch and D. Tataru. Dispersive estimates for principally normal pseudodifferential oper- ators. Comm. Pure Appl. Math., 58(2):217-284, 2005.
Sharp resolvent estimates outside of the uniform boundedness range. Y Kwon, S Lee, Comm. Math. Phys. 3743Y. Kwon and S. Lee. Sharp resolvent estimates outside of the uniform boundedness range. Comm. Math. Phys., 374(3):1417-1467, 2020.
Eigenvalue estimates for Schrödinger operators with complex potentials. A Laptev, O Safronov, Comm. Math. Phys. 2921A. Laptev and O. Safronov. Eigenvalue estimates for Schrödinger operators with complex potentials. Comm. Math. Phys., 292(1):29-54, 2009.
A note on eigenvalue bounds for Schrödinger operators. Y Lee, I Seo, J. Math. Anal. Appl. 4701Y. Lee and I. Seo. A note on eigenvalue bounds for Schrödinger operators. J. Math. Anal. Appl., 470(1):340-347, 2019.
Eigenvalue bounds for non-self-adjoint Schrödinger operators with the inversesquare potential. H Mizutani, J. Spectr. Theory. 92H. Mizutani. Eigenvalue bounds for non-self-adjoint Schrödinger operators with the inverse- square potential. J. Spectr. Theory, 9(2):677-709, 2019.
Classical and multilinear harmonic analysis. C Muscalu, W Schlag, Cambridge Studies in Advanced Mathematics. ICambridge University PressC. Muscalu and W. Schlag. Classical and multilinear harmonic analysis. Vol. I, volume 137 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2013.
Methods of modern mathematical physics. IV. Analysis of operators. M Reed, B Simon, Academic PressNew York-LondonHarcourt Brace Jovanovich, PublishersM. Reed and B. Simon. Methods of modern mathematical physics. IV. Analysis of operators. Academic Press [Harcourt Brace Jovanovich, Publishers], New York-London, 1978.
über das asymptotische Verhalten der Lösungen von ∆u + λu = 0 in unendlichen Gebieten. F Rellich, Jber. Deutsch. Math. Verein. 53F. Rellich.über das asymptotische Verhalten der Lösungen von ∆u + λu = 0 in unendlichen Gebieten. Jber. Deutsch. Math. Verein., 53:57-65, 1943.
Harmonic analysis and inverse problems. A Ruiz, Lecture notes. A. Ruiz. Harmonic analysis and inverse problems. Lecture notes, 2002. https://www.uam.es/gruposinv/inversos/publicaciones/Inverseproblems.pdf.
On the bound states of a given potential. J Schwinger, Proceedings of the National Academy of Sciences. 471J. Schwinger. On the bound states of a given potential. Proceedings of the National Academy of Sciences, 47(1):122-129, 1961.
Trace ideals and their applications. B Simon, Mathematical Surveys and Monographs. 120American Mathematical Societysecond editionB. Simon. Trace ideals and their applications, volume 120 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, second edition, 2005.
Harmonic analysis: real-variable methods, orthogonality, and oscillatory integrals. E M Stein, With the assistance of Timothy S. Murphy, Monographs in Harmonic Analysis. Princeton, NJPrinceton University Press43IIIE. M. Stein. Harmonic analysis: real-variable methods, orthogonality, and oscillatory inte- grals, volume 43 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1993. With the assistance of Timothy S. Murphy, Monographs in Harmonic Analysis, III.
Uniform bounds of discrete Birman-Schwinger operators. Y Tadano, K Taira, Trans. Amer. Math. Soc. 3727Y. Tadano and K. Taira. Uniform bounds of discrete Birman-Schwinger operators. Trans. Amer. Math. Soc., 372(7):5243-5262, 2019.
Limiting absorption principle on L p -spaces and scattering theory. K Taira, J. Math. Phys. 61928K. Taira. Limiting absorption principle on L p -spaces and scattering theory. J. Math. Phys., 61(9):092106, 28, 2020.
Partial differential equations II. Qualitative studies of linear equations. M E Taylor, Applied Mathematical Sciences. 116Springersecond editionM. E. Taylor. Partial differential equations II. Qualitative studies of linear equations, volume 116 of Applied Mathematical Sciences. Springer, New York, second edition, 2011.
Upper Mountjoy Campus, Durham DH1 3LE, United Kingdom Email address: sabine.boegli@durham.ac.uk. E P Wigner, J Von-Neumann, Cuenin) Department of Mathematical Sciences. 30465S. Bögli) Department of Mathematical Sciences, Durham UniversityZ. Phys. United Kingdom Email address: J.Cuenin@lboro.ac.ukE. P. Wigner and J. Von-Neumann.Über merkwürdige Eigenwerte. Z. Phys, 30:465, 1929. (S. Bögli) Department of Mathematical Sciences, Durham University, Upper Moun- tjoy Campus, Durham DH1 3LE, United Kingdom Email address: sabine.boegli@durham.ac.uk (J.-C. Cuenin) Department of Mathematical Sciences, Loughborough University, Lough- borough, Leicestershire LE11 3TU, United Kingdom Email address: J.Cuenin@lboro.ac.uk
| [] |
[] | [
"Xiang-Ping Wu ",
"Li-Zhi Fang fanglz@time.physics.arizona.edu ",
"\nDAEC\nObservatoire de Paris\n92195Meudon, Meudon Principal CedexFrance\n",
"\nDepartment of Physics\nBeijing Astronomical Observatory Chinese Academy of Sciences\n100080BeijingChina\n",
"\nUniversity of Arizona\n85721TucsonAZ\n"
] | [
"DAEC\nObservatoire de Paris\n92195Meudon, Meudon Principal CedexFrance",
"Department of Physics\nBeijing Astronomical Observatory Chinese Academy of Sciences\n100080BeijingChina",
"University of Arizona\n85721TucsonAZ"
] | [
"Astrophysical Journal"
] | We present an exact solution of the anisotropies of cosmic background radiation (CBR) from a local collapse described by a spherical over-dense region embedded in a flat universe, with the emphasis on the relationship between the dipole (∆T/T) d and the quadrupole (∆T/T) q anisotropy. This result has been used to examine the kinematic quadrupole correction (∆T/T) q = (∆T/T) 2 d /2, which is usually applied to remove the contamination of the quadrupole produced by local density inhomogeneities when finding the cosmic amplitude of the quadrupole at the surface of last scattering. We have found that the quadrupole of local collapse origin cannot always be approximately described by the kinematic quadrupole. Our numerical result shows that the difference between the kinematic and local quadrupoles depends on the size and matter density in the peculiar field, and the position of the observer. For a given dipole, the local quadrupole can be different from the kinematic quadrupole by a factor as large as 3. Therefore, the kinematic quadrupole correction remains an uncertain factor in the determination of the amplitude of a cosmic quadrupole. Nevertheless, a preliminary analysis shows that this uncertainty might not dramatically change the cosmological origin of the COBE-DMR's quadrupole, unless a huge peculiar gravitational field is assumed. | 10.1086/173911 | [
"https://arxiv.org/pdf/astro-ph/9312057v1.pdf"
] | 16,906,459 | astro-ph/9312057 | 809ebf8a41a0ebef02ed821a4319dba442e11a3c |
April 1, 1994
Xiang-Ping Wu
Li-Zhi Fang fanglz@time.physics.arizona.edu
DAEC
Observatoire de Paris
92195Meudon, Meudon Principal CedexFrance
Department of Physics
Beijing Astronomical Observatory Chinese Academy of Sciences
100080BeijingChina
University of Arizona
85721TucsonAZ
Astrophysical Journal
April 1, 1994ANISOTROPIES OF COSMIC BACKGROUND RADIATION FROM A LOCAL COLLAPSESubject headings: cosmic background radiation -cosmology: theory
We present an exact solution of the anisotropies of cosmic background radiation (CBR) from a local collapse described by a spherical over-dense region embedded in a flat universe, with the emphasis on the relationship between the dipole (∆T/T) d and the quadrupole (∆T/T) q anisotropy. This result has been used to examine the kinematic quadrupole correction (∆T/T) q = (∆T/T) 2 d /2, which is usually applied to remove the contamination of the quadrupole produced by local density inhomogeneities when finding the cosmic amplitude of the quadrupole at the surface of last scattering. We have found that the quadrupole of local collapse origin cannot always be approximately described by the kinematic quadrupole. Our numerical result shows that the difference between the kinematic and local quadrupoles depends on the size and matter density in the peculiar field, and the position of the observer. For a given dipole, the local quadrupole can be different from the kinematic quadrupole by a factor as large as 3. Therefore, the kinematic quadrupole correction remains an uncertain factor in the determination of the amplitude of a cosmic quadrupole. Nevertheless, a preliminary analysis shows that this uncertainty might not dramatically change the cosmological origin of the COBE-DMR's quadrupole, unless a huge peculiar gravitational field is assumed.
INTRODUCTION
This paper is aimed at studying the contribution of local collapse of matter to the anisotropies of cosmic background radiation (CBR), especially the relationship between the dipole and the kinematic quadrupole.
It is generally believed that the CBR dipole anisotropy comes from a Doppler effect. If the observer moves with velocity v with respect to the CBR rest frame, the special relativity (SR) Doppler effect would lead to a frequency-independent thermodynamic temperature distribution on CBR (Peebles & Wilkinson, 1968)
T = T 0 1 + v c cos Ψ + 1 2 ( v c ) 2 cos 2Ψ + · · ·(1)
The terms in the right-hand side are, respectively, the monopole, the dipole and the kinematic quadrupole anisotropies. Eq.
(1) is often used 1) to determine the peculiar velocity v, and 2) to calculate the correction of the kinematic quadrupole. For instance, the recent COBE-DMR first year sky maps show a dipole amplitude (∆T) d = 3.365 ± 0.027 mK toward direction (l II , b II ) = (264.4 o ± 0.3 o , 48.4 o ± 0.5 o ) (Smoot et al, 1992;Bennett et al, 1992;Kogut et al, 1993). If the entirely observed dipole results from our peculiar motion, the kinematic quadrupole anisotropy should be (∆T) q = (∆T) 2 d /2T ≈ 2.1µK. This result has been used by the COBE-DMR team to obtain the CBR quadrupole anisotropy (Smoot et al, 1991(Smoot et al, , 1992. They removed the kinematic quadrupole term from the DMR maps in order to eliminate the influence of local density inhomogeneities on the quadrupole amplitude. Since the amplitude of the kinematic quadrupole is about 13% of the cosmic quadrupole, the difference between the quadrupoles produced by the SR kinematics and local density inhomogeneities would be one of the factors leading to uncertainty in the amplitude of the cosmological quadrupole. Therefore, it is necessary to study the conditions, under which the kinematic quadrupole correction given by eq.(1) is valid.
The initial velocity of cosmic matter underwent a decrease due to the expansion of the universe. It can then be reasonably assumed that our present peculiar velocity results totally from the infall motion toward the center of the local collapse of matter, i.e., the peculiar motion of the observer is completely given by the gravitation of local matter clustering. In linear approximation, the relationship between the matter density fluctuation δ and the peculiar velocity is (Peebles, 1980)
v = 1 3 (H 0 D) Ω 0.6 δ(2)
where D is the distance to the center of the perturbation, Ω, the density parameter and H 0 , the Hubble constant. Thus, eqs.
(1) and (2) imply that the CBR dipole and kinematic quadrupole anisotropies are essentially caused by the local density inhomogeneities. In other word, the CBR's dipole and quadruple anisotropy should be calculated as an effect of a locally time-dependent density inhomogeneity. The question is then: can the contribution of a local collapse of matter to the CBR dipole and quadrupole anisotropies be approximately described by a Doppler effect of eq.(1), i.e. a pure SR effect? Obviously, the SR Doppler effect description is principally different from that of a local collapse. The former is a kinematic effect, of which the only parameter is the observer's velocity v. The latter is a dynamical model, and governed by the matter distribution or the initial density contrast δ, the location of the observer and the size of the local collapse.
Eqs. (1) and (2) also tell us that the dipole anisotropy depends on the first order of the density fluctuation δ and then the kinematic quadrupole is of the second order. It is well known from general relativity that in a linear approximation the behavior of a comoving object in an expansion or collapsing metric can be equivalently described as a Doppler motion. But such an equivalence will no longer be held if the higher orders are involved. Indeed, in terms of the dipole term one can not distinguish between a SR Doppler effect and a local collapse. The velocity v determined by a dipole anisotropy is just equal to the v caused by the locally gravitational collapse. However, one can not expect that the SR kinematic quadrupole remains the same as the quadrupole caused by the local collapse.
The original intention of the kinematic quadrupole correction in the COBE sky maps is to remove the contamination of a quadrupole given by the local density inhomogeneity. Therefore, in principle, this correction should not be done by the kinematic quadrupole, but the local collapse of matter. We need then to conduct a quantitative comparison of the kinematic quadrupole with that of a local collapse.
The CBR anisotropies from a collapse at redshift greater than 1 have been addressed by several authors (e.g. Rees & Sciama, 1968;Dyer, 1976;Olson & Silk;1979;Raine & Thomas, 1981;Kaiser, 1982;Occhionero, Santangelo & Vittorio, 1983;Nottale, 1984;Dyer & Ip, 1988;Arnau et al, 1993). However, the contribution from a local collapse to the CBR anisotropies, especially the relationship between the dipole and local quadrupole, has not been carefully studied thus far. We will investigate the CBR dipole and quadrupole anisotropies from a locally spherical collapse described by a Tolman-Bondi universe. This model has recently been used to analyze the CBR anisotropy produced by perturbations with sizes of 100 ∼ 1000 Mpc (Fang and Wu, 1993, hereafter Paper I). The advantage of this model is that one can find all needed exact solutions, which allow us to check the availability of the kinematic quadrupole correction of eq.(1).
In Section 2, we will discuss the metric and null geodesic in a spherically symmetric collapse. Section 3 shows the linear, the high-order and the exact numerical solutions of the dipole and quadrupole anisotropies. In Section 4, a relationship between peculiar velocity and CBR anisotropies will be presented. Finally, a brief summary and discussion will be given in Section 5.
METRIC AND NULL GEODESIC
For simplicity, we consider a spherical density perturbation embedded in a flat universe. The universe can then be generally modeled as a Tolman-Bondi metric (Paper I):
ds 2 = e λ(x,t) dx 2 + r 2 (x, t)(dθ 2 + sin 2 θdφ 2 ) − dt 2
and e λ(x,t) = r ′2
1 + x 2 H 2 i (1 − ρ(x,t i ) ρ ci )(4)
where H i , ρ(x, t i ) and ρ ci are, respectively, the Hubble constant, the density and the critical density of the universe at epoch t i . We denote that ′ = ∂/∂x and˙= ∂/∂t. The dynamics of the collapse from the initial perturbation at x = 0 is described by the collapse factor S(x, t) which is defined as
r = S(x, t)x, S(x, t i ) = 1(5)
The dynamical equation of S(x, t) iṡ
S 2 − H 2 i ρ(x, t i ) ρ ci 1 S = H 2 i (1 − ρ(x, t i ) ρ ci )(6)
Let the initial density perturbation be δ(x), the density distribution ρ(x, t i ) in eqs. (4) and (6) can then be written as
ρ(x, t i ) = ρ ci (1 + δ(x))(7)
If the initial density perturbation δ 0 is assumed to be constant in the range of x < x c , one has
δ(x) = δ 0 , x ≤ x c δ 0 (x c /x) 3 , x > x c .(8)
In this case, the dynamical equation (6) can be solved analytically (Paper I).
The 0th-component of the null geodesic in the metric of eq.(3) is
dk 0 dσ = − 1 2 e λλ dx dσ 2 − rṙ dφ dσ 2(9)
where k 0 is the 0th-component of a photon's four momentum and σ is an affine parameter of the null geodesic. From the condition of ds = 0, one can find the energy shift of a photon, which is assumed to be emitted at t = t e with frequency ν e and received at (x 0 , t 0 ) with frequency ν 0 ,
ν e ν 0 = exp t 0 te 1 2λ dt + t 0 te (ṙ r − 1 2λ )r 2 dφ dt 2 dt (10)
The trajectories of the photon can be obtained from
dx dt = ±e −λ/2 1 − r 2 dφ dt 2 (11) d 2 φ dt 2 + 2 r dr dt − 1 2λ dφ dt = ṙ r − 1 2λ r 2 dφ dt 3(12)
Considering the trajectory of a photon is approaching a straight line when r is large, one can obtain dφ/dt from solving the eq.(12), and then the energy shift eq.(10) and the trajectory eq.(11) become
ν e ν 0 = e T 0 1 1 2λ tedT 1 − T 0 1 Ut e dT 1/2 (13) dX dT = ±e −λ/2 1 − ξ 1 − T 0 T Ut e dT 1/2(14)
where the new time and space coordinates (T , X) are defined by T = t/t e and X = x/t e . Therefore, T 0 = t 0 /t e , X 0 = x 0 /t e and X c = x c /t e . U and ξ in eqs. (13) and (14) are, respectively,
U = 2ξ(λ 2 −ṙ r ) (15) ξ = r 0 r 2 sin 2 Ψe T T 0λ tedT(16)
with r 0 = r(x 0 , t 0 ), and Ψ is the incidence angle of the photon, i.e. the angle between the directions of the photon's trajectory and the line of sight from observer to the center of the perturbation, Since the universe is flat, the CBR anisotropy produced by the local collapse is given by
∆T T = ν 0 ν e T 2/3 0 − 1(17)
where T 0 = (1 + z d ) 3/2 and z d is the redshift at decoupling time (t e ). Without a loss of generality, we will take t i = t e , i.e., the δ(x) is the density fluctuation at the recombination time t e . Because we are interested in the effect of a local collapse, in the following, only the case of x 0 < x c or X 0 < X c will be considered. Using the expansion of S(X, T ) given in the Appendix, one find the solution of ∆T/T up to the second order of δ 0 as
∆T T = − T 0 1 f 1 δdT + X 2 0 sin 2 Ψ T c0 1 u 1 X (0) 2 δdT + 1 2 T 0 1 f 1 δdT − X 2 0 sin 2 Ψ T c0 1 u 1 X (0) 2 δdT 2 + 3 T c0 1 f 1 ∆X X (0) δdT −5X 2 0 sin 2 Ψ T c0 1 u 1 X (0) 2 ∆X X (0) δdT + 3f 1 (T c0 ) + X 2 0 sin 2 Ψ u 1 (T c0 ) X 2 c δ 0 ∆T c − T 0 1 f 2 δ 2 dT + X 2 0 sin 2 Ψ T c0 1 u 2 X (0) 2 δ 2 dT +2δ 0 X 2 0 sin 2 Ψ S 1 (T 0 ) T 2/3 0 T c0 1 u 1 X (0) 2 δdT − 2X 2 0 sin 2 Ψ T c0 1 S 1 (T ) T 2/3 u 1 X (0) 2 δ 2 dT + 2X 2 0 sin 2 Ψ T c0 1 u 1 X (0) 2 δ T T 0 f 1 δdT + X 4 0 sin 4 Ψ T c0 1 u 1 X (0) 2 δdT 2(18)
where f i and u i (i = 1, 2) are defined in the Appendix. ∆X is the first-order correction to the photon trajectory, and ∆T c , the cross time correction when the photon enters into the perturbation regime. They are, respectively,
∆X = ± (X (0) ) 2 − X 2 0 sin 2 Ψ X (0) T 0 T F (X (0) , T ) T 2/3 dT (19) ∆T c = T 2/3 c0 T 0 T c0 F (X (0) , T ) T 2/3 dT(20)
where X (0) is the zero-order solution of the photon's trajectory, which can be found from eq.(14) by taking δ = 0. It is
(X (0) ) 2 = X 2 0 sin 2 Ψ + (3T 1/3 0 − 3T 1/3 − X 0 cos Ψ) 2(21)
Similarly, the zero-order solution to the photon cross time (T c ) is found to be
T c0 = T 1/3 0 − X 0 cos Ψ 3 − 1 3 X 2 c − X 2 0 sin 2 Ψ 3 .(22)
The function F (X, T ) in eqs. (19) and (20) is defined as
F (X, T ) = g 1 T 2/3 δ − X 2 0 sin 2 Ψ X 2 − X 2 0 sin 2 Ψ S 1 (T 0 ) T 2/3 0 δ 0 − S 1 (T ) T 2/3 δ + T T 0 f 1 δdT + X 2 0 sin 2 Ψ T 0 T u 1 X 2 δdT(23)
and g 1 is also given in the Appendix.
SOLUTIONS OF ∆T/T
First-Order Solution
The first-order solution of ∆T/T can be obtained by substituting the zero-order solution of the photon trajectory X (0) of eq.(21) into the first two terms of eq.(18). After a straightforward computation, the first-order solution is found to be
∆T T = δ 0 X 2 c 15 − X 2 0 45 + 2 15 T 1/3 0 X 0 cos Ψ − 2 135 X 3 c T 1/3 0 + O(T −2/3 0 )(24)
The above expression is also an expansion with respect to the parameter (1/T 0 ). The largest term is of the order of (1/T 0 ) −1/3 . Eq. (24) shows that in the approximation up to the first-order of δ 0 , ∆T/T consists mainly of two parts: a monopole term
∆T T 0 ≃ X 2 c 15 − X 2 0 45 − 2 135 X 3 c T 1/3 0 δ 0(25)
and a dipole term
∆T T d ≃ 2 15 T 1/3 0 X 0 δ 0 cos Ψ(26)
The physical explanations of these results are simple. The monopole is given by the gravitational redshift of the local matter inhomogeneity δ 0 , which leads to an isotropic increase of the CBR temperature. The dipole term depends on the distance between the observer and the center of the density perturbation (X 0 ). In the case of X 0 = 0, i.e., the observer sits at the center of the local collapse, the dipole term will disappear. This indicates that the dipole anisotropy is indeed due to the asymmetry of the local collapse around the observer. However, it should be pointed out that the dipole anisotropy depends on the time-dependence of the local inhomogeneity. If the gravitation field around the observer is static, the asymmetry (X 0 = 0) do not produce such a dipole anisotropy.
Comparing eq.(26) with eq.(1), we have
v = 2 15 T 1/3 0 X 0 δ 0(27)
This means that the CBR dipole anisotropy produced by a local collapse can be equivalently described as a SR Doppler effect if the observer is assumed to have a peculiar velocity given by eq. (27). Let's show that eq. (27) is indeed the observer's peculiar velocity produced by the gravitation field of the local collapse. The proper distance D corresponding to X = x/t e is
D = 2 3 c H 0 1 √ 1 + z i X(28)
where z i is the redshift when the perturbation occurred. We will take it to be the redshift at decoupling era, i.e. z i = z d . In the paper I, we have found that for a given δ 0 the present density contrast of the local collapse should be
∆ρ ρ ≈ 3 5 (1 + z d )δ 0 .(29)
In a flat universe, ρ = ρ cr = 3H 2 0 /8πG. Using eqs. (28) and (29), eq. (27) can be rewritten as
v = 1 3 (H 0 D) ∆ρ ρ = 2 3H 0 g(30)
where g = G∆M/D 2 is the observer's acceleration raised by the extra-mass ∆M = (4π/3)D 3 ∆ρ 0 . Therefore, v in eq. (27) is the same as that in eq.(2). In a word, in the linear approximation it is reasonable to determine the observer's peculiar velocity by the CBR dipole anisotropy and the Doppler effect formula (1). If the CBR dipole is totally given by the local collapse, one can then find from eq.(26) that X 0 δ 0 = (15/2)(∆T/T) d T −1/3 0 ∼ 2.6 × 10 −4 . This is, in fact, a constrain to the local collapse causing the dipole. For instance, if we assume the size of this local collapse is of the order of the distance to the Great Aractor, i.e. X 0 ∼ 1, the initial density perturbation δ 0 in this area should be about δ 0 ∼ 2 × 10 −4 , or from eq.(29) today's density contrast is about 1 × 10 −1 .
Second-Order Solutions and Quadrupole Anisotropy
In a similar way, the second-order solution of ∆T/T can be found from eq.(18) as follows
∆T T = δ 2 0 ( 3 175 X 2 c − 11 1575 X 2 0 )T 2/3 0 + 4 175
T 0 X 0 cos Ψ + 2 225 T 2/3 0 X 2 0 cos 2Ψ (31) which is also written in the series of (1/T 0 ) up to the order of (1/T 0 ) −2/3 . Eq. (31) indicates that, up to the δ 2 0 approximation, the dominant term is still the dipole, because the amplitude of dipole is of the order of T 0 , while the amplitudes of the monopole and quadrupole terms are only of the order of T 2/3 0 . The quadrupole anisotropy of the local collapse now is
∆T T q = 2 225 T 2/3 0 X 2 0 δ 2 0 cos 2Ψ(32)
Comparing this amplitude with that in eq.(26), one find
∆T T q = 1 2 ∆T T 2 d(33)
This is just the SR relationship between the anisotropies of the dipole and kinematic quadrupole. Therefore, the SR kinematic quadrupole correction is available till the approximation of eq.(32) is correct. However, when the terms of the order of T 1/3 0 , T 0 0 are taken into account, the dipole-quadrupole relation of eq(33) will no longer hold true. For instance, up to the order of δ 2 0 and T 1/3 0 , eq.(33) should be corrected as
∆T T 0 q = 1 2 ∆T T 2 d + T 1/3 0 X 2 0 ∆ q δ 2 0(34)
where the factor ∆ q is given by
∆ q = − 19X c 3780 − 1 X 0 X 0 140X c + 229X 3 0 61440X 3 c + 261X 5 0 81920X 5 c + 3X 7 0 4096X 7 c + X 0 41X 0 9800X c − 1333X 3 0 2064384X 3 c + 467X 5 0 5734400X 5 c + 3833X 7 0 11468800X 7 c + O( X 9 0 X 9 c )(35)
Eq.(35) shows that the term of the SR kinematic quadrupole does not always dominate the quadrupole anisotropy shown in eq.(34). Therefore, the SR kinematic quadrupole may not always be a good approximation of the quadrupole produced by a local collapse. In principle, the kinematic quadrupole correction of eq.(33) should also be replaced by the local quadrupole correction of eq.(34). Let's define a ratio between the local quadrupole to the SR kinematic quadrupole as q = (∆T/T) q (1/2)(∆T/T) 2 d (36) Figure 1 plots the dependence of q on X c and X 0 as given by eq.(35). One can see that q does not sensitively depend on the observer distance but the whole size of the local inhomogeneity. The difference between local and kinematic quadrupole, i.e. that q significantly deviates from 1, mainly occurs in two cases: 1) very small clustering with a scale less than a few Mpc, and 2) very large clustering with a scale greater than 10 3 Mpc.
Numerical Solution
In order to accurately compare the kinematic quadrupole with the local collapse model, we made the numerical solutions of the dipole and quadrupole amplitude of the local collapse model. These solutions depend on three parameters: 1) the initial density fluctuation δ 0 , 2) the size of local collapse X c , and 3) the distance of the observer to the center of the local collapse X 0 .
We still lack the detailed knowledge of the local collapse. It is generally believed that the bulk motion of horizon sized volume is negligible. Therefore, the Local Group's peculiar velocity, which may govern the dipole term, was probably induced by the local inhomogeneities in the density field with size comparable with horizon at the time of decoupling. We have shown in Paper I that the COBE-DMR result of CBR anisotropy on an angular scale of 10 • implies the existence of collapses on scale as large as about 1000 h −1 50 Mpc with an initial density fluctuation δ 0 ∼ (2.8 − 6.9) × 10 −6 , or its present density enhancement is (1.7 − 4.2) × 10 −3 . On the other hand, the distance of our Local Group to the center of the collapse should at least be greater than the distance to the Great Attractor, which is estimated to be 80 h −1 50 Mpc (Lynden-Bell et al, 1988), i.e. X 0 = 0.7. Therefore, it would be valuable to consider the following two cases: X c = 1.4 (∼ 150 h −1 50 Mpc) and 10 (∼ 1000 h −1 50 Mpc), i.e., the lower value of X c is about 2 times of the distance to the Great Attractor and the higher value of X c is about the size of horizon.
The numerical results are listed in Table 1, in which [exact] indicates the exact solution of the dipole amplitude, and [δ 0 ] and [δ 2 0 ] denote, respectively, the solutions up to the first-and second-order of δ 0 . The density fluctuations at the decoupling epoch (z d = 10 3 ) are taken to be 10 −3 , 10 −4 and 10 −5 , respectively. The corresponding values of the SR kinematic quadrupole (1/2)(∆T/T) 2 d = (1/2)[exact] 2 have also been listed for comparisons. The term of (∆T/T) q is the exact solution of the quadrupole amplitude.
One can find from Table 1 that: a) For the dipole anisotropy the linear approximation is good if the initial density fluctuation is less than 10 −4 . b) The dipole is nearly independent of the size of the local collapse. c) The local quadrupole sensitively depends on the size of the perturbation. d) The kinematic quadrupole is always larger than the exact solution of the local quadrupole. e) The difference between the kinematic and exact quadrupoles does not vanish with the decrease of density perturbation δ 0 . In the case of X c = 10 the ratio 1/q can be as large as 3.
DIPOLE AS THE FUNCTIONS OF DENSITY CONTRAST AND PECULIAR VELOCITY
In this section, we intend to represent the CBR's dipole anisotropy by the observable parameters such as the present density contrast and peculiar velocity. Up to the second order of δ 0 the dipole anisotropy is (see eqs. (24) and (31))
∆T T d = 2 15 T 1/3 0 δ 0 + 4 175 T 0 δ 2 0 + ... X 0 cos Ψ(37)
First, the present density contrast ∆ρ/ρ as a function of initial fluctuation δ 0 is given by
∆ρ ρ = S 0 S(X, T 0 ) 3 (1 + δ 0 ) − 1(38)
Using eq.(A1-A5), one can find the expression of ∆ρ/ρ expanded as a series of δ 0 ∆ρ ρ = δ 0 3 5 T 2/3 0
+ 2 5 T −1 0 +δ 2 0 51 175 T 4/3 0 − 2 5 T 2/3 0 + 4 25 T −1/3 0 − 6 35 T −1 0 + 3 25 T −2 0 +δ 3 0 341 2625 T 2 0 − 68 175 T 4/3 0 + 1 3 T 2/3 0 + 34 875 T 1/3 0 − 92 525 T −1/3 0 + 2 21 T −1 0 + 14 375 T −4/3 0 − 18 175 T −2 0 + 4 125 T −3 0 + O(δ 4 0 )(39)
Reversing this expansion and retaining the first three terms, we have
δ 0 ≈ 1 1 + z d ∆ρ ρ 5 3 − 85 63 ∆ρ ρ + 14075 11907 ∆ρ ρ 2 + ... (40)
Substituting this result into eq.(37), one find the expression of dipole up to the secondorder of the present density contract ∆ρ/ρ,
∆T T d = 1 3 H 0 D c ∆ρ ρ 1 − 11 21 ∆ρ ρ .(41)
Therefore, in the linear approximation, the dipole anisotropy would be overestimated from the measurement of the local density fluctuation. Second, the peculiar velocity inside the perturbation regime can be derived from the continuity equation (Peebles, 1980)

$$\frac{\partial(\Delta\rho/\rho)}{\partial T} + \frac{1}{S_0}\,\nabla\cdot\left[(1 + \Delta\rho/\rho)\,v\right] = 0. \qquad (42)$$

It is

$$v = \frac{1}{2}\,(H_0 D)\,T_0^{1/3}\,\frac{\partial(\Delta\rho/\rho)}{\partial T}\,\frac{S_0}{1 + \Delta\rho/\rho}. \qquad (43)$$
Using eqs. (38), (39) and (40), the peculiar velocity can be expanded in ∆ρ/ρ as
$$v \approx (H_0 D)\left[\frac{1}{3}\,\frac{\Delta\rho}{\rho} - \frac{4}{63}\left(\frac{\Delta\rho}{\rho}\right)^2 + \frac{328}{11907}\left(\frac{\Delta\rho}{\rho}\right)^3 + \cdots\right] \qquad (44)$$
This result is consistent with the recent work of Gramann (1993), who showed that the peculiar velocity would be overestimated by the linear approximation. Substituting this expression into eq. (37), we obtain the dipole anisotropy as a function of the peculiar velocity
$$\left(\frac{\Delta T}{T}\right)_d = \frac{v}{c}\left[1 + \frac{25}{63}\,\frac{c}{DH_0}\,\frac{v}{c} + \dots\right] \qquad (45)$$
or
$$\frac{v}{c} = \left(\frac{\Delta T}{T}\right)_d \Big/ \left[1 + \frac{25}{198}\,\frac{\Delta\rho}{\rho}\right] \qquad (46)$$
Therefore, the SR relation eq. (1) can be used to determine the peculiar velocity from the CBR measurement only if the local density contrast ∆ρ/ρ is very small. Considering the fact that the present density contrast ∆ρ/ρ is of order 1 on the scale of superclusters, the higher-order corrections to the peculiar field may not be negligible. The peculiar velocity divergence in our model is simply
$$\nabla\cdot v = -\dot{S}_0\,\frac{\Delta\rho}{\rho}\left[1 - \frac{4}{21}\,\frac{\Delta\rho}{\rho} + \frac{328}{3969}\left(\frac{\Delta\rho}{\rho}\right)^2 + \dots\right] \qquad (47)$$

or

$$\nabla\cdot v = -\dot{S}_0\,(\Delta\rho/\rho)\left[1 + 0.190\,(\Delta\rho/\rho) - 0.046\,(\Delta\rho/\rho)^2 + \dots\right] \qquad (48)$$
A similar result has recently been found empirically by Nusser and Dekel (1991, 1993).
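To make the size of these corrections concrete, here is a minimal sketch of ours comparing the linear estimate v = (H_0 D/3)(∆ρ/ρ) with the third-order series of eq. (44):

```python
# Linear estimate of the peculiar velocity versus the third-order series
# of eq. (44), both in units of H0*D; x stands for Delta rho / rho.
for x in (0.1, 0.5, 1.0):
    linear = x / 3.0
    series = x / 3.0 - 4.0 / 63.0 * x ** 2 + 328.0 / 11907.0 * x ** 3
    print(x, round(linear / series, 3))
# At x = 1 the linear formula overestimates v by roughly 12 percent,
# which is why the correction matters on supercluster scales.
```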
DISCUSSION AND CONCLUSIONS
It is usually believed that in the linear approximation the CBR anisotropies consist primarily of three parts, namely, 1) a Doppler effect from the motion of the observer with respect to the CBR rest frame, 2) a Sachs-Wolfe effect at the surface of last scattering, and 3) a time-dependent potential effect along the photon path (Martínez-González, Sanz & Silk, 1990). In this description, one cannot, in fact, distinguish between the effects of a time-dependent potential and a Doppler motion. The contribution of a local collapse of matter to the CBR anisotropy can be entirely contained in the Doppler effect of the observer's peculiar velocity. One can then use the linear relationship between the peculiar velocity v and the local density contrast ∆ρ/ρ to calculate the CBR dipole anisotropy (∆T/T)_d [eq. (1)].
However, for the CBR quadrupole anisotropy, the equivalence between the Doppler effect and the local gravitational field no longer exists. This is because the kinematic quadrupole is essentially non-linear. As a result, the quadrupole anisotropy produced by a local inhomogeneity is not generally equal to the SR kinematic quadrupole. In particular, if the peculiar field is comparable to the horizon in size, the relationship between the dipole and the quadrupole terms will be substantially different from that given by the SR Doppler effect. We have found that, based on a simple spherical density perturbation model, the local quadrupole can differ from the kinematic quadrupole by a factor as large as 3.
The purpose of the SR kinematic quadrupole correction used in the data reduction of the COBE observation is to remove the local quadrupole amplitude from the CBR sky temperature maps, so that the remaining maps contain only the cosmic quadrupole component, i.e. the component given by the density fluctuation on the last scattering surface. The kinematic quadrupole correction is, in fact, a correction for the local quadrupole. Yet, as we have shown, the local quadrupole cannot simply be replaced by the SR kinematic quadrupole. Therefore, the original purpose of the kinematic quadrupole correction may not be completely achieved by removing the quadrupole given by eq. (1).
The quadrupole amplitude of the local density inhomogeneities depends on the matter distribution, the density contrast, the size of the local gravitational field and the position of the observer. All these parameters are poorly known at the moment. Therefore, it seems very hard to precisely identify the local quadrupole. A possible way to obtain the local quadrupole would be to analyze the temperature maps to see whether they contain a component with its polar axis parallel to the direction of the dipole. This is because the axes of both the dipole and the local quadrupole should point to the center of the local inhomogeneity. In principle, without a precise local quadrupole correction, the observed quadrupole should be regarded as a sum of cosmological and local components. Thus, the local quadrupole correction leads to an uncertain factor in extracting the cosmological quadrupole term from observed CBR maps (e.g., the COBE measurement).
Despite the fact that the local quadrupole may differ remarkably from the kinematic quadrupole, our simple model shows that for a given dipole amplitude the amplitude of the local quadrupole is always less than the corresponding kinematic quadrupole, if the local peculiar field is of the order of 100 Mpc in size. This means that the kinematic quadrupole seems to be an upper limit on the local quadrupole. Because the quadrupole amplitude presently observed by COBE is about 8 times larger than this upper limit, one may conclude that the uncertainty in the local quadrupole would not lead to an uncertainty greater than 13% of the observed cosmic quadrupole.
Certainly, this conclusion is model-dependent. The present model is actually a toy model, which assumes the simplest matter distribution, namely a constant density contrast. It may deviate considerably from the real local matter distribution. For instance, the peculiar velocity distribution derived from this toy model [eq. (44)] increases linearly with the distance from the perturbation center, which does not fit the observations around the Great Attractor (Faber & Burstein, 1989). Moreover, if the initial density perturbation is assumed to be highly non-spherical, the local quadrupole may have an amplitude greater than (∆T/T)_d^2. Therefore, the loophole hidden in the kinematic quadrupole correction has not been completely closed. One needs to simulate this effect using a more realistic model of the local collapse of matter, e.g., models given by N-body simulations. The study of the interaction between the local matter distribution and the CBR quadrupole would be of great significance for a better understanding of both the initial perturbation and the local collapse of matter.
APPENDIX
For the initial perturbation eq. (8), the collapse factor S(x, t) can be expanded in a series in δ_0 as

$$S(X, T) = S_0(T) + S_1(T)\,\delta_0 + S_2(T)\,\delta_0^2 + S_3(T)\,\delta_0^3 + \dots \qquad (A1)$$
where the coefficients S_i (i = 0, 1, 2, …) are determined by eq. (6) to be

$$S_0(T) = T^{2/3} \qquad (A2)$$
$$S_1(T) = -\frac{2}{15\,T^{1/3}} + \dots \qquad (A3)$$

Similarly, one can find the expansions of r, λ, ξ and U with respect to δ_0 from their definitions (Paper I),

$$\lambda = \frac{2\dot{r}'}{r'} \qquad (A6)$$

and

$$\frac{1}{2}\dot{\lambda} = \frac{x\dot{S}' + \dot{S}}{xS' + S}, \qquad (A7)$$

as well as eqs. (15) and (16). Let

$$\frac{\dot{\lambda}\,t_e}{2} = \frac{2}{3T} + f_1\,\delta_0 + f_2\,\delta_0^2 + \dots$$

$$\frac{\dot{\lambda}}{2} - \frac{\dot{r}}{r} = u_1\,\delta_0 + u_2\,\delta_0^2 + \dots$$

$$e^{-\lambda/2} = \frac{1}{T^{2/3}} + g_1\,\delta_0 + g_2\,\delta_0^2 + \dots$$
Figure 1. The ratio q of the local quadrupole (∆T/T)_q to the SR kinematic quadrupole (1/2)(∆T/T)_d^2. Only the leading terms of eq. (34) are considered. This plot illustrates the dependence of the ratio q on the size of the inhomogeneity and the position of the observer. For a collapse with an intermediate scale of about 10^2 Mpc, we have q ≈ 1. For both very small and very large scale collapses, the local quadrupole differs significantly from the kinematic quadrupole.
Table 1. Comparison of the SR Kinematic Quadrupole with the Exact Solutions. The three dipole columns give (∆T/T)_d per unit δ_0: the first-order solution [δ_0]/δ_0, the second-order solution ([δ_0]+[δ_0^2])/δ_0, and the exact solution [exact]/δ_0.

size        δ_0     [δ_0]/δ_0   ([δ_0]+[δ_0^2])/δ_0   [exact]/δ_0   [(∆T/T)_d]^2/2 × δ_0^{-2}   (∆T/T)_q × δ_0^{-2}
X_c = 1.4   10^-3   2.951       3.457                 3.638         6.6                         6.0
            10^-4   2.951       3.002                 3.003         4.5                         4.1
            10^-5   2.951       2.957                 2.957         4.4                         4.1
X_c = 10    10^-3   2.951       3.457                 3.632         6.6                         2.6
            10^-4   2.951       3.002                 3.000         4.5                         1.4
            10^-5   2.951       2.957                 2.953         4.4                         1.4
ACKNOWLEDGMENTS

The authors thank Dr. J. Heine for his help. WXP is grateful to the CNRS and the Wong K.C. Foundation for financial support. This work was also partially supported by NSF contract INT 9301805.
REFERENCES

Arnau, J.V., Fullana, M.J., Monreal, L., & Saez, D. 1993, ApJ, 402, 359
Bennett, C.L., et al. 1992, ApJ, 396, L7
Dyer, C.C. 1976, MNRAS, 175, 429
Dyer, C.C., & Ip, P.S.S. 1988, MNRAS, 235, 895
Faber, S.M., & Burstein, D. 1989, in Large-Scale Motions in the Universe, eds. V.C. Rubin & G.V. Coyne (Princeton: Princeton University Press)
Fang, L.Z., & Wu, X.P. 1993, ApJ, 408, 25 (Paper I)
Gramann, M. 1993, ApJ, 405, L47
Kaiser, N. 1982, MNRAS, 198, 1033
Kogut, A., et al. 1993, ApJ, submitted
Lynden-Bell, D., Faber, S.M., Burstein, D., Davies, R.L., Dressler, A., Terlevich, R.J., & Wegner, G. 1988, ApJ, 326, 19
Martínez-González, E., Sanz, J.L., & Silk, J. 1990, ApJ, 355, L5
Nottale, L. 1984, MNRAS, 206, 713
Nusser, A., & Dekel, A. 1991, ApJ, 379, 6
Nusser, A., & Dekel, A. 1993, ApJ, 405, 437
Occhionero, F., Santangelo, P., & Vittorio, N. 1983, A&A, 177, 365
Olson, D.W., & Silk, J. 1979, ApJ, 233, 395
Peebles, P.J.E. 1980, in The Large-Scale Structure of the Universe, eds. A.S. Wightman & P.W. Anderson (Princeton: Princeton University Press)
Peebles, P.J.E., & Wilkinson, D.T. 1968, Phys. Rev., 174, 2168
Raine, D.J., & Thomas, E.G. 1981, MNRAS, 195, 649
Rees, M.J., & Sciama, D.W. 1968, Nature, 217, 511
Smoot, G.F., et al. 1991, ApJ, 371, L1
Smoot, G.F., et al. 1992, ApJ, 396, L1
| [] |
[
"Fine-Grained Complexity of Safety Verification",
"Fine-Grained Complexity of Safety Verification"
] | [
"Peter Chini p.chini@tu-bs.de \nBraunschweigGermany\n",
"Roland Meyer roland.meyer@tu-bs.de \nBraunschweigGermany\n",
"Prakash Saivasan p.saivasan@tu-bs.de \nBraunschweigGermany\n"
] | [
"BraunschweigGermany",
"BraunschweigGermany",
"BraunschweigGermany"
] | [] | We study the fine-grained complexity of Leader Contributor Reachability (LCR) and Bounded-Stage Reachability (BSR), two variants of the safety verification problem for shared memory concurrent programs. For both problems, the memory is a single variable over a finite data domain. Our contributions are new verification algorithms and lower bounds. The latter are based on the Exponential Time Hypothesis (ETH), the problem Set Cover, and cross-compositions. LCR is the question whether a designated leader thread can reach an unsafe state when interacting with a certain number of equal contributor threads. We suggest two parameterizations: (1) By the size of the data domain D and the size of the leader L, and (2) by the size of the contributors C. We present algorithms for both cases. The key techniques are compact witnesses and dynamic programming. The algorithms run in O * ((L · (D + 1)) L·D · D D ) and O * (2 C ) time, showing that both parameterizations are fixed-parameter tractable. We complement the upper bounds by (matching) lower bounds based on ETH and Set Cover. Moreover, we prove the absence of polynomial kernels. For BSR, we consider programs involving t different threads. We restrict the analysis to computations where the write permission changes s times between the threads. BSR asks whether a given configuration is reachable via such an s-stage computation. When parameterized by P, the maximum size of a thread, and t, the interesting observation is that the problem has a large number of difficult instances. Formally, we show that there is no polynomial kernel, no compression algorithm that reduces the size of the data domain D or the number of stages s to a polynomial dependence on P and t. This indicates that symbolic methods may be harder to find for this problem. | 10.1007/s10817-020-09572-x | [
"https://export.arxiv.org/pdf/1802.05559v3.pdf"
] | 3,324,292 | 1802.05559 | 9ea1d6d95135259d00cc51a62d6db631fa07246e |
Fine-Grained Complexity of Safety Verification
10 Jan 2020
Peter Chini p.chini@tu-bs.de
Braunschweig, Germany
Roland Meyer roland.meyer@tu-bs.de
Braunschweig, Germany
Prakash Saivasan p.saivasan@tu-bs.de
Braunschweig, Germany
We study the fine-grained complexity of Leader Contributor Reachability (LCR) and Bounded-Stage Reachability (BSR), two variants of the safety verification problem for shared memory concurrent programs. For both problems, the memory is a single variable over a finite data domain. Our contributions are new verification algorithms and lower bounds. The latter are based on the Exponential Time Hypothesis (ETH), the problem Set Cover, and cross-compositions. LCR is the question whether a designated leader thread can reach an unsafe state when interacting with a certain number of equal contributor threads. We suggest two parameterizations: (1) By the size of the data domain D and the size of the leader L, and (2) by the size of the contributors C. We present algorithms for both cases. The key techniques are compact witnesses and dynamic programming. The algorithms run in O * ((L · (D + 1)) L·D · D D ) and O * (2 C ) time, showing that both parameterizations are fixed-parameter tractable. We complement the upper bounds by (matching) lower bounds based on ETH and Set Cover. Moreover, we prove the absence of polynomial kernels. For BSR, we consider programs involving t different threads. We restrict the analysis to computations where the write permission changes s times between the threads. BSR asks whether a given configuration is reachable via such an s-stage computation. When parameterized by P, the maximum size of a thread, and t, the interesting observation is that the problem has a large number of difficult instances. Formally, we show that there is no polynomial kernel, no compression algorithm that reduces the size of the data domain D or the number of stages s to a polynomial dependence on P and t. This indicates that symbolic methods may be harder to find for this problem.

The paper at hand is the full version of [10]. It presents some new results. This includes an improved algorithm for LCR running in O * (2 C ) time instead of O * (4 C ) and a new (2 − δ) C lower bound based on Set Cover. Together, upper and lower bound show that the optimal algorithm for the problem has been found. Moreover, we give proofs for the intractability of certain parameterizations of LCR and BSR. This justifies our choice of parameters. Technical details can be found in the appendix of the paper.

Related work Concurrent programs communicating through a shared memory and having a fixed number of threads have been extensively studied [15,26,32,2]. The leader contributor reachability problem as considered in this paper was introduced as parametrized reachability in [31]. In [17], it was shown to be NP-complete when only finite state programs are involved and PSPACE-complete for recursive programs. In [35], the parameterized pairwise reachability problem was considered and shown to be decidable. Parameterized reachability under a variant of round robin scheduling was proven decidable in [37].

The bounded stage restriction on the computations of concurrent programs as considered here was introduced in [1]. The corresponding reachability problem was shown to be NP-complete when only finite state programs are involved. The problem remains in NEXP-time and PSPACE-hard for a combination of counters and a single pushdown. The bounded stage restriction generalizes the concept of bounded context switching from [39], which was shown to be NP-complete in that paper.
In [9], FPT-algorithms for bounded context switching were obtained under various parameterizations. In [3], networks of pushdowns communicating through a shared memory were analyzed under topological restrictions.

There have been few efforts to obtain fixed-parameter tractable algorithms for automata and verification-related problems. FPT-algorithms for automata problems have been studied in [21,20,40]. In [13], model checking problems for synchronized executions on parallel components were considered and proven intractable. In [16], the notion of conflict serializability was introduced for the TSO memory model and an FPT-algorithm for checking serializability was provided. The complexity of predicting atomicity violation on concurrent systems was considered in [19]. The finding is that FPT-solutions are unlikely to exist. In [18], the problem of checking correctness of a program along a pattern is investigated. The authors conduct an analysis in several parameters. The results range from NP-hardness even for fixed parameters to FPT-algorithms.

Preliminaries

We introduce our model for programs, which is fairly standard [1,31,17], and give the basics on fixed-parameter tractability.

Programs A program consists of finitely many threads that access a shared memory. The memory is modeled to hold a single value at a time. Formally, a (shared memory) program is a tuple A = (D, a0, (P_i)_{i∈[1..t]}). Here, D is the data domain of the memory and a0 ∈ D is the initial value. Threads are modeled as control-flow graphs that write values to or read values from the memory. These operations are captured by Op(D) = {!a, ?a | a ∈ D}. We use the notation W(D) = {!a | a ∈ D} for the write operations and R(D) = {?a | a ∈ D} for the read operations. A thread P_id is a non-deterministic finite automaton (Op(D), Q, q0, δ) over the alphabet of operations. The set of states is Q with q0 ∈ Q the initial state. The final states will depend on the verification task. The transition relation is δ ⊆ Q × (Op(D) ∪ {ε}) × Q. We extend it to words and also write q w→ q' for q' ∈ δ(q, w). Whenever we need to distinguish between different threads, we add indices and write Q_id or δ_id.

The semantics of a program is given in terms of labeled transitions between configurations. A configuration is a pair (pc, a) ∈ (Q1 × · · · × Qt) × D. The program counter pc is a vector that shows the current state pc(i) ∈ Q_i of each thread P_i. Moreover, the configuration gives the current value in memory. We call c0 = (pc0, a0) with pc0(i) = q0_i for all i ∈ [1..t] the initial configuration. Let C denote the set of all configurations.
Introduction
We study the fine-grained complexity of two safety verification problems [1,17,31] for shared memory concurrent programs. The motivation to reconsider these problems are recent developments in fine-grained complexity theory [11,38,7,34]. They suggest that classifications such as NP or even FPT are too coarse to explain the success of verification methods. Instead, it should be possible to identify the precise influence that parameters of the input have on the verification time. Our contribution confirms this idea. We give new verification algorithms for the two problems that, for the first time, can be proven optimal in the sense of finegrained complexity theory. To state the results, we need some background. As we proceed, we explain the development of fine-grained complexity theory.
There is a well-known gap between the success that verification tools see in practice and the judgments about computational hardness that worst-case complexity is able to give. The applicability of verification tools steadily increases by tuning them towards industrial instances. The complexity estimation is stuck with considering the input size or at best assuming certain parameters to be constant. However, the latter approach is not very enlightening if the runtime is n k , where n is the input size and k the parameter.
The observation of a gap between practical algorithms and complexity theory is not unique to verification but made in every field that has to solve hard computational problems. Complexity theory has taken up the challenge to close the gap. So-called fixed-parameter tractability (FPT) [12,14] proposes to identify parameters k so that the runtime is f (k)poly(n), where f is a computable function and poly(n) denotes any polynomial dependent on n. These parameters are powerful in the sense that they dominate the complexity.
For an FPT-result to be useful, function f should only be mildly exponential, and of course k should be small in the instances of interest. Intuitively, they are what one needs to optimize. Fine-grained complexity is the study of upper and lower bounds on the function. Indeed, the fine-grained complexity of a problem is written as O * (f (k)), emphasizing f and k and suppressing the polynomial part. For upper bounds, the approach is still to come up with an algorithm.
For lower bounds, fine-grained complexity has taken a new and very pragmatic perspective. For the problem of n-variable 3-SAT the best known algorithm runs in O(2 n ) time, and this bound has not been improved since 1970. The idea is to take improvements on this problem as unlikely, known as the exponential-time hypothesis (ETH) [34]. Formally, it asserts that there is no 2 o(n) -time algorithm for 3-SAT. ETH serves as a lower bound that is reduced to other problems [38]. An even stronger assumption about SAT, called strong exponential-time hypothesis (SETH) [34,7], and a similar one about Set Cover [11] allow for lower bounds like the absence of O * ((2 − δ) n )-time algorithms.
In this work, we contribute fine-grained complexity results for verification problems on concurrent programs. The first problem (LCR) is reachability for a leader thread that is interacting with an unbounded number of contributors [31,17]. We show that, assuming a parameterization by the size of the leader L and the size of the data domain D, the problem can be solved in O * ((L · (D + 1)) L·D · D D ). At the heart of the algorithm is a compression of computations into witnesses. To check reachability, our algorithm then iterates over candidates for witnesses and checks each of them for being a proper witness. Interestingly, we can formulate a variant of the algorithm that seems to be suited for large state spaces.
Using ETH, we show that the algorithm is (almost) optimal. Moreover, the problem is shown to have a large number of hard instances. Technically, there is no polynomial kernel [5,6]. Experience with kernel lower bounds is still limited. This notion of hardness seems to indicate that symbolic methods are hard to apply here. The lower bounds that we present share similarities with the reductions presented in [28,8,29].
If we consider the size C of the contributors as a parameter, we obtain an O * (2 C ) upper bound. Our algorithm is based on dynamic programming. We use the technique to solve a reachability problem on a graph that is shown to be a compressed representation for LCR. The compression is based on a saturation argument which is inspired by thread-modular reasoning [22,23,30,33]. With the hardness assumption on Set Cover we show that the algorithm is indeed optimal. Moreover, we prove the absence of a polynomial kernel.
Parameterizations of LCR involving just a single parameter D or L are intractable. We show that these problems are W[1]-hard. This renders the existence of an FPT-algorithm for those parameterizations unlikely.
The second problem we study generalizes bounded context switching. Bounded stage reachability (BSR) asks whether a state is reachable if there is a bound s on the number of times the write permission is allowed to change between the threads [1]. Again, we show the new form of kernel lower bound. The result is tricky and highlights the power of the computation model.
The results are summarized by the table below. Main findings are highlighted in gray. We present two new algorithms for LCR. Moreover, we suggest kernel lower bounds as hardness indicators for verification problems. The corresponding lower bound for BSR is particularly difficult to achieve.

[Table: Problem, Upper Bound, Lower Bound, Kernel; the kernel column reads "No poly." for the LCR parameterizations, and BSR(s, D) is marked intractable.]

The program's transition relation among configurations, → ⊆ C × (Op(D) ∪ {ε}) × C, is obtained by lifting the transition relations of the threads. To define it, let pc1 = pc[i = q_i], meaning thread P_i is in state q_i and otherwise the program counter coincides with pc. Let pc2 = pc[i = q'_i]. If thread P_i tries to read with the transition q_i ?a→ q'_i, then (pc1, a) ?a→ (pc2, a). Note that the memory is required to hold the desired value. If the thread has the transition q_i !b→ q'_i, then (pc1, a) !b→ (pc2, b). Finally, q_i ε→ q'_i yields (pc1, a) ε→ (pc2, a). The program's transition relation is generalized to words, c w→ c'. We call such a sequence of consecutive labeled transitions a computation. To indicate that there is a word justifying a computation from c to c', we write c →* c'. We may use an index w→_i to indicate that the computation was induced by P_i. Where appropriate, we use the program as an index, w→_A.
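The lifted transition relation is straightforward to prototype. The following sketch (our own Python rendering of the definitions above, with a hypothetical two-thread program; it is not code from the paper) computes the one-step successors of a configuration:

```python
# One-step successors of a configuration (pc, a) under the lifted
# transition relation. A thread is a set of transitions (q, op, q2) with
# op one of ('w', a), ('r', a) or ('eps', None).
def successors(config, threads):
    pc, mem = config
    succ = set()
    for i, delta in enumerate(threads):
        for (q, (kind, val), q2) in delta:
            if q != pc[i]:
                continue
            pc2 = pc[:i] + (q2,) + pc[i + 1:]
            if kind == 'w':                      # write: memory becomes val
                succ.add((pc2, val))
            elif kind == 'r' and mem == val:     # read: memory must hold val
                succ.add((pc2, mem))
            elif kind == 'eps':                  # internal move
                succ.add((pc2, mem))
    return succ

# Tiny hypothetical two-thread program over the domain {a0, a, b}:
t1 = {('q0', ('w', 'a'), 'q1'), ('q1', ('r', 'b'), 'q2')}
t2 = {('p0', ('r', 'a'), 'p1'), ('p1', ('w', 'b'), 'p1')}
print(successors((('q0', 'p0'), 'a0'), [t1, t2]))
# -> {((('q1', 'p0')), 'a')}: only the write of t1 is enabled initially
```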
Fixed-Parameter Tractability We wish to study the fine-grained complexity of safety verification problems for the above programs. This means our goal is to identify parameters of these problems that satisfy two properties. First, in practical instances they are small. Second, assuming that these parameters are small, show that efficient verification algorithms can be obtained. Parametrized complexity is a branch of complexity theory that makes precise the idea of being efficient relative to a parameter.
Fix a finite alphabet Σ. A parameterized problem L is a subset of Σ * ×N. The problem is called fixed-parameter tractable if there is a deterministic algorithm that, given (x, k) ∈ Σ * × N, decides (x, k) ∈ L in time f (k) · |x| O(1) . We use FPT for the class of all such problems and say a problem is FPT to mean it is in that class. Note that f is a computable function only depending on the parameter k. It is common to denote the runtime by O * (f (k)) and suppress the polynomial part. We will be interested in the precise dependence on the parameter, in upper and lower bounds on the function f . This study is often referred to as fine-grained complexity.
Lower bounds on f are usually obtained from assumptions about SAT. The most famous is the Exponential Time Hypothesis (ETH). It assumes that there is no algorithm solving n-variable 3-SAT in 2 o(n) time. Then, the reasoning is as follows: If f drops below a certain bound, ETH would fail. Other standard assumptions for lower bounds are the Strong Exponential Time Hypothesis (SETH) and the hardness assumption of Set Cover. We postpone the definition of the latter and focus on SETH. This assumption is more restrictive than ETH. It asserts that n-variable SAT cannot be solved in O * ((2 − δ) n ) time for any δ > 0.
While many parameterizations of NP-hard problems were proven to be fixed-parameter tractable, there are problems that are unlikely to be FPT. Such problems are hard for the complexity class W[1]. For a theory of relative hardness, the appropriate notion of reduction is called a parameterized reduction. Given parameterized problems L, L' ⊆ Σ* × N, we say that L is reducible to L' via a parameterized reduction if there is an algorithm that transforms an input (x, k) to an input (x', k') in time g(k) · |x|^{O(1)} such that (x, k) ∈ L if and only if (x', k') ∈ L'. Here, g is a computable function and k' is computed by a function only dependent on k.
Leader Contributor Reachability
We consider the leader contributor reachability problem for shared memory programs. The problem was introduced in [31] and shown to be NP-complete in [17] for the finite state case. We contribute two new verification algorithms that target two parameterizations of the problem. In both cases, our algorithms establish fixed-parameter tractability. Moreover, with matching lower bounds we prove them to be optimal even in the fine-grained sense.
An instance of the leader contributor reachability problem is given by a shared memory program of the form A = (D , a 0 , (P L , (P i ) i∈ [1..t] )). The program has a designated leader thread P L and several contributor threads P 1 , . . . , P t . In addition, we are given a set of unsafe states for the leader. The task is to check whether the leader can reach an unsafe state when interacting with a number of instances of the contributors. It is worth noting that the problem can be reduced to having a single contributor. Let the corresponding thread P C be the union of P 1 , . . . , P t (constructed using an initial ε-transition). We base our complexity analysis on this simplified formulation of the problem.
For the definition, let A = (D , a 0 , (P L , P C )) be a program with two threads. Let F L ⊆ Q L be a set of unsafe states of the leader. For any t ∈ N, define the program A t = (D , a 0 , (P L , (P C ) i∈ [1..t] )) to have exactly t copies of P C . Further, let C f be the set of configurations where the leader is in an unsafe state (from F L ). The problem of interest is as follows:
Leader Contributor Reachability (LCR) Input:
A program A = (D , a 0 , (P L , P C )) and a set of states F L ⊆ Q L . Question: Is there a t ∈ N such that c 0 → * A t c for some c ∈ C f ?
We consider two parameterizations of LCR. First, we parameterize by D, the size of the data domain D , and L, the number of states of the leader P L . We denote the parameterization by LCR(D, L). The second parameterization that we consider is LCR(C), a parameterization by the number of states of the contributor P C . For both, LCR(D, L) and LCR(C), we present fine-grained analyses that include FPT-algorithms as well as lower bounds for runtimes and kernels.
While for LCR(D, L) we obtain an FPT-algorithm, it is not likely that LCR(D) and LCR(L) admit the same. We prove that these problems are W[1]-hard.
Parameterization by Memory and Leader
We give an algorithm that solves LCR in time O * ((L·(D+1)) L·D ·D D ), which means LCR(D, L) is FPT. We then show how to modify the algorithm to solve instances of LCR as they are likely to occur in practice. Interestingly, the modified version of the algorithm lends itself to an efficient implementation based on off-the-shelf sequential model checkers. We conclude with lower bounds for LCR(D, L).
Upper Bound We give an algorithm for the parameterization LCR(D, L). The key idea is to compactly represent computations that may be present in an instance of the given program. To this end, we introduce a domain of so-called witness candidates. The main technical result, Lemma 6, links computations and witness candidates. It shows that reachability of an unsafe state holds in an instance of the program if and only if there is a witness candidate that is valid (in a precise sense). With this, our algorithm iterates over all witness candidates and checks each of them for being valid. To state the overall result, let Wit (L, D ) = (L · (D + 1)) L·D · D D · L be the number of witness candidates and let Valid (L, D, C ) = L 3 · D 2 · C 2 be the time it takes to check validity of a candidate. Note that it is polynomial.
Theorem 1. LCR can be solved in time O(Wit (L, D ) · Valid (L, D, C )).
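To get a feeling for the magnitude of this bound, the two factors can be tabulated directly (a small sketch of ours, not part of the paper):

```python
# Evaluate the bounds Wit(L, D) and Valid(L, D, C) from Theorem 1 for a
# few small parameter values; their product dominates the running time.
def wit(L, D):
    return (L * (D + 1)) ** (L * D) * D ** D * L

def valid(L, D, C):
    return L ** 3 * D ** 2 * C ** 2

for L, D, C in ((2, 2, 10), (3, 2, 10), (3, 3, 10)):
    print(L, D, wit(L, D), valid(L, D, C))
```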
Let A = (D , a 0 , (P L , P C )) be the program of interest and F L be the set of unsafe states in the leader. Assume we are given a computation ρ showing that P L can reach a state in F L when interacting with a number of contributors. We explain the main ideas to find an efficient representation for ρ that still allows for the reconstruction of a similar computation. To simplify the presentation, we assume the leader never writes !a and immediately reads ?a (same value). If this is the case, the read can be replaced by ε.
In a first step, we delete most of the moves in ρ that were carried out by the contributors. We only keep first writes. For each value a, this is the write transitions fw (a) = c !a − → c ′ where a is written by a contributor for the first time. The reason we can omit subsequent writes of a is the following: If fw (a) is carried out by contributor P 1 , we can assume that there is an arbitrary number of other contributors that all mimicked the behavior of P 1 . This means whenever P 1 did a transition, they copycatted it right away. Hence, there are arbitrarily many contributors pending to write a. Phrased differently, the symbol a is available for the leader whenever P L needs to read it. The idea goes back to the Copycat Lemma stated in [17]. The reads of the contributors are omitted as well. We will make sure they can be served by the first writes and the moves done by P L .
After the deletion, we are left with a shorter expression ρ'. We turn it into a word w over the alphabet Q_L ∪ D_⊥ ∪ D̄, where D_⊥ = D ∪ {⊥} and D̄ = {ā | a ∈ D}.
Each transition c !a/?a/ε→_L c' in ρ' that is due to the leader moving from q to q' is mapped (i) to q.a.q' if it is a write and (ii) to q.⊥.q' otherwise. A first write fw(a) = c !a→ c' of a contributor is mapped to ā. We may assume that the resulting word w is of the form w = w1.w2 with w1 ∈ ((Q_L.D_⊥)*.D̄)* and w2 ∈ (Q_L.D_⊥)*.F_L. Note that w can still be of unbounded length.
In order to find a witness of bounded length, we compress w1 and w2 to w'1 and w'2. Between two first writes ā and b̄ in w1, the leader can perform an unbounded number of transitions, represented by a word in (Q_L.D_⊥)*. Hence, there are states q ∈ Q_L repeating between ā and b̄. We contract the word between the first and the last occurrence of q into just a single state q. This state now represents a loop in P_L. Since there are L states in the leader, this bounds the number of contractions. Furthermore, we know that the number of first writes is bounded by D: each symbol can be written for the first time at most once. Thus, the compressed string w'1 is a word in the language ((Q_L.D_⊥)^{≤L}.D̄)^{≤D}. The word w2 is of the form w2 = q.u for a state q ∈ Q_L and a word u. We truncate the word u and only keep the state q. Then we know that there is a computation leading from q to a state in F_L where P_L can potentially write any symbol but read only those symbols which occurred as a first write in w'1. Altogether, we are left with a word of bounded length.
Definition 2. The set of witness candidates is
E = ((Q_L.D_⊥)^{≤L}.D̄)^{≤D}.Q_L.
Before we elaborate on the precise relation between witness candidates and computations, we turn to an example. It shows how an actual computation is compressed to a witness candidate following the above steps.
Example 3. Consider the program A = (D , a 0 , (P L , P C )) with domain D , leader thread P L , and contributor thread P C given in Figure 1. We follow a computation in A 2 that reaches the unsafe state q 4 of the leader. Note that the transitions are labeled by L and C, depending on whether the leader or a contributor moved.
(q0, p0, p0, a0) !a→_C (q0, p1, p0, a) ?a→_L (q1, p1, p0, a) !b→_L (q2, p1, p0, b) ?b→_C (q2, p1, p2, b) !c→_C (q2, p1, p2, c) ?c→_L (q3, p1, p2, c) !a→_C (q3, p1, p2, a) ?a→_L (q4, p1, p2, a).
We construct a witness candidate out of the computation. To this end, we only keep the first writes of the contributors. These are the write !a in the first transition and the write !c in the fifth transition. They will be represented in the witness candidate by the symbols ā, c̄ ∈ D̄. Now we map the transitions of the leader to words. Writes are preserved, reads are mapped to ⊥. Then we obtain the witness candidate

ā . q0 . ⊥ . q1 . b . c̄ . q2.
Note that we omit the last two transitions of the leader. The reason is as follows. After the first writec, the leader is in state q 2 . From this state, the leader can reach q 4 while only reading from first writes that have already appeared in the candidate, namely a and c. Hence, we can truncate the witness candidate at that point and do not have to keep the remaining computation to q 4 .
To characterize computations in terms of witness candidates, we define the notion of validity. This needs some notation. Consider a word w = w1 . . . wℓ over some alphabet Γ. For i ∈ [1..ℓ], we set w[i] = wi and w[1..i] = w1 . . . wi. If Γ' ⊆ Γ, we use w↓Γ' for the projection of w to the letters in Γ'.

[Figure 1: the leader P_L with states q0, . . . , q4 and transitions labeled ?a, !b, ε, ?c, ?a, and the contributor P_C with states p0, p1, p2 and transitions labeled !a, !a, ?b, !c.]

Consider a witness candidate w ∈ E and let i ∈ [1..|w|]. We use D̄(w, i) for the set of all first writes that occurred in w up to position i. Formally, we define it to be D̄(w, i) = {a | ā is a letter in w[1..i]↓D̄}. We abbreviate D̄(w, |w|) as D̄(w). Let q ∈ Q_L and S ⊆ D. Recall that the state represents a loop in P_L. The set of all letters written within a loop from q to q when reading only symbols from
S is Loop(q, S) = {a | a ∈ D and ∃v1, v2 ∈ (W(D) ∪ R(S))* : q v1!av2→_L q}.
The definition of validity is given next. Technical details of the three requirements are made precise in the text below. (1) If w↓D̄ = c̄1 . . . c̄ℓ, then the c̄i are pairwise different.
(2) Let w↓(Q_L ∪ D_⊥) = q1 a1 q2 a2 . . . aℓ qℓ+1. If ai ∈ D, then qi !ai→_L qi+1 ∈ δ_L is a write transition of P_L. If ai = ⊥, then we have an ε-transition qi ε→_L qi+1.
Alternatively, there is a read qi ?a→_L qi+1 of a symbol a ∈ D̄(w, pos(ai)) that already occurred within a first write (the leader does not read its own writes). Here, we use pos(ai) to access the position of ai in w. State q1 = q0_L is initial. There is a run from qℓ+1 to a state qf ∈ F_L. During this run, reading is restricted to symbols that occurred as first writes in w. Formally, there is a word v ∈ (W(D) ∪ R(D̄(w)))* leading to an unsafe state qf. We have qℓ+1 v→_L qf.
(3) For each prefix vā of w with ā ∈ D̄ there is a computation q0_C u!a→_C q on P_C so that the reads in u can be obtained from v. Formally, let u' = u↓R(D). Then there is an embedding of u' into v, a monotone map μ : [1..|u'|] → [1..|v|] that satisfies the following. Let u'[i] = ?a with a ∈ D. The read is served in one of the following three ways. We may have v[μ(i)] = a, which corresponds to a write of a by P_L. Alternatively, v[μ(i)] = q ∈ Q_L and a ∈ Loop(q, D̄(w, μ(i))). This amounts to reading from a leader's write that was executed in a loop. Finally, we may have a ∈ D̄(w, μ(i)), corresponding to reading from another contributor.
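The first two requirements are easy to mechanize. The following toy checker is our own sketch: the token encoding is ours, requirement (3) as well as the final run to F_L are omitted, and the leader transitions are our reconstruction of Figure 1.

```python
# Toy check of requirement (1) and the per-transition part of (2).
# Token encoding (ours): ('fw', a) for a first write a-bar, ('q', s) for a
# leader state, ('w', a) for a leader write, ('bot',) for the symbol ⊥.
def check(candidate, delta_L):
    fws = [t[1] for t in candidate if t[0] == 'fw']
    if len(fws) != len(set(fws)):
        return False                       # (1): first writes are unique
    seen, prev, pending = set(), None, None
    for t in candidate:
        if t[0] == 'fw':
            seen.add(t[1])                 # symbol now available for reads
        elif t[0] in ('w', 'bot'):
            pending = t                    # symbol between two leader states
        else:                              # t = ('q', s)
            if prev is not None and pending is not None:
                def fits(tr):
                    q, (kind, a), q2 = tr
                    if (q, q2) != (prev, t[1]):
                        return False
                    if pending[0] == 'w':  # explicit write of the leader
                        return kind == 'w' and a == pending[1]
                    return kind == 'eps' or (kind == 'r' and a in seen)
                if not any(fits(tr) for tr in delta_L):
                    return False           # (2): no matching transition
            prev, pending = t[1], None
    return True

# Leader transitions as we read them off Figure 1 (our reconstruction):
delta_L = {('q0', ('r', 'a'), 'q1'), ('q1', ('w', 'b'), 'q2'),
           ('q1', ('eps', None), 'q2'), ('q2', ('r', 'c'), 'q3'),
           ('q3', ('r', 'a'), 'q4')}
w = [('fw', 'a'), ('q', 'q0'), ('bot',), ('q', 'q1'),
     ('w', 'b'), ('fw', 'c'), ('q', 'q2')]
print(check(w, delta_L))   # True for the candidate of Example 3
```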
Our goal is to prove that a valid witness candidate exists if and only if there is a computation leading to an unsafe state. Before we state the corresponding lemma, we provide some intuition for the three requirements along an example.
Example 5. Reconsider the program A from Figure 1. We elaborate on why the three requirements for validity are essential. To this end, we present three witness candidates, each violating one of the requirements. Thus, these candidates cannot correspond to an actual computation of the program.
The witness candidate w1 = ā . q0 . ⊥ . q1 . b . ā . q2 clearly violates requirement (1) due to the repetition of ā. Since first writes are unique, there cannot exist a computation of program A following candidate w1.
Requirement (2) asks for a proper run of the leader thread P_L. Hence, the witness candidate w2 = ā . q0 . a . q1 . b . c̄ . q2 violates the requirement although it satisfies (1). The subword q0 . a . q1 of w2 encodes that the leader should take the transition q0 !a→_L q1. But this transition does not exist in P_L. Consequently, there is no computation of A which corresponds to the witness candidate w2.
For requirement (3), consider the candidate w3 = ā . q0 . ⊥ . q1 . ⊥ . c̄ . q2. It clearly satisfies (1). Requirement (2) is also fulfilled. In fact, the subwords encoding transitions of the leader are q0 . ⊥ . q1 and q1 . ⊥ . q2. The first subword corresponds to the transition q0 ?a→_L q1, which can be taken since a already appeared as a first write in w3. The second subword refers to the transition q1 ε→_L q2. To explain that w3 does not satisfy requirement (3), we show that c cannot be provided as a first write. To this end, assume that w3 satisfies (3). Then,
for the prefix v.c̄ with v = ā . q0 . ⊥ . q1 . ⊥, there is a computation of the form p0 u!c→_C p2.
The reads in u are either first writes in v or writes provided by the leader (potentially in loops). Symbol b is not provided as such: It is neither a first write in v nor a symbol written by the leader (in a loop) along v. However, a computation u leading to state p 2 in P C needs to read b once. Hence, such a computation does not exist and c cannot be provided as a first write.
The witness candidate w = ā . q0 . ⊥ . q1 . b . c̄ . q2 from Example 3 satisfies all the requirements. In particular, (3) is fulfilled since b is written by the leader in the transition q1 !b→ q2. Hence, in this case, c can be provided as a first write. Lemma 6. There is a t ∈ N so that c0 →*_{A^t} c with c ∈ C_f if and only if there is a valid witness candidate w ∈ E.
Our algorithm iterates over all witness candidates w ∈ E and tests whether w is valid. The number of candidates Wit (L, D) is (L · (D + 1)) L·D · D D · L. This is due to the fact that we can force a witness candidate to have maximum length by inserting padding symbols. Hence, the number of candidates constitutes the first factor of the complexity estimation stated in Theorem 1. The polynomial factor Valid (L, D, C) is due to the following lemma.
Lemma 7. Validity of w ∈ E can be checked in time O(L 3 · D 2 · C 2 ).
Practical Algorithm We improve the above algorithm so that it should work well on practical instances. The idea is to factorize the leader along its strongly connected components (SCCs), the number of which is assumed to be small in real programs. Technically, our improved algorithm works with valid SCC-witnesses. They symbolically represent SCCs rather than loops in the leader. To state the complexity, we first define the straight line depth, the number of SCCs the leader may visit during a computation. The definition needs a graph construction.
Let V ⊆ D̄^{≤D} contain only words that do not repeat letters. Take an element r = c̄1 . . . c̄ℓ ∈ V and let i ∈ [0..ℓ]. By P_L↓_i we denote the automaton obtained from P_L by removing all transitions that read a value outside {c1, . . . , ci}. Let SCC(P_L↓_i) denote the set of all SCCs in this automaton. We construct the directed graph G(P_L, r) as follows. The vertices are the SCCs of all P_L↓_i where i ∈ [0..ℓ]. There is an edge between S, S' ∈ SCC(P_L↓_i) if there are states q ∈ S, q' ∈ S' with q ?a/!a/ε→ q' in P_L↓_i. If S ∈ SCC(P_L↓_{i−1}) and S' ∈ SCC(P_L↓_i),
we only get an edge if we can get from S to S ′ by reading c i . Note that the resulting graph is acyclic. The depth d(r) of P L relative to r is the length of the longest path in G(P L , r). The straight line depth is d = max{d(r) | r ∈ V}. The number of SCCs s is the size of SCC(P L ↓ 0 ). With these values at hand, the number of SCC-witness candidates (the definition of which can be found in Appendix A) can be bounded by Wit SCC (s, D, d) ≤ (s · (D + 1)) d · D D · 2 D+d . The time needed to test whether a candidate is valid is For this algorithm, what matters is that the leader's state space is strongly connected. The number of states has limited impact on the runtime.
Valid SCC (L, D, C, d) = L 2 · D · C 2 · d 2 .
Lower bound We prove that the algorithm from Theorem 1 is only a rootfactor away from being optimal: A 2 o( √ L·D·log(L·D)) -time algorithm for LCR would contradict ETH. We achieve the lower bound by a reduction from k × k Clique, the problem of finding a clique of size k in a graph the vertices of which are elements of a k × k matrix. Moreover, the clique has to contain one vertex from each row. Unless ETH fails, the problem cannot be solved in time 2 o(k·log(k)) [38].
Technically, we construct from an instance (G, k) of k × k Clique an instance (A = (D , a 0 , (P L , P C )), F L ) of LCR such that D = O(k) and L = O(k). Furthermore, we show that G contains the desired clique of size k if and only if there is a t ∈ N such that c 0 → * A t c with c ∈ C f . Suppose we had an algorithm for LCR running in time 2 o( √ L·D·log(L·D)) . Combined with the reduction, this would yield an algorithm for k × k Clique with runtime 2 o( √ k 2 ·log(k 2 )) = 2 o(k·log k) . But unless the exponential time hypothesis fails, such an algorithm cannot exist. We assume that the vertices V of G are given by tuples (i, j) with i, j ∈ [1..k], where i denotes the row and j denotes the column in the matrix. In the reduction, we need the leader and the contributors to communicate on the vertices of G. However, we cannot store tuples (i, j) in the memory as this would cause a quadratic blow-up D = O(k 2 ). Instead, we communicate a vertex (i, j) as a string row(i). col(j). We distinguish between row-and column-symbols to avoid stuttering, the repeated reading of the same symbol. With this, it cannot happen that a thread reads a row-symbol twice and takes it for a column.
The program starts its computation with each contributor choosing a vertex (i, j) to store. For simplicity, we denote a contributor storing the vertex (i, j) by P (i,j) . Note that there can be copies of P (i,j) .
Since there are arbitrarily many contributors, the chosen vertices are only a superset of the clique we want to find. To cut away the false vertices, the leader P L guesses for each row the vertex belonging to the clique. Contributors storing other vertices than the guessed ones will be switched off bit by bit. To this end, the program performs for each i ∈ [1..k] the following steps: If (i, j i ) is the vertex of interest, P L first writes row(i) to the memory. Each contributor that is still active reads the symbol and moves on for one state. Then P L communicates the column by writing col(j i ). Again, the active contributors P (i ′ ,j ′ ) read.
Upon transmitting (i, j i ), the contributors react in one of the following three ways: (1) If i ′ = i, the contributor P (i ′ ,j ′ ) stores a vertex of a different row. The computation in P (i ′ ,j ′ ) can only go on if (i ′ , j ′ ) is connected to (i, j i ) in G. Otherwise it will stop. (2) If i ′ = i and j ′ = j i , then P (i ′ ,j ′ ) stores exactly the vertex guessed by P L . In this case, P (i ′ ,j ′ ) can continue its computation. (3) If i ′ = i and j ′ = j, thread P (i ′ ,j ′ ) stores a different vertex from row i. The contributor has to stop.
After k such rounds, there are only contributors left that store vertices guessed by P L . Furthermore, each two of these vertices are connected. Hence, they form a clique. To transmit this information to P L , each P (i,ji) writes # i to the memory, a special symbol for row i. After P L has read the string # 1 . . . # k , it moves to its final state. A formal construction is given in Appendix A.
Note that the size O(k) of the data domain cannot be avoided, even if we encoded the row and column symbols in binary. The reason is that P L needs a confirmation of k contributors that were not stopped during the guessing and terminated correctly. Since contributors do not have final states, we need to transmit this information in the form of k different memory symbols.
Absence of a Polynomial Kernel A kernelization of a parameterized problem is a compression algorithm. Given an instance, it returns an equivalent instance the size of which is bounded by a function only in the parameter. From an algorithmic perspective, kernels put a bound on the number of hard instances. Indeed, the search for small kernels is a key interest in algorithmics, similar to the search for FPT-algorithms. It can be shown that kernels exist if and only if a problem admits an FPT-algorithm [12].
Let Q be a parameterized problem. A kernelization of Q is an algorithm that given an instance (B, k), runs in polynomial time in B and k, and outputs an equivalent instance (B ′ , k ′ ) such that |B ′ | + k ′ ≤ g(k). Here, g is a computable function. If g is a polynomial, we say that Q admits a polynomial kernel.
Unfortunately, for many problems the community failed to come up with polynomial kernels. This lead to the contrary approach, namely disproving their existence [27,5,6]. The absence of a polynomial kernel constitutes an exponential lower bound on the number of hard instances. Like computational hardness results, such a bound is seen as an indication of general hardness of the problem.
Technically, the existence of a polynomial kernel for the problem of interest is shown to imply NP ⊆ coNP/poly. However, the inclusion is considered unlikely as it would cause a collapse of the polynomial hierarchy to the third level [41].
In order to link the existence of a polynomial kernel for LCR(D, L) with the above inclusion, we follow the framework developed in [6]. Let Γ be an alphabet. A polynomial equivalence relation is an equivalence relation R on Γ * with the following properties: Given x, y ∈ Γ * , it can be decided in time polynomial in |x| + |y| whether (x, y) ∈ R. Moreover, for n ∈ N there are at most polynomially many equivalence classes in R restricted to Γ ≤n .
The key tool for proving kernel lower bounds are cross-compositions. Let L ⊆ Γ * be a language and Q ⊆ Γ * × N be a parameterized language. We say that L cross-composes into Q if there exists a polynomial equivalence relation R and an algorithm C, together called the cross-composition, with the following properties: C takes as input ϕ 1 , . . . , ϕ I ∈ Γ * , all equivalent under R. It computes in time polynomial in I ℓ=1 |ϕ ℓ | a string (y, k) ∈ Γ * × N such that (y, k) ∈ Q if and only if there is an ℓ ∈ [1..I] with ϕ ℓ ∈ L. Furthermore, parameter k is bounded by p(max ℓ∈ [1..I] |ϕ ℓ | + log(I)), where p is a polynomial.
It was shown in [6] that a cross-composition of any NP-hard language into a parameterized language Q prohibits the existence of a polynomial kernel for Q unless NP ⊆ coNP/poly. In order to make use of this result, we show how to cross-compose 3-SAT into LCR(D, L). This yields the following: The difficulty in coming up with a cross-composition is the restriction on the size of the parameters. In our case, this affects D and L: Both parameters are not allowed to depend polynomially on I, the number of given 3-SAT-instances. We resolve the polynomial dependence by encoding the choice of such an instance into the contributors via a binary tree.
Proof (Idea). Assume some encoding of Boolean formulas as strings over a finite alphabet. We use the polynomial equivalence relation R defined as follows: Two strings ϕ and ψ are equivalent under R if both encode 3-SAT-instances, and the numbers of clauses and variables coincide.
Let the given 3-SAT-instances be ϕ 1 , . . . , ϕ I . Every two of them are equivalent under R. This means all ϕ ℓ have the same number of clauses m and use the same set of variables {x 1 , . . . , x n }. We assume that ϕ ℓ = C ℓ 1 ∧ · · · ∧ C ℓ m . We construct a program proceeding in three phases. First, it chooses an instance ϕ ℓ , then it guesses an evaluation for all variables, and in the third phase it verifies that the evaluation satisfies ϕ ℓ . While the second and the third phase do not cause a dependence of the parameters on I, the first phase does. It is not possible to guess a number ℓ ∈ [1..I] and communicate it via the memory as this would provoke a polynomial dependence of D on I.
To implement the first phase without a polynomial dependence, we transmit the indices of the 3-SAT-instances in binary. The leader guesses and writes tuples (u 1 , 1), . . . , (u log(I) , log(I)) with u ℓ ∈ {0, 1} to the memory. This amounts to choosing an instance ϕ ℓ with binary representation bin(ℓ) = u 1 . . . u log(I) .
It is the contributors' task to store this choice. Each time the leader writes a tuple (u i , i), the contributors read and branch either to the left, if u i = 0, or to the right, if u i = 1. Hence, in the first phase, the contributors are binary trees with I leaves, each leaf storing the index of an instance ϕ ℓ . Since we did not assume that I is a power of 2, there may be computations arriving at leaves that do not represent proper indices. In this case, the computation deadlocks.
The size of D and P L in the first phase is O(log(I)). Note that this satisfies the size-restrictions of a cross-composition.
For guessing the evaluation in the second phase, the program communicates on tuples (x i , v) with i ∈ [1..n] and v ∈ {0, 1}. The leader guesses such a tuple for each variable and writes it to the memory. Any participating contributor is free to read one of these. After reading, it stores the variable and the evaluation.
In the third phase, the satisfiability check is performed as follows: Each contributor that is still active has stored in its current state the chosen instance ϕ ℓ , a variable x i , and its evaluation v i . Assume that x i when evaluated to v i satisfies C ℓ j , the j-th clause of ϕ ℓ . Then the contributor loops in its current state while writing the symbol # j . The leader waits to read the string # 1 . . . # m . If P L succeeds, we are sure that the m clauses of ϕ ℓ were satisfied by the chosen evaluation. Thus, ϕ ℓ is satisfiable and P L moves to its final state. For details of the construction and a proof of correctness, we refer to Appendix A.
⊓ ⊔
Parameterization by Contributors
The size of the contributors C has substantial influence on the complexity of LCR.
We show that the problem can be solved in time O*(2^C) via dynamic programming. Moreover, we present a matching lower bound proving it unlikely that LCR can be solved in time O*((2 − δ)^C), for any δ > 0. The result is obtained by a reduction from Set Cover. Finally, we prove the absence of a polynomial kernel.
Upper Bound Our algorithm is based on dynamic programming. Intuitively, we cut a computation of the program along the states reached by the contributors. To this end, we keep a table with an entry for each subset of the contributors' states. The entry of set S ⊆ Q C contains those states of the leader that are reachable under a computation where the behavior of the contributors is limited to S. We fill the table by a dynamic programming procedure and check in the end whether a final state of the leader occurs in an entry. The result is as follows.
Theorem 11. LCR can be solved in time O(2^C · C^4 · L^2 · D^2).
To define the table, we first need a compact way of representing computations that allows for fast iteration. The observation is that keeping one set of states for all contributors suffices. Let S ⊆ Q C be the set of states reachable by the contributors in a given computation. By the Copycat Lemma [17], we can assume for each q ∈ S an arbitrary number of contributors that are currently in q. This means that we do not have to distinguish between different contributor instances.
Formally, we reduce the search space to V = Q_L × D × P(Q_C). Instead of explicit configurations, we consider tuples (q, a, S), where q ∈ Q_L, a ∈ D, and S ⊆ Q_C. Between these tuples, we define an edge relation E. If P_L writes a ∈ D with a transition q −!a→ q′, we get (q, b, S) →_E (q′, a, S) for each b ∈ D and S ⊆ Q_C. Reads of the leader are similar. Contributors also change the memory but saturate the set S instead of changing the state: if there is a transition p −!a→ p′ in P_C with p ∈ S, then (q, b, S) →_E (q, a, S ∪ {p′}) for each b ∈ D and q ∈ Q_L. Reads are handled similarly.
The set V together with the relation E forms a finite directed graph G = (V, E). We call the node v_0 = (q^0_L, a_0, {q^0_C}) the initial node. Computations are represented by paths in G starting in v_0. Hence, we reduced LCR to the problem of checking whether the set of nodes F_L × D × P(Q_C) is reachable from v_0 in G.

Lemma 12. There is a t ∈ N so that c_0 →*_{A_t} c with c ∈ C_f if and only if there is a path in G from v_0 to a node in F_L × D × P(Q_C).
Before we elaborate on solving reachability on G, we turn to an example. It shows how G is constructed from a program and illustrates Lemma 12.

Example 13. We consider the program A = (D, a_0, (P_L, P_C)) from Figure 2. The nodes of the corresponding graph G are given by V = Q_L × D × P({p_0, p_1, p_2}). Its edges E are constructed following the above rules. For instance, we get an edge (q_1, a, {p_0}) →_E (q_1, a, {p_0, p_1}) since P_C has a read transition p_0 −?a→ p_1. Intuitively, the edge describes that currently, the leader is in state q_1, the memory holds a, and an arbitrary number of contributors is waiting in p_0. Then, some of these read a and move to p_1. Hence, we might assume an arbitrary number of contributors in the states p_0 and p_1.
Figure 2: The example program A, consisting of the leader P_L with states q_0, q_1, q_2, q_3 and transitions !a, ?b, ?c, and the contributor P_C with states p_0, p_1, p_2 and transitions ?a, !c, ?a, !b.
The complete graph G is presented in Figure 3. For the purpose of readability, we only show the nodes reachable from the initial node v 0 = (q 0 , a 0 , {p 0 }). Moreover, we omit self-loops and we present the graph as a collection of subgraphs. The latter means that for each subset S of P({p 0 , p 1 , p 2 }), we consider the induced subgraph G[Q L ×D ×{S}]. It contains the set of nodes Q L ×D ×{S} and all edges that start and end in this set. Note that we omit the last component from a node (q, a, S) in G[Q L × D × {S}] since it is clear from the context. The induced subgraphs are connected by edges that saturate S.
The red marked nodes are those which contain the unsafe state q_3 of the leader. Consider a path from v_0 to one of these nodes. It starts in the subgraph G[Q_L × D × {{p_0}}]. To reach one of the red nodes, the path has to traverse via G[Q_L × D × {{p_0, p_1}}] to G[Q_L × D × {Q_C}] or via G[Q_L × D × {{p_0, p_2}}].
Phrased differently, the states of the contributors need to be saturated two times along the path. This means that in an actual computation, there must be contributors in p_0, p_1, and p_2. These can then provide the symbols b and c which are needed by the leader to reach the state q_3.

Constructing G for a program and solving reachability takes time O*(4^C) [10]. Hence, we have to solve reachability without constructing G explicitly. Our algorithm computes a table T which admits a recurrence relation that simplifies the reachability query: Instead of solving reachability directly on G, we can restrict to so-called slices of G. These are subgraphs of polynomial size where reachability queries can be decided efficiently.
Figure 3: The graph G for the program of Figure 2, presented as the induced subgraphs G[Q_L × D × {S}] for the reachable sets S ∈ {{p_0}, {p_0, p_1}, {p_0, p_2}, Q_C}.
We define the table T. For each set S ⊆ Q_C, we have an entry T[S] given by:

T[S] = {(q, a) ∈ Q_L × D | v_0 →*_E (q, a, S)}.

Intuitively, T[S] contains all nodes from the induced subgraph G[Q_L × D × {S}] that are reachable from the initial node v_0.
Assume we have already computed T. By Lemma 12 we get: there is a t ∈ N so that c_0 →*_{A_t} c ∈ C_f if and only if there is a set S ⊆ Q_C such that T[S] ∩ (F_L × D) ≠ ∅. The latter can be checked in time O(2^C · L^2 · D^2) as there are 2^C candidates for a set S.
It remains to compute the table. Our goal is to employ a dynamic programming based on a recurrence relation over T. To formulate the relation, we need the notion of slices of G. Let W ⊆ Q_C be a subset and p ∈ Q_C \ W be a state. We denote by S the union S = W ∪ {p}. The slice G_{W,S} is the induced subgraph G[Q_L × D × {W, S}]. We denote its set of edges by E_{W,S}.
The main idea of the recurrence relation is saturation. When traversing a path π in G, the set of contributor states gets saturated over time. Assume we cut π each time after a new state gets added. Then we obtain subpaths, each being a path in a slice: If p ∈ Q C gets added to W ⊆ Q C , the corresponding subpath is in G W,W ∪{p} . This means that for a set S ⊆ Q C , the entry T [S] contains those nodes that are reachable from T [S \{p}] in the slice G S\{p},S , for some state p ∈ S.
Formally, we define the set R(W, S) for each W ⊆ Q_C, p ∈ Q_C \ W, and S = W ∪ {p}. These sets contain the nodes reachable from T[W] in G_{W,S}:

R(W, S) = {(q, a) ∈ Q_L × D | ∃(q′, a′) ∈ T[W] with (q′, a′, W) →*_{E_{W,S}} (q, a, S)}.
Lemma 14. Table T admits the recurrence relation T[S] = ∪_{p∈S} R(S \{p}, S).
We illustrate the lemma and the introduced notions on an example. Afterwards, we show how to compute the table T by exploiting the recurrence relation.
Example 15. Reconsider the program given in Figure 2. The table T has eight entries, one for each subset of Q_C. The entries that are non-empty can be seen in the graph of Figure 3. Each of the subgraphs contains exactly those nodes that are reachable from v_0. For instance, T[{p_0, p_1}] = {(q_1, a), (q_1, c)}.
Let W = {p_0} and S = {p_0, p_1}. Then, the slice G_{W,S} is shown in the figure as the blue highlighted area. Note that it also contains the edge from (q_1, a, {p_0}) to (q_1, a, {p_0, p_1}), leading from G[Q_L × D × {{p_0}}] to G[Q_L × D × {{p_0, p_1}}].
The set R(W, S) contains those nodes in G[Q L × D × {S}] that are reachable from T [W ] in the slice G W,S . According to the graph, these are (q 1 , a, S) and (q 1 , c, S) and hence we get T [S] = R(W, S).
In general, not all nodes in T[S] are reachable from a single T[W]. But if a node is reachable, then it is reachable from some set T[S \{p}] with p ∈ S.

We apply the recurrence relation in a bottom-up dynamic programming to fill the table T. Let S ⊆ Q_C be a subset and assume we already know T[S \{p}], for each p ∈ S. Then, for a fixed p, we compute R(S \{p}, S) by a fixed-point iteration on the slice G_{S\{p},S}. The number of nodes in the slice is O(L · D). Hence, the iteration takes time at most O(L^2 · D^2). It is left to construct G_{S\{p},S}. We state the time needed in the following lemma. The proof is postponed so as to first finish the complexity estimation of Theorem 11.

Lemma 16. Slice G_{S\{p},S} can be constructed in time O(C^3 · L^2 · D^2).
Wrapping up, we need O(C^3 · L^2 · D^2) time for computing a set R(S \{p}, S). Due to the recurrence relation of Lemma 14, we have to compute at most C sets R(S \{p}, S) for a given S ⊆ Q_C. Hence, an entry T[S] can be computed in time O(C^4 · L^2 · D^2). The estimation also covers the base case S = {q^0_C}, where T[S] can be computed by a fixed-point iteration on the induced subgraph G[Q_L × D × {S}]. Since the table T has 2^C entries, the complexity estimation of Theorem 11 follows. It is left to prove Lemma 16.
Proof. The slice G_{S\{p},S} consists of the two subgraphs G_{S\{p}} = G[Q_L × D × {S \{p}}] and G_S = G[Q_L × D × {S}], and the edges leading from G_{S\{p}} to G_S. We elaborate on how to construct G_S. The construction of G_{S\{p}} is similar.
First, we write down the nodes of G_S. This can be done in time O(L · D). Edges in the graph are either induced by transitions of the leader or by the contributor. The former can be added in time O(|δ_L| · D) = O(L^2 · D^2) since a single transition of P_L may lead to D edges. To add the latter edges, we browse δ_C for transitions of the form s −!a→ s′ with s, s′ ∈ S. Each such transition may induce L · D edges. Adding them takes time O(|δ_C| · C · L · D) = O(C^3 · L · D^2) since we have to test membership of s and s′ in S. Note that we can omit transitions s −?a→ s′ with s, s′ ∈ S as their induced edges are self-loops in G_S.

To complete the construction, we add the edges from G_{S\{p}} to G_S. These are induced by transitions r −?a/!a→ p ∈ δ_C with r ∈ S \{p}. Since each of these may again lead to L · D different edges, adding all of them takes time O(C^3 · L · D^2). In total, we estimate the time for the construction by O(C^3 · L^2 · D^2).
⊓ ⊔
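For concreteness, the following Python sketch (ours; the interface, the data layout, and the explicit subset enumeration are illustrative assumptions; ε-transitions are omitted, and no care is taken to match the stated running time) implements the table T and the recurrence of Lemma 14. Transitions are tuples (q, op, a, q2) with op in {'!', '?'}.

from itertools import combinations

def lcr_reachable(QC, a0, qL0, qC0, deltaL, deltaC, FL):

    def saturate(nodes, S):
        # Fixed-point iteration: close a set of (leader state, memory)
        # pairs under leader moves and contributor moves inside S.
        nodes = set(nodes)
        while True:
            new = set()
            for (q, a) in nodes:
                for (p, op, b, p2) in deltaL:
                    if p == q and op == '!':
                        new.add((p2, b))              # leader writes b
                    elif p == q and op == '?' and b == a:
                        new.add((p2, a))              # leader reads a
                for (s, op, b, s2) in deltaC:
                    if s in S and s2 in S and op == '!':
                        new.add((q, b))               # contributor writes b
                    # contributor reads inside S are self-loops
            if new <= nodes:
                return nodes
            nodes |= new

    T = {frozenset([qC0]): saturate({(qL0, a0)}, frozenset([qC0]))}
    others = [p for p in QC if p != qC0]
    for k in range(1, len(others) + 1):               # bottom-up over set size
        for extra in combinations(others, k):
            S = frozenset([qC0]) | frozenset(extra)
            entry = set()
            for p in extra:                           # T[S] = union of R(S\{p}, S)
                W = S - {p}
                if W not in T:
                    continue
                seeds = set()                         # slice-crossing edges W -> S
                for (q, a) in T[W]:
                    for (s, op, b, s2) in deltaC:
                        if s in W and s2 == p:
                            if op == '!':
                                seeds.add((q, b))
                            elif op == '?' and b == a:
                                seeds.add((q, a))
                entry |= saturate(seeds, S)
            if entry:
                T[S] = entry
    return any(q in FL for nodes in T.values() for (q, _) in nodes)

Using frozensets of contributor states as table keys mirrors the entries T[S]; the seeds correspond to the slice-crossing edges induced by contributor transitions from S \{p} to p.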
Lower bound
We prove it unlikely that LCR can be solved in O*((2 − δ)^C) time, for any δ > 0. This shows that the algorithm from Section 3.2 has an optimal runtime. The lower bound is achieved by a reduction from Set Cover, one of the 21 original NP-complete problems by Karp [36]. We state its definition.
Set Cover
Input:
A family of sets F ⊆ P(U ) over a universe U , and r ∈ N.
Question: Are there sets S_1, . . . , S_r in F such that U = ∪_{i∈[1..r]} S_i?
Besides its NP-completeness, it is known that Set Cover admits an O*(2^n)-time algorithm [24], where n is the size of the universe U. However, no algorithm solving Set Cover in time O*((2 − δ)^n) for a δ > 0 is known so far. Actually, it is conjectured in [11] that such an algorithm cannot exist unless the SETH breaks.
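For comparison, the O*(2^n)-time algorithm can be phrased as a standard dynamic programming over subsets of the universe (a sketch; the function and its interface are ours):

def min_cover_size(universe, family):
    # f[S] = least number of sets from the family whose union covers
    # the subset S of the universe, computed over all 2^n subsets.
    idx = {u: i for i, u in enumerate(universe)}
    masks = [sum(1 << idx[u] for u in s) for s in family]
    INF = float('inf')
    f = [INF] * (1 << len(universe))
    f[0] = 0
    for S in range(1 << len(universe)):
        if f[S] == INF:
            continue
        for m in masks:
            if f[S] + 1 < f[S | m]:
                f[S | m] = f[S] + 1
    return f[-1]

# The instance (F, U, r) is a yes-instance iff min_cover_size(U, F) <= r.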
While a proof for the conjecture in [11] is still missing, the authors provide evidence in the form of relative hardness. They obtain lower bounds for prominent problems by tracing back to the assumed lower bound of Set Cover. These bounds were not known before since SETH is hard to apply: No suitable reductions from SAT to these problems are known so far. Hence, Set Cover can be seen as an alternative source for lower bounds whenever SETH seems out of reach. This made the problem a standard assumption for hardness [11,4,9].
To obtain the desired lower bound for LCR, we establish a polynomial time reduction from Set Cover that strictly preserves the parameter n. Formally, if (F, U, r) is an instance of Set Cover, we construct (A = (D, a_0, (P_L, P_C)), F_L), an instance of LCR where C = n + c with c a constant. Note that even a linear dependence on n is not allowed. Moreover, the instance satisfies the equivalence: there is a set cover if and only if there is a t ∈ N such that c_0 →*_{A_t} c with c ∈ C_f.

Assume we had an O*((2 − δ)^C)-time algorithm for LCR. With the reduction, this would immediately yield an O*((2 − δ)^{n+c}) = O*((2 − δ)^n)-time algorithm for Set Cover, breaking its hardness.
Proposition 17. If LCR can be solved in O*((2 − δ)^C) time for a δ > 0, then Set Cover can be solved in O*((2 − δ)^n) time.
For the proof of the proposition, we elaborate on the aforementioned reduction. The main idea is the following: We let the leader guess r sets from F . The contributors store the elements that got covered by the chosen sets. In a final communication phase, the leader verifies that it has chosen a valid cover by querying whether all elements of U have been stored by the contributors.
Leader and contributors essentially communicate over the elements of U . For guessing r sets from F , the automaton P L consists of r similar phases. Each phase starts with P L choosing an internal transition to a set S ∈ F . Once S is chosen, the leader writes a sequence of all u ∈ S to the memory.
A contributor in the program consists of C = n + 1 states: An initial state and a state for each u ∈ U . When P L writes an element u ∈ S to the memory, there is a contributor storing this element in its states by reading u. Hence, each element that got covered by S is recorded in one of the contributors.
After r rounds of guessing, the contributors hold those elements of U that are covered by the chosen sets. Now the leader verifies that it has really picked a cover of U . To this end, it needs to check whether all elements of U have been stored by the contributors. Formally, the leader can only proceed to its final state if it can read the symbols u # , for each u ∈ U . A contributor can only write u # to the memory if it stored the element u before. Hence, P L reaches its final state if and only if a valid cover of U was chosen.
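On small instances, the equivalence underlying the reduction can be checked by a brute-force simulation of this protocol (our sketch; the contributors are abstracted to the set of elements some contributor has stored):

from itertools import product

def leader_reaches_final(universe, family, r):
    for choice in product(family, repeat=r):    # leader guesses r sets
        stored = set()
        for S in choice:
            stored |= set(S)                    # leader writes each u in S;
                                                # some contributor stores it
        if stored >= set(universe):             # leader can read every u#
            return True
    return False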
Absence of a Polynomial Kernel We prove that 3-SAT can be cross-composed into LCR(C). This shows that the problem is unlikely to admit a polynomial kernel. The result is the following.

Theorem 18. LCR(C) does not admit a polynomial kernel unless NP ⊆ coNP/poly.

For the cross-composition, let ϕ_1, . . . , ϕ_I be the given 3-SAT-instances, each two equivalent under R, where R is the polynomial equivalence relation from Theorem 10. Then, each formula has the same number of clauses m and variables x_1, . . . , x_n. Let us fix the notation to be ϕ_ℓ = C^ℓ_1 ∧ · · · ∧ C^ℓ_m. The basic idea is the following. Leader P_L guesses the formula ϕ_ℓ and an evaluation for the variables. The contributors store the latter. At the end, leader and contributors verify that the chosen evaluation indeed satisfies formula ϕ_ℓ.
For guessing ϕ ℓ , the leader has a branch for each instance. Note that we can afford the size of the leader to depend on I since the cross-composition only restricts parameter C. Hence, we do not face the problem we had in Theorem 10.
Guessing the evaluation of the variables is similar to Theorem 10: The leader writes tuples (x_i, v_i) with v_i ∈ {0, 1} to the memory. The contributors store the evaluation in their states. After this guessing phase, the contributors can write the symbols #^ℓ_j, depending on whether the currently stored variable with its evaluation satisfies clause C^ℓ_j. As soon as the leader has read the complete string #^ℓ_1 . . . #^ℓ_m, it moves to its final state, showing that the evaluation satisfies all clauses of ϕ_ℓ.
Note that parameter C is of size O(n) and does not depend on I at all. Hence, the size-restrictions of a cross-composition are met.
Intractability
We show the W[1]-hardness of LCR(D) and LCR(L). Both proofs rely on a parameterized reduction from k-Clique, the problem of finding a clique of size k in a given graph. This problem is known to be W[1]-complete [14]. We state our result.

Theorem 19. LCR(D) and LCR(L) are W[1]-hard.

We first reduce k-Clique to LCR(L). To this end, we construct from an instance (G, k) of k-Clique in polynomial time an instance (A = (D, a_0, (P_L, P_C)), F_L) of LCR with L = O(k). This meets the requirements of a parameterized reduction.
Program A operates in three phases. In the first phase, the leader chooses k vertices of the graph and writes them to the memory. Formally, it writes a sequence (v 1 , 1).(v 2 , 2) . . . (v k , k) where the v i are vertices of G. During this selection, the contributors non-deterministically choose to store a suggested vertex (v i , i) in their state space.
In the second phase, the leader again writes a sequence of vertices using different symbols: (w^#_1, 1)(w^#_2, 2) . . . (w^#_k, k). Note that the vertices w_i do not have to coincide with the vertices from the first phase. It is then the contributors' task to verify that the new sequence constitutes a clique. To this end, for each i, the program does the following: If a contributor storing (v_i, i) reads the value (w^#_i, i), the computation on the contributor can only continue if w_i = v_i. If a contributor storing (v_j, j) with j ≠ i reads (w^#_i, i), the computation can only continue if v_j ≠ w_i and if there is an edge between v_j and w_i.
Finally, in the third phase, we need to ensure that there was at least one contributor storing (v_i, i) and that the above checks were all positive. To this end, a contributor that has successfully gone through the second phase and stores (v_i, i) writes the symbol #_i to the memory. The leader waits to read the sequence of symbols #_1 . . . #_k. This ensures the selection of k different vertices, where each two are adjacent.
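The checks of the second and third phase amount to the following test, shown as a Python sketch (ours; we assume the edge set of G contains both orientations of each edge):

def contributors_survive(stored, announced, edges):
    # stored:    the clique candidate {(v_1, 1), ..., (v_k, k)}
    # announced: the second sequence (w_1, 1), ..., (w_k, k) of the leader
    for (v, i) in stored:
        for (w, j) in announced:
            if j == i and w != v:
                return False        # same index: vertices must coincide
            if j != i and (v == w or (v, w) not in edges):
                return False        # different index: distinct and adjacent
    return True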
For proving W[1]-hardness of LCR(D), we reuse the above construction. However, the size of the data domain is |V | · k, where V is the set of vertices of G. Hence, it is not a parameterized reduction for parameter D. The factor |V | appears since leader and contributors communicate on the pure vertices. The main idea of the new reduction is to decrease the size of D by transmitting the vertices in binary. To this end, we add binary branching trees to the contributors that decode a binary encoding. We omit the details and refer to Appendix C.
Bounded-Stage Reachability
The bounded-stage reachability problem is a simultaneous reachability problem. It asks whether all threads of a program can reach an unsafe state when restricted to s-stage computations. These are computations where the write permission changes s times. The problem was first analyzed in [1] and shown to be NP-complete for finite state programs. We give matching upper and lower bounds in terms of fine-grained complexity and prove the absence of a polynomial kernel.
Let A = (D, a_0, (P_i)_{i∈[1..t]}) be a program.
A stage is a computation in A where only one of the threads writes. The remaining threads are restricted to reading the memory. An s-stage computation is a computation that can be split into s parts, each of which forming a stage. We state the decision problem.
Bounded-Stage Reachability (BSR)
Input:
A program A = (D, a_0, (P_i)_{i∈[1..t]}), a set C_f ⊆ C, and s ∈ N.
Question: Is there an s-stage computation c_0 →*_A c for some c ∈ C_f?
We focus on a parameterization of BSR by P, the maximum number of states of a thread, and t, the number of threads. Let it be denoted by BSR(P, t). We prove that the parameterization is FPT and present a matching lower bound. The main result in this section is the absence of a polynomial kernel for BSR(P, t). The result is technically involved and shows what makes the problem hard.
Parameterizations of BSR involving only D and s are intractable. We show that BSR remains NP-hard even if both, D and s, are constants. This proves the existence of an FPT-algorithm for those cases unlikely.
Parameterization by Number of States and Threads
We first give an algorithm for BSR, based on a product construction of automata. Then, we present a lower bound under ETH. Interestingly, the lower bound shows that we cannot avoid building the product. We conclude with proving the absence of a polynomial kernel. As before, we cross-compose from 3-SAT but now face the problem that two important parameters in the construction, P and t, are not allowed to depend polynomially on the number of 3-SAT-instances.
Upper Bound We show that BSR(P, t) is fixed-parameter tractable. The idea is to reduce to reachability on a product automaton. The automaton stores the configurations, the current writer, and counts up to the number of stages s. To this end, it has O*(P^t) many states. Details can be found in Appendix D.

Proposition 20. BSR can be solved in time O*(P^{2t}).
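The construction can be pictured as a breadth-first search over the product automaton, as in the following Python sketch (ours; we assume for simplicity that C_f is given as a product of per-thread final sets):

from collections import deque

def bsr_reachable(threads, a0, finals, s):
    # Each thread is a pair (initial state, transitions), transitions
    # being tuples (q, op, a, q2) with op in {'!', '?'}.  A product
    # state is (thread states, memory, current writer or None, stages).
    start = (tuple(q0 for q0, _ in threads), a0, None, 0)
    seen, queue = {start}, deque([start])
    while queue:
        states, mem, writer, stages = queue.popleft()
        if all(q in F for q, F in zip(states, finals)):
            return True                    # all threads reached a final state
        for i, (_, delta) in enumerate(threads):
            for (q, op, a, q2) in delta:
                if q != states[i]:
                    continue
                if op == '?':              # read: memory must hold a
                    if a != mem:
                        continue
                    succ = (states[:i] + (q2,) + states[i+1:], mem, writer, stages)
                else:                      # write: may open a new stage
                    used = stages if writer == i else stages + 1
                    if used > s:
                        continue
                    succ = (states[:i] + (q2,) + states[i+1:], a, i, used)
                if succ not in seen:
                    seen.add(succ)
                    queue.append(succ)
    return False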
Lower Bound By a reduction from k × k Clique, we show that a 2^{o(t·log(P))}-time algorithm for BSR would contradict ETH. Hence, the above algorithm is optimal.

Proposition 21. BSR cannot be solved in time 2^{o(t·log(P))} unless ETH fails.
The reduction constructs from an instance of k × k Clique an equivalent instance (A = (D, a_0, (P_i)_{i∈[1..t]}), C_f, s) of BSR. Moreover, it keeps the parameters small. We have that P = O(k^2) and t = O(k). As a consequence, a 2^{o(t·log(P))}-time algorithm for BSR would yield an algorithm for k × k Clique running in time 2^{o(k·log(k^2))} = 2^{o(k·log(k))}. But this contradicts ETH.
Proof (Idea). For the reduction, let V = [1..k] × [1..k] be the vertices of G. We define D = V ∪ {a_0} to be the domain of the memory. We want the threads to communicate on the vertices of G. For each row we introduce a reader thread P_i that is responsible for storing a particular vertex of the row. We also add one writer P_ch that is used to steer the communication between the P_i. Our program A is given by the tuple (D, a_0, ((P_i)_{i∈[1..k]}, P_ch)).
Intuitively, the program proceeds in two phases. In the first phase, each P_i non-deterministically chooses a vertex from the i-th row and stores it in its state space. This constitutes a clique candidate (1, j_1), . . . , (k, j_k) ∈ V. In the second phase, thread P_ch starts by writing a random vertex (1, j′_1) of the first row to the memory. The first thread P_1 reads (1, j′_1) from the memory and verifies that the read vertex is actually the one from the clique candidate. The computation in P_1 will deadlock if j′_1 ≠ j_1. The threads P_i with i ≠ 1 also read (1, j′_1) from the memory. They have to check whether there is an edge between the stored vertex (i, j_i) and (1, j′_1). If this fails in some P_i, the computation in that thread will also deadlock. After this procedure, the writer P_ch guesses a vertex (2, j′_2), writes it to the memory, and the verification steps repeat. In the end, after k repetitions of the procedure, we can ensure that the guessed clique candidate is indeed a clique. Formal construction and proof are given in Appendix D.
⊓ ⊔
Absence of a Polynomial Kernel
We show that BSR(P, t) does not admit a polynomial kernel. To this end, we cross-compose 3-SAT into BSR(P, t).
Theorem 22. BSR(P, t) does not admit a poly. kernel unless NP ⊆ coNP/poly.
In the present setting, coming up with a cross-composition is non-trivial. Both parameters, P and t, are not allowed to depend polynomially on the number I of given 3-SAT-instances. Hence, we cannot construct an NFA that distinguishes the I instances by branching into I different directions. This would cause a polynomial dependence of P on I. Furthermore, it is not possible to construct an NFA for each instance as this would cause such a dependence of t on I. To circumvent the problems, some deeper understanding of the model is needed.
Proof (Idea). Let ϕ_1, . . . , ϕ_I be the given 3-SAT-instances, where each two are equivalent under R, the polynomial equivalence relation of Theorem 10. Then each ϕ_ℓ has m clauses and n variables {x_1, . . . , x_n}. We assume ϕ_ℓ = C^ℓ_1 ∧ · · · ∧ C^ℓ_m. In the program that we construct, the communication is based on 4-tuples of the form (ℓ, j, i, v). Intuitively, such a tuple transports the following information: the j-th clause of instance ϕ_ℓ, C^ℓ_j, can be satisfied by variable x_i with evaluation v. Hence, our data domain is D = ([1..I] × [1..m] × [1..n] × {0, 1}) ∪ {a_0}.
For choosing and storing an evaluation of the x i , we introduce so-called variable threads P x1 , . . . , P xn . In the beginning, each P xi non-deterministically chooses an evaluation for x i and stores it in its state space.
We further introduce a writer P_w. During a computation, this thread guesses exactly m tuples (ℓ_1, 1, i_1, v_1), . . . , (ℓ_m, m, i_m, v_m) in order to satisfy m clauses of potentially different instances. Each (ℓ_j, j, i_j, v_j) is written to the memory by P_w. All variable threads then start to read the tuple. If P_{x_i} with i ≠ i_j reads it, then the thread will just move one state further since the suggested tuple does not affect the variable x_i. If P_{x_i} with i = i_j reads the tuple, the thread will only continue its computation if v_j coincides with the value that P_{x_i} guessed for x_i and, moreover, x_i with evaluation v_j satisfies clause C^{ℓ_j}_j. Now suppose the writer did exactly m steps while each variable thread did exactly m + 1 steps. This proves the satisfiability of m clauses by the chosen evaluation. But these clauses can be part of different instances: It is not ensured that the clauses were chosen from one formula ϕ_ℓ. The major difficulty of the cross-composition lies in how to ensure exactly this.
We overcome the difficulty by introducing so-called bit checkers P b , where b ∈ [1.. log(I)]. Each P b is responsible for the b-th bit of bin(ℓ), the binary representation of ℓ, where ϕ ℓ is the instance we want to satisfy. When P w writes a tuple (ℓ 1 , 1, i 1 , v 1 ) for the first time, each P b reads it and stores either 0 or 1, according to the b-th bit of bin(ℓ 1 ). After P w has written a second tuple (ℓ 2 , 2, i 2 , v 2 ), the bit checker P b tests whether the b-th bit of bin(ℓ 1 ) and bin(ℓ 2 ) coincide, otherwise it will deadlock. This will be repeated any time P w writes a new tuple to the memory.
Assume the computation does not deadlock in any of the P b . Then we can ensure that the b-th bit of bin(ℓ j ) with j ∈ [1..m] never changed during the computation. This means that bin(ℓ 1 ) = · · · = bin(ℓ m ). Hence, the writer P w has chosen clauses of just one instance ϕ ℓ . Moreover, the current evaluation satisfies the formula. Since the parameters are bounded, P ∈ O(m) and t ∈ O(n + log(I)), the construction constitutes a proper cross-composition. For a formal construction and proof, we refer to Appendix D.
⊓ ⊔

Variable threads and writer thread are needed for testing satisfiability of clauses. The need for bit checkers comes from ensuring that all clauses stem from the same formula. We illustrate the notion with an example.
Example 23. Let four formulas ϕ 1 , ϕ 2 , ϕ 3 , ϕ 4 with two clauses each be given. We show how the bit checkers are constructed. To this end, we first encode the index of the instances as binary numbers using two bits. The encoding is shown in Figure 4 on the right hand side. Note the offset by one in the encoding.
We focus on the bit checker P b1 responsible for the first bit. It is illustrated in Figure 4 on the left hand side. Note that the label ℓ = 1, ℓ = 3 refers to transitions of the form ?(ℓ, j, i, v) with ℓ either 1 or 3 and arbitrary values for i,j, and v. On reading the first of these tuples, P b1 stores the first bit of ℓ in its state space. The blue marked states store that b 1 = 0, the red states store b 1 = 1. Then, the bit checker can only continue on reading tuples (ℓ, j, i, v) where the first bit of ℓ matches the stored bit. In the case of b 1 = 0, this means that P b1 can only read tuples (ℓ, j, i, v) with ℓ either 1 or 3.
Assume the writer thread has output two tuples (ℓ 1 , 1, i 1 , v 1 ) and (ℓ 2 , 2, i 2 , v 2 ) and the bit checker P b1 has reached a last state. Since the computation did not deadlock on P b1 , we know that the first bits of ℓ 1 and ℓ 2 coincide. If the bit checker for the second bit does not deadlock as well, we get that ℓ 1 = ℓ 2 . Hence, the writer has chosen two clauses from one instance ϕ ℓ1 .
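The role of the bit checkers can be summarized by the following Python sketch (ours; it uses plain binary and ignores the offset-by-one convention of Figure 4):

def bit_checkers_pass(tuples, num_bits):
    stored = [None] * num_bits          # the bit remembered by each P_b
    for (l, j, i, v) in tuples:
        for b in range(num_bits):
            bit = (l >> b) & 1          # b-th bit of bin(l)
            if stored[b] is None:
                stored[b] = bit         # first tuple: store the bit
            elif stored[b] != bit:
                return False            # P_b deadlocks
    return True

# Clauses 1 and 2 both taken from instance 3: all checkers survive.
assert bit_checkers_pass([(3, 1, 2, 1), (3, 2, 5, 0)], num_bits=2)
assert not bit_checkers_pass([(3, 1, 2, 1), (2, 2, 5, 0)], num_bits=2)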
Intractability
We show that parameterizations of BSR involving only s and D are intractable. To this end, we prove that BSR remains NP-hard even if both parameters are constant. This is surprising as the number of stages s seems to be a powerful parameter. Introducing such a bound in simultaneous reachability lets the complexity drop from PSPACE to NP. But it is not enough to guarantee an FPT-algorithm.

Proposition 24. BSR is NP-hard, even if s and D are constants.

Proof (Idea). We give a reduction from 3-SAT to BSR that keeps both parameters constant. Let ϕ be a 3-SAT-instance with m clauses and variables x_1, . . . , x_n. We construct a program A = (D, a_0, P_1, . . . , P_n, P_v) with D = 4 different memory symbols that can only run 1-stage computations.
The program cannot communicate on literals directly, as this would cause a blow-up in parameter D. Instead, variables and evaluations are encoded in binary in the following way. Let ℓ be a literal in ϕ. It consists of a variable x_i and an evaluation v ∈ {0, 1}. The padded binary encoding bin_#(i) ∈ ({0, 1}.#)^{log(n)+1} of i is the usual binary encoding where each bit is separated by a #. The string Enc(ℓ) = v.#.bin_#(i) encodes that variable x_i has evaluation v. We need the padding symbol # to prevent the threads in A from reading the same symbol more than once. Program A communicates by passing messages of the form Enc(ℓ). To this end, we need the data domain D = {a_0, #, 0, 1}.
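A direct transcription of the encoding reads as follows (our sketch; the bit width log(n)+1 follows the definition above):

def enc(i, v, n):
    # Enc(l) = v # bin_#(i): the evaluation bit, then every bit of i,
    # each followed by the padding symbol #.
    width = n.bit_length()              # log(n) + 1 bits
    bits = format(i, '0{}b'.format(width))
    return str(v) + '#' + '#'.join(bits) + '#'

# Literal x_5 evaluated to 1, with n = 8 variables:
assert enc(5, 1, 8) == '1#0#1#0#1#'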
The program contains threads P i , i ∈ [1..n], called variable threads. Initially, these threads choose an evaluation for the variables and store it: Each P i can branch on reading a 0 and choose whether it assigns 0 or 1 to x i . Then, a verifier thread P v starts to iterate over the clauses. For each clause C, it picks a literal ℓ ∈ C that should evaluate to true and writes its encoding Enc(ℓ) to the memory. Each of the P i reads Enc(ℓ). Note that reading and writing Enc(ℓ) needs a sequence of transitions. In the construction, we ensure that all the needed states and transitions are provided. It is the task of each P i to check whether the chosen literal ℓ is conform with the chosen evaluation for x i . We distinguish two cases.
(1) If ℓ involves a variable x j with j = i, variable thread P i just continues its computation by reading the whole string Enc(ℓ).
(2) If ℓ involves x i , P i has to ensure that the stored evaluation coincides with the one sent by the verifier. To this end, P i can only continue its computation if the first bit in Enc(ℓ) shows the correct evaluation. Formally, there is only an outgoing path of transitions on Enc(x i ) if P i stored 1 as evaluation and on Enc(¬x i ) if it stored 0.
Note that each time P v picks a literal ℓ, all P i read Enc(ℓ), even if the literal involves a different variable. This means that the P i count how many literals have been seen already. This is important for correctness: The threads will only terminate if they have read a word of fixed length and did not miss a single symbol. There is no loss in the communication between P v and the P i . Now assume P v iterated through all m clauses and none of the variable threads got stuck. Then, each of them read exactly m encodings without running into a deadlock. Hence, the picked literals were all conform with the evaluation chosen by the P i . This means that a satisfying assignment for ϕ is found.
During a computation of A, the verifier P v is the only thread that has write permission. Hence, each computation of A consists of a single stage. For a formal construction, we refer to Appendix E.
⊓ ⊔
Conclusion
We have studied several parameterizations of LCR and BSR, two safety verification problems for shared memory concurrent programs. In LCR, a designated leader thread interacts with a number of equal contributor threads. The task is to decide whether the leader can reach an unsafe state. The problem BSR is a generalization of bounded context switching. A computation gets split into stages, periods where writing is restricted to one thread. Then, BSR asks whether all threads can reach a final state simultaneously during an s-stage computation.
For LCR, we identified the size of the data domain D, the size of the leader L and the size of the contributors C as parameters. Our first algorithm showed that LCR(D, L) is FPT. Then we modified the algorithm to obtain a verification procedure valuable for practical instances. The main insight was that due to a factorization along strongly connected components, the impact of L can be reduced to a polynomial factor in the time complexity. We also proved the absence of a polynomial kernel for LCR(D, L) and presented an ETH-based lower bound which shows that the upper bound is a root-factor away from being optimal.
For LCR(C) we presented a dynamic programming, running in O*(2^C) time. The algorithm is based on slice-wise reachability. This reduces a reachability problem on a large graph to reachability problems on subgraphs (slices) that are solvable in polynomial time. Moreover, we gave a tight lower bound based on Set Cover and proved the absence of a polynomial kernel.
Parameterizations different from LCR(D, L) and LCR(C) were shown to be intractable. We gave reductions from k-Clique and proved W[1]-hardness.
The parameters of interest for BSR are the maximum size of a thread P and the number of threads t. We have shown that a parameterization by both parameters is FPT and gave a matching lower bound. The main contribution was to prove it unlikely that a polynomial kernel exists for BSR(P, t). The proof relies on a technically involved cross-composition that avoids a polynomial dependence of the parameters on the number of given 3-SAT-instances.
Parameterizations involving other parameters like s or D were proven to be intractable for BSR. We gave an NP-hardness proof where s and D are constant.
Extension of the Model In this work, the considered model for programs allows the memory to consist of a single cell. We discuss whether the presented results carry over when the number of memory cells increases. Having multiple memory cells is referred to as supporting global variables. Extending the definition of programs in Section 2 to global variables is straightforward.
For the problem LCR, allowing global variables is a rather powerful mechanism. Let LCR Var denote the problem LCR where the input is a program featuring global variables. The interesting parameters for the problem are D, L, C, and v, the number of global variables. It turns out that LCR Var is PSPACE-hard, even when C is constant. One can reduce the intersection emptiness problem for finite automata to LCR Var . The reduction makes use only of the leader, contributors are not needed.
A program A with global variables can always be reduced to a program A′ with a single memory cell [25]. Roughly, the reduction constructs the leader of A′ in such a way that it can store the memory contents of A and manage contributor accesses to the memory. This means the new leader needs exponentially many states since there are D^v many possible memory valuations. The domain and the contributor of A′ are of polynomial size. In fact, we can then apply the algorithm from Section 3.1 to the program A′. The runtime depends exponentially only on the parameters D, L, and v. This shows that LCR_Var(D, L, v) is fixed-parameter tractable. It is an interesting question whether this algorithm can be improved. Moreover, it is open whether there are other parameterizations of LCR_Var that have an FPT-algorithm. A closer investigation is considered future work.
For BSR, allowing global variables also leads to PSPACE-hardness. The problem BSR Var , defined similarly to LCR Var , is PSPACE-hard already for a constant number of threads. In fact, the proof is similar to the hardness of LCR Var where only one thread is needed. To obtain an algorithm for the problem, we modify the construction from Proposition 20. The resulting product automaton then also maintains the values of the global variables. This shows membership in PSPACE. But the size of the product now also depends exponentially on D and v. The interesting question is whether we can find an algorithm that avoids an exponential dependence on one of the parameters P, t, D or v. It is a matter of future work to examine the precise complexity of the different parameterizations.
A Proofs for Section 3.1
We give the missing constructions and proofs for Section 3.1.
Proof of Lemma 6
Proof. We will first show that for each computation leading to an unsafe state, there is a corresponding valid witness candidate. To this end, assume there is a t ∈ N and a computation π = c 0 → * A t c with c ∈ C f . The computation π acts on configurations but we want to work with transitions of the leader and contributor instead. To this end, let σ be the sequence of transitions appearing in π. Without loss of generality, we assume that the last transition in σ is due to the leader. In the following we show how to construct a valid witness candidate out of the sequence σ. It is useful to assume that each transition in σ is uniquely identifiable. We use Pos(τ ) to access the position of a certain transition τ in σ. Hence, we have σ[Pos(τ )] = τ .
The first step to construct the witness candidate is collecting the first writes from σ. Identifying these is simple. One only needs to iterate over σ and mark those write transitions of the contributors that write a symbol for the first time. Then, the transitions of the contributors that are not marked are removed from σ. Moreover, each marked transition is replaced by the symbol that it writes. Formally, if a marked transition is of the form (q, !a, q′), it is replaced by ā ∈ D̄. The resulting sequence is of the form σ_1 c̄_1 σ_2 c̄_2 . . . σ_n c̄_n σ_{n+1}, where the c̄_i are the symbols written by the first writes and σ_i is the sequence of transitions performed by the leader between the first writes c̄_i and c̄_{i+1}. Note that we have c̄_i ≠ c̄_j for i ≠ j and n ≤ |D| since first writes can only be written once and there are at most |D| many of them.
In order to define a witness candidate in E = ((Q_L.D_⊥)^{≤L}.D̄)^{≤D}.Q_L, we need to cut out loops in the σ_i and map the resulting sequences to a word. We define a procedure Shrinkmap that performs these two operations. As input, it takes a tuple (α, c) where α is a sequence of transitions of the leader and c is a natural number. The procedure computes a tuple (v, ϕ) where the word v ∈ (Q_L.D_⊥)^{≤L} is obtained by cutting out the loops in α and mapping writes of a symbol a to a and reads of any symbol to ⊥. The function ϕ : {τ | τ a transition in α} → [c..|v| + c] maps the transitions of the given sequence α into the word v. It is used to recover the sequence α from v. Moreover, the constant c is needed to right-shift the map ϕ. This gets important when we append different words obtained from applying Shrinkmap. The procedure is explained in Algorithm 1.
We consecutively apply Shrinkmap. We begin with the input (σ_1, 0) and obtain the tuple (w_1, ϕ_1). In the i-th step, we run the procedure on the input (σ_i, Σ_{j∈[1..i−1]} |w_j|) and get the output (w_i, ϕ_i). We do not apply Shrinkmap
Algorithm 1 Shrinkmap
Input: (α = τ_1 . . . τ_k, c) where α is a sequence of leader transitions and c is a constant.
Output: (v, ϕ) with v ∈ (Q_L.D_⊥)^{≤L} and ϕ : {τ_1, . . . , τ_k} → [c..|v| + c].

i = 1; v = ε
while i ≤ k do
    let τ_i = (q, op, p)
    if ∃ j > i : τ_j = (q, op′, p′) then
        ∀ ℓ ∈ [i..j − 1], set ϕ(τ_ℓ) = c + |v| + 1    \\ cutting out detected loop
        i = j
    else
        if op = !b for some b ∈ D then
            v = v.q.b                                  \\ op is a write of symbol b
        else
            v = v.q.⊥                                  \\ op is a read or ε
        end if
        ϕ(τ_i) = c + |v|
        i = i + 1
    end if
end while
return (v, ϕ)
to the last sequence σ_{n+1}. Let this sequence be given by σ_{n+1} = τ_1 . . . τ_t with transition τ_1 = (q, op, q′). Then, the witness candidate is defined by w = w_1.c̄_1.w_2.c̄_2 . . . w_n.c̄_n.q ∈ E.
Moreover, we define the map ϕ to be the concatenation of the ϕ_i. Formally, ϕ : {τ | τ a transition in σ_1 . . . σ_n} → [1..|w|] with ϕ(τ) = ϕ_i(τ) if τ is a transition in σ_i.
We show that the witness w is valid. Requirement (1) is clearly satisfied since the symbols c̄_i written by the first writes are pairwise different. The second requirement is also fulfilled since we started with a proper run of the leader leading to an unsafe state q_f ∈ F_L. Formally, let w↓_{Q_L∪D_⊥} = q_0 a_0 q_1 a_1 . . . q_m a_m q. Since σ_1 . . . σ_n is a run of the leader starting in q_0 and ending in q, we get that q_0 is indeed the initial state of P_L. Moreover, the transition sequence σ_{n+1} leads from q to the state q_f, and reading in this sequence is restricted to the symbols c̄_i that were provided by the first writes. This means there is a word u ∈ (W(D) ∪ R(D̄(w)))* with q −u→_L q_f. Let i ∈ [1..m] and consider a_i. If a_i ∈ D, we know that there is a transition (q_i, !a_i, q_{i+1}). This follows from the application of Shrinkmap. Similarly, if a_i = ⊥, we get a transition of the form (q_i, ε, q_{i+1}) or (q_i, ?a, q_{i+1}). In the latter case, the read symbol a is provided by an earlier first write. This is due to the fact that the read transition appears in the computation σ_1 . . . σ_n of the leader. Formally, a ∈ D̄(w, pos(a_i)).

It is left to show that Requirement (3) is satisfied. We show that the reads of contributors that are responsible for first writes can be embedded into the witness candidate w. To this end, consider the i-th first write c̄_i and the corresponding prefix v.c̄_i of w. Since π is a computation of the system, we know there is a contributor providing c̄_i. Formally, there is a computation ρ on this contributor of the form ρ = q^0_C −u.!c_i→_C p. Let u′ = u↓_{R(D)} be the reads of u and τ^C_1 . . . τ^C_z be the read transitions along ρ. Note that |u′| = z. Our goal is to define a monotonic function µ : [1..z] → [1..|v|] that maps the reads of ρ into v.
We first identify those transitions among the τ^C_i that read a value written by the leader. Let these be τ^C_{i_1}, . . . , τ^C_{i_s}. Then, there are writes of the leader in π that serve these reads. Let τ^L_{i_j} denote the transition of the leader that writes the symbol read in τ^C_{i_j}. This is the transition of the leader (writing the correct symbol) that immediately precedes τ^C_{i_j}. We set µ(i_j) = ϕ(τ^L_{i_j}). Note that this already covers two cases of Requirement (3). If the read is served from a write of the leader that appears in w, the map µ directly points to that write. If the corresponding write stems from a loop, the map µ points to the state in w where the loop starts. This is due to the application of Shrinkmap. When loops are cut out, the procedure ensures that ϕ is changed accordingly.

Let τ^C_{j_1}, . . . , τ^C_{j_r} be the read transitions among the τ^C_i that read symbols not provided by the leader. We consider τ^C_{j_i}. Let the transition read the symbol a. Then, we need to ensure that µ maps j_i to a position in w such that a ∈ D̄(w, µ(j_i)). Moreover, we need to keep µ monotonic. The idea is to map j_i either to the position of the write transition of the leader preceding τ^C_{j_i} or to the position of the last first write before τ^C_{j_i}, depending on which of the two positions is larger. Let τ^L be the write transition of the leader that precedes τ^C_{j_i} in π. Moreover, let c̄_h be the last first write before τ^C_{j_i}. If Pos(τ^L) > |w_1.c̄_1 . . . w_h.c̄_h|, we set µ(j_i) = ϕ(τ^L). Otherwise, we set µ(j_i) = |w_1.c̄_1 . . . w_h.c̄_h|. The resulting map µ is indeed monotonic and satisfies Requirement (3).
For the other direction, we assume the existence of a valid witness w = w_1.c̄_1.w_2.c̄_2 . . . w_n.c̄_n.q ∈ E.
Our goal is to show that there is a t ∈ N and a computation c_0 →*_{A_t} c leading to a configuration c ∈ C_f. Since w is valid according to Definition 4, we get by the first requirement that the c̄_i are pairwise different. This shows that the c̄_i are unique and are thus candidates for a sequence of first writes. By Requirement (2), we obtain a computation of the leader from the w_i: formally, there are γ_1, . . . , γ_n and γ_{n+1} such that the leader can execute γ_1, . . . , γ_n consecutively along the w_i, starting in q^0_L and ending in q, and γ_{n+1} leads from q to a state q_f ∈ F_L while reading only first writes. Moreover, by Requirement (3), for each i there is a contributor computation q^0_C −α_i→_C p′_i −!c_i→_C p_i providing the i-th first write; we let u_i denote the reads of α_i and µ_i the corresponding monotonic embedding of u_i into w.

Before we construct the computation c_0 →*_{A_t} c, we need to determine the number of contributors t that are involved. Consider a first write c̄_i. Each time c_i is read, we need a contributor to provide it. Hence, we first give a bound t(i) on how often c_i needs to be provided. Summing up all the t(i) then bounds the number of involved contributors. Let

t(i) = |w| + |γ_{n+1}| + Σ_{j∈[1..n]} L · |α_j|.

Intuitively, the leader P_L can read c_i at most |w| + |γ_{n+1}| many times when executing a loop-free computation along the witness. The loops are taken care of separately. During a loop in the leader, it can read c_i at most L many times. Moreover, loops appear at most |α_j| many times for each j. The latter is true since a contributor currently performing the computation q^0_C −α_j.!c_j→ p_j for some j may need the leader to run a complete loop for each step in α_j. We set the total number of involved contributors to t = Σ_{i∈[1..n]} t(i).
We introduce some notions needed for the construction of the computation. For each i ∈ [1..n], variable x_i is used to point to a position of the word u_i. Moreover, variable x points to a position in the witness w. Initially, these variables are set to zero. The computation will involve t contributors captured in the set S. Each of these provides a certain symbol c_i. We partition S into the sets S(i) = {P_C ∈ S | P_C provides c_i}.

We construct the computation inductively, in such a way that it maintains the following invariants. Roughly, these describe that there are always enough contributors in the set S(i) and that those can execute the computation q^0_C −α_i→ p′_i to reach the write transition of c_i.
(1) If w[x] = c̄_i, we need that all contributors in S(i) have already executed q^0_C −α_i→ p′_i so that they can provide c_i whenever it is needed. To this end, all pending reads in α_i need to be served during the computation. We can ensure the latter by the invariant: If x_i ≠ |u_i| then µ_i(x_i) ≥ x. It means that whenever there are still pending reads in α_i, the currently pending read at position x_i can still be served since the current position in the witness is not further than µ_i(x_i).
(2) The number of contributors in S(i) needs to be large enough in order to provide c_i during the ongoing computation. This is ensured by the invariant |S(i)| ≥ k + Σ_{j∈[1..n]} L · k_j, where k = |w| + |γ_{n+1}| − x and k_j = |α_j| − x_j for j ∈ [1..n].
(3) We synchronize the contributors in the sets S(i), i ∈ [1..n]. To this end, we demand that after each step of the construction, all contributors from a particular set S(i) are currently in the same state.
We elaborate on the inductive construction of the computation. To this end, assume that a computation ρ = c_0 →*_{A_t} c was already constructed and that the variables x, x_1, . . . , x_n admit values such that the invariants (1), (2), and (3) hold. We show how to extend ρ to a computation ρ′ = c_0 →*_{A_t} c →*_{A_t} c′. Moreover, ρ′ satisfies (1), (2), and (3) along new values x′, x′_1, . . . , x′_n for the variables with x′ = x + 1 and x′_i ≥ x_i for i ∈ [1..n]. Note that the induction basis is simple. The computation c_0 along with x = x_1 = · · · = x_n = 0 already satisfies the invariants (1), (2), and (3). We perform the induction step by distinguishing the following four cases:
Case 1: w[x] = ⊥. Then, the corresponding transition τ in the computation γ of the leader is either an ε-transition or a read of an earlier first write. In the former case, we extend the computation ρ by the ε-transition τ and increment x by 1. Then, clearly, the invariants are maintained. If transition τ reads the symbol c_i of an earlier first write, we add two transitions to ρ. First, we add a transition c −!c_i→_{S(i)} ĉ to write c_i to the memory. Note that we have a contributor in S(i) that can perform the transition due to invariants (1) and (2). Then, we add the read τ of the leader, resulting in
ρ′ = c_0 →*_{A_t} c −!c_i→_{S(i)} ĉ −?c_i→ c′,

and increment x by 1. Invariant (1) is still satisfied. Invariant (2) also holds since one contributor is removed from S(i) and x is incremented by 1. And since no other contributor moved, Invariant (3) is preserved as well.

Case 2: w[x] = a ∈ D. This means that the corresponding transition τ in the computation γ of the leader writes a to the shared memory. In this case, we first append τ to ρ and obtain
c_0 →*_{A_t} c −!a→ ĉ.
Now we serve all contributors that need to read the value a in order to reach their first write. Let i ∈ [1..n] and let P be a contributor in S(i) with x_i ≠ |u_i| and µ_i(x_i) = x. This means that P needs to read a in order to go on with its computation. Hence, we extend the computation by ĉ −?a→_{S(i)} ĉ′. This ensures that all contributors in S(i) do the required transition and read a. Note that since Invariant (3) holds, all these contributors are in the required state to perform the transition. After that, we increment x_i by 1. When we have appended the required transitions for each i ∈ [1..n], we increment x by 1.
Invariant (1) is satisfied by the new values since the maps µ i are monotonic. Invariant (2) is preserved since we did not remove any of the contributors from the sets S(i). And Invariant (3) also holds since all the contributors in S(i) do the same transition or do not move at all.
Case 3: w[x] = c̄_i. We have that x_i = |u_i| since µ_i maps the positions of u_i to positions of w that occur before c̄_i. Hence, by invariants (1) and (3) we get that all contributors in S(i) are in the same state from which they can write c_i. The transitions that we need to add to ρ in this case stem from those contributors in S(j) with j > i that need to read c_i in order to reach their first write c_j. We serve these reads with a single contributor from S(i). Hence, we first add a corresponding write transition, resulting in:

c_0 →*_{A_t} c −!c_i→_{S(i)} ĉ.
Then, if x_j ≠ |u_j| and µ_j(x_j) = x, we add the read transitions ĉ −?c_i→_{S(j)} ĉ′ to the computation and increase x_j by 1. After adding the transitions for each j, we increment x by 1. Now, Invariant (1) is satisfied since the µ_i are monotonic. Invariant (2) holds, since we remove only one contributor from S(i) and increase x (and potentially some x_j) by 1. Invariant (3) is fulfilled since the contributors from S(j) that move all do the same transition. And the one contributor from S(i) is removed after moving to a different state.
Case 4: w[x] = q ∈ Q_L. Let i ∈ [1..n] and suppose x_i ≠ |u_i| and µ_i(x_i) = x. Hence, the contributors in S(i) need to read either a first write c_j ∈ D̄(w, x) that appeared before x, or a value that is written in a simple loop of the leader, u_i[x_i] ∈ Loop(q, D̄(w, x)), while reading is restricted to earlier first writes. In the former case, we can append the same transitions to ρ as in Case 3 above. We focus on the latter.
Assume that u_i[x_i] = a is the value that the contributors in S(i) need to read. Moreover, according to Requirement (3) of validity, the symbol is written in a loop q −β.!a.β′→_L q of the leader. The reads in β and β′ are only from earlier first writes D̄(w, x). Since the loop is simple, the leader can read at most L first writes along it. Let c_{j_1} . . . c_{j_ℓ} be the sequence of first writes the leader reads along the loop. Note that there might be repetitions among the c_{j_k}. Since x_i < |u_i| ≤ |α_i|, there are at least L many contributors in each S(j) according to Invariant (2). Hence, we can provide enough contributors to execute the loop.
We add the following transitions to ρ. The leader executes along β until it needs a first write c_{j_k}. Then, we let a contributor perform the transition −!c_{j_k}→_{S(j_k)}, followed by the leader reading c_{j_k}. When the leader reaches the transition for writing a, it performs the transition, followed by the contributors in S(i) reading the value: −?a→_{S(i)}. In the same manner, β′ is processed. After adding the loop and contributor transitions to ρ, we increment x_i by 1 since we have served the read request of S(i). Once we have done this for each such i, we increment x by 1.
Invariant (1) is satisfied since we increased the corresponding x i by 1 and µ i is monotonic. Invariant (2) holds since we use at most L many contributors from S(j) in a loop that writes a symbol required by S(i). After the loop, x i is increased by 1, preserving the inequality for S(j) from Invariant (2). Moreover, Invariant (3) also holds since the contributors that move, all perform the same transition and contributors writing first writes are removed from S(j).
From the induction we get a computation π′ : c_0 →*_{A_t} c′ which satisfies the invariants and where the leader arrives in state q. Now we add the transitions of q −γ_{n+1}→ q_f to π′. Reading in γ_{n+1} is restricted to first writes. For each first write c̄_i, by Invariant (2), we have that |S(i)| ≥ |γ_{n+1}| since x ≤ |w|. This means that each time c_i is required, we can add a contributor transition −!c_i→_{S(i)} followed by the corresponding read of the leader. This way we construct a computation π : c_0 →*_{A_t} c with c ∈ C_f. ⊓ ⊔
Proof of Lemma 7
Proof. Note that our complexity estimations are conservative. We do not assume that the states of an automaton are stored in special lists which would allow for faster iteration.
It is clear that Property (1) can be checked in time O(L · D). We claim that Property (2) can be tested in time O(L^3 · D^2). To this end, we check for every adjacent pair of states q, q′ with a letter a ∈ D between them whether there is a transition of type (q, !a, q′) ∈ δ_L. Similarly, if ⊥ is the symbol between q and q′, we look for a transition (q, ε, q′) or (q, ?c_i, q′) for some i. Each such transition can be found in time |δ_L| ≤ L^2 · D. Finally, checking whether there is a run from the last state of w to a state in F_L can be decided in time O(L^2), as it is just a reachability query on an NFA.
Property (3) can be checked in time O(C^2 · L^3 · D^2). We reduce to reachability in a finite state automaton N constructed as follows. The states of N are given by Q_N = (Q_C × [1..|v|]) ∪ {f}.
The initial state is (q^0_C, 1). We set up the transition relation: For all (q, !a, q′) ∈ δ_C, we add the transitions (q, i) →_N (q′, i). For each read transition (q, ?a, q′) ∈ δ_C and state (q, i), we add the transition (q, i) →_N (q′, i) if one of the following three options holds: if v[i] = a; if v[i] = p ∈ Q_L and a ∈ Loop(p, D̄(w, i)); or if a ∈ D̄(w, i). Further, we add (q, i) →_N (q, i + 1). Finally, for all states (q, i) in N such that (q, !c_j, q′) ∈ δ_C, we add the transition (q, i) →_N f. This ends the computation since c_j was written. Now we have that Property (3) is satisfied if and only if there is a computation (q^0_C, 1) →*_N f. The construction of N is the dominant factor and takes time O(C^2 · L^3 · D^2).
If we now combine the three results, we get that validity can be tested in time O(L^3 · D^2 · C^2).
⊓ ⊔
Formal Definition of SCC-witness Candidates and Validity
We give a formal definition of SCC-witness candidates. Let r = c̄_1 . . . c̄_ℓ ∈ V. We denote by C(r) the set of all strings in

(SCC(P_L↓_0).(D ∪ {⊥}))^{k_0} . c̄_1 . (SCC(P_L↓_1).(D ∪ {⊥}))^{k_1} . c̄_2 . . . c̄_ℓ . (SCC(P_L↓_ℓ).(D ∪ {⊥}))^{k_ℓ} . SCC(P_L↓_ℓ)

such that Σ_{i∈[0..ℓ]} k_i ≤ d − 1. The set of SCC-witness candidates is the union C = ∪_{r∈V} C(r). An SCC-witness candidate w ∈ C is called valid if it satisfies the following properties:
1. The sequence in w without the barred symbols induces a valid run in the leader. For this we need to find appropriate entry and exit states in each SCC such that the exit state of each SCC is connected to its adjacent SCC through a transition. Let v = scc^1_1 a^1_1 . . . scc^1_{k_1} a^1_{k_1} scc^2_1 a^2_1 . . . scc^2_{k_2} a^2_{k_2} . . . scc^ℓ_1 be the sequence obtained by projecting out the barred symbols. Further, for any symbol α appearing in v, let pos(α) denote the position of α in v. For any i ∈ [1..|v|], we use D̄(v, i) to refer to the set of all barred symbols appearing before position i: D̄(v, i) = {a | ā appears in v[1..i]}. Now, corresponding to any subsequence of v of the form scc_1 a scc_2 that appears in v, one of the following is true.
- If a ∈ D, then we can find states q ∈ scc_1, q′ ∈ scc_2 such that there is a transition of the form q −a→ q′ in P_L.
- If a = ⊥, then we can find states q ∈ scc_1, q′ ∈ scc_2 such that there is a transition of the form q −ε/?c→ q′ in P_L for some c ∈ D̄(v, pos(a)).
We also require that q^0_L ∈ scc^1_1. Finally, we require that from any state q in the final SCC scc^ℓ_1, there is a run to the final state involving only writes, internal transitions, or reads of the barred symbols occurring in w; that is, a run of the form q −σ→ q_f such that σ ∈ (W(D) ∪ R(D̄(v, |v|)))*.
2. We can construct supportive computations on the contributors. For each prefix v.ā of w with ā ∈ D̄ there is a computation q^0_C −u.!a→_C q on P_C to some q ∈ Q_C such that the reads within u can be obtained from w. Formally, let
u ′ = u ↓ R(D) . Then there is an embedding of u ′ into v, a map µ : [1..|u ′ |] → [1..|v|] with µ(i) ≤ µ(j) for i < j and v[µ(i)] / ∈D ∪ {⊥}.
Hence, u ′ is only mapped to elements from i∈[0..ℓ] SCC(P L ↓ i ) ∪ D . Let u ′ [i] =?a with a ∈ D . Then either v[µ(i)] = a, which corresponds to a write of a by P L , or v[µ(i)] = scc ∈ SCC(P L ↓ i ) for some i ∈ [0..ℓ]. In the latter case, we have a ∈ D (scc), this corresponds to the letter being a write by the leader through the scc or write of a value by a contributor that was already seen. 3. Let w be of the form v 1c1 v 2c2 . . . v ℓcℓ scc 1 ℓ , then the scc dont repeat in each of v 1 , v 2 . . . v ℓ .
We refer to the above properties as (SCC-)validity. Instead of stating a characterization of computations in terms of SCC-witnesses directly, we relate the SCC-witnesses to the witnesses as defined in Section 3.1.
Lemma 25. There is a valid SCC-witness candidate in $C$ if and only if there is a valid witness candidate in $E$.
Proof. First we prove the ($\Rightarrow$) direction. For this, we assume a valid witness string $w \in E$. Further, let $w = v_0 \bar c_1 v_1 \bar c_2 \dots v_{k-1} \bar c_k q$. Now consider the decomposition $v'_i = scc^i_1 \dots scc^i_{k_i}$ of each $v_i$ according to its SCCs in $P_L{\downarrow_i}$. We do this by taking maximal subsequences and replacing each with the SCC to which it belongs. We can be sure that none of the SCCs in this sequence repeats (otherwise they would already form a bigger SCC). Secondly, notice that such an SCC sequence is a path in the SCC graph $G(P_L, \bar c_1 \bar c_2 \dots \bar c_k)$. This implies that the decomposition thus obtained is an SCC-witness. However, we need it to be a valid SCC-witness. The fact that it satisfies Property (1) of SCC-validity follows from the fact that we started with a valid witness satisfying Property (2) of validity; this already provides us with the required states in the adjacent SCCs and a transition between them.
Proving Property (2) of SCC-validity is slightly more involved. For this, we need to construct a run of the contributor for each prefix that ends in a barred symbol and show that there is an appropriate mapping from this run into the SCC-witness string. Since we started with a valid witness string, we are guaranteed a computation of the contributor and a mapping from such a run into the witness string. Let us fix one such prefix of the SCC-witness to be $\alpha' \bar c_j$, and let the corresponding prefix of $w$ be $\alpha \bar c_j$. Let $q^0_C \xrightarrow{u \cdot c!} q$ be such a run and $\mu : [1..|u'|] \to [1..|\alpha|]$ (where $u'$ is obtained from $u$ by deleting all values that do not correspond to a read transition) be the corresponding mapping. From this mapping we construct another mapping $\mu' : [1..|u'|] \to [1..|\alpha'|]$. Observe that $\alpha = v_0 \bar c_1 v_1 \bar c_2 v_2 \dots \bar c_j$ and $\alpha' = v'_0 \bar c_1 v'_1 \bar c_2 v'_2 \dots \bar c_j$, where the $v'_i$ are the corresponding decompositions of the $v_i$ into SCCs. The required mapping $\mu'$ is constructed as follows. Suppose the original mapping $\mu$ mapped a position in $[1..|u'|]$ to a state or letter that got replaced by an SCC; then we let $\mu'$ map such a position to the SCC that replaced it. If $\mu$ mapped the position to a letter that survived, then we let $\mu'$ do the same. That is, suppose for some $v_i = q_1 a_1 \dots q_k a_k$ the corresponding decomposition is $scc_1 a_{i_1} \dots scc_j a_k$, and $\mu(i) = j$, where $j$ is a position in the string $v_i$. If $j$ points to one of $a_{i_1}, \dots, a_k$ (the letters that survived the decomposition), then we let $\mu'(i) = j'$, where $j'$ is the position of that letter in $\alpha'$. If $j$ points to a state or a letter in $D$ that was collapsed into $scc_i$, then $\mu'(i)$ gets the position of $scc_i$ in $\alpha'$. The correctness of this mapping follows from the following reasoning. Clearly, $\mu'$ thus constructed is monotonic since $\mu$ already was. Suppose $\mu'$ maps a position in $u'$ to a letter; such a letter is guaranteed to match since it matched under $\mu$. Suppose $\mu$ mapped a position in $u'$ to a letter $a$ that got collapsed into $scc_i$; then clearly $a \in D(scc_i)$. Suppose $\mu$ mapped a position $i$ in $u'$ to a state $q$ that got collapsed into $scc_i$; we observe that for any $a \in \mathrm{Loop}(q, \bar D(w, \mu(i)))$, we also have $a \in D(scc_i)$. With this, we get the result for one direction of the proof.
For the ($\Leftarrow$) direction, we assume a valid SCC-witness $w$ and show how to construct a valid witness from it. Let $w$ be of the form
\[ scc^1_1 a^1_1 \dots scc^1_{k_1} a^1_{k_1} \bar c_1\; scc^2_1 a^2_1 \dots scc^2_{k_2} a^2_{k_2} \bar c_2 \dots \bar c_\ell\; scc^1_\ell. \]
From SCC-validity Property (1), we have $q^0_L \in scc^1_1$. Further, for every subsequence of the form $scc_1\, a\, scc_2$, we have states $q_1 \in scc_1$ and $q_2 \in scc_2$ such that there is a transition of the form $q_1 \xrightarrow{b} q_2$, where $b$ is either $a!$ or $\varepsilon/c?$, depending on the type of $a$. We call the state $q_1$ the exit state of $scc_1$ and $q_2$ the entry state of $scc_2$. From this, corresponding to each $scc^i_j$, there are entry and exit states $en^i_j, ex^i_j$. In each such SCC there is also a shortest path between its entry and exit states. That is, corresponding to an SCC of the form $scc^i_j \in \mathrm{SCC}(P_L{\downarrow_i})$ there is a shortest path of the form
\[ en^i_j \xrightarrow{a_1}_{P_L\downarrow_i} q^{(i,j)}_1 \xrightarrow{a_2}_{P_L\downarrow_i} q^{(i,j)}_2 \cdots \xrightarrow{a_k}_{P_L\downarrow_i} q^{(i,j)}_k \xrightarrow{a_{k+1}}_{P_L\downarrow_i} ex^i_j. \]
To get the valid witness string, we replace each SCC in the valid SCC-witness by its shortest path as follows, with one exception: the final SCC occurring in the SCC-witness is simply replaced by its entry state. The SCC $scc^i_j$ is replaced by a string of the form $en^i_j\, b_1\, q^{(i,j)}_1\, b_2\, q^{(i,j)}_2 \dots ex^i_j$, where $b_i = a$ if $a_i = a!$ and $b_i = \bot$ otherwise. Let $w' = v_0 \bar c_1 v_1 \bar c_2 \dots \bar c_\ell q_\ell$ be the string thus obtained. Notice that in each $v_i$ none of the states repeats: none of the SCCs repeats in $w$, and the SCCs partition the state space. We still need to show that the string thus obtained satisfies the validity properties.
Property (1) of validity is satisfied since the barred symbols in a valid SCC-witness do not repeat. Property (2) is satisfied because between any adjacent SCCs in $w$ there is a valid move by definition; further, in constructing the valid witness we replaced each SCC in the SCC-witness by a valid shortest path within that SCC. Finally, by definition there is also a run to the final state from any state in the final SCC.
Finally, we need to prove that Property (3) of validity holds. For this, we need to show the existence of an appropriate computation of the contributor, along with a mapping into the witness string, for every prefix of $w'$ that ends in a barred symbol. Let us fix one such prefix to be $\sigma = v_0 \bar c_1 \dots v_j \bar c_{j+1}$, and let the corresponding prefix in $w$ be $\sigma' = v'_0 \bar c_1 \dots v'_j \bar c_{j+1}$. From the SCC-validity property, we already have a computation of the form $q^0_C \xrightarrow{u \cdot \bar c_{j+1}!} q$ and a mapping $\mu$ from positions of $u'$ ($u'$ is the projection of $u$ onto the read operations) to positions in $\sigma'$. We provide a mapping from the positions in $u'$ to positions in $\sigma$ as follows. If $\mu$ maps a position to a symbol in $D$, then $\mu'$ maps that position to the corresponding position of the same letter in $\sigma$. Otherwise, the position is mapped to the first state of the shortest path of the corresponding SCC. If a letter is in $D(scc)$, then one can always find a loop that visits such a letter. Hence, such a mapping satisfies the validity property.
⊓ ⊔
As above, the algorithm iterates over all SCC-witness candidates and tests each for validity. The validity check can be performed in polynomial time, as shown in the lemma below.
Lemma 26. Validity of $w \in C$ can be checked in time $O(L^2 \cdot D \cdot C^2 \cdot d^2)$.
Proof. Checking whether an SCC-witness candidate $w$ satisfies Property (1) can be done in time $O(d \cdot L^2 \cdot D)$. For this, between any two adjacent SCCs in $w$ of the form $scc_1\, a\, scc_2$, we need to check whether there are states $q_1 \in scc_1$ and $q_2 \in scc_2$ such that $q_1 \to q_2$. This can easily be done in time $O(|\delta_L|) \le O(L^2 \cdot D)$. We need to perform this check between every pair of adjacent SCCs, and there are at most $d$ many such adjacent pairs. Hence, the overall time required to check whether there are proper transitions between the SCCs is $O(d \cdot L^2 \cdot D)$. Checking whether there is a path from the final SCC to the final state reduces to reachability in an automaton and can easily be done in time $O(L^2)$.
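A possible realization of the pairwise check, under assumed encodings, is sketched below; it scans $\delta_L$ once per adjacent pair.

```python
BOT = '_bot_'  # illustrative marker for the gap symbol

def sccs_connected(scc1, a, scc2, delta_L, barred):
    """True iff some q in scc1 reaches some q2 in scc2 via a transition
    compatible with the separating symbol a; barred holds the first
    writes available at this point."""
    for (q, (kind, b), q2) in delta_L:
        if q in scc1 and q2 in scc2:
            if a != BOT and kind == 'w' and b == a:
                return True     # the leader writes a between the SCCs
            if a == BOT and (kind == 'eps' or (kind == 'r' and b in barred)):
                return True     # internal move or read of a first write
    return False
```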
Property (2) can be tested in time $O(d^2 \cdot L^2 \cdot D \cdot C^2)$. This can be achieved by reducing the problem to reachability in an NFA. The idea is similar to the one in the proof of Lemma 7: the automaton we construct has as states the contributor states paired with an index into the given witness string.
For constructing this automaton, we need to check, for any given SCC and letter $a \in D$, whether $a \in D(scc)$. This can easily be achieved in time $O(L^2)$ by reducing it to a reachability problem in a graph. We add a transition of the form $(c_1, i) \to (c_2, i)$ if there is a transition of the form $c_1 \xrightarrow{a?}_{P_C} c_2$, position $i$ refers to an SCC $scc$ in $w$, and $a \in D(scc)$. So for each $i$, we go through each transition in $\delta_C$ and perform the SCC check. This takes time $O(d \cdot C^2 \cdot D \cdot L^2)$.
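For illustration, the $a \in D(scc)$ test can be phrased as a single scan of $\delta_L$, assuming $D(scc)$ collects the values written on transitions inside the SCC; the encoding is an assumption.

```python
def letter_in_scc(a, scc, delta_L):
    # a in D(scc): some write of a runs between two states of the SCC
    return any(q in scc and q2 in scc and op == ('w', a)
               for (q, op, q2) in delta_L)
```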
Finally, the size of such an automaton is $O(C \cdot d)$, and we need to perform a reachability check, which is quadratic and hence in $O(C^2 \cdot d^2)$. From this, we get the required complexity.
Finally, it is obvious that Property (3) can be checked in time O(d). This completes the proof.
⊓ ⊔
Comparison between the Introduced Methods
Compared to the formerly introduced witnesses, there are fewer candidates to test: we have that $\mathrm{Wit}_{SCC}(s, D, d) \le \mathrm{Wit}(L, D)$.
Lemma 27. $\mathrm{Wit}_{SCC}(s, D, d) \le \mathrm{Wit}(L, D)$.
Proof. We show how to construct an injective function from strings in $\mathrm{Wit}_{SCC}(s, D, d)$ to strings in $\mathrm{Wit}(L, D)$; this already gives us the result. The idea is to represent each SCC by a unique state present in that SCC. To this end, we assume an arbitrary ordering on the states of the leader $Q_L$. Recall that the SCCs partition the state space of the leader; hence, for any distinct $scc_1, scc_2 \in \mathrm{SCC}(P_L{\downarrow_i})$ for some $i$, we have $scc_1 \cap scc_2 = \emptyset$. Now, given any string
\[ w = scc^1_1 a^1_1 \dots scc^1_{k_1} a^1_{k_1} \bar c_1\; scc^2_1 a^2_1 \dots scc^2_{k_2} a^2_{k_2} \bar c_2 \dots \bar c_\ell\; scc^1_\ell \in \mathrm{Wit}_{SCC}(s, D, d), \]
let $f(w) = v$ be obtained by replacing each $scc^i_j$ by the minimum state $q^i_j$ in that SCC. Since no two SCCs share a state, clearly $v \in \mathrm{Wit}(L, D)$. Such a mapping is injective since each replacing state uniquely represents its SCC.
⊓ ⊔
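The injection $f$ is easy to phrase in code; the sketch below assumes SCCs are represented as frozensets of states and all other symbols are left untouched.

```python
def f(w):
    # replace every SCC by its minimum state; other symbols stay as they are
    return tuple(min(x) if isinstance(x, frozenset) else x for x in w)
```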
A Bound on the Number of SCC-witness Candidates
We show here that $\mathrm{Wit}_{SCC}(s, D, d) \le (s \cdot (D+1))^d \cdot D^D \cdot 2^{D+d}$. For this, we note that for any $w \in \mathrm{Wit}_{SCC}(s, D, d)$, the size of such a string is bounded by $2d + D$.
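In summary, the counting argument behind the bound decomposes into three factors: the contents of the at most $d$ unoccupied cells, the choice of the string of barred symbols, and the positions of the barred symbols among the cells:
\[ \mathrm{Wit}_{SCC}(s, D, d) \;\le\; (s \cdot (D+1))^{d} \cdot D^{D} \cdot \binom{D+d}{D} \;\le\; (s \cdot (D+1))^{d} \cdot D^{D} \cdot 2^{D+d}. \]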
Formal Construction and Proof of Proposition 9
The domain of the memory is $D = \{\mathrm{row}(i), \mathrm{col}(i), \#_i \mid i \in [1..k]\} \cup \{a_0\}$. The contributor threads are defined by $P_C = (Op(D), Q_C, q^0_C, \delta_C)$ with set of states
\[ Q_C = \{q^{(r,\ell)}_{(i,j)}, q^{(c,\ell')}_{(i,j)} \mid i, j, \ell \in [1..k], \ell' \in [0..k]\} \cup \{q^0_C, q^f_C\}. \]
Intuitively, we use the states $q^{(r,\ell)}_{(i,j)}$ and $q^{(c,\ell')}_{(i,j)}$ to indicate that the contributor has chosen $(i, j)$ to store and to count the number of vertices that the contributor has read so far. More precisely, the state $q^{(r,\ell)}_{(i,j)}$ reflects that the last symbol read was $\mathrm{row}(\ell)$, the row of the $\ell$-th vertex. The state $q^{(c,\ell)}_{(i,j)}$ indicates that the last read symbol was the column-symbol belonging to the $\ell$-th vertex. Note that this can be any column and thus different from $\mathrm{col}(\ell)$.
The transition relation $\delta_C$ is defined by the following rules. We have a rule to choose a vertex: $q^0_C \xrightarrow{?a_0}_C q^{(c,0)}_{(i,j)}$ for any $i, j \in [1..k]$. To read the $\ell$-th row-symbol, we have $q^{(c,\ell-1)}_{(i,j)} \xrightarrow{?\mathrm{row}(\ell)}_C q^{(r,\ell)}_{(i,j)}$ for any $\ell \in [1..k]$. For reading the $\ell$-th column-symbol, we get the transition $q^{(r,\ell)}_{(i,j)} \xrightarrow{?\mathrm{col}(j')}_C q^{(c,\ell)}_{(i,j)}$, but only if one of the following is satisfied: (1) We have $i \neq \ell$ and there is an edge between $(\ell, j')$ and $(i, j)$ in $G$. Intuitively, the contributor stores a vertex $(i, j)$ from a row different from $\ell$; it can then only continue its computation if $(i, j)$ and $(\ell, j')$ share an edge. (2) We have $i = \ell$ and $j' = j$. This means that the contributor stores the vertex that it has read. Note that with this, we rule out all contributors storing other vertices from the $i$-th row.
To end the computation in a contributor, we get the rule $q^{(c,k)}_{(i,j)} \xrightarrow{!\#_i}_C q^f_C$ for any $i, j \in [1..k]$. We further set $F_L = \{q^{(\#,k)}_L\}$, the state after receiving all $\#$-symbols. Then our program is defined to be $A = (D, a_0, (P_L, P_C))$. Note that the parameters are $D = L = 3k + 1$. The correctness of the construction is proven in the following lemma.
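As an illustration, the contributor's transition relation can be generated directly from the $k$-partite graph. The Python sketch below uses assumed encodings for states and letters; it is a rendering of the rules above, not part of the paper's construction.

```python
def contributor_rules(k, edges):
    """edges: set of frozensets of two vertices, each vertex a (row, col) pair."""
    delta = []
    for i in range(1, k + 1):
        for j in range(1, k + 1):
            # choose and store the vertex (i, j)
            delta.append(('q0', ('r', 'a0'), ('c', 0, i, j)))
            for l in range(1, k + 1):
                # read the l-th row symbol
                delta.append((('c', l - 1, i, j), ('r', ('row', l)), ('rw', l, i, j)))
                for jp in range(1, k + 1):
                    # read the l-th column symbol, subject to the edge test
                    ok = (i == l and jp == j) or \
                         (i != l and frozenset({(l, jp), (i, j)}) in edges)
                    if ok:
                        delta.append((('rw', l, i, j), ('r', ('col', jp)), ('c', l, i, j)))
            # report success for row i
            delta.append((('c', k, i, j), ('w', ('#', i)), 'qf'))
    return delta
```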
Lemma 28. There is a $t \in \mathbb{N}$ so that $c_0 \to^*_{A^t} c$ with $c \in C_f$ if and only if there is a clique of size $k$ in $G$ with one vertex from each row.
Proof. First assume that $G$ contains the desired clique and let $(1, j_1), \dots, (k, j_k)$ be its vertices. For $t = k$ we construct a computation from $c_0$ to a configuration in $C_f$. We have $k$ copies of the contributor $P_C$ in the system $A^t$; we shall denote them by $P_1, \dots, P_k$.
The computation starts with $P_i$ choosing the vertex $(i, j_i)$ to store: it performs the step $q^0_{C_i} \xrightarrow{?a_0}_i q^{(c,0)}_{(i,j_i)}$. Hence, we get as a computation on $A^t$:
\[ c_0 \to^*_{A^t} (q^0_L, q^{(c,0)}_{(1,j_1)}, \dots, q^{(c,0)}_{(k,j_k)}, a_0) = c^0. \]
Then $P_L$ writes the symbol $\mathrm{row}(1)$ and each contributor reads it. We get
\[ c^0 \to^*_{A^t} (q^{(r,1)}_L, q^{(r,1)}_{(1,j_1)}, \dots, q^{(r,1)}_{(k,j_k)}, \mathrm{row}(1)) = c^{(r,1)}. \]
After transmitting the first row to all contributors, $P_L$ then communicates the first column by writing $\mathrm{col}(j_1)$ to the memory. Again, each contributor reads it. Note that $P_1$ can read $\mathrm{col}(j_1)$ since it stores exactly the vertex $(1, j_1)$. A contributor $P_i$ with $i \neq 1$ can also read $\mathrm{col}(j_1)$ since $P_i$ stores $(i, j_i)$, a vertex which shares an edge with $(1, j_1)$ due to the clique assumption. Hence, we get the computation
\[ c^{(r,1)} \to^*_{A^t} (q^{(c,1)}_L, q^{(c,1)}_{(1,j_1)}, \dots, q^{(c,1)}_{(k,j_k)}, \mathrm{col}(j_1)) = c^1. \]
Similarly, we can construct a computation leading to a configuration $c^2$. By iterating this process, we get
\[ c^0 \to^*_{A^t} c^k = (q^{(c,k)}_L, q^{(c,k)}_{(1,j_1)}, \dots, q^{(c,k)}_{(k,j_k)}, \mathrm{col}(j_k)). \]
Then each contributor $P_i$ can write the symbol $\#_i$. This is done in ascending order: first, $P_1$ writes $\#_1$ and $P_L$ reads it. Then it is $P_2$'s turn and it writes $\#_2$. Again, the leader reads the symbol. After $k$ rounds, we reach the configuration $(q^{(\#,k)}_L, q^f_{C_1}, \dots, q^f_{C_k}, \#_k)$, which lies in $C_f$.
Now let a $t \in \mathbb{N}$ together with a computation $\rho = c_0 \to^*_{A^t} c$ with $c$ from $C_f$ be given. Let $\rho_L$ be the subcomputation of $\rho$ carried out by $P_L$; technically, the projection of $\rho$ to $P_L$. Then $\rho_L$ has the form $\rho_L = \rho^1_L.\rho^2_L$ with
\[ \rho^1_L = q^0_L \xrightarrow{!\mathrm{row}(1)}_L q^{(r,1)}_L \xrightarrow{!\mathrm{col}(j_1)}_L q^{(c,1)}_L \xrightarrow{!\mathrm{row}(2)}_L \cdots \xrightarrow{!\mathrm{col}(j_k)}_L q^{(c,k)}_L \]
and
\[ \rho^2_L = q^{(c,k)}_L \xrightarrow{?\#_1}_L q^{(\#,1)}_L \xrightarrow{?\#_2}_L \cdots \xrightarrow{?\#_k}_L q^{(\#,k)}_L. \]
We show that the vertices $(1, j_1), \dots, (k, j_k)$ form a clique in $G$.
Since in $\rho^2_L$ the leader is able to read the symbols $\#_1$ up to $\#_k$, there must be at least $k$ contributors writing them. Due to the structure of $P_C$, a single contributor cannot write different $\#_i$ symbols. Hence, we get one contributor per symbol and thus $t \ge k$.
Let $P_{C_i}$ be a contributor writing $\#_i$. Then $P_{C_i}$ stores the vertex $(i, j_i)$: it performs the initial move $q^0_{C_i} \xrightarrow{?a_0}_i q^{(c,0)}_{(i,j_i)}$. Indeed, assume $P_{C_i}$ stores the vertex $(i', j')$. Since the thread writes $\#_i$ in the end, we get $i' = i$ due to the structure of $P_{C_i}$.
During the computation $\rho$, the thread performs the step $q^{(r,i)}_{(i,j')} \xrightarrow{?\mathrm{col}(j_i)}_i q^{(c,i)}_{(i,j')}$ since $P_L$ writes the symbol $\mathrm{col}(j_i)$ to the memory and the computation on $P_{C_i}$ does not deadlock. Note that we use the following: the leader writes $\mathrm{row}(i)$ before $\mathrm{col}(j_i)$. This ensures that $\mathrm{col}(j_i)$ is indeed the column of the $i$-th transmitted vertex and the above transition is correct. However, the contributor $P_{C_i}$ can only do the transition if $j' = j_i$. Thus, we get that $(i', j') = (i, j_i)$.
Let $P_{(i,j_i)}$ denote a contributor that writes $\#_i$ during $\rho$. Since the contributor $P_{(i,j_i)}$ stores the vertex $(i, j_i)$, the leader $P_L$ has written $\mathrm{row}(i)$ and $\mathrm{col}(j_i)$ to the memory. Now let $P_{(i',j_{i'})}$ be another contributor with $i' \neq i$. Then this thread also needs to perform the step $q^{(r,i)}_{(i',j_{i'})} \xrightarrow{?\mathrm{col}(j_i)}_i q^{(c,i)}_{(i',j_{i'})}$ since the computation does not end on $P_{(i',j_{i'})}$ at this point. But by definition, the transition can only be carried out if there is an edge between $(i, j_i)$ and $(i', j_{i'})$. Hence, each two vertices of $(1, j_1), \dots, (k, j_k)$ share an edge.
⊓ ⊔
Formal Construction and Proof of Theorem 10
We define the polynomial equivalence relation $R$ in more detail. Assume some encoding of Boolean formulas over a finite alphabet $\Gamma$. Let $F \subseteq \Gamma^*$ be the encodings of proper 3-SAT instances. We say that two encodings $\varphi, \psi \in \Gamma^*$ are equivalent under $R$ if either $\varphi, \psi \in F$ and the formulas have the same number of clauses and variables, or if $\varphi, \psi$ are both not in $F$. Then $R$ is a polynomial equivalence relation. We define $D$ to be the union of the sets
\[ \{(u, \ell) \mid u \in \{0,1\}, \ell \in [1..\log(I)]\}, \quad \{(x_i, v) \mid i \in [1..n], v \in \{0,1\}\}, \quad \{\#_j \mid j \in [1..m]\}, \]
and $\{a_0\}$. Intuitively, the states $q_w$ with $w \in \{0,1\}^{\le \log(I)}$ form the nodes of the tree of the first phase. The remaining states are needed to store the chosen instance, a variable, and its evaluation.
The transition relation $\delta_C$ contains rules for the three phases. In the first phase, $P_C$ reads the bits chosen by the leader. According to the value of the bit, it branches to the next state: $q_w \xrightarrow{?(u,|w|+1)}_C q_{w.u}$ for $u \in \{0,1\}$ and $w \in \{0,1\}^{\le \log(I)-1}$. Then we get an $\varepsilon$-transition from those leaves of the tree that encode a proper index lying in $[1..I]$: we have the rule $q_w \xrightarrow{\varepsilon}_C q_{(ch,\ell)}$ if $w = \mathrm{bin}(\ell)$ and $\ell \in [1..I]$. For the second phase, we need transitions to store a variable and its evaluation: $q_{(ch,\ell)} \xrightarrow{?(x_i,v)}_C q^\ell_{(x_i,v)}$ for $\ell \in [1..I]$, $i \in [1..n]$, and $v \in \{0,1\}$. In the third phase, the contributor loops: we have $q^\ell_{(x_i,v)} \xrightarrow{!\#_j}_C q^\ell_{(x_i,v)}$ if $x_i$ evaluated to $v$ satisfies clause $C^\ell_j$.
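The instance-selection tree of the first phase can be generated mechanically; the following sketch uses illustrative encodings and assumes $\mathrm{bin}(\ell)$ is the $\log(I)$-bit representation of $\ell$.

```python
def tree_rules(I, log_I):
    """Branching rules q_w --?(u,|w|+1)--> q_{w.u} plus the eps-exits."""
    delta = []
    for depth in range(log_I):
        for node in range(2 ** depth):
            w = format(node, 'b').zfill(depth) if depth else ''
            for u in '01':
                delta.append((('q', w), ('r', (u, depth + 1)), ('q', w + u)))
    for ell in range(1, I + 1):  # leaves that encode a proper index
        w = format(ell, 'b').zfill(log_I)
        delta.append((('q', w), ('eps', None), ('ch', ell)))
    return delta
```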
The program $A$ is defined to be $A = (D, a_0, (P_L, P_C))$, and the LCR instance of interest is thus $(A, F_L)$. The correctness of the construction is shown in the following lemma; hence, all requirements of a cross-composition are met.
Lemma 29. There is a $t \in \mathbb{N}$ so that $c_0 \to^*_{A^t} c$ with $c \in C_f$ if and only if there is an $\ell \in [1..I]$ such that $\varphi_\ell$ is satisfiable.
Proof. We first assume that there is an $\ell \in [1..I]$ such that $\varphi_\ell$ is satisfiable. Let $v_1, \dots, v_n$ be the evaluation of the variables $x_1, \dots, x_n$ that satisfies $\varphi_\ell$. Further, we let $\mathrm{bin}(\ell) = u_1 \dots u_{\log(I)}$ be the binary representation of $\ell$. Set $t = n$. Then the program $A^t$ has $n$ copies of the contributor. Denote them by $P_1, \dots, P_n$. Intuitively, $P_i$ is responsible for variable $x_i$.
We construct a computation of $A^t$ from $c_0$ to a configuration in $C_f$. We proceed as in the aforementioned phases. In the first phase, $P_L$ starts to guess the first bit of $\mathrm{bin}(\ell)$. This is read by all the contributors. We get a computation
\[
\begin{aligned}
c_0 &\xrightarrow{!(u_1,1)}_L (q^{(b,1)}_L, q_\varepsilon, \dots, q_\varepsilon, (u_1,1)) \\
&\xrightarrow{?(u_1,1)}_1 (q^{(b,1)}_L, q_{u_1}, q_\varepsilon, \dots, q_\varepsilon, (u_1,1)) \\
&\;\;\cdots \\
&\xrightarrow{?(u_1,1)}_n (q^{(b,1)}_L, q_{u_1}, \dots, q_{u_1}, (u_1,1)) = c^{(b,1)}.
\end{aligned}
\]
To extend the computation, we let $P_L$ guess the remaining bits, while the contributors read and store. Hence, we get
\[ c_0 \to^*_{A^t} c^{(b,\log(I))} = (q^{(b,\log(I))}_L, q_{\mathrm{bin}(\ell)}, \dots, q_{\mathrm{bin}(\ell)}, (u_{\log(I)}, \log(I))). \]
Before the second phase starts, the contributors perform an $\varepsilon$-transition to the state $q_{(ch,\ell)}$. This is possible since $\mathrm{bin}(\ell)$ encodes a proper index in $[1..I]$.
We get $c^{(b,\log(I))} \to^*_{A^t} (q^{(b,\log(I))}_L, q_{(ch,\ell)}, \dots, q_{(ch,\ell)}, (u_{\log(I)}, \log(I))) = c^{(x,0)}$. In the second phase, $P_L$ chooses the correct evaluation for the variables: it writes $(x_i, v_i)$ for each variable $x_i$. Contributor $P_i$ reads $(x_i, v_i)$ and stores it.
Hence, we get the computation
\[
\begin{aligned}
c^{(x,0)} &\xrightarrow{!(x_1,v_1)}_L (q^{(x,1)}_L, q_{(ch,\ell)}, \dots, q_{(ch,\ell)}, (x_1,v_1)) \\
&\xrightarrow{?(x_1,v_1)}_1 (q^{(x,1)}_L, q^\ell_{(x_1,v_1)}, q_{(ch,\ell)}, \dots, q_{(ch,\ell)}, (x_1,v_1)) \\
&\xrightarrow{!(x_2,v_2)}_L (q^{(x,2)}_L, q^\ell_{(x_1,v_1)}, q_{(ch,\ell)}, \dots, q_{(ch,\ell)}, (x_2,v_2)) \\
&\xrightarrow{?(x_2,v_2)}_2 (q^{(x,2)}_L, q^\ell_{(x_1,v_1)}, q^\ell_{(x_2,v_2)}, q_{(ch,\ell)}, \dots, q_{(ch,\ell)}, (x_2,v_2)) \\
&\;\;\cdots \\
&\xrightarrow{?(x_n,v_n)}_n (q^{(x,n)}_L, q^\ell_{(x_1,v_1)}, \dots, q^\ell_{(x_n,v_n)}, (x_n,v_n)) = c^{(x,n)}.
\end{aligned}
\]
In the last phase, the contributors write the symbols $\#_j$. Since $\varphi_\ell$ is satisfied by the evaluation $v_1, \dots, v_n$, there is a variable index $i_1 \in [1..n]$ such that $x_{i_1}$ evaluated to $v_{i_1}$ satisfies clause $C^\ell_1$. Hence, due to the transition relation $\delta_C$, we can let $P_{i_1}$ write the symbol $\#_1$. After that, the leader reads it and moves to the next state. This amounts to the computation
\[ c^{(x,n)} \xrightarrow{!\#_1}_{i_1} (q^{(x,n)}_L, q^\ell_{(x_1,v_1)}, \dots, q^\ell_{(x_n,v_n)}, \#_1) \xrightarrow{?\#_1}_L (q^{(\#,1)}_L, q^\ell_{(x_1,v_1)}, \dots, q^\ell_{(x_n,v_n)}, \#_1) = c^{(\#,1)}. \]
Similarly, we can extend the computation to reach the configuration
\[ c^{(\#,m)} = (q^{(\#,m)}_L, q^\ell_{(x_1,v_1)}, \dots, q^\ell_{(x_n,v_n)}, \#_m), \]
which lies in $C_f$. This proves the first direction.
Now we assume the existence of a $t \in \mathbb{N}$ such that there is a computation $\rho$ from $c_0$ to a configuration in $C_f$. Let $\rho_L$ be the subcomputation of $\rho$ carried out by the leader $P_L$. Then $\rho_L$ can be split into $\rho_L = \rho^1_L.\rho^2_L.\rho^3_L$ such that
\[ \rho^1_L = q^{(b,0)}_L \xrightarrow{!(u_1,1)}_L q^{(b,1)}_L \xrightarrow{!(u_2,2)}_L \cdots \]
Let $\ell$ be the natural number such that $\mathrm{bin}(\ell) = u_1 \dots u_{\log(I)}$. We show that $\ell \in [1..I]$ and that $\varphi_\ell$ is satisfied by evaluating the variables $x_i$ to $v_i$.
In $\rho^3_L$, the leader can read the symbols $\#_1, \dots, \#_m$. This means that there is at least one contributor writing them. Let $P_C$ be a contributor writing such a symbol. Then, after $P_L$ has finished $\rho^1_L$, the contributor $P_C$ is still active and performs the step $q_{\mathrm{bin}(\ell)} \xrightarrow{\varepsilon}_C q_{(ch,\ell)}$. This is true since $P_C$ did not miss a bit transmitted by $P_L$, and $P_C$ has to reach a state where it can write the $\#$-symbols. Thus, we get that $\ell \in [1..I]$ and that $P_C$ stores $\ell$ in its state space.
We denote the number of contributors writing a $\#$-symbol in $\rho$ by $t' \ge 1$. Each of these contributors gets labeled by $C(j) = \{\#_{j_1}, \dots, \#_{j_{k_j}}\}$, the set of $\#$-symbols it writes during the computation $\rho$. Hence, we have the contributors $P_{C(1)}, \dots, P_{C(t')}$, and since each symbol in $\{\#_1, \dots, \#_m\}$ is written at least once, we have:
\[ \{\#_1, \dots, \#_m\} = \bigcup_{j=1}^{t'} C(j). \tag{1} \]
Now we show that each $P_{C(j)}$ with $C(j) = \{\#_{j_1}, \dots, \#_{j_{k_j}}\}$ stores a tuple $(x_i, v_i)$ such that $x_i$ evaluated to $v_i$ satisfies the clauses $C^\ell_{j_1}, \dots, C^\ell_{j_{k_j}}$. We already know that $P_{C(j)}$ is in state $q_{(ch,\ell)}$ after $P_L$ has executed $\rho^1_L$. While $P_L$ executes $\rho^2_L$, the contributor $P_{C(j)}$ has to read a tuple $(x_i, v_i)$ since it has to reach a state where it can write the $\#$-symbols. More precisely, $P_{C(j)}$ has to perform a transition $q_{(ch,\ell)} \xrightarrow{?(x_i,v_i)}_{C(j)} q^\ell_{(x_i,v_i)}$ for some $i$. Then the contributor writes the symbols $\#_{j_1}, \dots, \#_{j_{k_j}}$ while looping in the current state. But by the definition of the transition relation of the contributors, this means that $x_i$ evaluated to $v_i$ satisfies the clauses $C^\ell_{j_1}, \dots, C^\ell_{j_{k_j}}$. By Equation (1), we can deduce that every clause in $\varphi_\ell$ is satisfied by the chosen evaluation. Hence, $\varphi_\ell$ is satisfiable.
⊓ ⊔
B Proofs for Section 3.2
We give the missing constructions and proofs for Section 3.2.
Proof of Lemma 12
Before we elaborate on the proof, we introduce a few notations. Let $t \in \mathbb{N}$ and let $c$ be a configuration of $A^t$. To access the components of $c$ we use the following projections: $\pi_L(c)$ returns the state of the leader in $c$, and $\pi_D(c)$ the value of the shared memory. For $p \in Q_C$, we denote by $\#_C(c, p)$ the number of contributors that are currently in state $p$. Finally, we use $\pi_C(c)$ for the set of contributor states that appear in $c$; formally, $\pi_C(c) = \{p \in Q_C \mid \#_C(c, p) > 0\}$. The proof of Lemma 12 is a consequence of the following stronger lemma. It states that for any reachable configuration in the program, there is a node in $G$ reachable from $v_0$ such that: (1) the state of the leader and the memory value are preserved, and (2) the possible states of the contributors can only increase.
Lemma 30. There is a $t \in \mathbb{N}$ so that $c_0 \to^*_{A^t} c$ if and only if there is a path $v_0 \to^*_E (q, a, S)$ in $G$, where $\pi_L(c) = q$, $\pi_D(c) = a$, and $\pi_C(c) \subseteq S$.
Proof. First assume that a computation $c_0 \to^*_{A^t} c$ for a $t \in \mathbb{N}$ is given. We proceed by induction on the length of the computation. In the base case, the length is $0$. This means that $c = c_0$ is the initial configuration. But then $\pi_L(c) = q^0_L$, $\pi_D(c) = a_0$, and $\pi_C(c) = \{q^0_C\}$. This characterizes the initial node of $G$, and there is a path $v_0 \to^*_E v_0$ of length $0$, which proves the base case.
Suppose the statement holds for all computations of length at most $\ell$. Let $c_0 \to^*_{A^t} c$ be a computation of length $\ell + 1$. Then it can be split into $c_0 \to^*_{A^t} c' \to_{A^t} c$, where $c_0 \to^*_{A^t} c'$ is a computation of length $\ell$. By induction, there is a path $v_0 \to^*_E (q', a', S')$ in $G$ such that $q' = \pi_L(c')$, $a' = \pi_D(c')$, and $\pi_C(c') \subseteq S'$. Now we distinguish two cases:
(1) If $c' \to_{A^t} c$ is induced by a transition of the leader, the leader's state and the memory value get updated, but the contributor states do not. We have that $\pi_C(c) = \pi_C(c') \subseteq S'$. Now we set $q = \pi_L(c)$, $a = \pi_D(c)$, and $S = S'$. Then, on $G$ we have an edge $(q', a', S') \to_E (q, a, S)$.
(2) If $c' \to_{A^t} c$ is induced by a transition of a contributor, we immediately get that $\pi_L(c) = \pi_L(c') = q'$. Let the transition of the contributor be of the form $p' \xrightarrow{?a'/\varepsilon} p$. Then we have that $\pi_C(c) \subseteq \pi_C(c') \cup \{p\} \subseteq S' \cup \{p\}$ and $\pi_D(c') = \pi_D(c) = a'$. Note that it can happen that $p'$ is not an element of $\pi_C(c)$, since there might be just one contributor in state $p'$, which switches to state $p$. We set $q = q'$, $a = a'$, and $S = S' \cup \{p\}$. Then we have an edge $(q', a', S') \to_E (q, a, S)$ induced by the transition. Writes of the contributors are similar.
This shows the first direction of the lemma. For the other direction, we apply induction to prove a slightly stronger statement: for each path $v_0 \to^*_E (q, a, S)$, there is a $t \in \mathbb{N}$ and a computation $c_0 \to^*_{A^t} c$ such that $\pi_L(c) = q$, $\pi_D(c) = a$, and $\pi_C(c) = S$. In the proof we rely on the Copycat Lemma, presented in [17]. Roughly, it states that for a computation where a state $p \in Q_C$ is reached by one of the contributors, there is a similar computation where $p$ is reached by an arbitrary number of contributors. We restate the lemma in our setting.
Lemma 31 (Copycat Lemma [17]). Let $t \in \mathbb{N}$ and $c_0 \to^*_{A^t} c$ a computation. Moreover, let $p \in Q_C$ such that $\#_C(c, p) > 0$. Then for all $k \in \mathbb{N}$, we have a computation of the form $c_0 \to^*_{A^{t+k}} d$, where configuration $d$ satisfies the following: $\pi_L(d) = \pi_L(c)$, $\pi_D(d) = \pi_D(c)$, $\#_C(d, p) = \#_C(c, p) + k$, and for all $p' \neq p$ we have $\#_C(d, p') = \#_C(c, p')$.
We turn back to the induction on the length of the given path. In the base case, the length is $0$. Then we have $(q, a, S) = v_0$. This means $q = q^0_L$, $a = a_0$, and $S = \{q^0_C\}$. Considering the initial configuration $c_0$ for an arbitrary $t \in \mathbb{N}$, we get the computation $c_0 \to^*_{A^t} c_0$ of length $0$, with $\pi_L(c_0) = q$, $\pi_D(c_0) = a$, and $\pi_C(c_0) = S$.
Assume the statement holds for all paths of length at most $\ell$. Let $v_0 \to^*_E (q, a, S)$ be a path of length $\ell + 1$. We split the path into a subpath $v_0 \to^*_E (q', a', S')$ of length $\ell$ and an edge $(q', a', S') \to_E (q, a, S)$. Invoking the induction hypothesis, we get a $t \in \mathbb{N}$ and a computation $c_0 \to^*_{A^t} c'$ such that $\pi_L(c') = q'$, $\pi_D(c') = a'$, and $\pi_C(c') = S'$. We distinguish two cases:
(1) The edge $(q', a', S') \to_E (q, a, S)$ was induced by a transition of the leader. Since $\pi_L(c') = q'$ and $\pi_D(c') = a'$, the same transition also induces a step $c' \to_{A^t} c$ with $\pi_L(c) = q$, $\pi_D(c) = a$, and $\pi_C(c) = S = S'$.
(2) The edge $(q', a', S') \to_E (q, a, S)$ was induced by a transition of a contributor. Suppose this transition is of the form $\tau = p' \xrightarrow{?a'/\varepsilon} p$; the case of a write is similar. Then we get $S = S' \cup \{p\}$ and $p' \in S'$. Since $\pi_C(c') = S'$, we get that $\#_C(c', p') > 0$. By an application of the Copycat Lemma with $k = 1$, we obtain a computation of the form $c_0 \to^*_{A^{t+1}} d$ such that $\#_C(d, p') > 1$ and for all $r \neq p'$ we have $\#_C(d, r) = \#_C(c', r)$. Furthermore, we get that $\pi_L(d) = \pi_L(c')$ and $\pi_D(d) = \pi_D(c')$. Hence, transition $\tau$ induces a move $d \to_{A^{t+1}} c$, where $c$ is a configuration with $\pi_L(c) = \pi_L(d) = q' = q$, $\pi_D(c) = \pi_D(d) = a' = a$, and $\pi_C(c) = \pi_C(d) \cup \{p\} = S' \cup \{p\} = S$.
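For illustration, the graph $G$ underlying Lemma 30 can be explored on the fly; the sketch below performs the saturation by breadth-first search under assumed encodings of the transition relations.

```python
from collections import deque

def explore(q0_L, a0, q0_C, delta_L, delta_C):
    """Nodes are (leader state, memory value, frozenset of contributor states).
    Ops are assumed to be ('w', a), ('r', a), or ('eps', None)."""
    start = (q0_L, a0, frozenset({q0_C}))
    seen, queue = {start}, deque([start])
    while queue:
        q, a, S = queue.popleft()
        succs = []
        for (p, (kind, b), p2) in delta_L:        # leader moves
            if p == q and (kind == 'w' or kind == 'eps' or b == a):
                succs.append((p2, b if kind == 'w' else a, S))
        for (p, (kind, b), p2) in delta_C:        # contributor moves enlarge S
            if p in S and (kind == 'w' or kind == 'eps' or b == a):
                succs.append((q, b if kind == 'w' else a, S | {p2}))
        for node in succs:
            if node not in seen:
                seen.add(node)
                queue.append(node)
    return seen                                    # all reachable nodes of G
```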
Formal Construction and Proof of Proposition 17
The memory domain is defined by $D = U \cup \{u^\# \mid u \in U\} \cup \{a_0\}$.
Lemma 32. There is a $t \in \mathbb{N}$ so that $c_0 \to^*_{A^t} c$ with $c \in C_f$ if and only if there are sets $S_1, \dots, S_r \in F$ such that $U = \bigcup_{i \in [1..r]} S_i$.
Proof. Let $S_1, \dots, S_r \in F$ be a cover of $U$. We can construct a computation with $t = n$ contributors. The leader first guesses the set $S_1$: it writes all elements $u \in S_1$ to the memory, and for each element there is one contributor storing it in its states by reading the corresponding $u$. Then the leader decides for $S_2$ and writes the elements of that set to the memory. Now only the new elements get stored by a contributor; elements that were seen already are ignored. We proceed for $r$ phases. Then the contributors store exactly those elements that got covered by $S_1, \dots, S_r$. Since these cover $U$, the contributors can write all symbols $u^\#$ to the memory in any order. The leader $P_L$ can thus read the required string and reach its final state.
Now assume there is a $t \in \mathbb{N}$ together with a computation $\rho$ on $A^t$ from $c_0$ to a configuration $c \in C_f$. Consider $\rho_L$, the projection of $\rho$ to the leader $P_L$. Then the computation $\rho_L$ is of the form $\rho_L = \rho^1_L \dots \rho^r_L.\rho^f_L$ with
\[ \rho^i_L = q_i \xrightarrow{\varepsilon} q^{(i,0)}_{S_i} \xrightarrow{!u^{S_i}_1} q^{(i,1)}_{S_i} \xrightarrow{!u^{S_i}_2} \cdots \xrightarrow{!u^{S_i}_{n_i-1}} q^{(i,n_i-1)}_{S_i} \xrightarrow{!u^{S_i}_{n_i}} q_{i+1}, \]
where $S_i = \{u^{S_i}_1, \dots, u^{S_i}_{n_i}\}$ is a set in $F$, and
\[ \rho^f_L = q_{r+1} \xrightarrow{?u^\#_1} q^1_\# \xrightarrow{?u^\#_2} \cdots \xrightarrow{?u^\#_n} q^n_\#. \]
The candidate for the cover of $U$ is $S_1, \dots, S_r \in F$; these are the sets selected by $P_L$ during its $r$ initial phases. A contributor can only read, and thus move in its state space, while the leader is in a phase $\rho^i_L$. This means that contributors can only store symbols that got covered by the chosen sets $S_i$. Moreover, they can only write what they have stored. Since $\rho^f_L$ can be carried out by the leader, the contributors can write all symbols $u^\#$ for $u \in U$ to the memory. Phrased differently, all elements $u \in U$ were stored by contributors and hence covered by $S_1, \dots, S_r$.
⊓ ⊔
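A generator for the leader of this reduction is straightforward; the sketch below uses assumed state and letter encodings.

```python
def setcover_leader(U, F, r):
    """F: iterable of sets over the universe U; r: number of phases."""
    delta = []
    for i in range(1, r + 1):
        for S in F:
            elems = sorted(S)
            delta.append((('q', i), ('eps', None), ('qS', i, tuple(elems), 0)))
            for j, u in enumerate(elems):          # write the elements of S
                src = ('qS', i, tuple(elems), j)
                dst = ('qS', i, tuple(elems), j + 1) if j + 1 < len(elems) \
                    else ('q', i + 1)              # last write enters the next phase
                delta.append((src, ('w', u), dst))
    prev = ('q', r + 1)                            # final phase: read all u#
    for idx, u in enumerate(sorted(U), start=1):
        delta.append((prev, ('r', (u, '#')), ('q#', idx)))
        prev = ('q#', idx)
    return delta
```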
Formal Construction and Proof of Proposition 18
The construction of Proposition 18 is similar to the construction in the following statement, which presents a lower bound for LCR based on ETH and shows that the algorithm of Section 3.2 has an optimal exponent. For the reduction, let $\varphi$ be a given 3-SAT instance. We assume $\varphi$ to have the variables $x_1, \dots, x_n$ and clauses $C_1, \dots, C_m$. The construction of an LCR instance relies on the following idea, which is similar to Proposition 18. The leader $P_L$ will guess an evaluation for each variable, starting with $x_1$. To this end, it will write a tuple of the form $(x_1, v_1)$, with $v_1 \in \{0,1\}$, to the memory. A contributor will read the tuple and store it in its state space. This is repeated for each variable. After the guessing phase, the contributors can write the symbols $\#_j$, depending on whether the currently stored variable with its evaluation satisfies clause $C_j$. As soon as the leader has read the complete string $\#_1 \dots \#_m$, it moves to its final state, showing that the guessed evaluation satisfied all the clauses.
For the formal construction, let
\[ D = \{(x_i, v) \mid i \in [1..n], v \in \{0,1\}\} \cup \{\#_j \mid j \in [1..m]\} \cup \{a_0\}. \]
We define the leader to be the tuple $P_L = (Op(D), Q_L, q^{(x,0)}_L, \delta_L)$, where the states are given by $Q_L = \{q^{(x,i)}_L \mid i \in [0..n]\} \cup \{q^{(\#,j)}_L \mid j \in [1..m]\}$. The transition relation $\delta_L$ is defined as follows. We have rules for guessing the evaluation: $q^{(x,i-1)}_L \xrightarrow{!(x_i,v)}_L q^{(x,i)}_L$ for each $i \in [1..n]$ and $v \in \{0,1\}$. And we have rules for verifying that the guessed evaluation is correct: $q^{(\#,j-1)}_L \xrightarrow{?\#_j}_L q^{(\#,j)}_L$ for $j \in [1..m]$, where $q^{(\#,0)}_L = q^{(x,n)}_L$. The contributor has a rule $q_{(x_i,v)} \xrightarrow{!\#_j}_C q_{(x_i,v)}$ if variable $x_i$ evaluated to $v$ satisfies clause $C_j$. The program $A$ is defined as the tuple $A = (D, a_0, (P_L, P_C))$, and the LCR instance is $(A, F_L)$. The correctness of the construction is proven in the following lemma.
Lemma 34. There is a $t \in \mathbb{N}$ so that $c_0 \to^*_{A^t} c$ with $c \in C_f$ if and only if $\varphi$ is satisfiable.
Proof. Let $\varphi$ be satisfiable by the evaluation $v_1, \dots, v_n$. We show how to construct the desired computation. First set $t = n$, so we have $n$ copies of $P_C$; let these be denoted by $P_1, \dots, P_n$.
The leader $P_L$ first guesses the correct evaluation of the variables. Each $P_i$ stores the evaluation for variable $x_i$. We get
\[ c_0 \xrightarrow{!(x_1,v_1)}_L (q^{(x,1)}_L, q_0, \dots, q_0, (x_1,v_1)) \xrightarrow{?(x_1,v_1)}_1 (q^{(x,1)}_L, q_{(x_1,v_1)}, q_0, \dots, q_0, (x_1,v_1)) \to^*_{A^t} (q^{(x,n)}_L, q_{(x_1,v_1)}, \dots, q_{(x_n,v_n)}, (x_n,v_n)) = c^{(x,n)}. \]
Since $v_1, \dots, v_n$ is a satisfying assignment, there is an index $i_1 \in [1..n]$ such that $x_{i_1}$ evaluated to $v_{i_1}$ satisfies clause $C_1$. Hence, the corresponding contributor $P_{i_1}$ can write the symbol $\#_1$. This is then read by the leader. The process gets repeated for $\#_2, \dots, \#_m$. Hence, we get
\[ c^{(x,n)} \xrightarrow{!\#_1}_{i_1} (q^{(x,n)}_L, q_{(x_1,v_1)}, \dots, q_{(x_n,v_n)}, \#_1) \xrightarrow{?\#_1}_L (q^{(\#,1)}_L, q_{(x_1,v_1)}, \dots, q_{(x_n,v_n)}, \#_1) \to^*_{A^t} (q^{(\#,m)}_L, q_{(x_1,v_1)}, \dots, q_{(x_n,v_n)}, \#_m) = c^{(\#,m)}, \]
with $c^{(\#,m)} \in C_f$.
For the other direction, let a $t \in \mathbb{N}$ and a computation $\rho$ from $c_0$ to a configuration in $C_f$ be given. Let $\rho_L$ denote the subcomputation of $\rho$ carried out by the leader $P_L$. Then $\rho_L$ has the form $\rho_L = \rho^1_L.\rho^2_L$ with
\[ \rho^1_L = q^{(x,0)}_L \xrightarrow{!(x_1,v_1)}_L q^{(x,1)}_L \cdots \xrightarrow{!(x_n,v_n)}_L q^{(x,n)}_L \quad\text{and}\quad \rho^2_L = q^{(x,n)}_L \xrightarrow{?\#_1}_L q^{(\#,1)}_L \cdots \xrightarrow{?\#_m}_L q^{(\#,m)}_L. \]
We show that $v_1, \dots, v_n$ is a satisfying assignment for $\varphi$.
Since $P_L$ can read the symbol $\#_1$ during $\rho^2_L$, there is a contributor $P_\ell$ writing the symbol. But this can only happen if $P_\ell$ has stored a tuple $(x_i, v_i)$, written by $P_L$ during $\rho^1_L$, and if $x_i$ evaluated to $v_i$ satisfies clause $C_1$. Since all symbols $\#_1, \dots, \#_m$ are read by $P_L$, we get that each clause in $\varphi$ is satisfied by the evaluation chosen by $P_L$ during $\rho^1_L$.
⊓ ⊔
To prove Proposition 18, we change the above construction slightly. Let $\varphi_1, \dots, \varphi_I$ be the given 3-SAT instances, each pair equivalent under $R$, where $R$ is the polynomial equivalence relation from Theorem 10. Then each formula has the same number of clauses $m$ and uses the set of variables $\{x_1, \dots, x_n\}$. We assume $\varphi_\ell = C^\ell_1 \wedge \dots \wedge C^\ell_m$. First, we let the leader choose an evaluation of the variables $x_1, \dots, x_n$ as above. The contributors are used to store it. Then, instead of writing just $\#_j$, the contributors can write the symbols $\#^\ell_j$ to indicate that the currently stored variable with its evaluation satisfies clause $C^\ell_j$. The leader can now branch into one of the $I$ instances. It waits to read a string $\#^\ell_1 \dots \#^\ell_m$ for a certain $\ell \in [1..I]$. If it succeeds, it moves to its final state.
To realize the construction, we need to slightly change the structure of the leader, extend the data domain, and add more transitions to the contributors. The parameter $C$ does not change in this construction; it is still $O(n)$. Hence, the size restrictions of a cross-composition are met. The correctness of the construction is similar to Lemma 34; the only difference is the fact that $P_L$ also chooses the instance $\varphi_\ell$ that should be satisfied.
C Proofs for Section 3.3
We give the missing constructions and proofs for Section 3.3.
Formal Construction and Proof of Proposition 19
We first give the construction and proof for the W[1]-hardness of LCR(L).
Lemma 35. There is a $t \in \mathbb{N}$ so that $c_0 \to^*_{A^t} c$ with $c \in C_f$ if and only if there is a clique of size $k$ in $G$.
Proof. We first assume that $G$ contains a clique of size $k$; let its vertices be $v_1, \dots, v_k$. We construct a computation on $A^t$ with $t = k$ that leads from $c_0$ to a configuration $c$ in $C_f$. The program contains $k$ contributors, denoted by $P_1, \dots, P_k$. We proceed in three phases, as described above.
In the first phase, the leader writes the values $(v_1, 1), \dots, (v_k, k)$ to the memory. Contributor $P_i$ reads value $(v_i, i)$ and stores it in its state space. We get:
\[ c_0 \to^*_{A^t} (q^2_V, q^0_{(v_1,1)}, q^0_{(v_2,2)}, q^0_C, \dots, q^0_C, (v_2,2)) \to^*_{A^t} \cdots \xrightarrow{?(v_k,k)}_k (q^k_V, q^0_{(v_1,1)}, \dots, q^0_{(v_k,k)}, (v_k,k)) = c^0. \]
After reaching $c^0$, the leader starts the second phase. It writes $(v^\#_1, 1)$ and each contributor reads it:
\[
\begin{aligned}
c^0 &\xrightarrow{!(v^\#_1,1)}_L (q^1_{V^\#}, q^0_{(v_1,1)}, \dots, q^0_{(v_k,k)}, (v^\#_1,1)) \\
&\xrightarrow{?(v^\#_1,1)}_1 (q^1_{V^\#}, q^1_{(v_1,1)}, q^0_{(v_2,2)}, \dots, q^0_{(v_k,k)}, (v^\#_1,1)) \\
&\;\;\cdots \\
&\xrightarrow{?(v^\#_1,1)}_k (q^1_{V^\#}, q^1_{(v_1,1)}, \dots, q^1_{(v_k,k)}, (v^\#_1,1)) = c^1.
\end{aligned}
\]
Note that $P_1$ can read $(v^\#_1, 1)$ and move since it stores exactly $(v_1, 1)$. Any $P_i$ with $i \neq 1$ can read $(v^\#_1, 1)$ and continue its computation since $v_i \neq v_1$ and the two vertices share an edge. Similarly, one can continue the computation: $c^1 \to^*_{A^t} c^k = (q^k_{V^\#}, q^k_{(v_1,1)}, \dots, q^k_{(v_k,k)}, (v^\#_k, k))$. In the third phase, contributor $P_i$ writes the symbol $\#_i$ to the memory. The leader waits to read the complete string $\#_1 \dots \#_k$. This yields the following computation:
\[
\begin{aligned}
c^k &\xrightarrow{!\#_1}_1 (q^k_{V^\#}, q^f_C, q^k_{(v_2,2)}, \dots, q^k_{(v_k,k)}, \#_1) \xrightarrow{?\#_1}_L (q^1_\#, q^f_C, q^k_{(v_2,2)}, \dots, q^k_{(v_k,k)}, \#_1) \\
&\;\;\cdots \\
&\xrightarrow{?\#_k}_L (q^k_\#, q^f_C, \dots, q^f_C, \#_k) \in C_f.
\end{aligned}
\]
For the other direction, let a $t \in \mathbb{N}$ and a computation $\rho = c_0 \to^*_{A^t} c$ with $c \in C_f$ be given. We denote by $\rho_L$ the part of the computation that is carried out by the leader $P_L$. Then we can factor $\rho_L$ into $\rho_L = \rho_1.\rho_2.\rho_3$ with
\[ \rho_1 = q_0 \xrightarrow{!(v_1,1)}_L q^1_V \xrightarrow{!(v_2,2)}_L \cdots \xrightarrow{!(v_k,k)}_L q^k_V, \]
\[ \rho_2 = q^k_V \xrightarrow{!(w^\#_1,1)}_L q^1_{V^\#} \xrightarrow{!(w^\#_2,2)}_L \cdots \xrightarrow{!(w^\#_k,k)}_L q^k_{V^\#}, \]
\[ \rho_3 = q^k_{V^\#} \xrightarrow{?\#_1}_L q^1_\# \xrightarrow{?\#_2}_L \cdots \xrightarrow{?\#_k}_L q^k_\#. \]
We show that $w_i = v_i$ for any $i \in [1..k]$ and that $v_i \neq v_j$ for $i \neq j$. Furthermore, we prove that each two vertices $v_i, v_j$ share an edge. Hence, $v_1, \dots, v_k$ form a clique of size $k$ in $G$.
Since $P_L$ is able to read the symbols $\#_1, \dots, \#_k$ in $\rho_3$, there are at least $k$ contributors writing them. But a contributor can only write $\#_i$ in its computation if it reads (and stores) the symbol $(v_i, i)$ from $\rho_1$. Hence, there is at least one contributor storing $(v_i, i)$. We denote it by $P_{v_i}$.
The computation $\rho_2$ starts by writing $(w^\#_1, 1)$ to the memory. The contributors $P_{v_i}$ have to read it in order to reach a state where they can write the symbol $\#_i$. Hence, $P_{v_1}$ reads $(w^\#_1, 1)$. By the definition of the transition relation of $P_{v_1}$, this means that $w_1 = v_1$. Now let $P_{v_i}$ with $i \neq 1$. This contributor also reads $(w^\#_1, 1) = (v^\#_1, 1)$. By definition, this implies that $v_i \neq v_1$ and that the two vertices share an edge.
By induction, we get that $w_i = v_i$ for all $i$, that the $v_i$ are distinct, and that each two of the $v_i$ share an edge.
⊓ ⊔
To prove the W[1]-hardness of LCR(D), we go back to our idea of transmitting vertices in binary. Let $t = \log(|V|)$ and $\mathrm{bin} : V \to \{0,1\}^t$ be a binary encoding of the vertices. Instead of a single symbol $(v, i)$ with $v \in V$ and $i \in [1..k]$, the leader will write a string $\#.(\alpha_1, i).\#.(\alpha_2, i).\# \dots (\alpha_t, i).\#$ to the memory, where $t = \log(|V|)$, $\alpha_1.\alpha_2 \dots \alpha_t = \mathrm{bin}(v)$, and $\#$ is a special padding symbol. We need the padding in order to prevent the contributors from reading a symbol $(\alpha_j, i)$ multiple times. Note that the new data domain contains only $O(k)$ many symbols.
The idea of the program over the changed data domain is similar to the idea above: it proceeds in three phases. In the first phase, the leader chooses the vertices of a clique candidate. This is done by repeatedly writing a string $\#.(\alpha_1, i).\#.(\alpha_2, i).\# \dots (\alpha_t, i).\#$ to the memory, for each $i \in [1..k]$. Like above, the contributors non-deterministically decide to store a written vertex. To this end, a contributor that wants to store the $i$-th suggested vertex has a binary tree branching on the symbols $(0, i)$ and $(1, i)$. Leaves of the tree correspond to binary encodings of vertices; hence, a particular vertex can be stored in the contributor's states. Note that, as we did not assume $|V|$ to be a power of $2$, there might be leaves of the tree that do not correspond to encodings of vertices. If a computation reaches such a leaf, it will deadlock.
In the second phase, the leader again writes the binary encoding of $k$ vertices to the memory. But this time it uses a different set of symbols: instead of $0$ and $1$, the leader uses $0^\#$ and $1^\#$ to separate Phase two from Phase one. The contributors need to compare the suggested vertices as in the above construction. To this end, a contributor storing the vertex $(v, i)$ proceeds in $k$ stages. In a stage $j \neq i$ it can only read the encodings of those vertices which are connected to $v$. Hence, if the leader suggests a wrong vertex, the computation will deadlock. In stage $i$, the contributor can only read the encoding of the stored vertex $v$. This allows a verification of the clique as above.
The last phase is identical to the last phase of the above construction: the contributors write the symbols $\#_i$, while the leader waits to read the string $\#_1 \dots \#_k$. This constitutes a proper clique. The formal construction and proof are omitted as they are quite similar to the above case.
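For illustration, the padded string the leader writes for one vertex can be produced as follows; the encoding is an assumption.

```python
def padded_encoding(v, i, t):
    """Return #.(a_1,i).#.(a_2,i).# ... (a_t,i).# for vertex number v."""
    bits = format(v, 'b').zfill(t)    # bin(v) with t = log|V| bits
    out = ['#']
    for a in bits:
        out += [(a, i), '#']          # every bit is separated by padding
    return out
```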
D Proofs for Section 4.1
We give the missing constructions and proofs for Section 4.1.
Fig. 1: Leader thread $P_L$ (left) and contributor thread $P_C$ (right) over the data domain $D = \{a_0, a, b, c\}$. The only unsafe state is given by $F_L = \{q_4\}$.
Definition 4. A witness candidate $w \in E$ is valid if it satisfies the following properties: (1) first writes are unique; (2) the word $w$ encodes a run on $P_L$; (3) there are supportive computations on the contributors.
Theorem 8. LCR can be solved in time $O(\mathrm{Wit}_{SCC}(s, D, d) \cdot \mathrm{Valid}_{SCC}(L, D, C, d))$.
Proposition 9. LCR cannot be solved in time $2^{o(\sqrt{L \cdot D} \cdot \log(L \cdot D))}$ unless ETH fails.
Theorem 10. LCR(D, L) does not admit a polynomial kernel unless NP ⊆ coNP/poly.
Fig. 2: Leader thread $P_L$ (left) and contributor thread $P_C$ (right). The data domain is given by $D = \{a_0, a, b, c\}$ and the only unsafe state is $F_L = \{q_3\}$.
Fig. 3: Graph $G$ summarizing the computations of the program given in Figure 2. Self-loops and nodes not reachable from the initial node $v_0 = (q_0, a_0, \{p_0\})$ are omitted. We further omit the third component of nodes since it is clear from the context. Nodes that are marked red involve the unsafe state $q_3$ of the leader. The blue highlighted area shows the slice $G_{\{p_0\},\{p_0,p_1\}}$.
This is covered by the recurrence relation in Lemma 14: it branches over all $S \setminus \{p\}$ and hence collects all nodes that are reachable from an entry $T[S \setminus \{p\}]$.
Proposition 18. LCR(C) does not admit a polynomial kernel unless NP ⊆ coNP/poly.
Proposition 19. Both parameterizations, LCR(D) and LCR(L), are W[1]-hard.
Fig. 4: A binary encoding (right) of the numbers 1 up to 4 using two bits. First bits are marked blue if they are 0 and red if they are 1. The bit checker $P_{b_1}$ (left) focuses on the first bit. The label $\ell = 1, \ell = 3$ means that $P_{b_1}$ has transitions on $?(\ell, j, i, v)$ for $\ell = 1, 3$ and arbitrary values for $i$, $j$, and $v$. The blue marked states store that the first bit $b_1$ is 0; red marked states store that $b_1$ is 1.
Proposition 24. BSR is NP-hard even if both $s$ and $D$ are constant.
The proposition implies intractability: assume there is an FPT algorithm $A$ for BSR running in time $f(s, D) \cdot \mathrm{poly}(|x|)$, where $x$ denotes the input. Then BSR$'$, the variant of BSR where $s$ and $D$ are constant, can also be solved by $A$. In this case, the runtime of $A$ is $O(\mathrm{poly}(|x|))$ since $f(s, D)$ is a constant on every instance of BSR$'$. But this contradicts the NP-hardness of BSR$'$.
$q_f$ for some $q_f \in Q_L$. Moreover, the reads in $\gamma_i$ are restricted to $\{\bar c_1, \dots, \bar c_{i-1}\}$. From Requirement (3) we get for each $\bar c_i$ a computation of the contributor of the form $q^0_C \xrightarrow{\alpha_i !c_i} p_i$. We let $u_i = \alpha_i{\downarrow}_{R(D)}$ be the reads that occur along the computation. Then we also obtain a function $\mu_i$ that maps the positions of $u_i$ into the witness $w$.
of contributors that provide $c_i$ during the computation. Note that $|S(i)| = t(i)$. Given a configuration $c$ of $A^t$, we use $c \xrightarrow{\tau}_{S(i)} c'$ to denote that all contributors in $S(i)$ execute a transition $\tau$; this may happen in any order. The definition extends to sequences of transitions. Moreover, we write $c \xrightarrow{!c_i}_{S(i)} c'$ if exactly one contributor in $S(i)$ writes the value $c_i$. The corresponding contributor is then removed from $S(i)$ since it has provided $c_i$ and therefore fulfilled its duty.
are still satisfied. Moreover, Invariant (1) holds since no map $\mu_i$ points onto a position $x$ with $w[x] = \bot$.
Further, if we remove the last appearing SCC from $w$, every other SCC appearing there has a letter from $D \cup \{\bot\}$ appearing next to it. Hence, if we consider such a pair as a single cell and the symbols from $\bar D$ as single cells, there are at most $d + D$ many cells. Now, if we fix the positions and the values of the barred symbols in the sequence of cells, then each of the unoccupied cells ($d$ many of them) has at most $s \cdot (D + 1)$ choices. Hence, there are $(s \cdot (D+1))^d$ many such strings. For any fixed $v \in V$, there are $\binom{D+d}{D}$ many ways to pick $D$ positions from $D + d$ positions; this is upper bounded by $2^{D+d}$. Since we have $D^D$ many strings in $V$, we get that $\mathrm{Wit}_{SCC}(s, D, d) \le (s \cdot (D+1))^d \cdot D^D \cdot 2^{D+d}$.
any $i, j \in [1..k]$. The rule writes $\#_i$ to the memory, reflecting the correctness of the clique in row $i$. The leader $P_L$ is given by the tuple $P_L = (Op(D), Q_L, q^0_L, \delta_L)$ with set of states $Q_L = \{q^{(r,i)}_L, q^{(c,i)}_L, q^{(\#,i)}_L \mid i \in [1..k]\} \cup \{q^0_L\}$. Unlike for the contributor, the state $q^{(r,i)}_L$ indicates that the last written symbol was $\mathrm{row}(i)$, the row of the $i$-th vertex guessed by $P_L$. State $q^{(c,i)}_L$ reflects that the last written symbol was the column-symbol belonging to the $i$-th vertex. The remaining states $q^{(\#,i)}_L$ are used to receive the verification symbols and count up to $k$. The transition relation $\delta_L$ is described by the rules below. For transmitting the row- and column-symbols we have $q^{(c,i-1)}_L \xrightarrow{!\mathrm{row}(i)}_L q^{(r,i)}_L$ and $q^{(r,i)}_L \xrightarrow{!\mathrm{col}(j)}_L q^{(c,i)}_L$, where $i, j \in [1..k]$ and $q^{(c,0)}_L = q^0_L$. After sending $k$ row- and column-symbols, we receive the symbols $\#_i$ via the transitions $q^{(\#,i-1)}_L \xrightarrow{?\#_i}_L q^{(\#,i)}_L$, where $q^{(\#,0)}_L = q^{(c,k)}_L$.
and $\{a_0\}$. Thus, we have that $D = O(\log(I) + n + m)$, which does not exceed the bounds of a cross-composition. The leader is defined by the tuple $P_L = (Op(D), Q_L, q^{(b,0)}_L, \delta_L)$ with states $q^{(b,\ell)}_L$, $q^{(x,i)}_L$, and $q^{(\#,j)}_L$ for $\ell \in [0..\log(I)]$, $i \in [1..n]$, and $j \in [1..m]$. Hence, we have $L = O(\log(I) + n + m)$. The transition relation is defined by the following rules. For transmitting the bits in the first phase, we have for each $\ell \in [1..\log(I)]$ the transition $q^{(b,\ell-1)}_L \xrightarrow{!(u,\ell)}_L q^{(b,\ell)}_L$ with $u \in \{0,1\}$. For choosing the evaluation of variables in the second phase, we have for $i \in [1..n]$ and $v \in \{0,1\}$ the transition $q^{(x,i-1)}_L \xrightarrow{!(x_i,v)}_L q^{(x,i)}_L$, where $q^{(x,0)}_L = q^{(b,\log(I))}_L$ connects the phases. In the third phase, $P_L$ wants to read the symbols $\#_j$. Thus, we get for any $j \in [1..m]$ the transition $q^{(\#,j-1)}_L \xrightarrow{?\#_j}_L q^{(\#,j)}_L$, where $q^{(\#,0)}_L = q^{(x,n)}_L$. We set $F_L = \{q^{(\#,m)}_L\}$. The contributor $P_C$ is defined by $P_C = (Op(D), Q_C, q_\varepsilon, \delta_C)$. The set of states $Q_C$ is the union of $\{q_w \mid w \in \{0,1\}^{\le \log(I)}\}$, $\{q_{(ch,\ell)} \mid \ell \in [1..I]\}$, and $\{q^\ell_{(x_i,v)} \mid \ell \in [1..I], i \in [1..n], v \in \{0,1\}\}$.
The leader thread is the tuple $P_L = (Op(D), Q_L, q_1, \delta_L)$, where the set of states $Q_L$ is the union of $\{q_i \mid i \in [1..r+1]\}$, $\{q^{(i,j)}_S \mid S \in F, j \in [0..|S|-1], i \in [1..r]\}$, and $\{q^i_\# \mid i \in [1..n]\}$. Recall that $n = |U|$. The states $q_i$ are needed to choose a set $S \in F$. Then the $q^{(i,j)}_S$ are used to iterate over the elements in $S$. For the final phase in $P_L$, the states $q^i_\#$ are needed to read all elements $u^\#$ for $u \in U$. The transition relation $\delta_L$ contains the following rules. For choosing a set, we have transitions of the form $q_i \xrightarrow{\varepsilon} q^{(i,0)}_S$ for each $S \in F$ and $i \in [1..r]$. Iterating through a set $S = \{v_1, \dots, v_{|S|}\}$ is done via the transitions $q^{(i,j)}_S \xrightarrow{!v_{j+1}} q^{(i,j+1)}_S$ for $j \in [0..|S|-2]$. For the last element, we have a transition $q^{(i,|S|-1)}_S \xrightarrow{!v_{|S|}} q_{i+1}$ that enters the new phase. Fix an order on $U = \{u_1, \dots, u_n\}$. The final check is realized by the transitions $q_{r+1} \xrightarrow{?u^\#_1} q^1_\#$ and $q^i_\# \xrightarrow{?u^\#_{i+1}} q^{i+1}_\#$ for $i \in [1..n-1]$. The leader only reaches a final state after the last check: $F_L = \{q^n_\#\}$. A contributor is defined by the tuple $P_C = (Op(D), Q_C, p_0, \delta_C)$, where the set of states is given by $Q_C = \{p_u \mid u \in U\} \cup \{p_0\}$. The transition relation contains rules to store elements of $U$ in the state space: $p_0 \xrightarrow{?u} p_u$ for each $u \in U$. Once an element is stored, the contributor can write it to the memory: $p_u \xrightarrow{!u^\#} p_u$. Correctness is proven in the following lemma.
Proposition 33. Unless ETH fails, LCR cannot be solved in time $2^{o(C)}$.
for $i \in [1..n]$ and $v \in \{0,1\}$. And we have rules for verifying that the guessed evaluation is correct: $q^{(\#,j-1)}_L \xrightarrow{?\#_j}_L q^{(\#,j)}_L$ for $j \in [1..m]$, where $q^{(\#,0)}_L = q^{(x,n)}_L$. The contributor $P_C$ is defined by $P_C = (Op(D), Q_C, q_0, \delta_C)$ with set of states $Q_C = \{q_{(x_i,v)} \mid i \in [1..n], v \in \{0,1\}\} \cup \{q_0\}$. Then we have $C = 2n + 1 = O(n)$. The transition relation contains the following rules. For storing a read evaluation we have $q_0 \xrightarrow{?(x_i,v)}_C q_{(x_i,v)}$ for $i \in [1..n]$, $v \in \{0,1\}$. And for satisfying clauses, we get a rule $q_{(x_i,v)} \xrightarrow{!\#_j}_C q_{(x_i,v)}$ if variable $x_i$ evaluated to $v$ satisfies clause $C_j$.
We denote by $V$ the vertices of $G$ and by $E$ the edges. Set the data domain $D = \{(v, i), (v^\#, i), \#_i \mid v \in V, i \in [1..k]\} \cup \{a_0\}$. The leader $P_L$ is given by the tuple $P_L = (Op(D), Q_L, q_0, \delta_L)$ with set of states $Q_L = \{q^i_V, q^i_{V^\#}, q^i_\# \mid i \in [1..k]\} \cup \{q_0\}$. The transition relation $\delta_L$ is defined by the following rules. (First phase) For each $i \in [1..k]$ and $v \in V$, we add the rule $q^{i-1}_V \xrightarrow{!(v,i)}_L q^i_V$. We identify the state $q^0_V$ with $q_0$. (Second phase) For each $i \in [1..k]$ and $v \in V$, we add $q^{i-1}_{V^\#} \xrightarrow{!(v^\#,i)}_L q^i_{V^\#}$. Here, we denote by $q^0_{V^\#}$ the state $q^k_V$. (Third phase) For each $i \in [1..k]$, we add $q^{i-1}_\# \xrightarrow{?\#_i}_L q^i_\#$. Here, we assume $q^0_\# = q^k_{V^\#}$. Further, we set $F_L = \{q^k_\#\}$ to be the final state of interest. The contributor is defined by $P_C = (Op(D), Q_C, q^0_C, \delta_C)$. The states are given by $Q_C = \{q^j_{(v,i)} \mid v \in V, i \in [1..k], j \in [0..k]\} \cup \{q^0_C, q^f_C\}$. We define the transition relation by the following rules. (First phase) For each $i \in [1..k]$ and $v \in V$, we add $q^0_C \xrightarrow{?(v,i)}_C q^0_{(v,i)}$. (Second phase) For $i \in [1..k]$, $j \in [1..k]$, and $v, w \in V$, we have $q^{j-1}_{(v,i)} \xrightarrow{?(w^\#,j)}_C q^j_{(v,i)}$ if (1) $j = i$ and $v = w$, or if (2) $i \neq j$, $v \neq w$, and there is an edge between $v$ and $w$ in $E$. (Third phase) For any $i \in [1..k]$ and $v \in V$, add the rule $q^k_{(v,i)} \xrightarrow{!\#_i}_C q^f_C$. The correctness is shown in the next lemma.
The problem is called parameterized reachability in these works. We renamed it to avoid confusion with parameterized complexity.
Formal Construction and Proof of Proposition 20The idea here is to reduce BSR to the reachability problem on an NFA N of size at most O(P t · poly(k, n, D)). The states of the NFA N are the product of the states of each P i along with the current stage number, currently active process and the last value written to the memory i.e. it is of the formThe currently active process records the information about who is allowed to write in that stage. The initial state of N is given by q 0 N = (q 0 1 , . . . q 0 t , 0, 0, a 0 ). From any state of the form (q 1 , . . . , q n , i, j, a), i ∈ [0..t], j ∈ [0..k], for all moves of the form q ℓ a?/ǫ − −− → P ℓ q ′ ℓ for some ℓ ∈ [1..t], we have a corresponding move of the form (q 1 , . . . , q n , i, j, a) − → N (q 1 , . . . , q ℓ−1 , q ′ ℓ , . . . , q n , i, j, a), this freely simulates any read move within a stage.Similarly from any state of the form (q 1 , . . . , q n , i, j, a), a ∈ Σ, i ∈ [1..t], j ∈ [1..k], for all moves of the form q i b! − → Pi q ′ i , we have a corresponding move of the form (q 1 , . . . , q n , i, j, a) − → N (q 1 , . . . , q i−1 , q ′ i , . . . , q n , i, j, b). These set of transitions allow the currently active process P i during any stage to write values to shared memory.Finally from any state of the form (q 1 , . . . , q n , i, j, a), i ∈ [0..t], j ∈ [0..k−1] we have moves of the form (q 1 , . . . , q n , i, j, a) − → N (q 1 , . . . , q n , m, j + 1, a), for all m ∈ [1..t]. The correctness of such a construction is guaranteed by the following easy to see lemma. The complexity follows from the fact that reachability is quadratic and size of the automata that we construct is at-most O(P t · poly(k, n, D)).Proof. (⇒) For this direction, we will assume an ℓ-stage computation of the form c 0 → * A c and show that there is a computation in N of the form q 0.ℓ], let p i be the writer corresponding to the stage π i . It is easy to see that corresponding to every move in the subcomputation of the form (q 1 , . . . , q t , a)Given a configuration c = (q 1 , . . . , q t , a), we let µ(c, p, j) = (q 1 , . . . , q t , p, j, a). Given a computation π, we let µ(π, x, y) to be the sequence obtained by replacing each configuration c occurring in π by µ(c, x, y). It is easy to see that µ(π 0 , 0, 0), µ(π 1 , p 1 , 1), . . . , µ(π ℓ , p ℓ , ℓ) are all valid sub-computations in N . This is because we have for every move in the program, a corresponding move in the G we construct. Combining these sub-computations will give us the required run in N . For combining these sub-computations, we use the transition of the form (q 1 , . . . , q n , i, j, a) − → N (q 1 , . . . , q n , m, j + 1, a), for all m ∈ [1..t] that was finally added in the construction.(⇐) For this direction, we will assume a computation of the form π = q 0 N → * N q N , where q N = (q 1 , . . . , q n , i, ℓ, a) for some i ∈ [1..t], a ∈ Σ. Clearly such a computation can be split as followsDefine π 0 = (q 0 1 , . . . , q 0 t , 0, 0, r) → N (q 1 1 , . . . , q 1 t , 0, 0, a 1 ) and computation π j = (q j 1 , . . . , q j t , i j , j, a j ) → N (q j+1 1 , . . . , q j+1 t , i j , j, a j+1 ). It is easy to see that corresponding to every move of the form (q 1 , . . . , q n , i, j, a) − → N (q ′ 1 , . . . , q ′ n , i, j, a ′ ), there is a move of the form (q 1 , . . . , q n , a) − → (q ′ 1 , . . . , q ′ n , a ′ ) such that it is either an read or internal move of some process or a write move of process P i . 
Notice that the transition in G was added because of existence of one such move in the program. Using this fact, it is easy to see that corresponding to each π i , i ∈ [0..ℓ], there is a 1-stage sub-computation in A. Concatenating these sub-computations will now give us the required run.⊓ ⊔Formal Construction and Proof of Proposition 21For the formal construction, we define for i ∈ [1..k] the NFA P i to be the tupleThe states q ij are used to store one of the k vertices of the i-th row, the states q ℓ ij are needed to check the edge relations to other rows and to perform the equality check. The transition relation δ i is given by the following rules: For choosing and storing a vertex of the i-th row we have q 0For checking the edge relations to vertices from a different row, we have forthere is an edge between (i, j) and (ℓ, m) in G. Note that we identify q ij as q 0 ij . Finally, to test the equivalence in the case of the same row we have.k]. The writer P ch is given by P ch = (Op(D ), Q, q 0 , δ), where Q = {q 0 , . . . , q k } and δ is given by the rules q i−1.k] , P ch )) and the set of configurations we want to reach asThe correctness of the construction follows by the lemma below.Proof. First note that any computation in A is a 1-stage computation since the only thread that can write to the memory is P ch .Let a clique of size k in G be given. It consists of the vertices (1, j 1 ), . . . , (k, j k ). Then it is easy to construct a computation of A starting in (q 0 1 , . . . , q 0 k , q 0 , γ) and ending in (q k 1j1 , . . . , q k kj k , q k , (k, j k )) ∈ C f : The system just guesses the right vertices (1, j 1 ) up to (k, j k ) and then performs the edge-tests which are all positive since the vertices form a clique.For the other direction, let a computation ρ leading to (q k 1j1 , . . . , q k kj k , q 0 , (k, j k )) be given. Then we show that the vertices (1, j 1 ), . . . , (k, j k ) form a clique. Since the P i can only start their computations on the initial memory symbol a 0 , they have to perform a step before P ch changes the memory content. Thus, we can split ρ into ρ = ρ 1 .ρ 2 where ρ 1 contains only moves of the P i on reading a 0 , in any order. We may assume the following form for ρ 1 :. . .Note that the computation ρ 1 corresponds to the choice of (1, j 1 ), . . . , (k, j k ) as a clique-candidate. After ρ 1 , the thread P ch needs to write the symbol (1, j 1 ) to the memory since otherwise, P 1 would deadlock. But then each P i with i = 1 needs to do a step on reading (1, j 1 ) and this only happens if (i, j i ) and (1, j 1 ) share an edge. Then the computation ρ 2 goes on with P ch writing (2, j 2 ) since otherwise, P 2 would deadlock. The other threads again perform a verification step on reading (2, j 2 ). Since the computation reaches the configuration (q k 1j1 , . . . , q k kj k , q 0 , (k, j k )) in the end, we can ensure that all chosen vertices indeed share an edge.⊓ ⊔Formal Construction and Proof of Theorem 22Let ϕ 1 , . . . , ϕ I be given 3-SAT-instances, each two equivalent under R. We assume that each ϕ ℓ has the form: ϕ ℓ = C ℓ 1 ∧ · · · ∧ C ℓ m and the set of variablesIf the requested variable is not x i , the thread P xi just performs a counting step: q j−1.m], and v ′ ∈ {0, 1}.Next, we introduce the writer-thread P w = (Op(Σ), Q w , q 0 w , δ w ) with set of states Q w = {q 0 w , . . . , q m w }. Thus, we have |Q w | = m + 1. The writer picks m clauses of probably different instances that need to be satisfied. 
To this end, it will not only guess the clause but also the instance that contains the clause, the variable that should satisfy it, and the evaluation of the variable. This will then be discarded or verified by the variable-threads. The transitions that we need in $\delta_w$ are $q^{j-1}_w \xrightarrow{(\ell,j,i,v)!} q^j_w$ for $\ell \in [1..I]$, $j \in [1..m]$, $i \in [1..n]$, and $v \in \{0, 1\}$. Writing $(\ell, j, i, v)$ to the memory reflects the claim that clause $C^\ell_j$ gets satisfied by variable $x_i$ under evaluation $v$.

The last type of threads that we introduce are the bit-checkers. For each $b \in [1..\log(I)]$ we define the thread $P_b = (Op(\Sigma), Q_b, q^0_b, \delta_b)$ with states $Q_b = \{q^0_b\} \cup \{q^j_{(b,u)} \mid j \in [1..m],\ u \in \{0, 1\}\}$. Hence, we have that $|Q_b| = 2m + 1$. The task of bit-checker $P_b$ is to verify that along the instances $\varphi_{\ell_1}, \ldots, \varphi_{\ell_m}$ guessed by the writer, the $b$-th bit of $bin(\ell_j)$ for $j \in [1..m]$ does not change. To this end, we construct $\delta_b$ the following way. Initially, $P_b$ stores the $b$-th bit of the first guessed instance: we add for $\ell \in [1..I]$, $i \in [1..n]$, and $v \in \{0, 1\}$ the rule $q^0_b \xrightarrow{?(\ell,1,i,v)} q^1_{(b,u)}$, where $u$ is the $b$-th bit of $bin(\ell)$. For the later steps, $P_b$ verifies that this bit does not change: for $j \in [2..m]$ we add $q^{j-1}_{(b,u)} \xrightarrow{?(\ell,j,i,v)} q^j_{(b,u)}$ whenever the $b$-th bit of $bin(\ell)$ is $u$. The program of interest is $A = (\Sigma, a_0, ((P_{x_i})_{i \in [1..n]}, P_w, (P_b)_{b \in [1..\log(I)]}))$.

We want the writer and the bit-checkers to perform exactly $m$ steps and the variable-threads to move exactly $m + 1$ steps. Intuitively, this amounts to the satisfiability of $m$ clauses that all belong to the same instance. Hence, the set of configurations $C_f$ that we want to reach is the following: $C_f = \{(q^m_w, q^m_{(x_1,v_1)}, \ldots, q^m_{(x_n,v_n)}, q^m_{(1,u_1)}, \ldots, q^m_{(\log(I),u_{\log(I)})}, a) \mid v_i, u_\ell \in \{0, 1\},\ a \in \Sigma\}$.

Since $P_w$ is the only thread which is allowed to write, we are interested in reaching $C_f$ within a 1-stage computation. Hence, the BSR-instance of interest is the tuple $(A, C_f, 1)$. Note that the parameters obey the bounds of a cross-composition: $P = 2(m + 1) + 1$ and $t = 1 + n + \log(I)$. It is thus left to show that the above construction is correct. This is proven in the following lemma.

Lemma 38. There is a 1-stage computation $c_0 \rightarrow^*_A c$ for a $c \in C_f$ if and only if there is an $\ell \in [1..I]$ such that $\varphi_\ell$ is satisfiable.

Proof. First assume that there is an $\ell \in [1..I]$ such that $\varphi_\ell$ is satisfiable. Let $v_1, \ldots, v_n$ be the evaluation of the variables $x_1, \ldots, x_n$ that satisfies $\varphi_\ell$ and let $bin(\ell) = u_1 \ldots u_{\log(I)}$ be the binary representation of $\ell$. We construct a 1-stage computation of $A$ from $c_0$ to the configuration $(q^m_w, q^m_{(x_1,v_1)}, \ldots, q^m_{(x_n,v_n)}, q^m_{(1,u_1)}, \ldots, q^m_{(\log(I),u_{\log(I)})}, (\ell, m, x_z, v_z))$, where $x_z$ is a variable in $C^\ell_m$. The computation starts with choosing the right evaluation for the variables: each $P_{x_i}$ performs the step $q^0_{x_i} \xrightarrow{?a_0} q^0_{(x_i,v_i)}$. This leads to the computation $c_0 \rightarrow^*_A (q^0_w, q^0_{(x_1,v_1)}, \ldots, q^0_{(x_n,v_n)}, q^0_1, \ldots, q^0_{\log(I)}, a_0) = c^0$. Then $P_w$ writes the tuple $(\ell, 1, i', v_{i'})$ to the memory, where $x_{i'}$ is a variable in clause $C^\ell_1$. Furthermore, $x_{i'}$ evaluated to $v_{i'}$ satisfies the clause. This is read by all $P_{x_i}$ and each performs the step $q^0_{(x_i,v_i)} \xrightarrow{?(\ell,1,i',v_{i'})} q^1_{(x_i,v_i)}$. Note that due to the definition of $\delta_{x_i}$, in both cases, $i = i'$ and $i \neq i'$, the move can indeed be done. The bit-checkers $P_b$ also read the tuple. Each $P_b$ does the following step: $q^0_b \xrightarrow{?(\ell,1,i',v_{i'})} q^1_{(b,u_b)}$. If we put the individual moves together, this leads to a new configuration $c^0 \rightarrow^*_A (q^1_w, q^1_{(x_1,v_1)}, \ldots, q^1_{(x_n,v_n)}, q^1_{(1,u_1)}, \ldots, q^1_{(\log(I),u_{\log(I)})}, (\ell, 1, i', v_{i'})) = c^1$. Similarly, we can construct a computation that leads to a configuration $c^2$. If we go on with the construction, we get a computation that leads to $c$.

For the other direction, let $\rho$ be a 1-stage computation of $A$, ending in the configuration $(q^m_w, q^m_{(x_1,v_1)}, \ldots, q^m_{(x_n,v_n)}, q^m_{(1,u_1)}, \ldots, q^m_{(\log(I),u_{\log(I)})}, a) = c$. Let $\ell \in [1..I]$ be such that $bin(\ell) = u_1 \ldots u_{\log(I)}$. We show that $\varphi_\ell$ is satisfiable.
More precisely, $\varphi_\ell$ is satisfied under each $x_i$ evaluating to $v_i$.

Since $P_{x_i}$ can start its computation only on reading $a_0$, we get that each $P_{x_i}$ performs the step $q^0_{x_i} \xrightarrow{?a_0} q^0_{(x_i,v_i)}$. Note that the chosen $v_i$ is indeed the one appearing in $c$. Hence, we get as an initial part of $\rho$ the computation $c_0 \rightarrow^*_A (q^0_w, q^0_{(x_1,v_1)}, \ldots, q^0_{(x_n,v_n)}, q^0_1, \ldots, q^0_{\log(I)}, a_0) = c^0$. The computation can then only continue if $P_w$ changes the content of the memory. The thread guesses and writes $(\ell, 1, i', v_{i'})$ to the memory. It has to choose the index $\ell$, and thus the instance $\varphi_\ell$, since otherwise there would be a bit-checker $P_b$ not reaching $q^m_{(b,u_b)}$. The bit-checkers $P_b$ read $(\ell, 1, i', v_{i'})$ and store $u_b$ in their states: $q^0_b \xrightarrow{?(\ell,1,i',v_{i'})} q^1_{(b,u_b)}$. Now all $P_{x_i}$ have to perform a step since the computation does not deadlock. This means that especially $P_{x_{i'}}$ performs a step on reading $(\ell, 1, i', v_{i'})$. By definition, this is only possible if $x_{i'}$ is a variable that, evaluated to $v_{i'}$, satisfies clause $C^\ell_1$. Hence, we have that under evaluating each $x_i$ to $v_i$, clause $C^\ell_1$ is satisfied. If we combine all the moves done, we get another part $\rho_1$ of the computation $\rho$ that leads from $c^0$ to the configuration $(q^1_w, q^1_{(x_1,v_1)}, \ldots, q^1_{(x_n,v_n)}, q^1_{(1,u_1)}, \ldots, q^1_{(\log(I),u_{\log(I)})}, (\ell, 1, i', v_{i'})) = c^1$. Similarly, one proves that the second part of $\rho$, the computation $\rho_2$, shows that $C^\ell_2$ is satisfiable under evaluating each $x_i$ to $v_i$. Hence, by induction we get that $\varphi_\ell$ is satisfiable. $\square$

E Proofs for Section 4.2

We give the missing constructions and proofs for Section 4.2.

Proof of Proposition 24

We show that a memory domain of constant size and a single stage suffice to reduce 3-SAT to BSR. Let $\varphi = C_1 \wedge \cdots \wedge C_m$ be a formula in CNF with at most three literals per clause $C_i$. Let the variables of $\varphi$ be $x_1, \ldots, x_n$. Our goal is to construct a program $A = (D, a_0, (P_1, \ldots, P_n, P_v))$ such that $c_0$ can reach an unsafe configuration of $A$ in a single stage if and only if $\varphi$ has a satisfying assignment. Moreover, the size of the domain in the construction will be $|D| = 4$ and is thus constant. We set $D = \{a_0, \#, 0, 1\}$. For communication over this domain, we encode literals of $\varphi$ in binary. Let $bin_\#(i) \in (\{0,1\}.\#)^{\log(n)+1}$ be the binary encoding of $i$ into $\log(n) + 1$ bits where each bit is separated by the symbol $\#$. For instance, we get $bin_\#(2) = 0\#0\#1\#0\#$ in the case $n = 8$. Given a literal $\ell$ of $\varphi$, we encode it by $Enc(\ell) = v\#\,bin_\#(i)$, where $x_i$ is the variable in $\ell$ and $v$ its evaluation.

We have a separate thread $P_v$, called the verifier. It iterates over the clauses and for each clause $C_i = \ell_{i1} \vee \ell_{i2} \vee \ell_{i3}$, the thread picks a literal and writes $Enc(\ell_{i1})$, $Enc(\ell_{i2})$, or $Enc(\ell_{i3})$ to the shared memory. To this end, it has states $\{q_1, \ldots, q_m\}$ and sequences of transitions $q_{i-1} \xrightarrow{!Enc(\ell_{ij})} q_i$ for $j \in \{1, 2, 3\}$. The notation $!Enc(\ell_{ij})$ indicates that the whole encoding of $\ell_{ij}$ is written to the shared memory. This can be easily achieved by adding $\log(n) + 1$ many intermediary states. Hence, $P_v$ has $\mathcal{O}(m \cdot \log(n))$ many states in total and writes the encoding of exactly $m$ literals to the shared memory.

For each variable $x_i$, we have a thread $P_i$. Initially, on reading $a_0$, the thread $P_i$ chooses the evaluation for variable $x_i$. It stores the evaluation. To this end, the thread has states $\{p^i_{(v,j)} \mid v \in \{0, 1\} \text{ and } j \in [1..m]\}$.
The $m$ copies are needed to count the number of literal encodings that were written to the memory by the verifier.

For each literal $\ell \notin \{x_i, \neg x_i\}$, thread $P_i$ has sequences of transitions $p^i_{(v,j-1)} \xrightarrow{?Enc(\ell)} p^i_{(v,j)}$. Since $\ell$ contains a different variable than $x_i$, the thread $P_i$ does not need to check whether the evaluation in $\ell$ matches the stored one. These transitions only ensure that $P_i$ can keep track of the number of encodings that were already written by the verifier. Note that, as above, the sequences can be realized by adding intermediary states. For literals containing $x_i$, the thread $P_i$ needs to check whether the evaluation of the literal matches the stored evaluation. This can be realized as follows. If $P_i$ decides to store evaluation $v$ for $x_i$, then only encodings of the form $v\#\,bin_\#(i)$ can be read. Hence, $P_i$ has the transition sequences $p^i_{(v,j-1)} \xrightarrow{?v\# bin_\#(i)} p^i_{(v,j)}$. Note that if the verifier $P_v$ writes a literal $\ell$ to the memory which contains $x_i$ but has the wrong evaluation, $P_i$ is not able to read the encoding of $\ell$ and deadlocks. Moreover, note the importance of the symbol $\#$: it avoids repeated reading of the same symbol, which could cause false encodings.

By construction, we get that $\varphi$ has a satisfying assignment if and only if all threads reach their last state. If $\varphi$ has a satisfying assignment, the threads $P_i$ choose exactly this assignment and store it. Now the verifier $P_v$ chooses for each clause a literal that satisfies it, and the $P_i$ can read the encodings of these literals and terminate. For the other direction, assume that all $P_i$ and $P_v$ reach their last state. Since there is no loss in the communication between the threads, the assignment chosen by the $P_i$ is satisfying for $\varphi$. This is due to the fact that all encodings of literals chosen by $P_v$ can be read without a $P_i$ getting stuck. Since $P_v$ is the only thread that writes to the shared memory, the computation has only one stage.
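To make the encoding concrete, the following is a small illustrative sketch of $bin_\#$ and $Enc$; the function names and string-valued output are our own simplification, since in the construction itself the symbols of an encoding are written to the shared memory one at a time.

```python
# Minimal sketch of the literal encoding from the reduction (names are ours).
import math

def bin_hash(i, n):
    """bin_#(i): i in log(n)+1 bits, each bit followed by the separator '#'."""
    width = int(math.log2(n)) + 1
    return "".join(bit + "#" for bit in format(i, f"0{width}b"))

def enc(var_index, value, n):
    """Enc(l) = v# bin_#(i) for a literal over x_i with evaluation v in {0, 1}."""
    return f"{value}#" + bin_hash(var_index, n)

assert bin_hash(2, 8) == "0#0#1#0#"   # the example from the text (n = 8)
assert enc(2, 1, 8) == "1#0#0#1#0#"   # encoding of the positive literal x_2
```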
| [] |
[
"Cognitive science as a source of forward and inverse models of human decisions for robotics and control",
"Cognitive science as a source of forward and inverse models of human decisions for robotics and control"
] | [
"Mark K Ho \nDepartment of Computer Science\nPrinceton University\nPrincetonNJUSA\n",
"Thomas L Griffiths \nDepartment of Computer Science\nPrinceton University\nPrincetonNJUSA\n\nDepartment of Psychology\nPrinceton University\nPrincetonNJUSA\n"
] | [
"Department of Computer Science\nPrinceton University\nPrincetonNJUSA",
"Department of Computer Science\nPrinceton University\nPrincetonNJUSA",
"Department of Psychology\nPrinceton University\nPrincetonNJUSA"
] | [] | Those designing autonomous systems that interact with humans will invariably face questions about how humans think and make decisions. Fortunately, computational cognitive science offers insight into human decision-making using tools that will be familiar to those with backgrounds in optimization and control (e.g., probability theory, statistical machine learning, and reinforcement learning). Here, we review some of this work, focusing on how cognitive science can provide forward models of human decision-making and inverse models of how humans think about others' decision-making. We highlight relevant recent developments, including approaches that synthesize black-box and theory-driven modeling, accounts that recast heuristics and biases as forms of bounded optimality, and models that characterize human theory of mind and communication in decision-theoretic terms. In doing so, we aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research. | 10.1146/annurev-control-042920-015547 | [
"https://arxiv.org/pdf/2109.00127v1.pdf"
] | 237,371,653 | 2109.00127 | 276a457ccaabaa410c25d31cb83e54a582688c9e |
Cognitive science as a source of forward and inverse models of human decisions for robotics and control
Mark K Ho
Department of Computer Science
Princeton University
Princeton, NJ, USA
Thomas L Griffiths
Department of Computer Science
Princeton University
Princeton, NJ, USA
Department of Psychology
Princeton University
Princeton, NJ, USA
Cognitive science as a source of forward and inverse models of human decisions for robotics and control
cognitive science, robotics, psychology, decision-making, resource rationality
Those designing autonomous systems that interact with humans will invariably face questions about how humans think and make decisions. Fortunately, computational cognitive science offers insight into human decision-making using tools that will be familiar to those with backgrounds in optimization and control (e.g., probability theory, statistical machine learning, and reinforcement learning). Here, we review some of this work, focusing on how cognitive science can provide forward models of human decision-making and inverse models of how humans think about others' decision-making. We highlight relevant recent developments, including approaches that synthesize black-box and theory-driven modeling, accounts that recast heuristics and biases as forms of bounded optimality, and models that characterize human theory of mind and communication in decision-theoretic terms. In doing so, we aim to provide readers with a glimpse of the range of frameworks, methodologies, and actionable insights that lie at the intersection of cognitive science and control research.
Introduction
As robots and other automated systems are beginning to become more integrated into human lives, engineers face a new problem: designing these systems to effectively and safely interact with people. Part of the challenge is that humans are themselves autonomous agents, making decisions and acting in ways that introduce potentially unpredictable dynamics into the environment. Even more challenging, humans change their behavior in response to the actions of the system they are interacting with, meaning that the engineer has to consider not just how to predict and interpret human behavior, but how the behavior of the system that they are designing might be predicted and interpreted by humans in turn.
As cognitive scientists, we have had many enjoyable conversations with engineers about how to solve these problems. Typically these conversations begin with an email or a knock on the door requesting the most up-to-date model of human behavior in a format that can be easily integrated into a control-theoretic framework. We disappoint our colleagues by telling them that unfortunately no such model exists, but then excite them with how much progress has been made towards this goal and all of the research possibilities that this entails. Our intent in this article is to offer our readers a chance to follow the same emotional trajectory, highlighting the ways in which we think contemporary cognitive science can provide tools that may be useful to engineers designing systems that interact with humans and identifying some of the exciting possibilities for future research in this area.
Speaking broadly, computational models developed by cognitive scientists offer solutions to at least two of the problems that engineers face (Figure 1). First, they provide forward models that can be used to generate predictions about human behavior based on assumptions about the beliefs, goals, and desires of human agents. These models can be useful for anticipating what a person will do in a given situation and hence provide a way to enrich modeling of the environment in which an automated system operates. These forward models can also be used as an ingredient in inverse models, which infer the beliefs, goals, and desires of humans based on their actions -a necessary step for any agents that seek to coordinate their behavior with humans, collaborate effectively, provide assistance, or cooperate as they pursue common goals.
In addition to this, cognitive science also offers insight into the way that humans solve exactly this inverse problem. People routinely make inferences about the beliefs, goals, and desires of other people, a process that has been extensively studied by psychologists [1,2,3] and is increasingly captured in computational models [4,5,6]. These models are useful both as a source of insight for engineers seeking to re-create this capacity to draw inferences from the actions of others, but also as a tool for anticipating how the actions of an automated system will be interpreted by a human [7,8]. Research in human-robot interaction has begun to make use of these ideas, designing systems that act in a way that is more legible to humans [9,10,11], a line of work that we will also review.
Figure 1: Forward and inverse models from cognitive science. In this paper, we review work in cognitive science on forward models of how humans make decisions and inverse models that humans use to reason about other agents. Cognitive scientists are increasingly using computational tools such as probability theory, reinforcement learning, and statistical machine learning to characterize forward and inverse models of human decision-making. This provides opportunities for cross-talk and collaboration between cognitive science and control research.

In considering these two ways that computational models of cognition can be used by engineers designing automated systems -as both forward and inverse models -we will also highlight the ways in which recent work in computational cognitive science has emphasized formalisms that will be very familiar to researchers coming from a background of optimization and control. Cognitive scientists increasingly use ideas from probability theory, statistical machine learning, and reinforcement learning in specifying models of human cognition [12].
This creates an opportunity to develop a common language for describing the behavior of both humans and machines, and supports easier integration of insights from cognitive science into control.
Given the vast scope of human behavior, it is necessary to limit our review to a specific subdomain of human activity. To that end, we will focus on models of human decision-making, broadly construed. The decisions people make reveal their preferences and determine their actions, key to the design of interactive systems. They also provide a rich territory for researchers, with formal models of human decision-making going back almost 300 years [13].
Our goal is to summarize the current state of the art in predicting and interpreting human decisions in a form that is immediately actionable by designers of automated systems.
The remainder of the paper is split into two parts. In the first part we focus on forward models, considering the criteria for useful computational models of human decision-making and summarizing recent research that aims to satisfy these criteria. We then turn to inverse models, describing the problem of inverse inference, summarizing the key ideas from the psychological literature, explaining how this has been translated into formal models, and highlighting some of the ways in which these ideas have been applied within robotics. We close with a brief discussion of some of the remaining open questions in these areas and possibilities for future research.
Forward models of human decision-making
For a theory of human decision-making to be useful to an engineer designing a system that has human behavior as a component, that theory should have two properties: it should be generalizable, meaning that it can be applied in any context in which the engineer needs to be able to make predictions about how people will act, and it should be accurate, producing good predictions about human behavior in that context. The development of theories of human decision-making has historically tended to alternate between these criteria, making progress on one at the cost of the other (Figure 2).
The earliest formal theories of human decision-making made the strong assumption that humans are rational, in the sense of pursuing actions that are in their self-interest and in compliance with axioms that can be widely agreed to characterize rational behavior [14,15].
The impressive result of this investigation is that the preferences of a rational agent can be characterized by a utility function that assigns a numerical value to each possible outcome, and that when faced with decisions that involve uncertainty that agent should pursue the option that has highest expected utility.
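As a minimal illustration of the prescription just described (with made-up options, probabilities, and a placeholder utility function):

```python
# Toy expected-utility choice: pick the option with the highest expected utility.
options = {
    "safe":  [(1.0, 50)],                 # (probability, outcome) pairs
    "risky": [(0.5, 120), (0.5, 0)],
}

def utility(x):
    return x ** 0.8                       # placeholder concave u(.), i.e. risk-averse

def expected_utility(option):
    return sum(p * utility(x) for p, x in option)

best = max(options, key=lambda name: expected_utility(options[name]))
print(best, {name: round(expected_utility(opt), 2) for name, opt in options.items()})
```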
This theory -which we will refer to as expected utility theory -fulfills the goal of generalizability. In order to predict the actions that a rational agent will take in a new environment, it is necessary only to identify the utility assigned to different outcomes -the decisions that agent will take can then be derived directly from these quantities. The tools that are used for deriving this behavior are exactly the tools that are used in optimization and control, as we typically seek to define agents that are rational. As a consequence, human rationality is a common assumption in interactive systems, albeit with some allowance for stochasticity (e.g., [16]). It is also a common assumption in the kind of choice models that are widely used in econometrics (e.g., [17]).
The only problem with assuming that humans are rational is that this assumption turns out to be false. Starting in the 1970s, psychologists (led by Daniel Kahneman and Amos Tversky) began to document the ways in which people's decisions violate the axioms that are assumed in rational models [18]. This led to a swing towards a more qualitative psychology of human decision-making, in which the emphasis was placed on identifying all of the heuristic shortcuts that people seem to use when making decisions, and the behavioral biases that result. The outcome of this process is a long list of the things that people do wrong in specific situations. This is something that might potentially increase the accuracy of our models, but since these behaviors are specific to particular scenarios and it is hard to know which heuristic might dominate in a new setting, this accuracy comes at the cost of generalizability.
This qualitative approach to understanding human decision-making was complemented by efforts to formalize the cognitive processes that people engage in when making decisions.
Kahneman and Tversky developed prospect theory, which extends expected utility theory by allowing different functions characterizing the subjective value of gains and losses and recognizing that the probabilities of events may also be subjectively transformed [23]. Subsequent work has introduced further nuances to this theory (e.g., [24]), together with hypotheses for how to formalize ideas about different heuristics that people might follow (e.g., [25]) as well as other cognitive factors such as the salience of different options (e.g., [26]).
Figure 2: ... (B) [18] demonstrated that people systematically violate basic predictions of expected utility theory. This resulted in a focus on the heuristics that people tend to use in specific situations as opposed to general theories. (C) Decision-making models represented by neural networks, combined with large data sets of human choice behavior and informed by psychological theory (e.g., [19,20]), provide a way to predict human decisions (see section 2.1). (D) The theoretical framework of resource rationality [21,22] aims to provide a general formal theory that accounts for people's heuristics and biases. Specifically, resource rationality proposes that human decision-making reflects expected utility maximization subject to computational costs and cognitive limitations (see section 2.2).

Recent work in psychology and neuroscience has drilled down even further into the cognitive processes that might account for these behaviors. One prominent line of work focuses on the idea that people make decisions by accumulating evidence that one option is better than the alternatives [27]. There is ongoing debate about the precise mechanisms by which such a process could operate (e.g., [28]), but these accumulator models have also received support from results in neuroscience that seemed to show areas in the brain that engage in evidence accumulation (e.g., [29], but see [30]).
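A minimal simulation conveys the accumulator idea; the parameter values below are arbitrary and chosen only for illustration, not taken from any of the cited models.

```python
# Sketch of a two-alternative evidence-accumulation (drift-diffusion-style) model:
# noisy evidence that "A is better than B" accumulates until it hits a bound.
import random

def simulate_trial(drift=0.1, noise=1.0, bound=5.0, dt=0.1):
    evidence, t = 0.0, 0.0
    while abs(evidence) < bound:
        evidence += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return ("A" if evidence > 0 else "B"), t     # choice and response time

trials = [simulate_trial() for _ in range(500)]
p_choose_a = sum(choice == "A" for choice, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(p_choose_a, mean_rt)   # stronger drift -> faster, more consistent choices
```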
For the engineer, the precise cognitive and neural mechanisms underlying people's decisions might matter less than what those decisions actually are and how good predictions about human decision-making can be generated in other contexts -our two criteria of accuracy and generalizability. To this end, we are going to focus on two recent developments that increase the potential for models of human decision-making to be used effectively as forward models in control settings. The first is the potential to use ideas from machine learning, combined with the availability of large data sets on human behavior, to develop more accurate models of human decision-making. The second is recent efforts to revisit the notion of rationality, with the goal of obtaining a theoretical framework that has the same generalizability as expected utility theory while incorporating what we know about human cognitive limitations in a way that supports greater accuracy.
Developing more accurate models of human decisions using machine learning
The recent success of machine learning in many domains raises the possibility that such systems may be able to better predict human decisions than the theories of choice developed by psychologists and economists. For the engineer seeking a forward model of human decisionmaking, it may be tempting to collect a data set of human decisions and train an off-the-shelf machine learning method to predict people's behavior. This possibility has been explored extensively over the last decade, showing that machine learning has a great deal of promise in this area, but also that performance of these systems can be significantly improved by the injection of some psychological insight.
A first extensive comparison of psychological models against machine learning systems for predicting human decisions occurred in the 2015 Choice Prediction Competition [31].
The competition employed a standard risky choice paradigm that has been used extensively to study human decisions, informing the development of many of the models summarized above. In this task, participants make a choice between two gambles. In each gamble different outcomes -here corresponding to actual monetary gains and losses -occur with different probabilities. The pairs of gambles can be described by an 11-dimensional vector that summarizes the payoffs and their probabilities. The task is to map this 11-dimensional vector to a probability of choosing one gamble over the other, with the goal of getting this probability as close as possible to the choice probabilities of a group of human participants.
In the competition, the choice probabilities for 90 such pairs of gambles were provided, and the goal was to predict the corresponding probabilities for a held-out test set.
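Framed as a supervised learning problem, the setup looks roughly like the following sketch; the data here are random placeholders, and the choice of regressor is ours rather than the competition's actual pipeline.

```python
# Sketch of the prediction task framing: map an 11-dimensional description of a
# gamble pair to the human choice probability, evaluated on held-out pairs.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X = np.random.rand(90, 11)   # placeholder gamble-pair features
y = np.random.rand(90)       # placeholder human choice probabilities
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
mse = np.mean((model.predict(X_te) - y_te) ** 2)   # competition-style score
print(mse)
```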
The results of the 2015 Choice Prediction Competition showed that psychological models -that is, those developed by psychologists and economists -tended to outperform off-the-shelf machine learning methods. The best-performing model instantiated a set of heuristics that had been identified in the psychological literature. Subsequent work showed that machine learning methods could improve on this performance, but only when provided with features that were motivated by psychological theory [32,33].
To machine learning practitioners, these results may not come as a big surprise. This prediction problem has a relatively large number of features compared to the amount of available data (90 pairs of gambles). The success of psychological models can be interpreted as an instance of the bias-variance trade-off [34], with the small amounts of data involved meaning that models with carefully crafted inductive biases are most likely be successful.
However, the other side of that trade-off is the expectation that as the amount of data increases we should expect to see improved performance from machine learning models with weaker inductive biases.
Consistent with this hypothesis, applications of machine learning to predicting other kinds of human decisions have shown greater success. With more instances of more constrained problems, machine learning methods can outperform psychological models, and have even been suggested as offering an upper bound on the amount of variance we can expect to account for [35,36]. Indeed, in a subsequent Choice Prediction Competition where models were trained on 210 pairs of gambles, relatively generic machine learning models were able to outperform psychological theories [37].
Based on this insight, recent work has collected and analyzed a risky choice dataset that involves orders of magnitude more problems than the original Choice Prediction Competition [38,19]. In this data set, human participants made decisions for over 10,000 pairs of gambles.
The size of the data set makes it possible to systematically evaluate existing models of choice, and to use machine learning to exhaustively explore the space of possible theories.
Different models of choice can be expressed in terms of constraints on the functional form of a predictive model. For example, under expected utility theory we can take the probability that people choose a gamble to be proportional to $\exp\{\sum_i p_i u(x_i)\}$ where $p_i$ is the probability of the outcome $x_i$ and $u(\cdot)$ is a utility function. By taking an arbitrary differentiable form for this utility function -such as an artificial neural network -we can employ standard tools for automatic differentiation to use gradient descent to optimize the form of this function against human data. This approach generalizes to other psychological theories. For example, prospect theory corresponds to assuming the choice probability is proportional to $\exp\{\sum_i \pi(p_i) u(x_i)\}$ where $\pi(\cdot)$ is a probability weighting function.
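As a sketch of how such a model can be set up in practice (assuming PyTorch; the architecture, loss, and training details here are placeholder choices, not those of [19]):

```python
# Sketch: a differentiable expected-utility choice model fit by gradient descent.
import torch
import torch.nn as nn

class LearnedUtility(nn.Module):
    """u(x): a small MLP standing in for an arbitrary differentiable utility."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)

def choice_prob(u, probs_a, outs_a, probs_b, outs_b):
    """P(choose A), with each gamble scored by sum_i p_i u(x_i)."""
    eu_a = (probs_a * u(outs_a.unsqueeze(-1))).sum(-1)
    eu_b = (probs_b * u(outs_b.unsqueeze(-1))).sum(-1)
    return torch.sigmoid(eu_a - eu_b)   # = exp(eu_a) / (exp(eu_a) + exp(eu_b))

# Toy data: N pairs of two-outcome gambles and human choice rates (placeholders).
N = 100
probs_a = torch.rand(N, 2); probs_a = probs_a / probs_a.sum(-1, keepdim=True)
probs_b = torch.rand(N, 2); probs_b = probs_b / probs_b.sum(-1, keepdim=True)
outs_a, outs_b = torch.randn(N, 2) * 10, torch.randn(N, 2) * 10
y = torch.rand(N)                       # observed human choice probabilities

u = LearnedUtility()
opt = torch.optim.Adam(u.parameters(), lr=1e-2)
for _ in range(500):                    # fit u(.) against human data
    loss = nn.functional.mse_loss(choice_prob(u, probs_a, outs_a, probs_b, outs_b), y)
    opt.zero_grad(); loss.backward(); opt.step()
```

Swapping in a probability weighting network $\pi(\cdot)$ applied to the probabilities would give the prospect theory variant under the same training loop.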
Peterson et al. [19] used this approach to identify the optimal functional form for various classic theories of choice, and also evaluated unconstrained artificial neural networks for predicting people's decisions (see Figure 3). The results showed that when using the entire data set of different pairs of gambles, an unconstrained neural network systematically outperformed all existing psychological theories. However, they also showed that equivalent performance could be obtained by defining a model based on a mixture of these classic theories, and that this model achieved a high level of predictive performance far faster than an unconstrained neural network. The results of this analysis suggest that, given enough data, we can obtain better forward models of human decisions using machine learning, but that these models are likely to be enhanced by drawing on psychological theory when possible.
A further challenge of using off-the-shelf machine learning methods when developing forward models of human decision-making is that these methods often result in models that are uninterpretable. Previous work in this area has relied on post hoc analysis of models to identify features that are psychologically interpretable (e.g., [36]). An alternative approach was recently outlined by Agrawal et al. [20], in which an off-the-shelf machine learning model is used to critique a more interpretable model until that interpretable model yields similar performance. This approach was applied to a large data set of human decisions that is likely to be of interest to researchers working on autonomous systems: the Moral Machine project [39]. This data set consists of more than 10 million human decisions about what an autonomous vehicle should do when faced with an inevitable collision, where the only available choice is about which group of pedestrians the vehicle will collide with (a version of the classic trolley problem [40]). Having an uninterpretable model that predicts these choices is not particularly useful for designing autonomous systems, but Agrawal et al. showed that their approach can be used to identify the features that a predictive model had discovered, making those features explicit in a way that is likely to be useful when deciding how to design and regulate autonomous vehicles.
Developing more generalizable models via resource rationality
Despite the promise of machine learning to improve the accuracy of models of human decisions, generalizability is still likely to be a challenge. Machine learning systems are typically trained in a specific domain, and can face difficulty when applied in another related domain. Making this kind of generalization requires extracting the causal principles that underlie people's decisions. In this section we consider an approach that has the potential to do just that.

Figure 3: Performance of machine learning models predicting people's decisions in a risky choice task (data from [19]). Neural networks constrained to a functional form consistent with classic theories of decision-making such as expected utility (EU), prospect theory (PT), and cumulative prospect theory (CPT) are compared against networks that directly estimate the value of a gamble from its features (Value-Based) or directly predict people's choices based on the features of both gambles (Context-Dependent). The vertical axis shows mean squared error in predicting the probability with which people choose a particular gamble, the horizontal axis shows the percent of the training data (approximately 10,000 pairs of gambles) that was used. The previous largest data set for risky choice [37] is shown. Given enough data, all neural network models outperform the best models in their class proposed by human psychologists and economists, but this requires orders of magnitude more data than have previously been collected. For comparison, the dotted line shows the performance of the Best Estimate And Sampling Tools (BEAST) model [31] that won the 2015 Choice Prediction Competition.
The classical notion of rationality was our prime example of a theory that satisfies the goal of generalizability -for any new situation, it is possible to derive predictions about behavior. However, this classical theory falls short not just because it fails to empirically capture aspects of how people make decisions, but because it represents an unrealistic ideal for any intelligent system with finite computational resources. This is a long-standing idea, going all the way back to the classic work of Herbert Simon on bounded rationality [41].
However, this idea has recently begun to receive a more comprehensive mathematical and empirical treatment.
The classical theory of rational action via maximizing expected utility doesn't take into account the computational cost of selecting that action. As a consequence, it's easy to imagine an agent trying to follow the prescriptions of this theory ending up paralyzed as it tries to compute all of the possible outcomes and their probabilities. To address this, researchers in the artificial intelligence literature have sought a more realistic criterion for rational action for agents with finite computational resources. The outcome of this investigation is the theory of bounded optimality, which focuses not on the optimal action that an agent should take but rather on the optimal algorithm an agent should follow in order to select that action [42,43]. This theory explicitly trades off the expected utility of finally taking an action with the computational cost that's involved in getting to that point.
For cognitive scientists, bounded optimality offers a way to theorize about the optimal cognitive processes that an intelligent agent should engage in when trying to make a decision [44,45]. As it puts an emphasis on rational use of the cognitive resources an agent is able to apply, this approach has been referred to as resource rational analysis [21,22]. Considering how an agent should rationally deploy its cognitive resources provides a way to explain why people may choose to adopt particular heuristics -even if those heuristics result in systematic biases -and to make generalizable predictions about the kinds of cognitive strategies that we expect people to engage in.
Recent research has recast some of the classic heuristics discovered by psychologists from the perspective of the rational use of cognitive resources. For example, focusing on extreme events when considering the outcomes of a decision is a strategy that can minimize the variance of estimates of expected utility based on small samples, even though it introduces a bias to those estimates [46]. Thinking in these terms allows us to potentially begin to reconcile the various heuristics and biases identified by psychologists into a broader mathematical theory.
To the engineer, resource rationality offers the potential to make better predictions about human behavior in a way that incorporates realistic assumptions about the cognitive limitations of human agents. Generalizability results from the fact that deriving an optimal resource-rational strategy can be formulated as a sequential decision problem. An agent trying to make a decision is going to execute the sequence of computations that provide information about the possible outcomes of their actions, at some point selecting an action to perform based on this information. The sequential decision problem here corresponds to the choice of that sequence of computations: we can construe each computation as a kind of mental action, ending with the decision that we have done enough computation and are ready to act as the end of the sequence.
Expressed in these terms, it is possible to see that we can use familiar tools such as Markov decision processes to formalize the internal decision-making we do about how to deploy our cognitive resources. Solving the resulting MDPs (referred to as meta-level MDPs) then yields the optimal sequence of computations for an agent to perform. To provide a concrete example, one recent paper [48] used this approach to examine how we can model attention allocation in a simple decision-making task. In this task people are presented with three objects -in this case snack foods -and asked to decide which they prefer. While they are doing this, their gaze is recorded using an eye tracker. People show a consistent pattern of behavior in this task. For example, they spend more time looking at items that they assign a higher subjective value. These patterns of behavior can be explained by assuming that people are trying to estimate the subjective value of each item, and that each moment they spend looking at an option provides a sample from a Gaussian distribution centered on that value. The problem of deciding whether to sample can then be formulated as an MDP, and the resulting policy generates predictions about which objects people look at and in what sequence. We spend more time looking at items with higher subjective value because those are the items that are most relevant to our ultimate decision.
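The following is a deliberately simplified sketch of this kind of meta-level problem: each "computation" is a noisy sample of one item's value with a small cost, and a myopic stopping rule stands in for a full solution of the meta-level MDP. All parameter values are illustrative.

```python
# Sketch of a meta-level decision problem: mental "actions" are costly noisy
# samples; a myopic rule decides when further computation stops being worthwhile.
import random

true_values = [3.0, 2.5, 1.0]   # hidden subjective values of three items
NOISE, COST = 1.0, 0.02         # sampling noise (std) and per-computation cost

means = [0.0, 0.0, 0.0]         # posterior means, starting from a N(0, 1) prior
precisions = [1.0, 1.0, 1.0]    # posterior precisions
total_cost = 0.0

while True:
    # Myopic rule of thumb: keep sampling the currently-best item while its
    # value estimate is still uncertain relative to the cost of sampling.
    best = max(range(3), key=lambda i: means[i])
    if 1.0 / precisions[best] < 10 * COST:   # "certain enough" -> stop and act
        break
    sample = random.gauss(true_values[best], NOISE)
    obs_prec = 1.0 / NOISE ** 2              # conjugate Gaussian posterior update
    means[best] = (precisions[best] * means[best] + obs_prec * sample) / (precisions[best] + obs_prec)
    precisions[best] += obs_prec
    total_cost += COST

chosen = max(range(3), key=lambda i: means[i])
print(chosen, [round(m, 2) for m in means], round(total_cost, 2))
```

Note how this mirrors the gaze findings: computation is concentrated on the items whose values matter most for the final choice.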
Describing cognitive processes in terms of the solution to Markov decision processes has the virtue of characterizing human cognition using formal tools that are likely to be familiar to those working in control theory. Other work on resource rationality has likewise employed formalisms that will be familiar to engineers. For example, one line of work focuses on the information-theoretic costs of maintaining mental representations at a given degree of precision [49,50]. This approach also has connections with work in economics that explains apparently irrational aspects of human choice in terms of rational inattention, where information-theoretic costs are assumed to apply to the precision of the signal an agent uses to inform a decision [51,52].
Resource rationality offers a new set of tools for capturing human behavior with greater precision in a way that is compatible with standard modeling techniques used in robotics and control. Being able to make generalizable predictions about how long it will take people to make a decision, what information they are going to seek when making that decision, and what kinds of information they are likely not to include when making a decision are all things that can facilitate the design of human-machine interfaces. Furthermore, it is possible to use this kind of approach to engage with the question of how to improve human decision-making: if we assume that people are rational but resource-limited, we can think about how assistive robots might modify the environments in which humans are making decisions to allow them to make better use of those resources.
Inverse models of decision-making
While forward models can help us to make predictions about what people will do given their preferences, this is typically not all we need to know in order to design systems that are able to interact effectively with humans. In an ideal world, autonomous systems would effectively help fulfill people's needs and desires, which can change from person to person and situation to situation. We need a way to infer those needs and desires from people's behavior -inverse models. Furthermore, people are adaptive; they often change their behavior in response to a system based on their best guess as to how it functions and may even expect the system to do the same. So it is not enough to be able to make inferences about people's needs and desires, we also need to anticipate the inferences that people will be making in turn.
Developing systems that can solve these problems is a daunting challenge, but fortunately, we can take inspiration from existing systems that must regularly interact with humans: other humans. And while nearly everyone has had to deal with other humans, cognitive science offers an extensive, systematic understanding of how we effectively solve the problem of understanding and interacting with others in our everyday lives.
In this section, we turn our focus towards what cognitive science has to say about people's inverse models of cognition and action. One of the most remarkable capacities that humans have is the ability to understand the hidden mental states that give rise to other people's observable behavior [53,3]. This ability is often referred to as theory of mind, and cognitive scientists have studied it in adults [3], children [54], infants [55], and even other species [1].
Theory of mind has played a key role in our evolution as a social species capable of large-scale culture, coordination, and cooperation, and its development within the first year of life is a major milestone that enables us to comprehend and participate in the social world [2].
One exciting development in the recent study of theory of mind has been the application of ideas from economics, artificial intelligence, and control theory to characterizing mental state inference in computational terms. These approaches are reminiscent of methods familiar to engineers such as imitation learning, inverse reinforcement learning, and apprenticeship learning [56,57,58]. However, in modeling the varieties of human social inference and interaction, they depart from and extend these ideas in numerous ways. This presents an exciting opportunity for collaboration between the cognitive sciences and engineering by providing new perspectives on inverse decision-making but in a shared conceptual and technical framework.
Here, we will focus on two lines of research on inverse models at the intersection of artificial intelligence and cognitive science. The first is work on cataloguing and systematizing the conceptual primitives involved in mental state inference-e.g., how people reason about mental entities like beliefs, desires, intentions, emotions, etc. The second is on inverse models in the context of teaching and communication, which are among the most basic types of social interactions that also expose the complexity of how humans use theory of mind productively.
Along the way, we will discuss cases in which ideas from cognitive science have already been applied to the design of automated systems, limitations of existing approaches, and the possibilities for future research and applications.
Identifying the building blocks of theory of mind
Put in simple computational terms, theory of mind is an inference problem. That is, given limited observations of a process (e.g., a person's behavior), the task is to identify the hidden variables that produced those observations (e.g., the person's thoughts, desires, or feelings).
Of course, this requires not only having concepts like thoughts and desires but an understanding of how these elements combine to produce behavior. This can be understood in rough analogy to another machine learning problem: parsing natural language. For example, inferring the parse tree of a particular sentence is jointly constrained by knowledge of primitive types of words (e.g., nouns, verbs, prepositions) and a grammar of how words tend to be combined. Recent models of theory of mind can be understood in terms of this linguistic metaphor: To explain how humans parse the behavior of other agents, we must understand the mental state primitives and mental state grammars that dictate how they are combined.
Incidentally, we have already covered one possible theory of how people parse behavior: expected utility theory [14,15]. Taken as a generative model of people's intentional action, expected utility theory posits that others have beliefs about the state of the world (e.g., the belief that there is a burger joint down the street) and desires that certain states of the world are realized (e.g., the desire to eat a burger for lunch) and that people act rationally to realize their desires given their beliefs (e.g., the act of walking down the street to the burger joint). As an account of theory of mind, inverse expected utility theory makes several generalizable predictions that have been confirmed with human experiments. For example, adults, children, and infants can reason about how others integrate information about goals and action costs [4,59,60], features of different choices [61], the statistics of the environment [5], and limited perception of the environment [62]. Findings such as these have led to the proposal that human common sense psychology consists of a naïve utility calculus where we abstractly reason about other decision-makers as utility-maximizing agents [6].
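A tiny sketch conveys the basic logic of inverting such a forward model of rational action; the corridor environment, Boltzmann noise model, and all parameter values here are our own illustrative choices.

```python
# Sketch of Bayesian goal inference ("inverse planning") on a 1-D corridor:
# an agent at position s moves left/right; we infer its goal from its actions.
import math

goals = [0, 9]                      # candidate goal locations on positions 0..9
prior = {g: 0.5 for g in goals}
BETA = 2.0                          # rationality: higher = closer to optimal

def action_prob(a, s, g):
    """Boltzmann-rational choice between moving left (-1) and right (+1)."""
    def q(act):                     # negative distance-to-goal after acting
        return -abs(min(max(s + act, 0), 9) - g)
    z = sum(math.exp(BETA * q(act)) for act in (-1, +1))
    return math.exp(BETA * q(a)) / z

def posterior(trajectory):
    """P(goal | actions) via Bayes' rule with independent action likelihoods."""
    scores = {}
    for g in goals:
        like = 1.0
        for s, a in trajectory:
            like *= action_prob(a, s, g)
        scores[g] = prior[g] * like
    norm = sum(scores.values())
    return {g: v / norm for g, v in scores.items()}

# Observing two rightward steps makes goal 9 much more probable than goal 0.
print(posterior([(4, +1), (5, +1)]))
```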
At a broad level, the formal tools used to characterize inverse utility theory will be familiar to those from a control background as they build on standard formalisms like MDPs, POMDPs, and inverse reinforcement learning [63]. However, there are interesting differences between how they are applied as models of human inference versus their typical engineering applications. For example, cognitive models tend to assume that people have highly structured representations of others' mental states (e.g., discrete objects), whereas in engineering applications the state representations are less structured (e.g., weights on a vector of continuous features [58]). This reflects the fact that cognitive scientists aim to explain how people can rapidly draw inferences based on only a few observations and a large base of background knowledge, while engineers are often trying to analyze large data sets of expert trajectories with minimal fine-tuning. As a result, cognitive scientists tend to use Bayesian methods while engineers are likely to be more familiar with methods designed to scale to large data sets. An important direction for future work is developing methods that cut across these different research agendas and can replicate the sophistication of human theory of mind with tractable implementations.
Expected utility theory captures an important dimension of how humans parse others' behavior in terms of beliefs, desires, and intentions. But psychologists have also long studied other types of mental states that people reason about, such as emotions, habits, norms, rules, values, and social affinities, to name only a few [3]. While the traditional frameworks of expected utility and inverse reinforcement learning have not typically focused on these kinds of mental states, cognitive scientists have made great strides in extending the formalism to study these types of representations. For example, inverse planning models have been combined with models of habits [64], emotion and appraisal [65,66,67], responsibility judgments [68], values and norms in moral dilemmas [69], and social groups [70,71]. Although we do not typically think of robots as having these types of internal states, systems that are expected to interact with humans invariably need a basic understanding of the complete repertoire of psychological states that affect our individual and collective behavior.
The models described so far generally assume the standard formulation of rational action as optimizing a utility function defined over states of an MDP. For instance, they can express the idea of reaching a goal state while attempting to minimize costs along the way, but they cannot generally express history-dependent or temporally specified constraints, such as going to a goal state only after accomplishing a subgoal. Recent work in both cognitive science and computer science has attempted to remedy this by introducing more flexible "logics" to express an agent's utilities, including linear temporal logic [72,73,74], finite state machines [75], and simple programs composed of sub-processes [76]. These methods are powerful because they can express rational cognition and action in a rich, compositional manner that may reflect human intuitions about agency. At the same time, this expressiveness comes at the cost of more complex and costly inference, and identifying appropriate constraints and settings in which tractable inference methods can be applied is an active area of research.
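To illustrate why such specifications exceed ordinary Markov reward functions, the following sketch (hypothetical event labels; a hand-rolled two-state machine rather than any particular library) encodes "reach the goal, but only after visiting the subgoal" as a simple reward machine whose internal state carries the needed history.

```python
# A two-state reward machine for "reach the goal, but only after visiting
# the subgoal". The machine state u carries the history that a Markov
# reward over environment states alone cannot represent.
class SubgoalThenGoal:
    def __init__(self):
        self.u = "need_subgoal"

    def step(self, event):
        # event is a label emitted by the environment, e.g. "at_goal".
        if self.u == "need_subgoal" and event == "at_subgoal":
            self.u = "need_goal"
        elif self.u == "need_goal" and event == "at_goal":
            self.u = "done"
            return 1.0   # reward only for goal-after-subgoal
        return 0.0       # reaching the goal early earns nothing

rm = SubgoalThenGoal()
print([rm.step(e) for e in ["at_goal", "at_subgoal", "at_goal"]])
# -> [0.0, 0.0, 1.0]: the same environment event is rewarded only in context
```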
A complementary method for sidestepping strong assumptions about the structure of rational action is to try and learn a theory of mind directly from data without recourse to pre-defined structural priors (e.g., using a neural network). Rabinowitz et al. [77] take this approach by generating a large data set of behaviors from synthetic agents with different goals, utilities, and perceptual abilities, and then using meta-learning to train networks with no prior conception of theory of mind or rationality to predict features of future behavior.
Their networks were able to acquire a low dimensional embedding of behaviors and agent types capable of recreating several qualitative findings associated with theory of mind, such as inference about goals, costs, perceptions, and, to a certain extent, false beliefs. This work is an important demonstration of how relatively standard machine-learning methods can learn theory of mind-like representations given enough data and computation. However, studies by Nematzadeh et al. [78] have indicated that standard neural networks are limited in their capacity to explicitly represent false beliefs as they do not distinguish between appearance and reality, which is considered by psychologists to be a defining feature of theory of mind [54]. This suggests that some kind of structural priors about the nature of rational action are likely needed to capture the conceptual primitives and grammar comprising human theory of mind.
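For intuition, the sketch below outlines an observer network in the spirit of this meta-learning approach: it pools an agent's past (state, action) pairs into a character embedding and conditions an action predictor on it. Layer sizes, one-hot action encoding, and mean pooling are illustrative simplifications; the original architecture uses richer encoders and additional prediction heads.

```python
import torch
import torch.nn as nn

# Minimal observer in the spirit of a meta-learned "machine theory of mind":
# pool an agent's past (state, action) pairs into a character embedding,
# then predict its next action in a new situation.
class Observer(nn.Module):
    def __init__(self, obs_dim=8, n_actions=4, char_dim=16):
        super().__init__()
        self.char_net = nn.Sequential(
            nn.Linear(obs_dim + n_actions, 32), nn.ReLU(),
            nn.Linear(32, char_dim))
        self.policy_net = nn.Sequential(
            nn.Linear(obs_dim + char_dim, 32), nn.ReLU(),
            nn.Linear(32, n_actions))

    def forward(self, past_steps, current_obs):
        # past_steps: (T, obs_dim + n_actions) observed behavior of one agent
        character = self.char_net(past_steps).mean(dim=0)  # pool over time
        return self.policy_net(torch.cat([current_obs, character]))

obs = Observer()
logits = obs(torch.randn(20, 12), torch.randn(8))
# Trained end-to-end with cross-entropy against the agent's true next action,
# across many synthetic agents, so the character embedding is meta-learned.
```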
Identifying generalizable principles of communication and teaching
The previous section focused on theory of mind as a pure inference problem, but theory of mind also influences how people act and interact. Building on ideas originally explored by philosophers [79,80,81], developmental psychologists and linguists have extensively studied the principles underlying how people use theory of mind to learn from others and communicate [82,2,83]. A key idea is that when people communicate (e.g., by saying words, making gestures, providing examples, etc.), they do so with communicative goals like modifying the receiver's mental state or future actions. The person receiving these communicative signals can then reason about these goals to flexibly and efficiently draw inferences about what the sender meant to convey. Note that this process requires both the receiver and sender to have a capacity for theory of mind and, more specifically, a capacity for recursive theory of mind, in which one agent reasons about another agent reasoning about the original agent (and potentially up to higher levels of recursion). Computational research over the past few years has formalized and extended many of these findings within the framework of probabilistic inference and decision-making [84,85]. Here, we provide a broad overview of these developments.
One approach to studying communication in computational terms is the Bayesian pedagogy and cooperative communication framework [86,84,87], which characterizes how a teacher who presents data and a learner who interprets data should coordinate to efficiently and successfully communicate. To illustrate this idea, suppose you wanted to teach someone the concept of the even numbers by giving them a series of examples. Some examples will be better than others, for instance, {2, 2, 2} is technically a sequence of even numbers, but is not very informative, whereas {2, 4, 6} is clearly more helpful. Additionally, if the person receiving these examples knows you are being helpful, they can draw even stronger inferences based on what they are shown. Bayesian pedagogy models formalize this intuition about helpful, informative examples in terms of recursively defined teacher-learner equations, whose fixed points are optimal teacher-learner communication protocols. This approach has been successful in characterizing how both adults and children teach and learn concepts during pedagogical interactions [88,89]. Additionally, recent theoretical work has established a direct correspondence between optimal transport problems and the equations that characterize teacher-learner fixed points [90,91]. This opens the door for algorithmic insights to be shared between engineering disciplines such as operations research and the study of optimal cooperative communication in humans.
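The recursive structure of these models can be made concrete in a few lines of code. The sketch below runs alternating teacher/learner normalizations to a fixed point on a toy version of the even-numbers example; the two hypotheses and three examples are illustrative, and the alternating-normalization scheme is the same Sinkhorn-style iteration that underlies the optimal-transport correspondence mentioned above.

```python
import numpy as np

# Rows: hypotheses; columns: examples a teacher can show. M[h, d] = 1 if
# example d is consistent with hypothesis h. A toy version of the text's
# numbers game: hypotheses {even numbers, powers of two}, examples {2, 4, 6}.
M = np.array([[1.0, 1.0, 1.0],   # "even": 2, 4, 6 all consistent
              [1.0, 1.0, 0.0]])  # "powers of two": 6 is inconsistent

def cooperative_teaching(M, n_iters=100):
    T = M.copy()
    for _ in range(n_iters):
        T = T / T.sum(axis=1, keepdims=True)  # teacher step: P(d | h)
        T = T / T.sum(axis=0, keepdims=True)  # learner step: P(h | d)
    return T / T.sum(axis=1, keepdims=True)   # final teacher distribution

print(cooperative_teaching(M).round(2))
# -> [[0.17, 0.17, 0.67], [0.5, 0.5, 0.]]: the teacher of "even" shifts mass
# onto 6, the one example that "powers of two" cannot explain, which is
# exactly what a helpful teacher should do.
```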
Cooperative communication models capture settings in which a teacher has a sole, explicit goal of being helpful and informative. However, communication is not always so clear-cut, and cognitive scientists are modeling more complex communicative situations within the Rational Speech Act (RSA) framework [85]. For example, people may have goals other than being informative, like being succinct or inoffensive, and they may expect others to perform joint inference over these non-communicative goals. This process can give rise to linguistic phenomena as varied as hyperbole ("This kettle cost $1,000!" [92]) and politeness ("Your poem wasn't bad!" [93]). And while it may be extreme to expect automated systems to understand ironic humor, they will likely need to recognize more mundane forms of indirect speech.
More broadly, RSA can characterize how the context surrounding communicative acts shapes their meaning. For example, the statement "It's warm today" has different consequences if said during the spring versus the winter (e.g., you would likely wear shorts in the first but not the second case) [94]. Computational cognitive models of how humans flexibly reason about the shared context have been shown to be a key part of understanding general statements about the world [95]. Similarly, the ability to establish contexts as communicative (e.g., intentionally getting someone's attention in order to convey some information) has been shown to be an essential precursor to the types of cooperative communication interactions discussed above [96,97]. Formal models that combine RSA with sequential decision-making and inference about whether partners have informative goals can provide a starting point for implementing such flexible reasoning about communicative context in autonomous systems [98,99].
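A minimal RSA implementation shows how pragmatic strength emerges from recursion. In the toy reference game below (two referents and a two-word lexicon, both invented for illustration; alpha is the speaker's soft-max rationality), the literal listener, pragmatic speaker, and pragmatic listener are each a single normalization.

```python
import numpy as np

# Toy reference game. Rows: utterances; columns: referents.
# L[u, r] = 1 if utterance u is literally true of referent r.
utterances = ["hat", "glasses"]
referents = ["hat_only", "hat_and_glasses"]
L = np.array([[1.0, 1.0],    # "hat" is true of both referents
              [0.0, 1.0]])   # "glasses" is true only of the second

def literal_listener(L):
    return L / L.sum(axis=1, keepdims=True)        # P_L0(r | u)

def pragmatic_speaker(L, alpha=4.0):
    # Speaker utility = informativity = log P_L0(r | u); soft-max over u.
    S = np.exp(alpha * np.log(literal_listener(L) + 1e-9))
    return S / S.sum(axis=0, keepdims=True)        # P_S1(u | r)

def pragmatic_listener(L, alpha=4.0):
    S = pragmatic_speaker(L, alpha)                # uniform prior over r
    return S / S.sum(axis=1, keepdims=True)        # P_L1(r | u)

print(pragmatic_listener(L).round(2))
# Hearing "hat", the pragmatic listener favors "hat_only": if the speaker
# had meant the other referent, the more informative "glasses" was available.
```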
Applying human inverse models to the design of autonomous systems
Ideas from the computational cognitive science of mental state inference and communication have already begun to inform research in human-robot interaction and reinforcement learning. This showcases the potential for scaling up probabilistic models of cognition to complex, real-world domains, while also revealing novel insights about communication and teaching. For example, work in motion planning has led to robots capable of legible motion [9,11], in which action sequences are modified to allow humans to more quickly and successfully understand a robot's goals. Related work by Ho et al. [10] showed that when learning from human demonstrations, inverse reinforcement learning algorithms can benefit from being shown intentionally communicative expert behaviors (Figure 4). These modifications by both robots and humans are directly analogous to the approach taken in the Bayesian pedagogy framework, and demonstrate how consideration of communicative goals can facilitate human-machine teaching, learning, and cooperation.

Figure 4: Learning from humans with communicative intent (data and figure from [10]). Ho et al. recruited human participants to perform grid navigation tasks that required reaching a goal state while not losing points. Each column represents one trial. Row 1 represents the true underlying reward function, where white tiles are 0 points, red tiles are -2 points, and the yellow goal tile is 10 points. Participants could not directly view the reward values, but were shown colors on the grid (orange, purple, blue) and told the value of each color (e.g., "orange and purple are safe"). Row 2 shows the visible layout of each grid; each black line represents one participant's trajectory on the task when they were only told to do the task. Row 3 shows the same grids and trajectories for participants told to do the task as well as show the reward function to an anonymous observer. Rows 4 and 5 show the reward weights estimated by Maximum-Likelihood Inverse Reinforcement Learning (MLIRL) [100] when given the Doing versus Showing demonstrations. Agents trained with Showing demonstrations obtain better estimates of the underlying reward function than those trained with Doing demonstrations.
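One way to see how legibility can be operationalized is to score a trajectory by how confidently an inverse-planning observer would infer the true goal from each prefix. The sketch below does this in the same toy one-dimensional world as earlier; the scoring rule (mean posterior over prefixes) is one simple choice among the formulations in the literature.

```python
import numpy as np

ACTIONS = [-1, +1]

def action_likelihood(s, a, goal, beta=2.0):
    q = np.array([-abs((s + b) - goal) for b in ACTIONS])
    p = np.exp(beta * (q - q.max()))
    p /= p.sum()
    return p[ACTIONS.index(a)]

def goal_posterior(prefix, goals, beta=2.0):
    logp = np.array([sum(np.log(action_likelihood(s, a, g, beta))
                         for s, a in prefix) for g in goals])
    p = np.exp(logp - logp.max())
    return p / p.sum()

def legibility(traj, true_goal, goals):
    # Mean observer confidence in the true goal across trajectory prefixes:
    # legible motion reveals intent early, not just at the end.
    i = goals.index(true_goal)
    return float(np.mean([goal_posterior(traj[:t], goals)[i]
                          for t in range(1, len(traj) + 1)]))

# Heading straight toward +3 is highly legible against the distractor -3;
# a planner can optimize a score like this to produce intent-revealing motion.
print(legibility([(0, +1), (1, +1), (2, +1)], true_goal=3, goals=[-3, 3]))
```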
Insights from cognitive science can also inform the design of learning algorithms themselves. For example, humans readily use rewards and punishments to modify the behavior of other animals and even other humans, and there is good reason to believe that similar principles of mental state inference and communication apply to these interactions [101]. In a series of experiments with humans interacting with different learning algorithms, Ho et al. [102,103] demonstrated precisely this. Specifically, they found that people do not use rewards in a manner consistent with the standard interpretation as a quantity to directly maximize, as is typically done in reinforcement learning. Rather, they expect learners to reason about a teacher's pedagogical goals and interpret rewards as signaling information about whether an agent is "headed in the right direction" during the learning process. Such findings help motivate the development of learning algorithms that interpret reward in more sophisticated ways and attempt to infer people's teaching strategies and goals [104,105].
Along similar lines, MacGlashan et al. [106] found that the structure of a human teacher's feedback depends on an agent's current stage of learning, a simple form of context. In a behavioral study, the authors had human participants interact with either a completely naïve agent or an expert agent that knew the optimal path to a goal. When the naïve agent took a sub-optimal but moderately good action, participants provided a high reward, while they gave the expert agent, which should have known better, a low or even negative reward for taking the same action. The logic of this strategy closely resembles that of the advantage function, which reflects the relative value of each action in a state under the agent's current policy [107]. This insight motivated the design of the Convergent Actor-Critic by Humans (COACH) algorithm, which treats human feedback as an advantage signal. Subsequent work has also successfully applied COACH to training deep learning agents [108].
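A compressed sketch of the resulting update rule: the human's feedback stands in for the advantage of the chosen action and multiplies the policy-gradient term of a linear softmax policy. Feature dimensions, the learning rate, and the linear policy class are illustrative choices, not the published implementation, which also includes machinery such as eligibility traces.

```python
import numpy as np

# COACH-style update sketch: human feedback is treated as an estimate of
# the advantage A(s, a) of the chosen action under the agent's current policy.
def coach_update(theta, phi, action, feedback, lr=0.1):
    # theta: (n_features, n_actions) weights; phi: (n_features,) state features.
    logits = phi @ theta
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Gradient of log pi(action | s) for a linear softmax policy:
    grad = -np.outer(phi, probs)
    grad[:, action] += phi
    # Feedback plays the role of the advantage in an actor-critic update.
    return theta + lr * feedback * grad

theta = np.zeros((3, 2))
phi = np.array([1.0, 0.5, 0.0])
theta = coach_update(theta, phi, action=0, feedback=+1.0)  # "good step"
theta = coach_update(theta, phi, action=1, feedback=-1.0)  # "bad step"
```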
Additionally, researchers in control and robotics have developed unifying frameworks for modeling cooperative interactions likely to be faced in human-robot applications. For instance, cooperative inverse reinforcement learning [109] defines a general class of games from which specific cooperative strategies can be derived (e.g., legible motion, requests for information, etc.) based on a particular setup. Similarly, reward-rational learning [110] has been proposed as a framework for characterizing different types of human-robot interaction problems (e.g., learning from demonstrations, feedback, or examples) in terms of inference about human preferences that may be implicit. Combining these general computational frameworks with insights from the cognitive science of human decision-making will be an important direction for future research.
Opportunities for further research
The research we have reviewed is only a starting point for exploring potential applications of cognitive science to engineering, robotics, and control. There are a number of exciting directions based on these ideas we have covered. Here, we focus on three broad future directions.
Inverse resource rationality As we have noted, cognitive scientists have proposed that people reason about others as expected utility maximizers. Additionally, we discussed how expected utility theory is inaccurate and how resource rationality is a promising alternative framework. This raises the obvious possibility that people understand that others have limited cognitive resources and therefore reason about them not as pure utility maximizers, but as resource-rational utility maximizers. Importantly, progress here will depend on the continued development of plausible forward resource-rational models, especially models of planning (e.g., [111,112]). Nonetheless, several lines of research have already begun to explore inverse resource rationality. For example, research on perspective-taking and communication has shown that people can flexibly reason about the division of cognitive labor required for efficient communication [113]. Additionally, recent work has demonstrated how humans can infer preferences by jointly reasoning about the time it takes to make a decision and the decision itself [114], while other studies have shown how people reason about sub-optimal or inconsistent planning [115,116,117]. These ideas have begun to be incorporated into inverse reinforcement learning settings [118]. As with models based on inverse expected utility theory, understanding people's inverse models of resource-rational processes will be essential for how they interpret how machines make decisions and can also serve as inspiration for how machines interpret and interact with humans.
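A simple entry point to inverse resource rationality is to treat the degree of (sub-)optimality itself as a latent variable. The sketch below, reusing the toy world from earlier, jointly infers a goal and a Boltzmann inverse temperature beta, where low beta is a crude stand-in for noisy or resource-limited planning rather than a full process model.

```python
import numpy as np

ACTIONS = [-1, +1]

def action_likelihood(s, a, goal, beta):
    q = np.array([-abs((s + b) - goal) for b in ACTIONS])
    p = np.exp(beta * (q - q.max()))
    p /= p.sum()
    return p[ACTIONS.index(a)]

def joint_posterior(traj, goals, betas):
    # P(goal, beta | traj) over a grid, with a uniform prior on both.
    post = np.array([[np.prod([action_likelihood(s, a, g, b)
                               for s, a in traj]) for b in betas]
                     for g in goals])
    return post / post.sum()

# An erratic trajectory toward +3 is better explained by keeping the goal
# and lowering beta (noisier planning) than by switching to the other goal.
traj = [(0, +1), (1, -1), (0, +1), (1, +1)]
print(joint_posterior(traj, goals=[-3, +3], betas=[0.5, 4.0]).round(3))
```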
Blackbox vs. Theory-driven models for inferring intent In our discussion of forward and inverse models, we encountered three different examples of how machine learning tools can complement traditional psychological theory building. This included work on how humans make risky choices, work on how humans make moral decisions, and work on learning theory of mind concepts without hard-coding conceptual primitives or a principle of rationality. These approaches have illustrated how large data sets can be combined with machine learning tools to search the vast space of cognitive models in an efficient manner. However, they also illustrated some of the limitations of a purely blackbox approach and the benefits of also incorporating explicit, interpretable theories and structured prior knowledge.
Thus, an important direction for future work will be developing methods that seamlessly integrate the benefits of each approach: on the one hand, the scalability of blackbox methods, and on the other, the efficiency and interpretability of explicit psychological theories.
Schemas and mechanisms for human-robot interaction Unlike settings involving a single agent, interactions between two or more agents are challenging to design because they are fundamentally ill-defined: Each agent might have their own goals and beliefs, which means there may not exist a single "yardstick" by which to measure whether the engineering problem has been solved. This has occasionally prompted researchers to ask themselves "If multi-agent learning is the answer, what is the question?" [119].
We propose that cognitive science is uniquely positioned to provide guidance on the question that multi-agent learning answers, in the form of schemas and mechanisms grounded in the types of interactions that humans are adapted for. We have already encountered one example of this: The interaction of a teacher and learner engaged in cooperative communication can serve as a template for developing robots capable of legible action and value alignment [9,109]. Beyond this, there are many other types of human interactions and socio-cognitive mechanisms that we have not discussed that could inspire future research.
For example, humans form joint intentions to achieve shared goals, and this underlies our ability to cooperatively solve novel problems [120]. At a broader scale, human interactions are often shaped by norms, which can be understood as shared behavioral tendencies that generalize across interactions with different agents in a population. Norms and normative cognition have been extensively studied in cognitive science and psychology, and researchers have begun to explore these processes computationally [121]. Finally, an additional benefit of designing autonomous systems around how humans actually interact is the potential for new insights into those very interactions, leading to further collaboration between the cognitive sciences and engineering disciplines.
Conclusion
At the moment, there is no unified model of human cognition and decision-making that engineers can draw on when designing their systems. Nonetheless, cognitive science has much to offer those designing autonomous systems that interact with humans. In particular, cognitive science has a rich trove of theories and methods for systematically studying how humans think, decide, and interact with one another, and these discoveries are increasingly being couched in formal terms that are familiar to researchers in robotics and control. Here, we have discussed several recent frameworks and methodologies, such as the synthesis of blackbox and theory-driven methods, resource-rational decision making, cooperative cognition, and rational speech act theory. We then surveyed how they have been applied to derive insights into the mechanisms and principles underlying people's forward and inverse models of decision making. As autonomous systems continue to become more commonplace in people's everyday lives, we expect engineers will also need to think systematically about how the humans their systems encounter will make decisions. We hope this review can provide clarity into the types of actionable insights and open questions that sit at the intersection of cognitive science and control research.
Figure 2: Four approaches to studying human decision-making. (A) Early formal theories of decision-making assumed humans were expected utility maximizers. (B) The heuristics and biases research program initiated in the 1970s by Tversky and Kahneman [...]. The resource-rational approach [47] provides a way to derive predictions about behavior. Crucially, this approach carries with it the same generality as the classic theory of rational action. It simply moves rationality up to the level of the choice of how to deploy cognitive resources, and requires us to be explicit about what those resources might be.
References

1. Premack D, Woodruff G. 1978. Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 1(4):515-526
2. Tomasello M, Carpenter M, Call J, Behne T, Moll H. 2005. Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences 28(5):675-690
3. Malle BF. 2008. The fundamental tools, and possibly universals, of human social cognition. Handbook of Motivation and Cognition Across Cultures:267-296
4. Baker CL, Saxe R, Tenenbaum JB. 2009. Action understanding as inverse planning. Cognition 113(3):329-349
5. Lucas CG, Griffiths TL, Xu F, Fawcett C, Gopnik A, et al. 2014. The child as econometrician: A rational model of preference understanding in children. PLOS One 9(3):e92160
6. Jara-Ettinger J, Gweon H, Schulz LE, Tenenbaum JB. 2016. The naïve utility calculus: Computational principles underlying commonsense psychology. Trends in Cognitive Sciences 20(8):589-604
7. Scassellati B. 2002. Theory of mind for a humanoid robot. Autonomous Robots 12(1):13-24
8. Breazeal C. 2003. Toward sociable robots. Robotics and Autonomous Systems 42(3-4):167-175
9. Dragan AD, Lee KC, Srinivasa SS. 2013. Legibility and predictability of robot motion. In 2013 8th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp. 301-308. IEEE
10. Ho MK, Littman M, MacGlashan J, Cushman F, Austerweil JL. 2016. Showing versus doing: Teaching by demonstration. In Advances in Neural Information Processing Systems 29, ed. DD Lee, M Sugiyama, UV Luxburg, I Guyon, R Garnett, pp. 3027-3035. Curran Associates, Inc.
11. Fisac JF, Gates MA, Hamrick JB, Liu C, Hadfield-Menell D, et al. 2020. Pragmatic-pedagogic value alignment. In Robotics Research. Springer
12. Sun R. 2008. The Cambridge Handbook of Computational Psychology. Cambridge University Press
13. Bernoulli D. 1738. Exposition of a new theory on the measurement of risk. Econometrica 22(1):22-36
14. Von Neumann J, Morgenstern O. 1944. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press
15. Savage LJ. 1972. The Foundations of Statistics. Courier Corporation
16. Ziebart BD, Maas A, Bagnell JA, Dey AK. 2008. Maximum entropy inverse reinforcement learning. In Proceedings of the 23rd National Conference on Artificial Intelligence, Volume 3, pp. 1433-1438
17. McFadden D. 1973. Conditional logit analysis of qualitative choice behavior. Frontiers in Econometrics:105-135
18. Tversky A, Kahneman D. 1974. Judgment under uncertainty: Heuristics and biases. Science 185(4157):1124-1131
19. Peterson JC, Bourgin D, Agrawal M, Reichman D, Griffiths TL. 2021. Using large-scale experiments and machine learning to discover theories of human decision-making. Science
20. Agrawal M, Peterson JC, Griffiths TL. 2020. Scaling up psychology via scientific regret minimization. Proceedings of the National Academy of Sciences 117(16):8825-8835
21. Griffiths TL, Lieder F, Goodman ND. 2015. Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in Cognitive Science 7(2):217-229
22. Lieder F, Griffiths TL. 2020. Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences 43
23. Kahneman D. 1979. Prospect theory: An analysis of decisions under risk. Econometrica 47:278
24. Tversky A, Kahneman D. 1992. Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5(4):297-323
25. Gigerenzer G, Todd PM. 1999. Simple Heuristics That Make Us Smart. Oxford University Press, USA
26. Bordalo P, Gennaioli N, Shleifer A. 2012. Salience theory of choice under risk. The Quarterly Journal of Economics 127(3):1243-1285
27. Ratcliff R, Smith PL, Brown SD, McKoon G. 2016. Diffusion decision model: Current issues and history. Trends in Cognitive Sciences 20(4):260-281
28. Tsetsos K, Gao J, McClelland JL, Usher M. 2012. Using time-varying evidence to test models of decision dynamics: Bounded diffusion vs. the leaky competing accumulator model. Frontiers in Neuroscience 6:79
29. Kiani R, Shadlen MN. 2009. Representation of confidence associated with a decision by neurons in the parietal cortex. Science 324(5928):759-764
30. Latimer KW, Yates JL, Meister ML, Huk AC, Pillow JW. 2015. Single-trial spike trains in parietal cortex reveal discrete steps during decision-making. Science 349(6244):184-187
31. Erev I, Ert E, Plonsky O, Cohen D, Cohen O. 2017. From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience. Psychological Review 124(4):369-409
32. Noti G, Levi E, Kolumbus Y, Daniely A. 2016. Behavior-based machine-learning: A hybrid approach for predicting human decision making. arXiv preprint arXiv:1611.10228
33. Plonsky O, Erev I, Hazan T, Tennenholtz M. 2017. Psychological forest: Predicting human behavior. In Thirty-First AAAI Conference on Artificial Intelligence
34. Geman S, Bienenstock E, Doursat R. 1992. Neural networks and the bias/variance dilemma. Neural Computation 4(1):1-58
35. Fudenberg D, Kleinberg J, Liang A, Mullainathan S. 2019. Measuring the completeness of theories. arXiv preprint arXiv:1910.07022
36. Peysakhovich A, Naecker J. 2017. Using methods from machine learning to evaluate behavioral models of choice under risk and ambiguity. Journal of Economic Behavior & Organization 133:373-384
37. Plonsky O, Apel R, Ert E, Tennenholtz M, Bourgin D, et al. 2019. Predicting human decisions with behavioral theories and machine learning. arXiv preprint arXiv:1904.06866
38. Bourgin DD, Peterson JC, Reichman D, Russell SJ, Griffiths TL. 2019. Cognitive model priors for predicting human decisions. In International Conference on Machine Learning, pp. 5133-5141
39. Awad E, Dsouza S, Kim R, Schulz J, Henrich J, et al. 2018. The moral machine experiment. Nature 563(7729):59-64
40. Thomson JJ. 1976. Killing, letting die, and the trolley problem. The Monist 59(2):204-217
41. Simon HA. 1955. A behavioral model of rational choice. The Quarterly Journal of Economics 69(1):99-118
42. Horvitz EJ. 1987. Reasoning about beliefs and actions under computational resource constraints. In Proceedings of the Third Conference on Uncertainty in Artificial Intelligence, pp. 429-447
43. Russell S, Wefald E. 1991. Principles of metareasoning. Artificial Intelligence 49(1-3):361-395
44. Gershman SJ, Horvitz EJ, Tenenbaum JB. 2015. Computational rationality: A converging paradigm for intelligence in brains, minds, and machines. Science 349(6245)
45. Lewis RL, Howes A, Singh S. 2014. Computational rationality: Linking mechanism and behavior through bounded utility maximization. Topics in Cognitive Science 6(2):279-311
46. Lieder F, Griffiths TL, Hsu M. 2018. Overrepresentation of extreme events in decision making reflects rational use of cognitive resources. Psychological Review 125(1):1
47. Hay N, Russell S, Tolpin D, Shimony S. 2012. Selecting computations: Theory and applications. In Proceedings of the 28th Conference on Uncertainty in Artificial Intelligence, ed. N de Freitas, K Murphy. Corvallis, OR: AUAI Press
48. Callaway F, Rangel A, Griffiths TL. 2021. Fixation patterns in simple choice reflect optimal information sampling. PLoS Computational Biology 17(3):e1008863
49. Ortega DA, Braun PA. 2011. Information, utility and bounded rationality. In International Conference on Artificial General Intelligence, pp. 269-274. Springer
50. Bhui R, Gershman SJ. 2018. Decision by sampling implements efficient coding of psychoeconomic functions. Psychological Review 125(6):985
51. Sims CA. 2003. Implications of rational inattention. Journal of Monetary Economics 50(3):665-690
52. Gershman SJ, Bhui R. 2020. Rationally inattentive intertemporal choice. Nature Communications 11(1):1-8
53. Gergely G, Csibra G. 2003. Teleological reasoning in infancy: The naïve theory of rational action. Trends in Cognitive Sciences 7(7):287-292
54. Flavell JH. 2004. Theory-of-mind development: Retrospect and prospect. Merrill-Palmer Quarterly 50(3):274-290
55. Gergely G, Nádasdy Z, Csibra G, Bíró S. 1995. Taking the intentional stance at 12 months of age. Cognition 56(2):165-193
56. Abbeel P, Ng AY. 2004. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning (ICML '04). New York, NY, USA: ACM
57. Argall BD, Chernova S, Veloso M, Browning B. 2009. A survey of robot learning from demonstration. Robotics and Autonomous Systems 57(5):469-483
58. Arora S, Doshi P. 2021. A survey of inverse reinforcement learning: Challenges, methods and progress. Artificial Intelligence:103500
59. Jara-Ettinger J, Gweon H, Tenenbaum JB, Schulz LE. 2015. Children's understanding of the costs and rewards underlying rational action. Cognition 140:14-23
60. Liu S, Ullman TD, Tenenbaum JB, Spelke ES. 2017. Ten-month-old infants infer the value of goals from the costs of actions. Science 358(6366):1038-1041
61. Jern A, Lucas CG, Kemp C. 2017. People learn other people's preferences through inverse decision-making. Cognition 168:46-64
62. Baker CL, Jara-Ettinger J, Saxe R, Tenenbaum JB. 2017. Rational quantitative attribution of beliefs, desires and percepts in human mentalizing. Nature Human Behaviour 1(4):1-10
63. Jara-Ettinger J. 2019. Theory of mind as inverse reinforcement learning. Current Opinion in Behavioral Sciences 29:105-110
64. Gershman SJ, Gerstenberg T, Baker CL, Cushman FA. 2016. Plans, habits, and theory of mind. PLOS One 11(9):e0162246
65. Ong DC, Zaki J, Goodman ND. 2015. Affective cognition: Exploring lay theories of emotion. Cognition 143:141-162
66. Saxe R, Houlihan SD. 2017. Formalizing emotion concepts within a Bayesian model of theory of mind. Current Opinion in Psychology 17:15-21
67. Ong DC, Zaki J, Goodman ND. 2019. Computational models of emotion inference in theory of mind: A review and roadmap. Topics in Cognitive Science 11(2):338-357
68. Gerstenberg T, Ullman TD, Nagel J, Kleiman-Weiner M, Lagnado DA, Tenenbaum JB. 2018. Lucky or clever? From expectations to responsibility judgments. Cognition 177:122-141
69. Kleiman-Weiner M, Gerstenberg T, Levine S, Tenenbaum JB. 2015. Inference of intention and permissibility in moral decision making. In Proceedings of the 37th Annual Conference of the Cognitive Science Society, ed. D Noelle, R Dale, AS Warlaumont, J Yoshimi, T Matlock, CD Jennings, PP Maglio, pp. 920-925. Austin, TX: Cognitive Science Society
70. Lau T, Pouncy HT, Gershman SJ, Cikara M. 2018. Discovering social groups via latent structure learning. Journal of Experimental Psychology: General 147(12):1881
71. Shum M, Kleiman-Weiner M, Littman ML, Tenenbaum JB. 2019. Theory of minds: Understanding behavior in groups through inverse planning. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 6163-6170
72. Littman ML, Topcu U, Fu J, Isbell C, Wen M, MacGlashan J. 2017. Environment-independent task specifications via GLTL. arXiv preprint arXiv:1704.04341
73. Velez-Ginorio J, Siegel MH, Tenenbaum JB, Jara-Ettinger J. 2017. Interpreting actions by attributing compositional desires. In Proceedings of the 39th Annual Conference of the Cognitive Science Society, ed. G Gunzelmann, A Howes, T Tenbrink, J Davelaar. Cognitive Science Society
74. Vazquez-Chanlatte M, Jha S, Tiwari A, Ho MK, Seshia S. 2018. Learning task specifications from demonstrations. Advances in Neural Information Processing Systems 31:5367-5377
75. Icarte RT, Klassen T, Valenzano R, McIlraith S. 2018. Using reward machines for high-level task specification and decomposition in reinforcement learning. In International Conference on Machine Learning, pp. 2107-2116. PMLR
76. Ho MK, Sanborn S, Callaway F, Bourgin D, Griffiths T. 2018. Human priors in hierarchical program induction. Computational Cognitive Neuroscience (CCN) 1
77. Rabinowitz N, Perbet F, Song F, Zhang C, Eslami SA, Botvinick M. 2018. Machine theory of mind. In International Conference on Machine Learning, pp. 4218-4227. PMLR
78. Nematzadeh A, Burns K, Grant E, Gopnik A, Griffiths T. 2018. Evaluating theory of mind in question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2392-2400. Brussels, Belgium: Association for Computational Linguistics
79. Wittgenstein L. 1953. Philosophical Investigations. New York: MacMillan
80. Grice HP. 1957. Meaning. The Philosophical Review 66(3):377-388
81. Sperber D, Wilson D. 1986. Relevance: Communication and Cognition. Cambridge, MA, USA: Harvard University Press
82. Clark HH. 1996. Using Language. Cambridge: Cambridge University Press
83. Csibra G, Gergely G. 2009. Natural pedagogy. Trends in Cognitive Sciences 13(4):148-153
84. Shafto P, Goodman ND, Griffiths TL. 2014. A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Cognitive Psychology 71:55-89
85. Goodman ND, Frank MC. 2016. Pragmatic language interpretation as probabilistic inference. Trends in Cognitive Sciences 20(11):818-829
86. Shafto P, Goodman ND. 2008. Teaching games: Statistical sampling assumptions for learning in pedagogical situations. In Proceedings of the 30th Annual Conference of the Cognitive Science Society. Austin, TX: Cognitive Science Society
87. Landrum AR, Eaves Jr BS, Shafto P. 2015. Learning to trust and trusting to learn: A theoretical framework. Trends in Cognitive Sciences 19(3):109-111
88. Bonawitz E, Shafto P, Gweon H, Goodman ND, Spelke E, Schulz L. 2011. The double-edged sword of pedagogy: Instruction limits spontaneous exploration and discovery. Cognition 120(3):322-330
89. Bridgers S, Jara-Ettinger J, Gweon H. 2020. Young children consider the expected utility of others' learning to decide what to teach. Nature Human Behaviour 4(2):144-152
90. Wang P, Wang J, Paranamana P, Shafto P. 2020. A mathematical theory of cooperative communication. Advances in Neural Information Processing Systems 33
91. Shafto P, Wang J, Wang P. 2021. Cooperative communication as belief transport. Trends in Cognitive Sciences
92. Kao JT, Wu JY, Bergen L, Goodman ND. 2014. Nonliteral understanding of number words. Proceedings of the National Academy of Sciences 111(33):12002-12007
93. Yoon EJ, Tessler MH, Goodman ND, Frank MC. 2020. Polite speech emerges from competing social goals. Open Mind 4:71-87
94. Tessler MH, Lopez-Brau M, Goodman ND. 2017. Warm (for winter): Comparison class understanding in vague language. In 15th International Conference on Cognitive Modeling, pp. 193
95. Tessler MH, Goodman ND. 2019. The language of generalization. Psychological Review 126(3):395
96. Csibra G. 2010. Recognizing communicative intentions in infancy. Mind & Language 25(2):141-168
97. Scott-Phillips T. 2014. Speaking Our Minds: Why Human Communication Is Different, and How Language Evolved to Make It Special. Macmillan International Higher Education
98. Shafto P, Eaves B, Navarro DJ, Perfors A. 2012. Epistemic trust: Modeling children's reasoning about others' knowledge and intent. Developmental Science 15(3):436-447
99. Ho MK, Cushman F, Littman ML, Austerweil JL. In press. Communication in action: Planning and interpreting communicative demonstrations. Journal of Experimental Psychology: General
100. MacGlashan J, Littman ML. 2015. Between imitation and intention learning. In Proceedings of the 24th International Conference on Artificial Intelligence, pp. 3692-3698
101. Ho MK, MacGlashan J, Littman ML, Cushman F. 2017. Social is special: A normative framework for teaching with and learning from evaluative feedback. Cognition 167:91-106
102. Ho MK, Littman ML, Cushman F, Austerweil JL. 2015. Teaching with rewards and punishments: Reinforcement or communication? In Proceedings of the 37th Annual Conference of the Cognitive Science Society, ed. D Noelle, R Dale, AS Warlaumont, J Yoshimi, T Matlock, CD Jennings, PP Maglio, pp. 920-925. Austin, TX: Cognitive Science Society
103. Ho MK, Cushman F, Littman ML, Austerweil JL. 2019. People teach with rewards and punishments as communication, not reinforcements. Journal of Experimental Psychology: General 148(3):520-549
104. Loftin R, MacGlashan J, Peng B, Taylor M, Littman M, et al. 2014. A strategy-aware technique for learning behaviors from discrete human feedback. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28
105. Hadfield-Menell D, Milli S, Abbeel P, Russell S, Dragan AD. 2017. Inverse reward design. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 6768-6777
106. MacGlashan J, Ho MK, Loftin R, Peng B, Wang G, et al. 2017. Interactive learning from policy-dependent human feedback. In International Conference on Machine Learning, pp. 2285-2294. PMLR
107. Baird L. 1995. Residual algorithms: Reinforcement learning with function approximation. In Machine Learning Proceedings 1995. Elsevier
108. Arumugam D, Lee JK, Saskin S, Littman ML. 2019. Deep reinforcement learning from policy-dependent human feedback. arXiv preprint arXiv:1902.04257
109. Hadfield-Menell D, Dragan A, Abbeel P, Russell S. 2016. Cooperative inverse reinforcement learning. In Proceedings of the 30th International Conference on Neural Information Processing Systems, pp. 3916-3924
110. Jeon HJ, Milli S, Dragan A. 2020. Reward-rational (implicit) choice: A unifying formalism for reward learning. In Advances in Neural Information Processing Systems, ed. H Larochelle, M Ranzato, R Hadsell, MF Balcan, H Lin, vol. 33, pp. 4415-4426. Curran Associates, Inc.
111. Correa CG, Ho MK, Callaway F, Griffiths TL. 2020. Resource-rational task decomposition to minimize planning costs. In Proceedings of the 42nd Annual Conference of the Cognitive Science Society, ed. S Denison, M Mack, Y Xu, B Armstrong, pp. 2974-2980. Cognitive Science Society
112. Ho MK, Abel D, Correa CG, Littman ML, Cohen JD, Griffiths TL. 2021. Control of mental representations in human planning. arXiv preprint arXiv:2105.06948
113. Hawkins RD, Gweon H, Goodman ND. 2021. The division of labor in communication: Speakers help listeners account for asymmetries in visual perspective. Cognitive Science 45(3):e12926
114. Gates V, Callaway F, Ho MK, Griffiths TL. 2021. A rational model of people's inferences about others' preferences based on response times. Cognition 217:104885
115. Evans O, Stuhlmüller A, Goodman N. 2016. Learning the preferences of ignorant, inconsistent agents. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30
116. Alanqary A, Lin GZ, Le J, Zhi-Xuan T, Mansinghka VK, Tenenbaum JB. 2021. Modeling the mistakes of boundedly rational agents within a Bayesian theory of mind. arXiv preprint arXiv:2106.13249
117. Berke M, Jara-Ettinger J. 2021. Thinking about thinking through inverse reasoning. PsyArXiv preprint 10.31234/osf.io/r25qn
118. Zhi-Xuan T, Mann J, Silver T, Tenenbaum J, Mansinghka V. 2020. Online Bayesian goal inference for boundedly rational planning agents. Advances in Neural Information Processing Systems 33
119. Shoham Y, Powers R, Grenager T. 2007. If multi-agent learning is the answer, what is the question? Artificial Intelligence 171(7):365-377
120. Kleiman-Weiner M, Ho MK, Austerweil JL, Littman ML, Tenenbaum JB. 2016. Coordinate to cooperate or compete: Abstract goals and joint intentions in social interaction. In Proceedings of the 38th Annual Conference of the Cognitive Science Society, ed. A Papafragou, D Grodner, D Mirman, JC Trueswell, pp. 1679-1684. Austin, TX: Cognitive Science Society
121. Hawkins RX, Goodman ND, Goldstone RL. 2019. The emergence of social norms and conventions. Trends in Cognitive Sciences 23(2):158-169
| [] |
[
"TLRM: Task-level Relation Module for GNN-based Few-Shot Learning",
"TLRM: Task-level Relation Module for GNN-based Few-Shot Learning"
] | [
"Yurong Guo \nPattern Recognition and Intelligent System Lab\nBeijing University of Posts and Telecommunications\nBeijingChina\n",
"Zhanyu Ma \nPattern Recognition and Intelligent System Lab\nBeijing University of Posts and Telecommunications\nBeijingChina\n",
"Xiaoxu Li \nLanzhou University of Technology\nLanzhouChina\n",
"Yuan Dong \nPattern Recognition and Intelligent System Lab\nBeijing University of Posts and Telecommunications\nBeijingChina\n"
] | [
"Pattern Recognition and Intelligent System Lab\nBeijing University of Posts and Telecommunications\nBeijingChina",
"Pattern Recognition and Intelligent System Lab\nBeijing University of Posts and Telecommunications\nBeijingChina",
"Lanzhou University of Technology\nLanzhouChina",
"Pattern Recognition and Intelligent System Lab\nBeijing University of Posts and Telecommunications\nBeijingChina"
] | [] | Recently, graph neural networks (GNNs) have shown powerful ability to handle few-shot classification problem, which aims at classifying unseen samples when trained with limited labeled samples per class. GNN-based few-shot learning architectures mostly replace traditional metric with a learnable GNN. In the GNN, the nodes are set as the samples' embedding, and the relationship between two connected nodes can be obtained by a network, the input of which is the difference of their embedding features. We consider this method of measuring relation of samples only models the sample-tosample relation, while neglects the specificity of different tasks. That is, this method of measuring relation does not take the task-level information into account. To this end, we propose a new relation measure method, namely the task-level relation module (TLRM), to explicitly model the task-level relation of one sample to all the others. The proposed module captures the relation representations between nodes by considering the sample-to-task instead of sample-to-sample embedding features. We conducted extensive experiments on four benchmark datasets: mini-ImageNet, tiered-ImageNet, CUB-200-2011, and CIFAR-FS. Experimental results demonstrate that the proposed module is effective for GNN-based few-shot learning. | 10.1109/vcip53242.2021.9675452 | [
"https://arxiv.org/pdf/2101.09840v3.pdf"
] | 237,396,058 | 2101.09840 | 6659d6b3d72fcd55a3d2a903785af424ba181cbb |
TLRM: Task-level Relation Module for GNN-based Few-Shot Learning
Yurong Guo
Pattern Recognition and Intelligent System Lab
Beijing University of Posts and Telecommunications
BeijingChina
Zhanyu Ma
Pattern Recognition and Intelligent System Lab
Beijing University of Posts and Telecommunications
BeijingChina
Xiaoxu Li
Lanzhou University of Technology
LanzhouChina
Yuan Dong
Pattern Recognition and Intelligent System Lab
Beijing University of Posts and Telecommunications
BeijingChina
TLRM: Task-level Relation Module for GNN-based Few-Shot Learning
Index Terms-Few-shot learning, Graph Neural Networks, Task-level Relation
Recently, graph neural networks (GNNs) have shown a powerful ability to handle the few-shot classification problem, which aims at classifying unseen samples when trained with limited labeled samples per class. GNN-based few-shot learning architectures mostly replace the traditional metric with a learnable GNN. In the GNN, the nodes are set as the samples' embeddings, and the relationship between two connected nodes can be obtained by a network, the input of which is the difference of their embedding features. We consider that this method of measuring the relation of samples only models the sample-to-sample relation, while it neglects the specificity of different tasks. That is, this method of measuring relation does not take the task-level information into account. To this end, we propose a new relation measure method, namely the task-level relation module (TLRM), to explicitly model the task-level relation of one sample to all the others. The proposed module captures the relation representations between nodes by considering the sample-to-task instead of sample-to-sample embedding features. We conducted extensive experiments on four benchmark datasets: mini-ImageNet, tiered-ImageNet, CUB-200-2011, and CIFAR-FS. Experimental results demonstrate that the proposed module is effective for GNN-based few-shot learning.
I. INTRODUCTION
Deep learning has achieved great success in visual recognition tasks [1]-[4], which depends on powerful models and large amounts of labelled samples [5]. However, humans can learn new concepts with few examples, or none at all. This gap motivated researchers to study few-shot learning and zero-shot learning.
The goal of few-shot learning is to classify unseen samples, given just a small number of labeled samples in each class. It has attracted considerable attention [6]-[17]. One promising line of study is metric-based few-shot learning [6]-[13]. Given a query sample and a few labeled support samples, an embedding function extracts features for all samples, and then a metric module measures the distance between the query embedding and the class embedding to give a recognition result. Recently, there have been some studies utilizing Graph Neural Networks (GNNs) [8], [10], [11], [18] to handle the few-shot classification task, which can be seen as a kind of metric learning method. In a GNN-based few-shot learning model, all embedding features are connected to construct a graph, and each node is represented by the embedding feature of a sample. Then the graph classifies the unlabeled query by measuring the similarity between two samples.
Even though GNN-based models have made significant advances in few-shot classification, they suffer from a distinct limitation. In the metric module of GNN-based methods, the relation representation for a pair of samples is obtained by calculating the absolute difference of their embeddings [8], [10], [11], [18]. It only considers the corresponding embedding features of the two samples. Intuitively, a pair-wise relationship is not only dependent on the distance between the corresponding embedding features, but is also related to all embedding features in a task. As shown in the left panel of Figure 1, there is no significant difference between the target sample and all other samples in the task. The distance representation between two samples neglects the specificity of the task and lacks discrimination. This causes the problem that the similarity scores are not significantly different, so that the category of the target sample is not clear. To deal with the key challenge of how to learn relation representations with distinctive information, we propose a sample-to-task metric module, as shown in the right panel of Figure 1, which adopts a meta-learning strategy to learn the relation representations. The main contributions of this paper are summarized as follows:
• We propose a task-level relation module (TLRM). The proposed TLRM utilizes the attention mechanism to learn task-specific relation representations for each task.
• The comprehensive experimental results on four benchmark datasets show that our proposed module is effective for GNN-based few-shot models. In addition, the results of semi-supervised few-shot classification and a visualization of similarity scores are provided to further evaluate our module.
II. RELATED WORK
Meta Learning in Few-shot Learning: The meta-learning framework is an effective approach for few-shot learning, which mainly focuses on how to learn and utilize meta-level knowledge to adapt to new tasks quickly and well. One of the most influential studies is model-agnostic meta-learning (MAML) [19]. MAML learns initialization parameters by a cross-task training strategy such that the base learner can rapidly generalize to new tasks using a few support samples. Subsequently, many MAML variants [20]-[25] have been developed.
Metric Learning in Few-shot Learning: On the metric learning side, most algorithms consist of an embedding function extracting features for instances and a metric function measuring the similarity between the query embedding and the class embedding. Koch et al. [9] used a siamese network to compute the pair-wise distance between samples. Prototypical networks [6] first build a prototype representation of each class and measure the distance between the query embedding and the class prototype using the Euclidean distance. The matching network [26] uses a neural network with external memories to map samples to embedding features, which considers the full context of a task. TADAM [27] introduced a metric scaling factor to optimize the similarity metric of prototypical nets. Zheng et al. [28] argued that the average prototype ignores the different importance of different support samples and proposed principal characteristic nets.
Fixed metrics may restrict the embedding function from producing discriminative representations. Sung et al. [7] introduced the relation network (RN) for few-shot learning, which learns a deep distance metric with a neural network. However, due to the inherent local connectivity of CNNs, the RN can be sensitive to the spatial position relationship of semantic objects in the two compared images. To address this problem, Wu et al. [29] introduced a deformable feature extractor (DFE) to extract more efficient features, and designed a dual correlation attention mechanism (DCA) to deal with its inherent local connectivity. Hou et al. [30] proposed a cross attention network for few-shot classification, which is designed to model the semantic relevance between class and query features.
GNN-based methods in Few-shot Learning: Recently, several approaches have been proposed to exploit GNNs for the few-shot learning task. Specifically, Garcia et al. [8] first utilized a GNN to solve the few-shot learning problem, where all embedding features extracted by a convolutional neural network are densely connected. Liu et al. [11] proposed a transductive propagation network (TPN), which utilizes the entire query set for transductive inference. To further exploit intra-cluster similarity and inter-cluster dissimilarity, Kim et al. [10] proposed an edge-labeling graph neural network. In order to explicitly model the distribution-level relation, Yang et al. [18] proposed the distribution propagation graph network (DPGN).
In existing GNN-based few-shot learning methods, pair-wise distance representations are the absolute difference of the embedding features. However, when the classes in the task are similar, this leads to insufficient discrimination in the metric representations. Therefore, in this paper, we focus on learning distinctive relation information through a task-level relation module.
III. THE PROPOSED METHOD
A. GNN-based few-shot learning
As shown in Figure 2 (a), a GNN-based few-shot model usually consists of a CNN for extracting features and a GNN for propagating labels from labeled nodes to unlabeled ones according to the similarity scores between nodes. In the training and testing process, GNN-based few-shot models usually adopt the episodic mechanism, in which each episode (task) consists of the support set S and the query set Q. In an N-way K-shot problem, the support set contains N*K labeled support samples and the query set contains T unseen samples.
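As a concrete illustration of the episodic mechanism, the following minimal Python sketch (our own illustration, not code from the paper; the dataset layout and function names are assumptions) samples one N-way K-shot task with T queries:

import random

def sample_episode(dataset, n_way=5, k_shot=5, t_query=15):
    """Sample one N-way K-shot episode (task).

    `dataset` is assumed to map class labels to lists of samples.
    Returns the support set (N*K samples) and the query set (T samples).
    """
    classes = random.sample(list(dataset.keys()), n_way)
    support, query = [], []
    queries_per_class = t_query // n_way
    for label, cls in enumerate(classes):
        samples = random.sample(dataset[cls], k_shot + queries_per_class)
        support += [(x, label) for x in samples[:k_shot]]
        query += [(x, label) for x in samples[k_shot:]]
    return support, query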
Generally, the CNN g(·) used as the backbone for extracting features is of two different types: 1) the 4-layer convolution network (ConvNet) [10], [11], or 2) the 12-layer residual network (ResNet-12) used in [18]. The GNN consists of L layers to process the graph. Let V = {V_1, V_2, ..., V_{N×K+T}} be the embedding features for all nodes extracted by the CNN, R_ij the relation representation between nodes, and s_ij the similarity score between nodes i and j. Given V^{L−1} and s^{L−1} from layer L − 1, a node feature update is first conducted by a neighborhood aggregation procedure, and node i is updated as
V_i^L = f_v( Σ_{j=1}^{N×K+T} V_j^{L−1} · s_{ij}^{L−1} ) ,    (1)
where f_v is the feature (node) transformation network. Then, the relation representation is obtained by calculating the element-wise absolute difference between the two node vectors. It can be denoted as
R_ij = |V_i^L − V_j^L| = ( |V_i1^L − V_j1^L|, ..., |V_iC^L − V_jC^L| ) .    (2)
Finally, the relation representation R_ij is fed into a multilayer perceptron (MLP) to obtain the similarity score between nodes:
s_ij = f_s(R_ij) = f_s( |V_i^L − V_j^L| ) = σ( Σ_{k=1}^{C} ω_k · |V_ik^L − V_jk^L| ) .    (3)
where f_s is the transformation network. The goal of GNN-based few-shot learning is to learn the functions g, f_v and f_s to classify a query sample x_query by ŷ_query = f_s(f_v(g(x_query; D_support))) ∈ (0, 1)^N. Note that the relationship is obtained by measuring the distance between the two corresponding nodes, which is node-to-node and task-agnostic.
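The following PyTorch-style sketch (our own illustration; the layer shapes and the exact forms of f_v and f_s are assumptions, as the paper does not fix them here) performs one such node update and computes the absolute-difference relation scores of equations (1)-(3):

import torch
import torch.nn as nn

class GNNLayer(nn.Module):
    """One GNN layer: aggregate nodes with similarity weights (eq. 1),
    then score node pairs from absolute feature differences (eqs. 2-3)."""
    def __init__(self, dim):
        super().__init__()
        self.f_v = nn.Linear(dim, dim)                              # node transformation
        self.f_s = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())   # scoring network

    def forward(self, v, s):
        # v: (M, C) node features, s: (M, M) similarity scores, M = N*K+T
        v_new = self.f_v(s @ v)                                 # eq. (1)
        r = (v_new.unsqueeze(1) - v_new.unsqueeze(0)).abs()     # eq. (2), (M, M, C)
        s_new = self.f_s(r).squeeze(-1)                         # eq. (3), (M, M)
        return v_new, s_new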
B. Task-level Relation Module
In this paper, an attention mechanism is employed to transform sample embeddings into relation representations that take the task-specific embeddings into account. Note that the relation representation is task-specific and not only the distance between nodes. We denote this as the Task-level Relation Module (TLRM). The proposed TLRM can avoid directly comparing task-irrelevant local representations. As shown in Figure 2 (b), given the feature representations V ∈ R^{(N×K+T)×C}, the relation representations can be obtained. The implementation details are as follows.
For node i, the attention value between the target embedding and all other samples in the task can be obtained by the method commonly used in attention mechanisms. The attention value is computed as follows:
a(V_i, V_j) = exp(e_ij) / Σ_{k=1}^{N×K+T} exp(e_ik) ,    (4)
where a ∈ R^{(N×K+T)×(N×K+T)} represents the similarity between nodes compared to all other embeddings in the task, and e_ij reflects the matching degree of node i to node j. The higher the degree, the bigger a_ij. The matching degree e_ij is computed as follows:
e_ij = s(V_j, V_i^T) / √C ,    (5)
where the feature representation V_i ∈ R^{1×C} of the target sample is first reshaped to V_i^T ∈ R^{C×1} through a transpose operation, and s(V_j, V_i^T) is the vector multiplication (inner product) operation. Then, a_ij is used to encode V_j, and the relation representation can be obtained, which is denoted as
R_ij = a(V_i, V_j) · V_j ,    (6)
The relation representation R_ij in equation (6) models the relation between nodes i and j; it is a task-level relation representation of sample i to sample j compared to all the other samples in the task. Afterwards, R_ij is fed into an MLP to obtain the relation score for further classification:

s_ij = MLP(R_ij) .    (7)
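A minimal PyTorch sketch of the TLRM computation of equations (4)-(7) (our own illustration; the MLP architecture is an assumption, as the paper does not specify it here):

import math
import torch
import torch.nn as nn

class TLRM(nn.Module):
    """Task-level relation module (eqs. 4-7): scaled dot-product attention
    over all task embeddings, followed by a scoring MLP."""
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, v):
        # v: (M, C) embeddings of all N*K+T samples in the task
        e = v @ v.t() / math.sqrt(v.size(1))   # matching degrees, eq. (5)
        a = torch.softmax(e, dim=1)            # attention values, eq. (4)
        r = a.unsqueeze(-1) * v.unsqueeze(0)   # relation reps, eq. (6), (M, M, C)
        return self.mlp(r).squeeze(-1)         # similarity scores, eq. (7)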
IV. EXPERIMENTS AND DISCUSSIONS
A. Datasets and setups
To evaluate our module, we select two GNN-based few-shot models: EGNN and DPGN, and four standard few-shot learning benchmarks: mini-ImageNet [26], tiered-ImageNet [31], CUB-200-2011 [32] and CIFAR-FS [33].
For the sake of fairness, all experiments employed the same setups as EGNN and DPGN. EGNN used the ConvNet and DPGN used ResNet-12 for extracting features. In the training process, the Adam optimizer was used in all experiments with an initial learning rate of 10^-3. The learning rate was decayed by a factor of 0.1 every 15,000 iterations, and the weight decay was set to 10^-5. For all datasets, 5-way 1-shot and 5-way 5-shot experiments were conducted. We randomly sampled 10,000 tasks and report the mean accuracy along with its 95% confidence interval.
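A minimal sketch of this training configuration in PyTorch (our own illustration; `model` and `task_loader` are placeholders for the backbone with TLRM and the episodic task sampler):

import torch

def train(model, task_loader, n_iters=100000):
    # Adam with the stated initial learning rate and weight decay
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)
    # decay the learning rate by a factor of 0.1 every 15,000 iterations
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=15000, gamma=0.1)
    for step, (support, query) in zip(range(n_iters), task_loader):
        loss = model(support, query)   # assumed: model returns the episode loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        sched.step()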
B. Results and discussions for few-shot classification
Experimental results for 5-way 1-shot and 5-way 5-shot classification are shown in Table I and Table II. We can see that EGNN and DPGN with our TLRM achieve higher accuracy than the ones without TLRM on mini-ImageNet, tiered-ImageNet, and CUB-200-2011. Meanwhile, some results on the CIFAR-FS dataset dropped slightly; the reason might be that the categories in the CIFAR-FS dataset are highly distinguishable. In addition, the CUB-200-2011 dataset is the most widely used benchmark for fine-grained image classification, which has significant intra-class variance and inter-class similarity; fine-grained tasks are therefore more challenging in few-shot learning. Clearly, the improvement on the CUB-200-2011 dataset is significant in Table I and Table II, which shows that the relation representation obtained by our module is more discriminative than the previous method for tasks with high similarity. Overall, our method is simple and effective. Semi-supervised experiments were conducted in the 5-way 5-shot setting on mini-ImageNet with two backbones, in which the support samples are only partially labeled. The results are presented in Table III. Notably, EGNN and DPGN with our TLRM outperform the previous backbones, especially when the labeled-sample portion is decreased.
C. Ablation studies
In order to investigate the effect of our proposed TLRM on different layers of the GNN, ablation studies were conducted with L = 1, L = 2, and L = 3 on mini-ImageNet with the EGNN backbone. It can be observed from Table IV that the proposed TLRM plays a significant role in each layer of EGNN.
D. Visualization of similarity scores
For further analysis, Figure 3 shows the similarity scores in the last layer of EGNN. The similarity scores are averaged over 10,000 tasks in the setting of 5-way 5-shot with 5 queries for each class. The 25 samples on the vertical axis are the support set, and the 25 samples on the horizontal axis are the query set. Notably, EGNN with our module not only contributes to predicting more accurately but also reduces the similarity score between samples in different classes and increases the similarity score between samples in the same class.
V. CONCLUSIONS
In this paper, we propose a task-level relation module that captures relation representations by employing all the embedding features in a single task. By considering all the samples in the task, our method can hold discriminative relation features for each node pair. Experimental results demonstrate that it improves the performance of recently proposed GNN-based methods on four benchmark datasets: mini-ImageNet, tiered-ImageNet, CUB-200-2011, and CIFAR-FS.
Fig. 1. The left panel shows a general framework of previous metric approaches based on GNN. The right panel briefly illustrates our approach.
Fig. 2. (a) shows the GNN-based few-shot model. In (b), the left panel shows the general framework of our approach for calculating the similarity scores, and the right panel shows the task-level relation module.
Fig. 3. Visualization of similarity scores obtained by EGNN (top) and EGNN with our module (bottom).
TABLE I
5-WAY 1-SHOT CLASSIFICATION ACCURACY ON FOUR BENCHMARK DATASETS: MINI-IMAGENET, TIERED-IMAGENET, CUB-200-2011, AND CIFAR-FS

Model           | Trans. | mini-ImageNet | tiered-ImageNet | CUB-200-2011 | CIFAR-FS
EGNN (CVPR 19)  | No     | 52.86 ± 0.42  | 57.09 ± 0.42    | 64.82 ± 0.41 | 65.51 ± 0.43
EGNN + TLRM     | No     | 53.65 ± 0.43  | 57.40 ± 0.42    | 65.07 ± 0.41 | 65.00 ± 0.42
TPN (ICLR 18)   | Yes    | 59.46         | −               | −            | −
EGNN            | Yes    | 58.94 ± 0.51  | 62.37 ± 0.51    | 73.18 ± 0.51 | 72.20 ± 0.49
DPGN (CVPR 20)  | Yes    | 66.41 ± 0.51  | 71.86 ± 0.50    | 75.25 ± 0.46 | 75.83 ± 0.47
EGNN + TLRM     | Yes    | 60.79 ± 0.51  | 64.52 ± 0.51    | 75.02 ± 0.46 | 73.42 ± 0.50
DPGN + TLRM     | Yes    | 66.97 ± 0.53  | 72.24 ± 0.50    | 77.53 ± 0.46 | 77.05 ± 0.46
TABLE II
5-WAY 5-SHOT CLASSIFICATION ACCURACY ON FOUR BENCHMARK DATASETS: MINI-IMAGENET, TIERED-IMAGENET, CUB-200-2011, AND CIFAR-FS

Model           | Trans. | mini-ImageNet | tiered-ImageNet | CUB-200-2011 | CIFAR-FS
EGNN            | No     | 68.20 ± 0.41  | 71.13 ± 0.39    | 80.05 ± 0.36 | 76.95 ± 0.37
EGNN + TLRM     | No     | 68.72 ± 0.40  | 72.39 ± 0.39    | 81.03 ± 0.36 | 77.78 ± 0.37
TPN             | Yes    | 75.65         | −               | −            | −
EGNN            | Yes    | 75.71 ± 0.46  | 81.04 ± 0.43    | 87.68 ± 0.38 | 86.13 ± 0.41
DPGN            | Yes    | 82.04 ± 0.45  | 82.70 ± 0.43    | 87.72 ± 0.36 | 87.85 ± 0.38
EGNN + TLRM     | Yes    | 76.18 ± 0.45  | 81.47 ± 0.43    | 88.00 ± 0.36 | 85.70 ± 0.39
DPGN + TLRM     | Yes    | 82.58 ± 0.45  | 83.31 ± 0.44    | 90.39 ± 0.34 | 89.15 ± 0.37

* "No" means non-transductive method, and "Yes" means transductive method.
TABLE III
SEMI-SUPERVISED FEW-SHOT CLASSIFICATION ACCURACY ON MINI-IMAGENET. THE RESULTS ARE TESTED IN TRANSDUCTIVE LEARNING.

Model           | 20%          | 40%          | 60%
EGNN            | 63.91 ± 0.42 | 65.84 ± 0.41 | 68.06 ± 0.44
EGNN + TLRM     | 66.37 ± 0.41 | 66.82 ± 0.42 | 69.08 ± 0.42
DPGN            | 74.16 ± 0.44 | 81.23 ± 0.44 | 80.84 ± 0.46
DPGN + TLRM     | 80.99 ± 0.44 | 81.70 ± 0.45 | 80.94 ± 0.45
TABLE IV
EFFECT OF ADDING THE PROPOSED TLRM TO DIFFERENT LAYERS OF EGNN. THE RESULTS ON 5-WAY 1-SHOT ARE REPORTED.

Model                       | mini-ImageNet
EGNN                        | 52.86 ± 0.42
EGNN + TLRM (L = 1)         | 53.11 ± 0.43
EGNN + TLRM (L = 2)         | 53.29 ± 0.44
EGNN + TLRM (L = 3)         | 53.42 ± 0.42
EGNN + TLRM (L = 1 & 2 & 3) | 53.65 ± 0.43
[1] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in CVPR, 2014.
[2] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in CVPR, 2016.
[3] D. Chang, Y. Ding, J. Xie, A. Bhunia, X. Li, Z. Ma, M. Wu, J. Guo, and Y. Song, "The devil is in the channels: Mutual-channel loss for fine-grained image classification," IEEE Transactions on Image Processing, vol. 29, pp. 4683-4695, 2020.
[4] Y. Ding, Z. Ma, S. Wen, J. Xie, D. Chang, Z. Si, M. Wu, and H. Ling, "AP-CNN: Weakly supervised attention pyramid convolutional neural network for fine-grained visual classification," IEEE Transactions on Image Processing, vol. 30, pp. 2826-2836, 2021.
[5] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. Berg, and F. Li, "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[6] J. Snell, K. Swersky, and R. Zemel, "Prototypical networks for few-shot learning," in NIPS, 2017.
[7] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. Torr, and T. Hospedales, "Learning to compare: Relation network for few-shot learning," in CVPR, 2018.
[8] V. Garcia and J. Bruna, "Few-shot learning with graph neural networks," in ICLR, 2018.
[9] G. Koch, R. Zemel, and R. Salakhutdinov, "Siamese neural networks for one-shot image recognition," in ICML, 2015.
[10] J. Kim, T. Kim, S. Kim, and C. Yoo, "Edge-labeling graph neural network for few-shot learning," in CVPR, 2019.
[11] Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. Hwang, and Y. Yang, "Learning to propagate labels: Transductive propagation network for few-shot learning," in ICLR, 2018.
[12] W. Li, J. Xu, J. Huo, L. Wang, Y. Gao, and J. Luo, "Distribution consistency based covariance metric networks for few-shot learning," in AAAI, 2019.
[13] W. Li, L. Wang, J. Xu, J. Huo, Y. Gao, and J. Luo, "Revisiting local descriptor based image-to-class measure for few-shot learning," in CVPR, 2019.
[14] X. Li, J. Wu, Z. Sun, Z. Ma, J. Cao, and J. Xue, "BSNet: Bi-similarity network for few-shot fine-grained image classification," IEEE Transactions on Image Processing, vol. 30, pp. 1318-1331, 2021.
[15] C. Xu, Y. Fu, C. Liu, C. Wang, J. Li, F. Huang, L. Zhang, and X. Xue, "Learning dynamic alignment via meta-filter for few-shot learning," in CVPR, 2021.
[16] H. Zhang, P. Koniusz, S. Jian, H. Li, and P. Torr, "Rethinking class relations: Absolute-relative supervised and unsupervised few-shot learning," in CVPR, 2021.
[17] B. Zhang, X. Li, Y. Ye, Z. Huang, and L. Zhang, "Prototype completion with primitive knowledge for few-shot learning," in CVPR, 2021.
[18] L. Yang, L. Li, Z. Zhang, X. Zhou, E. Zhou, and Y. Liu, "DPGN: Distribution propagation graph network for few-shot learning," in CVPR, 2020.
[19] C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in ICML, 2017.
[20] M. Jamal and G. Qi, "Task agnostic meta-learning for few-shot learning," in CVPR, 2019.
[21] A. Rusu, D. Rao, J. Sygnowski, O. Vinyals, R. Pascanu, S. Osindero, and R. Hadsell, "Meta-learning with latent embedding optimization," in ICLR, 2019.
[22] X. Jiang, M. Havaei, F. Varno, G. Chartrand, N. Chapados, and S. Matwin, "Learning to learn with conditional class dependencies," in ICLR, 2019.
[23] S. Ravi and H. Larochelle, "Optimization as a model for few-shot learning," in ICLR, 2017.
[24] T. Munkhdalai and H. Yu, "Meta networks," in ICML, 2017.
[25] H. Li, W. Dong, X. Mei, C. Ma, F. Huang, and B. Hu, "LGM-Net: Learning to generate matching networks for few-shot learning," in ICML, 2019.
[26] O. Vinyals, C. Blundell, T. Lillicrap, K. Kavukcuoglu, and D. Wierstra, "Matching networks for one shot learning," in NIPS, 2016.
[27] B. Oreshkin, A. Lacoste, and P. Rodriguez, "TADAM: Task dependent adaptive metric for improved few-shot learning," in NIPS, 2018.
[28] Y. Zheng, R. Wang, J. Yang, L. Xue, and M. Hu, "Principal characteristic networks for few-shot learning," Visual Communication and Image Representation, vol. 59, pp. 563-573, 2019.
[29] Z. Wu, Y. Li, L. Guo, and K. Jia, "PARN: Position-aware relation networks for few-shot learning," in ICCV, 2019.
[30] R. Hou, H. Chang, B. Ma, S. Shan, and X. Chen, "Cross attention network for few-shot classification," in NIPS, 2019.
[31] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. Tenenbaum, H. Larochelle, and R. Zemel, "Meta-learning for semi-supervised few-shot classification," in ICLR, 2018.
[32] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, "The Caltech-UCSD Birds-200-2011 dataset," Tech. Rep. CNS-TR-2011-001, California Institute of Technology, 2011.
[33] L. Bertinetto, J. Henriques, P. Torr, and A. Vedaldi, "Meta-learning with differentiable closed-form solvers," in ICML, 2019.
| [] |
[
"A critical analysis of the modelling of dissipation in fission a",
"A critical analysis of the modelling of dissipation in fission a"
] | [
"B Jurado \nGSI\nPlanckstr.164291DarmstadtGermany\n",
"C Schmitt \nGSI\nPlanckstr.164291DarmstadtGermany\n",
"K.-H Schmidt \nGSI\nPlanckstr.164291DarmstadtGermany\n",
"J Benlliure \nUniv. de Santiago de Compostela\n15706 S. de CompostelaSpain\n",
"A R Junghans \nForschungszentrum Rossendorf\n510119, 01314Postfach, DresdenGermany\n"
] | [
"GSI\nPlanckstr.164291DarmstadtGermany",
"GSI\nPlanckstr.164291DarmstadtGermany",
"GSI\nPlanckstr.164291DarmstadtGermany",
"Univ. de Santiago de Compostela\n15706 S. de CompostelaSpain",
"Forschungszentrum Rossendorf\n510119, 01314Postfach, DresdenGermany"
] | [] | The time-dependent flux over the fission barrier of an excited nucleus under the influence of dissipation is investigated. Characteristic features of the evolution of the amplitude of the probability distribution and the velocity profile at the fission barrier are derived. Analytical results are compared to numerical Langevin calculations and used to develop a new analytical approximation to the solution of the Fokker-Planck equation for the time-dependent fission-decay width. This approximation is shown to be more realistic than previously proposed descriptions, which were widely used in the past.. | 10.1016/j.nuclphysa.2004.09.123 | [
"https://arxiv.org/pdf/nucl-ex/0302003v2.pdf"
] | 17,943,992 | nucl-ex/0302003 | 3ec4c5b9222127e68a3ce94518eca8ca4ff16373 |
A critical analysis of the modelling of dissipation in fission a
B Jurado
GSI
Planckstr.164291DarmstadtGermany
C Schmitt
GSI
Planckstr.164291DarmstadtGermany
K.-H Schmidt
GSI
Planckstr.164291DarmstadtGermany
J Benlliure
Univ. de Santiago de Compostela
15706 S. de CompostelaSpain
A R Junghans
Forschungszentrum Rossendorf
510119, 01314Postfach, DresdenGermany
A critical analysis of the modelling of dissipation in fission a
Keywords: Nuclear fission; dissipation effects; time-dependent fission-decay width; Langevin equation; Fokker-Planck equation; analytical approximation
The time-dependent flux over the fission barrier of an excited nucleus under the influence of dissipation is investigated. Characteristic features of the evolution of the amplitude of the probability distribution and the velocity profile at the fission barrier are derived. Analytical results are compared to numerical Langevin calculations and used to develop a new analytical approximation to the solution of the Fokker-Planck equation for the time-dependent fission-decay width. This approximation is shown to be more realistic than previously proposed descriptions, which were widely used in the past.
Introduction
One of the topical features of nuclear dynamics during the last decades is the role of dissipation in such processes as deep-inelastic heavy-ion collisions [1], the damping of giant resonances [2] and induced fission [3] - to learn whether collective motion in nuclei is under-damped like in water or over-damped like in honey droplets. In spite of intensive experimental and theoretical work, most conclusions on the dissipation strength are not well established. The situation also remains unclear concerning the deformation and temperature dependence of nuclear friction. From the theoretical point of view, different models have been developed, e.g. the one-body dissipation concept [4] based on the wall-and-window formula, the two-body viscosity model [5] or quantum transport theories [6,7,8]. They yield different results for the magnitude as well as for the dependence on temperature and deformation of the dissipation strength.
Although the role of nuclear dissipation in fission was recognized long ago, fission continues to be one of the most promising tools for deducing quantitative conclusions. More than 60 years ago, Bohr and Wheeler [9] proposed a derivation of the fission decay width under the assumption of thermal equilibrium. Shortly afterwards, Kramers pointed out that the coupling of the fission degree of freedom to the intrinsic degrees of freedom reduces the stationary flux over the barrier with respect to its transition-state value. The success of the transition-state model of Bohr and Wheeler prevented this idea of Kramers from establishing itself. Approximately forty years later, experimentally observed high pre-scission neutron multiplicities [11,12] gave the impetus to Grangé et al. [13] to theoretically investigate the influence of dissipation on the fission time scale. Their numerical solution of the time-dependent FPE shows that it takes some time, the so-called transient time τ_trans, until the current over the saddle point reaches its stationary value. Grangé et al. [13] pointed out that this transient time τ_trans, but also an additional saddle-to-scission time τ_ss, leads to an increase of pre-scission particle multiplicities. The transient time originates from the time needed by the probability distribution W(x, v; t) of the particle (deformation x and conjugate momentum p = µ·v, where v is the velocity of the system and µ its inertia) to spread out in deformation space. From their numerical calculations, the authors of ref. [13] extracted the following approximation for the transient time, defined as the time until the fission width Γ_f(t) reaches 90% of its asymptotic value:

τ_trans ≈ (β / (2ω_g²)) · ln(10 B_f / T) ,    (1)

where B_f is the fission-barrier height, T is the nuclear temperature, ω_g is the effective oscillator frequency at the ground state, and β is the reduced dissipation coefficient which measures the relative rate with which the excitation energy is transferred between the collective and intrinsic degrees of freedom.
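As a rough numerical illustration of the reconstructed equation (1) (our own sketch with illustrative parameter values), the transient time for typical parameters is of the order of a few 10^-21 s:

import math

HBAR = 6.582e-22  # MeV * s

def transient_time(beta, omega_g, b_f, temperature):
    """Transient time of eq. (1) for the over-damped regime.

    beta and omega_g are given in 1/s; b_f and temperature in MeV.
    """
    return beta / (2.0 * omega_g**2) * math.log(10.0 * b_f / temperature)

# Illustrative values: beta = 5e21 1/s, h_bar*omega_g = 1 MeV, B_f = 6 MeV, T = 3 MeV
omega_g = 1.0 / HBAR                # frequency corresponding to h_bar*omega_g = 1 MeV
print(transient_time(5e21, omega_g, 6.0, 3.0))  # ~ 3e-21 s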
Later, full dynamical calculations were performed with a stochastic approach based on the multidimensional time-dependent FPE [14] or the Langevin equation [15,16,17,18] with allowance for evaporation during the dynamical evolution of the system. These are two equivalent methods, corresponding to the integral and differential formulation, respectively, of the same process. Whereas the solution of the FPE leads to the probability distribution W(x, v; t) of the particle as a function of time, the Langevin approach consists of following the trajectory of every individual nucleus all along its path to fission. Because the Fokker-Planck approach gives direct access to the probability distribution W(x, v; t), and consequently to the evolution of the fission decay width Γ_f(t) with time (see equation (9)), it corresponds to a more transparent way to get information on transient effects. However, for realistic physical cases the FPE can only be solved numerically. The same information can be extracted from a Langevin treatment as well, but only after averaging over a large number of trajectories. The possibility of the Langevin method to follow individual trajectories may explain why it has often been preferred to the Fokker-Planck approach in recent years.
Another procedure widely used to study fission dynamics consists of introducing a time-dependent fission decay width Γ_f(t) in an evaporation code [19,20,21,22,23,24]. In section 3 we will point out that such a treatment is exactly equivalent to solving the above-mentioned equation of motion with allowance for evaporation, under the condition that the used time-dependent width Γ_f(t) is obtained from the Fokker-Planck or Langevin solution at each evaporation step. Thus, it takes into account the changes in mass, charge, excitation energy and angular momentum of the decaying system. Unfortunately, following a large number of trajectories or solving the FPE numerically at each evaporation step needs a very high computational effort, which may exceed the technical possibilities actually available for many applications. However, in an evaporation code one could get around the high computing time needed to calculate the time-dependent fission width numerically at each evaporation step by using a suitable analytically calculable expression for Γ_f(t).
With this aim in view, the main task of the present work is to shed light on the characteristic features of transient effects in the dynamical evolution of the nuclear system. On this basis, we propose a way to model the influence of dissipation in nuclear fission in terms of a realistic analytical approximation for Γ_f(t). In the following paper [25], we investigate how transient effects manifest. There we will make use of peripheral nuclear collisions at relativistic energies as an appropriate reaction mechanism dedicated to dissipation studies and establish the requirements on relevant experimental observables that are sensitive to transient effects.
This work is also motivated by the need to incorporate realistic features of fission dynamics in complex model calculations for technical applications, e.g. the nuclide production in secondary-beam facilities, in spallation-neutron sources, in the core of an accelerator-driven system, and in shielding calculations. In these codes, the computational effort is already very high due to the necessary transport calculations in a thick-target environment, and thus the explicit solution of the equation of motion, e.g. by use of the Langevin approach, seems to be excluded.
Time evolution of the fission-decay width under the influence of dissipation
Previously used approximations of the fission-decay width
If the initial population of the nuclear system in deformation and conjugate momentum differs from equilibrium, the system is subject to relaxation effects. The influence of these effects on the time dependence of the fission-decay width has been carefully studied by Grangé, Weidenmüller and collaborators [13,26] on the basis of the numerical solution of the FPE. As an example, we show their result for the evolution of the escape rate

λ_f(t) = Γ_f(t) / ħ

of the compound nucleus 248Cm at a temperature of T = 5 MeV with a friction coefficient of β = 5·10²¹ s⁻¹ in Figure 1. The potential used is specified in Figure 2.
In the past, several approximations for the time evolution of the fission-decay width have been proposed. The two most widely used are: -a step function [27] that sets in at the transient time trans τ :
ï î ï í ì ≥ < = trans k trans f t Γ t t Γ f τ τ , , 0 ) ((2)
-an exponential in-growth function [28] defined by:
k f f Γ (t) Γ = ⋅{1-exp(-t/τ)}(3)
where : τ= trans τ /2.3 and K f Γ is the Kramers decay width: in ref. [30]. In the present work, we will present a refined version of this description (see section 2.5). All these approximations are compared to the exact solution of the FPE in Figure 1. The exponential-like in-growth function shows strongly rising values already at very early time, which is in contrast to the numerical solution of the FPE. Even if the step function is able to describe the inhibition of fission for small times, this suppression is certainly too strong, and its steep slope is too crude. The two rather elaborate approximations of ref. [26] and [30] are better suited. However, while the formulation of Bhatt et al. [26] overestimates the fission-decay rate at early times, the onset of ) (t f Γ is well described by our approximation [30] and its improved expression proposed in this work. The curves also slightly differ in their asymptotic values. In view of these important differences between the various approximations, it is legitimate to investigate what are the most relevant characteristics for a realistic description of dissipative effects in fission.
BW f K f K Γ ⋅ = Γ(4)
Features of the relaxation process
In this section, some basic features of the relaxation process towards equilibrium are carefully investigated with the help of Langevin calculations. We take the case of 248 Cm introduced by Bhatt et al. [26] as an example. The discretisation method we use in this work to solve the Langevin equation is documented in appendix A2.
( ) t b x φ = ò +∞ ∞ − ⋅ dv t v x W v b ) ; , ((7)
Let )
; ( t x b Π = ò ò + ∞ − +∞ ∞ − b x dv t v x W dx ) ; , ((8)
be the probability that the system is at deformations x < b x .
The time-dependent fission width
) (t f Γ , related to the escape rate ) (t f λ
, is then defined by:
= Π = = Γ ) ; ( ) ( ) ( ) ( t x t t t b x f f b φ λ D D ò ò ò + ∞ − ∞ + ∞ − +∞ ∞ − ⋅ b x b dv t v x W dx dv t v x W v ) ; , ( ) ; , ( D(9)
By introducing the mean velocity at the barrier ( )
t x x v b ; =
into the definition of ( )
t b x φ
, we see that the flux at the barrier can be expressed as the product of the mean velocity
( ) t x x v b ; =
and the amplitude of the probability distribution ( )
ò +∞ ∞ − = v t v x x W b d ; ,
at the barrier
x b : ( ) ( ) ( ) ( ) ( ) ( ) ò ò ò ò ∞ + ∞ − ∞ + ∞ − ∞ + ∞ − +∞ ∞ − = ⋅ = = = ⋅ = = ⋅ = v t v x x W t x x v v t v x x W v t v x x W v t v x x W v t b b b b b x b d ; , ; d ; , d ; , d ; , φ(10)
Let us investigate the time evolution of the two terms of equation (10). Figure 3 represents the velocity distribution at the saddle point ) ;
, ( t v x x W b =
as a function of time, and Figure 4 shows the variation with time of the amplitude of the probability distribution
( ) dv t v x x W b ò +∞ ∞ − = ; ,
at the barrier. Both quantities were obtained by a Langevin calculation.
As can be seen in Figure 3, the mean velocity at the barrier ( )
t x x v b ; =
gradually decreases during the onset of the fission width before reaching its asymptotic value in the stationary state. However, the variation of the amplitude of the probability distribution
( ) dv t v x x W b ò +∞ ∞ − = ; ,
is much more important. Indeed the variation of the amplitude with time extends over many orders of magnitude during the transient time as exhibited in Figure 4. Therefore, the variation of the amplitude of the probability distribution at the fission barrier represents the dominating influence on the onset of the flux, and consequently of the decay width.
For a more general understanding of the respective importance of the different contributions, we would like to compare the influences of the amplitude and of the mean velocity on the flux in the simple example of diffusion (over-damped motion) without driving force. In this limit, the FPE leads to the following reduced Smoluchowski equation [31]: The probability distribution with initial width zero in the ground state at x = 0 and initial velocity v = 0 thus evolves in the following way [32]: The amplitude W(x b ;t) of the system at the barrier is thus given by:
) t ; x ( W T x ) t ; x ( W t ÷ ÷ ø ö ç ç è ae ∂ ∂ + = ∂ ∂ βµ 2 2(11)( ) ( ) ( ) 2 2 2 / exp 2 1 ; x x x t x W σ σ π − ⋅ = (12) with t T x ⋅ = µβ σ 2 2(13)( ) ( ) ( ) ÷ ø ö ç è ae − ∝ ÷ ÷ ø ö ç ç è ae − = − ⋅ = t t Tt x t T x t x W b x b x b 1 exp 1 4 exp 1 4 2 / exp 2 1 ; 2 2 2 βµ π βµ σ σ π(14)
By applying the continuity equation (see also equation (36)), one obtains the mean velocity v at the barrier:
⟨v(x = x_b; t)⟩ = (T / (µβ)) · (x_b / σ_x²) = x_b / (2t) ∝ 1/t .    (15)
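Before continuing, a quick numerical check of equations (14) and (15) (our own sketch, in dimensionless units with T = µ = β = x_b = 1) illustrates that the amplitude varies over many orders of magnitude while the mean velocity falls only like 1/t:

import numpy as np

t = np.array([0.01, 0.05, 0.1, 0.5, 1.0])
amplitude = np.sqrt(1.0 / (4 * np.pi * t)) * np.exp(-1.0 / (4 * t))  # eq. (14)
velocity = 1.0 / (2 * t)                                             # eq. (15)
flux = amplitude * velocity

for row in zip(t, amplitude, velocity, flux):
    print("t=%.2f  W=%.3e  <v>=%.2e  flux=%.3e" % row)
# The amplitude changes by ~10 orders of magnitude; <v> varies only as 1/t.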
Thus, the flux as the product of amplitude and velocity varies like t^(−3/2) · exp(−1/t). Since for small times (1/t^n) · exp(−1/t) behaves like exp(−1/t) for all n, the amplitude governs the evolution of the flux at the beginning of the process, as illustrated in Figure 5.

Let us now consider the influence of the shape of the potential on the time dependence of the amplitude at the barrier. With this aim, Figure 4 compares the variation of the amplitude at the barrier, W(x = x_b; t), obtained numerically using the realistic potential of Figure 2, with the solution of the FPE obtained analytically for a parabolic potential. The curvature of the latter corresponds to the curvature of the realistic potential in the ground state. One observes that both curves vary over many orders of magnitude in a very similar way. This result, perhaps surprisingly, suggests the minor impact of the shape of the potential on the variation of the amplitude.
In addition to these numerical results, we would like to strengthen this conclusion by more analytical arguments. With this aim in view, we compare the result for a simple diffusion problem without driving force, obtained by solving the Smoluchowski equation (11), and the result for the parabolic potential in the over-damped region at a given deformation ∆x. In both cases, the distribution is a Gaussian function in deformation given above by equation (12). The solution for σ_x as a function of time for the first diffusion case was already given in equation (13), σ_x² = 2Tt/(µβ), while the full solution for a parabolic potential [32] corresponds to
σ_x²(t) = (T / (µω_g²)) · { 1 − exp(−βt) · [ (2β² / β₁²) · sinh²(β₁ t / 2) + (β / β₁) · sinh(β₁ t) + 1 ] }    (16)
where:
β₁ = (β² − 4ω_g²)^(1/2) .    (17)
In the over-damped regime, the physical meaning of equation (16) may be easily understood: β is large, and equation (16) can be approximated by the following equation (18), in which we have replaced β₁ ≈ β − 2ω_g²/β by β and neglected the terms exp(−βt). Actually, equation (16) is already quantitatively very similar to equation (18) for β ≥ 5·10²¹ s⁻¹. Equation (18) shows that the process of the population of the deformation space can be described by a probability distribution with the shape of a Gaussian whose second moment exponentially approaches its asymptotic value:
σ_x² ≈ (T / (µω_g²)) · { 1 − exp(−(2ω_g² / β) · t) } .    (18)
We see that equation (13) is the first-order approximation of equation (18), indicating that up to t ≈ β/(2ω_g²) the evolution of the amplitude of the probability distribution at the barrier deformation for a parabolic potential is very similar to that for the simple diffusion problem. Since the transient time τ_trans in the over-damped regime as defined by Grangé et al. [13] is approximately given by equation (1), the above-mentioned similarity extends to around half the transient time. We conclude that the spreading of the probability distribution during the time lapse in which the flux sets in is rather insensitive to the shape of the potential.
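The quality of the over-damped approximation can be checked numerically (our own sketch, based on the reconstructed equations (16)-(18) with illustrative parameters):

import numpy as np

HBAR = 6.582e-22            # MeV * s
omega_g = 1.0 / HBAR        # frequency for h_bar*omega_g = 1 MeV (illustrative)
beta = 20e21                # strongly over-damped case, 1/s
beta1 = np.sqrt(beta**2 - 4 * omega_g**2)   # eq. (17)

t = np.linspace(0.5e-21, 5e-21, 5)
# Relative width sigma_x^2 / (T / (mu * omega_g^2)) from eq. (16) and eq. (18):
full = 1 - np.exp(-beta * t) * ((2 * beta**2 / beta1**2) * np.sinh(beta1 * t / 2)**2
                                + (beta / beta1) * np.sinh(beta1 * t) + 1)
approx = 1 - np.exp(-(2 * omega_g**2 / beta) * t)
print(full / approx)  # tends to 1 once the exp(-beta*t) transients have died out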
In Figure 4, we have compared the amplitude obtained for a realistic and a parabolic potential because the FPE can be solved analytically only in the simplest cases for which the potential has a parabolic form and the transport coefficients are constant. More complicated situations, e.g. for a realistic potential, like the one displayed in Figure 2, or in the case of deformationdependent transport coefficients, have to be solved numerically. Concerning the approximation of transport coefficients that do not depend on deformation, the importance of such an assumption can be considerably reduced by the choice of an appropriate coordinate system as demonstrated in appendix A1. There it is shown that any given problem can be reformulated in an adapted coordinate system, where the inertia or friction coefficient does not vary with deformation, without changing the physics of the problem. In addition it is pointed out that theoretical one-body and two-body dissipation models predict that the variation of the dissipation strength with deformation is weak.
The initial conditions of the system concern the population in deformation and conjugate momentum as well as the temperature and angular momentum of the nucleus. Their determination relies on the description of the initial properties of the nuclear system after the reaction to be studied. Therefore, the initial population depends strongly on the entrance channel considered. While e.g. fusion reactions can lead to rather deformed compound nuclei and involve a broad range of angular momenta, very peripheral nuclear collisions at relativistic energies are characterised by small initial shape distortions and low angular momenta. The initial conditions are important for the evolution of the system, because they affect the time dependence of the fission-decay width Γ_f(t). In section 2.4 we will propose a way to take the initial conditions into account in the analytical approximation of the time-dependent fission width Γ_f(t).
New description of the time-dependent fission-decay width Γ_f(t)
In the previous section, we have collected some characteristics of the dissipation process which have a rather general validity: the predominance of the amplitude for the time evolution of the flux and the minor importance of the shape of the potential for the early evolution of the probability distribution in deformation and conjugate momentum. In ref. [30] a realistic approximation for the decay width Γ_f(t) that is based on these features has been presented. In the following, the various steps to derive this approximation will be illustrated in detail.
First of all, we will develop a slightly modified expression for the exact description of the fission-decay width, whose definition was already given above in equation (9). We start by defining the normalised probability distribution W_n(x = x_b, v, t) at the fission-barrier deformation x_b as:

W_n(x = x_b, v, t) = W(x = x_b, v, t) / [ ∫_{−∞}^{x_b} dx ∫_{−∞}^{+∞} dv W(x, v, t) ] .    (19)
Considering equation (19) and introducing the mean velocity ⟨v⟩ at the barrier deformation already used in equation (10), the fission width of equation (9) can be reformulated as follows:

Γ_f(t) = ħ · ⟨v(x = x_b; t)⟩ · ∫_{−∞}^{+∞} W_n(x = x_b, v, t) dv .    (20)
By defining the amplitude at the barrier, integrated over velocity, by

W_n(x = x_b; t) = ∫_{−∞}^{+∞} W_n(x = x_b, v, t) dv ,    (21)
the fission width can be written as

Γ_f(t) = ħ · ⟨v(x = x_b; t)⟩ · W_n(x = x_b; t) .    (22)
In the stationary case we get:

Γ_f(t → ∞) = ħ · ⟨v(x = x_b; t → ∞)⟩ · W_n(x = x_b, t → ∞) ≈ Γ_f^K ,    (23)
where we have identified the asymptotic fission width with the Kramers expression Γ_f^K from equation (4).
Finally, combining equations (22) and (23), we can reformulate the time-dependent fission width as:

Γ_f(t) ≈ [ ⟨v(x = x_b; t)⟩ · W_n(x = x_b; t) ] / [ ⟨v(x = x_b; t → ∞)⟩ · W_n(x = x_b; t → ∞) ] · Γ_f^K .    (24)

At this point, we introduce two approximations that lead to a new analytical description of the time-dependent fission width. The first approximation is to neglect the variation of the mean velocity with time. Thus, ⟨v(x = x_b; t)⟩ is replaced by its asymptotic value ⟨v(x = x_b; t → ∞)⟩ in the numerator of equation (24), leading to the following expression:
Γ_f(t) ≈ [ W_n(x = x_b; t) / W_n(x = x_b; t → ∞) ] · Γ_f^K .    (25)
This approximation is well justified by our previous investigations in section 2.2, which have shown that the variation of the mean velocity at the barrier is small compared to the variation of the amplitude, and that therefore the evolution of the amplitude ∫_{−∞}^{+∞} W(x = x_b, v; t) dv with time has a much stronger influence than the mean velocity.
A second approximation, which we used in ref. [30], consists of expressing the shape of the in-growth function at the fission barrier, which is given by the shape of W_n(x = x_b, t) in equation (25), by the analytical solution derived for a parabolic potential [32]. The validity of this second simplification is again justified by our previous investigations in section 2.2, where we have shown that the amplitudes at the barrier, W and W_par, for a realistic and a parabolic potential, respectively, evolve in a quite similar way at the beginning of the process, see Figure 4. Consequently, the following set of equations represents an analytical expression of the fission decay width that is based on realistic assumptions:
Γ_f(t) ≈ [ W_par(x = x_b, t) / W_par(x = x_b, t → ∞) ] · Γ_f^K ,    (26)
in which we implement the parabolic solution (equations (12) and (16)):

W_par(x; t) = (1 / (√(2π) σ_x)) · exp(−x² / (2σ_x²))

with

σ_x²(t) = (T / (µω_g²)) · { 1 − exp(−βt) · [ (2β² / β₁²) · sinh²(β₁ t / 2) + (β / β₁) · sinh(β₁ t) + 1 ] } .
Note that we replaced the normalised probability W_n^par by the unnormalised quantity W_par in equation (26), because in the case of the parabolic potential the probability distribution is confined, and thus ∫_{−∞}^{x_b} W_par(x; t) dx ≈ ∫_{−∞}^{+∞} W_par(x; t) dx.
Initial conditions
The analytical solution of the FPE for a parabolic potential derived in [32], which we use in equation (16), refers to specific initial conditions corresponding to a δ function in deformation and momentum. This means that the initial deformation is in the minimum of the parabola, corresponding to the nuclear ground state, and the initial momentum is zero. However, as discussed in section 2.2, a realistic description should include the initial population related to the reaction under study. Due to the uncertainty principle, the initial probability distribution W(x, v; t = 0) differs from a δ function whatever the entrance channel is. To account for this effect, we introduced in ref. [30] a time shift t₀ in equation (16):

σ_x²(t) = (T / (µω_g²)) · { 1 − exp(−β(t − t₀)) · [ (2β² / β₁²) · sinh²(β₁ (t − t₀) / 2) + (β / β₁) · sinh(β₁ (t − t₀)) + 1 ] } .    (27)
The time shift t₀ accounts for the time needed for the probability distributions, expressed by equations (12) and (16), to establish the initial distribution in deformation space. One should stress that such a procedure is not restricted to any specific initial condition. In fact, it can be applied to any reaction, under the condition that the phase space populated by the entrance channel can be assumed to be Gaussian. An alternative way to consider a finite width in the initial conditions of the analytical solution of the FPE for the parabolic potential (also restricted to Gaussian-like distributions) has been developed recently by Boilley et al. [33].
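Putting the reconstructed equations (16), (26) and (27) together, a minimal implementation of the proposed in-growth function might look as follows (our own sketch; the Kramers width is taken as an input and all parameter values are left to the user):

import numpy as np

def sigma_x2(t, beta, omega_g, temperature, mu, t0=0.0):
    """Width of the parabolic-potential solution, eq. (27) (eq. (16) for t0 = 0)."""
    tt = np.maximum(np.asarray(t, dtype=float) - t0, 0.0)
    beta1 = np.sqrt(beta**2 - 4 * omega_g**2 + 0j)   # imaginary if under-damped
    bracket = ((2 * beta**2 / beta1**2) * np.sinh(beta1 * tt / 2)**2
               + (beta / beta1) * np.sinh(beta1 * tt) + 1)
    return (temperature / (mu * omega_g**2)) * (1 - np.exp(-beta * tt) * bracket).real

def gamma_f(t, x_b, beta, omega_g, temperature, mu, gamma_kramers, t0=0.0):
    """In-growth function of eq. (26) with the shifted width of eq. (27)."""
    s2_inf = temperature / (mu * omega_g**2)
    s2 = np.clip(sigma_x2(t, beta, omega_g, temperature, mu, t0),
                 1e-12 * s2_inf, None)                 # guard against s2 = 0 at t <= t0
    w = np.exp(-x_b**2 / (2 * s2)) / np.sqrt(s2)       # amplitude at the barrier
    w_inf = np.exp(-x_b**2 / (2 * s2_inf)) / np.sqrt(s2_inf)
    return gamma_kramers * w / w_inf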
In [30] we assumed an initial distribution in deformation and momentum, which corresponds to the zero-point motion at the ground state of the nucleus. This is the narrowest distribution compatible with the uncertainty principle. This condition, related to a nucleus with a distribution in deformation corresponding to the nuclear ground state c , is approximately valid for reactions which introduce small shape distortions like peripheral collisions at relativistic energies [34]. These reactions will be investigated in the framework of dissipation studies in our following paper [25]. The time shift 0 t corresponding to the zero-point motion was given in ref. [30] by:
ï þ ï ý ü ï î ï í ì ÷ ÷ ø ö ç ç è ae − = T , T T ln t g g ω β ω β 4 2 2 1 MAX 0 D D(28)
Equation (28) accounts for both the under-damped and the over-damped cases. In the under-damped case, the deformation and the momentum coordinate saturate at about the same time. Therefore, the time shift needed for the probability distribution, expressed by equations (12) and (16), to reach the width of the zero-point motion in deformation is equal to the time that the average energy of the collective degree of freedom needs to reach the value E₀ = ħω_g/2 associated with the zero-point motion. Considering the energy transfer E_t from the heat bath of the intrinsic excitations at temperature T to the oscillation in deformation as given by the FPE, the effective time delay t₀^under is determined by:

$$E_0=T\left[1-\exp\left(-\beta\,t_0^{under}\right)\right] \qquad (29)$$

This leads to the following equation:

$$t_0^{under}=\frac{1}{\beta}\,\ln\!\left(\frac{T}{T-E_0}\right)=\frac{1}{\beta}\,\ln\!\left(\frac{2T}{2T-\hbar\omega_g}\right) \qquad (30)$$

and gives rise to equation (28) for the under-damped case, in which β is small.
In the over-damped case, it is the width in deformation which mostly determines the time-dependent behaviour of the system due to diffusion, while the velocity profile adapts very fast due to the strong coupling between intrinsic and collective degrees of freedom. Therefore, we determine the time delay t₀ by requiring the width in deformation of the solution of the FPE at t = 0 to be equal to the width of the zero-point motion

$$\sigma_{x0}^2=\frac{\hbar\omega_g}{2}\cdot\frac{1}{\mu\omega_g^2}=\frac{\hbar}{2\mu\omega_g}\,.$$

The full analytical solution obtained in [32] for the width of the probability distribution for a parabolic potential was already given by equation (16) and approximated in the over-damped region by equation (18). As a consequence of equation (18), one obtains

$$t_0^{over}=\frac{\beta}{2\omega_g^2}\,\ln\!\left(\frac{2T}{2T-\hbar\omega_g}\right)\approx\frac{\beta\hbar}{4T\omega_g}\,,$$

where t₀^over stands for the time needed to establish the zero-point-motion distribution in deformation when starting the calculation from a δ function. In practice, the temperature of the system is usually much higher than the zero-point energy (let us remind that the effective temperature which corresponds to the zero-point motion amounts to 0.5 MeV in our case). It is legitimate to consider the solution of the parabolic potential in the derivation of t₀^over, since we have shown in Figure 4 that the variation of the amplitude with time for the realistic and the parabolic potential is very similar at small times.
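Both branches of the time shift (28) are straightforward to evaluate numerically; a small sketch, assuming consistent units (T in MeV, β and ω_g in s⁻¹, ħ in MeV·s) and using the approximated over-damped branch given above:

```python
import numpy as np

HBAR_MEV_S = 6.582e-22  # hbar in MeV*s

def t0_shift(T, beta, omega_g, hbar=HBAR_MEV_S):
    """Effective time shift, equation (28): the maximum of the
    under-damped and over-damped estimates of t0."""
    t_under = np.log(2.0 * T / (2.0 * T - hbar * omega_g)) / beta
    t_over = beta * hbar / (4.0 * T * omega_g)
    return max(t_under, t_over)
```

For T = 3 MeV, ω_g = 1.83⋅10²¹ s⁻¹ and β = 2⋅10²¹ s⁻¹ this gives t₀ of the order of 10⁻²² s, i.e. much smaller than the transient time, consistent with the discussion of Figure 7 below.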
In Figure 6, we compare the standard deviation of the probability distribution for x < x_b as given by a Langevin calculation with the standard deviation σ_x of the approximate solution obtained from equation (27). In the strongly under-damped case (Figure 6a) as well as in the strongly over-damped case (Figure 6d), the initial width of the parabolic solution including the time shift (long dashed line) agrees well with the numerical result of the Langevin calculation (thick full line). But also in the intermediate range (Figure 6b and Figure 6c), the time-shifted parabolic solution of equation (27) and the Langevin calculations come very close around t = 0.5⋅10⁻²¹ s. The deviations at smaller times are not important, since the amplitude of the probability distribution at the barrier is still so low that the flux is practically zero (see Figure 7 and the discussion below).
The thin full lines in Figure 6 have been obtained assuming a δ function located at x = 0 and p = 0 as initial condition. The comparison between the thin and the thick full curves shows that the inclusion of the zero-point motion as initial condition in the Langevin equation corresponds essentially to a time shift. Furthermore, the analytical approximation shifted by the time lapse t₀ of equation (28) enables us to reproduce the width in deformation of the probability distribution corresponding to the initial conditions of the Langevin calculation, as shown by the long-dashed lines.
The influence of the introduction of t₀ on the fission-decay rate λ_f(t) is illustrated in Figure 7. There it is clearly seen that shifting the analytical expression (long dashed line) by the time lapse t₀ nicely reproduces the onset of fission obtained directly from the solution of the equation of motion (histogram) started with the probability distribution corresponding to the zero-point motion.
To finish this section, we would like to mention that elaborate studies [35] have shown that the variance of a damped quantum oscillator at the ground state is smaller than the variance found for an undamped oscillator, which we have used to derive equation (33). In particular, it was shown that this difference becomes more important in the over-damped regime with increasing damping [35,36]. Quantitatively, for β = 1.52⋅10²¹ s⁻¹ the variance of the damped quantum oscillator at the ground state is 24% smaller than the variance of the undamped quantum oscillator, and for β = 15.2⋅10²¹ s⁻¹ it is around 67% smaller. Since t₀^over is proportional to σ_{x0}², the same reduction as for the variance applies to t₀^over. However, Figure 7 shows that the transient time is about one order of magnitude larger than the effective time shift t₀^over, and therefore the smaller variance of the damped quantum oscillator has only little effect on the time dependence of the fission-decay rate.
Improved approximation for the time-dependent fission width
The analytical approximation of the time-dependent fission-decay width Γ_f(t) discussed in section 2.3 was derived neglecting the variation of the mean velocity at the barrier, $\bar v(x=x_b;t)$. In this section we propose to include this variation in an approximate way. It can be shown that in the FPE the mean velocity and the logarithmic slope of the probability distribution at a given deformation x are closely related. To demonstrate this, we will make use of the Smoluchowski equation [31]:

$$\frac{\partial W(x;t)}{\partial t}=\frac{1}{\beta\mu}\left\{\frac{\partial}{\partial x}\left[\frac{dV}{dx}\,W(x;t)\right]+T\,\frac{\partial^2 W(x;t)}{\partial x^2}\right\} \qquad (34)$$
where V stands for the nuclear potential (see the example in Figure 2). After integration, regarding that dV/dx = 0 at the barrier x_b and that both W(x;t) = 0 and ∂W(x;t)/∂x = 0 for x → −∞, we obtain the following result:

$$\frac{\partial}{\partial t}\int_{-\infty}^{x_b}W(x;t)\,dx=\frac{T}{\beta\mu}\left.\frac{\partial W(x;t)}{\partial x}\right|_{x=x_b} \qquad (35)$$
Applying the continuity equation with the definition of the mean velocity introduced in equation (10) results in:
$$\frac{\partial}{\partial t}\int_{-\infty}^{x_b}W(x;t)\,dx=-\int_{-\infty}^{+\infty}v\,W(x=x_b,v;t)\,dv=-\bar v_{x_b}\cdot W(x=x_b;t) \qquad (36)$$
Note that by the continuity equation the velocity enters explicitly into this equation, while it is only an implicit variable in the Smoluchowski equation.
By dividing the previous expression by W(x=x b ;t) and using equation (35) we finally obtain:
$$\bar v_{x_b}=-\frac{T}{\beta\mu}\,\frac{\left.\partial W(x;t)/\partial x\right|_{x_b}}{W(x=x_b;t)}=-\frac{T}{\beta\mu}\left[\frac{\partial\ln W(x;t)}{\partial x}\right]_{x_b} \qquad (37)$$
As can be seen from equation (37), the mean velocity $\bar v_{x_b}=\bar v(x=x_b;t)$ at the fission barrier is proportional to the logarithmic derivative of the amplitude at the fission barrier, $\left[\partial\ln W/\partial x\right]_{x_b}$. Therefore, the approximated analytical description of the fission-decay width Γ_f(t) we proposed in ref. [30] and derived in detail in section 2.2 can be improved by introducing the variation of the logarithmic slope of the probability distribution at the barrier.
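Equation (37) translates directly into a small numerical helper. As a sketch under stated assumptions: W is any callable returning the (positive) amplitude near the barrier, and the logarithmic slope is taken by a central finite difference with an arbitrary step dx.

```python
import numpy as np

def mean_velocity_at_barrier(W, x_b, T, beta, mu, dx=1e-4):
    """Equation (37): v_bar = -(T/(beta*mu)) * d ln W / dx at x = x_b."""
    dlnW_dx = (np.log(W(x_b + dx)) - np.log(W(x_b - dx))) / (2.0 * dx)
    return -(T / (beta * mu)) * dlnW_dx
```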
Unfortunately, we have no access to an analytical solution of the probability distribution above the realistic potential. Therefore, we use again the analytical solution of the FPE for the parabolic potential to account for the variation of the velocity profile in an approximate way. Figure 8 demonstrates with two examples that the variation in time of the logarithmic slope of the normalised amplitude at the barrier deformation is qualitatively similar for the parabolic and the realistic potential, although it quantitatively fails to reproduce the full variation. This similarity is understandable: although the parabolic potential at x = x_b is not flat, the expression for the mean velocity for the parabolic potential corresponding to equation (37) still contains a term which is proportional to the logarithmic slope of the potential. This means that one obtains an improved estimation of the time-dependent fission width when equation (26) is replaced by the following expression:

$$\Gamma_f(t)\approx\frac{W_{par}(x=x_b;t)}{W_{par}(x=x_b;t\to\infty)}\cdot\frac{\left[-\,d\ln W_{par}(x;t)/dx\right]_{x_b}}{\left[-\,d\ln W_{par}(x;t\to\infty)/dx\right]_{x_b}}\cdot\Gamma_f^K \qquad (38)$$

Another deficiency of the analytical approximation proposed in ref. [30] results from the normalisation to the Kramers stationary value Γ_f^K. Indeed, it is known that Kramers' prediction underestimates the stationary fission width for temperatures larger than the fission barrier, leading to a discrepancy between our approximation and the exact FP or Langevin result in the stationary regime for T ≥ B_f (see Figure 1). To remove this deficiency, we propose to make use of the concept of the Mean First Passage Time (MFPT) [37]. The MFPT is the mean value of the distribution of first passage times. Although it can be evaluated at any deformation point x_e, it is physically meaningful at scission only, where the negative current of trajectories is negligible. This restriction is due to the absorption at x_e in the definition of the MFPT itself. Consequently, at scission the MFPT is equivalent to the mean scission time τ_scission. Furthermore, with the help of numerical FP calculations, Grangé et al. showed that τ_scission can be expressed as the sum of three contributions: the initial delay due to transient effects, the statistical decay time τ_stat and the dynamical saddle-to-scission time τ_ss. Thus, they estimated the mean scission time by the following sum:

$$\tau_{scission}=\frac{1}{2}\,\tau_{trans}+\tau_{stat}+\tau_{ss} \qquad (39)$$
where the delay due to transient effects is approximated by half the transient time τ_trans as given by equation (1), and the saddle-to-scission time τ_ss is taken from [26]. The determination of the MFPT requires a numerical FP or Langevin calculation. However, in the over-damped regime, it can be evaluated analytically using the closed expression [37]:
$$MFPT=\frac{\beta\mu}{T}\int_0^{x_e}du\,\exp\!\left[\frac{V(u)}{T}\right]\int_{-\infty}^{u}dv\,\exp\!\left[-\frac{V(v)}{T}\right] \qquad (41)$$
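The double integral (41) can be evaluated by simple nested quadrature; the following sketch replaces the lower bound −∞ by a cut-off x_min where exp(−V/T) is negligible, and the grid size n is an arbitrary choice. The example potential is the one of Figure 2.

```python
import numpy as np

def mfpt(V, x_e, T, beta, mu, x_min=-5.0, n=2000):
    """Mean first passage time, equation (41), by nested trapezoidal
    integration up to the absorption point x_e."""
    u = np.linspace(0.0, x_e, n)
    inner = np.empty_like(u)
    for i, ui in enumerate(u):
        v = np.linspace(x_min, ui, n)
        inner[i] = np.trapz(np.exp(-V(v) / T), v)
    return (beta * mu / T) * np.trapz(np.exp(V(u) / T) * inner, u)

# Potential of Figure 2 (V in MeV, deformation x dimensionless):
V = lambda x: 8.61e-3 * (x - 3.41)**2 * (x - 23.098) * (x + 1.59) + 3.7
```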
Thus, the validity of the analytical approximation of Γ_f(t) given by equation (38) can be extended to temperatures larger than the fission barrier, for which the stationary value of Kramers cannot be used, by replacing the Kramers width Γ_f^K by the statistical fission-decay width Γ_stat. The closed expression of the MFPT, equation (41), being restricted to the over-damped region, such an analytical determination of the normalisation factor Γ_stat is not valid in the under-damped region, independently of the value of β. Thus, we investigated the behaviour of the ratio r as a function of the dissipation strength β by dividing the statistical fission-decay width Γ_stat obtained from the numerical Langevin calculation by
Kramers' prediction Γ_f^K. As can be seen in Figure 9, the value of r was found to vary little as a function of β for a given temperature. Thus, we propose the following prescription to evaluate the stationary fission-decay width: with the help of equations (42) and (43), the ratio r can be determined analytically in the over-damped regime. The statistical fission-decay width in the under-damped regime can then be calculated according to equation (43) as the product of the ratio r obtained for the over-damped regime and the corresponding Kramers width Γ_f^K for the under-damped regime. This solution allows overcoming the deficiency of Kramers' prediction for large temperatures in the cases of over-damped and under-damped systems.
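For the parabolic solution, ln W_par(x;t) = −x²/(2σ_x²(t)) up to a constant, so the logarithmic-slope ratio in equation (38) reduces to σ_x²(t→∞)/σ_x²(t). A sketch building on the sigma_x2 and gamma_f_approx helpers of the earlier code fragment:

```python
def gamma_f_improved(t, x_b, T, mu, omega_g, beta, gamma_K):
    """Equation (38) for the parabolic solution: the velocity correction
    is the ratio of the asymptotic and current widths at time t."""
    s2 = sigma_x2(t, T, mu, omega_g, beta)
    s2_inf = T / (mu * omega_g**2)
    return gamma_f_approx(t, x_b, T, mu, omega_g, beta, gamma_K) * s2_inf / s2
```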
Quantitative results for the fission-decay rate
In the previous sections we have shown that the analytical approximation of the time-dependent fission-decay width proposed in ref. [30], and improved in this work, is based on realistic assumptions. In this section, we systematically investigate its qualities and shortcomings over a wide range of dissipation strength and temperature. Figure 10 compares the fission-decay rate obtained by a Langevin code (histogram), the analytical approximation of [30] (short dashed line) and the improved expression of it given in the previous section (long dashed line). The single FP curve (full line) drawn in Figure 10 for T = 5 MeV and β = 5⋅10²¹ s⁻¹ stands only for reminding the equivalence of both Langevin and Fokker-Planck methods. For the numerical Langevin and FP calculations, the deformation-dependent potential shown in Figure 2 has been used. The zero-point motion is considered by an initial effective temperature of 0.5 MeV.
Whereas the description of Bhatt et al. [26] starts too early, as seen in Figure 1, the approximation of ref. [30] describes quite well the initial inhibition of fission. Under the condition that the system is not strongly under-damped and that the temperature is not too low, the analytical approximation reproduces the slope of the escape rate quite well. The agreement is less good in the under-damped region. However, as will be seen from the comparison with the experimental data, the corresponding range is not important. Let us note that the improved expression proposed by equation (38) leads to a slightly better agreement with the Langevin result at earliest times, and the modifications introduced by equations (41) to (43) permit to overcome the limitations of Kramers' description at temperatures larger than the fission barrier. In ref. [26] the analytical approximation was derived separately for the under-damped and the over-damped region. The authors used the Smoluchowski equation (34) as soon as β/(2ω_g) > 1, the ratio that defines the limit between the under-damped and over-damped regime. This equation, in which the influence of inertia is neglected, corresponds to the reduced FPE in the asymptotic case of strongly over-damped systems (β → ∞). In ref. [26], ω_g = 1.83⋅10²¹ s⁻¹ is used, so that β = 5⋅10²¹ s⁻¹ belongs to the over-damped region. We compare in Figure 11 the escape rate obtained by the Langevin and Smoluchowski equations for ω_g = 1.83⋅10²¹ s⁻¹ and for three values of β in the over-damped regime. There it can be seen that the sufficiently over-damped regime in which the Smoluchowski equation (34) is valid starts for values of β larger than about 10⋅10²¹ s⁻¹. This observation explains the discrepancy between the Langevin calculation and the approximation of Bhatt et al. for β = 5⋅10²¹ s⁻¹, illustrated in Figure 1. This proves that β = 5⋅10²¹ s⁻¹ does not correspond to a strongly enough over-damped system for the Smoluchowski equation to be valid, at least for the present purpose of studying transient effects, where the onset of the fission-decay width is crucial. This implies that the Smoluchowski equation is not valid in the regime of one-body dissipation, which typically corresponds to β ≈ 5⋅10²¹ s⁻¹. Although the approximation of Bhatt et al. [26] is rather well suited in the under-damped regime (see Figure 2 of ref. [26]), its deficiency to describe the earliest stages of the process for β ≈ 5⋅10²¹ s⁻¹ was the reason that motivated us to derive another analytical approximation to the exact solution of the FPE. Our approximation gives a uniform continuous formulation for both the under-damped and the over-damped region and reproduces the onset of fission resulting from the numerical calculations rather closely.
Dynamical approaches to the nuclear de-excitation process
In general, there exist three methods to model the decay of a heavy excited nucleus in a dynamical way. One method is based on the Langevin equation [15,16], another one on the FP equation [14]. In both cases, the evolution of the system is followed in small time steps, either by computing the individual trajectories or the integral probability distribution of the system, respectively. At each time step, the probability for the evaporation of particles is computed and randomly decided. The third option corresponds to a dynamical evaporation code, in which the fission decay width is obtained by the solution of the Langevin or FP equation at each step of the evaporation chain. Such a code is equivalent to the Langevin or FP treatment. Unfortunately, as already stressed in section 1, such a procedure is inconceivable in many applications due to the high computational time required. However, any analytical approximation of the time-dependent fission decay width can replace the numerical solution, without destroying the equivalence of the code with a Langevin or FP approach, under the condition that the approximation used is as close as possible to the numerical solution. This crucial condition is fulfilled by the analytical formulation we propose in the present work, as we have shown.
In view of the above-mentioned equivalence, a statistical evaporation-fission code can be transformed into a dynamical de-excitation code by introducing a time-dependent fission-decay width. When two slightly simplifying assumptions are applied, it is enough to consider the time dependence of the fission width. In this case, the evolution of the system in deformation does not enter explicitly in the code. Firstly, the deformation dependence of the particle-decay widths may be neglected. Secondly, the variation of the available intrinsic excitation energy as a function of deformation, investigated in reference [38], may be disregarded. Both approximations are not crucial in calculations which are restricted to the small deformation range from the initial state up to the saddle point. These effects could be considered by replacing the respective constant values by the values obtained by averaging over the actual deformation distribution in the corresponding time steps. Details on the implementation of the analytical approximation to the solution of the FPE developed in this work in our de-excitation code ABRABLA are given in reference [25].
Summary
The relaxation process of a nuclear system towards equilibrium leads to a delay of fission compared to the Kramers fission decay time. This feature of nuclear fission, which originates from dissipation, is automatically brought to light by solving the equation of motion in the Fokker-Planck or Langevin approach. An equivalent procedure, which in addition enables one to avoid high computational times, consists of including a realistic analytical time-dependent fission-decay width in an evaporation code. A meticulous investigation of the evolution of the probability distribution of the system in phase space all along its dynamical path permitted us to extract the main features of the relaxation process. Making use of these results, we have developed an easily calculable approximation of the time-dependent fission-decay width that is based on realistic physical assumptions. Compared to other approximations widely used in the past, our new analytical formulation has proven to reproduce rather closely the trend of the exact solution in the under- as well as in the over-damped regime. At this stage, it is more than desirable to carefully study how the description of transient effects influences the conclusions drawn on dissipation. Indeed, such an investigation may be crucial with respect to the reliability of previous works using less realistic formulations for the time dependence of the fission-decay width, namely the exponential in-growth function. The new analytical expression, which we propose in the present work, definitely represents an improvement in that direction.
Appendices
A1: Choice of the coordinate system
Describing any physical process requires recourse to some coordinate system. This is particularly important for studies of fission dynamics, which deal with the evolution of a nucleus in deformation space. In this appendix we will point out that the process can be described using a constant, coordinate-independent mass parameter (respectively mass tensor in the case of more than one dimension) without any restriction on the physics of the problem. Also, the use of a constant friction strength turns out to be close to the theoretical expectations, as will be shown below.
In our work, the Langevin as well as the Fokker-Planck equation were solved by assuming that neither nuclear inertia, nor nuclear friction depends on deformation. It is the aim of this appendix to estimate how crude this assumption is. As an example, we will consider the Langevin equation.
A1.1 Equation of motion
The dynamical evolution of a fissioning nucleus can be described by the following Langevin equation of motion for a given deformation coordinate q and its conjugate momentum p (for simplification we restrict ourselves to the one-dimensional Langevin equation, but it can easily be generalised to n dimensions):

$$\frac{dq}{dt}=\frac{p}{\mu(q)}$$
$$\frac{dp}{dt}=\frac{p^2}{2\mu^2(q)}\,\frac{d\mu(q)}{dq}-\frac{dV(q)}{dq}-\frac{\gamma(q)}{\mu(q)}\,p+\sqrt{D(q)}\,f_L(t) \qquad (A1.1)$$

where μ(q) and γ(q) correspond to the nuclear inertia and friction coefficient, respectively. The driving force −dV(q)/dq is derived from the nuclear potential V(q). The Langevin random force, last term of the right-hand side of the second part of equation (A1.1), describes the fluctuating, or Brownian, influence of the surrounding medium on the motion of the particle. In the framework of the fluctuation-dissipation theorem, the diffusion coefficient D(q) can be related to friction via the Einstein relation: D(q) = γ(q)T. Dividing equation (A1.1) by μ(q), so as to make the velocity dq/dt = p/μ(q) appear, it follows:

$$\frac{d^2q}{dt^2}=-\frac{1}{2\mu(q)}\,\frac{d\mu(q)}{dq}\left(\frac{dq}{dt}\right)^2-\frac{1}{\mu(q)}\,\frac{dV(q)}{dq}-\frac{\gamma(q)}{\mu(q)}\,\frac{dq}{dt}+\frac{1}{\mu(q)}\,\sqrt{D(q)}\,f_L(t) \qquad (A1.2)$$
Numerical calculations show that the first term on the right-hand side of equations (A1.1) and (A1.2) can be neglected. Consequently, this term will be omitted in the following.
For the simple case of an oscillator characterised by its frequency ω , equation (A1.2) turns into:
$$\frac{d^2q}{dt^2}=-\omega^2(q)\,q-\frac{\gamma(q)}{\mu(q)}\,\frac{dq}{dt}+A(t) \qquad (A1.3)$$

where the last random term of the right-hand side of (A1.2) has been replaced by A(t) (which does not depend on the velocity and is assumed to change rapidly compared to the variations of the velocity [32]). The frequency ω(q) is given by:

$$\omega^2(q)=\frac{d^2V/dq^2}{\mu(q)}=\frac{C(q)}{\mu(q)} \qquad (A1.4)$$

Remembering that the diffusion term A(t) is ultimately determined by the equilibrium fluctuations [6], it clearly appears that only the two ratios C/μ and γ/μ of the transport coefficients govern the average motion. Note that the quantities √(C/μ) and γ/μ both have the dimension of inverse time. Thus, these ratios are invariant to transformations of the coordinate system, in contrast to the transport coefficients μ(q), γ(q) and C themselves.
A1.2 Coordinate transformation
As the physics does not depend on the coordinate system, whereas nuclear inertia μ as well as nuclear friction γ do, it may be convenient to choose a system in which one of these two transport coefficients is constant. Let us assume that, starting from the coordinate system q, we are interested in a deformation space x in which the inertia coefficient is constant. The total energy E_tot = E_kin + E_pot has to be conserved when switching from one system to the other, which requires:

$$\frac{1}{2}\,\mu(q)\left(\frac{dq}{dt}\right)^2+V(q)=\frac{1}{2}\,\mu(x)\left(\frac{dx}{dt}\right)^2+V(x)=\frac{1}{2}\,\mu(q)\left(\frac{dq}{dx}\right)^2\left(\frac{dx}{dt}\right)^2+V(q[x]) \qquad (A1.5)$$
As the potential energy V at a given deformation remains the same in both coordinate systems, it follows:
$$\mu(x)=\mu(q)\left(\frac{dq}{dx}\right)^2 \qquad (A1.6)$$
Requiring that the inertia coefficient is constant and equal to μ₀ in the deformation space x, one obtains from equation (A1.6):

$$\frac{dx}{dq}=\sqrt{\frac{\mu(q)}{\mu_0}} \qquad (A1.7)$$

Starting from a given coordinate system q, the numerical solution of equation (A1.7) enables us to construct another coordinate system x in which μ is deformation-independent. After discretisation, equation (A1.7) transforms to:

$$\Delta x=\sqrt{\frac{\mu(q)}{\mu_0}}\,\Delta q \qquad (A1.8)$$
The correspondence between two initial coordinate values x₀ and q₀ as the starting point of the discretisation procedure may be arbitrarily chosen.
As the ratio β = γ/μ, which describes the damping of the system, is a physical property of the process that is invariant against a coordinate transformation, the deformation dependence of γ in the coordinate system x is given by:

$$\gamma(x)=\gamma(q)\,\frac{\mu_0}{\mu(q)} \qquad (A1.9)$$
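Equations (A1.8) and (A1.9) suggest a simple numerical construction of the constant-mass coordinate. A sketch, assuming μ(q) is available on a grid (function and variable names are ours):

```python
import numpy as np

def constant_mass_frame(q, mu_q, mu0):
    """Build x(q) with constant inertia mu0 by cumulating
    Delta x = sqrt(mu(q)/mu0) * Delta q (equation (A1.8), midpoint
    values of mu), and map gamma into the new frame via (A1.9)."""
    mu_mid = 0.5 * (mu_q[1:] + mu_q[:-1])
    x = np.concatenate(([0.0], np.cumsum(np.sqrt(mu_mid / mu0) * np.diff(q))))
    gamma_factor = mu0 / mu_q   # gamma(x) = gamma(q) * mu0 / mu(q)
    return x, gamma_factor
```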
In the local harmonic approximation, which has proven to be quite well suited to describe nuclear collective motion [6], the invariance of the frequency ω mentioned above relates the second derivative of the potential in the respective coordinate system to the corresponding mass parameter:

$$\omega(x)=\omega(q)\ \Rightarrow\ \frac{d^2V(x)/dx^2}{\mu(x)}=\frac{d^2V(q)/dq^2}{\mu(q)} \qquad (A1.10)$$

In an analogous way, one may construct a coordinate system in which the friction coefficient γ becomes a constant; however, it is in general not possible to obtain both a constant mass parameter and a constant friction coefficient at the same time by any coordinate transformation.
A1.3 Coordinate system introduced by Grangé et al.
In our work, the Langevin calculations were performed using the cubic potential shape proposed by Grangé et al. in ref. [13], displayed in Figure 2, and assuming that the transport coefficients do not vary with deformation. The nuclear inertia is taken equal to the reduced mass A/4, and the dissipation strength β = γ/μ is an adjustable constant input parameter. In the present section, we check with the help of independent calculations that the deformation-parameterised potential V(x) proposed in [13] is not unrealistic and, moreover, that it is quite well adapted to a coordinate system in which the inertia parameter is constant.
Let us consider the collective deformation parameter q introduced in ref. [39], based on the nuclear-shape parameterisation of Trentalange et al. [40]. In ref. [41] Pomorski et al. studied the dynamical evolution of a fissioning nucleus on the Liquid Drop Model (LDM) potential landscape by solving the Langevin equation of motion for the collective deformation parameter q. This model has proven quite successful, since the agreement between predicted and measured neutron prescission multiplicities is rather good, and this over a wide range of fissioning nuclear masses [17]. Furthermore, Pomorski et al. [41] take into account a deformation-dependent inertia calculated in the framework of the incompressible Werner-Wheeler fluid approach [5], as well as a deformation-dependent friction coefficient determined by the wall-and-window formula [4]. With the help of this model, we determined the mean fission path of a 248Cm compound nucleus parameterised as a function of q. We chose an excitation energy of 165 MeV, so that the model of [41] leads to a fission-barrier height of about 3.8 MeV, which is close to the height obtained with the cubic potential of Grangé et al. [13]. This calculation permitted us to evaluate the deformation-dependent inertia M(q) and friction γ(q) along the mean symmetric fission path. On the basis of this result, we performed a coordinate transformation using the procedure described above by equations (A1.8) and (A1.9), requiring a constant mass equal to the reduced mass A/4. That enables us to define the potential V as well as the friction coefficient γ in a new coordinate system, which we call x. The potential V(x) resulting from this procedure was found to be very similar to the potential introduced by Grangé et al. [13], shown in Figure 2. Approximating the two potential landscapes V(q) and V(x) by a parabola around their respective minimum and maximum, we evaluated and compared the frequencies ω(q) and ω(x), both in the ground state and at the barrier. We obtained that the frequencies in both coordinate systems differ by about 3-4% only.
This brief study allows us to conclude that the potential as parameterised by Grangé et al. [13], which we widely used in our work, is consistent with a coordinate system in which the mass parameter is constant.
A1.4 Deformation dependence of the damping strength
Although the result of the previous section can justify our approximation of a constant mass, it does not have any consequence on the additional approximation we made concerning the deformation independence of the friction parameter γ. Indeed, in our calculation neither β = γ/μ nor μ depends on deformation, and thus the friction coefficient γ is deformation-independent as well. We would like to investigate how crude this assumption is expected to be.
As we have already stressed in the discussion of equation (A1.3), what defines the physical process is neither μ nor γ but the ratio β between both, which describes the damping of the system. In ref. [16], Fröbrich et al. studied the variation of the dissipation strength β as a function of half the distance between the centres of mass of the emerging fission fragments for several fissioning nuclei and two different friction models, one based on the one-body wall-and-window formula and the other on the two-body viscosity theory. They showed that the dissipation strength β does not vary drastically with deformation, whatever friction approach one considers. Consequently, while our deformation-independent mass does not introduce any restriction, our single approximation of a constant friction is not crucial.
A2: Details of the Langevin calculations
The Langevin calculations performed in this work are based on numerically solving equation (A1.1). This is done using discretised equations of the type sketched below; the variables and parameters are defined in Table A2.1. For details see e.g. the review of Fröbrich and Gontchar [16]. The reduced friction coefficient β and the mass parameter μ were assumed not to vary with deformation. Following ref. [26], the value of the mass parameter was set to the reduced mass A/4.
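The discretised equations themselves are not reproduced here; the following is a minimal Euler-type sketch of how equation (A1.1) with constant μ and β is typically integrated, using the Gaussian random number Γ with variance σ² = 2 from Table A2.1 (function and variable names are ours, not from the original code):

```python
import numpy as np

def langevin_trajectory(dVdx, mu, beta, T, dt, n_steps, x0=0.0, p0=0.0, rng=None):
    """One Langevin trajectory with constant inertia mu and reduced
    friction beta.  The stochastic kick sqrt(mu*beta*T*dt)*Gamma with
    Gamma ~ N(0, sigma^2 = 2) has variance 2*mu*beta*T*dt, consistent
    with the Einstein relation D = gamma*T = mu*beta*T."""
    rng = np.random.default_rng() if rng is None else rng
    x, p = x0, p0
    traj = np.empty((n_steps, 2))
    for i in range(n_steps):
        gamma_rnd = rng.normal(0.0, np.sqrt(2.0))
        p += (-dVdx(x) - beta * p) * dt + np.sqrt(mu * beta * T * dt) * gamma_rnd
        x += (p / mu) * dt
        traj[i] = (x, p)
    return traj
```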
Figure 1: The solution of the Fokker-Planck equation (full line).
Figure 3: Two-dimensional contour plot of the velocity distribution at the barrier, W(x=x_b, v; t), as a function of time. Contour lines mark heights separated by a factor of two. The result has been obtained from a numerical calculation using the Langevin approach with the parameters T = 3 MeV and β = 1⋅10²¹ s⁻¹ (upper figure) and T = 3 MeV and β = 5⋅10²¹ s⁻¹ (lower figure). The calculation starts with the distribution in deformation x and conjugate momentum p = μv given by the zero-point motion in the ground state. The dashed line represents the mean velocity at the barrier as a function of time.
Figure 4: Amplitude of the probability distribution at the barrier deformation x_b from the Langevin calculation (full line) above the potential of Figure 2 with the parameters T = 3 MeV and β = 1⋅10²¹ s⁻¹ (upper part), respectively β = 5⋅10²¹ s⁻¹ (lower part), compared with the amplitude at the barrier deformation x_b above the parabola adapted to the curvature of the potential in the ground state (dashed line). All functions are normalised to the value at t = 5⋅10⁻²¹ s.
Figure 5: Schematic comparison of the influence of amplitude and velocity on the flux at the barrier in a simple diffusion problem. See text for details.
Figure 6: Standard deviation σ_x of the probability distribution from a Langevin calculation inside the fission barrier of the potential given in Figure 2 (full lines), compared to the standard deviation of the analytical solution of the FPE for the parabolic potential with (long-dashed line) and without (short-dashed line) the effective time shift t₀ given by equation (28). In all cases we have considered the nucleus 248Cm and T = 3 MeV. Two curves are given for the Langevin calculation: the thin full curve starts from x = 0 and p = 0; the thick full curve has been obtained by starting the trajectories with a sample from Gaussian distributions in deformation and momentum, corresponding to the zero-point motion in the parabolic potential adapted to the curvature of the potential of Figure 2 at the ground state. The comparison is given for four examples, from a strongly under-damped (β = 0.1⋅10²¹ s⁻¹) to a strongly over-damped case (β = 20⋅10²¹ s⁻¹).
Figure 7: Time-dependent fission-decay rate compared to the analytical approximation of [30] including (long dashed line, see equation (27)) or not (short dashed line, see equation (17)) the time shift t₀ defined by equation (28), for the nucleus 248Cm with β = 2⋅10²¹ s⁻¹ at T = 3 MeV. Note that shifting the analytical approximation in time permits to describe the onset of the process as done by the numerical solution of the equation of motion.
Figure 8: The negative logarithmic slope of the probability distribution at the barrier position x_b (upper and lower part). The full line shows the numerical result of the Langevin approach with a realistic potential, while the dashed line corresponds to the result obtained with the analytical approximation for the parabolic potential given by equation (27).
Figure 9: Ratio r between the statistical fission-decay width Γ_stat^num as extracted from a numerical Langevin calculation and Kramers' prediction Γ_f^K as a function of the dissipation strength β. The calculations were performed for a 248Cm fissioning nucleus and a fission barrier of 3.7 MeV. The results are shown for three different temperatures, T = 2 MeV (circles), T = 4 MeV (triangles) and T = 5 MeV (squares).
Figure 10: Systematic comparison of different approaches to the time-dependent fission-decay rate for different values of temperature T and reduced friction coefficient β: Langevin approach (histogram), analytical approximation of [30] (short dashed line), improved expression given in this work (long dashed line). For T = 5 MeV and β = 5⋅10²¹ s⁻¹, the solution of the FPE (full line) is shown in addition.
Figure 11: Time-dependent fission-decay rate as obtained from the resolution of the Smoluchowski equation (dash-dotted histograms) compared to the result of the Langevin equation (full histograms), calculated with different values of β for the nucleus 248Cm at T = 5 MeV and a fission barrier of 3.7 MeV.
A more elaborate formulation of Γ_f(t) was derived by Grangé et al. [13] and improved by Bhatt et al. [26] a few years later. Using the Gaussian approximation, they derived an analytical solution of the FPE in the under-damped regime and an analytical solution of the Smoluchowski equation in the over-damped case. Recently, we have proposed another expression for Γ_f(t).

Figure 2: Presentation of the potential in fission direction of 248Cm as given by Bhatt et al. [26]: V = 8.61⋅10⁻³ (x−3.41)²⋅[(x−23.098)(x+1.59)] + 3.7. The fission-barrier height is 3.7 MeV.
Table A2.1: Variables and parameters used in the Langevin calculations.
β: reduced friction parameter, 10²¹ s⁻¹
Δt: time step, 0.01⋅10⁻²¹ s
Γ: random variable, Gaussian distribution with variance σ² = 2
c) A recent publication [Phys. Lett. B 567 (2003) 189] criticises the application of an effective time shift, which was proposed in our previous publication [30] for introducing the initial condition of the zero-point motion. The criticism is based on a misinterpretation of t₀ as "a kind of relaxation time to the equilibrium of the oscillator, as represented by the ground state". We stress here that this time shift t₀ does not account for the relaxation towards equilibrium but represents a means to model the distribution corresponding to the zero-point-motion quantum state.
AcknowledgementWe acknowledge valuable discussions with Hans Feldmeier, Anatoly V. Ignatyuk, and David Boilley. This work has been supported by the European Union in the frame of the HINDAS project under contract FIKW-CT-2000-0031 and by the Spanish MCyT under contract FPA2002-04181-C04-01. One of us (C. S.) is thankful for the financing of a one-year stay at GSI by a Humboldt fellowship. The work profited from a collaboration meeting on "Fission at finite thermal excitations" in April 2002, sponsored by the ECT* ("STATE" contract).
References
[1] H. Feldmeier, Rep. Prog. Phys. 50 (1987) 915.
[2] J. R. Nix, A. J. Sierk, in Proceedings of the International School-Seminar on Heavy Ion Physics, Dubna, USSR, 1986, edited by M. I. Zarubina, E. V. Ivashkevich (JINR, Dubna, 1987) 453; in Proceedings of the South Adriatic Conference on Nuclear Physics: Frontiers of Heavy Ion Physics, Dubrovnik, Yugoslavia, 1987, edited by N. Cindro, R. Caplar, W. Greiner (World Scientific, Singapore, 1990) 333.
[3] D. Hilscher, H. Rossner, Ann. Phys. (Paris) 17 (1992) 471.
[4] J. Blocki, J. Randrup, W. J. Swiatecki, C. F. Tsang, Ann. Phys. 105 (1977) 427.
[5] K. T. R. Davies, A. J. Sierk, J. R. Nix, Phys. Rev. C 13 (1976) 2385.
[6] H. Hofmann, Phys. Rep. 284 (1997) 137.
[7] H. Hofmann, in Proceedings of the RIKEN Symposium on "Dynamics in Hot Nuclei", Tokyo (1998).
[8] H. Hofmann, F. A. Ivanyuk, C. Rummel, S. Yamaji, Phys. Rev. C 64 (2001) 054316.
[9] N. Bohr, J. A. Wheeler, Phys. Rev. 56 (1939) 426.
[10] H. A. Kramers, Physica VII 4 (1940) 284.
[11] A. Gavron, J. R. Beene, R. L. Ferguson, F. E. Obenshain, F. Plasil, G. R. Young, G. A. Petit, M. Jaaskelainen, D. G. Sarantites, C. F. Maguire, Phys. Rev. Lett. 47 (1981) 1255; Erratum: Phys. Rev. Lett. 48 (1982) 835.
[12] D. Hilscher, E. Holub, U. Jahnke, H. Orf, H. Rossner, in Proc. of the 3rd Adriatic Europhysics Conference on the Dynamics of Heavy-Ion Collisions, Hvar, Croatia, Yugoslavia, May 25-30 (1981) 225.
[13] P. Grangé, L. Jun-Qing, H. A. Weidenmüller, Phys. Rev. C 27 (1983) 2063.
[14] E. Strumberger, K. Dietrich, K. Pomorski, Nucl. Phys. A 529 (1991) 522.
[15] Y. Abe, S. Ayik, P.-G. Reinhard, E. Suraud, Phys. Rep. 275 (1996) 49.
[16] P. Fröbrich, I. I. Gontchar, Phys. Rep. 292 (1998) 131.
[17] K. Pomorski, B. Nerlo-Pomorska, A. Surowiec, M. Kowal, J. Bartel, K. Dietrich, J. Richert, C. Schmitt, B. Benoit, E. de Goes Brennand, L. Donadille, C. Badimon, Nucl. Phys. A 679 (2000) 25.
[18] P. N. Nadtochy, G. D. Adeev, A. V. Karpov, Phys. Rev. C 65 (2002) 064615.
[19] J. Benlliure, P. Armbruster, M. Bernas, A. Boudard, T. Enqvist, R. Legrain, S. Leray, F. Rejmund, K.-H. Schmidt, C. Stéphan, L. Tassan-Got, C. Volant, Nucl. Phys. A 700 (2002) 469.
[20] G. van 't Hof, J. C. S. Bacelar, I. Diószegi, M. N. Harakeh, W. H. A. Hesselink, N. Kalantar-Nayestanaki, A. Kugler, H. van der Ploeg, A. J. M. Plompen, J. P. S. van Schagen, Nucl. Phys. A 638 (1998) 613.
[21] V. A. Rubchenya, A. V. Kuznetsov, W. H. Trzaska, D. N. Vakhtin, A. A. Alexandrov, I. D. Alkhazov, J. Äystö, S. V. Khlebnikov, V. G. Lyapin, O. I. Osetrov, Yu. E. Penionzhkevich, Yu. V. Pyatkov, G. P. Tiourin, Phys. Rev. C 58 (1998) 1587.
[22] B. B. Back, D. J. Blumenthal, C. N. Davids, D. J. Henderson, R. Hermann, D. J. Hofman, C. L. Jiang, H. T. Penttilä, A. H. Wuosmaa, Phys. Rev. C 60 (1999) 044602.
[23] I. Diószegi, N. P. Shaw, I. Mazumdar, A. Hatzikoutelis, P. Paul, Phys. Rev. C 61 (2000) 024613.
[24] N. P. Shaw, I. Diószegi, I. Mazumdar, A. Buda, C. R. Morton, J. Velkovska, J. R. Beene, D. W. Stracener, R. L. Varner, M. Thoennessen, P. Paul, Phys. Rev. C 61 (2000) 044612.
[25] B. Jurado, C. Schmitt, K.-H. Schmidt, J. Benlliure, A. R. Junghans, submitted ("Manifestation of transient effects in fission induced by relativistic heavy-ion collisions").
[26] K.-H. Bhatt, P. Grangé, B. Hiller, Phys. Rev. C 33 (1986) 954.
[27] E. M. Rastopchin, S. I. Mul'gin, Yu. B. Ostapenko, V. V. Pashkevich, M. I. Svirin, G. N. Smirenkin, Yad. Fiz. 53 (1991) 1200 [Sov. J. Nucl. Phys. 53 (1991) 741].
[28] R. Butsch, D. J. Hofman, C. P. Montoya, P. Paul, M. Thoennessen, Phys. Rev. C 44 (1991) 1515.
[29] L. G. Moretto, in Proc. Third IAEA Symp. on the Physics and Chemistry of Fission, Rochester, NY, 13-17 August 1973, vol. 1 (IAEA, Vienna, 1974) p. 329.
[30] B. Jurado, K.-H. Schmidt, J. Benlliure, Phys. Lett. B 553 (2003) 186 (arXiv:nucl-ex/0212020).
[31] P. Fröbrich, R. Lipperheide, Theory of Nuclear Reactions, Oxford Studies in Nuclear Physics 18 (1996).
[32] S. Chandrasekhar, Rev. Mod. Phys. 15 (1943) 1.
[33] D. Boilley, Y. Abe, J. D. Bao, Eur. Phys. J. A 18 (2003) 627.
[34] J. D. Bowman, W. J. Swiatecki, C. E. Tsang, Report LBL-2908 (1973).
[35] U. Weiss, Quantum Dissipative Systems, World Scientific, Singapore, 1993, ISBN 981-02-0754-9.
[36] J. Ankerhold, P. Pechukas, H. Grabert, Phys. Rev. Lett. 87 (2001) 086802.
[37] P. Hänggi, P. Talkner, M. Borkovec, Rev. Mod. Phys. 62 (1990) 251.
[38] G. Chaudhuri, S. Pal, Eur. Phys. J. A 14 (2002) 287.
[39] J. Bartel, K. Mahboud, J. Richert, K. Pomorski, Z. Phys. A 354 (1996) 59.
[40] S. Trentalange, S. E. Koonin, A. Sierk, Phys. Rev. C 22 (1980) 1159.
[41] K. Pomorski, J. Bartel, J. Richert, K. Dietrich, Nucl. Phys. A 665 (2000) 87.
| [] |
[
"ION EXCHANGE IN SILICATE GLASS: MASS AVERAGE INTERDIFFUSION COEFFICIENT DETERMINATION",
"ION EXCHANGE IN SILICATE GLASS: MASS AVERAGE INTERDIFFUSION COEFFICIENT DETERMINATION"
] | [
"Guglielmo Macrelli guglielmomacrelli@hotmail.com \nIsoclima SpA -R&D Dept\nVia A.Volta 1435042Este(PDItaly\n"
] | [
"Isoclima SpA -R&D Dept\nVia A.Volta 1435042Este(PDItaly"
] | [] | The mass average interdiffusion coefficient DM is an approximated constant value of the interdiffusion coefficient which is relevant in the kinetics of ion exchange in silicate glasses. In this study it is presented a simple technique for its determination based on the weight change of a glass sample after ion exchange. The theoretical basis of the method is presented in detail together with the approximations assumed in considering the constancy of the DM . Experimental results are presented for soda-lime silicate glasses and it is demonstrated the correlation of DM with the ion exchange temperature following an Arrhenius type equation with a energy activation barrier and a preexponential factor. The determination of the mass average interdiffusion coefficient allows the estimation of the compression layer depth when a stress profile is build-up as a consequence of ion exchange. The estimated values of the compression layer depth have been compared with the ones measured by an optical technique based on differential surface refractometry (DSR). Values have been found quite compatible within the uncertainty limits indicated by the DSR method.I. | null | [
"https://export.arxiv.org/pdf/2212.13496v1.pdf"
] | 255,186,457 | 2212.13496 | b67166ecd0012adce0b92c7c77fdff5e598eaae6 |
ION EXCHANGE IN SILICATE GLASS: MASS AVERAGE INTERDIFFUSION COEFFICIENT DETERMINATION
Guglielmo Macrelli guglielmomacrelli@hotmail.com
Isoclima SpA -R&D Dept
Via A. Volta 14, 35042 Este (PD), Italy
The mass average interdiffusion coefficient D_M is an approximated constant value of the interdiffusion coefficient which is relevant in the kinetics of ion exchange in silicate glasses. In this study, a simple technique for its determination is presented, based on the weight change of a glass sample after ion exchange. The theoretical basis of the method is presented in detail, together with the approximations assumed in considering the constancy of D_M. Experimental results are presented for soda-lime silicate glasses, and it is demonstrated that D_M correlates with the ion-exchange temperature following an Arrhenius-type equation with an activation energy barrier and a pre-exponential factor. The determination of the mass average interdiffusion coefficient allows the estimation of the compression layer depth when a stress profile is built up as a consequence of ion exchange. The estimated values of the compression layer depth have been compared with the ones measured by an optical technique based on differential surface refractometry (DSR). Values have been found quite compatible within the uncertainty limits indicated by the DSR method.
I. Introduction
Ion exchange in silicate glasses is an important process, both scientifically and technologically. Apart from historical evidence of its applications 1, ion exchange has been systematically studied since the 1960s 1,2. Applications have been identified in glass strengthening 2,3,4 (chemical strengthening) and in optical waveguides 1,5. Looking for a suitable definition, we can set the following: ion exchange (IX) in silicate glasses is a non-equilibrium thermodynamic process between a glass and an ion source; it is driven by gradients of the electrochemical potentials of relatively mobile network modifiers. This means that, to activate ion exchange, we need a glass matrix with relatively mobile ions (typically monovalent alkali) and an ion source with available mobile ions. The contact of the glass matrix with the ion source generates the ion exchange as a consequence of the gradients in the electrochemical potentials of the involved ionic species. During the ion exchange process, the Si-O network of the glass may be assumed stable. Considering ion A in the ion source (IS) and ion B in the glass matrix (GM), the IX process may be schematically written as:
$$A_{IS}+B_{GM}\rightleftharpoons B_{IS}+A_{GM} \qquad (1)$$

and pictorially represented in Figure 1, where "A" is potassium (K⁺) and "B" is sodium (Na⁺). Typical assumptions for ion exchange in silicate glasses are:
(a) Mobilities of cations in the ion source are much higher than those in the glass matrix.
(b) Interface reactions between ion source and glass are so fast that the time to get an equilibrium condition at the interface is much lower than the overall contact time between ion source and glass matrix.
From the above assumptions it is clear that the rate-critical factor of the process is the interdiffusion of the involved ions in the glass matrix. It may happen that the two above conditions are not met, or not entirely met, during real technological processes; this may occur in the presence of ion-source contaminations that reduce ion mobilities or induce an interface blocking mechanism, or when the total IS/GM ion-exchange time is of the same order as the time needed to reach a surface equilibrium condition. In Figure 2 the two processes are identified: an equilibrium process at the source/glass interface and an interdiffusion process in the glass body. The main physical and chemical effects of ion exchange are, in a rational order:
(i) near-surface molar volume change, when the molar volume is different for the two exchanging ions;
(ii) near-surface molar concentration change;
(iii) near-surface refractive index change;
(iv) residual stress induction.
Both the refractive index profile and the residual stress profile are a consequence of the concentration of the invading ions in the glass matrix. The refractive index profile is of relevance in optical applications (optical waveguides) and in residual stress measurements 4,6 (optical determination of residual stress by birefringence), while the stress profile is relevant in strengthening applications. In both cases a key role is played by the concentration of the invading ions as a consequence of the IX process. In summary, to have ion exchange in silicate glass we need an alkali-containing silicate glass and the contact of the glass with an ion source of another alkali. These conditions activate the exchange of host ions for invading ions from the source (usually a molten salt bath, typically KNO3). When ion exchange is applied for strengthening applications, to generate a residual stress in the glass, temperature is managed to promote interdiffusion and limit viscous relaxation. In this study we focus on the kinetics of the interdiffusion process, which is characterised by the interdiffusion coefficient. The concept of the mass average interdiffusion coefficient D_M has been introduced by Varshneya and Milberg 7. This parameter can be easily determined by measuring the change in weight of the glass article before and after ion exchange. The change in weight is due to the difference in molar weight between the incoming ion and the outgoing one. This method, known as weight gain or weight change 2,4, is just mentioned in an international standard 8 for glass chemical strengthening, without entering into any detail about its implementation. The weight change method is reported by Bartholomew and Garfinkel 2 and by Gy 4 for the determination of the average interdiffusion coefficient and, despite its evident simplicity, there are no other systematic discussions and treatments. The theoretical basis of the weight gain method for the determination of the average interdiffusion coefficient is discussed in detail. A further development is also discussed where, under some general assumptions, the depth of the compression layer can be estimated. An experimental example is presented where the weight gain method is used and the estimated compression layer depth is compared to the one measured by the differential surface refractometry (DSR) technique. Results are critically reviewed and discussed. In Table I the list of symbols, their definitions and units is reported.
II. Theoretical part: Ion Flux Equations
The kinetic ion flux equations relevant to ion exchange are discussed by Macrelli, Mauro and Varshneya 9. The relationship between the ion fluxes J_i and the electrochemical potentials η_i is established considering the components of the electrochemical potential, namely the concentration-related chemical potential μ_i, the electrical potential φ (either externally applied or internally generated because of the different ion mobilities), and the stress σ_H generated due to the different molar volumes of the involved ion species:

$$\eta_i=\mu_i+F\varphi-V_i\,\sigma_H\,,\qquad i=A,B \qquad (2a)$$

$$J_i=-\frac{D_i\,C_i}{RT}\,\frac{\partial\eta_i}{\partial x} \qquad (2b)$$
Applying electroneutrality and Gibbs-Duhem conditions, the individual flux equations (2b) can be reduced 9 to a single Fick-type equation:

$$J_A=-C_0\,\tilde D_{AB}\left[1+\frac{\partial\ln\gamma_A}{\partial\ln\bar C_A}+\frac{1}{RT}\,\frac{\partial\left(V_A\,\sigma_H\right)}{\partial\ln\bar C_A}\right]\frac{\partial\bar C_A}{\partial x} \qquad (3)$$

In equation (3) the concentration is expressed in terms of relative concentrations:

$$C_0=C_A+C_B\,,\qquad C_0^0=C_A^0+C_B^0\,,\qquad \bar C_i=\frac{C_i}{C_0} \qquad (4)$$
From (3) the complexity of the interdiffusion coefficient appears evident; this is discussed in some detail in the literature 9. For the sake of this study, and following a large part of the literature 2,3,4,10, we take the approximated assumption of a constant value for the interdiffusion coefficient D_M. With this assumption, the ion flux equation (3) is largely simplified into a more familiar first Fick equation relating ion flux and concentration gradient:
$$J_A(x,t)=-D_M\,\frac{\partial C_A(x,t)}{\partial x} \qquad (5)$$
If the total contact time between the ion source and the glass is τ, and we indicate with Q the total exchanged molar flux (mol/cm²), we can derive some important relationships without making any additional assumption or approximation. To keep the treatment more general, let us consider the first Fick equation (5) with a space- and time-variable diffusion coefficient, and let us remove the subscripts A and M. We make use of the Boltzmann transformation:
$$z=\frac{x}{\sqrt{Dt}} \qquad (6)$$

In terms of the new variable z, the flux equation (5) becomes:

$$J(x,t)=-D\,\frac{\partial C}{\partial x}=-\sqrt{\frac{D}{t}}\,\frac{\partial C}{\partial z} \qquad (7)$$
From (7), the flux at the glass surface x=0 (z=0) is:
$$J(x=0,t)=-D_0\left.\frac{\partial C}{\partial x}\right|_{x=0}=-\sqrt{\frac{D_0}{t}}\left.\frac{\partial C}{\partial z}\right|_{z=0} \qquad (8)$$
where D₀ is the value of the diffusion coefficient at the glass surface (x = 0). The total exchanged molar flux Q is just the integral over the total contact time (total time of ion exchange) of the ion flux at the glass boundary:
$$Q=\int_0^{\tau}J(x=0,t)\,dt=-\sqrt{D_0}\left.\frac{\partial C}{\partial z}\right|_{z=0}\int_0^{\tau}\frac{dt}{\sqrt{t}}=-2\sqrt{D_0\,\tau}\left.\frac{\partial C}{\partial z}\right|_{z=0} \qquad (9)$$
Relationship (9) is very general and valid for any first Fick-type equation, even with a variable diffusion coefficient. In our case we assume that the interdiffusion coefficient is constant, and we can write the second Fick equation by setting the continuity condition:
$$\frac{\partial C_A}{\partial t}+\frac{\partial J_A}{\partial x}=0 \qquad (10)$$

which, for the flux (5), yields the diffusion equation (second Fick equation):

$$\frac{\partial C_A}{\partial t}=D_M\,\frac{\partial^2 C_A}{\partial x^2} \qquad (11)$$
To solve this equation for the concentration of ion A in the glass, we need an initial and a boundary condition. We can assume, without losing any generality, that this equation is for the excess of ion A in the glass; in this way the initial condition is C_A(x, 0) = 0. The boundary condition at the glass/source interface can be set looking at Figure 2 and assuming an instantaneous equilibrium condition C(x=0, t) = C_s for any t ≥ 0. The conditions under which this boundary condition is reasonable have been discussed by Macrelli 11 and are substantially justified when the total ion exchange time is much larger than the time to attain kinetic equilibrium at the IS/GM interface. Under the initial and boundary conditions indicated above, the solution to the diffusion equation is 1,2,3,4,10:
$$C_A(x,t) = C_s\,\mathrm{erfc}\!\left(\frac{x}{2\sqrt{D_M\,t}}\right) = C_s\,\mathrm{erfc}\!\left(\frac{z}{2}\right),\tag{12a}$$
$$\mathrm{erfc}(\eta) = 1 - \mathrm{erf}(\eta) = 1 - \frac{2}{\sqrt{\pi}}\int_0^{\eta}\exp(-\xi^2)\,d\xi.\tag{12b}$$
Taking this solution (12a), the derivative in (9) can be calculated:
$$\left.\frac{\partial C}{\partial z}\right|_{z=0} = -\,\frac{C_s}{\sqrt{\pi}}\left.\exp\!\left(-\frac{z^2}{4}\right)\right|_{z=0} = -\,\frac{C_s}{\sqrt{\pi}}.\tag{13}$$
The total exchanged molar flux (9) then becomes:
$$Q = 2\,C_s\sqrt{\frac{D_M\,\tau}{\pi}}.\tag{14}$$
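As a cross-check of the algebra, the integration in (9) with the profile (12a) can be reproduced symbolically. The sketch below (Python with sympy; a verification aid added here, not part of the original treatment) recovers (14):

```python
# Symbolic check of Eq. (14): integrate the surface flux of the erfc
# profile (12a) over the contact time tau.
import sympy as sp

x, t, tau, DM, Cs = sp.symbols('x t tau D_M C_s', positive=True)

C = Cs * sp.erfc(x / (2 * sp.sqrt(DM * t)))   # concentration profile, Eq. (12a)
J = -DM * sp.diff(C, x)                       # first Fick equation, Eq. (5)
J0 = sp.limit(J, x, 0)                        # flux at the glass surface, Eq. (8)
Q = sp.integrate(J0, (t, 0, tau))             # total exchanged flux, Eq. (9)

print(sp.simplify(Q))                         # -> 2*C_s*sqrt(D_M)*sqrt(tau)/sqrt(pi)
```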
This result (14) for the total exchanged molar flux follows from the kinetic flux equations under some approximate assumptions. On the other hand, the total exchanged molar flux is related to the weight change (weight gain) at the final time, $W(\tau)$, through a straightforward relationship:
$$Q = \frac{W(\tau)}{S_{ex}\,(M_A - M_B)}\,,\tag{15}$$
where $S_{ex}$ is the exchange surface, and $M_A$ and $M_B$ are the molecular weights of the invading ions A and of the outgoing ions B. Comparing equations (14) and (15) finally yields the expression for the average interdiffusion coefficient in terms of the weight change:
$$D_M = \frac{\pi}{4\,\tau}\left[\frac{W(\tau)}{C_s\,S_{ex}\,(M_A - M_B)}\right]^2.\tag{16}$$
The determination of $D_M$ through the measurement of $W(\tau)$ allows the reconstruction of the concentration profile of the invading ions through equation (12a). From the knowledge of the concentration profile, other important information can be deduced through additional theoretical equations. In particular, the residual stress (equi-biaxial stress) introduced in the glass is related to the residual concentration of the invading ions through equations that, in the simplest approximate case, do not take into account the stress relaxation of the glass matrix 1,2,3,10:
$$\sigma(x,t) = -\,\frac{B\,E}{1-\nu}\left[\,C_A(x,t) - \langle C_A(t)\rangle\,\right],\tag{17}$$
where E is the Young modulus of the glass, $\nu$ is the glass Poisson ratio, B is the so-called linear network dilatation coefficient (also named Cooper coefficient) 3,10, and $\langle C_A(t)\rangle$ is the average (over x) of the concentration:
$$\langle C_A(t)\rangle = \frac{1}{d}\int_0^{d} C_A(x,t)\,dx\,,\tag{18}$$
and d is the glass thickness. From equation (17) we can argue that, as long as the invading ion concentration is higher than the average value, we have a stress of compression (negative), while, where the concentration is lower than the average value, the stress becomes positive, indicating a tensile stress. The coordinate $x_c$ at which the stress is zero corresponds to the so-called compression layer depth or case depth. The condition:
$$C_A(x_c,\tau) = \langle C_A(\tau)\rangle\,,\tag{19}$$
sets the zero value of the residual stress, and the solution of equation (19) for $x_c$ provides an estimate of the compression layer depth. According to the concentration profile (12a), equation (19) reads:
$$C_s\,\mathrm{erfc}\!\left(\frac{x_c}{2\sqrt{D_M\,\tau}}\right) = \langle C_A(\tau)\rangle.\tag{20}$$
The average concentration of the invading ions can be evaluated from the change in weight:
$$\langle C_A(\tau)\rangle = \frac{W(\tau)}{V\,(M_A - M_B)}\,,\tag{21}$$
where V is the glass sample volume. Another possible approach is to calculate the average concentration directly from equations (18) and (12a):
$$\langle C_A(\tau)\rangle = \frac{2\,C_s}{d}\int_0^{d}\mathrm{erfc}\!\left(\frac{x}{2\sqrt{D_M\,\tau}}\right)dx \simeq \frac{4\,C_s}{d}\sqrt{\frac{D_M\,\tau}{\pi}}.\tag{22}$$
This last approach is acceptable when the glass thickness d is negligible in comparison with the sample dimensions (ion exchange at the glass edges is neglected in (22)). In general, the approach of equation (21) is acceptable for any glass sample thickness.
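As a numerical illustration of the procedure, the sketch below (Python; a worked example assuming the nominal sample geometry of Section III, so small deviations from Table IV may remain) evaluates $\langle C_A(\tau)\rangle$ from the measured weight change via (21) and inverts (20) for the case depth of run Q1:

```python
# Case-depth estimate for run Q1: Eq. (21) for <C_A(tau)>, then Eq. (20)
# inverted with erfcinv. Nominal sample volume 6.6 x 6.6 x 0.16 cm^3 assumed.
from scipy.special import erfcinv
import numpy as np

dW, tau = 0.0079, 24 * 3600.0                 # g, s (Q1, Table IV)
V = 6.6 * 6.6 * 0.16                          # cm^3, nominal sample volume
dM = 39.0993 - 22.9898                        # g/mol, M_K - M_Na
C_avg = dW / (V * dM)                         # Eq. (21), mol/cm^3

Cs = 1.0 * 2.0 * 0.131 * 2.484 / 61.98        # Eq. (25) with r = 1, mol/cm^3
DM = 2.365e-12                                # cm^2/s (Table IV, Q1)

x_c = 2.0 * np.sqrt(DM * tau) * erfcinv(C_avg / Cs)   # Eq. (20) inverted
print(f"x_c = {x_c * 1e4:.1f} um")            # ~17 um, cf. Table IV
```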
Even though condition (19) for the case depth has been derived for the stress equation (17), which does not consider relaxation effects, a similar argument can be used for the more general stress equation with relaxation effects 12,13:
$$\sigma(x,t) = -\,\frac{B\,E}{1-\nu}\left\{\mathcal{V}(x,t)\!\left[C_A(x,t)-\langle C_A(t)\rangle\right] - \int_0^{t}\frac{\partial R(t-t')}{\partial t'}\,\mathcal{V}(x,t')\!\left[C_A(x,t')-\langle C_A(t')\rangle\right]dt'\right\},\tag{23}$$
where $\mathcal{V}(x,t)$ is the Varshneya relaxation function 12,13 and R(t) is the Maxwellian stretched relaxation function 3,4. Condition (19) for equation (23) is
$$\mathcal{V}(x_c,\tau)\,C_A(x_c,\tau) = \langle \mathcal{V}\,C_A(\tau)\rangle.\tag{24}$$
When the Varshneya function $\mathcal{V}(x,t)$ does not depend on the spatial coordinate, condition (24) reduces to condition (19). For some specific glasses, namely a particular lithium-aluminosilicate chemical composition 13, submitted to ion exchange, the residual stress profile relaxes in such a way that condition (24) is satisfied for at least two different points 13 on the x coordinate and, as the ion exchange process is prolonged, the surface compression turns tensile. In the following experimental part we use equation (16) for the determination of the mass average interdiffusion coefficient DM, and equation (21) together with equation (20) for the determination of the compression layer depth.
III. Experimental part: Interdiffusion coefficient determination
In this study we refer to a soda-lime silicate glass. The glass chemical composition has been determined by X-ray fluorescence and is reported in Table II. Samples with nominal dimensions 66 mm x 66 mm and a nominal thickness of 1.6 mm have been prepared by cutting from a larger sheet. Particular care has been taken in the cutting process to avoid chipping at the edge corners. The glass samples have been used for three ion exchange experiments (indicated as Q1, Q2 and Q3), where two samples per experiment type have been exposed to a specific ion exchange schedule (temperature and immersion time) according to Table III. The ion source is a bath of molten analytical-grade potassium nitrate (KNO3) manufactured by chemical synthesis. The experiments have been carried out in a range of temperatures between the melting point of potassium nitrate (334°C) and the upper temperature at which the decomposition of the nitrate group NO3 becomes significant (480°C). Samples have been carefully washed, cleaned and dried, then weighed with a calibrated analytical balance with a sensitivity of 0.0001 g (0.1 mg). They have been pre-heated for 15 minutes prior to immersion, at a temperature 25°C lower than the salt bath temperature. At the end of the immersion cycle (in these experiments always 24 hours), samples have been removed from the salt bath, allowed to drip for 5 minutes, then cooled down to room temperature. After washing, cleaning and inspection to detect any surface damage, they have been weighed again to determine the weight change after ion exchange. Samples have subsequently been measured by differential surface refractometry 4,6,8 using the FSM 6000 instrument manufactured by Orihara-Japan Ltd 14. The purpose of these last measurements is to determine the compression layer depth Cd (μm) and the surface compression SC (MPa). The FSM specification 14 indicates measurement uncertainties of ±20 MPa for the surface compression and ±5 μm for the compression layer depth.
The results of the weight change test, namely the determination of the mass average interdiffusion coefficient according to equation (16) and the estimate of the compression layer depth according to equation (20), are reported in Table IV. Equation (16) requires the value of the surface concentration $C_s$ of sodium ions and the exchange surface $S_{ex}$ of the sample exposed to ion exchange. The surface concentration of sodium ions is determined from the weight fraction $w(Na_2O)$ of sodium oxide, the density $\rho$ of the glass (2.484 g/cm 3), determined by the Archimedes method, and the molecular weight of sodium oxide (MW = 61.98 g/mol), according to:
$$C_s(Na) = r\,\frac{2\,w(Na_2O)\,\rho}{MW(Na_2O)}.\tag{25}$$
The exchange surface is determined from the geometry of the sample (width, length and thickness), which is a prismatic body. The difference in molecular weight of the exchanging ions can be easily calculated from the individual values of the molecular weight of potassium (MK = 39.0993 g/mol) and sodium (MNa = 22.9898 g/mol).
In equation (25) the parameter r represents the fraction of sodium ions exchanged at the glass surface: if all sodium is exchanged r = 1, while if no sodium is exchanged r = 0. In all determinations of this study it has been assumed r = 1.
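A short numerical sketch of the determination just described (Python; the nominal 66 mm x 66 mm x 1.6 mm geometry is assumed, so the result only approximates the Table IV value for run Q1):

```python
# Weight-change determination of D_M, Eq. (16), for run Q1.
import numpy as np

rho, w_Na2O, MW_Na2O, r = 2.484, 0.131, 61.98, 1.0   # g/cm^3, -, g/mol, -
Cs = r * 2.0 * w_Na2O * rho / MW_Na2O                # Eq. (25), mol/cm^3

a, b, d = 6.6, 6.6, 0.16                             # cm, nominal dimensions
Sex = 2 * a * b + 2 * d * (a + b)                    # cm^2, exchange surface
dM = 39.0993 - 22.9898                               # g/mol, M_K - M_Na

dW, tau = 0.0079, 24 * 3600.0                        # g, s (Q1, Table IV)
DM = (np.pi / (4 * tau)) * (dW / (Cs * Sex * dM))**2 # Eq. (16)
print(f"D_M = {DM:.3e} cm^2/s")                      # ~2.4e-12, cf. Table IV
```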
IV. Discussion.
From Table IV it is quite evident that the mass average interdiffusion coefficient increases with the ion exchange temperature. It is therefore reasonable to argue that the ion exchange kinetics follows mechanisms of temperature dependence similar to those of diffusion processes 3,10. The temperature dependence of the mass average interdiffusion coefficient can be assumed to follow an Arrhenius-type function with an activation energy (barrier) HD and a pre-exponential factor D0 according to 3:
$$D_M = D_0\,\exp\!\left(-\frac{H_D}{R\,T}\right).\tag{26}$$
In (26) the activation barrier is expressed on a mole basis. The pre-exponential factor is interpreted as a parameter containing information about the entropy associated with the ion exchange interdiffusion reaction 10. The experimental results of Table IV are reported in Figure 3 on a logarithmic scale together with equation (26), best fitted to the experimental results with the least squares method. The values of the activation energy barrier HD and of the pre-exponential factor D0 of the best fit are given in (27). The activation barrier energy is compatible with the value of 150 kJ/mol reported by Gy 4 for soda-lime glass. The compression layer depth estimated by the approach outlined in this study, formalized in equation (20), is compatible with the values measured by the DSR-FSM optical method: the difference (xc − Cd) in the values of Table IV ranges from −2.4 to 2.1 μm, which is more than two times smaller than the declared uncertainty (±5 μm) of the FSM data sheet 14. The weight change method proposed in this study thus provides an alternative and quite simple way to characterize ion exchanged silicate glasses. On the other hand, because of the rough assumptions adopted in this study, the complexity of the interdiffusion coefficient, in its dependence on concentration, stress and mutual interactions among the exchanging ions and with the silicate network 9, is completely neglected. It is important to emphasize the conceptual difference between the diffusion layer depth, indicated in Figure 1 by the symbol xe, and the compression layer depth, indicated either as xc (Figure 1) or Cd. The first represents the characteristic depth of ion penetration in the glass matrix; it can be associated to the concept of diffusion length 15,16 or conventionally defined by a suitably agreed low value of the relative concentration, as proposed by Gy 4 (c(xe) = 0.005). The second represents the depth of the layer from the glass surface to the point where the compressive stress (see for example equation (17)) is zero ($\sigma(x_c,\tau) = 0$).
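The least-squares Arrhenius fit can be reproduced from the three (T, DM) pairs of Table IV alone; the following sketch (Python) is an illustrative re-fit whose output approximates, but does not replace, the best-fit parameters (27):

```python
# Least-squares fit of the Arrhenius law, Eq. (26), to the Table IV data.
import numpy as np

R = 8.314                                         # J/(mol K)
T = np.array([425.0, 450.0, 475.0]) + 273.15      # K
DM = np.array([2.365e-12, 7.319e-12, 1.589e-11])  # cm^2/s

# ln D_M = ln D_0 - H_D / (R T): linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(DM), 1)
H_D, D0 = -slope * R, np.exp(intercept)
print(f"H_D = {H_D / 1e3:.0f} kJ/mol, D0 = {D0:.2e} cm^2/s")
```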
V. Conclusion
The weight change or weight gain measurement method outlined in this study is a quick and suitable approach to estimate the mass average interdiffusion coefficient DM and its dependence on the ion exchange temperature. As a consequence of this determination, the compression layer depth xc can be estimated as well. The simplicity of this method rests on rough approximations; nevertheless it provides relevant information for industrial ion exchange process control and a guideline towards more sophisticated approaches based on techniques for chemical depth profiling (secondary ion mass spectroscopy, SIMS; electron probe micro-analysis, EPMA; or X-ray photoelectron spectroscopy, XPS) and on the Boltzmann-Matano technique 16, suitably used when the concentration dependence of the interdiffusion coefficient is considered.
Figure 1 - Representation of the IX process. On the right side the layers of ion invasion are indicated, together with the concepts of diffusion layer depth xe and compression layer depth, also named case depth, xc.
Figure 2 - Processes involved in ion exchange: surface equilibrium and interdiffusion in the bulk of the material.
Figure 3 - Mass average interdiffusion coefficient DM as a function of temperature; red boxes are the results of this study (Table IV), the dashed curve is the Arrhenius equation (26) with the best fit parameters (27), correlation factor R² = 0.9924.
TABLES

Table I - List of main symbols and units

Symbol | Description | Units
MA | Molecular weight of ion A | g/mol
x, y, z | Spatial coordinates | m
t | Time | s
CA(x,t) | Molar concentration of ion A | mol/cm³
JA | Molar flux of ion A | mol/(cm² s)
QAB | Exchanged molar flux | mol/cm²
Sex | Sample surface through which we have ion exchange | cm²
ΔMK | Molecular weight difference between ion species | g/mol
DM | Mass average interdiffusion coefficient | cm²/s
T | Absolute temperature | K
R | Gas constant | J/(mol K)
F | Faraday constant | C/mol
η | Electrochemical potential | J/mol
μ | Chemical potential | J/mol
uA | Mobility of ion A | cm²/(J s)
γ | Activity coefficient | dimensionless
xc, Cd | Depth of compression layer, case depth | μm
Sc | Surface compression | MPa
Crel | Relative concentration | dimensionless
HD | Activation energy (barrier) | J/mol
D0 | Pre-exponential factor | cm²/s
Table II - Chemical composition, expressed in weight fraction W and mole fraction χ (determined by X-ray fluorescence), of the silicate glass of this study (only main oxides are reported; minor oxides, whose overall amount is less than 0.1%, are not reported).

Oxide | W - Weight (%) | χ - Mol (%)
SiO2 | 72.4 | 71.09
Al2O3 | 0.55 | 0.32
Na2O | 13.1 | 12.47
K2O | 0.32 | 0.20
CaO | 9.02 | 9.49
MgO | 4.21 | 6.16

Table III - Ion exchange experiments of this study.

IX - Id. | Temperature (°C) | Immersion time (hours - h)
Q1 | 425 | 24
Q2 | 450 | 24
Q3 | 475 | 24

Table IV - Results of the ion exchange experiments of this study.

IX - Id. | Weight change ΔW (g) | Mass average interdiffusion coefficient from eq. (16), DM (cm²/s) | Case depth from eq. (20), xc (μm) | Case depth measured by DSR-FSM, Cd (μm) | Surface compression measured by DSR-FSM, Sc (MPa)
Q1 | 0.0079 | 2.365·10⁻¹² | 17.3 | 19.7 | 600
Q2 | 0.0127 | 7.319·10⁻¹² | 28.5 | 28.7 | 477
Q3 | 0.0206 | 1.589·10⁻¹¹ | 39.3 | 37.2 | 369
References

1. P. Mazzoldi, S. Carturan, A. Quaranta, C. Sada, V.M. Sglavo, Ion Exchange process: History, evolution and applications, Rivista del Nuovo Cimento, Vol. 36, No. 9, 2013, 397-460, DOI 10.1393/ncr/i2013-10092-1.
2. R.F. Bartolomew, H.M. Garfinkel, Chemical strengthening of glass, Ch. 6 in Glass Science and Technology, edited by D.R. Uhlmann and N.J. Kreidl, Vol. 5, Elasticity and Strength in Glasses, New York, 1980, Academic Press.
3. A.K. Varshneya, J.C. Mauro, Fundamentals of Silicate Glasses, 3rd Edition, Elsevier Inc., 2019.
4. R. Gy, Ion exchange for glass strengthening, Mat. Sci. Eng. B, 149 (2008) 159-165.
5. A. Tervonen, Theoretical analysis of ion-exchanged glass waveguides, Ch. 4 in Introduction to Glass Integrated Optics, edited by S.I. Najafi, Norwood, 1992, Artech House, Inc.
6. H. Aben, C. Guillemet, Photoelasticity of Glass, Springer-Verlag, 1993.
7. A.K. Varshneya, M.E. Milberg, Ion exchange in sodium borosilicate glasses, J. Amer. Ceram. Soc., 57[4] 165-169, 1974, and subsequent correction, J. Amer. Ceram. Soc. 57[7] 328 (1974).
8. ASTM C1422/C1422M-20a, Standard Specification for Chemically Strengthened Flat Glass.
9. G. Macrelli, J.C. Mauro, A.K. Varshneya, Coupling of diffusion and chemical stress: the case of ion exchange in glass, J. Amer. Ceram. Soc. 2021;104:5599-5613, DOI 10.1111/jace.17926.
10. J.C. Mauro, Materials Kinetics: Transport and Rate Phenomena, Elsevier Inc., 2021.
11. G. Macrelli, The mathematical theory of diffusion in solids: time dependent first kind boundary condition, https://arxiv.org/abs/2211.05065v5, https://doi.org/10.48550/arXiv.2211.05065, 2022.
12. G. Macrelli, A.K. Varshneya, J.C. Mauro, Ion Exchange in Silicate Glasses: Physics of Ion Concentration, Residual Stress, and Refractive Index Profiles, arXiv:2002.08016v2 [cond-mat.mtrl-sci], https://doi.org/10.48550/arXiv.2002.08016, 2020.
13. G. Macrelli, A.K. Varshneya, J.C. Mauro, Simulation of glass network evolution during chemical strengthening: resolution of the subsurface compression maximum anomaly, J. Non-Cryst. Solids 522 (2019) 119427.
14. FSM 6000X, Technical Data Sheet, Orihara Industrial Co., Ltd., Japan.
15. P. Shewmon, Diffusion in Solids, 2nd Edition, Cham, 2016, Springer International Publishers.
16. J. Crank, The Mathematics of Diffusion, 2nd Edition, Oxford, 1975, Clarendon Press.
Thermal transport in crystals as a kinetic theory of relaxons

Andrea Cepellotti and Nicola Marzari (*)
Theory and Simulations of Materials (THEOS), and National Centre for Computational Design and Discovery of Novel Materials (MARVEL), Ecole Polytechnique Fédérale de Lausanne, Station 9, 1015 Lausanne, Switzerland
(*) nicola.marzari@epfl.ch

arXiv:1603.02608 [cond-mat.mtrl-sci]; DOI: 10.1103/PhysRevX.6.041013

Abstract: Thermal conductivity in dielectric crystals is the result of the relaxation of lattice vibrations described by the phonon Boltzmann transport equation. Remarkably, an exact microscopic definition of the heat carriers and their relaxation times is still missing: phonons, typically regarded as the relevant excitations for thermal transport, cannot be identified as the heat carriers when most scattering events conserve momentum and do not dissipate heat flux. This is the case for two-dimensional or layered materials at room temperature, or three-dimensional crystals at cryogenic temperatures. In this work we show that the eigenvectors of the scattering matrix in the Boltzmann equation define collective phonon excitations, termed here relaxons. These excitations have well defined relaxation times, directly related to heat flux dissipation, and provide an exact description of thermal transport as a kinetic theory of the relaxon gas. We show why Matthiessen's rule is violated, and construct a procedure for obtaining the mean free paths and relaxation times of the relaxons. These considerations are general, and would apply also to other semiclassical transport models, such as the electronic Boltzmann equation. For heat transport, they remain relevant even in conventional crystals like silicon, but are of the utmost importance in the case of two-dimensional materials, where they can revise by several orders of magnitude the relevant time- and length-scales for thermal transport in the hydrodynamic regime.
I. INTRODUCTION
The foundations for the theories of lattice thermal transport have been set in place long ago, from the phonon Boltzmann transport equation (BTE) [1] to Green-Kubo linear-response theory [2,3]. However, only recently it has become possible, thanks to our increased computational capabilities, to solve these transport models with high accuracy and without resorting to oversimplifying assumptions [4][5][6][7][8][9][10][11][12][13][14][15].
In particular, the linear BTE can nowadays be solved exactly, using empirical or first-principles interactions, with iterative [4,16,17], variational [7,[18][19][20] or direct diagonalisation algorithms [21][22][23], all of which do not need to simplify the scattering operator with the often-adopted single-mode relaxation time approximation (SMA). In the SMA, each phonon mode relaxes independently to equilibrium, and it has long been known that this is an incorrect assumption for solids at low temperatures [18,24]. Importantly, this approximation fails dramatically in lower dimensions, as first found in graphene [25], boron nitride and other two-dimensional (2D) materials [26][27][28], as well as in layered crystals [29]. The origin of such failure of the SMA has been up to now a matter of debate, with an emerging picture of collective phonon excitations being responsible for heat transfer [26,[29][30][31][32][33], while nevertheless lacking a definition of such excitations.
The microscopic interpretation of thermal transport in the BTE is based on the kinetic theory of gases, used in various contexts since its development in the 19th century, which relates the thermal conductivity to the velocities and relaxation times of the carriers, with phonons being usually identified as the relevant gas of excitations. However, as argued below, this identification is incorrect, since only the adoption of the SMA allows for the definition of a time interval (i.e. a lifetime) between heat flux dissipation events taken as phonon scatterings. Going beyond the SMA, the full, exact solution of the BTE provides the correct thermal conductivity (dramatically improved in 2D materials or at low temperatures), but adds complexity in its interpretation. In fact, solving exactly the BTE implies abandoning the concept of phonon relaxation time and the description of heat being carried by a gas of phonons. In other words, phonon lifetimes or phonon mean free paths are no longer relevant quantities to describe thermal transport, since phonons are not the heat carriers anymore. Yet, the beauty of the SMA description lies in the simplicity of its description of transport. A natural question then arises: can one define heat carriers, relaxation times, or mean free paths within the exact treatment of the BTE?
In this work, we provide an answer to these questions. First, we recall why phonon lifetimes are unrelated to heat flux dissipation. Then, we define a set of collective excitations, termed relaxons, that diagonalize the scattering matrix. The BTE is rewritten in the basis of these relaxons; in this representation, each eigenvector represents a collective excitation consisting of a linear combination of out-of-equilibrium phonon populations, and it describes the thermal relaxation of a collective excitation of out-of-equilibrium lattice vibrations. We show that each relaxon is characterised by a well defined relaxation time; in the case of a homogeneous system at the steady-state each relaxon has also a well defined velocity and mean free path, and the thermal conductivity can be interpreted exactly as a kinetic theory of the relaxon gas. As a practical example, we compare thermal conductivities in graphene and silicon contrasting the relaxon and the phonon representations, and highlight the profoundly different pictures that emerge.
II. APPROXIMATED RELAXATION TIMES
We start our derivation by recalling the microscopic description of heat transport given by the linearised phonon BTE [18]:
$$\frac{\partial n_\mu(x,t)}{\partial t} + \mathbf{v}_\mu\cdot\nabla n_\mu(x,t) = -\frac{1}{V}\sum_{\mu'}\Omega_{\mu\mu'}\,\Delta n_{\mu'}(x,t).\tag{1}$$
This equation describes the out-of-equilibrium dynamics of the phonon excitation number $n_\mu$ at position x and time t, for all possible phonon states µ (in shorthand notation µ ≡ (q, s), where q varies over the Brillouin zone and s over the phonon branches). Furthermore, $\mathbf v_\mu$ is the phonon group velocity, V is a normalization volume, $\Omega_{\mu\mu'}$ is the linear phonon scattering operator and $\Delta n_\mu = n_\mu - \bar n_\mu$ is the deviation of the phonon distribution from thermal equilibrium, i.e. from the Bose-Einstein distribution $\bar n_\mu(x,t) = (e^{\hbar\omega_\mu/k_B T(x,t)} - 1)^{-1}$, with $\omega_\mu$ being the phonon frequency and T(x,t) the local temperature. This linear approximation, commonly used in most studies of transport, allows us to describe scattering as a linear operator represented by the action of the matrix $\Omega_{\mu\mu'}$ on $\Delta n_{\mu'}$: this assumption holds for small deviations from thermal equilibrium and will always be used in the rest of the manuscript. The scattering matrix appearing in Eq. (1) is in its most general form and describes all possible mechanisms by which a phonon excitation can be transferred from a state µ to a state µ'. For the sake of simplicity, we will limit this manuscript to the inclusion of three-phonon processes and isotopic scattering events [7], whose expressions are reported for completeness in Appendix A.
For later convenience, it is useful to write the left-hand side of Eq. (1) in terms of the unknown ∆n µ :
$$\frac{\partial \bar n_\mu}{\partial T}\left(\frac{\partial T(x,t)}{\partial t} + \mathbf v_\mu\cdot\nabla T(x,t)\right) + \frac{\partial(\Delta n_\mu(x,t))}{\partial t} + \mathbf v_\mu\cdot\nabla(\Delta n_\mu(x,t)) = -\frac{1}{V}\sum_{\mu'}\Omega_{\mu\mu'}\,\Delta n_{\mu'}(x,t)\,;\tag{2}$$
where T is the reference temperature at which the BTE has been linearized. To obtain Eq. (2), we substituted $n_\mu = \bar n_\mu + \Delta n_\mu$ in Eq. (1) and used the fact that the Bose-Einstein distribution depends on space and time only through the temperature T(x,t).
A closed-form solution of the above equation can be obtained in the SMA, which replaces the scattering operator with its diagonal terms
$$\frac{1}{V}\sum_{\mu'}\Omega_{\mu\mu'}\,\Delta n_{\mu'}(x,t) \approx \frac{\Delta n_\mu(x,t)}{\tau^{\rm SMA}_\mu}.\tag{3}$$
To show that in this simplified diagonal form $\tau^{\rm SMA}_\mu$ indeed represents a relaxation time, let us consider a system at thermal equilibrium (T(x,t) = T), so that the phonon distribution is $\bar n_\mu$ everywhere and thus, in Eq. (2), $\nabla(\Delta n_\mu) = 0$, $\partial T/\partial t = 0$ and $\nabla T = 0$. If we excite a single phonon at time $t_0$, its population relaxes back to equilibrium as $\Delta n_\mu(t) = (n_\mu(t_0) - \bar n_\mu)\,e^{-t/\tau^{\rm SMA}_\mu}$, i.e. with a characteristic time $\tau^{\rm SMA}_\mu$. The thermal conductivity tensor $k^{ij}$ (i and j are cartesian indices) is defined as the ratio between a heat flux $Q^i$ and a static gradient of temperature $(\nabla T)^j$. Two simplifications apply in this case. First, a steady-state condition allows us to simplify the BTE by setting time derivatives to zero. Second, the spatial gradient can be simplified taking $\nabla(\Delta n_\mu) = 0$. This assumption, frequently adopted in the literature, holds for a homogeneous perturbation of a bulk crystal (as in our case): if we apply a thermal gradient to a crystal at temperature T, the response $\Delta n_\mu$ should not depend on the particular position x inside the sample. Although we will not consider it further here, we note that this assumption cannot be applied when studying systems that break translational invariance, involving e.g. surfaces or pointlike heat sources. Under these conditions and the SMA, the resulting BTE can be solved analytically and, using the harmonic approximation for the heat flux $Q = \frac{1}{V}\sum_\mu \hbar\omega_\mu \mathbf v_\mu \Delta n_\mu$ [34] and the definition $Q^i = -\sum_j k^{ij}(\nabla T)^j$, the thermal conductivity is given by
$$(k^{ij})^{\rm SMA} = \frac{1}{V}\sum_\mu C_\mu\, v^i_\mu\, (\Lambda^j_\mu)^{\rm SMA},\tag{4}$$
where $(\Lambda^j_\mu)^{\rm SMA}$ is the component of the phonon mean free path in direction j. This expression can be interpreted as the thermal conductivity of a gas of phonons, each carrying a specific heat $C_\mu = \frac{1}{k_B T^2}\,\bar n_\mu(\bar n_\mu+1)(\hbar\omega_\mu)^2 = \frac{\partial \bar n_\mu}{\partial T}\,\hbar\omega_\mu$, traveling at velocity $v^i_\mu$ and with a mean free path $(\Lambda^j_\mu)^{\rm SMA} = v^j_\mu\,\tau^{\rm SMA}_\mu$ before being thermalised by scattering. Crucially, the definition of phonon lifetime or mean free path cannot be extended beyond the SMA, since the off-diagonal terms of the scattering operator introduce couplings between phonons, and phonon thermalisation stops being governed by an exponential relaxation.
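To make the kinetic-theory reading of Eq. (4) concrete, the sketch below (Python; toy mode data, not the first-principles inputs used in this work) evaluates the SMA sum with Bose-Einstein mode specific heats:

```python
# Toy evaluation of the SMA kinetic formula, Eq. (4).
import numpy as np

kB, hbar, T, V = 1.380649e-23, 1.054571817e-34, 300.0, 1.0e-27  # SI; toy volume
rng = np.random.default_rng(0)

omega = rng.uniform(1e12, 3e14, 500)        # toy phonon frequencies (rad/s)
v = rng.uniform(-2e4, 2e4, 500)             # toy group velocities (m/s)
tau = rng.uniform(1e-11, 1e-10, 500)        # toy SMA lifetimes (s)

nbar = 1.0 / np.expm1(hbar * omega / (kB * T))
C_mode = nbar * (nbar + 1) * (hbar * omega) ** 2 / (kB * T ** 2)

k_sma = np.sum(C_mode * v * (v * tau)) / V  # Eq. (4), diagonal component
print(f"k_SMA = {k_sma:.1f} W/(m K)")
```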
III. RELAXONS
An exact definition of relaxation times has been formally derived by Hardy [22], as an auxiliary result in his study of second sound. To recall it, let's first note that the left side of the BTE in Eq. (2) has a drifting operator diagonal in µ, whereas the right side has a scattering operator (determining scattering time scales) that is non-diagonal. To identify meaningful scattering times, we proceed with a change of basis that diagonalises the scattering operator while allowing the drifting term to become non-diagonal. To make more apparent the symmetries within the BTE, we perform the transformations [22,23,35,36]:
$$\tilde\Omega_{\mu\mu'} = \Omega_{\mu\mu'}\,\frac{\sqrt{\bar n_{\mu'}(\bar n_{\mu'}+1)}}{\sqrt{\bar n_\mu(\bar n_\mu+1)}}\,,\quad\text{and}\tag{5}$$
$$\Delta\tilde n_\mu = \left(\bar n_\mu(\bar n_\mu+1)\right)^{-\frac{1}{2}}\,\Delta n_\mu.\tag{6}$$
These transformations are introduced to scale quantities appearing in the BTE in such a way that $\tilde\Omega_{\mu\mu'} = \tilde\Omega_{\mu'\mu}$ (the matrix $\Omega$ does not obey this symmetry; see Appendix A for a detailed explanation). We note that sometimes these transformations appear in the literature in the form of hyperbolic sines, by means of the identity $\sinh\!\left(\frac{\hbar\omega_\mu}{2k_B T}\right) = \frac{1}{2\sqrt{\bar n_\mu(\bar n_\mu+1)}}$. Since $\tilde\Omega$ is a real symmetric matrix, it can be diagonalized, giving eigenvectors $\theta^\alpha_\mu$ and real eigenvalues $\frac{1}{\tau_\alpha}$ such that
$$\frac{1}{V}\sum_{\mu'}\tilde\Omega_{\mu\mu'}\,\theta^\alpha_{\mu'} = \frac{1}{\tau_\alpha}\,\theta^\alpha_\mu\,,\tag{7}$$
where α is the eigenvalue index. In passing, we define the scalar product $\langle\alpha|\alpha'\rangle \equiv \frac{1}{V}\sum_\mu \theta^\alpha_\mu\,\theta^{\alpha'}_\mu$, which allows us to define the orthonormalization condition for the eigenvectors ($\langle\alpha|\alpha'\rangle = \delta_{\alpha\alpha'}$) and will be helpful in the next algebraic operations. It can be shown [22,35] that $\tilde\Omega$ is positive-semidefinite, i.e. $\frac{1}{\tau_\alpha} \ge 0\;\forall\alpha$, and that its eigenvectors are either even or odd, i.e. $\theta^\alpha_\mu = \pm\theta^\alpha_{-\mu}$, where −µ = (−q, s) [22]. Little else is known about the eigenvalue spectrum of Eq. (7), which therefore has to be characterized numerically. In contrast with Refs. [21,22], we remark that the Bose-Einstein distribution is not an eigenvector with zero eigenvalue: the scattering operator acts only on the deviation from equilibrium $\Delta n_\mu$, therefore thermal equilibrium ($\Delta n_\mu = 0$) is a stationary solution ($\sum_{\mu'}\Omega_{\mu\mu'}\Delta n_{\mu'} = 0$) because it is an algebraically trivial solution. However, the Bose-Einstein distribution allows the introduction of a vector of unitary length
$$\theta^0_\mu = \sqrt{\bar n_\mu(\bar n_\mu+1)}\;\frac{\hbar\omega_\mu}{\sqrt{k_B T^2 C}}\,,\tag{8}$$
where $C = \frac{1}{V}\sum_\mu C_\mu$, describing the increase of temperature. This vector is constructed as the linear deviation from equilibrium of $\bar n_\mu(T + \delta T)$, transformed using Eq. (6) and normalized to one. Note that $\theta^0_\mu$ is not an eigenvector and does not have to be orthogonal to the other eigenstates α.
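The two properties of $\theta^0_\mu$ just introduced (unit norm, and $(\theta^0_\mu)^2 = C_\mu/C$, used below) can be verified numerically; a minimal sketch (Python, with toy frequencies standing in for a real dispersion):

```python
# Check of Eq. (8): theta^0 has unit norm and (theta^0)^2 = C_mu / C.
import numpy as np

kB, hbar, T, V = 1.380649e-23, 1.054571817e-34, 300.0, 1.0
omega = np.linspace(1e12, 3e14, 2000)             # toy phonon frequencies (rad/s)

nbar = 1.0 / np.expm1(hbar * omega / (kB * T))    # Bose-Einstein occupations
C_mode = nbar * (nbar + 1) * (hbar * omega) ** 2 / (kB * T ** 2)
C = C_mode.sum() / V                              # total specific heat

theta0 = np.sqrt(nbar * (nbar + 1)) * hbar * omega / np.sqrt(kB * T ** 2 * C)
print(np.sum(theta0 ** 2) / V)                    # -> 1.0 (unit norm)
print(np.allclose(theta0 ** 2, C_mode / C))       # -> True
```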
Any response $\Delta\tilde n_\mu$ can be written as a linear combination of the $\theta^\alpha_\mu$ eigenvectors [22]:
$$\Delta\tilde n_\mu(x,t) = \sum_\alpha f_\alpha(x,t)\,\theta^\alpha_\mu\,,\tag{9}$$
and the BTE can be written in this $\theta^\alpha$ basis (to this aim, substitute Eq. (9) in (2) and take the scalar product of the equation with a generic eigenvector $\langle\alpha|$), becoming:
$$\sqrt{\frac{C}{k_B T^2}}\left(\frac{\partial T(x,t)}{\partial t}\,\langle 0|\alpha\rangle + \nabla T(x,t)\cdot \mathbf V_\alpha\right) + \frac{\partial f_\alpha(x,t)}{\partial t} + \sum_{\alpha'}\mathbf V_{\alpha\alpha'}\cdot\nabla f_{\alpha'}(x,t) = -\frac{f_\alpha(x,t)}{\tau_\alpha}\,,\tag{10}$$
where $\mathbf V_{\alpha\alpha'} = \frac{1}{V}\sum_\mu \theta^\alpha_\mu\,\mathbf v_\mu\,\theta^{\alpha'}_\mu \equiv \langle\alpha|\mathbf v|\alpha'\rangle$ and $\mathbf V_\alpha = \mathbf V_{0\alpha} = \langle 0|\mathbf v|\alpha\rangle$. $\mathbf V_{\alpha\alpha'}$ derives from the action of the diffusion operator on the deviation from equilibrium, while $\mathbf V_\alpha$ derives from the action of the diffusion operator on the equilibrium distribution.
The physical picture encoded in Eq. (10) underlines one of the key statements of this work: by diagonalising the scattering operator, the information about the characteristic relaxation time of the thermal excitations is now given by the eigenvalues $\frac{1}{\tau_\alpha}$. The eigenvectors $\theta^\alpha_\mu$ for which this picture emerges represent collective excitations which we call here relaxons. Each relaxon represents a distribution of phonon excitation numbers (a wave packet), describing how the phonon distribution is relaxing to equilibrium. The coefficients $f_\alpha$ are the relaxon occupation numbers, which are determined by the BTE in the out-of-equilibrium state, and which at equilibrium are all 0, so that the deviation from equilibrium $\Delta n_\mu$ vanishes.
The name relaxon is easily justified by considering a system at thermal equilibrium, so that $\partial T/\partial t = 0$, $\nabla T = 0$ and, in terms of relaxons, all states are empty everywhere, so that $\nabla f_\alpha = 0\;\forall\alpha$ in Eq. (10). If we excite a single relaxon α at time $t_0$, its occupation will relax back to equilibrium as $f_\alpha(t) = f_\alpha(t_0)\,e^{-t/\tau_\alpha}$, therefore endowing $\tau_\alpha$ with the meaning of a relaxation time. Although the theory allows for zero eigenvalues, we will find in our examples only strictly positive relaxation times, so that all relaxons decay to zero for t → ∞. Using Eq. (9), one can show instead that phonon populations do not have well-defined relaxation times: since each phonon population decays as a linear combination of relaxon processes, $\Delta\tilde n_\mu(t) = \sum_\alpha f_\alpha(t_0)\,\theta^\alpha_\mu\,e^{-t/\tau_\alpha}$, the characteristic time for the decay depends on the initial conditions of the thermal excitation and can even display damped oscillations. Conversely, let us suppose we excite only one phonon mode µ at time $t_0$. This initial state can be decomposed as the sum of different relaxons, each evolving with a different relaxation time. Therefore, at a subsequent time t one will also observe that new phonon modes µ' ≠ µ have been excited out of thermal equilibrium. In Fig. 1 we graphically illustrate our interpretation: each relaxon is a collective excitation of phonons, which interact through scattering events among themselves, but are decoupled from phonons belonging to different relaxons; owing to their positive relaxation times, relaxons disappear at long times, allowing the system to reestablish equilibrium.
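A toy numerical illustration of this last point (Python; a small random symmetric positive-semidefinite matrix standing in for the true $\tilde\Omega$, purely for illustration) shows a single-phonon excitation spreading over other modes and decaying non-exponentially:

```python
# Toy decay of a single-phonon excitation: a mixture of relaxons.
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
Omega_t = M @ M.T                      # toy symmetric positive-semidefinite matrix
rates, theta = np.linalg.eigh(Omega_t) # 1/tau_alpha and relaxon eigenvectors

dn0 = np.zeros(6); dn0[0] = 1.0        # excite a single phonon mode at t = 0
f0 = theta.T @ dn0                     # decompose onto relaxons, cf. Eq. (9)

for t in (0.0, 0.1, 0.5):
    dn_t = theta @ (f0 * np.exp(-rates * t))
    print(t, np.round(dn_t, 3))        # other modes acquire population
```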
Velocities appear in Eq. (10) with a matrix $\mathbf V_{\alpha\alpha'}$ coupling different relaxons, since it is the phonon basis that diagonalises the drifting operator; therefore, one cannot always identify a relaxon velocity. However, if we suppose we work in an infinite crystal at temperature T and apply a temperature perturbation homogeneously to the entire crystal, the response of the system is constant throughout space and we can set $\nabla f_\alpha = 0\;\forall\alpha$. Therefore, the BTE simplifies to
$$\frac{\partial f_\alpha(t)}{\partial t} + \sqrt{\frac{C}{k_B T^2}}\left(\frac{\partial T}{\partial t}\,\langle 0|\alpha\rangle + \nabla T\cdot \mathbf V_\alpha\right) = -\frac{f_\alpha(t)}{\tau_\alpha}.\tag{11}$$
Both the drifting and the collision operator are now diagonal in α, and $\mathbf V_\alpha$ identifies a well-defined relaxon velocity.
Let's simplify the problem further and consider steady state. In this case, time derivatives are set to zero in Eq. (11) and one can, for small deviations from equilibrium, search for linear solutions of the form
$$f_\alpha = \sum_i f^i_\alpha\,\nabla_i T\,,$$
where i is a cartesian direction. The BTE reduces to
$$\sqrt{\frac{C}{k_B T^2}}\;V^i_\alpha = -\frac{f^i_\alpha}{\tau_\alpha}\,,\tag{12}$$
whose solution for $f^i_\alpha$ is trivial. Using the relation between phonon and relaxon occupation numbers, $\Delta n_\mu = \sqrt{\bar n_\mu(\bar n_\mu+1)}\left(\sum_{i\alpha}\nabla_i T\, f^i_\alpha\,\theta^\alpha_\mu\right)$, we obtain the thermal conductivity
$$k^{ij} = \frac{-1}{V\,\nabla_i T}\sum_\mu \hbar\omega_\mu\, v^j_\mu\,\Delta n_\mu = -\sum_\alpha f^i_\alpha\,\sqrt{k_B T^2 C}\;V^j_\alpha = \sum_\alpha C\,V^i_\alpha V^j_\alpha\,\tau_\alpha = \sum_\alpha C\,V^i_\alpha\,\Lambda^j_\alpha\,,\tag{13}$$
where we introduced the relaxon mean free path $\boldsymbol\Lambda_\alpha$ ($\Lambda^j_\alpha$ is the component of $\boldsymbol\Lambda_\alpha$ in direction j). Therefore, the exact thermal conductivity in Eq. (13) is expressed in the framework of the kinetic theory of gases, and thermal transport can be thought of as a flux of relaxons, each carrying a specific heat C, traveling at velocity $\mathbf V_\alpha$ for an average distance $\boldsymbol\Lambda_\alpha$ before thermalisation occurs. At variance with the phonon picture, where each phonon participates in the thermal conductivity with a mode specific heat $C_\mu$, all relaxons contribute with the same specific heat of the crystal, C. Mathematically, the phonon-mode specific heat is moved into the vector $\theta^0_\mu$ (note that $(\theta^0_\mu)^2 = C_\mu/C$) and thus is included in the relaxon velocity $\mathbf V_\alpha = \langle 0|\mathbf v|\alpha\rangle$. To physically interpret this difference we recall that, from a thermodynamic point of view, the quantity CδT is the energy needed by the system to change temperature by δT. To observe such a temperature change, all phonon modes must simultaneously change their occupation number according to the collective excitation $\theta^0_\mu$. The quantity $C_\mu\delta T$ is the decomposition of such an energy change in terms of each phonon mode. However, as explained before, one cannot suppose to excite a single phonon mode and bring it to a higher temperature without affecting the rest of the phonon ensemble: phonon scattering would redistribute the energy excess of such a mode to the rest of the system. Only a collective excitation of phonons ($\theta^0_\mu$) leads to a temperature change, and the total energy cost for increasing temperature is necessarily associated with C; thus, from a thermodynamic point of view, one could state that the mode specific heat $C_\mu$ does not have a well-defined meaning.
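The sketch below (Python; toy inputs in place of the ab initio scattering matrix) assembles the full pipeline of this section: diagonalize $\tilde\Omega$, form the relaxon velocities $V_\alpha = \langle 0|v|\alpha\rangle$, and sum Eq. (13):

```python
# Toy relaxon conductivity, Eq. (13): k = sum_alpha C * V_alpha^2 * tau_alpha.
import numpy as np

rng = np.random.default_rng(2)
N, V = 8, 1.0
M = rng.normal(size=(N, N))
Omega_t = M @ M.T + 0.1 * np.eye(N)   # toy symmetric positive-definite matrix
v = rng.uniform(-1.0, 1.0, N)         # toy phonon group velocities
C_mode = rng.uniform(0.5, 1.5, N)     # toy mode specific heats
C = C_mode.sum() / V
theta0 = np.sqrt(C_mode / (C * V))    # temperature vector, (theta0)^2 = C_mu/C

rates, theta = np.linalg.eigh(Omega_t / V)
tau = 1.0 / rates                     # relaxon relaxation times, Eq. (7)
V_alpha = theta.T @ (theta0 * v) / V  # V_alpha = <0|v|alpha>

print(np.sum(C * V_alpha ** 2 * tau)) # Eq. (13), i = j
```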
It is worth pointing out the role played by the parity of relaxons. The quantity $\mathbf V_\alpha = \langle 0|\mathbf v|\alpha\rangle$ involves the odd function $\mathbf v_\mu$ ($\mathbf v_{-\mu} = -\mathbf v_\mu$) and the even function $\theta^0_\mu$ (owing to $\omega_\mu = \omega_{-\mu}$). Therefore, the relaxon velocities $\mathbf V_\alpha$ are different from zero only for odd relaxons α. Consequently, Eq. (12) also implies that only odd relaxons are excited in the steady-state condition, and thus contribute to the heat flux, while even relaxons have zero occupation number. The role of parity is reversed for determining the energy of the system, since the change from the equilibrium energy ∆E is
$$\Delta E = \frac{1}{V}\sum_\mu \hbar\omega_\mu\,\Delta n_\mu = \frac{1}{V}\sum_\mu \hbar\omega_\mu\sqrt{\bar n_\mu(\bar n_\mu+1)}\,\Delta\tilde n_\mu = \frac{1}{V}\sum_\mu \theta^0_\mu\,\sqrt{k_B T^2 C}\,\sum_\alpha f_\alpha\theta^\alpha_\mu = \sqrt{k_B T^2 C}\,\sum_\alpha f_\alpha\,\langle 0|\alpha\rangle.\tag{14}$$
In this case, even relaxons have a non-zero coefficient $\langle 0|\alpha\rangle$ and contribute to an energy change, but for odd relaxons $\langle 0|\alpha\rangle = 0$, so they do not change the energy. We can thus deduce that at the steady state defined by Eq. (11), where only odd relaxons are excited, the energy of the system is conserved.
IV. GRAPHENE
As a first numerical example supporting these conclusions, we study relaxons in graphene, the material with the highest known thermal conductivity [37], and contrast the phonon and the relaxon pictures at 300 K. Due to its symmetry, graphene's $k^{ij}$ tensor is diagonal and, since $k^{xx} = k^{yy}$, it has only one independent component (verified also numerically); therefore in the following we will drop cartesian indices and compute quantities numerically along the zig-zag direction. To proceed, we calculate harmonic and anharmonic force constants using density-functional perturbation theory [38-44] as implemented in the Quantum-ESPRESSO distribution [45] and construct the scattering matrix using 3-phonon and isotope-phonon interactions. The diagonalisation of Eq. (7) provides all the relaxon eigenvectors $\theta^\alpha_\mu$; each of them represents, at fixed relaxon index α, a difference in phonon populations with respect to thermal equilibrium (provided that it is back-transformed using Eq. (6)). Notably, only a few $\theta^\alpha_\mu$ have large relaxation times; for example, the longest-lived relaxon (α=1) is plotted in Fig. 2 as a function of the phonon index µ = (q, s). The first three (s = 1, 2, 3) branches are shown, corresponding to the out-of-plane, transverse or longitudinal acoustic phonons (ZA, TA and LA respectively). This particular relaxon induces a population difference for the ZA branch mainly located close to the Brillouin zone center, whereas TA and LA modes are altered throughout the Brillouin zone. The variations of the optical modes (s = 4, 5, 6, not shown) are an order of magnitude smaller. The complex landscape drawn by these phonon distributions reflects the fact that out-of-equilibrium lattice properties cannot be described in terms of single phonon properties, as the action of scattering tightly couples phonons of any wavevector and branch.
FIG. 2. Representation of the relaxon $\theta^\alpha_\mu$ with the longest relaxation time (α=1) in graphene at room temperature as a function of the phonon index µ = (q, s), where we choose s to be the out-of-plane/transverse/longitudinal acoustic mode (ZA, TA and LA respectively). We recall that the relaxon is a difference in phonon populations with respect to thermal equilibrium: overpopulated modes are colored in red, depopulated ones are in blue. The fine structure of the ridges is a numerical artifact due to discrete Brillouin zone sampling.

FIG. 3. Relaxation times and their contribution to the thermal conductivity of graphene at room temperature, considering relaxons or phonons as heat carriers. Relaxons tend to be longer lived than single phonon excitations, with large contributions to thermal conductivity coming from excitations with relaxation times larger than 10³ ps, whereas phonons have lifetimes mainly in the range 10-100 ps. The shaded area is a guide to the eye to stress that phonons form a continuous spectrum, while relaxons are discretized: thermal conductivity can be accurately described using a small number of relaxons.

We analyse the entire phonon and relaxon spectrum in Fig. 3, where the contributions to the SMA or the exact thermal conductivities are plotted as a function of the relaxon or phonon relaxation times. The thermal conductivities computed in the two pictures differ significantly in graphene (in this work we compute 3894 W/mK with the exact BTE against 495 W/mK with the SMA); hence, to have more comparable quantities, we plot the percentage contribution to thermal conductivity. We first note that the spectrum of phonon lifetimes (and phonon velocities and mean free paths) is continuous, with a divergence $\tau_\mu \to \infty$ for acoustic ZA phonons at the Γ point [46]. This divergence cannot be accurately described with a finite mesh of points sampling the Brillouin zone (in our case, a full mesh of 128×128 points), resulting in a sparse tail of long-lived phonons on the right side of Fig. 3, whose contribution to $k^{\rm SMA}$ is negligible [46]. Instead, the relaxation times of relaxons are discrete and sparse, in particular in the region of large values, so that only a small number of relaxons is sufficient to describe thermal transport with high accuracy. This observation is robust with respect to Brillouin zone sampling: when the integration mesh is improved, new phonon modes appear in the long-lifetime region; instead, the longest relaxon relaxation times converge, from above, to the discretized values shown in the figure. On average, relaxon relaxation times are larger by at least two orders of magnitude with respect to phonon lifetimes. The large difference between the time scales of phonons and relaxons appears because a single phonon scattering event cannot thermalize the system [18], as instead implied by the SMA. Therefore, while phonons scatter on timescales of about 10-100 ps, heat flux is dissipated by relaxons on nano- and microsecond timescales.
Before analysing velocities, we note that the sign of $\mathbf V_\alpha$ is arbitrary, since both $\theta^\alpha_\mu$ and $-\theta^\alpha_\mu$ are relaxon eigenvectors. As a convention, we select the sign of the odd eigenvectors such that $V_\alpha$ is non-negative (and so also $\Lambda_\alpha$), noting that in any case the contribution to k would be positive (as $V^2_\alpha$). Phonon velocities can also assume both signs: in the figure we plot their absolute values. Fig. 4 reveals that the velocities of relaxons are much smaller than those of phonons: while the scale of phonon velocities is set by the speed of sound (the group velocity of the longitudinal acoustic phonon is about 20 km/s), relaxons are slower by two orders of magnitude, indicating that heat is transferred through the material at 0.1-1 km/s. Finally, we show the relaxon mean free paths in Fig. 5 (projected along the transport direction). As other first-principles studies reported [8], phonon mean free paths in graphene are distributed in the 0.1-1 µm region [29]; this is confirmed here. For relaxons, most contributions to thermal conductivity come with mean free paths above 0.1 µm, the longest and most important contributions having mean free paths up to tens of µm. The contribution to k is roughly monotonic with the mean free path, and the large increase in $\tau_\alpha$ is partly compensated by the decreased $V_\alpha$. The saturation of relaxon mean free paths at tens of µm appears to be in contrast with recent estimates for saturation lengths of 100 µm [29] or longer [47]; we will comment on this discrepancy after discussing the next example.

FIG. 5. Analysis of phonon and relaxon mean free paths and their contributions to the thermal conductivity of graphene at room temperature. The spectrum of phonon mean free paths is peaked in the submicron region, whereas relaxon mean free paths are skewed to values larger than 1 µm. The two largest relaxon mean free paths, illustrating the maximum distance that heat flux can propagate inside the material before decaying, lie between 10 and 100 µm.
V. SILICON
As a further test for relaxons, let us now turn our attention to silicon and examine its thermal transport properties at room temperature. The thermal conductivity tensor in silicon is diagonal and the three cartesian directions are equivalent; we therefore only consider transport properties along the (100) direction. At variance with graphene, here the SMA introduces an appealingly small error: in our calculations we find 138 W/mK instead of 141 W/mK for the exact solution; these estimates are in line with previous first-principles studies [5,6,8]. The small difference between the two pictures is somehow replicated in their relaxation times, reported in Fig. 6. The time scales covered by phonons and relaxons span approximately the same range of values, except for one relaxon, not shown in the graph, which has a relaxation time of 2×10⁵ ps but a negligible contribution to thermal conductivity (10⁻⁹ %). However, one can note that the two distributions of values do not perfectly overlap: even if the scattering matrix is diagonally dominant, there are anyway small non-zero out-of-diagonal matrix elements that introduce deviations from the SMA.
It is enlightening to analyze the different velocity scales set by the two pictures, depicted in Fig. 7. Once again, most of the contributions to SMA thermal transport come from phonons with velocities close to the speed of sound, which in silicon is approximately 8 km/s. However, relaxon velocities are two orders of magnitude smaller than this limiting value, reaching merely 60 m/s. Despite the fact that the instantaneous velocity of a lattice vibration is determined by the phonon dispersion, the velocity at which the heat flux propagates can be much different: the scattering between phonons slows it down. The mean free paths of relaxons and phonons in silicon are compared in Fig. 8. The vast difference originating from the velocities is carried over, so that while the mean free paths of phonons extend up to 100 µm, in agreement with other first-principles studies [8,48,49], relaxons travel for a distance two orders of magnitude smaller than phonons. It seems therefore puzzling that the two pictures give such large differences in the estimates of velocities and mean free paths, despite the fact that the thermal conductivities are essentially identical. To explain this discrepancy, let us compare Eqs. (4) and (13) for the thermal conductivity and recall that the specific heat is constant for relaxons and mode-specific for phonons. Also the SMA conductivity can be written in a form with a constant specific heat for each phonon, provided that we rescale the velocities as $v_\mu \to \sqrt{\frac{C_\mu}{VC}}\,v_\mu$ (consequently, the mean free path is also rescaled, $\Lambda^{\rm SMA}_\mu = v_\mu\,\tau^{\rm SMA}_\mu \to \sqrt{\frac{C_\mu}{VC}}\,\Lambda^{\rm SMA}_\mu$). After this transformation, velocities and mean free paths of phonons and relaxons in silicon are again within the same order of magnitude, although residual discrepancies still persist (see Appendix B for the spectra after rescaling). Therefore, the differences observed in silicon between the two pictures arise mainly from the different interpretation of the specific heat.
We note that other experimental and theoretical efforts have estimated heat mean free paths in silicon [8,48-52], obtaining values that are comparable to those of the phonon mean free paths, and different from the relaxon mean free paths presented here. It is important to stress, though, that at least one of these two assumptions was used: first, that results can be interpreted considering phonons as the heat carriers; and second, that surface or grain boundary scattering can be exploited as a tool to estimate heat mean free paths. In the present work, we have already discussed at length the limitations of the former assumption; as for the latter, we cannot compare our results with studies that rely on surface scattering, since the data presented here pertain to a homogeneous bulk crystal. It is nevertheless possible to drop the bulk condition and solve the BTE in the presence of surfaces, reconciling the different pictures; such a discussion goes beyond the scope of the present work and will be presented in an upcoming study.
VI. FURTHER PROPERTIES
A widely held assumption that is also violated by the exact BTE is the Matthiessen rule, which states that the total thermal resistivity (i.e. 1/k) is the sum of the resistivities of each independent scattering mechanism; however, the Matthiessen rule is an approximation [18] relying on the possibility of exactly decoupling scattering mechanisms. To probe this violation numerically, we computed the resistivities of normal, Umklapp and isotopic processes, or any combination of these, and combined them according to the Matthiessen rule. In Fig. 9 we show that, regardless of the particular decomposition, the conductivity obtained by imposing the Matthiessen sum deviates significantly from the exact conductivity.
FIG. 9. Comparison between the exact thermal conductivity and the Matthiessen sums of separate scattering mechanisms (by $k^N + k^U + k^I$ we mean $k^{-1} = (k^N)^{-1} + (k^U)^{-1} + (k^I)^{-1}$). Due to the correlation between scattering events, there is no decomposition for which the Matthiessen rule is obeyed at all temperatures. The only curve that approaches the exact thermal conductivity is the decomposition ($k^{N+U} + k^I$), just when the effect of isotopic scattering is negligible.
The only case in which a decomposition reproduces the exact result is when the effect of the separated mechanism is negligible; for the case shown in the figure, one can sum the resistance due to isotopes separately only at high temperatures, when it is small. Finally, one can prove that the total thermal conductivity is always smaller than or equal to the Matthiessen sum (see [18] or Appendix C); this is also verified in our calculations. Moreover, it is not possible to distinguish the contribution of each scattering mechanism to a relaxon relaxation time, at variance with the case of a phonon lifetime in the SMA. This is because it would require the eigenvalues of the sum of two matrices to be the sum of the eigenvalues of the two matrices, which is clearly not the case when the scattering matrix is not diagonal.
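The inequality recalled here (and proven in Appendix C) is easy to demonstrate on a toy model; the following sketch (Python; random positive-definite matrices standing in for the scattering matrices of two mechanisms) compares the exact conductivity with the Matthiessen sum:

```python
# Toy check: the Matthiessen sum overestimates the exact conductivity.
import numpy as np

rng = np.random.default_rng(3)
N = 6
def spd():
    M = rng.normal(size=(N, N))
    return M @ M.T + 0.1 * np.eye(N)

A1, A2 = spd(), spd()                 # two toy scattering mechanisms
b = rng.normal(size=N)                # toy drift vector, as in Eq. (28)

k = b @ np.linalg.solve(A1 + A2, b)   # exact: mechanisms summed in the matrix
k1 = b @ np.linalg.solve(A1, b)
k2 = b @ np.linalg.solve(A2, b)
k_matt = 1.0 / (1.0 / k1 + 1.0 / k2)  # Matthiessen combination

print(k, k_matt, k <= k_matt)         # the inequality holds
```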
As an added benefit, the direct diagonalisation of the scattering matrix brings clear insight into the numerical stability of current methods used to solve the BTE. In particular, we show in Appendix D that the iterative method [4,16,17], often used to study 2D materials, is numerically unstable for graphene at room temperature, due to the dominant contribution of the out-of-diagonal terms in the scattering matrix (this is exactly the case when the relaxon picture differs significantly from the phonon picture).
VII. CONCLUSIONS
In summary, we have shown that by choosing the eigenvectors of the scattering matrix as a basis, the linear BTE can be greatly simplified. These eigenvectors are collective excitations of phonon populations, termed relaxons, that are characterised by well-defined relaxation times and, in the homogeneous case, also by proper velocities and mean free paths. Thermal transport can thus be described as a kinetic theory of a gas of relaxons. The characterisation of relaxon properties provides a description of the thermal transport in terms of proper time scales, and in the steady-state homogeneous case of velocity and length scales. For clarity, we report in Table I a summary of relaxon characteristics and how they compare with phonons. This theory is applied here first to graphene at room temperature where, as is typical of 2D materials or of 3D solids at low temperatures, the failure of the SMA and of its picture of phonons as heat carriers becomes dramatic; and to silicon at room temperature, where, although the SMA yields reasonable thermal conductivities, the theory brings new insight in the microscopic interpretation of heat flux and its typical velocities. Finally, we have shown that the Matthiessen rule is violated in the exact BTE, with significant consequences for all systems in which the SMA does not hold. As a final remark, the concept of relaxons has been applied in this work in the context of phonons; however, similar arguments will hold for the electron BTE or other semiclassical transport models.
METHODS
First-principles simulations
Density-functional theory calculations have been performed with the Quantum-ESPRESSO distribution [45], using the local-density approximation and norm-conserving pseudopotentials from the PSLibrary [53]; for graphene a plane-wave cutoff of 90 Ry and a Methfessel-Paxton smearing of 0.02 Ry have been used, and for silicon a plane-wave cutoff of 100 Ry. Graphene is simulated with a slab geometry, using an optimized lattice parameter a = 4.607 Bohr and a cell height c = 3a; for silicon we find an optimized lattice parameter of 10.18 Bohr. The Brillouin zone is integrated with a Gamma-centered Monkhorst-Pack mesh of 24×24×1 points for graphene and 12×12×12 for silicon. Second- and third-order force constants are computed on meshes of 16×16×1 and 4×4×1 points respectively for graphene, and 8×8×8 and 4×4×4 for silicon, and are later Fourier-interpolated on finer meshes.
Thermal conductivity simulations
The scattering matrix $\tilde\Omega$ includes 3-phonon interactions and harmonic isotopic scattering [6,7] at natural abundances [54] (98.93% ¹²C, 1.07% ¹³C for carbon, and 92.22% ²⁸Si, 4.67% ²⁹Si, 3.09% ³⁰Si for silicon). For graphene, the scattering matrix is constructed using the same computational parameters of Ref. [29] (a Gaussian smearing of 10 cm⁻¹ and a mesh of 128×128×1 points for integrating the Brillouin zone), resulting in a matrix of order 98304, while for silicon we use a Gaussian smearing of 7 cm⁻¹ and a mesh of 30×30×30, yielding a matrix of order 162000. $\tilde\Omega$ is diagonalised exactly using the routine PDSYEV of the Scalapack library [55]. The simulation cell of graphene is renormalized using the interlayer distance of bulk graphite (c/a = 1.367), in order to have a thermal conductivity comparable with the 3D counterpart. We verified the correctness of the software implementation by ensuring that the thermal conductivity estimated with the diagonalization solver coincides with that computed with the variational method of Ref. [7] up to at least 4 significant digits. It is worth mentioning that these calculations are not prohibitively expensive and could be extended to other systems. The present software implementation computes $\tilde\Omega$, diagonalizes it and computes the conductivity of graphene in about 5 hours using 256 CPUs on the Piz Daint supercomputer of the Swiss National Supercomputer Center (CSCS), for a total of 1300 CPU hours (1000 of which are spent in the diagonalization). For silicon the calculation completed in 8 hours on 576 CPUs, for a total of 4600 CPU hours. Calculations have been managed using the AiiDA materials' informatics platform [56].

APPENDIX A: SCATTERING RATES

In this appendix we report the expressions for building the scattering matrix using 3-phonon and isotope scattering events, which are discussed in detail in Ref. [7].
APPENDIX A: SCATTERING RATES

In this appendix we report the expressions for building the scattering matrix from 3-phonon and isotope scattering events, which are discussed in detail in Ref. [7]. To make a connection with other studies, we note that most contemporary works have preferred to solve the BTE using a phonon deviation from equilibrium of the form $n_\mu = \bar n_\mu + \bar n_\mu(\bar n_\mu + 1)F_\mu$. Since the action of the collision operator must not change, we have the relation:
$$\sum_{\mu'} A_{\mu\mu'} F_{\mu'} = \sum_{\mu'} \Omega_{\mu\mu'}\,\Delta n_{\mu'}, \qquad (15)$$
where A is the scattering matrix when it acts on F, related to the scattering matrices used in our work by
$$\Omega_{\mu\mu'} = A_{\mu\mu'}\,\frac{1}{\bar n_{\mu'}(\bar n_{\mu'}+1)}, \qquad (16)$$
$$\tilde\Omega_{\mu\mu'} = \frac{1}{\sqrt{\bar n_\mu(\bar n_\mu+1)}}\, A_{\mu\mu'}\,\frac{1}{\sqrt{\bar n_{\mu'}(\bar n_{\mu'}+1)}}. \qquad (17)$$
The scattering rate for a phonon coalescence event is:
$$P^{\mu''}_{\mu\mu'} = \frac{2\pi}{N\hbar^2}\sum_{\mathbf G} |V^{(3)}(\mu,\mu',-\mu'')|^2\,\bar n_\mu \bar n_{\mu'}(\bar n_{\mu''}+1)\,\delta_{\mathbf q+\mathbf q'-\mathbf q'',\mathbf G}\;\delta(\hbar\omega_\mu + \hbar\omega_{\mu'} - \hbar\omega_{\mu''}), \qquad (18)$$
where N is the number of q points, G is a reciprocal-lattice vector and $V^{(3)}$ is the third derivative of the unit-cell energy $E_{\rm cell}$ with respect to the atomic displacements
$$V^{(3)}(\mu,\mu',\mu'') = \frac{\partial^3 E_{\rm cell}}{\partial X_\mu\,\partial X_{\mu'}\,\partial X_{\mu''}}, \qquad (19)$$
with
$$X_{\mathbf q s} = \frac{1}{N}\sum_{l b\alpha}\sqrt{\frac{\hbar}{2 M_b\,\omega_{\mathbf q s}}}\; z^{*,b\alpha}_{\mathbf q s}\, u_{b\alpha}(\mathbf R_l)\, e^{-i\mathbf q\cdot\mathbf R_l}, \qquad (20)$$
where b is an index running over the basis of atoms in the unit cell, $\mathbf R_l$ is a Bravais lattice vector identifying the l-th unit cell inside the crystal, α is a Cartesian index, $M_b$ is the mass of atom b, z is the phonon polarization vector and u is the vector of atomic displacements. The scattering rate for a phonon-isotope scattering event is:
$$P^{\rm isot}_{\mu\mu'} = \frac{\pi}{2N}\,\omega_\mu\omega_{\mu'}\Big[\bar n_\mu\bar n_{\mu'} + \frac{\bar n_\mu + \bar n_{\mu'}}{2}\Big] \sum_b g_b \Big|\sum_\alpha z^{*,b\alpha}_\mu z^{b\alpha}_{\mu'}\Big|^2\,\delta(\omega_\mu - \omega_{\mu'}), \qquad (21)$$
where
$$g_b = \frac{\big\langle (M_b - \langle M_b\rangle)^2\big\rangle}{\langle M_b\rangle^2}.$$
Combining these scattering rates, the scattering matrix A is:
$$A_{\mu\mu'} = \Big[\sum_{\mu'',\mu'''}\Big(P^{\mu''}_{\mu\mu'''} + \tfrac{1}{2} P^{\mu''\mu'''}_{\mu}\Big) + \sum_{\mu''} P^{\rm isot}_{\mu\mu''}\Big]\delta_{\mu\mu'} - \sum_{\mu''}\Big(P^{\mu'}_{\mu\mu''} - P^{\mu''}_{\mu\mu'} + P^{\mu'\mu''}_{\mu}\Big) + P^{\rm isot}_{\mu\mu'}. \qquad (22)$$
In the numerical implementation, the Dirac delta conserving the energy is replaced by a Gaussian smearing.
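One common convention for such a replacement is the normalized Gaussian sketched below (our illustration; the width used in practice is the smearing quoted above, and the exact normalization convention is an assumption here):

```python
import numpy as np

def smeared_delta(x, sigma):
    # Gaussian replacement for the energy-conserving Dirac delta,
    # normalized so that its integral over x equals 1.
    return np.exp(-(x / sigma) ** 2) / (np.sqrt(np.pi) * sigma)
```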
As the authors of Ref. [7] noted, the above expression guarantees that the scattering matrix A is symmetric and positive-definite also in the presence of a Gaussian smearing (other expressions, which would be equivalent with a Dirac delta function, may introduce spurious negative eigenvalues). By virtue of Eqs. (16) and (17), it follows that Ω̃ is symmetric but Ω is not, hence the necessity of the transformations (5) and (6). Finally, we recall that phonon lifetimes are related to the diagonal elements of the scattering matrices as:
$$A_{\mu\mu} = \frac{\bar n_\mu(\bar n_\mu+1)}{\tau_\mu}, \qquad (23)$$
$$\tilde\Omega_{\mu\mu} = \frac{1}{\tau_\mu}. \qquad (24)$$
APPENDIX B: SILICON THERMAL PROPERTIES
In this appendix we study the effect of the scaling of specific heat on velocities and mean free paths. While relaxon properties are as defined in the main text, phonon velocities and mean free paths are scaled as:
$$v_\mu \to \sqrt{\frac{C_\mu}{VC}}\; v_\mu, \qquad (25)$$
$$\Lambda^{\rm SMA}_\mu \to \sqrt{\frac{C_\mu}{VC}}\; \Lambda^{\rm SMA}_\mu. \qquad (26)$$
With this choice of scaling, we can write the SMA thermal conductivity as:
$$(k^{ij})_{\rm SMA} = \sum_\mu C\, v^i_\mu\,(\Lambda^j_\mu)_{\rm SMA}, \qquad (27)$$
which treats the specific heat in the same way as Eq. (13). In Figure 10 we report the comparison of these scaled phonon quantities with the relaxon properties in silicon. One can readily see that the two orders of magnitude of difference that appeared in Figs. 7 and 8 have almost disappeared. Most of the discrepancy is thus due to the treatment of the specific heat. Nevertheless, the largest phonon velocities are still a factor of 3 smaller than those of relaxons, and the two pictures do not perfectly overlap.
APPENDIX C: MATTHIESSEN RULE
Here we recall a known result [18] proving that the application of the Matthiessen rule results in an overestimation of the exact thermal conductivity. The BTE for a homogeneous system under a static temperature gradient can be written in matrix form (see for example Ref. [18] or, more recently, Ref. [7]):
$$A\phi = b, \qquad (28)$$
where A is related to the scattering matrix Ω via $A_{\mu\mu'} = \Omega_{\mu\mu'}\,\bar n_{\mu'}(\bar n_{\mu'}+1)$, $b_\mu = -\frac{\partial \bar n_\mu}{\partial T}\, v_\mu$ and φ is the deviation from equilibrium defined as $n_\mu = \bar n_\mu + \bar n_\mu(\bar n_\mu+1)\,\nabla T\,\phi_\mu$.
Another way of solving the BTE, besides the diagonalization approach discussed in the main article, is via the variational principle [18]. In particular, the solution of the BTE can be found from the minimization of the functional [18]
$$\mathcal F[\phi] = \frac{\langle\phi|A|\phi\rangle}{(\langle\phi|b\rangle)^2}. \qquad (29)$$
Let $\bar\phi$ be the function minimising $\mathcal F$. The minimum of $\mathcal F$ is directly proportional to the thermal resistivity ρ [18]; therefore we write
$$\rho = \frac{1}{k} = \mathcal F[\bar\phi]. \qquad (30)$$
Now, let's separate the scattering matrix into two different components (for example 3-phonon and isotopic scattering)
$$A = A_1 + A_2. \qquad (31)$$
The exact resistivity is given by:
$$\rho = \frac{\langle\bar\phi|A_1|\bar\phi\rangle + \langle\bar\phi|A_2|\bar\phi\rangle}{(\langle\bar\phi|b\rangle)^2}. \qquad (32)$$
The function $\bar\phi$ that minimises the functional defined by A will not, in general, be the function that minimises the functionals $\mathcal F_1$ and $\mathcal F_2$ defined by $A_1$ or $A_2$ alone. The functionals $\mathcal F_1$ and $\mathcal F_2$ are instead minimised by functions $\bar\phi_1$ and $\bar\phi_2$, respectively. By the variational principle,
$$\rho = \frac{\langle\bar\phi|A_1|\bar\phi\rangle}{(\langle\bar\phi|b\rangle)^2} + \frac{\langle\bar\phi|A_2|\bar\phi\rangle}{(\langle\bar\phi|b\rangle)^2} \;\geq\; \frac{\langle\bar\phi_1|A_1|\bar\phi_1\rangle}{(\langle\bar\phi_1|b\rangle)^2} + \frac{\langle\bar\phi_2|A_2|\bar\phi_2\rangle}{(\langle\bar\phi_2|b\rangle)^2} = \rho_1 + \rho_2. \qquad (33)$$
Alternatively, this can be written as
$$\frac{1}{k} \geq \frac{1}{k_1} + \frac{1}{k_2}, \qquad (34)$$
showing that the Matthiessen rule corresponds to the special case in which the equality holds exactly. More generally, its application leads to an overestimation of thermal conductivities.
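This variational argument is easy to confirm numerically; the following self-contained sketch (our illustration, with random positive-definite matrices standing in for the two scattering channels) evaluates $\rho = 1/(b^T A^{-1} b)$ and checks $\rho \geq \rho_1 + \rho_2$:

```python
import numpy as np

rng = np.random.default_rng(1)

def resistivity(A, b):
    # min_phi F[phi] = <phi|A|phi>/<phi|b>^2 is attained at phi ~ A^{-1} b,
    # giving rho = 1 / (b^T A^{-1} b).
    return 1.0 / (b @ np.linalg.solve(A, b))

def random_spd(n):
    X = rng.normal(size=(n, n))
    return X @ X.T + n * np.eye(n)   # well-conditioned SPD matrix

n = 50
A1, A2 = random_spd(n), random_spd(n)
b = rng.normal(size=n)

rho  = resistivity(A1 + A2, b)
rho1 = resistivity(A1, b)
rho2 = resistivity(A2, b)

print(rho, rho1 + rho2)
assert rho >= rho1 + rho2 - 1e-12    # Matthiessen sum underestimates resistivity
```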
APPENDIX D: ITERATIVE METHOD
In this appendix we examine the convergence properties of the iterative method for solving the BTE. The method can be formalised as follows. The steady-state homogeneous BTE in the phonon basis is:
$$\nabla T\cdot v_\mu\,\frac{\partial \bar n_\mu}{\partial T} = -\frac{1}{V}\sum_{\mu'}\Omega_{\mu\mu'}\, n_{\mu'}. \qquad (35)$$
This is a linear-algebra problem of the form $AF = b$, where $b_\mu = -v_\mu\,\hbar\omega_\mu\,\bar n_\mu(\bar n_\mu+1)$, F is defined by $n_\mu = \bar n_\mu + \bar n_\mu(\bar n_\mu+1)\,\nabla T\, F_\mu$, and $A_{\mu\mu'} = \Omega_{\mu\mu'}\,\bar n_{\mu'}(\bar n_{\mu'}+1)$. The iterative solution for F [4,16,17] can then be recast [7] as a geometric series
$$F = \sum_{j=0}^{\infty}\big[-(A^{d})^{-1}A^{od}\big]^j\,(A^{d})^{-1}\, b,$$
where $A^{d}$ and $A^{od}$ are respectively the diagonal and off-diagonal parts of A. This series converges if and only if all eigenvalues λ of $(A^{d})^{-1}A^{od}$ satisfy |λ| < 1. In Fig. 11 we show that in graphene |λ| > 1 for more than half of the spectrum, proving that the iterative method is numerically unstable for graphene at room temperature. In general, one might expect convergence issues for the iterative method whenever the relaxon picture differs significantly from the phonon picture and the contribution of the off-diagonal part is large compared to the diagonal one.

FIG. 11. Eigenvalues λ of the matrix $(A^{d})^{-1}A^{od}$ (see main text for the definition) for graphene at room temperature, ordered by their magnitude. The red dots, roughly half of the eigenvalue spectrum, indicate eigenvalues |λ| > 1 that cause a divergence of the iterative solution of the Boltzmann transport equation. Most of the unstable eigenvalues are greater than 1, with only one eigenvalue lower than −1.
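The convergence criterion can be illustrated on toy matrices; the sketch below (our own, not the authors' implementation) compares a strongly diagonally dominant "SMA-like" case with an off-diagonal dominated "graphene-like" case:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
X = rng.normal(size=(n, n))
b = rng.normal(size=n)

# Two toy SPD "scattering matrices": one strongly diagonally dominant
# (iteration converges) and one dominated by off-diagonal scattering
# (iteration diverges, as found for graphene).
for shift in (3.0, 0.2):
    A = X @ X.T + shift * n * np.eye(n)
    A_d = np.diag(np.diag(A))
    A_od = A - A_d
    radius = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(A_d, A_od))))

    # Partial sums of F = sum_j [-(A_d)^{-1} A_od]^j (A_d)^{-1} b
    F_exact = np.linalg.solve(A, b)
    F = np.zeros(n)
    term = np.linalg.solve(A_d, b)
    for _ in range(100):
        F += term
        term = -np.linalg.solve(A_d, A_od @ term)
    err = np.linalg.norm(F - F_exact) / np.linalg.norm(F_exact)
    print(f"spectral radius = {radius:.2f}, relative error after 100 iterations = {err:.2e}")
```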
FIG. 1. Schematic illustration of the equilibration of lattice vibrations after a thermal excitation. Each relaxon consists of a linear combination of phonons, which interact through scattering events among themselves, but are decoupled from phonons belonging to different relaxons. The relaxon decays exponentially to equilibrium, where it disappears at a rate determined by its relaxation time.
FIG. 3. Relaxation times and their contribution to the thermal conductivity of graphene at room temperature, considering relaxons or phonons as heat carriers. Relaxons tend to be longer lived than single-phonon excitations, with large contributions to thermal conductivity coming from excitations with relaxation time larger than 10^3 ps, whereas phonons have lifetimes mainly in the range 10-100 ps. The shaded area is a guide to the eye to stress that phonons form a continuous spectrum, while relaxons are discretized: thermal conductivity can be accurately described using a small number of relaxons.
FIG. 4. Comparison of relaxon and phonon velocities and their contributions to the room-temperature thermal conductivity of graphene. When approximating phonons as heat carriers, the velocity scale of thermal transport is set by the speed of sound (about 20 km/s for longitudinal acoustic phonons in graphene). Instead, relaxon velocities are at least an order of magnitude smaller, illustrating how much the phonon scattering slows down the heat flux.
FIG. 5. Analysis of phonon and relaxon mean free paths and their contributions to the thermal conductivity of graphene at room temperature. The spectrum of phonon mean free paths is peaked in the submicron region, whereas relaxon mean free paths are skewed to values larger than 1 µm. The two largest relaxon mean free paths, illustrating the maximum distance that heat flux can propagate inside the material before decaying, lie between 10 and 100 µm.
FIG. 6. Same as in Fig. 3, this time for silicon at room temperature: the spectrum of relaxation times and their contribution to thermal conductivity, considering relaxons or phonons as heat carriers. In this material, the relaxation times estimated with the SMA are relatively close to the exact relaxation times of the system, with contributions ranging from a few picoseconds up to approximately 10^4 ps.
FIG. 7. Same as in Fig. 4 but for silicon: the comparison of relaxon and phonon velocities and their contributions to room-temperature thermal conductivity. Similar to the case of graphene, the velocity of the SMA description is set by the velocity of phonons, with long-wavelength acoustic modes (those with the highest velocities) giving the largest contributions. Relaxon velocities are instead smaller by about two orders of magnitude, indicating that the heat flux is slowed down by the action of phonon scattering.
FIG. 8. Analysis of phonon and relaxon mean free paths and their contributions to the thermal conductivity of silicon at room temperature. The difference between velocities is carried over to this spectrum, so that relaxon mean free paths are shifted to smaller values with respect to phonons. While our estimates of mean free paths for phonons extend up to hundreds of micrometers, relaxon mean free paths barely reach the micrometer scale.
FIG. 9. Study of the failure of the Matthiessen rule in graphene and silicon. The total thermal conductivity (black line) is compared to conductivities computed through various Matthiessen sums, where the total sum is given by the sum of the reciprocals.
FIG. 10. Top panel: comparison of relaxon velocities with scaled phonon velocities. Bottom panel: same for mean free paths.
TABLE I. A comparison of the main properties of phonons and relaxons.

Definition. Phonon: eigenstate of the harmonic Hamiltonian. Relaxon: eigenstate of the collision matrix.
Physical meaning. Phonon: collective excitation of atomic displacements; the quantum of vibrational energy. Relaxon: collective excitation of phonon populations; the elementary carrier of heat.
Exact quantities. Phonon: lifetime, velocity and mean free path of the vibration; a quasiparticle (energy, wavevector, dispersion relations). Relaxon: relaxation time, velocity and mean free path of the heat carrier; no dispersion relations.
Thermal conductivity. Phonon: only obtained as solution of the BTE. Relaxon: obtained as a kinetic theory of the relaxon gas.
ACKNOWLEDGEMENTS

We gratefully acknowledge F. Mauri and G. Fugallo for useful discussions; the Swiss National Science Foundation under project ID 200021 143636 and the National Centre of Competence in Research MARVEL; the Max Planck-EPFL Center for Molecular Nanoscience and Technology; and the Swiss National Supercomputing Center CSCS under project ID s580.
[1] R. Peierls, Zur Kinetischen Theorie der Wärmeleitung in Kristallen, Ann. Phys. 395, 1055 (1929).
[2] M. S. Green, Markoff Random Processes and the Statistical Mechanics of Time-Dependent Phenomena. II. Irreversible Processes in Fluids, J. Chem. Phys. 22, 398 (1954).
[3] R. Kubo, Statistical-Mechanical Theory of Irreversible Processes. I. General Theory and Simple Applications to Magnetic and Conduction Problems, J. Phys. Soc. Jpn. 12, 570 (1957).
[4] D. A. Broido, A. Ward, and N. Mingo, Lattice Thermal Conductivity of Silicon from Empirical Interatomic Potentials, Phys. Rev. B 72, 014308 (2005).
[5] D. A. Broido, M. Malorny, G. Birner, N. Mingo, and D. A. Stewart, Intrinsic Lattice Thermal Conductivity of Semiconductors from First Principles, Appl. Phys. Lett. 91, 231922 (2007).
[6] J. Garg, N. Bonini, B. Kozinsky, and N. Marzari, Role of Disorder and Anharmonicity in the Thermal Conductivity of Silicon-Germanium Alloys: A First-Principles Study, Phys. Rev. Lett. 106, 045901 (2011).
[7] G. Fugallo, M. Lazzeri, L. Paulatto, and F. Mauri, Ab Initio Variational Approach for Evaluating Lattice Thermal Conductivity, Phys. Rev. B 88, 045430 (2013).
[8] K. Esfarjani, G. Chen, and H. T. Stokes, Heat Transport in Silicon from First-Principles Calculations, Phys. Rev. B 84, 085204 (2011).
[9] A. Ward, D. A. Broido, D. A. Stewart, and G. Deinzer, Ab Initio Theory of the Lattice Thermal Conductivity in Diamond, Phys. Rev. B 80, 125203 (2009).
[10] M. N. Luckyanova, J. Garg, K. Esfarjani, A. Jandl, M. T. Bulsara, A. J. Schmidt, A. J. Minnich, S. Chen, M. S. Dresselhaus, Z. Ren, E. A. Fitzgerald, and G. Chen, Coherent Phonon Heat Conduction in Superlattices, Science 338, 936 (2012).
[11] D. G. Cahill, P. V. Braun, G. Chen, D. R. Clarke, S. Fan, K. E. Goodson, P. Keblinski, W. P. King, G. D. Mahan, A. Majumdar, H. J. Maris, S. R. Phillpot, E. Pop, and L. Shi, Nanoscale Thermal Transport. II. 2003-2012, Appl. Phys. Rev. 1, 011305 (2014).
[12] D. Donadio and G. Galli, Atomistic Simulations of Heat Transport in Silicon Nanowires, Phys. Rev. Lett. 102, 195901 (2009).
[13] A. J. H. McGaughey and M. Kaviany, Quantitative Validation of the Boltzmann Transport Equation Phonon Thermal Conductivity Model under the Single-Mode Relaxation Time Approximation, Phys. Rev. B 69, 094303 (2004).
[14] S. G. Volz and G. Chen, Molecular Dynamics Simulation of Thermal Conductivity of Silicon Nanowires, Appl. Phys. Lett. 75, 2056 (1999).
[15] G. Zhang and B. Li, Thermal Conductivity of Nanotubes Revisited: Effects of Chirality, Isotope Impurity, Tube Length, and Temperature, J. Chem. Phys. 123, 114714 (2005).
[16] M. Omini and A. Sparavigna, Heat Transport in Dielectric Solids with Diamond Structure, Nuovo Cimento D 19, 1537 (1997).
[17] M. Omini and A. Sparavigna, Beyond the Isotropic-Model Approximation in the Theory of Thermal Conductivity, Phys. Rev. B 53, 9064 (1996).
[18] J. Ziman, Electrons and Phonons: The Theory of Transport Phenomena in Solids, Oxford Classic Texts in the Physical Sciences (Oxford University Press, USA, 2001).
[19] R. A. H. Hamilton and J. E. Parrot, Variational Calculation of the Thermal Conductivity of Germanium, Phys. Rev. 178, 1284 (1969).
[20] G. P. Srivastava, Derivation and Calculation of Complementary Variational Principles for the Lattice Thermal Conductivity, J. Phys. C: Solid St. Phys. 9, 3037 (1976).
[21] R. A. Guyer and J. A. Krumhansl, Solution of the Linearized Phonon Boltzmann Equation, Phys. Rev. 148, 766 (1966).
[22] R. J. Hardy, Phonon Boltzmann Equation and Second Sound in Solids, Phys. Rev. B 2, 1193 (1970).
[23] L. Chaput, Direct Solution to the Linearized Phonon Boltzmann Equation, Phys. Rev. Lett. 110, 265506 (2013).
[24] J. Callaway, Model for Lattice Thermal Conductivity at Low Temperatures, Phys. Rev. 113, 1046 (1959).
[25] L. Lindsay, D. A. Broido, and N. Mingo, Flexural Phonons and Thermal Transport in Graphene, Phys. Rev. B 82, 115427 (2010).
[26] A. Cepellotti, G. Fugallo, L. Paulatto, M. Lazzeri, F. Mauri, and N. Marzari, Phonon Hydrodynamics in Two-Dimensional Materials, Nat. Commun. 6, 6400 (2015).
[27] L. Lindsay and D. A. Broido, Enhanced Thermal Conductivity and Isotope Effect in Single-Layer Hexagonal Boron Nitride, Phys. Rev. B 84, 155421 (2011).
[28] A. Jain and A. J. H. McGaughey, Strongly Anisotropic In-Plane Thermal Transport in Single-Layer Black Phosphorene, Sci. Rep. 5, 8501 (2015).
[29] G. Fugallo, A. Cepellotti, L. Paulatto, M. Lazzeri, N. Marzari, and F. Mauri, Thermal Conductivity of Graphene and Graphite: Collective Excitations and Mean Free Paths, Nano Lett. 14, 6109 (2014).
[30] M. Gill-Comeau and L. J. Lewis, On the Importance of Collective Excitations for Thermal Transport in Graphene, Appl. Phys. Lett. 106, 193104 (2015).
[31] S. Lee, D. Broido, K. Esfarjani, and G. Chen, Hydrodynamic Phonon Transport in Suspended Graphene, Nat. Commun. 6, 6290 (2015).
[32] G. Barbarino, C. Melis, and L. Colombo, Intrinsic Thermal Conductivity in Monolayer Graphene is Ultimately Upper Limited: A Direct Estimation by Atomistic Simulations, Phys. Rev. B 91, 035416 (2015).
[33] C. de Tomas, A. Cantarero, A. F. Lopeandia, and F. X. Alvarez, From Kinetic to Collective Behavior in Thermal Transport on Semiconductors and Semiconductor Nanostructures, J. Appl. Phys. 115, 164314 (2014).
[34] R. J. Hardy, Energy-Flux Operator for a Lattice, Phys. Rev. 132, 168 (1963).
[35] R. J. Hardy, Lowest-Order Contribution to the Lattice Thermal Conductivity, J. Math. Phys. 6, 1749 (1965).
[36] J. A. Krumhansl, Thermal Conductivity of Insulating Crystals in the Presence of Normal Processes, Proc. Phys. Soc. 85, 921 (1965).
[37] A. A. Balandin, S. Ghosh, W. Bao, I. Calizo, D. Teweldebrhan, F. Miao, and C. N. Lau, Superior Thermal Conductivity of Single-Layer Graphene, Nano Lett. 8, 902 (2008).
[38] S. Baroni, S. de Gironcoli, A. Dal Corso, and P. Giannozzi, Phonons and Related Crystal Properties from Density-Functional Perturbation Theory, Rev. Mod. Phys. 73, 515 (2001).
[39] P. Giannozzi, S. de Gironcoli, P. Pavone, and S. Baroni, Ab Initio Calculation of Phonon Dispersions in Semiconductors, Phys. Rev. B 43, 7231 (1991).
[40] A. Debernardi, S. Baroni, and E. Molinari, Anharmonic Phonon Lifetimes in Semiconductors from Density-Functional Perturbation Theory, Phys. Rev. Lett. 75, 1819 (1995).
[41] L. Paulatto, F. Mauri, and M. Lazzeri, Anharmonic Properties from a Generalized Third-Order Ab Initio Approach: Theory and Applications to Graphite and Graphene, Phys. Rev. B 87, 214303 (2013).
[42] M. Lazzeri and S. de Gironcoli, First-Principles Study of the Thermal Expansion of Be(1010), Phys. Rev. B 65, 245402 (2002).
[43] S. Baroni, P. Giannozzi, and A. Testa, Green's-Function Approach to Linear Response in Solids, Phys. Rev. Lett. 58, 1861 (1987).
[44] X. Gonze and J.-P. Vigneron, Density-Functional Approach to Nonlinear-Response Coefficients of Solids, Phys. Rev. B 39, 13120 (1989).
[45] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, I. Dabo, A. Dal Corso, S. de Gironcoli, S. Fabris, G. Fratesi, R. Gebauer, U. Gerstmann, C. Gougoussis, A. Kokalj, M. Lazzeri, L. Martin-Samos, N. Marzari, F. Mauri, R. Mazzarello, S. Paolini, A. Pasquarello, L. Paulatto, C. Sbraccia, S. Scandolo, G. Sclauzero, A. P. Seitsonen, A. Smogunov, P. Umari, and R. M. Wentzcovitch, QUANTUM ESPRESSO: a Modular and Open-Source Software Project for Quantum Simulations of Materials, J. Phys.: Condens. Matter 21, 395502 (2009).
[46] N. Bonini, J. Garg, and N. Marzari, Acoustic Phonon Lifetimes and Thermal Transport in Free-Standing and Strained Graphene, Nano Lett. 12, 2673 (2012).
[47] Y. Kuang, L. Lindsay, and B. Huang, Unusual Enhancement in Intrinsic Thermal Conductivity of Multilayer Graphene by Tensile Strains, Nano Lett. 15, 6121 (2015).
[48] J. Garg, N. Bonini, and N. Marzari, First-Principles Determination of Phonon Lifetimes, Mean Free Paths, and Thermal Conductivities in Crystalline Materials: Pure Silicon and Germanium, in Length-Scale Dependent Phonon Interactions, Topics in Applied Physics Vol. 128, edited by S. L. Shindé and G. P. Srivastava (Springer, New York, 2014).
[49] P. Jiang, L. Lindsay, and Y. K. Koh, Role of Low-Energy Phonons with Mean-Free-Paths >0.8 µm in Heat Conduction in Silicon, J. Appl. Phys. 119, 245705 (2016).
[50] J. Cuffe, J. K. Eliason, A. A. Maznev, K. C. Collins, J. A. Johnson, A. Shchepetov, M. Prunnila, J. Ahopelto, C. M. Sotomayor Torres, G. Chen, and K. A. Nelson, Reconstructing Phonon Mean-Free-Path Contributions to Thermal Conductivity Using Nanoscale Membranes, Phys. Rev. B 91, 245423 (2015).
[51] A. J. Minnich, J. A. Johnson, A. J. Schmidt, K. Esfarjani, M. S. Dresselhaus, K. A. Nelson, and G. Chen, Thermal Conductivity Spectroscopy Technique to Measure Phonon Mean Free Paths, Phys. Rev. Lett. 107, 095901 (2011).
[52] K. T. Regner, D. P. Sellan, Z. Su, C. H. Amon, A. J. H. McGaughey, and J. A. Malen, Broadband Phonon Mean Free Path Contributions to Thermal Conductivity Measured Using Frequency Domain Thermoreflectance, Nat. Commun. 4, 1640 (2013).
[53] A. Dal Corso, PsLibrary, http://qe-forge.org/gf/project/pslibrary/ (2012).
[54] M. E. Wieser, N. Holden, T. B. Coplen, J. K. Boehlke, M. Berglund, W. A. Brand, P. De Bièvre, M. Groening, R. D. Loss, J. Meija, T. Hirata, T. Prohaska, R. Schoenberg, G. O'Connor, T. Walczyk, S. Yoneda, and X.-K. Zhu, Atomic Weights of the Elements 2011 (IUPAC Technical Report), Pure Appl. Chem. 85, 883 (2013).
[55] L. S. Blackford, J. Choi, A. Cleary, E. D'Azevedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley, ScaLAPACK Users' Guide (Society for Industrial and Applied Mathematics, Philadelphia, PA, 1997).
[56] G. Pizzi, A. Cepellotti, R. Sabatini, N. Marzari, and B. Kozinsky, AiiDA: Automated Interactive Infrastructure and Database for Computational Science, Comput. Mater. Sci. 111, 218 (2016).
| [] |
[
"Competition between two-photon driving, dissipation and interactions in bosonic lattice models: an exact solution",
"Competition between two-photon driving, dissipation and interactions in bosonic lattice models: an exact solution"
] | [
"David Roberts \nPritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA\n\nDepartment of Physics\nUniversity of Chicago\nChicago, PhysicalILUSA\n\nPritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA\n\nDepartment of Physics\nUniversity of Chicago\nChicagoILUSA\n",
"A A Clerk \nPritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA\n\nPritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA\n"
] | [
"Pritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA",
"Department of Physics\nUniversity of Chicago\nChicago, PhysicalILUSA",
"Pritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA",
"Department of Physics\nUniversity of Chicago\nChicagoILUSA",
"Pritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA",
"Pritzker School of Molecular Engineering\nUniversity of Chicago\nChicagoILUSA"
] | [] | We present an exact solution in arbitrary dimensions for the steady states of a class of quantum driven-dissipative bosonic models, where a set of modes is subject to arbitrary two-photon driving, single-photon loss and a global Hubbard (or Kerr)-like interaction. Our solutions reveal a wealth of striking phenomena, including the emergence of dissipative phase transitions, nontrivial mode competition physics and symmetry breaking, and the stabilization of many-body SU(1,1) pair coherent states. Our exact solutions enable the description of spatial correlations, and are fully valid in regimes where traditional mean-field and semiclassical approaches break down. | 10.1103/physrevlett.130.063601 | [
"https://export.arxiv.org/pdf/2208.05451v2.pdf"
] | 251,468,321 | 2208.05451 | 5d521ab3af916af22bf6a5cb8500ff6d1cbf2ae7 |
Competition between two-photon driving, dissipation and interactions in bosonic lattice models: an exact solution
David Roberts
Pritzker School of Molecular Engineering
University of Chicago
ChicagoILUSA
Department of Physics
University of Chicago
ChicagoILUSA
Pritzker School of Molecular Engineering
University of Chicago
ChicagoILUSA
Department of Physics
University of Chicago
ChicagoILUSA
A A Clerk
Pritzker School of Molecular Engineering
University of Chicago
ChicagoILUSA
Pritzker School of Molecular Engineering
University of Chicago
ChicagoILUSA
Competition between two-photon driving, dissipation and interactions in bosonic lattice models: an exact solution
We present an exact solution in arbitrary dimensions for the steady states of a class of quantum driven-dissipative bosonic models, where a set of modes is subject to arbitrary two-photon driving, single-photon loss and a global Hubbard (or Kerr)-like interaction. Our solutions reveal a wealth of striking phenomena, including the emergence of dissipative phase transitions, nontrivial mode competition physics and symmetry breaking, and the stabilization of many-body SU(1,1) pair coherent states. Our exact solutions enable the description of spatial correlations, and are fully valid in regimes where traditional mean-field and semiclassical approaches break down.
Introduction. Spurred both by applications to quantum information and by the advent of controllable dissipative quantum simulators [1][2][3][4][5], there is renewed interest in exploring driven-dissipative bosonic quantum systems in the many-body limit (see e.g. [6][7][8][9][10][11][12][13][14][15]). Of particular interest are the possibility of dissipative quantum phase transitions and the emergence of highly nonthermal steady states. While a variety of numerical approaches have been devised to study such systems, they have limitations. Conventional Gutzwiller mean-field approaches (see e.g. [16][17][18][19]) are unable to account for strong correlations, whereas matrix-product-state methods (see e.g. [20]) are largely restricted to 1D systems. Alternative numerical approaches for 2D exist [21,22], but these can become numerically infeasible for large systems. Given this, the ability to have exact analytic solutions for higher-dimensional models would be extremely valuable.
In this Letter, we address this outstanding challenge. We introduce a class of strongly interacting, driven-dissipative bosonic models, and show that it is possible to analytically describe their dissipative steady states in arbitrary dimensions. The basic system is shown in Fig. 1: a set of bosonic modes is subject to arbitrary two-photon driving (both on-site and between sites), as well as to Markovian single-photon loss and a global Hubbard (Kerr) interaction that depends on total photon number. While there are no conventional hopping interactions, one still has a lattice structure defined by the intersite two-photon drives. We show that the steady-state density matrix of this model is amenable to exact solution via the hidden time-reversal symmetry method [23,24]. This method is related to other quantum optical solution methods [25][26][27][28][29], though attempts to use these in the many-body limit were unsuccessful [30,31].
Our exact solution reveals a wealth of physical phenomena. For weak driving, one sees the emergence of phase transition behaviour as system size is increased, with singularities arising in the thermodynamic limit from the merging of discrete photonic resonances. Unlike well-studied single-site models [32], the phase transition physics here can occur far from the many-photon semiclassical limit, and can show marked deviations from mean-field theory predictions. We also show surprising connections to the representation theory of SU(1,1). Strikingly, we find that with appropriate tuning, the driven-dissipative steady state is directly related to a non-trivial many-body generalization of SU(1,1) pair coherent states [33][34][35].
We also find surprising behaviour in more strongly-driven regimes: the system can exhibit striking symmetry-breaking phenomena and mode-competition physics, with the exact solution again providing crucial insights. We stress that the class of models we study could be directly realized in e.g. superconducting quantum circuit experiments, and can be viewed as a many-body extension of the driven Kerr parametric oscillator systems that are being studied extensively in the context of bosonic error correction [36,37].
Two-photon driven global interaction models. We consider a set of N bosonic modes (lowering operators $\hat a_j$), subject to arbitrary two-photon (parametric) drives (amplitudes $M_{ij}$), as well as a global Hubbard interaction (i.e. equal-magnitude self-Kerr and cross-Kerr interactions U/N). Assuming all drives to have an identical detuning ∆ from resonance, and working in the common rotating frame, the coherent system dynamics is given by:

$$\hat H = \frac{U}{N}\Big(\sum_j \hat n_j\Big)^2 - \Delta\sum_j \hat n_j + \Big[\sum_{i,j} M_{ij}\,\hat a^\dagger_i\hat a^\dagger_j + \text{h.c.}\Big], \qquad (1)$$
where $\hat n_j \equiv \hat a^\dagger_j\hat a_j$. While our solution technique is more general, we focus here on the case where our modes live on the sites of a D-dimensional hypercubic lattice and we have translational invariance, with $M_{ii} = G$ and off-diagonal $M_{ij} = \Lambda/2D$ if i, j are nearest-neighbour sites, zero otherwise. This represents a modified two-photon driven Bose-Hubbard model, with single-particle hopping replaced by p-wave pairing terms, and with the interaction made global. We also include dissipation: independent Markovian single-particle loss on each site. The full dynamics is thus described by the Lindblad master equation
$$\partial_t\hat\rho = -i[\hat H,\hat\rho] + \sum_j \kappa\,\mathcal D[\hat a_j]\hat\rho \equiv \mathcal L\hat\rho, \qquad (2)$$
where $\mathcal D[\hat X]\hat\rho \equiv \hat X\hat\rho\hat X^\dagger - \frac{1}{2}\{\hat X^\dagger\hat X,\hat\rho\}$ denotes the standard dissipative superoperator, constructed from an arbitrary linear operator $\hat X$ acting on the Hilbert space of our system. We note that related two-photon driven many-body bosonic models have been recently studied numerically [12,14,15]. Eq. (1) exhibits a generic tension common to many driven-dissipative systems. The drives favour populating the system with pairs of photons, creating squeezing correlations. This is opposed by the losses, the energy detuning ∆ (which makes pair addition non-resonant), and most crucially the interaction U (which acts like a number-dependent detuning). This yields the possibility of phase transitions, where a high density could self-consistently make the drives resonant. While there is no conventional hopping, the nonlocal pair drives can create spatial correlations (and act like an "Andreev-reflection" hopping process). Note that our model could be realized in a variety of setups, including superconducting circuits and more conventional quantum optical platforms (see [38] for a simple circuit implementation of our model). We also note that our solution is even more general than Eq. (1). As shown in [38], for a given set of drive amplitudes $M_{ij}$, there exists a class of standard hopping terms that can be added to $\hat H$ without changing the dissipative steady state. We can thus describe, e.g., bipartite lattices with local hopping and pairing terms.
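For very small lattices, Eq. (2) can also be integrated by brute force, which is useful for benchmarking the analytics. A minimal Python sketch using QuTiP (our own illustration; the two-site geometry, Fock cutoff and parameter values are arbitrary choices, not taken from the paper):

```python
import numpy as np
import qutip as qt

# Two-site instance of Eqs. (1)-(2) with an (arbitrary) Fock cutoff.
n_sites, n_fock = 2, 10
U, Delta, kappa = 1.0, 0.5, 0.1
G, Lam = 0.2, 0.25                      # on-site and inter-site pair drives

def embed(site, op):
    ops = [qt.qeye(n_fock)] * n_sites
    ops[site] = op
    return qt.tensor(ops)

a = [embed(j, qt.destroy(n_fock)) for j in range(n_sites)]
Ntot = sum(aj.dag() * aj for aj in a)

# Pair-drive matrix M_ij for a minimal two-site choice
M = np.array([[G, Lam / 2],
              [Lam / 2, G]])
H_drive = sum(M[i, j] * a[i].dag() * a[j].dag()
              for i in range(n_sites) for j in range(n_sites))
H = (U / n_sites) * Ntot**2 - Delta * Ntot + H_drive + H_drive.dag()

c_ops = [np.sqrt(kappa) * aj for aj in a]   # single-photon loss on each site
rho_ss = qt.steadystate(H, c_ops)
print("mean density:", qt.expect(Ntot, rho_ss) / n_sites)
```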
Our goal in this work is to understand the dissipative steady state $\hat\rho_{ss}$ of our system, which satisfies $\mathcal L\hat\rho_{ss} = 0$. Surprisingly, for all parameter values and dimensionalities, this can be done exactly and analytically, using the hidden TRS (hTRS) / coherent quantum absorber approach introduced in [23,24]. This method postulates the existence of an anti-unitary operator $\hat T$, in terms of which the associated purification of $\hat\rho_{ss}$ (which lives in a doubled Hilbert space)
$$\hat\rho_{ss} \equiv \text{Tr}_R\, |\Psi_{\hat T}\rangle\langle\Psi_{\hat T}|, \qquad |\Psi_{\hat T}\rangle \equiv \sum_n \sqrt{p_n}\,|n\rangle_L\,\hat T|n\rangle_R, \qquad (3)$$
satisfies a generalized symmetry constraint [23]. Here $|n\rangle$, $p_n$ are the eigenvectors and eigenvalues of $\hat\rho_{ss}$, L denotes states in the physical Hilbert space, and R denotes states in the auxiliary Hilbert space, which is another copy of the physical Hilbert space. The ansatz that $\hat T$ is a hTRS implies a set of conditions on $|\Psi_{\hat T}\rangle$ that must be solved. For this system, this can be done analytically [38]. The resulting solution for the pure state $|\Psi_{\hat T}\rangle$ has a striking form. It describes an unusual kind of pair condensate: all particles occupy the same two-body wavefunction, whose spatial structure is determined by the driving amplitudes $M_{ij}$. We find [38]:
$$|\Psi_{\hat T}\rangle = \sum_{m=0}^{\infty} \frac{c_m}{m!}\,\hat K_+^m|\Omega\rangle, \qquad \hat K_+ := \frac{N}{2U}\sum_{ij} M_{ij}\,\hat\alpha^\dagger_i\hat\alpha^\dagger_j, \qquad (4)$$
where $\hat K_+$ is the effective pair-creation operator, $\hat\alpha_j \equiv (\hat a_{j,L} + \hat a_{j,R})/\sqrt 2$, and $|\Omega\rangle \equiv |0\rangle_L|0\rangle_R$ is the vacuum. The coefficients $c_m$ in the expansion take the simple form
$$c_m \propto (-1)^m/(\delta)_m, \qquad (5)$$
where $(\delta)_m := \delta(\delta+1)\cdots(\delta+m-1)$ denotes the Pochhammer symbol (rising factorial), and where the dimensionless detuning parameter δ is
$$\delta := 1 - N\Delta_{\rm eff}/2U, \qquad \Delta_{\rm eff} := \Delta + i\kappa/2. \qquad (6)$$
We stress that when U ≠ 0, this pure-state pair condensate is highly non-Gaussian and exhibits Wigner-function negativity. The parameter dependence of this state is also remarkable. The global Hubbard interaction U, along with the detuning ∆ and loss κ, determines the effective "fugacity" of our pair gas via the $c_m$ coefficients. In contrast, all spatial structure (encoded in $M_{ij}$) is captured entirely by the two-body "wavefunction" of each paired boson. Finally, the resulting dissipative steady state is non-thermal, in that it cannot be written as $\exp(-\beta\hat H)$ for any β [38].
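The coefficients in Eq. (5) are easy to evaluate numerically; the short sketch below (our illustration) prints the fugacity ratios and anticipates the multiphoton resonances discussed next:

```python
import numpy as np

# Fugacity ratios c_{m+1}/c_m = -1/(delta + m) from Eq. (5). When the real
# part of delta approaches a negative integer -n, the ratio at m = n blows up
# (cut off only by kappa), signalling a multiphoton resonance.
N, U, kappa = 100, 1.0, 0.01
for Delta in (0.02, 0.04, 0.06):        # the resonances Delta_n = 2U(n+1)/N
    delta = 1 - N * (Delta + 1j * kappa / 2) / (2 * U)
    ratios = [abs(-1 / (delta + m)) for m in range(4)]
    print(f"Delta = {Delta:.2f}:", np.round(ratios, 2))
```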
Emergence of phase transitions. The exact solution allows us to study the emergence of dissipative phase transitions as the number of sites N becomes large, i.e. in the thermodynamic limit. This can be done for arbitrary dimensionality D, and while still remaining in low-density regimes where semiclassical approximations would fail. We find a direct connection between first-order phase transitions that occur at large N, and discrete multiphoton resonances that can be resolved at smaller N. This is seen clearly in Fig. 2(a), which shows the average steady-state photon density versus ∆ in a D = 2 model, for different system sizes. The discrete resonances at modest N occur when the dimensionless detuning δ is close to a negative integer. The exact solution tells us that when δ = −n + ε with |ε| ≪ 1, the relative "fugacity" between the n + 1 and n pair configurations diverges as ε → 0: $c_{n+1}/c_n = -1/(n+\delta) = O(\varepsilon^{-1})$. This divergence (cut off by κ) leads to an enhanced photon number, and thus to sharply defined resonances occurring at detunings $\Delta_n = 2U(n+1)/N$ (see Fig. 2(a)). As N → ∞, the spacing between resonances vanishes, leading to a first-order phase transition where the density exhibits a jump as a function of ∆. Fig. 2(a) also shows a comparison against the predictions of a simple semiclassical mean-field theory (see [38] for more details, as well as comparisons to Gutzwiller mean-field theory).

FIG. 2. Driven-dissipative phase transitions. (a) Average density n̄ versus detuning ∆ for various sized 2D square lattices (periodic boundary conditions, κ = 0.01U, G = U/5, Λ = U/4). As system size increases, discrete resonances merge to yield a jump in the density and a first-order phase transition. We also show the predictions of a basic semiclassical mean-field theory, which predicts a zero-density solution that cannot be shown here due to the log scale on the y-axis. (b) Here, we attempt to distinguish the bunched (red squares) and antibunched (blue circles) phases via their correlations, respectively single-particle (left panel) and density-density (right panel) correlations. We choose ∆ = +3U as representative of the bunched phase and ∆ = −3U as representative of the antibunched phase. Both plots show data for an N = 100 site periodic lattice with D = 1. All other parameters are the same as in panel (a). All results are computed using the exact solution in Eq. (4).
FIG. 3. Phase diagram for D = 0. (a) Average density as a function of detuning ∆ and loss κ, with N = 500, Λ = 0, and G = U. Phase boundaries can be seen; the critical damping value κ_c is also indicated: for κ > κ_c, the first-order PT vanishes. (b) Asymptotic long-distance behavior of the density-density correlation function, as captured by $g^{(2)}_\infty$ (c.f. Eq. (7)); the sign of this quantity more clearly distinguishes the two relevant phases in the model. A critical point $\Delta^c_{\rm eff} := \Delta_c + i\kappa_c/2$ marks the exact location where $g^{(2)}_\infty$ changes sign.

A further virtue of the exact solution is that it gives full access to spatial correlations. We find that these correlations provide a much better way of distinguishing phases compared to purely local observables. In the large-N limit, two-point equal-time correlators in the steady state such as $\langle \hat a_{i+r}\hat a_i\rangle_{ss}$, $\langle \hat a^\dagger_{i+r}\hat a_i\rangle_{ss}$ always decay exponentially with distance (see Fig. 2(b)). In stark contrast, the global Hubbard interaction generates long-range (but weak) density-density correlations. To study this quantitatively, we define in D = 1 the reduced density-density correlator

$$g^{(2)}(i,r) := \frac{\langle \hat n_{i+r}\hat n_i\rangle_{ss} - \bar n^2}{\bar n^2}. \qquad (7)$$
Here, $\bar n \equiv \langle\hat n_j\rangle_{ss}$ is the mean onsite occupation in the steady state, and we note that $g^{(2)}(i,r)$ is independent of i away from boundaries. An analogous definition holds for D > 1. We find that the two phases of our model can be cleanly distinguished by the sign of the large-distance density-density correlations, i.e. by $g^{(2)}_\infty \equiv \lim_{|r|\to\infty} g^{(2)}(r)$. We call the phase where $g^{(2)}_\infty > 0$ a "bunched" phase, where density fluctuations are positively correlated at long distances, and the remaining phase with $g^{(2)}_\infty < 0$ an "antibunched" phase. When κ is sufficiently small, these phases are connected by the first-order phase transition mentioned above. The corresponding jump in density is accompanied by a sign change in $g^{(2)}_\infty$, see Fig. 2(b), right panel. We also note that for modest values of N, the multiphoton resonance physics described above can also lead to interesting structures resembling Mott lobes [9,39], if one looks at intersite correlations. This is shown in Fig. 1(b).
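The correlator of Eq. (7) can be evaluated on a brute-force steady state as well; the snippet below continues the two-site QuTiP sketch after Eq. (2) and reuses its `a` and `rho_ss` (with r = 1 the only available separation there):

```python
# Reduced density-density correlator of Eq. (7), continuing the two-site
# QuTiP sketch above (reuses the mode list `a` and the state `rho_ss`).
n_ops = [aj.dag() * aj for aj in a]
nbar = qt.expect(n_ops[0], rho_ss)
g2 = (qt.expect(n_ops[0] * n_ops[1], rho_ss) - nbar**2) / nbar**2
print("g2(r=1) =", g2)
```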
Criticality in the D = 0 model. The above physics becomes especially clear in the limit where Λ ≡ 0, i.e. purely local driving. There is no remaining spatial structure, hence we call this the D = 0 limit. As we saw in Fig. 2, for D > 0, our model has a finite correlation length characterizing the decay of two-point correlators. The D = 0 model sets this length to zero, while retaining the more interesting physics associated with density-density correlations. The D = 0 limit is also experimentally relevant: it can be realized directly using a relatively simple superconducting circuit [38].
The D = 0 case has another key virtue: it allows a dramatic simplification in the calculation of observables, as now $\hat K_+$, $\hat K^\dagger_+$, and $\hat K_z \equiv (2NG^2/U)[\hat K^\dagger_+,\hat K_+]$ form a representation of the Lie algebra of SU(1,1). This makes the problem of evaluating moments with respect to the state $|\Psi_{\hat T}\rangle$ given in Eq. (4) completely algebraic; one only requires knowledge of the bosonic representation theory of SU(1,1). Further, harmonic analysis in $\mathbb R^N$ yields a satisfactory characterization of the requisite representation theory [38]. We are thus able to compute local observables and correlators for systems with tens of thousands of sites and at unit density. For our D = 0 model and for large N, we can verify by brute force that

$$\lim_{\Delta\to\Delta_c^\pm} \mathrm{sign}\{g^{(2)}_\infty\} = \pm 1,$$

where $\Delta_c$ denotes the location of the discontinuity in $\bar n$. This confirms that the first-order PT marks the boundary between the bunched and antibunched phases (c.f. Figure 2). We also find that this first-order PT only exists when κ < κ_c, where κ_c is a critical damping threshold, akin to a critical pressure in a liquid-gas transition (c.f. Figure 3a). As in a liquid-gas transition, above the critical point the two phases are smoothly connected, as indicated by the continuity of $g^{(2)}_\infty$ in Figure 3b. Here, we use the exact solution to estimate κ_c by explicitly observing the divergence of the susceptibility $\chi \equiv \partial\bar n/\partial\Delta$ as $\kappa\to\kappa_c^+$; see [38] for more details.
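The algebraic route can be made concrete: for D = 0 the norm $\|\hat K_+^m|\Omega\rangle\|^2 = m!\,(N/2)_m\,(NG/U)^{2m}$ follows from the standard su(1,1) lowest-weight representation with Bargmann index N/4. The sketch below is our own reconstruction from Eqs. (4)-(6), not the authors' code, and its normalization conventions should be treated as assumptions; it evaluates the mean density in log-space for N = 500 sites:

```python
import numpy as np
from scipy.special import gammaln

def mean_density(N, U, G, Delta, kappa, m_max=4000):
    # Pair-number weights w_m = |c_m|^2 (N G/U)^{2m} (N/2)_m / m!  for the
    # D = 0 purification, with c_m = (-1)^m / (delta)_m  (Eqs. (4)-(6)).
    delta = 1 - N * (Delta + 1j * kappa / 2) / (2 * U)
    m = np.arange(m_max)
    log_abs_poch = np.concatenate(
        ([0.0], np.cumsum(np.log(np.abs(delta + m[:-1])))))  # log |(delta)_m|
    log_w = (-2 * log_abs_poch + 2 * m * np.log(N * G / U)
             + gammaln(N / 2 + m) - gammaln(N / 2) - gammaln(m + 1))
    w = np.exp(log_w - log_w.max())                          # stable weights
    # Physical density: <N_L> = <N_+>/2 (the antisymmetric sector is vacuum)
    # and K_+^m|Omega> carries N_+ = 2m bosons, so nbar = sum(w m)/(N sum(w)).
    return (w * m).sum() / (N * w.sum())

print(mean_density(N=500, U=1.0, G=1.0, Delta=2.0, kappa=0.05))
```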
Many-body pair-coherent states. When D > 0, analysis based on the exact solution becomes more challenging. One can still obtain a representation of the Lie algebra of SU(1,1) by defining a (generalized) pair-lowering operator $\hat K_- := \frac{U}{2N}\sum_{ij}(M^{-1})_{ij}\,\hat\alpha_i\hat\alpha_j$, which has the effect of removing a pair of bosons: $\hat K_-\hat K_+^m|\Omega\rangle \propto \hat K_+^{m-1}|\Omega\rangle$. However, $\hat K_-$ is neither equal nor proportional to $\hat K^\dagger_+$ unless D ≡ 0 or D ≡ ∞. The result is that representation-theoretic techniques are of no direct utility when D > 0. Nonetheless, the Lie-theoretic point of view is still useful in helping reveal unusual phenomena.
In particular, at special detuning values, the gas of boson pairs constituting the purification of the steady state (c.f. Eq. (4)) forms a many-body pair coherent state (PCS), that is, an eigenstate of the operator $\hat K_-$ [33]. From the form of the solution, we see that this happens when $c_{m+1}/c_m = k/(N/2 + m)$, where $k = -1$ is the corresponding eigenvalue of $\hat K_-$. From Eq. (6), we see that this requires δ = N/2, corresponding to κ → 0⁺ and
$$\Delta \to \Delta_{PCS} := U(2-N)/N. \qquad (8)$$
Note that for the case of just a single mode, N = 1, this corresponds to the known physics of a Kerr parametric oscillator [26]. In this case, ∆ = U is the same as zero detuning if one normal-orders the Kerr interaction, and $|\Psi_{\hat T}\rangle$ reduces to an even-parity cat state. We stress that there are observable consequences associated with the formation of this many-body PCS. As one approaches the special detuning, there are no fluctuations in the global pairing, as quantified by the operator $\hat K_-$. One can explicitly show that:
$$\Big\langle \Big[\sum_{ij}(M^{-1})_{ij}\,\hat a_i\hat a_j\Big]^{\dagger n}\Big[\sum_{ij}(M^{-1})_{ij}\,\hat a_i\hat a_j\Big]^{m}\Big\rangle_{ss} \;\underset{\Delta\to\Delta_{PCS}}{\propto}\; k^{*n} k^{m}. \qquad (9)$$
Similar to their two-mode counterparts [40,41], the many-body PCS we describe here may have utility for bosonic quantum error correction [36,37,42-44]. We note that the many-body PCS that emerge here are distinct from the multi-mode states discussed in Ref. [35]. Symmetry breaking. In the strong-driving regime, our model exhibits a surprising symmetry-breaking phenomenon. First, note that the singular values of our matrix of pair-driving amplitudes are $\lambda_{\mathbf k} = \frac{1}{u}\big|\frac{\Lambda}{D}\sum_{j=1}^{D}\cos k_j + G\big|$, where the wavevector $\mathbf k$ labels standing-wave modes.
Let $\lambda_*$ denote the maximum singular value, and s the number of distinct modes to which it corresponds (the so-called max-pairing modes). For large driving, one can analytically show that the steady-state Wigner function $W[\{\alpha_{\mathbf k}\}]$ corresponds to a uniform distribution over the (s − 1)-sphere defined by
$$\sum_{\mathbf k:\,\lambda_{\mathbf k}=\lambda_*} x_{\mathbf k}^2 = \text{const.}, \qquad x_{\mathbf k} \equiv e^{-i\theta}\alpha_{\mathbf k}, \qquad (10)$$
with $x_{\mathbf k}\in\mathbb R$ and θ an overall phase [38]. Even though there is a near continuum of pairing eigenvalues, for large driving the max-pairing modes completely dominate. This behaviour is shown explicitly in Fig. 4(a). The structure of this solution also directly leads to an anticorrelation between mode amplitudes that is purely geometric, see Fig. 4(b). The mode selection in our system can be related to spontaneous symmetry breaking. Real rotations amongst the max-pairing modes form a non-abelian group of weak symmetries isomorphic to O(s, R) which commutes with the Lindbladian $\mathcal L$. At high driving strengths we conjecture that this symmetry is spontaneously broken. This is seen clearly at the semiclassical level, where one can show [38] that every point on the max-pairing (s − 1)-sphere is a stable stationary state of the dynamics. Each such solution of course breaks the underlying mode-rotation symmetry. In the full quantum theory, fluctuations lead to a slow randomization over this space of symmetry-broken solutions, yielding the final unique steady state. The effective mode-selection phenomenon in our model is reminiscent of (but not identical to) analogous effects in other systems (see e.g. [45][46][47]). Ref. [48] also describes (using semiclassical MFT) related phenomena in a many-mode model with uniform pairing, with mode selection controlled by dispersion as opposed to pairing amplitudes. We stress that, in contrast to [48], our exact solution lets us describe all quantum fluctuation effects, allowing analytical insight into how our mode-selection effect emerges as the dimensionless driving rates G/U, Λ/U become large, see Fig. 4(a).
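The uniform distribution of Eq. (10) is easy to sample (normalize a Gaussian vector), which makes the purely geometric origin of the anticorrelation explicit; a short self-contained sketch with s chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(3)

# Uniform samples on the (s-1)-sphere of max-pairing amplitudes (Eq. (10)):
# a standard Gaussian vector, normalized to unit length.
s, n_samp = 4, 200_000
x = rng.normal(size=(n_samp, s))
x /= np.linalg.norm(x, axis=1, keepdims=True)
I = x**2                                 # mode intensities, sum_k I_k = 1

cov = np.cov(I[:, 0], I[:, 1])[0, 1]
print("cov(I_0, I_1) =", cov)            # negative: geometric anticorrelation
```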
Discussion. We have introduced a class of strongly interacting, two-photon driven bosonic lattice models whose dissipative steady states can be found exactly. The models exhibit a wealth of interesting phenomena, including emergent phase transitions, many-body pair coherent states, and novel mode competition and symmetry breaking. Our work provides an important means for benchmarking approximation techniques, and also reveals that the physics of Kerr parametric oscillators (studied extensively for error correction) is even richer in the many body limit. It also suggests that the hTRS solution method could be used to successfully address a host of truly many-body problems.
Figure S1. Exactly-solvable Bose-Hubbard models with hopping and pair-driving. (a) A bipartite lattice is subjected to chirally-symmetric hopping (depicted using blue and red arrows) as well as pair-driving of the form $\delta\hat H = \sum_j(\hat a^\dagger_j\hat b^\dagger_j + \text{h.c.})$ (not shown in this subpanel), and an infinite-range Bose-Hubbard interaction (not shown in this subpanel). (b) At late times, the steady state becomes stationary with respect to the hopping process depicted in panel (a), and so the steady state can be analyzed by ignoring the hopping processes and considering only the pair-driving process (depicted by blue ovals) and the Hubbard interaction (not shown in this subpanel) in isolation.
I. AUGMENTING THE MODEL WITH HOPPING TERMS
The driven-dissipative bosonic lattice model that we solve in this work is given by the following Lindblad master equation:
$$\partial_t\hat\rho = -i[\hat H,\hat\rho] + \sum_j \mathcal D[\hat L_j]\hat\rho, \qquad \hat H = u\hat N^2 - \Delta\hat N + \Big[\sum_{ij} M_{ij}\,\hat a^\dagger_i\hat a^\dagger_j + \text{h.c.}\Big], \qquad \hat L_j = \sqrt{\kappa}\,\hat a_j, \qquad (S1)$$

where $\hat N \equiv \sum_{j=1}^N \hat a^\dagger_j\hat a_j$ denotes the total boson number, and $u \equiv U/N$. In addition, we have an arbitrary complex-valued two-mode squeezing array $M_{ij}$.
We will now describe how single-photon tunneling terms of the form $J\,\hat a^\dagger_i\hat a_j + \text{h.c.}$ can be added to the Hamiltonian while preserving its solvability. A proper explanation of this necessitates a discussion of the symmetries of the dissipative evolution generated by the Lindbladian $\mathcal L$. We begin by discussing the two-mode squeezing array M. We define the symmetry group S of the array M to be the group formed by unitary beam-splitter transformations that leave the pairing array invariant. Under such a transformation, represented by a unitary matrix W, the pairing array M transforms as $M \to WMW^T$. Therefore
$$S = \big\{W\in U(N) : WMW^T = M\big\} = V\Big[\prod_{\lambda\in\Sigma} O(s_\lambda,\mathbb R)\Big]V^\dagger, \qquad (S2)$$
where $M = V\Sigma V^T$ is the Autonne-Takagi factorization of the matrix M, with Σ the diagonal matrix of singular values, and where $s_\lambda$ denotes the degeneracy of the singular value $\lambda\in\Sigma$. In arriving at the result (S2) we have used the fact that the symmetry group of Σ is the product group $\prod_{\lambda\in\Sigma} O(s_\lambda,\mathbb R)$. Since S by definition conserves total particle number, any beam-splitter transformation that lies in S is a weak symmetry of the Lindbladian (S1).
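The structure of S in Eq. (S2) is easy to verify numerically; the sketch below (our own check, with an arbitrary toy pairing array) builds $M = V\Sigma V^T$ with a doubly-degenerate singular value and confirms that $W = VOV^\dagger$, with O a rotation in the degenerate block, satisfies $WMW^T = M$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Haar-ish random unitary V from the QR decomposition of a complex Gaussian
Z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
V, _ = np.linalg.qr(Z)
sigma = np.array([2.0, 1.0, 1.0, 0.5])   # modes 1 and 2 share a singular value

M = V @ np.diag(sigma) @ V.T             # complex symmetric M = V Sigma V^T

theta = 0.7
O = np.eye(4)
O[1:3, 1:3] = [[np.cos(theta), -np.sin(theta)],
               [np.sin(theta),  np.cos(theta)]]   # rotation in the degenerate block
W = V @ O @ V.conj().T

print(np.linalg.norm(W @ M @ W.T - M))   # ~ 1e-15: W is a symmetry of M
```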
Let us now assume that $\mathcal L$ has a unique steady state. Then this steady state must respect the weak symmetries of the Lindbladian $\mathcal L$ [S1]. In particular, the steady state is invariant under the symmetry group S of the pairing array. Consequently, we can add any element of the Lie algebra $\mathfrak s$ of S to the Hamiltonian in (S1) while still preserving the steady state. The Lie algebra of S can be explicitly computed:
$$\mathfrak s = V\Big\{\, i\sum_{\lambda\in\Sigma}\sum_{i,j\in s_\lambda} t_{ij}\,(\hat a^\dagger_i\hat a_j - \hat a^\dagger_j\hat a_i),\quad t_{ij}\in\mathbb R \,\Big\}V^\dagger, \qquad (S3)$$
where $s_\lambda$ denotes the subset of mode indices corresponding to the singular value λ, and $\hat V$ is a beam-splitter transformation that implements the unitary matrix V, i.e. $\hat V^\dagger\hat a_i\hat V \equiv \sum_j V_{ij}\,\hat a_j$.
A. Imaginary hopping arrangements
When the singular values of the pairing array are fully degenerate, the pairing array is maximally symmetric: in this case, the symmetry group S is, up to the unitary similarity transform V, just the full rotation group as a subgroup of the unitary group. The simplest example is the case where the pairing array is diagonal and uniform, $M_{ij} = G\delta_{ij}$, which lets us set V = 1. Therefore, in this case, the symmetry group of the pairing array is generated by terms of the form
δĤ ≡ i N i,j=1 t ij (â † iâ j −â † jâ i ).(S4)
Recall that, since δĤ generates a weak symmetry of L, one can add δĤ to the Hamiltonian without changing the steady state of L. That is, one can add any set of imaginary hopping terms to the Hamiltonian without changing the steady state of L.
B. Chirally-symmetric hopping arrangements
Thanks to the similarity transform V appearing in the symmetry group S, there are many situations where our solvable model can be augmented with conventional hopping terms. As a particularly striking example of this, consider a global Bose-Hubbard model defined on a bipartite lattice with 2N sites, with corresponding modeŝ a 1 , . . .â N ,b 1 , . . . ,b N . We can consider a special case of the Lindbladian (S1) where the pairing array is fully "dimerized", that is, where the Hamiltonian takes the form
H = uN 2 − ∆N + Λ j (â † jb † j + h.c.).(S5)
In this case, one can augment the above Hamiltonian with the following hopping terms:
δĤ ≡ ij J ij (â † iâ j + h.c.) − (b † ib j + h.c.) .(S6)
Note that such a model describes arbitrary hopping within the A-sublattice, which is mirrored in the B-sublattice. This situation is schematically depicted in Fig. S1a. To see explicitly that the hopping terms generate rotational symmetries of L, we have to diagonalize the pairing array. In particular, one can do this by defining modeŝ
c † j,+ ≡â † j +b † j √ 2 ,ĉ † j,− ≡â † j −b † j i √ 2 .(S7)
In this mode basis, the Hamiltonian, with the hopping terms included, reads aŝ
H = uN 2 − ∆N + Λ j,± (ĉ †2 j,± + h.c.) + i N i,j=1 J ij (c † i,−ĉ j,+ − h.c.) + i N i,j=1 J ij (c † i,+ĉ j,− − h.c.).(S8)
In this diagonal mode basis, it is extremely clear that the hopping terms generate rotations of the modesĉ j,± ,ĉ † j,± that are weak symmetries of the Lindbladian. Therefore, the results of this work can be used to analytically describe the steady states of the model (S5), including the chirally-symmetric hopping arrangement depicted in Fig. S1a.
II. HIDDEN TRS CONDITIONS
Given a steady-stateρ ss and corresponding time-reversal operationT , one can construct the thermofield double state
|ΨT = n √ p n |n LT |n R ,(S9)
where |n are the eigenvectors ofρ ss , and p n the corresponding eigenvalues. Sufficient conditions forT to be a hidden time-reversal symmetry are [S2] u
(N L +N R ) − ∆ eff (N L −N R ) + 2 ij M ijα † i,+α † j,− |ΨT = 0, α j,− |ΨT = 0,(S10)
whereα j,± ≡ 2 −1/2 (â j,L ±â j,R ). Since the Bogoliubov transformationâ j,L ,â j,R →α j,+ ,α j,− is boson-number preserving, we haveN L +N R =N + +N − , whereN ± := jα † j,±α j,± . Therefore, we can write the above conditions equivalently as
jα † j,− u(N + +N − + 1) − ∆ eff α j,+ + 2 i M ijα † i,+ |ΨT = 0, (S11) α j,− |ΨT = 0.(S12)
Each term in the sum in Eq. (S11) has exactly one boson in the antisymmetric modeα j,− and is thus mutually orthogonal with the remaining terms. Therefore, each term in the sum must separately vanish. Factoring the thermofield double into symmetric and antisymmetric components |ΨT = |Ψ + |Ψ − , we obtain |Ψ − = |0 − , i.e. the antisymmetric component is in vacuum, and
(u(N + + 1) − ∆ eff )α j,+ + 2 i M ijα † i,+ |Ψ + = 0. (S13)
A. Exact solution via representation theory
We now proceed to solve the system of equations Eq. (S13). To begin, we multiply each constraint on the left bŷ
α † j,+ , yielding u(N + − ∆ eff )n j,+ + 2 i M ijα † i,+α † j,+ |Ψ + = 0,(S14)
wheren j,+ :=α † j,+α j,+ . We then sum over j:
u(N + − ∆ eff )N + + 2 ij M ijα † i,+α † j,+ |Ψ + = 0.(S15)
Note how the above operator annihilating |ΨT is almost like the effective nonhermitian HamiltonianĤ eff :=Ĥ − iκN /2, except that here the lowering operators associated with the drive are not present. We now identify a "hidden" nonunitary representation of SU (1, 1) viâ
K + = 1 2 ij M ij uα † i,+α † j,+ ,K − = 1 2 ij (uM −1 ) ijαi,+αj,+ ,K z = 1 2 j α † j,+α j,+ + 1 2 .(S16)H eff,+ := K z − N/4 − ∆ eff /2u K z − N/4 +K + .(S17)
To solve this condition, we make the ansatz that the dark state inhabits an irreducible representation of SU (1, 1), with the vacuum state constituting the vector of lowest weight (with respect to the subalgebra generated byK z ). This irreducible representation is spanned by vectors of the form |Ψ = m c mK m + |h 0 , |h 0 := |Ω (S18) The use of the notation h 0 to denote the bosonic vacuum |Ω := |0 L |0 R , as well as the fact that the solution Eq. (S18) to the constraints Eq. (S13) is unique will become apparent in later sections when we review the general representation theory of SU (1, 1). For now, however, we will treat Eq. (S18) as an ansatz and proceed to compute the exact coefficients c m occuring in the expansion:
H eff,+ |Ψ + = ∞ m=1 m(m − ∆ eff /2u)c m + c m−1 K m + |h 0 = 0,(S19)β † i,+ := i V ijα †
j,+ for the corresponding basis of singular vectors, we can re-express the SU (1, 1) representation (S16) as follows:K
+ = 1 2 j λ jβ †2 j,+ ,K − = 1 2 i λ −1 jβ 2 j,+ ,K z = 1 2 j β † j,+β j,+ + 1 2 .
(S20)
III. EXACT SOLUTION: COLLECTIVE MOMENTS
The simplest quantities to compute in our model are collective observables, that is, moments of normally-ordered monomials in the SU (1, 1) algebra.
A. Unitary case
Our job is easiest when the representation Eq. (S16) satisfiesK † + ∝K − . This happens if and only if the singular values of the matrix M/u are all the same. In this case, the SU (1, 1) representation can be made unitary by rescalinĝ K − , and the diagonal form of the representation is as follows:
K + = 1 2 j λβ †2 j,+ ,K − = 1 2 j λ −1β2 j,+ ,K z = 1 2 j β † j,+β j,+ + 1 2 ,(S21)
with λ > 0 the unique singular value of the matrix M/u. With this, we now turn to compute moments of monomials of SU (1, 1) within the representation:
ΨT |K n +N k +K m − |ΨT = p,q λ 2p c * p c q h 0 |K p −K n +N k +K m −K q + |h 0 (S22)
We reparametrize the sum by defining an auxiliary non-negative integer l = p − n = q − m. In terms of l:
ΨT |K n +N k +K m − |ΨT = ∞ l=0 λ 2(l+n) c * l+n c l+m h 0 |K l −K n −K n +N k +K m −K m +K l + |h 0 = λ 2n c * n c m ∞ l=0 (2l) k |c l | 2 λ 2l c * n+l c * l c *
We then compute
(2l) k |c l | 2 λ 2l c * n+l c * l c * n c m+l c l c m = |c l | 2 λ 2l l!n!l!m! (n + l)!(m + l)! (δ * ) l (δ * ) n (δ) l (δ) m (δ * ) l+n (δ * ) l+m = λ 2l n!m! (n + l)!(m + l)!(δ * + n) l (δ + m) l . (S25)
Putting everything together,
ΨT |K n +N k +K m − |ΨT = λ 2n c * n c m (N/2) n (N/2) m n!m! ∞ l=0 (2l) k λ 2l l! (N/2 + n) l (N/2 + m) l (N/2) l (δ * + n) l (δ + m) l = λ 2n (−1) n+m (N/2) n (N/2) m (δ * ) n (δ) m 2z d dz k 2 F 3 (N/2 + n, N/2 + m; N/2, δ * + n, δ + m; z) z=λ 2 ,(S26)
where here, p F q denotes the generalized hypergeometric function, defined as the analytic extension of the following infinite series:
p F q (a 1 , . . . a p ; b 1 , . . . b q ; z) = ∞ l=0 p m=1 (a m ) l q m=1 (b m ) l z l l! .(S27)
Up to this point, we have computed |ΨT up to a normalization prefactor. This normalization can also be computed in closed form: ΨT |ΨT = 1 F 2 (N/2; δ, δ * ; λ 2 ), thus completing the characterization of collective moments in the case that the representation (S16) is unitary.
B. Nonunitary case
We now accomplish the same task, but in the more general regime where the singular values λ j are possibly nondegenerate. In this situation, there is less structure to the calculation, but moments of SU (1, 1) monomials may nonetheless be extracted efficiently:
ΨT |K n +N k +K m − |ΨT = ∞ p,q c * p c q h 0 |K †p +K n +N k +K m −K q + |h 0 (S28)
We reparametrize the sum by defining an auxiliary non-negative integer l = p − n = q − m. In terms of l:
ΨT |K n +N k +K m − |ΨT = ∞ l=0 c * l+n c l+m (2l) k h 0 |K †l +K †n +K n +K m −K m +K l + |h 0 = c * n c m ∞ l=0 |c l | 2 c * l+n c * l c * n c l+m c l c m (2l) k g(n, m, l).(S29)
We evaluate:
g(n, m, l) = (N/2) m+l (m + l)! (N/2) l l! h 0 |K †(n+l) +K n+l + |h 0 := (N/2) m+l (m + l)!(n + l)!(n + l)! (N/2) l l! Φ n+l ( λ; N ),(S30)
where Φ n ( λ; N ) := (n!) 2 h 0 |K †n +K n + |h 0 is a form factor to be evaluated later on. Therefore, in total we have
ΨT |K n +N k +K m − |ΨT = (−1) n+m (N/2) m (δ * ) n (δ) m ∞ l=0 (N/2 + m) l (n + l)!(2l) k (δ * + n) l (δ + m) l (N/2) l l! Φ n+l ( λ; N ).(S31)
All that is left is to compute the normalization, which comes out to ΨT |ΨT =
Our work would not be complete if we did not give an efficient prescription for evaluating these sums. The key to evaluating these sums efficiently is the following observation:
Φ n ( λ; N ) = n p=0 (1/2) p λ 2p N N −1 j=1 kj =n−p j (1/2) kj λ 2kj j = n p=0 (1/2) p λ 2p N Φ n−p ( λ; N − 1) (S33)
We note that the above constitutes a recursion relation for Φ n ( λ; N ) with boundary condition Φ p ( λ; 1) = (1/2) p λ 2p 1 . Note that via this recursion relation, and fixing some boson-number cutoff k, the collection {Φ p ( λ; N )} p≤k of form factors may be evaluated using only O(k 2 N ) floating-point operations.
IV. EXACT SOLUTION: LOCAL MOMENTS
We now turn to a more difficult task: that of computing (equal-time) correlation functions of local observables, that is, observables that are not collective in nature. To accomplish this efficiently in the unitary case, we must solve the "addition of angular momentum" problem for SU (1, 1), i.e. we must understand how to decompose a tensor product of SU (1, 1) representations into irreducible components. Luckily, this task is easily solved in terms of the theory of harmonic functions on R N . In the nonunitary case, however, the SU (1, 1) structure is not so useful, and instead some generalization of the combinatorics in (S33) is needed to compute observables.
To make our task easier, we will compute observables in the basis of modesb † i := j V ijâ † j that diagonalizes the pair-driving matrix M/u.
A. Addition of angular momentum for SU (1, 1)
We can write our global SU (1, 1) representation (S20) as a tensor product of local representations:
K (j) + ≡ 1 2 λ jβ †2 j,+ ,K (j) − ≡ 1 2 λ −1 jβ 2 j,+ ,K (j) z ≡ 1 2 β † j,+β j,+ + 1 2 (S34)
It is easy to see that each local representation is reducible, and has the decomposition V (j) = V (j)
+ ⊕ V (j) − , where V (j) ±
denotes the subspace consisting of all states with a fixed boson-number parity ±1. Our global SU (1, 1) rep-resentationK + ,K − ,K z is the N -fold tensor product ⊗ j V (j) , and has the following decomposition into irreducible subrepresentations:
⊗ j V (j) ∞ l=0 d l p=1 V (p) l ,(S35)
where each irreducible subrepresentation V (p) l takes the form
|Ψ (p) l = ∞ m=0 c mK m + |h (p) l ,(S36)
and |h
(1) l , . . . , |h (d l ) l
is some orthonormal basis of the subspace of the kernel ofK − consisting of those states with fixed total photon number equal to l. Later on we sketch a proof of the decomposition (S35) using the Segal-Bargmann representation [S4, S5], which represents bosonic states as multivariate analytic functions, and bosonic creation-and annihilation operators as partial differential operators. In the Segal-Bargmann representation, the SU (1, 1) vacua |h (p) l are represented as homogeneous harmonic polynomials of degree l, hence the notation "h l ".
B. Unitary case
In the unitary case, the decomposition (S35) becomes an orthogonal direct-sum decomposition, with the superselection rules h (m) l |K †p +K q + |h (m ) l = δ l,l δ m,m δ p,q p!λ 2p (N/2 + l) p . These simple rules allow us to compute any local quantity of interest in our model. For the case of quadratic correlation functions, however, such rules are not necessary. To see this, it suffices to use the weak permutation symmetryb i ↔b j , as well as the weak parity symmetryb i → −b i of the Lindbladian L:
b † ib i+r = δ r,0 1 N j b † jb j = δ r,0 N ΨT |N + |ΨT , b ibi+r = δ r,0 1 N j b 2 j = δ r,0 N ΨT |K − |ΨT (S37)
Higher-order correlations, are not expressible in terms of collective moments, and so in general the decomposition (S35), along with the associated superselection rules, serve as a useful guide. As an example, we compute the paircorrelation function, as well as the density-density correlation function, and leave the general case to the reader. We compute the pair correlation function first:
b †2 ib 2 i+r = 1 4 ΨT |β 2 i+r,+β †2 i,+ |ΨT − 2δ r,0 N ΨT |K z |ΨT = 1 4 p,q c * p c q h 0 |β 2 i,+K †p +K q +β †2 i+r,+ |h 0 − 2δ r,0 N ΨT |K z |ΨT (S38)
For the calculation to proceed, it is necessary to decompose the statesβ †2 i,+ |h 0 into SU (1, 1) vacua. Although this can be done by hand in this simple case, in general it is useful to automate this process (especially for higher-order observables). For this purpose, the HFT Mathematica package [S6] is especially relevant, in particular the function harmonicDecomposition[], which automatically extracts the decomposition
β †2 i,+ |h 0 = 2N − 2 N |h (i) 2 + 2K + λN |h 0 ,(S39)
where |h (i) 2
:= 1 λ 2N N −1 (K (i) + −K + N )
|h 0 generates an irreducible subrepresentation with weight N/4 + 1. We thus obtain:
1 4 p,q c * p c q h 0 |β 2 i,+K †p +K q +β †2 i+r,+ |h 0 = 2N − 2 N 3 λ 2 l |c l | 2 h 0 |K †l+1 +K l+1 + |h 0 + N − 1 2N l |c l | 2 h (i) 2 |K †l +K l + |h (i+r) 2 = 2N − 2 N 3 z ∂ ∂z + 1 1 F 2 (N/2; δ, δ * ; z) + N δ r,0 − 1 2N 1 F 2 (N/2 + 2; δ, δ * ; z) z=λ 2 . (S40)
We now compute the density-density correlation function b † ib ib † i+rb i+r . Since the case r = 0 is subsumed by the previous calculation, without loss of generality we can assume r = 0. Again, we split the calculation into a collective part and a noncollective part (that populates a higher weight representation):
b † ib ib † i+rb i+r = 1 4 p,q c * p c q h 0 |β iβi+rK †p +K q +β † iβ † i+r |h 0 − 1 N ΨT |K z |ΨT . (S41)
We evaluate the noncollective part, by noticing that |h
(i,j) 2 :=β † i,+β †
j,+ |h 0 generates an irreducible subrepresentation with weight N/4 + 1:
1 4 p,q c * p c q h 0 |β iβi+rK †p +K q +β † iβ † i+r |h 0 = 1 4 l |c l | 2 h (i,i+r) 2 |K †l +K l + |h (i,i+r) 2 = 1 4 1 F 2 (N/2 + 2; δ, δ * ; λ 2 ). (S42)
Higher-order moments
The general procedure for calculating higher order moments is no different from what we have done in the previous subsection: given a higher-order moment to be evaluated, one writes the expectation value in terms of an antinormally-ordered correlation function involving the mode operatorsβ j,+ ,β † j,+ , and then uses standard harmonic analysis software [S6] to decompose the resulting states into components lying in higher-weight subrepresentations. Iterating this process yields formulae in terms of generalized hypergeometric functions and their derivatives.
C. Nonunitary case
We now turn to the most general task of computing local correlation functions in our global Bose-Hubbard model, in the nonunitary regime. In this case the calculation has the least amount of structure, and so here we just present the most general result. To better organize the calculation, we will write the purification |ΨT as a power series
|ΨT = m∈N N c m m! jβ †mj j,+ |h 0 ,(S43)
with c m = j (4λ j ) mj (1/2) mj /(δ) j mj . From the above form of the purification, we can calculate the parametric form of any normally-ordered correlation function:
b †n1 1 · · ·b †n N Nb m1 1 · · ·b m N N = 1 2 j nj +mj ΨT |β †n1 1,+ · · ·β †n N N,+β m1 1,+ · · ·β m N N,+ |ΨT = 1 2 j nj +mj k∈N N c * n+ k c m+ k k! (S44)
There are a number of issues, however, with the series expression given above: firstly, due to the weak symmetrŷ b j → −b j of L, only correlation functions with n j ≡ m j modulo two are nonzero. Secondly, the series expression naively seems to be useless, as, for a fixed total boson number cutoff, the series contains a number terms that is exponentially growing with N . Therefore, the naive way of evaluating (S44) scales no better than a direct simulation of the original master equation.
We will resolve both of these issues now: first of all, we can efficiently parametrize all of the nonzero correlators by replacing n → 2 n + b, m → 2 m + b, where b ∈ F N 2 is a fixed vector of booleans. To exponentially reduce the complexity of summing the series (S44), we define generalized combinatorial form factors
Φ l ( λ, n, m, b; N ) := j kj =l j (1/2 + n j + b j ) kj (1/2 + m j + b j ) kj (2k j + b j )! (4λ j ) 2kj . (S45)
Indeed, in terms of the above form factors, the series expressions for normally-ordered moments simplify quite considerably:
b †2 n+ bb2 m+ b = 1 2 j nj +mj +bj k∈N N c * 2 n+ b+ k c 2 m+ b+ k k! = 1 2 j nj +mj +bj k∈N N c * 2( n+ b+ k) c 2( m+ b+ k) (2 k + b)! = c * 2( n+ b) c 2( m+ b) 2 j nj +mj +bj ∞ l=0 Φ l ( λ, n, m, b) (δ * + j n j + j b j ) l (δ + j m j + j b j ) l .(S46)
Our job would not be finished if we did not give an efficient prescription for evaluating the form factors Φ l . Our task is made easier, however, by observing that, when n = m = b = 0, the form factors (S45) reduce to the form factor Φ n ( λ; N ) used previously to compute collective moments. In fact, these more general form factors satisfy an analogous recursion relation:
Φ l ( λ, n, m, b; N ) = n p=0 (1/2 + n N + b N ) p (1/2 + m N + b N ) p (2p + b N )! (4λ N ) 2p Φ n−p ( λ, n, m, ; N − 1). (S47)
We note that the above recursion relation has the boundary condition Φ p ( λ, n, m, b; 1) = (1/2 + n 1 + b 1 ) p (1/2 + m 1 + b 1 ) p (4λ 1 ) 2p /(2p + b 1 )!. Therefore, the following kth-order approximant for each normally-ordered moment,
c * 2( n+ b) c 2( m+ b) 2 j nj +mj +bj k l=0 Φ l ( λ, n, m, b) (δ * + j n j + j b j ) l (δ + j m j + j b j ) l ,(S48)
may be evaluated using only O(k 2 N ) operations. We now factor in considerations as to the scaling of the total-particlenumber cutoff k. Assuming the onsite densityn converges as N → ∞, the total number of particles is O(N ), and so the cutoff typically scales with the system size: k ∼ O(N ). Therefore, factoring in all considerations, time-complexity of evaluating a single normally-ordered moment is roughly O(N 3 ).
Quadratic correlation functions
Up until this point, we have given expressions for normally-ordered moments in the basis of singular modesb † i := V ijâ † j . To obtain correlation functions in the spatial mode basis, we must expand eachâ-mode in terms ofb-modes:
â † iâ i+r = j V i,j V * i+r,j b † jb j , â iâi+r = j V * i,j V * i+r,j b 2 j (S49)
In both cases, one has to evaluate N normally-ordered moments in theb-basis. By the previous estimates, each normally-ordered moment takes O(N 3 ) floating-point operations to evaluate, so that in general it takes O(N 4 ) floatingpoint operations to evaluate a quadratic correlation function.
Quartic correlation functions
We now turn to evaluate quartic correlators using the same re-expansion technique. We first compute the paircorrelation function:
â †2 iâ 2 i+r = j1,j2,j3,j4 V i,j1 V i,j2 V * i+r,j3 V * i+r,j4 b † j1b † j2b j3bj4 = 2 j =j V i,j V i,j V * i+r,j V * i+r,j b † jb † j b jbj + j,j V 2 i,j V * 2 i+r,j b †2 jb 2 j .(S50)
The calculation for the density-density correlation function (assuming r = 0) is very similar:
n ini+r = j1,j2,j3,j4 V i,j1 V i+r,j2 V * i,j3 V * i+r,j4 b † j1b † j2b j3bj4 = 1 2 j =j |V i,j V i+r,j + V i+r,j V i,j | 2 b † jb † j b jbj + j,j V i,j V i+r,j V * i,j V * i+r,j b †2 jb 2 j .(S51)
In both cases, one has to evaluate O(N 2 ) normally-ordered moments in theb-basis. By the previous estimates, each normally-ordered moment takes O(N 3 ) floating-point operations to evaluate, so that in general it takes O(N 5 ) floating-point operations to evaluate a quartic correlation function.
V. PHENOMENOLOGY OF THE EXACT SOLUTION
A. SU (1, 1) coherent states When δ = N/2, the purification |ΨT is an SU (1, 1)-coherent state in the sense of [S7], that is, an eigenstate of the lowering operatorK − . This holds regardless of whether the representation is unitary or not:
K − |ΨT = ∞ m=0 (−1) m (N/2) mK −K m + m! |h 0 = ∞ m=0 (−1) m+1 (N/2) m+1 (m + 1)(N/2 + m)K m + m! |h 0 = −|ΨT .(S52)
The above identity has a dramatic effect on the physical steady stateρ ss . In particular, fluctuations in K := k − exactly vanish when δ = N/2, whereas fluctuations in the onsite pairing φ := â 2 j remain nonzero and show no special behavior. Here,k − := 1 2 ij (uM −1 ) ijâiâj is a pair-lowering operator involving only the physical lattice modesâ j .
B. DMFT analysis of the D = 0 model
DMFT expansion
We now use dynamical mean field theory (DMFT) to derive a large-N expansion for our D = 0 model. To derive this expansion, we first fix a site (labelled j = 0) in our lattice model, and attempt to integrate-out the remaining degrees of freedom. Typically this integration procedure is carried out within the Schwinger-Keldysh formalism, which states that the action for the full D = 0 lattice model is
S = dt σ=±1 σ j (α j,σ ∂ t α * j,σ − ∆n j,σ + G(α 2 j,σ + c.c.)) + U N j n j,σ 2 + iκ dt j α j,+ α j,− − 1 2 σ=±1 |α j,σ | 2 , n j,σ := |α j,σ | 2 (S53)
where α j,± = α j,± (t) are complex fields associated to the bosonic degrees of freedomâ j ,â † j , as is standard in the Keldysh formalism. Within this formalism, the goal is now to compute the effective action for a fixed site j = 0. One can compute a large-N asymptotic expansion for the effective action for the fields α 0,+ , α 0,− , which takes the following self-consistent form at leading-order:
S eff = S free + dt σ=±1 2σU n 0,σ n 0,σ eff + O(N −1 ),(S54)
where S free is the Keldysh action for a single site, without the Bose-Hubbard interaction, and · eff denotes an average taken with respect to the effective action. Note that this leading-order theory is Markovian. Therefore, as N → ∞, one can evolve observables for a fixed site using the self-consistent Lindbladian
L effρ0 = −i[∆ eff (ρ 0 )n 0 + Gâ †2 0 + h.c.,ρ 0 ] + κD[â 0 ]ρ 0 , ∆ eff (ρ 0 ) = ∆ + 2U Tr[ρ 0n0 ].(S55)
One can then solve for the steady-state densityn ≡n MF within this leading-order mean-field description, leading to a cubic equation forn MF . This leads to a large-N phase diagram where regions exist with up to three self-consistent solutions for the density. We obtain good agreement between this leading-order DMFT description and the exact solution, and find that • The location ∆ c of the 1st-order phase transition obtained from the exact solution approaches the location of the bifurcation in the MFT cubic self-consistency condition as N → ∞. However, this convergence is extremely slow (finite-size effects are still noticeable for N ∼ 10 3 ). • The tristable region of the mean-field phase diagram has a unique point with maximal loss rate κ * . Let (∆ * , κ * ) denote this point. We call this point the mean-field critical-point. We find that the mean-field critical point is precisely the same as the critical point in the phase diagram obtained from the exact solution, that is,
κ * = κ c , ∆ c (κ c ) = ∆ * .
Establishing the validity of DMFT Normally DMFT expansions are hard to rigorously justify directly, even in the limit of large coordination number z → ∞. We will nonetheless give the standard justification here, and then see how the exact hTRS solution yields a much more direct perspective, at least with respect to the steady state problem. The usual justification for the asymptotic result (S54) is as follows: note that we can view our Hubbard interaction as a general (extended) Bose-Hubbard interaction for a general graph G with G = K N , i.e. a complete graph,
U N jn j 2 = U z i,j ∈Gn inj G=K N .(S56)
The large-N asymptotic result (S54) can be established, by copying exactly the calculation in [S9, S10], but by performing the cumulant expansion therein with respect to the density fields n j,σ instead of the ordinary complex fields α j,σ (the calculation is relatively unilluminating, and for the sake of brevity, we omit it here). This has the effect of establishing the desired result for G a regular tree graph with coordination number N , instead of a complete graph. After performing such a calculation, one then waves one's hands and claims that the same asymptotics (S54) holds on a complete graph. The exact solution yields a more direct way to check the validity of (S54), at least from the perspective of the steadystate. The steady-state(s) predicted by the leading-order DMFT dynamics L eff admit the following purification:
|Ψ DMFT = ∞ m=0 1 m! 2K + 2Un MF − ∆ eff m |h 0 ,(S57)
wheren MF is any solution to the cubic self-consistency condition mentioned in the preceeding subsection. We can directly see that the exact solution indeed converges to this form as N → ∞. We begin by noticing the asymptotics (S58)
Notice that the above is identical to (S57), provided that we replace m ∼ 2N + with its average value. Indeed, fluctuations inN + with respect to the exact solution |ΨT can be evaluated directly via (S26) and shown to be of order O(N 1/2 ), so that m/N becomes a deterministic variable in the large-N limit. The series above thus becomes well-concentrated about m/N ∼n MF , leading to the self-consistent form (S57) predicted by leading-order DMFT.
Establishing the location of the critical point
We will now test the claim made previously, namely that the critical point obtained from the exact solution lies exactly at the mean-field critical point, i.e. we will compute the maximum magnitude of the susceptibility
χ max (κ, N ) := sup ∆ ∂n ∂∆ (S59)
where all other parameters are held fixed. We confirm that χ max diverges as N → ∞ whenever κ < κ * , and converges otherwise. Repeatedly testing for the convergence of χ max in the thermodynamic limit for different values of κ allows us to approximately compute the rate at which χ max diverges, thus obtaining the critical exponent γ:
lim N →∞ χ max (κ, N ) ∼ κ→κ + * τ −γ ,(S60)
with τ := (κ − κ * )/κ * . Using a very crude polynomial fitting algorithm, we estimate γ ≈ −1 (see Figure S4(b)).
C. Semiclassical limit
We now investigate our model without the D ≡ 0 restriction, but in the semiclassical limitn 1. When the onsite photon occupation is large, the dynamics of the field amplitudes α j (t) := Tr[ρ(t)â j ] is well captured by the semiclassical equation of motion
iu −1 ∂ t β j = 2λ j β * j + β j 2 k |β k | 2 + 1 − ∆ eff /u ,(S61)
where
β * j ≡ N k=1 V j,k α * k (S62)
is the change-of-coordinates on the classical phase space induced by the unitary V in the Autonne-Takagi factorization M/u = V ΣV T . We now demonstrate the claim made in the main text, namely that the stable stationary states of the above dynamical system are spheres in phase space formed by the max-pairing modes. In particular, the semiclassical fixed-points satisfy the equations
0 = 2λ j β * j + β j 2 k |β k | 2 + 1 − ∆ eff /u (S63)
From now on, for clarity, we will use the symbol X to denote the set of fixed points. Note that, trivially, 0 ∈ X. However, we are interested in the nonzero fixed points. Therefore, let β ss ∈ X be a nonzero fixed point, i.e. a nonzero solution to (S63). Whenever β j,ss = 0, we can divide through by β j in (S63), yielding a constraint on the phase e iθj ≡ β j,ss /|β j,ss |:
e −2iθj = − 2R 2 ss + 1 − ∆ eff /u 2λ j ,(S64)
where R 2 ss ≡ j |β j,ss | 2 . In particular, if β i,ss , β j,ss = 0 is any pair of nonzero components of β ss , then, by taking the absolute value of the above equation,
1 2λ i |2R 2 ss + 1 − ∆ eff /u| = 1 2λ j |2R 2 ss + 1 − ∆ eff /u|.(S65)
In particular, κ = 0 and so we can divide both sides by |2R 2 ss + 1 − ∆ eff /u|. Therefore, λ i = λ j . We can go even further: (S64) also means that e 2iθi = e 2iθj . In summary, for each β ss ∈ X, there exists a λ, θ such that, for all nonzero components of β ss ,
λ j = λ, β j,ss = e iθ x j , x j ∈ R,(S66)
where θ is independent of j and uniquely determined by λ in the following way:
e −2iθ = − 2R 2 ss + 1 − ∆ eff /u 2λ . (S67)
By taking the real and imaginary parts of the above equation, we also have
2λ sin 2θ = −κ/2u,(S68)
2λ cos 2θ = −(2R 2 ss + 1 − ∆/u).
(S69)
Instability of solutions with λ < λ * Let β ss ∈ X be a nonzero fixed point, and λ the corresponding singular value. Also, let λ * ≡ sup j λ j denote the maximum singular value. We will now show that if λ = λ * , then β ss is unstable. For this purpose, it will be useful to rewrite the equations of motion in a coordinate system (β 1 , . . . , β N ) that is adapted to β ss , namely such that (β 1,ss , . . . , β N,ss ) = (e iθ R, 0, . . . , 0).
Crucially, by the arguments in the preceeding subsection, we can achieve this via a rotation of the mode amplitudes β j with λ j ≡ λ:
β j = λ k =λ A j,k β k λ j = λ β j λ j = λ , A ∈ O(s λ , R),(S71)
where s λ denotes the multiplicity of the singular value λ. Since the above transformation is a symmetry of the equations of motion (S61), the equations of motion are covariant with respect to this transformation:
iu −1 ∂ t β j = 2λ j (β j ) * + β j 2 k |β k | 2 + 1 − ∆ eff /u . (S72)
Now let δβ j ≡ β j − β j,ss denote the fluctuations about β ss . Assuming these fluctuations are small, we can obtain linearized equations of motion for these fluctuations:
iu −1 ∂ t δβ j = 2λ j (δβ j ) * − λ λ j e −2iθ δβ j + 2β j,ss k ((β k,ss ) * δβ k + c.c.) + O(δβ 2 ) (S73)
where we have implicitly used (S64). We now wish to argue that, if λ = λ * , then the Hurwitz criterion fails, that is, the associated dynamical matrix contains an eigenvalue with positive real part. It suffices to examine the stability of the fluctuations δβ j for j = 1, which evolve within this linear approximation as follows:
iu −1 ∂ t δβ j = 2λ j (δβ j ) * − λ λ j e −2iθ δβ j (S74) = 2λ j (δβ j ) * − κ 2 δβ j − i ∆ − u(2R 2 ss + 1) δβ j ,(S75)
where we have shown explicitly that this is the equation of motion for a detuned parametric amplifier, with detuning modified by the presence of the Hubbard interaction u. Proceeding with the calculation, we then split the fluctuations into real and imaginary parts via e −iθ δβ j = δx j + iδy j , and obtain the equations of motion
u −1 ∂ t (δx j + iδy j ) = −2λ j e −2iθ (1 + λ/λ j )δy j + i(1 − λ/λ j )δx j .(S76)
The corresponding eigenvalues of the dynamical matrix, using (S68), are
γ ± j = −κ/2 ± (κ/2) 2 − 4u 2 (λ 2 − λ 2 j ),(S77)
Therefore, if λ j > λ, then the eigenvalue γ + j has positive real part. Therefore, if λ = λ * , then β ss is unstable. We also have a partial converse statement: if λ j < λ, then the fluctuations δβ j are stable.
Stability of solutions with λ = λ * Let β ss ∈ X be a nonzero fixed point with corresponding singular value λ * . We will now investigate the conditions under which β ss is stable. By (S77), the fluctuations δβ j for j = 2, 3, . . . N are all appropriately damped. We thus must investigate the stability of the remaining fluctuations δβ 1 , which have the following linearized equations of motion:
u −1 ∂ t (e −iθ δβ 1 ) = −4λ * e −2iθ δy 1 − 4iR 2 ss δx 1 .(S78)
The corresponding eigenvalues of the dynamical matrix, using (S68-S69), are
γ ± j = −(κ/2) ± (κ/2) 2 − 8u 2 R 2 ss (2R 2 ss + 1) + 8u∆R 2 ss .(S79)
Therefore, β ss is stable if and only if
8u 2 R 2 ss (2R 2 ss + 1) + 8u∆R 2 ss > 0,(S80)
which happens if and only if u 2 (2R 2 ss + 1) − u∆ > 0. To see whether this criterion is satisfied, we must use the fact that β ss is a fixed point in order to obtain an additional constraint on R ss . In particular, taking the absolute value squared of (S64) yields a quadratic equation for R 2 ss + 1, with two possible solutions:
2R 2 ss + 1 = ∆/u ± (2λ * ) 2 − (κ/2) 2 .(S81)
Therefore, criterion (S80) is satisfied if and only if
R ss = ∆/u − 1 + (2λ * ) 2 − (κ/2) 2 2 > 0. (S82)
Finally, let β j for j = 1, 2, . . . , s denote the eigenmodes corresponding to the maximum singular value λ * , i.e. the so-called "max-pairing modes". Since rotations A ∈ O(s, R) of the max-pairing modes are symmetries of the equations of motion, any such rotation A must send a stable fixed point to another stable fixed point. Therefore, when the inequality (S82) is satisfied, the space X stab. ⊂ X of nonzero stable fixed points is a sphere:
X stab. = e iθ x j ∈ R s : λj =λ * x 2 j = R 2 ss (S83)
In particular, when s > 1, the fluctuations tangent to X stab. are Goldstone modes, i.e. zero-modes for the linearized dynamics. We can verify this explicitly by fixing a point β ss ∈ X stab. , and expanding the linearized equations of motion for the resulting fluctuations e −iθ δβ j = δx j + iδy j , in the coordinate frame adapted to β ss :
iu −1 ∂ t (δx j + iδy j ) = −4iλ * e −2iθ δy j , j = 2, . . . , s(S84)
In particular, the real-components δx j ∈ T βss X stab. of the fluctuations are zero modes of the dynamical matrix, as was expected. form. This task is solved easily via the Segal-Bargmann representation [S4, S5] of |Ψ + , which can be calculated in a straightforward manner from (S18):
Ψ SB ( α) = 0 F 1 δ; − 1 2 ij Mij u α i α j √ N , N = ∞ l=0 Φ l ( λ; N ) (δ) l (δ * ) l . (S99)
Now, the Q-function of a pure state |Ψ is given by
Q |Ψ ( α) = π −N |Ψ SB ( α * )| 2 e −| α| 2 , where Ψ SB is the Segal-Bargmann representation of |Ψ . Therefore, (S98) yields W ss ( α) = 1 N 2 π N 0 F 1 δ; − ij M ij u α * i α * j 2 e −2| α| 2 ,(S100)
Scaling limit for the Wigner function
We now investigate the expression for the Wigner function in the high-density limit M ij /u → ∞. Note that the expression (S100) for the Wigner function is non-negative, and thus can be interpreted as a bona-fide probability measure. In the limit M ij /u → ∞, this probability measure will become supported on larger and larger regions of phase space, and so it is useful to re-scale the phase space in such a way that the resulting rescaled distribution converges to a limit.
The correct scaling turns out to be β j := √ −λ * βj , where λ * ≡ sup j λ j is the maximum singular value appearing in the Autonne-Takagi factorization M/u = V ΣV T , and
β * j ≡ N k=1 V j,k α * k (S101)
is the change-of-coordinates on the classical phase space of our system, induced by the unitary V appearing in the factorization. The explicit form of V can be recovered from the spectral decomposition of the Laplacian of our underlying connectivity graph. One can show that, at least when ∆ = 0, κ = 0 + , in which case the Bessel function sitting inside the absolute value in (S100) becomes a hyperbolic cosine, the steady-state Wigner function W ss (β) limits to a uniform distribution on the sphere S ≡ (β 1 , . . . ,β s , 0, . . . , 0) ∈ R N :
s j=1β 2 j = 1 ,(S102)
where β 1 , . . . , β s are the max-pairing modes. This can be established by expanding the Wigner function (S100) into a sum of four exponentials, and then solving the associated saddle-point equations.
C. Using the Wigner function to verify the nonthermal character of the steady state
The exact solution for the Wigner function can be used to explicitly showcase the nonequilibrium character of the steady state. One non-thermal feature of the steady state is that it does not commute with the Hamiltonian, that is, [H, ρ ss ] = 0. Therefore, the steady state cannot be written as exp(−βĤ) for some β. This can be verified efficiently when Λ = 0, in which case we can write down the closed-form solution W ss ( α) = 2 π N 0 F 1 δ; −2λ j α * 2 j 2 1 F 2 (N/2; δ, δ * ; λ 2 )
e −2| α| 2 ,(S103)
with λ = N G/U in this case corresponding to the unique singular value of the pairing matrix. To show that the steady state and the Hamiltonian do not commute, we pass to the phase-space formulation of quantum mechanics. In the phase space formulation of quantum mechanics, the noncommutativity of the steady state and the Hamiltonian is equivalent to the statement that
W ss H − H W ss = 0,(S104)
where is the Moyal product [S13], and H denotes the symmetrically-ordered (i.e. Weyl) symbol of the Hamiltonian. Because H is a polynomial, the following derivative expansion terminates at a finite order and hence can be calculated symbolically in closed form using a simple computer algebra program:
(f g)( α) = exp − 1 2 N j=1 ∂ ∂x * j ∂ ∂y j − ∂ ∂x j ∂ ∂y * j f ( x)g( y)
x= y= α (S105)
Since both the Wigner function and Hamiltonian are generically smooth functions of α, the phase-space function S104 is generically a smooth function of α. In Fig. S5 we exhibit a single point where this phase-space function is non-vanishing. Figure S7. Circuit incorporating coherent two-photon driving. The global parametric drive is supplied by a flux-tunable transmon (blue shaded region).
A. Adding a two-photon drive
To obtain the D = 0 Hamiltonian for our model, we just have to add two-photon driving to the above scheme. To do this, we play the same trick as above: this time we add a symmetric SQUID in parallel with the oscillator chain. Assuming that the junction capacitances in the SQUID are also much smaller than the capacitances present in the oscillator chain, the new interaction Hamiltonian is simplŷ
H int = E L cos ϕ jâ j + h.c. + 2E R cos Φ e 2πΦ 0 cos ϕ jâ j + h.c. ,(S111)
where E R = E R,1 , E R,2 is the Josephson energy of each junction in the symmetric SQUID. We then choose to drive the SQUID in such a way that Φ e 2πΦ 0 = π 2 − p (t).
(S112)
We also assume that E R E L . As a result, we can truncate the expansion of the SQUID potential at quadratic order, while continuing to truncate the expansion of the left junction at quartic order:
H int = −E R ϕ 2 p (t) jâ j + h.c. 2 − E L ϕ 2 2! jâ j + h.c. 2 + E L ϕ 4 4! jâ j + h.c. 4 + O(E L ϕ 6 ) + O(E R ϕ 4 ).
(S113)
By modulating the pump amplitude appropriately via
p (t) := 0 j cos(2ω j − 2ω p )t,(S114)
and going into a rotating frame with respect to the free HamiltonianĤ free = j (ω j − ω p )n j , and again assuming that the mode frequencies are all appropriately detuned from each other, we obtain the following rotating-wave Hamiltonian:Ĥ
RWA = E L ϕ 4 2 jn j 2 + j ( ω p − E L ϕ 2 )n j − E R ϕ 2 0 j (â 2 j + h.c.)(S115)
We thus obtain the exact parameters of our solvable model in the regime D = 0 (in SI units!):
U = N E L ϕ 4 2 , G = − E R ϕ 2 0 , ∆ = ω p − E L ϕ 2 , ϕ = 2 (L j /C j ) 1/4 2πΦ 0 ≡ const.(S116)
B. Effect of junction capacitances
We now compute the corrections to the Hamiltonian due to junction capacitances, and demonstrate that these capacitances can be neglected when they are much smaller than the capacitances in the oscillator chain. To simplify the analysis we assume the junction capacitances in the symmetric SQUID are the same, and define a new parameter C Σ := C L + 2C R corresponding to the sum of the three junction capacitances. The Maxwell capacitance relation of the circuit, assuming the junction capacitances are all zero, is
q = C 0˙ φ, C 0 := C 1 −C 2 0 · · · 0 −C 2 C 2 + C 3 −C 3 . . . 0 −C 3 C 3 + C 4 −C 4 . . . −C 4 . . . −C N −1 0 · · · −C N −1 C N ,(S117)
where the nodal charges q j can be expressed in terms of the charges Q j in the capacitance chain via
Q N −j = j k=0 q N −k ,(S118)
With the junction capacitances included, the new capacitance matrix is obtained in a very simple manner from C 0 :
C = C 0 + 0 · · · 0 . . . . . . 0 · · · C Σ .(S119)
To obtain the corrections to the Hamiltonian that were neglected in the previous analysis, we use the Sherman-Morrison formula, which is exact:
C −1 k,k = (C −1 0 ) k,k + C Σ (C −1 0 ) k,N (C −1 0 ) N,k 1 + C Σ (C −1 0 ) N,N ,(S120)
so that the correction to the Hamiltonian is rigorously
δĤ = C Σ 2 k,k (C −1 0 ) k,N (C −1 0 ) N,k 1 + C Σ (C −1 0 ) N,N q k q k ,(S121)
which goes to zero as C Σ (C −1 0 ) i,j → 0, as expected.
FIG. 1 .
1(a) Schematic of the model: a lattice of bosonic modes, with two-photon drives on each site (G) and on each nearest-neighbor (nn) bond (Λ). There is also single-photon loss κ on each site, and a global Hubbard (Kerr) interaction U . (b) Our exact solution allows the description of steady-state spatial correlations. Here, nn pairing corelations are plotted as a function of drive detuning ∆ and drive amplitude Λ, for a N = 225 site 2D lattice with u ≡ U/N , κ = 0.01u. One sees clearly a Mott-lobe like structure associated with multiphoton resonances.
across the phase boundary. Same parameters as in panel (a). The parameter tuning that results in a many-body pair coherent state is indicated with a star.
FIG. 4 .
4Symmetry breaking at strong driving. (a) Occupancyn k of standing wave modes in a odd-length D = 1 open chain, as the drive Λ is increased. For large drives, the modes with the largest pairing amplitudes, k = 0, π, dominate. Ntot denotes average total photon number. Parameters are ∆ = 0, κ = u/100, u ≡ U/N , N = 31. (b) Normalized density correlations between the modes at k = 0, π (red curve), and the horizontal asymptote y ≡ −1/s predicted by a uniform sphere distribution (black dashed line). Here, s = 2. Parameters same as in panel (a).
Note that the representation presented here is unitary if and only if the pairing array is unitary, i.e. (M/u) −1 = (M/u) † . In terms of the Lie algebra generators, the hTRS condition simplifies to H eff,+ |Ψ + = 0, where
which yields the solution c m ∝ (−1) m /m!(δ) m , where δ := 1 − ∆ eff /2u, and (δ) m := δ(δ + 1) · · · (δ + m − 1) denotes the Pochhammer symbol (rising factorial).B. Diagonal form of the representationBy treating the raising operatorsα † j,+ as indeterminates, one can interpret the SU (1, 1) raising operatorK + presented in Eq. (S16) as a quadratic form over the complex numbers. Its matrix representation M/u is invariant under transposition and thus admits a kind of singular-value decomposition M/u = V ΣV T (called the Autonne-Takagi factorization[S3]), with V an N × N unitary matrix and Σ = diag(λ 1 , . . . , λ N ) the matrix of singular values. Writinĝ
SU (1, 1) commutation relations, we evaluate f (n, m, l) := h 0 |K l −K n −K n +K m +K m −K l + |h 0 = l!(N/2) l (n + l)! l! (m + l)! l! (N/2) n+l (N/2) l (N/2) m+l (N/2) l = (N/2) n (N/2) m l! (n + l)!(m + l)!(N/2 + n) l (N/2 + m) l (N/2) l .
∞
l=0 Φ l ( λ; N )/(δ) l (δ * ) l . Up until this point, we have given formulae in terms of the form factors Φ n ( λ; N ) := h 0 |K †n +K n + |h 0 /(n!) 2 , which, in the nonunitary case, cannot be evaluated purely using the internal relations of the SU (1, 1) algebra. Instead, via the multinomial expansion, we are left with a sum over an exponential number of terms: Φ n ( λ; N ) = 1 n! 2 h 0
Figure S2 .
S2Bosonic pairing fluctuations near the PCS regime. (a) Here, we plot the normalized fluctuations g − − |K| 2 )/|K| 2 in the nonlocal pairing observable K := k − , for different values of loss κ ∈ {0.01U, 0.1U, U } (red curves; transparency increases with increasing loss). Note the sharp dip exactly at ∆PCS = U (2 − N )/N . Here, G = U, Λ = 0, N = 500. Inset: We plot the normalized fluctuations g (2) φ := ( â †2â2 − |φ| 2 )/|φ| 2 in the local onsite pairing φ := â 2 j . Parameters same as before. (b) Same as panel (a), but for D = 2 with N = 8 × 8 and periodic boundary conditions. Here, Λ = 2U = 4G.
Figure S3 .
S3Benchmarking DMFT using the exact solution (in D = 0). (a) Here, we plot the mean onsite occupation n as a function of detuning, for G = U, κ = 0.01U . Note that we obtain asymptotic agreement with DMFT in the limit that N → ∞. To see work where a similar kind of mean field theory was benchmarked by exact diagonalization results in a permutation-symmetric spin model, see[S8]. (b) Here, we plot the rms fluctuations inN+ using the exact solution. We observe the empirical scaling ∆N+ := N 2 + − N + 2 = O(N 1/2 ) so that ∆N+/N vanishes as N → ∞, ensuring the asymptotic convergence of the wavefunction |ΨT of the paired boson gas to the form predicted by DMFT. Here, G = U/10, κ = U/100, and ∆ = 0.
Figure S4 .
S4Confirming the location of the critical point using the exact solution. (a) Average density as a function of detuning ∆ and loss κ, with N = 500, Λ = 0, and G = U . Phase boundaries can be seen, provided that κ < κ * ≈ 4U . (b) Maximum absolute value of the susceptibility as a function of κ, for κ → κ + * . Here, Λ = 0, and G = U/10. The polynomial fit used to estimate γ is depicted (dashed blue line). (N/2U ) m Γ(δ)/Γ(m + δ) ∼ N →∞ (2U m/N − ∆ eff ) −m for the Gamma function, which yields the asymptotic estimate
Figure S5 .
S5Nonthermal nature of the steady state. The exact solution for the Wigner function can be used to symbolically check that the Hamiltonian and steady state do not commute. Here, we evaluate the phase-space commutator H Wss − Wss H in the case G = κ = U, and Λ = 0, for the cases (a) N = 1, in which case we choose to evaluate the result at the phase space point α = 1 and (b) N = 2, in which case we choose to evaluate the result at the phase space point α1 = α2 = 1. Note that, since H, Wss are both purely real, the result is always purely imaginary. As expected, the result always vanishes in the limit κ → 0 + .
thank Alexander McDonald, Qian Xu and Mark Dykman for helpful discussions. This work was supported by the Air Force Office of Scientific Research MURI program under Grant No. FA9550-19-1-0399, and the Simons Foundation through a Simons Investigator award (Grant No. 669487).Supplemental Material for
"Competition between two-photon driving, dissipation and interactions in bosonic
lattice models: an exact solution"
David Roberts 1,2 , A. A. Clerk 1
1 Pritzker School of Molecular Engineering, University of Chicago, Chicago, IL, USA
2 Department of Physics, University of Chicago, Chicago, IL, USA
(Dated: February 8, 2023)
CONTENTS
I. Augmenting the model with hopping terms
2
A. Imaginary hopping arrangements
3
B. Chirally-symmetric hopping arrangements
3
II. Hidden TRS conditions
3
A. Exact solution via representation theory
4
B. Diagonal form of the representation
5
III. Exact solution: collective moments
5
A. Unitary case
5
B. Nonunitary case
6
IV. Exact solution: local moments
7
A. Addition of angular momentum for SU (1, 1)
7
B. Unitary case
7
Higher-order moments
8
C. Nonunitary case
9
Quadratic correlation functions
10
Quartic correlation functions
10
V. Phenomenology of the exact solution
10
A. SU (1, 1) coherent states
10
B. DMFT analysis of the D = 0 model
10
DMFT expansion
10
Establishing the validity of DMFT
12
Establishing the location of the critical point
13
C. Semiclassical limit
13
Instability of solutions with λ < λ *
14
Stability of solutions with λ = λ *
15
VI. Mathematical background
17
A. Proof of the SU (1, 1) decomposition theorem
17
B. Exact solution for the steady-state Wigner function
18
Scaling limit for the Wigner function
19
C. Using the Wigner function to verify the nonthermal character of the steady state
20
VII. Experimental realization using superconducting circuits (D = 0)
21
A. Adding a two-photon drive
22
B. Effect of junction capacitances
23
References
24
arXiv:2208.05451v2 [quant-ph] 7 Feb 2023
VI. MATHEMATICAL BACKGROUNDA. Proof of the SU (1, 1) decomposition theoremWe now establish the decomposition (S35). To make our task easier, we establish the desired decomposition for a dense subspace of the Hilbert space. The corresponding decomposition for the full Hilbert space can then be proven using standard functional-analytic techniques.In particular, let V (j) be the local nonunitary SU (1, 1) representations defined in (S34). We define W (j) ⊂ V (j) to be the algebraic part of each representation, that is, the part of the representation consisting of finite linear superpositions of Fock staes:finitely many a n nonzero (S85)The above subspaces allow us to conveniently establish the following theorem:Theorem. The global nonunitary SU (1, 1) representation ⊗ j W (j) decomposes into irreducible subrepresentations as follows:where the spaces Ware defined analogously:finitely many a n nonzero .Proof. This can be proved by going to the Segal-Bargmann representation. Within this representation, a finite-boson number state is represented as a polynomial: n a nβ †nwith C[x j ] the univariate polynomial ring generated by x j . It then follows that the tensor product ⊗ j W (j) is the multivariate polynomial ring C[x 1 , . . . , x N ] generated by the indeterminates x 1 , . . . x N . Finally, creation and annihilation operators are represented by partial differential operators:We first establish our decomposition under the assumption that the singular values λ j ≡ λ are all completely degenerate, and then generalize the argument to the generic non-degenerate case λ i = λ j . In the degenerate case, the global SU (1, 1) representation takes the simple formTherefore, within the Segal-Bargmann representation, our decomposition theorem is equivalent to the following decomposition of the polynomial ring ⊗ j W (j) :whereis some orthonormal basis of the space of harmonic homogeneous polynomials of degree l. When interpreted pointwise, the above isomorphism reads as a harmonic expansion of a fixed polynomial:where the q (p) l are univariate polynomials. The decomposition (S91) is then equivalent to the statement that the above mapping, interpreted as a mapping from the RHS to the LHS, is an isomorphism of SU (1, 1)-representations. The representation (S90) acts the same on both sides, so it suffices to establish the isomorphism at the vector space level, i.e. prove that the above mapping is both injective and surjective. For injectivity, it suffices to show that the subspaces C[R 2 ]h (p) l are all mutually nonintersecting (except at {0}), which can be verified by direct calculation:To demonstrate surjectivity, we must demonstrate that, for any polynomial p on the LHS of (S92), a decomposition of the form given on the RHS exists. This seems considerably more challenging to establish. However, here we are helped by a basic fact from harmonic analysis:Lemma. let p denote a homogeneous polynomial of degree l. Thenwhere q is a homogeneous polynomial of degree l − 2, and h is a homogeneous harmonic polynomial of degree l.A proof is usually given in standard textbooks on harmonic function theory (see, e.g.[S11]). 
In any case, iterating the above lemma, we obtain, for any homogeneous polynomial p a decompositionwhere h l denotes a homogeneous harmonic polynomial of degree l, so that the expansion (S92) can be established simply by writing out the polynomial on the LHS of (S92) as a sum of homogeneous components, and then expanding the harmonic polynomials appearing on the RHS of (S95) into a basis.The preceeding arguments constitute a proof of the theorem for the degenerate case λ ≡ λ j . Therefore, all that is left is to reproduce the above proof in the non-degenerate case λ i = λ j . Luckily, the proof for the non-degenerate case follows immediately from the degenerate case. In particular, we can write the SU (1, 1) representation aŝwhere we have made the change of variables x j → y j := λ 1/2 j x j . In particular, just by making the replacements x j → y j in (S91), we obtain the desired result:with R 2 and h (p) l defined just as in (S91), but with the replacements x j → y j . This completes the proof of our theorem in the non-degenerate case.B. Exact solution for the steady-state Wigner functionWe now compute a closed form for the Wigner function W ss of the steady-state density matrixρ ss . The arguments in[S12]generalize in a straightforward manner to the N -mode case. In particular, the steady-state Wigner function for our system satisfies the identitywhere Q |Ψ+ is the Husimi-Q representation of the N -mode pure state |Ψ + . Therefore, to express the Wigner function W ss of the steady state in closed form it suffices to express the Husimi-Q representation of |Ψ + in closedOur D = 0 model is relatively easily realizable using a simple superconducting circuit with only three nonlinear elements. To see this, we first attempt to realize the D = 0 Hamiltonian without the driving. We begin by placing a chain of N uncoupled LC oscillators in series. Via examination of Kirchoff's current laws, the Hamiltonian that describes the equations of motion for the chain is the following:where here, Q j denotes the charge stored on the jth capacitor, and Φ j denotes the time-integrated voltage across each inductor. One may diagonalize this Hamiltonian by defining dimensionless creation-and annihilation operatorŝwhere here, Q zpf j , Φ zpf j are the zero-point vacuum fluctuations of the charge and phase across the inductive and capacitive branches of each LC oscillator.To add an infinite-range Bose-Hubbard interaction, we place a Josephson junction in parallel with the chain of oscillators (c.f.Figure S6). The Josephson junction, in the limit of extremely weak junction capacitance C J C j , can be modelled accurately to leading order via the following interaction Hamiltonian:where ϕ j := Φ zpf j /2πΦ 0 . We now tune the dimensionless phase fluctuations ϕ j to be parametrically small and uniform across all modes, that is ϕ j ≡ ϕ 1. Note that this can be done without constraining the resonant frequencies of each oscillator. Taylor's theorem then says thatWe then go into a rotating frame with respect to the free Hamiltonian H free := j ω jnj , where ω j is the bare resonance frequency of each LC resonator in the chain. Now we choose the resonant frequencies of each mode so that the fundamental frequency Ω of the rotating-frame Hamiltonian is much larger than the rate −1 E J ϕ 2 . In this regime, the rotating-wave approximation is valid and yieldŝ (S110)
. K Baumann, C Guerlin, F Brennecke, T Esslinger, Nature. 4641301K. Baumann, C. Guerlin, F. Brennecke, and T. Esslinger, Nature 464, 1301 (2010).
. A A Houck, H E Türeci, J Koch, Nature Phys. 8292A. A. Houck, H. E. Türeci, and J. Koch, Nature Phys. 8, 292 (2012).
[
"The Brera Multi-scale Wavelet HRI Cluster Survey: I Selection of the sample and number counts ⋆",
"The Brera Multi-scale Wavelet HRI Cluster Survey: I Selection of the sample and number counts ⋆"
] | [
"A Moretti \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n",
"L Guzzo \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n",
"S Campana \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n",
"D Lazzati \nInstitute of Astronomy\nUniversity of Cambridge\nMadingley RoadCB3 0HACambridgeUK\n",
"M R Panzera \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n",
"G Tagliaferri \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n",
"S Arena \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n",
"F Braglia \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n",
"I Dell'antonio \nBrown University\n02912ProvidenceRIUSA\n",
"M Longhetti \nINAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46\n\nMerate (LC)\n23807Italy\n"
] | [
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy",
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy",
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy",
"Institute of Astronomy\nUniversity of Cambridge\nMadingley RoadCB3 0HACambridgeUK",
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy",
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy",
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy",
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy",
"Brown University\n02912ProvidenceRIUSA",
"INAF Osservatorio Astronomico di Brera\nVia E. Bianchi 46",
"Merate (LC)\n23807Italy"
] | [] | We describe the construction of the Brera Multi-scale Wavelet (BMW) HRI Cluster Survey, a deep sample of serendipitous X-ray selected clusters of galaxies based on the ROSAT HRI archive. This is the first cluster catalog exploiting the high angular resolution of this instrument. Cluster candidates are selected on the basis of their X-ray extension only, a parameter which is well measured by the BMW wavelet detection algorithm. The survey includes 154 candidates over a total solid angle of ∼ 160 deg 2 at 10 −12 erg s −1 cm −2 and ∼ 80 deg 2 at 1.8×10 −13 erg s −1 cm −2 . At the same time, a fairly good sky coverage in the faintest flux bins (3 − 5×10 −14 erg s −1 cm −2 ) gives this survey the capability of detecting a few clusters with z ∼ 1 − 1.2, depending on evolution. We present the results of extensive Monte Carlo simulations, providing a complete statistical characterization of the survey selection function and contamination level. We also present a new estimate of the surface density of clusters of galaxies down to a flux of 3×10 −14 erg s −1 cm −2 , which is consistent with previous measurements from PSPC-based samples. Several clusters with redshifts up to z = 0.92 have already been confirmed, either by cross-correlation with existing PSPC surveys or from early results of an ongoing follow-up campaign. Overall, these results indicate that the excellent HRI PSF (5 ′′ FWHM on axis) more than compensates for the negative effect of the higher instrumental background on the detection of high-redshift clusters. In addition, it allows us to detect compact clusters that could be lost at lower resolution, thus potentially providing an important new insight into cluster evolution. | 10.1051/0004-6361:20041326 | [
"https://arxiv.org/pdf/astro-ph/0408131v2.pdf"
] | 2,632,544 | astro-ph/0408131 | f234f4ca26f8952e5fa4e0804815cb23ab8a7286 |
The Brera Multi-scale Wavelet HRI Cluster Survey: I Selection of the sample and number counts ⋆
October 14, 2018
A Moretti
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
L Guzzo
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
S Campana
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
D Lazzati
Institute of Astronomy
University of Cambridge
Madingley RoadCB3 0HACambridgeUK
M R Panzera
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
G Tagliaferri
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
S Arena
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
F Braglia
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
I Dell'Antonio
Brown University
02912ProvidenceRIUSA
M Longhetti
INAF Osservatorio Astronomico di Brera
Via E. Bianchi 46
Merate (LC)
23807Italy
The Brera Multi-scale Wavelet HRI Cluster Survey: I Selection of the sample and number counts ⋆
October 14, 2018. Received; accepted. arXiv:astro-ph/0408131v2 22 Sep 2004. Astronomy & Astrophysics manuscript no. 1326.txt (DOI: will be inserted by hand later). Key words: X-rays: galaxies: clusters; surveys. ⋆ Partially based on observations taken at ESO and TNG telescopes.
We describe the construction of the Brera Multi-scale Wavelet (BMW) HRI Cluster Survey, a deep sample of serendipitous X-ray selected clusters of galaxies based on the ROSAT HRI archive. This is the first cluster catalog exploiting the high angular resolution of this instrument. Cluster candidates are selected on the basis of their X-ray extension only, a parameter which is well measured by the BMW wavelet detection algorithm. The survey includes 154 candidates over a total solid angle of ∼ 160 deg² at 10^−12 erg s^−1 cm^−2 and ∼ 80 deg² at 1.8 × 10^−13 erg s^−1 cm^−2. At the same time, a fairly good sky coverage in the faintest flux bins (3−5 × 10^−14 erg s^−1 cm^−2) gives this survey the capability of detecting a few clusters with z ∼ 1−1.2, depending on evolution. We present the results of extensive Monte Carlo simulations, providing a complete statistical characterization of the survey selection function and contamination level. We also present a new estimate of the surface density of clusters of galaxies down to a flux of 3 × 10^−14 erg s^−1 cm^−2, which is consistent with previous measurements from PSPC-based samples. Several clusters with redshifts up to z = 0.92 have already been confirmed, either by cross-correlation with existing PSPC surveys or from early results of an ongoing follow-up campaign. Overall, these results indicate that the excellent HRI PSF (5″ FWHM on axis) more than compensates for the negative effect of the higher instrumental background on the detection of high-redshift clusters. In addition, it allows us to detect compact clusters that could be lost at lower resolution, thus potentially providing an important new insight into cluster evolution.
Introduction
Clusters of galaxies represent the largest collapsed objects in the hierarchy of cosmic structures, resulting from the growth of fluctuations lying on the high-density tail of the matter density field (e.g. Peacock 1999). As such, their number density and evolution are strongly dependent on the normalization of the power spectrum and on the value of the density parameter Ω_M (e.g. Borgani & Guzzo 2001; Rosati et al. 2002). In addition, clusters are usually considered as "simple" systems, where the physics involved in turning "mass into light" is possibly easier to understand, compared to the various complex processes connected to star formation and evolution in galaxies. In particular in the X-ray band, where clusters can be defined and recognized as single objects (not just as a mere collection of galaxies), observable quantities like the X-ray luminosity L_X and temperature T_X show fairly tight relations with the cluster mass (e.g. Evrard et al. 1996; Allen et al. 2001; Reiprich & Böhringer 2002; Ettori et al. 2004). A full comprehension of these scaling relations requires more ingredients than a simple heating during the growth of fluctuations (Kaiser 1986; Helsdon & Ponman 2000; Finoguenov et al. 2001; Borgani et al. 2004). However, their existence and relative tightness give clusters a specific role as probes of the cosmological model, providing us with a way to test fairly directly the mass function (e.g. Böhringer et al. 2002; Pierpaoli et al. 2003) and the mass power spectrum (Schuecker et al. 2003), via, respectively, the observed cluster X-ray luminosity function (XLF) and clustering. In addition, and equally important, clusters at different redshifts provide homogeneous samples of essentially coeval galaxies in a high-density environment and give the possibility of studying the evolution of stellar populations (e.g. Blakeslee et al. 2003; Lidman et al. 2004).
X-ray based cluster surveys, in addition to a fairly direct connection of the observed quantities to model (mass-specific) predictions, have one further, fundamental advantage over optically-selected catalogues: their selection function is well-defined (being essentially that of a flux-limited sample) and fairly easy to reconstruct. This is a crucial feature when the goal is to use these samples for cosmological measurements that necessarily involve a precise knowledge of the sampled volume, as is the case when computing first or second moments of the density field. X-ray surveys in the "local" Universe (z ≲ 0.1), stemming from the ROSAT All-Sky Survey (RASS, Voges et al. 1999), have been able to pinpoint quantities like the cluster number density to high accuracy. The REFLEX survey, in particular, has yielded the currently most accurate measurement of the XLF, in substantial agreement with other local estimates (Ebeling et al. 1997; De Grandi et al. 2001). These results provide a robust z ∼ 0 reference frame to which surveys of distant clusters can be safely compared in search of evolution.

Fig. 1. The spatial distribution (equatorial coordinates, Aitoff projection) of the 914 HRI pointings used for the BMW-HRI cluster survey. The origin of right ascension is in the center of the plot and the grid is in steps of 3 h in RA and 30° in DEC. The empty area corresponds to the |b_II| < 20° zone of avoidance, while the boxes mark the Magellanic Clouds and Virgo cluster areas, which were also excluded from the survey. Different symbols indicate different ranges of exposure time, as explained.
Recent X-ray searches for serendipitous high-redshift clusters have been based mostly on the deeper pointed images collected with the ROSAT PSPC instrument (RDCS: Rosati et al. 1995; SHARC: Collins et al. 1997; 160 Square Degrees: Vikhlinin et al. 1998a; WARPS: Perlman et al. 2002) or on the high-exposure North Ecliptic Pole area of the RASS (NEP); a deeper search for massive clusters in the overall RASS is also being carried out (MACS, Ebeling et al. 2001a). Results from these surveys consistently show a lack of evolution of the XLF for L < L*_X ≃ 3 × 10^44 erg s^−1 out to z ∼ 0.8, pointing to low values of Ω_M under reasonable assumptions on the evolution of the L_X − T_X relation (see Rosati et al. 2002 for a review). At the same time, however, they confirm the early findings from the Einstein Medium Sensitivity Survey (EMSS, Gioia et al. 1990; Henry et al. 1992) of a mild evolution of the bright end (Vikhlinin et al. 1998a; Nichol et al. 1999; Gioia et al. 2001; Mullis et al. 2004). In other words, there is an indication that above z ∼ 0.6 one finds fewer very massive clusters, likely indicating that beyond this epoch they were still to be assembled from the merging of smaller mass units. These conclusions, however, are still based on a rather small number of high-redshift clusters. Currently, we know ∼ 15 X-ray confirmed clusters above z = 0.8, with only 5 so far detected above z = 1, all of which are at z < 1.3. In addition, virtually all current statistical samples of distant clusters have been selected from X-ray images collected with the same, low angular resolution ROSAT-PSPC instrument. Clearly, the need for more samples of high-redshift clusters, possibly selected from independent X-ray imaging material, is compelling.
As a first contribution in this direction, during the last four years we have constructed a new sample of distant cluster candidates based on the still unexplored ROSAT-HRI archive, the BMW-HRI Cluster Survey. While serendipitous searches are already focusing on the fresh data being accumulated in the archives of the new powerful X-ray satellites XMM-Newton and Chandra (see e.g. the pioneering work by Boschin 2002), our work shows that a remarkable source of high-redshift clusters -the HRI archive -has so far been neglected. In this paper we discuss in detail the selection process, the properties of this catalogue, its completeness function and contamination. We then compute the sky coverage, number counts and expected redshift distribution of BMW-HRI clusters. The paper is organized as follows. Sect. 2 describes the general features of the HRI data and the field selection; Sect. 3 discusses the cluster detection and characterization; Sect. 4 presents the results of extensive Monte Carlo simulations, performed in order to understand the statistical properties of the cluster sample, in particular its completeness and contamination level; Sect. 5 uses all this information to derive the survey sky-coverage as a function of source flux and extension, and to compute the survey expected redshift distribution and mean surface density; Sect. 6 presents a few examples of distant clusters already identified in the BMW-HRI survey, while the last section summarizes the main results obtained in the paper.
Survey Description
The HRI instrument
The High-Resolution Imager (HRI) was the secondary instrument on board the ROSAT satellite, with technical features quite different from and complementary to the companion PSPC, including in particular a much better spatial resolution. The core of the HRI is a micro-channel plate detector with an octagon-like shaped field of view (with ∼ 19′ radius) that reveals single X-ray photons, providing information on their positions and arrival times. The HRI Point Spread Function (PSF) as measured on-axis is about 5″ FWHM, i.e. a factor of ∼ 4 better than that of the PSPC (for completeness, note that this is still a factor of ∼ 5 worse than that of the Chandra X-ray Observatory, whose archive represents a further source of high-resolution data that is currently also under scrutiny using our algorithms; Romano et al., in preparation). On the other hand, the HRI is less efficient than the PSPC (a factor of 3 to 8 for a plausible range of incident spectra) and has a higher background. This consists of several components: the internal background due to the residual radioactivity of the detector (1-2 cts s^−1), the externally-induced background from charged particles (1-10 cts s^−1) and the X-ray background (0.5-2 cts s^−1).

The HRI covers the energy range [0.1-2.4] keV, divided into 16 Pulse Height Analyzer (PHA) channels, which provide very crude spectral information (Prestwich et al. 1996). The HRI background is highest in the first few (1-3) PHA channels and, at variance with the ROSAT PSPC, it is dominated by the unvignetted particle background. As shown by sky calibration sources, most of the source photons instead arrive in a PHA channel between 3 and 8 (David et al. 1998). A priori, the high background of the HRI can be a serious problem for cluster searches. The low surface brightness of these diffuse objects is indeed further suppressed as a function of redshift, due to the cosmological dimming ∝ (1 + z)^4, and rapidly drops below the background. Coupled to the lower HRI sensitivity, this explains why all ROSAT serendipitous cluster searches have so far concentrated on the PSPC archive (see Sect. 1). We shall show here how these early, pessimistic conclusions are fortunately not fully correct. As detailed in Campana et al. (1999, C99 hereafter), in order to minimize the background and increase the signal-to-noise (S/N) ratio of X-ray sources, especially low-surface-brightness ones, the BMW wavelet analysis has been restricted to PHA channels 2-9 (see also David et al. 1998). This range reduces the detector background by about 40% with a minimum loss of cosmic X-ray photons (< 10%; David et al. 1998). This is certainly one key feature in improving the detection of clusters of galaxies with these data, thus allowing studies to reap the benefit of the excellent resolution of this instrument.
The surveyed area
The overall BMW-HRI serendipitous catalogue (Panzera et al. 2003, P03 hereafter; Lazzati et al. 1999, L99 hereafter; C99) is based on the analysis of 4298 observations, i.e. the whole HRI archive after excluding calibration observations, fields pointed at supernova remnants and some other problematic pointings. The selection of cluster candidates imposes a number of extra selections, which are discussed here. We first selected only high Galactic latitude fields (|b| > 20°) with exposure times larger than 1000 seconds, excluding also the Magellanic Clouds and Virgo Cluster regions, defined in Table 1. Moreover, we excluded all pointings on star clusters, clusters and groups of galaxies and, obviously, nearby Messier and NGC galaxies. In all these cases the target tends to fill the field of view of the instrument, thus biasing any search for serendipitous sources in the surrounding area. For this reason, we further visually inspected all the remaining fields using panoramic (40′ × 40′) DSS2 images with the HRI X-ray contours overlaid, so as to ascertain whether the central target could in any way influence the searched region (as e.g. in the case of a spiral galaxy with faint arms extending out of the central 3-arcmin radius area). All fields with such "problematic" targets were excluded from the cluster candidate catalogue.
One further problem needing a careful treatment concerns multiple observations of the same sky areas. All these multiple pointings have been analyzed separately by the BMW-HRI general survey: consequently, the catalogue lists every source detected in each single observation, including multiple detections of the same source. To simplify the treatment of such cases, when multiple observations looked "exactly" at the same region of sky, defined as images with nominal aim points separated by less than 30″, we discarded all but the deepest exposure. However, if the separation was larger than this, we kept all the (usually overlapping) fields. The goal here was to maximize the survey area, which is the main factor in finding distant luminous clusters. These overlaps have been properly accounted for in the computation of the survey sky coverage (see Sect. 5).

Within each selected field we then considered only sources detected in the area comprised between 3′ and 15′ off-axis angle. This excludes the central part (< 3′) of the field of view, which is normally where the target is pointed, and the very external part (> 15′), where the sensitivity drops and the PSF worsens. We ended up with a grand total of 914 HRI observations (Fig. 1) with exposure times ranging from 1 ks to 204 ks (Fig. 2), for a total surveyed area of 160.4 square degrees, ∼ 7% of which were observed twice or more.
Selection of Cluster Candidates
Source Extension Criterion
The BMW detection is based on a wavelet transform (WT) algorithm for the automatic detection and characterization of sources in X-ray images. It is fully described in L99 and has been developed to analyze ROSAT HRI images, producing the BMW-HRI catalogue (C99, P03). Candidate sources are identified as local maxima above a given threshold in wavelet space; the preliminary product of the detection procedure is a source list with a rough determination of the position (the center of the pixel with the highest coefficient), source size (the scale of the WT at which the signal-to-noise ratio is maximized) and total number of photons (the value of the maximum WT coefficient). Since the wavelet scale steps are discrete, the final catalogue parameters are a refinement of these values and are obtained through χ² minimization with respect to a Gaussian model in WT space (for all the details of the fit procedure in the WT space see L99). In particular, the BMW catalogue source extension parameter (hereafter W_clu) is defined as the width of the best-fitting Gaussian.
The characterization procedure of extended sources has been explained in C99, but we briefly repeat it here for the convenience of the reader. First, we considered a subsample of sources detected only in the observations that have a star as target (ROR number beginning with 2). This resulted in 6013 sources detected over 756 HRI fields (Fig. 3). The distribution of source extensions was divided into bins of 1′, as a function of the source off-axis angle, and within each bin a σ-clipping algorithm on the source extension was applied: the mean and standard deviation in each bin are calculated, and sources at more than 3 σ above the mean value are discarded. In this way we determine the mean instrumental extension W_p(θ) and standard deviation σ_p(θ) of point-like sources observed at different θ. As shown in Fig. 3, this provides us with a threshold for defining truly extended sources (Rosati et al. 1995). We define as candidate clusters all sources that lie above the 3 σ_p(θ) corridor, including their 1 σ_clu error bar, i.e. those sources for which, at their observed off-axis angle θ, one has
W_clu − σ_clu > W_p(θ) + 3 σ_p(θ) .   (1)
This combined requirement on the distance from the point-like source locus, and on the intrinsic error in the source extension, roughly corresponds to a ∼ 3.5 σ confidence level for the extension classification.
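To make the criterion concrete, the following minimal Python sketch implements the σ-clipping calibration and the test of Eq. (1); the binning choices and function names are illustrative assumptions of this sketch, not the actual BMW pipeline.

```python
import numpy as np

def point_source_locus(w, theta, bin_width=1.0, n_clip=3.0):
    """Mean instrumental extension W_p(theta) and scatter sigma_p(theta)
    of point-like sources, via sigma-clipping in 1-arcmin off-axis bins
    (an illustrative re-implementation of the procedure in the text)."""
    edges = np.arange(theta.min(), theta.max() + bin_width, bin_width)
    centers, w_p, s_p = [], [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = w[(theta >= lo) & (theta < hi)]
        if sel.size < 2:
            continue
        while True:  # discard sources more than n_clip sigma above the mean
            m, s = sel.mean(), sel.std()
            kept = sel[sel <= m + n_clip * s]
            if kept.size == sel.size:
                break
            sel = kept
        centers.append(0.5 * (lo + hi)); w_p.append(m); s_p.append(s)
    return np.asarray(centers), np.asarray(w_p), np.asarray(s_p)

def is_extended(w_clu, sigma_clu, theta, centers, w_p, s_p):
    """Eq. (1): W_clu - sigma_clu > W_p(theta) + 3 sigma_p(theta)."""
    wp = np.interp(theta, centers, w_p)
    sp = np.interp(theta, centers, s_p)
    return (w_clu - sigma_clu) > (wp + 3.0 * sp)
```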
Significance Criterion
To reduce the contamination level of the catalogue due to spurious detections, we also limit the current cluster selection to sources with detection significance larger than 4 σ (see Sect. 4.3). The source significance p_s is defined as the confidence level at which a source is not a chance background fluctuation, given the background statistics and the specific field exposure time. This quantity is assessed via the signal-to-noise ratio in wavelet space. For each wavelet scale, the noise level is computed through numerical simulations of blank fields with the corresponding background (L99), while the signal is the peak of the wavelet coefficients corresponding to the source. In order to make this significance more easily comparable to other methods, confidence intervals are expressed in units of the standard deviation σ for which a Gaussian distributed variable would give an equal probability of spurious detection (68%: 1 σ; 95%: 2 σ, etc.). So, the values of our figure of merit p_s represent the number of σ's corresponding to the confidence level of that specific source. Note that the signal-to-noise ratio in wavelet space can differ from that in direct space for several reasons: i) the background subtraction is more accurate and locally performed; ii) the high frequencies are suppressed, so that a correlated count excess gives a higher significance than a random one with the same number of counts; iii) the exposure map can be incorporated in the wavelet space, so that artifacts do not affect the source significance.
In the 914 HRI fields considered we have 194 sources meeting these requirements. Among these, we discovered that 22 were spurious sources caused by a hot pixel in the detector (not identified in the data reduction); this, coupled with the dithering of the satellite, produces a high signal-to-noise extended source of ∼ 2′ typical dimension, always located at the same detector coordinates.
We inspected directly each of the remaining 172 sources on DSS2/X-ray overlays and cross-correlated their positions with the NED database. As a result, we removed 18 of them from the catalogue as clearly associated with a nearby galaxy, thus ending up with a final list of 154 cluster candidates. This represents our master statistical sample for follow-up. Clearly, even at p_s < 4 σ a significant number of sources are truly clusters. At this level, however, the contamination rises to ∼ 40% (see Sect. 4.3), thus making the optical identification of real clusters significantly less efficient in terms of telescope time.
Re-estimation of cluster parameters
At the end of the analysis pipeline, the BMW detection algorithm yields a catalogue with 80 parameters for each source, including the count rate, flux and extension (see P03 and Sect. 3.1). These values are reliable for point sources, but not for sources with a surface brightness profile (SBP) different from the instrumental PSF, as is the case for clusters. The observed cluster profiles are the result of the convolution of the intrinsic SBP with the PSF of the instrument. The BMW algorithm calculates the source flux and extension by means of a χ² minimization with respect to a Gaussian model in wavelet space (L99, see also the previous section). Using a more appropriate (non-Gaussian) model within the pipeline is impractical. First, in the general case a cluster model profile (e.g. a classical β-model or King profile, see Sect. 4) convolved with our Gaussian-based wavelet filter (L99) cannot be handled analytically, so that one of the simplifying features of the analysis pipeline is lost. Second, a direct profile fit to the data gives reliable results (in terms of χ²) only for the few most luminous objects in the sample, suggesting that an integral approach is the best way to proceed.
Fig. 4. Integral, background-subtracted flux growth curve of a candidate cluster (filled circles), plotted together with its 1 σ error corridor (dotted lines) and the best-fitting β-model (grey dashed curve). The total flux of the source is computed as the asymptotic value of the growth curve (horizontal dashed line). From the total flux, the core radius is then derived by comparison to a grid of β profiles convolved with the instrumental PSF at the specific off-axis angle. This is a typical source in our catalogue, detected at ∼ 7′ off-axis with 75 net counts in a 10 ks exposure, corresponding to a flux of ∼ 3 × 10^−13 erg s^−1 cm^−2.

For this reason, we adopted a simple and robust "growth-curve" technique, as applied to X-ray data by Böhringer et al. (2001), to re-measure the source total flux. Fig. 4 visually illustrates this technique. The source total flux is defined as the asymptotic value of the integral SBP, computed within increasing circular apertures on the background-subtracted images, when this reaches a horizontal "plateau". The background-subtracted images are obtained from the original images by the subtraction of the background map (see Sect. 4). Technically, the plateau is defined by studying the local derivative of the growth curve, comparing at each step the flux variation within adjacent aperture radii to the expected mean error. To eliminate possible contamination from nearby sources, we first mask all other sources from the BMW-HRI general catalogue. Finally, we calculate the conversion from count rates to fluxes assuming a bremsstrahlung spectrum with temperature T = 5 keV and correcting for Galactic absorption using the appropriate column density at the source position (Dickey & Lockman 1990). Once the total flux of the source is established, we estimate its core radius by fitting its integral profile, adopting for the differential profile a classical β-model (Cavaliere & Fusco-Femiano 1976), with fixed β = 2/3:
I(r) = I_0 [1 + (r/r_c)²]^(−3β + 1/2) = I_0 [1 + (r/r_c)²]^(−3/2) ,   (2)
where r in this case describes an azimuthal quantity, with r_c being the angular core radius. Choosing a fixed value for β is inevitable, given the low number of photons generally characterizing our sources, which does not allow a further free fit parameter. In this way, we obtain an estimate of the source extensions. This is a very important quantity, given that, as will be clear in the following, the statistical properties of the survey strongly depend on the extension and surface brightness values of the sources. As described in the next section, we assess these properties by means of Monte Carlo simulations: in these tests we coherently assume for the input simulated clusters the fixed slope β = 2/3 and variable core radii.
The actual observed SBP is then described by the convolution of the β profile with the instrumental PSF (which depends on the off-axis position). In practice, for each cluster candidate (i.e. total flux), we compute a numerical grid of expected SBPs for different values of the core radius, ranging from 2″ to 120″. We then measure the best-fitting core radius by minimizing the χ² between the observed and computed profiles.
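As an illustration of this two-step measurement, a minimal sketch follows. The plateau rule is deliberately simplified, and the `model_curve` callable (the PSF-convolved β-model integral profile, normalized to the measured total flux) and the per-aperture error array are assumed interfaces of this sketch, not the survey software.

```python
import numpy as np

def growth_curve(image, x0, y0, radii):
    """Background-subtracted flux integrated within increasing circular
    apertures (other catalogue sources assumed already masked out)."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - x0, yy - y0)
    return np.array([image[r <= ap].sum() for ap in radii])

def total_flux(curve, errors):
    """Define the plateau as the first aperture where the local flux
    increment drops below the expected mean error (simplified rule)."""
    increments = np.diff(curve)
    flat = np.flatnonzero(increments < errors[1:])
    return curve[flat[0]] if flat.size else curve[-1]

def fit_core_radius(obs_curve, errors, radii, model_curve,
                    rc_grid=np.linspace(2.0, 120.0, 60)):
    """Chi-square scan over core radii for a beta-model with beta = 2/3;
    model_curve(rc, radii) returns the model integral profile."""
    chi2 = [np.sum(((obs_curve - model_curve(rc, radii)) / errors) ** 2)
            for rc in rc_grid]
    return rc_grid[int(np.argmin(chi2))]
```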
The uncertainties in the measurement of these parameters are estimated via a further specific Monte Carlo test, whose results are shown in Fig. 5. The left panel of this figure shows that for sources with S/N > 4 our flux measurements are systematically under-estimated by ∼ 12%. This is expected, as our fluxes are integrated out to a certain radius, fixed by the growth-curve plateau. Ideally, the total flux could be recovered by assuming a model profile and extrapolating it to infinity. We prefer not to do it, as this would be model-dependent and would result in larger errors (Vikhlinin et al. 1998a); rather, we include this uncertainty in the error budget. Fig. 5 also shows that, as one expects, the flux measurement errors depend on the source S/N ratio. We have therefore divided the cluster sample into S/N bins and assigned to each source the corresponding relative error from the simulated sample. One may wonder why the behaviour of these errors is here studied as a function of S/N, rather than of the (almost equivalent) p_s confidence level defined in the previous section. The reason is that, unlike source detection, the measurement of flux and core radius is performed in real count space (not in wavelet space).

Fig. 5. Relative errors in the flux (left) and core radius (right) estimation, at different signal-to-noise levels, derived from the Monte Carlo simulations of the cluster sample. They are calculated as the ratio (input − output)/input. The filled circles give the mean systematic relative error in each bin and its random standard deviation.
The right panel of Fig. 5, on the other hand, gives the typical errors in the estimates of cluster core radii. The way the values of β are chosen in the simulations requires some explanation, as it represents a subtle point. In fact, if one constructed input clusters with a fixed β = 2/3, the output of the simulation would describe the amplitude of statistical errors only (related to low S/N, etc.). We know that, while β = 2/3 is a very good approximation for most cases, clusters show a distribution of values around this (e.g. Ettori et al. 2004). This introduces an additional source of error when we try to measure the cluster core radius with our fixed-β profile. In the simulation, therefore, β was distributed according to a Gaussian with mean value 0.67 and standard deviation 0.05. With this choice, the relative error on r_c turns out to be ∼ 20%, with negligible systematic errors, as shown in Fig. 5.

Fig. 6. Angular sizes (core radii) versus measured X-ray fluxes for the p_s > 4 BMW-HRI candidate clusters. While an intrinsic correlation is expected between these two quantities, the slope observed here is probably influenced by the lack of sources at high fluxes, owing to the small volume sampled.

One should consider, however, that the lack of sources at high fluxes, due to the small survey area, certainly contributes to making the correlation steeper than it probably is intrinsically. A discussion of these properties and of the distribution of angular sizes in the BMW-HRI catalogue goes beyond the aims of this paper and will be discussed elsewhere (Arena et al., in preparation).
The typical core radius of BMW-HRI cluster candidates measured from this figure is ∼ 20″. However, the high resolution of HRI data allows us to find a number of very small sources, characterized by core radii as small as ∼ 5″. We already know from our CCD optical follow-up that several of these compact sources are not spurious. Fig. 7 shows a CCD image in the Gunn-r filter (taken with EFOSC2 at the ESO 3.6 m telescope) and the corresponding HRI SBP of BMW145754.0-212458, a source with angular core radius 6″ ± 2″. This candidate has been positively identified with a cD-dominated group at redshift 0.312 and provides one key example of the specific advantage of the BMW-HRI catalogue with respect to previous PSPC-based surveys. Note that at the off-axis angle of this source (∼ 6′), the instrument PSF has a half energy diameter of only 8″.
Monte-Carlo Simulations
To quantify the limitations and assess possible systematic biases in the detection procedure, we performed extensive Monte Carlo simulations under realistic conditions. First, we embedded simulated clusters and point sources within realistic backgrounds and ran the detection pipeline, in order to evaluate the completeness and the characterization uncertainties of the catalogue. Second, we ran the detection procedure on simulated fields containing pure background, thus estimating the contamination of the catalogue due to spurious detections of background statistical fluctuations. In this section we describe the simulation results in some detail.
The simulated data
Our simulated sample consists of 12 sets of HRI fields, each set characterized by a different exposure time evenly sampling (on a logarithmic scale, see the top-right panel of Fig. 10) the range of the 914 observations which form the survey. Specific predictions for the whole set of actual exposure times are then obtained by interpolating through these 12 templates.
For each template field we used the background maps built for the BMW general catalog as described in detail in Sect. 3.3 of C99. The procedure essentially uses the ESAS software (Snowden 1994), which produces a vignetted sky background map and a background detector map. The total background map is then obtained by summing the detector and sky maps. Finally, for each simulated field a Poissonian realization of such a background is built. Fig. 8 provides an example of how the simulated background pixel statistics reproduces the observed one in a corresponding HRI image.
Fig. 8. Comparison of the global pixel statistics of a real (black line) and a simulated image (grey line), for a typical exposure time of 20,000 sec. X-ray photons have been binned in pixels of 5″ size. The histograms show how the background statistical properties of the simulated image accurately match those of the real one. The simulated background is built as a Poissonian realization of the background map, and the differences that can be noticed for pixels with more than five counts are due to the effect of the simulated (extended) sources, which are overabundant in the simulated image.

Fig. 7. A specific example showing one of the major advantages of the BMW-HRI survey, i.e. its ability to detect as extended (and thus identify as groups/clusters) also sources with a very small core radius. This object, BMW145754.0-212458, has a core radius of 6″ ± 2″ and would not be detected as extended in a PSPC image. The optical follow-up confirmed this source as a real cD-dominated cluster at redshift z = 0.312. In the left panel we show a CCD r-Gunn image, with HRI X-ray isophotes overlaid. The X-ray image has been smoothed with a Gaussian filter (σ = 10″) and the isophotes correspond to 1, 1.3, 1.7, 2 standard deviations over the background (after smoothing). The right panel shows the measured growth curve (filled circles). The black solid line gives the integral best-fitting PSF-convolved β-model, while the grey dashed line shows the instrumental PSF at the given off-axis angle. The horizontal dashed line defines the total flux of the source, as identified by the growth-curve plateau.

Point-like sources are simulated as pure PSF images with different normalizations, constructed by means of a ray-tracing technique (see C99), and their flux distribution is generated so as to reproduce the observed X-ray number counts in the [0.5-2] keV band (Moretti et al. 2003). In this way, we build a fully realistic X-ray sky, both in terms of background and surface density of point-like sources, in which to embed the simulated clusters. To construct these, we distribute counts according to a β-model surface brightness profile (Eq. 2), with values of r_c randomly distributed in the range 5″-50″, convolved with the PSF at the specific off-axis angle where the cluster is positioned. Fig. 9 shows a visual comparison of a true cluster in the survey and a simulated one with the same core radius and integral flux. In each simulated image, we add three such extended sources, with fluxes randomly chosen in the range 10^−14 − 10^−11 erg s^−1 cm^−2. All sources (both extended and point-like) are located at random positions over the whole (θ = [3′ − 15′]) field of view. Overall, in a typical 20 ksec image this results in a mean number of 15 sources with more than 10 counts. The rather high number of simulated clusters allows us to reach a statistically significant number of tests in a reasonable number of simulations, while still avoiding detection biases due to source confusion problems. To accumulate a statistically solid test sample, the procedure is repeated 1000 times for each of the 12 template exposure times.
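A stripped-down sketch of how one of these simulated frames could be assembled is given below: a Poisson realization of the background map plus β-model clusters injected by rejection sampling. The PSF convolution is omitted and the counts-to-flux conversion is a placeholder constant; all names and numbers here are illustrative, not the values used for the actual test sample.

```python
import numpy as np
rng = np.random.default_rng(1)

def simulate_field(background_map, n_clusters=3, rc_range=(5.0, 50.0),
                   logflux_range=(-14.0, -11.0), counts_per_flux=3e14):
    """One simulated frame: Poisson background plus beta-model clusters
    (beta = 2/3) at random positions; counts_per_flux is a placeholder
    conversion factor and the PSF convolution is skipped for brevity."""
    image = rng.poisson(background_map).astype(float)
    ny, nx = image.shape
    for _ in range(n_clusters):
        rc = rng.uniform(*rc_range)                  # core radius [pixels]
        flux = 10.0 ** rng.uniform(*logflux_range)   # erg s^-1 cm^-2
        n_cts = rng.poisson(flux * counts_per_flux)  # expected net counts
        x0, y0 = rng.uniform(0, nx), rng.uniform(0, ny)
        placed = 0
        while placed < n_cts:  # rejection sampling of the beta profile
            x, y = rng.uniform(0, nx), rng.uniform(0, ny)
            r2 = ((x - x0) ** 2 + (y - y0) ** 2) / rc ** 2
            if rng.uniform() < (1.0 + r2) ** -1.5:   # I(r)/I_0 for beta = 2/3
                image[int(y), int(x)] += 1.0
                placed += 1
    return image
```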
The catalogue completeness
As discussed in the introduction, one of the advantages of X-ray surveys of clusters is the possibility to properly quantify their selection function S_F, i.e. the probability that a source with a given set of properties (e.g. flux, extension) is detected in the survey. The selection function S_F fully characterizes the sample completeness and is the key ingredient for estimating its sky coverage.
In the ideal case of a uniform observation, given a value n_σ for the detection threshold in S/N ratio, the corresponding minimum detectable count rate (or flux limit) can be expressed by the following formula (see e.g. ROSAT Observer Guide 1992):
I_min = (n_σ b^(1/2) d_cell / f_cell) · (1/√t_exp) + C_min / t_exp .   (3)
Here d_cell is the linear dimension of the detection cell, f_cell is the fraction of signal contained in the cell, b is the background count rate and C_min is the minimum number of source counts needed for detection when the background is negligible. However, in the low S/N regime, where the source flux is comparable to the background noise, the concept of flux limit becomes somewhat arbitrary: sources with fluxes lower than the theoretical sensitivity limit can be detected with non-null probability. This is due to the fact that faint sources can be detected if they sit on a positive background fluctuation, while they are missed in the opposite case. For this reason, the concept of a sharp flux limit needs to be generalized with a statistical approach, in which the selection function describes the probability for a source with given properties to be detected. Given the characteristics of X-ray telescopes and specifically of the HRI data, this probability depends in general on: i) the exposure time of the observation; ii) the source position within the field of view (moving from the center to the edges of the images, the X-ray mirror effective area decreases and the spatial resolution worsens); iii) the source extension.
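For orientation, Eq. (3) is straightforward to evaluate. In the sketch below all parameter values are illustrative placeholders rather than the calibrated HRI numbers, which are not quoted in this section.

```python
import numpy as np

def i_min(t_exp, n_sigma=4.0, b=5e-7, d_cell=5.0, f_cell=0.5, c_min=10.0):
    """Minimum detectable count rate, Eq. (3). Placeholder parameters:
    b background count rate per unit cell area, d_cell cell size,
    f_cell enclosed signal fraction, c_min zero-background threshold."""
    return n_sigma * np.sqrt(b) * d_cell / f_cell / np.sqrt(t_exp) \
        + c_min / t_exp

# relative sensitivity gain going from a 5 ks to a 40 ks exposure
print(i_min(5e3) / i_min(4e4))
```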
Assuming that extended sources can be sufficiently well described by the same β-model profile with different core radii, the whole cluster survey can be characterized by a selection function S_F = S_F(f, θ, t_exp, r_c), where f is the flux, θ is the off-axis angle, r_c is the apparent extension of the source and t_exp the exposure time of the observation. Again, the only safe way to estimate the value of S_F within this multi-parameter space is by means of Monte Carlo simulations.
To this end, we first grouped the input simulated sources into a grid defined by 3 different angular extension ranges (r_c = [5″ − 20″], r_c = [20″ − 35″], r_c = [35″ − 50″]), together with 20 logarithmic bins in count rate. We then considered, in each of the 12 simulated sets of images, 3 different radial areas, defined by the annuli between [3′ − 9′], [9′ − 12′], and [12′ − 15′] off-axis angles. At this point, we could compute the actual values of the selection function S_F by measuring the ratio of the number of detected to the number of input simulated sources within each bin of the 4-dimensional hyper-space defined by exposure time t_exp, flux f, core radius r_c and off-axis angle θ. The four panels in Fig. 10 summarize the results of this procedure. The top-left panel considers a simulated field with t_exp = 40 ksec and the annulus between 9′ and 12′. The points and curves show how the "projected" S_F for different source extensions and as a function of flux is well fitted by a Fermi-Dirac function:

S_F(f, θ, r_c) = 1 / [e^((f_50 − f)/c) + 1]   (4)
where f_50 and c are the two free parameters of the functional form. The parameter f_50 corresponds to the flux where S_F equals 0.5, or in other words, where the detection probability equals 50%, while c simply describes how sharp the cut-off in the selection function is. Considering different exposure times and off-axis angles, one gets analogous plots, with different values of the pair (f_50, c). By constructing a 3-dimensional grid of the best-fit (f_50, c) values over (a) 12 different exposure times, (b) 3 off-axis angles and (c) 4 core radii, we then obtain the general expression of the selection function S_F over the whole parameter space (see panels in Fig. 10). In this way, for a source with a given core radius within the range 5″-50″, we have the detection probability for an arbitrary exposure time observation (ranging from 1 ksec to 200 ksec), at an arbitrary position within the field of view (3′ < θ < 15′) and for an arbitrary flux (ranging from 10^−15 to 10^−11 erg s^−1 cm^−2). For the few clusters with core radius larger than 50″, S_F is safely obtained by linear extrapolation, according to the bottom-right panel of Fig. 10.
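A minimal sketch of this fit in a single cell of the (t_exp, θ, r_c) grid is given below; the initial-guess heuristics are assumptions of the sketch, not the procedure actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def fermi_dirac(f, f50, c):
    """Eq. (4): detection probability as a function of flux."""
    return 1.0 / (np.exp((f50 - f) / c) + 1.0)

def fit_selection_function(flux, n_detected, n_input):
    """Fit (f50, c) to the detected/input ratio in one grid cell."""
    ratio = n_detected / n_input
    p0 = (np.median(flux), 0.3 * np.median(flux))  # heuristic start values
    (f50, c), _ = curve_fit(fermi_dirac, flux, ratio, p0=p0)
    return f50, c
```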
The catalogue contamination
By construction, the detection threshold in the overall BMW HRI source catalogue is set by fixing the expected number of spurious sources: the higher the number of allowed spurious sources, the lower the threshold. This number is set to 1 in ten fields for each scale of the wavelet transform, which corresponds to 0.4 expected spurious sources in each frame. This, however, should be taken as a theoretical estimate because it assumes a perfectly flat background so as to provide a general instrumentindependent reference (L99). Here, we derive a more accurate estimate of the number of spurious detections by properly simulating a realistic background. Moreover, due to the empirical definition of extended sources, we cannot analytically predict the number of spurious sources that will be detected as extended and need to resort again to a Monte Carlo test. To this end, we build pure background images for the usual set of 12 templates with different exposure times (and different background values) ranging from 1 ksec to 200 ksec (see Sect. 4). This time, we do not add any simulated source. For each of the 12 template images, we generate 200 different realizations of the background map, assuming a pure Poissonian noise. Then we run the BMW detection pipeline: clearly, all detected sources will be spurious, resulting from background fluctuations above the detection threshold.
Within the 2400 simulated blank frames, we detect 844 spurious sources (both point-like and extended), to be compared with the expected value of 0.4 × 2400 = 960. The close agreement between these two figures represents an encouraging a posteriori confirmation that the total number of spurious detections is well under control in the whole selection procedure. Among these, 108 sources meet the requirements of our cluster survey (extension and significance), providing us with an estimate of 0.045 ± 0.003 spurious p_s > 4 σ detections in each frame of the survey. Note that this value does not depend on the specific exposure time of the field, as in each field the detection threshold is pushed down only to the appropriate level so as to keep the total number of spurious detections constant. Given the 914 fields composing our survey, we thus expect 38 ± 3 spurious sources to contaminate the sample of 154 cluster candidates (25%). Note that the expected number of spurious detections is proportional to the number of analyzed fields, not to the number of sources in the catalogue. This means that, for example, if one concentrated only on a subset of high-exposure fields, the number of spurious sources would go down faster than the number of true clusters. However, the explored area and volume would also be reduced accordingly, thus affecting the ability to find luminous, rare clusters, which is one of our goals.

Fig. 10. Dependence of the selection function S_F on the relevant observational and source parameters, as obtained from the Monte Carlo simulations. S_F is defined as the ratio between the number of detected over the number of simulated input sources, as explained in the text. In the top-left panel we plot the results for the annulus between off-axis angle 9′ < θ < 12′, t_exp = 40 ksec and 4 different SB profiles. The points are very well fitted by a Fermi-Dirac function with different values of the two free parameters (Eq. 4). In the top-right panel, for all 12 sets of simulated fields, we plot the values of f_50 (in counts s^−1) for the region 9′ < θ < 12′ as a function of exposure time. In the bottom-left panel we plot f_50 as a function of the off-axis angle for a 5 ksec exposure and for four different source extensions. In the bottom-right panel we plot f_50 as a function of source core radii, for a 30 ksec observation in the annulus between 3′ and 10′; f_50 turns out to depend linearly also on the value of r_c. In all four plots the dashed lines reproduce the multidimensional fit as explained in the text.
There is a second source of contamination that needs to be considered, which is due to point-like sources mistaken as extended. This fraction is readily estimated from our first set of simulations, where we realistically simulated the observed sky in terms of the surface density distribution of point-like sources. The net result is that in the sample of 154 p_s > 4 σ cluster candidates, we expect 4 ± 2 sources to be false classifications of a point-like source, which makes up for a total contamination of 42 ± 4, i.e. 27%.

Fig. 11. The expected number of spurious detections in the BMW-HRI cluster catalogue as a function of the source significance, expressed in units of standard deviations σ (see text for details). The curve includes the contamination produced both by background fluctuations detected as true (extended) sources and by real point sources mistaken as extended by the algorithm. On the basis of this curve, the BMW-HRI p_s > 4 master sample is expected to have a 27% contamination, while selecting at p_s > 5 one obtains a sample with virtually null contamination.
As shown by Fig. 11, the total number of expected spurious detections (background fluctuations plus wrongly classified point-like sources) decreases significantly at higher detection significance: it is more than 40% at 3 σ, 27% at 4 σ and negligible beyond 5 σ. The chosen significance threshold of 4 σ for our main cluster catalogue is based on this plot, representing a good compromise between the total number of sources and the expected contamination.
Sky Coverage and Number Counts
The BMW-HRI Cluster Survey Sky-coverage
The sky coverage Ω (SC hereafter) measures the actual surveyed area as a function of X-ray flux and of the other parameters characterizing each source: it is a necessary ingredient for any statistical computation involving the mean surface density of clusters in the catalogue, like e.g. the log N − log S relation or the luminosity function.
By definition, it is closely related to the survey selection function, which, as we have seen, depends also on the off-axis angle θ and the source size r c (see Sect. 4.3). Let us therefore consider an infinitesimal annulus at an offaxis angle θ, characterized by a solid angle dω, within the field of view of a generic observation i. The area of this elementary surface will contribute differently to the total sky coverage, depending on the flux limit one chooses and on the source size: for example, depending on the exposure time of the observation, it will provide its full area at a given bright flux, while it will be "invisible" at fainter fluxes. Or, equivalently, it will be effective for finding very small sources, and become ineffective for very large, blurred sources. All these effects are already taken into account by the selection function S F that we have carefully estimated through our Monte Carlo simulations. In fact, by definition, the actual contribution of this annulus to the total sky coverage will be
$d\Omega_i = d\Omega_i(f, \theta, r_c) = S_{F_i}(f, \theta, r_c)\, d\omega$    (5)
and the total SC yielded by a given field i for a source with flux f and size r_c will then simply be the sum over the annuli, i.e.
$\Omega_i(f, r_c) = \int_{\theta_{\min}}^{\theta_{\max}} S_{F_i}(f, \theta, r_c)\, d\omega(\theta)$    (6)
where in our case θ_min = 3′ and θ_max = 15′. Finally, the overall SC of the survey, Ω_BMW(f, r_c), will be given by the sum of the contributions of each single observation:
$\Omega_{BMW}(f, r_c) = \sum_i \Omega_i = \sum_i \int_{\theta_{\min}}^{\theta_{\max}} S_{F_i}(f, \theta, r_c)\, d\omega(\theta)$    (7)
which shows explicitly the dependence of the total sky coverage on the source extension r_c (Fig. 12). Thus, given an observed source characterized by an (f, r_c) pair, it will always be possible to give a value for the total solid angle over which that source could have been found in the survey.

Fig. 13. Comparison of the BMW-HRI size-weighted sky coverage to those of the EMSS (Gioia et al. 1990), 160 Square Degrees (Vikhlinin et al. 1998a) and RDCS (Rosati et al. 1998). Since the sky coverage is a function of both flux and source extension, the BMW-HRI curve has been computed combining the sky coverages pertaining to different source extensions, according to the observed distribution of angular sizes in the p_s > 4 sample.
Given the various dependences we have discussed, a comparison of the sky coverages as a function of X-ray flux among different surveys can only be done approximately, i.e. considering a "typical" sky coverage for each specific survey (e.g. Rosati et al. 2002). To perform such a comparison, given that the source extensions correlate with their fluxes (Fig. 6), we obtained a sufficiently realistic one-dimensional Ω(f) by computing the mean r_c values within 5 flux bins, and then constructing a "composite" 1D sky coverage which in each flux range reflects the typical size of sources with that mean flux. The result is shown in Fig. 13, compared to the sky coverages of three other representative surveys from the literature. This plot shows that the BMW-HRI survey is competitive with existing PSPC surveys at both bright and faint fluxes. In particular, it provides an interesting solid angle coverage below 6 × 10⁻¹³ erg sec⁻¹ cm⁻², which results in the potential ability to detect a few clusters beyond z ∼ 0.9–1. In fact, among the few BMW-HRI clusters with confirmed spectroscopic redshift, we already have two objects at z = 0.89 and z = 0.92. We remark that these values are specific to the low-contamination p_s > 4σ sample. In Sect. 5.3 we shall use this 1D sky coverage to compute the expected redshift distribution of BMW-HRI clusters.
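To make the construction above concrete, the following sketch shows one way to build such a composite 1D sky coverage in Python. The function and variable names are illustrative, and `omega_2d` stands in for an interpolator over the Monte Carlo grid of Ω(f, r_c), which is an assumption about how the simulation products are stored:

```python
import numpy as np

def composite_sky_coverage(src_fluxes, src_rc, flux_grid, omega_2d, n_bins=5):
    """Collapse the 2D sky coverage Omega(f, r_c) into a "composite" 1D Omega(f).

    src_fluxes, src_rc : fluxes and core radii of the observed sources
    flux_grid          : fluxes at which the 1D curve is evaluated
    omega_2d(f, rc)    : callable returning Omega(f, r_c), e.g. an
                         interpolator over the Monte Carlo grid (assumed)
    """
    src_fluxes = np.asarray(src_fluxes)
    src_rc = np.asarray(src_rc)
    # flux bin edges chosen so each bin holds roughly the same number of sources
    edges = np.quantile(src_fluxes, np.linspace(0.0, 1.0, n_bins + 1))
    src_bin = np.clip(np.searchsorted(edges, src_fluxes) - 1, 0, n_bins - 1)
    # mean core radius <r_c> within each flux bin
    rc_mean = np.array([src_rc[src_bin == b].mean() for b in range(n_bins)])
    # each grid flux uses the mean core radius of its bin (grid points
    # outside the observed flux range fall back to the nearest edge bin)
    grid_bin = np.clip(np.searchsorted(edges, flux_grid) - 1, 0, n_bins - 1)
    return np.array([omega_2d(f, rc_mean[b]) for f, b in zip(flux_grid, grid_bin)])
```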
Accounting for overlapping fields
As anticipated in Sect. 2.2, 7% of the survey area was observed more than once. Therefore, when summing the areas contributed by each field in the sky-coverage calculation we need to account for this repeated area so as to include it only once. To this end, let us consider a given source with flux f̄, which has been observed N times with different exposure times and therefore with different values of the selection function. In other words, the probability P_i of detecting this source is different in each exposure i, coinciding with the value S_{F_i} of the i-th selection function at flux f̄. Accordingly, the global selection function for that specific source, S_F(f̄, r_c), in the N observations will correspond to the probability of detecting the source within at least one of the N observations. If we consider the probability Q of not detecting it in any of the observations, this is nothing else than the product of the probabilities of not detecting it in each image
$Q = (1 - P_1) \times (1 - P_2) \times \dots \times (1 - P_N) = \prod_i (1 - S_{F_i})$    (8)
At this point, the selection function pertaining to that source, S_F(f̄, r_c), will just be the complement of Q, i.e.
$S_F = 1 - \prod_i (1 - S_{F_i})$    (9)
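In code, this correction is essentially a one-liner. The following minimal Python sketch (names illustrative) combines the per-exposure selection-function values for a source falling in an overlap region, following Eqs. (8)-(9):

```python
import numpy as np

def combined_selection_function(sf_values):
    """Selection function for a source observed N times (Eqs. 8-9).

    sf_values : per-observation values S_Fi of the selection function,
                evaluated at the source flux; each acts as the detection
                probability P_i in that exposure.
    Returns the probability of detecting the source in at least one image.
    """
    sf = np.asarray(sf_values, dtype=float)
    q = np.prod(1.0 - sf)   # Eq. (8): probability of missing it in every image
    return 1.0 - q          # Eq. (9)

# e.g. three overlapping exposures with S_Fi = 0.3, 0.5, 0.8:
# combined_selection_function([0.3, 0.5, 0.8])  ->  0.93
```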
The expected redshift distribution
Using the computed sky coverage, we can now address one of the most interesting aspects of a deep cluster survey, i.e. the expected redshift distribution of the sample. In particular, it is of interest to ask how many clusters we expect in the cosmologically interesting range z ∼ 1, if any, in different evolutionary scenarios. Briefly, the differential redshift distribution is obtained in the standard way as
$\frac{dn}{dz} = \frac{dn}{dV}\frac{dV}{dz} = \frac{dn}{dV}\frac{dV}{dl}\frac{dl}{dz}$    (10)
where dl/dz is the cosmological comoving line element, dV/dl = d_A² dω, d_A is the angular size distance and dω is the elementary solid angle covered (e.g. Peebles 1993, p. 331). The sky coverage enters here when we integrate over the whole observed solid angle, while dn/dV, i.e. the number of clusters per unit volume expected at the same z, is obtained by integrating the XLF from the minimum allowed luminosity (at that z) to infinity,
$\frac{dn}{dV}(z) = \int_{L_{\min}(z)}^{\infty} \frac{dn}{dL_X\, dV}\, dL_X = \int_{L_{\min}}^{\infty} \phi(L_X)\, dL_X$    (11)
where φ(L_X) is very well described by the usual Schechter functional form, yielding
$\frac{dn}{dV}(z) = \int_{L_{\min}}^{\infty} \phi^* \left(\frac{L_X}{L_X^*}\right)^{-\alpha} \exp\left(-L_X/L_X^*\right)\, \frac{dL_X}{L_X^*}$    (12)
We therefore used the XLF parameters for the [0.5-2] keV band measured by the REFLEX survey, re-computed for a lambda-cosmology by Mullis et al. (2004), φ* = 8.56 × 10⁻⁷ h³ Mpc⁻³, L*_X = 1.295 h⁻² erg sec⁻¹, α = −1.69, to obtain the solid curves shown in Fig. 14, under the hypothesis of a non-evolving XLF. Rosati et al. (2000) first used a Maximum-Likelihood approach to quantify in a phenomenological way the evolution observed in the XLF measured from the RDCS⁵. To this end, they parameterized evolution in density and luminosity through a simple power-law model, in which
$\phi^*(z) = \phi^*_0 (1 + z)^A$    (13)

$L^*_X(z) = L^*_{X,0} (1 + z)^B$    (14)
and φ*_0, L*_{X,0} are the XLF parameter values at the current epoch. From their analysis of the RDCS survey they estimated A = −1.2, B = −2, indicating a statistically significant evolution in the mean density of luminous clusters. The same model has recently been applied to the 160SD survey data by Mullis et al. (2004), where the EMSS, NEP and RDCS constraints are also added. By comparing all these available estimates, they suggest that the best fitting values for the parameters A and B are A ∼ 0, B ∼ −2.5. These are the values we adopted for our computations of the expected number of clusters in the BMW-HRI survey with evolution of the XLF.
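As a concrete illustration of Eq. (12) for the non-evolving case, the sketch below integrates the Schechter XLF numerically with SciPy. The quoted REFLEX parameters are used verbatim; in practice L_X and L*_X must be expressed in the same (h-scaled) luminosity units, and the sign convention for α follows Eq. (12) exactly as written:

```python
import numpy as np
from scipy.integrate import quad

# XLF parameters as quoted in the text (Mullis et al. 2004 recomputation
# of the REFLEX values); units follow the text and must be kept consistent.
PHI_STAR = 8.56e-7    # phi*, h^3 Mpc^-3
L_STAR   = 1.295      # L*_X, in the (h-scaled) luminosity units of the text
ALPHA    = -1.69

def schechter(L_X):
    """Schechter XLF integrand of Eq. (12): phi* (L/L*)^(-alpha) exp(-L/L*) / L*."""
    x = L_X / L_STAR
    return PHI_STAR * x ** (-ALPHA) * np.exp(-x) / L_STAR

def n_per_volume(L_min, L_max_in_lstar=100.0):
    """dn/dV(z) from Eq. (12), integrating from L_min(z) upward.

    The exponential cutoff makes the result insensitive to the finite
    upper bound standing in for infinity (here 100 L*).
    """
    result, _ = quad(schechter, L_min, L_max_in_lstar * L_STAR)
    return result

# e.g. comoving density of clusters brighter than 0.1 L* at some redshift:
# n_per_volume(0.1 * L_STAR)
```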
The overall results, with and without evolution, are compared in Fig. 14. The first interesting point concerns the total number of clusters predicted in the two scenarios. We have a prediction of 89 to 132 clusters in total in the p_s > 4 sample, depending on evolution. Our current number of candidates is 154, with a prediction of 42 being spurious, due mostly to background fluctuations. This yields 112 true clusters in the survey, which lies right between the predictions of the two curves. This is interesting, as it could imply that less evolution of the XLF is seen in the BMW-HRI sample. However, note that the simple (A, B) evolutionary model has been applied in its original form, which, taken at face value, while correctly reproducing the observed deficit of luminous clusters above z ∼ 0.6, also under-predicts the number of intermediate-redshift systems, as can be seen from the left panel of Fig. 14, where the two curves already begin to separate at redshifts as low as z = 0.15, where we know that no evolution whatsoever is observed. One possibility is to change slightly the Borgani et al. functional form describing the evolution of the XLF, as done by Mullis et al. (2004), by introducing an effective non-zero reference redshift in Eqs. (13) and (14) which shifts the "switch-on" of evolution to larger, more realistic redshifts.
The second interesting point to be appreciated from the figure is the significant number of high-redshift systems that the BMW-HRI survey should be capable of detecting. Even with the strongest evolution of the XLF we expect 3 clusters with z > 0.8, with the number rising quickly to ∼10 if milder evolution is admitted. In fact, we already have three confirmed clusters in this range (see Sect. 6), together with a handful of further confirmed candidates with photometric redshift > 0.7. One of these is an extremely promising object showing a concentration of galaxies with r − Ks = [5.5 − 6] around the X-ray source position, with a range of colours typical of early-type galaxies at z ∼ 1.2 − 1.3 (see e.g. Stanford et al. 1997).
The LogN–LogS
The cluster number counts (historically also known as the "LogN–LogS" distribution) are the simplest diagnostic of cosmology/evolution that can be obtained from a sample of cosmological objects without knowing their distances. They also represent a basic test for the quality of the sample and the reliability of the sky coverage, being sensitive to residual biases and contaminations as a function of flux.
To compute the integral flux distribution, we weight each source by the inverse of the total area in the survey over which the source could have been detected. This is nothing else than the inverse of the sky-coverage value at the specific source flux and extension:
$n(>S) = \sum_{f > S} \frac{1}{\Omega_f}$    (15)
We used this formula and the estimated sky coverage to compute the number counts using a "high-purity" sample with p_s > 5σ, containing 45 candidates, for which negligible contamination is expected (Fig. 11). After estimating the appropriate sky coverage for this selection we computed the points reported in Fig. 15. We chose to bin the counts so as to have an equal increase ΔN = 5 in the number of sources included in each bin, when moving to fainter and fainter fluxes. The error bars also include the uncertainty in the measured r_c, on which the sky coverage for each source depends. Our result is compared to similar measurements from the RDCS (Rosati et al. 1998) and 160SD (Vikhlinin et al. 1998a) surveys, and to the predictions for an unevolving and evolving XLF in the adopted H_0 = 70 km s⁻¹ Mpc⁻¹, Ω_M = 0.3, Ω_Λ = 0.7 cosmology. The XLF evolution is described using the same phenomenological (A, B) model used in the previous section, here using both the original parameters by Rosati et al. (2000, dotted line) and the milder values by Mullis et al. (2004, dashed line). We also note that the no-evolution curve is slightly higher than the one shown in Rosati et al. (2002), due to the different XLF parameters used here (REFLEX vs. BCS). The main result from this plot is that the BMW-HRI and PSPC data agree very well with each other and are globally consistent with a moderate evolution of the XLF. The observed counts can be taken first of all as an a posteriori confirmation of the predicted low level of contamination of the p_s > 5σ sample, and of the self-consistency of the computed sky coverage. More substantially, the agreement with the other surveys indicates that the bulk of the (faint) cluster population is consistently detected by both PSPC- and HRI-based surveys. However, one should also keep in mind the limitations of this kind of plot. In fact, the shape and amplitude of the logN–logS is mainly sensitive to the faint/intermediate range of the XLF (dominated by low-redshift groups and clusters), while it is rather insensitive to changes in the number of rare, high-luminosity clusters at high redshift.
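For reference, Eq. (15) translates directly into a few lines of Python. The sketch below (variable names illustrative) assumes the sky-coverage solid angle has already been evaluated at each source's flux and core radius:

```python
import numpy as np

def log_n_log_s(fluxes, omega_of_source, flux_limits):
    """Cumulative number counts n(>S) from Eq. (15).

    fluxes          : measured fluxes of the (high-purity) sources
    omega_of_source : sky-coverage solid angle Omega_f evaluated at each
                      source's flux and core radius (assumed precomputed
                      from the 2D sky coverage)
    flux_limits     : flux thresholds S at which n(>S) is evaluated
    Returns the mean surface density of sources brighter than each S.
    """
    fluxes = np.asarray(fluxes)
    weights = 1.0 / np.asarray(omega_of_source)   # each source weighted by 1/Omega_f
    return np.array([weights[fluxes > S].sum() for S in flux_limits])
```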
Early follow-up results
To provide a first hint of the general optical properties of BMW-HRI clusters, we briefly summarize here some early results from the ongoing follow-up campaign. Our main cluster identification strategy is currently based on multi-band imaging in the optical g, r, i bands and near-infrared J, H, Ks bands, with at least two, typically three of these bands secured for each target. A cluster is considered as confirmed when a significant over-density of galaxies with coherent colours is detected in the area of the X-ray source. The current identification statistics, based on a total sample of 119 candidates observed so far, gives 83 confirmed, 19 rejected and 17 still uncertain clusters, in substantial agreement with the contamination level estimated in this paper. Photometric redshifts for these objects are also estimated either approximately from the mean colour of the cluster red sequence (when present), or more accurately using the photo-z codes of Fernández-Soto et al. (2001) and Bolzonella et al. (2000) when 3 or more bands are available. A few of these clusters have also been observed spectroscopically by us or have been found to be in common with other surveys.
We cross-correlated our list with the latest version of the 160SD survey, with the unpublished RDCS catalogue and with the SHARC (Romer et al. 2000) and WARPS (Perlman et al. 2002) published lists. This comparison to PSPC-based surveys, performed early in the project, provided us with encouraging confirmations, indicating that the survey had the potential to efficiently peer into the high-redshift Universe. Common high-redshift clusters include, for example:
• BMW122657.3+333253 at z = 0.888, first discovered by Cagnoni et al. (2001) in the WGA survey and by Ebeling et al. (2001b) in the WARPS survey;
• BMW052215.8−362452 at z = 0.53, also found in the 160SD survey;
• two (unpublished) RDCS clusters at z = 0.64 and z = 0.808 (Rosati, private communication).

Our current list of new high-redshift clusters includes another 7 confirmed objects reliably located at z_phot > 0.6 by r-i-Ks photometric redshifts. A few of these are shown in Fig. 16. These clusters will be the subject of specific future papers. X-ray/optical overlays and RGB images for more BMW-HRI clusters can be seen at http://www.merate.mi.astro.it/∼guzzo/BMW/gallery.html.
Summary and Conclusions
We have presented in detail the construction of the BMW-HRI Cluster Survey, a new sample of X-ray selected cluster candidates drawn from the so-far poorly explored ROSAT HRI archive. We have shown that by selecting sources with detection significance larger than 4σ and extension significance better than ∼3.5σ, one can obtain a reliable sample of 154 cluster candidates with flux larger than 2 × 10⁻¹⁴ erg sec⁻¹ cm⁻² and very interesting properties. We have performed extensive Monte Carlo simulations to recover the survey selection function with respect to the major parameters affecting the source detection. These have allowed us to reconstruct a sky coverage which is competitive with existing PSPC surveys over the whole range of fluxes, with a particularly interesting solid angle covered in the faintest bins. These results show that, contrary to expectations, the HRI data, once treated as discussed in C99 (i.e. eliminating the noisiest energy channels), allow us to detect low-surface-brightness objects out to high redshifts, despite the higher instrumental background. We also estimated the expected sample contamination due to background fluctuations and false classifications, which turns out to be 27% for this significance threshold.

Fig. 15. The classical LogN–LogS plot, i.e. the mean cumulative surface density of clusters as a function of X-ray flux, from the BMW-HRI survey (filled circles), compared to the RDCS (Rosati et al. 1998) and 160SD survey (Vikhlinin et al. 1998b). The BMW-HRI points are computed from the high-significance p_s > 5σ sub-sample, for which the expected contamination is negligible. The curves give the expected counts in the same Λ-cosmology adopted in this paper, for the cases of no evolution and evolution of the XLF. The BMW-HRI and PSPC data are all consistent with a negligible to moderate evolution of the XLF.
We have used a high-purity sample selected at p_s > 5σ, containing 45 objects expected to be virtually all true clusters, to estimate the cluster number counts in the range 3 × 10⁻¹⁴ to ∼1 × 10⁻¹² erg sec⁻¹ cm⁻². This measurement is in very good agreement with previous estimates from the RDCS and 160SD surveys.
Possibly the most important aspect of the BMW-HRI cluster survey is that it will be the first large sample of clusters to be drawn from an instrument independent of, and with higher resolution than, the ROSAT PSPC, on which virtually all serendipitous cluster surveys so far have been based⁶. This means that the BMW-HRI survey potentially includes some types of object which could have been missed in previous surveys. It is natural to expect that these are mostly small groups, which the HRI is able to detect as extended. Indeed, we have specific examples in the survey of objects with core radii as small as 6″. The observed agreement among our number counts and those from the RDCS and 160SD surveys, however, indicates that the percentage of these small-sized objects does not seem to be sufficient to represent a substantial incompleteness in PSPC surveys. A finer assessment of possible differences will be provided once the redshifts for BMW-HRI clusters are measured, both by comparison of the faint-end slopes of the cluster XLF and of the number of clusters detected above z ∼ 0.8. While the faint end of the number counts is dominated by low-redshift, low-luminosity objects, the HRI resolution should begin to make a difference when one goes to high redshifts. The work of Ettori et al. (2004), based on Chandra observations, seems to indicate that z > 0.8 clusters have on average a more compact profile than their lower-redshift counterparts. Keeping in mind that all our simulations are based on the necessary but crude approximation of a β = 2/3 profile, these findings would go in the direction of increasing the probability of detecting clusters at high redshift, favored by the HRI high spatial resolution. This would probably be too mild to show a significant effect in the integral number counts, yet might increase the number of detectable z ∼ 1 clusters by a significant factor.

6. Notable recent exceptions are represented by new samples being constructed using Chandra and XMM-Newton data, as done respectively by Boschin (2002) and by the XMM-LSS survey (Valtchanov et al. 2003). In the first case, an accurate study of the survey selection function, sky coverage and predicted redshift distribution has been provided. Unfortunately, no deep follow-up is being done for this survey, where a significant fraction of clusters with z > 1 is expected. In the case of the XMM-LSS sample, on the other hand, two z ∼ 1 clusters have recently been discovered (Andreon, priv. comm.). However, no quantitative statistical description of the survey sky coverage is available yet.

Fig. 16. Optical/X-ray overlays for a selection of new BMW-HRI groups/clusters at different redshifts. From top left to bottom right: BMW134654.9−301328, z_spec = 0.358; BMW212415.7−334754, z_spec = 0.92; BMW122842.6−391612, z_phot ≃ 0.8; BMW112059.0+130450, z_spec = 0.615. The redshift for BMW212415.7−334754, a very compact cluster, awaits further confirmation, being based on only 3 redshifts and displaying a second system at z = 0.62. The X-ray images have been smoothed with a Gaussian filter (σ = 40″) and the isophotes correspond to 0.7, 1, 2, 3 standard deviations over the background (after smoothing). Optical images are combined Gunn-r + i, ∼60 min total exposures with EFOSC2 at the ESO 3.6 m telescope. For all but the first object the sides of the figures are 2.5′.
Fig. 2. Distribution of exposure times for the 914 pointings used for the cluster survey.

Fig. 3. Source extensions as measured by the BMW detection algorithm (W_clu, see text), plotted against the off-axis angle, showing our empirical definition of extended sources. The continuous line gives the mean extension of a point-like source at different off-axis angles, and the dashed line identifies the 3σ locus. Sources selected as cluster candidates have to lie, with their error bars, above the dashed line and are indicated by filled circles.

… treated in the computation of the actual survey sky coverage, as explained in detail in Sect. 5.1.

Fig. 4. Left panel: 24′ × 24′ portion of an HRI image (Gaussian smoothed with σ = 5″) showing the detection of cluster candidate BMW044743.4−202042, marked by the solid 90″-radius circle. The dashed circle (3′ radius) indicates the central area containing the original HRI target (here barely visible), excluded from the cluster search. Right panel:

Fig. 5. Relative uncertainties in the flux

The measured fluxes and core radii for all 154 sources in our list are plotted with their errors in Fig. 6. The observed correlation between these two quantities is not surprising if clusters have an intrinsic distribution of sizes which is not flat (e.g. Mohr et al. 2001; Arena 2002).

Fig. 6. Angular sizes (core radii) versus measured X-ray fluxes for the p_s > 4 BMW-HRI candidate clusters. While an intrinsic correlation is expected between these two quantities, the slope observed here is probably influenced by the lack of sources at high fluxes, owing to the small volume sampled.

Fig. 9. Top panel: visual comparison of the HRI image of BMW000141.3−154042 (left), the cluster detected with the largest number of net counts in the BMW-HRI survey, with a simulated X-ray source with the same counts and core radius. The bottom panel plots the average of the central 10 columns of the two images, with the real cluster described by the black line.

Fig. 12. The solid angle covered by the BMW-HRI survey (sky coverage) as a function of flux and source extension, obtained by integrating the survey selection function over the observed fields.

Fig. 14. The expected redshift distribution of the BMW-HRI cluster sample, in differential form (left), showing the number of clusters expected in bins of Δz = 0.1, and in integral form (right), showing the number of clusters expected above a given redshift z. The dashed curves in both panels refer to an empirical evolution model for the XLF as defined by Rosati et al. (2000), using a conservative pair of parameters (A = 0, B = −2.5), which best describe the overall behaviour of available PSPC surveys (Mullis et al. 2004).
Table 1. Regions around the LMC, SMC and Virgo cluster excluded from the survey.

region  RA                      DEC
LMC1    [3h 52′, 6h 52′]        [−77°, −63°]
LMC2    [5h 24′, 5h 56′]        [−63°, −58°]
LMC3    [6h 52′, 7h 12′]        [−74°, −68°]
SMC1    [23h 52′, 1h 20′]       [−77°, −67.5°]
SMC2    [23h 46′, 23h 54′]      [−77°, −73°]
SMC3    [01h 20′, 02h 00′]      [−72°, −67.5°]
Virgo   [12h 20′, 12h 44′]      [7.0°, 16°]
All through this paper, we shall adopt a "concordance" cosmological model, with H_0 = 70 km s⁻¹ Mpc⁻¹, Ω_M = 0.3, Ω_Λ = 0.7, and, unless specified, quote all X-ray fluxes and luminosities in the [0.5-2] keV band.
For clarity, note that in the original BMW-HRI general catalogue, a source was conservatively classified as extended if its extension W_clu and the relative error (σ_clu) lay at more than 2σ_clu from this limit.
Note, as an aside, that in our case the wavelet transform, by its very nature, maximizes the S/N ratio for each source by optimizing the choice of the detection cell dimension.
5. A similar ML approach was used in a more physical way to estimate the values of Ω_M and σ_8 from the observed evolution
Acknowledgements. We thank P. Rosati for continuous encouragement and for allowing us to make a comparison with unpublished data from the RDCS, C. Mullis for cross-checks with the 160SD survey and A. Vikhlinin for providing us with his number counts in electronic form. We thank S. Borgani, H. Böhringer, I. Gioia, J.P. Henry and C. Mullis for useful discussions and A. Finoguenov and W. Boschin for reading the manuscript. We thank A. Misto for continuous data archiving assistance.
References

Allen, S.W., Schmidt, R.W., Fabian, A.C., 2001, MNRAS, 328, L37
Arena, S., 2002, Laurea Thesis, Università di Milano Bicocca
Blakeslee, J.P., Franx, M., Postman, M., et al., 2003, ApJ, 596, 143
Böhringer, H., Schuecker, P., Guzzo, L., et al., 2001, A&A, 369, 826
Böhringer, H., Collins, C.A., Guzzo, L., et al., 2002, ApJ, 566, 93
Böhringer, H., Schuecker, P., Guzzo, L., et al., 2004, A&A, in press
Bolzonella, M., Miralles, J.-M., Pellò, R., 2000, A&A, 363, 476
Borgani, S., Guzzo, L., 2001, Nature, 409, 39
Borgani, S., Murante, G., Springel, V., et al., 2004, MNRAS, 348, 1078
Borgani, S., Rosati, P., Tozzi, P., et al., 2001, ApJ, 561, 13
Boschin, W., 2002, A&A, 396, 397
Cagnoni, I., Elvis, M., Kim, D.W., et al., 2001, ApJ, 560, 860
Campana, S., Lazzati, D., Panzera, M.R., Tagliaferri, G., 1999, ApJ, 524, 423 (C99)
Cavaliere, A., Fusco-Femiano, R., 1976, A&A, 49, 137
Collins, C.A., Burke, D.J., Romer, A.K., 1997, ApJ, 479, L117
David, L.P., et al., 1998, The ROSAT HRI Calibration Report, U.S. ROSAT Science Data Center (SAO)
De Grandi, S., Guzzo, L., Böhringer, H., et al., 1999, ApJ, 513, 17
Dickey, J.M., Lockman, F.J., 1990, ARA&A, 28, 215
Ebeling, H., Edge, A.C., Fabian, A.C., et al., 1997, ApJ, 479, L101
Ebeling, H., Jones, L.R., Fairley, B.W., et al., 2001a, ApJ, 548, L23
Ebeling, H., Edge, A.C., Henry, J.P., et al., 2001b, ApJ, 553, 668
Ettori, S., Tozzi, P., Borgani, S., Rosati, P., 2004, A&A, 417, 13
Evrard, A.E., Metzler, C.R., Navarro, J.F., 1996, ApJ, 469, 494
Fernández-Soto, A., Lanzetta, K.M., Chen, H.W., Pascarelle, S.M., Yahata, N., 2001, ApJS, 135, 41
Finoguenov, A., Reiprich, T.H., Böhringer, H., 2001, A&A, 368, 749
Gioia, I., Maccacaro, T., Schild, R.E., et al., 1990, ApJS, 72, 567
Gioia, I.M., Henry, J.P., Mullis, C.R., Voges, W., Briel, U.G., 2001, ApJ, 553, L105
Gioia, I.M., Henry, J.P., Mullis, C.R., et al., 2003, ApJS, 149, 29
Helsdon, S.F., Ponman, T.J., 2000, MNRAS, 315, 356
Henry, J.P., Gioia, I.M., Maccacaro, T., et al., 1992, ApJ, 386, 408
Kaiser, N., 1986, MNRAS, 222, 323
Lazzati, D., Campana, S., Rosati, P., et al., 1999, ApJ, 524, 414 (L99)
Lidman, C., Rosati, P., Demarco, R., et al., 2004, A&A, 416, 829
Mohr, J.J., Reese, E.D., Ellingson, E., et al., 2000, ApJ, 544, 109
Moretti, A., Campana, S., Lazzati, D., Tagliaferri, G., 2003, ApJ, 588, 696
Mullis, C.R., McNamara, B.R., Quintana, H., et al., 2003, ApJ, 594, 154
Mullis, C.R., Vikhlinin, A., Henry, P., et al., 2004, ApJ, in press (astro-ph/0401605)
Nichol, R.C., Romer, A.K., Holden, B.P., et al., 1999, ApJ, 521, L21
Panzera, M.R., Campana, S., Covino, S., et al., 2003, A&A, 399, 351 (P03)
Peacock, J.A., 1999, Cosmological Physics, Cambridge University Press, Cambridge
Peebles, P.J.E., 1993, Principles of Physical Cosmology, Princeton University Press
Perlman, E.S., Horner, D.J., Jones, L.R., 2002, ApJS, 140, 265
Pierpaoli, E., Borgani, S., Scott, D., White, M., 2003, MNRAS, 342, 163
Prestwich, A., Callanan, P., Snowden, S., et al., 1996, A&AS, 189, 905
Reiprich, T.H., Böhringer, H., 2002, ApJ, 567, 716
Romer, A.K., Nichol, R.C., Holden, B.P., et al., 2000, ApJS, 126, 209
Rosati, P., Della Ceca, R., Burg, R., et al., 1995, ApJ, 445, L11
Rosati, P., Della Ceca, R., Burg, R., Norman, C., Giacconi, R., 1998, ApJ, 492, L21
Rosati, P., Borgani, S., Della Ceca, R., et al., 2000, in Large-Scale Structure in the X-ray Universe, M. Plionis & I. Georgantopulos eds., Atlantisciences, Paris, p. 13
Rosati, P., Borgani, S., Norman, C., 2002, ARA&A, 40, 539
Schuecker, P., Böhringer, H., Collins, C.A., et al., 2003, A&A, 398, 867
Snowden, S.L., 1994, Cookbook for analysis procedures for ROSAT XRT/PSPC observations of extended objects and diffuse background
Stanford, S.A., Elston, R., Eisenhardt, P.R., et al., 1997, AJ, 114, 2232
Valtchanov, I., Pierre, M., Willis, J., et al., 2003, A&A, in press (astro-ph/0305192)
Vikhlinin, A., McNamara, B.R., Forman, W., et al., 1998a, ApJ, 498, L21
Vikhlinin, A., McNamara, B.R., Forman, W., et al., 1998b, ApJ, 502, 558
Voges, W., Aschenbach, B., Boller, Th., et al., 1999, A&A, 349, 389
| [] |
[
"Looking At The Body: Automatic Analysis of Body Gestures and Self-Adaptors in Psychological Distress",
"Looking At The Body: Automatic Analysis of Body Gestures and Self-Adaptors in Psychological Distress"
] | [
"Journal Of L A T E X Class ",
"Files "
] | [] | [] | Psychological distress is a significant and growing issue in society. Automatic detection, assessment, and analysis of such distress is an active area of research. Compared to modalities such as face, head, and vocal, research investigating the use of the body modality for these tasks is relatively sparse. This is, in part, due to the limited available datasets and difficulty in automatically extracting useful body features. Recent advances in pose estimation and deep learning have enabled new approaches to this modality and domain. To enable this research, we have collected and analyzed a new dataset containing full body videos for short interviews and self-reported distress labels. We propose a novel method to automatically detect self-adaptors and fidgeting, a subset of self-adaptors that has been shown to be correlated with psychological distress. We perform analysis on statistical body gestures and fidgeting features to explore how distress levels affect participants' behaviors. We then propose a multi-modal approach that combines different feature representations using Multi-modal Deep Denoising Auto-Encoders and Improved Fisher Vector Encoding. We demonstrate that our proposed model, combining audio-visual features with automatically detected fidgeting behavioral cues, can successfully predict distress levels in a dataset labeled with self-reported anxiety and depression levels. | 10.1109/taffc.2021.3101698 | [
"https://arxiv.org/pdf/2007.15815v1.pdf"
] | 220,920,138 | 2007.15815 | c89cd1c30b125f74acca2ca4daa1985e331a1be7 |
Looking At The Body: Automatic Analysis of Body Gestures and Self-Adaptors in Psychological Distress
Index Terms: Self-adaptors, fidgeting, psychological distress, digital phenotyping, behavioural sensing
Psychological distress is a significant and growing issue in society. Automatic detection, assessment, and analysis of such distress is an active area of research. Compared to modalities such as face, head, and vocal, research investigating the use of the body modality for these tasks is relatively sparse. This is, in part, due to the limited available datasets and difficulty in automatically extracting useful body features. Recent advances in pose estimation and deep learning have enabled new approaches to this modality and domain. To enable this research, we have collected and analyzed a new dataset containing full body videos for short interviews and self-reported distress labels. We propose a novel method to automatically detect self-adaptors and fidgeting, a subset of self-adaptors that has been shown to be correlated with psychological distress. We perform analysis on statistical body gestures and fidgeting features to explore how distress levels affect participants' behaviors. We then propose a multi-modal approach that combines different feature representations using Multi-modal Deep Denoising Auto-Encoders and Improved Fisher Vector Encoding. We demonstrate that our proposed model, combining audio-visual features with automatically detected fidgeting behavioral cues, can successfully predict distress levels in a dataset labeled with self-reported anxiety and depression levels.
INTRODUCTION
Psychological distress and mental disorders are significant threats to global health [1].¹ According to the World Health Organization (WHO), an estimated 450 million people around the world suffer from neuropsychiatric conditions [3], with depression and anxiety being the most common mental disorders [4]. Despite existing strategies for the treatment of distress, such as depression, it is estimated that nearly two-thirds of people suffering distress have never received help from a health professional [5]. Early detection of distress is consistently noted as a key factor in treatment and positive outcomes. Early detection requires ongoing assessment to identify distress when it begins. Self-evidently, ongoing assessment at scale is prohibitive when performed manually. As such, automatic detection of signs of psychological distress or specific mental disorders is an active area of research.
Currently, the most effective automated distress detection approaches utilize multi-modal machine learning. These modalities include facial, head, eye, linguistic (textual), vocal, and body.
There are significant challenges to body modality research, particularly within automatic distress detection, including the lack of relevant data, the inability to share much of the data, and the difficulty in gathering such data. Specifically, the combination of full-body data (either sensor-based or video-based) with psychological distress labels is rare. Compounding this rarity is the private and sensitive nature of the data, which means such datasets are rarely shared publicly. Body expressions, and especially self-adaptors, have been shown to be correlated with human affect, depression and psychological distress [6], [7], [8], [9], [10]. Self-adaptors are self-comforting gestures, including any kind of touching of other parts of the body, either dynamically or statically [11], [12]. Fidgeting, a subset of self-adaptors, is the act of moving about restlessly, playing with one's fingers, hair, or personal objects in a way that is peripheral or non-essential to ongoing tasks or events [13]. Patients with depression often engage in self-adaptors [14]. Fidgeting has been seen and reported in both anxiety and depression [12]; it is a sign of attention-deficit and hyperactivity disorder, and is also exhibited by individuals with autism [15]. With manually annotated data, Scherer et al. [16] reported a longer average duration of self-adaptors as well as fidgeting for distressed participants.

1. This work is an extension of the work in [2], originally published in the proceedings of the IEEE International Conference on Automatic Face and Gesture Recognition (FG) 2020.
More recent advances in the state-of-the-art for pose estimation [17] enable accurate pose data on a broader set of datasets and thus open the door for new approaches for body expression analysis and broader incorporation of body features in multi-modal systems.
In this paper, we propose to use a hierarchical model to automatically detect self-adaptors as well as fidgeting, which has been shown to be predictive of psychological distress. We analyzed body gestures and self-adaptors in a dataset of video recordings that we collected, concentrating on symptoms of depression and anxiety because these are the most common mental disorders [4]. We then present two methods to explore the body modality (especially fidgeting): first a statistical linearity analysis with traditional linear regression, and second a deep-learning-based pipeline. In the second method, a Multi-modal Deep Denoising Auto-Encoder (multi-DDAE) is utilized for encoding per-frame features. Improved Fisher Vector encoding [18] is then used to generate per-sample representations. Finally, we demonstrate that these features are discriminative in psychological distress detection.
The contributions of this paper can be summarized as follows:
1) We introduce a new audio-visual dataset containing recordings of non-clinical interviews along with distress labels from established psychological evaluation questionnaires.
2) We propose a hierarchical model for automatic detection of self-adaptors (including fidgeting) from visual data and evaluate our approach on a publicly available fidgeting dataset with manual labels.
3) We present a statistical analysis of a set of statistical body gesture features as well as specific fidgeting features extracted from the body modality data and explore how distress levels affect participants' behavior in our dataset.
4) As proof of concept, we implement a multi-modal feature fusion framework to perform distress classification and demonstrate the importance of self-adaptor features, specifically fidgeting, in predicting symptoms of depression and anxiety.
RELATED WORK
In this section, we focus on related work on automatic detection of signs of psychological distress, including studies that focus on separate modalities and multi-modal fusion frameworks.
Facial and head modality
Facial Action Coding System (FACS) [19] has long been used to taxonomize human facial movements by their appearance on the face, yielding the concept of Facial Action Units (AUs). For example, the Audio/Visual Emotion Challenge (hereafter AVEC) used AU features as a basic descriptor for its psychological distress detection tasks.
A large body of literature has been developed to analyze facial expressions and head modalities in the context of depression and psychological distress. For example, Yang et al. [20] proposed a "Histogram of Displacement Range (HDR)", a measurement of the amount of facial landmark movement, to predict depression. Joshi et al. [21] presented a categorization analysis framework consisting of a "bag of facial dynamics" and a "histogram of head movements". Dibeklioglu et al. [22], [23] feature-engineered dynamic representations (e.g., velocity, acceleration, and standard deviation of motion) for facial landmark movement and head motion and used them in a multi-modal system to detect depression in a dataset of clinical interviews.
Psychomotor retardation refers to a slowing-down of thought and a reduction of physical movements in an individual. Sobin et al. [24] demonstrated the correlation between psychomotor retardation and depression. Syed et al. [25] handcrafted descriptors using craniofacial movements in order to capture the psychomotor retardation, and then made predictions of depression.
Some other features, such as lower emotional expressivity [26], eyelid movement [25], reduced gaze activity [27], [28], and averted gaze [26], have also been used as predictive features of depression.
Audio modality
Acoustic features of speech can be predictive of distress irrespective of the speech content [29], [30]. For example, Ozdas et al. [29] assessed the risk of suicide by detecting the fluctuations in the fundamental frequency of people's speech. Dibeklioglu et al. [22] explored the use of vocal prosody for depression detection. Similarly, Syed et al. [25] investigated the use of turbulence in speech patterns.
Besides, in AVEC challenges, low-level descriptors of voice signals, such as Mel-frequency Cepstral Coefficients (MFCCs), are provided, leading to many multi-modal methods incorporating these acoustic features for distress and mental illness detection [20], [31].
Body modality
A few previous studies attempted to include the body modality in their models to predict psychological distress, mostly by extracting generic features from the video recordings related to the body. For example, Joshi et al. [21] computed Histograms of Gradients (HOGs) and Histograms of Optical Flow (HOFs) around the generic Space-Time Interest Points (STIPs) extracted from the videos, and then generated a "Bag of Body Dynamics" feature that was used for depression classification. Some of the multi-modal work presented in the AVEC challenges [31], [32], [33], [34] utilizes the low-level descriptors of visual signals (such as latent CNN layer activations of ResNet [35] and VGGNet [36]) to predict psychological distress.
More recent works also investigate the specific movement of body parts. In the past few years, the skeletal models, either using RGB such as OpenPose [17] or RGBD such as Microsoft Kinect SDK skeleton tracker 2 , have gained popularity for action recognition tasks and were used to generate more specific and concrete features by feature engineering [11], [37]. For example, Jaiswal et al. [37] extracted head movements using Kinect and performed multi-modal classification with other audiovisual features to predict ADHD and ASD. Though promising, the related work using such skeletal models on detecting psychological distress is still sparse.
In terms of automatic detection of self-adaptors, the only previous work that attempted to detect fidgeting behavior was presented by Mahmoud et al. [11]. They developed a multi-modal framework for automatic detection of descriptors of rhythmic body movement by extracting Speeded-Up Robust Features (SURF) interest points around Microsoft Kinect pose points and then detecting rhythmic behaviors by analyzing the trajectories of the interest points. However, there are two limitations in their proposed automated system when applied to distress detection: 1) The dataset they used was based on acted data, so the behavior detected is not natural. For example, in more realistic interview scenarios, participants do not always fidget with a rhythmic pattern.

2. https://developer.microsoft.com/en-us/windows/kinect/
2) The trajectory data was noisy, and their method could not sufficiently handle the complexity of the detected body signal. As such, they were only able to achieve 59% recognition on their acted dataset.
Multi-modal Learning
Since psychological distress is expressed through all modalities, many of the state-of-the-art models that predict signs of psychological distress propose multi-modal approaches [20], [31], [32], [33], [34], [38], [39], combining low-level features extracted from face, speech, and text, which are usually the features publicly available for these datasets. By working only with extracted features, most of these works focused on exploiting the given features instead of analyzing the behavioral cues (e.g., specific gestures) of psychological distress. For example, the winner of AVEC 2019 [33] proposed multi-layer attention fusion frameworks, but did not explore the psychological basis of their models' decisions due to the lack of access to the raw data.
DATASET
In this section, we describe the data collection, experimental design, and general characteristics of our collected dataset. This dataset is designed to enable investigation of the body modality for use in automatic detection of distress.
Currently, the corpus is not publicly available due to the sensitivity of the collected video. Longer-term, we intend to make some portions (such as the features) of the data more broadly available to the research community.
Overview and design
Participants were recruited through University of Cambridge email lists, student social media groups, and paper fliers posted around the town. We aimed to balance the sample with regard to distress levels, such that the database includes participants at the two distinct ends of the distress spectrum. To identify participants with high versus low levels of distress, we conducted an online screening with a total of 106 people who signed up for the study. Participants completed standardized measures of depression (PHQ-8 [40], [41]) and anxiety (GAD-7 [42]), as well as demographics. In the selection, we balanced the participants according to the public norms shown in Table 2 (e.g., for depression, above 6.63 is marked as high, otherwise low). Given potential gender differences in nonverbal communication [43], we also balanced the final sample with regard to gender within each distress group.³ From the initial screening, 35 were invited to the face-to-face session, including 18 with high distress and 17 with low distress.

3. Non-binary/other was given as an option in the registration form. A number of people registered with this option. However, none of those people met the distress level criteria and were thus not selected for an interview.
The participants completed the same measures of depression and anxiety immediately before the interview. This was meant to provide an assessment of distress closer in time to the interview and to increase the psychological salience of this information during the interview. We adopted a data collection methodology inspired by the DAIC dataset collection method [44], which consists of a human interviewer asking a series of open-ended conversational questions to elicit naturalistic behavior. The interviews were performed by a computer science researcher based on peer-support interview questions collected from the university support services. To achieve the conversational interview dynamic the interviewer asks general questions regarding the participant's life and further encourages the participant to elaborate. For example, the interviewer would ask "can you tell me about one time in your life you were particularly happy?" and then ask some follow up questions regarding the example the participant provided. The interviewer was blind to the distress level of participants during the interview.
To keep behaviors naturalistic, participants were not aware of the main goal of the study, which is an automatic analysis of behavioral cues. Instead, they were told that the experiment aimed at building models that can help in mental well-being. This ensured that their behavior would be as natural as possible. All participants got debriefed of the main aim of the data collection at the end of the session. Participants were not informed of the results of their questionnaires, and all of them were handed a small booklet with the list of peer support and mental well-being services provided by the university. It is worth mentioning that the interviewer was blind to whether participants were from high or low distress groups in order not to affect their behavior. They were also instructed to limit their body and facial expressions throughout the interview and keep their sitting posture constant through all the interviews in order to avoid any changes in participants' behavior due to mimicry effect [45].
The dataset is labeled with participants' responses to self-evaluation questionnaires completed right before the interview, assessing distress and personality traits, as well as demographic labels such as gender. The distress questionnaires include the PHQ-8 for depression, GAD-7 for anxiety, SSS-8 [46] for somatic symptoms, and the PSS [47] for perceived stress. Personality traits are measured using the Big Five Inventory [48]. In sum, each participant provided responses to 5 questionnaires, of which PHQ-8 and GAD-7 were measured twice, both at registration and before the face-to-face session.
As a result, the dataset includes videos of fully natural non-acted expressions, including facial expressions, body motion, gestures, and speech.
Preliminary Analysis
We collected videos of 35 interviewed participants with a total video duration of 07:50:08. General statistics regarding the questionnaire and demographic results within the dataset are provided in Table 1. Correlations reported below are normalized covariance values, also known as correlation coefficients.
Confounding Correlations
We assessed confounding correlations based on the depression label, as much of the related work focuses on depression. While the distress measures, anxiety, perceived stress, and somatic stress, were found to be strongly correlated with depression, the personality measures have below 50% covariance with the exception of neuroticism, which is a trait characterized by negative emotionality, with an 80% covariance. The demographic measures, gender, and age were negligibly correlated, with 9.47% and -11.09% covariance, respectively. Finally, the interview duration was found to be not correlated with any questionnaire result (less than 25% covariance with all labels). Thus, we can be confident that there are no confounding correlations with personality scores or demographics.
Published Norms
A comparison of the mean values for distress and personality measures between our dataset and the published norms is presented in Table 2. While there are differences, the measures are generally in line with the published norms. The dataset has a substantially higher mean perceived stress score, but only slightly higher mean scores for anxiety and depression. Depression, extraversion, and neuroticism measures are particularly close to their published norms, while the dataset means for agreeableness and openness are substantially higher than the published norms (by over 10% of the technical range for those measures).
Remarks
Participants completed the PHQ-8 and GAD-7 questionnaires twice: during registration and with the interview process. These questionnaires are temporal; specifically, they relate to the participant's mental state in the past two weeks. Given this, some difference between registration and interview results was expected.
With the exception of a small number of outliers, participants were generally consistent in self-evaluation between registration and interview. PHQ-8 responses had a mean difference of 0.89, while GAD-7 responses had a mean difference of 0.63. As a result, we took the most recent response to self-evaluation questionnaires as the label for each participant's video recording.
METHOD
We used our collected dataset to study body gestures and self-adaptors. In this section, we demonstrate two different methods to analyze the body modality within the context of psychological distress. As a first step, we extract the most common audio-visual features. Then we describe a set of generic statistical body features that we extract to analyze general body gesture movement. To look specifically for self-adaptors, we then present an automatic approach to extract self-adaptors and fidgeting behavior in our dataset. We then perform a feature-based statistical analysis on the extracted body features, both generic and fidgeting features, to understand which features are generally correlated with distress classification. Lastly, we propose a multi-modal approach to further demonstrate the effectiveness of the body modality, where we incorporate and analyze the co-occurrence of multiple modalities to make predictions.
Audiovisual Feature Extraction
Visual Features
For each video, we used state-of-the-art tools, OpenPose [17] and OpenFace 2.2 [51], to extract body pose features, facial Action Units (AUs), and gaze directions.
However, OpenPose and OpenFace do not take the temporal consistency of the keypoints into account, causing the keypoints to fluctuate strongly in many parts and introducing noise into the real, continuous face and body motion. Besides, there are some frames where OpenPose or OpenFace fail to extract all pose points or gaze features, respectively. To overcome these problems, we infer the missing data via Cubic Spline Interpolation across the whole sequence. We then smooth the data using a Savitzky-Golay filter [52] (window length 11, polynomial order 3).
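A minimal sketch of this cleaning step with SciPy is shown below; the confidence threshold used to flag missing frames is an assumption, not a value from the text:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import savgol_filter

def clean_keypoint_track(track, conf, conf_thresh=0.1):
    """Repair and smooth a single keypoint coordinate track.

    track : (T,) array of x (or y) positions from OpenPose/OpenFace
    conf  : (T,) per-frame detection confidences; frames below
            conf_thresh are treated as missing (threshold assumed)
    """
    t = np.arange(len(track))
    valid = conf > conf_thresh
    # 1) infer missing frames with cubic spline interpolation across
    #    the whole sequence
    spline = CubicSpline(t[valid], track[valid])
    filled = spline(t)
    # 2) smooth with a Savitzky-Golay filter (window 11, polynomial
    #    order 3), matching the settings reported in the text
    return savgol_filter(filled, window_length=11, polyorder=3)
```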
Audio Features
Speaker diarization involves partitioning an audio stream into homogeneous segments according to speaker identity. In order to distinguish the speech of the interviewer and the participant, we use the open-source Speaker-Diarization project [53], which utilizes an Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) [54], to extract speaker identities with respect to the time axis. We then conduct a manual check to assign the correct diarization labels to the participant and the interviewer. We also use pyAudioAnalysis [55] to extract MFCCs.
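For illustration, the snippet below extracts MFCCs for the participant's diarized segments only. It uses librosa as a stand-in for pyAudioAnalysis, and the (start, end) segment format is an assumption about the diarization output:

```python
import librosa

def participant_mfccs(wav_path, segments, n_mfcc=13):
    """Extract MFCCs restricted to the participant's speech.

    wav_path : path to the interview audio file
    segments : list of (start_sec, end_sec) participant turns from the
               diarization step (format assumed for illustration)
    Returns one MFCC matrix of shape (n_mfcc, frames) per segment.
    """
    y, sr = librosa.load(wav_path, sr=None)  # keep the native sample rate
    feats = []
    for start, end in segments:
        clip = y[int(start * sr):int(end * sr)]
        feats.append(librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=n_mfcc))
    return feats
```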
Generic Body Features
To explore the body modality, we extract and analyze the set of generic statistical features that describe the body movements.
Feature Extraction
Two kinds of statistical features are computed and extracted: global features and localized features. In the global features, we care about the overall statistics of motion, while in the localized features (features that are within specific body parts, such as head, hands, and legs), we are interested in the statistics of the motion within the body parts, which we refer to as "localization". Our notation is summarized in Table 3.
We define a "gesture" as a period of sustained movement within a body localization. For example, waving hands is a gesture within "Hn (hand)" localization, and shaking legs continuously will register a gesture in "L (Legs)" localization.
To detect gestures within a localization, we scan the video using a moving window method.
First, the per-frame absolute movement (L2 distance) is calculated for each pose point. The value is then averaged over the number of pose points in the localization. Formally,
$F_t = \frac{1}{|P|} \sum_{p \in P} \left\| P_{p,t} - P_{p,t-1} \right\|_2$    (1)
where P_{p,t} is the position vector of pose point p at time t, F_t is the averaged per-frame movement across all points, and P is the collection of pose points in this localization. Second, a moving window is applied such that a small number of frames does not have a disproportionate effect on the detection. This process can be expressed by:
$W_i = \frac{1}{l} \sum_{t = i \times l}^{t < (i+1) \times l} F_t$    (2)
where W_i is the windowed average at window index i, l is the length of the window, and F_t is the average movement at frame t, from Equation 1. We experimentally chose l = 10, i.e. a second of movement is represented by 3 windows. Third, the window moves until an average movement above a threshold is found, which is considered the beginning of the gesture. The gesture continues until n = 3 consecutive windows (30 frames, approximately 1 sec) are found below the movement threshold, which is then considered the end of the gesture. Table 3 lists the set of body features we extract. Below we explain how we define each of these features for the overall body. Similarly, the localized features can be calculated for every localization/body part.
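Putting Eqs. (1)-(2) and the threshold logic together, a possible implementation of the gesture detector looks as follows; the movement threshold is a tunable placeholder, not a value reported in the text:

```python
import numpy as np

def detect_gestures(positions, l=10, n=3, threshold=1.0):
    """Detect sustained-movement gestures in one localization.

    positions : (T, P, 2) array of smoothed pose points for the localization
    l         : window length in frames (l = 10 as in the text)
    n         : consecutive below-threshold windows that end a gesture
    threshold : movement threshold in pixels/frame (placeholder value)
    Returns a list of (start_window, end_window) index pairs.
    """
    # Eq. (1): per-frame movement averaged over the localization's points
    F = np.linalg.norm(np.diff(positions, axis=0), axis=2).mean(axis=1)
    # Eq. (2): non-overlapping windowed averages
    n_win = len(F) // l
    W = F[:n_win * l].reshape(n_win, l).mean(axis=1)

    gestures, start, quiet = [], None, 0
    for i, w in enumerate(W):
        if start is None:
            if w > threshold:          # gesture begins
                start, quiet = i, 0
        elif w < threshold:
            quiet += 1
            if quiet >= n:             # gesture ends after n quiet windows
                gestures.append((start, i - n))
                start, quiet = None, 0
        else:
            quiet = 0
    if start is not None:              # gesture still open at sample end
        gestures.append((start, n_win - 1))
    return gestures
```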
• Average frame movement: the per-frame average movement (moving distance) of every pose point of the body. This is the only feature that is not based on detected gestures.
• Proportion of total movement occurring during a gesture: the proportion of total movement that occurred while a gesture was happening (within some localization).
• Average gesture surprise: defined as "fraction of frames with no gesture happening" ÷ "number of gestures". For example, if two gestures occurred within a sample such that 80% of the sample duration had no gesture occurring, the average gesture surprise would be 80%/2 = 40%. Whereas, if there were 100 gestures, the average surprise is 0.8%, even though both samples had the same proportion without any gesture occurring. This matches the intuition that each gesture within 100 evenly spaced gestures would be unsurprising as they were regularly occurring, whereas the 2 evenly spaced gestures would be surprising because nothing was happening in between.
• Average gesture movement standard deviation - the standard deviation of per-frame movement within a gesture, averaged across all detected gestures. This is intended to indicate the consistency of movement intensity through a gesture.
• Number of gestures - the total number of detected gestures across all tracked localizations.
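A minimal sketch of the detection loop of Eqs. (1)-(2) (the movement threshold is not reported in the text and is left as a parameter; the smoothed pose array is assumed to come from OpenPose):

import numpy as np

def detect_gestures(pose, threshold, window_len=10, end_windows=3):
    """pose: array (T, num_points, 2) of smoothed keypoints for one localization.
    Returns a list of (first_window, last_window) index pairs for detected gestures."""
    # Eq. (1): per-frame movement, averaged over the localization's pose points.
    frame_move = np.linalg.norm(np.diff(pose, axis=0), axis=2).mean(axis=1)
    # Eq. (2): average over non-overlapping windows of window_len frames.
    n_win = len(frame_move) // window_len
    windowed = frame_move[:n_win * window_len].reshape(n_win, window_len).mean(axis=1)

    gestures, start, quiet = [], None, 0
    for i, w in enumerate(windowed):
        if start is None:
            if w > threshold:                 # movement above threshold: gesture begins
                start, quiet = i, 0
        else:
            quiet = quiet + 1 if w <= threshold else 0
            if quiet == end_windows:          # 3 quiet windows (~1 s): gesture ends
                gestures.append((start, i - end_windows))
                start, quiet = None, 0
    if start is not None:
        gestures.append((start, n_win - 1))
    return gestures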
Feature Processing
All movement data is extracted from the smoothed OpenPose data described in Section 4.1.1. All body gesture features are concatenated (hereafter marked as BodyGesture, a feature vector of length 20 for each participant) and normalized so that the length of the sample does not affect the results. Sum-based features (e.g., gesture length, gesture count, total movement) are normalized against the total number of frames in the sample. Gesture-average features, such as gesture surprise, are additionally normalized against the total number of gestures.
Self-adaptors and Fidgeting features
In addition to the generic body features, we are interested in analyzing self-adaptor and fidgeting behavior. In this section, we present our fidgeting detection system in three subsections. We start by describing the self-adaptor/fidgeting encoding and the overall hierarchical design. Then we show the methods for building the two essential detectors of our hierarchical model in the following two subsections. For each detector, we demonstrate the detector's design and then present the labeling strategy that provides reliable labels for training and evaluation. In order to validate the effectiveness of our automated fidgeting detection approach before moving on to distress classification, we evaluate our model thoroughly both on an acted dataset and on our newly collected dataset of natural expressions. Given the lack of broad agreement on the definition of fidgeting, we utilize a two-step hierarchical model to identify it. As shown in Table 4, we first identify self-adaptors, which we define as low-level location events (e.g., H2H, H2F). Secondly, action events (i.e., DYNAMIC, STATIC) of the hand/leg are classified by the DYNAMIC/STATIC Classifier. Fidgeting is then defined as a combination of low-level self-adaptors and action events. Specifically, we define three types of fidgeting: cross-hand fidgeting, single-hand fidgeting, and leg/feet fidgeting.
Overall Design and Encoding
TABLE 4
Self-adaptor and fidgeting encoding book.

Self-adaptors   Description
H2H             Hand to Hand
H2A             Hand to Arm
H2L             Hand to Leg
H2F             Hand to Face
HF              Hand Free (when not belonging to any of the above)
L2G             Both Legs on Ground
L2L             Leg on the other Leg (crossed legs)

Action Events   Description
DYNAMIC         Moving obviously
STATIC          No obvious movement is observed

Fidgeting Type                Combination
CHF (Cross Hand Fidgeting)    H2H + DYNAMIC
SHF (Single Hand Fidgeting)   {H2A, H2L, H2F} + DYNAMIC
SHF-L (to Leg only)           H2L + DYNAMIC
SHF-F (to Face only)          H2F + DYNAMIC
SHF-A (to Arm only)           H2A + DYNAMIC
LFF (Leg/Feet Fidgeting)      {L2G, L2L} + DYNAMIC
Self-adaptor Detector
Design:
Each body location is represented using a bounding box, and self-adaptors are defined as overlapping bounding boxes. We represent the hand and face using the smallest rectangular box bounding all corresponding hand or face keypoints. The long sides of the forearm, upper-arm, lower-leg, and upper-leg bounding boxes are aligned with the connection between two joints from OpenPose, while the width is a free parameter tuned for the best automatic detection performance.
First, H2H self-adaptor events are detected (i.e., when the two hands' bounding boxes overlap). Then all other hand-based self-adaptor events are detected for all segments of the video that do not contain H2H events.
All self-adaptors, except for H2F, must last longer than 100 frames (around 4 seconds at a frame rate of 26). This reduces noise from spuriously detected self-adaptor events.
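A minimal sketch of the overlap test and the minimum-duration filter (the box format and the 100-frame constant follow the text; the helper names are our own):

import numpy as np

def boxes_overlap(a, b):
    """Axis-aligned boxes given as (x_min, y_min, x_max, y_max)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def filter_short_events(per_frame_flags, min_len=100):
    """Keep only runs of True (e.g. H2H detected) lasting >= min_len frames."""
    events, start = [], None
    for t, flag in enumerate(per_frame_flags):
        if flag and start is None:
            start = t
        elif not flag and start is not None:
            if t - start >= min_len:
                events.append((start, t))
            start = None
    if start is not None and len(per_frame_flags) - start >= min_len:
        events.append((start, len(per_frame_flags)))
    return events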
Labeling and Evaluation:
In order to validate our self-adaptor detector, we manually labeled 4 participants' videos, with a total duration of 59 minutes. The inter-labeler agreement was checked using Krippendorff's alpha. Each frame was labeled with one of the self-adaptor codes from Table 4. Within these videos, participants perform different self-adaptors, and each event has a minimum total duration of 5 minutes, with the exception of H2F. As shown in Table 5, the Krippendorff's alpha agreement is 0.823 for the left-hand location, 0.888 for the right-hand location, and 1.00 for the leg location. This suggests good agreement between the annotators and, thus, the reliability of the labels. The results show that our network is able to detect self-adaptors with excellent overall precision; for the H2H, H2F, L2L and L2G events in particular, the detector reached very high accuracy. Note that 'NA' in Table 5 means that there are no corresponding gestures in the evaluation set of the 4 labeled participants.

DYNAMIC/STATIC Classifier

Design:
As shown in Fig. 1, the DYNAMIC/STATIC Classifier operates on optical flow extracted from a sliding window across the video (size 100 frames, step 50 frames). To classify the action (DYNAMIC/STATIC), hand movements (especially fingers) and leg movements require optical flow to obtain smooth trajectories, since OpenPose estimations become unreliable when hands intersect or are occluded. We thus initialize the optical flow with the OpenPose estimations at the beginning of each slice.
We choose the Fast Fourier Transform (FFT), standard deviation (STD), and mean values (MEAN) of point trajectories as our input features (the number of trajectories is 2 × the number of keypoints, as we have 2-D data for each keypoint). For fidgeting, we are most interested in cyclic motion with a frequency ranging from 0.5 Hz to 2.5 Hz [11]. Therefore, we extract the spectrum within the range [0.5, 2.5] Hz. As we analyze slices of length 100, the dimension of the FFT spectrum within [0.5, 2.5] Hz is always fixed at 41 × the number of trajectories. We then average over the FFT values that have the same frequency to produce an FFT feature of length 41. As for the STD and MEAN features, we simply calculate them along the time axis, giving a vector of length equal to the number of trajectories for each feature.
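A minimal sketch of the per-slice spectral features just described (the frame rate and slice length follow the text; the exact number of retained bins, 41 in the paper, depends on the FFT length and sampling parameters):

import numpy as np

def slice_features(traj, fps=26.0, f_lo=0.5, f_hi=2.5):
    """traj: array (slice_len, num_traj) of optical-flow point coordinates.
    Returns (band-averaged FFT magnitudes, per-trajectory STD, per-trajectory MEAN)."""
    slice_len = traj.shape[0]                      # 100 frames in the paper
    spectrum = np.abs(np.fft.rfft(traj, axis=0))   # (num_bins, num_traj)
    freqs = np.fft.rfftfreq(slice_len, d=1.0 / fps)
    band = (freqs >= f_lo) & (freqs <= f_hi)       # cyclic fidgeting band [0.5, 2.5] Hz
    fft_feat = spectrum[band].mean(axis=1)         # average over trajectories per frequency
    return fft_feat, traj.std(axis=0), traj.mean(axis=0)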
Labeling and Evaluation:
To train and evaluate the DYNAMIC/STATIC Classifiers, accurate labeling is required. Three classifiers are required to cover the three categories of detected self-adaptors: {H2H}, {H2A, H2L, H2F}, and {L2G, L2L}. We labeled DYNAMIC/STATIC for each of the three categories, randomly sampling and labeling approximately 30% of slices for each category in every video. Two researchers labeled the data independently. As shown in Table 6, we first manually dropped the slices with a wrong category label (e.g., a slice detected as H2H that is in fact not). The number of slices with a correct category label is shown as "Correct". Secondly, we labeled DYNAMIC/STATIC and dropped the slices lacking a consensus between the two researchers. The number of slices with an agreement is shown as "Agreed". The high percentage of both "Correct" and "Agreed" suggests the good performance of our self-adaptor detection and the high reliability of the action labels. Having reliable slice labels, we then partitioned participants into 5 folds and performed slice-level cross-validation; for evaluation, we calculated accuracy, F1 score, and their respective standard deviations. As shown in Table 7, the detector achieved generally high accuracy and F1 scores with low standard deviations. Though the hand actions are difficult even for researchers to label, the detector successfully classifies more than 80% of slices.
Feature encoding
This section describes how we encode the low-level frame-level features described in Secs. 4.1 and 4.3 in preparation for the final prediction step. The generic statistical BodyGesture group does not need to be encoded, since it represents global statistical features rather than time-series features.
Fidgeting features processing
Having extracted low-level features from each frame, we combine them to form high-level descriptors of fidgeting behavior (CHF, SHF, and LFF, as shown in Table 4). The Fidget_pure feature group is formed by {CHF, SHF-L(left hand), SHF-L(right hand), SHF-A(left hand), SHF-A(right hand), SHF-F(left hand), SHF-F(right hand), LFF}. The Fidget_pure group is combined with a participant-speaking feature array to form the full fidget feature group, enabling us to investigate whether the co-occurrence of fidgeting and speaking is relevant. This participant-speaking feature array indicates whether the participant is speaking during a frame and is calculated using the previously described diarization data.
After all the feature extraction, we have several feature groups shown in Table 8.
Per-frame representation
In order to capture more useful feature representations and reduce dimensionality, and inspired by our previous work [56], different modalities are combined using a Multi-modal Deep Denoising Auto-Encoder (multi-DDAE). As shown in Fig. 2, each modality is encoded through a dense layer, and all encodings are then concatenated to yield a final shared dense layer, which provides the representation we use. The shared layer is then inversely decoded to regenerate each modality. We optimized the hyper-parameters of the autoencoder through several experiments, so that the dimensions of the hidden layers are {0.5d, 0.25d, 0.5d}, where d represents the input dimension of each node, and the noise applied at the input is 0.1 Gaussian noise. The training optimization target is the joint Mean Square Error (MSE) over the per-node feature-group MSEs (we later fixed the loss weights to 0.35 for the fidget feature group and 0.1 for the others, as we are more interested in fidgeting in our experiments).
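A minimal Keras sketch of one reading of this architecture (the modality names and input dimensions are illustrative assumptions; the {0.5d, 0.25d, 0.5d} sizing, the 0.1 Gaussian noise, and the loss weights follow the text):

from tensorflow import keras
from tensorflow.keras import layers

def build_multi_ddae(dims):
    """dims: ordered mapping of modality name -> input dimension,
    e.g. {"fidget": 10, "gaze": 8, "au": 35, "mfcc": 13} (illustrative sizes)."""
    inputs, encoded = [], []
    for name, d in dims.items():
        x_in = keras.Input(shape=(d,), name=name)
        x = layers.GaussianNoise(0.1)(x_in)                      # denoising corruption
        encoded.append(layers.Dense(max(1, int(0.5 * d)), activation="relu")(x))
        inputs.append(x_in)
    # Shared representation over the concatenated per-modality encodings.
    shared = layers.Dense(max(1, int(0.25 * sum(dims.values()))), activation="relu",
                          name="shared")(layers.Concatenate()(encoded))
    outputs = []
    for name, d in dims.items():
        h = layers.Dense(max(1, int(0.5 * d)), activation="relu")(shared)
        outputs.append(layers.Dense(d, name=name + "_rec")(h))
    model = keras.Model(inputs, outputs)
    # Joint MSE with a heavier weight on the fidget branch, as in the text.
    model.compile(optimizer="adam", loss="mse",
                  loss_weights=[0.35 if n == "fidget" else 0.1 for n in dims])
    return model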
TABLE 8
Feature Groups (columns: Feature Group, Dimension, Description). N is the number of frames in each recording of a participant.
Whole video representation
Due to the varying lengths of the videos, it is necessary to unify the dimensionality of the per-video representation. Though the Fisher Vector was originally proposed to aggregate visual features [18], it has become popular in social signal processing, such as bipolar disorder [57] and depression recognition [58]. Inspired by these applications, we apply a Gaussian Mixture Model (GMM) to cluster similar per-frame representations and then use an Improved Fisher Vector encoding to obtain a fixed-length representation. As a result, the feature is transformed from num_frames × feature_dim to 2 × GMM_kernel_num × feature_dim.
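A minimal sketch of this encoding step (a diagonal-covariance GMM is assumed; the "improvement" is the usual signed square root plus L2 normalization from [18]):

import numpy as np
from sklearn.mixture import GaussianMixture

def improved_fisher_vector(frames, gmm):
    """frames: (num_frames, feature_dim); gmm: fitted GaussianMixture(covariance_type='diag').
    Returns a fixed-length vector of size 2 * n_components * feature_dim."""
    T = frames.shape[0]
    post = gmm.predict_proba(frames)                 # (T, K) soft assignments
    sigma = np.sqrt(gmm.covariances_)                # (K, D) diagonal std devs
    fv = []
    for k in range(gmm.n_components):
        diff = (frames - gmm.means_[k]) / sigma[k]   # normalized residuals
        g = post[:, k:k + 1]
        # Gradients w.r.t. the means and variances of component k.
        d_mu = (g * diff).sum(axis=0) / (T * np.sqrt(gmm.weights_[k]))
        d_sig = (g * (diff ** 2 - 1)).sum(axis=0) / (T * np.sqrt(2 * gmm.weights_[k]))
        fv.extend([d_mu, d_sig])
    fv = np.concatenate(fv)
    fv = np.sign(fv) * np.sqrt(np.abs(fv))           # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)         # L2 normalization

# gmm = GaussianMixture(n_components=32, covariance_type="diag").fit(all_frames)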
Classification of signs of distress
We apply a Random Forest to select important features from the per-video representation; the selected features are used by the classifier. We experiment with two classifiers: 1) a logistic-regression-based classifier (LR) using a binary threshold of 0.5; and 2) a Multi-Layer Perceptron (MLP) with two softmax outputs for binary classification.
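A minimal sklearn sketch of this selection-then-classification step (the number of retained features, 200 here, is an illustrative value; it is swept later as "RF num" in Fig. 4):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def select_and_classify(X_train, y_train, X_test, n_keep=200):
    # Fit the forest on the training split only, to avoid label leaking.
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
    keep = np.argsort(rf.feature_importances_)[-n_keep:]
    clf = LogisticRegression(max_iter=1000).fit(X_train[:, keep], y_train)
    # Binary threshold of 0.5 on the positive-class probability.
    return (clf.predict_proba(X_test[:, keep])[:, 1] >= 0.5).astype(int)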
As the available samples are limited and the useful features vary across individuals, label smoothing [59] is applied to the MLP model in order to further boost the performance. More formally:
$L_{new} = L \times (1 - s) + \frac{s}{n} \quad (3)$
where $L$ is the one-hot label at the softmax outputs, $s$ is the smoothing parameter, and $n$ is the number of classification classes. For example, when the smoothing is 0.2, the one-hot label {0, 1} becomes {0.1, 0.9}, which lowers the confidence on training samples and thereby reduces overfitting.
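A one-function implementation of Eq. (3), for reference (numpy assumed):

import numpy as np

def smooth_labels(one_hot, s=0.4):
    """Eq. (3): soften one-hot targets; s = 0.4 is the value fixed later in Sec. 6.3."""
    n = one_hot.shape[-1]
    return one_hot * (1.0 - s) + s / n

# smooth_labels(np.array([0.0, 1.0]), s=0.2)  ->  [0.1, 0.9]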
STATISTICAL ANALYSIS OF BODY GESTURE
To better understand the effect of different body-related features before moving on to deep multimodal learning, we deploy a simple linear regression model to perform a statistical analysis of the body gesture features (BodyGesture from Sec. 4.2) and the fidgeting features (Fidget_pure from Sec. 4.4). The aim of this section is to shed some light on the effect of the movements of each part of the body and their correlation with depression.
Experimental Setup
Fidgeting features from Fidget_pure are processed by averaging along the time axis (9 × N to 9 × 1) to match the dimension of the other features in BodyGesture (20 × 1). The reporting notation is defined as "[localization]-[feature type][linear polarity]". Localization and feature-type token mappings are provided in Table 3. Polarity is defined below:
• "+/¬": A greater value (e.g. more activity) contributing to a positive/negative classification • "/": A near-zero coefficient in linear model. • "?": The polarity is observed inconsistent in different folds of cross-validation. With the linear model, we perform 3-fold crossvalidation on depression labels, which is more reliable than normal train-valid-test split for our small dataset. Crossvalidation also provides more confidence about the polarity of each feature, as only the features that show consistent polarity across all folds will be marked. All results are calculated as the mean of 3-fold cross-validation results. All experiments and cross-validation are participant-independent.
Results and Discussion
As shown in Table 9, with only the global movement (O-FM), the F1 score is just 34.43%. This means that the quantity of global body motion alone is not a sufficient indicator of depression. When combining all body-gesture statistical features, however, the classifier achieves a 66.81% F1 score.
Note that the full set of body-gesture statistical features comprises a large number of features representing the statistics of different body parts as well as global body motion, as explained in Sec. 4.2. In order to filter this large feature set, we performed an exhaustive feature search to obtain the combination of features that gives the best performance, represented in Table 9 as "Searched BodyGesture". It reaches a good F1 score of 82.70%.
As shown in Table 9, when we combine the specific fidgeting features (Fidget_pure) with BodyGesture and perform a feature search on the concatenated feature, the F1 score reaches its best value of 83.38%. The resulting best feature combination includes: {O-FM?, O-GM+, O-GN?, Hn-GN?, Hn-GS¬, He-GL+, He-GN+, He-GT+, He-GA+, He-GS+, L-GL+, L-GN+, L-GA+, SHF-L(Right)+, SHF-A(Right)+, SHF-F(Right)+, SHF-F(Left)+}. Looking into this list of features, we can infer some interesting insights into the overall body movements in our dataset. For example, the O-GM+ token suggests that more movement within gestures relative to all other movement is indicative of depression, and in particular, total movement within head gestures (He-GT+) is positively correlated with depression. The localized features suggest that the length of gestures in the head and legs (He-GL+, L-GL+) correlates with depression. It is clear that gesture statistics in the hands (Hn-*) are generally not informative for prediction, while the classifier pays more attention to head and leg motions. However, Hn-GS¬ suggests that more regular (thus less surprising) hand gestures (e.g., constant fidgeting) contribute positively to depression.
We can also conclude that a higher quantity of right-hand fidgeting on the leg, arm, and face (SHF-*(Right)+) contributes positively to a higher depression level, and left-hand fidgeting on the face (SHF-F(Left)+) is also positively correlated with a high depression level. The difference between left and right might arise because most participants are right-handed; their left hands therefore exhibit fewer motions that are predictive of depression. This conclusion is not surprising, as, in our observations, people perform hand-to-hand fidgeting regardless of their depression label. Combining the results from above, we conclude that, in our dataset, more regular hand gestures and more fidgeting on the leg, arm, and face are indicative of depression. Depressed participants also exhibit more frequent motions in the head and leg regions.
EVALUATION OF MULTIMODAL DEEP LEARNING
In this section, we evaluate and demonstrate the validity and potential of fidgeting features as a complementary modality alongside other features for predicting signs of psychological distress.
First, we present some baseline distress classification results on our dataset. Next, we present results for our full multi-modal classifier pipeline, where we investigate the effects of hyper-parameters on the performance given the small size of our dataset. Finally, we apply our automatic fidgeting detection approach to a publicly available dataset [11] to demonstrate its accuracy and generalisability beyond our dataset.
As in Sec. 5, all results are calculated as the mean of 3-fold cross-validation results. All experiments and crossvalidation are participant-independent.
Baselines
As a baseline, we use Gaussian-kernel Support Vector Machine (SVM) classifiers applied to each individual feature group used in our multi-modal model (listed in Table 8). Unlike in Sec. 5, non-linearity can be captured by these baseline models. They are evaluated on a binary depression label and a binary anxiety label, providing a simple and common baseline for our dataset. For the baseline SVM, we use the mean value of each feature over the whole sample, thus providing a normalized representation with the mean values of all the features. Results are presented in Fig. 5.
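A minimal sketch of this baseline (mean pooling over frames, then an RBF-kernel SVM; sklearn defaults stand in for unreported hyper-parameters):

import numpy as np
from sklearn.svm import SVC

def svm_baseline(samples, labels):
    """samples: list of (num_frames, feature_dim) arrays, one per participant video."""
    X = np.stack([s.mean(axis=0) for s in samples])   # mean value of each feature
    return SVC(kernel="rbf").fit(X, labels)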
These baseline models demonstrate two points: first, the behaviors we are attempting to classify in our dataset are complex; and second, our fidgeting features by themselves are not trivially predictive of distress, but rather require learned representations.
Multi-modal distress classification
As presented in the previous baseline section, single modalities are not enough to capture the complexity of signs of psychological distress. Therefore we experiment with our proposed multi-modal classification framework. We encode different modalities through multi-DDAE and Improved Fisher Vector encoding (Sec. 4.4), and classify distress labels using either LR or MLP classifier after Random Forest feature selection (Sec. 4.5).
In Fig. 5, we present the best performance of different feature-group combinations using our multi-modal fusion framework. We use a Random Forest (RF) for feature selection. As RFs take in labels to find the most discriminative features, this feature selection is performed only on the training set, and the selected features are then applied to the test set, which prevents label leaking.
Effects of some hyper-parameters
Fig. 3 presents the strong effect of label smoothing on classification performance when the other hyper-parameters are fixed. Though some fluctuations exist, performance increases with stronger label smoothing but starts to decrease when the smoothing becomes too strong. This is intuitively reasonable: when smoothing is above 0.5, there is little room left for the model to learn discriminative features. The results in Fig. 3 show that a label smoothing parameter of 0.4 generally provides good performance, and we therefore fixed this value in the subsequent experiments.
We test different numbers of features selected by RF (RF num), and different GMM kernel sizes. Fig. 4 shows that the performance is generally worse when RF num is low (< 100) as it results in insufficient information. However, when RF num is high (≥ 250), redundant features bias the classifier, decreasing performance.
Using 32 GMM kernels achieves better performance than 16 kernels. We believe this is due to the way the GMM clusters similar per-frame features: more kernels mean more clusters and thus more predictive information. However, when the kernel size is above 32, the fitting score is large (for GMMs, lower is better), and therefore increasing beyond 32 does not further improve performance.

Effects of feature groups

From Fig. 5, it is clear that fidget features improve performance in most configurations, but performance decreases slightly without the participant-speaking event (presented as "Pure Fidgeting" in the figure). This leads us to conclude that the co-occurrence of speaking and fidgeting is relevant for distress detection.

Ablation Analysis

Fig. 5 also presents our ablation analysis, which helps us better understand the important factors in distress classification. We remove one or two feature groups from our framework and conduct the same experiments.
Without the MFCC features, performance generally does not drop much for depression and even increases for anxiety. This may suggest that MFCCs are not very important for depression detection and are even distracting for anxiety detection.
AUs have long been shown to be predictive of distress, and, as expected, we see a significant performance reduction when omitting them.
It is interesting to note that fidgeting, in the LR configuration, does not consistently improve performance, but for anxiety it always boosts the classification results. This allows us to conclude that fidgeting is certainly important for anxiety, and is also predictive for depression when combined with other feature configurations.
Fidget detector cross-dataset validation
To further validate our automatic fidgeting detection approach, we evaluate it on a publicly available dataset from Mahmoud et al. [11] that has videos of fidgeting behavior along with manual fidgeting labels.
In this dataset, actors perform specific fidgets. While these fidgets are overemphasized compared to natural fidgets, their core movement is similar.
Segments of the video containing fidgeting are manually labeled in an action-exclusive manner. That is, the co-occurrence of fidgeting is not labeled. Given this, we measure the accuracy of our approach in two phases: first, we check that fidgeting, regardless of location, is detected during the periods of manually labeled fidgeting; and second, we calculate the recall for location-specific fidgeting. Precision would not make sense for location-specific fidgeting, because the detected location may also be fidgeting, while the ground truth only considers one location.
Detected fidgeting segments shorter than 100 frames are excluded to reduce noise. As shown in Table 10, the recall of the non-fidget label is around 50%, but this is due to the fact that the labels are generally assigned to long continuous segments and do not accurately reflect the actions occurring per frame. However, the recall of the fidget label is good, reaching 80%.
Our fidgeting detection approach outperforms the state-of-the-art presented by Mahmoud et al. [11] for each fidget type, achieving a recall above 75% for all fidgeting types.
CONCLUSION
We introduced a novel audio-visual distress dataset comprising recorded interviews and distress labels based on psychological questionnaires. We then presented an automated self-adaptor and fidgeting detection system to extract different fidgeting behaviors from real interview videos. We validated our automated approach by evaluating it on a manually labeled, publicly available fidgeting dataset as well as on our newly collected dataset of natural expressions.
Statistical analysis with generic gesture features was carried out, providing interesting insights into the effect of different generic body movements and their correlation with depression labels.
We also presented a deep learning method that does not require a feature search and utilizes the co-occurrence of different multi-modal features. We combined our detected fidgeting features with three other modalities, AUs, gaze, and MFCCs, in a multi-modal distress classification pipeline. This pipeline utilizes a Multi-modal Deep Denoising Auto-Encoder to compactly represent the modalities per frame, a GMM-to-Fisher-Vector step to compactly represent the features across a whole video, and a Random Forest to select important features. Finally, we tested the binary classification of depression/anxiety labels using LR and MLP classifiers. An ablation study was carried out to demonstrate the effect of the detected fidgeting behaviors in predicting signs of psychological distress.
LIMITATIONS AND FUTURE WORK
Given the limitations of the small dataset we used, more work is required to utilize the fidgeting features as a complementary modality for classification and prediction of psychological distress. Though recruiting participants and interviewing are time-consuming and costly, we are planning to extend our dataset with more videos. In our multimodal classification experiments, we treated all fidgeting features as a whole. When more data is available, it will be interesting to evaluate the importance of each fidget behavior (e.g., hand to arm fidget and hand to hand fidget). In our work, we only focused on depression and anxiety disorders. However, our automatic approach to detecting self-adaptors and fidgeting opens the door for more work to explore the presence of these non-verbal behaviors and measure them quantitatively in other psychological disorders.
TABLE 3
Feature notation abbreviations of BodyGesture. Localization prefixes used in the text: O (overall), Hn (hand), He (head), L (legs).

Global      Average Frame Movement                        FM
Global      Proportion of total movement in Gestures      GM
Global      Average Gesture movement standard Deviation   GD
Global      Number of Gestures                            GN
Localized   Average Length of Gesture                     GL
Localized   Average per-frame Gesture movement            GA
Localized   Total movement in Gestures                    GT
Localized   Average Gesture Surprise                      GS
Localized   Number of Gestures                            GN
Fig. 1. Hierarchical self-adaptor detection workflow. (1) First, detect the hand/leg location; (2) classify the motion using the DYNAMIC/STATIC Classifier and finally combine location and motion to give a high-level fidgeting event. The figure shows the detection of an H2H (hand-to-hand) fidget; the same principle applies to other fidgets.
Fig. 2. Multi-modal fusion & classification pipeline. The dashed arrow represents a fully connected neural network between dense layers. Pose estimation, gaze, Action Units, and MFCC data are extracted from videos; fidget features are computed using the method described in Section 4. (1) All features are fed into a Multi-modal Deep Denoising Auto-Encoder (multi-DDAE) to generate a compact per-frame encoded representation. (2) These per-frame features are then compressed into a whole-video representation using a Gaussian Mixture Model (GMM) and Fisher Vector combination. (3) Random Forest feature selection is performed. (4) Finally, a classifier predicts a given label; we experiment with two classifiers, a logistic regression classifier and a Multi-Layer Perceptron.
Fig. 3. Effects of label smoothing. In general, smoothing can boost performance (error bars extend by the standard deviation on either side; best performance in bold).

Fig. 4. Effects of hyper-parameters. Red denotes models incorporating fidget features and blue non-fidget models. In general, models with fidget features perform better (error bars are not shown for better visualization; best performance of each model in bold). RF+number denotes the number of features selected by the Random Forest.

Fig. 5. Effects of feature groups and ablation analysis (error bars extend by the standard deviation on either side; best performance in bold).
TABLE 1
General statistics regarding the questionnaire and demographic results within the dataset. This table demonstrates there are no confounding correlations with the depression label.

              Label               Range        Mean    Covariance with Depression
Distress      Depression          0-19         7.43    -
              Anxiety             0-19         7.00    86.15%
              Perceived stress    1-30         18.17   84.00%
              Somatic symptoms    1-27         9.06    74.16%
Personality   Extraversion        3-31         16.37   -30.49%
              Agreeableness       12-34        25.67   -42.21%
              Openness            7-39         27.29   4.29%
              Neuroticism         1-31         16.86   80.00%
              Conscientiousness   10-36        21.46   -46.41%
Demographic   Gender              18 M & 17 F          9.47%
              Age                 18-52        25.40   -11.09%
TABLE 2
Comparison of the mean questionnaire values within our dataset to the published norms. This shows that the population distribution, with regard to these distress and personality measures, is generally in line with the broader population.
TABLE 6
Hand/Leg action labelling overview.
TABLE 7
DYNAMIC/STATIC Classifier evaluation (LEFT means left hand, RIGHT means right hand, BOTH means both hands).

Category                 Acc.    Acc. Std.   F1      F1 Std.
BOTH: H2H                0.833   0.019       0.834   0.019
LEFT: {H2A, H2L, H2F}    0.884   0.025       0.884   0.026
RIGHT: {H2A, H2L, H2F}   0.895   0.026       0.894   0.026
{L2G, L2L}               0.875   0.022       0.871   0.021
TABLE 10
Results of fidget detection on Mahmoud et al.'s dataset [11].

Step 1: Detect fidget only
Label           Precision   Recall   F1-score   Support
0 (no fidget)   0.51        0.49     0.50       29440
1 (fidget)      0.79        0.80     0.80       69517

Step 2: Detect specific fidgeting (evaluated with recall)
Fidget type     Recall   Support
leg             0.784    32430
hand to face    0.865    10594
hand to arm     0.787    12794
hand cross      0.768    13699
Indigo Orton is a PhD student in Computer Science.

Dr Marwa Mahmoud is a Research Fellow of King's College and an Affiliated Lecturer at the Department of Computer Science and Technology, University of Cambridge. Her research focuses on computer vision and machine learning within the context of affective computing, behaviour analytics and human behaviour understanding. She is particularly interested in building inference models that tackle challenging real-world problems, usually characterised by data scarcity and noisy signals from multiple modalities. She applies her research in the areas of automotive applications, healthcare, and animal welfare.
Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990-2016: a systematic analysis for the global burden of disease study 2016. T Vos, A A Abajobir, K H Abate, C Abbafati, K M Abbas, F Abd-Allah, R S Abdulkader, A M Abdulle, T A Abebo, S F Abera, The Lancet. 39010100T. Vos, A. A. Abajobir, K. H. Abate, C. Abbafati, K. M. Abbas, F. Abd-Allah, R. S. Abdulkader, A. M. Abdulle, T. A. Abebo, S. F. Abera et al., "Global, regional, and national incidence, prevalence, and years lived with disability for 328 diseases and injuries for 195 countries, 1990-2016: a systematic analysis for the global burden of disease study 2016," The Lancet, vol. 390, no. 10100, pp. 1211-1259, 2017.
Automatic detection of self-adaptors for psychological distress. W Lin, I Orton, M Liu, M Mahmoud, IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEEW. Lin, I. Orton, M. Liu, and M. Mahmoud, "Automatic detection of self-adaptors for psychological distress," in IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2020.
The World Health Report 2001. World Health Organization (WHO), The World Health Report 2001: Mental health: new understanding, new hope. Geneva: World Health Organization (WHO), 2001.
Depression and other common mental disorders: global health estimates. World Health Organization (WHO), Depression and other common mental disorders: global health estimates. Geneva: World Health Organization (WHO), 2017.
Home - Mental Health Foundation. Mental Health Foundation, "Home - Mental Health Foundation," accessed 2020/07/04. [Online].
Why bodies? Twelve reasons for including bodily expressions in affective neuroscience. B De Gelder, Philosophical Transactions of the Royal Society B: Biological Sciences. 3641535B. De Gelder, "Why bodies? Twelve reasons for including bodily expressions in affective neuroscience," Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 364, no. 1535, pp. 3475- 3484, 2009.
Dont scratch! Self-adaptors reflect emotional stability. M Neff, N Toothman, R Bowmani, J E F Tree, M A Walker, International Workshop on Intelligent Virtual Agents (IVA). SpringerM. Neff, N. Toothman, R. Bowmani, J. E. F. Tree, and M. A. Walker, "Dont scratch! Self-adaptors reflect emotional stability," in International Workshop on Intelligent Virtual Agents (IVA). Springer, 2011, pp. 398-411.
Towards automatic analysis of gestures and body expressions in depression. M Mahmoud, P Robinson, PervasiveHealthM. Mahmoud and P. Robinson, "Towards automatic analysis of gestures and body expressions in depression." in PervasiveHealth, 2016, pp. 276-277.
Semantic processing of self-adaptors, emblems, and iconic gestures: An erp study. K Chui, C.-Y Lee, K Yeh, P.-C Chao, Journal of Neurolinguistics. 47K. Chui, C.-Y. Lee, K. Yeh, and P.-C. Chao, "Semantic processing of self-adaptors, emblems, and iconic gestures: An erp study," Journal of Neurolinguistics, vol. 47, pp. 105-122, 2018.
To behave like a liar: Nonverbal cues to deception in an asian sample. S Chan, M Khader, J Ang, J Chin, W Chai, Journal of Police and Criminal Psychology. 313S. Chan, M. Khader, J. Ang, J. Chin, and W. Chai, "To behave like a liar: Nonverbal cues to deception in an asian sample," Journal of Police and Criminal Psychology, vol. 31, no. 3, pp. 165-172, 2016.
Automatic multimodal descriptors of rhythmic body movement. M. Mahmoud, L.-P. Morency, and P. Robinson, "Automatic multimodal descriptors of rhythmic body movement," in International Conference on Multimodal Interaction (ICMI). ACM, 2013, pp. 429-436.
Nonverbal interaction of patients and therapists during psychiatric interviews. L. A. Fairbanks, M. T. McGuire, and C. J. Harris, "Nonverbal interaction of patients and therapists during psychiatric interviews," Journal of Abnormal Psychology, vol. 91, no. 2, p. 109, 1982.
An analysis of fidgeting and associated individual differences. A Mehrabian, S L Friedman, Journal of Personality. 542A. Mehrabian and S. L. Friedman, "An analysis of fidgeting and associated individual differences," Journal of Personality, vol. 54, no. 2, pp. 406-429, 1986.
The repertoire of nonverbal behavior: Categories, origins, usage, and coding. P Ekman, W V Friesen, Nonverbal Communication, Interaction, and Gesture. 1157106P. Ekman and W. V. Friesen, "The repertoire of nonverbal behavior: Categories, origins, usage, and coding," Nonverbal Communication, Interaction, and Gesture, vol. 1, no. 1, p. 57106, 1969.
Home literacy, television viewing, fidgeting and adhd in young children. J. M. Froiland and M. L. Davison, "Home literacy, television viewing, fidgeting and adhd in young children," Educational Psychology, vol. 36, no. 8, pp. 1337-1353, 2016.
Automatic behavior descriptors for psychological disorder analysis. S. Scherer, G. Stratou, M. Mahmoud, J. Boberg, J. Gratch, A. Rizzo, and L.-P. Morency, "Automatic behavior descriptors for psychological disorder analysis," in IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 2013, pp. 1-8.
Openpose: Realtime multi-person 2D pose estimation using part affinity fields. Z Cao, G Hidalgo, T Martinez, S Simon, Y A Wei, Sheikh, IEEE Transactions on Pattern Analysis and Machine Intelligence. Z. Cao, G. Hidalgo Martinez, T. Simon, S. Wei, and Y. A. Sheikh, "Openpose: Realtime multi-person 2D pose estimation using part affinity fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019.
Improving the fisher kernel for large-scale image classification. F Perronnin, J Sánchez, T Mensink, European Conference on Computer Vision (ECCV). SpringerF. Perronnin, J. Sánchez, and T. Mensink, "Improving the fisher kernel for large-scale image classification," in European Conference on Computer Vision (ECCV). Springer, 2010, pp. 143-156.
Facial action coding system: a technique for the measurement of facial movement. E Friesen, P Ekman, Palo Alto. 3E. Friesen and P. Ekman, "Facial action coding system: a technique for the measurement of facial movement," Palo Alto, vol. 3, 1978.
Multimodal measurement of depression using deep learning models. L Yang, D Jiang, X Xia, E Pei, M C Oveneke, H Sahli, Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACML. Yang, D. Jiang, X. Xia, E. Pei, M. C. Oveneke, and H. Sahli, "Multimodal measurement of depression using deep learning models," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACM, 2017, pp. 53-59.
Can body expressions contribute to automatic depression analysis. J Joshi, R Goecke, G Parker, M Breakspear, 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEEJ. Joshi, R. Goecke, G. Parker, and M. Breakspear, "Can body expressions contribute to automatic depression analysis?" in 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG). IEEE, 2013, pp. 1-7.
Multimodal detection of depression in clinical interviews. H Dibeklioglu, Z Hammal, Y Yang, J F Cohn, International Conference on Multimodal Interaction (ICMI). ACMH. Dibeklioglu, Z. Hammal, Y. Yang, and J. F. Cohn, "Multimodal detection of depression in clinical interviews," in International Conference on Multimodal Interaction (ICMI). ACM, 2015, pp. 307- 310.
Dynamic multimodal measurement of depression severity using deep autoencoding. H Dibeklioglu, Z Hammal, J F Cohn, IEEE Journal of Biomedical and Health Informatics. 222H. Dibeklioglu, Z. Hammal, and J. F. Cohn, "Dynamic multimodal measurement of depression severity using deep autoencoding," IEEE Journal of Biomedical and Health Informatics, vol. 22, no. 2, pp. 525-536, 2017.
Psychomotor symptoms of depression. C. Sobin and H. A. Sackeim, "Psychomotor symptoms of depression," American Journal of Psychiatry, vol. 154, no. 1, pp. 4-17, 1997.
Depression severity prediction based on biomarkers of psychomotor retardation. Z S Syed, K Sidorov, D Marshall, Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACMZ. S. Syed, K. Sidorov, and D. Marshall, "Depression severity prediction based on biomarkers of psychomotor retardation," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACM, 2017, pp. 37-43.
Automatic audiovisual behavior descriptors for psychological disorder analysis. S Scherer, G Stratou, G Lucas, M Mahmoud, J Boberg, J Gratch, L.-P Morency, Image and Vision Computing. 3210S. Scherer, G. Stratou, G. Lucas, M. Mahmoud, J. Boberg, J. Gratch, L.-P. Morency et al., "Automatic audiovisual behavior descriptors for psychological disorder analysis," Image and Vision Computing, vol. 32, no. 10, pp. 648-658, 2014.
Cross-cultural detection of depression from nonverbal behaviour. S. Alghowinem, R. Goecke, J. F. Cohn, M. Wagner, G. Parker, and M. Breakspear, "Cross-cultural detection of depression from nonverbal behaviour," in IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), vol. 1. IEEE, 2015, pp. 1-8.
Detecting depression severity by interpretable representations of motion dynamics. K. Anis, H. Zakia, D. Mohamed, and C. Jeffrey, "Detecting depression severity by interpretable representations of motion dynamics," in IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2018, pp. 739-745.
Analysis of fundamental frequency for near term suicidal risk assessment. A Ozdas, R G Shiavi, S E Silverman, M K Silverman, D M Wilkes, International Conference on Systems, Man and Cybernetics (SMC). IEEE3A. Ozdas, R. G. Shiavi, S. E. Silverman, M. K. Silverman, and D. M. Wilkes, "Analysis of fundamental frequency for near term suicidal risk assessment," in International Conference on Systems, Man and Cybernetics (SMC), vol. 3. IEEE, 2000, pp. 1853-1858.
An end-to-end model for detection and assessment of depression levels using speech. N. Srimadhur and S. Lalitha, "An end-to-end model for detection and assessment of depression levels using speech," Procedia Computer Science, vol. 171, pp. 12-21, 2020.
Multi-modality hierarchical recall based on gbdts for bipolar disorder classification. X. Xing, B. Cai, Y. Zhao, S. Li, Z. He, and W. Fan, "Multi-modality hierarchical recall based on gbdts for bipolar disorder classification," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACM, 2018, pp. 31-37.
Bipolar disorder recognition with histogram features of arousal and body gestures. L Yang, Y Li, H Chen, D Jiang, M C Oveneke, H Sahli, Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACML. Yang, Y. Li, H. Chen, D. Jiang, M. C. Oveneke, and H. Sahli, "Bipolar disorder recognition with histogram features of arousal and body gestures," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACM, 2018, pp. 15-21.
Multilevel attention network using text, audio and video for depression prediction. A. Ray, S. Kumar, R. Reddy, P. Mukherjee, and R. Garg, "Multilevel attention network using text, audio and video for depression prediction," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC), 2019, pp. 81-88.
A multi-modal hierarchical recurrent neural network for depression detection. S. Yin, C. Liang, H. Ding, and S. Wang, "A multi-modal hierarchical recurrent neural network for depression detection," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC), 2019, pp. 65-71.
Deep residual learning for image recognition. K He, X Zhang, S Ren, J Sun, IEEE Conference on Computer Vision and Pattern Recognition (CVPR). K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
Very deep convolutional networks for large-scale image recognition. K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," International Conference on Learning Representations (ICLR), 2015.
Automatic detection of adhd and asd from expressive behaviour in rgbd data. S Jaiswal, M F Valstar, A Gillott, D Daley, IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEES. Jaiswal, M. F. Valstar, A. Gillott, and D. Daley, "Automatic detection of adhd and asd from expressive behaviour in rgbd data," in IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2017, pp. 762-769.
AVEC 2018: Bipolar disorder and cross-cultural affect recognition. F. Ringeval, B. Schuller, M. Valstar, R. Cowie, H. Kaya, M. Schmitt, S. Amiriparian, N. Cummins, D. Lalanne, A. Michaud et al., "AVEC 2018: Bipolar disorder and cross-cultural affect recognition," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACM, 2018, pp. 3-13.
AVEC 2017: Real-life depression, and affect recognition workshop and challenge. F Ringeval, B Schuller, M Valstar, J Gratch, R Cowie, S Scherer, S Mozgai, N Cummins, M Schmitt, M Pantic, Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACMF. Ringeval, B. Schuller, M. Valstar, J. Gratch, R. Cowie, S. Scherer, S. Mozgai, N. Cummins, M. Schmitt, and M. Pantic, "AVEC 2017: Real-life depression, and affect recognition workshop and challenge," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACM, 2017, pp. 3-9.
The PHQ-8 as a measure of current depression in the general population. K Kroenke, T W Strine, R L Spitzer, J B W Williams, J T Berry, A H Mokdad, Journal of Affective Disorders. 1141-3K. Kroenke, T. W. Strine, R. L. Spitzer, J. B. W. Williams, J. T. Berry, and A. H. Mokdad, "The PHQ-8 as a measure of current depression in the general population," Journal of Affective Disorders, vol. 114, no. 1-3, pp. 163-173, Apr. 2009.
The PHQ-9: validity of a brief depression severity measure. K Kroenke, R L Spitzer, J B W Williams, Journal of General Internal Medicine. 169K. Kroenke, R. L. Spitzer, and J. B. W. Williams, "The PHQ-9: validity of a brief depression severity measure," Journal of General Internal Medicine, vol. 16, no. 9, pp. 606-613, 2001.
A brief measure for assessing generalized anxiety disorder. R L Spitzer, K Kroenke, J B W Williams, B Löwe, Archives of Internal Medicine. 16610R. L. Spitzer, K. Kroenke, J. B. W. Williams, and B. Löwe, "A brief measure for assessing generalized anxiety disorder," Archives of Internal Medicine, vol. 166, no. 10, pp. 1092-1097, May 2006.
Nonverbal communication. A Mehrabian, Transaction PublishersA. Mehrabian, Nonverbal communication. Transaction Publishers, 1972.
The distress analysis interview Corpus of human and computer interviews. J. Gratch, R. Artstein, G. M. Lucas, G. Stratou, S. Scherer, A. Nazarian, R. Wood, J. Boberg, D. DeVault, S. Marsella, D. R. Traum, S. Rizzo, and L.-P. Morency, "The distress analysis interview Corpus of human and computer interviews," International Conference on Language Resources and Evaluation (LREC), 2014.
The Social Context of Nonverbal Behavior. U Hess, P Philippot, S Blairy, Mimicry: Facts and fictionU. Hess, P. Philippot, and S. Blairy, "Mimicry: Facts and fiction," The Social Context of Nonverbal Behavior, pp. 213-241, 1999.
The somatic symptom scale-8 (SSS-8). B Gierk, S Kohlmann, K Kroenke, L Spangenberg, M Zenger, E Brähler, B Löwe, JAMA Internal Medicine. 1743B. Gierk, S. Kohlmann, K. Kroenke, L. Spangenberg, M. Zenger, E. Brähler, and B. Löwe, "The somatic symptom scale-8 (SSS-8)," JAMA Internal Medicine, vol. 174, no. 3, pp. 399-407, Mar. 2014.
Measuring Stress: A Guide for. S Cohen, T Kamarck, R Mermelstein, Health and Social Scientists. 10Perceived stress scaleS. Cohen, T. Kamarck, and R. Mermelstein, "Perceived stress scale," Measuring Stress: A Guide for Health and Social Scientists, vol. 10, pp. 1-2, 1994.
The big five trait taxonomy: history, measurement, and theoretical perspectives. O P John, S Srivastava, Handbook of personality: Theory and research. 2O. P. John, S. Srivastava et al., "The big five trait taxonomy: history, measurement, and theoretical perspectives," Handbook of personality: Theory and research, vol. 2, no. 1999, pp. 102-138, 1999.
National study of chronic disease self-management: six-month outcome findings. M G Ory, S Ahn, L Jiang, K Lorig, P Ritter, D D Laurent, N Whitelaw, M L Smith, Journal of Aging and Health. 257M. G. Ory, S. Ahn, L. Jiang, K. Lorig, P. Ritter, D. D. Laurent, N. Whitelaw, and M. L. Smith, "National study of chronic disease self-management: six-month outcome findings," Journal of Aging and Health, vol. 25, no. 7, pp. 1258-1274, Sep. 2013.
Development of personality in early and middle adulthood: Set like plaster or persistent change?. S Srivastava, O P John, S D Gosling, J Potter, Journal of Personality and Social Psychology. 845S. Srivastava, O. P. John, S. D. Gosling, and J. Potter, "Development of personality in early and middle adulthood: Set like plaster or persistent change?" Journal of Personality and Social Psychology, vol. 84, no. 5, pp. 1041-1053, 2003.
Openface 2.0: Facial behavior analysis toolkit. T. Baltrusaitis, A. Zadeh, Y. C. Lim, and L.-P. Morency, "Openface 2.0: Facial behavior analysis toolkit," in IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2018, pp. 59-66.
What is a savitzky-golay filter. R W Schafer, IEEE Signal Processing Magazine. 284R. W. Schafer et al., "What is a savitzky-golay filter," IEEE Signal Processing Magazine, vol. 28, no. 4, pp. 111-117, 2011.
Speaker diarization. L. Dong, "Speaker-Diarization," GitHub repository, accessed 2020/07/04. [Online]. Available: https://github.com/taylorlu/Speaker-Diarization
Fully supervised speaker diarization. A Zhang, Q Wang, Z Zhu, J Paisley, C Wang, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEEA. Zhang, Q. Wang, Z. Zhu, J. Paisley, and C. Wang, "Fully supervised speaker diarization," in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2019, pp. 6301-6305.
pyAudioAnalysis: an open-source python library for audio signal analysis. T. Giannakopoulos, "pyAudioAnalysis: an open-source python library for audio signal analysis," PLoS ONE, vol. 10, no. 12, 2015.
Multimodal deep learning framework for mental disorder recognition. Z Zhang, W Lin, M S Liu, M Mahmoud, IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEEZ. Zhang, W. Lin, M. S. Liu, and M. Mahmoud, "Multimodal deep learning framework for mental disorder recognition," in IEEE International Conference on Automatic Face & Gesture Recognition (FG). IEEE, 2020.
Automated screening for bipolar disorder from audio/visual modalities. Z S Syed, K Sidorov, D Marshall, Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACMZ. S. Syed, K. Sidorov, and D. Marshall, "Automated screening for bipolar disorder from audio/visual modalities," in Annual Workshop on Audio/Visual Emotion Challenge (AVEC). ACM, 2018, pp. 39-45.
A temporally piece-wise fisher vector approach for depression analysis. A Dhall, R Goecke, International Conference on Affective Computing and Intelligent Interaction (ACII). IEEEA. Dhall and R. Goecke, "A temporally piece-wise fisher vector approach for depression analysis," in International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2015, pp. 255-259.
When does label smoothing help. R Müller, S Kornblith, G E Hinton, Advances in Neural Information Processing Systems (NeurIPS). R. Müller, S. Kornblith, and G. E. Hinton, "When does label smoothing help?" in Advances in Neural Information Processing Systems (NeurIPS), 2019, pp. 4696-4705.
Weizhe Lin is a 3rd-year undergraduate student in Information Engineering at the University of Cambridge. He studied Computer Science at Hong Kong University for one year, and then transferred to Cambridge for a 4-year BA/MEng course in Engineering. He was awarded a silver medal in the "Future Scientist Award Program" by the Chinese Academy of Sciences and Academy of Engineering. His research interests span Natural Language Processing, Hyperspectral Image Processing, and Affective Computing.
| [
"https://github.com/taylorlu/"
] |
[
"Controlling excitation avalanches in driven Rydberg gases",
"Controlling excitation avalanches in driven Rydberg gases"
] | [
"Kai Klocke \nDepartment of Physics\nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n",
"Michael Buchhold \nDepartment of Physics\nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n"
] | [
"Department of Physics\nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCAUSA",
"Department of Physics\nInstitute for Quantum Information and Matter\nCalifornia Institute of Technology\n91125PasadenaCAUSA"
] | [] | Recent experiments with strongly interacting, driven Rydberg ensembles have introduced a promising setup for the study of self-organized criticality (SOC) in cold atom systems. Based on this setup, we theoretically propose a control mechanism for the paradigmatic avalanche dynamics of SOC in the form of a time-dependent drive amplitude. This gives access to a variety of avalanche-dominated, self-organization scenarios, prominently including self-organized criticality, as well as sub- and supercritical dynamics. We analyze the dependence of the dynamics on external scales and spatial dimensionality. This demonstrates the potential of driven Rydberg systems as a playground for the exploration of an extended SOC phenomenology and their relation to other common scenarios of SOC, such as in neural networks and on graphs. | 10.1103/physreva.99.053616 | [
"https://arxiv.org/pdf/1903.12181v1.pdf"
] | 88,522,525 | 1903.12181 | d99804fb1a631cc0a4743d17378c234661977294 |
Controlling excitation avalanches in driven Rydberg gases
Kai Klocke
Department of Physics
Institute for Quantum Information and Matter
California Institute of Technology
91125 Pasadena, CA, USA
Michael Buchhold
Department of Physics
Institute for Quantum Information and Matter
California Institute of Technology
91125 Pasadena, CA, USA
Controlling excitation avalanches in driven Rydberg gases
(Dated: April 1, 2019)
Recent experiments with strongly interacting, driven Rydberg ensembles have introduced a promising setup for the study of self-organized criticality (SOC) in cold atom systems. Based on this setup, we theoretically propose a control mechanism for the paradigmatic avalanche dynamics of SOC in the form of a time-dependent drive amplitude. This gives access to a variety of avalanche-dominated, self-organization scenarios, prominently including self-organized criticality, as well as sub- and supercritical dynamics. We analyze the dependence of the dynamics on external scales and spatial dimensionality. This demonstrates the potential of driven Rydberg systems as a playground for the exploration of an extended SOC phenomenology and their relation to other common scenarios of SOC, such as in neural networks and on graphs.
I. INTRODUCTION
Away from thermal equilibrium and in the absence of detailed balance, (quasi-)stationary states emerge from ordering principles different from the equipartition of energy. Outstanding amongst such out-of-equilibrium ordering mechanisms is self-organized criticality. Introduced in the seminal paper of Bak, Tang and Wiesenfeld [1, 2] to explain the emergence of flicker noise in electrical circuits, SOC has since been observed in a variety of diverse, mainly large-scale systems, ranging from earthquakes [3-5], forest fires [6-9] and solar flares [10, 11] to vortex dynamics in superconductors [12, 13] and turbulence [14]. Only recently, SOC was recognized as a possible mechanism to establish optimal conditions for information spreading [15-20].
The phenomenon of SOC can be described in simple terms: by balancing dissipation and external drive, a many-body system is attracted, i.e., it self-organizes, towards a state with scale-invariant correlations [21-23]. In thermal equilibrium, scale invariance is associated with dynamics at a critical point signaling a continuous phase transition [24]. Compared to a fine-tuned critical point, scale invariance due to SOC is believed to occur in an extended parameter regime, commonly enabled by a separation of time scales between drive and dissipation [22, 23, 25]. While this makes SOC robust to changes in the external conditions, the interplay of interactions, drive and dissipation obscures its origin, and only few microscopic models are found in the literature.
Manifestations of SOC in nature are mostly approached via phenomenological models [8, 26], either because the microscopic description is too complex or because the elementary building blocks are unknown [16, 27, 28]. This makes both the microscopic understanding and, even more, the controllability of SOC extremely challenging [22, 23], especially for effective models [29, 30]. Consequently, a setup that explores an extended SOC phenomenology on the one hand and features well-understood, highly controllable basic elements on the other represents a promising tool to study aspects of SOC in generic nonequilibrium settings.
Only recently, a promising candidate has been introduced in an experiment with a gas of driven Rydberg atoms [31]; above a certain driving threshold, the atomic pseudo-spins self-organize towards a transient, scale-invariant state, featuring common signatures of SOC [23, 28]. Our work builds on this basic setting for SOC in cold Rydberg gases.

Figure 1. Driven Rydberg self-organized criticality. a) Three-atom level scheme: transitions from the ground state $|g\rangle$ to the Rydberg state $|r\rangle$ are only resonant inside the facilitation radius $r_{fac}$ of a second Rydberg atom. b) Excitation avalanche triggered by a single Rydberg atom (red dots). After a period $t \sim \kappa_t^{-1}$, Rydberg atoms facilitate the excitation of ground-state atoms (blue dots) inside the facilitation radius (red line), creating avalanches of length $s$ before decaying into the ground or removed state (white dot) with rates $\Gamma, \gamma_{\downarrow 0}$. c) Real-space dynamics of the Rydberg density $\rho_{x,t}$ on a grid of $N = 10^3$ sites. Depending on the pump strength growth rate $\lambda$, avalanches form a periodic (subcritical) structure, a fractal (SOC) structure or a random noise pattern (supercritical). d) Distribution of avalanche sizes $s$ (logarithmic scale) in the SOC regime (dots and triangles) and at the transition to the supercritical regime (squares). e) Zoom-in on the SOC pattern and illustration of an avalanche length $s$.
We propose the implementation of a control mechanism for excitation avalanches in driven Rydberg ensembles and explore the corresponding many-body dynamics. We show how this gives access to an extended SOC phenomenology, including subcritical and supercritical avalanche dynamics. By adjusting the proposed mechanism to common control parameters such as the laser intensity and the detuning, one can access the paradigms of SOC: a scale invariant avalanche distribution [9] and a 1/ω-noise pattern [1,2].
II. FACILITATED RYDBERG DYNAMICS
We consider the many-body dynamics in a gas of interacting Rydberg atoms [32][33][34][35][36][37], which move freely inside a trap. Each Rydberg atom is modeled as an effectively three level system, consisting of a non-interacting ground state |g⟩, a highly polarizable Rydberg state |r⟩ with large principal quantum number n ≫ 1 [38][39][40] and an auxiliary, removed state |0⟩. The latter is a container state representing a set of internal states that can be reached via dissipative decay but are otherwise decoupled from the |g⟩–|r⟩ sector [31,35]. Each atom obtains a label l and a set of operators σ^{ab}_l ≡ |a⟩⟨b|_l acting on its internal states.
The ensemble is subject to a laser, coherently driving the |g⟩–|r⟩ transition with a Rabi frequency Ω and detuning from resonance Δ. The highly excited Rydberg state is subject to dissipation originating from dephasing as well as spontaneous decay into both the ground state |g⟩ and the removed state manifold |0⟩, with effective rates labeled by γ_de, γ_{↓g}, γ_{↓0} [31]. Due to their polarizability, two atoms in the Rydberg state, labeled l, l', experience a mutual van der Waals repulsion. Its potential form is V^I_{l,l'} = C_6 |𝐫_l − 𝐫_{l'}|^{−6}, where C_6 is the van der Waals coefficient and 𝐫_l, 𝐫_{l'} are the atomic positions [41][42].
As a simple but crucial innovation we consider here a time-dependent Rabi frequency

$$\Omega \;\to\; \Omega_t = \Omega_0\Big(1 + \frac{\lambda t}{2\, n_{c,0}}\Big), \qquad (1)$$

with an initial frequency Ω_0, a dimensionless density n_{c,0}, which we define later, and a ramp parameter λ ≪ n_{c,0} Ω_0. This corresponds to a slow, linear increase of the pump laser intensity. It gives rise to a continuously increasing excitation probability for the |g⟩ ↔ |r⟩ transition, counteracting the decay into the removed state and balancing the system at a fixed, non-zero density of excited states for transient times t < n_{c,0}/λ. The microscopic dynamics of the d-dimensional gas are given by the master equation (ℏ = 1)
$$\partial_t \hat\rho = i[\hat\rho, H] + \sum_l \mathcal{L}_l\, \hat\rho \qquad (2)$$
for the ensemble density matrix ρ̂. The coherent atom-light and atom-atom interactions are captured by the Hamiltonian
$$H = \sum_{l} \Big[ \sum_{l' \neq l} \frac{C_6}{2\,|\vec r_l - \vec r_{l'}|^{6}}\, \sigma^{rr}_{l'}\,\sigma^{rr}_{l} - \Delta\, \sigma^{rr}_{l} + \frac{\Omega_t}{2}\big(\sigma^{rg}_{l} + \sigma^{gr}_{l}\big) \Big], \qquad (3)$$

while dissipative processes are described by the Liouvillian

$$\mathcal{L}_l\, \hat\rho = \gamma_{\mathrm{de}}\, \sigma^{rr}_l \hat\rho\, \sigma^{rr}_l + \gamma_{\downarrow g}\, \sigma^{gr}_l \hat\rho\, \sigma^{rg}_l + \gamma_{\downarrow 0}\, \sigma^{0r}_l \hat\rho\, \sigma^{r0}_l - \frac{\Gamma}{2}\{\sigma^{rr}_l, \hat\rho\}, \qquad (4)$$
where Γ = γ_de + γ_{↓g} + γ_{↓0} is the sum of all dissipative rates.
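To make the single-atom content of Eqs. (2)–(4) concrete, the sketch below integrates the three-level master equation for one non-interacting atom (the C_6 term dropped) in plain NumPy and compares the quasi-steady Rydberg population with the off-resonant scale τ_t/Γ introduced below. It is an order-of-magnitude check only, and all parameter values are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative rates in units of gamma_de (assumptions, not the paper's values)
gamma_de, gamma_g, gamma_0 = 1.0, 0.1, 0.05
Gamma = gamma_de + gamma_g + gamma_0          # total dissipative rate of Eq. (4)
Omega, Delta = 0.2, 20.0                      # weak drive, large detuning

# Basis ordering: |g> = 0, |r> = 1, |0> = 2
def op(a, b):
    """|a><b| as a 3x3 matrix."""
    m = np.zeros((3, 3), dtype=complex)
    m[a, b] = 1.0
    return m

H = -Delta * op(1, 1) + 0.5 * Omega * (op(1, 0) + op(0, 1))

# Jump operators of Eq. (4): dephasing, decay to |g> and decay to |0>
jumps = [np.sqrt(gamma_de) * op(1, 1),
         np.sqrt(gamma_g) * op(0, 1),
         np.sqrt(gamma_0) * op(2, 1)]

def lindblad_rhs(rho):
    drho = -1j * (H @ rho - rho @ H)
    for L in jumps:
        drho += L @ rho @ L.conj().T - 0.5 * (L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return drho

# Plain RK4 integration starting from the ground state
rho, dt, steps = op(0, 0), 0.01, 20_000
for _ in range(steps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + 0.5 * dt * k1)
    k3 = lindblad_rhs(rho + 0.5 * dt * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

tau = Gamma * Omega**2 / (Gamma**2 + 4 * Delta**2)   # off-resonant rate tau_t
print("P_r =", rho[1, 1].real, " vs  tau/Gamma ~", tau / Gamma)  # same order
```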
In typical experiments [31,35,43], the product of the atomic mass M and temperature T is 'large' compared to the density n_0, causing a thermal de Broglie wavelength λ_th = h/√(2πMk_BT) much smaller than the mean free path d_a ∼ n_0^{−1/d}. The motional degrees of freedom 𝐫_l thus cannot maintain coherence between two subsequent scattering events and are treated as classical variables undergoing thermal motion, see below and Ref. [31].
We focus on a very large detuning Δ/Γ ∼ O(10²–10³) [44][45][46], leading to strongly suppressed, off-resonant single particle transitions |g⟩ ↔ |r⟩ at a rate τ_t ≡ ΓΩ_t²/(Γ² + 4Δ²). Due to interactions, an atom in the Rydberg state, however, creates a facilitation shell of radius r_fac = (C_6/Δ)^{1/6} and width δr_fac ∼ r_fac Γ/Δ. Inside the shell, the Rydberg repulsion compensates the detuning in Eq. (3), yielding an effective resonant excitation rate κ_t ≈ Ω_t²/Γ with κ_t ≫ τ_t [47][48][49][50][51]. In the limit of strong dephasing, the atom coherences decay rapidly in time and the relevant dynamical degrees of freedom are the Rydberg state density and the density of 'active' states, i.e. of atoms in the Rydberg and in the ground state. Their coarse grained values, averaged over a 'facilitation cluster' of volume V_fac = π^{d/2} r_fac^d / Γ_E(d/2 + 1), are

$$\rho_{x,t} \equiv \Big\langle \sum_{l,\, |\vec r_l - \vec x| \le r_{\mathrm{fac}}} \sigma^{rr}_l \Big\rangle(t), \qquad (5)$$

$$n_{x,t} \equiv \Big\langle \sum_{l\ \mathrm{s.t.}\ |\vec r_l - \vec x| \le r_{\mathrm{fac}}} \sigma^{rr}_l + \sigma^{gg}_l \Big\rangle(t). \qquad (6)$$
The evolution equations for ρ_{x,t} and n_{x,t} are obtained by adiabatically eliminating the atom coherences from the Heisenberg equations of motion [43,51-55], yielding the Langevin equation (see [31,53])

$$\partial_t \rho_{x,t} = D\nabla^2 \rho_{x,t} + (\kappa_t \rho_{x,t} + \tau_t)(n_{x,t} - 2\rho_{x,t}) - \Gamma \rho_{x,t} + \xi_{x,t}. \qquad (7)$$
Equation (7) describes Rabi oscillations inside each cluster with a total rate κ_t ρ_{x,t} + τ_t. It combines the off-resonant oscillation rate τ_t and the resonant, facilitated rate κ_t ρ_{x,t}, which is proportional to the number of facilitating atoms ρ_{x,t}. It prefers a semi-excited state ρ_{x,t} = n_{x,t}/2 and thus competes with the linear decay channel ∼ Γ, which prefers the ground state ρ_{x,t} = 0.
The spreading of excitations from cluster to cluster is described by the diffusion term ∼ D∇²ρ_{x,t}, with D = κ_t S being proportional to the facilitation rate and the surface S of the clusters [31]. In a dissipative environment, each cluster experiences fluctuations of ρ_{x,t}, which are proportional to the oscillation rate [43,51-53] and covered by the Markovian noise kernel

$$\langle \xi_{x,t}\, \xi_{y,t'} \rangle = \delta(\vec x - \vec y)\, \delta(t - t')\, (\tau + \kappa \rho_{x,t}). \qquad (8)$$
Before turning to the evolution of n_{x,t}, we discuss the mean-field solution of Eq. (7) in the limit τ_t ≪ Γ, κ_t n_{x,t}, by setting D = ξ_{x,t} = 0. Defining a critical density n_{c,t} ≡ Γ/κ_t, one distinguishes two different regimes: an inactive regime for n_{x,t} < n_c, where the Rydberg density is suppressed and evolves towards ρ_{x,t} → (τ/Γ) n_{x,t}, and an active regime for n_{x,t} > n_c, where it evolves towards ρ_{x,t} → ½(n_{x,t} − n_c). The crossover between the two regimes at n_{x,t} = n_{c,t} [56] features a maximal correlation length of ξ_∥ = D/√(8Γτ_t). It turns into a sharp, second order phase transition in the limit τ_t → 0 [51-53, 57, 58].
The evolution of the density n_{x,t} is governed by thermal motion of the atoms, the decay into the removed state and density fluctuations. It is summarized in the Langevin equation [31]

$$\partial_t n_{x,t} = D_n \nabla^2 n_{x,t} - \gamma_{\downarrow 0}\, \rho_{x,t} + \eta_{x,t}, \qquad (9)$$

with a Markovian noise kernel ⟨η_{x,t} η_{y,t'}⟩ = δ(𝐱 − 𝐲)δ(t − t')γ_{↓0}ρ_{x,t} and a thermal diffusion constant D_n. The latter has minor impact on the dynamics but reduces geometrical constraints due to rare, inhomogeneous configurations of n_{x,t} [59].
III. DERIVATION OF THE LANGEVIN EQUATIONS
In this section, we present the detailed derivation of the Langevin equations (7), (9) from the master equation (2). Readers interested in the effective dynamics may continue with their discussion in the following section.
Due to the exponential growth of the Hilbert space, the master equation (2) becomes too complex to solve for realistic, macroscopic system sizes. In order to reduce the complexity, the dynamics are projected onto the relevant long-wavelength degrees of freedom, i.e. the Rydberg density ρ and the active density n as defined in Eqs. (5), (6). This procedure has been discussed for the case λ = γ_{↓0} = 0 in Refs. [51,53] and for the case λ = 0, γ_{↓0} ≠ 0 in Ref. [31].
For strong dephasing γ_de ≫ Ω_t the decay of the atomic coherences σ^{rg}_l, σ^{r0}_l towards their steady state value is the fastest process in the quantum master equation. They can be adiabatically eliminated by formally solving the steady state equation for the average (α = g, 0)

$$0 \overset{!}{=} \partial_t \langle \sigma^{r\alpha}_l \rangle = \mathrm{Tr}\Big[ \sigma^{r\alpha}_l \Big( i[\hat\rho, H] + \sum_{l'} \mathcal{L}_{l'}\hat\rho \Big) \Big]. \qquad (10)$$
Inserting the solution of Eq. (10) and the completeness relation σ^{rr}_l + σ^{gg}_l + σ^{00}_l = 1 into the full Heisenberg-Langevin equations for σ^{rr}_l, σ^{gg}_l yields

$$\partial_t \sigma^{gg}_l = -\partial_t \sigma^{rr}_l - \gamma_{\downarrow 0}\, \sigma^{rr}_l + \xi^g_l, \qquad (11)$$

$$\partial_t \sigma^{rr}_l = \frac{\Omega_t^2\, \Gamma\, (\sigma^{gg}_l - \sigma^{rr}_l)}{\Gamma^2 + 4\big(\Delta - \sum_{l' \neq l} V^I_{l,l'}\, \sigma^{rr}_{l'}\big)^2} - \Gamma \sigma^{rr}_l + \xi^r_l. \qquad (12)$$
The Markovian noise operators ξ^{r,g}_l are added in order to enforce the fluctuation-dissipation relation of the driven dissipative master equation. They are local in space and time and fulfill the generalized Einstein relation

$$\langle (\xi^r_l)^2 \rangle = \partial_t \langle (\sigma^{rr}_l)^2 \rangle - 2 \langle \sigma^{rr}_l\, \partial_t \sigma^{rr}_l \rangle. \qquad (13)$$
Since the operators σ^{rr}_l, σ^{gg}_l are projection operators with eigenvalues 0, 1, any function f of, say, σ^{rr}_l can be expressed as f(σ^{rr}_l) = f(0) + (f(1) − f(0))σ^{rr}_l. Extending this to the whole set {σ^{rr}_l, σ^{gg}_l}, one rewrites

$$\frac{\Omega_t^2 \Gamma}{\Gamma^2 + 4\big(\Delta - \sum_{l'} V^I_{l,l'}\sigma^{rr}_{l'}\big)^2} = \underbrace{\frac{\Omega_t^2 \Gamma}{\Gamma^2 + 4\Delta^2}}_{=\tau_t} + \sum_{l' \neq l} \Big[ \frac{\Omega_t^2 \Gamma}{\Gamma^2 + 4(\Delta - V^I_{l,l'})^2} - \tau_t \Big] \sigma^{rr}_{l'} + O(\sigma^{rr}_{l'} \sigma^{rr}_{l''}). \qquad (14)$$
This expression is independent of the form of the interaction potential V^I_{l,l'} and exact up to second order powers in the projection operators. It separates off-resonant single particle transitions with rate τ from facilitated, two-particle transitions. For 2|Δ − V^I_{l,l'}| < Γ, the facilitation rate deviates significantly from zero. Depending on the interaction potential, this defines the facilitation radius r_fac, i.e. for a typical van der Waals potential V_{l,l'} = C_6/r^6 one finds r_fac ≡ (C_6/Δ)^{1/6} and the facilitation shell |𝐫_l − 𝐫_{l'}| ∈ [r_fac − Δr_fac, r_fac + Δr_fac] with Δr_fac = r_fac Γ/(12Δ). We introduce a real space projector Π_{ll'} with Π_{ll'} = 1 if |𝐫_l − 𝐫_{l'}| is inside the facilitation shell and zero otherwise. This yields
$$\partial_t \sigma^{rr}_l = \Big( \tau + \frac{\Omega_t^2}{\Gamma} \sum_{l' \neq l} \Pi_{ll'}\, \sigma^{rr}_{l'} \Big) (\sigma^{gg}_l - \sigma^{rr}_l) - \Gamma \sigma^{rr}_l + \xi^r_l. \qquad (15)$$
This provides a good approximation for the facilitation rate when the density of excitations is small. For a number of m ≥ 1 excited states inside a single shell, the exact solution shows a growth of the shell radius as r_fac → m^{1/6} r_fac (in d = 3 dimensions). The facilitation rate for m > 1 then grows ∝ √m, compared to the ∝ m prediction of Eq. (15). Bearing in mind the weak off-resonant excitation rate, however, configurations with m ≥ 1 excitations are suppressed by a factor o(10^{−4}). The equations of motion for ρ_{x,t} = Σ_l Θ(r_fac − |𝐱 − 𝐫_l|)⟨σ^{rr}_l⟩ and n_{x,t} = Σ_l Θ(r_fac − |𝐱 − 𝐫_l|)⟨σ^{rr}_l + σ^{gg}_l⟩ yield

$$\partial_t \rho_{x,t} = \sum_l \Big[ \Theta(r_{\mathrm{fac}} - |\vec x - \vec r_l|)\, \partial_t \langle \sigma^{rr}_l \rangle + \langle \sigma^{rr}_l \rangle\, \partial_t \vec r_l \cdot \nabla_{\vec r_l} \Theta(r_{\mathrm{fac}} - |\vec x - \vec r_l|) \Big], \qquad (16)$$

and similarly for n_{x,t}. For a homogeneous density, the drift term ∼ ∂_t𝐫_l can be approximated to be zero (see below for an inhomogeneous setting). This yields
$$\partial_t n_{x,t} = -\gamma_{\downarrow 0}\, \rho_{x,t} + \eta_{x,t}, \qquad \partial_t \rho_{x,t} = \Big( \tau + \frac{\Omega_t^2}{\Gamma}\, F_x(\rho_{z,t}) \Big)(n_{x,t} - 2\rho_{x,t}) - \Gamma \rho_{x,t} + \xi_{x,t}, \qquad (17)$$
where F_x(ρ_{z,t}) is some linear, quasi-local functional of ρ_{z,t}. F_x(ρ_{z,t}) has support only around |𝐱 − 𝐳| = r_fac, enabling a Taylor expansion of the density. Since the Rydberg facilitation mechanism is isotropic in space, the expansion contains only even powers of derivatives. It reads as

$$F_x(\rho_{z,t}) = F_x(1)\, \rho_{x,t} + \frac{F_x(z^2)}{2}\, \nabla^2 \rho_{x,t} + O(\nabla^4 \rho_{x,t}). \qquad (18)$$
The noise kernel

$$\langle \xi_{x,t}\, \xi_{y,s} \rangle = \sum_{l,m} \Theta(r_{\mathrm{fac}} - |\vec x - \vec r_l|)\, \Theta(r_{\mathrm{fac}} - |\vec y - \vec r_m|)\, \langle \xi_{l,t}\, \xi_{m,s} \rangle = \delta(s - t)\, \delta(|\vec x - \vec y|)\, \big( \kappa_t \rho_{x,t} + \tau_t \big)$$

remains Markovian and δ-correlated on length scales of the facilitation radius.
Making a conservative estimate for the temperature of the motional degrees of freedom, T = O(10 µK), and the atomic mass, M = O(20 u) [31], one finds a thermal de Broglie wavelength λ_T = h/√(2πMk_BT) ≈ 200 nm. For an atomic density of n_0 ≈ 10^{11} cm^{−3} the mean free path in three dimensions amounts to d_a = (6/(πn_0))^{1/3} ∼ 2 µm, which is at least one order of magnitude larger than λ_T. Consequently, coherence in the motional degrees of freedom is lost between two subsequent scattering events and they can be treated classically. In the absence of an external trapping potential, the particles perform Brownian motion, i.e. thermal diffusion in a dilute van der Waals gas. This allows us to treat the atomic positions as slowly diffusing and uniformly distributed in space.
Including Brownian motion with diffusion constant D_n, the final form of the Langevin equations is

$$\partial_t n_{x,t} = D_n \nabla^2 n_{x,t} - \gamma_{\downarrow 0}\, \rho_{x,t} + \eta_{x,t},$$
$$\partial_t \rho_{x,t} = D \nabla^2 \rho_{x,t} + (\kappa_t \rho_{x,t} + \tau_t)(n_{x,t} - 2\rho_{x,t}) - \Gamma \rho_{x,t} + \xi_{x,t}. \qquad (19)$$

Here κ_t = F_x(1)Ω_t²/Γ is the facilitation rate. The diffusion constant D = F_x(z²)(Ω_t²/2Γ)(n_{x,t} − 2ρ_{x,t}) + D_n ≈ F_x(z²)Ω_t²/(2κ_t) is dominated by the facilitated spreading, which is proportional to the average density, i.e. n_{x,t} − 2ρ_{x,t} ≈ Γ/κ_t. This makes D, apart from local density fluctuations, time independent.
IV. SELF-ORGANIZED CRITICALITY AND AVALANCHE DYNAMICS
In order to observe self-organization towards a long-range correlated state, the dynamics should push any initial density n_{x,0} towards n_{x,t} → n_{c,t} and thereby maximize the correlation length ξ_∥. This is achieved by the combination of loss into the auxiliary state ∼ γ_{↓0} and the continuously growing pump strength ∼ λ.
Their interplay is best understood by expanding the critical density n_{c,t} up to first order in λt, yielding n_{c,t} = n_{c,0} − λt, which is valid for λt < n_{c,0}. For active densities n_{x,t} ≈ n_{c,t}, the Rydberg state ρ_{x,t} experiences a large correlation length, leading to long-lived and far-spreading excitations, i.e. the formation of avalanches. Once an avalanche has formed, parts of it decay into the removed state, leading to a decrease of n_{x,t}. It reaches a stationary point when the decays of both n_{x,t} and n_{c,t} compensate each other, i.e. for λ = γ_{↓0}ρ_{x,t}.
On times t < n_{c,0}/λ, this is the only homogeneous solution of Eqs. (7), (9), with

$$\rho_{x,t} = \frac{\lambda}{\gamma_{\downarrow 0}} \quad \text{and} \quad n_{x,t} = n_{c,t} + \frac{2\lambda}{\gamma_{\downarrow 0}} + \frac{\gamma_{\downarrow 0}\, \tau_t}{\kappa \lambda\, n_{c,t}}. \qquad (20)$$

It is reached after a time t ≈ max{κ_t^{−1}, γ_{↓0}^{−1}} and it survives up to times of order t ≈ n_{c,0}/λ. On larger times, effects of order λ²t² set in and the active density depletes to zero, i.e. ρ_{x,t}, n_{x,t} → 0.
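The approach to this quasi-stationary state can be reproduced with a homogeneous (D = ξ = 0) integration of Eqs. (7), (9) under the ramp of Eq. (1); a minimal sketch, with illustrative parameter values and a plain Euler step (not the scheme of App. A):

```python
import numpy as np

Gamma = kappa0 = 1.0                        # rates in units of Gamma (illustrative)
tau0, gamma0, lam = 1e-7, 1e-2, 1e-3        # tau_t << lam << gamma_{down,0}
n_c0 = Gamma / kappa0

n, rho, t, dt = 1.5 * n_c0, 0.0, 0.0, 1e-2
while t < 0.3 * n_c0 / lam:                 # stay in the transient window t < n_c0/lam
    ramp = (1 + lam * t / (2 * n_c0)) ** 2  # Omega_t^2 / Omega_0^2 from Eq. (1)
    kappa_t, tau_t = kappa0 * ramp, tau0 * ramp
    drho = (kappa_t * rho + tau_t) * (n - 2 * rho) - Gamma * rho
    rho, n, t = rho + dt * drho, n - dt * gamma0 * rho, t + dt

n_ct = n_c0 / (1 + lam * t / (2 * n_c0)) ** 2      # n_{c,t} = Gamma / kappa_t
print(f"rho = {rho:.4f}   vs   lam/gamma0 = {lam/gamma0:.4f}")
print(f"n - n_c,t = {n - n_ct:.4f}   vs   2*lam/gamma0 = {2*lam/gamma0:.4f}")
```

The excitation density settles at ρ ≈ λ/γ_{↓0} while n tracks the slowly decreasing critical density, as predicted by Eq. (20).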
Imposing a double separation of time scales on the dynamics via i) τ_t/λ → 0⁺ and ii) λ/γ_{↓0} → 0⁺, the above solution predicts self-organization towards a long-lived and long-range correlated state with ρ_{x,t} = 0⁺, n_{x,t} = n_{c,t} + 0⁺ and ξ_∥ → ∞. We thus call i) + ii) the conditions for SOC in our driven Rydberg setup. The degree up to which both conditions are met, i.e. SOC is realized, can be adjusted experimentally via the Rabi frequency Ω_t, the detuning Δ or the decay γ_{↓0}. Such a double separation of scales is a common requirement for realizations of SOC without energy conservation [60,61] [62]. Since both our Hamiltonian and the Lindblad dynamics do not conserve the energy, the conditions i) + ii) can be seen as the present manifestation of this phenomenon. One may now argue that such strict requirements do not really differ from parameter fine tuning in conventional criticality. We, however, show that the dynamics of Eqs. (7) and (9) display SOC even for very weak realizations of i) and ii), e.g. for τ_t/λ ∼ 10⁻⁴ and λ/γ_{↓0} ∼ 0.1, making it accessible to experiments. We emphasize that for t < n_{c,0}/λ the increase of Ω_t with λ is identical to loading ground state atoms with rate λ into the system. The excitation avalanches of ρ_{x,t} depend only on the difference n_{x,t} − n_{c,t} = n_{x,t} + λt − n_{c,0} and are insensitive towards n_{c,t} being decreased or n_{x,t} being increased with rate λ. Experimentally, however, a controlled repopulation with rate λ is often less feasible than adjusting the drive strength.
In order to confirm the prediction of emergent SOC from the homogeneous treatment above and to observe its paradigmatic avalanche dynamics, we simulate the full time evolution of the Rydberg density via Eqs. (7), (9) in spatial dimensions 1 ≤ d ≤ 3. The equations are integrated on a d-dimensional grid of linear lattice spacing Δx and we use dimensionless rates, expressed in units of Δx²/D. The integration scheme is a derivative of the splitting scheme for stochastic differential equations with multiplicative noise [63], adapted to the noise kernel of Eq. (7), see App. A.
For the simulations we set κ_0 = Γ = Δx²/(2D), τ_0 = 10⁻⁷Γ and γ_{↓0} = 10⁻²Γ, which is consistent with recent experiments [31,35,55]. Different degrees of scale separation are realized by varying λ within the interval λ ∈ [0, 0.2Γ]. We point out that, as for our choice of parameters, any realistic experiment will realize the conditions i) and ii) only on an approximate level.
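For orientation, the sketch below integrates the 1D version of Eqs. (7), (9) with a plain Euler–Maruyama step, loosely following the parameter values above. This is deliberately not the operator-splitting scheme of [63]/App. A that the paper uses; naive schemes can misbehave near the absorbing state ρ = 0, which we crudely handle by clipping:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rates in lattice units (dx = 1), loosely following the values quoted above
N, D, Gamma = 1000, 0.5, 1.0
kappa0, tau0, gamma0, lam = 1.0, 1e-7, 1e-2, 2e-3
n_c0, dt = Gamma / kappa0, 0.05

n = np.full(N, 1.05 * n_c0)         # start slightly above the critical density
rho = np.zeros(N)
rho[N // 2] = 0.5                    # seed one excitation cluster

def lap(f):                          # periodic 1D Laplacian
    return np.roll(f, 1) - 2 * f + np.roll(f, -1)

records = []
for step in range(int(0.3 * n_c0 / lam / dt)):
    t = step * dt
    ramp = (1 + lam * t / (2 * n_c0)) ** 2     # drive ramp of Eq. (1)
    kappa_t, tau_t = kappa0 * ramp, tau0 * ramp
    # multiplicative noise with variance (tau + kappa*rho)*dt, cf. Eq. (8)
    noise = rng.standard_normal(N) * np.sqrt((tau_t + kappa_t * rho) * dt)
    rho += dt * (D * lap(rho) + (kappa_t * rho + tau_t) * (n - 2 * rho)
                 - Gamma * rho) + noise
    np.clip(rho, 0.0, None, out=rho)           # crude absorbing-state handling
    n += dt * (D * lap(n) - gamma0 * rho)
    records.append(rho.copy())

density = np.array(records)          # space-time activity pattern, cf. Fig. 1c
print("mean activity:", density.mean(), " lam/gamma0 =", lam / gamma0)
```

Plotting `density` as an image gives a qualitative impression of the periodic, fractal or random avalanche patterns, depending on λ.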
Our simulations reveal an extended dynamical regime, which is governed by the formation, propagation and decay of avalanches containing a significant number of excitations, ρ_{x,t} ≫ λ/γ_{↓0} (see Fig. 1c). Parametrically it coincides well with the criterion τ_t < λ < γ_{↓0}, matching i) and ii). In general, the distribution P_ava(s) of avalanche sizes s varies with λ. In the vicinity of a critical value λ ≈ λ_soc it, however, approaches a scale invariant form P_ava(s) ∼ s^{−α} with an exponent α > 0.
In d = 1, we obtain α = 1.44 ± 0.1, which is consistent with results obtained from other SOC models, e.g. the forest fire model [64] or activity patterns in the cortex [65], and is associated with the underlying directed percolation universality class [17]. Its statistical error results from our sampling procedure, which dynamically counts avalanches from a finite number of patches of 10 4 × 10 4 sites (time and space). For d > 1, we predict α ≈ 1.5, however, with larger errors due to our avalanche counting scheme.
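Once a list of avalanche sizes has been collected, the exponent α can be estimated; the sketch below uses the standard continuous maximum-likelihood estimator (Clauset-style) rather than the dynamical patch-counting procedure described above, and verifies it on synthetic power-law data:

```python
import numpy as np

def powerlaw_exponent(sizes, s_min=1.0):
    """MLE of alpha for P(s) ~ s^-alpha (continuous approximation),
    restricted to sizes >= s_min; returns (alpha, statistical error)."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    alpha = 1.0 + len(s) / np.sum(np.log(s / s_min))
    err = (alpha - 1.0) / np.sqrt(len(s))
    return alpha, err

# Sanity check on synthetic data with alpha = 1.44 (inverse-CDF sampling)
rng = np.random.default_rng(1)
u = rng.random(100_000)
samples = (1 - u) ** (-1 / 0.44)     # P(s) ~ s^-1.44 for s >= 1
print(powerlaw_exponent(samples))    # should recover ~1.44
```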
The scale invariant avalanche distribution is the hallmark of SOC [21][22][23]. It is accompanied by fractal spatio-temporal Rydberg excitation patterns (see Fig. 1c) and paradigmatic 1/ω fluctuations [1,2] in the Rydberg density ρ_{x,ω} ≡ ∫ρ_{x,t} e^{iωt} dt ∼ ω^{−β}, with β ≈ 1 (see Fig. 2d). This clearly demonstrates a dynamical regime with SOC in the driven Rydberg gas. Its location at λ ≈ λ_soc can be understood as a trade-off in optimizing i) and ii) simultaneously for fixed values of τ_t, γ_{↓0}. For dimensions d > 1 it approaches the estimate λ_soc ∼ √(τ_t γ_{↓0}). Moving λ away from λ_soc, P_ava(s) remains scale invariant in a finite range |λ − λ_soc| < η. We found η ≈ 0.2λ_soc for system sizes of N = 10⁶ lattice sites and our set of parameters. For larger deviations |λ − λ_soc| > η, the algebraic form of P_ava(s) persists only for avalanche sizes s < s_∥(λ), i.e. below a λ-dependent cutoff scale s_∥(λ). Estimating the cutoff scale from the mean-field correlation length, i.e. s_∥(λ) = ξ_∥, which is justified far away from the SOC regime, one finds s_∥(λ) ∼ Dγ_{↓0}/(2κ_t λ) for λ ≫ τ_t and s_∥(λ) ∼ Dλ/(κ_t τ_t) for λ ≪ γ_{↓0}. The behavior on distances above s_∥ in the two regimes λ ≶ λ_soc manifestly differs. For supercritical values λ ≫ λ_soc, the critical density n_{c,t} decreases rapidly, leading to a large avalanche triggering rate and a high density of avalanches. On sizes s > s_∥(λ) different avalanches start to overlap, which makes them indistinguishable and generates a random excitation pattern (displayed in Fig. 1c), revealing the underlying avalanches only for s < s_∥(λ) (squares in Fig. 1d).
The slow decrease of n_{c,t} in the subcritical regime λ ≪ λ_soc makes two subsequently following avalanches unfavorable and enforces a relative delay. It destroys the scale invariance above s_∥(λ) in favor of periodically triggered avalanches with increasing length s ≫ s_∥(λ). This transforms the fractal real space structure found in the SOC regime into a time-periodic pattern, which is dominated by thermodynamically large excitation avalanches, shown in Fig. 1c. The period between two subsequent avalanches appears to be the time by which n_{c,t} decreases by an integer value, i.e. δt ≈ λ⁻¹.
V. EXPERIMENTAL OBSERVABILITY
While the real space evolution of excitation avalanches is hard to access in experiments, the statistics of excitations, i.e. ρ_{x,t} and n_{x,t}, can be measured via the particle loss rate ∝ γ_{↓0}ρ_{x,t} [31,35]. A robust, time-translational invariant observable is the integrated density

$$R_t \equiv n_0 + \lambda t - \int_0^t \mathrm{d}t'\, \gamma_{\downarrow 0}\, \langle \rho_{x,t'} \rangle_V, \qquad (21)$$

where n_0 is the total initial density and ⟨...⟩_V = (1/V)∫_V d^dx denotes the spatial average over the system volume. Its meaning becomes clear when comparing it with the initial critical density n_{c,0} at times tλ ≪ n_{c,0}, yielding R_t − n_{c,0} = ⟨n_{x,t}⟩_V − n_{c,t}.
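Given a recorded space-time activity pattern, Eq. (21) is a one-line reduction; the helper below is ours (the `density` array refers to the simulation sketch above, and all parameter names are assumptions):

```python
import numpy as np

def integrated_density(rho_xt, n0, lam, dt, gamma0):
    """R_t of Eq. (21) from a space-time activity record rho_xt[t, x]."""
    rho_V = rho_xt.mean(axis=1)                 # spatial average <rho>_V
    loss = np.cumsum(gamma0 * rho_V) * dt       # time integral of the loss rate
    t = np.arange(rho_xt.shape[0]) * dt
    return n0 + lam * t - loss

# Example usage with the `density` array of the simulation sketch above:
# R = integrated_density(density, n0=1.05, lam=2e-3, dt=0.05, gamma0=1e-2)
# print(R.mean(), R.std())   # mean and fluctuations of R_t, cf. Fig. 2c
```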
Both ρ_{x,ω} and R_t display very characteristic features in the three different regimes. For subcritical λ, the real time evolution of R_t shows large, periodic amplitude fluctuations, reflecting individual, periodically triggered, extended avalanches. Instead, both the SOC and the supercritical regime feature much smaller amplitude fluctuations around R_t ≈ n_{c,0} (SOC) or R_t > n_{c,0} (supercritical), as shown in Fig. 2a. In the subcritical (supercritical) regime, ρ_{x,ω} departs from its scale invariant form at SOC and one finds instead suppressed (pronounced) density fluctuations at intermediate frequencies, see Fig. 2b.
Significant information is encoded in the statistics of R_t, especially its mean R̄ ≡ λ∫_0^{λ⁻¹} R_t dt and fluctuations σ²_R ≡ λ∫_0^{λ⁻¹} R_t² dt − R̄², as displayed in Fig. 2c. For subcritical λ, both R̄ and σ_R increase with λ faster than the linear mean-field prediction. At the onset of SOC, however, both R̄ and σ_R experience a sharp drop, manifest in a non-analytic kink in their λ-dependence. While R̄ → n_{c,0} rapidly approaches the critical density, the fluctuations decrease by several orders of magnitude. Upon further increasing λ, R̄ reaches a valley at ≈ n_{c,0} and subsequently increases again into the supercritical regime. σ_R is featureless at the SOC-supercritical transition.
In order to assess the observability of SOC under realistic conditions, where the atomic cloud is confined inside a trap, we expose n_{x,t} to a potential of the form V_trap(𝐱) = V_0 exp(−|𝐱|²/ξ²_trap), e.g. resulting from a Gaussian trapping laser with beam waist ξ_trap [31]. For a mean free path d_a ≪ ξ_trap, the effect of V_trap(𝐱) can be treated within the relaxation time approximation, see App. B. This adds a drift ∼ −𝐯_x · ∇n_{x,t} to the r.h.s. of Eq. (9). Here 𝐯_x = (d_a/√(Mk_BT))∇V_trap is the relaxation velocity. The dynamics following this drift at low temperatures T (V_0 d_a/√(Mk_BT) = 0.7D) is displayed in Fig. 3a. On distances |𝐱| < ξ_trap, avalanches remain well defined and both their fractal real space pattern and the scale invariant statistics are observable below the trap scale, see Fig. 3b.
VI. EFFECT OF THE SPATIAL DIMENSION
Apart from Rydberg atoms, the continuum model in Eq. (7) may also serve as a coarse grained description for activity spreading in sparse networks [17]. In this picture, each Rydberg atom represents a node and the parameters κ, τ, Γ describe its reaction to external stimuli and the decay of information. The density n x,t represents a 'node energy', which is consumed by active nodes with rate γ ↓0 ρ x,t and recharged with rate λ.
Optimal networks are expected to operate close to SOC [15][16][17][18][19][20]. Their natural tuning parameter is the average connectivity z of the nodes, which is adjusted to match external conditions [19,20,[66][67][68]]. Figure 2c confirms that here the dimensionality d acts as a second 'control parameter'. Changing d from d = 1 to d = 2 shifts the scale invariant regime (shaded region) and increases its range. For a given set τ, λ, γ_{↓0}, there may exist an 'optimal' d for the system to display SOC. In Rydberg experiments, d can be controlled by adjusting the trapping geometry. Combined with the tuneability of λ and τ, this offers many possibilities to study self-organized criticality in network-like setups.
VII. CONCLUSION
We propose and study an experimentally feasible mechanism to control excitation avalanches in driven Rydberg setups [31]. On large, transient times, one can observe subcritical, supercritical and self-organized critical avalanche dynamics, depending on the control parameter. Each regime features unique signatures, including a scale invariant avalanche distribution and 1/ω-noise, both paradigmatic signals of SOC. This motivates driven Rydberg ensembles [31] as viable platforms for the study of SOC and of the conditions under which simple dynamical rules, as imposed by the facilitation condition, can establish and maintain self-ordering towards complex dynamical structures.
While the crossover from the SOC to the supercritical regime does not produce a pronounced feature in the integrated density, Fig. 2 reveals a developing non-analyticity in both the integrated density and its fluctuations as τ_t is decreased. It hints towards an underlying critical point. On the one hand, such a critical point might describe the SOC universality class, including avalanche and correlation exponents. On the other hand, it could be a remnant of the directed percolation critical point, which would be reached for λ, τ → 0. In both cases, the investigation of this conjectured critical point and its relation to the SOC universality class seems worthwhile for future work.
Based on the similarity of the corresponding master equations, we conjecture a relation between driven Rydberg gases and self-organizing neural networks. The analogy is strengthened by the frequently observed periodic or random activity patterns in non-optimally operating networks [69,70]. Exploring this connection, especially regarding the role played by scale separation, appears to be a promising direction to connect driven Rydberg systems with the neurosciences.
Figure 2. Experimental observables. a) Time evolution of the integrated density R_t, Eq. (21), in three different regimes (n_{c,0} ≈ 4 for comparison). b) Fourier decomposition ρ_ω of the Rydberg density, same parameters as in a). c) Time averaged mean R̄, standard deviation σ_R and peak value of the integrated density R_t in dimensions d = 1, 2. A sharp drop of R̄, σ_R marks the onset of SOC, i.e. a regime of scale invariant avalanche distributions (colored region). Arrows indicate the values of λ used in the plots a) and b).
Figure 3. Avalanches in a trap (d = 1). a) Real space dynamics and b) distribution of avalanches in a Gaussian trap of width ξ_trap = 10³ lattice sites in the SOC regime (λ = 2.36 × 10⁻³). Both the spatial and the temporal avalanche size follow the same scaling exponent.
ACKNOWLEDGMENTS

We thank G. Refael, S. Diehl and S. Whitlock for valuable comments on the manuscript. K. K. was supported by the J. Weldon Green SURF fellowship and M. B. acknowledges support from the Alexander von Humboldt foundation.

Appendix A: Numerical integration scheme

Numerical integration of Eqs. (7), (9) is performed by an operator-splitting update scheme [63]. At each time step, the evolution is decomposed into a stochastic evolution step and a deterministic step. The former is designed to solve a stochastic differential equation of the form

$$\partial_t \rho = \alpha + \beta \rho + \sigma \sqrt{\rho}\, \eta. \qquad (A1)$$

Here η is a Markovian noise kernel with mean zero and unit variance. For small γ_{↓0}, κ, τ, we may approximate α and β to be constant over each time step. The corresponding Fokker-Planck equation has the exact solution

$$P(\rho, \delta t) = \lambda\, e^{-\lambda(\rho_0 e^{\beta \delta t} + \rho)} \left( \frac{\rho}{\rho_0 e^{\beta \delta t}} \right)^{\mu/2} I_\mu\Big( 2\lambda \sqrt{\rho_0\, \rho\, e^{\beta \delta t}} \Big), \qquad (A2)$$

where we set ρ ≡ ρ_{x,t+δt} and ρ_0 ≡ ρ_{x,t}, as well as λ = 2β/(σ²(e^{βδt} − 1)) and µ = 2α/σ² − 1, and I_µ(x) is the modified Bessel function of the first kind with index µ and argument x. This can be expressed via a mixed Gamma distribution which allows for efficient sampling:

$$P[\rho] = \mathrm{Gamma}\big[ \mu + 1 + \mathrm{Poisson}[\lambda \rho_0 e^{\beta \delta t}] \big] / \lambda, \qquad (A3)$$

which is shorthand notation for a random variable drawn from a Gamma distribution with argument µ + 1 + x, where x was drawn from a Poisson distribution with argument λρ_0 e^{βδt}.

Given the values of ρ_{x,t} at time t, its stochastic evolution ρ_{x,t+δt} after a step δt can be drawn from the above distribution. The deterministic part of the equation of motion has a purely polynomial form and can also be solved exactly. The time discretization error is therefore only caused by the splitting of the evolution into a stochastic and a deterministic part. A non-zero τ_t can be incorporated by using the same procedure with a simple change of variables, u = ρ + τ_t/κ_t. The non-negativity of ρ is enforced after sampling by resetting any value of u < τ_t/κ_t to τ_t/κ_t. The well-behaved evolution of n_{x,t} is performed via an Euler scheme.

Appendix B: Relaxation time approximation in a trap

In the presence of an inhomogeneous background potential V(𝐫) for the particles, the drift term in Eq. (16) becomes significant. For the active density it yields

$$\partial_t n_{x,t} = \ldots + \sum_l \langle \sigma^{rr}_l + \sigma^{gg}_l \rangle\, \frac{\vec p_l}{M} \cdot \nabla_{\vec r_l} \Theta(r_{\mathrm{fac}} - |\vec x - \vec r_l|), \qquad (B1)$$

where we applied the chain rule and inserted the momentum 𝐩_l = M∂_t𝐫_l. In the relaxation time approximation, the momentum 𝐩 is reset after a characteristic scattering time t_rel = d_a√(M/(k_BT)), where d_a is the mean free path and T is the temperature. This yields the equation of motion

$$\partial_t \vec p_l = -\frac{\vec p_l}{t_{\mathrm{rel}}} - \nabla V(\vec r_l). \qquad (B2)$$

It is stationary for 𝐩_l = −t_rel∇V(𝐫_l) and induces an average drift for times t > t_rel. Inserting this result in Eq. (B1) and neglecting the variation of V on length scales ∼ r_fac, i.e. V(𝐫_l) ≈ V(𝐱), one finds

$$\partial_t n_{x,t} = \ldots - \frac{t_{\mathrm{rel}}}{M} \nabla V(\vec x) \cdot \nabla n_{x,t}, \qquad (B3)$$

where ... describes the dynamics of the internal states of the atoms. This approximation works well if both the facilitation shell and the mean free path are much smaller than the typical length scale of the potential V.
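The sampling rule (A3) translates directly into code; a minimal sketch, assuming the splitting step solves an SDE of the form dρ = (α + βρ)dt + σ√ρ dW, which is the form implied by the definitions of λ and µ above:

```python
import numpy as np

rng = np.random.default_rng(2)

def stochastic_step(rho, alpha, beta, sigma2, dt):
    """Exact sampling of d rho = (alpha + beta*rho) dt + sqrt(sigma2*rho) dW
    via the Poisson-Gamma mixture of Eqs. (A2), (A3) [63]; rho is an array.
    Requires beta != 0 (the beta -> 0 limit needs a separate expression)."""
    lam = 2 * beta / (sigma2 * (np.exp(beta * dt) - 1.0))
    mu = 2 * alpha / sigma2 - 1.0
    x = rng.poisson(lam * rho * np.exp(beta * dt))   # Poisson mixing variable
    return rng.gamma(mu + 1.0 + x) / lam             # Gamma draw, rescaled

# One update of a uniform activity profile (illustrative numbers)
rho = np.full(128, 0.1)
rho = stochastic_step(rho, alpha=1e-4, beta=-0.5, sigma2=1.0, dt=0.05)
print(rho.mean())
```

Because the draw is exact for the linear-plus-square-root-noise part, the absorbing state ρ = 0 is handled correctly without the clipping needed in naive Euler–Maruyama schemes.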
. P Bak, C Tang, K Wiesenfeld, 10.1103/PhysRevLett.59.381Phys. Rev. Lett. 59381P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. 59, 381 (1987).
. P Bak, C Tang, K Wiesenfeld, 10.1103/PhysRevA.38.364Phys. Rev. A. 38364P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. A 38, 364 (1988).
. A Sornette, D Sornette, EPL (Europhysics Letters). 9197A. Sornette and D. Sornette, EPL (Europhysics Letters) 9, 197 (1989).
. K Chen, P Bak, S P Obukhov, 10.1103/PhysRevA.43.625Phys. Rev. A. 43625K. Chen, P. Bak, and S. P. Obukhov, Phys. Rev. A 43, 625 (1991).
. P Bak, K Christensen, L Danon, T Scanlon, 10.1103/PhysRevLett.88.178501Phys. Rev. Lett. 88178501P. Bak, K. Christensen, L. Danon, and T. Scanlon, Phys. Rev. Lett. 88, 178501 (2002).
. K Schenk, B Drossel, S Clar, F Schwabl, 10.1007/s100510051113Eur. Phys. J. B. 15177K. Schenk, B. Drossel, S. Clar, and F. Schwabl, Eur. Phys. J. B 15, 177 (2000).
. B Drossel, F Schwabl, 10.1103/PhysRevLett.69.1629Phys. Rev. Lett. 691629B. Drossel and F. Schwabl, Phys. Rev. Lett. 69, 1629 (1992).
. B D Malamud, G Morein, D L Turcotte, 10.1126/science.281.5384.1840Science. 2811840B. D. Malamud, G. Morein, and D. L. Turcotte, Science 281, 1840 (1998).
. D L Turcotte, 10.1088/0034-4885/62/10/201Reports on Progress in Physics. 621377D. L. Turcotte, Reports on Progress in Physics 62, 1377 (1999).
. E T Lu, R J Hamilton, 10.1086/186180ApJL. 38089E. T. Lu and R. J. Hamilton, ApJL 380, L89 (1991).
. M J Aschwanden, N B Crosby, M Dimitropoulou, M K Georgoulis, S Hergarten, J Mcateer, A V Milovanov, S Mineshige, L Morales, N Nishizuka, G Pruessner, R Sanchez, A S Sharma, A Strugarek, V Uritsky, 10.1007/s11214-014-0054-6Space Science Reviews. 19847M. J. Aschwanden, N. B. Crosby, M. Dimitropoulou, M. K. Georgoulis, S. Hergarten, J. McAteer, A. V. Milovanov, S. Mi- neshige, L. Morales, N. Nishizuka, G. Pruessner, R. Sanchez, A. S. Sharma, A. Strugarek, and V. Uritsky, Space Science Re- views 198, 47 (2016).
. S Field, J Witt, F Nori, X Ling, 10.1103/PhysRevLett.74.1206Phys. Rev. Lett. 741206S. Field, J. Witt, F. Nori, and X. Ling, Phys. Rev. Lett. 74, 1206 (1995).
. E Altshuler, T H Johansen, 10.1103/RevModPhys.76.471Rev. Mod. Phys. 76471E. Altshuler and T. H. Johansen, Rev. Mod. Phys. 76, 471 (2004).
. S C Chapman, R M Nicol, 10.1103/PhysRevLett.103.241101Phys. Rev. Lett. 103241101S. C. Chapman and R. M. Nicol, Phys. Rev. Lett. 103, 241101 (2009).
. L De Arcangelis, C Perrone-Capano, H J Herrmann, 10.1103/PhysRevLett.96.028107Phys. Rev. Lett. 9628107L. de Arcangelis, C. Perrone-Capano, and H. J. Herrmann, Phys. Rev. Lett. 96, 028107 (2006).
. W L Shew, W P Clawson, J Pobst, Y Karimipanah, N C Wright, R Wessel, 10.1038/NPHYS3370Nature Physics. 11659W. L. Shew, W. P. Clawson, J. Pobst, Y. Karimipanah, N. C. Wright, and R. Wessel, Nature Physics 11, 659 (2015).
. J Hesse, T Gross, 10.3389/fnsys.2014.00166Frontiers in Systems Neuroscience. 8166J. Hesse and T. Gross, Frontiers in Systems Neuroscience 8, 166 (2014).
. D Marković, C Gros, 10.1016/j.physrep.2013.11.002Physics Reports. 53641D. Marković and C. Gros, Physics Reports 536, 41 (2014).
. O Kinouchi, M Copelli, 10.1038/nphys289q- bio/0601037Nature Physics. 2O. Kinouchi and M. Copelli, Nature Physics 2, 348 (2006), q- bio/0601037.
. A Levina, J M Herrmann, T Geisel, 10.1038/nphys758Nature Physics. 3857A. Levina, J. M. Herrmann, and T. Geisel, Nature Physics 3, 857 (2007).
. E T Lu, 10.1103/PhysRevLett.74.2511Phys. Rev. Lett. 742511E. T. Lu, Phys. Rev. Lett. 74, 2511 (1995).
. N W Watkins, G Pruessner, S C Chapman, N B Crosby, H J Jensen, 10.1007/s11214-015-0155-xSpace Science Reviews. 1983N. W. Watkins, G. Pruessner, S. C. Chapman, N. B. Crosby, and H. J. Jensen, Space Science Reviews 198, 3 (2016).
. R Dickman, M A Muñoz, A Vespignani, S Zapperi, Brazilian Journal of Physics. 3027R. Dickman, M. A. Muñoz, A. Vespignani, and S. Zapperi, Brazilian Journal of Physics 30, 27 (2000).
. J Zinn-Justin, Quantum field theory and critical phenomena, 3rd ed., International Series of Monographs on Physics (Clarendon Press, Oxford, 1996).
. A Vespignani, S Zapperi, 10.1103/PhysRevLett.78.4793Phys. Rev. Lett. 784793A. Vespignani and S. Zapperi, Phys. Rev. Lett. 78, 4793 (1997).
. M Rybarsch, S Bornholdt, 10.1371/journal.pone.0093090PLOS ONE. 91M. Rybarsch and S. Bornholdt, PLOS ONE 9, 1 (2014).
. C J Rhodes, R M Anderson, 10.1038/381600a0Nature. 381600C. J. Rhodes and R. M. Anderson, Nature 381, 600 (1996).
. M J Aschwanden, F Scholkmann, W Béthune, W Schmutz, V Abramenko, M C M Cheung, D Müller, A Benz, G Chernov, A G Kritsuk, J D Scargle, A Melatos, R V Wagoner, V Trimble, W H Green, 10.1007/s11214-018-0489-2Space Science Reviews. 21455M. J. Aschwanden, F. Scholkmann, W. Béthune, W. Schmutz, V. Abramenko, M. C. M. Cheung, D. Müller, A. Benz, G. Cher- nov, A. G. Kritsuk, J. D. Scargle, A. Melatos, R. V. Wagoner, V. Trimble, and W. H. Green, Space Science Reviews 214, 55 (2018).
. S H Strogatz, 10.1038/35065725Nature. 410268S. H. Strogatz, Nature (London) 410, 268 (2001).
. B Barzel, A.-L Barabási, 10.1038/nphys2797Nature Physics. 9750B. Barzel and A.-L. Barabási, Nature Physics 9, 750 (2013).
. S Helmrich, A Arias, G Lochead, M Buchhold, S Diehl, S Whitlock, arXiv:1806.09931ArXiv e-prints. condmat.quant-gasS. Helmrich, A. Arias, G. Lochead, M. Buchhold, S. Diehl, and S. Whitlock, ArXiv e-prints (2018), arXiv:1806.09931 [cond- mat.quant-gas].
. P Schauß, M Cheneau, M Endres, T Fukuhara, S Hild, A Omran, T Pohl, C Gross, S Kuhr, I Bloch, 10.1038/nature11596arXiv:1209.0944Nature (London). 491physics.atom-phP. Schauß, M. Cheneau, M. Endres, T. Fukuhara, S. Hild, A. Omran, T. Pohl, C. Gross, S. Kuhr, and I. Bloch, Nature (London) 491, 87 (2012), arXiv:1209.0944 [physics.atom-ph].
. G Günter, H Schempp, M Robert-De-Saint-Vincent, V Gavryusev, S Helmrich, C S Hofmann, S Whitlock, M Weidemüller, 10.1126/science.1244843Science. 342954G. Günter, H. Schempp, M. Robert-de-Saint-Vincent, V. Gavryusev, S. Helmrich, C. S. Hofmann, S. Whitlock, and M. Weidemüller, Science 342, 954 (2013).
. A V Gorshkov, R Nath, T Pohl, 10.1103/PhysRevLett.110.153601arXiv:1211.7060Physical Review Letters. 110153601quant-phA. V. Gorshkov, R. Nath, and T. Pohl, Physical Review Letters 110, 153601 (2013), arXiv:1211.7060 [quant-ph].
. S Helmrich, A Arias, S Whitlock, 10.1103/PhysRevA.98.022109Phys. Rev. A. 9822109S. Helmrich, A. Arias, and S. Whitlock, Phys. Rev. A 98, 022109 (2018).
. F Letscher, O Thomas, T Niederprüm, M Fleischhauer, H Ott, 10.1103/PhysRevX.7.021020Phys. Rev. X. 721020F. Letscher, O. Thomas, T. Niederprüm, M. Fleischhauer, and H. Ott, Phys. Rev. X 7, 021020 (2017).
. O Thomas, C Lippe, T Eichert, H Ott, arXiv:1712.05263Nature Communications. 92238physics.atom-phO. Thomas, C. Lippe, T. Eichert, and H. Ott, Nature Commu- nications 9, 2238 (2018), arXiv:1712.05263 [physics.atom-ph].
T Gallagher, Rydberg Atoms. Cambridge University PressT. Gallagher, Rydberg Atoms (Cambridge University Press, 1984).
. M Saffman, T G Walker, K Mølmer, 10.1103/RevModPhys.82.2313Rev. Mod. Phys. 822313M. Saffman, T. G. Walker, and K. Mølmer, Rev. Mod. Phys. 82, 2313 (2010).
. R Lw, H Weimer, J Nipper, J B Balewski, B Butscher, H P Bchler, T Pfau, Journal of Physics B: Atomic, Molecular and Optical Physics. 45113001R. Lw, H. Weimer, J. Nipper, J. B. Balewski, B. Butscher, H. P. Bchler, and T. Pfau, Journal of Physics B: Atomic, Molecular and Optical Physics 45, 113001 (2012).
. T Baluktsian, B Huber, R Löw, T Pfau, 10.1103/PhysRevLett.110.123001Phys. Rev. Lett. 110123001T. Baluktsian, B. Huber, R. Löw, and T. Pfau, Phys. Rev. Lett. 110, 123001 (2013).
The interaction might as well acquire a dipole-dipole form, V ∼ |𝐫_l − 𝐫_{l'}|^{−3}, e.g. due to Förster resonances [71]. This does, however, not modify the structure of Eq. (7).
. M M Valado, C Simonelli, M D Hoogerland, I Lesanovsky, J P Garrahan, E Arimondo, D Ciampini, O Morsch, 10.1103/PhysRevA.93.040701Phys. Rev. A. 9340701M. M. Valado, C. Simonelli, M. D. Hoogerland, I. Lesanovsky, J. P. Garrahan, E. Arimondo, D. Ciampini, and O. Morsch, Phys. Rev. A 93, 040701 (2016).
. A Urvoy, F Ripka, I Lesanovsky, D Booth, J P Shaffer, T Pfau, R Löw, 10.1103/PhysRevLett.114.203002Phys. Rev. Lett. 114203002A. Urvoy, F. Ripka, I. Lesanovsky, D. Booth, J. P. Shaffer, T. Pfau, and R. Löw, Phys. Rev. Lett. 114, 203002 (2015).
. M Gärttner, K P Heeg, T Gasenzer, J Evers, 10.1103/PhysRevA.88.043410Phys. Rev. A. 8843410M. Gärttner, K. P. Heeg, T. Gasenzer, and J. Evers, Phys. Rev. A 88, 043410 (2013).
. T E Lee, H Häffner, M C Cross, 10.1103/PhysRevLett.108.023602Phys. Rev. Lett. 10823602T. E. Lee, H. Häffner, and M. C. Cross, Phys. Rev. Lett. 108, 023602 (2012).
. C Ates, T Pohl, T Pattard, J M Rost, 10.1103/PhysRevLett.98.023002Phys. Rev. Lett. 9823002C. Ates, T. Pohl, T. Pattard, and J. M. Rost, Phys. Rev. Lett. 98, 023002 (2007).
. T Amthor, C Giese, C S Hofmann, M Weidemüller, 10.1103/PhysRevLett.104.013001Phys. Rev. Lett. 10413001T. Amthor, C. Giese, C. S. Hofmann, and M. Weidemüller, Phys. Rev. Lett. 104, 013001 (2010).
. I Lesanovsky, J P Garrahan, 10.1103/PhysRevA.90.011603Phys. Rev. A. 9011603I. Lesanovsky and J. P. Garrahan, Phys. Rev. A 90, 011603 (2014).
. R Faoro, C Simonelli, M Archimi, G Masella, M M Valado, E Arimondo, R Mannella, D Ciampini, O Morsch, 10.1103/PhysRevA.93.030701Phys. Rev. A. 9330701R. Faoro, C. Simonelli, M. Archimi, G. Masella, M. M. Valado, E. Arimondo, R. Mannella, D. Ciampini, and O. Morsch, Phys. Rev. A 93, 030701 (2016).
. M Marcuzzi, M Buchhold, S Diehl, I Lesanovsky, 10.1103/PhysRevLett.116.245701Phys. Rev. Lett. 116245701M. Marcuzzi, M. Buchhold, S. Diehl, and I. Lesanovsky, Phys. Rev. Lett. 116, 245701 (2016).
. M Marcuzzi, E Levi, W Li, J P Garrahan, B Olmos, I Lesanovsky, 10.1088/1367-2630/17/7/0720031411.7984New Journal of Physics. 1772003M. Marcuzzi, E. Levi, W. Li, J. P. Garrahan, B. Olmos, and I. Lesanovsky, New Journal of Physics 17, 072003 (2015), 1411.7984.
. M Buchhold, B Everest, M Marcuzzi, I Lesanovsky, S Diehl, 10.1103/PhysRevB.95.014308Phys. Rev. B. 9514308M. Buchhold, B. Everest, M. Marcuzzi, I. Lesanovsky, and S. Diehl, Phys. Rev. B 95, 014308 (2017).
. C Pérez-Espigares, M Marcuzzi, R Gutiérrez, I Lesanovsky, 10.1103/PhysRevLett.119.140401Phys. Rev. Lett. 119140401C. Pérez-Espigares, M. Marcuzzi, R. Gutiérrez, and I. Lesanovsky, Phys. Rev. Lett. 119, 140401 (2017).
. R Gutiérrez, C Simonelli, M Archimi, F Castellucci, E Arimondo, D Ciampini, M Marcuzzi, I Lesanovsky, O Morsch, 10.1103/PhysRevA.96.041602Phys. Rev. A. 9641602R. Gutiérrez, C. Simonelli, M. Archimi, F. Castellucci, E. Ari- mondo, D. Ciampini, M. Marcuzzi, I. Lesanovsky, and O. Morsch, Phys. Rev. A 96, 041602 (2017).
In the presence of spatial fluctuations, the critical density is generally shifted to larger values n_c > Γ/κ. In order to determine n_c in d = 1, 2, we compute the location of the critical point in Eq. (7) numerically, e.g. we find n_c = 3.86 in d = 1.
. H K Janssen, 10.1007/BF01319549Zeitschrift für Physik B Condensed Matter. 42151H. K. Janssen, Zeitschrift für Physik B Condensed Matter 42, 151 (1981).
H Hinrichsen, Advances in physics. 49815H. Hinrichsen, Advances in physics 49, 815 (2000).
Any rare configuration with n_{x,t} = 0 would otherwise block the spreading of excitations forever.
. J A Bonachela, M A Muñoz, 10.1088/1742-5468/2009/09/P09009Journal of Statistical Mechanics: Theory and Experiment. 9009J. A. Bonachela and M. A. Muñoz, Journal of Statistical Me- chanics: Theory and Experiment 2009, P09009 (2009).
. J A Bonachela, S De Franciscis, J J Torres, M A Muñoz, Journal of Statistical Mechanics: Theory and Experiment. 20102015J. A. Bonachela, S. de Franciscis, J. J. Torres, and M. A. Muñoz, Journal of Statistical Mechanics: Theory and Experi- ment 2010, P02015 (2010).
This is contrasted with SOC in energy conserving systems, e.g. the sandpile model, which requires only a single pair of separated scales [60].
. I Dornic, H Chaté, M A Muñoz, 10.1103/PhysRevLett.94.100601Phys. Rev. Lett. 94100601I. Dornic, H. Chaté, and M. A. Muñoz, Phys. Rev. Lett. 94, 100601 (2005).
. K Schenk, B Drossel, F Schwabl, 10.1103/PhysRevE.65.026135Phys. Rev. E. 6526135K. Schenk, B. Drossel, and F. Schwabl, Phys. Rev. E 65, 026135 (2002).
. C V Stewart, D Plenz, 10.1523/JNEUROSCI.0723-06.2006Journal of Neuroscience. 26C. V. Stewart and D. Plenz, Journal of Neuroscience 26, 8148 (2006), http://www.jneurosci.org/content/26/31/8148.full.pdf.
. A Levina, J M Herrmann, T Geisel, 10.1103/PhysRevLett.102.118110Phys. Rev. Lett. 102118110A. Levina, J. M. Herrmann, and T. Geisel, Phys. Rev. Lett. 102, 118110 (2009).
. S Bornholdt, T Rohlf, 10.1103/PhysRevLett.84.6114Phys. Rev. Lett. 846114S. Bornholdt and T. Rohlf, Phys. Rev. Lett. 84, 6114 (2000).
. N Bertschinger, T Natschlger, http:/arxiv.org/abs/https:/doi.org/10.1162/089976604323057443Neural Computation. 161413N. Bertschinger and T. Natschlger, Neural Computation 16, 1413 (2004), https://doi.org/10.1162/089976604323057443.
. A A Prinz, 10.1073/pnas.0802299105Proceedings of the National Academy of Sciences. 1055953A. A. Prinz, Proceedings of the National Academy of Sciences 105, 5953 (2008), http://www.pnas.org/content/105/16/5953.full.pdf.
. C Ong, E Gilmore, J Claassen, B Foreman, S A Mayer, 10.1007/s12028-012-9728-7Neurocritical Care. 1739C. Ong, E. Gilmore, J. Claassen, B. Foreman, and S. A. Mayer, Neurocritical Care 17, 39 (2012).
. W Li, P J Tanner, T F Gallagher, 10.1103/PhysRevLett.94.173001Phys. Rev. Lett. 94173001W. Li, P. J. Tanner, and T. F. Gallagher, Phys. Rev. Lett. 94, 173001 (2005).
| [] |
[
"Keypoint-Based Bimanual Shaping of Deformable Linear Objects under Environmental Constraints using Hierarchical Action Planning",
"Keypoint-Based Bimanual Shaping of Deformable Linear Objects under Environmental Constraints using Hierarchical Action Planning"
] | [
"Shengzeng Huo ",
"Anqing Duan ",
"Chengxi Li ",
"Peng Zhou ",
"Wanyu Ma ",
"David Navarro-Alarcon "
] | [] | [] | This paper addresses the problem of contact-based manipulation of deformable linear objects (DLOs) towards desired shapes with a dual-arm robotic system. To alleviate the burden of high-dimensional continuous state-action spaces, we model the DLO as a kinematic multibody system via our proposed keypoint detection network. This new perception network is trained on a synthetic labeled image dataset and transferred to real manipulation scenarios without conducting any manual annotations. Our goal-conditioned policy can efficiently learn to rearrange the configuration of the DLO based on the detected keypoints. The proposed hierarchical action framework tackles the manipulation problem in a coarse-to-fine manner (with highlevel task planning and low-level motion control) by leveraging on two action primitives. The identification of deformation properties is avoided since the algorithm replans its motion after each bimanual execution. The conducted experimental results reveal that our method achieves high performance in state representation of the DLO, and is robust to uncertain environmental constraints. | null | [
"https://arxiv.org/pdf/2110.08962v1.pdf"
] | 239,016,774 | 2110.08962 | 4328eeb0a247bb0b19b154e4f635de7e5836e4db |
Keypoint-Based Bimanual Shaping of Deformable Linear Objects under Environmental Constraints using Hierarchical Action Planning
Shengzeng Huo
Anqing Duan
Chengxi Li
Peng Zhou
Wanyu Ma
David Navarro-Alarcon
Keypoint-Based Bimanual Shaping of Deformable Linear Objects under Environmental Constraints using Hierarchical Action Planning
Index Terms—Deformable Linear Object, Synthetic Learning, Bimanual Manipulation, Hierarchical Planning
This paper addresses the problem of contact-based manipulation of deformable linear objects (DLOs) towards desired shapes with a dual-arm robotic system. To alleviate the burden of high-dimensional continuous state-action spaces, we model the DLO as a kinematic multibody system via our proposed keypoint detection network. This new perception network is trained on a synthetic labeled image dataset and transferred to real manipulation scenarios without conducting any manual annotations. Our goal-conditioned policy can efficiently learn to rearrange the configuration of the DLO based on the detected keypoints. The proposed hierarchical action framework tackles the manipulation problem in a coarse-to-fine manner (with highlevel task planning and low-level motion control) by leveraging on two action primitives. The identification of deformation properties is avoided since the algorithm replans its motion after each bimanual execution. The conducted experimental results reveal that our method achieves high performance in state representation of the DLO, and is robust to uncertain environmental constraints.
I. INTRODUCTION
Deformable object manipulation has many promising applications in growing fields, such as flexible cable arrangement [1], clothes folding [2], and surgical robotics [3]. Among them, the manipulation of deformable linear objects (DLOs) attracts much attention due to its relevance in several manufacturing industries [4], such as wire harnessing and knot tying [5].
Although great progress has been recently achieved in deformable object manipulation (e.g. [6]-[8]), shaping DLOs with environmental contacts remains an open problem. Compared with rigid objects, this problem is much more challenging due to the complex physical dynamics of infinite-degree-of-freedom DLOs. Our strategy is that, instead of relying on analytic physical dynamics, the DLO model is simplified to a kinematic multibody system characterized by several keypoints. The assumptions of our strategy are that 1) the keypoint representation is sufficient for the contact-based shape matching problem, and 2) the shape error incurred by the modeling simplification can be compensated for by coarse-to-fine manipulation. This paper aims to develop a complete algorithm (including perception and planning) to tackle the task of contact-based shaping of DLOs with bimanual manipulation.
Many researchers have worked on the vision-based representation of DLOs [9]. Angles [10] and curvatures [11] are intuitive hard-coded descriptors for shape feedback, but their generalization is poor. [1], [8] develop Fourier-based descriptors; however, these incur a high computational cost during online perception. Data-driven shape analysis has gained popularity in feature extraction [12]. [13] employs a Gaussian Mixture Model for its physics simulation engine, assuming the physical model of the deformable objects is known. [14] proposes an Encode-Manipulate-Decode network for cloth manipulation. However, it needs tremendous data collection and the latent vector is not semantic. Since real data is expensive to collect, learning on synthetic datasets and transferring to physical situations is an alternative solution [15]. [16] simulates 2D fabric smoothing on a mesh grid connected by various springs. [17] forms a braid of rope by twisting cylindrical meshes; this work needs a sphere mesh on one end to break the symmetry. [18] generates images with a random b-spline curve with six control points; however, it still needs a real dataset for perception finetuning.
Robotic manipulation of deformable objects has been studied with various formulations and assumptions, including model-based and model-free approaches. Under the pre-grasping assumption, [1], [19] consider the deformation task as shape servoing and approximate the local deformation model with a linear Jacobian matrix, although global convergence is not guaranteed. Formulating the task as a multi-step pick-and-place manipulation problem, [17], [18] conduct the tasks with a single-arm policy, while real data collection is required for sim-to-real transfer or human visual demonstration. [20] assembles DLOs onto specified fixtures with dual robots, yet contacts are not taken into consideration. Task and motion planning (TAMP) is a solution to tackle such multi-step decision-making tasks [21] by factorizing [22] the planning process into discrete symbolic reasoning and continuous motion generation [23]. However, the majority of TAMP algorithms assume rigid objects, whose predictable dynamics are not available for deformable objects, let alone under contact constraints.
[24] exploits environmental contacts for the manipulation of DLOs, which is achieved with customized mechanical grippers and the assumption of pre-grasping. We advance this line of work to manipulating DLOs from arbitrary configurations to desired goal states under environmental constraints. The shape of the DLO is characterized by a sequence of ordered keypoints, an approach that narrows the state-action search space for bimanual manipulation. To deal with complex contact configurations, a coarse-to-fine planning framework with two defined action primitives is derived. The original contributions of this work are as follows:
• A novel data-driven perception approach for DLOs whose network is trained on a synthetic dataset.
• A hierarchical action planning framework for shaping DLOs under environmental constraints in a coarse-to-fine manner.
• Experimental results to validate our solution for contact-based bimanual manipulation of DLOs in real environments.

The remainder of this paper is organized as follows. Sec. II states the task formulation. Sec. III explains the perception. Sec. IV presents the planning framework. Sec. V reports the results and Sec. VI gives the conclusions.
II. PROBLEM FORMULATION
The architecture of our vision-based manipulation system is depicted in Fig. 1. Given a goal observation I*, our task is to manipulate the DLO from an initial configuration I^{[0]} to match it. Assuming the DLO has an obvious color contrast with the background, we segment the state of the DLO, S^{[t]}, from a raw image I^{[t]} with a color filter. To simplify the problem, we consider circular contacts with known size C = {c_1, · · · , c_k, · · · , c_q} in the observation I^{[t]}.
Formulating deformable object manipulation as a multi-step decision-making process, our aim is to obtain an action plan A = (A^{[1]}, · · · , A^{[t]}, · · ·), where each bimanual action is A^{[t]} = [T^{[t]}_L, T^{[t]}_R], with T^{[t]}_L and T^{[t]}_R the actions of the left and right arm, respectively, including motion, grasping, and releasing.
III. PERCEPTION
Our perception module takes the visual binary image S^{[t]} as input and outputs the corresponding keypoints P^{[t]}. To avoid time-consuming real-world data collection for training, we render an annotated synthetic image dataset for supervised learning (Sec. III-A) and finetune the output of the network through geometric constraints (Sec. III-B).
A. Synthetic Dataset Generation
In this section, we simulate DLOs to facilitate the keypoint detection P^{[t]} from the binary image S^{[t]}, as illustrated in Fig. 2. Geometrically, a DLO refers to an object whose length is much larger than the diameter of its cross-section. Thus, we mathematically describe it as a continuous curve. Taking the deformation of DLOs into consideration, our model utilizes several 2D curve segments based on the Fourier series [8] to depict the local shape of DLOs (l_1, l_2, · · ·), where each curve segment l is described along the X-axis as

$$y = f(x) = \frac{a_0}{2} + \sum_{n=1}^{N} \big[ a_n \cos(n\omega x) + b_n \sin(n\omega x) \big], \qquad (1)$$

where a_0 is the bias of the Fourier descriptor at zero frequency and N is the number of harmonics under consideration. The coefficients of the n-th harmonic, a_n and b_n, are defined as a_n = (2/T)∫_{t_0}^{t_0+T} f(t) cos(nωt) dt and b_n = (2/T)∫_{t_0}^{t_0+T} f(t) sin(nωt) dt, respectively,
where ω denotes the frequency. Note that the discrete points of the curve l are ordered along the X-axis with this definition. Each DLO L consists of several end-to-end connected Fourier series-based segments L = (l_1, l_2, · · ·), and a point on it is represented as s_i = (s_{ix}, s_{iy}). Next, we simulate the raw input S^{[t]} and our desired keypoints P^{[t]}, respectively. We denote m keypoints from S^{[t]} in a coarse-to-fine manner. Initially, m candidates are sampled uniformly according to Euclidean distance. Since points with high curvature describe the contour of the DLO, we also desire those as keypoints. The curvature of a point s_i is defined as
$$\alpha_i = \big\langle f^-(s_i),\, f^+(s_i) \big\rangle, \qquad (2)$$

where f^-(s_i) = s_i − s_{i−1} and f^+(s_i) = s_{i+1} − s_i. Here, ⟨a, b⟩ denotes the angle between two vectors (a, b). According to this definition, we substitute the points whose curvatures are larger than a threshold τ_u for their corresponding nearest uniform candidates, keeping the number of keypoints at a constant m. For S^{[t]}, we stack the curve L along the Y-axis to simulate the cross-section of the DLO. After these steps, both the sampled keypoints and the stacked layers enter spatial transformation for data augmentation and camera view rendering for image processing. Spatial transformation, including translation and rotation, is significant for balancing the distribution of samples. Camera view rendering consists of resizing the curve into the region of interest and re-ordering the points into an image format. Since we adopt a binary image S^{[t]} to represent the DLO, the pixel at S(u, v) is positive if any point locates within its surroundings:
$$S(u, v) = \begin{cases} 1, & \exists\, s_i \in I(u, v) \\ 0, & \text{otherwise} \end{cases} \qquad (3)$$

where s_i ∈ I(u, v) ⟺ {w_u < s_{ix} < w_{u+1}} ∩ {h_v < s_{iy} < h_{v+1}}, and u and v are the horizontal and vertical positions of the pixel in the image, respectively. For the labeled keypoints, we transform them from the Cartesian frame to the image frame, represented as p_j(u_j, v_j) in sequence.
With this generation pipeline, we obtain binary images describing the shape of DLOs and their corresponding annotated sequential keypoints.
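As a rough illustration of this pipeline, the sketch below draws random Fourier segments, joins them end-to-end and rasterizes them via Eq. (3), then samples uniform keypoint candidates. The curvature-based substitution of Eq. (2) and the spatial-transformation augmentation are omitted for brevity, and all sizes and amplitudes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def fourier_segment(n_pts=200, N=4, omega=2 * np.pi, amp=0.05):
    """One curve segment y = f(x) on x in [0, 1], following Eq. (1) with a_0 = 0."""
    x = np.linspace(0.0, 1.0, n_pts)
    y = np.zeros_like(x)
    for n in range(1, N + 1):
        a, b = rng.uniform(-amp, amp, 2)
        y += a * np.cos(n * omega * x) + b * np.sin(n * omega * x)
    return np.stack([x, y], axis=1)

def render(curve, H=64, W=64, width=1):
    """Rasterize via Eq. (3), thickened to mimic the stacked cross-section."""
    pts = curve - curve.min(axis=0)
    pts = pts / (pts.max() + 1e-9) * (min(H, W) - 2 * width - 2) + width
    img = np.zeros((H, W), dtype=np.uint8)
    for u, v in pts.astype(int):
        img[v - width:v + width + 1, u - width:u + width + 1] = 1
    return img

# Two segments joined end-to-end, then m uniform keypoint candidates
s1 = fourier_segment()
s2 = fourier_segment()
s2 = s2 - s2[0] + s1[-1]                 # translate so the segments connect
curve = np.vstack([s1, s2])
m = 8
keypoints = curve[np.linspace(0, len(curve) - 1, m).astype(int)]
print(render(curve).sum(), keypoints.round(2))
```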
B. Keypoint Detection
To detect the keypoints from the visual image S [t] , we design a network to predict the keypoints of DLOs. More details about the network structure and the training process are discussed in Sec V.
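Since the architecture is deferred to Sec. V, the following PyTorch sketch is only a plausible stand-in, not the paper's network: a small convolutional regressor mapping the binary image to 2m normalized coordinates, trained with an MSE loss on the synthetic labels:

```python
import torch
import torch.nn as nn

class KeypointNet(nn.Module):
    """Small CNN regressing m ordered keypoints (u, v) from a 1-channel
    binary image; a hypothetical stand-in for the network of Sec. V."""
    def __init__(self, m=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2 * m)

    def forward(self, img):                       # img: (B, 1, H, W)
        out = self.head(self.backbone(img))
        return out.view(img.shape[0], -1, 2)      # (B, m, 2) normalized (u, v)

# One supervised step on stand-in data shaped like the synthetic dataset
net, m = KeypointNet(m=8), 8
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
img = torch.rand(4, 1, 64, 64).round()            # fake binary images S
target = torch.rand(4, m, 2)                      # normalized keypoint labels P
loss = nn.functional.mse_loss(net(img), target)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```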
While the network generalizes across different shapes of DLOs, errors are still unavoidable. As illustrated in Fig. 3, some outputs visually fall on the background, which conflicts with the prior knowledge that the keypoints lie on the DLO. Hence, we perform a geometric-constraint finetuning. For an output p_j = (u_j, v_j) that fails, namely S^{[t]}(u_j, v_j) = 0, we utilize the adjacent keypoints (p_{j−1}, p_{j+1}) to correct it, which is divided into two cases: (1) the ends are adjusted to the nearest pixels in the area of the DLO, and (2) the intermediate keypoints are revised by searching along the direction perpendicular to their tangent space δp_j:

$$\text{find } s_i(u_i, v_i) \quad \text{s.t.} \quad S^{[t]}(u_i, v_i) = 1, \quad \overrightarrow{s_i p_j} \cdot \delta p_j = 0, \qquad (4)$$

where the tangent space is defined as δp_j = p_{j+1} − p_{j−1}.
Notably, we denote P̂^{[t]} as the finetuned version of the raw output P^{[t]}.
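The correction rule can be sketched as follows; this simplified variant snaps failed endpoints to the nearest DLO pixel and approximates the exact normal search of Eq. (4) by a tolerance band around the normal direction (all helper names are ours):

```python
import numpy as np

def finetune(P, S):
    """Project raw keypoints P (m x 2 integer (u, v) coords) onto the
    DLO mask S, following the two cases of Sec. III-B."""
    fg = np.argwhere(S > 0)[:, ::-1]              # foreground pixels as (u, v)
    P = P.copy()
    for j, (u, v) in enumerate(P):
        if S[v, u]:
            continue                               # already on the DLO
        if j in (0, len(P) - 1):
            cand = fg                              # ends: nearest DLO pixel
        else:
            d = P[j + 1] - P[j - 1]                # tangent delta_p_j
            proj = (fg - P[j]) @ d                 # ~0 on the normal line
            cand = fg[np.abs(proj) <= np.abs(d).sum()]
            if len(cand) == 0:
                cand = fg                          # fallback: nearest pixel
        P[j] = cand[np.argmin(((cand - P[j]) ** 2).sum(axis=1))]
    return P

# Example: a 2-pixel-wide horizontal DLO with one keypoint off the mask
S = np.zeros((16, 16), dtype=np.uint8)
S[7:9, 2:14] = 1
P = np.array([[2, 7], [8, 3], [13, 8]])            # middle point is off the DLO
print(finetune(P, S))
```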
IV. HIERARCHICAL ACTION PLANNING
We propose a hierarchical action planning framework interleaving high-level task planning, where a sub-goal P̂^{[t+1]} is designed given the detected keypoints (P̂^{[t]}, P̂*), and low-level motion control, where a search-based action plan (T_L, T_R) is derived at each time step t. Specifically, the sub-goal design P̂^{[t+1]} specifies the keypoints of interest (selection and placement) for the two arms respectively, whereas (T_L, T_R) is an action plan to manipulate the DLO to the detailed configuration. Under this framework, we define two multi-step action primitives, the contact primitive and the shape primitive, to implement the task in a coarse-to-fine manner. The switch between them depends on the analysis of the contact constraints, as illustrated in Fig. 4. As the two primitives share the same classical pick-and-place manipulation configuration, we first detail the contact primitive (Sec. IV-A) and highlight the differences of the shape primitive (Sec. IV-B) afterward.
A. Contact Primitive
Unlike common robotic manipulation tasks, e.g., grasping, pushing, or shape servoing, shaping DLOs under environmental constraints is a discrete-continuous mixed manipulation task, since contacts, in addition to the robots, exert external forces on the DLO. Given the state S* extracted from the desired goal observation I*, the DLO reaches and stays in this goal configuration only if the contacts are constructed correctly. Hence, it is essential to construct the contacts first as a coarse matching step, which we denote the contact primitive. The whole algorithm of this primitive is shown in Alg. 1.
Algorithm 1: ContactPrimitive(P[t], P*, C, J, B, B')

while c_k ≠ ∅ do
    T_L, T_R ← GraspPlan(P[t], C, c_k, J)
    if Reachable(c_k, B'_k) then
        T_R ← T_R ∪ Motion(c_k, B'_k)
    else
        T_L ← T_L ∪ Motion(c_k, B'_k)
    S[t+1] ← SystemDynamics(T_L, T_R, S[t])
    P[t+1] ← UpdateKeypoints(S[t+1])
    c_k ← ContactSearch(P[t], C, B)
Firstly, we analyze the role of contacts in shaping the DLO into S*, as illustrated in Fig. 5(a). Each contact c_k supports its adjacent elements of the DLO, constraining their mobility. Hence, our algorithm validates the contact construction according to these elements. Specifically, we denote a set of benchmarks B = {B_kb | k = 1, · · · , q, b = 1, 2, 3} for contact evaluation, in which three benchmarks (B_k1, B_k2, B_k3) are defined for each contact c_k. Among them, the benchmarks (B_k1, B_k2) are obtained through constrained optimization:
B_k1 = argmax_{s_i} ||s_i − c_k||_2,   B_k2 = argmax_{s_i} ||s_i − B_k1||_2
s.t.  τ_i < ||s_i − c_k||_2 < τ_e   (5)
where (τ_i, τ_e) are the distance thresholds of the search area. In addition, we also search for the nearest element s*_i of the goal shape S* to capture the support force from the contact c_k to the DLO:
B k3 = arg min si ||s i − c k || 2(6)
After the search, we re-order the benchmarks B kb ∈ B along the sequential keypoints P * . These benchmarks act as baselines to assess the contact construction. A benchmark B kb is satisfied if we find an element s i that fulfills two thresholds τ c and τ a (graphical explanation in Fig. 5(b)):
||s_i − c_k||_2 < τ_c,   ⟨(s_i − c_k), (B_kb − c_k)⟩ < τ_a   (7)
For a contact c_k, we consider it qualified only if all the benchmarks {B_kb | b = 1, 2, 3} are satisfied based on these metrics. According to the benchmark set B, we search for the target contact along the sequence k = 2, · · · , q, 1, meaning that the first contact at the end is skipped and placed at the end of the queue. This ordering avoids breaking the contacts already constructed at the ends while manipulating the central part of the DLO. Note that this search stops once a target contact c_k is found along the defined sequence.
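As an illustration, the benchmark-satisfaction test of Eq. (7) and the contact search order could be sketched as follows (the thresholds τ_c and τ_a are task-dependent inputs; names are illustrative):

```python
import numpy as np

def angle(a, b):
    """Angle <a, b> between two vectors, as used in Eq. (7)."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def benchmark_satisfied(points, c_k, B_kb, tau_c, tau_a):
    """A benchmark B_kb is satisfied if some DLO element s_i fulfills Eq. (7)."""
    for s in points:
        if np.linalg.norm(s - c_k) < tau_c and angle(s - c_k, B_kb - c_k) < tau_a:
            return True
    return False

def search_target_contact(points, contacts, benchmarks, tau_c, tau_a):
    """Scan contacts in the order k = 2, ..., q, 1; return the first unqualified one."""
    q = len(contacts)
    for k in list(range(1, q)) + [0]:            # 0-based indices for k = 2, ..., q, 1
        qualified = all(benchmark_satisfied(points, contacts[k], B, tau_c, tau_a)
                        for B in benchmarks[k])  # benchmarks[k] = (B_k1, B_k2, B_k3)
        if not qualified:
            return k
    return None                                   # all contacts constructed
```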
Our low-level planner takes the target contact c_k as input and outputs the motion of the dual-arm robot (T_L, T_R) to achieve it. The benchmark set B also serves as guidance for contact construction; however, there is a gap between the keypoints P[t] and B. To link the benchmarks B and the keypoints P*, we pair them up by nearest Euclidean distance, using a set J = {J_kb | k = 1, · · · , q, b = 1, 2, 3} to mark the corresponding indices:
J kb = arg min i || p * i − B kb || 2(8)
With the set J, the search sequence for grasping-point selection of the individual robots is divided into two cases: 1) from the ends toward the middle for a contact c_k at the end, and 2) from the middle toward the ends for a contact c_k in the intermediate section. This limits the impact of the manipulation on unrelated sections of the DLO. The search operates under the constraints of the system, including the operation range and the contact obstacles. Next, we build on potential-field path planning [26] to avoid collision with the contacts, which exert a repulsive force on the robot. Specifically, we extend the benchmark set B to B' with a threshold τ_b:
B'_kb = c_k + τ_b · (B_kb − c_k) / ||B_kb − c_k||_2   (9)
For the target contact c_k, we manipulate the DLO to the corresponding extended benchmarks {B'_kb | b = 1, 2, 3}. During the action, the dual arms are assigned as fixing (for freedom constraints) and moving (for shaping), depending on the reachability analysis of the robot arms with respect to B_k1. The moving arm then follows the path from B'_k3 to B'_k1 for the left arm, or from B'_k1 to B'_k3 for the right arm. The 4-DOF pose in a table-top environment is defined as π_j = {χ_j, η_j}, where χ_j and η_j are 3 × 1 position and direction vectors. These two entities are defined by:
χ_j = B'_kb,   η_j · (B_kb − c_k) = 0   (10)
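Eqs. (9)-(10) amount to a vector offset and an orthogonality condition; a sketch assuming 3-D vectors on a tabletop (choosing η via the table normal is one valid solution, not necessarily the paper's exact construction):

```python
import numpy as np

def extended_benchmark(c_k, B_kb, tau_b):
    """B'_kb = c_k + tau_b * (B_kb - c_k) / ||B_kb - c_k||, per Eq. (9)."""
    d = B_kb - c_k
    return c_k + tau_b * d / (np.linalg.norm(d) + 1e-8)

def placement_pose(c_k, B_kb, tau_b):
    """4-DOF tabletop pose pi = (chi, eta), per Eq. (10).

    chi sits at the extended benchmark; eta is one direction orthogonal
    to c_k -> B_kb, obtained here via the table normal (an assumption).
    """
    chi = extended_benchmark(c_k, B_kb, tau_b)
    d = (B_kb - c_k) / (np.linalg.norm(B_kb - c_k) + 1e-8)
    eta = np.cross(np.array([0.0, 0.0, 1.0]), d)  # satisfies eta . (B_kb - c_k) = 0
    return chi, eta
```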
Finally, we summarize the primitive. Given the target contact c_k, we select the grasping points for the dual arms. Next, we assign their roles and execute the motion. At last, the robots release the DLO and wait for re-planning.
B. Shape Primitive
The goal of the shape primitive is to implement the finetuning after the contact construction. As illustrated in Fig. 6, our perception network pairs up the individual keypoints p[t]_j ∈ P[t] and p*_j ∈ P*. Hence, the shape error ΔP between them is defined as:

ΔP = (1/m) Σ_{j=1}^{m} ||p[t]_j − p*_j||_2   (11)
This error serves as guidance to improve the similarity. Intuitively, we select the keypoint p_j whose difference between the current state P[t] and the goal state P* is largest. Meanwhile, we also select the second-largest one for bimanual manipulation:
g ← argmax_j ||p[t]_j − p*_j||_2,   g' ← argmax_{j, j≠g} ||p[t]_j − p*_j||_2   (12)
We reorder (g, g') and assign them to the dual-arm robot by

g_L, g_R ← min(g, g'), max(g, g')   (13)
Similar to the contact primitive, we define the search paradigm under the system constraints as g_L toward 1 for the left arm and g_R toward m for the right arm, respectively. Then, we define the target pose with respect to the g-th keypoint p*_g:

π_g = {p*_g, δp*_g}   (14)
where δp*_g is the tangent at p*_g. This shape primitive iterates until the desired goal is reached.
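A sketch of the keypoint selection and arm assignment of Eqs. (11)-(13):

```python
import numpy as np

def select_grasp_keypoints(P_t, P_star):
    """Pick the two keypoints with the largest errors (Eq. 12) and assign
    the smaller index to the left arm, the larger to the right (Eq. 13).

    P_t, P_star: (m, 2) arrays of current and goal keypoints.
    """
    errors = np.linalg.norm(P_t - P_star, axis=1)   # per-keypoint error
    delta_P = errors.mean()                         # shape error, Eq. (11)
    order = np.argsort(errors)
    g, g2 = order[-1], order[-2]                    # largest and second-largest
    g_L, g_R = min(g, g2), max(g, g2)
    return delta_P, g_L, g_R
```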
V. RESULTS
A. Hardware Setup
As illustrated in Fig. 1, our bimanual experimental platform consists of two UR3 robotic manipulators, each equipped with a 2-fingered Robotiq gripper. To facilitate bimanual manipulation, they face each other at an interval of 0.6 m. An Intel RealSense L515 camera is mounted to sense the top-down view of the manipulation space with a resolution of 1280 × 780. The spatial transformation between the depth camera and the dual arms (T_L, T_R) is calibrated through markers. Each contact is a cylinder (radius = 4 cm, height = 1 cm), localized via ArUco markers. All contacts are glued to the table, keeping them stable during the whole manipulation process. The contacts are ordered according to the detected sequential keypoints P* of the goal shape S* of the DLO. Considering the physical limitations, the operation space of each robot is constrained to a ring-shaped region.
B. Perception
For perception in the real environment, we utilize OpenCV [27] to segment the DLO S[t] from the raw observation I[t] with a morphological-operation-based color filter, yielding a binary image. To balance accuracy and efficiency, we resize S[t] to 128 × 64 for subsequent processing.
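A minimal sketch of such a color-filter segmentation with OpenCV; the HSV thresholds below are placeholders that depend on the DLO's color and lighting, not values from the paper:

```python
import cv2
import numpy as np

def segment_dlo(bgr_image, lower_hsv=(0, 80, 80), upper_hsv=(15, 255, 255)):
    """Segment the DLO as a binary mask with an HSV color filter followed
    by morphological opening/closing to remove speckle noise."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop small blobs
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    mask = cv2.resize(mask, (128, 64), interpolation=cv2.INTER_NEAREST)
    return (mask > 0).astype(np.uint8)
```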
In this section, we demonstrate the advantages of our synthetic-data-based feature extraction, which requires no manual data collection or annotation. To reduce the gap between simulation and reality, the synthetic dataset needs to render the physics plausibly. We quantitatively and qualitatively evaluate the robustness and accuracy of the perception model. Fig. 7 visualizes the synthetic dataset alongside the real data. Note that Fig. 7(a)-(b) are designed manually as references for an intuitive comparison with the simulated Fig. 7(c)-(d). These graphical results validate the visual similarity with the real dataset. Our synthetic dataset includes 7040 labeled images in total, divided into training and testing sets with a ratio of 10:1. Each sample is rendered as a binary image containing a randomly generated curve and m = 16 corresponding sorted keypoints in image coordinates. To improve the variety of the dataset, the geometric features of the DLO, including radius, length, and number of segments, are randomly sampled over a wide range.
Based on the synthetic dataset, we train our supervised keypoint detection network, whose architecture is shown in Fig. 8(a). As a fully convolutional network [28], it only involves convolution layers with a structure similar to VGG [29]. In the last layer, we apply a 1×1 convolution to regress the output to dimension 2 × 16, where each column represents a position p_j = (u_j, v_j) in the image frame. The training is optimized with the smooth L1 loss function

L_1(y, ŷ) = { (1/2)(y − ŷ)², for |y − ŷ| ≤ 1;  |y − ŷ| − 0.5, otherwise }   (15)

where y and ŷ denote the ground truth and the output of the network, respectively. Fig. 8(b) shows the corresponding loss trends for training, testing, and transferring. Note that both training and testing are implemented with our synthetic dataset for efficient processing. In addition, the transfer loss is evaluated on a real data collection with manual annotation, which includes fifty samples; this manually collected dataset is only used for evaluation and not to train the network. The promising results reveal the advantages of our perception method: 1) our synthetic dataset holds a high similarity with the real data, avoiding manual collection; 2) the keypoint detection network converges to minimize the detection error; and 3) the perception model generalizes to unseen samples in testing (simulation) and transferring (real).
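For reference, Eq. (15) is the standard smooth L1 loss; a minimal PyTorch sketch:

```python
import torch

def smooth_l1(y_hat, y):
    """Smooth L1 loss of Eq. (15), applied elementwise and averaged."""
    diff = torch.abs(y_hat - y)
    loss = torch.where(diff <= 1.0, 0.5 * diff ** 2, diff - 0.5)
    return loss.mean()

# Equivalently, torch.nn.SmoothL1Loss() implements the same piecewise form.
```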
As discussed above, geometric finetuning is proposed to account for the residual error. Fig. 9(a) illustrates several failure cases, in which some detected keypoints fall outside the positive region of the DLO, mainly in the steep areas of the curve. In comparison, Fig. 9(b) visualizes the keypoints with finetuning, graphically indicating that this method improves the representation quality of the sequential keypoints.
Compared with data-driven learning models, manually designed descriptors are an alternative for keypoint detection due to their intuitiveness and interpretability. Here, we provide a comparison between our method and a traditional geometric baseline, whose steps include skeletonizing the DLO from S[t] via [30], searching the corners of the DLO according to the mesh grid, and sorting and sampling the keypoints based on nearest-neighbor search. Our error metrics include the corner error E_C and the keypoint detection error E_P, defined as E_C = (1/2)(||p_1 − p̂_1||_2 + ||p_m − p̂_m||_2) and E_P = (1/m) Σ_{j=1}^{m} ||p_j − p̂_j||_2, respectively. We emphasize the corner error E_C here since the corners anchor the ordering of the keypoints. Statistically, we use the mean values (µ_C, µ_P) and variances (σ²_C, σ²_P) to evaluate the performance comprehensively. Note that p_j and p̂_j are the ground truth of the dataset and the output of the corresponding algorithm, respectively. The comparison results are shown in Table I. Due to the huge diversity of the state space of DLOs, it is very difficult to manually develop a sequential keypoint detection method that is robust to various configurations. Conversely, our perception network is robust owing to its data-driven nature.
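Both detection metrics follow directly from the ordered keypoints; a sketch:

```python
import numpy as np

def detection_errors(P_gt, P_pred):
    """Corner error E_C and mean keypoint error E_P for ordered keypoints.

    P_gt, P_pred: (m, 2) arrays of ground-truth / predicted keypoints.
    """
    d = np.linalg.norm(P_gt - P_pred, axis=1)
    E_C = 0.5 * (d[0] + d[-1])   # average error of the two endpoints
    E_P = d.mean()               # mean error over all m keypoints
    return E_C, E_P
```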
A key issue for descriptors is their representation level relative to the original data. Since we only predict keypoints of DLOs based on the link-chain model, we reconstruct the original shape through end-to-end connection. For comparison, we consider various unsupervised autoencoders [12], whose goal is also to extract a compact latent code of the high-dimensional data. We adapt three baselines to our case: 1) fully connected linear regression (LR), 2) a convolutional neural network (CNN), and 3) PointNet [31] (PC). Specifically, the training of the LR and CNN autoencoders is based on the binary cross-entropy (BCE) loss L_BCE, while the PC autoencoder is optimized with the Chamfer distance d. They are defined as:
L_BCE = − Σ_{i=1}^{n} [ y_i log ŷ_i + (1 − y_i) log(1 − ŷ_i) ]
d(Ŷ, Y) = Σ_{ŷ ∈ Ŷ} min_{y ∈ Y} ||ŷ − y||²_2 + Σ_{y ∈ Y} min_{ŷ ∈ Ŷ} ||ŷ − y||²_2   (16)
According to the network structure, LR and CNN take the 2D image format as input, while PC utilizes a 3D point cloud of the same size after down-sampling. Since our original state S[t] is a binary image, the shape reconstruction issue here is formulated as a classification of each pixel S[t](u, v). Hence, our evaluation metrics are the L1 loss L_1 and the IoU (Intersection over Union) between the reconstructed output and the original information, respectively:
L_1 = Σ_{i=1}^{n} |y_i − ŷ_i|,   IoU = |ŷ ∩ y| / |ŷ ∪ y|   (17)
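Both reconstruction metrics of Eq. (17) are straightforward for binary images; a sketch (here L1 is the sum of pixelwise absolute differences, matching Eq. (17)):

```python
import numpy as np

def reconstruction_metrics(y_hat, y):
    """Pixelwise L1 and binary IoU between reconstruction and original."""
    l1 = np.abs(y_hat.astype(float) - y.astype(float)).sum()
    inter = np.logical_and(y_hat > 0.5, y > 0.5).sum()
    union = np.logical_or(y_hat > 0.5, y > 0.5).sum()
    iou = inter / max(union, 1)   # guard against an empty union
    return l1, iou
```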
Table II shows the comparison results. Note that the FCN-L method utilizes the labeled keypoints for reconstruction and acts as the ground truth for our data-driven representation. The finetuned output of our perception (FCN-F) improves greatly over the raw output of the network (FCN-R). Compared with the LR and CNN autoencoders, our proposed FCN-F performs better in both L1 loss and IoU. The main reason is that autoencoders aim to reconstruct the entire information of the input (even the details) instead of attending to the fundamental features. Although PC [31] performs well on L1 loss, its IoU is poor, because it can only reconstruct the original data at a fixed size (due to its fixed input dimension) and thus inevitably loses some information.
C. Manipulation
To validate our hierarchical action planning framework, we evaluate the performance with multiple experiments using various contact configurations and goals. Fig. 10 shows four designed tasks in our experiment. Note that the initial configuration of the DLO S[0] is placed randomly on the table and the desired goal is provided artificially. For each experiment, we assume that the goal shape S* remains stable with the support of the contacts and the table. The third column in Fig. 10 illustrates our achieved results. Since our hierarchical action planning is iterative, the robot continuously manipulates the DLO until the shape similarity between the goal S* and the achieved shape S[H] is sufficient. In this experimental study, the shaping tasks are conducted with multi-step actions depending on the feature extraction of the DLOs, without learning their physical dynamics. As a multi-step decision-making process, we provide a typical example of the manipulation, as shown in Fig. 11. At the beginning, our algorithm computes the prior knowledge for the hierarchical action planning based on the goal image I*: 1) segment the DLO S* with the color filter and detect the corresponding sequentially ordered keypoints P* with our perception network, and 2) localize the contacts C = {c_1, · · · , c_k, · · · , c_q} and compute the contact-based benchmarks (B, B', J). Then, our algorithm enters the action loop. For each planning step, we sense the DLO S[t] and detect its keypoints P[t] via our perception network. With this, we check the contact construction based on our search benchmarks B. If it fails, we utilize the contact primitive to construct the corresponding contact c_k. Once the action plan (T_L, T_R) is accomplished, we update the state of the DLO S[t+1]. If the contact restrictions are met, we move on to the shape primitive for finetuning. The entire algorithm iterates until reaching the goal state S*, where the stopping criterion is that the binary IoU between S[t] and S* (Eq. 17) exceeds 40%. We also provide supplementary material with videos of the robotic bimanual manipulation. Based on the goal shapes in Fig. 10, we implement four trials under various initial configurations. Fig. 12 depicts the quantitative measurements of the scenarios in Fig. 10. Specifically, the minimization of the magnitude error ΔP is shown in Fig. 12(b). These results corroborate that the detected sequential keypoints can be used to manipulate the DLO into the desired specification. Fig. 12(a) demonstrates the similarity of the state at each time step with the goal shape S*, where IoU = 40% serves as a baseline. Note that the IoU value decreases compared with the previous time step in some cases, since the contact-based manipulation task is not continuous. Hence, a coarse-to-fine manner is necessary for this challenging task; otherwise, we would likely get stuck in a local optimum. These results also reveal that our algorithm is effective in both feature description and action planning for this kind of challenging task.
Although our planning framework is capable of dealing with the majority of these challenging tasks, there are some cases in which the system fails. Fig. 13 presents two typical failure examples. Although our perception network performs well in most cases, its performance is severely affected by rolling. This is because the convolution struggles with pixel-level details, and the finetuning then regresses the keypoints to the wrong region of the DLO, resulting in a sequence of disordered keypoints. Another failure case is caused by the lack of physical dynamics. Without any forecasting or feedback, our framework replans the action in an open-loop form. Thus, the system may converge to a local optimum at the contacts.
VI. CONCLUSIONS
In this paper, we demonstrate a keypoint-based bimanual manipulation framework for DLOs under environmental contact constraints. Trained on a synthetic image dataset, our perception extracts sequential keypoints of DLOs as descriptors. The hierarchical action planning framework performs the task with two defined primitives in a coarse-to-fine manner. The whole algorithm is semantic and requires no manual data collection or annotation. However, our method has some limitations. The perception network performs poorly in knotted cases, and as an open-loop method, the stability of the planner is not guaranteed. As future directions, we are interested in exploring the synergistic behaviors between the dual arms and extending the framework to other deformable objects, such as clothes and bags.
Fig. 3. Illustration of the geometric finetuning. A point lying in the background area is revised along the direction perpendicular to its tangent.
Fig. 4. Flow chart depicting the conversion and details of the action primitives.
Fig. 5. Graphical explanation of the contact primitive. (a) Based on the distance thresholds τ_i and τ_e, we acquire the benchmarks B_k in the goal state S* with respect to the contact c_k. (b) Based on the thresholds τ_c and τ_a, we search for points that meet the conditions with respect to each benchmark B_kb.
Fig. 6. Graphical explanation of the shape primitive. (a) The desired goal guides the framework for shaping. (b) Based on the goal and the current state, we select the points of interest to reshape.
Fig. 7. Visualizations of the synthetic dataset and comparison with the real collected data. (a) Visual observation. (b) Extracted state of the DLO by the color filter. (c) Rendered state of the DLO. (d) Rendered keypoints of the DLO.
Fig. 8. Details of the perception network. (a) Architecture of the FCN network. (b) Loss convergence for training, validation, testing, and transferring.
Fig. 9. Visualizations of finetuning the predicted keypoints according to the surrounding geometric features. (a) Raw output of the network. (b) Finetuning results.
Fig. 12. Shape error minimization process. (a) IoU between the goal state S* and the state S[t] at each time step t. (b) The keypoint error between P* and P[t].

Fig. 13. The stuck states of the failure cases.
Fig. 1. Overview of the keypoint-based bimanual manipulation framework for shaping DLOs with contacts. Given the goal I*, the DLO manipulation is formulated as a goal-directed task from the initial configuration I[0]. At each time step t, the perception detects the sequential keypoints P[t] corresponding to the state of the DLO S[t] extracted from the visual observation I[t]. The hierarchical control framework takes the current P[t] and the goal P* keypoints as input and outputs the action plan (T_L, T_R). The whole algorithm replans based on the new observation after the execution of the robots. The scheme iterates until reaching the desired goal.
Fig. 2. Data generation of labeled synthetic DLOs. Based on the Fourier series, we generate multiple curve segments and concatenate them end-to-end. This raw data undergoes sampling and stacking for keypoint labels and image input, respectively. After the spatial transformation for data augmentation, we simulate image rendering based on the camera acquisition principle.

The task is formulated as finding a sequence of actions (A[0], · · · , A[H]) within H steps, such that the last state S[H+1] (with the transition function S[H+1] = A[H] × S[H]) reaches the goal state S*. To apply the TAMP framework to this challenging task, we make some modifications to perception and planning. The state S[t] of the DLO is depicted as S[t] = {s[t]_1, · · · , s[t]_i, · · · , s[t]_n}. Based on the kinematic multibody model [25], we describe the DLO S[t] as a list of sequential keypoints P[t] = {p[t]_1, · · · , p[t]_j, · · · , p[t]_m} (m ≪ n), since this allows us to 1) narrow down the search space from the high-dimensional state to a low-dimensional latent space, and 2) obtain a compact feedback vector for semantic bimanual manipulation. Note that the end of the DLO closer to the left robot is denoted as the first keypoint in the perception. Based on this description, our hierarchical control framework combines high-level task planning and low-level motion control. Taking P[t] and P* as input, the high-level model designs the sub-goals P̂[t+1], while the low-level model plans the local motion A[t] to achieve P̂[t+1]. Note that P̂[t+1] is the designed sub-goal and is different from the detected keypoints P[t+1] at time step t + 1. We choose bimanual manipulation in a tabletop environment instead of a single arm to 1) constrain the unpredictable displacement of the DLO, and 2) enrich the diversity of the action. In this case, each plan A[t] is defined as a dual-arm pick-and-place action (T_L, T_R).
TABLE I
COMPARISON OF KEYPOINT DETECTION PERFORMANCE

Method   Corner Error E_C        Keypoint Error E_P
         µ_C      σ²_C           µ_P      σ²_P
Geo      1.96     29.78          28.21    389.96
Ours     1.71     12.5           3.36     21.44

Geo: geometric-based method; Ours: our data-driven algorithm. µ_C, σ²_C: mean and variance of the corner error E_C. µ_P, σ²_P: mean and variance of the keypoint error E_P.
TABLE II
COMPARISON OF SHAPE RECONSTRUCTION PERFORMANCE ON THE SYNTHETIC DATASET

Net        L1                               IoU
           Train     Valid     Test         Train     Valid     Test
FCN-L      0.0074    0.0074    0.0073       0.6645    0.665     0.6648
FCN-R      0.0274    0.0276    0.0272       0.0798    0.0783    0.0804
FCN-F      0.0208    0.0212    0.0205       0.2558    0.2493    0.2573
LR         0.0297    0.0299    0.0325       0.0846    0.0852    0.0643
CNN        0.0294    0.0296    0.0291       0.0996    0.0995    0.0991
PC [31]    0.02      0.0201    0.0199       0.0882    0.0878    0.0852

FCN-L: label of the FCN; FCN-R: raw output of the FCN; FCN-F: finetuned FCN; LR: linear regression; CNN: convolutional neural network; PC [31]: point cloud.
Fig. 10. Our designed DLO manipulation scenarios with environmental contacts. From left to right: the start state, the goal state, and the achieved state with our framework. All images are taken by our top-down RealSense L515 depth camera.

Fig. 11. A typical multi-step manipulation example (panels: Contact Primitive, Shape Primitive).
REFERENCES

[1] J. Zhu, B. Navarro, P. Fraisse, A. Crosnier, and A. Cherubini, "Dual-arm robotic manipulation of flexible cables," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 479-484, IEEE, 2018.
[2] I. Garcia-Camacho, M. Lippi, M. C. Welle, H. Yin, R. Antonova, A. Varava, J. Borras, C. Torras, A. Marino, G. Alenya, et al., "Benchmarking bimanual cloth manipulation," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1111-1118, 2020.
[3] D. Navarro-Alarcon, H. M. Yip, Z. Wang, Y.-H. Liu, F. Zhong, T. Zhang, and P. Li, "Automatic 3-d manipulation of soft objects by robotic arms with an adaptive deformation model," IEEE Transactions on Robotics, vol. 32, no. 2, pp. 429-441, 2016.
[4] K. Galassi and G. Palli, "Robotic wires manipulation for switchgear cabling and wiring harness manufacturing," in 2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS), pp. 531-536, IEEE, 2021.
[5] J. Sanchez, J.-A. Corrales, B.-C. Bouzgarrou, and Y. Mezouar, "Robotic manipulation and sensing of deformable objects in domestic and industrial applications: a survey," The International Journal of Robotics Research, vol. 37, no. 7, pp. 688-716, 2018.
[6] D. Navarro-Alarcon, Y.-H. Liu, J. G. Romero, and P. Li, "Model-free visually servoed deformation control of elastic objects by robot manipulators," IEEE Transactions on Robotics, vol. 29, no. 6, pp. 1457-1468, 2013.
[7] J. Zhu, D. Navarro-Alarcon, R. Passama, and A. Cherubini, "Vision-based manipulation of deformable and rigid objects using subspace projections of 2d contours," Robotics and Autonomous Systems, vol. 142, p. 103798, 2021.
[8] D. Navarro-Alarcon and Y.-H. Liu, "Fourier-based shape servoing: a new feedback method to actively deform soft objects into desired 2-d image contours," IEEE Transactions on Robotics, vol. 34, no. 1, pp. 272-279, 2017.
[9] J. Zhu, A. Cherubini, C. Dune, D. Navarro-Alarcon, F. Alambeigi, D. Berenson, F. Ficuciello, K. Harada, X. Li, J. Pan, et al., "Challenges and outlook in robotic manipulation of deformable objects," arXiv preprint arXiv:2105.01767, 2021.
[10] Z. Wang, X. Li, D. Navarro-Alarcon, and Y.-H. Liu, "A unified controller for region-reaching and deforming of soft objects," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 472-478, IEEE, 2018.
[11] D. Navarro-Alarcon, Y.-H. Liu, J. G. Romero, and P. Li, "On the visual deformation servoing of compliant objects: Uncalibrated control methods and experiments," The International Journal of Robotics Research, vol. 33, no. 11, pp. 1462-1480, 2014.
[12] P. Zhou, J. Zhu, S. Huo, and D. Navarro-Alarcon, "Lasesom: A latent and semantic representation framework for soft object manipulation," IEEE Robotics and Automation Letters, 2021.
[13] T. Tang, C. Wang, and M. Tomizuka, "A framework for manipulating deformable linear objects by coherent point drift," IEEE Robotics and Automation Letters, vol. 3, no. 4, pp. 3426-3433, 2018.
[14] D. Tanaka, S. Arnold, and K. Yamazaki, "Emd net: An encode-manipulate-decode network for cloth manipulation," IEEE Robotics and Automation Letters, vol. 3, no. 3, pp. 1771-1778, 2018.
[15] E. Johns, S. Leutenegger, and A. J. Davison, "Deep learning a grasp function for grasping under gripper pose uncertainty," in 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4461-4468, IEEE, 2016.
[16] D. Seita, A. Ganapathi, R. Hoque, M. Hwang, E. Cen, A. K. Tanwani, A. Balakrishna, B. Thananjeyan, J. Ichnowski, N. Jamali, et al., "Deep imitation learning of sequential fabric smoothing from an algorithmic supervisor," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9651-9658, IEEE, 2020.
[17] P. Sundaresan, J. Grannen, B. Thananjeyan, A. Balakrishna, M. Laskey, K. Stone, J. E. Gonzalez, and K. Goldberg, "Learning rope manipulation policies using dense object descriptors trained on synthetic depth data," in 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 9411-9418, IEEE, 2020.
[18] M. Yan, Y. Zhu, N. Jin, and J. Bohg, "Self-supervised learning of state estimation for manipulating deformable linear objects," IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2372-2379, 2020.
[19] S. Jin, C. Wang, and M. Tomizuka, "Robust deformation model approximation for robotic cable manipulation," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6586-6593, IEEE, 2019.
[20] T. Tang and M. Tomizuka, "Track deformable objects from point clouds with structure preserved registration," The International Journal of Robotics Research, p. 0278364919841431, 2018.
[21] H. Yin, A. Varava, and D. Kragic, "Modeling, learning, perception, and control methods for deformable object manipulation," Science Robotics, vol. 6, no. 54, 2021.
[22] L. P. Kaelbling and T. Lozano-Pérez, "Hierarchical task and motion planning in the now," in 2011 IEEE International Conference on Robotics and Automation, pp. 1470-1477, 2011.
[23] A. M. Wells, N. T. Dantam, A. Shrivastava, and L. E. Kavraki, "Learning feasibility for task and motion planning in tabletop environments," IEEE Robotics and Automation Letters, vol. 4, no. 2, pp. 1255-1262, 2019.
[24] J. Zhu, B. Navarro, R. Passama, P. Fraisse, A. Crosnier, and A. Cherubini, "Robotic manipulation planning for shaping deformable linear objects with environmental contacts," IEEE Robotics and Automation Letters, vol. 5, no. 1, pp. 16-23, 2019.
[25] M. Wnuk, C. Hinze, A. Lechler, and A. Verl, "Kinematic multibody model generation of deformable linear objects from point clouds," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 9545-9552, IEEE, 2020.
[26] P. Song and V. Kumar, "A potential field based approach to multi-robot manipulation," in Proceedings 2002 IEEE International Conference on Robotics and Automation, vol. 2, pp. 1217-1222, IEEE, 2002.
[27] G. Bradski, "The opencv library," Dr Dobb's J. Software Tools, vol. 25, pp. 120-125, 2000.
[28] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3431-3440, 2015.
[29] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.
[30] T. Y. Zhang and C. Y. Suen, "A fast parallel algorithm for thinning digital patterns," Communications of the ACM, vol. 27, no. 3, pp. 236-239, 1984.
[31] P. Achlioptas, O. Diamanti, I. Mitliagkas, and L. Guibas, "Learning representations and generative models for 3d point clouds," in International Conference on Machine Learning, pp. 40-49, PMLR, 2018.
Text Counterfactuals via Latent Optimization and Shapley-Guided Search
Quintin Pope popeq@oregonstate.edu
School of Electrical Engineering and Computer Science
Oregon State University
Xiaoli Z Fern xfern@oregonstate.edu
School of Electrical Engineering and Computer Science
Oregon State University
Abstract

We study the problem of generating counterfactual text for a classifier as a means for understanding and debugging classification. Given a textual input and a classification model, we aim to minimally alter the text to change the model's prediction. White-box approaches have been successfully applied to similar problems in vision where one can directly optimize the continuous input. Optimization-based approaches become difficult in the language domain due to the discrete nature of text. We bypass this issue by directly optimizing in the latent space and leveraging a language model to generate candidate modifications from optimized latent representations. We additionally use Shapley values to estimate the combinatoric effect of multiple changes. We then use these estimates to guide a beam search for the final counterfactual text. We achieve favorable performance compared to recent white-box and black-box baselines using human and automatic evaluations. Ablation studies show that both latent optimization and the use of Shapley values improve success rate and the quality of the generated counterfactuals.
We study the problem of generating counterfactual text for a classifier as a means for understanding and debugging classification. Given a textual input and a classification model, we aim to minimally alter the text to change the model's prediction. White-box approaches have been successfully applied to similar problems in vision where one can directly optimize the continuous input. Optimization-based approaches become difficult in the language domain due to the discrete nature of text. We bypass this issue by directly optimizing in the latent space and leveraging a language model to generate candidate modifications from optimized latent representations. We additionally use Shapley values to estimate the combinatoric effect of multiple changes. We then use these estimates to guide a beam search for the final counterfactual text. We achieve favorable performance compared to recent whitebox and black-box baselines using human and automatic evaluations. Ablation studies show that both latent optimization and the use of Shapley values improve success rate and the quality of the generated counterfactuals.
Introduction
Deep neural networks have achieved state-of-theart performances for many natural language processing (NLP) tasks (Otter et al., 2020;Ruder et al., 2019). When applying such models in real world applications, understanding their behavior can be challenging -the ever increasing complexity of such models makes it difficult to understand and debug their predictions. A human can explain why an example belongs to a specific concept class by constructing a counterfactual of an example that is minimally altered but belongs to a different class. Contrasting the original example with its counterfactual highlights the critical aspects signifying the concept class. We study a similar approach to understand deep NLP models' classification criteria.
Given a classifier and an input text, our goal is to generate a counterfactual by making a set of minimal modifications to the text that change the label assigned by the classifier. Additionally, our goal is to understand the model's behavior when processing naturally occurring inputs, hence we wish to generate grammatically correct and semantically plausible counterfactuals.
Automatic generation of text counterfactuals has been studied in different settings. Qin et al. (2019) considered counterfactual story rewriting which aims to minimally rewrite an original story to be compatible with a counterfactual event. Wu et al. (2021) used a fine-tuned GPT-2 model to generate general purpose counterfactuals that are not tied to a particular classification model. Yang et al. (2020) aim to generate plausible-sounding counterfactuals that flip a classification model's decision for financial texts.
Relatedly, textual adversaries also aim to change the model prediction (with modifications resembling natural text). The difference is that adversaries further aim to escape human detection (not changing a human's classification), whereas counterfactuals do not have such a requirement.
Another line of related work is style transfer (Sudhakar et al., 2019; Wang et al., 2019; Hu et al., 2017), which aims to modify a given text according to a target style. It differs from adversary or counterfactual generation in that it seeks to fully change all style-related phrases, as opposed to minimally perturbing a text to change a classifier's decision.
White-box approaches have been widely used to generate adversaries or counterfactuals for vision tasks where the continuous inputs can be optimized to alter model predictions (Goodfellow et al., 2014;Carlini and Wagner, 2017;Neal et al., 2018). Such optimization based approaches are difficult to apply to language due to the discrete nature of text. We circumvent this difficulty by directly optimizing in the latent space of the input towards the desired classification. We then exploit the language generation capability of pre-trained language models, available for most state-of-the-art NLP models such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019), to generate semantically plausible substitutions from the optimized latent representations. We further introduce Shapley values to estimate the combinatoric effect of multiple simultaneous changes, which are then used to guide a beam search to generate the final counterfactual.
Leveraging pre-trained language models to generate alternative texts has been a popular black-box approach in the recent literature on text adversaries (Li et al., 2020b;Garg and Ramakrishnan, 2020;Li et al., 2020a). Our work presents a first attempt to combine the strength of white-box optimization and the power of pre-trained language models. While Shapley values have been widely studied for the problem of feature importance (Lundberg and Lee, 2017;Sundararajan and Najmi, 2020) and data valuation (Jia et al., 2020), this is the first effort demonstrating their usefulness for text generation.
We compare our method to several white-box and black-box baselines on two different text classification tasks. Automatic and human evaluation results show that our method significantly improves the success rate of counterfactual generation, while reducing the fraction of input tokens modified and enhancing the semantic plausibility of generated counterfactuals. We also show through ablation studies that both counterfactual optimization of the latent representations and Shapley value estimates contribute to our method's strong performance.
Proposed Method
Problem statement. We are given a text classification model, M , an initial input token sequence, X = {x 0 , ..., x n−1 }, with vocabulary V . Model M outputs a classification scoreŷ = M (X) ∈ (0, 1), representing P (y = 1|X). Based on the score, a class label y ∈ {0, 1} is assigned. We seek to generate a counterfactual of X, which is defined as a set of tokens, X = {x 0 , ..., x n−1 }, that differs from X in no more than C max percent of locations, is grammatically plausible, and leads to a different classification, y . Here C max is an input parameter for maximum changes allowed, and smaller C max imposes stronger restrictions.
Note that our setup assumes binary classification, but can be easily extended to multi-class scenario to generate either targeted (with specified y ) or un-targeted counterfactuals (with unspecified y ).
Method overview. Our method consists of three steps. First, we generate a set of candidate token substitutions for each position. Second, we evaluate the capacity of these candidate substitutions to change model classification (individually and collectively). Finally, we construct the counterfactual by beam search.
Generating candidate substitutions
We generate candidate substitutions by first performing latent space optimization and then generate substitutions from the trajectory of latent representations using a language model.
Given an input token sequence X = {x_0, ..., x_{n−1}}, we assume model M contains an embedding layer that maps this discrete input sequence into a continuous embedding E = {e_0, ..., e_{n−1}}. The goal is to optimize a sparsely altered E' = {e'_0, ..., e'_{n−1}} such that the model will output y', a target class different from M's initial prediction y. With slight abuse of notation, let M(E') denote M's classification score when replacing E with E' as the input embedding; we optimize the following objective:

min_{E'} H(y', M(E')) + λ Σ_{j=0}^{n−1} |e'_j − e_j|   (1)

which minimizes the cross-entropy H between M(E') and the desired y', with a LASSO regularization (weighted by λ) to favor sparse divergence from the original E.
To reduce the sensitivity to the stopping point of the optimization and to produce diverse candidates, we optimize E' for K steps and consider the full optimization trajectory {E'_k : k = 1, · · · , K} to generate the candidate substitutions using the pre-trained language model associated with model M.
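As a concrete illustration, a minimal PyTorch sketch of this optimization loop might look as follows; the wrapper model_from_embeds (a classifier forward pass that accepts embeddings directly), the weight lam, and the optimizer settings are illustrative assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def optimize_embeddings(model_from_embeds, E, y_target, steps=30, lam=0.1, lr=0.1):
    """Optimize E' per Eq. (1): cross-entropy toward y_target plus an
    L1 (LASSO) penalty on the divergence from the original embedding E.
    Returns the trajectory [E'_1, ..., E'_K] used to propose substitutions."""
    E = E.detach()
    delta = torch.zeros_like(E, requires_grad=True)   # E' = E + delta
    opt = torch.optim.Adam([delta], lr=lr)
    trajectory = []
    for _ in range(steps):
        logits = model_from_embeds(E + delta)         # classifier run on embeddings
        loss = F.cross_entropy(logits, y_target) + lam * delta.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
        trajectory.append((E + delta).detach().clone())
    return trajectory
```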
Directly using the pre-trained language model is problematic because it does not operate in the same latent space as model M, whose encoder has been fine-tuned for the specific classification task at hand. A simple fix to this problem is to use the fine-tuned encoder of M (which is used to optimize E') and retrain the associated language modeling head. This produces a language model that operates in the same space as the optimized embeddings.
Specifically, we feed each E k (k = 1, . . . , K) through the encoder of M and the retrained language modeling head to generate a logit matrix T k of size |V | × n, where T k (s, t) quantifies the likelihood of observing the s-th token of the vocabulary at the t-th location given the overall context of E k .
To generate K candidate substitutions for each position t, we iteratively process T^1, · · · , T^K, selecting the token with the highest logit score, excluding the original x_t and previous selections. Let S^k_t be the set of candidate substitutions for position t generated at iteration k considering T^k; it is computed as follows:
S^k_t = S^{k−1}_t ∪ { argmax_{s ∉ S^{k−1}_t ∪ {x_t}} T^k(s, t) }   (2)
At the end of this step, we produce a set of K candidate substitutions for each input position.
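A sketch of this iterative selection over the trajectory logits (tensor shapes follow the definitions above; names are illustrative):

```python
import torch

def candidate_substitutions(logit_mats, original_ids):
    """Build K candidates per position from trajectory logits, per Eq. (2).

    logit_mats: list of K tensors of shape (|V|, n).
    original_ids: length-n list of original token ids (always excluded).
    Returns: list of n lists, each holding K distinct candidate token ids.
    """
    n = logit_mats[0].shape[1]
    candidates = [[] for _ in range(n)]
    for T in logit_mats:                            # one matrix per optimization step
        for t in range(n):
            scores = T[:, t].clone()
            scores[original_ids[t]] = float("-inf")  # never propose x_t itself
            for s in candidates[t]:                  # nor previous selections
                scores[s] = float("-inf")
            candidates[t].append(int(scores.argmax()))
    return candidates
```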
Evaluating Candidate Substitutions
In the second step, we compute a metric that measures each candidate substitution's capacity to change the classification when applied in combination with other substitutions. Toward this goal, we consider the Shapley value, which was originally proposed in cooperative game theory (Shapley, 1951) and has been used to measure feature importance for model interpretability (Lundberg and Lee, 2017). For a multi-player coalition-based game, the Shapley value of a player represents how valuable the player is as a potential coalition member. In our context, a coalition L is a set of simultaneous substitutions, and the value V(L) is measured by L's capacity to change model M's prediction. Let X_L denote the input generated by applying all substitutions in L to X (by definition, L must not contain multiple substitutions to the same location, which would create a conflict), and let M(X_L) be M's prediction score. We define V(L) to be M(X_L) − M(X) if we wish to flip a negative prediction and M(X) − M(X_L) otherwise.
The Shapley value of a single substitution s measures the expected marginal value of adding s to a coalition not already containing s. To ensure computational tractability, we constrain the size of the coalition to be a fixed value c s . As such, coalitions of any other sizes will have value zero. Conceptually this measures the potential value of substitution s when we modify exactly c s tokens.
Under this setting, it is straightforward to show that the Shapley value of a single substitution s can be estimated by the following equation:
SV(s) = (1/|L_s|) Σ_{L ∈ L_s} V(L) − (1/|L_{/s}|) Σ_{L ∈ L_{/s}} V(L)   (3)

where L_s (L_{/s}) denotes the set of coalitions containing (not containing) s that satisfy the size constraint.
Fully enumerating L s and L /s to compute Equation 3 is infeasible in most situations. We use two strategies to improve efficiency. First, we apply filtering to remove unimportant locations from further consideration. We adapt the Norm-Grad saliency method described by Rebuffi et al. (2020) to text and use the following gradient-based saliency score.
saliency(i) = ||(∇_{e_i} ŷ) ⊙ e_i||_2   (4)

where ∇_{e_i} ŷ denotes the gradient of the original classification score ŷ with respect to e_i, the embedding of the i-th token, and ⊙ represents the Hadamard product (elementwise multiplication).
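This saliency requires a single backward pass; a sketch, again assuming a model wrapper that consumes embeddings and returns the scalar score ŷ:

```python
import torch

def token_saliency(model_from_embeds, E):
    """Norm-Grad style saliency of Eq. (4): ||grad_{e_i}(y_hat) * e_i||_2 per token.

    E: (n, d) embedding tensor; model_from_embeds returns the scalar score y_hat.
    """
    E = E.detach().requires_grad_(True)
    y_hat = model_from_embeds(E)                 # scalar classification score
    y_hat.backward()
    return (E.grad * E).detach().norm(dim=-1)    # one saliency value per position
```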
Our second strategy is to approximate the Shapley values by sampling in the space of allowed substitutions. Suppose we want to evaluate each substitution w times on average and there are a total of N s substitutions to be evaluated. It is interesting to note that we do not need N s · w evaluations since each evaluation simultaneously contributes to the estimates of all c s substitutions that it contains.
We apply filtering to consider only the top C max × n locations, and fix the coalition size to be 50% of that (c s = 0.5 × C max × n). Each important location contributes K candidate substitutions. For input of length n, there are C max × K × n total substitutions to evaluate. Because each coalition evaluation covers 0.5 × C max × n substitutions, to evaluate each substitution w times on average, we need to evaluate 2 × w × K coalitions, which is independent of n and C max .
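A sketch of the sampled estimate of Eq. (3) with a fixed coalition size c_s; value_fn and the (location, token) encoding of substitutions are illustrative assumptions:

```python
import random
from collections import defaultdict

def sampled_shapley(value_fn, substitutions, c_s, n_coalitions):
    """Monte-Carlo estimate of Eq. (3) with coalitions of fixed size c_s.

    value_fn(coalition) -> change in classification score toward y'.
    substitutions: list of (location, token) pairs; a coalition never
    contains two substitutions at the same location.
    """
    with_s, without_s = defaultdict(list), defaultdict(list)
    for _ in range(n_coalitions):
        coalition, used_locs = [], set()
        for s in random.sample(substitutions, len(substitutions)):  # shuffled order
            if len(coalition) == c_s:
                break
            if s[0] not in used_locs:        # avoid conflicting locations
                coalition.append(s)
                used_locs.add(s[0])
        v = value_fn(coalition)              # one model call updates all members
        members = set(coalition)
        for s in substitutions:
            (with_s if s in members else without_s)[s].append(v)
    return {s: sum(with_s[s]) / max(len(with_s[s]), 1)
               - sum(without_s[s]) / max(len(without_s[s]), 1)
            for s in substitutions}
```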
Constructing the Counterfactual
In the final step, we search for the optimal subset of substitutions via breadth-first beam search. The search space covers all possible subsets of nonconflicting substitutions, each subset corresponding to a unique candidate counterfactual.
We initialize the beam with the root of the search tree, which is the empty subset. At each iteration, we expand a node in the beam with a successor function returning b successors, each adding a single substitution. For a given search node, denoted by its subset L, we construct b successors by selecting b substitutions with the best Shapley values that do not conflict in location with any s ∈ L or introduce a redundant subset.
We then evaluate each successor node by applying its substitutions to the original input X and computing model M 's output on the resulting X . We rank all successors based on the model's score for the desired class y minus the fraction of tokens modified by the successor in question and populate the new beam with the top b candidates.
We limit the search depth to be C max × n, constraining our method to never modify more than C max percent of the input tokens. During search, if we generate a candidate that M classifies as y , we stop immediately and return that candidate as our final output. As such, the time we spend for beam search depends on how quickly we find a successful counterfactual.
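Putting the pieces together, the beam search could be sketched as follows (model_score, apply_subs, and the 0.5 decision threshold are illustrative assumptions for a binary classifier):

```python
def beam_search(model_score, apply_subs, ranked_subs, n_tokens, c_max=0.15, b=15):
    """Breadth-first beam search over substitution subsets.

    model_score(X') -> probability of the target class y'.
    ranked_subs: (location, token) pairs sorted by descending Shapley value.
    Returns the first subset whose counterfactual flips the classification,
    or None once the depth limit c_max * n_tokens is reached.
    """
    beam = [frozenset()]
    seen = set(beam)
    for _ in range(int(c_max * n_tokens)):
        successors = []
        for node in beam:
            used_locs = {loc for loc, _ in node}
            added = 0
            for s in ranked_subs:                 # best Shapley values first
                if added == b:
                    break
                if s[0] in used_locs:
                    continue
                child = node | {s}
                if child in seen:                 # skip redundant subsets
                    continue
                seen.add(child)
                p = model_score(apply_subs(child))
                if p > 0.5:                       # prediction flipped: stop early
                    return child
                # rank by target-class score minus fraction of tokens modified
                successors.append((p - len(child) / n_tokens, child))
                added += 1
        successors.sort(key=lambda sc: sc[0], reverse=True)
        beam = [child for _, child in successors[:b]]
        if not beam:
            break
    return None
```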
Summary of approach
We summarize our method as text Counterfactuals via Latent Optimization and Shapley-guided Search (CLOSS). CLOSS has three primary hyperparameters: K, the number of candidate substitutions generated per token locations; w, the average number of times we wish to evaluate each substitution; b, the beam width of the beam search that constructs the final counterfactual. The default values are K = 30, w = 5, b = 15. The impact of these parameters will be explored in the experiments.
Empirical Evaluation
To evaluate our proposed method, we consider two different text classification tasks: sentiment classification and natural language inference.
Experimental Setup
Data sources. We use the IMDB dataset (Maas et al., 2011) for sentiment classification. This is a binary classification dataset based on movie reviews from IMDB. For the natural language inference task, we use the QNLI dataset (Rajpurkar et al., 2016), which is a binary task derived from the Stanford question answering dataset. Each example contains a question and a context. The classifier must determine if the context answers the question.
Following the evaluation scheme used by Li et al. (2020a), we sample 1000 random data points from IMDB of length less than or equal to 100 words as our "short" IMDB data. We do not filter the QNLI dataset. The average word counts for short IMDB and QNLI are shown in Table 1 (row 1).
Classification models. For each task, we consider two classification models, RoBERTa (Liu et al., 2019) and BERT (Devlin et al., 2019), trained by TextAttack (Morris et al., 2020). We report the performance of both models in Table 1.

Evaluation criteria. We consider the following performance metrics that measure the ability of a method to successfully generate counterfactuals and the quality of the generated counterfactuals.
• Failure rate (%F): the percent of inputs for which the method fails to change the model's prediction. • Fraction of tokens changed (%C): the average token modification rate among successfully generated counterfactuals. • BLEU: the average BLEU score between successfully generated counterfactuals and their original inputs. • Perplexity (P): Following Zang et al. (2020), we use the exponentiated language modeling loss of GPT-2 (Radford et al., 2019) to compute perplexity to score linguistic plausibility.
Baselines. We compare against adversarial baselines because we were unable to find counterfactual methods with open-source implementations. We carefully identified a set of baselines closely related to CLOSS with respect to methodology, specifically focusing on black-box methods that leverage pretrained language models (BERT-Attack, BAE) and a white-box method using gradients and beam search (HotFlip). Unless otherwise stated, we use the implementations in the TextAttack package (Morris et al., 2020). All black-box methods use some saliency measure to prioritize substituting important tokens. While CLOSS estimates saliency from the gradient, the black-box baselines use leave-a-token-out estimates, i.e., by removing or masking a token. BERT Adversarial Example (BAE) (Garg and Ramakrishnan, 2020) is a black-box method that generates potential substitutions by masking out input tokens and using the pre-trained BERT language model to suggest replacements. BERT-Attack (Li et al., 2020b) is also a black-box method. It generates substitutions by feeding the entire unmasked input into the BERT language model to suggest replacements.
Textfooler (Jin et al., 2020) is a black-box method that uses the word embeddings of Mrkšić et al. (2016) to generate substitutions by selecting the vocabulary tokens whose embeddings have the highest cosine similarity with the original token. PWWS (Ren et al., 2019) is a black-box method that substitutes words with their WordNet synonyms, prioritized by word saliency and the resulting change in classification probability.

Adaptation of adversarial baselines for fair comparison. Adversaries differ from counterfactuals in that they additionally seek to retain the text's "true" class relative to human judgement. In this regard, generating adversaries is more difficult than generating counterfactuals. Here we adapt the (adversary-generating) baselines to generate counterfactuals, thereby allowing a fair comparison. All original baseline implementations employ heuristic constraints to preserve the original semantic content (and thus the true class) of the input. Most methods require a minimum cosine similarity between the Universal Sentence Encoder (Cer et al., 2018) (USE) representations of the modified text and the original input. Additional heuristics include not substituting stop words and requiring substitutions to have the same part of speech as the original. These heuristics directly modify the search space of a generation method, and thus can impact both the success rate and the quality of counterfactual generation.
For CLOSS and our implementation of HotFlip, we do not employ such heuristics. Additionally, we created unconstrained versions of the TextAttack (Morris et al., 2020) implementations (denoted by the suffix '-U') of all other baselines by removing the adversarial constraints. Arguably, PWWS-U and TextFooler-U remain more constrained than CLOSS because they only use synonyms (WordNet-based and embedding-based, respectively) for substitutions. However, the search spaces of BAE-U, BERT-Attack-U, and HotFlip are fully comparable to that of CLOSS.
TextAttack by default ignores any input misclassified by the model M , because the concept of an "adversarial" example does not readily extend to misclassified inputs. For counterfactual generation, we do not have this concern. Hence our evaluation seeks to generate a counterfactual that flips the model's classification regardless of its correctness.
Baseline parameters. For HotFlip, we consider two versions: a default version (HotFlip D) that uses the parameters suggested by Ebrahimi et al. (2018), and an optimized version (HotFlip O), where the parameters and search procedure are optimized for performance. See Appendix A.1 for details of our HotFlip implementations. For all other baselines, we use the default parameters from TextAttack, which are the recommended parameters from the original papers.
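For reference, baseline recipes in TextAttack are typically instantiated as below. This is only a sketch of standard TextAttack usage; the checkpoint name and the example input are illustrative rather than taken from our experiments:

```python
import transformers
import textattack

# One of the TextAttack-trained classifiers (checkpoint name illustrative).
name = "textattack/bert-base-uncased-imdb"
model = transformers.AutoModelForSequenceClassification.from_pretrained(name)
tokenizer = transformers.AutoTokenizer.from_pretrained(name)
wrapper = textattack.models.wrappers.HuggingFaceModelWrapper(model, tokenizer)

# Build a baseline recipe with its recommended default parameters.
attack = textattack.attack_recipes.TextFoolerJin2019.build(wrapper)
result = attack.attack("a gripping, beautifully shot film", 1)  # (text, label)
print(result)
```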
Results
We report the results of all methods for short IMDB and QNLI in Table 2. Note that BERT-Attack's handling of multi-token words is computationally intractable on our datasets; thus we do not report performance for the original BERT-Attack. Here we limit all methods to change no more than 15% of tokens by setting C max = 0.15. The impact of different C max values is explored later in Figure 1.
Comparing to the white-box baseline. The default HotFlip implementation substantially underperforms by all measures. Optimizing the parameters and search procedure for HotFlip leads to greatly improved performance. Comparing CLOSS with HotFlip O, we note that the failure rate of our method is slightly worse for QNLI and slightly better for short IMDB. For both datasets, the counterfactuals generated by CLOSS have fewer modifications and higher BLEU scores. The most striking difference is in the perplexity score, where CLOSS wins by a large margin. This is not surprising, as HotFlip does not account for the semantic plausibility of the generated sentences, whereas our method uses the language model to propose semantically plausible substitutions. Note that GPT-2 and RoBERTa are cased models, while BERT is uncased; this explains BERT's higher perplexity.
Comparing to black-box methods. We first observe that the heuristic constraints used by these methods have a drastic impact on performance. Specifically, by removing these constraints, the failure rates of all methods are much reduced. However, the resulting counterfactuals tend to have lower quality, indicated by increased perplexity. Comparing CLOSS to both variants, we see that our method achieves a highly competitive failure rate with few edits. CLOSS also achieves the lowest perplexity in most cases, with the exception of the RoBERTa model on QNLI, where BERT-Attack-U has slightly lower perplexity.
Impact of Varying C max
We consider C max values of 0.1, 0.15, 0.2, 0.3 and 0.5. Figure 1 plots perplexity against failure rate for these values (HotFlip D is excluded due to its poor performance and to preserve the graph scaling for the remaining methods). Increasing C max allows methods to change more input tokens, reducing their failure rates. However, a higher C max also leads to greater distortion of the input, raising perplexity. Thus, methods with a better perplexity/failure-rate trade-off have curves that fall closer to the lower-left corner of the plots. In this regard, CLOSS has the best performance in all comparisons, except against BERT-Attack on RoBERTa QNLI, where the two methods appear comparable.
Ablation studies
We consider three ablated versions of CLOSS. CLOSS-EO removes the optimization step and instead generates potential substitutions by feeding the original input into the default pre-trained language model associated with model M. CLOSS-RTL skips retraining the language modeling head and uses the language modeling head of the pre-trained language model. As a result, the language modeling head for this ablation has a different latent space compared to the fine-tuned encoder of classifier M. CLOSS-SV removes the Shapley value estimates of each substitution's impact on classification. Instead, we prioritize substitutions during beam search based on the token saliency (Eq. 4). We compare the performance of CLOSS with its ablations in Table 3. Here we omit the BLEU score because it strongly correlates with %C in Table 2.

Effect of embedding optimization. By removing the optimization step, CLOSS-EO has significantly more failures, but lower perplexity. This is not surprising, because optimizing the embedding increases the chance of flipping the prediction but carries the risk of producing "unnatural" embeddings that lie outside the space of texts previously observed by the language model. This also suggests that CLOSS-EO can be a good candidate for scenarios where the "naturalness" of the text is critical.
Effect of retraining language modeling head.
It is interesting to note that CLOSS-RTL has comparable perplexity to CLOSS, but a higher failure rate. We believe this is because the retrained language modeling head can generate tokens that better match the data distribution of IMDB and QNLI (but not of English text in general), i.e., the distribution of tokens to which the classifier M is sensitive.
Effect of Shapley values. By removing Shapley value estimates, CLOSS-SV sees substantial degradations in all measures, suggesting critical importance of this step to our method.
Computational Considerations
The estimation of Shapley values for CLOSS incurs a substantial cost in terms of the number of queries to the given model. Indeed, the number of queries used by CLOSS can be significantly higher than for some baselines (we report the average number of model queries used by CLOSS and the baselines in Table 4 in the Appendix; in practice, CLOSS, implemented without parallelization, can generate counterfactuals for typical inputs in seconds). This section takes a closer look at the computational trade-offs surrounding Shapley value estimation.

Shapley or not. Given the computational cost of estimating Shapley values, would a more thorough search (e.g., using a larger beam width) remove the need for computing them? To explore this question, we consider beam widths b = 5, 10, 15 and 20, and plot the resulting failure rates of CLOSS and CLOSS-SV against the number of queries to the model for short IMDB and BERT in Figure 2(a); figures for RoBERTa and QNLI are similar and thus omitted. The figure shows that even with a larger beam width and a higher number of queries, the performance of CLOSS-SV still trails behind CLOSS. It also shows that the Shapley-value-guided search reduces CLOSS's sensitivity to the beam width b, both in terms of failure rate and the number of queries needed.

Precision of Shapley. The Shapley value is estimated via sampling, and the sampling rate is controlled by the parameter w. Intuitively, a larger w leads to more accurate Shapley values, but incurs a higher computational cost. We explore how sensitive our method is to the parameter w in Figure 2(b&c). Specifically, Figure 2(b) plots the failure rate of CLOSS in flipping BERT's prediction for both short and long IMDB. We see that as long as w is reasonably large (≥5), the performance is fairly robust. Note that other measures like perplexity and BLEU score show similar trends, which are shown in Figure 3 in the Appendix. Figure 2(c), on the other hand, plots the average number of queries to model M required for w from 1 to 10. We see an interesting phenomenon for long IMDB, where increased w actually leads to a decreased number of queries. While this may appear counter-intuitive at first sight, it actually demonstrates the power of good Shapley value estimates in speeding up the search. This phenomenon, however, was not observed for the short IMDB data, likely due to the substantially smaller search space thanks to the shorter input length.
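The exact sampling scheme used by CLOSS is not spelled out in this section, but the following sketch of a generic permutation-sampling Shapley estimator conveys the role of the sampling-rate parameter w. The function name and the value_fn interface (mapping a set of applied substitutions to the model's target-class probability) are our own illustration:

```python
import random

def shapley_estimates(candidates, value_fn, w=5, seed=0):
    """Monte-Carlo Shapley values for a set of candidate substitutions."""
    rng = random.Random(seed)
    phi = {c: 0.0 for c in candidates}
    n_perms = w * len(candidates)  # larger w: more permutations, higher cost
    for _ in range(n_perms):
        perm = list(candidates)
        rng.shuffle(perm)
        applied, prev = [], value_fn([])
        for c in perm:
            applied.append(c)
            cur = value_fn(applied)
            phi[c] += (cur - prev) / n_perms  # average marginal contribution
            prev = cur
    return phi
```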
Human and Qualitative Evaluation
Human evaluations. For human evaluations, we choose to compare CLOSS against BERT-Attack and HotFlip O, the two baselines performing the best in perplexity and flip rate, respectively.
We randomly selected 100 original texts from IMDB for evaluation, with the restriction that all three methods must successfully flip the classification while changing 15% or less of the original tokens. Additionally, we excluded texts with more than 50 tokens to ease the burden on evaluators. Using the BERT classifier, we applied BERT-Attack, CLOSS and HotFlip to generate counterfactuals for each input. Eight human evaluators were each assigned 25 original texts and asked to rank (ties allowed) the three counterfactuals in order of grammatical correctness. Each input was evaluated by two evaluators; the inter-evaluator agreement per pairwise comparison is 75.4%. Human evaluators ranked CLOSS competitively with BERT-Attack and HotFlip, assigning average ranks of 1.54 to CLOSS, 1.68 to BERT-Attack and 2.50 to HotFlip. The difference between CLOSS and HotFlip is statistically significant (one-sided sign test, p-value < 0.0001).
Qualitative analysis of generated text. Inspecting the generated counterfactuals, we observe some interesting patterns, summarized below. See the Appendix (Tables 5 and 6) for specific examples.
For the IMDB dataset, CLOSS often changes one or two sentiment words while the rest of the input still supports the original prediction. This suggests that the model may be triggered by a few sentiment words, ignoring most input. Identifying such critical substitutions will allow us to inspect the patterns of these "triggers" to reveal the weakness of the classifier. We also observe that when the model misclassifies, it often takes little change to correct the model, which helps debug the mistake.
Sometimes CLOSS introduces synergistic changes, where each change's capacity to influence the classification seems contingent on the other. Finally, CLOSS sometimes distorts sentiment phrases into non-words to remove their impact on classification, possibly making up for its lack of ability to remove words.
For the QNLI dataset, unsurprisingly, we note that changing from entailment to non-entailment is far easier than the opposite (see Figures 5(c,d) in Appendix), and often requires changing only a few words shared by the Question and Context. Conversely, CLOSS can sometimes change non-entailment to entailment by introducing some shared word(s). This suggests that the model relies heavily on overlapping words to decide entailment.
More detailed analysis can be found in the Appendix, including how CLOSS's changes are distributed among part-of-speech tags (Figure 4) and a failure analysis for CLOSS (A.7, Table 7).
Conclusion
We are motivated by how humans use counterfactuals to explain the concept of a class and seek to automatically generate counterfactual text input as a means to understand a deep NLP model and its definition of class. We assume full white-box access to the given model and perform optimization in the latent space to maximize the probability of predicting a target class. We then map from the optimized latent representation to candidate token substitutions using a language model. A key novelty of CLOSS is using Shapley values to estimate the potential of a token substitution in changing the model's prediction when used in combination with other substitutions. The Shapley value is then used to guide a breadth-first beam search to generate the final counterfactual. Through both automatic and human evaluations, we show that CLOSS achieves highly competitive performance both in terms of the success rate of generating counterfactuals as well as the quality of the generated counterfactuals.
Our approach has several limitations. As a white-box approach, we require full access to the model, which can be restrictive in practical applications. Our approach currently only considers substitutions, excluding deletions and insertions. Finally, our method is only applicable to models that are based on pre-trained language models. Future work will adapt CLOSS to adversarial and black-box settings. We also hope to improve the efficiency of CLOSS via more efficient Shapley value estimation (Chen et al., 2018; Jia et al., 2020).
A Appendix
A.1 Optimized HotFlip.
In the original HotFlip's beam search, for each candidate we score every possible single-token substitution by using gradients to estimate the substitution's impact on the classification. The score of a candidate counterfactual is the sum of the scores of the individual substitutions introduced by the candidate. These scores form a surrogate value function, which the beam search aims to maximize. At each step of the beam search, we generate the successors (children) of each current beam member (parent) by applying a single substitution to any location in the parent text.
In our optimized HotFlip, we change the search procedure to promote diversity in the beam search by requiring every child generated off a common parent to modify a distinct location in the text. We observe that this small modification substantially boosts HotFlip's performance. We also increase the beam size from the 10 suggested by Ebrahimi et al. (2018) to 100. Note that the original HotFlip's parameters are designed for character-level modification, which has a substantially smaller space of possible substitutions for each location. This might explain the poor performance of HotFlip D, and the need to modify the search procedure for token-level generation.
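A minimal sketch of one step of this diversity-promoting beam search; the data representation (texts as token lists) and the propose/score interfaces are assumptions made for illustration:

```python
def beam_step(beam, propose, score, beam_size=100):
    # propose(parent) yields (position, substitution, gain) triples, where
    # gain is the gradient-based estimate of the substitution's impact.
    children = []
    for parent in beam:
        taken = set()
        for pos, sub, gain in sorted(propose(parent), key=lambda t: -t[2]):
            if pos in taken:
                continue  # children of a common parent modify distinct positions
            taken.add(pos)
            children.append(parent[:pos] + [sub] + parent[pos + 1:])
    children.sort(key=score, reverse=True)  # rank by the surrogate value function
    return children[:beam_size]
```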
A.2 Average Number of Queries
In Table 4, we present the average number of queries to the given model for CLOSS and the baseline methods. We show two numbers per cell: the first is the average number of queries over all attempts (successful or failed); the second is the average number of queries for successful trials that modify 15% or less of the input tokens.
A.3 Performance when varying w
In Figure 3, we plot the other performance measures, including %C, BLEU and Perplexity, as a function of the parameter w.
A.4 Qualitative Analysis
In Tables 5 and 6, we present examples of counterfactuals generated by CLOSS that highlight interesting patterns we noticed.
A.5 Part of Speech Changes
In Figure 4, we show the percentages of total changes that occur in each part of speech type. We split the results by direction of change. Note that for IMDB, CLOSS tends to modify adjectives more when flipping from negative to positive compared to flipping from positive to negative.
When flipping entailment to non-entailment, CLOSS is more likely to modify nouns compared to flipping non-entailment to entailment. This may re

A.6 Distribution over Percent of Tokens Changed

Figure 5 contains histograms showing the distribution of the percent of tokens changed over successfully generated counterfactuals. Note that flipping entailment to non-entailment requires fewer changes than the reverse.
A.7 Error Analysis
We explore potential sources of counterfactual generation failure in Table 7 by significantly increasing the computational resources devoted to certain steps of CLOSS and recording the resulting generation failure rate (%F). Even greatly increasing w does not reduce %F significantly. In comparison, increasing the beam width is more effective, especially with regard to IMDB. The most effective interventions are to increase either K, or to render all tokens salient and scale w in proportion to the associated increase in potential substitutions. Note that when we increase K without changing w, the compute spent on estimating Shapley values scales linearly with K. These results suggest that failures in the beam search are more of a bottleneck on performance than failing to identify useful substitutions with Shapley values.
However, we can significantly improve performance by increasing both the pool of potential substitutions and the compute spent on estimating Shapley values. This implies that many generation failures happen because the pool of potential substitutions that the default CLOSS hyperparameters are able to search through does not contain substitutions able to flip the classification.
Description | Text

(a) We can flip the class by changing a small fraction of the sentiment regions.
Old: Ruth Gordon is one of the more sympathetic killers that Columbo has ever had to deal with. And, the plot is ingenious all the way around. This is one of the best Columbo episodes ever. Mariette Hartley and G. D. Spradlin are excellent in their supporting roles. And Peter Falk delivers a little something extra in his scenes with Gordon.
New: Ruth Gordon is one of the more sympathetic killers that Columbo has ever had to deal with. And, the plot is ingenious all the way around. This is one of the worse Columbo episodes ever. Mariette Hartley and G. D. Spradlin are excellent in their supporting roles. And Peter Falk delivers a little something extra in his scenes with Gordon.
Old: ruth gordon is one of the more sympathetic killers that columbo has ever had to deal with. and, the plot is ingenious all the way around. this is one of the best columbo episodes ever. mariette hartley and g. d. spradlin are excellent in their supporting roles. and peter falk delivers a little something extra in his scenes with gordon.
New: ruth gordon is one of the more sympathetic killers that columbo has ever had to deal with. and, the plot is ingenious all the way around. this is one of the worst columbo episodes ever. mariette hartley and g. d. spradlin are excellent in their supporting roles. and peter falk delivers a little something extra in his scenes with gordon.

(b) We sometimes see synergistic changes where each change's capacity to influence the classification seems contingent on the other.
Old: Excellent documentary that still manages to shock and enlighten. Unfortunately, times haven't changed much since this was made and it is thus an important piece for all freedom-conscious Americans to see.
New: Very pathetic that still manages to shock and enlighten. Unfortunately, times haven't changed much since this was made and it is thus an important piece for all freedom-conscious Americans to see.
Old: I love all his work but this looks like nothing.. sorry.. This looks more like a "David Lynch copycat". I think people like it only because "it's from David Lynch".
New: I love all his work but this hits like everything.. sorry.. This looks more like a "David Lynch copycat". I think people like it only because "it's from David Lynch".

(c) RoBERTa incorrectly classified the text as positive. Flipping to negative requires few changes.
Old: Some good movies keep you in front of the TV, and you are dying to see the result. This movie does not have highs and lows. It simply describes a young girl's family life in Africa. People come and go, the weather and the background are all the same.
New: Some decent movies keep you in front of the TV, and you are dying to see the result. This movie does not have highs and lows. It simply describes a young girl's family life in Africa. People come and go, the weather and the background are all the same.

(d) BERT classifies the text as negative. Greater changes are required to flip to positive.
Old: some good movies keep you in front of the tv, and you are dying to see the result. this movie does not have highs and lows. it simply describes a young girl's family life in africa. people come and go, the weather and the background are all the same.
New: some good movies keep you in front of the tv, and you are loving to see the result. this movie does not lack highs and lows. it simply describes a young girl's family life in africa. people come and go, the weather and the background are all the same. (e) Sometimes distorts words/grammar; Note how CLOSS removes "I loved this" by convering "loved this" into "lovedoo", thereby removing the original's positive sentiment Old: I loved this mini series. Tara Fitzgerald did an incredible job portraying Helen Graham, a beautiful young woman hiding, along with her young son, from a mysterious past. As an anglophile who loves romances... this movie was just my cup of tea and I would recommend it to anyone looking to escape for a few hours into the England of the 1800's. I also must mention that Toby Stephens who portrays the very magnetic Gilbert Markham is reason enough to watch this wonderful production.
New: I lovedoo mini series. Tara Fitzgerald did an incredible job portraying Helen Graham, a beautiful young woman hiding, along with her young son, from a mysterious past. As an anglophile who loves romances... this movie was just my cup of tea and I would recommend it to anyone looking to escape for a few hours into the England of the 1800's. I also must mention that Toby Stephens who portrays the very magnetic Gilbert Markham does reason enough to watch this dreadful production.

(f) Non-words can significantly change the sentiment classification. "thisecrated" does not seem particularly sentiment-related, yet it can flip the classification of this otherwise very positive review.
Old: absolutely fantastic! whatever i say wouldn't do this underrated movie the justice it deserves. watch it now! fantastic!

New: absolutely fantastic! whatever i say wouldn't do thisecrated movie the justice it deserves. watch it now! fantastic!

Table 5: Example IMDB counterfactuals generated by CLOSS. Each row demonstrates an interesting pattern of behavior we observed. We use green to highlight words whose changes flip the text to positive and red for changes that flip texts to negative.

Description | Text

(a) CLOSS can often flip entailment to non-entailment by changing a word that appears in both the Question and Context.
Old: Question: When was Luther's last sermon? Context : His last sermon was delivered at Eisleben, his place of birth, on 15 February 1546, three days before his death.
New: Question: When was Luther's new sermon? Context : His last sermon was delivered at Eisleben, his place of birth, on 15 February 1546, three days before his death.
Old: Question: when was luther's last sermon? Context : his last sermon was delivered at eisleben, his place of birth, on 15 february 1546, three days before his death.
New: Question: when was luther's traveling sermon? Context: his last sermon was delivered at eisleben, his place of birth, on 15 february 1546, three days before his death.

(b) CLOSS can sometimes induce entailment by changing a word in the Question (Context) to match one in the Context (Question).
Old: Question: Who were the ESPN Deportes commentators for Super Bowl 50? Context : On December 28, 2015, ESPN Deportes announced that they had reached an agreement with CBS and the NFL to be the exclusive Spanish-language broadcaster of the game, marking the third dedicated Spanish-language broadcast of the Super Bowl.
New: Question: Who were the ESPN Deportes agreements for Super Bowl 50? Context: On December 28, 2015, ESPN Deportes announced that they had reached an agreement with CBS and the NFL to be the exclusive Spanish-language broadcaster of the game, marking the third dedicated Spanish-language broadcast of the Super Bowl.

(c) If lexical overlap fails, we often need many edits to change non-entailment to entailment.
Old: Question: Who was the number two draft pick for 2011? Context : This was the first Super Bowl to feature a quarterback on both teams who was the #1 pick in their draft classes.
New: Question: Who was the show two draft pick for Kate? Context: This was the first Super half to feature a Premier on both teams who was the #1 pick in their draft classes.

Table 6: Example QNLI counterfactuals generated by CLOSS. Each row demonstrates an interesting pattern of behavior we observe. We use green to highlight words whose changes flip the text to entailment and red for changes that flip texts to non-entailment.
Figure 1: Plots of perplexity against failure rate as the maximum allowed percent of tokens changed (C max) varies. Values for C max are 10%, 15%, 20%, 30% and 50%. CLOSS values are averaged over three runs.
Figure 2: (a) Plot of failure rate of CLOSS and CLOSS-SV as a function of the number of model queries for short IMDB. Beam width ranges from 5 to 20 for both approaches. (b) Plot of failure rate of CLOSS in flipping BERT's prediction as a function of w for both short and long IMDB. (c) Plot of the number of BERT model queries used by CLOSS as a function of w for both short and long IMDB.
Figure 3: (a) Plot of the average percent of tokens changed by CLOSS as a function of w for both short and long IMDB. (b) Plot of the CLOSS average BLEU score as a function of w for both short and long IMDB. (c) Plot of the CLOSS average perplexity as a function of w for both short and long IMDB.
Figure 4: Bar charts showing how the changes CLOSS makes are distributed among the part-of-speech tags.

Figure 5: Histograms showing the distribution of the percent of tokens changed over successfully generated counterfactuals.
IMDB short
                    RoBERTa                     BERT
Method              %F     %C    BLEU   P       %F     %C    BLEU   P
CLOSS               4.2    3.1   0.92   72.4    4.1    2.8   0.93   98.9
HotFlip D           37.0   6.5   0.86   145     22.8   5.2   0.89   140
HotFlip O           7.1    5.1   0.88   122     4.5    4.18  0.90   129
BAE                 69.4   4.6   0.86   110     67.5   4.0   0.88   136
PWWS                14.6   5.9   0.83   96      14.0   4.7   0.86   125
TextFooler          22.3   6.3   0.82   91.5    31.9   5.7   0.83   132
BAE-U               16.6   5.7   0.85   107     25.1   4.9   0.87   141
BERT-Attack-U       6.3    4.4   0.90   78.2    22.2   4.7   0.88   120
PWWS-U              12.1   5.9   0.83   102     11.7   4.6   0.86   134
TextFooler-U        12.7   5.7   0.85   93.9    21.0   5.2   0.86   142

QNLI
                    RoBERTa                     BERT
Method              %F     %C    BLEU   P       %F     %C    BLEU   P
CLOSS               5.1    3.3   0.92   92.4    3.5    3.3   0.92   143
HotFlip D           18.8   4.7   0.90   130     19.1   4.4   0.90   174
HotFlip O           3.4    4.0   0.90   125     2.1    3.8   0.91   178
BAE                 34.6   3.7   0.87   94.4    33.2   4.0   0.87   175
PWWS                22.7   4.2   0.87   95.1    14.9   4.4   0.86   184
TextFooler          19.4   4.6   0.86   90.1    13.1   4.9   0.86   176
BAE-U               6.8    4.2   0.87   104     6.0    4.1   0.88   178
BERT-Attack-U       6.7    4.0   0.89   87.1    4.9    3.8   0.90   156
PWWS-U              16.0   4.3   0.87   107     8.7    4.3   0.87   201
TextFooler-U        7.4    4.4   0.87   101     4.6    4.3   0.88   180

Table 2: Comparison of CLOSS with baselines on the short IMDB and QNLI data. 'U' indicates the unconstrained version of a baseline. Our implementation of HotFlip uses exactly the same set of constraints as CLOSS. CLOSS values are averaged over three runs.
                 IMDB                              QNLI
                 RoBERTa          BERT             RoBERTa          BERT
Method           %F    %C    P    %F    %C    P    %F    %C    P    %F    %C    P
CLOSS            4.2   3.13  72.4 4.1   2.76  98.9 5.1   3.33  92.4 3.5   3.31  143
CLOSS-SV         9.4   5.73  84.5 11.6  5.06  116  7.3   5.13  108  6.4   5.05  159
CLOSS-EO         7.3   3.25  63.3 8.4   3.17  94.9 7.2   3.51  72.2 6.1   3.51  122
CLOSS-RTL        5.5   3.2   73.7 7.5   2.9   102  7.9   3.7   94.7 5.7   3.4   136

Table 3: Ablation results on IMDB and QNLI. Values are averaged over three runs.
Table 4: Average queries per sample for CLOSS and baselines.
Change                         IMDB RoBERTa   IMDB BERT   QNLI RoBERTa   QNLI BERT
CLOSS                          4.2            4.1         5.1            3.5
Increase beam width            0.9            0.8         2.6            3.3
Increase w                     2.1            3.2         3.9            4.0
Increase K                     0.4            0.3         1.4            2.4
Everything salient, fixed w    6.2            9.3         6.4            4.0
Everything salient, scale w    0.53           0.93        0.53           0.23

Table 7: Impact on failure rate (%F) of significantly increasing CLOSS hyperparameters. Beam width increases from 15 to 100, w from 5 to 50, and K from 30 to 300.
10
20
30
40
50
Percent
PUNCT
VERB
PRON
NOUN
ADV
DET
ADJ
ADP
NUM
PROPN
CCONJ
PART
POS Tag
P->N
N->P
0
5
10
15
20
25
30
35
40
Percent
PUNCT
VERB
PRON
NOUN
ADV
DET
ADJ
ADP
NUM
PROPN
CCONJ
PART
POS Tag
P->N
N->P
(a) BERT IMDB
(b) RoBERTa IMDB
0
10
20
30
40
Percent
PUNCT
VERB
PRON
NOUN
ADV
DET
ADJ
ADP
NUM
PROPN
CCONJ
PART
POS Tag
NE->E
E->NE
0
5
10
15
20
25
30
35
40
Percent
PUNCT
VERB
PRON
NOUN
ADV
DET
ADJ
ADP
NUM
PROPN
CCONJ
PART
POS Tag
NE->E
E->NE
(c) BERT QNLI
(d) RoBERTa QNLI
Specifically, we retrain the language modeling head using the text for which we intend to generate counterfactuals. In our experiments this only involves 1000 data points, leading to a very fast re-training process.
HotFlip can generate insertion/deletion edits in addition to substitutions. Our implementation only considers substitutions to be directly comparable with our method.
Acknowledgements

This work was partially supported by DARPA under grant N66001-17-2-4030.
Nicholas Carlini and David Wagner. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder.
Jianbo Chen, Le Song, Martin J. Wainwright, and Michael I. Jordan. 2018. L-Shapley and C-Shapley: Efficient model interpretation for structured data.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification.
Siddhant Garg and Goutham Ramakrishnan. 2020. BAE: BERT-based adversarial examples for text classification.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Zhiting Hu, Zichao Yang, Xiaodan Liang, Ruslan Salakhutdinov, and Eric P. Xing. 2017. Toward controlled generation of text. In International Conference on Machine Learning, pages 1587-1596. PMLR.
Ruoxi Jia, David Dao, Boxin Wang, Frances Ann Hubis, Nick Hynes, Nezihe Merve Gurel, Bo Li, Ce Zhang, Dawn Song, and Costas Spanos. 2020. Towards efficient data valuation based on the Shapley value.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment.
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2020a. Contextualized perturbation for textual adversarial attack.
Linyang Li, Ruotian Ma, Qipeng Guo, Xiangyang Xue, and Xipeng Qiu. 2020b. BERT-Attack: Adversarial attack against BERT using BERT.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.
Scott Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.
John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP.
Nikola Mrkšić, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gašić, Lina Rojas-Barahona, Pei-Hao Su, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Counter-fitting word vectors to linguistic constraints.
Lawrence Neal, Matthew Olson, Xiaoli Fern, Weng-Keen Wong, and Fuxin Li. 2018. Open set learning with counterfactual images. In Proceedings of the European Conference on Computer Vision (ECCV).
D. W. Otter, J. R. Medina, and J. K. Kalita. 2020. A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, pages 1-21.
Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5046-5056.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text.
Sylvestre-Alvise Rebuffi, Ruth Fong, Xu Ji, and Andrea Vedaldi. 2020. There and back again: Revisiting backpropagation saliency methods.
Shuhuai Ren, Yihe Deng, Kun He, and Wanxiang Che. 2019. Generating natural language adversarial examples through probability weighted word saliency. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1085-1097, Florence, Italy. Association for Computational Linguistics.
Sebastian Ruder, Matthew E. Peters, Swabha Swayamdipta, and Thomas Wolf. 2019. Transfer learning in natural language processing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials, pages 15-18, Minneapolis, Minnesota. Association for Computational Linguistics.
M. Schuster and K. Nakajima. 2012. Japanese and Korean voice search. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5149-5152.
Lloyd Shapley. 1951. Notes on the n-person game - II: The value of an n-person game.
Akhilesh Sudhakar, Bhargav Upadhyay, and Arjun Maheswaran. 2019. "Transforming" delete, retrieve, generate approach for controlled text style transfer. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3269-3279, Hong Kong, China. Association for Computational Linguistics.
Mukund Sundararajan and Amir Najmi. 2020. The many Shapley values for model explanation.
Ke Wang, Hang Hua, and Xiaojun Wan. 2019. Controllable unsupervised text attribute transfer via editing entangled latent representation.
Tongshuang Wu, Marco Tulio Ribeiro, Jeffrey Heer, and Daniel S. Weld. 2021. Polyjuice: Automated, general-purpose counterfactual generation.
Linyi Yang, Eoin M. Kenny, Tin Lok James Ng, Yi Yang, Barry Smyth, and Ruihai Dong. 2020. Generating plausible counterfactual explanations for deep transformers in financial text classification.
Yuan Zang, Fanchao Qi, Chenghao Yang, Zhiyuan Liu, Meng Zhang, Qun Liu, and Maosong Sun. 2020. Word-level textual adversarial attacking as combinatorial optimization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6066-6080, Online. Association for Computational Linguistics.
| [] |
[
"Continual Barlow Twins: continual self-supervised learning for remote sensing semantic segmentation",
"Continual Barlow Twins: continual self-supervised learning for remote sensing semantic segmentation"
] | [
"Valerio Marsocci ",
"Simone Scardapane "
] | [] | [] | In the field of Earth Observation (EO), Continual Learning (CL) algorithms have been proposed to deal with large datasets by decomposing them into several subsets and processing them incrementally. The majority of these algorithms assume that data is (a) coming from a single source, and (b) fully labeled. Real-world EO datasets are instead characterized by a large heterogeneity (e.g., coming from aerial, satellite, or drone scenarios), and for the most part they are unlabeled, meaning they can be fully exploited only through the emerging Self-Supervised Learning (SSL) paradigm. For these reasons, in this paper we propose a new algorithm for merging SSL and CL for remote sensing applications, that we call Continual Barlow Twins (CBT). It combines the advantages of one of the simplest self-supervision techniques, i.e., Barlow Twins, with the Elastic Weight Consolidation method to avoid catastrophic forgetting. In addition, for the first time we evaluate SSL methods on a highly heterogeneous EO dataset, showing the effectiveness of these strategies on a novel combination of three almost non-overlapping domains datasets (airborne Potsdam dataset, satellite US3D dataset, and drone UAVid dataset), on a crucial downstream task in EO, i.e., semantic segmentation. Encouraging results show the superiority of SSL in this setting, and the effectiveness of creating an incremental effective pretrained feature extractor, based on ResNet50, without the need of relying on the complete availability of all the data, with a valuable saving of time and resources. | 10.1109/jstars.2023.3280029 | [
"https://export.arxiv.org/pdf/2205.11319v2.pdf"
] | 248,987,094 | 2205.11319 | ab761b8f7eb22243446a4ff2f815d0b7abfc509c |
Continual Barlow Twins: continual self-supervised learning for remote sensing semantic segmentation
Valerio Marsocci
Simone Scardapane
Continual Barlow Twins: continual self-supervised learning for remote sensing semantic segmentation
Index Terms: Self-supervised learning, continual learning, semantic segmentation, remote sensing.
In the field of Earth Observation (EO), Continual Learning (CL) algorithms have been proposed to deal with large datasets by decomposing them into several subsets and processing them incrementally. The majority of these algorithms assume that data is (a) coming from a single source, and (b) fully labeled. Real-world EO datasets are instead characterized by a large heterogeneity (e.g., coming from aerial, satellite, or drone scenarios), and for the most part they are unlabeled, meaning they can be fully exploited only through the emerging Self-Supervised Learning (SSL) paradigm. For these reasons, in this paper we propose a new algorithm for merging SSL and CL for remote sensing applications, that we call Continual Barlow Twins (CBT). It combines the advantages of one of the simplest self-supervision techniques, i.e., Barlow Twins, with the Elastic Weight Consolidation method to avoid catastrophic forgetting. In addition, for the first time we evaluate SSL methods on a highly heterogeneous EO dataset, showing the effectiveness of these strategies on a novel combination of three almost non-overlapping domains datasets (airborne Potsdam dataset, satellite US3D dataset, and drone UAVid dataset), on a crucial downstream task in EO, i.e., semantic segmentation. Encouraging results show the superiority of SSL in this setting, and the effectiveness of creating an incremental effective pretrained feature extractor, based on ResNet50, without the need of relying on the complete availability of all the data, with a valuable saving of time and resources.
I. INTRODUCTION
In recent years, improvements in speed and acquisition technologies have drastically increased the amount of available Earth Observation (EO) images [1]. These improvements bring challenging issues to the widespread use of Remote Sensing (RS) classification [2] and semantic segmentation techniques, due to (a) the continuous arrival of new data, possibly belonging to partially overlapping domains, (b) the sheer size of the datasets, requiring vast amounts of processing power, and (c) the increasing quantity of data which has not been labeled by a domain expert. The main aim of this paper is to propose an algorithm to deal with these three characteristics simultaneously, that we call Continual Barlow Twins (CBT). This algorithm combines the strengths of two separate lines of research: Continual Learning (CL) for processing a large heterogeneous dataset in an incremental way, satisfying constraints (a) and (b) above, and Self-Supervised Learning (SSL) to deal with the lack of labeling information, satisfying constraint (c) above. In the following, we describe briefly the two issues separately, before introducing our proposed solution.

Valerio Marsocci is with the Department of Computer, Control and Management Engineering Antonio Ruberti (DIAG), Sapienza University of Rome, 00185, Rome, Italy, mail: valerio.marsocci@uniroma1.it. Simone Scardapane is with the Department of Information Engineering, Electronics and Telecommunication (DIET), Sapienza University of Rome, 00184, Rome, Italy, mail: simone.scardapane@uniroma1.it.

Fig. 1. [...] It can easily be seen that the images from the different settings can be considered independent but non-identically distributed.
Problem #1: EO datasets are heterogeneous
Consider the situation where we trained a semantic segmentation network on a dataset of Italian satellite images. If we receive a new labeled dataset of similar images from a different nation, ideally, we would like our network to be able to segment equally well images coming from the two countries. However, most of the algorithms developed for classification or segmentation in EO suffer from catastrophic forgetting in this context, requiring the acquired knowledge to be discarded and the model to be retrained from scratch on the combination of the two datasets [3]. In a wide range of EO applications, the strategy of retraining the whole model is computationally expensive and costly [4]. Therefore, there is a need to ensure that newly developed models have the ability to learn new tasks while retaining satisfying performance on previous ones. The cause of the catastrophic forgetting problem is that different tasks or datasets are independent but not identically distributed in the feature domain, as it is known that the distributions of various RS datasets vary greatly [5], due to different resolutions, acquisitions, textures, and captured scenes. This is even more evident in urban scenes and in datasets made of images acquired from different types of sensors, e.g., drone, airborne, satellite (see Fig. 1 for a visualization of this phenomenon).
In this paper, we leverage a CL algorithm [3] to mitigate the catastrophic forgetting problem and allow our algorithm to generalize to different feature distributions without the requirement of accessing already-seen data.
Problem #2: EO datasets are largely unlabeled

Most methods, especially in EO applications, are framed as supervised systems, relying on annotated data. More than in other fields, for drone, aerial and satellite images, it is difficult to rely on a labeled dataset, in light of the high cost and the amount of effort, time and domain expertise that are required [6]. In computer vision (CV), SSL has been proposed to handle this problem, reducing the amount of annotated data needed [7], [8]. The goal of SSL is to learn an effective visual representation of the input using a massive quantity of data provided without any label [9]. We can see the task as the need to build a well-structured and relevant set of features, able to represent an image in a way that is useful for several downstream tasks. There is a growing research line demonstrating how SSL techniques increase performance in EO applications [10], although evaluations have been limited to a single dataset or domain.
Contributions of the paper
In this paper, we propose and experimentally evaluate a novel algorithm that is able to train a deep network for RS by exploiting vast amounts of heterogeneous, unlabeled, continually-arriving data. Specifically, we show that it is possible to exploit the potential of SSL incrementally, obtaining an efficient and effective pretrained model built in several successive steps, without the need to re-train it from scratch every time new data is added [11]. The proposed Continual Barlow Twins (CBT) algorithm trains a feature extractor (ResNet50) with Barlow Twins (BT) [7], whose loss is integrated with a regularization term, borrowed from Elastic Weight Consolidation (EWC) [12], to avoid catastrophic forgetting. With the obtained feature extractor, we train a UNet++ [13] to perform semantic segmentation. When acquiring new RS data, CBT can be trained quickly, as it only needs to be updated on the new data, discarding all old data. Our method also provides computational efficiency, potentially allowing smaller organizations to train a large model on huge amounts of data that would otherwise be unfeasible.
Since the generalisation capabilities and benefits of SSL on RS data from non-overlapping domains (as shown in Fig. 1) are still unexplored, we also propose a new benchmark by combining three datasets whose images were captured with different sensors (drone, airborne and satellite data), with different resolutions, acquired under different conditions, and representing different objects and scenes. We show that SSL targeted to RS images can outperform standard pretraining strategies (e.g., ImageNet), and we expect this to become a useful benchmark scenario for further research on SSL and CL in remote sensing. We also show that the proposed CBT algorithm offers significantly more versatility and less computing time compared to a standard approach.
II. RELATED WORKS
In this section, we briefly review the relevant literature on SSL (Section II-A) and CL (Section II-B) in EO. We note that in CV the combination of these two technologies is starting to be explored, and some works have already demonstrated how SSL methods are well prepared, after small modifications, to learn incrementally [11], [14]. In EO, on the other hand, this combination has not yet been explored, to our knowledge, apart from a few embryonic contributions combining weak supervision with CL [15].
A. Self-supervised Learning in EO
In [16], the authors train CMC [8] on three large datasets with both RGB and multispectral bands. They then evaluate the effectiveness of the learned features on four datasets, solving downstream tasks of both single-label and multi-label classification. The same authors, in [10], apply a split-brain autoencoder to aerial images. In [17], the authors perform a semantic segmentation downstream task on the Vaihingen dataset [18], learning the features of the encoder of the network that solves the segmentation. A different approach is proposed in [19], where the authors adopt as contrastive strategy the use of RS images of the same areas in different time frames, introducing a loss term based on the geolocation of the tiles. Other contrastive strategies are proposed by [20] and [21]. [22] learns visual representations by inferring information on the visible spectrum from the other bands of BigEarthNet [23]. In [24], the authors propose a network that, imitating the discriminator of a generative adversarial network (GAN), identifies patches taken from two temporal images. A similar approach, with multi-view images, is proposed in [25]. In [26], the authors show the effectiveness of SSL pretraining for time-series classification. Finally, [27] uses an SSL strategy for transfer learning in super-resolution applications.
B. Continual Learning in EO
In [28], the authors proposed a two-block network for RS land cover classification tasks, where one module minimizes the error among classes during the training of a new task, and another module learns how to effectively distinguish among tasks, based on representations of past data stored in a linear memory. Similarly, [29] proposes a framework based on two purposes: adapting and remembering. Concerning the former, the authors save a copy of the network trained on the previous task, to store the information of the already-seen classes. The latter stores some old data, which feeds the network during the sequential training steps. Shaped for semantic segmentation, [5] proposes two regularization components: a representation consistency structure loss and a pixel affinity structure loss. The first retains the information in the isolated pixels. The second saves the high-frequency information shared throughout the tasks.
III. METHODOLOGY
A. Overview of the components
We consider a RS scenario where (a) data comes incrementally from multiple domains (e.g., drone, airborne and/or satellite images); (b) we cannot re-train the model from scratch when new data is received; (c) the majority of the data is unlabeled. We refer to each domain (or subset of the dataset) as a task, in accordance with the CL literature. To achieve our compound objective, the intuition is to embed a CL strategy in a self-supervised framework, by combining two algorithms that are considered state-of-the-art in their respective fields: (i) Barlow Twins (BT) [7], which trains a network by measuring the cross-correlation matrix between the outputs of two identical networks fed with distorted versions of a sample, and making it as close to the identity matrix as possible; (ii) Elastic Weight Consolidation (EWC) [12], which constrains the important weights of the network to stay close to the values obtained on previous tasks. In the next section, we highlight in detail how Continual Barlow Twins (CBT) works. A schematic overview of the method is provided in Fig. 2.
B. Continual Barlow Twins
Consider for now a single task, and denote by $X$ a batch of unlabeled images. Our main training step, taken from BT, produces two distorted views of $X$, $Y^A$ and $Y^B$, based on a set of data augmentation strategies $S$ (e.g., random rotations and scalings). In this paper we consider standard sets of data augmentations (see Section V), although augmentations specific to RS could also be considered. The two views are fed to a convolutional neural network with weights $\theta$, which produces two embeddings $Z^A$ and $Z^B$, respectively (assumed to be mean-centered along the batch dimension). To learn effective representations of the input images in a self-supervised fashion, we leverage the BT loss, which is composed of two terms, called the invariance and redundancy reduction terms:
$$\mathcal{L}_{BT}(X) = \underbrace{\sum_i \left(1 - C_{ii}\right)^2}_{\text{invariance term}} + \mu \underbrace{\sum_i \sum_{j \neq i} C_{ij}^2}_{\text{redundancy reduction term}} \qquad (1)$$
where $\mu$ is a positive constant balancing the invariance and redundancy reduction terms of the loss, and $C$ is the cross-correlation matrix, with values between -1 (total anti-correlation) and 1 (total correlation), computed between the outputs of the two identical networks along the batch dimension. Practically, the first term of the loss has the goal of making the diagonal elements of $C$ equal to 1; in this way, the embeddings become invariant to the applied augmentations. The second term of the loss aims to bring the off-diagonal elements of $C$ to 0. This ensures that the various components of the embeddings are decorrelated with each other, making the information non-redundant and enhancing the representations of the images.
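A minimal PyTorch sketch of the loss in Eq. (1); the default value of mu is an assumption (the weight used in the original BT paper), and the function name is ours:

```python
import torch

def barlow_twins_loss(z_a, z_b, mu=5e-3):
    # z_a, z_b: (N, D) embeddings of the two distorted views Y^A and Y^B.
    n = z_a.shape[0]
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)  # normalize along the batch
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = (z_a.T @ z_b) / n                   # (D, D) cross-correlation matrix
    diag = torch.diagonal(c)
    invariance = (1 - diag).pow(2).sum()             # push C_ii towards 1
    redundancy = c.pow(2).sum() - diag.pow(2).sum()  # push C_ij (i != j) to 0
    return invariance + mu * redundancy
```

In a training step, z_a and z_b would be the outputs of the shared ResNet50 backbone (plus projection head) on the two augmented views.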
Suppose now that the network has been trained on images coming from a task $T_1$ using the loss (1) (e.g., drone images), and that we receive a new dataset of images coming from a second task $T_2$ (e.g., satellite images). We denote the weights obtained at the end of the first training as $\theta^{T_1}$, and the data of the two tasks as $D_{T_1}$ and $D_{T_2}$, respectively. To retain old knowledge from $T_1$ and avoid catastrophic forgetting, we complement the BT loss (1) with an EWC regularization term [12], which forces the weights to stay close to $\theta^{T_1}$ depending on their importance, given by the diagonal of the Fisher information matrix $F$, a positive semidefinite matrix corresponding to the second derivative of the loss near the minimum. In our scenario, the loss cannot be decomposed for each individual data point, as it depends on the cross-correlations between the data in a mini-batch and its corresponding augmentations. To this end, denoting by $B_{T_1}$ the number of mini-batches $X$ that can be extracted from $D_{T_1}$, we approximate the i-th element of the diagonal of the Fisher information matrix as:
$$F_i = \frac{1}{B_{T_1}} \sum_{X \in D_{T_1}} \left( \frac{\partial \mathcal{L}_{BT}(X)}{\partial \theta_i^{T_1}} \right)^2 \qquad (2)$$
where L_BT(X) denotes the BT loss computed on mini-batch X, as in (1). Intuitively, each weight of the network is given an importance that depends on the square of the corresponding loss gradient. Given this approximation, the new loss for a batch of images X taken from the second task is given by:
$$\mathcal{L}(X) = \mathcal{L}_{BT}(X) + \sum_i \frac{\lambda}{2} F_i \left( \theta_i - \theta_i^{T_1} \right)^2 \qquad (3)$$
where L_BT(X) is the BT loss (1) computed on the data from task T2, and λ weighs the constraint on the previous task. If moving to a third task, we recompute the Fisher information matrix at the end of training on the second task and replace the previous one. The CBT approach is summarized in Figure 2, and the associated code is available online. 1 After the self-supervised pre-training, the network can be exploited for any downstream task of interest in EO. In particular, we explore in Section V fine-tuning for a semantic segmentation task.
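The two CBT ingredients, the diagonal Fisher approximation of (2) and the EWC-regularized loss of (3), can be sketched compactly. The helper names below are hypothetical (`bt_loss_fn` stands for any callable returning the BT loss of a model on a batch); this is a sketch, not the released code:

```python
import torch

def fisher_diagonal(model, loader, bt_loss_fn):
    # Eq. (2): average the squared gradients of the BT loss over the
    # B_T1 mini-batches of the previous task.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    num_batches = 0
    for batch in loader:
        model.zero_grad()
        bt_loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2
        num_batches += 1
    return {n: f / num_batches for n, f in fisher.items()}

def cbt_loss(model, batch, bt_loss_fn, fisher, old_params, lam=1e-2):
    # Eq. (3): BT loss on the current task plus the EWC quadratic penalty
    # anchoring important weights to their values after the previous task.
    penalty = sum((fisher[n] * (p - old_params[n]) ** 2).sum()
                  for n, p in model.named_parameters())
    return bt_loss_fn(model, batch) + 0.5 * lam * penalty
```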
IV. DATASETS
To perform the experiments, we build a novel dataset which is a combination of three previously introduced datasets, each containing images from a different source: airborne, satellite, and drone. As previously stated, the construction of a novel mixed dataset is crucial since the data is vastly heterogeneous, presenting almost non-overlapping domains, as shown in Fig. 1. In fact, the choice was dictated by the desire to demonstrate the effectiveness of SSL on a challenging task (that is, semantic segmentation), extending its validity even to the case of highly variable data, while most previous works focused on a single domain (see Section II). We briefly summarize each dataset next. Salient information is summarized in Table I.
TABLE I: Summary of the datasets used for the experiments.

Dataset      | Type of images | Number of images
Potsdam [18] | Aerial         | ∼5000
UAVid [30]   | UAV            | ∼7500
US3D [31]    | Satellite      | ∼11000

1) Potsdam: The ISPRS Potsdam dataset [18] consists of 38 high-resolution aerial true orthophotos (TOP), with four available bands (Near-Infrared, Red, Green and Blue). Each image is 6000 × 6000 pixels, with a Ground Sample Distance (GSD) of 5 cm, covering 3.42 km² in total. For our experiments, we took into consideration only the 38 RGB TOPs. These are annotated with pixel-level labels of six classes: background, impervious surfaces, cars, buildings, low vegetation, trees. We used the eroded masks, and we selected 24 images for training, 13 for testing and 1 for validation, without considering the background class, similarly to [32]. We cropped each image into 512 × 512 non-overlapping patches, ending up with 2640 images for training, 120 for validation and 1680 for test.
2) UAVid: The Unmanned Aerial Vehicle semantic segmentation dataset (UAVid) [30] consists of 42 video sequences, captured in 4K high resolution from an oblique point of view. UAVid is a challenging dataset due to the very high resolution of the images, the large scale variation, and the complexity of the scenes. The authors extracted ten labelled images per sequence, ending up with 420 images of 3840 × 2160 pixels. The annotated classes are 8: building, road, static car, tree, low vegetation, human, moving car, background clutter. The images are already divided into train, validation and test sets by the authors; however, the test segmentation maps have not yet been released. For this reason, we used a part (80%) of the validation set as test set in our experiments. Moreover, we cropped the images into 512 × 512 non-overlapping patches, ending up with ∼ 7500 images.
3) US3D: The US3D dataset [31] includes approximately 100 km² of coverage for the United States cities of Jacksonville, Florida and Omaha, Nebraska. Sources include incidental satellite images, airborne LiDAR, and feature annotations derived from LiDAR. The dataset is composed of 2783 images of 1024 × 1024 pixels, obtained from the WorldView-3 satellite: they are non-orthorectified and multi-view. The images have 8 bands, six of which are part of the visible spectrum and two of the near infrared. The semantic labels for the US3D dataset, derived automatically from the Homeland Security Infrastructure Program (HSIP), are five: ground, trees, water, building and clutter. For our experiments we considered only the RGB bands and all the classes. Also for this dataset, we cropped the images into 512 × 512 non-overlapping patches, ending up with more than 11000 images, randomly divided into train (∼ 70%), validation (∼ 10%) and test (∼ 20%).
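All three datasets are reduced to non-overlapping 512 × 512 patches before training. A simple NumPy sketch of this preprocessing step follows; the authors' exact tiling and border handling may differ:

```python
import numpy as np

def crop_patches(image, size=512):
    # Split an (H, W, C) image into non-overlapping size x size patches,
    # discarding any incomplete border strip.
    h, w = image.shape[:2]
    return [image[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

# A 6000 x 6000 Potsdam tile yields 11 x 11 = 121 full patches this way.
patches = crop_patches(np.zeros((6000, 6000, 3), dtype=np.uint8))
assert len(patches) == 121
```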
V. EXPERIMENTAL SETUP
For the training phase, a single Tesla V100-SXM2 32 GB GPU has been used. For the semantic segmentation task, we use UNet++ [13], with the Squeeze and Excitation strategy [33] and the softmax function as activation on the last layer.
For the experiments on all three datasets, we fix the batch size to 8, the number of epochs to 200 and the learning rate to 0.0001. Moreover, we use Adam as the optimizer, the Jaccard loss as the cost function, and the following set of augmentations: random horizontal flip, random geometric transformations (i.e., shifting, scaling, rotating), random Gaussian noise, and random radiometric transformations (i.e., brightness, contrast, saturation variations). The mean intersection over union (mIoU) and the F1-score (F1) are the selected evaluation metrics. Under these conditions, we test different pre-training strategies for UNet++. As baselines, we use ResNet50 pretrained on ImageNet in a supervised manner, and ResNet50 pretrained on ImageNet data with BT 2, considering that no other algorithms have been proposed for this kind of task. For the proposed CBT approach, we pretrain ResNet50 incrementally on the three datasets in this order: US3D, UAVid, Potsdam (with λ = 10e−2); the rest of the parameters are left as in [7]. As an additional (i.e., upper-bound) baseline, we consider ResNet50 pretrained on the three datasets with standard BT, using the same set of parameters provided in [7]. Finally, to assess catastrophic forgetting, we trained another encoder sequentially on the three datasets (US3D, UAVid, Potsdam) with a vanilla BT, without CL constraints. With these encoders, we trained the semantic segmentation models in a supervised way, with different percentages (10%, 50%, 100%) of labeled data for the three datasets. We ran all the experiments three times, reporting the mean and the standard deviation of the resulting metrics. The results are shown and commented on in the next section (Sec. VI).
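For reference, a setup of this kind can be expressed with the segmentation_models_pytorch package. This is a sketch under the assumption that this library (or an equivalent) was used; the paper does not name an implementation, and the class count and checkpoint loading are placeholders:

```python
import segmentation_models_pytorch as smp
import torch

# UNet++ with a ResNet50 encoder and "scse" (spatial + channel squeeze and
# excitation) attention in the decoder; softmax is applied inside the loss.
model = smp.UnetPlusPlus(
    encoder_name="resnet50",
    encoder_weights="imagenet",      # or load a CBT/BT self-supervised checkpoint
    decoder_attention_type="scse",
    classes=5,                       # e.g., the five Potsdam classes
)
criterion = smp.losses.JaccardLoss(mode="multiclass", from_logits=True)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```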
VI. EXPERIMENTAL RESULTS
The results of the experiments on the downstream task are shown both in Figure 3 and in Tables III, IV, and V. As can be seen, the feature extractors that perform best are the ones obtained from the CBT and BT training on the three selected EO datasets. Precisely, these configurations outperform, with reference to mIoU, their counterpart trained with ImageNet supervised pretraining by 3.39% and 3.69% on average, respectively. Moreover, it is interesting to note that the performance of the models with the CBT feature extractor is only slightly lower (an average drop of a negligible ≈0.3% mIoU) than that obtained with an encoder trained by means of BT, demonstrating that the proposed approach gives up only a small part of the optimal performance, against clear advantages in terms of computational efficiency and general versatility. In absolute terms, it is worth noticing once again how self-supervision strategies lead to better results than exclusively supervised ones [16], [19], and, above all, how this is even more true when combining EO data from domains that are not homogeneous in terms of type of sensor, acquisition, resolution and objects represented. In particular, in the next paragraphs, we examine in depth some specific evidence regarding: computational times (Sec. VI-A), UAVid experiments (Sec. VI-B), Potsdam experiments (Sec. VI-C), US3D experiments (Sec. VI-D) and catastrophic forgetting (Sec. VI-E).

2 Downloaded at https://github.com/facebookresearch/barlowtwins
A. Computational times
As stated earlier, one of the main advantages of the proposed method is the shorter computational time with a very limited performance drop in the case of incremental data availability. As already stressed, this situation is especially likely in the field of EO, where new data often arrives continuously [29], due to satellite revisit times, scheduled acquisition campaigns and other variable parameters. In Table II we can observe the results of the experiments. Concerning the traditional BT strategy, for the training of the three considered datasets, we simulate an incremental arrival of data as follows:
(1) training on US3D only;
(2) joint training of US3D and UAVid;
(3) training of the three datasets together. This strategy is employed to create a salient feature extractor for all datasets, with the aim of avoiding catastrophic forgetting in situations of incremental data availability. On the other hand, for CBT, step 1) refers to training on US3D, step 2) on UAVid and step 3) on Potsdam, as previously stated. Observing Table II, we can easily affirm that our method saves nearly 50% of the training time when the data are available incrementally. It is also interesting to note that, even in the case of complete and immediate availability of all data, the computational times are comparable (28.75 h for CBT vs 24.61 h for BT, where the computational time of the latter consists of just the third step).
B. UAVid
According to the results shown in Table III and represented in Figures 3 a) and 4 a), when using a limited amount of data, the performance of the supervised pretrained encoder is inferior overall. Looking at Figure 4 a), we can see that a better encoder, when 10% of the labels are used, leads to a more stable and effective training. On the other hand, the trainings with 50% of the data are the most similar across the different pretrained encoders, with just a ∼0.5% mIoU gap between the worst result (74.92% mIoU, obtained with the ImageNet encoder) and the best (75.39% mIoU, achieved with the BT encoder), which is almost negligible considering the standard deviations of the results. This trend can be mainly explained by what has been stated above with respect to the image domain. In fact, the images captured by drone are definitely more similar to close-range camera images, like the ImageNet ones, than those from the other two RS datasets. This is mainly due to the point of view from which the images were captured. For Potsdam and US3D, the viewpoint is almost nadiral, while for UAVid it is oblique; more precisely, the camera angle is set to around 45 degrees to the vertical direction, at a flight height of about 50 m. As also stated by the authors [30], a non-nadiral view allows easier reconstruction of object geometry (i.e., volume, shape, etc.), making the use of more sophisticated feature extractors less effective. These reasons favour the high performance of ImageNet pretrained models, especially with a limited number of labels. However, by increasing the number of labels, the models with encoders trained on the proposed datasets are able to adapt their features to the best configuration to solve the task with the best performance (80.64% mIoU with respect to 77.87% mIoU of the ImageNet encoder experiment). In addition, the fact that the pretraining on UAVid was the second of the three steps only slightly affected the performance of the CBT pretraining strategy (80.12% mIoU), with a very limited drop in performance (∼0.5%).
C. Potsdam
As far as the results on Potsdam are concerned, in Table IV and Figures 3 c) and 4 c), we see that the gap between the results with the self-supervised encoder (71.42% mIoU) and the supervised encoder (64.12% mIoU) is the largest among all experiments. This trend also holds for the experiments with a limited amount of training data, as the gap in the curves of Figure 4 c) shows. For example, with 10% of the data, the gap between the ImageNet encoder (58.36% mIoU) and the BT encoder (62.72% mIoU) is ∼4.4%. This can be explained by the fact that this dataset is the one with the least amount of data among the three available (see Table I). This insight is supported by the fact that the gap between the performance with ImageNet encoders and the performance with CBT and BT encoders is wider also for the other datasets when only 10% of the data is used (see also Figure 3). Therefore, it is definitely the dataset that benefits the most from a more efficient encoder feature selection. This is also supported by the fact that the Potsdam domain is comparable with that of US3D, a very large dataset, capable of improving the representations that can be used during the training of the Potsdam semantic segmentation, confirming similar intuitions reached, for example, in [34].
D. US3D
As far as the US3D dataset is concerned, self-supervision techniques again lead to better results on the downstream task, and in this case CBT performs best of all (CBT 84.56% vs ImageNet 83.41% mIoU). In fact, this is the first evidence that can be easily argued from Table V and Figures 3 b) and 4 b). Moreover, considering also the standard deviations of the final results, we observe that there are no significant differences in performance between the other encoders (BT ImageNet 84.49% vs CBT 84.56% vs BT 84.43% mIoU), since US3D is a large dataset, composed of several images of the same area captured from different points of view (i.e., multi-view). This redundancy, acting as a form of data augmentation itself, facilitates the resolution of the task on this dataset: once an efficient feature extractor is engaged, convergence is achieved quite effectively. This intuition is confirmed also by the training curves, shown in Figure 4 b), where the trainings, except for some small instability in the ImageNet curve, follow a similar behavior. It is not surprising that similar results are presented in [16], where self-supervision is applied to other large datasets.
E. Overcoming Catastrophic Forgetting
The CL strategy, in addition to generically improving performance as demonstrated in the previous sections, has the main advantage of overcoming catastrophic forgetting. To illustrate this, we performed a series of experiments in which we trained a BT model in a sequential fashion, without introducing any CL strategy. Specifically, starting from a ResNet50 pretrained on ImageNet with BT, we performed three training steps, each starting from the model obtained in the previous step: i) BT on the US3D dataset; ii) BT on UAVid; iii) BT on Potsdam. Finally, we used the resulting ResNet50 as the backbone of the UNet++ model for the semantic segmentation downstream task. Figure 5 and Table VI show the effectiveness of CBT as a strategy to overcome catastrophic forgetting, making it possible to train a powerful encoder without the need to rely on either all the data together or high computational resources. In particular, we can affirm that constraining the parameters of the model pretrained on ImageNet is an easy and effective strategy to train the backbone. In fact, when it is not possible to rely on a vast amount of data specifically shaped for EO tasks, it is better to exploit the capabilities of models pretrained on huge datasets, as in this scenario.
Moreover, we can see in Figure 5 and Table VI that once again UAVid is the dataset that, being the most different from the others, suffers most from catastrophic forgetting (e.g., a drop of ∼ 2.5% when 10% of the labels are used, and ∼ 2% with 100% of the labels). In fact, as we have already observed, Potsdam and US3D both have nadiral views, making their characteristics more similar. For this very reason, the performance of the 3-step BT on US3D is never excessively worse than the counterpart trained with CBT, even though the average performance drop (of ∼ 1.5%) is not negligible, US3D being the first of the three datasets used. On the other hand, as one can expect, the performance on the Potsdam dataset with the 3-step BT is really similar to the one reached with CBT. In fact, Potsdam being the last dataset on which the algorithm is trained, most of the knowledge of the encoder comes from this dataset. This is visible especially when few data are used, where the 3-step BT performance (61.51% mIoU) overcomes the CBT one (60.90% mIoU). In general, once again, given the required computational power and the overall performance, CBT seems the best solution to obtain consistent results on all the datasets.
VII. CONCLUSIONS
In this paper, we have shown that the combination of CL and SSL offers an optimal compromise between performance, training efficiency and versatility for RS applications. In particular, we demonstrated a combined approach (Continual Barlow Twins) leading to consistent performance on a novel combination of datasets with RS images that are heterogeneous in terms of sensors, resolution, acquisition and scenes represented. Since the availability of unlabeled data is increasing at great speed, and it is not possible for everyone to repeatedly train large models, a framework like CBT offers a potential solution. However, more work remains to be done. First, the validity of these results could be extended to new datasets and new tasks. Among the use of new datasets, we mention datasets containing multispectral images (i.e., not only with RGB bands). Second, other SSL and CL strategies can be combined into an effective and efficient framework.
Fig. 1. t-Stochastic Neighbor Embedding (t-SNE) visualization results of the features of the three selected RS datasets (see Section IV for a description of the datasets).

Fig. 2. Schematic representation of the Continual Barlow Twins algorithm. When computing L_CBT, C and I contribute to the Barlow Twins loss term; f_{θ,T1}, F, and f_{θ,T2} to the EWC regularization term.

Fig. 3. mIoU metrics on experiments with an increasing amount of labeled data of, respectively, a) UAVid, b) US3D and c) Potsdam.

Fig. 4. Value of the loss on experiments with 10% labeled data of, respectively, a) UAVid, b) US3D and c) Potsdam.

Fig. 5. Differences of mIoU between the experiments obtained with the encoder pretrained with the proposed CBT and the catastrophic forgetting baseline (i.e., encoder pretrained with a vanilla BT sequentially trained on the three datasets).
1 https://github.com/VMarsocci/CBT
TABLE II: Elapsed training times. CBT offers important advantages when data are provided in an incremental fashion.

Strategy | Step 1 (s) | Step 2 (s) | Step 3 (s) | Tot (h)
BT       | 41400      | 61500      | 88600      | 53.19
CBT      | 42000      | 36400      | 25100      | 28.75
TABLE III: UAVid results for different % of training data. The highest score is marked in bold; the second highest is underlined. The table reports the mean and the standard deviation of three experiments.

Encoder     | %    | mIoU         | F1
ImageNet    | 10%  | 69.79 ± 0.48 | 80.32 ± 0.34
ImageNet    | 50%  | 74.92 ± 0.42 | 84.60 ± 0.28
ImageNet    | 100% | 77.87 ± 0.37 | 86.67 ± 0.25
BT ImageNet | 10%  | 69.99 ± 0.47 | 81.33 ± 0.32
BT ImageNet | 50%  | 75.17 ± 0.39 | 84.81 ± 0.28
BT ImageNet | 100% | 78.51 ± 0.38 | 86.70 ± 0.26
CBT         | 10%  | 71.48 ± 0.47 | 81.50 ± 0.31
CBT         | 50%  | 75.25 ± 0.42 | 84.73 ± 0.29
CBT         | 100% | 80.12 ± 0.39 | 88.42 ± 0.26
BT          | 10%  | 71.60 ± 0.48 | 81.64 ± 0.35
BT          | 50%  | 75.39 ± 0.41 | 84.85 ± 0.27
BT          | 100% | 80.64 ± 0.37 | 88.44 ± 0.25
TABLE IV: Potsdam results for different % of training data. The highest score is marked in bold; the second highest is underlined. The table reports the mean and the standard deviation of three experiments.

Encoder     | %    | mIoU         | F1
ImageNet    | 10%  | 58.36 ± 0.63 | 70.73 ± 0.50
ImageNet    | 50%  | 61.59 ± 0.57 | 73.55 ± 0.44
ImageNet    | 100% | 64.12 ± 0.54 | 75.98 ± 0.40
BT ImageNet | 10%  | 61.45 ± 0.59 | 73.25 ± 0.51
BT ImageNet | 50%  | 66.88 ± 0.55 | 77.41 ± 0.42
BT ImageNet | 100% | 70.22 ± 0.52 | 79.42 ± 0.39
CBT         | 10%  | 60.90 ± 0.62 | 72.46 ± 0.52
CBT         | 50%  | 67.24 ± 0.55 | 77.87 ± 0.44
CBT         | 100% | 70.90 ± 0.52 | 80.01 ± 0.39
BT          | 10%  | 62.72 ± 0.60 | 74.44 ± 0.51
BT          | 50%  | 67.29 ± 0.54 | 77.77 ± 0.43
BT          | 100% | 71.42 ± 0.52 | 80.63 ± 0.39
TABLE V: US3D results for different % of training data. The highest score is marked in bold; the second highest is underlined. The table reports the mean and the standard deviation of three experiments.

Encoder     | %    | mIoU         | F1
ImageNet    | 10%  | 74.18 ± 0.41 | 84.60 ± 0.26
ImageNet    | 50%  | 78.25 ± 0.37 | 87.57 ± 0.14
ImageNet    | 100% | 83.41 ± 0.30 | 90.76 ± 0.12
BT ImageNet | 10%  | 75.44 ± 0.39 | 85.38 ± 0.27
BT ImageNet | 50%  | 79.26 ± 0.35 | 87.89 ± 0.13
BT ImageNet | 100% | 84.49 ± 0.28 | 91.27 ± 0.11
CBT         | 10%  | 75.31 ± 0.41 | 85.42 ± 0.27
CBT         | 50%  | 79.70 ± 0.36 | 88.20 ± 0.14
CBT         | 100% | 84.56 ± 0.28 | 91.28 ± 0.12
BT          | 10%  | 75.89 ± 0.39 | 85.69 ± 0.25
BT          | 50%  | 80.67 ± 0.36 | 88.94 ± 0.13
BT          | 100% | 84.43 ± 0.28 | 91.30 ± 0.12
TABLE VI: mIoU values for experiments with different amounts of labeled data with the encoder pretrained with: i) CBT, ii) a vanilla BT trained consecutively on the three datasets.

Dataset | Encoder | mIoU 10% | mIoU 50% | mIoU 100%
US3D    | CBT     | 75.31    | 79.70    | 84.56
US3D    | BT      | 73.50    | 77.39    | 82.41
UAVid   | CBT     | 71.48    | 75.25    | 80.12
UAVid   | BT      | 68.98    | 73.93    | 77.81
Potsdam | CBT     | 60.90    | 67.24    | 70.90
Potsdam | BT      | 61.51    | 66.94    | 70.42
REFERENCES

[1] B. Zhang, Y. Wu, B. Zhao, J. Chanussot, D. Hong, J. Yao, and L. Gao, "Progress and challenges in intelligent remote sensing satellite systems," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 1814-1822, 2022.
[2] G. Cheng, X. Xie, J. Han, L. Guo, and G.-S. Xia, "Remote sensing image scene classification meets deep learning: Challenges, methods, benchmarks, and opportunities," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 3735-3756, 2020.
[3] M. Delange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars, "A continual learning survey: Defying forgetting in classification tasks," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[4] M. Liu, Z. Chai, H. Deng, and R. Liu, "A cnn-transformer network with multiscale context aggregation for fine-grained cropland change detection," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 15, pp. 4297-4306, 2022.
[5] Y. Feng, X. Sun, W. Diao, J. Li, X. Gao, and K. Fu, "Continual learning with structured inheritance for semantic segmentation in aerial imagery," IEEE Transactions on Geoscience and Remote Sensing, 2021.
[6] X. Sun, B. Wang, Z. Wang, H. Li, H. Li, and K. Fu, "Research progress on few-shot learning for remote sensing image interpretation," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 2387-2402, 2021.
[7] J. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny, "Barlow twins: Self-supervised learning via redundancy reduction," arXiv preprint arXiv:2103.03230, 2021.
[8] Y. Tian, D. Krishnan, and P. Isola, "Contrastive multiview coding," in 16th European Conference on Computer Vision (ECCV), pp. 776-794, Springer, 2020.
[9] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," in Int. Conf. on Mach. Lear., pp. 1597-1607, PMLR, 2020.
[10] V. Stojnić and V. Risojević, "Evaluation of split-brain autoencoders for high-resolution remote sensing scene classification," in 2018 International Symposium ELMAR, pp. 67-70, IEEE, 2018.
[11] E. Fini, V. G. T. da Costa, X. Alameda-Pineda, E. Ricci, K. Alahari, and J. Mairal, "Self-supervised models are continual learners," arXiv preprint arXiv:2112.04215, 2021.
[12] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al., "Overcoming catastrophic forgetting in neural networks," Proceedings of the National Academy of Sciences, vol. 114, no. 13, pp. 3521-3526, 2017.
[13] Z. Zhou, M. M. Rahman Siddiquee, N. Tajbakhsh, and J. Liang, "Unet++: A nested u-net architecture for medical image segmentation," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3-11, Springer, 2018.
[14] L. Caccia and J. Pineau, "Special: Self-supervised pretraining for continual learning," arXiv preprint arXiv:2106.09065, 2021.
[15] G. Lenczner, A. Chan-Hon-Tong, N. Luminari, and B. L. Saux, "Weakly-supervised continual learning for class-incremental segmentation," arXiv preprint arXiv:2201.01029, 2022.
[16] V. Stojnic and V. Risojevic, "Self-supervised learning of remote sensing scene representations using contrastive multiview coding," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1182-1191, 2021.
[17] V. Marsocci, S. Scardapane, and N. Komodakis, "Mare: Self-supervised multi-attention resu-net for semantic segmentation in remote sensing," Remote Sensing, vol. 13, no. 16, p. 3275, 2021.
[18] F. Rottensteiner, G. Sohn, J. Jung, M. Gerke, C. Baillard, S. Benitez, and U. Breitkopf, "The ISPRS benchmark on urban object classification and 3d building reconstruction," ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences I-3 (2012), Nr. 1, vol. 1, no. 1, pp. 293-298, 2012.
[19] K. Ayush, B. Uzkent, C. Meng, K. Tanmay, M. Burke, D. Lobell, and S. Ermon, "Geography-aware self-supervised learning," in Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 10181-10190, 2021.
[20] C. Tao, J. Qi, W. Lu, H. Wang, and H. Li, "Remote sensing image scene classification with self-supervised paradigm under limited labeled samples," IEEE Geoscience and Remote Sensing Letters, 2020.
[21] J. Kang, R. Fernandez-Beltran, P. Duan, S. Liu, and A. J. Plaza, "Deep unsupervised embedding for remotely sensed images based on spatially augmented momentum contrast," IEEE Transactions on Geoscience and Remote Sensing, vol. 59, no. 3, pp. 2598-2610, 2020.
[22] S. Vincenzi, A. Porrello, P. Buzzega, M. Cipriano, P. Fronte, R. Cuccu, C. Ippoliti, A. Conte, and S. Calderara, "The color out of space: learning self-supervised representations for earth observation imagery," in 2020 25th International Conference on Pattern Recognition (ICPR), pp. 3034-3041, IEEE, 2021.
[23] G. Sumbul, M. Charfuelan, B. Demir, and V. Markl, "Bigearthnet: A large-scale benchmark archive for remote sensing image understanding," in IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium, pp. 5901-5904, IEEE, 2019.
[24] H. Dong, W. Ma, Y. Wu, J. Zhang, and L. Jiao, "Self-supervised representation learning for remote sensing image change detection based on temporal prediction," Remote Sens., vol. 12, no. 11, p. 1868, 2020.
[25] Y. Chen and L. Bruzzone, "Self-supervised change detection in multiview remote sensing images," arXiv preprint arXiv:2103.05969, 2021.
[26] Y. Yuan and L. Lin, "Self-supervised pretraining of transformers for satellite image time series classification," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 474-487, 2021.
[27] X. Qian, T.-X. Jiang, and X.-L. Zhao, "Selfs2: Self-supervised transfer learning for sentinel-2 multispectral image super-resolution," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, pp. 215-227, 2023.
[28] N. Ammour, "Continual learning using data regeneration for remote sensing scene classification," IEEE Geoscience and Remote Sensing Letters, 2021.
[29] O. Tasar, Y. Tarabalka, and P. Alliez, "Incremental learning for semantic segmentation of large-scale remote sensing data," IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 12, no. 9, pp. 3524-3537, 2019.
[30] Y. Lyu, G. Vosselman, G.-S. Xia, A. Yilmaz, and M. Y. Yang, "Uavid: A semantic segmentation dataset for uav imagery," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 165, pp. 108-119, 2020.
[31] M. Bosch, K. Foster, G. Christie, S. Wang, G. D. Hager, and M. Brown, "Semantic stereo for incidental satellite images," in 2019 IEEE Winter Conference on Applications of Computer Vision (WACV), pp. 1524-1532, IEEE, 2019.
[32] R. Li, S. Zheng, C. Zhang, C. Duan, J. Su, L. Wang, and P. M. Atkinson, "Multiattention network for semantic segmentation of fine-resolution remote sensing images," IEEE Transactions on Geoscience and Remote Sensing, 2021.
[33] A. G. Roy, N. Navab, and C. Wachinger, "Recalibrating fully convolutional networks with spatial and channel 'squeeze and excitation' blocks," IEEE Transactions on Medical Imaging, vol. 38, no. 2, pp. 540-549, 2018.
[34] C. J. Reed, X. Yue, A. Nrusimha, S. Ebrahimi, V. Vijaykumar, R. Mao, B. Li, S. Zhang, D. Guillory, S. Metzger, K. Keutzer, and T. Darrell, "Self-supervised pretraining improves self-supervised pretraining," in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 2584-2594, January 2022.
Fields, Strings, Matrices and Symmetric Products *

Robbert Dijkgraaf (rhd@wins.uva.nl)
Departments of Mathematics and Physics, University of Amsterdam, Plantage Muidergracht 24, 1018 TV Amsterdam

The Dutch Intercity Seminar, January 1998, May 1998; published by Vieweg, 1999.
arXiv: hep-th/9912104. DOI: 10.1007/978-3-322-90172-9_8.

* Based on lectures given among others at the Geometry and Duality Workshop at the Institute for [...]

Abstract. In these notes we review the role played by the quantum mechanics and sigma models of symmetric product spaces in the light-cone quantization of quantum field theories, string theory and matrix theory.
Introduction
For more than a decade now, string theory has been a significant, continuous influence in mathematics, in fields as diverse as algebraic geometry and representation theory. However, it is fair to say that most of these applications concerned the so-called first-quantized formulation of the theory, the formulation that is used to describe the propagation of a single string. In contrast with point-particle theories, in string theory the first-quantized theory is so powerful because it naturally can be extended to also describe the perturbative interactions of splitting and joining of strings by means of Riemann surfaces of general topology. Study of these perturbative strings has led to a series of remarkable mathematical developments, such as representation theory of infinite-dimensional Lie algebras, mirror symmetry, quantum cohomology and Gromov-Witten theory.
The second-quantized formalism, which is sometimes also referred to as string field theory, has left a much smaller mathematical imprint. Although there is a beautiful geometrical and algebraic structure of perturbative closed string field theory, developed mainly by Zwiebach [1], which is built on deep features of the moduli space of Riemann surfaces, it is very difficult to analyze, perhaps because it is intrinsically perturbative. Yet, in recent years it has become clear that the non-perturbative mathematical structure of string theory is even richer than the perturbative one, with even bigger symmetry groups, the mysterious U-duality groups [2]. The appearance of D-branes [3] and an eleven-dimensional origin in the form of M-theory [4] can only be properly understood from a second-quantized point of view.
At present there is only one candidate for a fundamental description of non-perturbative string theory, which is matrix theory [5]. In matrix theory an important role is played by non-abelian gauge fields, and the strings and conformal field theory only emerge in a certain weak coupling limit. We will not review much of matrix theory in these notes but refer to, for example, [6,7,8,9]. Importantly, matrix theory makes direct contact with the second-quantized theory, indeed Fock spaces are one of the ubiquitous ingredients, and much of the notes will focus on this correspondence, also reviewing the work of [10,11].
Hamilton vs Lagrange: representation theory and automorphic forms
One of the most remarkable insights provided by string theory, or more properly conformal field theory, is the natural explanation it offers of the modular properties of the characters of affine Kac-Moody algebras, Heisenberg algebras, and other infinite-dimensional Lie algebras. At the heart of this explanation, and in fact of much of the applications of field theory in mathematics, lies the equivalence between the Hamiltonian and Lagrangian formulation of quantum mechanics and quantum field theory.

This equivalence roughly proceeds as follows (see also [15]). In the Hamiltonian formulation one considers the quantization of a two-dimensional conformal field theory on a space-time cylinder R × S¹. The basic object is the loop space LX of maps S¹ → X for some appropriate target space X. The infinite-dimensional Hilbert space H that forms the representation of the algebra of quantum observables is then typically obtained by quantizing the loop space LX. This Hilbert space carries an obvious S¹ action, generated by the momentum operator P that rotates the loop, and the character of the representation is defined as
$$\chi(q) = \mathrm{Tr}_{\mathcal{H}}\, q^P \qquad (1.1)$$
with q = e^{2πiτ} and τ in the complex upper half-plane H. The claim is that these characters are always some kind of modular forms. From the representation theoretic point of view it is not at all clear why there should be a natural action of the modular group SL(2, Z) acting on τ by linear fractional transformations. In particular the transformation τ → −1/τ is rather mysterious.
In the Lagrangian formulation, however, the character χ(q), or more properly the partition function, is computed by considering the quantum field theory on a Riemann surface with the topology of a two-torus T² = S¹ × S¹, i.e., an elliptic curve with modulus τ. The starting point is the path-integral over all maps T² → X. Since we work with an elliptic curve, the modularity is built in from the start. The transformation τ → −1/τ simply interchanges the two S¹'s. Changing from the Hamiltonian to the Lagrangian perspective, we understand the appearance of the modular group SL(2, Z) as the 'classical' automorphism group of the two-torus. This torus is obtained by gluing the two ends of the cylinder S¹ × R, which is the geometric equivalent of taking the trace. Note that in string theory this two-torus typically plays the role of a world-sheet.
In second-quantized string theory we expect a huge generalization of this familiar two-dimensional story. The operator algebras will be much bigger (typically, generalized Kac-Moody algebras) and also the automorphism groups will not be of a classical form, but will reflect the 'stringy' geometry at work. An example we will discuss in great detail in these notes is the quantization of strings on a space-time manifold of the form
$$M = \mathbb{R} \times S^1 \times X, \qquad (1.2)$$
with X a compact simply-connected Riemannian manifold. Quantization leads again to a Hilbert space H, but this space carries now at least two circle actions. First, we have again a momentum operator P that generates the translations along the S¹ factor. Second, there is also a winding number operator W that counts how many times a string is wound around this circle. It labels the connected components of the loop space LM. A state in H with eigenvalue W = m ∈ Z represents a string that is wound m times around the S¹. In this way we can define a two-parameter character
$$\chi(q, p) = \mathrm{Tr}_{\mathcal{H}}\, p^W q^P, \qquad (1.3)$$
with p = e^{2πiσ}, q = e^{2πiτ}, and both σ, τ ∈ H. We will see in concrete examples that these kinds of expressions will typically be the character of a generalized Kac-Moody algebra and transform as automorphic forms. The automorphic properties of such characters become evident by changing again to a Lagrangian point of view and computing the partition function on the compact manifold T² × X. Concentrating on the T² factor, which now has an interpretation as a space-time, the string partition function carries a manifest T-duality symmetry group
$$SO(2, 2, \mathbb{Z}) \cong SL(2, \mathbb{Z}) \times SL(2, \mathbb{Z}), \qquad (1.4)$$
which is the 'stringy' automorphism group of T². Let us explain briefly how this group acts on the moduli σ, τ. Since the string theory is not a conformal field theory, the partition function will depend both on the modulus τ of T² and on its volume g. Furthermore there is an extra dependence on a constant 2-form field θ ∈ H²(T², R/Z). These two extra data are combined in a second complex 'modulus' σ = θ + ig. The T-duality group SO(2, 2, Z) will now act on the pair (σ, τ) by separate fractional linear transformations and the generalized character (1.3) will be some automorphic form for this group. Of course only the second SL(2, Z) factor has a clear geometric interpretation. The first factor, which exchanges large and small volume σ → −1/σ, has a completely stringy origin.
The appearance of the T-duality group SO(2, 2, Z) as a symmetry group of the two-torus is most simply explained by considering a single string. We are then dealing with the loop space LT². If the torus is given by the quotient R²/Λ, with Λ a two-dimensional lattice, the momenta of such a string take value in the dual lattice
$$P = \oint \dot{x} \in \Lambda^*. \qquad (1.5)$$
The winding numbers, that label the components of LT², lie in the original lattice
$$W = \oint dx \in \Lambda. \qquad (1.6)$$
Therefore the total vector v = (W, P) can be seen as an element of the rank 4, signature (2,2), even, self-dual Narain lattice
$$v = (W, P) \in \Gamma^{2,2} = \Lambda \oplus \Lambda^*, \qquad v^2 = 2\, W \cdot P. \qquad (1.7)$$
The T-duality group appears now as the automorphism group of the lattice Γ^{2,2}. In the particular example we will discuss in detail, where the manifold X is a Calabi-Yau space, there will be an extra quantum number and the lattice will be enlarged to a signature (2,3) lattice. Correspondingly, the automorphism group will be given by SO(3, 2, Z) ≅ Sp(4, Z).
Particles, symmetric products and fields
It is a well-known wise-crack that first-quantization is a mystery but second-quantization a functor. Indeed, for a free theory second quantization involves nothing more than taking symmetric products. We obtain the second-quantized Hilbert space from the first-quantized Hilbert space H as the free symmetric algebra SH. Yet, recent developments in string theory (and in certain field theories that are naturally obtained as limits of string theories) have provided us with a fresh outlook on this familiar subject. In particular this new approach allows us to include interactions in new ways.
Second-quantization of superparticles
Let us start by considering a well-known case: a point-particle moving on a compact oriented Riemannian manifold X. The first-quantization 'functor' Q 1 of quantum mechanics assigns to each manifold X a Hilbert space H and a Hamiltonian H,
$$Q_1 : X \to (\mathcal{H}, H). \qquad (2.1)$$
As is well-known, in (bosonic) quantum mechanics the Hilbert space is given by the square-integrable functions on X, H = L²(X), together with the positive-definite Hamiltonian H = −½∆, with ∆ the Laplacian on X. Supersymmetry adds anticommuting variables, and for the supersymmetric particle the Hilbert space is now the L²-completion of the space of differential forms on X,
$$\mathcal{H} = \Omega^*(X). \qquad (2.2)$$
On this space we can realize the elementary N = 2 supersymmetry algebra
$$[Q, Q^*] = 2H \qquad (2.3)$$
by the use of the supercharge or differential Q = d and its adjoint Q * . The spectrum of the Hamiltonian is encoded in the partition function
$$Z(X; q, y) = \mathrm{Tr}_{\mathcal{H}}\, (-1)^F q^H y^F \qquad (2.4)$$
with fermion number F given by the degree of the differential form.
Of particular interest is the subspace V ⊂ H of supersymmetric ground states, that satisfy Qψ = Q * ψ = 0 and therefore also Hψ = 0. These zero-energy wavefunctions are represented by harmonic differential forms
$$V = \mathrm{Harm}^*(X) \cong H^*(X). \qquad (2.5)$$
We can compute the weighted number of ground states by the Witten index, which defines a regularized superdimension * of the Hilbert space
$$\mathrm{sdim}\, \mathcal{H} = \mathrm{Tr}_{\mathcal{H}}\, (-1)^F = Z(X; q, 1) \qquad (2.6)$$
Since this expression does not depend on q, the Witten index simply equals the Euler number of the space X
$$\mathrm{Tr}_{\mathcal{H}}\, (-1)^F = \mathrm{sdim}\, V = \sum_k (-1)^k \dim H^k(X) = \chi(X). \qquad (2.7)$$
Note that here we consider H^*(X) as a graded vector space generated by b_+ even generators and b_− odd generators, with χ(X) = b_+ − b_−.
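The q-independence of (2.6) can be made tangible with a toy example: states of non-zero energy pair up between adjacent form degrees (Q maps an H-eigenform of degree k to one of degree k+1 with the same energy) and cancel in the supertrace, leaving only the harmonic forms. A small sympy check with an invented spectrum for X = S²:

```python
from sympy import symbols, Rational

q = symbols('q')
# toy spectrum of (energy h, form degree k): the two harmonic forms of S^2
# plus excited levels that pair up between adjacent degrees under Q = d
spectrum = [(0, 0), (0, 2),
            (Rational(3, 2), 0), (Rational(3, 2), 1),
            (3, 1), (3, 2)]
Z = sum((-1)**k * q**h for h, k in spectrum)   # Tr (-1)^F q^H at y = 1
assert Z.expand() == 2                         # = chi(S^2), independent of q
```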
Second-quantization and symmetric products
The usual step of second-quantization now consists of considering a system of N of these (super)particles. It is implemented by taking the N-th symmetric product of the single particle Hilbert space H,
$$S^N \mathcal{H} = \mathcal{H}^{\otimes N} / S_N, \qquad (2.8)$$
or more properly the direct sum over all N
$$Q_2 : \mathcal{H} \to S\mathcal{H} = \bigoplus_{N \geq 0} S^N \mathcal{H}. \qquad (2.9)$$
* For a graded vector space V = V_+ ⊕ V_− with even part V_+ and odd part V_−, we define the superdimension as sdim V = dim V_+ − dim V_−, and, more generally, the supertrace of an operator a acting on V as sTr_V(a) = Tr_{V_+}(a) − Tr_{V_−}(a). So we have sdim V = sTr_V 1 = Tr_V (−1)^F. Here the Witten index operator (−1)^F is defined as +1 on V_+ and −1 on V_−.
We now propose to reverse roles. Instead of taking the symmetric product of the Hilbert space of functions or differential forms on the manifold X (i.e., the symmetrization of the quantized manifold), we will take the Hilbert space of functions or differential forms on the symmetric product S^N X (i.e., the quantization of the symmetrized manifold)
$$Q_2 : X \to SX = \coprod_{N \geq 0} S^N X. \qquad (2.10)$$
The precise physical interpretation of this role-reversing is the topic of these notes. It will appear later as a natural framework for the light-cone quantization of string theory and of a certain class of quantum field theories that are obtained as low-energy limits of string theories. We will be particularly interested to learn whether these operations commute (they will not)
$$\begin{array}{ccc} X & \stackrel{Q_1}{\longrightarrow} & \mathcal{H} \\ {\scriptstyle Q_2} \downarrow & & \downarrow {\scriptstyle Q_2} \\ SX & \stackrel{Q_1}{\longrightarrow} & S\mathcal{H} \end{array} \qquad (2.11)$$
But first we have to address the issue that the symmetric space S N X is not a smooth manifold but an orbifold, namely the quotient by the symmetric group S N on N elements,
$$S^N X = X^N / S_N. \qquad (2.12)$$
We will first be interested in computing the ground states for this symmetric product, which we have seen are in general counted by the Euler number. Actually, the relevant concept will turn out to be the orbifold Euler number. Using this concept there is a beautiful formula that was first discovered by Göttsche [16] (see also [17,18]) in the context of Hilbert schemes of algebraic surfaces, but which is much more generally valid in the context of orbifolds, as was pointed out by Hirzebruch and Höfer [19]. First some notation. It is well-known that many formulas for symmetric products take a much more manageable form if we introduce generating functions. For a general graded vector space we will use the notation
$$S_p V = \bigoplus_{N \geq 0} p^N S^N V \qquad (2.13)$$
for the weighted formal sum of symmetric products. Note that for graded vector spaces the symmetrization under the action of the symmetric group S N is always to be understood in the graded sense, i.e., antisymmetrization for the odd-graded pieces. We recall that for an even vector space
$$\dim S_p V = \sum_{N \geq 0} p^N \dim S^N V = (1 - p)^{-\dim V}, \qquad (2.14)$$
whereas for an odd vector space
$$\mathrm{sdim}\, S_p V = \sum_{N \geq 0} (-1)^N p^N \dim \wedge^N V = (1 - p)^{\dim V}. \qquad (2.15)$$
These two formulas can be combined into the single formula valid for an arbitrary graded vector space that we will use often †
$$\mathrm{sdim}\, S_p V = (1 - p)^{-\mathrm{sdim}\, V} \qquad (2.16)$$
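Formula (2.16) is easy to verify mechanically. The sketch below expands the graded symmetric algebra S(V) = S(V₊) ⊗ Λ(V₋) by hand and compares the resulting superdimensions with the coefficients of (1 − p)^{−sdim V}; the dimensions a and b are arbitrary choices:

```python
from sympy import symbols, binomial, series

p = symbols('p')
a, b = 3, 2                        # dim V_+ (even part) and dim V_- (odd part)

def sdim_SN(N):
    # Graded N-th symmetric power: sum over S^i(V_+) (x) Lambda^j(V_-) with
    # i + j = N; each odd generator contributes a sign (-1)^j.
    return sum(binomial(a + i - 1, i) * (-1) ** (N - i) * binomial(b, N - i)
               for i in range(N + 1) if N - i <= b)

rhs = series((1 - p) ** (-(a - b)), p, 0, 8).removeO()
assert all(sdim_SN(N) == rhs.coeff(p, N) for N in range(1, 8))
```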
Similarly, we introduce for a general space X the 'vertex operator'
$$S_p X = \text{'}\exp\, pX\text{'} = \coprod_{N \geq 0} p^N S^N X. \qquad (2.17)$$
Using this formal expression the formula we are interested in reads (see also [20])
Theorem 1 [16,19] - The orbifold Euler numbers of the symmetric products S^N X are given by
$$\chi_{orb}(S_p X) = \prod_{n>0} (1 - p^n)^{-\chi(X)}. \qquad (2.18)$$
The orbifold Euler character
The crucial ingredient in Theorem 1 is the orbifold Euler character, a concept that is very nicely explained in [19]. Here we give a brief summary of its definition.
Suppose a finite group G acts on a manifold M. In general this action will not be free and the space M/G is not a smooth manifold but an orbifold instead. The topological Euler number of this singular space, defined as for any topological space, can be computed as the alternating sum of the dimensions of the invariant piece of the cohomology,
$$\chi_{top}(M/G) = \mathrm{sdim}\, H^*(M)^G. \qquad (2.19)$$
In de Rham cohomology one can also simply take the complex of differential forms that are invariant under the G-action and compute the cohomology of the standard differential d. This expression can be computed by averaging over the group
$$\chi_{top}(M/G) = \frac{1}{|G|} \sum_{g \in G} \mathrm{sTr}_{H^*(M)}\, g = \frac{1}{|G|} \sum_{g \in G} \left[{\textstyle{g \atop 1}}\right] \qquad (2.20)$$
Alternatively, using the Lefschetz fixed point formula, we can rewrite this Euler number as a sum of fixed point contributions. Let M^g denote the fixed point set of the element g ∈ G. (Note that for the identity M^1 = M.) Then we have
$$\chi_{top}(M/G) = \frac{1}{|G|} \sum_{g \in G} \mathrm{sdim}\, H^*(M^g) = \frac{1}{|G|} \sum_{g \in G} \left[{\textstyle{1 \atop g}}\right] \qquad (2.21)$$
In the above two formulas we used the familiar string theory notation
$$\left[{\textstyle{h \atop g}}\right] = \mathrm{sTr}_{H^*(M^g)}\, h \qquad (2.22)$$
for the trace of the group element h in the 'twisted sector' labeled by g. Note that the two expressions (2.20) and (2.21) for the topological Euler number are related by a 'modular S-transformation,' that acts as
$$\left[{\textstyle{g \atop 1}}\right] \to \left[{\textstyle{1 \atop g}}\right] \qquad (2.23)$$
The orbifold Euler number is the proper equivariant notion. We see in a moment how it naturally appears in string theory. In the orbifold definition we remember that on each fixed point set M g there is still an action of the centralizer or stabilizer subgroup C g that consists of all elements h ∈ G that commute with g. The orbifold cohomology is defined by including the fixed point loci M g , but now taking only the contributions of the C g invariants. That is, we have a sum over the conjugacy classes [g] of G of the topological Euler character of these strata
$$\chi_{orb}(M/G) = \sum_{[g]} \chi_{top}(M^g/C_g). \qquad (2.24)$$
Note that this definition always gives an integer, in contrast with other natural definitions of the Euler number of orbifolds. From this point of view the topological Euler number only takes into account the trivial class g = 1 (the 'untwisted sector'). If we use the elementary fact that |[g]| = |G|/|C_g|, we obtain in this way
$$\chi_{orb}(M/G) = \frac{1}{|G|} \sum_{\substack{g,h \in G \\ gh=hg}} \mathrm{sTr}_{H^*(M^g)}\, h = \frac{1}{|G|} \sum_{\substack{g,h \in G \\ gh=hg}} \left[{\textstyle{h \atop g}}\right] \qquad (2.25,\ 2.26)$$
It has been pointed out by Segal [21] that much of this, and in particular the applications to symmetric products that we are about to give, finds a natural place in equivariant K-theory. Indeed, the equivariant K-group (tensored with C) of a space M with a G action is isomorphic to
$$K_G(M) = \bigoplus_{[g]} K(M^g)^{C_g}. \qquad (2.27)$$
The orbifold Euler number of a symmetric product
We now apply the above formalism to the case of the quotient X N /S N . For the topological Euler number the result is elementary. We simply replace H * (X) by its symmetric product S N H * (X). Since we take the sum over all symmetric products, graded by N, this is just the free symmetric algebra on the generators of H * (X), so that [22]
$$\chi_{top}(S_p X) = \mathrm{sdim}\, S_p H^*(X) = (1 - p)^{-\chi(X)}. \qquad (2.28)$$
In order to prove the orbifold formula (2.18) we need to include the contributions of the fixed point sets. Thereto we recall some elementary facts about the symmetric group.
First, the conjugacy classes [g] of S N are labeled by partitions {N n } of N, since any group element can be written as a product of elementary cycles (n) of length n,
$$[g] = (1)^{N_1} (2)^{N_2} \cdots (k)^{N_k}, \qquad \sum_{n>0} n N_n = N. \qquad (2.29)$$
The fixed point set of such an element g is easy to describe. The symmetric group acts on N-tuples (x 1 , . . . , x N ) ∈ X N . A cycle of length n only leaves a point in X N invariant if the n points on which it acts coincide. So the fixed point locus of a general g in the above conjugacy class is isomorphic to
$$(X^N)^g \cong \prod_{n>0} X^{N_n}. \qquad (2.30)$$
The centralizer of such an element is a semidirect product of factors S Nn and Z n ,
$$C_g = S_{N_1} \times (S_{N_2} \ltimes \mathbb{Z}_2^{N_2}) \times \cdots \times (S_{N_k} \ltimes \mathbb{Z}_k^{N_k}). \qquad (2.31)$$
Here the factors S Nn permute the N n cycles (n), while the factors Z n act within one particular cycle (n). The action of the centralizer C g on the fixed point set (X N ) g is obvious: only the subfactors S Nn act non-trivially giving
$$(X^N)^g / C_g \cong \prod_{n>0} S^{N_n} X. \qquad (2.32)$$
We now only have to assemble the various components to compute the orbifold Euler number of S N X:
$$\chi_{orb}(S_p X) = \sum_{N \geq 0} p^N \chi_{orb}(S^N X) = \sum_{N \geq 0} p^N \sum_{\substack{\{N_n\} \\ \sum n N_n = N}} \prod_{n>0} \chi_{top}(S^{N_n} X) = \prod_{n>0} \sum_{N \geq 0} p^{nN} \chi_{top}(S^N X) = \prod_{n>0} (1 - p^n)^{-\chi(X)} \qquad (2.33)$$
which concludes the proof of (2.18).
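Theorem 1 and the resummation above can also be checked numerically. The sketch below compares the coefficients of the product formula (2.18) with the direct sum over conjugacy classes, i.e. partitions {N_n}; it uses χ_top(S^M X) = binomial(χ+M−1, M), which assumes X has only even-degree cohomology, as for a K3 surface with χ = 24:

```python
from sympy import symbols, prod, binomial
from sympy.utilities.iterables import partitions

p = symbols('p')
chi, N = 24, 6                     # e.g. a K3 surface; check up to order p^N

# Right-hand side of (2.18): the first N factors suffice up to order p^N.
rhs = prod((1 - p**n) ** (-chi) for n in range(1, N + 1))
rhs = rhs.series(p, 0, N + 1).removeO()

def chi_orb(k):
    # Sum over conjugacy classes of S_k, i.e. partitions {n: N_n} of k,
    # of prod_n chi_top(S^{N_n} X), following (2.33).
    return sum(prod(binomial(chi + Nn - 1, Nn) for n, Nn in part.items())
               for part in partitions(k))

assert all(chi_orb(k) == rhs.coeff(p, k) for k in range(1, N + 1))
```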
Orbifold quantum mechanics on symmetric products
The above manipulation can be extended beyond the computation of the Euler number to the actual cohomology groups. We will only be able to fully justify these definitions (because that is what they are at this point) from the string theory considerations that we present in the next section. For the moment let us just state that in particular cases where the symmetric product allows for a natural smooth resolution (as for the algebraic surfaces studied in [16], where the Hilbert scheme provides such a resolution), we expect the orbifold definition to be compatible with the usual definition in terms of the smooth resolution.
We easily define a second-quantized, infinite-dimensional graded Fock space whose graded superdimensions equal the Euler numbers that we just computed. Starting with the single particle ground state Hilbert space
$$V = H^*(X) \qquad (2.34)$$
we define it as the symmetric algebra of an infinite number of copies V (n) graded by n = 1, 2, . . .
$$\mathcal{F}_p = \bigotimes_{n>0} S_{p^n} V_{(n)} = S\Big( \bigoplus_{n>0} p^n V_{(n)} \Big). \qquad (2.35)$$
Here V (n) is a copy of V where the 'number operator' N is defined to have eigenvalue n, so that
$$\chi_{orb}(S_p X) = \mathrm{Tr}_{\mathcal{F}}\, (-1)^F p^N = \prod_{n>0} (1 - p^n)^{-\chi(X)}. \qquad (2.36)$$
We will see later that the degrees in V_{(n)} are naturally shifted by (n−1)d/2, with d the dimension of X, so that
$$V_{(n)} \cong H^{*-(n-1)\frac{d}{2}}(X), \qquad n > 0. \qquad (2.37)$$
Of course, this definition only makes good sense for even d, which will be the case since we will always consider Kähler manifolds. This result can be interpreted as follows. We have seen that the fixed point loci consist of copies of X. These copies X_{(n)} appear as the big diagonal inside S^n X where all n points come together. If we think in terms of middle dimensional cohomology, which is particularly relevant for Kähler and hyperkähler manifolds, this result tells us that the middle dimensional cohomology of X contributes through X_{(n)} to the middle dimensional cohomology of S^n X.
So, if we define the Poincaré polynomial as
$$P(X; y) = Z(X; 0, y) = \operatorname{Tr}_V\, (-1)^F y^F = \sum_{0\le k\le d} (-1)^k y^k b_k(X), \qquad (2.38)$$
then we claim that the orbifold Poincaré polynomials of the symmetric products $S^N X$ are given by
$$P_{orb}(S_p X; y) = \prod_{n>0}\ \prod_{0\le k\le d} \left(1 - y^{k+(n-1)\frac{d}{2}}\, p^n\right)^{-(-1)^k b_k} \qquad (2.39)$$
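For $X = $ K3 ($b_0 = b_4 = 1$, $b_2 = 22$, real dimension $d = 4$) the product (2.39) reproduces Göttsche's formula. The following sketch (illustrative, not part of the original text) expands it and reads off the orbifold Betti numbers of $S^2 K3$, recovering the well-known Betti numbers of the Hilbert scheme of two points:

```python
# Expand the orbifold Poincare polynomial (2.39) for X = K3 and read off
# the Betti numbers of S^2 K3 ~ Hilb^2(K3).
from sympy import symbols, series, Poly

p, y = symbols('p y')
betti = {0: 1, 2: 22, 4: 1}   # nonzero Betti numbers of K3 (odd ones vanish)
d, order = 4, 3               # real dimension of K3; truncate at p^order

expr = 1
for n in range(1, order + 1):
    for k, b in betti.items():
        # factor (1 - y^{k+(n-1)d/2} p^n)^{-(-1)^k b_k}; all k are even here
        expr *= (1 - y**(k + (n - 1) * d // 2) * p**n) ** (-b)

s = series(expr, p, 0, order).removeO().expand()
coeff_p2 = s.coeff(p, 2)
print(Poly(coeff_p2, y).all_coeffs()[::-1])
# -> [1, 0, 23, 0, 276, 0, 23, 0, 1], the Betti numbers of Hilb^2(K3)
```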
This is actually proved for the Hilbert scheme of an algebraic surface in [17, 18]. Although we will only be in a position to understand this well in the next section, we can also determine the full partition function that encodes the quantum mechanics on $S^N X$. Again the Hilbert space is a Fock space built on an infinite number of copies of the single particle Hilbert space $\mathcal H(X)$,
$$\mathcal H_{orb}(S_p X) = \bigotimes_{n>0} S_{p^n} \mathcal H_{(n)}(X). \qquad (2.40)$$
The contribution to the total Hamiltonian of the states in the sector $\mathcal H_{(n)}$ turns out to be scaled by a factor of $n$ relative to the first-quantized particle, whereas the fermion numbers are shifted as before, so that
$$\mathcal H_{(n)} \cong \Omega^{*-(n-1)\frac{d}{2}}(X), \qquad H_{(n)} = -\tfrac{1}{2}\Delta/n. \qquad (2.41)$$
To be completely explicit, let $\{h(m,k)\}_{m\ge 0}$ be the spectrum of $H$ on the subspace $\Omega^k(X)$ of $k$-forms, with degeneracies $c(m,k)$, so that the single particle partition function reads
$$Z(X; q, y) = \operatorname{Tr}_{\mathcal H}\, (-1)^F y^F q^H = \sum_{m\ge 0}\ \sum_{0\le k\le d} c(m,k)\, y^k q^{h(m,k)}. \qquad (2.42)$$
Then we have for the symmetric product (in the orbifold sense)
$$Z_{orb}(S_p X; q, y) = \operatorname{Tr}_{\mathcal H_{orb}(SX)}\, (-1)^F p^N q^H y^F = \prod_{n>0,\, m\ge 0}\ \prod_{0\le k\le d} \left(1 - p^n q^{h(m,k)/n}\, y^{k+(n-1)\frac{d}{2}}\right)^{-(-1)^k c(m,k)} \qquad (2.43)$$
In later sections we give a QFT interpretation of this formula.
Second-quantized Strings
The previous section should be considered just as a warming-up for the much more interesting case of string theory. We will now follow all of the previous steps again, going from a single quantized string to a gas of second-quantized strings. In many respects this construction, in particular the up to now rather mysterious orbifold prescription, is more 'canonical,' and all of the previous results can be obtained as a natural limiting case of the string computations.
The two-dimensional supersymmetric sigma model
In the Lagrangian formulation the supersymmetric sigma model that describes the propagation of a first-quantized string on a Riemannian target space $X$ is formulated in terms of maps $x : \Sigma \to X$, with $\Sigma$ a Riemann surface that we will often choose to give the topology of a cylinder $S^1 \times \mathbb R$ or a torus $T^2$. The canonical Euclidean action, including the standard topological term, is of the form
$$\frac{1}{4\pi\alpha'} \int_\Sigma G_{\mu\nu}(x)\, dx^\mu \wedge * dx^\nu + \frac{i}{2\pi} \int_\Sigma x^* B + \text{fermions} \qquad (3.1)$$
with G the Riemannian metric and B a closed two-form on X.
An important feature of the two-dimensional sigma model is that in the limit $\alpha' \to 0$ it reduces to the supersymmetric quantum mechanics of the previous section. This limit can be equivalently seen as a rescaling of the metric $G$ and thereby a low-energy or a large-volume limit, $\operatorname{vol}(X) \to \infty$. In this point-particle limit the dependence on the $B$-field disappears.
In the Hamiltonian formulation one describes a single string moving on a space $X$ in terms of the loop space $LX$ of maps $S^1 \to X$. Depending on the particular type of string theory that we are interested in, this first-quantization leads us to assign to the manifold $X$ a single string SCFT Hilbert space
$$Q_1 : X \to \mathcal H(X), \qquad (3.2)$$
that can be formally considered to be the space of half-infinite dimensional differential forms on $LX$. We will always choose in the definition of $\mathcal H$ Ramond or periodic boundary conditions for the fermions. These boundary conditions respect the supersymmetry algebra; other boundary conditions can be obtained by spectral flow [23].
On this Hilbert space act two natural operators: the Hamiltonian $H$, roughly the generalized Laplacian on $LX$, and the momentum operator $P$ that generates the canonical circle action on the loop space corresponding to rotations of the loop,
$$e^{i\theta P} : x(\sigma) \to x(\sigma + \theta). \qquad (3.3)$$
In a conformal field theory the operators $H$ and $P$ are usually written in terms of left-moving and right-moving Virasoro generators $L_0$ and $\overline L_0$ as
$$H = L_0 + \overline L_0 - d/4, \qquad P = L_0 - \overline L_0. \qquad (3.4)$$
Here $d$ is the complex dimension of $X$. If the manifold $X$ is a Calabi-Yau space, the quantum field theory carries an $N = 2$ superconformal algebra with a $U(1)_L \times U(1)_R$ R-symmetry. In particular this allows us to define separate left-moving and right-moving conserved fermion numbers $F_L$ and $F_R$, that up to an infinite shift (that is naturally regularized in the QFT) represent the bidegrees in terms of the Dolbeault differential forms on $\Omega^*(LX)$. The most general partition function is written as
$$Z(X; q, y, \bar q, \bar y) = \operatorname{Tr}_{\mathcal H}\, (-1)^F y^{F_L} \bar y^{F_R} q^{L_0 - \frac{d}{8}}\, \bar q^{\overline L_0 - \frac{d}{8}} \qquad (3.5)$$
with $F = F_L + F_R$ the total fermion number. The partition function $Z$ represents the value of the path-integral on a torus or elliptic curve, and we can write $q = e^{2\pi i\tau}$, $y = e^{2\pi iz}$, with $\tau$ the modulus of the elliptic curve and $z$ a point in its Jacobian that determines the line-bundle of which the fermions are sections. The spectrum of all four operators $L_0, \overline L_0, F_L, F_R$ is discrete, with the further conditions
$$L_0, \overline L_0 \ge d/8, \qquad L_0 - \overline L_0 \in \mathbb Z, \qquad F_L, F_R \in \mathbb Z + \tfrac{d}{2}. \qquad (3.6)$$
For a general Calabi-Yau manifold it is very difficult to compute the above partition function explicitly. Basically, only exact computations have been done for orbifolds and the so-called Gepner points, which are spaces with exceptionally large quantum automorphism groups. This is not surprising, since even in the $\alpha' \to 0$ limit we would need to know the spectrum of the Laplacian, while for many Calabi-Yau spaces such as K3 an explicit Ricci flat metric is not even known. Just as for the quantum mechanics case, we learn a lot by considering the supersymmetric ground states $\psi \in V \subset \mathcal H$ that satisfy $H\psi = 0$. In the Ramond sector the ground states are canonically in one-to-one correspondence with the cohomology classes in the Dolbeault groups,
$$V \cong H^{*,*}(X). \qquad (3.7)$$
In fact, these states have special values for the conserved charges. Ramond ground states always have $L_0 = \overline L_0 = d/8$, and for a ground state that corresponds to a cohomology class $\psi \in H^{r,s}(X)$ the fermion numbers are the shifted degrees
$$F_L = r - d/2, \qquad F_R = s - d/2. \qquad (3.8)$$
The shift in degrees by $d/2$ results from the fact that we had to 'fill up' the infinite Fermi sea. We see that there is an obvious reflection symmetry $F_{L,R} \to -F_{L,R}$ (Poincaré duality) around the middle dimensional cohomology. If we take the limit $q, \bar q \to 0$, the partition function reduces essentially to the Poincaré-Hodge polynomial of $X$,
$$h(X; y, \bar y) = \lim_{q,\bar q \to 0} Z(X; q, \bar q, y, \bar y) = \sum_{0\le r,s\le d} (-1)^{r+s}\, y^{r-\frac{d}{2}}\, \bar y^{s-\frac{d}{2}}\, h^{r,s}(X) \qquad (3.9)$$
The elliptic genus
An interesting specialization of the sigma model partition function is the elliptic genus of $X$ [24], defined as
$$\chi(X; q, y) = \operatorname{Tr}_{\mathcal H}\, (-1)^F y^{F_L} q^{L_0 - \frac{d}{8}} \qquad (3.10)$$
The elliptic genus is obtained as a specialization of the general partition function for $\bar y = 1$. Its proper definition is
$$\chi(X; q, y) = \operatorname{Tr}_{\mathcal H}\, (-1)^F y^{F_L} q^{L_0 - \frac{d}{8}}\, \bar q^{\overline L_0 - \frac{d}{8}} \qquad (3.11)$$
But, just as for the Witten index, because of the factor $(-1)^{F_R}$ there are no contributions of states with $\overline L_0 - d/8 > 0$. Only the right-moving Ramond ground states contribute. The genus is therefore holomorphic in $q$ or $\tau$. Since this fixes $L_0 - d/8$ to be an integer, the partition function becomes a topological index, with no dependence on the moduli of $X$.
Using general facts of modular invariance of conformal field theories, one deduces that for a Calabi-Yau d-fold the elliptic genus is a weak Jacobi form [25] of weight zero and index d/2. (For odd d one has to include multipliers or work with certain finite index subgroups, see [26,27,28].) The ring of Jacobi forms is finitely generated, and thus finite-dimensional for fixed index * . It has a Fourier expansion of the form
$$\chi(X; q, y) = \sum_{m\ge 0,\, \ell} c(m, \ell)\, q^m y^\ell \qquad (3.12)$$
with integer coefficients. The terminology 'weak' refers to the fact that the term $m = 0$ is included. The elliptic genus has beautiful mathematical properties. In contrast with the full partition function, it does not depend on the moduli of the manifold $X$: it is a (differential) topological invariant. In fact, it is a genus in the sense of Hirzebruch, a ring-homomorphism from the complex cobordism ring $\Omega_*^U(pt)$ into the ring of weak Jacobi forms. That is, it satisfies the relations
$$\chi(X \cup X'; q, y) = \chi(X; q, y) + \chi(X'; q, y),$$
$$\chi(X \times X'; q, y) = \chi(X; q, y) \cdot \chi(X'; q, y), \qquad (3.13)$$
$$\chi(X; q, y) = 0, \quad \text{if } X = \partial Y,$$
where the last relation is in the sense of complex bordism. The first two relations are obvious from the quantum field theory point of view; they are valid for all partition functions of sigma models. The last condition follows basically from the definition in terms of classical differential topology, more precisely in terms of Chern classes of symmetrized products of the tangent bundle, that we will give in a moment. We already noted that in the limit $q \to 0$ the genus reduces to a weighted sum over the Hodge numbers, which is essentially the Hirzebruch $\chi_y$-genus,
$$\chi(X; 0, y) = \sum_{r,s} (-1)^{r+s} h^{r,s}(X)\, y^{r-\frac{d}{2}}, \qquad (3.14)$$
and for $y = 1$ it equals the Witten index or Euler number of $X$,
$$\chi(X; q, 1) = \operatorname{Tr}_{\mathcal H}\, (-1)^F = \chi(X). \qquad (3.15)$$
For smooth manifolds, the elliptic genus has an equivalent definition as
$$\chi(X; q, y) = \int_X \operatorname{ch}(E_{q,y})\, \operatorname{td}(X) \qquad (3.16)$$
* For example, in the case $d = 2$ it is one-dimensional and generated by the elliptic genus of K3.
with the formal sum of vector bundles
$$E_{q,y} = y^{-\frac{d}{2}} \bigotimes_{n>0} \Lambda_{-yq^{n-1}} T_X \otimes \Lambda_{-y^{-1}q^n} \overline T_X \otimes S_{q^n} T_X \otimes S_{q^n} \overline T_X, \qquad (3.17)$$
where $T_X$ denotes the holomorphic tangent bundle of $X$. If the bundle $E_{q,y}$ is expanded as
$$E_{q,y} = \bigoplus_{m,\, \ell} q^m y^\ell\, E_{m,\ell} \qquad (3.18)$$
the coefficients c(m, ℓ) give the index of the Dirac operator on X twisted with the vector bundle E m,ℓ , and are therefore integers. This definition follows from the sigma model by taking the large volume limit, where curvature terms can be ignored and one essentially reduces to the free model, apart from the zero modes that give the integral over X.
Physical interpretation of the elliptic genus
Physically, the elliptic genus arises in two interesting circumstances. First, it appears as a counting function of perturbative string BPS states. If one constrains the states of the string to be in a right-moving ground state, i.e., to satisfy $\overline L_0 = d/8$, the states are invariant under part of the space-time supersymmetry algebra and are called BPS. The generating function of such states is naturally given by the elliptic genus. Because we weight the right-movers with the chiral Witten index $(-1)^{F_R}$, only the right-moving ground states contribute.
Another physical realization is the so-called half-twisted model. Starting from an $N = 2$ superconformal sigma model, we can obtain a topological sigma model by changing the spins of the fermionic fields. This produces two scalar nilpotent BRST operators $Q_L, Q_R$ that can be used to define cohomological field theories. If we use both operators, or equivalently the combination $Q = Q_L + Q_R$, the resulting field theory just computes the quantum cohomology of $X$. This topological string theory is the appropriate framework to understand the Gromov-Witten invariants. If we ignore interactions for the moment, the free spectrum is actually that of a quantum field theory. Indeed, the gauging implemented by the BRST operator removes all string oscillations, forcing the states to be both left-moving and right-moving ground states
$$L_0 \psi = \overline L_0 \psi = 0. \qquad (3.19)$$
Only the harmonic zero-modes contribute. In this way one finds one quantum field for every differential form on the space-time. This is precisely the model that we discussed in the previous section.
However, as first suggested by Witten in [29], it is also possible to do this twist only for the right-moving fields. In that case, we have to compute the cohomology of the right-moving BRST operator $Q_R$. This cohomology again has harmonic representatives, now with $\overline L_0 = 0$. These states coincide with the BPS states mentioned above. The half-twisted cohomology is no longer finite-dimensional, but it is graded by $L_0$ and $F_L$, and the dimensions of these graded pieces are encoded in the elliptic genus. So, the half-twisted string is a proper string theory with an infinite tower of heavy states.
Second-quantized elliptic genera
We now come to the analogue of the theorem of Göttsche and Hirzebruch for the elliptic genus, as it was conjectured in [30] and derived in [10].
Theorem 2 [10] - The orbifold elliptic genus of the symmetric products $S^N X$ is given by
$$\chi_{orb}(S_p X; q, y) = \prod_{n>0,\, m\ge 0,\, \ell} \left(1 - p^n q^m y^\ell\right)^{-c(nm,\ell)} \qquad (3.20)$$
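The structure of (3.20), namely the appearance of $c(nm, \ell)$ from strings of winding $n$, can be exhibited with a toy example. In the sketch below the coefficients $c(m,\ell)$ are made up for illustration and do not belong to any actual Calabi-Yau $X$:

```python
# Toy expansion of the product formula (3.20): the p^N coefficient mixes
# Fourier coefficients c(nm, l) of the single-string elliptic genus.
from sympy import symbols, series, expand

p, q, y = symbols('p q y')

# made-up coefficients c(m, l) of a fictitious genus (NOT those of a real X)
c = {(0, -1): 2, (0, 0): 20, (0, 1): 2, (1, -1): -8, (1, 0): 40, (1, 1): -8}

expr = 1
for n in range(1, 4):            # windings n = 1, 2, 3
    for m in range(0, 2):
        for l in (-1, 0, 1):
            exponent = -c.get((n * m, l), 0)   # exponent -c(nm, l)
            if exponent:
                expr *= (1 - p**n * q**m * y**l) ** exponent

s = expand(series(expr, p, 0, 3).removeO())
genus = sum(cml * q**m * y**l for (m, l), cml in c.items())
assert expand(s.coeff(p, 1) - genus) == 0   # single-string sector
print(s.coeff(p, 2))  # two short strings plus one 'long string' of winding 2
```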
In order to prove this result, we have to compute the elliptic genus or, more generally, the string partition function for the orbifold $M/G$ with $M = X^N$ and $G = S_N$. The computation follows closely the computation of the orbifold Euler character that was relevant for the point-particle case.
First of all, the decomposition of the Hilbert space into superselection sectors labeled by the conjugacy class of an element $g \in G$ follows naturally. The superconformal sigma model with target space $M$ can be considered as a quantization of the loop space $LM$. If we choose as our target space an orbifold $M/G$, the loop space $L(M/G)$ will have disconnected components of loops in $M$ satisfying the twisted boundary condition
$$x(\sigma + 2\pi) = g \cdot x(\sigma), \qquad g \in G, \qquad (3.21)$$
and these components are labeled by the conjugacy classes $[g]$. In this way, we find that the Hilbert space of any orbifold conformal field theory decomposes naturally into twisted sectors. Furthermore, in the untwisted sector we have to take the states that are invariant under $G$. For the twisted sectors we can only take invariance under the centralizer $C_g$, which is the largest subgroup that commutes with $g$. If $\mathcal H_g$ indicates the sector twisted by $g$, the orbifold Hilbert space has therefore the general form [31]
$$\mathcal H(M/G) = \bigoplus_{[g]} \mathcal H_g^{C_g} \qquad (3.22)$$
[Figure 1: a generic twisted sector of the orbifold sigma model on $S^1 \times X$, with cycles of lengths 2 and 3 gluing string bits into long strings.]
In the point-particle limit $\alpha' \to 0$ the size of all loops shrinks to zero. For the twisted boundary condition this means that the loop necessarily gets concentrated on the fixed point set $M^g$, and we are in fact dealing with a point-particle on $M^g/C_g$. In this way the string computation automatically produces the prescription for the orbifold cohomology that we discussed before. Indeed, as we stress, the quantum mechanical model of the previous section can best be viewed as a low-energy limit of the string theory.
In the case of the symmetric product $S^N X$, the orbifold superselection sectors correspond to partitions $\{N_n\}$ of $N$. Furthermore, we have seen that for a given partition the fixed point locus is simply the product
$$\prod_n S^{N_n} X_{(n)} \qquad (3.23)$$
Here we introduce the notation $X_{(n)}$ to indicate a copy of $X$ obtained as the diagonal in $X^n$ where $n$ points coincide. In the case of point-particles this distinction was not very important, but for strings it is absolutely crucial. The intuition is best conveyed with the aid of fig. 1, where we depicted a generic twisted sector of the orbifold sigma model. The crucial point is that such a configuration can be interpreted as describing long strings, whose number can be smaller than $N$. Indeed, as we clearly see, a twisted boundary condition containing an elementary cycle of length $n$ gives rise to a single string of 'length' $n$ built out of $n$ 'string bits.' If the cycle permutes the coordinates $(x_1, \dots, x_n) \in X^n$ as
$$x_k(\sigma + 2\pi) = x_{k+1}(\sigma), \qquad k \in (1, \dots, n), \qquad (3.24)$$
we can construct a new loop $x(\sigma)$ by gluing the $n$ strings $x_1(\sigma), \dots, x_n(\sigma)$ together:
$$x(\sigma) = x_k(\sigma') \quad \text{if} \quad \sigma = \tfrac{1}{n}\bigl(2\pi(k-1) + \sigma'\bigr), \qquad \sigma' \in [0, 2\pi]. \qquad (3.25)$$
If the twist element is the cycle (N) ∈ S N , such a configuration describes one single long string of length N, instead of the N short strings that we would expect.
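The gluing (3.25) is easy to check numerically: $n$ string bits obeying the twisted boundary condition (3.24) assemble into one loop of period $2\pi n$. A minimal numpy sketch (illustrative only; the particular loop chosen is arbitrary):

```python
# Visualize the gluing (3.25): n string bits x_k with the twisted
# boundary condition (3.24) assemble into one loop of period 2*pi*n.
import numpy as np

n, pts = 3, 240
sigma = np.linspace(0.0, 2 * np.pi, pts, endpoint=False)

# a single smooth loop on the n-fold cover of the circle
x = lambda s: np.cos(s / n) + 0.3 * np.sin(2.0 * s / n)

# string bits: x_k(sigma) = x(sigma + 2*pi*k), k = 0, ..., n-1
bits = [x(sigma + 2 * np.pi * k) for k in range(n)]

# twisted boundary condition x_k(sigma + 2*pi) = x_{k+1 mod n}(sigma)
for k in range(n):
    assert np.allclose(x(sigma + 2 * np.pi * (k + 1)), bits[(k + 1) % n])

# concatenating the bits traverses the long string exactly once
glued = np.concatenate(bits)
assert np.allclose(glued,
                   x(np.linspace(0.0, 2 * np.pi * n, n * pts, endpoint=False)))
print("one long string of length", n, "from", n, "string bits")
```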
In this fashion we obtain from a cyclic twist $(n)$ one single copy of the loop space $LX$ that we denote as $LX_{(n)}$. We use the notation $\mathcal H_{(n)}$ for its quantization. The twisted loop space $LX_{(n)}$ is distinguished from the untwisted loop space $LX$ in that the canonical circle action is differently normalized. We now have
$$e^{i\theta P} : x(\sigma) \to x(\sigma + \theta/n). \qquad (3.26)$$
So we find that only for $\theta = 2\pi n$ do we have a full rotation of the loop. This is obvious from the twisted boundary condition (3.24). It seems to imply that the eigenvalues of the operator $P = L_0 - \overline L_0$ in this sector are quantized in units of $1/n$. Together with the fact that in the elliptic genus only states with $\overline L_0 = 0$ contribute, this would suggest that the contribution of the sector $\mathcal H_{(n)}$ to the elliptic genus is
$$\chi(\mathcal H_{(n)}; q, y) \overset{?}{=} \sum_{m,\ell} c(m,\ell)\, q^{m/n} y^\ell. \qquad (3.27)$$
However, we must remember that the centralizer of a cycle of length $n$ contains a factor $\mathbb Z_n$. This last factor did not play a role in the point-particle case, but here it does act non-trivially. In fact, it is precisely generated by $e^{2\pi iP}$. The orbifold definition includes a prescription to take the states that are invariant under the action of the centralizer. So only the states with integer eigenvalues of $P$ survive. In this way only the states with $m$ congruent to 0 modulo $n$ survive and we obtain an integer $q$-expansion,
$$\chi(\mathcal H_{(n)}; q, y) = \sum_{m,\ell} c(nm, \ell)\, q^m y^\ell. \qquad (3.28)$$
We now again assemble the various components to finish the proof of (3.20) (for more details see [10]). The infinite product formula has strong associations to automorphic forms and denominator formulas of generalized Kac-Moody algebras [35] and string one-loop amplitudes [36]; we will return to this.
General partition function
It is not difficult to repeat the above manipulations with the symmetric algebra for the full partition function. In fact, we can write a general formula for the second-quantized string Fock space, similar to what we did for the point-particle case in (2.35). This Fock space is again of the form
$$\mathcal F_p = \bigotimes_{n>0} S_{p^n} \mathcal H_{(n)}. \qquad (3.30)$$
Here $\mathcal H_{(n)}$ is the Hilbert space obtained by quantizing a single string that is wound $n$ times. It is isomorphic to the subspace of the single string Hilbert space
$$\mathcal H_{(n)} \cong \bigl\{\psi \in \mathcal H = \mathcal H_{(1)} \ \text{with}\ L_0 - \overline L_0 = 0 \ (\text{mod } n)\bigr\}. \qquad (3.31)$$
Although the rescaled operators $L_0^{(n)} = L_0/n$ and $\overline L_0^{(n)} = \overline L_0/n$ in this sector have fractional spectra compared to the single string Hamiltonians, the momentum operator still has an integer spectrum,
$$L_0^{(n)} - \overline L_0^{(n)} = 0 \ (\text{mod } 1), \qquad (3.33)$$
due to the restriction (3.31) that is implemented by the orbifold $\mathbb Z_n$ projection. It is interesting to reconsider the ground states of $\mathcal H_{(n)}$, in particular their $U(1)_L \times U(1)_R$ charges, since this will teach us something about the orbifold cohomology of $S^N X$. A ground state $\psi_{(n)} \in V_{(n)}$ that corresponds to a cohomology class $\psi \in H^{r,s}(X)$ still has fermion charges $F_L, F_R$ given by
$$F_L = r - d/2, \qquad F_R = s - d/2. \qquad (3.34)$$
Making the string longer does not affect the $U(1)$ current algebra. However, since these states now appear as ground states of a conformal field theory with target space $S^n X$, which is of complex dimension $n \cdot d$, these fermion numbers have a different topological interpretation. The corresponding degrees $r^{(n)}, s^{(n)}$ of the same state, now considered as a differential form in the orbifold cohomology of $S^n X \subset S^N X$, are therefore shifted as
$$r^{(n)} = r + (n-1)d/2, \qquad s^{(n)} = s + (n-1)d/2. \qquad (3.35)$$
That is, we have
$$V_{(n)} \cong H^{*-(n-1)\frac{d}{2},\ *-(n-1)\frac{d}{2}}(X). \qquad (3.36)$$
In the quantum mechanics limit, the twisted loops that give rise to the contribution $\mathcal H_{(n)}$ in the Fock space become point-like and produce another copy $X_{(n)}$ of the fixed point set $X$. However, this copy of $X$ is the big diagonal in $X^n$. We see that this gives another copy of $H^*(X)$, but now shifted in degree. In the full Fock space we have an infinite number of copies, shifted by positive multiples of $(d/2, d/2)$.
We can encode all of this in the generating function of Hodge numbers (3.9) as
$$h_{orb}(S_p X; y, \bar y) = \prod_{n>0}\ \prod_{r,\, s} \left(1 - p^n y^r \bar y^s\right)^{-(-1)^{r+s} h^{r,s}(X)} \qquad (3.37)$$
For the full partition function we can write a similar expression,
$$Z_{orb}(S_p X; q, \bar q, y, \bar y) = \prod_{n>0}\ \prod_{\substack{m, \bar m, \ell, \bar\ell \\ m - \bar m = 0\ (\mathrm{mod}\ n)}} \left(1 - p^n q^{m/n}\, \bar q^{\bar m/n}\, y^\ell\, \bar y^{\bar\ell}\right)^{-c(m, \bar m, \ell, \bar\ell)},$$
where
$$Z(X; q, \bar q, y, \bar y) = \sum_{m, \bar m, \ell, \bar\ell} c(m, \bar m, \ell, \bar\ell)\, q^m \bar q^{\bar m} y^\ell \bar y^{\bar\ell}$$
is the single string partition function.
Light-Cone Quantization of Quantum Field Theories
We now turn to the physical interpretation of the above results. Usually quantum field theories are quantized by splitting, at least locally, a Lorentzian space-time in the form $\mathbb R \times \Sigma$, where $\mathbb R$ represents the time direction and $\Sigma$ is a space-like Cauchy manifold.
Classically, one specifies initial data on $\Sigma$, which then evolve deterministically in time through some set of differential equations. In recent developments it has proven useful to use a null direction as the time direction. This of course complicates the initial value problem, but has some other advantages. One can try to see this as a limiting case where one uses Lorentz transformations to boost the time-like direction to an almost null direction [37].
The two-dimensional free scalar field revisited
Before we turn to the interpretation of our results on second quantization and symmetric products in terms of quantum field theory and quantum string theory, let us first revisit one of the simplest examples of a QFT and compute the partition function of a two-dimensional free scalar field. In fact, let us be slightly more general and consider a finite number $c$ of such scalar fields labeled by a $c$-dimensional real vector space $V$. (One could easily take this vector space to be graded, but for simplicity we assume it to be even.) Quantization of this model usually proceeds as follows: one chooses a two-dimensional space-time with the topology of a cylinder $\mathbb R \times S^1$ and with coordinates $(x^0, x^1)$. One then introduces the light-cone variables $x^\pm = x^0 \pm x^1$. A classical solution of the equation of motion $\Delta\phi = 0$ is decomposed as
$$\phi(x^+, x^-) = q + p\, x^0 + \phi_L(x^-) + \phi_R(x^+), \qquad (4.1)$$
where the zero-mode contribution (describing a point particle on $V$ with coordinate $q$ and conjugate momentum $p$) is isolated from the left-moving and right-moving oscillations $\phi_L(x^-)$ and $\phi_R(x^+)$. The non-zero modes have a Fourier expansion
$$\phi_L(x^-) = \sum_{n\neq 0} \frac{1}{n}\, \alpha_n e^{inx^-} \qquad (4.2)$$
with a similar expression for $\phi_R(x^+)$.
In canonical quantization the Fourier coefficients $\alpha_n$ are replaced by creation and annihilation operators with commutation relations $[\alpha_n, \alpha_m] = n\,\delta_{n+m}$. This Heisenberg algebra is realized on a Fock space $\mathcal F$ built on a vacuum state $|0\rangle$ satisfying $\alpha_n|0\rangle = 0$ for $n > 0$. This Fock space can be written in terms of symmetric products as
$$\mathcal F_p = \bigotimes_{n>0} S_{p^n} V_{(n)} = S^*\Bigl(\bigoplus_{n>0} p^n V_{(n)}\Bigr), \qquad (4.3)$$
where $V_{(n)}$ is a copy of $V$ with the property that the (chiral) oscillation number operator $N$ has eigenvalue $n$ on $V_{(n)}$. The $p$-expansion keeps track of the $N$-gradation. As a quantum operator, $N$ is defined as
$$N = \sum_{n>0} \alpha_{-n} \alpha_n. \qquad (4.4)$$
The chiral partition function is now written as a character of this module (with $p = e^{2\pi i\tau}$, $\tau$ in the upper half-plane $\mathbb H$)
$$\chi(p) = \dim \mathcal F_p = \operatorname{Tr}_{\mathcal F}\, p^N = \prod_{n>0} (1-p^n)^{-c} \qquad (4.5)$$
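For a single scalar, $c = 1$, the character (4.5) is the generating function of integer partitions, since the states $\alpha_{-n_1}\cdots\alpha_{-n_k}|0\rangle$ at level $N$ are labeled by partitions of $N$. A quick sympy check (illustrative, not part of the original text):

```python
# For c = 1, dim F_p at level N = number of partitions of N,
# matching prod_{n>0} (1 - p^n)^{-1}, cf. (4.5).
from sympy import symbols, series, npartitions

p = symbols('p')
order = 10
expr = 1
for n in range(1, order + 1):
    expr *= (1 - p**n) ** (-1)
s = series(expr, p, 0, order).removeO()

for N in range(order):
    assert s.coeff(p, N) == npartitions(N)
print([npartitions(N) for N in range(order)])  # 1, 1, 2, 3, 5, 7, 11, 15, 22, 30
```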
This character is almost a modular form of weight $-c/2$. One way to see this is by considering the partition function of the full Hilbert space $\mathcal H$, which is modular invariant.
$\mathcal H$ is obtained by combining the left-moving oscillators with the right-moving oscillators and adding the zero-mode contribution
$$\mathcal H = L^2(V) \otimes \mathcal F \otimes \overline{\mathcal F} \qquad (4.6)$$
The total chiral Hamiltonians can be written as
$$L_0 = -\tfrac{1}{2}\Delta + N, \qquad \overline L_0 = -\tfrac{1}{2}\Delta + \overline N. \qquad (4.7)$$
The full partition function is then evaluated as
$$Z = \operatorname{Tr}_{\mathcal H}\, p^{L_0 - \frac{c}{24}}\, \bar p^{\overline L_0 - \frac{c}{24}} = (\operatorname{Im}\tau)^{-c/2}\, |\eta(p)|^{-2c}, \qquad (4.8)$$
where the Dedekind eta-function is defined as
$$\eta(p) = p^{1/24} \prod_{n>0} (1 - p^n). \qquad (4.9)$$
So the proper modular object is given by
$$\operatorname{Tr}_{\mathcal F}\, p^{N - \frac{c}{24}} = \eta(p)^{-c} \qquad (4.10)$$
which is a modular form of $SL(2,\mathbb Z)$ of weight $-c/2$ (with multipliers if $c \neq 0$ (mod 24)).
The extra factor $p^{-c/24}$ is interpreted as a regularized infinite sum of zero-point energies that appear in canonical quantization.
In the Lagrangian formalism the same result is given in terms of a $\zeta$-function regularized determinant of $c$ scalar fields
$$Z = \left(\operatorname{Im}\tau\, /\, {\det}'\Delta\right)^{c/2} \qquad (4.11)$$
with $\Delta$ the Laplacian on the torus $T^2$, where the prime indicates omission of the zero-mode. This determinant can be computed in a first-quantized form as a one-loop integral
$$\log Z = -\tfrac{c}{2} \log {\det}'\Delta = -\tfrac{c}{2} \operatorname{Tr}' \log \Delta = \tfrac{c}{2} \int_0^\infty \frac{dt}{t}\, \operatorname{Tr}_{\mathcal H}\, e^{-t\Delta} \qquad (4.12)$$
where $\mathcal H = L^2(T^2)$ is now the quantum mechanical Hilbert space of a single particle moving on $T^2$. Here the RHS is defined by cutting off the integral at $t = \epsilon$ and subtracting the $\epsilon$-dependent (but $\tau$-independent) term. Let us mention one aspect of these results in particular that we will try to generalize when we consider strings instead of quantum fields in the next section: the modularity of the characters, i.e., the transformation properties under the modular group $SL(2,\mathbb Z)$, is 'explained' by the relation to a partition function of a quantum field on a two-torus $T^2$ with modulus $\tau$ and automorphism group $SL(2,\mathbb Z)$.
Discrete light-cone quantization
In light-cone quantization one works on $\mathbb R^{1,1}$ in terms of the light-cone coordinates $x^\pm$ with metric
$$ds^2 = 2\, dx^+ dx^-, \qquad (4.13)$$
but now chooses the null direction $x^+$ as the time coordinate. We will write the conjugate momenta as
$$p_+ = p^- = -i\frac{\partial}{\partial x^-}, \qquad p_- = p^+ = -i\frac{\partial}{\partial x^+}. \qquad (4.14)$$
In the usual Euclidean formulation of two-dimensional CFT we have $p_+ = L_0$, $p_- = \overline L_0$. In this setup a free particle has an eigentime given by $x^+ = p^+ t$. The light-cone Hamiltonian describes evolution in the 'light-cone time' $x^+$ and so is given by
$$H_{lc} = p_- \qquad (4.15)$$
An initial state is specified by the $x^-$-dependence for fixed $x^+$. The so-called discrete light-cone quantization (DLCQ) further assumes that the null direction $x^-$ is compact with radius $R$,
$$x^- \sim x^- + 2\pi R. \qquad (4.16)$$
(The specific value of $R$ is not very important, since it can of course be rescaled by a Lorentz boost. We will therefore often put it to one, $R = 1$.) We denote the Lorentzian manifold so obtained as $(\mathbb R \times S^1)^{1,1}$. The periodic identification of $x^-$ makes the spectrum of the conjugate momentum $p_+$ discrete,
$$p_+ = N/R, \qquad N \in \mathbb Z. \qquad (4.17)$$
(Clearly, this quantization scheme is incomplete, since we are omitting the zero-modes with $p_+ = 0$. We will only obtain the left-moving sector of the theory. We will return to this point.) Since the classical equation for a free field reads
$$\partial_+ \partial_- \phi = 0, \qquad (4.19)$$
the field $\phi(x^-)$ will have no $x^+$-dependence, and therefore the light-cone energy of its modes will vanish, $p_- = 0$. If we have $c$ of these free scalar fields $\phi(x) \in V$, quantization will result in the same chiral Fock space that we considered in the canonical quantization
$$\mathcal F_p = \bigotimes_{n>0} S_{p^n} V_{(n)} \qquad (4.20)$$
and the light-cone partition function is given by the same infinite product
$$\operatorname{Tr}_{\mathcal F}\, p^{P_+} = \prod_{n>0} (1-p^n)^{-c} \qquad (4.21)$$
Note that the eigenvalues of the longitudinal momentum $P_+ = n$ are always positive. This is due to the fact that the oscillation numbers $\alpha_n$ form a Heisenberg algebra. We recognize in these formulas our computations of the Euler number of the symmetric products of a space $X$ with $\chi(X) = c$. To explain this relation we now consider the light-cone quantization of field theories in higher dimensions.
Higher-dimensional scalar fields in DLCQ
Things become a bit more interesting if we consider a free scalar field on a more general space-time of the form
$$M^{1,d+1} = (\mathbb R \times S^1)^{1,1} \times X^d, \qquad (4.22)$$
with $X$ compact Riemannian and with light-cone coordinates $(x^+, x^-, x)$. We adopt light-cone quantization and consider as initial data the field configuration on $x^+ = $ constant. The light-cone Hamiltonian $p_-$ again describes the evolution in $x^+$. It will be convenient to perform a Fourier transformation in the light-cone coordinate $x^-$ and consider a basis of field configurations of the form
$$\phi(x^+, x^-, x) = e^{i(p_- x^+ + p_+ x^-)}\, \phi_m(x) \qquad (4.23)$$
with $p_+ = n$ (we put $R = 1$) and $\phi_m(x)$ an eigenstate of the transverse Hamiltonian
$$H = -\tfrac{1}{2}\Delta^{(X)}, \qquad H\phi_m = h_m \phi_m. \qquad (4.24)$$
The equation of motion on the space-time $M$, $\Delta^{(M)}\phi = 0$, then gives the so-called mass-shell relation
$$p_- = -\frac{1}{2p_+}\Delta^{(X)} = \frac{1}{n}\, h_m. \qquad (4.25)$$
Here we see an interesting phenomenon. The light-cone energy is given by a non-relativistic expression of the form $p^2/2m$, where $p$ is the transversal momentum and the 'mass' $m$ is given by the longitudinal momentum $p_+ = n$. (On a curved manifold $p^2$ is replaced by the eigenvalues $h_m$ of the Laplacian.) The appearance of this non-relativistic expression has its geometric explanation in the fact that the stabilizer group of a null-direction in $\mathbb R^{1,n+1}$ is the Galilean group of $\mathbb R^n$. The formula implies that for a particle with $p_+ = n$, the light-cone energy $p_-$ is rescaled by a factor of $1/n$.
Note that quite generally in light-cone quantization the symmetries of the underlying space-time manifold are not all manifest. If we work with the Minkowski space-time R 1,n+1 the Lorentz group SO(1, n + 1) is partly non-linearly realized. For interacting QFT's the proof of Lorentz invariance of a light-cone formulation is highly non-trivial. In DLCQ the Lorentz-invariance is only expected in the limit R → ∞. Since the value of R can be rescaled by a Lorentz boost, this limit is equivalent to the large N limit, N → ∞. Again, for interacting theories the appearance of Lorentz-invariance in this limit is not obvious.
Upon quantization we obtain in the present case a Fock space that is of the form
$$\mathcal F_p = \bigotimes_{n>0} \bigoplus_{N\ge 0} p^{nN}\, S^N \mathcal H_{(n)} \qquad (4.26)$$
with $\mathcal H_{(n)} = L^2(X)$ with $p_- = H_{(n)} = H/n$ the rescaled QM Hamiltonian. Now the light-cone energies $p_-$ will not typically vanish, and the full partition function is given by a two-variable function,
$$Z(X; p, q) = \operatorname{Tr}_{\mathcal F}\, p^{P_+} q^{P_-}, \qquad (4.27)$$
with $P_+, P_-$ the total light-cone momentum operators (with eigenvalues $p_+, p_-$).
From the above description it should have become clear that this partition function can be completely identified with the quantum mechanics on the symmetric product space $S_p X = \bigoplus_N p^N S^N X$ that we discussed in such detail in section 2. We therefore obtain:

Theorem 3 - The discrete light-cone quantization of a free scalar field on the space-time $M = (\mathbb R \times S^1)^{1,1} \times X$ with total longitudinal momentum $p_+ = N$ is given by the quantum mechanics on the orbifold symmetric product $S^N X$,
$$\mathcal H^{QFT}(X) = \mathcal H^{QM}_{orb}(SX). \qquad (4.28)$$
Furthermore, the light-cone Hamiltonian $p_-$ is identified with the non-relativistic quantum mechanics Hamiltonian $H$.
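Theorem 3 can be verified at low order: the $p^2$ coefficient of $\prod_{n,m}(1 - p^n q^{h_m/n})^{-1}$ must equal the symmetrized two-particle spectrum plus a $\mathbb Z_2$-twisted copy of $X$ with halved energies. In the sketch below the single-particle spectrum $\{h_m\}$ is an arbitrary made-up toy input:

```python
# Check the N = 2 sector of the DLCQ partition function of Theorem 3:
# coeff of p^2 in prod (1 - p^n q^{h/n})^{-1} = Sym^2(spectrum) + twisted sector.
from sympy import symbols, series, Rational, expand

p, q = symbols('p q')
spectrum = [0, 1, 1, 2]   # toy single-particle energies h_m (made up)
order_p = 3

expr = 1
for n in range(1, order_p + 1):
    for h in spectrum:
        expr *= (1 - p**n * q**Rational(h, n)) ** (-1)
s = series(expr, p, 0, order_p).removeO().expand()

# untwisted: symmetrized two-particle energies h_i + h_j (i <= j)
sym2 = sum(q**(spectrum[i] + spectrum[j])
           for i in range(len(spectrum)) for j in range(i, len(spectrum)))
# Z_2-twisted sector: one particle on the diagonal X with H/2, energies h_m/2
twisted = sum(q**Rational(h, 2) for h in spectrum)

assert expand(s.coeff(p, 2) - sym2 - twisted) == 0
print("N = 2 DLCQ sector = Sym^2 + Z_2-twisted copy, as in Theorem 3")
```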
The supersymmetric generalization
It is easy to extend this construction to a physical system that describes the supersymmetric quantum mechanics on $SX$. In that case we want to have arbitrary differential forms on $X$, so our fundamental fields will be free $k$-forms $\phi_k \in \Omega^k(M)$ with $0 \le k \le d$ on the space-time $M = (\mathbb R \times S^1)^{1,1} \times X$. These fields have a quadratic action (with fermionic statistics if $k$ is odd)
$$\int_M \tfrac{1}{2}\, d\phi \wedge * d\phi, \qquad \phi = \sum_k \phi_k \in \Omega^*(M). \qquad (4.29)$$
This gives as equation of motion the Maxwell equation
$$d * d\phi = 0. \qquad (4.30)$$
This Lagrangian is invariant under the gauge symmetry
$$\phi_k \to \phi_k + d\lambda_{k-1}, \qquad \lambda \in \Omega^{k-1}(M), \qquad (4.31)$$
giving $\phi_k$ the interpretation of a generalized $k$-form connection with curvature $d\phi_k$. This gauge symmetry can be fixed by requiring
$$\iota_{\partial/\partial x^+}\, \phi = 0, \qquad (4.32)$$
a condition that we write as $\phi_+ = 0$. With this gauge condition the equation of motion can be used to eliminate the component $\phi_-$ in terms of the transversal components $\phi \in \Omega^*(X)$, leaving only the form on the transverse space $X$ as physical. All of this is well-known from the description of the RR fields of the light-cone type II superstring (or supergravity). With this gauge fixing we naturally reduce the second-quantized light-cone description to supersymmetric quantum mechanics on $SX$. We therefore find exactly the field theoretic description of our SQM model of section 2. It describes the multi-form abelian gauge field theory on $M = (\mathbb R \times S^1)^{1,1} \times X$ in DLCQ; in particular we have
$$\mathcal H^{QFT}(X) = \mathcal H^{SQM}_{orb}(S_p X) = \bigotimes_n S_{p^n}\, \Omega^*(X)_{(n)} \qquad (4.33)$$
where powers of $p$ keep track of the longitudinal momentum $p_+$. Particularly interesting is the zero-energy $p_- = 0$ sector $V^{QFT} \subset \mathcal H^{QFT}$. Since $p_-$ is identified with the SQM Hamiltonian, these states correspond to the ground states of the supersymmetric quantum mechanics, and we have
$$V^{QFT} = \bigotimes_{n>0} S_{p^n} H^*(X) \qquad (4.34)$$
and the partition function of this zero $p_-$ sector reproduces exactly the orbifold Euler character
$$\operatorname{Tr}_{V^{QFT}}\, (-1)^F p^{P_+} = \chi_{orb}(S_p X). \qquad (4.35)$$
The modular properties are now explained along the lines of section 1. This partition function can be computed in a Lagrangian formulation by considering the compact space-time $T^2 \times X$. The explicit $T^2$ factor explains the occurrence of $SL(2,\mathbb Z)$. The modular properties are particularly nice if we choose our manifold $X$ to be a K3 surface with $\chi(X) = 24$. We then almost have a modular object without multipliers,
$$\chi_{orb}(S_p X) = \frac{p}{\Delta(p)} \qquad (4.36)$$
with $\Delta(p) = \eta^{24}(p)$ the discriminant, a cusp form of weight 12 for $SL(2,\mathbb Z)$. The correction $p^{-\chi/24}$ again has an interpretation as the regularized sum of zero-point energies. (Each boson contributes $-1/24$, each fermion $+1/24$.)
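The modular weight of $\Delta$ can also be checked numerically from its product representation; the following sketch (illustrative, not part of the original text) verifies $\Delta(-1/\tau) = \tau^{12}\Delta(\tau)$ at a sample point of the upper half-plane:

```python
# Numerical check that Delta(p) = eta(p)^24 = p prod (1 - p^n)^24
# has weight 12: Delta(-1/tau) = tau^12 Delta(tau).
import numpy as np

def delta(tau, terms=60):
    p = np.exp(2j * np.pi * tau)
    out = p
    for n in range(1, terms + 1):
        out *= (1 - p**n) ** 24
    return out

tau = 0.31 + 1.27j            # arbitrary point in the upper half-plane
lhs = delta(-1 / tau)
rhs = tau**12 * delta(tau)
assert abs(lhs - rhs) / abs(rhs) < 1e-10
print("Delta(-1/tau) = tau^12 Delta(tau) verified numerically")
```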
Light-Cone Quantization of String Theories
It is now straightforward to generalize all this to string theory along the lines of:
quantum mechanics on $SX$ $\to$ quantum field theory on $X$
2-d conformal field theory on $SX$ $\to$ quantum string theory on $X$
The interest in this generalization lies in particular in the absence of a good Lorentz-invariant description of non-perturbative second-quantized quantum string theory. So we can gain something by studying the reformulation in terms of sigma models on symmetric products. It is not difficult to give a string theory interpretation of our results in section 3 on sigma models on symmetric product spaces. Clearly we want to identify the DLCQ of string theory on $(\mathbb R \times S^1)^{1,1} \times X$ with the SCFT on $SX$. An obvious question is which type of string theory we are discussing. Indeed, the number of consistent interacting closed string theories is highly restricted: the obvious candidates are
1. Type II and heterotic strings in 10 dimensions.
2. Topological strings in all even dimensions.
3. Non-abelian strings in 6 dimensions.
Here the last example only recently emerged, and we will return to it in section 7. We will start with the Type II string.
The IIA superstring in light-cone gauge
The physical states of the ten-dimensional type II superstring are most conveniently described in the light-cone Green-Schwarz formalism. We usually think about the superstring in terms of maps of a Riemann surface $\Sigma$ into flat space-time $\mathbb R^{1,9}$. But in light-cone gauge we make a decomposition $\mathbb R^{1,1} \times \mathbb R^8$ with corresponding local coordinates $(x^+, x^-, x^i)$. The physical degrees of freedom are then completely encoded in the transverse map
$$x : \Sigma \to \mathbb R^8. \qquad (5.1)$$
The model has 16 supercharges (8 left-moving and 8 right-moving) and carries a $Spin(8)$ R-symmetry. More precisely, apart from the bosonic field $x$, we also have fermionic fields that are defined for a general 8-dimensional transverse space $X$ as follows. Let $S^\pm$ denote the two inequivalent 8-dimensional spinor representations of $Spin(8)$. We use the same notation to indicate the corresponding spinor bundles of $X$. Let $V$ denote the vector representation of $Spin(8)$ and let $T_X$ be the associated tangent bundle. In this notation we have
$$\partial x \in \Gamma(K_\Sigma \otimes x^* T_X), \qquad \bar\partial x \in \Gamma(\overline K_\Sigma \otimes x^* T_X). \qquad (5.2)$$
Now the left-moving and right-moving fermions $\theta, \bar\theta$ are sections of
$$\theta \in \Gamma(K_\Sigma^{1/2} \otimes x^* S^+), \qquad \bar\theta \in \Gamma(\overline K_\Sigma^{1/2} \otimes x^* S^\pm) \qquad (5.3)$$
The choice of spin structure on $\Sigma$ is always Ramond or periodic. The different choices of $Spin(8)$ representations for the right-moving fermion $\bar\theta$ ($S^+$ or $S^-$) give the distinction between the IIA and IIB string. We will work with the IIA string, for which the representation of $\bar\theta$ is chosen to be the conjugate spinor $S^-$, but the IIB string follows the same pattern.
With these fields the action of the first-quantized sigma model is simply the following free CFT
$$S = \int d^2\sigma\, \left( \tfrac{1}{2}\, \partial x^i \bar\partial x^i + \theta^a \bar\partial \theta^a + \bar\theta^{\dot a} \partial \bar\theta^{\dot a} \right). \qquad (5.4)$$
This model has a Hilbert space that is of the form
$$\mathcal H = L^2(\mathbb R^8) \otimes V \otimes \mathcal F \otimes \overline{\mathcal F}. \qquad (5.5)$$
We recognize familiar components: the bosonic zero-mode space $L^2(\mathbb R^8)$ describes the quantum mechanics of the center of mass $x^i$ of the string. The fermionic zero-modes $\theta^a, \bar\theta^{\dot a}$ give rise to the $16 \times 16$ dimensional vector space of ground states
$$V \cong (V \oplus S^-) \otimes (V \oplus S^+) \qquad (5.6)$$
where the spinor representations should be considered odd. This space forms a representation of the Clifford algebra $\mathrm{Cliff}(S^+) \otimes \mathrm{Cliff}(S^-)$ generated by the fermion zero modes
$$\Gamma^a = \oint \frac{d\sigma}{2\pi}\, \theta^a(\sigma), \qquad \overline\Gamma^{\dot a} = \oint \frac{d\sigma}{2\pi}\, \bar\theta^{\dot a}(\sigma). \qquad (5.7)$$
Using the triality $S^+ \to V \to S^- \to S^+$ of $Spin(8)$ this maps to the usual Clifford representation of $\mathrm{Cliff}(V)$ on $S^+ \oplus S^-$. Finally the Fock space $\mathcal F$ of non-zero-modes is given by
$$\mathcal F_q = \bigotimes_{n>0} \Lambda_{q^n} S^- \otimes S_{q^n} V \qquad (5.8)$$
with a similar expression for $\overline{\mathcal F}$ with $S^-$ replaced by $S^+$.
with a similar expression for F with S − replaced by S + . In this light-cone gauge the coordinate x + is given by
x + (σ, τ ) = p + τ (5.9)
for fixed longitudinal momentum $p^+ > 0$, whereas $x^-$ is determined by the constraints
$$\partial x^- = \frac{1}{p^+}(\partial x)^2, \qquad \bar\partial x^- = \frac{1}{p^+}(\bar\partial x)^2. \qquad (5.10)$$
The Hilbert space of physical states of a single string with longitudinal momentum $p^+$ is given by the CFT Hilbert space $\mathcal H$ restricted to states with zero world-sheet momentum, the level-matching condition
$$P = L_0 - \overline L_0 = 0. \qquad (5.11)$$
The light-cone energy $p^-$ is then determined by the mass-shell relation
$$p^- = \frac{1}{p^+}(L_0 + \overline L_0) = \frac{1}{p^+}\, H. \qquad (5.12)$$
One can also consider DLCQ with the null coordinate $x^-$ periodically identified with radius $R$. This induces two effects. First, the momentum $p^+$ is quantized as $p^+ = n/R$, $n \in \mathbb Z_{>0}$. Second, it allows the string to be wrapped around the compact null direction, giving it a non-trivial winding number
$$w^- = \oint_{S^1} dx^- = 2\pi m R, \qquad m \in \mathbb Z. \qquad (5.13)$$
However, using the constraints (5.10) we find that
$$w^- = \frac{2\pi}{p^+}(L_0 - \overline L_0) = \frac{2\pi R}{n}(L_0 - \overline L_0). \qquad (5.14)$$
So, in order for $m$ to be an integer, we see that the CFT Hilbert space must now be restricted to the space $\mathcal H_{(n)}$ consisting of all states that satisfy the modified level-matching condition
$$P = L_0 - \overline L_0 = 0 \ (\text{mod } n) \qquad (5.15)$$
This is exactly the definition of the Hilbert space $\mathcal H_{(n)}$ in section 3.5. (In a similar spirit the uncompactified model had Hilbert space $\mathcal H_{(\infty)}$.) This motivates us to describe the second-quantized Type IIA string in terms of a SCFT on the orbifold
$$S^N \mathbb R^8 = \mathbb R^{8N}/S_N. \qquad (5.16)$$
Indeed, in this correspondence we have:
$$p^+ = N, \qquad p^- = H = L_0 + \overline L_0, \qquad (5.17)$$
$$w^- = P = L_0 - \overline L_0. \qquad (5.18)$$
This gives the following form for the second-quantized Fock space
$$\mathcal F_p = \bigotimes_{n>0} S_{p^n} \mathcal H_{(n)} \qquad (5.19)$$
where $p$ again keeps track of $p^+$. This is both the Hilbert space of the free string theory and of the orbifold sigma model on $S_p \mathbb R^8$. So we can identify their partition functions
$$Z_{string}(\mathbb R^8; p, q, \bar q) = Z_{SCFT}(S_p \mathbb R^8; q, \bar q), \qquad (5.20)$$
with
$$Z_{string}(\mathbb R^8; p, q, \bar q) = \operatorname{Tr}_{\mathcal F}\, p^{P^+}\, q^{P^- + W^-}\, \bar q^{P^- - W^-},$$
$$Z_{SCFT}(S_p \mathbb R^8; q, \bar q) = \sum_{N\ge 0} p^N\, \operatorname{Tr}_{\mathcal H(S^N \mathbb R^8)}\, q^{L_0 - N/2}\, \bar q^{\overline L_0 - N/2}. \qquad (5.21)$$
(Here we used that the central charge is $12N$.) Note that this sigma model is not precisely of the form that we discussed in section 3. The world-sheet fermions now transform as spinors instead of vectors of $Spin(8)$. The modifications that one has to make are however completely straightforward. In particular, the $U(1)$'s whose quantum numbers gave the world-sheet fermion numbers $F_{L,R}$ only emerge if we break $Spin(8)$ to $SU(4) \times U(1) \cong Spin(6) \times Spin(2)$.
This issue is directly related to compactifications. Can we consider for the transversal space instead of $\mathbb R^8$ a compact Calabi-Yau four-fold $X$ and make contact with our computations of the elliptic genus of $SX$? In the non-linear sigma models of section 3 the fermion fields were always assumed to take values in the (pull-back of the) tangent bundle to the target space $X$, whereas in the Green-Schwarz string they are sections of spinor bundles. Note however that on a Calabi-Yau four-fold we have a reduction of the structure group $SO(8)$ to $SU(4)$. Under this reduction we have the following well-known decomposition of the three 8-dimensional representations of $Spin(8)$ in terms of the representations of $SU(4) \times U(1)$ (up to triality)
$$V \to 4_{1/2} \oplus \bar 4_{-1/2},$$
$$S^+ \to 4_{-1/2} \oplus \bar 4_{1/2}, \qquad (5.22)$$
$$S^- \to 1_1 \oplus 6_0 \oplus 1_{-1}.$$
So we see that, as far as the $SU(4)$ symmetry is concerned, if we only use the $S^+$ representation, we could just as well have worked with the standard $N = 2$ SCFT sigma-model, since this spinor bundle is isomorphic to the tangent bundle. This remark naturally picks out the Type IIB string, whose world-sheet fermions carry only one $Spin(8)$ chirality that we can choose to be $S^+$. So in this formulation only the type IIB light-cone model allows for a full compactification on a Calabi-Yau four-fold. This fact is actually well-known. The IIA string acquires an anomaly $\chi(X)/24$ that has to be cancelled by including some net number of strings [38, 39]. For the Type IIB string this translates under a T-duality to a net momentum in the vacuum; we will see this fact again in a moment. We can also work with only left-moving BPS strings that are related to the elliptic genus of the SCFT. In that case it does not matter if we choose the IIA or IIB string.
Elliptic genera and automorphic forms
If we just want to discuss free strings, without interactions, say in light-cone gauge without insisting on Lorentz-invariance, there are many more possible strings than the ten-dimensional superstring. In particular we can consider a string whose transverse degrees of freedom are described by the $N = 2$ supersymmetric sigma model on the Calabi-Yau space $X$. This string has as its low-energy, massless spectrum the field theory that consists of all $k$-form gauge fields, which we discussed in section 4.3. This is by the way exactly the field content of the topological string, which can be defined in any (even) dimension, but which only has non-vanishing interactions without gravitational descendents in space-time dimension 6. So this critical case corresponds to choosing a transversal four-manifold, i.e., a complex surface $X$. If $X$ has to be compact, that restricts us to $T^4$ or K3. We will return to this topic in section 7.
Therefore another class of free string theories to be considered in the light-cone formulation are the 'untwisted' versions of the topological string, where we do not impose the usual BRST cohomology $Q_L = Q_R = 0$ that reduces the string to its massless fields. In fact, another interesting case is the half-twisted string (see section 3.3) in which we only impose $Q_R = 0$. For that model we expect to make contact with the elliptic genus.
Indeed, in that case there is a straightforward explanation of the automorphic properties of the elliptic genus of the symmetric product. We recall the main formula (3.20) of Theorem 2, which we now interpret as a partition function of second-quantized BPS strings,
$$Z_{string}(X; p, q, y) = \chi_{orb}(S_p X; q, y) = \prod_{n>0,\, m\ge 0,\, \ell} \left(1 - p^n q^m y^\ell\right)^{-c(nm,\ell)}.$$
Note that since the strings carry only left-moving excitations, $\overline L_0 = 0$, the space-time Hamiltonian $p^-$ and winding number $w^-$ can be identified, and thus the partition function represents the space-time character
$$Z_{string}(X; p, q, y) = \operatorname{Tr}_{\mathcal F}\, (-1)^F p^{P^+} q^{W^-} y^{F}. \qquad (5.25)$$
This is precisely the object that we promised to study in our discussion. We will parametrize $p, q, y$ as
$$p = e^{2\pi i\sigma}, \qquad q = e^{2\pi i\tau}, \qquad y = e^{2\pi iz}, \qquad (5.26)$$
or equivalently by a $2\times 2$ period matrix
$$\Omega = \begin{pmatrix} \sigma & z \\ z & \tau \end{pmatrix} \qquad (5.27)$$
in the Siegel upper half-space, $\det \operatorname{Im} \Omega > 0$. The group $Sp(4,\mathbb Z) \cong SO(3,2,\mathbb Z)$ acts on the matrix $\Omega$ by fractional linear transformations, $\Omega \to (A\Omega + B)(C\Omega + D)^{-1}$. Now the claim is that the string partition function $\chi_{orb}(S_p X; q, y)$ is almost equal to an automorphic form for the group $SO(3,2,\mathbb Z)$, of the infinite product type that appears in the work of Borcherds [35]. This is just the string theory generalization of the fact that the Euler number $\chi_{orb}(S_p X)$ is almost a modular form of $SL(2,\mathbb Z)$. In fact, the Euler number is obtained from the elliptic genus in the limit $y \to 1$, $z \to 0$, where the $q$-dependence disappears. In this case, the automorphic group degenerates as
$$Sp(4,\mathbb Z) \to SL(2,\mathbb Z) \times SL(2,\mathbb Z) \qquad (5.28)$$
where only the first $SL(2,\mathbb Z)$ factor acts non-trivially on $p$.
The precise form of the corrections needed to get a true automorphic function $\Phi(p, q, y)$ for a general Calabi-Yau $d$-fold $X$ has been worked out in detail in [26]. It is defined by the product
$$\Phi(p, q, y) = p^a q^b y^c \prod_{(n,m,\ell)>0} \left(1 - p^n q^m y^\ell\right)^{c(nm,\ell)} \qquad (5.29)$$
where the positivity condition means: $n, m \ge 0$ with $\ell > 0$ in the case $n = m = 0$. The 'Weyl vector' $(a, b, c)$ is defined by
$$a = b = \chi(X)/24, \qquad c = \sum_\ell \frac{\ell - |\ell|}{4}\, c(0,\ell). \qquad (5.30)$$
Here the coefficients $c(0,\ell)$ are the partial Euler numbers
$$c(0, r - \tfrac{d}{2}) = \sum_s (-1)^{s+r} h^{r,s} \qquad (5.31)$$
One can then show that $\Phi$ is an automorphic form of weight $c(0,0)/2$ for the group $O(3,2,\mathbb Z)$, for a suitable quadratic form of signature (3,2). The form $\Phi$ actually follows from a standard one-loop string amplitude defined as an integral over the fundamental domain [36, 27]. The integrand consists of the genus one partition function of the string on $X \times T^2$ and has a manifest $SO(3,2,\mathbb Z)$ T-duality invariance. The $SO(3,2,\mathbb Z)$ appears in the following way. First of all, as explained in the introduction, strings on $T^2$ have two quantized momenta and two winding numbers, giving the Narain lattice $\Gamma^{2,2}$. For a transversal Calabi-Yau space, there are also the left-moving and right-moving Fermi numbers $F_L, F_R$. Since we restrict to right-moving ground states in the elliptic genus, only $F_L$ gives another integer conserved quantum number $\ell$. Adding this charge to the Narain lattice enlarges it to $\Gamma^{3,2}$. Moreover, it allows us to extend the moduli $\sigma, \tau$ of the two-torus by another complex parameter $z$ that couples to $F_L = \ell$. Technically, $z$ has an interpretation as a Wilson loop that parametrizes the $U(1)_L$ bundle over $T^2$. Together, $\sigma, \tau, z$ parametrize the lattice $\Gamma^{3,2}$; they can be considered as a point on the symmetric space
$$SO(3,2)/SO(3) \times SO(2) \cong \mathbb H_{2,1}. \qquad (5.32)$$
Now the strategy is to compute the string partition function through a one-loop amplitude
$$Z_{string}(X; p, q, y) = \exp F_{string}(X; p, q, y). \qquad (5.33)$$
Note that $F_{string}$ is the partition function for maps from the world-sheet elliptic curve, with a modulus that we denote as $\tau'$, to the space-time that contains an elliptic curve with modulus $\tau$,
$$T^2_{\tau'} \to T^2_{\sigma,\tau,z} \times X. \qquad (5.34)$$
It is easy to confuse the two elliptic curves! One now computes an integral over the fundamental domain of the world-sheet modulus $\tau'$ that has the form
$$F_{string} = \frac{1}{2} \int \frac{d^2\tau'}{\tau'_2} \sum_{-\frac{d}{2}+1 \le \epsilon \le \frac{d}{2}}\ \sum_{(p_L, p_R) \in \Gamma^{3,2}_\epsilon}\ \sum_{n \in 2d\mathbb Z - \epsilon^2} e^{i\pi(\tau' p_L^2 - \bar\tau' p_R^2)}\, c_\epsilon(n)\, e^{\pi i n \tau'/d} \qquad (5.35)$$
where the notation $\Gamma^{3,2}_\epsilon$ indicates that $\ell = \epsilon$ (mod $2d$), and where the coefficients $c_\epsilon(n)$ are defined in terms of the expansion coefficients of the elliptic genus of $X$ as $c(m,\ell) = c_\epsilon(2dm - \ell^2)$, with $\epsilon = \ell$ (mod $2d$). This integral can be computed using the by-now standard techniques of [40, 36, 27, 28]. The final result of the integration is [26]
$$F_{string}(X; p, q, y) = -\log\left[ (\det \operatorname{Im}\Omega)^{c(0,0)/2}\, |\Phi(p, q, y)|^2 \right]. \qquad (5.36)$$
Since the integral $F$ is by construction invariant under the T-duality group $O(3,2,\mathbb Z)$, this determines the automorphic properties of $\Phi$. The factor $\det \operatorname{Im}\Omega$ transforms with weight $-1$, which fixes the weight of the form $\Phi$ to be $c(0,0)/2$. This formula should be contrasted with the analogous computation for the zero-modes (4.8), i.e., the field theory limit,
$$F_{QFT}(\tau, \bar\tau) = -\log\left[ (\operatorname{Im}\tau)^{c/2}\, |\eta^c(\tau)|^2 \right], \qquad c = \chi(X). \qquad (5.37)$$
In the special case of K3 the infinite product $\Phi(\Omega)$ is a well-known automorphic form [41], see also [42, 43]. First of all, the elliptic genus of K3 is the unique (up to a scalar) weak Jacobi form of weight 0 and index 1. Realizing K3 as a Kummer surface (resolving the orbifold $T^4/\mathbb Z_2$), we see that the elliptic genus can be written in terms of genus one theta-functions as
$$\chi(K3; q, y) = 2^3 \sum_{\text{even } \alpha} \frac{\vartheta_\alpha^2(z; \tau)}{\vartheta_\alpha^2(0; \tau)}. \qquad (5.38)$$
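Formula (5.38) can be expanded directly. The sketch below (illustrative) builds the even theta quotients as $q$-series, writing $q = t^8$ so that all exponents are integers, and recovers the familiar leading behaviour $\chi(K3; q, y) = 2y^{-1} + 20 + 2y + O(q)$:

```python
# Expand the K3 elliptic genus (5.38) in a q-series with Jacobi theta
# functions (conventions q = e^{2 pi i tau}, y = e^{2 pi i z}; set q = t^8).
from sympy import symbols, series, expand

t, y = symbols('t y')
nmax = 3   # theta-series cutoff; ample for the order kept below

# theta_2(z)/y^{1/2}, theta_3(z), theta_4(z) and their z = 0 values,
# with q^{1/8} = t; the overall y^{1/2} of theta_2 is restored when squaring
th2z = sum(t**((2*n + 1)**2) * y**n for n in range(-nmax, nmax))
th20 = 2 * sum(t**((2*n + 1)**2) for n in range(nmax))
th3z = sum(t**(4*n**2) * y**n for n in range(-nmax, nmax + 1))
th30 = sum(t**(4*n**2) for n in range(-nmax, nmax + 1))
th4z = sum((-1)**n * t**(4*n**2) * y**n for n in range(-nmax, nmax + 1))
th40 = sum((-1)**n * t**(4*n**2) for n in range(-nmax, nmax + 1))

chi = 8 * (y * th2z**2 / th20**2 + th3z**2 / th30**2 + th4z**2 / th40**2)
s = expand(series(chi, t, 0, 9).removeO())
print(s.subs(t, 0))   # q^0 term: 2/y + 20 + 2*y
```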
If we now identify $\Omega$ with the period matrix of a genus two Riemann surface, we can rewrite the automorphic form in terms of genus two theta-functions,
$$\Phi(\Omega) = 2^{-12} \prod_{\text{even } \alpha} \vartheta[\alpha](\Omega)^2 \qquad (5.39)$$
In the work of Gritsenko and Nikulin [41] it is shown that $\Phi$ also has an interpretation as the denominator of a generalized Kac-Moody algebra. It is a rather obvious conjecture that this GKM should be given by the algebra of BPS states induced by the string interaction. The full story for K3 is quite beautiful and explained in [28]. See [44, 45] for more on the connection with GKM's. Summarizing, we have seen the following:
1. The (BPS) string theory partition function factorizes in left-moving and right-moving contributions that are holomorphic functions of the moduli $p, q, y$.
2. These three factors (the prefactor $p^a q^b y^c$ in (5.29)) have the following interpretation. The first factor is again the regulated zero-point energy, very similar to the field theory result. The second factor is due to the bosonic and fermionic zero-modes. (Recall that the low-energy field theory describes general differential forms on $T^2 \times X$.) The third factor is there to restore the symmetry in $p$ and $q$. It can only be understood using T-duality.
3. The full partition function is invariant because the zero-mode contribution adds a non-holomorphic factor $(\det \operatorname{Im}\Omega)^{-c(0,0)/2}$.
4. The holomorphic contributions are characters of an infinite-dimensional generalized Kac-Moody algebra, directly related to the creation and annihilation operators of the string Fock space and their interactions.
5. The modularity of the characters, i.e., the transformation properties under the automorphic group $SO(3,2,\mathbb Z)$, is 'explained' by the relation to a partition function of a string on a two-torus $T^2$ with an associated line bundle with moduli $\tau, \sigma, z$ and T-duality group $SO(3,2,\mathbb Z)$.
Matrix Strings and Interactions
Up to now we have only considered free theories and observed how in light-cone quantization these models could be reformulated using first-quantized theories on symmetric products. Now we want to take advantage of this relation to include interactions. This has proven possible for two important examples: 1) the ten-dimensional IIA superstring and some of its compactifications, and 2) the class of (2,0) supersymmetric six-dimensional non-abelian string theories. By taking the low-energy limit, similar formulations for the field theory limits follow. The essential starting point in these constructions is the beautiful Ansatz for a non-perturbative formulation of M-theory known as matrix theory [5]. See for example the reviews [8, 7, 9] for more information about matrix theory.
Supersymmetric Yang-Mills theory
Matrix string theory gives a very simple Ansatz of what non-perturbative IIA string theory looks like in light-cone gauge [12,13,11]. It is simply given by the maximally supersymmetric two-dimensional Yang-Mills theory with gauge group U(N) in the limit N → ∞ (or with finite N in DLCQ).
To be more precise, let us consider two-dimensional $U(N)$ SYM theory with 16 supercharges. It can be obtained by dimensionally reducing the $N = 1$ SYM theory in 10 dimensions. Its field content consists of the following fields. First we pick a (necessarily trivial) $U(N)$ principal bundle $P$ on the world-sheet $S^1 \times \mathbb R$. Let $A$ be a connection on this bundle. We further have 8 scalar fields $X^i$ in the vector representation $V$ of $Spin(8)$, 8 left-moving fermions $\theta^a$ in the spinor representation $S^+$, and 8 right-moving fermions $\bar\theta^{\dot a}$ in the conjugate spinor representation $S^-$. All these fields are Hermitian $N \times N$ matrices, or if one wishes, sections of the adjoint bundle $\mathrm{ad}(P)$.
The action for the SYM theory reads
$$S_{SYM} = \int d^2\sigma\, \operatorname{Tr}\Bigl( \tfrac{1}{2}|DX^i|^2 + \theta^a \bar D\theta^a + \bar\theta^{\dot a} D\bar\theta^{\dot a} + \frac{1}{2g^2}|F_A|^2 + g^2 \sum_{i<j} [X^i, X^j]^2 + g\, \bar\theta^{\dot a}\gamma^i_{a\dot a}[X^i, \theta^a] \Bigr) \qquad (6.1)$$
Here $g$ is the SYM coupling constant, a dimensionful quantity with dimension 1/length in two dimensions. This means in particular that the SYM model is not conformally invariant. In fact, at large length scales (in the IR) the model becomes strongly interacting. So we have a one-parameter family of QFT's labeled by the coupling constant $g$, or equivalently a length scale $\ell = 1/g$. The relation with string theory is the following. First of all, for finite $N$ the Hilbert space of states of the SYM theory should be identified with the DLCQ second-quantized IIA string Hilbert space. The integer $N$ that gives the rank of the gauge group is then related to the total longitudinal momentum in the usual way as
$$p^+ = N/R, \qquad (6.2)$$
whereas the total light-cone energy is given by
$$p^- = \frac{N}{p^+}\, H_{SYM} \qquad (6.3)$$
with $H_{SYM}$ the Hamiltonian of the SYM model. Note that in the decompactification of the null circle, where we will take $N, R \to \infty$ keeping their ratio finite, only SYM states with energy
$$H_{SYM} \sim \frac{1}{N} \qquad (6.4)$$
will contribute a finite amount to $p^-$. Finally, the IIA string coupling constant $g_s$ (a dimensionless constant) is identified as
$$g_s = (g \ell_s)^{-1} \qquad (6.5)$$
with $\ell_s$ the string length, $\alpha' = \ell_s^2$. From this identification we see that free string theory ($g_s = 0$) is recovered at strong SYM coupling ($g = \infty$). This is equivalent to the statement that free string theory is obtained in the IR limit. In this scaling limit, the fixed point of the renormalization group flow, we expect on general grounds to recover a superconformal field theory with 16 supercharges. We will now argue that this SCFT is the supersymmetric sigma model with target space $S^N \mathbb R^8$. We can then use our previous analysis of orbifold sigma models to conclude that the point $g_s = 0$ indeed describes the second-quantized free IIA string.
The analysis proceeds in two steps. First we observe that because of the last two terms in the action (6.1), in the limit $g_s = 0$, which is equivalent to $g = \infty$, the fields $X$ and $\theta$ necessarily have to commute. This means that we can write the matrix coordinates as
$$X^i(\sigma) = U(\sigma) \cdot x^i(\sigma) \cdot U^{-1}(\sigma), \qquad (6.6)$$
with $U \in U(N)$ and $x^i$ a diagonal matrix with eigenvalues $x^i_1, \dots, x^i_N$. Now the matrix valued fields $X^i(\sigma)$ are single-valued, being sections of the adjoint bundle $\mathrm{ad}(P)$. But this does not imply that the fields $U(\sigma)$ and $x^i(\sigma)$ are too. In fact, it is possible that after a shift $\sigma \to \sigma + 2\pi$ the individual eigenvalues are permuted due to a spectral flow. Only the set of eigenvalues (or more properly the set of common eigenstates) of the commuting matrices $X^i$ is a gauge invariant quantity. So we should allow for configurations of the form
$$x^i(\sigma + 2\pi) = g \cdot x^i(\sigma) \cdot g^{-1}, \qquad (6.7)$$
with $g \in S_N$ the Weyl group of $U(N)$. Effectively this tells us that we are dealing with an orbifold with target space
$$\mathbb R^{8N}/S_N = S^N \mathbb R^8, \qquad (6.8)$$
given Lie-theoretically as $\mathfrak t^8/W$, with $\mathfrak t$ the Cartan Lie algebra and $W$ the Weyl group of $U(N)$.
As we have analyzed before, this implies that the Hilbert space decomposes into superselection sectors labeled by the conjugacy classes $[g]$ of $S_N$, which in turn are given by partitions of $N$. This structure indicates that the Hilbert space is a Fock space of second-quantized IIA strings. A sector twisted by
$$g = (n_1) \cdots (n_k) \qquad (6.9)$$
describes $k$ strings of longitudinal momentum
$$p^+_i = \frac{n_i}{R} = \frac{n_i}{N}\, p^+_{tot}, \qquad i = 1, \dots, k. \qquad (6.10)$$
We have also seen how, for a string with a twist $(n)$ of 'length' $n$, the $\mathbb Z_n$ projection of the orbifold projects the Hilbert space to a subsector conditioned to
$$L_0 - \overline L_0 = 0 \ (\text{mod } n), \qquad (6.11)$$
which we now interpret as the usual DLCQ level-matching condition. In the large $N$ limit the individual $n_i$ also go to infinity, effectively decompactifying the null circle.
The second step consists of analyzing the behaviour of the gauge field. The possibly twisted configurations of $X^i(\sigma)$ break the gauge group $U(N)$ to an abelian subgroup $T$ that commutes with the configuration $X^i(\sigma)$. In fact, if the twist sector is labeled by a partition
$$n_1 + \dots + n_k = N \qquad (6.12)$$
describing $k$ strings of lengths $n_1, \dots, n_k$, the unbroken gauge group is
$$T \cong U(1)^k. \qquad (6.13)$$
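The sector structure just described is easy to tabulate: each partition of $N$ gives a collection of long strings with momenta $p^+_i = n_i/R$, level matching mod $n_i$, and unbroken gauge group $U(1)^k$. A small illustrative sketch for $N = 4$:

```python
# Twisted sectors of the U(4) matrix string: each partition of N = 4 gives
# k long strings with p^+_i = n_i/R, level matching mod n_i, and unbroken
# gauge group U(1)^k, cf. (6.9)-(6.13).
from sympy.utilities.iterables import partitions

N = 4
for part in partitions(N):
    cycles = sorted((n for n, mult in part.items() for _ in range(mult)),
                    reverse=True)
    k = len(cycles)
    strings = ", ".join(f"p+ = {n}/R (level matching mod {n})" for n in cycles)
    print(f"partition {cycles}: {k} string(s): {strings}; gauge group U(1)^{k}")
```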
Because of the Higgs effect all the broken components of the gauge field acquire masses of order $g$ and thus decouple in the IR limit. This leaves us with a free abelian gauge theory on $\mathbb R \times S^1$. This model has been analyzed in great detail. Dividing by the gauge symmetries leaves us with the holonomy along the $S^1$,
$$\operatorname{Hol}(A) = e^{\oint_{S^1} A} \in T, \qquad (6.14)$$
as the only physical degree of freedom. The gauge theory is therefore described by the quantum mechanics on the torus $T$ with Hamiltonian given by
$$H = -g^2 \Delta \qquad (6.15)$$
with $\Delta$ the Laplacian on $T$. The eigenstates are given by the characters of the irreducible representations of $T$, with eigenvalues (energies) $g^2$ times the second Casimir invariant of the representation. Clearly in the limit $g \to \infty$ only the vacuum state or trivial representation survives. This state has a constant wavefunction on $T$, which has the interpretation that the abelian gauge field is free to fluctuate, a consequence of the fact that at strong coupling the action $S = \frac{1}{g^2}\int F^2$ goes to zero. So all-in-all the gauge field sector only contributes a single vacuum state. This completes our heuristic derivation of the IR limit of SYM.
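The residual quantum mechanics is completely explicit: on $T \cong U(1)^k$ the eigenstates are the characters $e^{im\cdot\phi}$, $m \in \mathbb Z^k$, with energies $g^2|m|^2$, so at large $g$ every excited state is exponentially suppressed. A tiny numerical sketch (illustrative only):

```python
# Spectrum of the residual gauge quantum mechanics on T = U(1)^k:
# eigenstates exp(i m.phi), m in Z^k, with H = -g^2 Delta -> energies g^2 |m|^2.
# As g -> infinity only m = 0 (the constant wavefunction) survives.
import itertools
import numpy as np

k, cutoff, g = 2, 2, 10.0
levels = sorted(g**2 * sum(mi**2 for mi in m)
                for m in itertools.product(range(-cutoff, cutoff + 1), repeat=k))
print(levels[:6])                     # [0.0, 100.0, 100.0, 100.0, 100.0, 200.0]
print(np.exp(-np.array(levels[:6])))  # Boltzmann weights: only the vacuum survives
```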
Since two-dimensional gauge theories are so well-behaved, it would be interesting to turn the above into a completely rigorous statement about the IR fixed point of SYM. One of the points of concern could be complications that emerge if some of the eigenvalues coincide. In that case unbroken non-abelian symmetries appear. As we will show in the next section, however, from the SCFT perspective such effects are always irrelevant and thus disappear in the IR limit. In fact, these effects are exactly responsible for the perturbative interactions at finite $g$.
Interactions
If the matrix string theory conjecture is correct, for finite coupling constant the SYM theory should reproduce the interacting string. A non-trivial check of this conjecture is to identify the correction for small g s . This should be given by the joining and splitting interaction of the strings, producing surfaces with nontrivial topology.
This computation was done in [11] where the leading correction was computed. Let us try to summarize this computation. (It is also reviewed in [9].) The idea is to analyze the behaviour of the SYM theory in the neighbourhood of the IR fixed point. In leading order, a deformation to finite g, is given by the least irrelevant operator in the orbifold CFT. That is, we look for the operator O in the sigma model that preserves all the supersymmetries and the Spin(8) R-symmetry and that has the smallest scaling dimensions. The deformed QFT then has an action of the form
S = S_SCFT + (g_s)^{h−2} O + · · · , (6.16)
with h the total scaling dimension of O. We would like to see that the power of g_s is one (so that h = 3) and that this deformation induces the usual joining and splitting interaction. Note that the Hilbert space of the matrix string was defined with Ramond boundary conditions for the supercurrent G^ȧ = γ^i_{aȧ} θ^a ∂x^i. That is, we have
G^ȧ(σ + 2π) = G^ȧ(σ). (6.17)
We have seen that the ground state space V_{(n)} of a Z_n twisted sector H_{(n)} is isomorphic to the ground state space of a single string,
V_{(n)} ≅ (V ⊕ S^−) ⊗ (V ⊕ S^+). (6.18)
Only the conformal dimensions are rescaled and given by
L_0 = L̄_0 = nd/8, (6.19)
since the central charge of the SCFT is n times as big. Here d was the complex dimension of the target space, so in our case d = 4. One way to understand this vacuum degeneracy is that the Z_n action on the n fermions θ^1, . . . , θ^n can be diagonalized with eigenvalues e^{2πik/n}, k = 0, . . . , n − 1. That is, there are linear combinations of the θ^k, let us denote them by θ̃^k, that have boundary conditions
θ̃^k(σ + 2π) = e^{2πik/n} θ̃^k(σ). (6.20)
So the linear sum
θ̃^0 = θ^1 + . . . + θ^n (6.21)
is always periodic and its zero modes give the 16 fold vacuum degeneracy. A similar story holds for the right-moving fermions.
Since we want to keep Ramond boundary conditions in the interacting theory, the local operator O that describes the first-order deformation should be in the NS-sector. This just tells us that the OPE
G^ȧ(z) O(w) (6.22)
is single-valued in z − w. So, using the familiar operator-state correspondence of CFT we have to look in the NS-sector of the Hilbert space. These are of course again labeled by twist fields. The only difference is that the fermions now have an extra minus sign in their monodromy, and satisfy the boundary conditions
θ̃^k(σ + 2π) = −e^{2πik/n} θ̃^k(σ). (6.23)
Now, depending on whether n is even or odd, there is a periodic fermion or not. So we expect to find a degeneracy only for even n. It is not difficult to compute the conformal dimension of the NS ground state in a Z_n twisted sector. First of all, both for the bosons and the fermions the Z_n action can be diagonalized. The bosonic twist field that implements a twist with eigenvalue e^{2πik/n} has conformal dimension d k(n − k)/2n², with d the complex dimension of the transversal space (d = 4 for the IIA string). For the corresponding fermionic twist field we find conformal dimension d m²/2n², where m = min(k, n − k). Adding up all the possible eigenvalues we obtain total conformal dimension
h = n for n even, h = n − 1/n for n odd. (6.24)
In particular the lowest dimension h = 2 is given by the Z_2 twist field σ. Since n = 2 is even, this ground state has the usual degeneracy
σ ∈ (V ⊕ S^−) ⊗ (V ⊕ S^+). (6.25)
Note that the zero-modes of the superpartner of the twisted boson x^i give this degeneracy. However, the NS ground state is neither supersymmetric nor Spin(8) invariant, and is therefore not a suitable candidate for our operator O.
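As a quick arithmetic check on (6.24), one can sum the twist-field dimensions directly. The sketch below (our own verification, not part of the original notes) adds the bosonic contributions d k(n − k)/2n² and the fermionic contributions d m²/2n² over k = 0, . . . , n − 1, for both chiralities and with d = 4, and reproduces h = n for n even and h = n − 1/n for n odd.

from fractions import Fraction

def ns_ground_state_weight(n, d=4):
    # Total (left- plus right-moving) conformal weight of the NS ground state
    # in a Z_n twisted sector, using the twist-field dimensions quoted above.
    h_bos = sum(Fraction(d * k * (n - k), 2 * n * n) for k in range(n))
    h_fer = sum(Fraction(d * min(k, n - k) ** 2, 2 * n * n) for k in range(n))
    return 2 * (h_bos + h_fer)

for n in range(2, 9):
    expected = Fraction(n) if n % 2 == 0 else Fraction(n * n - 1, n)
    assert ns_ground_state_weight(n) == expected
    print(n, ns_ground_state_weight(n))

In particular, the smallest value is attained at n = 2, where the weight equals 2, in agreement with the Z_2 twist field σ discussed above.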
There is however a small modification that does respect the supersymmetry algebra. In the Z_2 twisted sector the coordinate x^i has a half-integer mode expansion, and acting on the twist field σ^{ij} (here σ^{ij} indicates the components of σ in V ⊗ V) with the supercharge modes yields an operator of the schematic form O = G^ȧ_{−1/2} Ḡ^ḃ_{−1/2} σ^{ij}, of total weight h = 3, which is sufficient. Since O is both SUSY and Spin(8) invariant, it is the leading irrelevant operator that we were looking for.
What is the interpretation of the field O in string perturbation theory? It clearly maps superselection sectors with two strings into sectors with one string and vice versa. It is therefore exactly the usual joining and splitting interaction. In fact, the perturbation in the operator O reproduces the standard light-cone perturbation theory.
There is also a clear geometric interpretation of the twist field interaction O. Consider the manifold R^8/Z_2 or, if one wishes, the compact version T^8/Z_2. This is a Calabi-Yau orbifold and defines a perfectly well-behaved superconformal sigma model. One could now try to blow up the Z_2 singularity to obtain a smooth Calabi-Yau space. It is well-known that this cannot be done without destroying the Calabi-Yau property; the orbifold R^8/Z_2 is rigid. In the SCFT language this is expressed by the fact that the corresponding deformation does not respect the superconformal algebra. Algebraically, preserving the conformal invariance implies that the operator is marginal, with scaling dimension 2. The fact that we found weight 3 is therefore in accordance with the fact that the two-dimensional field theory deforms to a massive field theory with a length scale, namely two-dimensional SYM.
However, we see that if the transverse target space had been four-dimensional, the twist field interaction would have L_0 = L̄_0 = 1 and would have represented a marginal operator. This is a simple reflection of the fact that the orbifold R^4/Z_2 or T^4/Z_2 can be resolved to a smooth Calabi-Yau manifold, respectively a hyperkähler ALE space or a K3 surface. We therefore turn now to the case where this four-dimensional example becomes relevant.
String Theories in Six Dimensions
For superstrings the critical dimension is ten, or transversal dimension eight. However, in the past years there has been growing evidence that there is also a fascinating class of string theories with critical dimension six, with a four-dimensional transversal space. In fact, there is believed to be such a string for every simply-laced Lie group of type A, D, E. We will mainly focus on the U(k) case.
Very little is known about these theories [46, 48, 47]. They have (2,0) supersymmetry with a Spin(4) R-symmetry, do not contain a graviton, and give non-trivial six-dimensional SCFTs in the IR, where the R-symmetry is enlarged to Spin(5) = Sp(2). Roughly, their massless modes should form a theory of non-abelian two-form gauge fields, whose three-form field strength is self-dual. For the U(1) theory this can be made precise. The massless modes form the irreducible (2,0) tensor multiplet, which consists of one two-form B together with five scalar fields X^I.
Furthermore, the string coupling of these microstrings or little strings is believed to be fixed to one (basically because of the self-duality). Since the string coupling cannot be tuned to zero, there is no reason why a free string spectrum should emerge. This is good, because we know that the six-dimensional Green-Schwarz superstring is not Lorentz invariant. It is true, however, that this string does reproduce the tensor multiplet as its massless sector.
DLCQ formulations and matrix models
Up to now we only know how to describe these (2,0) strings in a matrix theory DLCQ formulation [14, 49, 50, 51]. We choose a six-dimensional space-time of the form
M^{1,5} = (R × S^1)^{1,1} × X^4, (7.1)
with X a (Ricci-flat) Riemannian four-manifold. We will often choose X to be compact, which restricts us to either T^4 or K3. We fix the longitudinal momentum to be p^+ = N/R, with R the radius of the null-circle. The claim is now that this string theory can be described in terms of a two-dimensional sigma model with target space the moduli space
M_{k,N}(X) (7.2)
of U(k) instantons (self-dual connections) on X with total instanton charge ch_2 = N. This moduli space is a hyperkähler manifold of real dimension 4Nk. It has singularities, corresponding to (colliding) point-like instantons. There is however a particularly nice compactification, obtained by considering the moduli space
M̄_{k,N}(X) (7.3)
of (equivalence classes of) coherent torsion-free sheaves of rank k and ch_2 = N. In particular, for the case k = 1 we find in this way the Hilbert scheme of dimension-zero subschemes of length N,
M̄_{1,N}(X) = Hilb^N(X). (7.4)
This space is an intricate smooth resolution of the symmetric space S^N X [52]. The fibers of the projection Hilb^N(X) → S^N X over the various diagonals keep track of the particular way the points approach each other. Quite generally, if X is a smooth Calabi-Yau space of complex dimension d, then the symmetric product S^N X is also a Calabi-Yau manifold, albeit an orbifold, now of dimension Nd. Only for (complex) dimension two, i.e., if X is a four-torus or a K3 surface, is it possible to resolve the singularities of S^N X to produce a smooth Calabi-Yau. The Hilbert scheme Hilb^N(X) provides a canonical construction. For CY d-folds with d ≥ 3 the Hilbert scheme is not smooth. We should mention here that all of the spaces M̄_{k,N} are believed to be hyperkähler deformations of S^{Nk} X. In particular this implies that their cohomology is given by that of the symmetric product. For more on this issue see [53].
Deformations and interactions
For any Calabi-Yau space X, its deformation space is locally given by H^1(T_X) ≅ H^{1,d−1}(X)^*. By a well-known result of Tian and Todorov there are no obstructions to such deformations, and therefore the dimension of the moduli space M_X of inequivalent complex structures is given by h^{1,d−1}(X). It is not difficult to compute the dimension of the deformation space of the symmetric product S^N X using the above formalism. We see that there is always a contribution given by H^1(T_X). This corresponds to simply deforming the underlying manifold X. However, for dimension d = 2, and only for this dimension, there is a second contribution coming from H^0(X^{(2)}). In fact, for d = 2 we have
dim M_{S^N X} = dim M_X + 1. (7.5)
There is a direct geometric interpretation of this extra deformation. X^{(2)} represents the small diagonal in X^N where two points coincide. In the orbifold cohomology of S^N X it contributes the cohomology of X, shifted however in bi-degree by (d − 1, d − 1) = (1, 1). The corresponding deformation corresponds to blowing up this small diagonal in the given complex structure. The corresponding operator in the SCFT is exactly the same Z_2 twist field that we have discussed before for the type II string. Therefore this deformation can be given an interpretation as tuning the string coupling constant [53].
‡ These degeneracies are consistently defined as superdimensions of the eigenspaces, so that c(m, k) ≤ 0 for k odd, and c(0, k) = (−1)^k b_k.
‡ We use the more general notation χ(H; q, y) = Tr_H (−1)^F q^{L_0 − d/8} y^{F_L} for any Hilbert space H.
† This can of course be generalized to traces of operators as sTr(S^p A) = sdet(1 − pA)^{−1}.
† The physical significance of this picture was developed in, among others, [32, 33, 34] and made precise in [10].
Fig. 1: A twisted sector of a sigma model on S^N X can describe less than N strings. (Here N = 9 and the sector contains three 'long strings.')
Acknowledgements
A very much shortened version of these notes can be found in [54]. I wish to thank the organisers of the Geometry and Duality Workshop at the Institute for Theoretical Physics, UC Santa Barbara, January 1998 and the Spring School on String Theory and Mathematics, Harvard University, May 1998 for the invitation to present these lectures.
References
B. Zwiebach, Closed string field theory: quantum action and the B-V master equation, Nucl. Phys. B390 (1993) 33-152, hep-th/9206084.
C. Hull and P. Townsend, Unity of superstring dualities, Nucl. Phys. B438 (1995) 109, hep-th/9410167.
J. Polchinski, Dirichlet-branes and Ramond-Ramond charges, Phys. Rev. Lett. 75 (1995) 4724-4727, hep-th/9510017.
E. Witten, String theory in various dimensions, Nucl. Phys. B443 (1995) 85, hep-th/9503124.
T. Banks, W. Fischler, S. H. Shenker, and L. Susskind, M theory as a matrix model: a conjecture, Phys. Rev. D55 (1997) 5112-5128, hep-th/9610043.
A. Bilal, M(atrix) theory: a pedagogical introduction, hep-th/9710136.
T. Banks, Matrix theory, hep-th/9710231.
D. Bigatti and L. Susskind, Review of matrix theory, hep-th/9712072.
R. Dijkgraaf, E. Verlinde, and H. Verlinde, Notes on matrix and micro strings, hep-th/9709107.
R. Dijkgraaf, G. Moore, E. Verlinde, and H. Verlinde, Elliptic genera of symmetric products and second quantized strings, Commun. Math. Phys. 185 (1997) 197-209, hep-th/9608096.
R. Dijkgraaf, E. Verlinde, and H. Verlinde, Matrix string theory, Nucl. Phys. B500 (1997) 43-61, hep-th/9703030.
L. Motl, Proposals on non-perturbative superstring interactions, hep-th/9701025.
T. Banks and N. Seiberg, Strings from matrices, Nucl. Phys. B497 (1997) 41-55, hep-th/9702187.
R. Dijkgraaf, E. Verlinde, and H. Verlinde, 5D black holes and matrix strings, Nucl. Phys. B506 (1997) 121-142, hep-th/9704018.
E. Witten, Geometry and physics, ICM, Berkeley (1988).
L. Göttsche, The Betti numbers of the Hilbert scheme of points on a smooth projective surface, Math. Ann. 286 (1990) 193-207; Hilbert Schemes of Zero-dimensional Subschemes of Smooth Varieties, Lecture Notes in Mathematics 1572, Springer-Verlag, 1994.
L. Göttsche and W. Soergel, Perverse sheaves and the cohomology of Hilbert schemes of smooth algebraic surfaces, Math. Ann. 296 (1993) 235-245.
J. Cheah, On the cohomology of Hilbert schemes of points, J. Alg. Geom. 5 (1996) 479-511.
F. Hirzebruch and T. Höfer, On the Euler number of an orbifold, Math. Ann. 286 (1990) 255.
C. Vafa and E. Witten, A strong coupling test of S-duality, Nucl. Phys. B431 (1994) 3-77, hep-th/9408074.
G. Segal, Equivariant K-theory and symmetric products, manuscript and lecture at Aspen Center of Physics, August 1996.
I. G. Macdonald, The Poincaré polynomial of a symmetric product, Proc. Camb. Phil. Soc. 58 (1962) 563-568.
A. Schwimmer and N. Seiberg, Comments on the N=2, N=3, N=4 superconformal algebras in two dimensions, Phys. Lett. B184 (1987) 191.
P. S. Landweber, ed., Elliptic Curves and Modular Forms in Algebraic Topology, Springer-Verlag, 1988.
E. Witten, Commun. Math. Phys. 109 (1987) 525.
A. Schellekens and N. Warner, Phys. Lett. B177 (1986) 317; Nucl. Phys. B287 (1987) 317.
O. Alvarez, T. P. Killingback, M. Mangano, and P. Windey, The Dirac-Ramond operator in string theory and loop space index theorems, Nucl. Phys. B (Proc. Suppl.) 1A (1987) 89; String theory and loop space index theorems, Commun. Math. Phys. 111 (1987) 1.
T. Eguchi, H. Ooguri, A. Taormina, and S.-K. Yang, Superconformal algebras and string compactification on manifolds with SU(N) holonomy, Nucl. Phys. B315 (1989) 193.
T. Kawai, Y. Yamada, and S.-K. Yang, Elliptic genera and N=2 superconformal field theory, Nucl. Phys. B414 (1994) 191-212.
M. Eichler and D. Zagier, The Theory of Jacobi Forms, Birkhäuser, 1985.
C. D. D. Neumann, The elliptic genus of Calabi-Yau 3- and 4-folds, product formulae and generalized Kac-Moody algebras, hep-th/9607029.
T. Kawai, N=2 heterotic string threshold correction, K3 surface and generalized Kac-Moody superalgebra, Phys. Lett. B372 (1996) 59-64, hep-th/9512046.
T. Kawai, K3 surfaces, Igusa cusp form and string theory, hep-th/9710016.
E. Witten, Mirror manifolds and topological field theory, in Essays on Mirror Manifolds, ed. S.-T. Yau, International Press, Hong Kong, 1992.
R. Dijkgraaf, E. Verlinde, and H. Verlinde, Counting dyons in N=4 string theory, Nucl. Phys. B484 (1997) 543, hep-th/9607026.
L. Dixon, J. Harvey, C. Vafa, and E. Witten, Nucl. Phys. B261 (1985) 620; Nucl. Phys. B274 (1986) 285.
A. Strominger and C. Vafa, Microscopic origin of the Bekenstein-Hawking entropy, Phys. Lett. B379 (1996) 99-104, hep-th/9601029.
J. M. Maldacena and L. Susskind, D-branes and fat black holes, Nucl. Phys. B475 (1996) 679, hep-th/9604042.
R. Dijkgraaf, E. Verlinde, and H. Verlinde, BPS spectrum of the five-brane and black hole entropy, Nucl. Phys. B486 (1997) 77-88, hep-th/9603126.
R. E. Borcherds, Automorphic forms on O_{s+2,2}(R) and infinite products, Invent. Math. 120 (1995) 161.
J. Harvey and G. Moore, Algebras, BPS states, and strings, Nucl. Phys. B463 (1996) 315-368, hep-th/9510182.
N. Seiberg, Why is the matrix model correct?, Phys. Rev. Lett. 79 (1997) 3577-3580, hep-th/9710009.
S. Sethi, C. Vafa, and E. Witten, Constraints on low-dimensional string compactifications, Nucl. Phys. B480 (1996) 213-224, hep-th/9606122.
K. Dasgupta and S. Mukhi, A note on low-dimensional string compactifications, Phys. Lett. B398 (1997) 285-290, hep-th/9612188.
L. Dixon, V. Kaplunovsky, and J. Louis, Moduli-dependence of string loop corrections to gauge coupling constants, Nucl. Phys. B307 (1988) 145.
V. A. Gritsenko and V. V. Nikulin, Siegel automorphic form corrections of some Lorentzian Kac-Moody algebras, Amer. J. Math. 119 (1997) 181-224, alg-geom/9504006; The Igusa modular forms and "the simplest" Lorentzian Kac-Moody algebras, alg-geom/9603010.
V. A. Gritsenko and V. V. Nikulin, Automorphic forms and Lorentzian Kac-Moody algebras, part I and part II, alg-geom/9610022 and alg-geom/9611028.
A. J. Feingold and I. B. Frenkel, A hyperbolic Kac-Moody algebra and the theory of Siegel modular forms of genus 2, Math. Ann. 263 (1983) 87-114.
J. A. Harvey and G. Moore, On the algebra of BPS states, Commun. Math. Phys. 197 (1998) 489-519, hep-th/9609017.
C. D. D. Neumann, Perturbative BPS-algebras in superstring theory, Nucl. Phys. B499 (1997) 596-620, hep-th/9702197.
E. Witten, Some comments on string dynamics, hep-th/9510135.
N. Seiberg, Notes on theories with 16 supercharges, hep-th/9705117.
N. Seiberg, Matrix description of M-theory on T^5 and T^5/Z_2, Phys. Lett. B408 (1997) 98-104, hep-th/9705221.
O. Aharony, M. Berkooz, S. Kachru, N. Seiberg, and E. Silverstein, Matrix description of interacting theories in six dimensions, Adv. Theor. Math. Phys. 1 (1998) 148-157, hep-th/9707079.
E. Witten, On the conformal field theory of the Higgs branch, J. High Energy Phys. 07 (1997) 003, hep-th/9707093.
O. Aharony, M. Berkooz, and N. Seiberg, Light-cone description of (2,0) superconformal theories in six dimensions, Adv. Theor. Math. Phys. 2 (1998) 119-153, hep-th/9712117.
H. Nakajima, Heisenberg algebra and Hilbert schemes of points on projective surfaces, alg-geom/9507012.
R. Dijkgraaf, Instanton strings and hyperkähler geometry, hep-th/9810210.
R. Dijkgraaf, The mathematics of fivebranes, in Proceedings of the ICM Berlin 1998, Doc. Math. III (1998) 133-142, hep-th/9810157.
| [] |
[
"Local volatility under rough volatility",
"Local volatility under rough volatility"
] | [
"F Bourgey \nCentre de Mathématiques Appliquées (CMAP)\nCNRS\nEcole Polytechnique\nInstitut Polytechnique de Paris\nFrance\n\nBloomberg L.P., Quantitative Research\nLondonUK\n",
"S De Marco \nCentre de Mathématiques Appliquées (CMAP)\nCNRS\nEcole Polytechnique\nInstitut Polytechnique de Paris\nFrance\n",
"P K Friz \nTechnische Universität Berlin and Weierstraß-Institut\nBerlinGermany\n",
"P Pigato \nDepartment of Economics and Finance\nUniversità Roma Tor Vergata\nRomeItaly\n"
] | [
"Centre de Mathématiques Appliquées (CMAP)\nCNRS\nEcole Polytechnique\nInstitut Polytechnique de Paris\nFrance",
"Bloomberg L.P., Quantitative Research\nLondonUK",
"Centre de Mathématiques Appliquées (CMAP)\nCNRS\nEcole Polytechnique\nInstitut Polytechnique de Paris\nFrance",
"Technische Universität Berlin and Weierstraß-Institut\nBerlinGermany",
"Department of Economics and Finance\nUniversità Roma Tor Vergata\nRomeItaly"
] | [] | Several asymptotic results for the implied volatility generated by a rough volatility model have been obtained in recent years (notably in the small-maturity regime), providing a better understanding of the shapes of the volatility surface induced by rough volatility models, supporting their calibration power to SP500 option data. Rough volatility models also generate a local volatility surface, via the so-called Markovian projection of the stochastic volatility. We complement the existing results on implied volatility by studying the asymptotic behavior of the local volatility surface generated by a class of rough stochastic volatility models, encompassing the rough Bergomi model. Notably, we observe that the celebrated "1/2 skew rule" linking the short-term at-the-money skew of the implied volatility to the short-term at-the-money skew of the local volatility, a consequence of the celebrated "harmonic mean formula" of [Berestycki, Busca, and Florent, QF 2002], is replaced by a new rule: the ratio of the at-the-money implied and local volatility skews tends to the constant 1/(H + 3/2) (as opposed to the constant 1/2), where H is the regularity index of the underlying instantaneous volatility process. | 10.1111/mafi.12392 | [
"https://export.arxiv.org/pdf/2204.02376v2.pdf"
] | 247,957,850 | 2204.02376 | e839d97b5e4e328de4db8e699a9968dc760da7c7 |
Local volatility under rough volatility
November 16, 2022
F Bourgey
Centre de Mathématiques Appliquées (CMAP)
CNRS
Ecole Polytechnique
Institut Polytechnique de Paris
France
Bloomberg L.P., Quantitative Research
LondonUK
S De Marco
Centre de Mathématiques Appliquées (CMAP)
CNRS
Ecole Polytechnique
Institut Polytechnique de Paris
France
P K Friz
Technische Universität Berlin and Weierstraß-Institut
BerlinGermany
P Pigato
Department of Economics and Finance
Università Roma Tor Vergata
RomeItaly
Local volatility under rough volatility
November 16, 2022
Several asymptotic results for the implied volatility generated by a rough volatility model have been obtained in recent years (notably in the small-maturity regime), providing a better understanding of the shapes of the volatility surface induced by rough volatility models, supporting their calibration power to SP500 option data. Rough volatility models also generate a local volatility surface, via the so-called Markovian projection of the stochastic volatility. We complement the existing results on implied volatility by studying the asymptotic behavior of the local volatility surface generated by a class of rough stochastic volatility models, encompassing the rough Bergomi model. Notably, we observe that the celebrated "1/2 skew rule" linking the short-term at-the-money skew of the implied volatility to the short-term at-the-money skew of the local volatility, a consequence of the celebrated "harmonic mean formula" of [Berestycki, Busca, and Florent, QF 2002], is replaced by a new rule: the ratio of the at-the-money implied and local volatility skews tends to the constant 1/(H + 3/2) (as opposed to the constant 1/2), where H is the regularity index of the underlying instantaneous volatility process.
σ_loc(t, y t^{1/2−H}) → σ(ĥ^y_1) as t ↓ 0,
where ĥ^y is related to a minimization problem, similar to a geodesic in Riemannian geometry. Our analytic understanding is sufficiently fine to exploit it on the one hand for numerical tests (discussed in Section 4) and on the other hand to derive further analytic results (formulated in Sections 2 and 3, with proofs left to Section 5), including the blowup, when H < 1/2, of the local volatility skew in the short-dated limit, S_loc ∼ (const) t^{H−1/2}; see Corollary 3.4 below for a precise statement and information on the constant. This finding is consistent with [38], where it is shown, amongst other things, that in "regular" local-stochastic volatility models, which amounts to a regularity assumption on σ_loc, the implied volatility skew does not explode. The regularity of σ_loc is violated here in the sense that S_loc is infinite at t = 0. This is also consistent with [60, 36], where it is shown that a "singular" σ_loc can indeed produce exploding implied skews. A further interesting consequence, also part of Corollary 3.4, is then that the 1/2-rule of thumb from practitioners [22] (see also [43] and [39, Remark 3.4] for different proofs) actually fails and is replaced, again in the short-dated limit, by what we may call the 1/(H + 3/2)-rule,
S_BS / S_loc → 1/(H + 3/2). (1.1)
As a sanity check, for Hurst parameter H = 1/2 we are in a diffusive regime and then indeed fall back to the 1/2-rule.
Techniques and further discussion. Our analysis is based on a mixture of large deviations (see e.g. [33] for a recent collection with many references), Malliavin calculus [5, 59, 29], and, last but not least, ideas from rough paths and regularity structures techniques, following [7, 31, 32]; see also Section 14.6 in [35]. In order to deal with H < 1/2, we cannot rely on previously used methods in diffusion settings such as [62, 21]. Local volatility in classical stochastic volatility models, including Heston, is discussed in many books on volatility modeling; [43] remains a key reference. Rigorous asymptotic results include [44, 20, 21]. In affine forward variance models, including rough Heston [27, 46], it is conceivable that saddle-point-based techniques, in the spirit of [20], could be employed to study local volatility asymptotics. The bottleneck in such an approach seems however to be the lack of explicit knowledge of the moment-generating function, only given implicitly via convolution Riccati equations. We note that the recent preprint [2] confirmed the asymptotic result (1.1) using some representations of S_BS and S_loc based on Malliavin calculus, in a central limit (Edgeworth) regime, as opposed to our large deviations setting.
The modeling framework
We assume S_0 = 1 and that the log price X_t := log S_t satisfies
dX_t = −(1/2) V_t dt + √(V_t) (ρ dW_t + ρ̄ dW̄_t), V_t = σ²(Ŵ_t), (2.1)
with 'volatility function' σ : R → R. We shall assume σ to be smooth, subject to mild growth conditions given below, such as to cover rough Bergomi type situations where σ(x) = σ_0 exp(ηx). We take ρ² + ρ̄² = 1, with ρ ∈ (−1, 1). We denote W = (W, W̄), where W, W̄ are two independent standard Brownian motions. These are used to construct
W̃_t = ρ W_t + ρ̄ W̄_t and Ŵ_t = (K ∗ Ẇ)_t = ∫_0^t K(t, s) dW_s, (2.4)
with K(t, s) = √(2H) (t − s)^{H−1/2} for t > s and K(t, s) = 0 otherwise, so that W̃ is again a standard Brownian motion (ρ-correlated with W), whereas Ŵ is a Riemann-Liouville fBm with Hurst index H ≤ 1/2, i.e., the self-similar Gaussian Volterra process in (2.4). We will use analogous notations for Cameron-Martin paths h = (h, h̄), so that h̃ = ρ h + ρ̄ h̄, and ĥ_t = (K_H ∗ ḣ)_t = ∫_0^t K(t, s) dh_s. We denote H¹ the Cameron-Martin space and ‖·‖_{H¹} the Cameron-Martin norm, ‖h‖²_{H¹} = ∫_0^1 (ḣ_t² + ḣ̄_t²) dt.
Mathematical setting and results
The time-scaling property of the Gaussian process (W, W̄, Ŵ) underlying the model (2.1) yields X_{ε²} =_{law} X^ε_1 for every ε > 0, where X^ε_1 satisfies
X^ε_1 = ∫_0^1 σ(ε^{2H} Ŵ_s) ε d(ρ W + ρ̄ W̄)_s − (1/2) ε² ∫_0^1 σ²(ε^{2H} Ŵ_s) ds. (3.1)
Forde and Zhang proved in [28], albeit under different technical conditions on the volatility function, that a small-noise Large Deviation Principle (LDP) holds for the family ε^{2H−1} X^ε_1 (hence for ε^{2H−1} X_{ε²}) as ε → 0, with speed ε^{4H} and rate function
Λ(y) := inf_{h=(h,h̄)∈H¹} { (1/2)‖h‖²_{H¹} : ∫_0^1 σ(ĥ_s)(ρ dh_s + ρ̄ dh̄_s) = y } = (1/2)‖h^y‖²_{H¹}, (3.2)
where h^y is a minimizer of the control problem defining Λ(y). From the LDP (3.2), we have
−ε^{4H} log P(X^ε_1 ≥ y ε^{1−2H}) → Λ(y) = (1/2)‖h^y‖²_{H¹}, for y ≥ 0, as ε ↓ 0, (3.3)
−ε^{4H} log P(X^ε_1 ≤ y ε^{1−2H}) → Λ(y) = (1/2)‖h^y‖²_{H¹}, for y ≤ 0, as ε ↓ 0, (3.4)
and this small-noise LDP eventually translates into a short-time LDP for the process X_{ε²}. This result was proved in the case where V_t = σ²(Ŵ_t) in [28], and then extended to a possible time dependence of the form V_t = σ²(Ŵ_t, t^{2H}) in [31, Section 7.3] (see also Remark 3.8 below). The short-time result for call and put prices reads as follows (see [28, Corollary 4.13]):
−t^{2H} log E[(e^{X_t} − e^{y t^{1/2−H}})^+] → Λ(y) = (1/2)‖h^y‖²_{H¹}, for y > 0, as t ↓ 0, (3.5)
−t^{2H} log E[(e^{y t^{1/2−H}} − e^{X_t})^+] → Λ(y) = (1/2)‖h^y‖²_{H¹}, for y < 0, as t ↓ 0, (3.6)
where h^y is as above. Let us also recall that these option price asymptotics imply the following asymptotic formula for the Black-Scholes implied volatility (notation: σ_BS), which can be seen as a "rough" version of the Berestycki-Busca-Florent (BBF) formula [17]:
σ²_BS(t, y t^{1/2−H}) → χ²(y) := y²/(2Λ(y)), for y ≠ 0, as t ↓ 0. (3.7)
Remark 3.1 (Precise conditions for the LDP, call price asymptotics and implied volatility asymptotics). The exponential growth condition (2.3) is no obstruction for an LDP to hold for the model (2.1), as was shown in [7,48], weakening the linear growth condition first required in [28]. Moreover, while the put price asymptotics (3.6) always holds, the unboundedness of the call option payoff requires some additional condition for (3.5) to hold: with reference to [31, Assumption A2], we will assume the following "1+ moment condition" whenever necessary:
Assumption 3.2. There exists p > 1 such that lim sup_{ε→0} E[e^{p X^ε_1}] < ∞.
Following [31, Lemma 4.7], Assumption 3.2 holds under the following stronger, but more explicit, condition: the process S_t = e^{X_t} is a martingale, and there exist p > 1 and t > 0 such that E[S_t^p] < ∞. It is known that such a condition on the moments of e^{X_t} is satisfied when σ has linear growth, cf. [28], while in the case H = 1/2, the same is true under much weaker assumptions (σ of exponential growth and ρ < 0 is enough, see [61, 55]). We expect similar results to hold for H < 1/2, but they have not been proved yet; see the partial results available in [41, 49].
The Markovian projection of the instantaneous variance V t (see [51], [18,Corollary 3.7]) within the model (2.1) is defined by
σ²_loc(t, k) := E[V_t | X_t = k] for every t > 0 and k ∈ R. (3.8)
It follows from references [51, 18] that the dynamics of the resulting local volatility model are weakly well-posed; see also [34] for a generic regularization scheme obtained by time-shifting the local volatility surface (a procedure that we do not require here). We now present our main result, proving that the local volatility function (3.8) satisfies the following short-time asymptotics.

Theorem 3.3. The local volatility function (3.8) satisfies
σ²_loc(t, y t^{1/2−H}) = E[V_t | X_t = y t^{1/2−H}] → σ²(ĥ^y_1) as t ↓ 0, (3.9)
where we recall that ĥ^y_t = ∫_0^t K(t, s) dh^y_s, and h^y = (h^y, h̄^y) is the minimizer of the rate function in (3.2).
Consider the finite-difference at-the-money skews
S_loc(t, y) := [σ_loc(t, y t^{1/2−H}) − σ_loc(t, −y t^{1/2−H})] / (2 y t^{1/2−H}), (3.10)
S_BS(t, y) := [σ_BS(t, y t^{1/2−H}) − σ_BS(t, −y t^{1/2−H})] / (2 y t^{1/2−H}). (3.11)
Then, we have the following.
Corollary 3.4. With Σ(y) := σ(ĥ^y_1) and ĥ^y as in Theorem 3.3, it holds that
S_loc(t, y) ∼ [Σ(y) − Σ(−y)] / (2y) · (1/t^{1/2−H}) (3.12)
as t → 0. Under the additional moment condition in Assumption 3.2,
S_BS(t, y) / S_loc(t, y) → [χ(y) − χ(−y)] / [Σ(y) − Σ(−y)] as t → 0, and [χ(y) − χ(−y)] / [Σ(y) − Σ(−y)] → 1/(H + 3/2) as y → 0, (3.13)
with χ as in (3.7).
In the case ρ = 0, we have S BS (t, y) = 0 and S loc (t, y) = 0 for every t.
In our numerical experiments in Section 4, we estimate the exact ATM local volatility skew ∂_k σ_loc(t, k)|_{k=0} in the rough Bergomi model (4.1), and find perfect agreement with Corollary 3.4. The model local volatility skew can be observed in Figure 1, and the ratio of the implied volatility skew over the local volatility skew in Figure 2.
Remark 3.5. When H = 1/2, we are back to the classical 1/2 skew rule, see Derman et al. [22].
Remark 3.6. One can expect the 1/(H + 3/2) rule (3.13) to hold also for rough or rough-like volatility models that do not belong to the model class (2.1), such as the rough Heston model [27]. The recent preprint [19] provides numerical evidence for the 1/(H + 3/2) rule under the lifted Heston model [1], a Markovian approximation of rough Heston, as well as a formal proof in the case of the proper rough Heston model, see [19, Proposition 2.1]. In their recent work [2], Alos and co-authors prove the 1/(H + 3/2) rule for stochastic volatility models under suitable assumptions on the asymptotic behavior of the volatility process and related iterated Malliavin derivatives, further providing an asymptotic rule for the ratio of the at-the-money second derivatives ∂_kk(·)|_{k=0} of the local and implied volatility functions.
Remark 3.7 (The short-time harmonic mean formula and the 1/2 skew rule again). When expressed in terms of an implied volatility σ BS , Dupire's formula for local volatility reads
σ_loc(t, k)² = [ σ_BS(t, k) + 2 t ∂_t σ_BS(t, k) ] / [ t ∂_kk σ_BS − (1/4) t² σ_BS (∂_k σ_BS)² + (1/σ_BS)(1 − k ∂_k σ_BS / σ_BS)² ](t, k) (3.14)
provided that σ BS is sufficiently smooth for all the partial derivatives to make sense. Formally taking t → 0 inside (3.14) and assuming that the partial derivatives ∂ t σ BS , ∂ k σ BS and ∂ kk σ BS remain bounded, one obtains
σ_loc(0, k)² = σ_BS(0, k)² / ( 1 − k σ_BS′(0, k)/σ_BS(0, k) )². (3.15)
The ordinary differential equation (3.15) can be used to reconstruct the function σ BS (0, ·) from σ loc (0, ·) and it is solved by the harmonic mean function
H(t, k) = [ (1/k) ∫_0^k dy / σ_loc(t, y) ]^{−1}, (3.16)
evaluated at t = 0. The computation above, leading from (3.15) to (3.16), can be found in [57]; the rigorous counterpart of this formal argument, that is, the asymptotic equivalence σ_BS(t, k) ∼ H(t, k), known as the "harmonic mean formula" or BBF formula, was proven in [16] under the assumption that the local volatility surface σ_loc is bounded and uniformly continuous in a neighborhood of t = 0. It is straightforward to see that the harmonic mean satisfies the property ∂_k H(t, k)|_{k=0} = (1/2) ∂_k σ_loc(t, k)|_{k=0}. Therefore, if we assume that the short-time approximation property σ_BS(t, k) ≈ H(t, k) also holds for the first derivatives with respect to k, we obtain as a consequence the 1/2 short-time skew rule ∂_k σ_BS(t, 0) ∼ (1/2) ∂_k σ_loc(t, 0) that we referred to in Remark 3.5. Corollary 3.4 entails that the formal argument above no longer holds for the implied and local volatility surfaces generated by a rough stochastic volatility model. Notably, the boundedness of the partial derivatives ∂_t σ_BS, ∂_k σ_BS and ∂_kk σ_BS, and the uniform continuity of the local volatility surface, fail to hold, but in such a way that the limit of the skew ratio S_BS/S_loc can still be identified and explicitly computed (see related numerical tests in Figure 4).
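To see the 1/2 skew rule of the harmonic mean at work, the following toy computation (our own illustration; the affine local-volatility slice and all names are hypothetical) evaluates (3.16) by numerical quadrature and compares the at-the-money slopes.

import numpy as np

def harmonic_mean(sigma_loc, k, n_grid=20001):
    # H(t, k) = [ (1/k) * int_0^k dy / sigma_loc(y) ]^{-1}, cf. (3.16)
    y = np.linspace(0.0, k, n_grid)
    return k / np.trapz(1.0 / sigma_loc(y), y)

sigma0, slope = 0.2, -0.4
sigma_loc = lambda y: sigma0 + slope * y      # toy local-volatility slice

eps = 1e-4
skew_H = (harmonic_mean(sigma_loc, eps) - harmonic_mean(sigma_loc, -eps)) / (2 * eps)
skew_loc = (sigma_loc(eps) - sigma_loc(-eps)) / (2 * eps)
print(skew_H / skew_loc)                      # close to 0.5, the classical rule

Corollary 3.4 says precisely that this ratio, computed for the implied rather than the harmonic-mean volatility, moves from 1/2 to 1/(H + 3/2) under rough volatility.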
Remark 3.8 (Time-dependent volatility function). The rough Bergomi model [6] comes with instantaneous variance
ξ(t) exp( η x − (η²/2) t^{2H} ) ∼ ξ(0) exp(η x) =: σ²(x)
as t ↓ 0. We could have proved Theorem 3.3 and Corollary 3.4 in greater generality, with V t = σ 2 ( W t , t), provided the dependence with respect to t in σ = σ(x, t) is sufficiently smooth such as not to affect the local analysis that underlies the proof. This is more subtle in the case of rough Bergomi where t 2H fails to be smooth at t = 0 + when H < 1/2. Even so, we discussed in [31] how to adjust the arguments to obtain exact asymptotics, the same logic applies here.
Numerical tests
We wish to estimate the conditional expectation (3.8) for some specific instance of the model (2.1), using Monte Carlo simulation. We consider the rough Bergomi model [6], for which the instantaneous variance process is given by
V_t = ξ_0 exp( η Ŵ_t − (η²/2) t^{2H} ) = ξ_0 exp( η ∫_0^t √(2H) (t − s)^{H−1/2} dW_s − (η²/2) t^{2H} ), (4.1)
where ξ_0 = V_0 is the spot variance and η a parameter that tunes the volatility of variance. Note that, strictly speaking, Theorem 3.3 and Corollary 3.4 do not apply to the model above, because of the time dependence in the volatility function σ(x, t) = √(ξ_0) exp( (η/2) x − (η²/4) t^{2H} ).
In light of the discussion in Remark 3.8, we can expect our asymptotic results to hold for such a time-dependent volatility function as well, which is in line with the output of our numerical experiments below.
For a given time horizon T > 0 and a number N ∈ N* of time-steps, the random vector (log V_{t_k})_{1≤k≤N}, t_k = kT/N, has a multivariate Gaussian distribution with known mean and variance, see for example [6], and can therefore be simulated exactly. We use the standard simulation method for Gaussian vectors based on a Cholesky factorization of the covariance matrix. Of course, this method has a considerable complexity (cost O(N³) for the Cholesky factorization and O(N²) for the matrix multiplication required to get one sample of (V_{t_k})_{0≤k≤N}), but our focus is on the accuracy of our estimations, rather than on their computational time. We construct approximate samples of the log-asset price
X_T = −(1/2) ∫_0^T V_t dt + ∫_0^T √(V_t) (ρ dW_t + ρ̄ dW̄_t)
using a forward Euler scheme on the same time-grid,
X^N_T = −(T/2N) Σ_{k=0}^{N−1} V_{t_k} + Σ_{k=0}^{N−1} √(V_{t_k}) [ ρ (W_{t_{k+1}} − W_{t_k}) + ρ̄ (W̄_{t_{k+1}} − W̄_{t_k}) ].
Therefore, we obtain M i.i.d. approximate Monte Carlo samples (X^{N,m}_T, V^m_T)_{1≤m≤M} of the couple (X^N_T, V_T)
, from which our estimators of the implied volatility and local volatility (3.8) are constructed, as detailed below. Since our goal is to check the asymptotic statements appearing in Theorem 3.3 and Corollary 3.4, we will consider a large number N of discretization steps and a large number M of Monte Carlo samples in order to increase the precision of the estimates we use as a benchmark. We estimate out-of-the-money put and call option prices by standard empirical means and evaluate the corresponding implied volatilities σ BS (T, K) by Newton's search.
The rough Bergomi model (4.1) parameters we used in our experiments are S_0 = 1, η = 1.0, ρ = −0.7, and ξ_0 = 0.235². We tested three different values of H ∈ (0, 1/2], namely H ∈ {0.1, 0.3, 0.5}. We used M = 1.5 × 10⁶ Monte Carlo samples and N = 500 discretization points.
Remark 4.1. Several recent works [10, 9, 42, 30] study the weak error rate of rough Bergomi type models. Without going into (bibliographical) details, the weak rate has now been identified as 1 for H above 1/6 and 3H + 1/2 for H below 1/6. Importantly, as H ↓ 0, a weak rate of 1/2 persists. The fairly large number of time steps we considered in our experiments (N = 500) is arguably enough to obtain good benchmark values when H is close to 1/2, but we should bear in mind that the bias in the Monte Carlo estimation is expected to become more and more important as H approaches zero. In this case, a larger number of time steps might be required to get a trustworthy level of accuracy; of course, the complexity of the exact Cholesky method we exploited in our simulation of the Riemann-Liouville process makes the simulations very demanding for very large values of N.
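For concreteness, here is a minimal sketch of the simulation pipeline just described (all function and variable names are ours). The Ŵ-Ŵ covariances have no elementary closed form and are computed below by crude numerical quadrature, so this is illustration-quality code for small N rather than the production setup behind the figures.

import numpy as np
from scipy.integrate import quad

def rough_bergomi_samples(xi0, eta, rho, H, T, N, M, seed=0):
    # Exact Cholesky sampling of the Gaussian vector (hatW_{t_k}, W_{t_k}) on
    # the grid t_k = k T/N, followed by a forward Euler step for the log-price.
    rng = np.random.default_rng(seed)
    t = T * np.arange(1, N + 1) / N
    g = H + 0.5
    C = np.empty((2 * N, 2 * N))
    for i in range(N):
        for j in range(N):
            s, u = min(t[i], t[j]), max(t[i], t[j])
            if i == j:
                C[i, j] = s ** (2 * H)        # Var(hatW_t) = t^{2H}
            else:                              # E[hatW_s hatW_u], crude quadrature
                C[i, j] = quad(lambda r: 2 * H * (s - r) ** (H - 0.5)
                               * (u - r) ** (H - 0.5), 0.0, s, limit=200)[0]
            C[i, N + j] = np.sqrt(2 * H) / g * (t[i] ** g - (t[i] - s) ** g)
            C[N + j, i] = C[i, N + j]
            C[N + i, N + j] = s                # Cov(W_s, W_t) = min(s, t)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(2 * N))   # jitter for stability
    Z = (L @ rng.standard_normal((2 * N, M))).T
    hatW, W = Z[:, :N], Z[:, N:]
    V = xi0 * np.exp(eta * hatW - 0.5 * eta ** 2 * t ** (2 * H))
    V = np.hstack([np.full((M, 1), xi0), V])            # prepend V_0 = xi0
    dW = np.diff(np.hstack([np.zeros((M, 1)), W]), axis=1)
    dWbar = np.sqrt(T / N) * rng.standard_normal((M, N))
    X = np.sum(-0.5 * V[:, :-1] * (T / N)
               + np.sqrt(V[:, :-1]) * (rho * dW + np.sqrt(1 - rho ** 2) * dWbar),
               axis=1)
    return X, V[:, -1]                                  # samples of (X_T^N, V_T)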
Local and implied volatility estimators
In this section, we present in detail the estimators we have implemented for the target objects: the at-the-money implied volatility skew ∂ k σ BS (t, k)| k=0 , the local volatility function (or Markovian projection) σ loc (·, ·) in (3.8), and the local volatility skew ∂ k σ loc (t, k)| k=0 .
The estimator of the implied volatility skew. A representation of the first derivative ∂ k σ BS (t, k) can be obtained by differentiating the equation defining the implied volatility σ BS with respect to the log-moneyness k. More precisely, denoting C BS (k, v) the Black-Scholes price of a call option with log-moneyness k and total volatility parameter v = √ t σ, we have
E[(S_0 e^{X_t} − S_0 e^k)^+] = C_BS( k, √t σ_BS(t, k) ), (4.2)
for all k and t. Taking the derivative at both sides of (4.2) with respect to k and using the expressions of the first-order Black-Scholes greeks
∂_k C_BS(k, v) and ∂_v C_BS(k, v), we have
∂_k σ_BS(t, k) = [ −∂_k C_BS(k, v) − S_0 e^k P(X_t ≥ k) ] / [ √t ∂_v C_BS(k, v) ] |_{v = √t σ_BS(t,k)} = [ N(d_2(k, v)) − P(X_t ≥ k) ] / [ √t φ(d_2(k, v)) ] |_{v = √t σ_BS(t,k)},
where d_2(k, v) = −k/v − v/2, and φ (resp. N) denotes the standard Gaussian density (resp. cumulative distribution function). The representation above for the implied volatility skew allows us to avoid finite difference methods: we only need to estimate σ_BS(t, k) and P(X_t ≥ k), which we can do with the same Monte Carlo sample, in order to estimate ∂_k σ_BS(t, k) (and therefore, in particular, the at-the-money skew ∂_k σ_BS(t, 0)).
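A sketch of this estimator (names are ours), taking as inputs the Monte Carlo samples X of X_t and the implied-volatility estimate sigma_bs at log-moneyness k:

import numpy as np
from scipy.stats import norm

def implied_vol_skew(X, k, sigma_bs, t):
    # d sigma_BS/dk = [N(d2) - P(X_t >= k)] / [sqrt(t) * phi(d2)],
    # evaluated at the total volatility v = sqrt(t) * sigma_BS(t, k).
    v = np.sqrt(t) * sigma_bs
    d2 = -k / v - v / 2.0
    p_itm = np.mean(X >= k)          # Monte Carlo estimate of P(X_t >= k)
    return (norm.cdf(d2) - p_itm) / (np.sqrt(t) * norm.pdf(d2))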
The estimator of the local volatility function. Given the Monte Carlo samples (X^{N,m}_T, V^m_T)_{1≤m≤M} of the couple (X^N_T, V_T), the conditional expectation (3.8) defining the local volatility function can be estimated appealing to several different regression methods, see, e.g., [63, 53]. We have implemented and benchmarked two different estimators: on the one side, a kernel regressor, already applied to evaluate the Markovian projection within the celebrated particle calibration algorithm [50], and on the other side, an alternative estimator based on the explicit knowledge of the conditional law of (X_t, V_t) | (W_s)_{s≤t}.
Our kernel regressor is the Nadaraya-Watson estimator with bandwidth δ,
σ²_loc(t, k) = E[V_t | X_t = k] ≈ [ Σ_{m=1}^M V^m_t K_δ(X^{N,m}_t − k) ] / [ Σ_{m=1}^M K_δ(X^{N,m}_t − k) ]. (4.3)
We used the Gaussian kernel K_δ(x) = exp(−δ x²) in our tests.
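In code, the kernel regressor (4.3) is a one-liner; the sketch below (names are ours) returns the estimate of σ²_loc(t, k) from samples X of X^N_t and V of V_t.

import numpy as np

def local_vol_squared_nw(X, V, k, delta):
    # Nadaraya-Watson estimate of E[V_t | X_t = k] with the Gaussian
    # kernel K_delta(x) = exp(-delta x^2) of (4.3).
    w = np.exp(-delta * (X - k) ** 2)
    return np.sum(V * w) / np.sum(w)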
On the other hand, it is a standard fact that conditionally on
F_t = σ(W_u : u ≤ t), the instantaneous variance V_t is known, while the log-price X_t is normally distributed with mean −(1/2) ∫_0^t V_s ds + ρ ∫_0^t √(V_s) dW_s and variance (1 − ρ²) ∫_0^t V_s ds.
This property yields a representation of the Markovian projection σ loc (·, ·) as the ratio of two expectations,
σ²_loc(t, k) = E[V_t | X_t = k] = E[V_t Π_t(k)] / E[Π_t(k)], (4.4)
where
Π_t(k) = ( ∫_0^t V_s ds )^{−1/2} exp( − ( k + (1/2) ∫_0^t V_s ds − ρ ∫_0^t √(V_s) dW_s )² / ( 2(1 − ρ²) ∫_0^t V_s ds ) ).
A derivation of (4.4) can be found in [56, Proposition 3.1]; incidentally, this representation of σ_loc has been exploited in [52] in the context of a calibration strategy for local stochastic volatility models, prior to the particle algorithm [50].
The estimator of the local volatility skew. Differentiating the right-hand side of (4.4) with respect to k, we obtain a representation of ∂_k σ_loc(t, k):
∂_k σ_loc(t, k) = ∂_k( E[V_t Π_t] / E[Π_t] ) / ( 2 σ_loc(t, k) ) = [ E[V_t Π_t] E[ (U / ∫_0^t V_s ds) Π_t ] − E[ (U / ∫_0^t V_s ds) Π_t V_t ] E[Π_t] ] / [ 2(1 − ρ²) E[V_t Π_t]^{1/2} E[Π_t]^{3/2} ], (4.5)
where Π_t is a shorthand for Π_t(k), U = U(k) = k + (1/2) ∫_0^t V_s ds − ρ ∫_0^t √(V_s) dW_s, and ∂Π_t/∂k = −U / ( (1 − ρ²) ∫_0^t V_s ds ) · Π_t.
All the expectations appearing in (4.4) and (4.5) can be estimated based on the exact simulation of the discretized variance path (V_{t_k})_{1≤k≤N}; we approximate the integrals ∫_0^t V_s ds and ∫_0^t √(V_s) dW_s using left-point Euler schemes. Note that the resulting non-parametric estimators based on the representations (4.4) and (4.5) do not contain any kernel bandwidth or other hyper-parameters to be tuned. This is a clear advantage with respect to (4.3). We have nevertheless tested both estimators (4.3) and (4.4) for the Markovian projection function, and found perfect agreement between the two in our tests; in other words, the local volatilities and local volatility skews computed with the two different methods would be indistinguishable in Figures 1 and 3.
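A sketch of the estimator based on (4.4) (names are ours; the inputs IV and IsV are the per-path left-point Euler approximations of ∫_0^t V_s ds and ∫_0^t √(V_s) dW_s, and Vt collects the terminal variances):

import numpy as np

def local_vol_squared_mixing(IV, IsV, Vt, k, rho):
    # sigma_loc^2(t, k) = E[V_t Pi_t(k)] / E[Pi_t(k)], cf. (4.4)
    U = k + 0.5 * IV - rho * IsV
    log_pi = -0.5 * np.log(IV) - U ** 2 / (2.0 * (1.0 - rho ** 2) * IV)
    log_pi -= log_pi.max()           # common shift, cancels in the ratio
    pi = np.exp(log_pi)
    return np.sum(Vt * pi) / np.sum(pi)

The skew representation (4.5) is estimated along the same lines, replacing the two empirical means by the corresponding combinations of weighted sums.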
In Figure 1, we plot the term structure of the ATM implied and local volatility skews, for three different values of H and maturities up to T = 0.5 years. As pointed out in the Introduction and in Section 2, the power-law behavior of the ATM implied volatility skew generated by the rough Bergomi model is already well-known; on the other hand, the power-law behavior observed for the local volatility skew in Figure 1 is (to the best of our knowledge) new, and consistent with Corollary 3.4. Figure 2 shows the ratio of the implied volatility ATM skew over the local volatility ATM skew, that is, the ratio of the curves observed in Figure 1, for the different values of H: the numerical results are in very good agreement with the "1/(H + 3/2) rule" announced in Corollary 3.4. Additionally, we note that the ratio of the two skews seems to be rather stable: its value is almost constant for maturities up to T = 0.5 years with our parameter setup.
Short-dated local volatility
Theorem 3.3 gives the asymptotic behavior of σ_loc(T, y T^{1/2−H}) as T becomes small. Since y is allowed to vary around the at-the-money point y = 0, we can check whether the limit (3.9) holds for the function y ↦ σ̃_loc(T, y) := σ_loc(T, y T^{1/2−H}), that is, the whole local volatility smile rescaled with maturity. The computation of the limiting function σ(ĥ^y_1) requires us to evaluate the Cameron-Martin path h^y ∈ H¹ that minimizes the rate function in (3.2), for given y. We follow the procedure already exploited in [28] and [32, Section 5.1]: it can be shown, see [28], that the rate function satisfies the alternative representation
Λ(y) = inf{ (y − ρ G(h))² / (2 ρ̄² F(h)) + (1/2)⟨ḣ, ḣ⟩ : ḣ ∈ L²(0, 1) },
with F(h) = ⟨σ²(ĥ), 1⟩ = ∫_0^1 σ²(ĥ_t) dt and G(h) = ⟨σ(ĥ), ḣ⟩ = ∫_0^1 σ(ĥ_t) ḣ_t dt.
This alternative representation yields the rate function in the form of an unconstrained optimization problem (as opposed to the constrained optimization in (3.2)), which can then be approximately solved by projecting the one-dimensional path h over an orthonormal basis {ė_n}_{n≥1} of L², ḣ_t = Σ_{n≥1} a_n ė_n(t). In practice, we truncate the sum at a certain order N and minimize over the coefficients (a_n)_{1≤n≤N}; we obtain an approximation of the minimizer h^y and therefore of ĥ^y_t = (K_H ∗ ḣ^y)_t = ∫_0^t K(t, s) ḣ^y_s ds. We chose the Fourier basis ė_1(t) = 1, ė_{2n}(t) = √2 cos(2πn t), ė_{2n+1}(t) = √2 sin(2πn t), n ∈ N \ {0}, in our experiments, and observed that truncating the sum at N = 8 provides good accuracy. The results for the rough Bergomi model are displayed in Figure 3, where the function σ̃_loc(T, y) is indeed seen to approach its limit σ(ĥ^y_1) when maturity decreases from T = 0.5 to T = 0.05. The residual error term σ̃_loc(T, y) − σ(ĥ^y_1) is seen to depend on H, with lower values of H being associated with higher errors. It is, however, unclear whether the error for H = 0.1 is due to the slow convergence of σ̃_loc or to the weak error of the Monte Carlo simulation (see Remark 4.1).
Extrapolation of local volatility surfaces. Eventually, Theorem 3.3 provides us with an extrapolation recipe for local volatilities at very short maturities: fixing a (small) maturity T and a log-moneyness level k, and formally plugging y = k/T^{1/2−H} into (3.9), we obtain
σ_loc(T, k) ≈ σ(ĥ^y_1)|_{y = k/T^{1/2−H}}.
The limiting function σ(ĥ^y_1)|_{y = k/T^{1/2−H}} can therefore be used to extrapolate a local volatility surface in a way that is consistent with the behavior implied by a rough volatility model. As a specific application, consider the calibration of a local-stochastic volatility (LSV) model to an option price surface, for example using the particle method of Guyon and Henry-Labordère [50]. The LSV model can be obtained by decoration of a naked rough volatility model, which amounts to enhancing the rough volatility model (2.1) for S_t = S_0 e^{X_t} with a leverage function l(t, S),
dS_t = S_t l(t, S_t) √(V_t) ( ρ dW_t + √(1 − ρ²) dW̄_t ).
Given the spot variance process V, the LSV model calibrated to a given Dupire local volatility surface σ_Dup corresponds to (see [50])
l(t, S_t) = σ_Dup(t, S_t) / √( E[V_t | S_t] ).
In general, one wishes the leverage function l(t, S) to be a small correction to the original stochastic volatility model (in other words, as close as possible to l ≡ 1). In practice, the local volatility σ_Dup coming from market data has to be extrapolated for values of t smaller than the shortest observed maturity, and the choice of the extrapolation method is up to the user. If, for small t, the chosen extrapolation σ_Dup(t, K) is qualitatively too different from the behavior of the conditional expectation E[V_t | S_t = K] in the rough volatility setting (for example, more specifically: the ATM skew of σ_Dup is far from the power law (3.12)), then the leverage function will have to compensate, hence deviating from the unit function. Under the pure rough volatility model (l ≡ 1), Theorem 3.3 and Corollary 3.4 describe the behavior of the Markovian projection E[V_t | S_t] for small t: eventually, these statements give hints on how σ_Dup(t, ·) should be extrapolated for l(t, ·) not to deviate too much from the unit function. Such an extrapolation scheme is exploited in the recent work of Dall'Acqua, Longoni and Pallavicini [19], precisely in order to calibrate an LSV model with lifted Heston [1] backbone to the implied volatility surface of the EuroStoxx50 index.
Failure of the harmonic mean asymptotic formula under rough volatility. In Remark 3.7, we pointed out that, as a consequence of the general 1/(H + 3/2) skew rule (as opposed to the 1/2 rule) in Corollary 3.4, the harmonic mean asymptotic formula σ_BS(T, k) ∼ H(T, k) as T → 0, see (3.16), is expected not to hold for H ≠ 1/2 (without any contradiction with the statements in [16], which require regularity conditions on the local volatility surface that are not satisfied in the rough volatility setting, see our discussion in Remark 3.7). In other words, we do not expect the harmonic mean H(T, k) = [ (1/k) ∫_0^k dy/σ_loc(T, y) ]^{−1} of the local volatility to provide an accurate approximation of the implied volatility σ_BS(T, k) in the short-maturity limit.

Proofs

Proof of Corollary 3.4. It is a standard result that the implied volatility σ_BS and the local volatility σ_loc generated by a stochastic volatility model with ρ = 0 are symmetric around y = 0, so that the finite-difference at-the-money skews S_BS and S_loc are identically zero in this case. We therefore assume ρ ≠ 0 in what follows. Let us write ⟨K1, 1⟩ = ∫_0^1 K1(t) dt and K1(t) = ∫_0^t K(t, s) ds. Using an expansion of the map y ↦ ĥ^y_1 around y = 0, as provided in [32], we have
Σ(y) = σ(ĥ^y_1) = σ_0 + y (σ′_0/σ_0) ρ K1(1) + O(y²) as y → 0,
where σ_0 = σ(0) and σ′_0 = σ′(0).
Together with (3.12), this implies
S_loc(t, y) ∼ [ (σ′_0/σ_0) ρ K1(1) + r(y) ] · (1/t^{1/2−H})
as t → 0, where r(y) → 0 as y → 0. Similarly, from (3.7) and a third-order energy expansion of Λ, obtained in [8, Thm 3.4] (an extension to fourth order is given in [32] but not required here), it follows that
S_BS(t, y) ∼ [χ(y) − χ(−y)]/(2y) · (1/t^{1/2−H}) = [ (σ′_0/σ_0) ρ ⟨K1, 1⟩ + ℓ(y) ] · (1/t^{1/2−H})
as t → 0, where ℓ(y) → 0 as y → 0. Therefore
S_BS(t, y)/S_loc(t, y) → [χ(y) − χ(−y)]/[Σ(y) − Σ(−y)] = ⟨K1, 1⟩/K1(1) + o(1) as t → 0,
where the o(1) term refers to the limit y → 0.
The identity
K1(1) = (H + 3/2) ⟨K1, 1⟩
for K(t, s) = √(2H) (t − s)^{H−1/2} is straightforward to prove using simple integration (we note in passing that this identity holds for any self-similar Ŵ, by leveraging a representation in [54]). The statement of the corollary follows.
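For completeness, here is the simple integration behind the identity (our own spelling-out, using K(t, s) = √(2H)(t − s)^{H−1/2} from (2.4)):

\[
\mathrm{K1}(t) = \int_0^t \sqrt{2H}\,(t-s)^{H-1/2}\,ds = \frac{\sqrt{2H}}{H+1/2}\,t^{H+1/2},
\qquad
\langle \mathrm{K1}, 1 \rangle = \int_0^1 \mathrm{K1}(t)\,dt = \frac{\sqrt{2H}}{(H+1/2)(H+3/2)},
\]

so that K1(1) = √(2H)/(H + 1/2) = (H + 3/2)⟨K1, 1⟩, as claimed.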
The proof of Theorem 3.3 relies on the following representation of the Markovian projection, obtained via the integration-by-parts formula of Malliavin calculus:
$$ E[V_T \mid X_T = y] = \frac{E\!\left[ V_T\, 1_{X_T \ge y} \int_0^T \frac{d\overline W_t}{\bar\rho\, \sigma(\hat W_t)} \right]}{E\!\left[ 1_{X_T \ge y} \int_0^T \frac{d\overline W_t}{\bar\rho\, \sigma(\hat W_t)} \right]}, \qquad \bar\rho := \sqrt{1 - \rho^2}. \tag{5.1} $$
Representation (5.1) is rather classical, see [29], though spelled out there only in the case $\rho = 0$, and [5, Lemma 3] for a general formula in an abstract setting. We note that $\bar\rho$ cancels as long as it is not zero, equivalently $|\rho| < 1$, which is our non-degeneracy assumption. (We kept $\bar\rho$ above to insist on this point.) For completeness, we give a proof in Lemma 5.1.
Proof of Theorem 3.3. Setting $T = \varepsilon^2$ and using the time-scaling property of the triple $(W, \hat W, X)$, we get
$$ E\!\left[ \sigma(\hat W_T)^2\, 1_{X_T \ge y T^{1/2-H}} \int_0^T \frac{d\overline W_t}{\bar\rho\, \sigma(\hat W_t)} \right] = E\!\left[ \sigma(\varepsilon^{2H} \hat W_1)^2\, 1_{X^\varepsilon_1 \ge y \varepsilon^{1-2H}} \int_0^1 \frac{\varepsilon\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H} \hat W_t)} \right], $$
$$ E\!\left[ 1_{X_T \ge y T^{1/2-H}} \int_0^T \frac{d\overline W_t}{\bar\rho\, \sigma(\hat W_t)} \right] = E\!\left[ 1_{X^\varepsilon_1 \ge y \varepsilon^{1-2H}} \int_0^1 \frac{\varepsilon\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H} \hat W_t)} \right]. $$
We define
$$ J(\varepsilon, y) = e^{\frac{\Lambda(y)}{\varepsilon^{4H}}}\, E\!\left[ \sigma(\varepsilon^{2H}\hat W_1)^2\, 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}} \int_0^1 \frac{\varepsilon\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)} \right], \qquad \tilde J(\varepsilon, y) = e^{\frac{\Lambda(y)}{\varepsilon^{4H}}}\, E\!\left[ 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}} \int_0^1 \frac{\varepsilon\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)} \right] \tag{5.2} $$
so that $\sigma^2_{\mathrm{loc}}\big(\varepsilon^2, y\varepsilon^{1-2H}\big) = \frac{J(\varepsilon, y)}{\tilde J(\varepsilon, y)}$ from (5.1). The implementation of an infinite-dimensional Laplace method along the lines of [31] allows us to determine the asymptotic behavior of $J(\varepsilon, y)$ and $\tilde J(\varepsilon, y)$ as $\varepsilon \to 0$; we postpone the details to Lemma 5.2 below. We obtain the $\varepsilon \to 0$ limit of $\sigma^2_{\mathrm{loc}}\big(\varepsilon^2, y\varepsilon^{1-2H}\big)$, and therefore the statement of the theorem, from (5.3). $\square$
Lemma 5.1. The representation formula (5.1) for the conditional expectation holds for every y ∈ R.
Proof. The Malliavin derivative $\overline D$ of $X_T$ with respect to $\overline W$ is
$$ \overline D_t X_T = \bar\rho\, \sqrt{V_t}, \qquad t < T, $$
because $V$ is $W$-adapted. Consider a two-dimensional Skorohod-integrable process $(0, u)$, with
$$ u_t = \frac{1}{T\, \overline D_t X_T} = \frac{1}{T \bar\rho \sqrt{V_t}}\,. $$
We write $\delta$ for the Skorohod integral. From the Malliavin integration-by-parts formula, for a bounded smooth function $\varphi: \mathbb R \to \mathbb R$, and using that $\overline D_t V_T = 0$, we obtain
$$ E[V_T\, \varphi(X_T)\, \delta(0, u)] = E\big[ \langle \overline D(V_T \varphi(X_T)), u \rangle \big] = E\big[ V_T\, \varphi'(X_T)\, \langle \overline D X_T, u \rangle \big] = E\!\left[ V_T\, \varphi'(X_T) \int_0^T \overline D_t X_T\, u_t\, dt \right] = E\big[ V_T\, \varphi'(X_T) \big]. $$
We have used here the boundedness of $\varphi(\cdot)$ and Assumption 2.1 on $\sigma(\cdot)$ (see the proof of Lemma 5.4 below for a detailed argument). Moreover, since $V$ is adapted, we have $\delta(0, u) = \int_0^T \frac{1}{T \bar\rho \sqrt{V_t}}\, d\overline W_t$, and so
$$ E[V_T\, \varphi'(X_T)] = \frac{1}{T \bar\rho}\, E\!\left[ V_T\, \varphi(X_T) \int_0^T \frac{d\overline W_t}{\sqrt{V_t}} \right]. $$
Following the same steps, one can show that the following identity also holds:
$$ E[\varphi'(X_T)] = \frac{1}{\bar\rho T}\, E\!\left[ \varphi(X_T) \int_0^T \frac{d\overline W_t}{\sqrt{V_t}} \right]. $$
The representation formula (5.1) for the conditional expectation then follows from a regularization procedure for the indicator function $1_{X_T \ge y}$; see for example [5]. $\square$
Lemma 5.2. For $J, \tilde J$ defined in (5.2) we have
$$ J(\varepsilon, y) \sim \varepsilon^{2H}\, \sigma^2(\hat h^y_1) \int_0^1 \frac{d\bar h^y_t}{\sigma(\hat h^y_t)}\; \frac{1}{\bar\rho\, 2\Lambda(y)\sqrt{2\pi}}\; E\big[ \exp\big( \Lambda'(y)\, \Delta_2 \big) \big], \qquad \tilde J(\varepsilon, y) \sim \varepsilon^{2H} \int_0^1 \frac{d\bar h^y_t}{\sigma(\hat h^y_t)}\; \frac{1}{\bar\rho\, 2\Lambda(y)\sqrt{2\pi}}\; E\big[ \exp\big( \Lambda'(y)\, \Delta_2 \big) \big] \tag{5.3} $$
as $\varepsilon \to 0$, where $\Delta_2$ is a quadratic Wiener functional given in (5.8) (see also [31, Equation (7.4)]).
Corollary 5.3 (Digital expansion). We do not use it here, but we note that from the computations in the proof of Theorem 3.3 and Lemma 5.2 it follows that there exists $y_0 > 0$ such that the following holds for all $y \in (0, y_0)$:
$$ P\!\left( X_T \ge y\, T^{1/2-H} \right) \sim e^{-\frac{\Lambda(y)}{T^{2H}}}\; T^H\; \frac{1}{2\Lambda(y)\sqrt{2\pi}}\; E\big[ \exp\big( \Lambda'(y)\, \Delta_2 \big) \big], \qquad \text{as } T \to 0. $$
Proof of Lemma 5.2. We aim to apply the asymptotic results in [31]. Assumption (A1) in [31] is nothing but the validity of the large deviations principle for the model defined in (2.1), which we have already discussed in Remark 3.1. We take $y$ close enough to $0$ so that the non-degeneracy assumptions [31, Assumptions (A3), (A4), (A5)] are satisfied for the model under consideration, as has been checked in [31, Section 7.1]. Therefore, the preliminary regularity-structures results in [31] apply to our setting, and we can employ them in the proof, as detailed in the proof of [31, Theorem 6.1]. The proof of (5.3) is then a modification of [31, Proposition 8.7], from which we borrow the notations; the necessary definitions are recalled in Appendix A. We only prove the statement for $J$, the one for $\tilde J$ being completely analogous. Set, for any $\delta > 0$,
$$ P^\delta(A) = P\big( A \cap \{ \varepsilon^{2H} |||\mathbf W||| < \delta \} \big), \tag{5.4} $$
with $\mathbf W$ defined in (A.2). Using [32, Proof of Lemma 3.4, Step 1] we have
$$ J^\delta(\varepsilon, y) = \varepsilon^{1-2H}\, e^{\frac{\Lambda(y)}{\varepsilon^{4H}}}\, E^\delta\!\left[ \sigma(\varepsilon^{2H}\hat W_1)^2\, 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}} \int_0^1 \frac{\varepsilon^{2H}\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)} \right] $$
where the expectation $E^\delta$ is taken with respect to the sub-probability $P^\delta$. As a consequence of Lemma 5.4, any "algebraic expansion" of $J$ (i.e., in powers of $\varepsilon$) does not change when switching to $J^\delta$. So, proving the asymptotic behavior (5.3) for $J^\delta$ implies the statement.
We recall (3.1) and apply Girsanov's theorem, via the transformation
$$ \varepsilon^{2H} W \mapsto \varepsilon^{2H} W + h^y = \varepsilon^{2H}\big( W + h^y/\varepsilon^{2H} \big), \qquad \varepsilon^{2H} \hat W \mapsto \varepsilon^{2H} \hat W + \hat h^y = \varepsilon^{2H}\big( \hat W + \hat h^y/\varepsilon^{2H} \big), \tag{5.5} $$
from which we introduce
$$ Z^\varepsilon_1 = \int_0^1 \sigma\big( \varepsilon^{2H}\hat W_t + \hat h^y_t \big)\, d\big[ \varepsilon^{2H} W + h^y \big]_t \;-\; \frac{\varepsilon^{1+2H}}{2} \int_0^1 \sigma^2\big( \varepsilon^{2H}\hat W_t + \hat h^y_t \big)\, dt, \tag{5.6} $$
and obtain
$$ \varepsilon^{2H-1} J^\delta(\varepsilon, y) = e^{\frac{\Lambda(y)}{\varepsilon^{4H}}}\, E^\delta\!\left[ \sigma(\varepsilon^{2H}\hat W_1)^2\, 1_{\varepsilon^{2H-1} X^\varepsilon_1 \ge y} \int_0^1 \frac{\varepsilon^{2H}\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)} \right] = E^\delta\!\left[ e^{-\frac{1}{\varepsilon^{2H}} \int_0^1 \dot h^y\, dW}\, \sigma^2\big( \varepsilon^{2H}\hat W_1 + \hat h^y_1 \big)\, 1_{\varepsilon^{2H} g_1 + \varepsilon^{4H} g_2 + r_3 \ge 0} \int_0^1 \frac{\varepsilon^{2H}\, d\overline W_t + d\bar h^y_t}{\bar\rho\, \sigma\big( \varepsilon^{2H}\hat W_t + \hat h^y_t \big)} \right]. $$
Theorem A.1, applied with $\varepsilon^{2H}$ (instead of $\varepsilon$), gives, on the event $\{ \varepsilon^{2H}|||\mathbf W||| < \delta \}$,
$$ \sigma^2\big( \varepsilon^{2H}\hat W_1 + \hat h^y_1 \big) = \sigma^2(\hat h^y_1) + \epsilon^1_{\varepsilon, W}, \qquad \text{with } |\epsilon^1_{\varepsilon, W}| \le C\delta, $$
and
$$ \int_0^1 \frac{\varepsilon^{2H}\, d\overline W_t + d\bar h^y_t}{\bar\rho\, \sigma\big( \varepsilon^{2H}\hat W_t + \hat h^y_t \big)} = \int_0^1 \frac{d\bar h^y_t}{\bar\rho\, \sigma(\hat h^y_t)} + \epsilon^2_{\varepsilon, W}, \qquad \text{with } |\epsilon^2_{\varepsilon, W}| \le C \varepsilon^{2H} |||\mathbf W||| \le C\delta. $$
Therefore,
$$ \sigma^2\big( \varepsilon^{2H}\hat W_1 + \hat h^y_1 \big) \int_0^1 \frac{\varepsilon^{2H}\, d\overline W_t + d\bar h^y_t}{\bar\rho\, \sigma\big( \varepsilon^{2H}\hat W_t + \hat h^y_t \big)} = \sigma^2(\hat h^y_1) \int_0^1 \frac{d\bar h^y_t}{\bar\rho\, \sigma(\hat h^y_t)} + \epsilon_{\varepsilon, W} $$
with $|\epsilon_{\varepsilon, W}| \le C\delta$. If $\varepsilon^{2H}|||\mathbf W||| \le \delta$ we also have (A.7), so, for fixed $\delta$ and $\varepsilon$ small enough,
$$ |r^\varepsilon_3| \le \delta\, \varepsilon^{4H}\big( C + |||\mathbf W|||^2 \big). $$
We have
$$ E^\delta[\,\cdots\,] \in \left( \sigma^2(\hat h^y_1) \int_0^1 \frac{d\bar h^y_t}{\bar\rho\, \sigma(\hat h^y_t)} \pm C\delta \right) E^\delta\!\left[ e^{-\frac{1}{\varepsilon^{2H}} \int_0^1 \dot h^y\, dW}\; 1_{g_1 + \varepsilon^{2H} g_2 \pm \delta \varepsilon^{2H} (C + |||\mathbf W|||^2) \ge 0} \right]. \tag{5.7} $$
The optimality condition [31, Lemma C.3] gives $\int_0^1 \dot h^y\, dW = \Lambda'(y)\, g_1$. By [31, Lemma 8.3],
$$ g_2 = \Delta_2 + g_1 \Delta_1 + g_1^2 \Delta_0, \tag{5.8} $$
where the $\Delta_i$'s are independent of $g_1$. We now set, as in [4], the zero-mean Gaussian process $V = V^y$,
$$ V_t(\omega) := W_t(\omega) - g_1(\omega)\, v_t, \tag{5.9} $$
where $v$ is chosen so that $V$ is independent of $g_1$. We also define
$$ \mathbf V(\omega) := T_{-g_1(\omega) v}\, \mathbf W(\omega), $$
where T , the "lifted" Cameron-Martin translation, is defined in (A.3). As in Section 8.1 of [31], we let
∆ 0 := ∆ 0 + Cδ v 2 H 1 , ∆ ± 2 := ∆ 2 ± δ(C + |||V||| 2 ),
where $\Delta_2^\pm$ is also $P$-independent of $g_1$ and $\mathbf V$. (This independence allows for conditional Gaussian computations.) We refer to [31] for details; here we only use that $\varepsilon^{2H}|||\mathbf V||| \le C\delta$, so that
$$ |\varepsilon^{2H} \Delta_1| \le C \varepsilon^{2H} |||\mathbf V||| \le C\delta, $$
when $\varepsilon^{2H}|||\mathbf W||| \le \delta$. Thus, the asymptotic behavior of $J^\delta(\varepsilon, y)$ is sandwiched by
$$ \left( \sigma^2(\hat h^y_1) \int_0^1 \frac{d\bar h^y_t}{\bar\rho\, \sigma(\hat h^y_t)} \pm C\delta \right) \times (*), \qquad \text{with } (*) \in E^\delta\!\left[ \exp\!\left( -\frac{\Lambda'(y)\, g_1}{\varepsilon^{2H}} \right) 1_{g_1 + \varepsilon^{2H} \Delta_2^\pm \pm C(1 + \widetilde\Delta_0)\delta |g_1| > 0} \right]. $$
The limit of this expectation can be computed with the Laplace method. We prove the upper bound. Clearly,
$$ E^\delta\!\left[ \exp\!\left( -\frac{\Lambda'(y)\, g_1}{\varepsilon^{2H}} \right) 1_{g_1 + \varepsilon^{2H} \Delta_2^\pm \pm C(1 + \widetilde\Delta_0)\delta |g_1| > 0} \right] \le E[\,\cdots\,], $$
where $\cdots$ means the same argument. Set $\sigma_y = 2\Lambda(y)/\Lambda'(y)$ and $\gamma_\delta := C(1 + \widetilde\Delta_0)\delta$, and assume that $\delta$ is small enough that $\gamma_\delta < 1$. By [31, Theorem 6.1, part (iii)], we have $\Lambda'(y)\, \sigma_y > 0$, and we can then apply Lemma 5.5 (with $N = g_1/\sigma_y$) to see that
$$ E\big[\, \cdots \mid \Delta_2, \mathbf V \,\big] \le \frac{\varepsilon^{2H}}{\sigma_y \Lambda'(y) \sqrt{2\pi}}\; \max\!\left( e^{\frac{\Lambda'(y)\left( \Delta_2 + \delta(C + |||\mathbf V|||^2) \right)}{1 - \gamma_\delta}},\; e^{\frac{\Lambda'(y)\left( \Delta_2 + \delta(C + |||\mathbf V|||^2) \right)}{1 + \gamma_\delta}} \right). $$
By [31, Proposition 8.6 and proof of Corollary 7.1], $\exp(\Lambda'(y)\Delta_2) \in L^{1+}$, and by [31, Lemma 8.3(iv)], $\exp(|||\mathbf V|||^2) \in L^{0+}$, so that by letting successively $\varepsilon$ and $\delta$ go to $0$ we obtain
$$ \limsup_{\varepsilon \to 0}\; \varepsilon^{-2H}\, E[\,\cdots\,] \le \frac{1}{\sigma_y \Lambda'(y) \sqrt{2\pi}}\; E\big[ \exp\big( \Lambda'(y)\, \Delta_2 \big) \big]. $$
Recalling now $2\Lambda(y) = \Lambda'(y)\, \sigma_y$, and the prefactor $\sigma^2(\hat h^y_1) \int_0^1 \frac{d\bar h^y_t}{\bar\rho\, \sigma(\hat h^y_t)} \pm C\delta$ in (5.7), we have the upper bound. The lower bound is proved in the same way, using the lower bound in Lemma 5.5. $\square$

Lemma 5.4. Fix $\delta > 0$. Then there exists $c = c_{y, \delta} > 0$ such that $|J^\delta(\varepsilon, y) - J(\varepsilon, y)| = O(\exp(-c/\varepsilon^2))$.

Proof. Recall the sub-probability (5.4) and introduce
$$ B := \{ \varepsilon^{2H} |||\mathbf W||| < \delta \}^c = \{ \varepsilon^{2H} |||\mathbf W||| \ge \delta \}. $$
We have
$$ J(\varepsilon, y) - J^\delta(\varepsilon, y) = \exp\!\left( \frac{\Lambda(y)}{\varepsilon^{4H}} \right) E\!\left[ \sigma(\varepsilon^{2H}\hat W_1)^2\, 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}} \int_0^1 \frac{\varepsilon\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)}\; 1_B \right]. \tag{5.10} $$
For any conjugate exponents $p, p' > 1$,
$$ E\!\left[ \sigma(\varepsilon^{2H}\hat W_1)^2\, 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}} \int_0^1 \frac{\varepsilon\, d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)}\; 1_B \right] \le \varepsilon\, E\!\left[ \left| \sigma(\varepsilon^{2H}\hat W_1)^2 \int_0^1 \frac{d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)} \right|^p \right]^{1/p} E\big[ 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}}\, 1_B \big]^{1/p'}. \tag{5.11} $$
The first factor can be bounded using Hölder's inequality as
$$ E\!\left[ \left| \sigma(\varepsilon^{2H}\hat W_1)^2 \int_0^1 \frac{d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)} \right|^p \right] \le \Big( E\big[ \sigma(\varepsilon^{2H}\hat W_1)^{2q} \big] \Big)^{1/q}\; E\!\left[ \left| \int_0^1 \frac{d\overline W_t}{\bar\rho\, \sigma(\varepsilon^{2H}\hat W_t)} \right|^{p q'} \right]^{1/q'}. $$
Since $\sigma(\cdot)$ satisfies (2.3), the first factor is bounded for any $q > 1$. Using the Burkholder-Davis-Gundy inequality,
$$ E\!\left[ \left| \int_0^1 \frac{d\overline W_t}{\sigma(\varepsilon^{2H}\hat W_t)} \right|^{p q'} \right] \le C\, E\!\left[ \left( \int_0^1 \frac{dt}{\sigma(\varepsilon^{2H}\hat W_t)^2} \right)^{p q'/2} \right] $$
and, using condition (2.2) and the moment formula for log-normal variables,
$$ \cdots \le C\, E\!\left[ \left( \int_0^1 \exp\big( c\, \varepsilon^{2H} |\hat W_t| \big)\, dt \right)^{p q'/2} \right] \le C\, E\!\left[ \exp\Big( c\, \tfrac{p q'}{2}\, \varepsilon^{2H} \sup_{t \le 1} |\hat W_t| \Big) \right] < \infty. $$
We conclude that the first factor in (5.11) is bounded by a constant, for any $p \ge 1$. In [31, lines after (8.5)] it is shown that
$$ E\big[ 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}}\, 1_B \big] = O\!\left( e^{-\frac{\widetilde\Lambda(\delta, y)}{\varepsilon^{4H}}} \right) $$
with $\widetilde\Lambda(\delta, y) > \Lambda(y)$. Now,
$$ E\big[ 1_{X^\varepsilon_1 \ge y\varepsilon^{1-2H}}\, 1_B \big]^{1/p'} = O\!\left( e^{-\frac{\widetilde\Lambda(\delta, y)}{p'\, \varepsilon^{4H}}} \right) $$
and we can choose $p' > 1$ close enough to $1$ to have $\widetilde\Lambda(\delta, y)/p' > \Lambda(y)$. The statement follows. $\square$

Lemma 5.5. Let $\alpha \in \mathbb R$, $\gamma \in [0, 1)$, $\varepsilon > 0$, and $N \sim \mathcal N(0, 1)$. Then, for some $C > 0$,
$$ e^{\frac{\alpha}{1+\gamma}} - C\varepsilon^2 \;\le\; \sqrt{2\pi}\, \varepsilon^{-1}\, E\big[ \exp\big( -\varepsilon^{-1} N \big)\, 1_{N + \gamma|N| + \varepsilon\alpha > 0} \big] \;\le\; e^{\frac{\alpha}{1-\gamma}}, \qquad \text{when } \alpha \ge 0, \tag{5.13} $$
and the analogous bounds hold with the roles of $1 - \gamma$ and $1 + \gamma$ interchanged when $\alpha < 0$.

Proof. Substituting $N = \varepsilon v$, the quantity in (5.13) equals $\int_{\mathbb R} e^{-v}\, e^{-\varepsilon^2 v^2/2}\, 1_{v + \gamma|v| + \alpha > 0}\, dv$; the upper bound follows by bounding the Gaussian factor by one and computing the resulting exponential integral over $\{ v + \gamma|v| + \alpha > 0 \}$. To obtain the lower bound, use $e^{-y^2/2} \ge 1 - y^2/2$ and split the integral to obtain
$$ \text{(5.13)} \;\ge\; \int_{-\infty}^{+\infty} e^{-v}\, 1_{v + \gamma|v| + \alpha > 0}\, dv \;-\; \frac{\varepsilon^2}{2} \int_{-\infty}^{+\infty} e^{-v}\, v^2\, 1_{v + \gamma|v| + \alpha > 0}\, dv. $$
The first integral is computed as before. For the second one, when $\alpha < 0$ we have
$$ \int_{-\frac{\alpha}{1+\gamma}}^{+\infty} e^{-v}\, v^2\, dv = e^{\frac{\alpha}{1+\gamma}}\; \frac{\alpha^2 - 2\alpha(1+\gamma) + 2(1+\gamma)^2}{(1+\gamma)^2}, $$
and the analogous expression holds with $1 - \gamma$ in place of $1 + \gamma$ when $\alpha \ge 0$. The statement follows. $\square$
A Elements of regularity structures for rough volatility
This appendix is based on [7, 31]. We have an fBm $\hat W = K_H * \dot W$ of Hurst parameter $H$. Let $M$ be the smallest integer such that $(M+1)H - 1/2 > 0$, and then pick $\kappa$ small enough that
$$ (M+1)(H - \kappa) - 1/2 - \kappa > 0. \tag{A.1} $$
When $H = 1/2$, we have $M = 1$ and so $1/2 - \kappa \in (1/3, 1/2)$; this corresponds to the rough path case. More generally, we work with an enhancement of the Brownian noise $(W, \hat W)$, also known as a model, of the form
$$ \mathbf W(\omega) = \left( W,\; \overline W,\; \hat W,\; \int \hat W\, dW,\; \int \hat W\, d\overline W,\; \int \hat W^2\, dW,\; \cdots,\; \int \hat W^M\, dW \right), \tag{A.2} $$
with homogeneous model norm²
$$ |||\mathbf W||| := \|W\|_{1/2-\kappa} + \|\overline W\|_{1/2-\kappa} + \|\hat W\|_{H-\kappa} + \cdots + \Big\| \int \hat W^M\, dW \Big\|_{M(H-\kappa)+1/2-\kappa}, $$
where the $\|\cdot\|$ are classical, resp. 2-parameter, Hölder (semi)norms. One naturally defines, with $h = (h, \bar h) \in H^1$ and $\hat h = K_H * \dot h$,
$$ T_h(\mathbf W) = \left( W + h,\; \overline W + \bar h,\; \hat W + \hat h,\; \int (\hat W + \hat h)\, d(W + h),\; \dots \right). \tag{A.3} $$
Also, recall from [7] that there is a well-defined dilation $\delta_\varepsilon$ acting on models. Formally, it is obtained by replacing each occurrence of $W$, $\overline W$, $\hat W$ with $\varepsilon$ times that quantity:
$$ \delta_\varepsilon \mathbf W = \left( \varepsilon W,\; \varepsilon \overline W,\; \varepsilon \hat W,\; \varepsilon^2 \int \hat W\, dW,\; \varepsilon^3 \int \hat W^2\, dW,\; \dots \right) \in \mathcal M, $$
where $\mathcal M$ is the space of models. As a consequence, dilation interacts well with homogeneous model norms: $|||\delta_\varepsilon \mathbf W||| = \varepsilon |||\mathbf W|||$.
Theorem A.1 (Stochastic Taylor-like expansion). Let $f$ be a smooth function. Fix $h \in H^1$ and $\varepsilon > 0$. If $\mathbf W$ is a model, then so is $T_h(\delta_\varepsilon \mathbf W)$. The path-wise "rough/model" integral
$$ \Psi(\varepsilon) := \int_0^1 \sigma\big( \varepsilon \hat W_t + \hat h_t \big)\, d\big( T_h(\delta_\varepsilon \mathbf W) \big)_t $$
is well-defined and continuously differentiable in $\varepsilon$, and we have the estimates
$$ \big| f(\varepsilon \hat W_1 + \hat h_1) - f(\hat h_1) \big| = O(\varepsilon |||\mathbf W|||), \qquad |\Psi(\varepsilon) - \Psi(0)| = O(\varepsilon |||\mathbf W|||), $$
valid on bounded sets of $\varepsilon |||\mathbf W|||$.

Proof. As in [31, Theorem B.6], just stop the expansion at the first order. $\square$

Lemma A.2. Let $Z^\varepsilon_1$ be defined in (5.6) and recall $\bar\varepsilon \equiv \varepsilon^{2H}$. Then
$$ Z^\varepsilon_1 = g_0 + \bar\varepsilon\, g_1(\omega) + \bar\varepsilon^2\, g_2(\omega) + r_3(\omega), \tag{A.6} $$
with
$$ |r_3(\omega)| \le O\big( \varepsilon^{6H} |||\mathbf W|||^3 \big) + O\big( \varepsilon^{1+2H} \big), \qquad \text{uniformly on bounded sets of } \varepsilon^{2H} |||\mathbf W|||. \tag{A.7} $$

Proof. Directly from (5.6),
$$ Z^\varepsilon_1 = \int_0^1 \sigma\big( \varepsilon^{2H}\hat W_t + \hat h^y_t \big)\, d\big[ \varepsilon^{2H} W + h^y \big]_t + O(\varepsilon^{1+2H}), $$
uniformly on bounded sets of $\varepsilon^{2H} |\hat W|$, and hence on bounded sets of $\varepsilon^{2H} |||\mathbf W|||$. From [31, Theorem B.6], applied with $\varepsilon$ replaced by $\varepsilon^{2H}$, and then again uniformly on bounded sets of $\varepsilon^{2H} |||\mathbf W|||$, we arrive at the error estimate
$$ |r_3(\omega)| \le O\big( \varepsilon^{6H} |||\mathbf W|||^3 \big) + O\big( \varepsilon^{1+2H} \big), $$
valid uniformly on bounded sets of $\varepsilon^{2H} |||\mathbf W|||$. $\square$
Assumption 2.1. There exist $c_1, c_2, c_3, c_4 > 0$ such that for all $x \in \mathbb R$,
$$ c_1 e^{-c_2 |x|} \le \sigma(x), \tag{2.2} $$
$$ \sigma(x) \le c_3 e^{c_4 |x|}. \tag{2.3} $$
Theorem 3.3 (Markovian projection at the LDP regime). Let Assumption 2.1 be in force. Then the Markovian projection in the model (2.1) satisfies, for every $y \in \mathbb R \setminus \{0\}$ small enough,
$$ \sigma_{\mathrm{loc}}\big( T,\; y\, T^{1/2-H} \big) \to \sigma(\hat h^y_1) \qquad \text{as } T \to 0. \tag{3.9} $$
Existence of the minimizer for the control problem (3.2) is proved in [31, Lemma C.6]. Let us stress that the asymptotics (3.9) for the local volatility function holds under the mild growth conditions of Assumption 2.1, while we do not require the $1+$ moment condition of Assumption 3.2.
3.1 Local volatility skew and the new $\frac{1}{H+3/2}$ rule

Let us write $\sim$ for asymptotic equivalence as $t \to 0$, denote $\Sigma(y) := \sigma(\hat h^y_1)$ the limiting function in (3.9), and consider finite-difference approximations $\mathcal S_{\mathrm{loc}}(t, y)$ and $\mathcal S_{BS}(t, y)$ of the local and implied volatility skews.

Corollary 3.4. Let $\rho \neq 0$ and let Assumption 2.1 be in force. Then, for $y \in \mathbb R \setminus \{0\}$ small enough, the at-the-money local volatility skew exhibits the power-law behavior $\mathcal S_{\mathrm{loc}}(t, y) \propto t^{-(1/2-H)}$ (3.12), and
$$ \frac{\mathcal S_{BS}(t, y)}{\mathcal S_{\mathrm{loc}}(t, y)} \longrightarrow \frac{1}{H + 3/2} + o(1) \qquad \text{as } t \to 0, $$
where the $o(1)$ is as $y \to 0$.
Figure 1: At-the-money implied and local volatility skews in the rough Bergomi model (4.1) for $H = 0.5$ (red, top left figure), $H = 0.3$ (green, top right figure), and $H = 0.1$ (blue, bottom figure). The maturity $T$ on the x-axis is expressed in years.

Having constructed estimators (4.3) and (4.4) of the local volatility function under the rough Bergomi model, we are also able to approximate (with an additional deterministic quadrature) the harmonic mean $H(T, k)$, and compare the output with the implied volatility smile. The results are shown in Figure 4, for three different values of $H$. As expected, when $H = 0.5$ we observe (upper left panel) that the implied volatility $\sigma_{BS}(T, k)$ approaches the harmonic mean $H(T, k)$ when maturity decreases from $T = 0.45$ to $T = 0.05$, and the at-the-money slopes are also seen to agree. The convergence is even more apparent in the upper right figure, where the ratio $\frac{\sigma_{BS}(T, k)}{H(T, k)}$ is seen to monotonically converge to one. This behavior should be compared with the one in the two bottom figures, where the rough case $H = 0.1$ is considered (the case $H = 0.3$ being intermediate between the other two): now, when maturity decreases, the implied volatility smile does not seem to approach the harmonic mean $H(T, k)$ anymore (apart from the specific at-the-money point $k = 0$, where both functions tend to the initial spot volatility $\sigma_0 = \sqrt{V_0}$), and in particular, the slopes of the two curves are seen to considerably deviate from each other. This phenomenon is even clearer in the bottom right figure, where the ratio $\frac{\sigma_{BS}(T, k)}{H(T, k)}$ has a completely different behavior with respect to the diffusive case $H = 0.5$.
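The "additional deterministic quadrature" mentioned above is a one-dimensional integral of the reciprocal local volatility. The Python sketch below (assuming SciPy is available; the smooth local-volatility slice is an illustrative stand-in for the Monte Carlo estimators (4.3)-(4.4)) computes $H(T, k) = k\big( \int_0^k du/\sigma_{\mathrm{loc}}(T, u) \big)^{-1}$:

import numpy as np
from scipy.integrate import quad

def harmonic_mean(sigma_loc, k, T):
    """Harmonic mean H(T, k) of a local volatility slice sigma_loc(T, .)."""
    if abs(k) < 1e-12:
        return sigma_loc(T, 0.0)        # limiting value H(T, 0) = sigma_loc(T, 0)
    val, _ = quad(lambda u: 1.0 / sigma_loc(T, u), 0.0, k)
    return k / val

toy_local_vol = lambda T, u: 0.2 + 0.1 * np.tanh(-2.0 * u)   # illustrative slice
for k in (-0.2, -0.1, 0.0, 0.1, 0.2):
    print(k, harmonic_mean(toy_local_vol, k, 0.1))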
Figure 2: Numerical evidence for the $\frac{1}{H+3/2}$ ratio rule stated in Corollary 3.4: we plot the ratio of the at-the-money implied and local volatility skews $\frac{\partial_k \sigma_{BS}(T, k)|_{k=0}}{\partial_k \sigma_{\mathrm{loc}}(T, k)|_{k=0}}$ for $H \in \{0.1, 0.3, 0.5\}$ against maturity $T$ (in years). The dashed lines correspond to the constant values $\frac{1}{H+3/2}$ (blue for $H = 0.1$, green for $H = 0.3$, red for $H = 0.5$).

Figure 3: Short-dated local volatility $\sigma_{\mathrm{loc}}(T, y\, T^{1/2-H})$ in the rough Bergomi model (4.1) for $H = 0.5$ (top left figure), $H = 0.3$ (top right figure), and $H = 0.1$ (bottom figure), for maturities $T \in \{0.05, 0.2, 0.35, 0.5\}$. Recall that, according to Theorem 3.3, $\sigma_{\mathrm{loc}}(T, y\, T^{1/2-H}) \to \sigma(\hat h^y_1)$.
Figure 4: Numerical evidence for the failure of the harmonic mean formula within the rough Bergomi model (4.1) (see Remark 3.7): in the left figures, we compare the implied volatility $\sigma_{BS}(T, k)$ and the harmonic mean $H(T, k)$ of the local volatility defined in (3.16), for two different maturities $T$ and for $H = 0.5$ (red), $H = 0.3$ (green), and $H = 0.1$ (blue). In the right figures (same color conventions as the left figures), we plot the ratio $\frac{\sigma_{BS}(T, k)}{H(T, k)}$ of the two functions, expected to tend to $1$ as $T \to 0$ when $H = 0.5$.
¹ We thank Andrea Pallavicini and Riccardo Longoni for interesting and stimulating discussions on this topic.
The rate-function minimizing path $\hat h^y_t$ is evaluated using the Ritz projection method with $N = 8$ Fourier basis functions; see Section 4.2.
² In fact, $\|\hat W\|_{H-\kappa} \lesssim \|W\|_{1/2-\kappa}$ by Schauder estimates, so that including $\hat W$ in the model norm is mildly redundant.
References

[1] E. Abi Jaber. Lifting the Heston model. Quantitative Finance, 19(12):1995-2013, 2019.
[2] E. Alòs, D. García-Lorite, and M. Pravosud. On the skew and curvature of implied and local volatilities. arXiv e-prints, https://arxiv.org/pdf/2205.11185.pdf, 2022.
[3] E. Alòs, J. A. León, and J. Vives. On the short-time behavior of the implied volatility for jump-diffusion models with stochastic volatility. Finance and Stochastics, 11(4):571-589, 2007.
[4] R. Azencott. Petites perturbations aléatoires des systèmes dynamiques: développements asymptotiques. Bulletin des sciences mathématiques, 109(3):253-308, 1985.
[5] V. Bally. An elementary introduction to Malliavin calculus. Research Report RR-4718, INRIA. Available at https://hal.inria.fr/inria-00071868, 2003.
[6] C. Bayer, P. Friz, and J. Gatheral. Pricing under rough volatility. Quantitative Finance, 16(6):887-904, 2016.
[7] C. Bayer, P. K. Friz, P. Gassiat, J. Martin, and B. Stemper. A regularity structure for rough volatility. Mathematical Finance, 30(3):782-832, 2020.
[8] C. Bayer, P. K. Friz, A. Gulisashvili, B. Horvath, and B. Stemper. Short-time near-the-money skew in rough fractional volatility models. Quantitative Finance, 19(5):779-798, 2019.
[9] C. Bayer, M. Fukasawa, and S. Nakahara. Short communication: On the weak convergence rate in the discretization of rough volatility models. SIAM Journal on Financial Mathematics, 13(3), 2022.
[10] C. Bayer, E. J. Hall, and R. Tempone. Weak error rates for option pricing under the rough Bergomi model. arXiv preprint arXiv:2009.01219, to appear in IJTAF, 2020.
[11] C. Bayer, C. B. Hammouda, and R. Tempone. Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model. Quantitative Finance, 0(0):1-17, 2020.
[12] C. Bayer, F. A. Harang, and P. Pigato. Log-modulated rough stochastic volatility models. SIAM Journal on Financial Mathematics, 12(3):1257-1284, 2021.
[13] C. Bayer, B. Horvath, A. Muguruza, B. Stemper, and M. Tomas. On deep calibration of (rough) stochastic volatility models. arXiv preprint arXiv:1908.08806, 2019.
[14] M. Bennedsen, A. Lunde, and M. S. Pakkanen. Hybrid scheme for Brownian semistationary processes. Finance and Stochastics, 21(4):931-965, 2017.
[15] M. Bennedsen, A. Lunde, and M. S. Pakkanen. Decoupling the short- and long-term behavior of stochastic volatility. Journal of Financial Econometrics, 2021.
[16] H. Berestycki, J. Busca, and I. Florent. Asymptotics and calibration of local volatility models. Quantitative Finance, 2:61-69, 2002.
[17] H. Berestycki, J. Busca, and I. Florent. Computing the implied volatility in stochastic volatility models. Communications on Pure and Applied Mathematics, 57(10):1352-1373, 2004.
[18] G. Brunick and S. Shreve. Mimicking an Itô process by a solution of a stochastic differential equation. The Annals of Applied Probability, 23(4):1584-1628, 2013.
[19] E. Dall'Acqua, R. Longoni, and A. Pallavicini. Rough-Heston local-volatility model. arXiv e-prints, https://arxiv.org/abs/2206.09220, 2022.
[20] S. De Marco, P. Friz, and S. Gerhold. Rational shapes of local volatility. Risk, 26(2):70, 2013.
[21] S. De Marco and P. K. Friz. Local volatility, conditioned diffusions, and Varadhan's formula. SIAM Journal on Financial Mathematics, 9(2):835-874, 2018.
[22] E. Derman, I. Kani, and J. Z. Zou. The local volatility surface: Unlocking the information in index option prices. Financial Analysts Journal, 52(4):25-36, 1996.
[23] B. Dupire. Pricing with a smile. Risk, 7(1):18-20, 1994.
[24] B. Dupire. A unified theory of volatility. 1996. Reprinted in P. Carr (ed.), Derivatives Pricing: The Classic Collection, pages 185-196, 2004.
[25] O. El Euch, M. Fukasawa, J. Gatheral, and M. Rosenbaum. Short-term at-the-money asymptotics under stochastic volatility models. SIAM Journal on Financial Mathematics, 10(2):491-511, 2019.
[26] O. El Euch, M. Fukasawa, and M. Rosenbaum. The microstructural foundations of leverage effect and rough volatility. Finance and Stochastics, 22(2):241-280, 2018.
[27] O. El Euch and M. Rosenbaum. The characteristic function of rough Heston models. Mathematical Finance, 29(1):3-38, 2019.
[28] M. Forde and H. Zhang. Asymptotics for rough stochastic volatility models. SIAM Journal on Financial Mathematics, 8(1):114-145, 2017.
[29] E. Fournié, J.-M. Lasry, J. Lebuchoux, and P.-L. Lions. Applications of Malliavin calculus to Monte-Carlo methods in finance. II. Finance and Stochastics, 5(2):201-236, 2001.
[30] P. Friz, W. Salkeld, and T. Wagenhofer. Weak error estimates for rough volatility models. arXiv, 2022.
[31] P. K. Friz, P. Gassiat, and P. Pigato. Precise asymptotics: Robust stochastic volatility models. The Annals of Applied Probability, 31(2):896-940, 2021.
[32] P. K. Friz, P. Gassiat, and P. Pigato. Short-dated smile under rough volatility: asymptotics and numerics. Quantitative Finance, pages 1-18, 2021.
[33] P. K. Friz, J. Gatheral, A. Gulisashvili, A. Jacquier, and J. Teichmann. Large Deviations and Asymptotic Methods in Finance, volume 110. Springer, 2015.
[34] P. K. Friz, S. Gerhold, and M. Yor. How to make Dupire's local volatility work with jumps. Quantitative Finance, 14(8):1327-1331, 2014.
[35] P. K. Friz and M. Hairer. A Course on Rough Paths. With an Introduction to Regularity Structures. Springer, 2020.
[36] P. K. Friz, P. Pigato, and J. Seibel. The step stochastic volatility model. Risk, June 2021.
[37] M. Fukasawa. Asymptotic analysis for stochastic volatility: martingale expansion. Finance and Stochastics, 15(4):635-654, 2011.
[38] M. Fukasawa. Short-time at-the-money skew and rough fractional volatility. Quantitative Finance, 17(2):189-198, 2017.
[39] M. Fukasawa. Volatility has to be rough. Quantitative Finance, 21(1):1-8, 2021.
[40] M. Fukasawa, T. Takabatake, and R. Westphal. Is volatility rough? arXiv preprint arXiv:1905.04852, 2019.
[41] P. Gassiat. On the martingale property in the rough Bergomi model. Electronic Communications in Probability, 24:9 pp., 2019.
[42] P. Gassiat. Weak error rates of numerical schemes for rough volatility. arXiv preprint arXiv:2203.09298, 2022.
[43] J. Gatheral. The Volatility Surface: A Practitioner's Guide. John Wiley & Sons, 2006.
[44] J. Gatheral, E. P. Hsu, P. Laurence, C. Ouyang, and T.-H. Wang. Asymptotics of implied volatility in local volatility models. Mathematical Finance, 22(4):591-620, 2012.
[45] J. Gatheral, T. Jaisson, and M. Rosenbaum. Volatility is rough. Quantitative Finance, pages 1-17, 2018.
[46] J. Gatheral and M. Keller-Ressel. Affine forward variance models. Finance and Stochastics, 23(3):501-533, 2019.
[47] L. Goudenège, A. Molent, and A. Zanette. Machine learning for pricing American options in high-dimensional Markovian and non-Markovian models. Quantitative Finance, 20(4):573-591, 2020.
[48] A. Gulisashvili. Large deviation principle for Volterra type fractional stochastic volatility models. SIAM Journal on Financial Mathematics, 9(3):1102-1136, 2018.
[49] A. Gulisashvili. Gaussian stochastic volatility models: Scaling regimes, large deviations, and moment explosions. Stochastic Processes and their Applications, 130(6):3648-3686, 2020.
[50] J. Guyon and P. Henry-Labordère. Being particular about calibration. Risk, January 2012.
[51] I. Gyöngy. Mimicking the one-dimensional marginal distributions of processes having an Itô differential. Probability Theory and Related Fields, 71(4):501-516, 1986.
[52] P. Henry-Labordère. Calibration of local stochastic volatility models to market smiles: A Monte-Carlo approach. Risk Magazine, September 2009.
[53] P. Henry-Labordère. (Non)-parametric regressions: Applications to local stochastic volatility models. Available at SSRN 3374875, 2019.
[54] C. Jost. A note on ergodic transformations of self-similar Volterra Gaussian processes. Electronic Communications in Probability, 12:259-266, 2007.
[55] B. Jourdain. Loss of martingality in asset price models with lognormal stochastic volatility. Preprint Cermics 267, 2004.
[56] R. W. Lee. Implied and local volatilities under stochastic volatility. International Journal of Theoretical and Applied Finance, 4(1):45-89, 2001.
[57] R. W. Lee. Implied volatility: Statics, dynamics, and probabilistic interpretation. In Recent Advances in Applied Probability, pages 241-268, 2005.
[58] R. McCrickerd and M. S. Pakkanen. Turbocharging Monte Carlo pricing for the rough Bergomi model. Quantitative Finance, 18(11):1877-1886, 2018.
[59] D. Nualart. The Malliavin Calculus and Related Topics. Springer, 2006.
[60] P. Pigato. Extreme at-the-money skew in a local volatility model. Finance and Stochastics, 23:827-859, 2019.
[61] C. A. Sin. Complications with stochastic volatility models. Advances in Applied Probability, 30(1):256-268, 1998.
[62] S. Takanobu. Asymptotic expansion formulas of the Schilder type for a class of conditional Wiener functional integrations. In Asymptotic Problems in Probability Theory: Wiener Functionals and Asymptotics, Proceedings of the Taniguchi International Symposium, Sanda and Kyoto, 1990, pages 194-241. Longman Sci. Tech., 1993.
[63] A. B. Tsybakov. Introduction to Nonparametric Estimation. Springer Science & Business Media, 2008.
Smart Contract Templates: essential requirements and design options

Christopher D. Clack
Department of Computer Science, Centre for Blockchain Technologies, University College London

Vikram A. Bakshi
Investment Bank CTO Office, Barclays

Lee Braine
Investment Bank CTO Office, Barclays

December 15, 2016 (arXiv:1612.04496, https://arxiv.org/pdf/1612.04496v2.pdf)

Abstract. Smart Contract Templates support legally-enforceable smart contracts, using operational parameters to connect legal agreements to standardised code. In this paper, we explore the design landscape of potential formats for storage and transmission of smart legal agreements. We identify essential requirements and describe a number of key design options, from which we envisage future development of standardised formats for defining and manipulating smart legal agreements. This provides a preliminary step towards supporting industry adoption of legally-enforceable smart contracts.
1 Introduction
The aim of Smart Contract Templates [2,3] is to support the management of the complete lifecycle of smart legal agreements. This includes the creation of legal document templates by standards bodies and the subsequent use of those templates in the negotiation and agreement of contracts by counterparties. They also facilitate automated execution of the contract via smart contract code [24], and provide a direct link within the instantiated smart contract as an identifier for reference and recovery of the signed legal agreement. The smart legal contracts could potentially be executed on distributed ledgers (such as Axoni Core [1], Corda [7], Digital Asset Platform [8], Ethereum [11], Hyperledger Fabric [16], etc.).
In a previous paper [3], we discussed the foundations, design landscape and research directions for Smart Contract Templates. We begin this paper by stating what we believe are the essential requirements for smart legal agreements. We then provide an abstract "core" specification and proceed to explore the design landscape for the storage and transmission of smart legal agreements. Our aim is to support the financial services industry (including trade associations such as the International Swaps and Derivatives Association (ISDA) and FIA) in: (i) exploring how legal prose can be connected with parameters and code, and (ii) reviewing existing data standards to take account of the features of smart legal agreements.
We do not aim to address topics relating to the execution of smart contract code, the semantics of legal prose, or languages for expressing business logic.
In a similar manner to our previous paper [3], we aim to discuss these topics using reasonably straightforward language, so that it is accessible not only to financial institutions but also to lawyers, standards bodies, regulators, and policy makers. We hope that the issues and views raised in this paper will stimulate debate and we look forward to receiving feedback.
2 Essential requirements
Smart Contract Templates are based on the framework of Grigg's Ricardian Contract triple of "prose, parameters and code" [13,14]. In this framework, key operational parameters (hereafter called "execution parameters") are extracted from the legal prose and passed to the smart contract code that provides automated execution.
The parameters are a succinct way to direct the code; additionally, one of those parameters may be an identifier for the reference and recovery of the smart legal agreement. The aim is to provide a legally-enforceable foundation for smart contracts (explained in more detail in [3]).
The above description leads to the following essential requirements for smart legal agreements:
1. Methods to create and edit smart legal agreements, including legal prose and parameters.
2. Standard formats for storage, retrieval and transmission of smart legal agreements.
3. Protocols for legally executing smart legal agreements (with or without signatures).
4. Methods to bind a smart legal agreement and its corresponding smart contract code to create a legally-enforceable smart contract.
5. Methods to make smart legal agreements available in forms acceptable according to laws and regulations in the appropriate jurisdiction.
The above essential requirements include four key items that merit further discussion:
1. Editing. Lawyers are likely to favour a graphical What-You-See-Is-What-You-Get (WYSIWYG 1 ) editor. This may be an existing ubiquitous editor (such as Microsoft Word), an editor enhanced with add-ins (such as Thomson Reuters Contract Express Author [25] or HotDocs [15]), or a custom editor (such as Smart Communications SmartDX [23] or ClauseMatch [4]). Alternatives include text editors (which may or may not include syntax highlighting) and graphical What-You-See-Is-What-You-Mean (WYSIWYM 2 ) editors. The editor must support contract metadata, including parameters.
2. Transmission. To facilitate the transmission of smart legal agreements between multiple counterparties (e.g. during negotiation) and between a range of different applications (e.g. agreement editors and analytical tools), there should be agreed standard formats for transmission.
There are many possible "concrete" formats that could be used to transmit smart legal agreements. These include formats based on Extensible Markup Language (XML) [26] (such as Office Open XML Document [18] and Open Document Format for Office Applications (ODF) [20]), JSON 3 [9], markdown 4, etc. If a standard format for transmission is not utilised, then it would be necessary, for example, to translate between formats during import and export, and semantic consistency may not be assured.
It may be necessary to have different concrete implementations for different product categories, for example derivatives versus syndicated loans. We propose there would be benefit in a formal "abstract" specification for a serialised format. Such a specification could assist in selecting, extending, or designing standard concrete implementations that, although potentially different in detail, will nonetheless capture the same necessary features of a smart legal agreement. Different standard concrete formats might for example differ in choices such as character encoding and hashing format. Standard concrete formats could also facilitate, for example, automatic analyses across a wide range of smart legal agreements.
3. Ontologies. 5 Standard formats include not only standard syntax but also the use of standard ontologies. Existing standards such as the Enterprise Data Management Council's Financial Industry Business Ontology (FIBO) [10] could be leveraged to assist semantic analysis of legal prose. 6 As noted in [19], FIBO can be utilised to perform semantic reasoning and aid the development of querying applications.
The FIBO specifications define, among other things, legal and business entities, instruments, products, services, interest rates, currency exchange rates, economic indicators and market indices. Textual markup (discussed later) could be extended to support semantic analysis and reasoning using OWL, but the details of such extensions are beyond the scope of this paper.
4. Binding smart legal agreement and code. This comprises two aspects:
(a) passing execution parameters to the smart contract code to direct its operation;
(b) providing a succinct way to identify the legal agreement uniquely at an operational level to support finding the legal agreement if needed for review or dispute resolution.
A candidate solution for the requirements of an operational-level unique agreement identifier is a cryptographic hash of the smart legal agreement that is passed as a parameter to the smart contract code and stored in the instantiated smart contract; this is the technique used in Ricardian Contracts. Note there are other similar solutions, such as Monax Industries' "dual integration" [17] which additionally provides a reverse link by adding a unique identifier for the instantiated smart contract to the final smart legal agreement.
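As an illustration of this binding pattern, the following Python sketch hashes a (toy) smart legal agreement, passes the digest alongside the execution parameters, and records a reverse link in the style of dual integration. All field names and the stand-in contract identifier are hypothetical; a real deployment would use the instantiated smart contract's ledger address.

import hashlib, json

agreement = {
    "prose": "The Buyer shall pay the Seller USD 1,000,000 on 2017-03-01.",
    "parameters": {"notional": "1000000", "currency": "USD", "payment-date": "2017-03-01"},
}

# Operational-level identifier: a hash of the (serialised) agreement.
digest = hashlib.sha256(json.dumps(agreement, sort_keys=True).encode()).hexdigest()

# Aspect (a): execution parameters handed to the smart contract code,
# including the agreement hash as one of the parameters.
execution_parameters = dict(agreement["parameters"], agreement_hash=digest)

# Aspect (b), plus the dual-integration reverse link: a stand-in contract
# identifier written back into the agreement record.
contract_id = "0x" + digest[:16]
agreement_record = dict(agreement, contract_id=contract_id)

print(execution_parameters["agreement_hash"])
print(agreement_record["contract_id"])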
3 Abstract core specification
The above essential requirements support the two legs of our previous definition of a smart contract, i.e. that it is both automatable and enforceable, where enforcement occurs via legal enforcement 7 of rights and obligations [3].
5 See https://en.wikipedia.org/wiki/Ontology_(information_science)
6 FIBO provides a vocabulary of terms using two forms of definition [19] for each concept: (i) a structured ontology specification of the concept, and its relationships to others, represented using the Web Ontology Language (OWL), and (ii) a natural language definition which represents the concept using the vocabulary of the finance industry.
7 Further discussion on the legal enforceability of smart contracts can, for example, be found in [22].

In this section, we start to explore the design landscape of a potential serialised format for storage, retrieval and transmission of smart legal agreements. We present an abstract specification that defines the logical structure of smart legal agreements, and divide our presentation into two parts:
1. a small core specification (in this section), which is sufficiently general that it can serve as the basis for a wide range of possible specifications, and 2. a longer discussion (in the next section) of possible design options, with illustrative example specifications which are not intended to be prescriptive -allowing the final choices on these matters to be made later (e.g. by standards bodies).
3.1 Notation
We use the BNF-like 8 notation summarised in Figure 1 below, with the exceptional semantics that the elements are unordered. For example, "a ::= b c" defines "a" as "b c" or "c b", and "x ::= y* z*" defines "x" as any combination of zero or more of the elements "y" and "z".
::=   Is defined as
|     Or
*     Zero or more occurrences
+     One or more occurrences

Figure 1: Summary of the specification notation.
3.2 Representation of smart contracts
Inspired by the Ricardian Contract triple of "prose, parameters, and code" [13, 14], we define a core abstract specification that represents: (i) interim drafts of a contract (including the empty starting state), (ii) the final version of a contract, and (iii) a smart contract comprising multiple smart legal agreements and/or smart contract code implementations. Many detailed specifications can be derived from the following core specification 9:

smart-contract ::= smart-legal-agreement* smart-contract-code*    (D1)
smart-legal-agreement ::= legal-prose* parameters* agreement-header*    (D2)
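To make the core specification concrete, the following Python sketch renders (D1) and (D2) as data types. The field and type choices are illustrative assumptions; the specification deliberately leaves them open.

# One possible rendering of the core specification (D1)-(D2) as data types.
# Field names mirror the abstract elements; the concrete types are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Parameter:
    name: str
    type: str
    value: str

@dataclass
class SmartLegalAgreement:          # (D2): legal-prose* parameters* agreement-header*
    legal_prose: List[str] = field(default_factory=list)
    parameters: List[Parameter] = field(default_factory=list)
    agreement_header: List[dict] = field(default_factory=list)

@dataclass
class SmartContract:                # (D1): smart-legal-agreement* smart-contract-code*
    smart_legal_agreements: List[SmartLegalAgreement] = field(default_factory=list)
    smart_contract_code: List[str] = field(default_factory=list)

draft = SmartContract()             # the empty starting state is a valid instance

Because every element is optional and repeatable, the empty SmartContract() is itself a valid instance, matching the requirement that the specification represent interim drafts from the empty starting state onwards.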
4 Options in the design landscape
Beyond the "core" abstract specification given above, which we believe to be generic, any further abstract specification opens up a landscape of possible design options. The predominant activity in this landscape is the identification and recording of metadata in a way that best fits the requirements of smart contracts. We have previously stated that the specific choices on these matters should be made later; however, we will attempt to describe many of the design options that arise, and to provide illustrative examples of how particular choices might be captured in a specification.
The illustrative examples given below do not constitute preferences or suggestions; each is provided merely to clarify one or more aspects of the discussion.
4.1 Markup
In our abstract specification, we wish to encode the logical structure of smart legal agreements. Without intending to be prescriptive in this matter, we focus on textual representation. Furthermore, in this paper we focus on the use of static textual markup rather than data transformations and procedures that are also important for workflow processes.
Markup can be either attached to text or associated with a position in text (for example an "anchor" used for cross-referencing). There are many different forms of textual markup, for example:
• Descriptive markup can, among other things, refer to the structure of the text (such as "heading" or "section" or a position in the text) and/or the meaning of the text (such as "parameter" or "indemnity clause").
• Presentational markup refers to how the marked-up text should be rendered (such as "bold" or "italic"). This could be implemented as: (i) inline presentational markup attached to text within legal prose, or (ii) style sheets that map descriptive markup to presentational markup. Formatting is a key aspect of legal documents, and is therefore an important part of an abstract specification.
We can specify the above example of two forms of markup as follows:
markup ::= presentational-markup | descriptive-markup
In the following sections, we explore the design options for inline textual markup of smart legal agreements.
4.2 Design options for prose
4.2.1 Text
There are different ways to view legal prose. For example, it may be viewed as a linear sequence of pieces of text, each of which is either marked-up or is not, and where there may be markup occurring between pieces of text. Another way is to view legal prose as a hierarchical structure of elements, such as one or more parts (recitals, definitions, schedules, etc.), each of which contains one or more paragraphs, themselves containing sentences that contain words, and so on. In the latter example, markup could be applied to each of the hierarchical elements and there may be markup immediately preceding or immediately following an element. Many different abstract specifications of legal-prose are possible, for example:

legal-prose ::= text* markup* text-with-markup*

This motivates the choice of a specification for text-with-markup. We give one example below, to illustrate how this could be achieved, but there are many other ways:
text-with-markup ::= markup+ text
4.2.2 Lists and tables
Legal text often uses various layout devices such as lists and tables. Lists have both a presentational aspect (e.g. they may be numbered, bulleted, or dashed) and a logical aspect (e.g. it is possible to refer to list items by position when a cross-reference is made from elsewhere in the text). Tables may also be referred to by number, and may have additional caption text. In order to detect certain kinds of error syntactically, lists and tables may require special markup rules. For example, if a list item were to appear outside a list this would normally be detected as a syntactic error. Although other kinds of markup such as bold and italic can be nested, normally there would be no rules that permit one to be nested inside the other but not vice-versa. If a design requirement is to detect list and table errors syntactically then, in the abstract specification, lists will be treated in a different way to "simple" markup (and for the same reason tables may also be treated differently). There are many ways to achieve this in a specification: one way might for example require text-with-markup to be given a more complex definition, with a layered structure to capture the legal nesting of list items inside a list. An elegant method might depend on the notation being used (e.g. whether the notation permits a recursive definition).
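One way to realise such a layered structure is to give list items a type that can only occur inside a list, so that a misplaced item is rejected syntactically. The following Python sketch is illustrative only; the type and field names are invented.

# A sketch of how a layered (recursive) definition makes a misplaced list item
# a type error rather than a semantic check.
from dataclasses import dataclass
from typing import List, Union

@dataclass
class Text:
    content: str

@dataclass
class ListItem:
    body: List[Union["Text", "ListBlock"]]    # an item may contain nested lists

@dataclass
class ListBlock:
    style: str                                # e.g. "numbered", "bulleted", "dashed"
    items: List[ListItem]                     # ListItem occurs only inside a ListBlock

Block = Union[Text, ListBlock]                # a bare ListItem is not a valid Block

doc: List[Block] = [
    Text("Recitals"),
    ListBlock("numbered", [ListItem([Text("First obligation")])]),
]
print(doc[1].items[0].body[0].content)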
4.2.3 Cross-references
Cross-references are a common feature in legal text, where a reference to a target is embedded inside a source piece of text. The target may be either a referenceable piece of text (such as an item in a list, a table number, or some text with special markup) or a referenceable position in the text, where "referenceable" means in each case that it must have some kind of identifier. The former are sometimes known as "segments" and the latter as "anchors". 10 When editing or viewing a large document, it is useful to be able to jump from the source of a cross-reference to its target and then to jump back again (see also Section 5.4). It is also very useful to know whether a piece of text is the target of one or more cross-references (especially if the target is to be edited). For these reasons, there might be a design requirement for cross-references to be bidirectional. 11 There are many ways to specify bidirectional cross-references. Two examples are:
1. as an inline markup applied to both the source and the target of the cross-reference, with full information about the source and the target being attached to both the source and the target;
2. as a small inline markup applied to both the source text and the target text, each holding a unique identifier for the cross-reference; with a list or table of all cross-references and full details of their sources and targets being held in the agreement-header.
Both of the examples given above would require specific descriptive markup to be used both for the source and for the two different types of target.
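As a sketch of the second option, the following Python fragment keeps only lightweight identifiers in the prose and holds the full cross-reference table in the agreement header, supporting navigation in both directions. All identifiers are invented for illustration.

# Design option 2: lightweight markers in the prose, with a cross-reference
# table held in the agreement header.
from dataclasses import dataclass
from typing import Dict

@dataclass
class CrossReference:
    ref_id: str
    source_segment: str     # id of the marked-up source text
    target_anchor: str      # id of a segment or positional anchor

header_xrefs: Dict[str, CrossReference] = {
    "xref-1": CrossReference("xref-1", "clause-7.2", "definitions-notional"),
}

def targets_of(segment_id: str):
    """All targets referenced from a given segment (forward navigation)."""
    return [x.target_anchor for x in header_xrefs.values() if x.source_segment == segment_id]

def sources_pointing_at(anchor_id: str):
    """All sources that reference an anchor (needed before editing its text)."""
    return [x.source_segment for x in header_xrefs.values() if x.target_anchor == anchor_id]

print(targets_of("clause-7.2"), sources_pointing_at("definitions-notional"))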
4.2.4 Redacted text
Redacted text, for example proprietary or privileged text, should not be printed out or transmitted to a third party -such text is typically formatted in a blacked-out or whited-out fashion in a redacted copy. This may be specified in a variety of ways, for example there may be specific markup to indicate that text is either "To Be Redacted" (and perhaps also "Has Been Redacted").
4.2.5 Optional clauses
During the drafting of legal text, the author may wish to search for standard clauses to insert, or there may be a requirement to provide the author with a template that includes embedded optional clauses. The former requirement falls mostly outside the scope of specifying a standard format for storage or transmission of a smart legal agreement (since a collection of standard clauses need not be a "smart legal agreement"). The latter requirement would require an extension to the specification, to include a "choice" element that would contain several pieces of text from which the author should choose. The design options are similar to those discussed for lists and tables; the requirement could be achieved in many ways, one of those being to treat the various choices as being similar to a list (with special syntactic rules like a list) but with a different variant of descriptive markup.
4.3 Design choices for parameters
An abstract specification for smart legal agreements must cater for many different scenarios, especially with regard to parameters. These parameters might initially only be embedded in the legal-prose element of the smart legal agreement (Definition D2) -for example, if only the legal documentation is available at the start. Alternatively, the parameters might initially only be included in the parameters element -for example, if a confirmation document is available, but the associated legal documentation has not yet been included. Later in the lifecycle of the smart legal agreement, several design options are available including:
• that all parameters should be identified within the legal-prose using inline markup and that the parameters element should not be used;
• that all parameters should be held in the parameters element and that wherever the prose contains text describing parameters they should not be marked up as parameters;
• that all parameters should be identified within the prose using inline markup and information about each parameter should also be kept in the parameters element (e.g. for operational convenience).
Parameters that are embedded in the prose may not initially be identified and so it is important that we are able to identify and retrieve parameters from the legal prose. The key data that we need for each parameter is its value. Parameters are also typically referenced by name, which should also be recorded.
4.3.1 Parameter data types
A concrete implementation could choose between (i) a mono-typed system for parameters where every parameter has the same type (e.g. text), and (ii) a typed system where the identification of a parameter entails identification of its name, value and type (where the available types would be determined by the concrete implementation). If a smart legal agreement is transmitted to a counterparty, it may be necessary to include in the serialised format a list of all non-standard type definitions. This could be held in the agreement-header element.
Complex parameter types such as arrays, lists and expressions could also be supported. One design decision could be to utilise a compositional description language to create business logic expressions that could be identified as parameters.
4.3.2 Identification of parameters
There are several different possible scenarios for the development of smart legal agreements. For example, as discussed above there is a design choice as to whether parameters are or are not held in the parameters element. It is possible that an execution parameter may be held in the parameters element without appearing in the prose (there may not be a legal-prose element). It is also possible that the execution parameters and their values might appear in the prose and not be held in the parameters element. Finally, it is possible that execution parameters might appear in both the legal-prose and the parameters elements.
If parameters appear in the prose, it is essential that there be a way to identify those parameters. They must also be retrievable so that they can be passed to the code when it executes. A design choice exists in how to identify the parameters. This could be achieved in many ways but we give two examples below:
1. One design decision might be to attach markup to each piece of text in legal-prose that provides parameter information (the markup would for example capture the name, type and value of the parameter in each case), yet leave the parameters element empty. When it is time to execute the code the prose can be searched to find the names, types and values of all execution parameters and these can be communicated to the code.
2. Another design decision might be to attach markup to each piece of text that provides parameter information, but store the name, type and value information for each parameter in the parameters element. The markup in the prose could, if required, store an identifier referencing the parameter data in parameters. When it is time to execute the code, all execution parameters in the parameters element can be communicated to the code without needing to search the prose. The parameters element may be useful when using standardised methods that require parameters to be presented separately from the prose, in which case a further design decision might be to make the parameters isomorphic to a standard format (e.g. by using FpML [12] as a concrete implementation).
The identification of parameters in the prose, when deemed necessary, could be specified as descriptive markup. There are also many ways that parameter data could be held in the parameters element. An example specification is shown below:
parameters ::= parameter*
parameter  ::= parameter-name parameter-type parameter-value
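The sketch below, again purely illustrative, implements both halves of this picture in Python: parameters are identified in the prose via an invented inline markup syntax and extracted by search (design option 1), then collected into a structure mirroring the parameters grammar above (design option 2). The {{name|type|value}} syntax is hypothetical; a concrete implementation would more likely use descriptive XML markup.

import re

# Hypothetical inline markup embedded in the prose: {{name|type|value}}.
PROSE = (
    "The Buyer shall pay {{NotionalAmount|decimal|1000000.00}} "
    "{{Currency|text|USD}} on {{PaymentDate|date|2017-06-30}}."
)

PARAM_RE = re.compile(r"\{\{(?P<name>[^|]+)\|(?P<type>[^|]+)\|(?P<value>[^}]+)\}\}")

def extract_parameters(prose: str) -> list:
    """Search the prose and return every execution parameter found, each as
    a dict with 'name', 'type' and 'value' keys -- i.e. the three
    components of 'parameter' in the specification above."""
    return [m.groupdict() for m in PARAM_RE.finditer(prose)]

# 'parameters' below corresponds to the parameters element: parameter*
parameters = extract_parameters(PROSE)
assert parameters[1] == {"name": "Currency", "type": "text", "value": "USD"}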
Other kinds of parameter
Legal prose may contain important data that is used for purposes other than execution of the smart contract code. For example, data may be used for compliance reporting or may be part of a definition that is important in a legal sense but is not needed by the smart contract code. One design choice would be to let such data be identified each time it is used, via a search of the legal prose. However, it might be deemed advantageous to identify this data and keep a separate record of it for ease of reference (for example, by analysis and reporting systems). This record could potentially be held in the agreement-header element.
Cryptographic hashing
A cryptographic hash 12 is the output of a one-way mapping from data of arbitrary size to a bit string of fixed (and typically small) size. It is "one-way" because it is not feasible to obtain the original data from the hash. The same data always gives the same hash, and a small change in the data can lead to a large change in the hash. Furthermore, in general it is not feasible to find two pieces of data with the same hash.
Hashes could be used in various ways. For example: (i) as a unique identifier for a smart legal agreement -as the value of an execution parameter passed to the smart contract code and/or as an index into a repository, (ii) as a method to detect modification of a smart legal agreement after it has been signed, or (iii) as a method to detect modification of a preauthorised piece of text (e.g. a legal clause) that has been used inside a smart legal agreement. These techniques can also be used to evidence data tampering.
The Ricardian Contract uses a cryptographic hash of the entire document as an identifier. In general terms, using a cryptographic hash requires a canonical form of the document to avoid generating "false positive" modification alerts from semantically equivalent forms (such as alternative nesting of markup, e.g. Bold Italic versus Italic Bold). 13 Whether to use hashes for these or other purposes is a design option, as is the decision of where to store the hash.
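As a minimal sketch of these ideas, the Python fragment below hashes a canonical serialisation of an agreement with SHA-256. Sorting keys stands in for a proper canonicalisation step (for XML markup this would be XML C14N), and the field names are illustrative.

import hashlib
import json

def agreement_hash(agreement: dict) -> str:
    """SHA-256 over a canonical serialisation of the agreement. Sorted keys
    and fixed separators ensure that semantically equivalent dictionaries
    hash identically, avoiding "false positive" modification alerts."""
    canonical = json.dumps(agreement, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

signed = {"legal-prose": "The Buyer shall pay ...", "parameters": []}
# Any later modification of the signed text is detectable by comparing hashes:
tampered = dict(signed, **{"legal-prose": "The Seller shall pay ..."})
assert agreement_hash(signed) != agreement_hash(tampered)

The same hash could also serve as the globally-unique identifier discussed below.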
Structure, header and linking to code
There are several design options that relate to the overall structure of a smart legal agreement, to its metadata, and to the linking of prose to smart contract code. We discuss these below.
Separating parts of the agreement
Large agreements may benefit from separation into logically separate parts (e.g. definitions, schedules and annexes). This can be achieved in many ways:
• through use of markup in the prose to identify the start and end of each new part;
• by defining legal-prose in a hierarchical way as discussed in Section 4.2.1;
• by representing a smart-legal-agreement as having multiple legal-prose elements (possibly with multiple agreement-header elements);
• by representing a smart-contract as having multiple smart-legal-agreement elements.
12 See https://en.wikipedia.org/wiki/Cryptographic_hash_function
13 See for example the section on "XML canonicalisation" at https://en.wikipedia.org/wiki/XML_Signature
Document header
We have previously identified several design options where it might be advantageous to hold information in the agreement-header. A wide range of information could be held; generally, this would be information that either does not exist in the prose, or for which it is administratively easier if a copy of that information is also held in a header. Examples might include:
• A list or table of all cross-references and full details of their sources and targets.
• A list of all non-standard type definitions.
• Various dates (dates of signing, execution date, effective date, and so on).
• Digital signatures.
• A cryptographic hash of the smart legal agreement (see Section 4.4).
• Various identifiers for the smart legal agreement, such as a local filing identifier. A cryptographic hash could also be used as a globally-unique identifier (and potentially also usable for local filing if desired). The issue of identification is wide-ranging -for example, an agreement may have a globally-unique identifier and an individual trade may have a mandated trade identifier.
• A style sheet for presentational formatting.
• An edit history and version control data -this is discussed further in Section 5.4.2.
Although metadata is essential throughout the lifecycle of a smart contract, it may be necessary to remove certain metadata in a final version of the contract.
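The fragment below gathers the examples above into a single illustrative agreement-header. Every field name is hypothetical, since the element's contents are deliberately left open as design options.

agreement_header = {
    "cross-references": [
        {"id": "xref-1", "source": "clause-4.2", "target": "schedule-A"},
    ],
    "type-definitions": ["rate-basis"],        # non-standard parameter types
    "dates": {"signing": "2017-01-15", "effective": "2017-02-01"},
    "signatures": [],                          # digital signatures
    "hash": None,                              # see Section 4.4
    "identifiers": {"local-filing": "AGR-0042"},
    "style-sheet": "house-style-v3",
    "edit-history": [],                        # see Section 5.4.2
}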
Binding with code
There are several design choices to be considered regarding binding the legal prose with the smart contract code (a code sketch follows the list below):
• There may be several different instances of standardised code that could run the smart contract (for example, corresponding to different code versions or different possible execution platforms). Note that the specification of smart-contract (in Definition D1) permits multiple instances of smart-contract-code.
• When smart contract code is instantiated onto an execution platform, there should be a mechanism for passing the execution parameters to that code. In addition, there should be a method for passing a unique identifier for the smart legal agreement to that code and the execution platform should embody a method to make that identifier available.
• After the smart contract code has been instantiated on an execution platform, there may be a requirement to store within the smart legal agreement a unique identifier of that executing instance of the code (e.g. see "dual integration" described at the end of Section 2). This unique identifier could, for example, be stored in the agreement-header, parameters, or legal-prose. According to the governance procedure, this may constitute a change to the agreement that might require a further level of authorisation before proceeding with execution of the smart contract code.
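The sketch below models the second and third bullets: a hypothetical execution platform receives the execution parameters and the agreement identifier at instantiation, and returns an instance identifier that could then be stored back in the agreement. Nothing here corresponds to a real platform API.

class ExecutionPlatform:
    """A stand-in for a real execution platform; 'deploy' is hypothetical."""

    def __init__(self):
        self._instances = {}

    def deploy(self, code_ref: str, params: dict, agreement_id: str) -> str:
        instance_id = f"instance-{len(self._instances) + 1}"
        # The platform must make agreement_id available to the running code.
        self._instances[instance_id] = {
            "code": code_ref, "params": params, "agreement": agreement_id,
        }
        return instance_id

platform = ExecutionPlatform()
instance_id = platform.deploy(
    code_ref="irs-confirmation-v2",      # one of possibly several code versions
    params={"NotionalAmount": "1000000.00", "Currency": "USD"},
    agreement_id="sha256-of-agreement",  # e.g. the hash of Section 4.4
)
# Storing instance_id back into the agreement may itself constitute a change
# requiring further authorisation, as noted above.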
Design options for multi-document agreements
In this section we consider further design options that relate specifically to agreements that comprise multiple documents. We also consider what options may arise where there is a hierarchical relationship between, for example, standard templates, local templates, agreements and trades. Finally, we consider workflow topics such as an edit history and versioning.
Document groups
There are many ways in which smart contracts comprising multiple documents could be specified. The simplest specification is for each document to be a separate smart-legal-agreement. An alternative would be for each document to be a separate legal-prose within a smart-legal-agreement potentially containing shared parameters and/or agreement-header.
Document types and status
Where an agreement comprises multiple documents, typically each document has a well-defined role or "type". For example, with agreements for financial derivatives there may be a Master Agreement, Schedule, Credit Support Annex, and so on. The document type could, for example, be held in an appropriate agreement-header element. Furthermore, it might be desired to keep track of the status of each document (for example, whether parameters have been identified, whether the legal prose has been agreed with counterparties, and so on). There may be many ways to do this; one design option would be to store a "document status" inside an appropriate agreement-header element.
Document hierarchies
In [3] we proposed that the lifecycle of a smart legal agreement would start with a Smart Contract Template. Organisations may also wish to develop local versions of the templates produced by standards bodies. Furthermore, there might be a tree hierarchy of local versions developed for different purposes. This hierarchy is then conceptually expanded downwards as agreements are derived from templates, and as trades are derived from agreements. 14 As a design option, the notion of a document "type" could be used to record for example whether a document is an industry-standard template, a local template, an agreement, and so on. However, as a further design option there may be a desire to record at what level a given document exists in the document hierarchy. This might be specified in many ways; one example would be to extend the agreement-header element with (i) a set of identifiers for parent documents (one level higher in the hierarchy, from which this document has been derived), and (ii) a set of identifiers for child documents (one level lower in the hierarchy). Industry-standard templates could exist at the top level of this hierarchy and have no parents.
Inter-document cross-references
Cross-references from a piece of text in one document to a different target document are common in legal prose; this may be a reference to a piece of text, to a location within the text, to the target document itself, or even to an entire agreement. If there is a requirement to support these cross-references so they can be quickly navigated from source to target and back again, then these inter-document cross-references should also be bidirectional.
14 Additionally, there might be a design requirement for a hierarchical precedence of documents.
We have previously discussed the design options for cross-references in Section 4.2.3. Additional design options could include cross-references between different documents (which may or may not be in the same smart legal agreement) and different kinds of document identifier (including local and globally-unique identifiers). Furthermore, where a document is the target of many inter-document cross-references, it could be required that any editing of the target document that would cause target pieces of text to move should not cause any changes in documents that contain the sources of those cross-references. This might be a particularly important design option if the source has been previously negotiated, agreed and hashed. 15
Incremental parameter definitions
Incremental parameter definition refers to the common practice of declaring a name in one document and then giving that name a value ("binding" the name to a value) in a different document. Furthermore the name, and by extension its value, may be referenced by a third document. 16 For example, a parameter name might be declared in a Master Agreement, given a value in a Schedule, and then used in a Confirmation.
As a design option, this feature of incremental definition of parameters could be applicable within all types of document including for example templates as issued by standards bodies, locally-modified templates, agreements, and so on. Any document could then be considered to be "parameterised", to the extent that it uses names whose values are to be provided in another document and/or at a later time.
Incremental parameter definitions could be specified in different ways. One example would be to start by permitting parameter-value to have a special value such as unbound (meaning that although the name has been declared there is as yet no value). 17 A second special value such as binding-location might be set at a later time to indicate that a value has been provided in another document (perhaps with an inter-document cross-reference to that value).
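A minimal sketch of this scheme, with invented document and parameter names: the Master Agreement declares a name with the special value unbound, the Schedule binds it, and resolution walks the documents in order.

UNBOUND = "unbound"  # special value: name declared, no value yet

master   = {"FloatingRateOption": UNBOUND}          # declared in the Master Agreement
schedule = {"FloatingRateOption": "USD-LIBOR-BBA"}  # bound in the Schedule

def resolve(name: str, documents: list) -> str:
    """Return the last non-unbound value given to 'name'. A concrete
    implementation might instead follow an explicit binding-location
    cross-reference rather than searching every document."""
    value = UNBOUND
    for doc in documents:
        if doc.get(name, UNBOUND) != UNBOUND:
            value = doc[name]
    return value

# Used in a Confirmation:
assert resolve("FloatingRateOption", [master, schedule]) == "USD-LIBOR-BBA"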
Version control and edit history
Version control ranges from the simple recording of a version number and timestamp for each document (and for the agreement as a whole), to the complex tracking of versions for multiple documents with multiple branches (for example different branches might be created for locally-stored and transmitted versions, so that sensitive metadata is not transmitted). By contrast, an edit history maintains a complete log of all changes to a document (e.g. for audit purposes). For smart legal agreements the key design options include (i) recording the current version number and timestamp, (ii) keeping a complete log of changes -possibly including rejected amendments, approvals, and counterparty communication, and (iii) designing a branching and merging strategy for versions, so that exported (transmitted) versions can be re-imported.
The version number and date for documents can be stored in an appropriate agreement-header element, as can the document edit history.
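As a small illustration of options (i) and (ii), an edit-history entry might minimally carry the following fields (all names illustrative):

import datetime

edit_entry = {
    "version": "1.3",
    "timestamp": datetime.datetime(2017, 3, 1, 10, 30).isoformat(),
    "author": "party-A",
    "change": "amended clause 4.2",
    "status": "accepted",  # rejected amendments could be logged the same way
}
agreement_header = {"version": "1.3", "edit-history": [edit_entry]}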
Summary and further work
Summary
This paper began by presenting what we believe are the essential requirements for smart legal agreements, covering creation and editing, standard formats, legal execution protocols, binding to smart contract code, and making them available in acceptable forms. We then provided an abstract core specification for a smart contract and a smart legal agreement.
We then explored the design landscape for a serialised format for storage and transmission of smart legal agreements, including markup for metadata, design options for prose (lists, tables, cross-references, redacted text and optional clauses), design choices for parameters (data types and identification), cryptographic hashing, and multi-document agreements (groups, types, hierarchies, and incremental parameter definitions).
Our aim is to support the financial services industry in exploring how legal prose can be connected with parameters and code, and trade associations when reviewing existing data standards to take account of the features of smart legal agreements. This work therefore provides a preliminary step towards supporting industry adoption of legally-enforceable smart contracts.
Further work
There are many design options that we have not yet investigated, for example:
• permissioning and access control, ranging from the entire document to specific document portions such as clauses and sentences;
• roles and responsibilities of those able to operate on an agreement (e.g. designated signatories);
• the ability to specify that if a given clause is included, or modified, then the agreement must be referred for specific authorisation;
• a more detailed exploration of whether/how existing standards (such as FpML) might be extended beyond key-value pairs towards support for the negotiation of smart legal agreements including legal prose;
• financial transactions that are cleared via a central counterparty;
• the treatment of discretionary rights within an agreement.
It is also important to consider the workflow and system requirements for Smart Contract Templates. Any concrete implementation of a "standard" format will be used for transmission and storage between and within systems that will support substantial workflow (including negotiation between the parties to each agreement). There are some foundational architecture topics that arise for any such system, including many previously raised in [21] such as issues surrounding template and agreement governance, repositories, and jurisdiction. There remain many questions and design decisions to be explored. This will require substantial work and collaboration by the financial services industry, standards bodies, academia, 18 and lawyers.
18 For example, further development of the CLACK language [3] will be pursued at University College London.
Figure 1: Notational conventions used in this paper.
8 See https://en.wikipedia.org/wiki/Backus-Naur_form
9 For example, if only an electronic confirmation document is available then we might have no legal-prose element, one parameters element and no agreement-header element.
10 See http://www.tei-c.org/Vault/P4/Lite/U5-ptrs.html
11 There are other useful requirements that might additionally be applied to cross-references and should be addressed as design options. For example: each cross-reference should be unique within the agreement, each should have a single source and a single target, jumping between source and target (or vice versa) should be fast, a section of text may be the source for many references and may be the target for many references, sources and targets may be nested but may not be overlapped, etc.
15 An example specification to achieve this might maintain outgoing and incoming indirection tables in the agreement-metadata element for each document - the source would then refer to a fixed entry in the metadata of the target regardless of any movement of the target within its document.
16 The topic of parameter scope merits further discussion, e.g. the visibility of parameter names (and their values) outside a smart legal agreement.
17 This might be useful when defining templates, so that a value is initially unbound but is then changed to be a specific value when for example the template is used to create an agreement.
Acknowledgements: We would like to thank Clive Ansell (ISDA), Ian Grigg (R3), Darren Jones (Barclays) and Simon Puleston Jones (FIA) for their helpful feedback.
References
[1] Axoni. Axoni Core, 2016. https://axoni.com/.
[2] L. Braine. Barclays' Smart Contract Templates, 2016. Barclays London Accelerator, https://vimeo.com/168844103/ and http://www.ibtimes.co.uk/barclays-smart-contract-templates-heralds-first-ever-public-demo-r3s-corda-platform-1555329/.
[3] C. D. Clack, V. A. Bakshi, and L. Braine. Smart Contract Templates: foundations, design landscape and research directions. The Computing Research Repository (CoRR), abs/1608.00771, 2016. Also at arXiv.org: http://arxiv.org/abs/1608.00771/.
[4] ClauseMatch, 2016. http://www.clausematch.com/.
[5] Common Form, 2016. https://commonform.org/.
[6] CommonAccord, 2016. http://www.commonaccord.org/.
[7] Corda, 2016. https://www.corda.net/.
[8] Digital Asset. The Digital Asset Platform - Non-technical White Paper, 2016. Available at https://digitalasset.com/press/digital-asset-releases-non-technical-white-paper.html.
[9] ECMA International. The JSON Data Interchange Format, 2016. http://www.ecma-international.org/publications/files/ECMA-ST/ECMA-404.pdf.
[10] Enterprise Data Management Council (EDM Council). Financial Industry Business Ontology, 2016. http://www.edmcouncil.org/financialbusiness/.
[11] Ethereum, 2016. https://www.ethereum.org/.
[12] Financial products Markup Language (FpML), 2016. http://www.fpml.org/.
[13] I. Grigg. The Ricardian Contract. In Proceedings of the First IEEE International Workshop on Electronic Contracting, pages 25-31. IEEE, 2004. http://iang.org/papers/ricardian_contract.html.
[14] I. Grigg. The Sum of All Chains - Let's Converge!, 2015. Presentation for Coinscrum and Proof of Work. http://financialcryptography.com/mt/archives/001556.html.
[15] HotDocs, 2016. http://www.hotdocs.com/.
[16] Hyperledger Fabric, 2016. https://github.com/hyperledger/fabric/.
[17] Monax Industries. Dual Integration, 2016. https://monax.io/explainers/dual_integration/.
[18] ECMA International. Standard ECMA-376 - Office Open XML File Formats, 2016. http://www.ecma-international.org/publications/standards/Ecma-376.htm.
[19] Object Management Group (OMG). Financial Industry Business Ontology - Foundations (FND), 2016. OMG Document Number: dtc/2016-03-03, http://www.omg.org/spec/EDMC-FIBO/FND/1.1/Beta2/PDF/index.htm.
[20] Organization for the Advancement of Structured Information Standards (OASIS). Open Document Format for Office Applications (OpenDocument) Version 1.2, 2011. http://docs.oasis-open.org/office/v1.2/OpenDocument-v1.2.pdf.
[21] N. Palmer. Architectural Considerations for Smart Contract Templates. In The R3 Smart Contract Templates Summit, pages 90-92, June 2016. http://r3cev.com/s/R3-Smart-Contract-Templates-Summit-_FINAL.pdf.
[22] R3 and Norton Rose Fulbright. Can smart contracts be legally binding contracts?, 2016. http://www.nortonrosefulbright.com/knowledge/publications/144559/can-smart-contracts-be-legally-binding-contracts/.
[23] J. Stark. Making sense of blockchain smart contracts, 2016. http://www.coindesk.com/making-sense-smart-contracts/.
[24] Thomson Reuters. Contract Express Author, 2016. http://www.contractexpress.com/.
[25] W3. Extensible Markup Language (XML), 2008. https://www.w3.org/TR/xml/.
| [
"https://github.com/hyperledger/fabric/."
] |
[
"MoS2/MoO3 Heterojunction: Dual Role of the Type II set-up and Band Gap Modulation of MoS2 upon Lithium-Ion Intercalation",
"MoS2/MoO3 Heterojunction: Dual Role of the Type II set-up and Band Gap Modulation of MoS2 upon Lithium-Ion Intercalation"
] | [
"Raheel Hammad ",
"Amar Kumar ",
"Tharangattu N Narayanan ",
"Soumya Ghosh ",
"\nGopanpally Village\nSerilingampally Mandal\n",
"\nRanga Reddy District\n500046, 500046Hyderabad, HyderabadIndia;, India\n"
] | [
"Gopanpally Village\nSerilingampally Mandal",
"Ranga Reddy District\n500046, 500046Hyderabad, HyderabadIndia;, India"
] | [] | In recent times photorechargeable metal ion batteries have garnered significant attention but the atomistic details of the mechanism of the charging process is still unknown. MoS2/MoOy, a type II semiconductor heterostructure, has been shown to function as photocathode where during discharge the lithium ion (Li-ion) intercalation happens mostly in MoS2 layers. Photoexposure leads to exciton formation and the type II set-up is supposed to generate spatially separated and longer-lived charge carriers. The Li intercalated MoS2 is known to undergo a phase transition from the semiconducting (2H) to a metallic (1T') phase. Hence, the proposal of exciton formation and its separation in LixMoS2 during photocharging needs closer inspection. In this study, with the help of density functional theory (DFT) based studies that is aptly supported by experimental data, it is shown that LixMoS2/MoO3 forms a type II heterostructure where the underlying band gap of LixMoS2 is exposed due to dispersion of electron density onto MoO3 upto a certain value of x. Further studies show that the type II arrangement is lost prior to the phase transition. In order to investigate the electronic structure and the phase transition upon lithiation in the explicit heterostructure, we introduced two unconventional computational schemes. The presence of the band gap and the ensuing type II arrangement in LixMoS2/MoO3 upto a certain concentration of the intercalated Li-ion justifies the possibility of the photocharging process. We believe that the general concepts explored in this study will be important in the rational design of type II heterostructures that can behave as photo-cathode materials in Li-ion batteries. | null | [
"https://export.arxiv.org/pdf/2303.15961v1.pdf"
] | 257,771,347 | 2303.15961 | 7b89520ad165e8c6aa2891f90aed71584d1caa9f |
MoS2/MoO3 Heterojunction: Dual Role of the Type II set-up and Band Gap Modulation of MoS2 upon Lithium-Ion Intercalation
Raheel Hammad
Amar Kumar
Tharangattu N Narayanan
Soumya Ghosh
Gopanpally Village
Serilingampally Mandal
Ranga Reddy District
500046, 500046Hyderabad, HyderabadIndia;, India
MoS2/MoO3 Heterojunction: Dual Role of the Type II set-up and Band Gap Modulation of MoS2 upon Lithium-Ion Intercalation
Solar Battery, Density Functional Theory, Type-II Heterostructure, Phase Transition, Band Gap Change
In recent times photorechargeable metal ion batteries have garnered significant attention but the atomistic details of the mechanism of the charging process is still unknown. MoS2/MoOy, a type II semiconductor heterostructure, has been shown to function as photocathode where during discharge the lithium ion (Li-ion) intercalation happens mostly in MoS2 layers. Photoexposure leads to exciton formation and the type II set-up is supposed to generate spatially separated and longer-lived charge carriers. The Li intercalated MoS2 is known to undergo a phase transition from the semiconducting (2H) to a metallic (1T') phase. Hence, the proposal of exciton formation and its separation in LixMoS2 during photocharging needs closer inspection. In this study, with the help of density functional theory (DFT) based studies that is aptly supported by experimental data, it is shown that LixMoS2/MoO3 forms a type II heterostructure where the underlying band gap of LixMoS2 is exposed due to dispersion of electron density onto MoO3 upto a certain value of x. Further studies show that the type II arrangement is lost prior to the phase transition. In order to investigate the electronic structure and the phase transition upon lithiation in the explicit heterostructure, we introduced two unconventional computational schemes. The presence of the band gap and the ensuing type II arrangement in LixMoS2/MoO3 upto a certain concentration of the intercalated Li-ion justifies the possibility of the photocharging process. We believe that the general concepts explored in this study will be important in the rational design of type II heterostructures that can behave as photo-cathode materials in Li-ion batteries.
Due to the structural stability and reversible lithium ion (Li ion) intercalation/de-intercalation abilities, layered materials like LiNixMnyCoz have been widely used in Li ion batteries as cathode where they exhibit large capacity and high voltage discharge plateau, sometimes at the expense of the electronic band gap. [1-4] In contrast, studies on the photo-cathode materials with a band gap commensurate with the frequency of light in the visible region have been significantly low. [5-7] Traditionally, a type II heterojunction has been employed for photosensing and photocatalytic applications in order to efficiently separate the charge carriers that are generated upon photo-excitation. [8-12] The efficiency of the type II set-up is critically dependent on the alignment of the band edges of the two materials that ensures proper dispersion of the excited electron on the conduction band (CB) of the material with the higher band edges to the CB of the other material. [8,9,13] Hence, for a type II heterojunction to act as a photocathode, the bands of the two materials need to remain properly aligned during the full discharge cycle of the battery. In this study we investigate the changes in the electronic band structure of MoS2 upon lithiation in a type II set-up with MoO3 that has been recently demonstrated in a photochargeable Li ion battery. [8] Bulk MoS2 in 2H phase (with 2 layers of hexagonal lattice stacked in AB fashion) displays an indirect band gap of ~1.3 eV while the 1T' phase (monolayer of distorted octahedral phase) is found to be metallic. [14-16] The computed band structure of monolayer 1H-MoS2 displays a direct band gap of ~1.65 eV while there is a small indirect band gap in the 1T' monolayer. [14] The band gap of layered MoS2 can be engineered by chemical doping [17] or employing external stimulus. [18] Previous studies indicate that upon lithiation, MoS2 becomes metallic irrespective of the initial phase (Figure S1). [19,20] The metallization questions the photocharging mechanism, where exciton formation has to happen upon the absorption of light in the lithiated MoS2 (LixMoS2). [8] In order to understand the electronic structure of the MoS2 (bilayer) [21]/MoO3 (bulk) heterostructure, one needs to align the band edges between the two materials. The alignment requires computation of the valence band maxima (VBM) of the two materials relative to the vacuum in addition to their band gaps, which invariably involves calculation of band edges for a slab of finite number of layers. Incidentally, previous density functional theory (DFT) [22] calculations indicated that the layered MoS2/MoO3 combination forms a type III heterojunction if the PBE functional [23] is employed. [24] In order to avoid this pitfall, we employed an artificial layering scheme (henceforth termed 'uL') for MoO3 to obtain the desired type II band alignment (Figure 1a,b). Alternatively, one can employ the natural layering for MoO3 and still obtain a type-II alignment by computing the band gaps with a more accurate hybrid functional (HSE06) while using PBE+U+D3 [25,26] for all the other relevant quantities to estimate the valence band offset (VBO) (see SI for details).
In Figure 1(c,d), two different type-II band alignments between bulk MoO3 and bilayer 2H-MoS2 are shown, employing either the natural layering (and gap from HSE06) or the uL layering for MoO3. [27,28] The corresponding alignments for LixMoS2 with varying x, along with an estimate of the number of electrons that is present in the CB of LixMoS2, are shown in Figure 2. As can be seen, the number of electrons is equal to the number of Li-ions that are added to the system. These electrons in the CB of LixMoS2 make this system metallic albeit the presence of an underlying band gap. In addition, it is clear from the graph(s) (Figure 2) that this underlying band gap decreases with increase in the lithium ion concentration due to dramatic stabilization of the conduction band minimum (CBM) while the changes in the VBM are comparatively smaller. Ultimately, the type II character is lost and hence, there cannot be any efficient photo-excitation beyond that point (x = 0.33, Figure 2a; x = 0.25, Figure 2b). In order to understand the origin of the lowering of the CBM, we have plotted the corresponding projected density of states (PDOS, Figure S3) where the total DOS is projected separately onto the two MoS2 layers. The upper MoS2 layer, where the Li ions are being added, shows a steady metallization whereas the band edges of the lower layer do not change appreciably. These plots indicate that the reduction in the underlying band gap is caused by the presence of Li induced gap states near the CB edges.
We hypothesize that for lithium ion composition less than 0.25 (Figure 2b), the electron density in the CB of LixMoS2 will be transferred to the CB of MoO3, thus exposing the underlying band gap and hence, the system can be amenable to photoexcitation. In order to investigate the electron distribution in lithiated MoS2 in conjunction with MoO3, we set up an explicit heterostructure composed of MoS2 and MoO3 (SI, Figure S4). Here we show that the type II alignment can be achieved in the explicit heterostructure with the uL scheme for MoO3 as opposed to chemical modifications [24] of the natural layering. Note that this artificial scheme is employed only to understand the electron density distribution in a type II setting. The structure of the supercell and the corresponding PDOS are shown in Figure 3. The corresponding structure and PDOS of a representative lithiated MoS2 system (Li2Mo40S80) in conjunction with MoO3 (Mo48O144 in the supercell) are provided in Figure S5. Integration of the DOS shows that two electrons are present in the combined conduction band of the heterostructure. Resolving the DOS onto PDOS and integrating it reveals that 1.12 electrons are transferred from the CB of lithiated MoS2 to the CB of MoO3.
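The electron counts quoted here come from integrating a (projected) density of states up to the Fermi level. A minimal sketch of that bookkeeping is shown below; it assumes the DOS has already been parsed onto an energy grid (e.g. from a VASP DOSCAR file) and is our illustration rather than the authors' script.

import numpy as np

def occupied_electrons(energies, dos, e_fermi):
    """Trapezoidal integral of a (projected) DOS up to the Fermi level.
    'energies' in eV and 'dos' in states/eV, on the same grid."""
    energies = np.asarray(energies)
    dos = np.asarray(dos)
    mask = energies <= e_fermi
    e, d = energies[mask], dos[mask]
    return float(np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(e)))

# Charge transfer = difference between two such integrals, e.g. the PDOS of
# the MoO3 component in the heterostructure with and without lithiation.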
While monolayer 1H-MoS2 is more stable than 1T'-MoS2, [14,20] the stability order switches upon Li-ion intercalation beyond a certain Li ion concentration as shown in Figure S6. [14] We wanted to investigate whether the transition point changes in a type-II set-up. This aspect is very important because if the structure transitions to the metallic phase at a very low concentration of lithium then the whole argument regarding lithium induced band gap modulation of the H-phase becomes inconsequential towards the photo-cathode behavior. Moreover, previous studies have shown that the phase stability can change depending on the number of electrons present in the system. [14] Computing the energy of the two phases in an explicit heterojunction set-up, however, is not practically feasible since there is a significant lattice parameter mismatch between the 1T'-MoS2 and MoO3 structures.
In order to mimic the effect of the MoO3 layers, we introduced a hypothetical layer of electronegative fluorine atoms on top of the lithiated MoS2 surface (Figure S7) to extract electron density from the CB of lithiated MoS2 in either phase without affecting the band edges (Figure S8). The amount of charge extracted by the fluorine sheet, obtained by integrating the differential PDOS, varies from 0.92e to 1.64e as the number of Li-ions is increased from 1 to 6 in a simulation cell with 12 units of MoS2 (Table S1). The corresponding PDOS plots are shown in Figure S9. Using this set-up, we can compute the effect of the charge density extraction on the phase transition between the 2H- and the 1T'-phases upon lithiation. As shown in Figure 4, the transition point is shifted towards higher lithium composition.
In order to experimentally investigate the effects of lithiation in the metallization and phase transition of monolayer (1H) MoS2, we have systematically studied lithiation/delithiation using in situ Raman spectroscopy [details in the supporting information]. As shown in Figure 5a, pristine MoS2 exhibits two prominent peaks at ~383 cm^-1 and ~402 cm^-1, which correspond to the E1 2g and A1g vibration modes of monolayer 1H-MoS2, respectively. [29] On the other hand, the metallic 1T' phase is characterized by J1 and J2 modes centered at ~154 cm^-1 and ~225 cm^-1. [30,31] The evolution of the vibrational modes of the 1H phase at different stages of charging is shown in Figure 5b. We performed UV-VIS absorption for pristine MoS2 and LixMoS2 (discharged till 1V). As shown in Figure S11, pristine MoS2 shows absorption peaks at 660 nm, 610 nm, and 434 nm corresponding to the A, B, and C exciton of 1H MoS2, [32] whereas LixMoS2 does not show any absorption peak but higher absorption in the entire spectral range, indicating metallization upon lithiation. To understand the metallization of the 1H-phase upon lithiation, we performed in situ photoluminescence (PL) studies (Figure 6). The drastic decrease in intensity upon lowering the discharge voltage indicates decrease in efficiency of charge transfer from MoS2 (lithiated) to MoO3 and the shift in frequency to longer wavelengths implies decreasing band gap. Subsequent disappearance of photoluminescence below 2.95 V (during discharge) suggests disappearance of the type II character and phase transition to the 1T'-phase, which is also corroborated by the appearance of J1 and J2 peaks at that voltage in the Raman spectra. Additional graphs at different stages of charging / discharging are provided in the SI.
In this communication, we have explored the changes in the electronic band structure of MoS2 upon lithiation in a type II heterojunction set-up with MoO3. Our results suggest that (1) the type II arrangement exposes the underlying band gap of the otherwise metallic LixMoS2 and (2) the band gap of 2H/1H-phase of LixMoS2 reduces upon increasing x and eventually the type II band alignment is lost (~ x > 0.25). The loss of the type II character would suppress the photo-cathode behavior since the photoexcited electron-hole pair cannot be separated efficiently anymore. Computationally, the transition from the 2H/1H phase to the 1T' phase is found to occur after the loss of the type II character. We believe that the dual role of the type II set-up (exposure of the underlying band gap and efficient formation of charge separated excitons upon photoexcitation) is a general phenomenon. On the other hand, the current study suggests that one should also consider the band structure modulation upon Li-ion intercalation as an important gauge for an ideal photocathode material.
Figures
Supplementary Information
MoS2/MoO3 Heterojunction: Dual Role of the Type II set-up and Band Gap Modulation of MoS2 upon Lithium-Ion Intercalation
Raheel Hammad, Amar Kumar, Tharangattu N. Narayanan*, and Soumya Ghosh*

Our calculations employed density functional theory (DFT) in conjunction with the PBE functional 1 as implemented in the Vienna ab-initio simulation package (VASP) 2 with a plane wave basis set within the projector augmented plane-wave method (PAW) 3. For all the MoS2 calculations a kinetic energy cutoff for plane-waves was set to be 360 eV, while a cutoff of 500 eV was used for MoO3 and MoO3/MoS2 heterostructure calculations. The energy and force convergence criteria were set to be 10^-4 eV and 0.02 eV Å^-1 respectively. We employed a gamma centered Monkhorst scheme of grid density 0.025 in each direction for all the calculations. We have used a Hubbard U parameter of 5.0 eV for both MoS2 and MoO3. 4,5 For all multi-layered systems DFT+U was combined with the dispersion correction (D3) by Grimme et al. 6 to account for long range interactions. We relaxed the bulk and multi-layered geometry using PBE+U+D3 for MoS2. Additionally the HSE06 functional was used to compute the band gaps for a few systems as specified in the manuscript.
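For orientation only, the settings above could be assembled as follows through ASE's VASP interface. This is a sketch under the assumption that ASE is used to drive VASP (the SI does not say how the inputs were prepared), and the (4, 4, 1) mesh is a placeholder for the stated gamma-centred grid of density 0.025.

from ase.calculators.vasp import Vasp

calc = Vasp(
    xc="pbe",                    # PBE exchange-correlation functional
    encut=500,                   # eV; 360 eV was used for MoS2-only cells
    ediff=1e-4,                  # energy convergence criterion (eV)
    ediffg=-0.02,                # force convergence criterion (eV/Angstrom)
    kpts=(4, 4, 1), gamma=True,  # placeholder gamma-centred k-point mesh
    ldau=True,                   # Hubbard U correction
    ldau_luj={"Mo": {"L": 2, "U": 5.0, "J": 0.0},
              "S":  {"L": -1, "U": 0.0, "J": 0.0},
              "O":  {"L": -1, "U": 0.0, "J": 0.0}},
    ivdw=11,                     # Grimme D3 dispersion for layered systems
)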
MoS2/MoO3 Band-offset
We used the planar averaged Hartree potential to compute the offset between MoO3 and MoS2. The valence band maxima (VBM) of bi-layer MoS2 (2L) is computed with the HSE06 functional relative to its macroscopic average, which in turn is referenced to the vacuum level in a slab calculation. For bulk MoO3 calculations (with HSE06), the VBM is referenced to the vacuum level using the following protocol. We first computed the electronic structure of multi-layered slabs of MoO3 [010] in conjunction with vacuum in addition to the bulk MoO3. The lateral dimensions of the slab and the bulk simulation cells are kept the same while the geometry was relaxed in both cases. Central layers in the slab are supposed to represent the bulk and using the macroscopic average of this region one can reference the VBM of the bulk to the vacuum. [7][8][9] We can then compute the valence band offset (VBO) between the two materials. The simplified formula for the above method is given below:
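VBO = (E_VBM - E_vac)_MoS2(2L) - (E_VBM - E_vac)_MoO3

where each VBM is referenced to the vacuum level through the macroscopic average of the planar-averaged Hartree potential; this is a standard form consistent with the procedure described above.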
The conduction band offset is simply obtained by adding the band gap to the VBM. However, PBE is known to underestimate the band gap, and therefore the experimental/HSE06 band gap can be combined with the above VBO for band alignment. 8

The growth procedure for MoS2 monolayer synthesis involved the use of the chemical vapor deposition (CVD) method. A two-zone furnace was employed for the growth, following a method previously discussed. Only MoO3 and sulfur were used as precursors, and a small amount of NaCl was added to aid metal decomposition at lower temperatures. The process involved keeping sulfur at 210 °C in Zone I and placing MoO3 at 710 °C in Zone II, as indicated in the schematic diagram. The experiment was conducted in the presence of 190 sccm N2, which acted as the carrier gas. Upon completion of the growth, the furnace was rapidly cooled to room temperature to prevent further multilayer growth.
Characterization:
A Renishaw Invia Raman spectrometer was used to obtain in-situ and ex-situ Raman spectra and photoluminescence (PL) spectra. The analysis was conducted using a 532 nm excitation laser with a 20x objective. The laser power was optimized to prevent overheating and improve the signal-to-noise ratio of the data.
Electrochemical measurements:
Electrochemical measurements in this study were carried out using a single-channel Bio-logic potentiostat (SP-200).
Figure 1: Unit cell for (a) conventional layering and (b) uL scheme. Band alignment between bilayer 2H-MoS2 and bulk MoO3 where the VBO is computed with PBE+U whereas the band gap is computed either with HSE06 for the conventional layering of MoO3 (c) or with PBE+U for the uL scheme of MoO3 (d).
Figure 2: Band edges of LixMoS2 for different values of x with the computational schemes shown in figures 1c and 1d, respectively. The type-II character is lost for x > 0.33 and x > 0.25 in the two cases respectively.
Figure 3: Left: Supercell for the 2H-MoS2/MoO3 (uL) heterostructure. Right: Corresponding PDOS resolved into Mo (MoS2, black), Mo (MoO3, blue), S (yellow), O (red) density of states.
Figure 4: Phase transition with or without the hypothetical F6 sheet.
Figure 5: The in situ Raman spectra of monolayer based electrodes during lithiation/delithiation (with lithium metal as the other electrode) at different voltage vs Li/Li+. (a) Initial (pristine) MoS2 monolayer (open circuit voltage: 2.8V), 1st discharge (dchg, 0.01 mAcm^-2) at 2.2V and after over charging (chg) till 3.4V. (b) The spectra taken during the electrochemical charging at different charging potentials.
Figure 6: The in situ photoluminescence (PL) of MoS2 monolayer based electrodes at different discharging voltages vs Li/Li+: (a) during discharging from charged state at 3.45 V till 2.35 V (inset PL curve shows discharge state from 2.85V to 2.35V). (b) During charging from discharged state 1.90 V to 3.45V.
(3) Dixit, M.; Markovsky, B.; Schipper, F.; Aurbach, D.; Major, D. T. Origin of Structural Degradation during Cycling and Low Thermal Stability of Ni-Rich Layered Transition Metal-Based Electrode Materials. J. Phys. Chem. C 2017, 121 (41), 22628-22636.
(4) Manthiram, A. A Reflection on Lithium-Ion Battery Cathode Chemistry. Nat. Commun. 2020, 11 (1), 1-9.
(7) A. D.; Chamola, S.; Mathieson, A.; Boruah, B. D.; De Volder, M.; Ahmad, S. Photo-Rechargeable Li-Ion Batteries: Device Configurations, Mechanisms, and Materials. ACS Appl. Energy Mater. 2022, 5 (7), 7891-7912.
(8) Kumar, A.; Thakur, P.; Sharma, R.; Puthirath, A. B.; Ajayan, P. M.; Narayanan, T. N. Photo Rechargeable Li-Ion Batteries Using Nanorod Heterostructure Electrodes. Small 2021, 17 (51), 2105029.
(9) Kumar, A.; Hammad, R.; Pahuja, M.; Arenal, R.; Ghosh, K.; Ghosh, S.; Narayanan, T. N. Photo-Rechargeable Li Ion Batteries Using TiS2 Cathode. 2023. https://doi.org/10.48550/arxiv.2301.06155.
(10) Das, R.; Sarkar, S.; Kumar, R.; Ramarao, S. D.; Cherevotan, A.; Jasil, M.; Vinod, C. P.; Singh, A. K.; Peter, S. C. Noble-Metal-Free Heterojunction Photocatalyst for Selective CO2 Reduction to Methane upon Induced Strain Relaxation. ACS Catal. 2022, 12 (1), 687-697.
(11) Zhao, B.; Gan, Z.; Johnson, M.; Najafidehaghani, E.; Rejek, T.; George, A.; Fink, R. H.; Turchanin, A.; Halik, M. 2D van der Waals Heterojunction of Organic and Inorganic Monolayers for High Responsivity Phototransistors. Adv. Funct. Mater. 2021, 31 (42), 2105444.
(12) Liu, X.; Gu, J.; Ding, K.; Fan, D.; Hu, X.; Tseng, Y. W.; Lee, Y. H.; Menon, V.; Forrest, S. R. Photoresponse of an Organic Semiconductor/Two-Dimensional Transition Metal Dichalcogenide Heterojunction. Nano Lett. 2017, 17 (5), 3176-3181.
(13) Hu, C.; Chen, L.; Hu, Y.; Chen, A.; Chen, L.; Jiang, H.; Li, C. Light-Motivated SnO2/TiO2 Heterojunctions Enabling the Breakthrough in Energy Density for Lithium-Ion Batteries. Adv. Mater. 2021, 33 (49), 2103558.
(14) Kan, M.; Wang, J. Y.; Li, X. W.; Zhang, S. H.; Li, Y. W.; Kawazoe, Y.; Sun, Q.; Jena, P. Structures and Phase Transition of a MoS2 Monolayer. J. Phys. Chem. C 2014, 118 (3), 1515-1522.
(15) Fleischauer, P. D. Fundamental Aspects of the Electronic Structure, Materials Properties and Lubrication Performance of Sputtered MoS2 Films. Thin Solid Films 1987, 154 (1-2), 309-322.
(16) He, K.; Poole, C.; Mak, K. F.; Shan, J. Experimental Demonstration of Continuous Electronic Structure Tuning via Strain in Atomically Thin MoS2. Nano Lett. 2013, 13 (6), 2931-2936.
(17) Suh, J.; Tan, T. L.; Zhao, W.; Park, J.; Lin, D. Y.; Park, T. E.; Kim, J.; Jin, C.; Saigal, N.; Ghosh, S.; Wong, Z. M.; Chen, Y.; Wang, F.; Walukiewicz, W.; Eda, G.; Wu, J. Reconfiguring Crystal and Electronic Structures of MoS2 by Substitutional Doping. Nat. Commun. 2018, 9 (1), 1-7.
(18) Lanzillo, N. A.; O'Regan, T. P.; Nayak, S. K. Band Structure Modulation in MoS2 Multilayers and Heterostructures through Electric Field and Strain. Comput. Mater. Sci. 2016, 112, 377-382.
(19) Enyashin, A. N.; Seifert, G. Density-Functional Study of LixMoS2 Intercalates (0≤x≤1). Comput. Theor. Chem. 2012, 999, 13-20.
(20) Xia, J.; Wang, J.; Chao, D.; Chen, Z.; Liu, Z.; Kuo, J. L.; Yan, J.; Shen, Z. X. Phase Evolution of Lithium Intercalation Dynamics in 2H-MoS2. Nanoscale 2017, 9 (22), 7533-7540.
(21) Bilayer model allows increasing Li ion intercalation into MoS2 without any explicit interaction between Li-ion and MoO3.
(22) Kohn, W.; Becke, A. D.; Parr, R. G. Density Functional Theory of Electronic Structure. J. Phys. Chem. 1996, 100 (31), 12974-12980.
Figure S1: (a) Band structure of monolayer of 1H-MoS2 computed with a periodic hexagonal cell with the lateral dimensions of 12.648 Å (16 Mo and 32 S atoms) employing the PBE+U functional. (b) Computed band structure of monolayer of 1H-Li0.1875MoS2.
Figure S2: (a) Band gap is computed with HSE06 while the valence band offset is computed with PBE+U within the uL scheme for MoO3. (b) Change in band edges of lithiated MoS2 as a function of Li-ion concentration. The type II alignment is lost for Li-ion concentration (x) less than 0.083.
Figure S3: PDOS of lithiated MoS2 for different concentrations of Li-ion. The upper MoS2 layer (MoS2(u)) where the Li ions are added shows gradual metallization whereas the band edges of the bottom layer remain more or less unaffected. The analysis has been done with the PBE+U+D3 results.
Figure S4: Units of MoS2 [5x2] (left) and MoO3 [4x3] (right) employed in the explicit heterostructure calculations.
Figure S5: Left: Supercell of the explicit heterostructure with 2 Li-ions; Right: Corresponding PDOS projected separately on Mo (MoS2), Mo (MoO3), O, S, Li.
Figure S6: Phase transition in monolayer MoS2 as a function of Li ion concentration for a periodic cell of 16 MoS2 units. The simulation cell parameters for the hexagonal 1H and 1T' phases are 12.648 and 13.022 Å respectively. The energies are computed with the PBE+U functional.
Figure S7: Left: Structure of Li0.083MoS2 + F6 sheet; Right: Corresponding PDOS.
Figure S8: Left: PDOS of Li0.083MoS2 + F6 sheet; Right: PDOS of Li0.083MoS2.
Figure S9: PDOS for different values of x in the LixMo12S24/F6 system.

Experimental Methods:
The electrochemical discharge measurements during the lithiation of monolayer MoS2 were performed using a two-electrode battery setup consisting of an ITO-coated quartz electrode (25 mm × 25 mm) as the working electrode, lithium metal as the other electrode, and LiPF6 in EC/EMC as the electrolyte. Prior to transferring the MoS2 onto the ITO-coated quartz electrode, the electrode was thoroughly cleaned by washing it with soap and then with DI water and iso-propanol several times. The MoS2 was first spin-coated with PMMA (0.204 g in 5 mL toluene) and then immersed in 2 M KOH overnight. The PMMA-coated MoS2 was then transferred to water to remove KOH and subsequently transferred to the ITO-coated quartz electrode. The electrode was further cleaned with acetone after drying.
Figure S10: Schematic diagram of the MoS2 growth by the CVD method. Sulfur is placed in Zone I whereas a mixture of MoO3 and NaCl in Zone II. The MoS2 on Si/SiO2 is clearly seen in the optical image where the lateral size of the crystal is around 100-200 μm.
Figure S11: UV-Vis spectra of pristine and lithiated MoS2.
Figure S12: In-situ Raman spectra and photoluminescence (PL) of MoS2 monolayer based electrodes during lithiation/delithiation (with lithium metal as counter electrode) at different voltage vs. Li/Li+. (a) Initial MoS2 monolayer, 1st discharge at 2.2V and after charging till 3.4V. (b) 2nd discharge at 2.45V, 2nd charge at 2.95V and 3rd discharge at 2.65V.
Figure S13: In-situ photoluminescence (PL) of MoS2 monolayer based electrodes during lithiation/delithiation (with lithium metal as counter electrode) at different voltage vs. Li/Li+. (a) PL during 2nd charging from discharged state 2.30V to charge till 3.34V. (b) During discharging from charged state 3.05V to discharge till 2.08V (inset PL curve discharge state from 2.95V to 2.08V).
Table S1. Number of electrons extracted by the F6 sheet for different Li-ion concentrations

System             Charge extracted by F-sheet
Li1Mo12S24/F6      0.920 e
Li2Mo12S24/F6      0.635 e
Li3Mo12S24/F6      1.601 e
Li4Mo12S24/F6      1.400 e
Li5Mo12S24/F6      1.321 e
Li6Mo12S24/F6      1.641 e

Integration of the PDOS up to the Fermi level does not add up to the total number of electrons in the system. To accurately compute the electron occupation number, integration of the total DOS has to be used. Incidentally, the PDOS of F and Li1Mo12S24 (up to the Fermi level) do not overlap in the Li1Mo12S24/F6 heterojunction. Therefore, the PDOS can be used to determine the integration limits and the total DOS can be used to compute the electron occupations in fluorine and Li1Mo12S24. To compute electron occupations in LixMo12S24/F6 for x > 1, we compute the integral of the PDOS for F up to the Fermi level for these systems. This integral is then compared with the PDOS integral for F in Li1Mo12S24/F6; the difference is used to determine the charge extracted by fluorine.
ACKNOWLEDGEMENTS: The authors from TIFRH acknowledge the financial support from the Department of Atomic Energy, Government of India, under Project Identification No. RTI 4007. TNN and SG would like to acknowledge the funding support from the Infosys-TIFR "Leading Edge" Research Grant.
Susai, F. A.; Kovacheva, D.; Chakraborty, A.; Kravchuk, T.; Ravikumar, R.; Talianker, M.; Grinblat, J.; Burstein, L.; Kauffmann, Y.; Major, D. T.; Markovsky, B.; Aurbach, D. Improving Performance of LiNi0.8Co0.1Mn0.1O2 Cathode Materials for Lithium-Ion Batteries by Doping with Molybdenum-Ions: Theoretical and Experimental Studies. ACS Appl. Energy Mater. 2019, 2 (6), 4521-4534.
Morgan, L. M.; Islam, M. M.; Yang, H.; O'Regan, K.; Patel, A. N.; Ghosh, A.; Kendrick, E.; Marinescu, M.; Offer, G. J.; Morgan, B. J.; Islam, M. S.; Edge, J.; Walsh, A. From Atoms to Cells: Multiscale Modeling of LiNixMnyCozO2 Cathodes for Li-Ion Batteries. ACS Energy Lett. 2022, 7 (1), 108-122.
Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 1996, 77 (18), 3865-3868.
Shahrokhi, M.; Raybaud, P.; Le Bahers, T. 2D MoO3-xSx/MoS2 van der Waals Assembly: A Tunable Heterojunction with Attractive Properties for Photocatalysis. ACS Appl. Mater. Interfaces 2021, 13 (30), 36465-36474.
Dudarev, S. L.; Botton, G. A.; Savrasov, S. Y.; Humphreys, C. J.; Sutton, A. P. Electron-Energy-Loss Spectra and the Structural Stability of Nickel Oxide: An LSDA+U Study. Phys. Rev. B 1998, 57 (3), 1505-1509.
Grimme, S.; Antony, J.; Ehrlich, S.; Krieg, H. A Consistent and Accurate Ab Initio Parametrization of Density Functional Dispersion Correction (DFT-D) for the 94 Elements H-Pu. J. Chem. Phys. 2010, 132 (15), 154104.
Li, Z.; Cao, F.; Wang, L.; Chen, Z.; Ji, X. A Novel Ternary MoS2/MoO3/TiO2 Composite for Fast Photocatalytic Degradation of Rhodamine B under Visible-Light Irradiation. New J. Chem. 2019, 44 (2), 537-542.
The combination of uL layering of MoO3 combined with HSE06 gaps is discussed in SI, Figure S2.
Rani Sahoo, K.; Pradeep Chakravarthy, T.; Sharma, R.; Bawari, S.; Mundlia, S.; Sasmal, S.; Raman, K. V.; Narayanan, T. N.; Viswanathan, N. K. Probing Proximity-Tailored High Spin-Orbit Coupling in 2D Materials. Adv. Quantum Technol. 2020, 3 (9), 2000042.
Zhu, Z.; Xi, S.; Miao, L.; Tang, Y.; Zeng, Y.; Xia, H.; Lv, Z.; Zhang, W.; Ge, X.; Zhang, H.; Wei, J.; Cao, S.; Chen, J.; Du, Y.; Chen, X. Unraveling the Formation of Amorphous MoS2 Nanograins during the Electrochemical Delithiation Process. Adv. Funct. Mater. 2019, 29 (42), 1904843.
Hou, X.; Zhang, W.; Peng, J.; Zhou, L.; Wu, J.; Xie, K.; Fang, Z. Phase Transformation of 1T'-MoS2 Induced by Electrochemical Prelithiation for Lithium-Ion Storage. ACS Appl. Energy Mater. 2022, 5 (9), 11292-11303.
Wang, L.; Wang, Z.; Wang, H. Y.; Grinblat, G.; Huang, Y. L.; Wang, D.; Ye, X. H.; Li, X. Bin; Bao, Q.; Wee, A. S.; Maier, S. A.; Chen, Q. D.; Zhong, M. L.; Qiu, C. W.; Sun, H. B. Slow Cooling and Efficient Extraction of C-Exciton Hot Carriers in MoS2 Monolayer. Nat. Commun. 2017, 8, 13906.

Supplementary Information references:
(1) Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized Gradient Approximation Made Simple. Phys. Rev. Lett. 1996, 77, 3865-3868.
(2) Kresse, G.; Furthmuller, J. Efficient Iterative Schemes for Ab Initio Total-Energy Calculations Using a Plane-Wave Basis Set. Phys. Rev. B 1996, 54, 11169-11186.
(3) Kresse, G.; Joubert, D. From Ultrasoft Pseudopotentials to the Projector Augmented-Wave Method. Phys. Rev. B 1999, 59, 1758-1775.
(4) Lei, Y.-H.; Chen, Z.-X. DFT+U Study of Properties of MoO3 and Hydrogen Adsorption on MoO3(010). J. Phys. Chem. C 2012, 116 (49), 25757-25764.
(5) Dudarev, S. L.; Botton, G. A.; Savrasov, S. Y.; Humphreys, C. J.; Sutton, A. P. Phys. Rev. B 1998, 57, 1505.
(6) Grimme, S.; Antony, J.; Ehrlich, S.; Krieg, H. J. Chem. Phys. 2010, 132, 154104.
(7) Conesa, J. C. Computing with DFT Band Offsets at Semiconductor Interfaces: A Comparison of Two Methods. Nanomaterials 2021, 11, 1581.
(8) Fu, H.; Goodrich, J. C.; Tansu, N. Band Alignment of ScAlN/GaN Heterojunction. Appl. Phys. Lett. 2020, 117, 231105.
(9) Yang, W.; Wen, Y. Interfacial Charge Transfer and Enhanced Photocatalytic Performance for the Heterojunction WO3/BiOCl: First-Principles Study. J. Mater. Chem. A 2014, 2, 20770.
| [] |
[
"Natural Numerical Networks for Natura 2000 habitats classification by satellite images",
"Natural Numerical Networks for Natura 2000 habitats classification by satellite images"
] | [
"Karol Mikula \nDepartment of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia\n\nAlgoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia\n",
"Michal Kollár \nDepartment of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia\n\nAlgoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia\n",
"Aneta A Ožvat \nDepartment of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia\n\nAlgoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia\n",
"Martin Ambroz \nDepartment of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia\n\nAlgoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia\n",
"Lucia Čahojová \nPlant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia\n",
"Ivan Jarolímek \nPlant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia\n",
"Jozef Šibík \nPlant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia\n",
"Mária Šibíková \nPlant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia\n"
] | [
"Department of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia",
"Algoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia",
"Department of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia",
"Algoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia",
"Department of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia",
"Algoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia",
"Department of Mathematics\nSlovak University of Technology\nRadlinského 11810 05Bratislava, BratislavaSlovakia",
"Algoritmy:SK, s.r.o\nŠulekova 6811 06BratislavaSlovakia",
"Plant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia",
"Plant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia",
"Plant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia",
"Plant Science and Biodiversity Center\nInstitute of Botany\nSlovak Academy of Sciences\nDúbravská cesta 9845 23BratislavaSlovakia"
] | [] | Natural numerical networks are introduced as a new classification algorithm based on the numerical solution of nonlinear partial differential equations of forward-backward diffusion type on complete graphs. The proposed natural numerical network is applied to an open and important environmental and nature conservation task, the automated identification of protected habitats by using satellite images. In the natural numerical network, the forward diffusion causes the movement of points in a feature space toward each other. The opposite effect, keeping the points away from each other, is caused by backward diffusion. This yields the desired classification. The natural numerical network contains a few parameters that are optimized in the learning phase of the method. After learning the parameters and optimizing the topology of the network graph, the classification necessary for habitat identification is performed. A relevancy map for each habitat is introduced as a tool for validating the classification and finding new Natura 2000 habitat appearances. | 10.1016/j.apm.2022.11.021 | [
"https://arxiv.org/pdf/2108.04327v2.pdf"
] | 236,965,764 | 2108.04327 | 721b5aab4a97fb2cd847e151e9079023b014ebcf |
Natural Numerical Networks for Natura 2000 habitats classification by satellite images
Karol Mikula
Department of Mathematics
Slovak University of Technology
Radlinského 11810 05Bratislava, BratislavaSlovakia
Algoritmy:SK, s.r.o
Šulekova 6811 06BratislavaSlovakia
Michal Kollár
Department of Mathematics
Slovak University of Technology
Radlinského 11810 05Bratislava, BratislavaSlovakia
Algoritmy:SK, s.r.o
Šulekova 6811 06BratislavaSlovakia
Aneta A Ožvat
Department of Mathematics
Slovak University of Technology
Radlinského 11810 05Bratislava, BratislavaSlovakia
Algoritmy:SK, s.r.o
Šulekova 6811 06BratislavaSlovakia
Martin Ambroz
Department of Mathematics
Slovak University of Technology
Radlinského 11810 05Bratislava, BratislavaSlovakia
Algoritmy:SK, s.r.o
Šulekova 6811 06BratislavaSlovakia
Lucia Čahojová
Plant Science and Biodiversity Center
Institute of Botany
Slovak Academy of Sciences
Dúbravská cesta 9845 23BratislavaSlovakia
Ivan Jarolímek
Plant Science and Biodiversity Center
Institute of Botany
Slovak Academy of Sciences
Dúbravská cesta 9845 23BratislavaSlovakia
Jozef Šibík
Plant Science and Biodiversity Center
Institute of Botany
Slovak Academy of Sciences
Dúbravská cesta 9845 23BratislavaSlovakia
Mária Šibíková
Plant Science and Biodiversity Center
Institute of Botany
Slovak Academy of Sciences
Dúbravská cesta 9845 23BratislavaSlovakia
Natural Numerical Networks for Natura 2000 habitats classification by satellite images
data classificationpartial differential equations on graphsforward-backward diffusionnumerical methodsNatura 2000satellite images
Natural numerical networks are introduced as a new classification algorithm based on the numerical solution of nonlinear partial differential equations of forward-backward diffusion type on complete graphs. The proposed natural numerical network is applied to an open and important environmental and nature conservation task, the automated identification of protected habitats by using satellite images. In the natural numerical network, the forward diffusion causes the movement of points in a feature space toward each other. The opposite effect, keeping the points away from each other, is caused by backward diffusion. This yields the desired classification. The natural numerical network contains a few parameters that are optimized in the learning phase of the method. After learning the parameters and optimizing the topology of the network graph, the classification necessary for habitat identification is performed. A relevancy map for each habitat is introduced as a tool for validating the classification and finding new Natura 2000 habitat appearances.
Introduction
The new concept of natural numerical networks is introduced in this paper. It is used as a novel tool for the automated classification of protected habitats using satellite images. The natural numerical network is based on the numerical solution of the nonlinear forward-backward diffusion (FBD) equation on a complete graph. The network, represented by the numerical discretization of the FBD equation, can be classified as a new deep learning method. Usually, a deep learning method is formed by an artificial neural network with many hidden layers. In our discretization scheme, we use a sequence of time steps resolving the dynamics of the diffusion equation, and one hidden layer corresponds to one time step of the numerical scheme for solving the FBD equation. The proposed deep learning method does not use artificial neural network principles in its construction, see e.g. [1], [2] or [3]. In addition, diffusion equations are widely used in modelling phenomena in natural sciences, such as biology, physics or chemistry; thus, we call the proposed network natural. Indeed, the method seems to truly be a natural procedure for the clustering and supervised classification of data, so we call the proposed method the natural numerical network, or the natural network for short. In building the natural network, we are inspired by the work [4] in which Eldad Haber and Lars Ruthotto showed the relation between a successful deep learning model, the so-called Residual Neural Network (ResNet) [5,6], and the numerical solution of a system of ordinary differential equations using the forward Euler method. Subsequently, they designed parabolic and hyperbolic networks for deep learning based on the appropriate partial differential equations [4]. Our natural network uses another type of PDEs, the nonlinear forward-backward diffusion equations. The forward diffusion causes the movement of points in a feature space toward each other. The opposite effect, when the points are kept away from each other, is caused by backward diffusion. This yields the desired classification. The natural network based on FBD equations contains just a few parameters that are optimized in the learning phase of the method. After learning the parameters and optimizing the topology of the graph of the network, the classification is performed. The relevancy map for each habitat is created as a novel tool for validating the classification, studying the relation of the habitat classification to the species composition, and finding new Natura 2000 habitat appearances. It demonstrates the ability of the developed method to distinguish between next-standing mixed deciduous forests with similar species composition and between two types of riparian forests, as shown in the alluvium of the Danube River, as well as its potential for the automated discovery of new localities of protected habitats, as shown for a softwood floodplain forest newly discovered in Slovakia by the proposed method.
The use of satellite data has become one of the essential methods for effectively and directly acquiring information on the Earth's surface [7,8]. Together with standardized botanical records (plots) and regular in situ measurements, remote sensing is a powerful monitoring instrument [9,10] playing an irreplaceable role in acquiring data essential for evaluating and implementing environmental policy by data analysis [7,11,12]. Remote sensing is also one of the most important tools in ecology and nature conservation for achieving the effective monitoring of ecosystems in space and time [13]; thus, using satellite images to monitor habitats and biome dynamics has been highlighted in many types of research activities. Ecosystems representing Natura 2000 habitats are complex plant communities including tree, shrub, and herb layers together with typical fauna [14,15], for which automated identification and classification based on satellite data has so far been out of reach for existing methodologies [16,17]. In this paper, we develop a method that performs the first step toward filling this gap. Although the method is generally designed to work with any type of optical data monitoring the Natura 2000 habitats, we use the optical information from the spectral bands of the Sentinel-2 satellite [18], freely available on European Space Agency (ESA) servers.
In summary, the aims of the presented study are 1) to give the complete mathematical definition of the natural numerical network based on forward-backward diffusion equations, 2) to present the numerical scheme corresponding to the natural numerical network in all phases of the classification algorithm, 3) to optimize the natural numerical network on a training dataset of Natura 2000 habitats, and 4) to apply the trained natural numerical network to the classification of Natura 2000 habitats, the construction of habitat relevancy maps for results validation, and the finding of new appearances of protected habitats.
Methods
Mathematical model
Let us define a graph as an ordered pair G = (V(G), E(G)), where V(G) is a finite set of vertices and E(G) is a set of two-element subsets of V(G) representing the edges of the graph G [19]. We denote the number of vertices of the graph G by N_V. Let us consider that the graph G is a complete graph, which means that each vertex v ∈ V(G) is connected to every other vertex by an edge. Let us further suppose that the graph G is undirected, and thus the edges do not have an orientation.
Let us consider the function X : G × [0, T] → R^k representing the spatial coordinates X(v, t) = (x_1(v, t), ..., x_k(v, t)) of the vertex v ∈ V(G) at time t ∈ [0, T]. In our case, k is the dimension of the feature space R^k. A diffusion of X(v, t) on the graph G is formulated as the partial differential equation (PDE)

\partial_t X(v, t) = \nabla \cdot (g \nabla X(v, t)), \quad v \in V(G), \; t \in [0, T],    (1)
where g represents a so-called diffusion coefficient, see also [20]. We consider equation (1) together with an initial condition X(v, 0) = X^0(v), v ∈ V(G). It is not necessary to prescribe boundary conditions because diffusion occurs between all vertices of the complete undirected graph G. Let us define the distance between two vertices v, u ∈ V(G) as the Euclidean distance between the two points X(v, ·) and X(u, ·) in the feature space R^k and denote it L(v, u). Since every pair of vertices v and u of G forms one edge, e = {v, u}, we can simplify the notation for the distance and denote it as the length of the edge, L(e), which will be used throughout the following text.
We will design the diffusion coefficient g depending on the lengths of the edges e of the graph G. This gives a nonlinear diffusion model on the graph representing a generalization of the Perona-Malik model from image processing [21]. We consider equation (1) with the diffusion coefficient g in the form

g(e) = \varepsilon(e) \frac{1}{1 + K L^2(e)}, \quad K \ge 0.    (2)
The value of ε(e) in the diffusion coefficient depends on the type of diffusion applied on each edge. If we need to apply forward diffusion, we choose ε(e) as a positive constant. The forward diffusion, represented by a positive diffusion coefficient, averages the values of the diffused quantity. The averaging reflects the smoothing property of the standard diffusion equation. It is called forward because it describes diffusion towards the future, and in our application it causes the points to move toward each other and thus cluster together. On the other hand, the backward diffusion is represented by a negative diffusion coefficient, ε(e) being a small negative value, and can be understood as returning to the past in a diffusion process. It is the inverse process of the smoothing (averaging) of the values of a diffused quantity, and in our application it gives a repulsion of the points belonging to different clusters. If we used only the backward diffusion model, the points would move away from each other and the whole system would become unstable. However, by a suitable combination of the forward and backward diffusion, choosing a small negative coefficient ε(e) for the backward diffusion, we do not observe any computational instability. Such a model is a suitable and natural tool for supervised learning: the points inside the given clusters are kept together while points of different clusters are kept away from each other. This behaviour is realized by the model (1)-(2). In Fig. 1, we illustrate the behaviour of the model (1)-(2) on three given clusters, where blue lines show some of the forward diffusion links and red lines some of the backward diffusion links. This figure depicts the basic features and behaviour of the natural network: the points inside a given cluster are attracted by the forward diffusion, while the points of different clusters are repelled by the backward diffusion.
Additionally, in Fig. 2 we illustrate the situation arising in supervised learning and application phases of the classification method when a new observation is added into the network. Only the forward diffusion is applied to all links of the vertex representing the new observation. It is depicted by the blue lines connecting the new observation (black square) with every other point. Thus, this new observation is attracted by a certain diffusion speed to all existing clusters which themselves are subject to the forward-backward diffusion as described before. The dynamics of the network decides about the cluster membership of the new observation.
The model (1)-(2) contains, together with the forward-backward diffusion switch ε(e), also a weighting coefficient K. The constant K controls how the length L(e) of the edge e = {v, u} affects the diffusion of the vertices v and u over time. If KL^2(e) is large, the diffusion coefficient g is close to 0, which means that the diffusion process is slow and the points do not move toward each other by the averaging. If KL^2(e) is small, the diffusion coefficient is close to 1, the diffusion process is faster, and the points move toward each other quickly. Since the coefficient K multiplies the squared length of the edge, distant points in the feature space move toward each other more slowly than close points.
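To make the construction concrete, the following Python/NumPy sketch (our own illustration, not code from the paper; the values of K, eps_forward and eps_backward are placeholders) computes the diffusion coefficient g(e) of the model (2) for every edge of the complete graph, with the sign of ε(e) chosen by cluster membership as described above.

import numpy as np

def diffusion_coefficients(X, labels, K=1000.0, eps_forward=1.0, eps_backward=-0.001):
    # X: (N, k) array of point positions, labels: (N,) array of cluster indices.
    # Returns the (N, N) matrix of coefficients g(e) for all edges of the complete graph.
    diff = X[:, None, :] - X[None, :, :]        # pairwise differences X(u, .) - X(v, .)
    L2 = np.sum(diff ** 2, axis=-1)             # squared edge lengths L^2(e)
    same_cluster = labels[:, None] == labels[None, :]
    eps = np.where(same_cluster, eps_forward, eps_backward)  # forward inside, backward between clusters
    g = eps / (1.0 + K * L2)                    # model (2)
    np.fill_diagonal(g, 0.0)                    # a vertex has no edge to itself
    return g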
The model (1)-(2), after discretization, represents a basic natural numerical network for supervised deep learning classification and, with just a positive coefficient ε(e), it is also a proper model for unsupervised clustering (which is, however, not discussed in this paper). This basic model allows useful modifications which will be used also in our final classification algorithm. First of all, we slightly modify the diffusion coefficient into the form

g(e) = \varepsilon(e) \frac{1}{1 + \sum_{i=1}^{k} K_i l_i^2(e)}, \quad K_i \ge 0.    (3)
Now, the parameters K_i, i = 1, ..., k, represent weights for each coordinate l_i(e), i = 1, ..., k, of the vector

l(e) = (l_1(e), ..., l_k(e))^T = X(u, ·) − X(v, ·) = (x_1(u, ·) − x_1(v, ·), ..., x_k(u, ·) − x_k(v, ·))^T, \quad v, u \in V(G).    (4)
By this modification, we can control the diffusion speed in each direction of the k-dimensional feature space and achieve more accurate classification results. The next modification of the basic model (1)-(2) controls the forward diffusion coefficient on the edges of the new observation points, see Figs. 2 and 3. We can reduce the forward diffusion influence on the vertex v ∈ V(G) by using the diffusion coefficient in the form

g(e) = \max\left( \varepsilon(e) \frac{1}{1 + \sum_{i=1}^{k} K_i l_i^2(e)} - \delta, \, 0 \right), \quad \varepsilon(e) > 0,    (5)
on all edges e of v, where δ is a parameter of the "diffusion neighborhood" size. In the classification of a new observation, this modification causes that only the points in a "δ-diffusion neighborhood", i.e., those for which the diffusion coefficient is larger than δ, attract the new observation point v. This modification is illustrated in Fig. 3.
Numerical discretization -natural network construction
Let us denote by f(v, t) any of the coordinates x_i(v, t) of X(v, t) = (x_1(v, t), ..., x_k(v, t)). To discretize equation (1), we use i) the balance of diffusion fluxes (inflows and outflows) in each vertex v ∈ V(G) and ii) the approximation of the diffusion flux to the vertex v along its edge e.
First, let us define the diffusion flux approximation, which depends on the difference of the values of the function f at the vertices v and u, as

F(v, e, t) = g_e (f(u, t) − f(v, t)),    (6)

for each edge e = {v, u}, where g_e represents the diffusion coefficient on the edge e. If F(v, e, t) > 0, it represents the diffusion inflow of the quantity f into the vertex v. On the other hand, if F(v, e, t) < 0, it represents the diffusion outflow of the quantity f from the vertex v. Then the balance of diffusion fluxes in the vertex v is expressed by the equation

\partial_t f(v, t) = \sum_{e \ni v} F(v, e, t),    (7)
that means, the time derivative of f in the vertex v is positive -the value of f increases in time, if the overall inflow to the vertex v is greater than the overall outflow from the vertex v. Vice versa, the time derivative of f in the vertex v is negative if the sum of inflows and outflows in the vertex v is negative, i.e. outflows from the vertex v are greater than inflows. When we substitute the approximation of the diffusion flux (6) into the balance equation (7), we obtain
\partial_t f(v, t) = \sum_{e \ni v,\, e = \{v,u\}} g_e (f(u, t) − f(v, t)).    (8)
The right-hand side of equation (8) represents, in graph theory, the so-called "graph-Laplacian" (see equation (12) in [20] or equation (2.5) in [22]), which is given for a weighted complete undirected graph by the relation

\nabla \cdot (\nabla f)(v, t) = \frac{1}{\nu(v, t)} \sum_{e \ni v,\, e = \{v,u\}} g_e (f(u, t) − f(v, t)),    (9)
where ν(v, t) represents a measure of the vertex v and g_e represents a "weight" of the edge in the weighted graph. In numerical mathematics, we would understand the "Laplacian" defined in this way as an averaged Laplace operator on a "finite volume" v with a measure (area/volume) ν(v, t). Our numerical discretization (8) of the diffusion equation on the complete undirected graph corresponds to the choice ν(v, t) = 1, which is the standard choice for the vertex measure also in graph theory, see [20]. For the time discretization, we use the semi-implicit approach, see e.g. [23]. The time interval [0, T] is divided uniformly into M time steps t_n, n = 1, ..., M, and τ denotes the size of the time step. For the approximation of the time derivative, we use the finite difference method and obtain
\partial_t f(v, t) \approx \frac{f^n(v) − f^{n−1}(v)}{\tau},    (10)

where f^n(v) = f(v, t_n).
Since the diffusion coefficient g_e at the edge e = {v, u} can depend on the unknown quantity f, see (2)-(5), and thus can change over time, we take its value from the previous time step. We denote it by g_e^{n−1} and obtain the semi-implicit scheme in the form

\frac{f^n(v) − f^{n−1}(v)}{\tau} = \sum_{e \ni v,\, e = \{v,u\}} g_e^{n−1} (f^n(u) − f^n(v)).    (11)

The semi-implicit scheme (11) can be rewritten in each time step n = 1, ..., M into the system of linear equations

\left( 1 + \tau \sum_{e \ni v,\, e = \{v,u\}} g_e^{n−1} \right) f^n(v) − \tau \sum_{e \ni v,\, e = \{v,u\}} g_e^{n−1} f^n(u) = f^{n−1}(v).    (12)
This system of equations is represented by a full matrix and as we have said before, for a complete undirected graph it is not necessary to define any boundary condition.
In the case of classification of the data from k-dimensional feature space, our diffusing variables are the Euclidean coordinates X(v, t) = (x 1 (v, t), . . . , x k (v, t)) of the vertices v of the graph G. In general, we get in each time step k systems of linear equations
\left( 1 + \tau \sum_{e \ni v,\, e = \{v,u\}} g_e^{n−1} \right) x_i^n(v) − \tau \sum_{e \ni v,\, e = \{v,u\}} g_e^{n−1} x_i^n(u) = x_i^{n−1}(v), \quad i = 1, ..., k, \; v \in V(G),    (13)
which are interconnected by the diffusion coefficient g_e^{n−1}, which depends on all x_i^{n−1}(v), x_i^{n−1}(u), i = 1, ..., k, and can be written in the form

g_e^{n−1} = \varepsilon(e^{n−1}) \frac{1}{1 + \sum_{i=1}^{k} K_i l_i^2(e^{n−1})}, \quad K_i \ge 0.    (14)
Let us denote the i-th cluster by C_i, i = 1, ..., N_C, where N_C is the number of clusters. Let us have the vertices v ∈ C_l and u ∈ C_m, where l, m ∈ {1, ..., N_C}, e^{n−1} = {v, u}.
The value ε(e n−1 ) in the diffusion coefficient (14) is given by the following values
\varepsilon(e^{n−1}) \ge 0 \quad \text{if } l = m, \qquad \varepsilon(e^{n−1}) < 0 \quad \text{if } l \ne m.    (15)
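As an illustration of how one time step of the scheme (13)-(15) can be realized, consider the following Python/NumPy sketch (our own minimal version; it solves the dense linear system directly with numpy.linalg.solve instead of the SOR method used later in the paper, and the parameter values are placeholders).

import numpy as np

def natural_network_step(X, labels, K_w, tau=0.1, eps_forward=1.0, eps_backward=-0.001):
    # One semi-implicit time step: X is the (N, k) matrix of positions x_i^{n-1}(v),
    # labels are cluster indices, K_w is the (k,) vector of weights K_i in (14).
    N = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]
    denom = 1.0 + np.sum(K_w * diff ** 2, axis=-1)            # 1 + sum_i K_i l_i^2(e)
    same_cluster = labels[:, None] == labels[None, :]
    g = np.where(same_cluster, eps_forward, eps_backward) / denom   # (14)-(15)
    np.fill_diagonal(g, 0.0)
    A = -tau * g                                              # off-diagonal entries of (13)
    A[np.arange(N), np.arange(N)] = 1.0 + tau * g.sum(axis=1)       # diagonal of (13)
    return np.linalg.solve(A, X)                              # x_i^n(v) for all coordinates at once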
For the reader's convenience, in Fig. 4 we present the dynamics of the network given by (13)-(15), in order to show how the points inside the given clusters move together while the clusters themselves keep away from each other. In practice, in both the learning phase and the application phase, we add a new observation to the network and run the dynamics of the network. In the application phase, we add a completely new observation to the network, while in the learning phase the new observation is taken out of the learning dataset. In both cases, the dynamics is modified in such a way that all other points move by (13)-(15), but for the new observation w ∉ C_i, i ∈ {1, ..., N_C}, e = {w, u}, the diffusion coefficient is set to

g_e^{n−1} = \max\left( \varepsilon(e^{n−1}) \frac{1}{1 + \sum_{i=1}^{k} K_i l_i^2(e^{n−1})} - \delta, \, 0 \right),    (16)

where \varepsilon(e^{n−1}) \ge 0, K_i \ge 0, \delta > 0
are given constants. A stopping criterion is applied to the dynamics of the network. We call it the histogram stopping criterion because it calculates the number of occurrences (frequency) of the evolving points in prescribed spatial cells in every time step. That means a kD grid with cells given by a specific spacing h is created, see the grey grid in Fig. 6 for a 2D illustration.
In the numerical experiments presented below, we always use the spacing h = 0.01. We also determine which of the given clusters is the smallest one, and let the variable S_min represent the number of points in this smallest cluster. Then, in every time step, the histogram development is monitored, and whenever the number of points (denoted as the frequency in the upper left corner of Fig. 6) inside a grid cell is greater than or equal to S_min, the cell is marked, see the red cells in Fig. 6. At the same time, a specific neighborhood of every marked cell is examined, see the yellow subdomains in Fig. 6. The examined neighborhood is given by the interior between the concentric squares with Chebyshev radii H_1 and H_2, respectively; in Fig. 6, we illustrate the situation where H_1 = 1 and H_2 = 8, and these parameters are also used in the computations presented below. If there are only zero values of the frequency in all cells inside the examined neighborhood of the marked cell, we claim that a cluster was formed in the marked cell. The dynamics of the network is stopped when the number of clusters formed is equal to the number of clusters given.
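A possible implementation of this histogram stopping criterion is sketched below (again a minimal Python/NumPy illustration, under the assumption that the points have been scaled to [0, 1] × [0, 1]; the function name is ours). The dynamics would be stopped as soon as the returned count equals the number of given clusters.

import numpy as np

def count_formed_clusters(X, S_min, h=0.01, H1=1, H2=8):
    # Bin the 2D points into a grid with spacing h and count the marked cells
    # (frequency >= S_min) whose surrounding ring of cells, between Chebyshev
    # radii H1 and H2, contains no points at all.
    bins = int(round(1.0 / h))
    hist, _, _ = np.histogram2d(X[:, 0], X[:, 1], bins=bins, range=[[0.0, 1.0], [0.0, 1.0]])
    formed = 0
    for i, j in zip(*np.where(hist >= S_min)):
        block = hist[max(i - H2, 0):i + H2 + 1, max(j - H2, 0):j + H2 + 1].copy()
        a, b = i - max(i - H2, 0), j - max(j - H2, 0)   # position of the marked cell in the block
        block[max(a - H1, 0):a + H1 + 1, max(b - H1, 0):b + H1 + 1] = 0.0  # drop the inner square
        if block.sum() == 0:                            # ring is empty, a cluster has formed
            formed += 1
    return formed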
For quantifying the relevancy of the classification of any new observation w ∉ C_i, i ∈ {1, ..., N_C}, the relevancy coefficient R(w) is defined. We define it by using the information about the cluster to which the new observation is classified, combined with the information about the distances between the point representing the new observation and the centroids of all given clusters. In the left picture of Fig. 7, the relevancy of the resulting classification of the new observation will be high, close to 1, while in the right picture it will be low, around 0.5.
The new observation is classified into the cluster C_a if, after stopping the network dynamics, its closest point in the network is a point from the cluster C_a and, at the same time, their distance is less than H = 10h. Otherwise, the point is not classified into any cluster; it is called an outlier, and its relevancy is equal to 0 in all clusters. Now, let us assume that the new observation is classified into the cluster C_a. The new observation has its given initial position X(w, 0), and we calculate the centroids of the given clusters at the final time step by
\bar{C}_i = \frac{1}{N_{C_i}} \sum_{v \in C_i} X(v, T), \quad i = 1, ..., N_C,    (17)
where N C i is the number of points in the cluster C i . Then we calculate the distance between the new observation and the centroid of the cluster C a = C a (w) to which it is assigned by the network dynamics,
l_1(w) = | X(w, 0) − \bar{C}_a |,    (18)
and the average distance of the new observation to all other cluster centroids,
l_2(w) = \frac{1}{N_C − 1} \sum_{i=1,\, i \ne a}^{N_C} | X(w, 0) − \bar{C}_i |.    (19)
The above distances are used to define the quantity
R_p(w) = 1 − \frac{l_1(w)}{l_1(w) + l_2(w)},    (20)
which is in the range [0, 1] and which is the basis for the definition of the relevancy coefficient. When the position of the new observation X(w, 0) is close to the centroid of the cluster to which it is assigned, then R_p(w) is close to 1, see Fig. 7 left. In this case, the relevancy of the resulting classification should be high. The quantity R_p(w) is close to 0.5 or less if the distance of the new observation to the centroid of the cluster to which it is assigned is similar to or greater than its distances to the other clusters' centroids, see Fig. 7 right. In this case, the resulting relevancy of the classification should be significantly reduced. We use the logistic function
L(x) = \frac{1}{1 + e^{\lambda(0.5 − x)}},    (21)
which, after linear rescaling from the interval [L(0), L(1)] to the interval [0, 1], gives the final definition of the relevancy coefficient R(w) for any new observation w,
R(w) = \frac{L(R_p(w)) − L(0)}{L(1) − L(0)}.    (22)
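Putting the definitions (17)-(22) together, the relevancy coefficient of a classified observation can be computed, for instance, as in the following sketch (our own minimal Python/NumPy version; the centroids are assumed to have been computed by (17), and a is the index of the cluster assigned by the network dynamics).

import numpy as np

def relevancy(x_w0, centroids, a, lam=12.0):
    # x_w0: initial position X(w, 0); centroids: (N_C, k) array of cluster centroids;
    # a: index of the cluster C_a to which w was classified.
    d = np.linalg.norm(centroids - x_w0, axis=1)
    l1 = d[a]                                   # distance to the assigned centroid, (18)
    l2 = np.mean(np.delete(d, a))               # mean distance to the other centroids, (19)
    Rp = 1.0 - l1 / (l1 + l2)                   # (20)
    logistic = lambda x: 1.0 / (1.0 + np.exp(lam * (0.5 - x)))   # (21)
    return (logistic(Rp) - logistic(0.0)) / (logistic(1.0) - logistic(0.0))  # (22)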
While R_p(w) decreases linearly from 1 to 0, depending on the distance ratio l_1(w)/(l_1(w) + l_2(w)), the final relevancy coefficient R(w) has a nonlinear character, see Fig. 8. It sets the relevancy values close to 1 for all new observations belonging to a neighborhood (whose size depends on λ) of the centroid of the cluster to which they are assigned. In all numerical experiments presented below, we use λ = 12. The relevancy coefficient defined by (22) is used in the definition of the relevancy maps, which give useful information on the membership of Sentinel-2 image pixels and image subareas in the respective clusters, see Tabs. 3-6.

Ground-based vegetation data sampling - Application to habitat identification and prediction

One of the possible applications of the natural numerical networks described above is ecology and nature conservation. The identification and classification of the Natura 2000 habitats by using remote sensing data still has strong limitations due to the habitats' natural character and variability. In our application, we use a natural numerical network for the classification of protected Natura 2000 forest habitats in the territory of Slovakia. Four Natura 2000 forest habitats dominant in Western Slovakia were chosen for the classification, as shown in Table 1. Therefore, we have four clusters C_i, where i = 1, ..., N_C, and N_C = 4. All habitat area borders were semi-automatically segmented in the NaturaSat software [24,25] and checked in the field by botany experts during the vegetation seasons of 2019 and 2020. There were 125 areas segmented in the red, green, blue and near-infrared channels of Sentinel-2 data [18]. We denote the segmented areas by S_i, where i = 1, ..., N_S, and N_S = 125. This input from field experts contains 30 segmented areas of the 91E0 habitat, which consists of mixed ash-alder alluvial forests in temperate and boreal Europe (Alno-padion, Alnion incanae, and Salicion albae); 29 91F0 areas, which consist of riparian mixed forests of Quercus robur, Ulmus laevis and Ulmus minor, Fraxinus excelsior or Fraxinus angustifolia, along the great rivers of the Atlantic and Middle European provinces; 32 91G0 areas, which consist of Pannonic woods with Quercus petraea and Carpinus betulus; and 34 9110 areas, which consist of Luzulo-Fagetum beech forest habitats. The Sentinel-2 data from September 10, 2018 covering Western Slovakia were used, as shown in the left square in Fig. 9. More specifically, we studied the Natura 2000 habitats in the Podunajská nížina lowland along the Danube river (see Fig. 10 left up), the Záhorská nížina lowland along the Morava river (see Fig. 10 right up) and the habitats in the Malé Karpaty Mts. (see Fig. 10 bottom row).
Obtaining multispectral data and habitat spectral characteristics
Sentinel-2 is a satellite of the European Space Agency (ESA) designed for Copernicus, the European Union's Earth observation program focused on the observation of the atmosphere, land, seas and climate on Earth [26]. All data acquired by the Sentinel-2 satellite are systematically processed, and only the products of this process are available to users [18]. We use the Level-2A product, which provides Bottom Of Atmosphere (BOA) reflectance images and offers 17 channels. In addition to these 17 channels, we calculate one more, the normalized difference vegetation index (NDVI) [27], which gives further useful information on habitat status. Thus, for the classification and the feature space construction, we use 18 channels; and for every channel, we compute the statistical characteristics: the mean, the standard deviation, the minimum value and the maximum value in a prescribed image subarea A. Therefore, the feature space is the 72-dimensional Euclidean space; i.e., its dimension is k = 72.
For classification purposes, first, in each segmented area S i , where i = 1, . . . , N S , we created a square A i = A(p i , r) with a randomly chosen center in a pixel p i ∈ S i and Chebyshev radius r. For large segmented areas, r = 5 can be chosen; and for small areas, r can be chosen smaller, as shown in Fig. 11. Such squares are used for building the learning datasets and also for the construction of the so-called relevancy maps defined below. The statistical characteristics of the above mentioned 18 channels are computed for every square A i , and they form the coordinates of points in the 72-dimensional feature space. The initial network graph G is constructed such that every vertex of the graph G is given by one such point corresponding to one square A i (or to one pixel p i or one segmented area S i , as we may say).
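For illustration, the statistical characteristics of one square could be collected as in the sketch below (our own code, not from the paper; we assume the 18 channels, including the NDVI, are stacked in a NumPy array of shape (18, H, W), and the function names are ours).

import numpy as np

def ndvi(nir, red, eps=1e-9):
    # Normalized difference vegetation index, computed pixel-wise from the
    # near-infrared and red reflectance bands.
    return (nir - red) / (nir + red + eps)

def square_features(bands, p, r):
    # bands: (18, H, W) stack of channels; p = (row, col) is the center pixel;
    # r is the Chebyshev radius of the square A(p, r).
    i, j = p
    patch = bands[:, i - r:i + r + 1, j - r:j + r + 1]
    stats = [patch.mean(axis=(1, 2)), patch.std(axis=(1, 2)),
             patch.min(axis=(1, 2)), patch.max(axis=(1, 2))]
    return np.concatenate(stats)                # 4 statistics * 18 channels = 72 features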
Since the feature space is high-dimensional, we reduce the dimensions. To detect and retain the maximal variance in the data, we apply Principal Component Analysis (PCA) [28,29]. We observed that in our application, the first two principal components are sufficient to describe the data variance; and thus, the dimension can be reduced from k = 72 to k = 2, which is simultaneously computationally tractable and yields convincing results.
Relevancy map
The relevancy map is a grayscale image with the same size as the images from Sentinel-2 optical channels. After finding the optimal parameters of the natural network, the square A(p, r) is created in every image pixel p (and not only inside the segmented areas as described above). For every p, the statistical characteristics of the square A(p, r) are computed and considered as new observation w(p). We note that to create required squares along the Sentinel-2 image boundary and compute the statistical characteristics there we use the reflection of corresponding values from the image interior to the exterior. The new observation w(p) is added to the graph G as a new vertex. Every new observation w(p) is classified by the natural network and its relevancy coefficient R(w(p)) is computed by (22). Finally, depending on the Chebyshev radius r of the square A(p, r), the relevancy map M r i , i = 1, . . . , N C , is defined for every cluster C i in every pixel p as follows
M_i^r(p) = R(w(p)) \quad \text{if } w(p) \text{ is classified into } C_i, \qquad M_i^r(p) = 0 \quad \text{if } w(p) \text{ is not classified into } C_i.    (23)
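The construction of M_i^r can then be sketched as a scan over all image pixels (our own illustration; classify is a placeholder for running the trained network dynamics with w(p) added as a new observation and returning the assigned cluster and the relevancy (22), square_features is the helper from the sketch above, and the boundary reflection mentioned above is omitted for brevity).

import numpy as np

def relevancy_map(bands, classify, r, cluster_index):
    # Builds the relevancy map M_i^r of equation (23) for one cluster C_i.
    _, H, W = bands.shape
    M = np.zeros((H, W))
    for row in range(r, H - r):
        for col in range(r, W - r):
            w = square_features(bands, (row, col), r)   # new observation w(p)
            assigned, R = classify(w)
            if assigned == cluster_index:
                M[row, col] = R
    return M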
Learning phase and network graph topology optimization
First, let us consider the graph G having N_V = 125 vertices as described above and call it the (initial) learning dataset LDS125. All data from the learning dataset are labelled by the number of the cluster to which they belong. As we stated above, we apply PCA to the original 72-dimensional points representing the data. PCA finds the coordinate system (basis) in which the highest variance (variability) of the data is in the first coordinate and decreases subsequently with further coordinates. The change of basis is represented by a linear transformation (matrix) that is applied to every point of the dataset, and we obtain the coordinates of every point in the new coordinate system. Then, we are able to reduce the dimension of the feature space to k = 2, considering only the first two coordinates of every point, as shown in the top left of Fig. 4. As we observed experimentally, further coordinates do not help differentiate the clusters and can be abandoned. After the PCA matrix transformation, we also scale the coordinates of the points into the range [0, 1] × [0, 1], which helps in the model parameter tuning process. In fact, for any data, we can then use the same ranges of the natural network model parameters depending on the point distances.
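This preprocessing step can be sketched, e.g., with scikit-learn (a minimal illustration, assuming the 72-dimensional feature vectors are stacked row-wise in a matrix F):

from sklearn.decomposition import PCA

def reduce_and_scale(F):
    # F: (N_V, 72) feature matrix of the learning dataset.
    Y = PCA(n_components=2).fit_transform(F)    # keep the first two principal components
    Y = (Y - Y.min(axis=0)) / (Y.max(axis=0) - Y.min(axis=0))  # rescale to [0,1] x [0,1]
    return Y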
The main goals of the learning phase and the network graph optimization are to tune the parameters of the model (13)- (16) and optimize the structure of the graph G itself to achieve the highest possible classification accuracy for observations from the learning dataset. To achieve that goal, we subsequently remove the cluster label from each vertex of graph G, set it as the new observation and classify it using the model, as shown in Fig. 5. We vary the model parameters K 1 , K 2 and δ and choose the combination of parameters that results in the greatest number (N B ) of correctly classified observations from the learning dataset. If N V is the number of all observations, our goal is to achieve a success rate N B /N V close to 1.
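A brute-force sketch of this tuning loop is given below (our own illustration; classify_leave_one_out is a placeholder for running the network dynamics (13)-(16) with the vertex w treated as an unlabelled new observation and returning the cluster to which it is assigned).

def tune_parameters(Y, labels, K_values, delta_values, classify_leave_one_out):
    # Exhaustive search over (K1, K2, delta); returns the best hit count and parameters.
    best_hits, best_params = -1, None
    for K1 in K_values:
        for K2 in K_values:
            for delta in delta_values:
                hits = sum(classify_leave_one_out(Y, labels, w, (K1, K2), delta) == labels[w]
                           for w in range(len(Y)))
                if hits > best_hits:
                    best_hits, best_params = hits, (K1, K2, delta)
    return best_hits, best_params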
In the tuning process, the range for the diffusion coefficient parameters is K_i ∈ [100, 5000] with the step size K_i^s = 100, where i = 1, 2; and the range for the parameter δ is the interval [0.001, 0.1] with the step δ^s = 0.001. As we tune only three parameters, we are able to go through all discrete parameter combinations for every new observation (a type of brute-force approach) and find the combination that gives the highest possible classification accuracy N_B. For any parameter combination, we apply at most n time steps of the natural network dynamics. We have chosen n = 200, but in most cases the histogram stopping criterion is fulfilled after fewer time steps and the classification is fast. For the numerical solution of the linear systems of equations (13), we use the SOR (Successive Overrelaxation) iterative method. The first row of Table 2 shows the results. We achieved the best success rate of 105/125 by using the model parameters K_1 = 2800, K_2 = 4700 and δ = 0.004.
Table 2. Classification success rate for each learning dataset.

Dataset name | Correctly classified
LDS125 | 105/125
LDS125adj | 113/125
LDS118 | 117/118

The achieved best success rate of 0.84 needed to be improved, so we performed further steps in the learning phase. It is quite clear that the initial random choice of the representative squares A(p_i, r) inside the segmented areas S_i, where i = 1, ..., N_S, is not the optimal approach. The statistical characteristics of those squares do not necessarily correspond optimally to the statistical optical characteristics of the corresponding habitat in the Sentinel-2 image data. A strategy to solve this problem is based on a spatial adjustment (shift) of the representative squares inside the segmented areas S_i such that we obtain a higher classification success rate with new (updated) vertices of the graph G. We can understand this step as a learning dataset adjustment. Since the representative squares have various radii (depending on the size of the segmented area), we construct the relevancy maps M_i^r, as shown in section 2.5, for r = 3, 4, 5. Since the relevancy map gives the relevancy of classification for every image pixel, we are able to compare the relevancies of the pixels inside the segmented areas and choose the pixel with the highest relevancy. The relevancy maps are constructed by using the graph G corresponding to the initial learning dataset LDS125 and by using the optimal network parameters K_1 = 2800, K_2 = 4700, and δ = 0.004. Then, we check whether there is an r such that we are able to find a new pixel p ∈ S_i in which M_a^r(p) >> 0 (ideally close to 1), where M_a^r is the relevancy map of the cluster to which the segmented area S_i belongs. If we find such a pixel p in the relevancy map M_a^r, the new square A(p, r) is constructed. Every A(p_i, r) for which it was possible to find a new representative square, with higher M_a^r(p_i), is replaced by the new one, and the adjusted learning dataset LDS125adj is created. The network parameter tuning process is conducted again, but it now uses the LDS125adj learning dataset. The result is shown in the second row of Tab. 2; the best success rate was 113/125 = 0.904, which is higher than the previous one. It was achieved by using the parameters K_1 = 4600, K_2 = 1700, and δ = 0.002. We can conclude that the adjustment of the representative squares significantly increased the classification success rate.
However, in the previous adjustment step, it was not always possible to find an adjusted representative square for all S i , where i = 1, . . . , N S . This was caused by the fact that for some segmented areas S i , only zero values of relevancy were computed in all interior pixels in all relevancy maps M r a , where r = 3, 4, 5. We checked whether these only zero values in all relevancy maps were changed in those segmented areas when using LDS125adj dataset with its optimal parameters. Thus, we again constructed the relevancy maps. Checking the results, we found that there are still seven areas S i for which no inner pixel can be found such that it has relevancy greater than zero. It is clear that those areas cannot contribute in any way to increase the classification success rate. We call the area S i fulfilling the condition
M_a^r(p) = 0, \quad \forall p \in S_i, \; r = 3, 4, 5,    (24)
unclassifiable, and we remove all squares A(p, r) corresponding to the unclassifiable areas from the learning dataset LDS125adj and create the final learning dataset LDS118, containing only N_V = 118 labeled observations. By this adjustment, we changed the topology of the graph G itself, and we call this step the network graph topology optimization. The model parameters were tuned again, and the best classification success rate was 117/118 = 0.9915, as shown in the third row of Tab. 2, for the optimal parameters K_1 = 3100, K_2 = 1500, and δ = 0.003. This success rate is high and allows us to use such an optimally tuned (trained) natural network in the practical applications presented in the next subsections.
Results and discussion
Relevancy maps for Western Slovakia
In the Methods section, we discussed the learning process that led to the successfully optimized and trained natural network. Finally, the graph G of the trained natural network contains N V = 118 vertices and the optimal parameters are K 1 = 3100, K 2 = 1500, and δ = 0.003. This subsection is devoted to the construction of the final relevancy map for each explored Natura 2000 habitat. It was clear from the learning phase that the radius of the chosen square areas should vary between 3 (for small segmented areas) and 5 (for large segmented areas) to get the optimal results. For this reason, we compute three relevancy maps for the squares with radii r = 3, r = 4 and r = 5, as shown in Fig. 12. The final relevancy map M f i , where i = 1, . . . , N C , is obtained by taking the maximum of those three relevancies in every pixel p, i.e., we define it by
M_i^f(p) = \max_r M_i^r(p).    (25)
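In code, this is simply a pixel-wise maximum over the three per-radius maps (a one-line NumPy sketch, assuming the maps are arrays of equal shape):

import numpy as np

def final_relevancy_map(maps_by_radius):
    # maps_by_radius: list of relevancy maps M_i^r for r = 3, 4, 5.
    return np.maximum.reduce(maps_by_radius)    # equation (25), computed pixel-wise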
The qualitative (visual) comparisons of the Natura 2000 habitat segmented areas with the final relevancy maps for habitats 91E0 and 91F0 are plotted in Fig. 13. As the figure shows, the interior of the segmented areas on each relevancy map contains bright colors. This reflects the correct high relevancy of classification inside the segmented areas and thus the correct assignment of the image pixels to the Natura 2000 habitat.
For the quantitative evaluation of the classification accuracy, we calculate the mean relevancy inside each segmented area. It is clear that the relevancy values inside the inner narrow band of width r are irrelevant because the squares A(p, r), for p from the narrow band, also partially cover the pixels outside the area, cf. Fig. 12. Thus, we shrink the boundary curve by the distance r = 3, and the mean value of the relevancy is computed inside the smaller curve. In Tables 3 -6, we show the mean relevancy inside the segmented areas of the 91E0, 91F0, 91G0 and 9110 Natura 2000 habitats in each of the final relevancy maps.
The studied Natura 2000 habitat segmented areas obtain nonzero values in not only one final relevancy map. This occurs because the final relevancy map is constructed by the combination of the relevancy maps with different rs and by the fact that pixels inside the segmented areas can be classified differently by the natural network. We notice that most segmented areas obtain the highest mean relevancy for the habitat to which they belong. This is a feature supporting the usability of the final relevancy maps in the classification of new areas and finding new appearances of habitats. There are also some cases where the segmented area has the highest mean relevancy in a different cluster than that to which it should belong. This is, however, in a large majority of cases explainable. E.g., it may be caused by a similar species composition of habitats or by the very often difficult decision of experts in the field when classifying the habitat. Thus, we can conclude that the final relevancy maps give very useful information for an expert decision on habitat classification. This is further discussed below.
The 91E0 softwood floodplain forests are the best detectable habitat among the studied habitats, as shown in Tab. 3, where all areas have the highest mean relevancy in the correct 91E0 cluster. The only disputable result is the habitat area 91E0_Sihot, which was classified correctly as 91E0 but the relevancy values are high for both 91E0 and 91F0 habitats. It can be caused by the high cover of Ulmus minor, the hardwood tree species at this locality, which is also typical for the 91F0 habitat, but in combination with different trees. The floodplain forests at the Sihoť Island are unique due to the specific co-occurrence of softwood floodplain forest species Populus sp. and Salix sp. together with Ulmus minor; therefore, the obtained relevancy values in Tab. 3 are meaningful. It seems that when hardwood tree species are mixed in the 91E0 forest habitat, the mean relevancy of such an area also has a higher value for the 91F0 habitat cluster. This fact indicates recognition of transitional floodplain forest type that is common in natural conditions and also partly occurs in the localities in Číčov (Cicov_1, Cicov_5) and Medveďov (Medvedov_2).
Within the 91F0 habitat, three areas are misclassified, and some others have close relevancies in both 91F0 and 91E0 habitats, as shown in Tab. 4. The segmented areas 91F0_Kopac1, 91F0_Vysoka_pri_Morave_6 and 91F0_Vysoka_pri_Morave_8 were classified contrary to expert opinions. These forests represent a transitional type between 91E0 and 91F0 habitats, and experts classified them as 91F0. In many cases, it is hard to classify transitional floodplain forests fully objectively. The categorization is dependent on the subjective decision of the field expert. The relevancy shows possible classification within the 91E0 habitat of these three areas, which should be discussed among botany experts. In the alluvial country with natural characteristics, the softwood (91E0) and the hardwood (91F0) floodplain forests are often interconnected. They form transitional zones with hardwood tree species in the softwood habitat and, conversely, with softwood species in the hardwood floodplain forest. Our results show that the relevancy maps are sensitive to these transitional forests and can give information about such mixed composition of the floodplain forest and possible classification within the next-standing habitat.
A few misclassified areas are found in the 91G0 habitat, as shown in Tab. 5. They are misclassified mainly as 91F0 instead of the correct 91G0. In addition, with a more in-depth look, we see that all misclassified areas contain the Quercus petraea species together with Carpinus betulus. For the 91F0 habitat, Quercus robur is typical, but almost no difference between the two Quercus is identified at the species level. However, since the 91F0 habitat occurs in river alluvia while the 91G0 habitat occupies lower hills, adding information from a digital elevation model to the feature space may solve this problem.
In Table 6, we present the results for the 9110 habitat areas. In some cases, they form transitional zones with 91G0 habitat because they share some tree species, Carpinus betulus, Tilia cordata, Prunus avium, Acer platanoides, etc. The beech forests in the Carpathians' upper parts are monodominant and thus are very easily detectable while forests in lower altitudes host more 91G0 species. Thus, the information from the digital elevation model can again improve the classification. One of the misclassified areas (9110_Raca_2) represents a very old-growth forest with a lower canopy cover, and probably some undergrowth species with different optical characteristics influenced the result. The natural vegetation has a continuous character, and Natura 2000 habitats stated in the habitats directive describe the essential, most characteristic forms of habitats. In real nature, these types are often interconnected and hard to classify, even by experts in the field. The presented results show the high classification success; moreover, it gives us information about the admixture of some tree species typical for next-standing habitats. This finding opens new opportunities in phytosociological and ecological research, especially in the fields of ecotone zones and vegetation gradients.
Exploration of other regions in Slovakia and finding new appearances of protected habitats
After obtaining the successfully trained natural network, we can explore further areas of the occurrence of protected habitats in territories neighboring Western Slovakia, as shown in Fig. 9. We explored the areas of Central and Eastern Slovakia and obtained very promising results. By using the relevancy maps, we confirmed the appearance of the 91F0 habitat around the Latorica River in the south of Eastern Slovakia and found the new appearance of 91E0 protected habitat along the Rimava River near the village of Dubovec in the southern part of Central Slovakia. The main goal of the relevancy map construction, to find the new appearances of protected habitats in an automated way, was thus achieved.
During the Natura 2000 mapping campaigns, the 91F0 floodplain forests around the Latorica River were sampled using vegetation plots. We use the trained natural network to create the final relevancy maps for the Eastern Slovakia region, and the details of the Sentinel-2 image and the 91F0 final relevancy map can be seen in Fig. 14. The figure shows 11 points (yellow color) representing the vegetation plots -phytosociological relevés, which mark the appearance of the habitat. Usually, they are not far from the boundary of the habitat. First, we see the correspondence of these points and bright colors in the relevancy map of the 91F0 habitat, which indicates the correct classification of the pixels along the Latorica River. Furthermore, we automatically segmented the habitat area [30] starting the segmentation from 11 habitat marking points. The evolution of the segmentation curves undergoing topological changes is presented in Fig. 14 with the final segmentation result plotted in the bottom right picture both on the Sentinel-2 image and relevancy map. The automatic segmentation identified the compact habitat area and also included a thin river branch, a forest road and small areas with younger floodplain forests, which is accepted because these objects are common parts of the habitat. Furthermore, the relevancy map was able to detect even these small parts, and high relevancy values were assigned only to the best representative area of the habitat. We computed also the mean value of the relevancy in the 91F0 relevancy map inside the segmented area; and it is 0.6548, which indicates the correct classification of the area along the Latorica River.
During the exploration of the relevancy maps created for the southern area of the Central Slovakia region, we found that the natural network assigns high 91E0 relevancy to the area around the Rimava River close to the village of Dubovec (Fig. 15 right). The discovered area has never been identified as a target habitat, and no databases contain such information (the Slovak vegetation database or the database of the State Nature Conservancy of the Slovak Republic where all currently known areas of Natura 2000 habitats are collected). We segmented the area by applying the automatic segmentation method [30], as shown at the bottom right in Fig. 15, to obtain the final segmentation result. The computed mean relevancy inside the segmented area is equal to 0.6079, and it again indicates a possible new appearance of the 91E0 habitat. Finally, the botanists went into the field and confirmed that the newly discovered area is classified as 91E0 Natura 2000 habitat, as shown in photos of the area in Fig. 16.
Exploration of the alluvial forests along the Danube River in Central and South Europe

During the field exploration of the Danube River floodplains, the appearance and positions of the habitats were sampled using vegetation plots. We use the natural network to explore the alluvial forests along the Danube River, and we make a qualitative and quantitative comparison of the vegetation plots and the relevancy map.
The floodplain forests occur on the banks of the Danube River from Central to South Europe. A large area of these forests is situated in Upper Austria around Linz city, where softwood floodplain forests (91E0 habitat) form a mosaic with monodominant plantations of Canadian poplars or maples. Vegetation plots were sampled in this area, and they were used to verify the relevancy map. All of the vegetation plots fit inside the regions identified by the relevancy map. Fig. 17 depicts two of them, situated inside the areas, which the natural network denotes as 91E0 habitat areas. Automatic segmentation was applied to these areas, and the result is also depicted in Fig. 17. The final curves surround the areas with high 91E0 relevancy. The 91E0 mean relevancy of the areas was equal to 0.7226 for the smaller area and 0.7318 for the larger area.
The relevancy map was created, and vegetation plots were also sampled, in the Danube River alluvia in Vojvodina (North Serbia). The identification of 91E0 habitat areas was slightly less successful in this area. Two plots were not recognized due to the presence of the moistest forest types, dominated by sparse willows and lacking poplars, which are not so widespread in the upper parts of the Danube alluvia where the natural network was trained. One habitat area was not found because the plot occurs along a very thin forest line, which is explainable by Sentinel-2's resolution and the mechanism of creating the relevancy map. All other plots were identified correctly, and two examples of habitat areas in the relevancy map are shown in Fig. 18 and Fig. 19, respectively. The first example is situated on the left bank of the Danube River near the village of Vajska. This vegetation plot was used as a starting point for the automatic segmentation algorithm, as shown in Fig. 18. This thin habitat area was segmented with a 91E0 mean relevancy equal to 0.7091. A complex habitat system from the Karađorđevo nature reservation, identified in the relevancy map and segmented by automatic segmentation, is presented in Fig. 19. The starting point for the segmentation was the vegetation plot located on the north border of this area. The entire area of the 91E0 habitat was successfully segmented, while the mosaic of wet meadows and young clear-cuts inside the habitat area was excluded. The mean relevancy of the segmented area is 0.6032, which means correct classification within the 91E0 habitat.
Another studied area was the delta of the Danube River in Romania. It is difficult to explore the region due to the complicated accessibility of most of the area. The character of the landscape means that localities can only be reached by small boats; thus, the importance of creating relevancy maps of such areas is significant. Fig. 20 plots the Saint George Branch of the Danube River, and the relevancy map corresponds very well with the vegetation sampled by plots. In the detail of the Danube delta's branch relevancy map, we can clearly see bright-colored 91E0 habitat and black-colored surrounding swamps and other ecosystems, which indicates correct classification.
Conclusions
The presented results give new perspectives for diverse science disciplines: mathematical modelling, vegetation science and ecology, remote sensing, nature conservation and mapping, and the subsequent delivery of ecosystem services. The identification of plant communities on the scale of, e.g., Natura 2000 habitats using remote sensing has been an open challenge for field ecologists over the last decades. Rapidly developing remote sensing techniques and different data-mining approaches were implemented to monitor land surface types such as agricultural land, water bodies, abandoned land, natural and plantation forests of different types, meadows with various management practices, built-up areas, etc., see e.g. [31,32,33,34,35,36,37]. However, due to the complicated character of the target natural phenomena, it was not possible to reach the detailed scale of Natura 2000 habitats by using existing methodologies and satellite data. Thus, we developed the novel method, the natural numerical network, for that purpose. We introduced its definition in the form of the discretized forward-backward diffusion equation and applied it to the classification of Natura 2000 forest habitats. We also introduced the new concept of relevancy maps, which allow the automatic recognition of Natura 2000 habitat areas in remote sensing data provided by Sentinel-2 optical information and the finding of new appearances of protected habitats via the satellite data. We are not aware of any other method that would identify and explore the Natura 2000 habitats, or similarly detailed plant communities, at such an exact level and accuracy by using remote sensing data.
Acknowledgement
This work was supported by the grants APVV-16-0431, APVV-19-0460 and ESA contracts 4000122575/17/NL/SC and 4000133101/20/NL/SC.
Figure 1: Randomly generated 2D points in three clusters and some links of forward diffusion (blue lines) inside the clusters, and some links of backward diffusion (red lines) between points from different clusters.

Figure 2: Randomly generated 2D points in three clusters with one new observation (black square). The blue lines represent some of the forward diffusion links inside the clusters and between the new observation and all other points. The red lines represent the links of backward diffusion between points from different clusters.

Figure 3: Randomly generated 2D points in three clusters with one new observation (black square). A "δ-diffusion neighborhood" around the new observation (yellow circle), some links of forward diffusion with a non-zero value of the diffusion coefficient g (blue lines), and some links where the diffusion is set to zero (yellow lines) because the points are outside of the δ-diffusion neighborhood.

Figure 4: A dataset with 125 observations to which the forward-backward diffusion natural network is applied. We show the dynamics of the network in time steps n = 0, 1, 2, 5, 8, 14. The observations are classified into 4 clusters and the points in different clusters are distinguished by different colors.

Figure 5: A dataset with 124 observations and one "new" observation (green point) to which the forward-backward diffusion is applied. We show the dynamics of the network in time steps n = 0, 1, 2, 5, 8, 14. The new observation is classified to the "red" cluster. This figure illustrates the learning phase of the classification.

Figure 6: For illustration of the histogram stopping criterion we visualize the 2D grid with spacing h = 0.015625 (grey lines), the marked grid cells (red) in which the frequency is greater than or equal to S_min = 6, the neighborhood of a marked cell which is also examined (yellow subdomains), and further cells colored by their frequencies given in the upper left corner. We see that the cluster was already formed inside the marked cell in the upper right corner, because there are no other points inside the examined neighborhood, while the cluster is not yet formed in the central marked cell, because there are still other points in the examined neighborhood (two cells with a frequency equal to 1).
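The stopping criterion described in this caption can be sketched in a few lines (Python; hypothetical names, with the neighborhood taken as the ring of cells around a marked cell, as suggested by the figure): a cluster counts as formed once a cell holds at least S_min points and the examined neighborhood contains no further points.

```python
import numpy as np

def cluster_formed(points, h=0.015625, s_min=6, ring=1):
    """Histogram stopping check: bin 2D points into a grid of spacing h and
    report a formed cluster once a cell with frequency >= s_min has an
    empty ring of neighboring cells."""
    idx = np.floor(np.asarray(points) / h).astype(int)
    cells, counts = np.unique(idx, axis=0, return_counts=True)
    occupied = {tuple(c): int(n) for c, n in zip(cells, counts)}
    for cell, n in occupied.items():
        if n < s_min:
            continue
        neighbours = sum(occupied.get((cell[0] + di, cell[1] + dj), 0)
                         for di in range(-ring, ring + 1)
                         for dj in range(-ring, ring + 1)
                         if (di, dj) != (0, 0))
        if neighbours == 0:
            return True
    return False
```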
Figure 7: We show the data in the time step n = 0, with four given clusters (various colors), one new observation (green point), the centroids of the given clusters (colored squares) and the distances of the new observation to the centroids of the given clusters (black lines).

Figure 8: The relevancy coefficient R(w) plotted in dependence on R_p(w), with λ = 12 in the definition of the logistic function (21).
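For readability, a logistic mapping of this kind can be sketched as below (Python; the centering of the sigmoid at R_p = 0.5 is our assumption for illustration, while the precise definition is Eq. (21) of the paper):

```python
import numpy as np

def relevancy(r_p, lam=12.0, center=0.5):
    """Map the raw score R_p(w) to a relevancy R(w) in (0, 1) via a logistic
    function; with lam = 12 the transition around `center` is steep."""
    return 1.0 / (1.0 + np.exp(-lam * (np.asarray(r_p) - center)))
```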
Figure 9: The map with the squares covered by the Sentinel-2 data. The left light green square (Sentinel-2 tile number 34UXP) covers Western Slovakia, the middle square (Sentinel-2 tile number 34UCU) covers the south of Central Slovakia and the right square (Sentinel-2 tile number 34UEU) covers the south of Eastern Slovakia.

Figure 11: The subregions of Western Slovakia with segmented areas of protected Natura 2000 habitats. The segmented areas of the 91E0 habitat are plotted by red curves, the 91F0 habitat by blue curves, the 91G0 habitat by yellow curves and the 9110 habitat by purple curves. Together with the segmented areas we also plot squares inside the segmented areas which are used in the network learning phase and in the relevancy maps construction.

Figure 12: Each pair of pictures consists of the Sentinel-2 image (left part) and the relevancy map (right part). A large segmented area is depicted in the top pictures and two small segmented areas are depicted in the bottom pictures. Pictures on the left side depict the relevancy map obtained using the squares A(p, 3) and pictures on the right side depict the relevancy map with squares A(p, 5).

Figure 13: The segmented areas of the 91E0 habitat (top) and the 91F0 habitat (bottom) plotted on the Sentinel-2 image (left) and on the final relevancy map (right).

Figure 14: The area around the Latorica river in Eastern Slovakia, the Sentinel-2 image (left) and the relevancy map (right) with subsequent automatic segmentation of a 91F0 habitat area.

Figure 15: The area around the Rimava river in the south of Central Slovakia, the Sentinel-2 image (left) and the relevancy map (right) with subsequent automatic segmentation of a newly found 91E0 habitat area.

Figure 16: Newly discovered area of the 91E0 protected habitat around the Rimava river, photographs.

Figure 17: The area on the Danube river bank in Austria around Linz, the Sentinel-2 image (left) and the relevancy map (right) with subsequent automatic segmentation of two 91E0 habitat areas.

Figure 18: The area around the Danube river in Western Vojvodina, the Sentinel-2 image (left) and the relevancy map (right) with subsequent automatic segmentation of a 91E0 habitat area.

Figure 19: The special nature reservation "Karadordevo" around the Danube river in Western Vojvodina, the Sentinel-2 image (left) and the relevancy map (right) with the automatically segmented area of the 91E0 habitat.

Figure 20: The delta of the Danube river in Romania, the Sentinel-2 images (left) and details of the relevancy map (right). The red points indicate the 91E0 habitat localities, with photographs given in Fig. 21.

Figure 21: The delta of the Danube river in Romania, photographs.
Table 1: Natura 2000 forest habitats used in the classification.

Table 2: The results of the learning phase on datasets LDS125, LDS125adj and LDS118.

Table 3: The segmented areas of the Natura 2000 habitat 91E0 with the mean value of the relevancy in each final relevancy map.

Table 4: The segmented areas of the Natura 2000 habitat 91F0 with the mean value of the relevancy in each final relevancy map.

Area code                     91E0 habitat   91F0 habitat   91G0 habitat   9110 habitat
91F0_Bazantnica               0.0788         0.6781         0.3776         0.0013
91F0_Bogdalicky_vrch_1        0.1757         0.8159         0.1694         0
91F0_Bogdalicky_vrch_2        0.2456         0.7487         0.2778         0
91F0_Bogdalicky_vrch_3        0.1584         0.8833         0.2594         0
91F0_Bogdalicky_vrch_4        0.2511         0.9068         0.0460         0
91F0_Brestovany_1             0.0537         0.8097         0.4241         0
91F0_Brestovany_2             0.3178         0.6955         0.0457         0
91F0_Brestovany_3             0.2889         0.8301         0.2675         0
91F0_Brestovany_4             0.5076         0.8102         0.0035         0
91F0_Dolnyles                 0.1420         0.7016         0.4337         0
91F0_Feldsky_les_1            0.2880         0.7741         0.0136         0
91F0_Feldsky_les_2            0.1991         0.8693         0.0161         0
91F0_Kopac_1                  0.8349         0.3646         0              0
91F0_Kopac_2                  0.4908         0.6242         0.0421         0
91F0_Kopac_3                  0.0526         0.9502         0              0
91F0_Kopac_4                  0.3927         0.8416         0              0
91F0_Suchohrad_1              0.4707         0.7697         0.0675         0
91F0_Suchohrad_2              0.2324         0.6818         0.1897         0
91F0_Suchohrad_3              0.4620         0.5770         0.0400         0
91F0_Vysoka_pri_Morave_1      0.0858         0.7792         0.3458         0
91F0_Vysoka_pri_Morave_2      0.0693         0.7032         0.2892         0.0004
91F0_Vysoka_pri_Morave_3      0.1795         0.8809         0.0513         0
91F0_Vysoka_pri_Morave_4      0.1535         0.7929         0.3317         0
91F0_Vysoka_pri_Morave_5      0.3218         0.7093         0.4345         0
91F0_Vysoka_pri_Morave_6      0.7894         0.4994         0              0
91F0_Vysoka_pri_Morave_7      0.2338         0.7984         0.0859         0
91F0_Vysoka_pri_Morave_8      0.5009         0.4573         0.0640         0
91F0_Vysoka_pri_Morave_9      0.1236         0.7318         0.2276         0
Table 5: The segmented areas of the Natura 2000 habitat 91G0 with the mean value of the relevancy in each final relevancy map.

Area code                     91E0 habitat   91F0 habitat   91G0 habitat   9110 habitat
91G0_Casta_1                  0              0.0098         0.8076         0.2909
91G0_Casta_2                  0              0.1127         0.7744         0.2064
91G0_Dubova_1                 0.1572         0.1403         0.5290         0
91G0_Dubova_2                 0.0334         0.1036         0.8082         0.0063
91G0_Limbach_1                0              0.2438         0.9026         0
91G0_Limbach_2                0.0385         0.7094         0.6549         0
91G0_Limbach_3                0              0              0.9935         0
91G0_Limbach_4                0              0.1618         0.9733         0
91G0_Limbach_5                0              0.0433         0.9933         0
91G0_Losonec_1                0.0913         0.3749         0.6612         0.0239
91G0_Losonec_2                0.0282         0.1941         0.8678         0.0339
91G0_Losonec_3                0              0.0109         0.8967         0.0587
91G0_Pezinok_1                0.0412         0.0428         0.1943         0.7222
91G0_Pezinok_2                0              0              0.9926         0
91G0_Pezinok_3                0              0.0089         0.9499         0.0809
91G0_Pezinok_4                0.0027         0.0702         0.9153         0.0633
91G0_Pezinok_5                0              0.2264         0.9227         0
91G0_Pezinok_6                0.0796         0.0354         0.7120         0.1048
91G0_Raca_1                   0.1290         0.9346         0.0971         0
91G0_Raca_2                   0              0              0.3056         0.8097
91G0_Raca_3                   0.0734         0.2068         0.8630         0.0165
91G0_Raca_4                   0              0.0260         0.8926         0
91G0_Raca_5                   0.2952         0.3849         0.1116         0
91G0_Raca_6                   0.0258         0.1099         0.9134         0.0050
91G0_Raca_7                   0.3668         0.5320         0.1257         0
91G0_Raca_8                   0              0.7130         0.7517         0
91G0_Raca_9                   0.0085         0              0.9766         0
91G0_Smolenice_1              0.0164         0.0396         0.8445         0.1793
91G0_Smolenice_2              0              0.0000         0.9860         0
91G0_Smolenice_3              0              0.2037         0.8959         0
91G0_Smolenice_4              0              0              0.9758         0.0641
Table 6: The segmented areas of the Natura 2000 habitat 9110 with the mean value of the relevancy in each final relevancy map.

Area code                     91E0 habitat   91F0 habitat   91G0 habitat   9110 habitat
9110_Borinka_1                0.0417         0.0857         0.2324         0.5473
9110_Borinka_2                0              0              0.3016         0.7477
9110_Borinka_3                0.0041         0.0272         0.6455         0.4040
9110_Limbach_1                0              0.1060         0.8554         0.2335
9110_Limbach_2                0              0              0.0332         0.9803
9110_Limbach_3                0              0              0              0.5491
9110_Limbach_4                0              0              0.0003         0.4209
9110_Limbach_5                0              0              0.0848         0.8750
9110_Limbach_6                0              0              0.1501         0.8651
9110_Limbach_7                0.0087         0.0240         0.1488         0.8900
9110_Limbach_8                0.0129         0.0378         0.2263         0.6683
9110_Limbach_9                0.0253         0.0537         0.2309         0.6883
9110_Limbach_10               0              0.0000         0.0076         0.9610
9110_Limbach_11               0.0030         0.0035         0.0141         0.8516
9110_Modra_Piesok_1           0.0465         0.0312         0.2866         0.7078
9110_Modra_Piesok_2           0.0462         0.1169         0.2982         0.6277
9110_Modra_Piesok_3           0              0.0000         0.0175         0.9893
9110_Modra_Piesok_4           0.0413         0.0368         0.0485         0.9425
9110_Pezinok_1                0.0017         0.0007         0.0044         0.7455
9110_Pezinok_2                0              0.0087         0.0377         0.8132
9110_Pezinska_Baba_1          0              0              0.0145         0.7932
9110_Pezinska_Baba_2          0.0028         0.0011         0.0096         0.9372
9110_Pezinska_Baba_3          0.0071         0.0127         0.2394         0.8769
9110_Pezinska_Baba_4          0              0.0141         0.0661         0.9621
9110_Pezinska_Baba_5          0.0093         0.0039         0.0842         0.9497
9110_Raca_1                   0              0              0.0133         0.9364
9110_Raca_2                   0.0034         0.2190         0.7639         0.0372
9110_Raca_3                   0              0              0.0682         0.9609
9110_Raca_4                   0.0402         0.1584         0.6640         0.3444
9110_Raca_5                   0.0250         0.1000         0.2317         0.4714
9110_Smolenice                0              0              0.1020         0.8774
Figure 10: The subregions of Western Slovakia with segmented areas of protected Natura 2000 habitats. The segmented areas of the 91E0 habitat are plotted by red curves, the 91F0 habitat by blue curves, the 91G0 habitat by yellow curves and the 9110 habitat by purple curves.
References

[1] S. E. Dreyfus, Artificial neural networks, back propagation, and the Kelley-Bryson gradient procedure, Journal of Guidance Control Dynamics 13 (5) (1990) 926-928. doi:10.2514/3.25422.

[2] B. C. Csaji, H. Eikelder, Approximation with artificial neural networks, Master's thesis, Eötvös Lorand University (2001).

[3] I. Goodfellow, Y. Bengio, A. Courville, Deep Learning, MIT Press, 2016, http://www.deeplearningbook.org.

[4] E. Haber, L. Ruthotto, Stable architectures for deep neural networks, Inverse Problems 34 (1) (2018). doi:10.1088/1361-6420/aa9a90.

[5] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778. doi:10.1109/CVPR.2016.90.

[6] B. Chang, L. Meng, E. Haber, L. Ruthotto, D. Begert, E. Holtham, Reversible architectures for arbitrarily deep residual neural networks, in: 32nd AAAI Conference on Artificial Intelligence, AAAI 2018, Vol. 32, 2018, pp. 2811-2818. URL https://ojs.aaai.org/index.php/AAAI/article/view/11668

[7] P. Liu, A survey of remote-sensing big data, Frontiers in Environmental Science 3 (2015) 45. doi:10.3389/fenvs.2015.00045.

[8] C. Randin, M. Ashcroft, J. Bolliger, J. Cavender-Bares, N. Coops, S. Dullinger, T. Dirnböck, S. Eckert, E. Ellis, N. Fernández, G. Giuliani, A. Guisan, W. Jetz, S. Joost, D. N. Karger, J. Lembrechts, J. Lenoir, M. Luoto, X. Morin, D. Payne, Monitoring biodiversity in the anthropocene using remote sensing in species distribution models, Remote Sensing of Environment 239 (2020) 111626. doi:10.1016/j.rse.2019.111626.

[9] C. Corbane, S. Lang, K. Pipkins, S. Alleaume, M. Deshayes, V. Millán, T. Strasser, J. Vanden Borre, T. Spanhove, M. Förster, Remote sensing for mapping natural habitats and their conservation status - new opportunities and challenges, International Journal of Applied Earth Observation and Geoinformation 37 (2015) 7-16.

[10] A. Lausch, M. Heurich, P. Magdon, D. Rocchini, S. Karsten, J. Bumberger, D. King, A Range of Earth Observation Techniques for Assessing Plant Diversity, Remote Sensing of Plant Biodiversity, 2020, pp. 309-348. doi:10.1007/978-3-030-33157-3_13.

[11] H. Ullerud, A. Bryn, R. Halvorsen, L. Hemsing, Consistency in land-cover mapping: Influence of field workers, spatial scale and classification system, Applied Vegetation Science 21 (2) (2018) 278-288. doi:10.1111/avsc.12368.

[12] M. Chi, A. Plaza, J. Benediktsson, Z. Sun, J. Shen, Y. Zhu, Big data for remote sensing: Challenges and opportunities, Proceedings of the IEEE 104 (11) (2016) 2207-2219. doi:10.1109/JPROC.2016.2598228.

[13] D. Rocchini, V. Petras, A. Petrasova, N. Horning, L. Furtkevicova, M. Neteler, B. Leutner, M. Wegmann, Open data and open source for remote sensing training in ecology, Ecological Informatics 40 (2017) 57-61. doi:10.1016/j.ecoinf.2017.05.004.

[14] European Environmental Agency, The Natura 2000 protected areas network, https://www.eea.europa.eu/themes/biodiversity/natura-2000/the-natura-2000-protected-areas-network (2020).

[15] State Nature Conservation SR, Natura 2000, http://www.sopsr.sk/natura/index1.php?p=3&lang=sk (2020).

[16] M. Bock, P. Xofis, J. Mitchley, G. Rossner, M. Wissen, Object-oriented methods for habitat mapping at multiple scales - case studies from northern Germany and Wye Downs, UK, Journal for Nature Conservation 13 (2-3) (2005) 75-89. doi:10.1016/j.jnc.2004.12.002.

[17] J. Vanden Borre, D. Paelinckx, S. Mucher, L. Kooistra, B. Haest, G. De Blust, A. M. Schmidt, Integrating remote sensing in Natura 2000 habitat monitoring: Prospects on the way forward, Journal for Nature Conservation 19 (2) (2011) 116-125. doi:10.1016/j.jnc.2010.07.003.

[18] European Space Agency, Sentinel 2, https://sentinel.esa.int/web/sentinel/missions/sentinel-2/data-products (2020).

[19] A. Bondy, U. S. R. Murty, Graph Theory, 1st Edition, Springer-Verlag London, 2008.

[20] J. Friedman, J.-P. Tillich, Calculus on graphs, CoRR cs.DM/0408028 (2004).

[21] P. Perona, J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Transactions on Pattern Analysis and Machine Intelligence 12 (7) (1990) 629-639. doi:10.1109/34.56205.

[22] J. Friedman, J.-P. Tillich, Wave equations for graphs and the edge-based laplacian, Pacific Journal of Mathematics 216 (2) (2004) 229-266. doi:10.2140/pjm.2004.216.229.

[23] K. Mikula, N. Ramarosy, Semi-implicit finite volume scheme for solving nonlinear diffusion equations in image processing, Numerische Mathematik 89 (3) (2001) 561-590. doi:10.1007/PL00005479.

[24] K. Mikula, J. Urbán, M. Kollár, M. Ambroz, I. Jarolímek, J. Šibik, M. Šibíková, Semi-automatic segmentation of Natura 2000 habitats in Sentinel-2 satellite images by evolving open curves, Discrete and Continuous Dynamical Systems - Series S 14 (3) (2021) 1033-1046. doi:10.3934/dcdss.2020231.

[25] M. Ambroz, M. Kollar, K. Mikula, Semi-implicit scheme for semi-automatic segmentation in NaturaSat software, in: ALGORITMY 2020 - 21st Conference on Scientific Computing, Vysoké Tatry - Podbanské, Slovakia, September 10-15, 2020, Proceedings of contributed papers, Vydavateľstvo SPEKTRUM STU, ISBN 978-80-227-5032-5, 2020, pp. 171-180.

[26] Copernicus, Copernicus: Europe's eyes on Earth, https://www.copernicus.eu/en/about-copernicus/copernicus-detail (2020).

[27] Earth Observing System, Normalized difference vegetation index, https://eos.com/ndvi/ (2020).

[28] I. T. Jolliffe, Principal Component Analysis, 2nd Edition, Springer-Verlag New York, 2002. doi:10.1007/b98835.

[29] A. Rencher, Methods of Multivariate Analysis, Wiley-Interscience, 2002. doi:10.1002/0471271357.

[30] K. Mikula, J. Urbán, M. Kollár, M. Ambroz, I. Jarolímek, J. Šibik, M. Šibíková, An automated segmentation of Natura 2000 habitats from Sentinel-2 optical data, Discrete and Continuous Dynamical Systems - Series S 14 (3) (2021) 1017-1032. doi:10.3934/dcdss.2020348.

[31] S. E. Franklin, Remote Sensing for Sustainable Forest Management, 1st Edition, Lewis Publishers/CRC Press, 2001.

[32] M. Fagan, R. Defries, S. Sesnie, P. Arroyo, C. Soto-Castro, A. Singh, P. Townsend, R. Chazdon, Mapping species composition of forests and tree plantations in northeastern Costa Rica with an integration of hyperspectral and multitemporal Landsat imagery, Remote Sensing 7 (5) (2015) 5660-5696. doi:10.3390/rs70505660.

[33] G. Vaglio Laurin, N. Puletti, W. Hawthorne, V. Liesenberg, P. Corona, D. Papale, Q. Chen, R. Valentini, Discrimination of tropical forest types, dominant species, and mapping of functional guilds by hyperspectral and simulated multispectral Sentinel-2 data, Remote Sensing of Environment 176 (2016) 163-176. doi:10.1016/j.rse.2016.01.017.

[34] P.-T. Noi, M. Kappas, Comparison of random forest, k-nearest neighbor, and support vector machine classifiers for land cover classification using Sentinel-2 imagery, Sensors (Switzerland) 18 (1) (2018) 18. doi:10.3390/s18010018.

[35] J. J. Erinjery, M. Singh, R. Kent, Mapping and assessment of vegetation types in the tropical rainforests of the Western Ghats using multispectral Sentinel-2 and SAR Sentinel-1 satellite imagery, Remote Sensing of Environment 216 (2018) 345-354. doi:10.1016/j.rse.2018.07.006.

[36] K. Cheng, J. Wang, Forest type classification based on integrated spectral-spatial-temporal features and random forest algorithm - a case study in the Qinling Mountains, Forests 10 (7) (2019) 559. doi:10.3390/f10070559.

[37] A. Waśniewski, A. Hoscilo, B. Zagajewski, D. Mouketou-Tarazewicz, Assessment of Sentinel-2 satellite images and random forest classifier for rainforest mapping in Gabon, Forests 11 (9) (2020) 941. doi:10.3390/f11090941.
| [] |
[] | [] | [] | [] | Heating a dipolar quantum fluid into a solid REVIEWER COMMENTS</B> Reviewer #1 (Remarks to the Author):This is an interesting and timely manuscript of finite-temperature effects in dipolar superfluids. Such systems of ultracold quantum systems are being studied in detail, due to their combination of interesting properties, such as the emergence of both superfluidity and supersolidity, which makes this a topic of rather broad interest. The present work uses approporiate methodology to make significant advances to our understanding of the role of finite temperature on the phase diagram of such systems: specifically this works reports (theoretically) on the finite-temperature phase diagram of such systems, which, as temperature is increased, reveals the emergence of supersolidity (rather than superfluidity) for smaller condensate atom numbers. This is then shown to be consistent with experimental measurements, which acts as further validation of the obtained results, and helps shed more light into previous experimental observations. This work is well written, accessible to a broad audience, and sufficiently novel and timely to potentially merit, in some form, publication in Nature Communications. | 10.1038/s41467-023-37207-3 | null | 257,928,704 | 2209.00335 | 9fafae6e1dd1a87951777702d335f7944c15be3b |
Open Access This file is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. In the cases where the authors are anonymous, such as is the case for the reports of anonymous peer reviewers, author attribution should be to 'Anonymous Referee' followed by a clear attribution to the source work. The images or other third party material in this file are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit
Heating a dipolar quantum fluid into a solid

REVIEWER COMMENTS

Reviewer #1 (Remarks to the Author):

This is an interesting and timely manuscript on finite-temperature effects in dipolar superfluids. Such ultracold quantum systems are being studied in detail due to their combination of interesting properties, such as the emergence of both superfluidity and supersolidity, which makes this a topic of rather broad interest. The present work uses appropriate methodology to make significant advances to our understanding of the role of finite temperature on the phase diagram of such systems: specifically this work reports (theoretically) on the finite-temperature phase diagram of such systems, which, as temperature is increased, reveals the emergence of supersolidity (rather than superfluidity) for smaller condensate atom numbers. This is then shown to be consistent with experimental measurements, which acts as further validation of the obtained results, and helps shed more light on previous experimental observations. This work is well written, accessible to a broad audience, and sufficiently novel and timely to potentially merit, in some form, publication in Nature Communications.
Below, I address some issues that the authors should further comment upon:
1. Can the authors clarify if the experimental data points shown in Fig. 1(b) [and Fig. 3] are old experimental measurements (e.g. taken from Ref. [41]) which have been re-analysed/presented in a manner facilitating direct comparison with presented numerical findings? This is my current understanding - but, assuming so, this is not very clearly stated, neither in the main manuscript nor in the Methods section. Assuming that to be so, please make such connections more explicit. If that is not the case, i.e. if this is newly-obtained data inspired by previous experiments/existing apparatus, then this should also be clarified. In principle, in such work I would have preferred (and partly expected) to see more direct experimental evidence for the postulated transition, i.e. by means of increasing T at (nearly) fixed N_c. Assuming reported data to be existing ones, I understand why the authors have not done this here, and assume the authors will aim to perform such exciting experimental study in the near future.

2. Schematic in Fig. 1(c) is indeed very nice and relevant, but perhaps the authors could clarify to which (if any) points in Fig. 1(b) this corresponds?
3. In the Supplementary Info (which is very useful), I would have liked to have seen more evidence / a better discussion in the context of Fig. S2: For example, (i) why is Fig. S2(a) presented in an "inverted" x-axis manner to Fig. 1(a); (ii) better explanations should be given about what is plotted in Fig. S2(a), i.e. what the different colours represent, etc.; (iii) I would have expected to see both (a)-(b) plots shown in the same axes as in the main paper; and (iv) Fig. S2(b) should be compared more clearly to Fig. 3 [e.g. the shaded area in S2(b) does not correspond to the shaded area in Fig. 3, but rather to the (highly relevant) numerical calculations - but, on first inspection, such presentation is rather confusing].

4. A minor point: Ref. [2] in SI is incomplete.
Reviewer #2 (Remarks to the Author):
The manuscript stems from an international theory-experiment collaboration. It describes how, unexpectedly, heating a dipolar superfluid from near-zero temperatures can induce a phase transition to a supersolid state with a broken translational symmetry. The manuscript presents new results on a topic which is of general interest for the public. Furthermore, Fig. 1(b) and Fig. 3 present an impressive comparison between experimental measurements and theoretical modelling. For all these reasons a publication of the manuscript in Nature Communications is justified.
However, there is one point which needs clarification. The main new feature of the manuscript is the discussion of the impact of thermal fluctuations upon the formation of a supersolid. And to this end a precise measurement of the temperature in the laboratory is an indispensable prerequisite:

1) Do the experimental data shown in Fig. 1(b) stem originally from Ref. [41]? If the answer is yes, then the caption of Fig. 1(b) should mention that explicitly.
2) According to my knowledge Ref. [41] and this manuscript are the first ones to report about temperature measurements for supersolids. Therefore, it should be indicated briefly how the temperature was measured. Maybe the authors could indicate the difficulty in previous temperature measurements.
Reviewer #3 (Remarks to the Author):
In "Heating a quantum fluid into a solid" J. Sánchez-Baena and the co-authors investigate the superfluidsupersolid phase diagram of ultracold dipolar (dysprosium) atoms. They find that contrary to intuition an increase in temperature can lead to a superfluid-to-supersolid phase transition. They can explain this behavior with thermal fluctuations of the system. The work thus presents a theoretical framework to understand the unexpected experimental results of the earlier work ref. [41]. The results are noteworthy as they advance our understanding on the impact of microscopic fluctuations on the macroscopic state of matter. They take the established knowledge on quantum fluctuations of condensates and expand it by additionally including thermal fluctuations, the necessary step for the study of finite-temperature systems.
The manuscript is generally well written, easy to read and of refreshing compactness. There are only a few places where some modifications to the text would increase the clarity of the presentation and one point concerning the data analysis which I would like to ask the authors to address.
1. page 1, first paragraph: In the sentence "This finding sheds light on recent experimental observations" the appropriate references to the intended experiments should be provided.

2. page 2, equation (1): A curiosity, what is the value of θ for the experiment at hand? How does it affect the results?

3. Figure 1(b): This Figure represents the central result of the present work. Unfortunately, it is in my opinion also the most "unfortunate" of them all.

3a. First I do not understand why the vertical axis is located on the right side. At first I thought that it would be an alternative scale to the vertical axis of part (a), but I think that is not true. So placing the axis on the left would avoid this ambiguity.
3b. My second concern is the blue-orange modulation color scale. I understand that the color transition point is probably chosen to support the analysis result but having absolutely no color variation in the range, say, 0.2-0.6 beats the purpose of presenting a color scale in the first place. I ask the authors to choose a color scale that allows for a real estimation of the measured modulation strength. (There are ample bi-color scales that still have some variation of the color in each sector.)

3c. Generally, the data points are those of Fig. 4 in ref. [41]. This should be made clearer, as for me I only really understood the present Figure after seeing the "original" data. It was also beneficial to appreciate the observed phase transition and to understand the choice of the color scale. Ideally, this data (modulation vs. N_c) would be reproduced in the present manuscript.
3d. It is not clear to me how to especially translate between the horizontal axes of parts (a) and (b). How does N_c relate to a_d/a? The point I am probably asking: Which part of the phase space presented in (a) is probed in (b)? It would be really nice if the corresponding area could be indicated in (a), even though I feel that we are probably in quite a different parameter range there.

4. page 3, bottom of left column: Considering the more general readership of Nature Communications, a few more words on how exactly the "thermal softening" (that is a shift of the minimum position towards larger a_d k_z, correct?) of the roton mode drives the "instability of the superfluid" would be appreciated.

4. page 3, first paragraph of right column: Is "focusing" nonlinearity an established expression in the field? To me it felt rather fancy and required some extra thinking to understand that basically an "attractive" force was intended. Why so complicated, or am I missing some additional point?

5. page 5, last sentence: This is my only concern of the present work that goes beyond cosmetics but concerns the experimental method. The authors state that due to technical reasons systematic errors in the measured densities occur. That is perfectly fine. What is not so fine is to just drop those data where the densities are negative (and which would clearly appear non-physical in the paper). However, it is not that _only_ this data is affected by the technical limitations. I expect _all_ data to deviate from the actual density to some degree, just that there it is less apparent. So just dropping obviously wrong data and leaving the rest as-is does not seem appropriate to me. Instead, one should try to estimate the error in the data and present all the data together with its error. I understand that this easily leads to a mess in the representation of Fig. 3, but I believe that this is a challenge that should be accepted.
To conclude, the authors present a very nice work that explains previously puzzling experimental results and that widens our understanding of fluctuations in general and dipolar quantum fluids in particular. As such I am, after the authors could address my concerns stated above, in favor of publication of the manuscript in Nature Communications.

*****************************************************************
Response to Reviewer #1
*****************************************************************
We would like to thank the reviewer for the detailed assessment of our manuscript and are very happy about the positive feedback on our results. We also thank the reviewer for the helpful remarks, which we have all addressed as detailed below.

*****************************************************************
1. Can the authors clarify if the experimental data points shown in Fig. 1(b) [and Fig. 3] are old experimental measurements (e.g. taken from Ref. [41]) which have been re-analysed/presented in a manner facilitating direct comparison with presented numerical findings? This is my current understanding - but, assuming so, this is not very clearly stated, neither in the main manuscript nor in the Methods section. Assuming that to be so, please make such connections more explicit. If that is not the case, i.e. if this is newly-obtained data inspired by previous experiments/existing apparatus, then this should also be clarified. In principle, in such work I would have preferred (and partly expected) to see more direct experimental evidence for the postulated transition, i.e. by means of increasing T at (nearly) fixed N_c. Assuming reported data to be existing ones, I understand why the authors have not done this here, and assume the authors will aim to perform such exciting experimental study in the near future.
----------------------------------------------------------------------------
As the reviewer correctly describes, the experimental data has been obtained from the experiment reported in Ref. [41] (now Ref. [8] in the revised manuscript). In order to compare to our theory and draw conclusions about the predicted transition, we have reanalyzed the measurement and obtained the data presented in the main text and in the supplementary material. For example, this includes the axial density shown in Fig. 3, which has not been presented in Ref. [8] and has now been obtained from the data of the experiment reported in [8].
We thank the reviewer for pointing out that this can be stated more clearly and have revised the manuscript accordingly. Specifically, we now cite the corresponding reference much earlier, already in the first paragraph, and cite Ref. [8] in the caption of Fig. 1 when discussing the presented data. While we have already tried to refer to Ref. [8] when discussing the data throughout the manuscript, we have added further references and additional text to the paragraph describing the theory-experiment comparisons in Fig. 1 and Fig. 3.
We agree with the reviewer that temperature scans at fixed atom numbers would be an exciting measurement to trace the predicted transition more accurately. Currently, the variation of the temperature and atom number results from the evaporative cooling process and cannot be controlled independently, which makes it inherently difficult to scan the phase diagram in a different manner.
However, we completely agree that implementing different approaches to vary the temperature at constant atom number would be an exciting experiment that should be pursued in the future. We thank the reviewer for pointing this out and now mention this excellent point in the concluding paragraph.

*****************************************************************
2. Schematic in Fig. 1(c) is indeed very nice and relevant, but perhaps the authors could clarify to which (if any) points in Fig. 1(b) this corresponds?
----------------------------------------------------------------------------
The drawing of Fig. 1(c) is only meant to illustrate the underlying process schematically and does not correspond to actual data of panels (a) or (b). This is now stated more explicitly in the figure caption.

*****************************************************************
3. In the Supplementary Info (which is very useful), I would have liked to have seen more evidence / a better discussion in the context of Fig. S2: For example, (i) why is Fig. S2(a) presented in an "inverted" x-axis manner to Fig. 1(a); (ii) better explanations should be given about what is plotted in Fig. S2(a), i.e. what the different colours represent, etc.; (iii) I would have expected to see both (a)-(b) plots shown in the same axes as in the main paper; and (iv) Fig. S2(b) should be compared more clearly to Fig. 3 [e.g. the shaded area in S2(b) does not correspond to the shaded area in Fig. 3, but rather to the (highly relevant) numerical calculations - but, on first inspection, such presentation is rather confusing].
----------------------------------------------------------------------------
(i): That is an excellent point and we have now changed the axis in Fig. S2 to match the one in Fig. 1.
(ii): We have modified the caption of Fig. S2 to better explain the colored areas in Fig. S2(a).
(iii): We now use the same axis limits in Fig. S2 as we use in Fig. 1 and Fig. 3 of the main text.
(iv): We have modified the figure accordingly. We have removed the colored shading and now use different line styles for the different calculation results.

*****************************************************************
4. A minor point: Ref. [2] in SI is incomplete.
----------------------------------------------------------------------------
We thank the reviewer for spotting this mistake and have corrected the reference.

*****************************************************************
Response to Reviewer #2
*****************************************************************
We would like to thank the reviewer for the thorough assessment of our manuscript. We are very happy about the reviewer's positive outlook on our work, finding that publication in Nature Communications is justified. The reviewer has raised two very good points, which we have both addressed as detailed below.
*****************************************************************
1. Do the experimental data shown in Fig. 1(b) stem originally from Ref. [41]? If the answer is yes, then the caption of Fig. 1(b) should mention that explicitly.
----------------------------------------------------------------------------
We thank the reviewer for pointing out that this should be stated more clearly and have modified the caption of Fig. 1(b) accordingly. In addition, we now cite the corresponding reference much earlier, already in the first paragraph, and have added further references and additional text to the paragraph describing the theory-experiment comparisons in Fig. 1 and Fig. 3.

*****************************************************************
2. According to my knowledge, Ref. [41] and this manuscript are the first ones to report temperature measurements for supersolids. Therefore, it should be indicated briefly how the temperature was measured. Maybe the authors could indicate the difficulty in previous temperature measurements.
----------------------------------------------------------------------------
We thank the reviewer for the suggestion and have now added a description of the temperature measurements in the Methods Section. In brief, to extract T and N_th, we record, for each set of parameters (i.e., for each point in Fig. 1b), an absorption image of the expanded atomic cloud using horizontal imaging. This is the standard method in quantum-gas experiments to extract T and N_th. The atomic distribution in the absorption images is bimodal, with a dense modulated central peak corresponding to the condensed atoms in the supersolid and a broad contribution given by the thermal component. The latter can be fitted using a 2D Bose-enhanced Gaussian function. From this fit, we estimate the temperature of the cloud and the number of thermal atoms N_th.
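To make the fitting procedure concrete, here is a minimal sketch of such a thermometry fit (Python; the function names, the series truncation and the fit call are our own illustration, not the authors' analysis code). It models the thermal wings with a Bose-enhanced Gaussian, i.e. a Gaussian profile passed through the order-2 polylogarithm, and indicates how T follows from the fitted widths in time-of-flight:

```python
import numpy as np
from scipy.optimize import curve_fit  # used in the commented fit call below

def g2(u, kmax=50):
    """Order-2 polylogarithm Li_2(u) via a truncated series (Bose enhancement)."""
    k = np.arange(1, kmax + 1)
    return np.sum(np.asarray(u)[..., None] ** k / k**2, axis=-1)

def bose_gaussian_2d(coords, amp, x0, y0, sx, sy, offset):
    """Bose-enhanced Gaussian for the thermal wings of an expanded cloud."""
    x, y = coords
    gauss = np.exp(-(x - x0) ** 2 / (2 * sx**2) - (y - y0) ** 2 / (2 * sy**2))
    return amp * g2(gauss) / g2(1.0) + offset

# Hypothetical usage on an optical-density image `od` with pixel grids x, y
# (central condensate/supersolid peak masked out before the fit):
#   popt, _ = curve_fit(bose_gaussian_2d, (x.ravel(), y.ravel()), od.ravel(), p0=p0)
# For a long expansion time t_tof the temperature follows from the width,
#   k_B * T ~ m * sigma**2 / t_tof**2,
# and N_th from integrating the fitted thermal distribution.
```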
*****************************************************************
Response to Reviewer #3
*****************************************************************
We would like to thank the reviewer for the careful assessment of our manuscript and are happy about the reviewer's feedback and conclusion about publication in Nature Communications. Also, we very much appreciate the detailed and constructive remarks by the reviewer. As detailed below, we have addressed all of the reviewer's points and revised the manuscript accordingly.

*****************************************************************
1. page 1, first paragraph: In the sentence "This finding sheds light on recent experimental observations" the appropriate references to the intended experiments should be provided.
----------------------------------------------------------------------------
We thank the reviewer for pointing this out and have added the respective reference.

*****************************************************************
2. page 2, equation (1): A curiosity, what is the value of θ for the experiment at hand? How does it affect the results?
----------------------------------------------------------------------------
The dipoles are aligned perpendicular to the axis of the elongated trap geometry considered in the theory and realized in the experiment. More specifically, the dipoles are aligned along the y-axis, which then defines the angle θ for a given three-dimensional distance vector r between two atoms.

In the experiment, the dipole orientation is set by the external magnetic field, which is aligned along the y-direction. The atomic cloud in the cigar-shaped trap is elongated along the z-axis. We agree with the reviewer that this offers useful information and have now added additional text to specify the experimental trap geometry and the considered dipole orientation. The orientation of the dipoles along the y-axis is also mentioned in the second paragraph of the Methods section.

*****************************************************************
3. Figure 1(b): This Figure represents the central result of the present work. Unfortunately, it is in my opinion also the most "unfortunate" of them all.

3a. First I do not understand why the vertical axis is located on the right side. At first I thought that it would be an alternative scale to the vertical axis of part (a), but I think that is not true. So placing the axis on the left would avoid this ambiguity.
----------------------------------------------------------------------------
We have moved the axis label to the left vertical axis in Fig. 1b.

*****************************************************************
3b. My second concern is the blue-orange modulation color scale. I understand that the color transition point is probably chosen to support the analysis result but having absolutely no color variation in the range, say, 0.2-0.6 beats the purpose of presenting a color scale in the first place. I ask the authors to choose a color scale that allows for a real estimation of the measured modulation strength.
----------------------------------------------------------------------------
The (somewhat unusual) color scale has been chosen to focus on the transition between the unmodulated and modulated state. As the reviewer points out correctly, this necessarily suppresses information about the detailed values of the modulation contrast and we certainly agree that this would present interesting additional information. We have therefore removed the color scale in Fig.1b and now show unmodulated states by blue points and modulated states by orange points. In fact, this matches the procedure to obtain the theoretical transition line and, thus, also simplifies the comparison between theory and experiment.
In order to show the explicit values of the modulation, we have now added a figure and corresponding discussion to the supplementary material, where we compare the experimental values and theoretical prediction for the modulation contrast.
We thank the reviewer for raising this excellent point and hope that the corresponding revisions have improved the presentation of the results.

*****************************************************************
3c. Generally, the data points are those of Fig. 4 in ref. [41]. This should be made clearer as for me I only really understood the present Figure after seeing the "original" data. It was also beneficial to appreciate the observed phase transition and to understand the choice of the color scale. Ideally, this data (modulation vs. N_c) would be reproduced in the present manuscript.
----------------------------------------------------------------------------
Following the reviewer's suggestion, we have revised the caption of Fig. 1 and several places of the main text to state this more clearly. Specifically, we now cite the corresponding reference much earlier, already in the first paragraph, and cite Ref. [8] in the caption of Fig. 1 when discussing the presented data.
While we have already tried to refer to Ref. [8] when discussing the data throughout the manuscript, we have added further references and additional text to the paragraph describing the theory-experiment comparisons in Fig. 1 and Fig. 3.
Moreover, the mentioned data is now presented in the supplementary information, where we show the modulation contrast in theory and experiment (see point 3 above), which we hope now brings more clarity.

*****************************************************************
3d. It is not clear to me how to especially translate between the horizontal axes of parts (a) and (b). How does N_c relate to a_d/a? The point I am probably asking: Which part of the phase space presented in (a) is probed in (b)? It would be really nice if the corresponding area could be indicated in (a) even though I feel that we are probably in quite a different parameter range there.
----------------------------------------------------------------------------
The two horizontal axes cannot be directly related to each other. In panel (a), we consider an infinitely elongated condensate and keep the chemical potential fixed, while varying a_d/a. Panel (b) is for a finite-sized atomic cloud where a_d/a is kept fixed while varying the number of condensed atoms. It can be noted that the experimental value of a_d/a = 1.46 (now given explicitly in the caption) in panel (b) is covered in the phase diagram of (a), such that both panels cover similar parameter regions. Generally, changing a_d/a as well as changing N_c both changes the potential energy contribution of the dipole-dipole interaction and therefore has a similar effect on the phase diagram as seen in the figure. However, they are independent parameters and cannot be related to each other through simple scaling. Most importantly, though, both panels show that a modulated state emerges upon raising the temperature.

*****************************************************************
4a. page 3, bottom of left column: Considering the more general readership of Nature Communications, a few more words on how exactly the "thermal softening" (that is a shift of the minimum position towards larger a_d k_z, correct?) of the roton mode drives the "instability of the superfluid" would be appreciated.
----------------------------------------------------------------------------
Here, mode softening refers to the lowering of the minimum energy, i.e. the roton energy. As this energy minimum decreases with increasing temperature, the modulated state (corresponding to the roton momentum at the energy minimum) becomes energetically more favorable, which eventually leads to the observed phase transition. We thank the reviewer for pointing out that such further discussions would be useful and have added corresponding text to the mentioned paragraph in the revised manuscript.

*****************************************************************
4b. page 3, first paragraph of right column: Is "focusing" nonlinearity an established expression in the field? To me it felt rather fancy and required some extra thinking to understand that basically an "attractive" force was intended. Why so complicated, or am I missing some additional point?
----------------------------------------------------------------------------
The term "focusing nonlinearity" is commonly used in the description of nonlinear wave dynamics. The reviewer is correct that a focusing nonlinearity can arise from an attractive force and would generally act similarly to an attractive interaction. However, there is no exact equivalence since the precise dependence of H_th on density is not identical to the meanfield energy of attractively interacting particles. For this reason, we used the more general term of a "focusing nonlinearity".
Following the reviewer's suggestion, we have revised the corresponding sentence to simplify the formulation.

*****************************************************************
5. page 5, last sentence: This is my only concern of the present work that goes beyond cosmetics but concerns the experimental method. The authors state that due to technical reasons systematic errors in the measured densities occur. That is perfectly fine. What is not so fine is to just drop those data where the densities are negative (and which would clearly appear non-physical in the paper). However, it is not that only this data is affected by the technical limitations. I expect all data to deviate from the actual density to some degree, just that there it is less apparent. So just dropping obviously wrong data and leaving the rest as-is does not seem appropriate to me. Instead, one should try to estimate the error in the data and present all the data together with its error. I understand that this easily leads to a mess in the representation of Fig. 3, but I believe that this is a challenge that should be accepted.
----------------------------------------------------------------------------
We thank the reviewer for raising this question and pointing out that more details and explanation would be helpful. Following the reviewer's remarks, we now provide such additional information in the revised manuscript.
The appearance of a seemingly negative optical density is common to in-situ imaging measurements and typically occurs for small atomic densities near high-density regions. Correspondingly, in our case, such spurious negative densities can occur in between droplets of the supersolid state. There are different technical reasons that can cause negative densities in an individual image, including fluctuations of the background light, small misalignment of the high-resolution objective (leading to aberration or interference), and, most notably, lensing effects.
We probe the in-situ density by illuminating the atomic cloud with far-detuned laser light and performing phase contrast imaging. Hereby, the laser detuning is chosen such that optical absorption is greatly suppressed while the phase shift due to the index of refraction remains significant and can be used to measure the density profile (see e.g. Ref. [54]). Therefore, the real part of the refractive index can lead to small but finite lensing effects in the presence of large density gradients, as is the case in the supersolid phase. The resulting refraction of the probe beam can lead to a slight distortion of the images that leaves the probing of the high-density regions largely unaffected but can cause spurious negative values of the measured density in the low-density regions between the droplets. This is a commonly encountered effect, but difficult to correct without accurate prior knowledge of the actual density profile. As in previous works, we have therefore chosen to set negative density values in single images to zero, which yields an average density profile as was shown in Fig. 3 of the previously submitted manuscript.
We thank the reviewer for the above remark and fully agree that displaying the bare data, without correcting for negative-density errors, would offer a more direct comparison. Therefore, we followed the reviewer's advice and now show this data, including technical imperfections, along with statistical errors in Fig. 3. We have also added a dedicated part about the optical probing (in-situ and time-of-flight) of the BEC in the Methods Section.
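A minimal sketch of the two processing choices (Python; hypothetical names, not the actual analysis pipeline) makes the difference explicit: the earlier treatment clipped negative optical densities shot by shot, while the revised figure averages the bare data and reports the statistical spread:

```python
import numpy as np

def average_profiles(images, clip_negative=False):
    """Average a stack of in-situ density images; return mean and standard error.

    clip_negative=True mimics the earlier processing (negative optical
    densities set to zero in each shot); False keeps the bare data, so that
    lensing artefacts show up in the error bars instead of being removed.
    """
    data = np.asarray(images, dtype=float)
    if clip_negative:
        data = np.clip(data, 0.0, None)  # shot-by-shot clipping
    mean = data.mean(axis=0)
    sem = data.std(axis=0, ddof=1) / np.sqrt(data.shape[0])
    return mean, sem
```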
REVIEWERS' COMMENTS
Reviewer #1 (Remarks to the Author):
The authors have appropriately clarified my previous queries, and responded satisfactorily to similar/further queries by the other reviewers.
In my opinion, the revised manuscript can be published as is.
Reviewer #3 (Remarks to the Author):
In the revised manuscript "Heating a quantum fluid into a solid" the authors quite thoroughly addressed the concerns I raised in my first review of their work. In particular I am very happy about the improved Fig. 1 in conjunction with the new Fig. S3, and the improved discussion of the uncertainties of Fig. 3. Together with the other improvements throughout the manuscript that should enhance the accessibility of the work, I am now very much in favor of recommending the manuscript for swift publication in Nature Communications.
We thank all reviewers for the positive feedback on our work and for recommending publication without additional requests and comments.
| [] |
[
"Calibrating Cross-modal Features for Text-Based Person Searching",
"Calibrating Cross-modal Features for Text-Based Person Searching"
] | [
"Donglai Wei \nFudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n\n",
"Sipeng Zhang zhangsipeng@megvii.com \nFudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n\n",
"Tong Yang yangtong@megvii.com \nFudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n\n",
"Yang Liu yangliu20@fudan.edu.cn \nFudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n\n",
"Jing Liu jingliu19@fudan.edu.cn \nFudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n\n"
] | [
"Fudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n",
"Fudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n",
"Fudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n",
"Fudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n",
"Fudan University\nMEGVII Technology\nMEGVII Technology\nFudan University\nFudan University\n"
] | [] | Text-Based Person Searching (TBPS) aims to identify the images of pedestrian targets from a large-scale gallery with a given textual caption. For the cross-modal TBPS task, it is critical to obtain well-distributed representations in the common embedding space to reduce the inter-modal gap. Furthermore, it is also essential to learn detailed image-text correspondence efficiently to discriminate similar targets and enable fine-grained target search. To address these challenges, we present a simple yet effective method that calibrates cross-modal features from these two perspectives. Our method consists of two novel losses that provide fine-grained cross-modal features. The Sew calibration loss takes the quality of textual captions as guidance and aligns features between the image and text modalities. On the other hand, the Masking Caption Modeling (MCM) loss leverages a masked-caption prediction task to establish detailed and generic relationships between textual and visual parts. The proposed method is cost-effective and can easily retrieve specific persons with textual captions. The architecture has only a dual-encoder without multi-level branches or extra interaction modules, enabling high-speed inference. Our method achieves top results on three popular benchmarks with 73.81%, 74.25%, and 57.35% Rank1 accuracy on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively. We hope our scalable method will serve as a solid baseline and help ease future research in TBPS. The code will be publicly available. | 10.48550/arxiv.2304.02278 | [
"https://export.arxiv.org/pdf/2304.02278v2.pdf"
] | 257,952,390 | 2304.02278 | 01c13e3dfb49def44baec83b1fbd2dddf01dde49 |
Calibrating Cross-modal Features for Text-Based Person Searching
Donglai Wei
Fudan University
MEGVII Technology
MEGVII Technology
Fudan University
Fudan University
Sipeng Zhang zhangsipeng@megvii.com
Fudan University
MEGVII Technology
MEGVII Technology
Fudan University
Fudan University
Tong Yang yangtong@megvii.com
Fudan University
MEGVII Technology
MEGVII Technology
Fudan University
Fudan University
Yang Liu yangliu20@fudan.edu.cn
Fudan University
MEGVII Technology
MEGVII Technology
Fudan University
Fudan University
Jing Liu jingliu19@fudan.edu.cn
Fudan University
MEGVII Technology
MEGVII Technology
Fudan University
Fudan University
Calibrating Cross-modal Features for Text-Based Person Searching
Text-Based Person Searching (TBPS) aims to identify the images of pedestrian targets from a large-scale gallery with a given textual caption. For the cross-modal TBPS task, it is critical to obtain well-distributed representations in the common embedding space to reduce the inter-modal gap. Furthermore, it is also essential to learn detailed image-text correspondence efficiently to discriminate similar targets and enable fine-grained target search. To address these challenges, we present a simple yet effective method that calibrates cross-modal features from these two perspectives. Our method consists of two novel losses that provide fine-grained cross-modal features. The Sew calibration loss takes the quality of textual captions as guidance and aligns features between the image and text modalities. On the other hand, the Masking Caption Modeling (MCM) loss leverages a masked-caption prediction task to establish detailed and generic relationships between textual and visual parts. The proposed method is cost-effective and can easily retrieve specific persons with textual captions. The architecture has only a dual-encoder without multi-level branches or extra interaction modules, enabling high-speed inference. Our method achieves top results on three popular benchmarks with 73.81%, 74.25%, and 57.35% Rank1 accuracy on CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively. We hope our scalable method will serve as a solid baseline and help ease future research in TBPS. The code will be publicly available. (This work was done during Donglai Wei's internship at MEGVII Technology.)
Introduction
Person re-identification (Re-ID) is a fundamental technology in the field of video surveillance [14,25]. Its objective is to identify a target person from a large-scale image database using a query image. However, Re-ID has limitations in practical applications. For example, in criminal investigations, witnesses may provide only textual descriptions of the suspect without an accompanying image. To address this new scenario, Text-Based Person Searching (TBPS) [21] has recently gained increasing interest. TBPS allows retrieval of the target person solely based on textual captions as the query.

Figure 1. An illustration of our method's motivation. For cross-modal tasks, a close and compact image-text feature distribution in the common embedding space is critical. Additionally, for the TBPS task, it is equally important to prevent missing the fine-grained image-text correspondence.
Compared to Re-ID, the main task of TBPS is to learn fine-grained cross-modal features between the visual and textual modalities. As shown in Figure 1, cross-modal features have two key characteristics that contribute to searching a person from images: closely aligned cross-modal features and fine-grained information correspondence. First, closely aligned cross-modal features can reduce the inter-modal gap, making it easier to locate specific persons with textual captions. Second, detailed correspondence is essential for TBPS. In some cases, fine-grained cross-modal information is necessary to discriminate between two similar persons.
In the TBPS task, numerous approaches have been proposed from two perspectives. First, some methods [1,42,44] employ only two encoders and align the two modal features using a symmetric loss. Although simple, the feature alignment in these methods is limited. Other methods [19,41] utilize multi-modal models with transformer-based [32] cross-attention to improve cross-modal feature alignment and interaction. However, such methods require the fusion of all possible image-text pairs, resulting in a time-consuming retrieval process during inference. Second, to obtain robust fine-grained features, recent TBPS works have designed multi-level [4,8,20,35] and multi-granularity [11,30,33] matching strategies as well as specific attention modules [15,21,38]. These methods rely on the image-text backbone to provide fine-grained features, but they have complex model architectures and costly computations. Furthermore, the fine-grained features they produce are limited, which hinders further performance gains.
To address these issues, we propose a simple yet effective method for calibrating cross-modal features in TBPS. Our method consists of only a dual-encoder, making it simple and cost-effective without requiring complex interaction modules or extra multi-level branches, which allows for high-speed inference. In addition, we propose two novel training losses to calibrate cross-modal features. The first is a Sew calibration loss, which takes the quality of the text description as guidance and aligns features between the textual and visual modalities. It also pushes negative sample pairs apart and pulls positive sample pairs together across the two modalities. Next, we propose a Masking Caption Modeling (MCM) loss to obtain more fine-grained and generic correspondence. This loss uses a masked caption prediction task to establish detailed relationships between text parts and image parts. The operation is implemented through a cross-modal decoder that is discarded at the inference stage, avoiding extra computation cost.
To demonstrate the effectiveness of our method, we evaluated its performance on three popular benchmarks: CUHK-PEDES [21], ICFG-PEDES [8], and RSTPReID [45]. Our model surpasses the previous state-of-the-art methods and demonstrates impressive performance. Moreover, we conducted extensive experiments to validate each component of our method.
Overall, our major contributions can be summarised as follows:
• We introduce an effective and scalable framework to learn and calibrate cross-modal features for text-based person searching. Our framework utilizes a dual-encoder and an auxiliary cross-modal decoder to achieve efficient and high-speed inference.
• We propose two novel losses, in which the Sew calibration loss aligns fine-grained features between image and text modalities, as well as the MCM loss establishes detailed relationships between vision modality and textual modality.
• Extensive experiments demonstrate the superiority of our framework. Our method achieves new state-of-the-art on three popular benchmarks: CUHK-PEDES, ICFG-PEDES, and RSTPReID, which reaches 73.81%, 74.25%, and 57.35% on the Rank1, respectively.
Related Work
Text-Based Person Search
Text-based person searching was first introduced by [21], which identifies person images in a gallery using only a textual query. Early works utilized pre-task methods to obtain external cues such as person segmentation [35] and human body landmarks [22]. In recent years, end-to-end frameworks based on attention mechanisms [11,15,20,21,30,38] have become prevailing. Cross-modal attention is critical for performing image-text interaction. Existing methods can be broadly classified into attention-explicit and attention-implicit approaches. Specifically, attention-explicit methods [11,15,21,38] design specific attention modules according to multi-granularity and multi-level strategies. For example, NAFS [11] conducts cross-modal alignments over full-scale features with a contextual non-local attention module. CFine [38] utilizes cross-attention for multi-grained global feature learning and achieves impressive results on three benchmarks with knowledge transfer from the CLIP [27] model. In contrast, attention-implicit methods [20,30] utilize transformer-based models with shared parameters to align cross-modal semantics implicitly. SafaNet [20] introduces a semantic-aligned feature aggregation network, which utilizes a cross-modal parameter-sharing multi-head attention module following the backbone to enhance the extracted image-text representations. However, compared to performing in-depth cross-attention in a task-driven way, the existing methods do not carry out enough cross-modal interaction to generalize well.
Metric Learning
Initially, metric learning used the $L_2$ distance as the metric, with the goal of minimizing the $L_2$ distance between samples of the same class. Some $L_2$-based metric learning methods include Siamese Networks [3] and Triplet Networks [29]. With the development of deep learning, researchers started using softmax-based loss functions in metric learning to learn a more discriminative distance metric. Additionally, increasing the margin between classes is an intuitive approach to learning a better metric space. L-Softmax [24] introduced the concept of a margin on the softmax function for the first time. The widely used CosFace [34] proposed a large-margin cosine loss to learn highly discriminative deep features for face recognition. Circle loss [31] proposed a simple loss based on a unified formulation for metric learning and classification. Recently, some works have introduced adaptive margins into margin-based losses [17,23,26]. They usually learn image quality implicitly and adjust the margin accordingly; in single-modal representation learning, they typically assign large margins to high-quality samples for hard mining. In contrast, we try to solve a cross-modal matching problem where samples from the two modalities carry different amounts of information, and we therefore give greater tolerance to less informative samples in TBPS.

Figure 2. Overview of our proposed method. The framework consists of a dual-encoder for extracting image-text features and calibrating cross-modal features with the Sew calibration loss. We also include a decoder for performing cross-modal interaction with the task-driven Masking Caption Modeling. At the inference stage, we only utilize the classification (CLS) tokens from the dual-encoder to implement similarity search.
Masked Language Modeling
Masked Language Modeling (MLM) is a highly effective method for pre-training language models [6,28]: a certain percentage of words is randomly selected from the input sentence, and the masked words are then predicted based on the context of the other words. Many cross-modal pre-training models have utilized MLM in their methods [2,18,19]. For example, the work in [19] combines MLM with a contrastive loss in its framework, achieving impressive performance on cross-modal tasks. The success of MLM in BERT [6] has proven its ability to adapt well to various downstream tasks, leading to generalized performance. In our fine-grained framework, the task-driven decoder utilizing masked caption modeling can facilitate generic cross-modal learning.
Methods
Formally, we are given a set of images with corresponding captions, denoted as $X = \{(I_i, T_i)\}_{i=1}^{N}$. Each image $I_i$ and its description text $T_i$ is associated with a person ID $y_i$. Text-based person search aims to retrieve the most relevant Rank-$k$ (e.g., $k = 1, 5, 10$) person images efficiently from a large-scale gallery given a textual caption. To solve this task, we propose a simple yet effective method, as shown in Figure 2.
We use ViT [9] and BERT [6] as the image and text encoders in our dual-encoder backbone. The image encoder takes image patches from $I_i$ along with a vision classification (CLS) token as input. It outputs an image feature sequence $v_i$ and a vision CLS token embedding $v^c_i$. Similarly, the text encoder obtains a text feature sequence $t_i$ and a text CLS token embedding $t^c_i$ for caption $T_i$, following previous works [11,38].
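To make this feature-extraction convention concrete, the following is a minimal PyTorch sketch of such a dual-encoder. The wrapper class and the assumption that both backbones return full token sequences of shape (B, L, 768) are ours, not the authors' released code.

```python
import torch
import torch.nn as nn


class DualEncoder(nn.Module):
    """Minimal dual-encoder wrapper: any ViT/BERT backbones whose forward
    passes return full token sequences of shape (B, L, 768) can be plugged in."""

    def __init__(self, vit: nn.Module, bert: nn.Module):
        super().__init__()
        self.vit = vit    # e.g. (B, 197, 768): CLS token + 196 patch tokens
        self.bert = bert  # e.g. (B, 100, 768): CLS token + 99 word tokens

    def forward(self, images: torch.Tensor, token_ids: torch.Tensor,
                attn_mask: torch.Tensor):
        img_seq = self.vit(images)                       # (B, 197, 768)
        txt_seq = self.bert(token_ids, attn_mask)        # (B, 100, 768)
        v_cls, v_tokens = img_seq[:, 0], img_seq[:, 1:]  # v^c_i and {v_i}
        t_cls, t_tokens = txt_seq[:, 0], txt_seq[:, 1:]  # t^c_i and {t_i}
        return v_cls, v_tokens, t_cls, t_tokens
```

At inference, only `v_cls` and `t_cls` are needed for the similarity search described above.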
Sew Calibration Loss with Constraints
In the TBPS task, closely aligning cross-modal features is crucial for effectively finding specific persons with textual captions. To address the heterogeneity between modalities, we propose a Sew calibration loss that pushes each modality to a common space.
As shown in Figure 3 (a), in single-modality settings, a triplet distance constraint is widely utilized when embeddings come from a single-modality distribution: intra-class samples are pulled together, while inter-class samples are pushed apart. In the cross-modal setting, we instead impose constraints from both sides of the two embedding distributions. For example, in one-direction retrieval, we first need to align the features of a "perfect pair", i.e., $(I_i, T_i)$. The text embedding with the shortest distance to the image embedding of $I_i$ should be $t_i$, because this is a perfect matching pair in person re-identification and no other text feature should have a shorter distance (Eq. 1). We set the perfect pair as an image-text anchor $(A_{img}, A_{txt})$. Next, we impose another constraint between the image anchor $A_{img}$ and its corresponding positive text samples $P_{txt}$ ($\mathbb{1}(y_i = y_j, i \neq j)$) and negative text samples $N_{txt}$ (Eq. 2). For the other direction, we put symmetric constraints on $A_{txt}$ and $P_{img}$, $N_{img}$. Figure 3 (b) shows the proposed constraints for the cross-modal case. Forces from both sides act like a seam to pull the two distributions together. Taking the image side as an example, the $L_2$ distance constraints are as follows:
$$D(A_{img}, A_{txt}) + M_1 < D(A_{img}, P_{txt}), \qquad (1)$$
$$D(A_{img}, P_{txt}) + M_2 < D(A_{img}, N_{txt}), \qquad (2)$$
where $D$ denotes the $L_2$ distance, and $M_1$ and $M_2$ are two margins with $M_1 < M_2$. With a bi-directional margin, the features of each modality belonging to the same pedestrian target are compressed compactly, making decision boundaries clearer.
We then relax the constraints to:
$$\tfrac{1}{2}D^2(A_{img}, A_{txt}) + M_1 < \tfrac{1}{2}D^2(A_{img}, P_{txt}), \qquad (3)$$
$$\tfrac{1}{2}D^2(A_{img}, P_{txt}) + M_2 < \tfrac{1}{2}D^2(A_{img}, N_{txt}). \qquad (4)$$
Subsequently, we change the above pairwise constraints into soft forms for better convergence following [31].
$$L^{Pull}_{match} = \log\Big(1 + \sum_{k=1}^{K} \exp\big(\alpha(\bar{v}^c_i \bar{t}^c_k - \bar{v}^c_i \bar{t}^c_i + M_1)\big)\Big), \qquad (5)$$
$$L^{Push}_{match} = \log\Big(1 + \sum_{k=1}^{K} \sum_{j=1}^{J} \exp\big(\alpha(\bar{v}^c_i \bar{t}^c_j - \bar{v}^c_i \bar{t}^c_k + M_2)\big)\Big), \qquad (6)$$
where $\bar{v}^c$ and $\bar{t}^c$ are the CLS token features after normalization, $K$ and $J$ denote the numbers of positive and negative samples in the batch, respectively, and $\alpha$ is a scale parameter. As a result, the image-to-text matching part of the Sew calibration loss is formulated as:
$$L^{I2T}_{match} = L^{Pull}_{match} + L^{Push}_{match}. \qquad (7)$$
The constraints in Eq. 2 can also be used to impose a classification loss in a similar way. Since the perfect positive samples make no difference in the classification task, we omit constraint (1). Formally, the loss for our person-ID classification part is as follows [42]:
$$L^{I2T}_{id} = \frac{1}{n}\sum_{i=1}^{n} -\log\frac{e^{\alpha(s_{y_i,i}-M_2)}}{e^{\alpha(s_{y_i,i}-M_2)} + \sum_{j\neq y_i} e^{\alpha\, s_{y_j,i}}}, \qquad (8)$$
$$s_{i,j} = \omega_i^{T}\hat{v}_j, \qquad \hat{v}_i = \big((v^c_i)^{T}\bar{t}^c_i\big)\,\bar{t}^c_i, \qquad (9)$$
where $n$ is the batch size and $\omega$ represents the classification weight after normalization. $\hat{v}_i$ can be interpreted as the projection of the image representation $v^c_i$ onto the normalized text representation $\bar{t}^c_i$. $L^{T2I}_{match}$ and $L^{T2I}_{id}$ take the same form as above, with the roles of text and image exchanged. Both the matching and classification losses have identical decision boundaries. Equipped with our proposed cross-modal constraints, the Sew calibration loss effectively reduces the gap between the image and text feature distributions.
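As a concrete reference, here is a minimal PyTorch sketch of the image-to-text matching part (Eqs. 5-7). The batch-mean reduction and the per-anchor loop are our simplifications for readability, not the authors' implementation; the text-to-image term is obtained by swapping the roles of the two embeddings.

```python
import torch
import torch.nn.functional as F


def sew_match_i2t(v_cls, t_cls, labels, alpha=32.0, m1=0.5, m2=0.5):
    """Image-to-text matching part of the Sew calibration loss (Eqs. 5-7),
    averaged over the batch. A real implementation would vectorize the loop
    and guard exp() against overflow for large alpha."""
    v = F.normalize(v_cls, dim=-1)
    t = F.normalize(t_cls, dim=-1)
    sim = v @ t.t()                                       # (B, B) cosine similarities
    same_id = labels.unsqueeze(0).eq(labels.unsqueeze(1))
    eye = torch.eye(len(labels), dtype=torch.bool, device=sim.device)

    total = sim.new_zeros(())
    for i in range(len(labels)):
        s_perfect = sim[i, i]                             # the paired caption
        s_pos = sim[i, same_id[i] & ~eye[i]]              # K other captions of the same ID
        s_neg = sim[i, ~same_id[i]]                       # J captions of other IDs
        # Eq. 5: the perfect pair must stay closer than the other positives (margin M1)
        pull = torch.log1p(torch.exp(alpha * (s_pos - s_perfect + m1)).sum())
        # Eq. 6: every positive must stay closer than every negative (margin M2)
        push = torch.log1p(
            torch.exp(alpha * (s_neg.unsqueeze(0) - s_pos.unsqueeze(1) + m2)).sum())
        total = total + pull + push
    return total / len(labels)
```

The defaults m1 = m2 = 0.5 mirror the fixed-margin variant (Sew-F) analyzed in the ablations below.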
Although the margin restrictions allow our model to learn better cross-modal features, using a fixed margin M in all cases may not be flexible enough. A large margin constraint makes model learning difficult, while too small a margin does not impose a significant constraint. An adaptive margin guided by quality can be more effective. In TBPS, the texts come from the annotator's descriptions of the images. Images are expected to have complete information, while texts have varying amounts of information.
Thus, we adjust the margin value based on the quality of the text description. We argue that a less informative caption (i.e., a shorter caption) needs a smaller margin as a looser constraint. Based on this, we compute the adaptive margin for each image-text pair according to the total token length $T_i$ of its caption:
$$M_i = M_{min} + (M_{max} - M_{min}) \cdot \frac{T_i - T_{min}}{T_{max} - T_{min}}, \qquad (10)$$
where $M_{max}$ and $M_{min}$ are the upper and lower bounds of the margin, and $T_{max}$ and $T_{min}$ are the bounds of the caption length. We set $T_{max}$ and $T_{min}$ according to the caption-length distribution of each dataset. We then use $M_i$ in place of the fixed margin above, simply setting $M_1 = M_2 = M_i$. A detailed analysis of manually chosen fixed margins, compared to adaptive margins, is reported in the supplementary materials.
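A minimal sketch of Eq. 10 follows, using the CUHK-PEDES bounds reported in Sec. 4.2; clamping captions that fall outside [t_min, t_max] is our assumption, since the paper does not state how out-of-range lengths are handled.

```python
def adaptive_margin(caption_len: int, m_min=0.4, m_max=0.6,
                    t_min=20, t_max=60) -> float:
    """Eq. 10: linearly scale the margin with the caption token length."""
    t = min(max(caption_len, t_min), t_max)  # clamp to the dataset length bounds
    return m_min + (m_max - m_min) * (t - t_min) / (t_max - t_min)
```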
Masking Caption Modeling Loss
TBPS is a fine-grained cross-modal task, which means caption-level discrimination alone is not enough. If the textual captions of two persons differ in only a few words, a TBPS method cannot retrieve a specific person without word-level discrimination. Although many works attempt to establish word-level discrimination capacities, such methods are complex, and their gains are limited. To address this issue, we propose Masking Caption Modeling to establish detailed image-text relationships. Furthermore, by utilizing MCM, our framework can perform more generic cross-modal learning.
Inspired by [7,40], we add a masked prediction task on the text branch. Concretely, this loss is based on a cross-modal decoder architecture. We mask a portion of the text tokens and replace them with a learnable token vector. The text encoder takes these text tokens as input and outputs the corresponding text features. The decoder learns to maximize the conditional likelihood of each masked text feature $t_n$ given the latent image feature sequence $\{v_i\}$ and text feature sequence $\{t_i\}$:
$$L_{mcm} = -\sum_{n=1}^{N} \log P_\theta(t_n \mid \{t_i\}, \{v_i\}), \qquad (11)$$
where $N$ is the total number of masked tokens in a caption. As shown in the cross-modal decoder part of Figure 2, the decoder takes both the unmasked text tokens and the masked tokens, in their original order, as input. Multi-head self-attention [32] first encodes the text features as $Q_t$, $K_t$, $V_t$, while cross-attention further improves the text features by attending to the encoded image features as $K_i$, $V_i$ for visual context. The final linear projection layer has the same number of output channels as the text vocabulary, and the cross-entropy loss between the reconstructed and original words is computed only on the masked text tokens.
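The sketch below shows this decoder in PyTorch. The single-layer depth and the default vocabulary size (that of BERT-base-uncased) are assumptions; the paper does not specify the decoder depth.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MCMDecoder(nn.Module):
    """Sketch of the masked-caption decoder: text self-attention (Q_t, K_t, V_t),
    cross-attention over image tokens (K_i, V_i), and a vocabulary head scored
    only on masked positions (Eq. 11)."""

    def __init__(self, dim=768, heads=12, vocab_size=30522):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_vocab = nn.Linear(dim, vocab_size)

    def forward(self, txt_tokens, img_tokens, target_ids, mask_pos):
        # Q_t, K_t, V_t: self-attention over the (partially masked) text features
        h, _ = self.self_attn(txt_tokens, txt_tokens, txt_tokens)
        # K_i, V_i: cross-attention pulls in visual context for each word slot
        h, _ = self.cross_attn(h, img_tokens, img_tokens)
        logits = self.to_vocab(h)                        # (B, L, vocab)
        # cross-entropy only on the masked word slots (mask_pos is boolean, (B, L))
        return F.cross_entropy(logits[mask_pos], target_ids[mask_pos])
```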
Algorithm 1: Pseudocode of the proposed framework

Input: image I and text T; a batch of n pairs G = {(I_1, T_1), (I_2, T_2), ..., (I_n, T_n)}
Output: training losses (L_sew, L_mcm), or (I_cls, T_cls) at inference

 1  foreach T_i in set G do
 2      T_i <- mask(T_i);
 3  end
 4  for i <- 1 to n do
 5      (I^i_cls, I^i_1, ..., I^i_c) <- ViT(I_i);
 6      (T^i_cls, T^i_1, ..., T^i_c) <- Bert(T_i);
 7      if training stage then
 8          (I^i_cls, T^i_cls) <- BatchNorm(I^i_cls, T^i_cls);
 9          I^i_context, T^i_context <- (I^i_1, ..., I^i_c), (T^i_1, ..., T^i_c);
10          T^i_context <- Attn(T^i_context);
11          T^i_cross <- CrossAttn(T^i_context, I^i_context);
12          L^i_sew <- SewCalibration(I^i_cls, T^i_cls);
13          L^i_mcm <- MCM(T^i_cross, T_i);
14      end
15  end
16  (I_cls, T_cls) <- {(I^1_cls, T^1_cls), (I^2_cls, T^2_cls), ..., (I^n_cls, T^n_cls)};
17  if training stage then
18      return (L_sew, L_mcm);
19  else
20      return (I_cls, T_cls);
21  end

It should be noted that the cross-modal decoder is only used during training and not during inference.
Total Loss
The total loss $L$ we optimize in each iteration is as follows:

$$L_{sew} = L^{I2T}_{match} + L^{T2I}_{match} + L^{I2T}_{id} + L^{T2I}_{id},$$
$$L = \lambda_1 L_{sew} + \lambda_2 L_{mcm}, \qquad (12)$$
where $\lambda_1$ and $\lambda_2$ are hyperparameters that balance the different loss terms during training. The pseudocode of our framework pipeline is shown in Algorithm 1.
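A single training iteration can be expressed by composing the sketches above. The helper `mask_fn` is hypothetical (it masks 10% of the caption tokens and returns the masked ids, the original ids, and a boolean mask of the masked positions), and the two classification terms of Eq. 12 are omitted for brevity.

```python
def training_step(encoder, decoder, images, token_ids, attn_mask, labels,
                  mask_fn, lam1=1.0, lam2=1.0):
    """One training iteration mirroring Algorithm 1 (a sketch, not the
    authors' code)."""
    masked_ids, target_ids, mask_pos = mask_fn(token_ids)                # lines 1-3
    v_cls, v_tok, t_cls, t_tok = encoder(images, masked_ids, attn_mask)  # lines 5-6
    # L_sew: both matching directions via sew_match_i2t defined above
    l_sew = sew_match_i2t(v_cls, t_cls, labels) + sew_match_i2t(t_cls, v_cls, labels)
    # L_mcm: drop the CLS slot so targets align with the token sequences
    l_mcm = decoder(t_tok, v_tok, target_ids[:, 1:], mask_pos[:, 1:])    # line 13
    return lam1 * l_sew + lam2 * l_mcm                                   # Eq. 12
```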
Experiments
Datasets and Evaluation Metric
We evaluate our method on three benchmarks: CUHK-PEDES [21], ICFG-PEDES [8], and RSTPReid [45]. CUHK-PEDES is the first large-scale benchmark for textbased person search tasks. This dataset contains 40,206 images of 13,003 person IDs collected from five person reidentification datasets. Each image has two different textual captions with an extensive vocabulary, and the average sentence length is 23.5. The testing set comprises 3,074 images and 6,148 descriptions of 1,000 persons. ICFG-PEDES contains 54,522 images of 4,102 persons. For each image, the corresponding description has an average length of 37 words. The testing set consists of 19,848 image-text pairs of 1,000 persons. RSTPReID contains 20,505 images of 4,101 persons, with each pedestrian having five images. It is divided into a training set with 3,701 persons, a validation set with 200 persons, and a testing set with 200 persons. For our evaluation metric, we report the Rank k (k=1, 5, 10) text-to-image accuracy, which is commonly used in previous works to evaluate text-based person search. Given a textual description as the query, if the top-k retrieved images contain any person corresponding to the query, we consider it a successful person search.
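Since Rank-k is the only metric used below, a small reference implementation may help. Scoring by cosine similarity of L2-normalized CLS embeddings is an assumption consistent with the normalized features in Sec. 3; the paper evaluates the same criterion regardless of the scoring function.

```python
import torch


def rank_k_accuracy(txt_emb, img_emb, txt_pids, img_pids, ks=(1, 5, 10)):
    """Text-to-image Rank-k: a query is a hit if any of its top-k retrieved
    images shares the query's person ID."""
    txt = torch.nn.functional.normalize(txt_emb, dim=-1)
    img = torch.nn.functional.normalize(img_emb, dim=-1)
    sim = txt @ img.t()                               # (num_queries, num_gallery)
    order = sim.argsort(dim=1, descending=True)       # ranked gallery indices
    hits = img_pids[order] == txt_pids.unsqueeze(1)   # (num_queries, num_gallery)
    return {k: hits[:, :k].any(dim=1).float().mean().item() for k in ks}
```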
Implementation Details
In our visual-textual dual-encoder, we extract visual representations using the ViT-Base pre-trained on ImageNet [5]. The images are resized to 224 × 224 pixels. For textual representations, we use the BERT-Base-Uncased model pre-trained on the Toronto Book Corpus and Wikipedia. The representation dimension is set to 768, and the feature sequence lengths are set to 197 and 100, respectively.
During the training phase, we use a batch size of 64 and train for 60 epochs. We use the Adam optimizer with an initial learning rate of 0.001. For data augmentation, we apply random horizontal flipping, and we use a mask ratio of 0.1 for randomly masking text tokens. The caption-length bounds $(T_{min}, T_{max})$ are set to (20, 60), (25, 65), and (22, 60) according to the caption-length distributions of CUHK-PEDES, ICFG-PEDES, and RSTPReid, respectively. The lower and upper margin bounds $M_{min}$ and $M_{max}$ are set to 0.4 and 0.6, and the scale parameter $\alpha$ is set to 32. The balance factors $\lambda_1$ and $\lambda_2$ in the total loss are both set to 1.
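For quick reference, the reported hyperparameters can be collected in one place; the key names below are ours, while the values are taken from this section.

```python
CONFIG = dict(
    image_size=(224, 224), embed_dim=768,
    img_seq_len=197, txt_seq_len=100,
    batch_size=64, epochs=60, optimizer="Adam", lr=1e-3,
    mask_ratio=0.1, alpha=32,
    margin_bounds=(0.4, 0.6),            # (M_min, M_max)
    length_bounds={"CUHK-PEDES": (20, 60), "ICFG-PEDES": (25, 65),
                   "RSTPReid": (22, 60)},  # (T_min, T_max) per dataset
    loss_weights=(1.0, 1.0),             # (lambda1, lambda2)
)
```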
During the testing phase, we apply a re-ranking postprocessing approach to improve search performance following NAFS [11]. We conduct the experiments on four NVIDIA 2080Ti GPUs using PyTorch.
Comparison with State-of-the-art Methods
Results on CUHK-PEDES. Table 1 compares our framework and previous methods on CUHK-PEDES. Our method outperforms all previous methods by a large margin. Compared to the state-of-the-art work CFine [38], our method achieves 67.71% (+2.64%), 84.57% (+1.56%), and 89.44% (+0.44%) on Rank1, Rank5, and Rank10 without re-ranking. The state-of-the-art results on CUHK-PEDES show the effectiveness of our method. With the help of re-ranking, our method receives a further boost to a 71.09% Rank1 score. Furthermore, with a better image encoder pre-trained on CLIP, our method achieves 73.81%, 88.89%, and 92.77% on the three metrics, respectively. These consistent improvements show the scalability of our method across better pre-trained models and extra post-processing operations.
Results on ICFG-PEDES and RSTPReid. We also utilized other benchmarks to validate our method's performance and generalization. The ICFG-PEDES and RSTPReid datasets are more challenging compared to CUHK-PEDES, and our method significantly outperformed all state-of-the-art methods on these two datasets by a large margin, as reported in Tables 2 and 3. Compared with the state-of-the-art results [38] on ICFG-PEDES, our method achieved significant improvements, with scores of 60.20%(+4.51%), 75.97%(+3.25%), and 81.78%(+2.32%) on Rank1, Rank5, and Rank10, respectively.
On the RSTPReid dataset, we achieved scores of 50.75% (+4.90%), 74.20%(+3.90%), and 81.70%(+3.30%) on the three metrics. With re-ranking post-processing, we were able to achieve scores of 72.17% and 55.35% on Rank1 for the two benchmarks, respectively.
We note that re-ranking also brings a significant boost, as our model learns clear and compact cross-modal key information patterns, and re-ranking can retrieve the correct feature neighbors more effectively. Moreover, by utilizing the CLIP pre-trained image model as an encoder, we achieved even better results, with scores of 74.25%, 86.95%, and 90.70% on ICFG-PEDES, and scores of 57.35%, 77.50%, and 85.50% on RSTPReid. These two challenging benchmark results demonstrate the robustness and scalability of our method.
Ablation Studies
Analysis of Model Components
To fully validate the performance of the different components in our method, we demonstrate the contributions of each part on CUHK-PEDES, as shown in Table 4. Model 1 and Model 2 show the results using ResNet [13] and ViT [9] as the image encoder, respectively, without the Sew Calibration and MCM losses. We utilize Model 2 as our baseline, and CMPM and CMPC [42] are used as loss functions in the baseline experiment. First, we compare ResNet-50 and ViT-Base as the image encoder and observe a performance improvement of 1.90%, 1.03%, and 0.69% on Rank1, Rank5, and Rank10, respectively. Based on this, we adopt ViT as the image encoder in the following experiments. We then validate the Sew calibration and MCM losses compared to the baseline. From Models 2-4, we observe that Sew calibration loss brings marginal improvement, and a common distribution can provide a better basis for optimizing the embedding space for fine-grained cross-modal recognition. Moreover, Model 3 represents the Sew calibration loss with the fixed margin 0.5, while Model 4 represents the Sew calibration loss with adaptive margins. We observe that Model 4 achieves better results than a fixed manual margin, no matter how we tune the margin value, which demonstrates the effectiveness of adaptive constraints. We also find that MCM gives another substantial improvement to the baseline from Model 2 to Model 5, indicating that the text-image detail mining capability is critical. Notably, when the two components are used jointly in Model 6, our method continues to improve and outperform the baseline by 4.57%, 2.99%, and 2.37% on the three metrics, respectively. This demonstrates that obtaining well-distributed image-text features in the common embedding space is essential for reducing the cross-modal gap. The consistent improvement in each component of the method demonstrates our effectiveness.
Impact of Sew Calibration Loss for Reducing Cross-modal Gap
Benefiting from the Sew calibration loss, which reduces the cross-modal gap, the learned representation distributions are closer than with the basic loss. To demonstrate this, we compare the singular value decompositions of the cross-modal features, inspired by [16]. Specifically, Figure 4 (a) and (c) present the singular value spectra of the text and image modalities for the baseline and our method. We compute the distance between the two modalities at specific singular values and plot it in Figure 4 (b) and (d). Intuitively, the smaller the distance between the features of different modalities, the closer the distributions learned by the model. We find that the Sew calibration loss yields better representation distributions in the common embedding space: it ensures closer cross-modal representation distributions and a smaller inter-modal gap than the baseline.
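The analysis can be reproduced along the following lines. The paper does not specify the normalization applied before the SVD, so the centering and log-spectra used here are our assumptions.

```python
import torch


def singular_spectrum_gap(img_emb, txt_emb):
    """Compare the singular-value spectra of the two modalities (cf. Fig. 4)
    and return the per-index gap between them."""
    def log_spectrum(x):
        x = x - x.mean(dim=0, keepdim=True)   # center each modality
        s = torch.linalg.svdvals(x)           # singular values, descending
        return torch.log(s + 1e-12)
    s_img, s_txt = log_spectrum(img_emb), log_spectrum(txt_emb)
    n = min(len(s_img), len(s_txt))
    return (s_img[:n] - s_txt[:n]).abs()      # smaller gap = closer distributions
```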
Impact of Masking Caption Modeling
The masking caption modeling operation is critical for fine-grained cross-modal interaction in our framework. It comprises token masking, the attention module, and masked-token prediction. As shown in Table 5, we explore the effectiveness of the MCM components on CUHK-PEDES. First, masking the input text tokens without a reconstruction task behaves like a random-erase text augmentation. This augmentation alone already brings a +1.36% improvement, reaching 63.65% Rank1. It shows that the details provided by word tokens are helpful for fine-grained recognition.

Next, if we only utilize the attention mechanism to enhance cross-modal representations, without masked caption modeling, we achieve 63.83% Rank1. We attribute this improvement to the cross-attention bringing details from the sequence tokens to the CLS token; the image features also contribute substantially to this process.

On the other hand, we also design an experiment with masked caption modeling but without cross-attention: the decoder directly predicts all masked tokens from the dual-encoder outputs. In this experiment, we observe 64.28% Rank1, which shows that the reconstruction task in the decoder helps guide the text encoder to learn richer and more refined representations. Finally, with all three components, MCM achieves 65.16% Rank1. With the help of cross-attention from the image features, the CLS token aggregates all of this rich information.
Analysis of Generalisability Validation
To better understand the contribution of MCM in our framework, we conduct a domain generalization analysis on the three benchmarks. In Table 6, CUHK ⇒ ICFG and CUHK ⇒ RSTP indicate using the model trained on CUHK-PEDES to infer on the other test sets, and vice versa. Compared to the baseline, we observe a performance improvement in all three metrics for ICFG-PEDES and RSTPReid. The fine-grained features mined by MCM are more resistant to overfitting. The improvement in domain generalization shows the capability of our framework in learning generic cross-modal information, which is essential for handling the fine-grained modal heterogeneity.
Qualitative Analysis
Visualization of Attention Map
We visualize attention maps on the CUHK-PEDES test set to demonstrate the model's capability in learning image-text correspondence. In Figure 5, compared to the baseline, the visualization results obtained by our method are more apparent and refined. We further conduct word-level visualization to validate the ability to perform fine-grained interaction, selecting several keywords from the caption descriptions, e.g., bag and shoes as items, and colors and clothing as modifiers. For example, the baseline visualizations for IDs 8491 and 10648 do not focus on useful detailed information. The key messages in the two images are a man riding a bike with a backpack and a woman wearing a sleeveless white dress. We observe that our method successfully captures the key detail information compared to the baseline. The word-level results also show that our method learns the critical parts of the cross-modal correspondence.
Visualization of Image-text Feature Distribution
Our method is capable of learning better image-text representation distributions with fine-grained image-text interaction and adaptive constraints to reduce the cross-modal gap.
To demonstrate this, we randomly select some pedestrians, extract their image-text global features, and map them to two dimensions for visualization in Figure 6 (a) and (b). Taking IDs 5532 and 11800 as examples, we observe that the image-text features of the same ID are compressed more compactly, and the boundaries between IDs are more apparent than in the baseline results. This demonstrates the effectiveness of our method in mitigating the cross-modal gap.
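A sketch of such a visualization follows. The paper does not name the 2-D projection it uses, so t-SNE is our assumption here; `pids` is assumed to be an integer person-ID array aligned with both feature sets.

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt


def plot_joint_embedding(img_emb, txt_emb, pids):
    """Project image and text CLS features of a few IDs to 2-D (cf. Fig. 6),
    coloring points by person ID and distinguishing modalities by marker."""
    feats = np.concatenate([img_emb, txt_emb], axis=0)
    xy = TSNE(n_components=2, init="pca", random_state=0).fit_transform(feats)
    n = len(img_emb)
    plt.scatter(xy[:n, 0], xy[:n, 1], c=pids, marker="o", label="image")
    plt.scatter(xy[n:, 0], xy[n:, 1], c=pids, marker="^", label="text")
    plt.legend()
    plt.show()
```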
Conclusion
In this work, we propose a simple yet effective method for calibrating cross-modal features for text-based person searching. With only a dual-encoder, our framework is simple and cost-effective, requiring no extra multi-level branches or complex interaction modules, and it performs high-speed inference based solely on the dual-encoder. Two novel losses are proposed: the Sew calibration loss and the MCM loss. The Sew calibration loss aligns fine-grained features between the visual and textual modalities, while the MCM loss establishes detailed relationships between textual and visual parts. The performance on three popular benchmarks demonstrates the effectiveness and superiority of our method. We hope our effective framework will serve as a solid baseline for text-based person searching in the future.
Figure 3. Illustration of the Sew calibration loss. The constraints differ between single-modal and cross-modal matching. $(A_{img}, A_{txt})$ denotes the anchors in the image-text feature distributions, while $(P_{img}, P_{txt})$ and $(N_{img}, N_{txt})$ denote positive and negative sample pairs, respectively. The Sew calibration loss pushes negative sample pairs apart and pulls positive sample pairs together, stitching cross-modal key information like a seam.
Figure 4. Comparison of the singular values of the image-text embedding features on CUHK-PEDES (a) and ICFG-PEDES (c). A smaller gap between lines of the same type means the corresponding cross-modal distributions are closer. In (b) and (d), we show the log of the singular-value gap for the baseline and our method.
Figure 5. Visualization of attention maps from the baseline and our method on CUHK-PEDES. We present the total caption-level results and the fine-grained word-level results (panels labeled Total, Bag, Floral, Backpack, Shoes, Bike, White, Sleeveless). (1) Baseline caption-level results, (2) our caption-level results, (3) & (4) our word-level results. Best viewed in color.
Figure 6. Cross-modal representation distributions on the CUHK-PEDES test set. Different colors correspond to different target IDs. Best viewed in color.
Table 1. Comparison with state-of-the-art methods on the CUHK-PEDES dataset. Rank1, Rank5, and Rank10 accuracies (%) are reported. The bold number represents the best score. R denotes the re-ranking post-processing operation. C denotes the image encoder pre-trained on the CLIP model.

Method | Pub. | Rank1 | Rank5 | Rank10
GNA-RNN [21] | CVPR 17 | 19.05 | - | 53.64
Dual-path [44] | TOMM 20 | 44.40 | 66.26 | 75.07
CMPM/C [42] | ECCV 18 | 49.37 | - | 79.27
ViTAA [35] | ECCV 20 | 55.97 | 75.84 | 83.52
CMAAM [1] | WACV 20 | 56.68 | 77.18 | 84.86
VP Net [22] | TNNLS 22 | 58.83 | 81.25 | 86.72
HGA Net [43] | MM 20 | 59.00 | 79.49 | 86.62
SUM [36] | KBS 22 | 59.22 | 80.35 | 87.60
NAFS [11] | arXiv 21 | 59.94 | 79.86 | 86.70
NAFS+R [11] | arXiv 21 | 61.50 | 81.19 | 87.51
DSSL [45] | MM 21 | 59.98 | 80.41 | 87.56
DSSL+R [45] | MM 21 | 62.33 | 82.11 | 88.01
MGEL [33] | IJCAI 21 | 60.27 | 80.01 | 86.74
SSAN [8] | arXiv 21 | 61.37 | 80.15 | 86.73
ACSA [15] | TMM 22 | 63.56 | 81.40 | 87.70
ACSA+R [15] | TMM 22 | 68.67 | 85.61 | 90.66
ISANet [39] | arXiv 22 | 63.92 | 82.15 | 87.69
IVT [30] | ECCVW 22 | 64.00 | 82.72 | 88.95
TestReID [12] | BMVC 21 | 64.08 | 81.73 | 88.19
TestReID+R [12] | BMVC 21 | 64.40 | 81.27 | 87.96
SAFA Net [20] | ICASSP 22 | 64.13 | 82.62 | 88.40
TIPCB [4] | Neuro 22 | 64.26 | 83.19 | 89.10
CAIBC [37] | MM 22 | 64.43 | 82.87 | 88.37
AXM Net [10] | AAAI 22 | 64.44 | 80.52 | 86.77
CFine [38] | arXiv 22 | 65.07 | 83.01 | 89.00
CFine+C [38] | arXiv 22 | 69.57 | 85.93 | 91.15
Ours | - | 67.71 | 84.57 | 89.44
Ours+R | - | 71.09 | 86.78 | 91.23
Ours+C | - | 69.61 | 86.01 | 90.90
Ours+R+C | - | 73.81 | 88.89 | 92.77
Table 2. Quantitative results on the ICFG-PEDES dataset.

Method | Pub. | Rank1 | Rank5 | Rank10
Dual-path [44] | TOMM 20 | 38.99 | 59.44 | 68.41
CMPM/C [42] | ECCV 18 | 43.51 | 65.44 | 74.26
ViTAA [35] | ECCV 20 | 50.98 | 68.79 | 75.78
SSAN [8] | arXiv 21 | 54.23 | 72.63 | 79.53
TIPCB [4] | Neuro 22 | 54.96 | 74.72 | 81.89
IVT [30] | ECCVW 22 | 56.04 | 73.60 | 80.22
ISANet [39] | arXiv 22 | 57.73 | 75.42 | 81.72
CFine [38] | arXiv 22 | 55.69 | 72.72 | 79.46
CFine+C [38] | arXiv 22 | 60.83 | 76.55 | 82.42
Ours | - | 60.20 | 75.97 | 81.78
Ours+R | - | 72.17 | 85.74 | 89.67
Ours+C | - | 62.29 | 77.15 | 82.52
Ours+R+C | - | 74.25 | 86.95 | 90.70
Table 3. Quantitative results on the RSTPReid dataset.

Method | Pub. | Rank1 | Rank5 | Rank10
DSSL [45] | MM 21 | 32.43 | 55.08 | 63.19
SUM [36] | KBS 22 | 41.38 | 67.48 | 76.48
SSAN [8] | arXiv 21 | 43.50 | 67.80 | 77.15
IVT [30] | ECCVW 22 | 46.70 | 70.00 | 78.80
ACSA [15] | TMM 22 | 48.40 | 71.85 | 81.45
CFine [38] | arXiv 22 | 45.85 | 70.30 | 78.40
CFine+C [38] | arXiv 22 | 50.55 | 72.50 | 81.60
Ours | - | 50.75 | 74.20 | 81.70
Ours+R | - | 55.35 | 77.30 | 84.25
Ours+C | - | 51.95 | 73.50 | 82.45
Ours+R+C | - | 57.35 | 77.50 | 85.50
Table 4. Performance comparisons of different components. Res and ViT denote ResNet and Vision Transformer as the image encoder, respectively. Sew-F refers to the Sew calibration loss with a fixed margin, and Sew-A to the Sew calibration loss with adaptive margins. We use Model 2 as our baseline.

ID | Res | ViT | Sew-F | Sew-A | MCM | Rank1 | Rank5 | Rank10
1 | ✓ | | | | | 60.39 | 80.22 | 86.53
2 | | ✓ | | | | 62.29 | 81.25 | 87.22
3 | | ✓ | ✓ | | | 65.41 | 82.56 | 88.01
4 | | ✓ | | ✓ | | 66.02 | 83.33 | 88.61
5 | | ✓ | | | ✓ | 65.16 | 81.95 | 87.93
6 | | ✓ | | ✓ | ✓ | 67.71 | 84.57 | 89.44
Table 5. Ablation study of the masking caption modeling components on CUHK-PEDES.

Mask | Attention | Caption Modeling | Rank1 | Rank5 | Rank10
✓ | | | 63.65 | 81.84 | 87.51
 | ✓ | | 63.83 | 80.87 | 86.35
✓ | | ✓ | 64.28 | 81.97 | 87.75
✓ | ✓ | ✓ | 65.16 | 81.95 | 87.93
Table 6. Generic performance analysis on the three benchmarks for cross-domain validation.

Setting | Method | Rank1 | Rank5 | Rank10
CUHK ⇒ ICFG | Baseline | 30.79 | 49.10 | 57.47
CUHK ⇒ ICFG | Baseline+MCM | 31.13 | 49.52 | 58.07
ICFG ⇒ CUHK | Baseline | 22.19 | 40.55 | 50.52
ICFG ⇒ CUHK | Baseline+MCM | 23.78 | 42.41 | 52.11
CUHK ⇒ RSTP | Baseline | 37.40 | 63.40 | 74.35
CUHK ⇒ RSTP | Baseline+MCM | 39.25 | 63.95 | 74.55
RSTP ⇒ CUHK | Baseline | 10.02 | 22.95 | 31.04
RSTP ⇒ CUHK | Baseline+MCM | 10.69 | 25.20 | 33.93
[1] Surbhi Aggarwal, Venkatesh Babu Radhakrishnan, and Anirban Chakraborty. Text-based person search via attribute-aided matching. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2617-2625, 2020.
[2] Hangbo Bao, Wenhui Wang, Li Dong, Qiang Liu, Owais Khan Mohammed, Kriti Aggarwal, Subhojit Som, and Furu Wei. Vlmo: Unified vision-language pre-training with mixture-of-modality-experts. arXiv preprint arXiv:2111.02358, 2021.
[3] Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. Signature verification using a "siamese" time delay neural network. Advances in Neural Information Processing Systems, 6, 1993.
[4] Yuhao Chen, Guoqing Zhang, Yujiang Lu, Zhenxing Wang, and Yuhui Zheng. Tipcb: A simple but effective part-based convolutional baseline for text-based person search. Neurocomputing, 494:171-181, 2022.
[5] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 248-255. IEEE, 2009.
[6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
[8] Zefeng Ding, Changxing Ding, Zhiyin Shao, and Dacheng Tao. Semantically self-aligned network for text-to-image part-aware person re-identification. arXiv preprint arXiv:2107.12666, 2021.
[9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929, 2020.
[10] Ammarah Farooq, Muhammad Awais, Josef Kittler, and Syed Safwan Khalid. Axm-net: Cross-modal context sharing attention network for person re-id. arXiv preprint arXiv:2101.08238, 2021.
[11] Chenyang Gao, Guanyu Cai, Xinyang Jiang, Feng Zheng, Jun Zhang, Yifei Gong, Pai Peng, Xiaowei Guo, and Xing Sun. Contextual non-local alignment over full-scale representation for text-based person search. arXiv preprint arXiv:2101.03036, 2021.
[12] Xiao Han, Sen He, Li Zhang, and Tao Xiang. Text-based person search with limited data. arXiv preprint arXiv:2110.10807, 2021.
[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.
[14] Shuting He, Hao Luo, Pichao Wang, Fan Wang, Hao Li, and Wei Jiang. Transreid: Transformer-based object re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15013-15022, 2021.
[15] Zhong Ji, Junhua Hu, Deyin Liu, Lin Yuanbo Wu, and Ye Zhao. Asymmetric cross-scale alignment for text-based person search. IEEE Transactions on Multimedia, 2022.
[16] Li Jing, Pascal Vincent, Yann LeCun, and Yuandong Tian. Understanding dimensional collapse in contrastive self-supervised learning. arXiv preprint arXiv:2110.09348, 2021.
[17] Minchul Kim, Anil K. Jain, and Xiaoming Liu. Adaface: Quality adaptive margin for face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 18750-18759, 2022.
[18] Wonjae Kim, Bokyung Son, and Ildoo Kim. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583-5594. PMLR, 2021.
[19] Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34:9694-9705, 2021.
[20] Shiping Li, Min Cao, and Min Zhang. Learning semantic-aligned feature representation for text-based person search. In ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2724-2728. IEEE, 2022.
[21] Shuang Li, Tong Xiao, Hongsheng Li, Bolei Zhou, Dayu Yue, and Xiaogang Wang. Person search with natural language description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1970-1979, 2017.
[22] Deyin Liu, Lin Wu, Feng Zheng, Lingqiao Liu, and Meng Wang. Verbal-person nets: Pose-guided multi-granularity language-to-person generation. IEEE Transactions on Neural Networks and Learning Systems, 2022.
[23] Hao Liu, Xiangyu Zhu, Zhen Lei, and Stan Z. Li. Adaptiveface: Adaptive margin and sampling for face recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11947-11956, 2019.
[24] Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. Large-margin softmax loss for convolutional neural networks. arXiv preprint arXiv:1612.02295, 2016.
[25] Hao Luo, Youzhi Gu, Xingyu Liao, Shenqi Lai, and Wei Jiang. Bag of tricks and a strong baseline for deep person re-identification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2019.
[26] Qiang Meng, Shichao Zhao, Zhida Huang, and Feng Zhou. Magface: A universal representation for face recognition and quality assessment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 14225-14234, 2021.
[27] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR, 2021.
[28] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
[29] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 815-823, 2015.
[30] Xiujun Shu, Wei Wen, Haoqian Wu, Keyu Chen, Yiran Song, Ruizhi Qiao, Bo Ren, and Xiao Wang. See finer, see more: Implicit modality alignment for text-based person retrieval. In Computer Vision-ECCV 2022 Workshops: Tel Aviv, Israel, October 23-27, 2022, Proceedings, Part V, pages 624-641. Springer, 2023.
[31] Yifan Sun, Changmao Cheng, Yuhan Zhang, Chi Zhang, Liang Zheng, Zhongdao Wang, and Yichen Wei. Circle loss: A unified perspective of pair similarity optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 6398-6407, 2020.
[32] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[33] Chengji Wang, Zhiming Luo, Yaojin Lin, and Shaozi Li. Text-based person search via multi-granularity embedding learning. In The International Joint Conference on Artificial Intelligence (IJCAI), pages 1068-1074, 2021.
[34] Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. Cosface: Large margin cosine loss for deep face recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 5265-5274, 2018.
[35] Zhe Wang, Zhiyuan Fang, Jun Wang, and Yezhou Yang. Vitaa: Visual-textual attributes alignment in person search by natural language. In Proceedings of the European Conference on Computer Vision (ECCV), pages 402-420. Springer, 2020.
[36] Zijie Wang, Aichun Zhu, Jingyi Xue, Daihong Jiang, Chao Liu, Yifeng Li, and Fangqiang Hu. Sum: Serialized updating and matching for text-based person retrieval. Knowledge-Based Systems, 248:108891, 2022.
[37] Zijie Wang, Aichun Zhu, Jingyi Xue, Xili Wan, Chao Liu, Tian Wang, and Yifeng Li. Caibc: Capturing all-round information beyond color for text-based person retrieval. In Proceedings of the 30th ACM International Conference on Multimedia, pages 5314-5322, 2022.
[38] Shuanglin Yan, Neng Dong, Liyan Zhang, and Jinhui Tang. Clip-driven fine-grained text-image person re-identification. arXiv preprint arXiv:2210.10276, 2022.
[39] Shuanglin Yan, Hao Tang, Liyan Zhang, and Jinhui Tang. Image-specific information suppression and implicit local alignment for text-based person search. arXiv preprint arXiv:2208.14365, 2022.
[40] Xiaohua Zhai, Xiao Wang, Basil Mustafa, Andreas Steiner, Daniel Keysers, Alexander Kolesnikov, and Lucas Beyer. Lit: Zero-shot transfer with locked-image text tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18123-18133, 2022.
[41] Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. Vinvl: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579-5588, 2021.
[42] Ying Zhang and Huchuan Lu. Deep cross-modal projection learning for image-text matching. In Proceedings of the European Conference on Computer Vision (ECCV), pages 686-701, 2018.
[43] Kecheng Zheng, Wu Liu, Jiawei Liu, Zheng-Jun Zha, and Tao Mei. Hierarchical gumbel attention network for text-based person search. In Proceedings of the 28th ACM International Conference on Multimedia, pages 3441-3449, 2020.
[44] Zhedong Zheng, Liang Zheng, Michael Garrett, Yi Yang, Mingliang Xu, and Yi-Dong Shen. Dual-path convolutional image-text embeddings with instance loss. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 16(2):1-23, 2020.
[45] Aichun Zhu, Zijie Wang, Yifeng Li, Xili Wan, Jing Jin, Tian Wang, Fangqiang Hu, and Gang Hua. Dssl: Deep surroundings-person separation learning for text-based person retrieval. In Proceedings of the 29th ACM International Conference on Multimedia, pages 209-217, 2021.
| [] |
[
"Pattern Formation by Electric-field Quench in Mott Crystal",
"Pattern Formation by Electric-field Quench in Mott Crystal"
] | [
"Nicolas Gauquelin \nDepartment of Physics\nNANOlab Center of Excellence\nElectron Microscopy for Materials Research (EMAT)\nUniversity of Antwerp\nBE-2020AntwerpenBelgium\n\nUniversity of Antwerp\nBE-2020AntwerpenBelgium\n",
"Filomena Forte \nCNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy\n",
"Daen Jannis \nDepartment of Physics\nNANOlab Center of Excellence\nElectron Microscopy for Materials Research (EMAT)\nUniversity of Antwerp\nBE-2020AntwerpenBelgium\n\nUniversity of Antwerp\nBE-2020AntwerpenBelgium\n",
"Rosalba Fittipaldi \nCNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy\n",
"Carmine Autieri \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n",
"Giuseppe Cuono \nInternational Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland\n",
"Veronica Granata \nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084Fisciano, SalernoItaly\n",
"Mariateresa Lettieri \nCNR-SPIN\nI-84084Fisciano, SalernoItaly\n",
"Canio Noce \nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084Fisciano, SalernoItaly\n\nCNR-SPIN\nI-84084Fisciano, SalernoItaly\n",
"Fabio Miletto Granozio \nDipartimento di Fisica\nCNR-SPIN\nI-80126NapoliItaly\n\nUniversità di Napoli\nNapoliItaly\n",
"Antonio Vecchione \nCNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy\n",
"Johan Verbeeck \nDepartment of Physics\nNANOlab Center of Excellence\nElectron Microscopy for Materials Research (EMAT)\nUniversity of Antwerp\nBE-2020AntwerpenBelgium\n\nUniversity of Antwerp\nBE-2020AntwerpenBelgium\n",
"Mario Cuoco \nCNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy\n"
] | [
"Department of Physics\nNANOlab Center of Excellence\nElectron Microscopy for Materials Research (EMAT)\nUniversity of Antwerp\nBE-2020AntwerpenBelgium",
"University of Antwerp\nBE-2020AntwerpenBelgium",
"CNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy",
"Department of Physics\nNANOlab Center of Excellence\nElectron Microscopy for Materials Research (EMAT)\nUniversity of Antwerp\nBE-2020AntwerpenBelgium",
"University of Antwerp\nBE-2020AntwerpenBelgium",
"CNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy",
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"International Research Centre MagTop\nInstitute of Physics\nPolish Academy of Sciences\nAleja Lotników 32/46PL-02668WarsawPoland",
"Dipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084Fisciano, SalernoItaly",
"CNR-SPIN\nI-84084Fisciano, SalernoItaly",
"Dipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084Fisciano, SalernoItaly",
"CNR-SPIN\nI-84084Fisciano, SalernoItaly",
"Dipartimento di Fisica\nCNR-SPIN\nI-80126NapoliItaly",
"Università di Napoli\nNapoliItaly",
"CNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy",
"Department of Physics\nNANOlab Center of Excellence\nElectron Microscopy for Materials Research (EMAT)\nUniversity of Antwerp\nBE-2020AntwerpenBelgium",
"University of Antwerp\nBE-2020AntwerpenBelgium",
"CNR-SPIN\nDipartimento di Fisica \"E.R. Caianiello\"\nUniversità di Salerno\nI-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy"
] | [] | The control of the Mott phase is intertwined with the spatial reorganization of the electronic states. Out-of-equilibrium driving forces typically lead to electronic patterns that are absent at equilibrium, whose nature is, however, often elusive. Here, we unveil a nanoscale pattern formation in the Ca 2 RuO 4 Mott insulator. We demonstrate how an applied electric field spatially reconstructs the insulating phase that, uniquely after switching off the electric field, exhibits nanoscale stripe domains. The stripe pattern has regions with inequivalent octahedral distortions that we directly observe through high-resolution scanning transmission electron microscopy. The nanotexture depends on the orientation of the electric field; it is non-volatile and rewritable. We theoretically simulate the charge and orbital reconstruction induced by a quench dynamics of the applied electric field, providing clear-cut mechanisms for the stripe phase formation. Our results open the path for the design of non-volatile electronics based on voltage-controlled nanometric phases. | 10.1021/acs.nanolett.3c00574 | [
"https://export.arxiv.org/pdf/2305.19596v1.pdf"
] | 258,786,900 | 2305.19596 | 1162d13aa5ed0d5344a6a60d00cad8fac41b40eb |
Pattern Formation by Electric-field Quench in Mott Crystal
Nicolas Gauquelin
Department of Physics
NANOlab Center of Excellence
Electron Microscopy for Materials Research (EMAT)
University of Antwerp
BE-2020AntwerpenBelgium
University of Antwerp
BE-2020AntwerpenBelgium
Filomena Forte
CNR-SPIN
Dipartimento di Fisica "E.R. Caianiello"
Università di Salerno
I-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy
Daen Jannis
Department of Physics
NANOlab Center of Excellence
Electron Microscopy for Materials Research (EMAT)
University of Antwerp
BE-2020AntwerpenBelgium
University of Antwerp
BE-2020AntwerpenBelgium
Rosalba Fittipaldi
CNR-SPIN
Dipartimento di Fisica "E.R. Caianiello"
Università di Salerno
I-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy
Carmine Autieri
International Research Centre MagTop
Institute of Physics
Polish Academy of Sciences
Aleja Lotników 32/46PL-02668WarsawPoland
Giuseppe Cuono
International Research Centre MagTop
Institute of Physics
Polish Academy of Sciences
Aleja Lotników 32/46PL-02668WarsawPoland
Veronica Granata
Dipartimento di Fisica "E.R. Caianiello"
Università di Salerno
I-84084Fisciano, SalernoItaly
Mariateresa Lettieri
CNR-SPIN
I-84084Fisciano, SalernoItaly
Canio Noce
Dipartimento di Fisica "E.R. Caianiello"
Università di Salerno
I-84084Fisciano, SalernoItaly
CNR-SPIN
I-84084Fisciano, SalernoItaly
Fabio Miletto Granozio
Dipartimento di Fisica
CNR-SPIN
I-80126NapoliItaly
Università di Napoli
NapoliItaly
Antonio Vecchione
CNR-SPIN
Dipartimento di Fisica "E.R. Caianiello"
Università di Salerno
I-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy
Johan Verbeeck
Department of Physics
NANOlab Center of Excellence
Electron Microscopy for Materials Research (EMAT)
University of Antwerp
BE-2020AntwerpenBelgium
University of Antwerp
BE-2020AntwerpenBelgium
Mario Cuoco
CNR-SPIN
Dipartimento di Fisica "E.R. Caianiello"
Università di Salerno
I-84084, I-84084Fisciano, Salerno, Fisciano, SalernoItaly, Italy
Pattern Formation by Electric-field Quench in Mott Crystal
The control of the Mott phase is intertwined with the spatial reorganization of the electronic states. Out-of-equilibrium driving forces typically lead to electronic patterns that are absent at equilibrium, whose nature is, however, often elusive. Here, we unveil a nanoscale pattern formation in the Ca 2 RuO 4 Mott insulator. We demonstrate how an applied electric field spatially reconstructs the insulating phase that, uniquely after switching off the electric field, exhibits nanoscale stripe domains. The stripe pattern has regions with inequivalent octahedral distortions that we directly observe through high-resolution scanning transmission electron microscopy. The nanotexture depends on the orientation of the electric field; it is non-volatile and rewritable. We theoretically simulate the charge and orbital reconstruction induced by a quench dynamics of the applied electric field, providing clear-cut mechanisms for the stripe phase formation. Our results open the path for the design of non-volatile electronics based on voltage-controlled nanometric phases.
There are various paths to drive a changeover of the Mott insulating state [1]: applying pressure or strain, changing the temperature near the Mott transition, or doping the system away from integer filling, corresponding to bandwidth, temperature and density control, respectively [2][3][4][5][6][7][8]. The resulting phenomena have a broad impact in condensed matter physics from both fundamental [2,3] and technological [4,5] perspectives.
There are two scenarios that are often encountered in proximity to Mott phases: i) the occurrence of superconductivity when the insulating phase is destroyed, as in the emblematic case of the cuprates [9], with magnetism playing an important role too, and ii) the tendency to form inhomogeneous electronic patterns due to the first-order character of the Mott transition and the competing length scales of localized and itinerant electronic degrees of freedom [10][11][12]. Recently, it has been pointed out that the application of an electric field, either static or dynamic, can be an ideal knob to control the conducting properties of correlated materials by inducing insulator-to-metal transitions and novel quantum phases of matter [13][14][15][16]. Depending on its amplitude, an applied gate voltage can yield a dielectric breakdown and electronic avalanches [17] or activate collective low-energy lattice and spin-orbital excitations [18]. The transitions which emerge from the Mott insulating state can hence involve multiple degrees of freedom and be marked or not by significant changes in the crystal structure, as is the case for V 2 O 3 [19], VO 2 [20][21][22][23] and Fe 3 O 4 systems [24]. In this framework, Ca 2 RuO 4 (CRO) represents a paradigmatic material platform to assess the interplay of electron correlations and electron-lattice coupling in the presence of multiorbital physics [25][26][27] with spin-orbit and Hund's interactions [28]. Indeed, CRO is a Mott insulator at room temperature and, on heating through T MI = 83°C, it undergoes an insulator-to-metal transition accompanied by an abrupt structural change [29], without varying the crystal symmetry. The structural transition involves an orbital reconstruction from a preferential out-of-plane orbital occupancy of the 4d states (xz, yz) to a dominant orbital configuration with in-plane character (xy). At lower temperature, this redistribution turns into an orbitally ordered state [18,30]. The application of an electric field, through current and optical pulses, or the use of a thermal quench have been shown to melt the insulating phase [31][32][33][34], resulting in the formation of phase coexistence [33], including nanometric regions [34] at the boundaries of micrometric domains having metallic and insulating character. Nevertheless, while spatial inhomogeneities can form and inequivalent structural components compete, the origin and the mechanisms of the spatial reorganization remain mostly unexplained.
The problem of domain formation is particularly challenging in the context of Mott transitions, as they are of first-order type in real materials. Domain formation is a general and complex phenomenon [35][36][37], with modulations that often result from the competition between short-range attractive and long-range repulsive interactions. In dynamical conditions, domains can arise from a quench of the interactions or by quenching the temperature from above to below the ordering transition [38]. Whether similar reorganization phenomena after an electric or orbital quench can be encountered in correlated systems exhibiting an insulator-to-metal transition is an outstanding problem not yet fully resolved.
In this manuscript, we face this challenge and unveil a novel path to induce as well as turn on and off the formation of nanotextures by means of an applied electric field in a Mott insulator, focusing on the case of the CRO system. The emergent phase remarkably depends on the electric field orientation. It is a stable configuration that can be erased by voltage or temperature and regenerated with the same voltage quench protocol. The formation of the domains is ascribed to a nontrivial orbital dynamics that is activated by the electric field. We demonstrate that the electric field is able to affect and reduce the orbital population unbalance among the xy and (xz, yz) states. The nontrivial orbital relaxation allows for the formation of interfaces of long and short octahedra. We show that these interfaces are electrically active, thus they can interact by stabilizing a stripe pattern.
Let us start by considering the structural evolution across the thermally induced insulator-to-metal transition. Nakamura et al. [31] reported the change of the lattice parameters measured upon heating of a bulk specimen. Hence, to set the reference, the first issue we aim to address is how the impact of the thermal effects on the structure manifests at the nanoscale. Figure 1a and Figure 1b compare high-resolution scanning transmission electron microscopy high-angle annular dark field (HRSTEM-HAADF) images of the CRO sample taken at room temperature (20°C) and at a representative higher temperature (200°C). Note that we do not have access to the b-axis, as it lies along the direction of the electron beam. Although the change of lattice parameter from the short (S-) to the elongated (L-) phase is almost invisible by visual inspection, fitting the Ru atomic columns with a 2D Gaussian distribution provides the amplitude of the in-plane and out-of-plane lattice parameters. We find an expansion of the c lattice parameter from 11.9 to 12.2 Å, as shown in Figure 1d, while the variation of the in-plane a lattice parameter is small (Figure 1c), with almost no change in the histogram displayed in Figure 1e. This analysis indicates that, by increasing the temperature, the RuO 6 octahedra and the unit cell elongate. Nanobeam electron diffraction (NBED) was used to locally determine the lattice parameters. This method collects a diffraction pattern from the transmitted electrons at each probe position while raster scanning the nanometer-sized electron beam over the specimen. The peaks in a diffraction pattern arise from the constructive interference of the electron wave resulting from the interaction of the electrons with the crystal; the position of the diffraction peaks can be translated into a scattering angle, from which the lattice parameters can be determined via Bragg's law. Since the transmitted electrons are collected, the measured lattice parameters average over the entire thickness of the specimen (∼ 100 nm). These diffraction patterns were collected along the [010] zone axis in the case of Figure 1. Figure 1c and Figure 1f report the evolution of the a and c lattice parameters measured in situ from the diffraction patterns every 10°C. The analysis clearly signals the phase transition and its hysteresis loop of almost 30°C between the heating and cooling cycles, consistent with previously reported findings [31].
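To make the peak-to-lattice-parameter conversion concrete, the following minimal Python sketch (not the authors' analysis code; the peak coordinates, detector calibration and function names are hypothetical assumptions) turns the position of one calibrated NBED diffraction peak into a real-space lattice parameter via Bragg's law.

```python
import numpy as np

# Minimal sketch: estimate a lattice parameter from the position of one
# diffraction peak in a calibrated NBED pattern. All input values below are
# illustrative assumptions, not measured data.

wavelength = 0.0197  # electron wavelength in Angstrom at 300 kV (relativistic)

def lattice_parameter(peak_px, center_px, mrad_per_px, order=1):
    """Convert a diffraction-peak position to a real-space spacing.

    peak_px, center_px : (row, col) pixel coordinates of the peak and of the
                         transmitted (central) beam.
    mrad_per_px        : detector calibration, assumed known from a reference.
    order              : diffraction order of the chosen reflection, e.g. (200) -> 2.
    """
    r = np.linalg.norm(np.asarray(peak_px, float) - np.asarray(center_px, float))
    two_theta = r * mrad_per_px * 1e-3                 # scattering angle in rad
    d = wavelength / (2.0 * np.sin(two_theta / 2.0))   # Bragg: lambda = 2 d sin(theta)
    return order * d                                   # lattice parameter along that direction

# Hypothetical numbers, for illustration only:
print(lattice_parameter(peak_px=(128, 190), center_px=(128, 128),
                        mrad_per_px=0.11, order=2))    # ~5.8 Angstrom
```

In practice, averaging over several symmetry-equivalent reflections reduces the sensitivity to the calibration and to peak-fitting noise.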
Interestingly, the application of a constant electric current induces a domain structure, with stripes at the interface between a metallic and an insulating domain [33]. Although inhomogeneities occur, Ref. [39] shows that locally the metal-to-insulator changeover always occurs at the same transition temperature, irrespective of being driven by temperature or by current. These phenomena thus indicate a tight connection between thermal and electric-current-driven effects. Here, we aim to verify whether the electronic phase reconstruction happens at the nanoscale in the presence of an applied electric field when the electric current is not allowed to flow through the specimen. The experiment is performed by employing a capacitor-like geometry, as described in the supplementary material, for a specimen where the electric field is applied along the a-crystallographic axis. In this experimental configuration, the voltage has been increased with a saw-tooth profile having the following sequence: 0; +1 V; -1 V; +1.2 V; -1.2 V; ...; +3.2 V; -3.2 V (see inset in Figure 1h). Such voltage variation leads to a progressive increase of the c lattice constant as a function of the applied voltage, accompanied by an almost unchanged amplitude of the a lattice parameter. Here, two relevant remarks are in order: 1) the lattice parameter measured at a specific voltage does not depend on the orientation (sign) of the electric field but solely on its amplitude; 2) the lattice parameter value achieved at 3.2 V (corresponding to approximately 3.2 kV/cm) at room temperature has the same value as that found at 200°C at zero voltage. This is also consistent with our measured bulk value and with published data [31]. Additionally, the analysis of the spatially resolved maps demonstrates that the distribution of the lattice parameter is uniform and, thus, no pattern formation in real space is observed at any applied voltage.
An unexpected and different behaviour is instead achieved when the voltage is quenched to zero from the maximum voltage configuration. This observation is obtained in the devised capacitor geometry (see supplementary material), where the Joule heating is expected to play a minor role compared to the electric current flow setup [32,33,39].
Single-crystalline specimens were prepared from the bulk CRO single crystal using a focused ion beam (FIB) and attached to one of the capacitor plates so as to apply the electric field along the three different crystallographic directions, as shown in the supplementary material. For this setup, distinct spatial patterns are observed depending on the direction of the electric field applied before the quench to zero voltage.
For clarity, only the c lattice parameter is reported, as it shows the most significant variation. When the electric field is applied along the b crystallographic direction, two large domains are observed (Figure 2a), with a short c-axis in the center of the specimen and a longer c-axis in the regions closer to the gate contact and the surface, i.e. along the bottom and left side of Figure 2a, respectively. Figure 2d (similarly to panels e and f) represents the histogram of the c lattice parameter corresponding to the zero-voltage, maximum-voltage and quenched states, as shown in the supplementary material. When the applied voltage induces an electric field along the c-crystallographic direction and is then quenched to zero, stripes with a periodic sequence of regions with short and long octahedra appear along the c-direction of the specimen, as shown in Figs. 2b and 2e. The direction of the stripe modulation is perpendicular to the electric field orientation. On the other hand, when the voltage is applied along the a-crystallographic axis (Figures 2c and 2f), stripes are observed parallel to the applied electric field and mostly close to the interface between the sample and the electrode corresponding to the ac plane. Note that this stripe configuration has both a different generating mechanism and a different orientation when compared to the findings in Ref. [33].
All these stripe patterns can be erased by switching the voltage back on and bringing it to the highest probed amplitude, so as to recover the uniform high-voltage state, or by heating the sample above the metal-insulator transition temperature T MI. In both cases, the stripes can be systematically generated again, with a similar spatial distribution, when switching the system back to the zero-voltage state. Hence, this patterned state is a stable configuration of the system, which can be reproduced in a controlled manner. It is important to point out that, when considering a specimen attached to both sides of the capacitor, thus allowing a current flow through the sample, the crystallographic dependence of the pattern shown in Figure 2 is suppressed. Indeed, quenching the specimen from 200°C to room temperature (within 1 second) does not lead to the formation of any pattern (see Figure 3, top panels). Similarly, quenching the voltage to zero after bringing the specimen to the more conducting high-voltage configuration produces no stripes (see Figure 3, bottom panels). When electrical current is allowed to flow through the specimen, only fringes between a more conducting region (with elongated phase) and an insulating domain are observed, similar to the results reported previously [33]. We attribute this difference to Joule heating [39], which yields temperature gradients in time during the temperature quench that destabilize the orbital ordering. Hence, we argue that Joule heating, or temperature gradients having a different relaxation time during the quench, prevents the pattern formation.
Let us focus on the nature and formation mechanism of the observed pattern. Our first aim is to demonstrate that an orbital reconstruction can occur after the quench of the electric field. For this purpose, we perform a time-dependent simulation of the many-body state on a finite-size cluster with two effective RuO 6 octahedra. The analysis is aimed at capturing, on an equal footing, the correlated dynamics activated by the electric field both in space, on a short-range length scale, and in time. The employed model Hamiltonian (see Supplemental Info) effectively includes all the local interactions at the Ru site and the Ru-O charge-transfer processes which are relevant for describing the correlated ground state in the CRO Mott phase. The electric field is introduced through a time-dependent vector potential that enters as a Peierls factor in the phase of the d − p hopping amplitude (see Supplemental Information). For a quench dynamics, it can be generally expressed by the profile in Figure 4a.
We start by considering the insulating state in the regime of flattened octahedra, with the crystal-field potential favoring the charge occupation of the planar xy orbital. In this configuration, it is known [27,30,[40][41][42] that there is an orbital unbalance, with excess charge in the xy band compared to the (xz, yz) states, in direct correspondence with the octahedral distortions. We track the orbital dynamics in the time frame before and after the quench of the electric field for different amplitudes of the maximum applied electric field, E max, expressed by the parameter η = E max /E M, with E M = 0.01 eV/Å. E M is a reference scale for the amplitude of the electric field that, for convenience, we introduced when scanning the phase diagram in our simulation. We have chosen the value 10 −2 eV/Å because, for the considered cluster, it is a characteristic scale that separates different regimes in terms of the electronic configurations realized after the quench of the electric field. For values of η smaller than 10 −2 there are no significant orbital variations in the time dynamics. Namely, the orbital occupation stays substantially unchanged, with small fluctuations around the ground-state values. For an applied electric field corresponding to η ∼ 10 −2 the orbital dynamics starts to manifest significant fluctuations, with a distribution having a broad variance around the ground-state averaged occupation. In this regime of weak electric field the orbital unbalance is dynamically softened, although the averaged values are not much affected. The dynamics indicates that the charge and orbital distributions become broader in amplitude during the ramp up (Figure 4b) and evolve into a different orbital distribution with larger spread (Figure 4c) after the quench of the electric field. The increase of the maximal applied electric field before its switching-off has a significant impact on the orbital dynamics pre- and post-quench. The emergent orbital configuration is such that the orbital unbalance is completely suppressed (Figure 4d) before the quench, while after the quench (Figure 4e) the distribution shrinks in amplitude and the difference of the averaged charge occupation in the xy and (xz, yz) states tends to vanish. A further increase of the maximal electric field amplitude (i.e. η ∼ 1) affects the profile of the orbital distribution before the quench while keeping the qualitative trend of suppressing the charge separation in the xy and (xz, yz) orbital sectors.
Hence, the analysis demonstrates that the application of an electric field yields non-equilibrium orbital configurations that are compatible with short and long octahedra, depending on the parameter η. After the quench, above a critical threshold of the electric field, the orbital unbalance is substantially suppressed, thus favoring the formation of more elongated octahedra. This result implies that, due to the presence of spatial electric field gradients, the system tends to rearrange by forming interfaces between long and short octahedra.
Having established that the quench of the electric field results in a reduction of the orbital unbalance that is compatible with a long unit cell, we consider how the formation of interfaces between long and short octahedra interacts with the electric field. For this purpose, we simulate a superlattice configuration with long (L) and short (S) CRO unit cells (Fig. 5a). We determine the optimized structure by relaxing the Ru atoms along the c-axis and we obtain a head-to-head displacement of the Ru layers, as reported in Fig. 5a. This configuration implies that the overall displacement is vanishing. The Ru layers at the interface between the L- and the S-phase move towards the S-phase. Notably, qualitatively similar results are observed for supercells of different size. We calculate the free energies of the superlattice and the bulk as a function of the displacements of the Ru atoms with respect to their centrosymmetric positions.
As we can see from Fig. 5b, the equilibrium energy of the bulk is achieved when the Ru atoms are in the centrosymmetric positions, while for the superlattice the situation changes: the equilibrium energy is reached when the Ru atoms are displaced by approximately 0.47 pm with respect to their centrosymmetric positions. These shifts of the Ru atoms along the (001) direction at the interface between different Ru-based compounds stacked along (001) are similar to those predicted in the metallic phase of the Sr-based ruthenate compounds [43]. While larger displacements are found in the metallic phase, we expect that the size of the displacements mainly depends on the electronic mismatch between the two structurally distinct phases within the superlattice. These displacements make the interface electrically active, because they can sustain a non-vanishing electric dipole. Hence, the resulting physical scenario is that the application of an electric field activates the orbital and lattice dynamics, which allows for deviations of the structural configurations from equilibrium. When interface configurations with inequivalent octahedral distortions form, the system tends to stabilize them and lower the energy by having a periodic alternation of short and long unit cells (Fig. 5). The head-to-head interface configuration is compatible with a net vanishing averaged electric field. This behavior resembles the phenomenology of magnetic systems described by the kinetic Ising model [38], with the orbital-lattice degrees of freedom replacing the spin ones.
FIG. 3. (a-c) The real space map of the c lattice parameter at room temperature, at 200°C, and after quenching the specimen within 0.5 s back to room temperature. (d) The histogram of the lattice parameter showing the two room-temperature measurements and the high-temperature measurement along the c-crystallographic axis; we can notice that with temperature alone the insulator-to-metal transition is fully reversible. (e-g) The real space map of the c lattice parameter at 0 V, at 1.5 V, and after quenching back to 0 V for a specimen contacted to both electrodes with the field applied along the a axis (Joule heating plays a role in this geometry, as current can flow through the specimen). (h) The histogram corresponding to the image in panel (g), where the colors of the bars correspond to the colors in the image, is displayed next to the right-most panel. Additionally, histograms of the lattice parameter at zero voltage (e) and maximum voltage (f) indicate the lattice constant in both the insulating and metallic initial states. We notice that stripes are observed at the interface between two domains, one metallic and one with smaller lattice parameter (almost insulating) (see Supplemental Info for the I-V behavior before and after the voltage quench).

To wrap up, we have demonstrated that the ramp up to a critical amplitude of the applied voltage and its successive quench to zero are able to induce a stripe phase in the CRO Mott insulator. The stripe phase is marked by long and short octahedra that periodically alternate along the c-axis on a nanometric length scale. This pattern, together with the way it is generated (i.e. by a gate-voltage quench), marks the difference between the observed stripe phase and the stripe domains realized at the boundary of micrometric regions with metallic and insulating character [34]. The configuration is stable and can be controlled by varying and switching off the applied gate. The stripe formation mechanism depends on the orientation of the applied electric field, thus underlining the role of the orbital degrees of freedom in the stripe phase formation. We argue that the different response for an electric field oriented along the a and b axes might arise from the character of the activated orbital excitation, due to the presence of an orbital easy axis for the orthorhombic crystalline symmetry of CRO.
Besides, in the insulating phase the ground state is orbitally and structurally correlated. The electric field tends to destroy the orbital pattern by deforming the octahedra and brings the system into a new state with interfaces between orbitally and structurally inequivalent configurations. The interfaces carry an electric dipole, so that they can interact and stabilize a periodic arrangement in the form of stripes. The observed phenomena may have a high impact on innovative types of switching memories: once the stripe pattern has been written, it can be erased and rewritten at will just by applying a small-amplitude voltage. The fact that the stripe phase becomes the new zero-voltage state is also very beneficial, as it is more energy efficient than having to switch back to a fully insulating state. These findings can thus pave the way for the construction of low-energy-consumption non-volatile nanoscale electronics and, in perspective, be integrated with other functional devices employing photonic effects.
I. SUPPLEMENTAL INFO
In the Supplemental Information we provide details about the experimental methods and setup, the real space maps for various electric field configurations, and the electrical-structural characterization. Furthermore, we describe the methodology for the simulation of the electric field quench and aspects of the density functional theory employed to investigate the superlattice.
II. EXPERIMENTAL METHODS
In this Section we describe the methods related to the crystal fabrication, the structural characterization, and the preparation and the analysis for transmission electron microscopy.
Fabrication and structural characterization
Single crystals were grown by the floating-zone technique using an infrared image furnace with two mirrors (NEC Machinery, model SC1-MDH11020). The CRO single crystals used in this experiment were carefully selected prior to the HRSTEM analysis. The morphology and composition were inspected by scanning electron microscopy using a SEM Leo EVO 50 Zeiss, coupled with an energy dispersive spectrometer (Oxford INCA Energy 300). The structural characterization was performed by high-angle X-ray diffraction measurements using a Panalytical X-Pert MRD PRO diffractometer, and the electrical characterization was obtained by the two-terminal method, applying a DC current along the c-axis of the crystal at room temperature (see supplementary material).
Specimen preparation for Transmission Electron microscopy
Cross-sectional cuts of the samples along the [100] and [010] directions of the Ca 2 RuO 4 c-oriented single crystal were prepared using a Thermofisher Scientific Helios 650 dual-beam Focused Ion Beam device on dedicated DENS biasing chips, as shown in the supplementary material. To get a sample with the field applied along the c-crystallographic axis, the lift-out lamella was rotated by 180° using the omniprobe nanomanipulator before attaching it to the chip; this resulted in a slight angle between the electric field and the c-axis of the crystal, which has been neglected. The sample thickness was kept around 100-150 nm. Biasing and heating experiments were carried out in a DENS Solutions Lightning double-tilt holder with the help of a Keithley 2400 Source Meter and an in-house control program.
Scanning Transmission Electron microscopy
The electron microscopy characterization was performed on the X-Ant-Em instrument at the University of Antwerp. The electron microscope consists of an FEI Titan G3 equipped with an aberration corrector for the probe-forming lens as well as a high-brightness gun operated at 300 kV acceleration voltage, with a beam current of around 100 pA for all experiments to reduce the acquisition time. The STEM convergence semi-angle used was 21 mrad for HRSTEM-HAADF imaging, providing a probe size of 0.8 Å. The collection semi-angle ranges from 29-160 mrad for annular dark field (ADF) imaging. The diffraction patterns used for Fig. 1 (main text) were acquired in nano-beam electron diffraction (NBED) mode with a convergence angle of 0.25 mrad, resulting in a spatial resolution of ∼ 1 nm, and a collection angle of 21 mrad, using a camera length of 285 mm and a 256×256 pixel Quantum Detectors Merlin direct electron detection camera with an acquisition time of 2 ms/pixel. Similar conditions were used to acquire the 2D maps presented in Fig. 2 and Fig. 2b (main text).
Determination of the lattice parameters using HRSTEM-HAADF (direct space)
The HR-STEM images were used to determine the lattice parameters of the CRO crystal. Ten frames were acquired with a dwell time of 2 µs. Since each individual image contains enough signal, it is possible to align them with the cross-correlation method [47]. Multiple fast scans were acquired to reduce the effect of sample drift while retaining the same signal-to-noise ratio as a single slow scan. After the images were aligned, a peak-finding routine implemented in StatSTEM [48] was used to extract the initial guesses of the atomic positions. These initial positions were refined by fitting 2D Gaussians to each atomic column, and the fitted values of the centre were used to determine the lattice constants of the material.
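As an illustration of this refinement step, the sketch below fits a single 2D Gaussian to a small patch around one atomic column; it is not the StatSTEM implementation, and the patch size, initial widths and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch: refine one atomic-column position on a small HAADF patch by
# fitting a 2D Gaussian with a uniform background (illustrative, not StatSTEM).

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    g = amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) + (y - y0) ** 2 / (2 * sy ** 2)))
    return (g + offset).ravel()

def refine_column(patch, x_guess, y_guess):
    """Return the refined (x0, y0) centre of a column, in pixels."""
    ny, nx = patch.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    p0 = (patch.max() - patch.min(), x_guess, y_guess, 2.0, 2.0, patch.min())
    popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
    return popt[1], popt[2]

# Usage with synthetic data:
x, y = np.meshgrid(np.arange(21), np.arange(21))
patch = gauss2d((x, y), 1.0, 10.3, 9.7, 2.0, 2.0, 0.1).reshape(21, 21)
patch += 0.02 * np.random.default_rng(0).standard_normal(patch.shape)
print(refine_column(patch, 10, 10))  # ~ (10.3, 9.7)
```

Lattice constants then follow from the differences between neighboring fitted centres multiplied by the pixel size.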
Determination of the lattice parameters using NBED (reciprocal space)
For NBED, a diffraction pattern is acquired at each probe position, making it possible to map the lattice parameter locally. In order to do this, a 2D peak-finding algorithm is used to determine the position of each diffraction peak [49]. Once these positions are determined, the two lattice vectors which describe the diffraction peak positions are obtained by performing a linear fitting procedure. Once the lattice vectors are determined, it is straightforward to retrieve the norm of each vector, which corresponds to the length of the lattice parameter.
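A minimal sketch of this linear fit is given below (not the authors' pipeline; the indexing convention and the assumption of calibrated, near-orthogonal reciprocal axes are ours): the measured peak positions are modeled as integer combinations of two reciprocal vectors obtained by least squares, whose norms give the lattice parameters.

```python
import numpy as np

# Minimal sketch: fit the two reciprocal-lattice vectors g1, g2 that best
# describe the measured peak positions, then convert their norms to real-space
# lattice parameters. Inputs are assumed calibrated in 1/Angstrom and indexed
# with integer (h, k) for the [010] zone axis.

def lattice_from_peaks(hk, peaks):
    """hk: (N, 2) integer indices; peaks: (N, 2) calibrated peak positions.

    Solves peaks ~= hk @ G in the least-squares sense, where the rows of G
    are the reciprocal vectors g1 and g2.
    """
    G, *_ = np.linalg.lstsq(np.asarray(hk, float), np.asarray(peaks, float),
                            rcond=None)
    g1, g2 = G
    # For (near-)orthogonal axes the real-space parameters are 1/|g|.
    return 1.0 / np.linalg.norm(g1), 1.0 / np.linalg.norm(g2)

# Synthetic check with a = 5.4 A and c = 11.9 A:
hk = np.array([(1, 0), (0, 1), (1, 1), (2, 0), (0, 2), (-1, 1)])
G_true = np.array([[1 / 5.4, 0.0], [0.0, 1 / 11.9]])
peaks = hk @ G_true + 1e-4 * np.random.default_rng(1).standard_normal((6, 2))
print(lattice_from_peaks(hk, peaks))  # ~ (5.4, 11.9)
```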
III. EXPERIMENTAL SETUP AND ELECTRICAL-STRUCTURAL CHARACTERIZATION
In this Section we present extra results about the experimental setup (Fig. S1), the real space maps for various electric field configurations (Fig. S2), and the electrical-structural characterization (Fig. S3).
IV. THEORETICAL MODELLING OF THE ELECTRIC FIELD QUENCH
In CRO, the bands close to the Fermi level stem mostly from the t 2g orbitals (d yz , d zx , d xy ), which hybridize with the oxygen (p x , p y , p z ) bands. Hence, one can build an effective model Hamiltonian for the propagating electrons within the ruthenium-oxygen plane by considering the interaction terms at the ruthenium and oxygen sites and the kinetic term responsible for the ruthenium-oxygen hybridization. The non-interacting part of the Hamiltonian for the Ru-O bond along the x ([100]) direction comprises the following terms:
\[ H^{[x]}_{\mathrm{Ru}_i-\mathrm{O}} = \sum_{\alpha\beta\sigma} t_{d_\alpha,p_\beta}\, d^{\dagger}_{i\alpha\sigma}\, p_{\beta\sigma} + \mathrm{h.c.} \quad (1) \]

\[ H^{\mathrm{O}}_{\mathrm{el}} = \varepsilon_x n_{p_x} + \varepsilon_y n_{p_y} + \varepsilon_z n_{p_z} \quad (2) \]

\[ H^{\mathrm{Ru}}_{\mathrm{el}} = \sum_i \left( \varepsilon_{yz}\, n_{i d_{yz}} + \varepsilon_{zx}\, n_{i d_{zx}} + \varepsilon_{xy}\, n_{i d_{xy}} \right) \quad (3) \]
Eq. (1) is the Ru-O hopping along a given symmetry direction, e.g. the x-axis; t_{d_α,p_β} is the hopping amplitude, α and β are orbital indices running over the three orbitals in the t 2g sector, and d†_{iασ} is the creation operator of an electron with spin σ at site i in orbital α. Here, we include all the hopping terms which are allowed according to the Slater-Koster rules, assuming that the Ru-O bond can form an angle θ with the x-axis due to the rotation of the octahedra around the c-axis. Eqs. (2) and (3) describe the orbital-dependent on-site energy terms, which take into account the offset between the occupied orbitals of O and Ru. In particular, Eq. (3) includes the on-site crystal-field splitting of the t 2g manifold in the octahedral environment, which can be expressed in terms of the amplitude ∆ CF, with ∆ CF = (ε xy − ε z ). For flat octahedra below the structural transition temperature of Ca 2 RuO 4, ∆ CF is negative. We also consider the possibility of a small orthorhombic splitting δ ort of the d xz , d yz orbitals, by assuming that ε yz = ε z + δ ort and ε zx = ε z − δ ort. For interacting electrons, we limit ourselves to the local Hamiltonian H^{Ru}_{el-el} [40,41,50] at the Ru sites, which includes the complete Coulomb interaction projected onto the t 2g subspace. This is given by the intra-orbital U and the inter-orbital Coulomb and exchange elements, U′ and J H. We assume a rotationally invariant condition for the Coulomb amplitudes, so that U = U′ + 2J H and J′ = J H:

\[ H^{\mathrm{Ru}}_{\mathrm{el\text{-}el}} = U \sum_{i,\alpha} n_{i\alpha\uparrow} n_{i\alpha\downarrow} - 2J_H \sum_{i,\alpha<\beta} \mathbf{S}_{i\alpha} \cdot \mathbf{S}_{i\beta} \; + \quad (4) \]

\[ + \left( U' - \frac{J_H}{2} \right) \sum_{i,\alpha<\beta} n_{i\alpha} n_{i\beta} + J_H \sum_{i,\alpha<\beta} d^{\dagger}_{i\alpha\uparrow} d^{\dagger}_{i\alpha\downarrow} d_{i\beta\uparrow} d_{i\beta\downarrow} . \quad (5) \]
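For concreteness, the following self-contained Python sketch (not the production many-body code used in the paper) builds the single-site Kanamori interaction of Eqs. (4)-(5) numerically with Jordan-Wigner operators and checks that it is Hermitian. The U and J_H values are illustrative choices from the ranges quoted below, and the α<β pair-hopping sum is explicitly hermitized in the code.

```python
import numpy as np
from functools import reduce

# Minimal numerical sketch of the local Kanamori interaction for one Ru site
# (three t2g orbitals, six spin-orbitals), built with Jordan-Wigner operators.
U, JH = 2.0, 0.5
Up = U - 2 * JH  # rotational invariance: U = U' + 2*J_H

I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation

def c(k, n=6):  # Jordan-Wigner annihilation operator for spin-orbital k
    return reduce(np.kron, [Z] * k + [a] + [I2] * (n - k - 1))

idx = lambda orb, spin: 2 * orb + spin   # orbital in {0,1,2}=(yz,zx,xy), spin in {0,1}
cs = [c(k) for k in range(6)]
n_op = [ck.conj().T @ ck for ck in cs]

def spin(orb):  # (Sx, Sy, Sz) operators for one orbital
    up, dn = cs[idx(orb, 0)], cs[idx(orb, 1)]
    sp = up.conj().T @ dn                 # S+ = c†_up c_dn
    sz = 0.5 * (n_op[idx(orb, 0)] - n_op[idx(orb, 1)])
    return 0.5 * (sp + sp.conj().T), -0.5j * (sp - sp.conj().T), sz

H = np.zeros((64, 64), complex)
for o in range(3):                        # intra-orbital Coulomb term
    H += U * n_op[idx(o, 0)] @ n_op[idx(o, 1)]
for o1 in range(3):                       # inter-orbital terms, alpha < beta
    for o2 in range(o1 + 1, 3):
        n1 = n_op[idx(o1, 0)] + n_op[idx(o1, 1)]
        n2 = n_op[idx(o2, 0)] + n_op[idx(o2, 1)]
        H += (Up - JH / 2) * (n1 @ n2)
        H -= 2 * JH * sum(s1 @ s2 for s1, s2 in zip(spin(o1), spin(o2)))
        pair = (cs[idx(o1, 0)].conj().T @ cs[idx(o1, 1)].conj().T
                @ cs[idx(o2, 0)] @ cs[idx(o2, 1)])
        H += JH * (pair + pair.conj().T)  # pair hopping, hermitized

print(np.allclose(H, H.conj().T))  # True: the interaction is Hermitian
```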
Moreover, we consider the spin-orbit coupling H^{Ru}_{SOC},

\[ H^{\mathrm{Ru}}_{\mathrm{SOC}} = \lambda \sum_{i\alpha,\sigma} \sum_{\beta,\sigma'} d^{\dagger}_{i\alpha\sigma} \left( \mathbf{l}_{\alpha\beta} \cdot \mathbf{s}_{\sigma\sigma'} \right) d_{i\beta\sigma'} \quad (6) \]

where λ is the spin-orbit coupling strength and (l αβ · s σσ′) are the matrix elements of the atomic SOC in the t 2g basis. Note that the t 2g orbitals have an effective orbital momentum l = 1, whose components in the basis (d yz , d xz , d xy ) can be expressed as (l k ) pq = iε kpq. For the examined cluster with two ruthenium ions Ru 1 and Ru 2 and one oxygen atom O, the total Hamiltonian then reads:

\[ H = H^{[x]}_{\mathrm{Ru}_1-\mathrm{O}} + H^{[x]}_{\mathrm{Ru}_2-\mathrm{O}} + H^{\mathrm{O}}_{\mathrm{el}} + H^{\mathrm{Ru}}_{\mathrm{el}} + H^{\mathrm{Ru}}_{\mathrm{el\text{-}el}} + H^{\mathrm{Ru}}_{\mathrm{SOC}} . \quad (7) \]

For the present analysis we adopt material-specific values: λ = 0.075 eV, U in the range [2.0, 2.2] eV, and J H in the range [0.35, 0.5] eV are taken as a reference. Similar values of ∆ CF, U and J H have been used for calculations of electronic spectra in CRO, and the ratio g = ∆ CF /(2λ) is typically considered to lie in the range ∼[1.5, 2] for modelling the spin excitations observed by neutron scattering [27,28,51,52]. For the hopping amplitudes, we assume that the basic p − d hopping amplitude in the tetragonal (θ = 0) symmetry has the value t 0 p,d = 1.5 eV.
Let us describe the methodology for investigating the consequences of a time-dependent electric field that is switched off after a given time interval. In the presence of an applied voltage, the effect of the external electric field can be incorporated in the microscopic model by the standard Peierls substitution applied to the hopping matrix elements,

\[ t_{d_\alpha,p_\beta}(t) = t_{d_\alpha,p_\beta} \exp\left[ -i \frac{e}{\hbar} \int_{\mathbf{r}_{\mathrm{Ru}}}^{\mathbf{r}_{\mathrm{O}}} \mathbf{A}(t) \cdot d\mathbf{r} \right] \quad (8) \]
where r O and r Ru are the positions of the O and Ru atoms, e is the electron charge and ħ the reduced Planck constant, while the vector potential is related to the electric field by E(t) = −∂ t A(t).
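As a minimal illustration (dimensionless units with e = ħ = 1; the ramp shape, bond length and parameter values are our assumptions, not the exact simulation input), the sketch below encodes a smooth ramp-and-quench profile for A(t), of the kind described next, and the Peierls phase it imprints on the Ru-O hopping of Eq. (8).

```python
import numpy as np

# Minimal sketch: ramp-and-quench vector potential A(t) and the resulting
# time-dependent hopping phase. Time is measured in units of tau_0 = 100 ps,
# so t_Q = 8 corresponds to 0.8 ns and t_off = 1 to 0.1 ns.

t_Q, t_off = 8.0, 1.0     # quench time and switch-off interval
d_RuO = 1.0               # Ru-O bond length (absorbed into the units of A)

def A_of_t(t, A_max):
    """0 -> A_max with a smooth cubic ramp for t < t_Q, then a fast quench to 0."""
    if t >= t_Q:
        return A_max * max(0.0, 1.0 - (t - t_Q) / t_off)
    s = min(max(t / t_Q, 0.0), 1.0)
    return A_max * s * s * (3.0 - 2.0 * s)   # smooth-step interpolation

def hopping(t, t0, A_max):
    """Time-dependent Ru-O hopping t(t) = t0 * exp(-i * A(t) * d_RuO)."""
    return t0 * np.exp(-1j * A_of_t(t, A_max) * d_RuO)

# The electric field is E(t) = -dA/dt; a crude finite-difference check:
dt = 1e-3
E = -(A_of_t(4.0 + dt, 1.0) - A_of_t(4.0 - dt, 1.0)) / (2 * dt)
print(E, abs(hopping(4.0, 1.5, 1.0)))  # field during the ramp; |t| stays 1.5
```

The Peierls factor is a pure phase, so the magnitude of the hopping is untouched and only its phase carries the field.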
The electric field in this formalism corresponds to a time-dependent deformation of the Hamiltonian, and the present approach avoids dealing with electrodes in the system. Assuming that the electric field is static, lies in the Ru-O plane and has only one projection along the Ru-O-Ru axis, one can describe the evolution of the ground state by introducing a scalar vector potential A(t). We model the quench behavior of the electric field by assuming the time profile for A(t) displayed in Fig. 4(a) (main text). In the time interval preceding the quench, t < t Q, A(t) grows from zero to a maximum value, showing a super-linear dependence in time. Specifically, it is obtained as a cubic polynomial interpolation between linear functions, where the strength of the electric field is gradually increased up to an absolute value E max. In order to explore different coupling regimes compatible with the experimental values of the applied voltage, we considered several cases, parametrized by the constant η = E max /E M, with E M = 0.01 eV/Å and E max in the range [10 −4 , 10 −2 ] eV/Å. At t Q, of the order of 0.8 ns, A is suddenly reduced to zero over a time interval of 0.1 ns. From a methodological point of view, we need to solve the time-dependent Schrödinger equation, iħ ∂/∂t |Ψ(t)⟩ = H(t)|Ψ(t)⟩, which rules the time evolution of the quantum system at zero temperature, starting from the ground state of the Hamiltonian obtained by exact diagonalization. Due to the large dimension of the Hilbert space, the time dynamics of the many-body ground state is performed by means of the Crank-Nicolson method, which guarantees a unitary time evolution, where the evolved wave function over an infinitesimal interval is expressed as
\[ |\Psi(t+\Delta t)\rangle = \exp\left[ -\frac{i}{\hbar} \int_t^{t+\Delta t} H(t')\, dt' \right] |\Psi(t)\rangle \approx \frac{1 - i\,\frac{\Delta t}{2\hbar}\, H(t+\Delta t/2)}{1 + i\,\frac{\Delta t}{2\hbar}\, H(t+\Delta t/2)}\, |\Psi(t)\rangle . \quad (9) \]
Hence, by means of Eq. (9) we determine the time-dependent evolution of the ground state by discretizing the time interval. Here, the time step is taken as dt = 1.0 × 10 −2 ħ/t 0 p,d, with t 0 p,d being the amplitude of the p-d π hybridization hopping process for θ = 0. This choice of time step is small enough to guarantee convergence of the solution. Finally, we describe the out-of-equilibrium dynamics of the on-site orbital occupancy of the d-orbitals in the ground state following the quench by calculating the time-dependent expectation values of the electron density n xy in the d xy orbital and the averaged density ½(n xz + n yz ) in the (d xz , d yz ) orbitals. These quantities are the most relevant for identifying the modification of the orbital configuration after quenching the applied electric field.
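A compact sketch of such a Crank-Nicolson propagation is given below (a generic illustration for an arbitrary small Hamiltonian matrix, not the full multi-orbital cluster code; the toy two-level Hamiltonian and observable are hypothetical).

```python
import numpy as np

# Minimal sketch: one Crank-Nicolson step of Eq. (9) for a (possibly
# time-dependent) Hamiltonian matrix, plus the expectation value of an
# occupation operator along the evolution.

def cn_step(psi, H_mid, dt, hbar=1.0):
    """Propagate |psi> by dt using H evaluated at the midpoint t + dt/2."""
    ident = np.eye(len(psi))
    lhs = ident + 1j * dt / (2 * hbar) * H_mid
    rhs = ident - 1j * dt / (2 * hbar) * H_mid
    return np.linalg.solve(lhs, rhs @ psi)   # unitary to O(dt^3)

def evolve(psi0, H_of_t, n_obs, dt, n_steps):
    """Return <n>(t) for the density operator n_obs during the evolution."""
    psi, occ = psi0.astype(complex), []
    for k in range(n_steps):
        psi = cn_step(psi, H_of_t((k + 0.5) * dt), dt)
        occ.append(np.real(psi.conj() @ n_obs @ psi))
    return np.array(occ)

# Toy usage: a 2-level system with an off-diagonal coupling quenched at t = 5.
H_of_t = lambda t: np.array([[0.0, 0.3 if t < 5.0 else 0.0],
                             [0.3 if t < 5.0 else 0.0, 1.0]])
n_obs = np.diag([1.0, 0.0])
occ = evolve(np.array([1.0, 0.0]), H_of_t, n_obs, dt=0.01, n_steps=1000)
print(occ[0], occ[-1])
```

Because each step applies a Cayley transform of H, the norm of |Ψ⟩ is conserved to machine precision, which is what makes the long quench dynamics stable.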
V. DENSITY FUNCTIONAL THEORY FOR CRO SUPERLATTICE
We have performed DFT calculations using the Vienna ab-initio simulation package (VASP) [53][54][55]. The core and valence electrons were treated within the projector augmented wave (PAW) [56] method with a cutoff of 480 eV for the plane-wave basis. We have used the PBEsol exchange-correlation functional [57], a revised Perdew-Burke-Ernzerhof (PBE) functional that improves the equilibrium properties of solids. PBEsol+U is the approach that we have followed to take into account the correlations associated with the Ru-4d states. We have considered U = 3 eV for the antiferromagnetic insulating phase of ruthenates [42,58], and regarding the Hund coupling we have used the value J H = 0.15 U, in agreement with approaches based on the constrained random phase approximation for 4d electrons [59]. The values of the lattice constants are a S = 5.3945 Å, b S = 5.5999 Å, c S = 11.7653 Å in the S-Pbca phase and a L = 5.3606 Å, b L = 5.3507 Å, c L = 12.2637 Å in the L-Pbca phase [60]. To simulate the stripe phase, we built a superlattice composed of four RuO 2 layers, with two layers in the L- and two layers in the S-phase stacked along the c-axis. The lattice constants of the superlattice are obtained by averaging the lattice parameters of the bulk: a superlattice = (a S + a L )/2, b superlattice = (b S + b L )/2 and c superlattice = c S + c L. An 11×11×4 k-point grid has been used for the bulk [61], while an 11×11×2 k-point grid has been used for the superlattice.
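As a quick arithmetic check of this cell construction (values copied from the text), the superlattice constants follow directly:

```python
# S-Pbca and L-Pbca bulk lattice constants, in Angstrom (from the text above).
a_S, b_S, c_S = 5.3945, 5.5999, 11.7653
a_L, b_L, c_L = 5.3606, 5.3507, 12.2637

a_sl = (a_S + a_L) / 2    # in-plane constants: average of the two bulk phases
b_sl = (b_S + b_L) / 2
c_sl = c_S + c_L          # out-of-plane: the two phases are stacked along c

print(a_sl, b_sl, c_sl)   # ~5.378, ~5.475, ~24.029 Angstrom
```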
FIG. 1. High Angle Annular Dark Field (HAADF-STEM) image of the (a) low- and (b) high-temperature structural phase of CRO, respectively. (d) The histogram of the c lattice constant for low and high temperature; the lattice constant is determined by fitting the atomic positions of the Ru atoms with a 2D Gaussian function, from which the lattice constant can be calculated. (e) Similar to (d), but showing the a lattice constant. (c,f) The evolution of the two lattice parameters as a function of temperature when heating and cooling the specimen; the lattice constants are determined from the NBED experiments. (g,h) The lattice constants as a function of the applied voltage; the inset of panel (h) shows the sequence of applied voltages. The setup of the contacts between the electrodes and the sample is reported in the Extended Figure 1. The applied voltage leads to an electric field that is oriented along the b axis.
FIG. 2. (a-c) The real space map of the c lattice parameter after the voltage is quenched to zero amplitude for three different orientations of the electric field. The orientation of the crystal with respect to the electric field is indicated in each panel; the inset images show the average diffraction pattern. (d-f) The histograms of the lattice parameter at zero voltage and maximum applied voltage indicate the distribution of the lattice-parameter amplitude for the corresponding voltage configurations. We find that after the quench the distribution exhibits a bimodal lineshape that reflects the occurrence of stripes or domains with unit cells having short and long c lattice parameters. The geometry of the electrical contacts corresponds to an open circuit, with the sample gated only on one side; the details of the electrical setup are reported in the Extended Figure 1. The bottom side of panels (a-c) corresponds to the region of contact between the sample and the electrode through which the electric field is applied. The white region on the left side of panels (a) and (b) refers to the interface with the vacuum at the boundary of the sample.
Acknowledgement
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 823717 - ESTEEM3. The Merlin camera used in the experiment received funding from the FWO-Hercules fund G0H4316N 'Direct electron detector for soft matter TEM'. C. A. and G. C. are supported by the Foundation for Polish Science through the International Research Agendas program co-financed by the European Union within the Smart Growth Operational Programme. C. A. and G. C. acknowledge access to the computing facilities of the Interdisciplinary Center of Modeling at the University of Warsaw, Grants No. GB84-0, GB84-1 and GB84-7, and the Poznan Supercomputing and Networking Center, Grant No. 609. C. A. and G. C. acknowledge the CINECA award under the ISCRA initiative, IsC85 "TOP-MOST" Grant, for the availability of high-performance computing resources and support. We acknowledge A. Guarino and C. Elia for providing support with the electrical characterization of the sample. M.C., R.F., and A.V. acknowledge support from the EU's Horizon 2020 research and innovation program under Grant Agreement No. 964398 (SUPER-GATE).
FIG. 4. (a) The electric field is introduced via a time-dependent potential A(t), with E(t) = −∂ t A(t), that is switched off after a characteristic time interval τ 0. The η parameter sets the strength of the electric field and is defined as E max /E M, where E max is the maximum absolute value of E(t) = −∂ t A(t) and E M = 0.01 eV/Å. Time is scaled in units of τ 0 = 100 picoseconds. The details of the model parameters are reported in the Methods section. (b-g) Time distribution of the density of the d xy and d γz orbitals at the ruthenium site. The distribution is evaluated in the time interval preceding (left panels (b),(d),(f)) or following (right panels (c),(e),(g)) the quench of the electric field, for increasing values of the electric field amplitude through η. Vertical dotted lines mark the value of the orbital densities at zero voltage (left panels (b),(d),(f)), while dashed lines mark the values of the time average of the orbital densities after the quench (right panels (c),(e),(g)). (h) Schematics of the evolution of the orbital population, before and after the quench, in the limit of weak (small η) or strong (large η) electric field.
FIG. 5. (a) Superlattice of CRO composed of four RuO 2 layers, two in the L- and two in the S-phase. Grey, red and blue spheres indicate the Ru, O and Ca atoms, respectively. The blue arrows indicate the displacements δτ of the Ru atoms due to structural relaxation. l1, l2, l3 and l4 label the layers in the superlattice in the L- and S-regions. (b) Energies of the superlattice and the bulk as a function of the displacements δτ of the Ru atoms with respect to the centrosymmetric positions.
Figure S1. (a,b,d) Sketches of the experimental setup where the sample is connected to one electrode; a voltage applied over the two electrodes creates an electric field along the a, b and c crystallographic axes, respectively. (c) An overview scan of the pure electric-field setup, where the sample is clearly visible attached to only one electrode (geometry used in Fig. 1(g-h), Fig. 2 and Fig. 4 of the main text). (e) A sketch of the experimental setup where the sample is connected to one electrode, allowing no current to flow through the specimen. (f) An overview scan of the current-allowed setup, where the sample is clearly visible attached to both electrodes (related to Extended Figure 2(e-h)).
Figure S2. The real space maps of the c lattice parameter when applying the field along the b axis (corresponding to Fig. 2(a) of the main text and Fig. S1(b)): (a) before applying an electric field; (b) while applying the maximum electric field; (c) after quenching the specimen back to 0 V.
Figure S3. (a) Top-view optical image of the specimen used for the STEM measurements. In the center one can notice the locations where the FIB lamellae were extracted along both the a- and b-crystallographic axes; the inset shows the in-plane orientation of the crystal determined through an EBSD measurement. (b) High-angle X-ray diffraction pattern acquired on the single crystal shown in panel (a), showing the absence of impurity peaks. (c) Electric characterization of the single crystal shown in panel (a), representing the measured electric field (E in V/cm) against the applied current density (J in A/cm 2 ), taken at room temperature.

Figure S4. Current-voltage (I-V) curves of the sample with a contact configuration as in Fig. S1e. The measurement is performed before (triangle) and after (circle) establishing the stripe phase corresponding to the pattern in Figure 3g of the main text. The amplitude of the resistance extracted from the linear regime is also reported in the label.
| [] |
[
"Using Transfer Learning for Code-Related Tasks",
"Using Transfer Learning for Code-Related Tasks",
"Using Transfer Learning for Code-Related Tasks",
"Using Transfer Learning for Code-Related Tasks"
] | [
"Antonio Mastropaolo ",
"Nathan Cooper ",
"David Nader Palacio ",
"Simone Scalabrino ",
"Denys Poshyvanyk ",
"Rocco Oliveto ",
"Gabriele Bavota ",
"Antonio Mastropaolo ",
"Nathan Cooper ",
"David Nader Palacio ",
"Simone Scalabrino ",
"Denys Poshyvanyk ",
"Rocco Oliveto ",
"Gabriele Bavota "
] | [] | [] | Deep learning (DL) techniques have been used to support several code-related tasks such as code summarization and bug-fixing. In particular, pre-trained transformer models are on the rise, also thanks to the excellent results they achieved in Natural Language Processing (NLP) tasks. The basic idea behind these models is to first pre-train them on a generic dataset using a self-supervised task (e.g., filling masked words in sentences). Then, these models are fine-tuned to support specific tasks of interest (e.g., language translation). A single model can be fine-tuned to support multiple tasks, possibly exploiting the benefits of transfer learning. This means that knowledge acquired to solve a specific task (e.g., language translation) can be useful to boost performance on another task (e.g., sentiment classification). While the benefits of transfer learning have been widely studied in NLP, limited empirical evidence is available when it comes to code-related tasks. In this paper, we assess the performance of the Text-To-Text Transfer Transformer (T5) model in supporting four different code-related tasks: (i) automatic bug-fixing, (ii) injection of code mutants, (iii) generation of assert statements, and (iv) code summarization. We pay particular attention in studying the role played by pre-training and multi-task fine-tuning on the model's performance. We show that (i) the T5 can achieve better performance as compared to state-of-the-art baselines; and (ii) while pre-training helps the model, not all tasks benefit from a multi-task fine-tuning. Index Terms-Deep Learning, Empirical Software Engineering. • A. Mastropaolo is with SEART | 10.1109/tse.2022.3183297 | [
"https://export.arxiv.org/pdf/2206.08574v1.pdf"
] | 249,718,125 | 2206.08574 | 14eaf34e4e7aa14eaf6130a4642cac7d357d94c8 |
Using Transfer Learning for Code-Related Tasks
Antonio Mastropaolo
Nathan Cooper
David Nader Palacio
Simone Scalabrino
Denys Poshyvanyk
Rocco Oliveto
Gabriele Bavota
Using Transfer Learning for Code-Related Tasks
Deep learning (DL) techniques have been used to support several code-related tasks such as code summarization and bug-fixing. In particular, pre-trained transformer models are on the rise, also thanks to the excellent results they achieved in Natural Language Processing (NLP) tasks. The basic idea behind these models is to first pre-train them on a generic dataset using a self-supervised task (e.g., filling masked words in sentences). Then, these models are fine-tuned to support specific tasks of interest (e.g., language translation). A single model can be fine-tuned to support multiple tasks, possibly exploiting the benefits of transfer learning. This means that knowledge acquired to solve a specific task (e.g., language translation) can be useful to boost performance on another task (e.g., sentiment classification). While the benefits of transfer learning have been widely studied in NLP, limited empirical evidence is available when it comes to code-related tasks. In this paper, we assess the performance of the Text-To-Text Transfer Transformer (T5) model in supporting four different code-related tasks: (i) automatic bug-fixing, (ii) injection of code mutants, (iii) generation of assert statements, and (iv) code summarization. We pay particular attention in studying the role played by pre-training and multi-task fine-tuning on the model's performance. We show that (i) the T5 can achieve better performance as compared to state-of-the-art baselines; and (ii) while pre-training helps the model, not all tasks benefit from a multi-task fine-tuning. Index Terms-Deep Learning, Empirical Software Engineering. • A. Mastropaolo is with SEART
INTRODUCTION
Several code-related tasks have been recently automated by researchers exploiting Deep Learning (DL) techniques [81]. Several of these works customize DL models proposed in the Natural Language Processing (NLP) field to support code-related tasks, and most of them share one common characteristic: they shape the problem at hand as a text-to-text transformation, in which the input and the output of the model are text strings. For instance, Tufano et al. [78] used an encoder-decoder architecture, commonly adopted in Neural Machine Translation (NMT) [16], [33], [69], to predict code changes usually recommended by reviewers in a code review process. Both the input and output are represented as a stream of tokens (i.e., textual format), with the input being the code submitted for review and the output a revised code implementing changes likely to be required in the code review process. While this is only one concrete example, similar observations hold for techniques automating bug fixing [15], [25], [48], [75], learning generic code changes [73], supporting code migration [52], [53], code summarization [24], [32], [39], [42], code reviews [77], [78], pseudo-code generation [55], code deobfuscation [31], [79], injection of code mutants [76], generation of assert statements [82], clone detection [74], [83], traceability [49] and code completion [5], [11], [17], [34], [35], [70], [84].
Recent years have seen the rise of transfer learning in the field of natural language processing. The basic idea is to first pre-train a model on a large and generic dataset by using a self-supervised task, e.g., masking tokens in strings and asking the model to guess the masked tokens. Then, the trained model is fine-tuned on smaller and specialized datasets, each one aimed at supporting a specific task. In this context, Raffel et al. [60] proposed the T5 (Text-To-Text Transfer Transformer) model, pre-trained on a large natural language corpus and fine-tuned to achieve state-of-the-art performance on many tasks, all characterized by text-to-text transformations.
In our recent work [44] we empirically investigated the potential of a T5 model when pre-trained and fine-tuned to support four code-related tasks also characterized by text-to-text transformations. In particular, we started by pre-training a T5 model using a large dataset consisting of 499,618 English sentences and 1,569,889 source code components (i.e., Java methods). Then, we fine-tuned the model using four datasets from previous work with the goal of supporting four code-related tasks:
Automatic bug-fixing. We used the dataset by Tufano et al. [75], composed of instances in which the "input string" is represented by a buggy Java method and the "output string" is the fixed version of the same method.
Injection of code mutants. This dataset is also by Tufano et al. [76], and features instances in which the input-output strings are reversed as compared to automatic bug-fixing (i.e., the input is a fixed method, while the output is its buggy version). The model must learn how to inject bugs (mutants) in code instead of fixing bugs.
Generation of assert statements in test methods. We used the dataset by Watson et al. [82], composed of instances in which the input string is a representation of a test method without an assert statement and a focal method it tests (i.e., the main production method tested), while the output string encodes an appropriate assert statement for the input test method. Code Summarization. We used the dataset by Haque et al. [24], where input strings are representations of a Java method to summarize, and the output string is a textual summary.
We fine-tuned a single pre-trained T5 model in a multi-task setting on all four tasks, and showed that it is able to achieve better results as compared to the four referenced baselines in all tasks [24], [75], [76], [82]. However, since we only experimented with a pre-trained model fine-tuned in a multi-task setting, questions about the actual advantage offered by transfer learning remained unanswered. In this work, we aim at overcoming such a limitation, which is also typical of several other works in the literature using off-the-shelf pre-trained models like T5 to support code-related tasks (e.g., [61], [87]). Indeed, little effort has been spent on understanding the actual benefits (if any) that transfer learning brings when dealing with code-related tasks. This observation holds for both (i) the pre-training phase, which should provide the model with general knowledge about a language of interest (e.g., Java) being at the core of the tasks to automate (e.g., bug-fixing); and (ii) the multi-task fine-tuning, which should allow the model to exploit knowledge acquired when trained for a specific task (e.g., bug-fixing) also for the automation of other tasks (e.g., generation of assert statements), thus possibly boosting the overall performance on all the tasks. Besides the expected positive impact on performance, pre-training and multi-task fine-tuning are also useful in real-life scenarios in which the training data for a particular task of interest is scarce (e.g., when manually labeled instances are needed) [63]. Pre-training the model in an unsupervised setting and/or fine-tuning it on other related tasks for which more training data is available can unlock the possibility of using deep learning models also for tasks characterized by scarcity of training data.
In this paper, we extend our previous work [45] by carefully assessing the impact of both pre-training and multi-task fine-tuning on the T5 performance. In particular, we assess the performance of the T5 in the following scenarios:
• No Pre-training: We do not run any pre-training step.
We directly fine-tune four different T5 models, each one supporting one of the four tasks we experiment with.
• Pre-training single task: We first pre-train the T5 model on the dataset presented in Table 1. Then, starting from it, we fine-tune four models, one for each single task.
• Pre-training Multi-Task: Lastly, we fine-tune the pre-trained model using a multi-task learning framework in which we train a single model to support all four code-related tasks. We experiment with two different multi-task fine-tunings: (i) the first is the one used in our original paper [45], in which the percentage of training instances from each of the four tasks is proportional to the size of their training dataset; (ii) the second in which the percentage of training instances is the same for all four tasks (i.e., 25% per task). In total, this resulted in the training, hyperparameter tuning, and testing of ten different models, as enumerated in the sketch below. Note that the choice of the four tasks subject of our study (i.e., bug-fixing, mutants injection, asserts generation, and code summarization) is dictated by the goal of experimenting with tasks that use, represent, and manipulate code in different ways. In particular, we include in our study tasks aimed at (i) transforming the input code with the goal of changing its behavior (bug-fixing and mutants injection); (ii) "comprehending code" to verify its behavior (asserts generation); and (iii) "comprehending code" to summarize it in natural language (code summarization). Also, following what has been done in the original datasets from previous work, the four tasks involve abstracted source code (bug-fixing [75], mutants injection [76], and asserts generation [82]), raw source code (asserts generation [82] and code summarization [24]), and natural language (code summarization [24]). Such a mix of tasks helps in increasing the generalizability of our findings.
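To make the resulting experimental grid explicit, the ten configurations can be enumerated as follows. This is a minimal sketch of the study design, not code from our replication package; the configuration labels are illustrative.

```python
# Minimal sketch of the experimental grid described above (labels are ours).
TASKS = ["bug_fixing", "mutants_injection", "assert_generation", "code_summarization"]

configs = []
# (1) No pre-training: one fine-tuned model per task.
configs += [("no_pretraining", task) for task in TASKS]
# (2) Pre-training + single-task fine-tuning: one model per task.
configs += [("pretraining_single_task", task) for task in TASKS]
# (3) Pre-training + multi-task fine-tuning: one model per sampling strategy.
configs += [("pretraining_multi_task", strategy)
            for strategy in ("proportional_sampling", "balanced_sampling")]

assert len(configs) == 10  # 4 + 4 + 2 = ten models trained, tuned, and tested
```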
We also perform a novel analysis of our dataset aimed at assessing the generalizability of our models by looking at the level of data snooping among our training and test datasets.
Our results confirm that the T5 can substantially boost the performance on all four code-related tasks. For example, when the T5 model is asked to generate assert statements on raw source code, ∼70% of test instances are successfully predicted by the model, against the 18% of the original baseline [82]. Also, we show that the pre-training is beneficial for all tasks, while the multi-task fine-tuning does not consistently help in improving performance. Finally, our dataset analysis confirms the generalizability of the tested models. The code and data used in this work are publicly available [2].
BACKGROUND AND RELATED WORK
In recent years, DL techniques have been increasingly used to support software engineering (SE). The activities commonly supported by state-of-the-art approaches include software maintenance and software testing [86], and most of the proposed approaches target the source code [81]. While available approaches support a plethora of concrete SE tasks [81], [86], in this section we focus on the ones we target in our study: automated bug-fixing, injection of code mutants, generation of assert statements in test methods, and code summarization. We discuss in detail the techniques we use as baselines for each task. A broader literature review on the topic is available in two recent surveys by Yang et al. [86] and Watson et al. [81].
Automatic Bug-Fixing
Many techniques have been proposed for the automatic fixing of software bugs. Several of them [7], [13], [20], [21], [38], [54], [58], [66], [85] rely on the redundancy assumption, claiming that large programs contain the seeds of their own repair. Such an assumption has been verified by at least two independent studies [9], [43]. Automated bug-fixing techniques based on DL can rely on different levels of code abstraction. Word tokenization is a commonly used one, even if higher-level abstractions (e.g., AST-based) allow to achieve better results [51].
Mesbah et al. [48] focus on build-time compilation failures by presenting DeepDelta, an approach using NMT to fix the build. The input is represented by features characterizing the compilation failure (e.g., kind of error, AST path, etc.). As output, DeepDelta provides the AST changes needed to fix the error. In the presented empirical evaluation, DeepDelta correctly fixed 19,314 out of 38,788 (50%) compilation errors.
Chen et al. [15] present SequenceR, a sequence-to-sequence approach trained on over 35k single-line bug-fixes. SequenceR takes as input the buggy line together with relevant code lines from the buggy class (abstract buggy context). The output of the approach is the recommended fix for the buggy line. The approach, tested on a set of 4,711 bugs, was able to automatically fix 950 (∼20%) of them. Similar approaches have been proposed by Hata et al. [25] and Tufano et al. [75]. The latter is the one we compare our approach with and, thus, we describe it in more detail.
Tufano et al. [75] investigate the performance of an NMT-based approach in the context of automatic bug-fixing. They train an encoder-decoder model on a set of bug-fix pairs (BFPs), meaning pairs of strings in which the first one (input) represents a Java method that has been subject to a bug-fixing activity, and the second one (target) represents the same Java method once the bug was fixed. To build this dataset, the authors mined ∼787k bug-fixing commits from GitHub, from which they extracted ∼2.3M BFPs. After that, the code of the BFPs is abstracted to make it more suitable for the NMT model (i.e., to reduce the vocabulary of terms used in the source code identifiers and literals). The abstraction process is depicted in Fig. 1 [75]. The top part of the figure represents the raw source code to abstract. The authors use a Java lexer and a parser to represent each method as a stream of tokens, in which Java keywords and punctuation symbols are preserved and the role of each identifier (e.g., whether it represents a variable, method, etc.) as well as the type of each literal is discerned.
IDs are assigned to identifiers and literals by considering their position in the method to abstract: the first variable name found will be assigned the ID VAR_1; likewise, the second variable name will receive the ID VAR_2. This process continues for all identifiers as well as for the literals (e.g., STRING_X, INT_X, FLOAT_X). The output of this stage is the code reported in the middle of Fig. 1 (i.e., abstracted code). Since some identifiers and literals appear very often in the code (e.g., variables i, j, literals 0, 1, method names such as size), those are treated as "idioms" and are not abstracted (see bottom part of Fig. 1, idioms are in bold).
Tufano et al. consider as idioms the top 0.005% frequent words in their dataset. During the abstraction a mapping between the raw and the abstracted tokens is maintained, thus allowing to reconstruct the concrete code from the abstract code generated by the model.
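The ID-assignment step of this abstraction can be sketched as follows. This is not the actual src2abs implementation, which relies on a Java lexer and parser; the token categorizer is left as an input, and the idiom set shown is a placeholder.

```python
def abstract_tokens(tokens, category_of, idioms=frozenset({"i", "j", "0", "1", "size"})):
    """Replace identifiers/literals with positional IDs (VAR_1, METHOD_1, ...),
    keeping Java keywords, separators, and idioms untouched, and recording the
    mapping M that allows mapping predictions back to concrete code.
    `category_of(tok)` should return e.g. "VAR", "METHOD", "STRING", or None
    for tokens that must be preserved (keywords and punctuation)."""
    mapping, counters, out = {}, {}, []
    for tok in tokens:
        cat = category_of(tok)
        if cat is None or tok in idioms:
            out.append(tok)  # keywords, separators, and idioms are preserved
            continue
        if tok not in mapping:  # IDs are assigned by order of first appearance
            counters[cat] = counters.get(cat, 0) + 1
            mapping[tok] = f"{cat}_{counters[cat]}"
        out.append(mapping[tok])
    return out, mapping  # abstracted token stream and the mapping M
```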
The set of abstracted BFPs has been used to train and test the approach. The authors build two different sets, namely BFP_small, only including methods having a maximum length of 50 tokens (for a total of 58,350 instances), and BFP_medium, including methods up to 100 tokens (65,455). The model was able to correctly predict the patch for the buggy code in 9% and 3% of cases in the BFP_small and BFP_medium dataset, respectively.
While other works have tackled the automatic bug-fixing problem, the approach by Tufano et al. has been tested on a variety of different bugs, rather than on specific types of bugs/warnings (e.g., only single-line bugs are considered in [15], while compilation failures are addressed in [48]).
Thus, we picked it as a representative DL technique for automatic bug-fixing and we use the two datasets by Tufano et al. [75] to fine-tune the T5 model for the "automatic bug-fixing" problem, comparing the achieved performance with the one reported in the original paper.
Injection of Code Mutants
Brown et al. [12] were the first to propose a data-driven approach for generating code mutants, leveraging bug-fixes performed in software systems to extract syntactic-mutation patterns from the diffs of patches. Tufano et al.
[76] built on this concept by presenting an approach using NMT to inject mutants that mirror real bugs. The idea is to reverse the learning process used for fixing bugs [75]: The model is trained to transform correct methods (i.e., the method obtained after the bug-fixing activity) into buggy methods (before the bug-fix). Indeed, the methodology used by the authors is the same used for the bug-fixing task (previously described), including the abstraction process. This is, to date, the only DL-based technique for injecting code mutants. Thus, we use the dataset exploited by Tufano et al.
[76] to fine-tune the T5 model for the problem of "injecting code mutants", comparing the achieved results with the ones reported in the original paper. Specifically, we reused their largest dataset, referred to as GM_ident in the paper, featuring 92,476 training instances, 11,560 used for hyperparameter tuning (evaluation set), and 11,559 used for testing. (A subset of this dataset, named GM_ident-lit, has also been used in the original paper [76] to avoid including in the study bugs requiring the generation of previously unseen literals; we decided to test the T5 model on the most complex and complete dataset.) On this data, the approach by Tufano et al. was able to correctly predict the bug to inject in 17% of cases (1,991).
Generation of Assert Statements in Test Methods
Watson et al. [82] start from the work by Shamshiri et al. [65], who observed that tools for the automatic generation of test cases such as Evosuite [19], Randoop [56] and Agitar [3] exhibit insufficiencies in the automatically generated assert statements.
Thus, they propose ATLAS, an approach for generating syntactically and semantically correct unit test assert statements using NMT. To train ATLAS, the authors mined 2.5M test methods from GitHub with their corresponding assert statement. For each of those test methods, they also identified the focal method, meaning the main production code method exercised by the test. A preprocessing of the dataset has been performed to remove all test methods longer than 1K tokens. Also, test methods requiring the synthesis of one or more unknown tokens for generating the appropriate assert statements have been removed. Indeed, if the required tokens cannot be found in the vocabulary of the test method, they cannot be synthesized when the model attempts to generate the prediction. Finally, all duplicates have been removed from the dataset, leading to a final set of 158,096 Test-Assert Pairs (TAPs). Each method left in the dataset has then been abstracted using the same approach previously described by Tufano et al. [75]. However, in this case the authors experiment with two datasets, one containing raw source code and one abstracted code. ATLAS was able to generate asserts identical to the ones written by developers in 31.42% of cases (4,968 perfectly predicted assert statements) when only considering the top-1 prediction, and 49.69% (7,857) when looking at the top-5 in the abstracted dataset, while performance is lower on the raw dataset (17.66% for top-1 and 23.33% for top-5).
We use the datasets by Watson et al. [82] to fine-tune our T5 model for the "generation of assert statements" problem, and compare the achieved performance with the one in the original paper. Recently, Tufano et al. [72] proposed an approach based on transformers to achieve the same goal. Their results show that such an approach achieves better results than ATLAS [82]. We did not use the approach proposed by Tufano et al. [72] as the main baseline because it is very similar to the one we presented in our conference paper that this paper extends [45].
Code Summarization
Code summarization is one of the mainstream methods for automatic documentation of source code. The proposed summarization techniques fall into two categories. Extractive summarization techniques generate summaries by extracting information from the code components being summarized [23], [50], [64], [68]. On the other hand, abstractive summarization techniques aim at including in the summaries information not directly available in the source code [24], [28], [32], [46], [67]. DL techniques have been used to support for the latter.
Hu et al. [28] use a Deep Neural Network (DNN) to automatically generate comments for a given Java method. The authors mine ∼9k Java projects hosted on GitHub to collect pairs of ⟨method, comment⟩, where "comment" is the first sentence of the Javadoc linked to the method. These pairs, properly processed, are used to train and test the DNN. The authors assess the effectiveness of their technique by using the BLEU-4 score [57], showing the superiority of their approach with respect to the competitive technique presented in [30].
Allamanis et al. [4] use attention mechanisms in neural networks to suggest a descriptive method name starting from an arbitrary snippet of code. Their approach can name a code snippet exactly as a developer would do in ∼25% of cases.
LeClair et al. [39] present a neural model combining the AST source code structure and words from code to generate coherent summaries of Java methods. The approach, tested on 2.1M methods, showed its superiority as compared to the previous works by Hu et al. [28] and Iyer et al. [30].
The approach by Haque et al. [24] is the most recent in the area of DL-aided source code summarization, and it is an improvement of the work by LeClair et al. [39].
It still aims at documenting Java methods through an encoder-decoder architecture but, in this case, three inputs are provided to the model to generate the summary: (i) the source code of the method, as a flattened sequence of tokens representing the method; (ii) its AST representation; and (iii) the "file context", meaning the code of every other method in the same file. The authors show that adding the contextual information as one of the inputs substantially improves the BLEU score obtained by deep learning techniques. The dataset used in the evaluation is composed of 2.1M Java methods paired with summaries. We reuse this dataset for the fine-tuning of the T5 model for the code summarization problem, and compare its performance to the state-of-the-art approach proposed by Haque et al. [24].
TEXT-TO-TEXT-TRANSFER-TRANSFORMER
The T5 model has been introduced by Raffel et al. [60] to support multitask learning in Natural Language Processing (NLP). The idea is to reframe NLP tasks in a unified text-to-text format in which the input and output are always text strings. For example, a single model can be trained to translate across languages and to autocomplete sentences. This is possible since both tasks can be represented in a text-to-text format (e.g., in the case of translation, the input is a sentence in a given language, while the output is the translated sentence). T5 is trained in two phases: pre-training, which allows defining a shared knowledge base useful for a large class of sequence-to-sequence tasks (e.g., guessing masked words in English sentences to learn about the language), and fine-tuning, which specializes the model on a specific downstream task (e.g., learning the translation of sentences from English to German). We briefly overview the T5 model and explain how we pre-trained and fine-tuned it to support the four said code-related tasks. Finally, we describe the decoding strategy for generating the predictions.
An Overview of T5
T5 is based on the transformer model architecture that allows handling a variable-sized input using stacks of self-attention layers. When an input sequence is provided, it is mapped into a sequence of embeddings passed into the encoder. The T5, in particular, and a transformer model [80], in general, offer two main advantages over other state-of-the-art models: (i) it is more efficient than RNNs since it allows to compute the output layers in parallel, and (ii) it is able to detect hidden and long-ranged dependencies among tokens, without assuming that nearest tokens are more related than distant ones. This last property is particularly relevant in code-related tasks since a variable declaration may be distant from its usage. Five different versions of T5 have been proposed [60]: small, base, large, 3 Billion, and 11 Billion. These variants differ in terms of complexity, with the smallest model (T5 small) having 60M parameters against the 11B of the largest one (T5 11B). As acknowledged by the authors [60], even if the accuracy of the most complex variants is higher than that of the less complex models, the training complexity increases with the number of parameters. Considering the available computational resources, we decided to use the simplest T5 small model.
T5 small architectural details. The T5 small architecture is characterized by six blocks for encoders and decoders. The feed-forward networks in each block consist of a dense layer with an output dimensionality (d_ff) of 2,048. The key and value matrices of all attention mechanisms have an inner dimensionality (d_kv) of 64, and all attention mechanisms have eight heads. All the other sub-layers and embeddings have a dimensionality (d_model) of 512.
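Assuming the HuggingFace transformers library as one way to instantiate an equivalent architecture (the original T5 release uses Google's own codebase, so this is only an illustrative stand-in; the vocabulary size is a placeholder):

```python
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    vocab_size=32_000,  # placeholder: the actual size comes from the SentencePiece model
    d_model=512,        # dimensionality of embeddings and all other sub-layers
    d_kv=64,            # inner dimensionality of the key/value matrices
    d_ff=2048,          # output dimensionality of each block's feed-forward dense layer
    num_layers=6,       # six encoder blocks (the decoder defaults to the same depth)
    num_heads=8,        # eight heads per attention mechanism
)
model = T5ForConditionalGeneration(config)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")  # on the order of 60M
```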
Pre-training of T5
In the pre-training phase we use a self-supervised task similar to the one used by Raffel et al. [60], consisting of masking tokens in natural language sentences and asking the model to guess the masked tokens. However, we did not perform the pre-training by only using natural language sentences, since all the tasks we target involve source code. We use a dataset composed of both (technical) natural language (i.e., code comments) and source code. To obtain the dataset for the pre-training we start from the CodeSearchNet dataset [29] which provides 6M functions from open-source code. We only focus on the ∼1.5M methods written in Java, since the four tasks we aim at supporting are all related to Java code and work at method-level granularity (e.g., fixing a bug in a method, generating the summary of a method, etc.).
Then, since for three of the four tasks we support (i.e., automatic bug-fixing [75], generation of assert statements [82], and injection of code mutants [76]) the authors of the original papers used an abstracted version of source code (see Section 2), we used the src2abs tool by Tufano [75] to create an abstracted version of each mined Java method. In the abstraction process, special tokens are used to represent identifiers and literals of the input method. For example, the first method name found (usually the one in the method signature) will be assigned the METHOD_1 token, likewise the second method name (e.g., a method invocation) will be represented by METHOD_2. This process continues for all the method and variable names (VAR_X) as well as the literals (STRING_X, INT_X, FLOAT_X). Basically, the abstract method consists of language keywords (e.g., for, if), separators (e.g., "(", ";", "}") and special tokens representing identifiers and literals. Comments and annotations are removed during abstraction. Note that, since the tool was run on Java methods in isolation (i.e., without providing it the whole code of the projects they belong to), src2abs raised a parsing error in ∼600k of the ∼1.5M methods (due e.g., to missing references), leaving us with ∼900k abstracted methods. We still consider such a dataset as sufficient for the pre-training.
The CodeSearchNet dataset does also provide, for a subset of the considered Java source code methods, the first sentence in their Javadoc. We extracted such documentation using the docstring_tokens field in CodeSearchNet, obtaining it for 499,618 of the considered methods. We added these sentences to the pre-training dataset. This whole process resulted in a total of 2,984,627 pre-training instances, including raw source code methods, abstracted methods, and code comment sentences. In the obtained dataset there could be duplicates between (i) different raw methods that become equal once abstracted, and (ii) comments re-used across different methods. Thus, we removed duplicates, obtaining the final set of 2,672,423 instances reported in Table 1. This is the dataset we use for pre-training the T5 model, using the BERT-style objective function Raffel et al. used in their experiments, consisting of randomly masking 15% of tokens (i.e., words in comments and code tokens in the raw and abstracted code). Finally, since we pre-train and fine-tune the models on a software-specific dataset, we create a new SentencePiece model [37] (i.e., a tokenizer for neural text processing) by training it on the entire pre-training dataset, so that the T5 model can properly handle the Java language and its abstraction. This model implements subword units (e.g., byte-pair encoding, BPE) and a unigram language model [36] to alleviate the open vocabulary problem in neural machine translation. The pre-training of the models has been performed for 250k steps which, using a batch size of 128, results in ∼32M masked instances processed; given the size of the pre-training dataset (see Table 1), this corresponds to ∼12 epochs.
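A sketch of this setup is shown below, using the sentencepiece Python bindings for the tokenizer and a simplified masking routine (masking independent tokens rather than the contiguous spans T5 actually corrupts; the file name and vocabulary size are assumptions):

```python
import random
import sentencepiece as spm

# Train the tokenizer on the entire pre-training dataset, one instance per line
# (raw methods, abstracted methods, and comment sentences).
spm.SentencePieceTrainer.train(
    input="pretraining_corpus.txt",  # assumed file name
    model_prefix="code_sp",
    vocab_size=32_000,               # assumed vocabulary size
    model_type="unigram",
)

def mask_instance(tokens, rate=0.15):
    """BERT-style objective: mask 15% of the tokens; the model must guess them."""
    masked_input, targets = [], []
    for tok in tokens:
        if random.random() < rate:
            masked_input.append(f"<extra_id_{len(targets)}>")  # T5 sentinel token
            targets.append(tok)
        else:
            masked_input.append(tok)
    return " ".join(masked_input), " ".join(targets)
```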
Fine-tuning of T5
We detail the process used to fine-tune the T5 model. Before explaining how the training instances are represented within each fine-tuning dataset, it is important to clarify that both in the pre-training and in the fine-tuning the T5 can handle any sort of training instance as long as it can be formulated as a text-to-text transformation. Indeed, the T5 represents each training dataset as an N × 2 matrix, where N is the number of instances in the dataset and the 2 dimensions allow to express the input text and the expected output text. In the case of pre-training, the input text is an instance (i.e., a raw method, an abstract method, or a Javadoc comment) in which 15% of tokens have been masked, while the output text represents the correct predictions for the masked tokens. In the four downstream tasks, instead, the text-to-text pairs are represented as explained in the following.
Fine-tuning dataset
We describe the datasets we use for fine-tuning the model for the four targeted tasks. The datasets are summarized in Table 2. The number of training steps performed for the different tasks is proportional to the size of their training dataset. Indeed, we aim at ensuring that the same number of "epochs" is performed on each training dataset. Thus, smaller training datasets require a lower number of steps to reach the same number of epochs of larger datasets. In particular, we used 1.75M fine-tuning steps for the multi-task setting (∼90 epochs) and we scaled the others proportionally to reach the same number of epochs (e.g., ∼1.41M for the code summarization task).
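The proportional scaling of the number of steps follows directly from this epoch-equivalence requirement; the computation below is a back-of-the-envelope sketch, where reusing the batch size of 128 from pre-training is our assumption.

```python
BATCH_SIZE = 128     # assumption: same batch size as in pre-training
TARGET_EPOCHS = 90   # the ~90 epochs reached by the 1.75M-step multi-task fine-tuning

def finetuning_steps(train_set_size, epochs=TARGET_EPOCHS, batch=BATCH_SIZE):
    """Steps needed so that a task's model sees its training set `epochs` times."""
    return round(epochs * train_set_size / batch)

# e.g., with roughly 2M training instances for code summarization this yields
# about 1.4M steps, in line with the ~1.41M steps mentioned above.
print(finetuning_steps(2_000_000))
```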
Automatic Bug Fixing (BF). We use the dataset by Tufano et al. [75] composed of triplets BF_m = ⟨m_b, m_f, M⟩, where m_b and m_f are the abstracted versions of the buggy and fixed Java method, respectively, and M represents the mapping between the abstracted tokens and the raw code tokens (e.g., VAR_1 → webServerPort), which allows tracking the output of the model back to source code. The triplets refer to methods with at most 100 tokens and they are split into two sub-datasets: (i) the small version, containing methods with up to 50 tokens, and (ii) the medium version, with methods with at most 100 tokens. We train the model to predict the fixed versions, m_f, given the buggy versions, m_b. Given the presence of two datasets, we divide the BF task into two sub-tasks, BF_small and BF_medium, depending on the size of the involved methods [75].
Injection of Code Mutants (MG).
For the MG task we exploited one of the two datasets provided by Tufano et al. [73]: MG_ident and MG_ident-lit. In both datasets each instance is represented by a triple ⟨m_f, m_b, M⟩, where, similarly to the BF datasets, m_b and m_f are the buggy and fixed version of the snippet, respectively, and M represents the mapping between the abstracted tokens and the code tokens. The first dataset (MG_ident) represents the most general (and challenging) case, in which the mutated version, m_b, can also contain new tokens (i.e., identifiers, types, or method names) not contained in the version provided as input (m_f). MG_ident-lit, instead, only contains samples in which the mutated version contains a subset of the tokens in the non-mutated code. In other words, MG_ident-lit represents a simplified version of the task. For this reason, we decided to focus on the most general scenario and we only use the MG_ident dataset.
Generation of Assertions in Test Methods (AG).
For the AG task we used the dataset provided by Watson et al. [82] containing triplets ⟨T, TM_n, A⟩, where T is a given test case, TM_n is the focal method tested by T, i.e., the last method called in T before the assert [59], and A is the assertion that must be generated (output). For such a task, we use two versions of the dataset: AG_raw, which contains the raw source code for the input (T + TM_n) and the output (A), and AG_abs, which contains the abstracted version of input and output, i.e., src2abs(T + TM_n) and src2abs(A), respectively. These are the same datasets used in the original paper.
Code Summarization (CS). For code summarization, we exploited the dataset provided by Haque et al. [24] containing 2,149,120 instances, in which each instance is represented by a tuple ⟨S, A_S, C_S, D⟩, where S represents the raw source code of the method, A_S is its AST representation, C_S is the code of other methods in the same file, and D is the summary of the method, i.e., the textual description that the model should generate [24]. For this specific task, we consider a variation of the original dataset to make it more coherent with the performed pre-training. In particular, since in the pre-training we did not use any AST representation of code, we decided to experiment with the T5 model in a more challenging scenario in which only the raw source code to summarize (i.e., S) is available to the model. Therefore, the instances of our dataset are represented by tuples ⟨S, D⟩: we train our model to predict D given only S.
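Putting the four dataset descriptions together, every fine-tuning instance reduces to a plain (input, target) string pair, i.e., one row of the N × 2 matrix described earlier. The helper below is a sketch; the task prefixes and field names are our own assumptions for readability, not necessarily the encoding used in the original training scripts.

```python
def to_text_pair(task, instance):
    """Serialize one fine-tuning instance as an (input_text, target_text) pair."""
    if task == "bug_fixing":          # <m_b, m_f, M>: buggy -> fixed (abstracted)
        return "fix bug: " + instance["m_b"], instance["m_f"]
    if task == "mutants_injection":   # <m_f, m_b, M>: fixed -> buggy
        return "inject mutant: " + instance["m_f"], instance["m_b"]
    if task == "assert_generation":   # <T, TM_n, A>: test + focal method -> assert
        return "generate assert: " + instance["T"] + " " + instance["TM_n"], instance["A"]
    if task == "code_summarization":  # <S, D>: raw method -> natural language summary
        return "summarize: " + instance["S"], instance["D"]
    raise ValueError(f"unknown task: {task}")
```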
Decoding Strategy
Once the models have been trained, different decoding strategies can be used to generate the output token streams. T5 allows to use both greedy decoding and Beam-search. When generating an output sequence, the greedy decoding selects, at each time step t, the symbol having the highest probability. The main limitation of greedy decoding is that it only allows the model to generate one possible output sequence (e.g., one possible bug fix) for a given input (e.g., the buggy method).
Beam-search is an alternative decoding strategy previously used in many DL applications [8], [10], [22], [62]. Unlike greedy decoding, which keeps only a single hypothesis during decoding, beam-search of order K, with K > 1, allows the decoder to keep K hypotheses in parallel: At each time step t, beam-search picks the K hypotheses (i.e., sequences of tokens up to t) with the highest probability, allowing the model to output K possible output sequences.
We used Beam-search to provide several output sequences given a single input, and report results with different K values. It is worth noting that having a large K increases the probability that one of the output sequences is correct, but, on the other hand, it also increases the cost of manually analyzing the output for a user (i.e., a developer, in our context).
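As an illustration, the following sketch shows greedy and beam-search decoding using the HuggingFace implementation of T5; the checkpoint and input are placeholders (the models used in this study are the ones pre-trained and fine-tuned as described above, not t5-small).

```python
# Sketch of greedy vs. beam-search decoding (placeholder checkpoint/input).
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
inputs = tokenizer("generate assert: testFoo ( ) { ... }", return_tensors="pt")

# Greedy decoding: a single hypothesis, taking the argmax token at each step t.
greedy = model.generate(**inputs, max_length=64)

# Beam search of order K: K hypotheses kept in parallel, K output sequences.
K = 5
beams = model.generate(**inputs, max_length=64, num_beams=K, num_return_sequences=K)
for seq in beams:
    print(tokenizer.decode(seq, skip_special_tokens=True))
```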
Data Balancing for the Multi-Task Model
The datasets we use for fine-tuning have different sizes, with the one for code summarization dominating the others (see Table 2). This could result in an unbalanced effectiveness of the model on the different tasks. In our case, the model could become very effective in summarizing code and less effective in the other three tasks. However, as pointed out by Arivazhagan et al. [6], there is no free lunch in choosing the balancing strategy when training a multi-task model, with each strategy having its pros and cons (e.g., oversampling of less represented datasets negatively impacts the performance of the most represented tasks). For this reason, we decided to experiment with both strategies, sketched in the code below. In the first strategy, we follow the true data distribution when creating each batch. In other words, we sample instances from the tasks in such a way that each batch during the training has a number of samples proportional to the size of each task's training dataset. For the second strategy, we train a multi-task pre-trained model using a balanced sampling strategy. In other words, we feed the T5 model with batches of data having exactly the same number of samples per task, randomly selected during the fine-tuning.
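The following is a minimal sketch of the two batch-construction strategies, assuming each task's training set is available as a plain Python list (the variable names are illustrative):

```python
import random

def proportional_batch(datasets, batch_size):
    """Sample from the union of all tasks: each task contributes to the
    batch proportionally to its training-set size (in expectation)."""
    pool = [(task, ex) for task, data in datasets.items() for ex in data]
    return random.sample(pool, batch_size)

def balanced_batch(datasets, batch_size):
    """Sample exactly the same number of instances per task."""
    per_task = batch_size // len(datasets)
    batch = []
    for task, data in datasets.items():
        batch += [(task, ex) for ex in random.choices(data, k=per_task)]
    return batch
```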
The results we obtained confirm the findings of Arivazhagan et al. [6]. In particular, when using the first training sampling strategy (i.e., proportional sampling), the performance of the tasks having a large training dataset (i.e., AG abs , AG raw , CS ) had a boost. In contrast, when using the second strategy (i.e., balanced sampling), the performance increases for those tasks whose training dataset is small with, however, a price to pay for the other three tasks. Nonetheless, since the observed differences in performance are not major and each strategy has its pros and cons, we decided to discuss in this paper the results achieved using the proportional sampling schema, as we did in [45].
The results of the proportional sampling are available in our replication package [2].
STUDY DESIGN
We aim at investigating the performance of the T5 model on four code-related tasks: Automatic bug-fixing, Injection of code mutants, Generation of Asserts in Tests and Code Summarization. The focus of our evaluation is on (i) investigating the extent to which transfer learning is beneficial when dealing with code-related tasks, studying the impact on performance of both pre-training and multi-task learning; and (ii) comparing the obtained results with representative state-of-the-art techniques. The context is represented by the datasets introduced in Section 2, i.e., the ones by Tufano et al. for bug fixing [75] and injection of mutants [76], by Watson et al. for assert statement generation [82], and by Haque et al. for code summarization [24]. We aim at answering the following research questions (RQs):
• RQ 1 : What are the performances of the T5 model when supporting code-related tasks? With RQ 1 we aim at understanding the extent to which T5 can be used to automate code-related tasks, investigating the performance achieved by the model on the four experimented tasks. In the context of RQ 1 , we also investigate the impact of transfer learning on performance:
  - RQ 1.1 : What is the role of pre-training on the performances of the T5 model for the experimented code-related tasks? With RQ 1.1 we aim at investigating the boost in performance (if any) brought by pre-training the models on a software-specific dataset.
  - RQ 1.2 : What is the role of multi-task learning on the performances of the T5 model for the experimented code-related tasks? RQ 1.2 analyzes the influence of multi-task learning (i.e., training a single model for all four tasks) on the model's performance.
• RQ 2 : What are the performances of T5 as compared with state-of-the-art baselines? In RQ 2 we compare the performances achieved by the T5 model against the ones achieved by the baseline approaches. In this regard, we run T5 on the same test sets used in the four original papers presenting automated solutions for the code-related tasks we target.
Data Collection and Analysis
As explained in Section 3.3, we experimented with different variants of the T5: (i) no pre-training (i.e., four models, each fine-tuned for one of the supported tasks, without any pre-training); (ii) pre-training single task (i.e., four models, each fine-tuned for one of the supported tasks, with pre-training); and (iii) pre-training multi-task (i.e., one model pre-trained and fine-tuned for all four tasks). These nine models have all been run on the test sets made available in the works presenting our four baselines and summarized in Table 2.
Once we obtained the predictions of the T5 models on the test sets related to the four tasks, we computed the evaluation metrics reported in Table 3. We use different metrics for the different tasks, depending on the metrics reported in the papers that introduced our baselines. Accuracy@K measures the percentage of cases (i.e., instances in the test set) in which the sequence predicted by the model equals the oracle sequence (i.e., perfect prediction). Since we use beam-search, we report the results for different K values (i.e., 1, 5, 10, 25, and 50), as done in [75] (BF) and [82] (AG). Tufano et al. [73] do not report results for K > 1 for the MG task; thus, we only compute K = 1.
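For clarity, Accuracy@K can be computed as in the sketch below, where beam_predictions[i] holds the (up to) K sequences returned by beam search for the i-th test instance; the names are illustrative:

```python
def accuracy_at_k(beam_predictions, oracles, k):
    """Percentage of instances where a top-k candidate equals the oracle."""
    hits = sum(1 for cands, oracle in zip(beam_predictions, oracles)
               if oracle in cands[:k])
    return 100.0 * hits / len(oracles)
```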
BLEU score (Bilingual Evaluation Understudy) [57] measures how similar the candidate (predicted) and reference (oracle) texts are. Given a size n, the candidate and reference texts are broken into n-grams and the algorithm determines how many n-grams of the candidate text appear in the reference text. The BLEU score ranges between 0 (the sequences are completely different) and 1 (the sequences are identical). We use different BLEU-n scores, depending on the ones used in the reference paper of the baseline (see Table 3). For the CS task, we report BLEU-{1, 2, 3, 4} and their geometric mean (i.e., BLEU-A); for the MG task we only report BLEU-A.
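A sketch of the BLEU-n and BLEU-A computation is shown below, using NLTK; whether BLEU-n is computed as an individual or cumulative n-gram score depends on the reference paper of each baseline, and this sketch assumes individual n-gram scores:

```python
from nltk.translate.bleu_score import corpus_bleu

def bleu_scores(references, candidates):
    """references[i] and candidates[i] are token lists for instance i."""
    refs = [[r] for r in references]  # NLTK expects a list of references per instance
    bleu_n = [corpus_bleu(refs, candidates,
                          weights=tuple(1.0 if j == n else 0.0 for j in range(4)))
              for n in range(4)]      # BLEU-1 ... BLEU-4
    bleu_a = (bleu_n[0] * bleu_n[1] * bleu_n[2] * bleu_n[3]) ** 0.25  # geometric mean
    return bleu_n, bleu_a
```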
ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a set of metrics for evaluating both automatic summarization of texts and machine translation techniques in general [41]. ROUGE metrics compare an automatically generated summary or translation with a set of reference summaries (typically, human-produced). We use the ROUGE-LCS metrics based on the Longest Common Subsequence for the CS task [24]. Given two token sequences, X and Y, of length m and n, respectively, it is possible to compute three ROUGE-LCS metrics: the recall R = LCS(X, Y)/m, the precision P = LCS(X, Y)/n, and the F-measure F, computed as the harmonic mean of P and R.
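These metrics follow directly from a standard dynamic-programming LCS over token sequences, as in this minimal sketch:

```python
def lcs_length(x, y):
    """Length of the Longest Common Subsequence of token lists x and y."""
    dp = [[0] * (len(y) + 1) for _ in range(len(x) + 1)]
    for i, xt in enumerate(x, 1):
        for j, yt in enumerate(y, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if xt == yt else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(x)][len(y)]

def rouge_lcs(reference, candidate):
    """ROUGE-LCS recall, precision, F (reference length m, candidate length n)."""
    lcs = lcs_length(reference, candidate)
    r = lcs / len(reference)
    p = lcs / len(candidate)
    f = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return r, p, f
```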
The computed metrics are used to select the best training strategy for the T5 (i.e., no pre-training, pre-training single task, or pre-training multi-task). We also statistically compare the performance of these three strategies for each task using McNemar's test [47], which is a proportion test suitable to pairwise compare dichotomous results of two different treatments. We statistically compare each pair of training strategies in our study (i.e., no pre-training vs pre-training single task, no pre-training vs pre-training multi-task, pre-training single task vs pre-training multi-task) in terms of their Accuracy@1 (i.e., perfect predictions) for each of the four experimented tasks. To compute the test results for two training strategies T1 and T2, we create a confusion matrix counting the number of cases in which (i) both T1 and T2 provide a correct prediction, (ii) only T1 provides a correct prediction, (iii) only T2 provides a correct prediction, and (iv) neither T1 nor T2 provides a correct prediction. We complement McNemar's test with the Odds Ratio (OR) effect size. Also, since we performed multiple comparisons, we adjusted the obtained p-values using Holm's correction [26].
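A sketch of this comparison, assuming boolean vectors marking the perfect predictions of two strategies on the same test set, could look as follows (using statsmodels; the OR is computed here as the ratio of discordant pairs, one common convention, which is an assumption of this sketch):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.multitest import multipletests

def compare_strategies(correct_t1, correct_t2):
    """McNemar's test on the confusion matrix of two strategies' perfect predictions."""
    t1, t2 = np.asarray(correct_t1), np.asarray(correct_t2)
    table = [[int(np.sum(t1 & t2)), int(np.sum(t1 & ~t2))],
             [int(np.sum(~t1 & t2)), int(np.sum(~t1 & ~t2))]]
    result = mcnemar(table, exact=False, correction=True)
    odds_ratio = table[0][1] / table[1][0]  # ratio of discordant pairs
    return result.pvalue, odds_ratio

# Holm's correction over the p-values of all pairwise comparisons:
# adjusted_pvalues = multipletests(pvalues, method="holm")[1]
```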
The best model resulting from this analysis has then been used to compare the T5 with the four baselines, using the performance metrics reported in Table 3. Moreover, we also statistically compare the Accuracy@1 of the T5 and of the baselines using the same procedure previously described (i.e., McNemar's test with the OR effect size). We also perform a complementarity analysis: We define the sets of perfect predictions generated by the T5 (PP^T5_d) and by the baseline (PP^BL_d) with a beam size K = 1. Then, for each task and dataset d we compute three metrics:
Shared_d = |PP^T5_d ∩ PP^BL_d| / |PP^T5_d ∪ PP^BL_d|
OnlyT5_d = |PP^T5_d \ PP^BL_d| / |PP^T5_d ∪ PP^BL_d|
OnlyBL_d = |PP^BL_d \ PP^T5_d| / |PP^T5_d ∪ PP^BL_d|
Shared_d measures the percentage of perfect predictions shared between the two compared approaches on the dataset d, while OnlyT5_d and OnlyBL_d measure the percentage of cases in which the perfect prediction is only generated by T5 or only by the baseline, respectively, on the dataset d.
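With perfect predictions represented as sets of test-instance identifiers, these metrics reduce to simple set operations:

```python
def overlap_metrics(pp_t5, pp_bl):
    """Shared/OnlyT5/OnlyBL fractions for two sets of perfect predictions."""
    union = pp_t5 | pp_bl
    shared = len(pp_t5 & pp_bl) / len(union)
    only_t5 = len(pp_t5 - pp_bl) / len(union)
    only_bl = len(pp_bl - pp_t5) / len(union)
    return shared, only_t5, only_bl
```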
We also present an "inference time" analysis: we compute the time needed to run T5 on a given input. We run such an experiment on a laptop equipped with a 2.3GHz 8-core 9th-generation Intel Core i9 and 16 GB of RAM, using the CPU to run the DL model. We do this for different beam search sizes, with K ∈ {1, 5, 10, 25, 50}. For each K, we report the average inference time (in seconds) on all the instances of each task. Besides that, we also report the training time (in hours) for the nine different models involved in our study, i.e., no pre-training (four models, one for each task), pre-training single task (+4 models), and pre-training multi-task (one model pre-trained and fine-tuned for all four tasks). For the training, we used a 2x2 TPU topology (8 cores) from Google Colab with a batch size of 128 and a sequence length (for both inputs and targets) of 512 tokens.
Finally, we discuss qualitative examples of predictions generated by T5 and by the baselines to give a better idea to the reader about the capabilities of these models in supporting the four code-related tasks.
Hyperparameter Tuning
Before running the T5 models on the test sets, we performed a hyperparameter tuning on the evaluation sets from Table 2, to decide the best configuration to run. This was done for all nine models we built (e.g., with/without pre-training, with/without multi-task learning).
For the pre-training phase, we use the default parameters defined for the T5 model [60]. Such a phase, indeed, is task-agnostic, and hyperparameter tuning would provide limited benefits. Instead, we tried different learning-rate strategies for the fine-tuning phase. Specifically, we tested four different learning-rate schedulers: (i) Constant Learning Rate (C-LR): the learning rate is fixed during the whole training; (ii) Inverse Square Root Learning Rate (ISR-LR): the learning rate decays as the inverse square root of the training step; (iii) Slanted Triangular Learning Rate (ST-LR) [27]: the learning rate first linearly increases and then linearly decays to the starting learning rate; (iv) Polynomial Decay Learning Rate (PD-LR): the learning rate decays polynomially from an initial value to an ending value in the given decay steps. Table 4 reports the specific parameters we use for each scheduling strategy. In total, we fine-tuned 36 models (i.e., nine models with four different schedulers) for 100k steps each. To select the best configuration for each training strategy, we computed the following metrics: for BF and AG, the percentage of perfect predictions achieved on the evaluation set with the greedy decoding strategy (Accuracy@1); for MG, the BLEU score [57]; for CS, BLEU-A, the geometric average of the BLEU-{1,2,3,4} scores [57]. Basically, for each task we adopt one of the evaluation metrics used in the original paper. The complete results of the hyperparameter tuning phase are reported in our replication package [2].
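For illustration, the four schedules can be sketched as follows, with the parameters of Table 4 as defaults; the slanted triangular formula follows Howard and Ruder [27], and the exact implementation in the T5 library may differ in details:

```python
def constant_lr(step, lr=0.001):
    return lr

def inverse_square_root_lr(step, warmup=10_000):
    # Constant during warmup, then decaying as the inverse square root of the
    # step; with warmup = 10,000 this yields the starting rate of 0.01.
    return 1.0 / max(step, warmup) ** 0.5

def slanted_triangular_lr(step, total_steps, lr_max=0.01, cut_frac=0.1, ratio=32):
    # Linear increase up to the cut, then linear decay (Howard & Ruder [27]).
    cut = int(total_steps * cut_frac)
    p = step / cut if step < cut else 1 - (step - cut) / (cut * (1 / cut_frac - 1))
    return lr_max * (1 + p * (ratio - 1)) / ratio

def polynomial_decay_lr(step, total_steps, lr_start=0.01, lr_end=0.001, power=0.5):
    frac = min(step, total_steps) / total_steps
    return (lr_start - lr_end) * (1 - frac) ** power + lr_end
```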
RESULTS DISCUSSION
We discuss our results according to the formulated RQs. Table 5 reports the performance achieved by the different variants of the T5 model that we experimented with. For each task (e.g., Automatic Bug-Fixing) and for each dataset (e.g., BF small ), performance metrics are reported for the three adopted training strategies (i.e., no pre-training, pre-training single task, and pre-training multi-task). For readability reasons, we only report the BLEU-A, but the results of the other BLEU scores (e.g., BLEU-4) are available in our online appendix [2]. The pre-training multi-task setting is the same as used in our ICSE'21 paper [45] that this work extends. Note that for some tasks (e.g., AG raw ) the results reported in Table 5 are different as compared to the ones reported in the ICSE paper. This is due to two changes we performed in our experimental pipeline. First, as compared to the ICSE paper, we updated our scripts to exploit the latest T5 version available as of today (i.e., T5 0.9.2 -https://libraries.io/pypi/t5/0.9.2) and re-executed all of our experiments. Second, in our ICSE paper we lowercased the source code before providing it as input to the T5. However, we realized that when working with Java raw code (see, e.g., the AG raw task), it is important to keep case information, considering the wide adoption of the camelCase naming convention in such a language. Table 6 reports the results of the statistical analysis we performed using McNemar's test [47] to identify statistical differences (if any) in terms of Accuracy@1 when using different training strategies.
Performance of T5 (RQ 1 ) and impact of transfer learning on performance (RQ 1.1 -RQ 1.2 )
Focusing on the Accuracy@1, it is evident that no single training strategy is the best one across all tasks and datasets. In particular: no pre-training works better on the BF small dataset for automatic bug-fixing; pre-training single task works better on the BF medium dataset for automatic bug-fixing, on both datasets related to the generation of assert statements, and for the code summarization task; finally, pre-training multi-task works better for the injection of code mutants. Overall, the pre-training single task strategy seems to be the best-performing one. Indeed, even when it is not the first choice for a given task/dataset, it is the second best-performing training strategy. Also, by looking at Table 6 we can observe that: 1) When pre-training single task is the best strategy, its performance in terms of Accuracy@1 is significantly better (p-value < 0.001) than the second best-performing strategy, with ORs going from 1.13 (for CS ) to 3.39 (AG raw ). This means that the chances of getting a perfect prediction are 13% to 339% higher when using this strategy as compared to the second choice. 2) When pre-training single task is not the best strategy, but the second choice, the difference in Accuracy@1 is not significant when compared to pre-training multi-task for MG ident . The only significant difference is the one in favor of no pre-training on BF small , with an OR of 0.77. For these reasons, in our RQ 2 we will compare the T5 using the pre-training single task strategy against the baselines.
A few observations can be made based on the findings in Table 5. First, the additional pre-training is, as expected, beneficial. Indeed, on five out of the six datasets the T5 performs better with pre-training. Second, the multi-task setting did not help in most cases. Indeed, with the exception of MG ident , for which the performance of pre-training single task and pre-training multi-task is basically the same, the single task setting always performs better. Such a result, while surprising at first sight, can be explained by the diverse types of input/output handled by the models across the four tasks. Indeed, (i) the datasets related to automatic bug-fixing and AG abs include abstracted code instances as input/output; (ii) the datasets used for code mutants and AG raw feature raw code instances as input/output; and (iii) the one for code summarization has raw source code as input and natural language text as output. Basically, given the different formats, transfer learning across different tasks is likely to hinder the model rather than help it.
In contrast, the pre-training dataset features all three input/output representations and thus provides the model with basic knowledge about all of them which, as a result, boosts performance.
While we will discuss more in depth the performance of the T5 model when comparing it to the considered baselines (Section 5.2), it is also worth commenting on the ability of the T5 to generate correct predictions, namely outputs that are identical to the reference ones (e.g., a method summary equal to the one manually written by developers). Quite impressive are the performances achieved on the generation of assert statements, especially on the dataset dealing with raw source code, in which the T5 correctly predicts 68.93% of assert statements with a single guess (75.95% when using five guesses). The Accuracy@1 is instead much lower for the other tasks, ranging from 11.85% (fixing bugs in the most challenging BF medium dataset) up to 28.72% when injecting mutants. Also worth noticing is the 12.02% of code summaries generated by the T5 that are identical to the manually written ones. In the next subsection, together with a comparison of our model with the baselines, we present qualitative examples of predictions generated by the T5.
Competitiveness of the T5 model compared to the baselines (RQ 2 )
We compare the results achieved by the T5 model when using the pre-training single task strategy with the baseline we consider for each task (Table 3). The comparison is depicted in Fig. 2, while Table 8 shows the results of the statistical tests, and Table 10 shows the overlap metrics described in Section 4.1.
Automatic Bug Fixing (BF)
When using T5 for automatically fixing bugs, the accuracy achieved using a greedy decoding strategy (K = 1) differs according to the dataset we consider. For example, the T5 model achieves 15% of perfect predictions on the BF small dataset against the 9% achieved by the baseline, with an improvement of 6 percentage points, while in the most challenging scenario (i.e., BF medium ) our model obtains an improvement of 8 percentage points over the baseline (11% vs 3%). Such improvements are statistically significant (Table 8), with ORs of 2.39 (BF small ) and 6.88 (BF medium ), indicating a higher chance of observing a perfect prediction when using the T5 as compared to the baseline. Worth noticing is that, as the beam width increases, the performance of the T5 and of the baseline gets closer, with the baseline performing better for K = 25 and K = 50 on BF small . Looking at the overlap metrics (Table 10), 25.90% of perfect predictions on BF small and 28.78% on BF medium are shared by the two techniques. The remaining are perfect predictions only with T5 (53.20% on BF small and 36.06% on BF medium ) or only with the baseline (20.90% on BF small and 35.16% on BF medium ). This indicates that the two approaches are complementary for the bug-fixing task, suggesting that further improvements could be possible by exploiting customized ML-based bug-fixing techniques. To further look into this finding, we analyzed the types of "code transformation" that T5 and the baseline were able to learn. With "code transformation" we refer to the Abstract Syntax Tree (AST) operations needed to correctly transform the input code into the target prediction (i.e., the AST operations performed by developers to transform the buggy code into the fixed code). In particular, we used the Gumtree Spoon AST Diff [18] to collect the Delete, Insert, Move, and Update operations performed on the AST nodes when fixing bugs. Then, for each of these operations, we extracted the 5 most popular ones (e.g., the five most popular Delete node operations). These 20 AST-level operations (4 types of operations × 5 most popular for each type) characterize the successful fixing of bugs/injection of code mutants in the three datasets. The column "Oracle" of Table 7 reports such numbers. Then, we took the correct predictions generated by T5 and by the baselines and checked the extent to which those predictions feature the "popular" AST operations that, according to our oracles, are needed to properly fix bugs. Table 7 reports, for both techniques and both datasets (BF small and BF medium ), the number of times the different AST operations were performed by the models.
Given the previously discussed superior performance of T5, it is expected that it managed to correctly perform the needed AST operations more often than the baseline. However, what is interesting is that there are specific types of operations that are not learned by the baseline while they are successfully implemented by T5. This is especially true for less popular operations, such as the Insert ones, which require synthesizing new nodes that were not present in the input AST. In BF medium , four of the top-five AST Insert operations are never applied by the baseline (see Table 7). Similar results are also obtained for the Update operations, while both models work similarly well when the bug-fix mostly requires the deletion of existing AST nodes.
Injection of Code Mutants (MG)
Looking at Fig. 2, we can observe that using T5 to generate mutants allows obtaining more accurate results than the baseline, with the Accuracy@1 improving by 12 percentage points (1,336 additional perfect predictions). The average BLEU score also improves by ∼0.02 on top of the very good results already achieved by the baseline (i.e., 0.77).
Minor improvements in BLEU score can still indicate major advances in the quality of the generated solutions [14]. Also in this case, the differences in terms of Accuracy@1 are statistically significant, with the T5 model being more likely to generate correct solutions (OR = 2.95) as compared to the baseline approach [76] (Table 8).
Differently from the bug-fixing task, for the injection of code mutants the percentage of shared perfect predictions (Table 10) is slightly higher (33%); however, 50.52% of the perfect predictions are generated exclusively by T5, as compared to the 16.48% generated exclusively by the baseline.
Similarly to what has been done in the context of the bug-fixing task, Table 9 reports the top-20 AST-level operations needed to correctly inject mutants in our dataset. Note that, differently from what we observed for the bug-fixing task, injecting mutants mostly requires the insertion of new AST nodes. The trend that we observe is, as expected, the opposite of what we found for the bug-fixing task, because the task is the same but with reversed input/output. Indeed, the baseline seems to correctly predict the most popular Insert operations in the AST, while it almost ignores the rarer Delete ones. T5, instead, covers all top-20 operations.
Generation of Assertions in Test Methods (AG)
T5 achieves much better performance in this task as compared to the baseline. The gap is substantial both with (AG abs ) and without (AG raw ) code abstraction (Fig. 2). With abstraction, the T5 achieves a 56% accuracy at K = 1 against the 31% achieved by ATLAS [82]. When both approaches are asked to generate multiple assert statements (i.e., K = 5, 10, 25, 50), the gap in performance ranges between 13 and 25 percentage points. When using the more challenging non-abstracted dataset AG raw , T5 achieves even better results. In this regard, when T5 is asked to generate only one assert statement (K = 1), the reported accuracy is 51 percentage points higher as compared to the baseline, while for larger K values the gap in performance ranges between 51 and 53 percentage points. The McNemar's test confirms the huge gap in performance between the two techniques, with ORs ranging between 6.19 (AG abs ) and 43.12 (AG raw ).
In terms of overlap, we found a trend similar to the previously discussed task (mutants injection): On AG abs we have 34.92% of perfect predictions shared between the two approaches, while the remaining instances are distributed between the ones only predicted by T5 (58.87%) and the ones only predicted by the baseline (6.21%). The overlap is much smaller on the AG raw dataset, with only 9.56% of the instances correctly predicted by both the approaches, 89.65% of them correctly predicted only by T5, and 0.79% only by the baseline.
Code Summarization (CS)
On this task, T5 achieves a substantial increase in BLEU score as compared to the baseline. When considering the average BLEU (BLEU-A), the improvement is of ∼5 percentage points. On the other hand, it can be noticed that the ROUGE-LCS scores achieved when using T5 are lower than the ones achieved by the baseline (∼5 percentage points lower on the F-measure score). Thus, looking at these metrics, there is no clear winner, but T5 seems to be at least comparable to the baseline. To have something easier to interpret, we compared the two approaches in terms of the number of perfect predictions they generate, despite the fact that such a metric was not used in the original paper [24]. This means counting the comments generated by a technique that are exactly equal to the ones manually written by humans. T5 managed to generate 12.02% of perfect predictions (10,929 instances) against the 3.4% (3,048) of the baseline technique (over 3× better). As expected from previous results, the majority of the perfect predictions for this task can be obtained only using T5 (93.79%). A limited percentage of perfect predictions is shared (4.79%), and a minority of instances can only be predicted through the baseline (1.42%). The McNemar's test highlights a statistically significant difference in terms of Accuracy@1, with an OR of 35.56.
Qualitative Analysis
To give a better idea to the reader about the capabilities of the T5 model in supporting the four code-related tasks, Fig. 3 shows two examples of perfect predictions made by T5 for each task. Each example is bordered with a dashed line and shows (i) the input provided to the model, and (ii) the generated output. In particular, in the case of the bug-fixing, mutants injection, and code summarization tasks, the first line shows the input and the second the output. Concerning the generation of assert statements, the first two lines (i.e., those marked with "//Test method" and "//Focal method") represent the input, while the third line shows the generated assert statement. We highlighted in bold the most relevant parts of the output generated by the model. The bottom part of Fig. 3 also shows some "wrong" predictions (i.e., the output of the model is different from the expected target) for the code summarization task, which we will discuss later on. Concerning the bug-fixing task, in the first example the model adds the break statement to each case of the switch block, thus allowing the program to break out of the switch block after one case block is executed. In the second example, instead, it changes the execution order of a statement as done by developers to fix the bug.
As per the mutants injection, the first example represents an arithmetic operator deletion, while the second is a non-void method call mutation [1]. While these transformations might look trivial, it is worth remembering that they are considered correct since they reproduce real bugs that used to affect these methods. Thus, the model is basically choosing where to mutate and what to mutate in such a way to simulate real bugs. Both examples of correct prediction we report for the generation of assert statements involve the generation of an assert statement including an invocation to the focal method (i.e., the main method tested by the test method). While the first is a rather "simple" assertFalse statement, the second required guessing the expected value (i.e., assertEquals).
Finally, for the code summarization, the two reported examples showcase the ability of T5 to generate meaningful summaries equivalent to the ones manually written by developers. For this task, we also reported in the bottom part of the figure some wrong but still meaningful predictions. In this case, the grey text represents the original summary written by developers, while the bold one has been generated by T5. In both cases, the generated summary is semantically equivalent and even more detailed than the manually written one.
These two examples help in discussing an important limitation of our analysis: While we assume the correct predictions to be the only valuable outputs of T5 and of the experimented baselines, they actually represent a lower bound for their performance. Indeed, there are other predictions that, even if wrong, could still be valuable for developers, such as the two shown for the code summarization task.

Training and Inference Time

Table 11 reports the training time (in hours) for the nine models we trained. On average, the infrastructure we used for training requires 31.5 seconds every 100 training steps which, given our batch size of 128, means that 12,800 training instances can be processed in 31.5 seconds. Of course, multiple passes (usually referred to as epochs) are needed on the dataset during the training. Table 11 shows that (i) the pre-training has a cost of ∼22h that should be added on top of the fine-tuning cost shown for each task; (ii) as expected, the training time increases with the size of the training dataset, with the code summarization task being the most expensive in terms of training time; and (iii) clearly, the multi-task setting requires training the model on all tasks, resulting in the highest training time (175h). Table 12 presents, instead, the results of the inference time analysis (i.e., the time needed to run the model on a given input and obtain the prediction). Such analysis allows understanding the extent to which such a model can be used in practice. Table 12 reports the inference time in seconds for different K values (e.g., with K = 10 the reported time is the one required by the model to generate 10 possible solutions).
Concerning the bug-fixing task, the time needed to generate a fix depends on the dataset, since the complexity of the instances they feature is different. In the BF small dataset, the average inference time ranges between 0.72s (K = 1) and 5.99s (K = 50), while it is larger on the BF medium dataset (1.86s for K = 1 and 20.90s for K = 50). For the injection of code mutants, we observed results comparable to those of BF small : with K = 1 the average inference time is 0.94s, while for K = 50 it is 7.60s. The generation of assert statements is very fast for low values of K (0.73s for AG abs and 0.53s for AG raw with K = 1), while it gets slower for higher values of K (10.24s for AG abs and 5.45s for AG raw with K = 50). Finally, concerning the code summarization task, T5 takes only 0.20s for K = 1 and 1.45s for K = 50 to output code summaries for a method given as input.
Overall, considering that all the targeted tasks do not have strong real-time constraints (e.g., a developer can wait a few seconds for the automated fixing of a bug), the inference times should not hinder the model's applicability in practice. Also, the reported inference times were obtained by running the model on a consumer-level device and by only using CPUs. We also computed the inference time using an Nvidia Tesla P100 GPU equipped with 16GB of VRAM. The achieved results are available in our replication package [2]. In summary, we observed an average decrease in inference time of ∼70% as compared to the one obtained using the CPU.
THREATS TO VALIDITY
Construct validity. Threats to construct validity concern the relationship between theory and observation. We used existing datasets that are popular and used in the community for both pre-training and fine-tuning our model, with minimal additional processing (e.g., removal of duplicates after abstraction in the dataset used for the pre-training). Additionally, we have released all of our code and models in our replication package [2] for verification.
Internal validity. Threats to internal validity concern factors, internal to our study, that could influence its results. Many factors can influence our results: the model architecture, hyperparameter choices, data processing, the data itself, etc. To mitigate these issues, we have adopted methodologies usually employed in DL-based research. Specifically, we performed a detailed analysis of hyperparameter choices, as discussed in Section 4.2. Concerning the pre-training phase, we used the default T5 parameters selected in the original paper [60], since we expect little margin of improvement for such a task-agnostic phase. For the fine-tuning, due to computational feasibility reasons, we did not change the model architecture (e.g., number of layers), but we experimented with different learning-rate schedulers. We are aware that a more extensive calibration would likely produce better results. Finally, we pre-trained the model by masking 15% of tokens (i.e., words in comments and code tokens in the raw and abstracted code) in the ∼2.7M instances from the pre-training dataset. However, we did not experiment with the model after pre-training to verify whether it actually learned the languages of interest (i.e., raw source code, abstracted source code, and technical natural language). To address this limitation, we randomly selected 3k instances from the BF medium test set, both in their abstract and raw representation (6k in total). We also selected 3k code summaries from the CS dataset, obtaining a dataset of 9k instances, equally split across raw source code, abstracted source code, and technical natural language. Note that these are instances that have not been used to pre-train the model and, thus, are unseen for a model only subject to pre-training. We randomly masked 15% of tokens in each of those instances, asking the pre-trained model to predict them. T5 correctly predicted 87,139 out of the 102,711 masked tokens (i.e., 84.8% accuracy). As expected, given the different complexity of the three "languages", T5 achieved a higher accuracy of 90.8% when working on abstracted code, 82.7% on raw code, and 64.6% when guessing tokens from technical language. Overall, such results indicate that the model successfully gathered knowledge about the languages of interest during the pre-training.
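The masking procedure used to probe the pre-trained model can be sketched as follows; for simplicity, this sketch replaces every masked token with a single sentinel, whereas T5's span-corruption objective uses a distinct sentinel per masked span:

```python
import random

def mask_tokens(tokens, mask_rate=0.15, sentinel="<extra_id_0>"):
    """Randomly mask ~15% of the tokens and return (masked input, targets)."""
    n_mask = max(1, round(len(tokens) * mask_rate))
    positions = set(random.sample(range(len(tokens)), n_mask))
    masked = [sentinel if i in positions else tok for i, tok in enumerate(tokens)]
    targets = [tokens[i] for i in sorted(positions)]
    return masked, targets
```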
Also, the quality of the employed datasets can dramatically impact the achieved results. This is because there may be biases making the datasets not representative of the real world. To assess the quality of our datasets, we conducted various analyses around sampling bias and data snooping, as recommended by Watson et al. [81].
To this end, we conducted an exploratory data analysis (EDA), which helps answer questions related to the reliability and quality of our datasets. To accomplish this, we performed a two-fold statistical procedure, analyzing complexity size and token distributions. In the complexity size procedure, we count the number of tokens per dataset and data partition, and then present the relative distribution in log scale. In the token procedure, we concentrate on counting specific tokens by popularity or special interest (e.g., if, assert, or public). The purpose of the EDA is to monitor the size of the datasets and its impact on model performance. The EDA's results can be found in our web appendix [2].
Conclusion validity. Threats to conclusion validity concern the relationship between evaluation and outcome. To this end, we used appropriate statistical procedures, also adopting p-value adjustment when multiple tests were used within the same analysis.
External validity. Threats to external validity are related to the generalizability of our findings. Our study focused on the T5 model on four tasks using six datasets, all of which only involved Java code. While it is unclear how our model would perform if trained on other programming languages, the whole pipeline (excluding the abstraction component) is language-agnostic and can be easily adapted to other languages to evaluate this.
We also performed an analysis of our datasets aimed at assessing the generalizability of our models. This analysis measured the level of data snooping between our datasets' training and test sets and how this impacts our model's results. To accomplish this, we calculated the overlap between our fine-tuning datasets' training and test sets by computing the pairwise Levenshtein Distance [40] between the two sets. With these distances calculated, we computed the correlation between the distances and the performance of our model on the different test sets.
Specifically, we selected a statistically representative sample (confidence level = 95% and confidence interval = 5%) of each training set and calculated the pairwise Levenshtein Distance [40] between it and the entirety of the test set for each fine-tuning dataset. Next, depending on the type of performance metric (Perfect Prediction or BLEU Score), we calculated the correlation between the minimum, median, and maximum distances of all sampled training examples to each test example and the performance of our model on the test set. For the perfect prediction, we use the Point Biserial Correlation (PBC) [71], as it allows comparing binary and continuous data. For the BLEU score, we use the Pearson Correlation [71], since both are continuous values. Table 13 shows the correlation for each dataset. As shown, there exists a negative correlation between the minimum and median distances and model performance, i.e., the model tends to perform worse as the distance between the training and test examples increases. For the maximum distance case, there is instead a positive correlation with perfect prediction performance, i.e., the model tends to perform better the further away the farthest training examples are from the test examples. Such a result may simply be due to specific outliers present in the test set (i.e., instances being very far from the ones in the training set). However, all the correlations we observed are quite low, supporting the generalizability of our models.
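The following sketch illustrates this analysis, with a plain dynamic-programming Levenshtein distance and SciPy's correlation functions; the variable names are illustrative:

```python
import numpy as np
from scipy.stats import pointbiserialr, pearsonr

def levenshtein(a, b):
    """Edit distance between two strings (or token sequences) a and b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def distance_stats(train_sample, test_set):
    """Min/median/max distance from each test instance to the sampled training set."""
    d = np.array([[levenshtein(tr, te) for tr in train_sample] for te in test_set])
    return d.min(axis=1), np.median(d, axis=1), d.max(axis=1)

# perfect: binary vector of perfect predictions -> Point Biserial Correlation
# bleu:    per-instance BLEU scores             -> Pearson Correlation
# r_pp, _   = pointbiserialr(perfect, distance_stats(train, test)[0])
# r_bleu, _ = pearsonr(bleu, distance_stats(train, test)[0])
```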
CONCLUSION
We presented an empirical study aimed at investigating the usage of transfer learning for code-related tasks. In particular, we pre-trained and fine-tuned several variants of the Text-To-Text Transfer Transformer (T5) model with the goal of supporting four code-related tasks, namely automatic bug-fixing, injection of code mutants, generation of assert statements in test methods, and code summarization. We compared the performance achieved by the T5 against state-of-the-art baselines that proposed DL-based solutions to these four tasks.
The achieved results showed that: (i) the pre-training process of the T5, as expected, boosts its performance across all tasks; (ii) the multi-task fine-tuning (i.e., a single model trained for different tasks), instead, does not consistently help in improving performance, possibly due to the different types of "data" manipulated in the four tasks (i.e., raw code, abstracted code, natural language); (iii) in its best configuration, the T5 performs better than the baselines across all four tasks. When looking at the latter finding, it is important to remember that the baselines used for comparison are not pre-trained and, thus, they (i) exploited less training data, and (ii) did not need the additional ∼22 hours of computation required by the pre-training.
Future work will aim at further advancing performance by employing larger versions of the T5. Also, while our results do not support the usage of multi-task learning in code-related tasks, we believe additional investigations are needed in this direction. For example, by only considering a set of tasks all manipulating the same type of data (e.g., all working on raw code), it is possible that the benefits of multi-task learning would emerge.
Fig. 2: Performance of the T5 model against the experimented baselines. [Panels: Automatic Bug Fixing (BF), Generation of Asserts in Tests (AG), Code Summarization (CS).]
Fig. 3: Examples of perfect and alternative predictions. [Panels: Bug-fixing, Mutants generation, Generation of assert statements, Code summarization, and wrong but meaningful predictions for the code summarization task.]
Fig. 1: Abstraction process. [Panels: raw source code, abstracted code, abstracted code with idioms.]
TABLE 1: Datasets used for the pre-training of T5.

Data sources                  Instances
Source code                   1,569,773
Abstracted source code          766,126
Technical natural language      336,524
Total                         2,672,423
TABLE 2: Task-specific datasets used for fine-tuning T5.

Task                           Dataset          Training-set  Evaluation-set  Test-set
Automatic Bug-Fixing           BF small [75]        46,680         5,835        5,835
                               BF medium [75]       52,364         6,546        6,545
Injection of Code Mutants      MG ident [76]        92,476        11,560       11,559
Generation of Asserts in Test  AG abs [82]         126,477        15,809       15,810
                               AG raw [82]         150,523        18,816       18,815
Code Summarization             CS [24]           1,953,940       104,272       90,908
Total                                            2,422,460       162,838      149,472
TABLE 3: Baselines and evaluation metrics for the tasks.

Task                           Baseline  Accuracy@K          BLEU-n           ROUGE-LCS
Automatic Bug-Fixing           [75]      {1, 5, 10, 25, 50}  -                -
Injection of Code Mutants      [76]      {1}                 {A}              -
Generation of Asserts in Test  [82]      {1, 5, 10, 25, 50}  -                -
Code Summarization             [24]      -                   {1, 2, 3, 4, A}  {P, R, F}
TABLE 4: Learning rates tested for hyperparameter tuning.

Learning Rate Type   Parameters
Constant             LR = 0.001
Inverse Square Root  LR_starting = 0.01; Warmup = 10,000
Slanted Triangular   LR_starting = 0.001; LR_max = 0.01; Ratio = 32; Cut = 0.1
Polynomial Decay     LR_starting = 0.01; LR_end = 0.001; Power = 0.5
TABLE 5: Overall results achieved by the T5 model for each task. The best configuration per dataset (by Accuracy@1) is marked with *.

Task / Dataset                 Model Configuration        Accuracy@1  Accuracy@5  Accuracy@10  Accuracy@25  Accuracy@50  BLEU-A
Automatic Bug-Fixing
  BF small                     no pre-training*               16.70%      29.88%       34.37%       39.57%       42.86%  -
                               pre-training single task       15.08%      32.08%       37.01%       42.51%       45.94%  -
                               pre-training multi-task        11.61%      35.64%       43.87%       52.88%       57.70%  -
  BF medium                    no pre-training                10.50%      17.60%       20.53%       24.38%       27.62%  -
                               pre-training single task*      11.85%      19.41%       23.28%       28.60%       32.43%  -
                               pre-training multi-task         3.65%      19.17%       24.66%       30.52%       35.56%  -
Injection of Code Mutants
  MG ident                     no pre-training                25.78%      -            -            -            -       78.26%
                               pre-training single task       28.72%      -            -            -            -       78.69%
                               pre-training multi-task*       28.92%      -            -            -            -       78.29%
Generation of Asserts in Test
  AG raw                       no pre-training                60.95%      59.14%       62.41%       69.05%       71.97%  -
                               pre-training single task*      68.93%      75.95%       77.70%       79.24%       80.22%  -
                               pre-training multi-task        58.60%      66.90%       70.31%       73.19%       74.58%  -
  AG abs                       no pre-training                47.81%      49.60%       55.04%       64.28%       68.57%  -
                               pre-training single task*      56.11%      71.26%       74.32%       76.67%       78.02%  -
                               pre-training multi-task        44.90%      63.40%       68.23%       73.04%       73.12%  -
Code Summarization
  CS                           no pre-training                11.80%      -            -            -            -       24.67%
                               pre-training single task*      12.02%      -            -            -            -       25.21%
                               pre-training multi-task        11.45%      -            -            -            -       24.90%
TABLE 6: McNemar's test (adj. p-value and OR) considering only Accuracy@1 matches as correct predictions.

Task                           Dataset    Compared Configurations                              p-value   OR
Automatic Bug-Fixing           BF small   no pre-training vs pre-training single task          < 0.001   0.77
                                          no pre-training vs pre-training multi-task           < 0.001   0.46
                                          pre-training multi-task vs pre-training single task  < 0.001   1.67
                               BF medium  no pre-training vs pre-training single task          < 0.001   1.56
                                          no pre-training vs pre-training multi-task           < 0.001   0.12
                                          pre-training multi-task vs pre-training single task  < 0.001   8.56
Injection of Code Mutants      MG ident   no pre-training vs pre-training single task          < 0.001   1.51
                                          no pre-training vs pre-training multi-task           < 0.001   1.38
                                          pre-training multi-task vs pre-training single task     0.75   0.99
Generation of Asserts in Test  AG raw     no pre-training vs pre-training single task          < 0.001   3.39
                                          no pre-training vs pre-training multi-task           < 0.001   0.71
                                          pre-training multi-task vs pre-training single task  < 0.001   4.95
                               AG abs     no pre-training vs pre-training single task          < 0.001   2.55
                                          no pre-training vs pre-training multi-task           < 0.001   0.74
                                          pre-training multi-task vs pre-training single task  < 0.001   2.93
Code Summarization             CS         no pre-training vs pre-training single task          < 0.001   1.13
                                          no pre-training vs pre-training multi-task           < 0.001   0.83
                                          pre-training multi-task vs pre-training single task  < 0.001   1.40
TABLE 7: Top-20 AST operations needed to fix bugs in our dataset (see "Oracle" column) and their presence in correct predictions generated by T5 and the baseline.

Delete                             |       BF small              |       BF medium
                                   | Oracle  Baseline [75]   T5  | Oracle  Baseline [75]   T5
Delete TypeAccess at Invocation    |  2,016      402        450  |  1,926      125        250
Delete Invocation at Block         |  1,444      294        326  |  1,315      159        240
Delete TypeAccess at ThisAccess    |    821       92        134  |    598       32         81
Delete VariableRead at Invocation  |    818       51        ...  |    ...      ...        ...
TABLE 8: McNemar's test considering the correct predictions achieved by the T5 model and the baselines when both techniques generate only one prediction (i.e., Accuracy@1).

Task                           Dataset (d)  p-value   OR
Automatic Bug-Fixing           BF small     < 0.001    2.39
                               BF medium    < 0.001    6.88
Injection of Code Mutants      MG ident     < 0.001    2.95
Generation of Asserts in Test  AG abs       < 0.001    6.19
                               AG raw       < 0.001   43.12
Code Summarization             CS           < 0.001   35.56
TABLE 9: Top-20 AST operations needed to inject mutants in our dataset (see "Oracle" column) and their presence in correct predictions generated by T5 and the baseline.

Delete (MG ident)                              Oracle  Baseline [76]    T5
Delete TypeAccess at Invocation                   387        1           30
Delete Return at Block                            327       20           64
Delete FieldRead at BinaryOperator                283        0            7
Delete FieldRead at Invocation                    242        0           19
Delete Invocation at Block                        236        0           15

Insert (MG ident)                              Oracle  Baseline [76]    T5
Insert TypeAccess at Invocation                 6,230    1,125        1,744
Insert Invocation at Block                      3,979      860        1,183
Insert TypeAccess at ThisAccess                 2,219      479          722
Insert VariableRead at Invocation               2,061      245          466
Insert Block at If                              1,795      485          671

Move (MG ident)                                Oracle  Baseline [76]    T5
Move Invocation from Block to Invocation        1,154      225          356
Move Invocation from Return to Invocation         283       55          105
Move Return from Block to Return                  224       58          100
Move Assignment from Block to Assignment          190       26           56
Move Invocation from Invocation to Invocation     129        1           27

Update (MG ident)                              Oracle  Baseline [76]    T5
Update TypeAccess at Invocation                   923       67          220
Update FieldRead at BinaryOperator                408       14           63
Update Wra at Method                              264        1           31
Update TypeAccess at ThisAccess                   228       10           73
Update TypeReference at Method                    208        0           25
TABLE 10: Overlap metrics for correct predictions generated by the T5 model and the baselines.

Task                           Dataset (d)  Shared_d  OnlyT5_d  OnlyBL_d
Automatic Bug-Fixing           BF small     25.90%    53.20%    20.90%
                               BF medium    28.78%    36.06%    35.16%
Injection of Code Mutants      MG ident     33.00%    50.52%    16.48%
Generation of Asserts in Test  AG abs       34.92%    58.87%     6.21%
                               AG raw        9.56%    89.65%     0.79%
Code Summarization             CS            4.79%    93.79%     1.42%
TABLE 11: Training time (hours) for the trained T5 models.

Training         Bug-fixing  Mutants generation  Generation of assert statements  Code summarization  Multi-Task
No pre-training        6.26                5.85                            17.51              123.55           -
Pre-training          28.10               27.72                            39.40              145.42      175.00
TABLE 12: Inference time (in seconds) with different beam size values.

K    BF small  BF medium  MG ident  AG abs  AG raw    CS
1        0.72       1.86      0.94    0.73    0.53  0.20
5        1.47       3.69      1.70    1.59    1.04  0.36
10       1.91       5.26      2.20    2.64    1.52  0.48
25       3.54      11.10      4.32    5.45    3.15  0.81
50       5.99      20.90      7.60   10.24    5.45  1.45
TABLE 13: Correlation between training-test set similarity and test set performance.

Dataset      Min  Median    Max
BF small   -0.15   -0.03   0.04
BF medium  -0.05   -0.03   0.01
MG ident    0.21    0.03  -0.23
AG abs     -0.21   -0.14   0.29
AG raw     -0.21   -0.14   0.19
CS         -0.38   -0.17  -0.09
ACKNOWLEDGMENT

This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 851720). The W&M team has been supported in part by the NSF CCF-1955853, CCF-1815186 and CCF-2007246 grants. Any opinions, findings, and conclusions expressed herein are the authors' and do not necessarily reflect those of the sponsors.

Rocco Oliveto is a Professor in the Department of Bioscience and Territory at University of Molise (Italy). He is the Chair of the Computer Science program and the Director of the Laboratory of Computer Science and Scientific Computation of the University of Molise. He received the PhD in Computer Science from University of Salerno (Italy) in 2008. His research interests include traceability management, information retrieval, software maintenance and evolution, search-based software engineering, and empirical software engineering. He is author of about 150 papers appeared in international journals, conferences and workshops. He serves and has served as organizing and program committee member of international conferences in the field of software engineering. He is a member of IEEE Computer Society and ACM.

Gabriele Bavota is an associate professor at the Faculty of Informatics of the Università della Svizzera italiana (USI), Switzerland, where he is part of the Software Institute and he leads the SEART research group. He received the PhD in Computer Science from the University of Salerno, Italy, in 2013. His research interests include software maintenance and evolution, code quality, mining software repositories, and empirical software engineering. On these topics, he authored over 140 papers appeared in international journals and conferences and has received four ACM Sigsoft Distinguished Paper awards at the three top software engineering conferences: ASE 2013 and 2017, ESEC-FSE 2015, and ICSE 2015. He also received the best/distinguished paper award at SCAM 2012, ICSME 2018, MSR 2019, and ICPC 2020. He is the recipient of the 2018 ACM Sigsoft Early Career Researcher Award for outstanding contributions in the area of software engineering as an early career investigator and the principal investigator of the DEVINTA ERC project. More information is available at: https://www.inf.usi.ch/faculty/bavota/.
| [
"https://github.com/antonio-mastropaolo/"
] |
[
"Chiral Fermions and Anomalies on a Finite Lattice",
"Chiral Fermions and Anomalies on a Finite Lattice"
] | [
"Karl Jansen \nDepartment of Physics\nUniversity of California at San Diego\n0319, 92093-0319La JollaCAUSA\n"
] | [
"Department of Physics\nUniversity of California at San Diego\n0319, 92093-0319La JollaCAUSA"
] | [] | Recently Kaplan proposed a new method to simulate chiral fermions on the lattice by introducing a space dependent mass term that looks like a domain wall. He showed that if one starts with an odd dimensional lattice theory the lower (even) dimensional world on the domain wall will have a chiral zero mode and the corresponding anomaly. I test this proposal by computing the gauge and Goldstone-Wilczek currents and their derivatives on a 3-dimensional finite lattice. By determining the spectrum of the finite lattice Hamiltonian I demonstrate that the theory one obtains is chiral below a critical momentum. Furthermore I show that one can see the anomaly on the finite lattice and that in the case of the 3-4-5 model the anomalies in the gauge currents cancel as in the continuum. | 10.1016/0370-2693(92)91113-n | [
"https://export.arxiv.org/pdf/hep-lat/9206014v1.pdf"
] | 13,991,921 | hep-lat/9206014 | 2843882172887f6be60d87eca4a240db6be347ca |
Chiral Fermions and Anomalies on a Finite Lattice
June 1992

Karl Jansen

Department of Physics
University of California at San Diego
La Jolla, CA 92093-0319, USA
Recently Kaplan proposed a new method to simulate chiral fermions on the lattice by introducing a space dependent mass term that looks like a domain wall. He showed that if one starts with an odd dimensional lattice theory the lower (even) dimensional world on the domain wall will have a chiral zero mode and the corresponding anomaly. I test this proposal by computing the gauge and Goldstone-Wilczek currents and their derivatives on a 3-dimensional finite lattice. By determining the spectrum of the finite lattice Hamiltonian I demonstrate that the theory one obtains is chiral below a critical momentum. Furthermore I show that one can see the anomaly on the finite lattice and that in the case of the 3-4-5 model the anomalies in the gauge currents cancel as in the continuum.
1 Introduction
The Standard Model and several of its extensions are chiral theories, and consequently chiral gauge theories play an important role in our theoretical concepts. In principle one would like to understand these theories not only perturbatively but also at strong coupling. However, it has turned out that a non-perturbative regularization of chiral gauge theories runs into problems. If one wants to use the lattice as a regulator, one faces the appearance of extra fermion species, the doubler fermions. Although it is a general belief that the lattice can be used to regulate every quantum field theory, it seems to fail in the case of chiral gauge theories [1]. This failure is summarized in the Nielsen-Ninomiya theorem [2], which leaves us with the choice of either keeping the unwanted doubler fermions on the lattice or removing the doubler modes at the price of losing chiral symmetry. Of course, the problem with the lattice as a regulator for chiral gauge theories raises the question of whether this failure can be attributed only to the lattice or whether it is a generic problem with other non-perturbative regulators for chiral gauge theories as well.
Recent years have seen numerous attempts to latticize the Standard Model [1]. Some of them, like the Smit-Swift proposal, have been carefully analyzed and tested, both in numerical and in analytical computations. However, it is generally accepted by now [3,4] that the result of these investigations is negative. So far a successful method to put chiral fermions on the lattice is missing.
Here I want to start an investigation of a recent proposal by D. Kaplan [5]. He started with an odd-dimensional vector gauge theory, avoiding in this way problems with the Nielsen-Ninomiya theorem. In this odd-dimensional theory the fermion mass is taken to depend on one of the coordinates with the structure of a domain wall. Following Kaplan, on this domain wall, which is a lower (even) dimensional world, one should find a chiral fermion, the correct anomaly structure, and the decoupling of the doubler modes. Therefore, for a (lattice) person who lives on the domain wall and does not know about the extra dimension with the defect, it would seem as if the Nielsen-Ninomiya theorem were violated, whereas a person living in the higher dimensional world would interpret this result as a consequence of the existence of the domain wall.

In this letter I provide a first test of Kaplan's proposal. Using numerical methods I calculate the spectrum and, in the presence of external gauge fields, the currents on the finite lattice. In particular I demonstrate the existence of the anomaly on a finite lattice.
2 The model
To be specific I will restrict myself to a 3-dimensional U(1) gauge theory on an L^2 \times L_s lattice. The action is given by
S = \frac{1}{2} \sum_{z,\mu} \left[ \bar\psi_z \gamma_\mu U_{z,\mu} \psi_{z+\mu} - \bar\psi_{z+\mu} \gamma_\mu U^*_{z,\mu} \psi_z \right] + m_0(s) \sum_z \bar\psi_z \psi_z + w \sum_{z,\mu} \left[ -2 \bar\psi_z \psi_z + \bar\psi_z U_{z,\mu} \psi_{z+\mu} + \bar\psi_{z+\mu} U^*_{z,\mu} \psi_z \right] .   (1)
Here a lattice point is denoted by z = (t, x, s) and µ is a unit vector pointing in one of the three directions. The lattice spacing a is set to one throughout the paper. The fermion fields are 2-component complex spinors and the gauge fields U_{z,µ} ∈ U(1). They are related to the vector potential by U_{z,µ} = e^{iqA_µ(z)}, with q the electric charge. I leave out the gauge field self-interaction (i.e. the plaquette term) as I am only interested in external gauge fields at this point. The gamma matrices in Euclidean space are given by the Pauli matrices. Note that in odd dimensions there is no analogue of γ_5, so that the theory is automatically vector-like. The action (1) has a mass term depending only on one of the lattice directions, s:
m_0(s) \equiv \sinh(\mu_0)\,\theta(s) , \qquad \theta(s) = \begin{cases} -1 & 2 \le s \le L_s/2 \\ +1 & L_s/2 + 2 \le s \le L_s \\ 0 & s = 1,\ L_s/2 + 1 \end{cases}   (2)
This mass has the form of a kink-antikink pair and generates two domain walls, one located at s = 1 and the other at s = L_s/2 + 1. The existence of the antikink is necessitated by periodic boundary conditions. When one increases the lattice size, the distance between these domain walls, which is L_s/2, increases, and for L_s = ∞ we are left with only one domain wall.
To get some insight into the spectrum of the theory, let me turn off the gauge fields for a moment and set the Wilson coupling w to zero (for a discussion of zero modes on domain walls see also [6,7]). Then the lattice Hamiltonian of the problem is
H = -\sigma_1 \left[ \sigma_2 \partial_x + \sigma_3 \partial_s + m_0(s) \right] ,   (3)
where ∂ denotes the symmetric lattice derivative, \partial_z = \frac{1}{2} \left[ \delta_{z,z+\mu} - \delta_{z,z-\mu} \right]. It is easy to see that
\psi_+ = e^{ikx}\, e^{+\mu_0 \left| s - (L_s/2 + 1) \right|} \begin{pmatrix} 1 \\ 0 \end{pmatrix}   (4)
is an (unnormalized) eigenstate of the Hamiltonian, which is a massless fermion with chirality \sigma_3 \psi_+ = +\psi_+. This solution is centered on the domain wall at s = 1 and falls off exponentially as one goes away from the wall. There is a similar solution \psi_-,
\psi_- = e^{-ikx}\, e^{-\mu_0 \left| s - (L_s/2 + 1) \right|} \begin{pmatrix} 0 \\ 1 \end{pmatrix}   (5)
which lives on the domain wall at s = L_s/2 + 1 with negative chirality. Without the Wilson term one also has the doubler modes at the corners of the Brillouin zone. They remain massless and appear with the opposite chirality. Therefore the fermion and the doubler modes pair up, the resulting theory is not chiral, and so far we have gained nothing.
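As an aside (spelled out here for convenience), the sinh(µ_0) normalization of the mass in eq. (2) is tied to these exponential profiles through a one-line property of the symmetric lattice derivative,

\partial_s\, e^{\pm\mu_0 s} = \frac{1}{2} \left[ e^{\pm\mu_0 (s+1)} - e^{\pm\mu_0 (s-1)} \right] = \pm \sinh(\mu_0)\, e^{\pm\mu_0 s} ,

so an exponential with fall-off rate µ_0 is an exact eigenfunction of \partial_s with eigenvalue ±sinh(µ_0), which a constant mass of magnitude sinh(µ_0) can then compensate slice by slice.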
To decouple the doublers Kaplan added the usual Wilson term, which in this case is not problematic as we started with a vectorlike theory. Introducing the Wilson term leads to
H = -\sigma_1 \left[ \sigma_2 \partial_x + \sigma_3 \partial_s + m_0(s) + w(\Delta_x + \Delta_s) \right] .   (6)
Here Δ denotes the second lattice derivative, \Delta f(s) = f(s+1) + f(s-1) - 2 f(s). The Hamiltonian (6) can be reduced to an operator that depends only on s by imposing plane-wave solutions in the x-direction. One obtains
H = -\sigma_1 \left[ \sigma_2 \sin(k) + \sigma_3 \partial_s + m_0(s) + 2w(\cos(k) - 1) + w \Delta_s \right] .   (7)
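The reduction from (6) to (7) uses the elementary plane-wave identities

\partial_x\, e^{ikx} = i \sin(k)\, e^{ikx} , \qquad \Delta_x\, e^{ikx} = 2 \left[ \cos(k) - 1 \right] e^{ikx} ,

which produce the sin(k) and 2w(cos(k) − 1) terms of (7); the factor of i from the first identity is understood to be absorbed into the conventions of eq. (7).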
In [5] it was shown that for the infinite system, which has a single domain wall, the Wilson term gives a large mass to the would-be chiral modes when their momenta exceed a critical value k_c = O(1). Because the aim in the end is a finite-lattice simulation, one wants to keep the same scenario at finite volume. Of course, on the finite lattice one has two domain walls and expects two chiral modes, one located at the domain wall at s = 1 with positive chirality and the other located at L_s/2 + 1 with negative chirality. The effect of the Wilson term is to delocalize and pair up high-momentum bound states of opposite chirality on the two domain walls. For low momentum, the states remain chiral and massless, and their overlap is exponentially small, ≈ e^{−µ_0 L_s/2}. If we look at only one domain wall we are left with only a single chiral mode in the low-energy regime.
To see whether this scenario is true I calculated the eigenvalues and eigenfunctions of the Hamiltonian (7) numerically for a system size of L = L_s = 100. The lattice momenta are given by k = sin(π(n + 1/2)/L), where n = 0, ..., L − 1. These momenta correspond to antiperiodic boundary conditions in the x-direction, as will be used for the calculation of the anomaly in the next section. Let me discuss the results of the numerical evaluation of the eigenvalues and the eigenfunctions. The eigenvalues occur in ±λ(k) pairs, corresponding to a particle and an anti-particle solution. I plot the lowest momentum, k = sin(π/2L), eigenfunctions, which correspond to the lowest eigenvalues ±λ_0, in fig. 1.
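A minimal Python sketch of this diagonalization is given below. It is not the original program: the (s, spin) ordering of the matrix, the use of a general eigensolver, and the sorting of eigenvalues by magnitude are assumptions on my part; in these Euclidean conventions the operator (7) need not be Hermitian, so its eigenvalues are sorted by absolute value.

import numpy as np

def hamiltonian(k, Ls, mu0=0.81, w=0.9):
    # Pauli matrices acting on the spinor index
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
    s3 = np.array([[1, 0], [0, -1]], dtype=complex)
    I2 = np.eye(2, dtype=complex)

    # domain-wall mass of eq. (2); sites s = 1..Ls map to indices 0..Ls-1
    theta = np.zeros(Ls)
    theta[1:Ls // 2] = -1.0      # 2 <= s <= Ls/2
    theta[Ls // 2 + 1:] = +1.0   # Ls/2 + 2 <= s <= Ls
    m0 = np.sinh(mu0) * theta

    shift = np.roll(np.eye(Ls), -1, axis=0)    # periodic shift f(s) -> f(s+1)
    d_s = (shift - shift.T) / 2.0              # symmetric lattice derivative
    lap = shift + shift.T - 2.0 * np.eye(Ls)   # lattice Laplacian Delta_s

    bracket = (np.sin(k) * np.kron(np.eye(Ls), s2)
               + np.kron(d_s, s3)
               + np.kron(np.diag(m0), I2)
               + 2.0 * w * (np.cos(k) - 1.0) * np.eye(2 * Ls)
               + w * np.kron(lap, I2))
    return -np.kron(np.eye(Ls), s1) @ bracket

L = Ls = 100
k = np.sin(np.pi / (2 * L))            # smallest antiperiodic momentum
evals, evecs = np.linalg.eig(hamiltonian(k, Ls))
order = np.argsort(np.abs(evals))      # eigenvalues come in +/- pairs
print(evals[order[:2]])                # the two near-zero modes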
In fig. 2a I show the two lowest eigenvalues λ_0(k) and λ_1(k) of the Hamiltonian (7). For k < k_c ≈ 0.9, λ_0 exhibits the dispersion relation of a massless fermion, while λ_1 corresponds to a state with a mass at the cut-off, and the system has a mass gap. Note that k_c is already at the order of the cut-off. Increasing the momentum k above k_c, the eigenvalues become degenerate and the energies are above the cut-off. The doubler modes at the corners of the Brillouin zone are decoupled and there is only one zero mode in the spectrum. Fig. 2b shows that this zero mode is a chiral fermion. I plot the ratio
R = \frac{\langle \bar\psi \psi \rangle}{\langle \bar\psi \sigma_1 \psi \rangle} .   (8)
This ratio measures the "violation of chirality". It is zero if ψ is chiral; a non-zero R indicates that the state is no longer a chiral eigenstate. The figure clearly demonstrates that for small momenta the wavefunction on the lattice is chiral. For large values of k ≳ 0.9 the chirality is lost: the large-momentum modes (at the order of the cut-off) pair up.
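Continuing the sketch above, the ratio (8) can be evaluated for any eigenvector. The Euclidean convention ψ̄ = ψ†σ_1 used below is an assumption; with it, ⟨ψ̄ψ⟩ = ψ†σ_1ψ vanishes identically for a σ_3 eigenstate, reproducing R = 0 for a chiral mode.

import numpy as np

def chirality_ratio(psi, Ls):
    # eq. (8), assuming psibar = psi^dagger sigma_1:
    # numerator <psibar psi> = psi^dag sigma_1 psi,
    # denominator <psibar sigma_1 psi> = psi^dag psi
    s1 = np.array([[0, 1], [1, 0]], dtype=complex)
    S1 = np.kron(np.eye(Ls), s1)
    return (psi.conj() @ S1 @ psi) / (psi.conj() @ psi)

psi0 = evecs[:, order[0]]                # near-zero mode from the sketch above
print(abs(chirality_ratio(psi0, Ls)))    # ~0 for a chiral eigenstate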
I conclude that if I am restricted to the domain wall at s = 1 I find a chiral fermion at low energies on the finite lattice. As a consequence I should see the corresponding anomaly. That this is indeed the case will be shown in the next section.
3 The anomaly

3.1 In the continuum
It is well known that the existence of a zero mode on a mass defect produces an anomaly [9,10]. This anomaly should appear in the 2-dimensional world on the domain wall and is given by [8]
\partial_i j^i(z) = \pm \frac{q^2 E(t)}{2\pi} .   (9)
Here j^i is the current j^i = ⟨ψ̄γ^iψ⟩ (i = t, x), q is the charge of the fermion, E(t) is an applied external E-field, and the ± signs stand for the chirality of the zero mode on the domain wall. (The factor of i appearing in the Euclidean anomaly has been absorbed into the definition of the current j.) The existence of this anomaly may seem surprising, because we started from a 3-dimensional vector gauge theory and no anomaly should be present.
There must be another current with nonzero divergence that cancels the zero-mode anomaly (9).

This extra contribution is found in the Goldstone-Wilczek current [9]. It is well known that charge can flow due to the adiabatic change of external fields. In the present example this effect creates the so-called Goldstone-Wilczek current [9,10] along the s-direction,
j^{GW}_s(z) = -\frac{q^2}{2\pi} \frac{m(s)}{|m(s)|} E(t) .   (10)
This current is perpendicular to the applied E-field (resembling in this respect a Hall current) and to the domain wall. As the mass m_0(s) is different on the two sides of the domain wall, the divergence of this current is not zero, and it is exactly opposite to the divergence of the current (9). The 3-dimensional vector theory is anomaly-free. The cancellation between the currents (9) and (10) is an example of the general relation of anomalies in odd dimensions to the ones in even dimensions, as discussed in [10].
If the picture described above is correct, then applying an external E-field one has to see the anomaly (9) even on a finite lattice. Moreover, one should find no anomaly if one starts with an anomaly-free theory such as the 3-4-5 model discussed in [5]. In this model one chooses as the mass term
m_0(s)\, \bar\psi_z \psi_z \;\to\; m_0(s) \left[ (\bar\psi_z \psi_z)_3 + (\bar\psi_z \psi_z)_4 - (\bar\psi_z \psi_z)_5 \right] .   (11)
The indices here mean that the fermions have charge q = 3, 4, 5, respectively. Because of the minus sign of the q = 5 fermion mass, the q = 5 fermion and the q = 3, 4 fermions have opposite chirality. Furthermore, because the anomaly is proportional to the charge squared, the sum of the individual divergences should vanish, since 3² + 4² − 5² = 0. This would not only test that one sees the anomaly on the lattice but also that it is proportional to q² and therefore has the right amplitude.
3.2 On the lattice
To test the above picture I have calculated the gauge current and its derivative numerically. The current on the finite lattice is given by
j^\mu_z = \frac{1}{2} \left[ \bar\psi_z \gamma_\mu U_{z,\mu} \psi_{z+\mu} + \bar\psi_{z+\mu} \gamma_\mu U^*_{z,\mu} \psi_z \right] + w \left[ \bar\psi_z U_{z,\mu} \psi_{z+\mu} - \bar\psi_{z+\mu} U^*_{z,\mu} \psi_z \right] ,   (12)
where µ = 1, 2, 3. Note that I leave out the i in front of the current, because it will drop out in the anomaly equation (9). The current is locally conserved, \partial_\mu j^\mu_z = 0, because the theory (1) is vectorlike. But if we pretend to live in the 2-dimensional world on one of the domain walls, where we have a chiral fermion, we would see the anomaly (9) realized.
The current (12) can be calculated numerically on a finite lattice from the inverse fermion matrix. I used the conjugate gradient method to invert the matrix in the presence of the external gauge fields. Note that this step is a numerical computation and that no simulation is involved. I have chosen antiperiodic boundary conditions in the x-direction to avoid problems with zero modes that would render the matrix non-invertible. To implement an external E-field along the domain wall, the gauge fields were chosen to be
U_{x,\mu=2} = \exp\left\{ -i q \frac{L}{2\pi} E_0 \cos\!\left( \frac{2\pi}{L}(t - 1) \right) \right\} .   (13)
The U's in the other directions were set to 1.
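The field configuration (13), together with the effective field (16) defined below, is straightforward to tabulate. The following lines sketch both with the parameter values quoted in the text (the variable names are mine):

import numpy as np

L, E0, q = 16, 0.001, 3
t = np.arange(1, L + 1)
# links of eq. (13) along the x-direction; all other links are set to 1
U_x = np.exp(-1j * q * (L / (2 * np.pi)) * E0 * np.cos(2 * np.pi * (t - 1) / L))
# effective electric field of eq. (16)
E_eff = np.sin(2 * np.pi / L) / (2 * np.pi / L) * E0 * np.sin(2 * np.pi * (t - 1) / L)

The text states only that the conjugate gradient method was used for the inversion. One standard way to apply CG to a non-Hermitian fermion matrix M, sketched here under that assumption, is to solve the Hermitian positive-definite normal equations M†M x = M†b, column by column:

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def invert_column(M, col):
    # returns the col-th column of M^{-1} via CG on the normal equations
    n = M.shape[0]
    b = np.zeros(n, dtype=complex)
    b[col] = 1.0
    normal = LinearOperator((n, n), dtype=complex,
                            matvec=lambda v: M.conj().T @ (M @ v))
    x, info = cg(normal, M.conj().T @ b)
    assert info == 0, "CG did not converge"
    return x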
As shown in fig. 1, the wavefunctions on the lattice have a finite width. The 1+1-dimensional world is not localized at s = 1 only, but extends over several s-slices. The charge is built up in this finite region in s, and the anomaly equation (9) therefore has to be modified on the lattice. Let us define
\langle \partial_i j^i \rangle \equiv \sum_{s \in \Lambda_\psi} \partial_i j^i(t, x, s) , \qquad i = t, x .   (14)
Here the sum in s is taken over the support Λ_ψ of the wavefunction (see fig. 1), the "width" of the 1+1-dimensional world. By varying the number of s-slices one can determine how many slices have to be included in the summation so that the result for ⟨∂_i j^i⟩ does not change.
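Schematically, with the current stored as an array j[µ, t, x, s] (µ = 0, 1, 2 for the t, x, s directions), the slice-summed divergence (14) might be computed as follows; the use of backward differences for ∂_i is an assumption, since the paper does not spell out the discretization:

import numpy as np

def divergence_near_wall(j, n_slices=3):
    # backward lattice divergence in the t and x directions only
    div_tx = sum(j[mu] - np.roll(j[mu], 1, axis=mu) for mu in (0, 1))
    # sum over the s-slices supporting the zero mode around s = 1 (index 0)
    idx = np.r_[0, np.arange(1, n_slices + 1), np.arange(-n_slices, 0)]
    return div_tx[..., idx].sum(axis=-1)   # result is a function of (t, x)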
The anomaly equation on the lattice now reads
\langle \partial_i j^i \rangle = \pm \frac{q^2}{2\pi} E_{\rm eff}(t) ,   (15)
where the effective electric field E_eff for small E_0 is given by
E_{\rm eff}(t) = \frac{\sin(2\pi/L)}{2\pi/L}\, E_0 \sin\!\left( \frac{2\pi}{L}(t - 1) \right) .   (16)
One can perform several checks on the program. First, using the wavefunction (4) as input, acting on it with the fermion matrix should reproduce the zero eigenvalue when the gauge fields are switched off. This is an excellent test of whether the correct matrix is implemented in the program. Second, because the 3-dimensional gauge current is conserved, the total divergence of the current should cancel locally in the presence of the external E-field. This, too, I could see clearly by calculating the local currents. It turned out that ∂_x j^x was exactly zero (as one would expect) and that the cancellation occurred between ∂_t j^t (the zero-mode current (9)) and ∂_s j^s (the Goldstone-Wilczek current (10)) at the same lattice point, i.e.

\partial_x j^x = 0 , \qquad \partial_t j^t = -\partial_s j^s .
I have chosen a 16³ lattice with µ_0 = 0.81 (see (2)) and an E-field of strength E_0 = 0.001; this choice of E_0 was motivated by the need to stay in the low-energy regime of the dispersion relation (see fig. 2a). The outcome of the numerical evaluation of the current in the presence of the E-field was as follows. Setting the Wilson coupling to zero, ⟨∂_i j^i⟩ was exactly zero. This shows that the doublers did not decouple: they cancel the anomaly, and we are left with a vectorlike, anomaly-free theory.
Next I included a Wilson coupling w = 0.9. To see whether, for the 16³ lattice and the choice of µ_0 and w used here, the situation is similar to that of the L = 100 system used in section 2, I calculated the lowest eigenvalue λ_0 and the corresponding eigenfunction from the Hamiltonian (7). I find that the wavefunctions have only a tiny overlap. The support of the eigenfunctions extends over about 7 s-slices. For small momenta the 16³ system still has a chiral fermion and is appropriate for testing the anomaly equation (15). I calculated ⟨∂_i j^i⟩ from the inverse fermion matrix. The summation over s (see eq. (14)) was taken over three s-slices on each side of the s = 1 slice. Increasing the number of s-slices did not change the numerical value of ⟨∂_i j^i⟩. This fits the picture I have obtained for the wavefunction from the Hamiltonian.
To test the anomaly equation (15) I constructed the ratio
R_{\rm anomaly} = \frac{q^2 E_{\rm eff}(t)/2\pi}{\langle \partial_i j^i \rangle} .   (17)
In the 3-4-5 model I find R_anomaly to be independent of t for each fermion with charge q = 3, 4, 5, respectively. This shows that the divergences of the individual flavor currents are anomalous! Moreover, if the anomaly equation (15) is realized, R_anomaly should equal one. From the numerical computation of ⟨∂_i j^i⟩ I find R_anomaly ≈ 0.98, in very good agreement with the theoretical expectations.
In fig. 3 I plot q⟨∂_i j^i⟩ for the 3-4-5 model. The individual ⟨∂_i j^i⟩ correspond to the different charges q in the 3-4-5 model, as indicated in the figure. They individually satisfy the relation R_anomaly = 1 within 2%. Notice that in the case of q = 5 the sign of ⟨∂_i j^i⟩ is reversed, showing that the chirality of the zero mode is flipped. Moreover, the figure also shows the sum of all ⟨∂_i j^i⟩ as the straight line at zero! The anomalies cancel; we end up with an anomaly-free gauge current in spite of the fact that the individual flavor currents are seen to be anomalous.
4 Conclusion
In this work I have shown for the first time that it is possible in practice to see anomalous behaviour of currents on the lattice. This was possible by using a recent proposal by Kaplan which circumvents the Nielsen-Ninomiya theorem by starting with a three-dimensional vectorlike theory with a mass defect and a Wilson term. This mass defect, or domain wall, guarantees the existence of a zero mode on the domain wall. Kaplan made use of this zero mode to construct a chiral gauge theory in the lower, 2-dimensional target theory. By introducing a Wilson term it is possible to get rid of the unwanted doubler modes and to end up with only one chiral mode.
By solving the Hamiltonian problem on the finite lattice numerically, I demonstrated that this scenario also holds for the finite lattice system. I find a chiral fermion for small momenta, located at one of the domain walls, and that the lattice system has an energy gap. By numerically calculating the gauge current on finite lattices, I showed that, as a consequence of the existence of the chiral fermion, the anomaly can be seen on the lattice, and that the divergence of the gauge current ⟨∂_i j^i⟩, eq. (12), satisfies the anomaly equation (15). In addition I demonstrated that in the case of an anomaly-free theory, which was taken to be the 3-4-5 model, the anomalies on the lattice cancel for the gauge current.
Although these results are certainly only a first step, they are promising. They clearly indicate that the Kaplan proposal for chiral lattice fermions (or regulated chiral fermions in general) has a chance to solve the old puzzle of chiral fermions on the lattice. The next step is to render the gauge fields dynamical and to see whether one can obtain a 2-dimensional chiral gauge theory, the chiral Schwinger model, in the continuum limit. Then one might proceed to four dimensions. Work in this direction is in progress.

Figure Captions

Fig. 1: The eigenfunctions corresponding to the lowest eigenvalues ±λ_0 for the smallest lattice momentum, k = sin(π/2L), are plotted. The system size is L = L_s = 100, the domain wall mass is µ_0 = 0.81 and the Wilson coupling is w = 0.9. The wavefunction for +λ_0 is located at s = 1, while the wavefunction for −λ_0 is at L_s/2 + 1. They show no overlap.

Fig. 2a: The two lowest positive eigenvalues λ_0 and λ_1 of the Hamiltonian (7) as a function of the lattice momenta k = sin(π(n + 1/2)/L), where n = 0, ..., L − 1. The parameters are as in fig. 1. For small k < k_c ≈ 0.9 the system has a mass gap: the lowest eigenvalue exhibits the dispersion relation of a massless fermion and λ_1 is at the order of the cut-off.

Fig. 2b: The ratio R = ⟨ψ̄ψ⟩/⟨ψ̄σ_1ψ⟩ is shown. R measures the "violation of chirality": it is zero when the fermion is a chiral eigenstate, and a deviation from zero indicates that the state is no longer chiral. The figure clearly shows that the lowest eigenvalue λ_0 in fig. 2a (which also corresponds to the wavefunction centered at s = 1 in fig. 1) belongs to a chiral fermion at low energies.

Fig. 3: The divergence of the 1+1-dimensional gauge current in an external E-field. The system size here is 16³. The parameters are otherwise as in figs. 1 and 2, i.e. µ_0 = 0.81 and w = 0.9. The strength of the E-field is E_0 = 0.001. The different curves correspond to the contributions to q⟨∂_i j^i⟩, i = x, t (see eq. (12)), from fermions of charge q = 3, 4, 5 and chirality +1, +1, −1, respectively. The straight line at zero is the sum of the three individual curves, showing that the anomalies cancel in the gauge current, even though the individual flavor currents are anomalous. Note that the results shown in the figure do not involve a simulation but stem from a direct numerical computation of the gauge current. Accordingly they do not have error bars.
Acknowledgements

I want to thank David Kaplan not only for numerous helpful discussions but also for sharing and explaining to me his idea of chiral fermions prior to publication. This work is supported by DOE grant DE-FG-03-90ER40546.
J. Smit, Nucl. Phys. B (Proc. Suppl.) 17 (1990) 3.
H.B. Nielsen and M. Ninomiya, Nucl. Phys. B185 (1981) 20.
M.F.L. Golterman, Nucl. Phys. B (Proc. Suppl.) 20 (1990) 515;
M.F.L. Golterman, D.N. Petcher and J. Smit, Nucl. Phys. B370 (1992) 51.
D.B. Kaplan, A Method for Simulating Chiral Fermions on the Lattice, UCSD preprint, UCSD/PTH 92-16.
R. Jackiw and C. Rebbi, Phys. Rev. D13 (1976) 3398.
D. Boyanovsky, E. Dagotto and E. Fradkin, Nucl. Phys. B285 (1987) 340;
E. Fradkin, Nucl. Phys. B (Proc. Suppl.) 1A (1987) 175;
E. Dagotto, E. Fradkin and A. Moreo, Phys. Lett. 172B (1986) 383.
S. Adler, Phys. Rev. 177 (1969) 2426;
J.S. Bell and R. Jackiw, Nuovo Cimento 60 (1969) 47;
P.H. Frampton and T.W. Kephart, Phys. Rev. Lett. 50 (1983) 1343, 1347; Phys. Rev. D28 (1983) 1010;
L. Alvarez-Gaumé and E. Witten, Nucl. Phys. B234 (1983) 269;
B. Zumino, Lectures at Les Houches Summer School (1983);
R. Stora, Cargèse Lectures (1983).
J. Goldstone and F. Wilczek, Phys. Rev. Lett. 47 (1981) 986.
C.G. Callan, Jr. and J.A. Harvey, Nucl. Phys. B250 (1985) 427.
| [] |
[
"Confinement On the Moose Lattice a.k.a. Party Trick Confinement",
"Confinement On the Moose Lattice a.k.a. Party Trick Confinement",
"Confinement On the Moose Lattice a.k.a. Party Trick Confinement",
"Confinement On the Moose Lattice a.k.a. Party Trick Confinement",
"Confinement On the Moose Lattice a.k.a. Party Trick Confinement",
"Confinement On the Moose Lattice a.k.a. Party Trick Confinement"
] | [
"Benjamin Lillard blillard@illinois.edu \nIllinois Center for Advanced Studies of the Universe\nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA\n",
"Benjamin Lillard blillard@illinois.edu \nIllinois Center for Advanced Studies of the Universe\nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA\n",
"Benjamin Lillard blillard@illinois.edu \nIllinois Center for Advanced Studies of the Universe\nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA\n"
] | [
"Illinois Center for Advanced Studies of the Universe\nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA",
"Illinois Center for Advanced Studies of the Universe\nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA",
"Illinois Center for Advanced Studies of the Universe\nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801UrbanaILUSA"
] | [] | In this work we present a new class of N = 1 supersymmetric confining gauge theories, with strikingly simple infrared theories that descend from intricate interconnected networks of product gauge groups. A diagram of the gauge groups and the charged matter content of the ultraviolet theory has the structure of a triangular lattice, with SU (N ) or SU (3N ) gauge groups at each of the vertices, connected by bifundamental chiral superfields. This structure admits a U (1) R conserving superpotential with marginal (trilinear) operators. With the introduction of this superpotential, the SU (3N ) and SU (N ) gauge groups confine: in the far infrared limit of the supersymmetric theory, the relevant degrees of freedom are gauge invariant "mesons" and "baryons." In this paper we show how the properties of the infrared degrees of freedom depend on the topology and shape of the moose/quiver "lattice" of the original gauge theory. We investigate various deformations of the theory, and propose some phenomenological applications for BSM models. | 10.1007/jhep11(2022)125 | [
"https://export.arxiv.org/pdf/2112.13828v1.pdf"
] | 245,501,974 | 2112.13828 | d6873da75bb5ecbb5852432e5de17a11a3646267 |
Confinement On the Moose Lattice a.k.a. Party Trick Confinement
Benjamin Lillard blillard@illinois.edu
Illinois Center for Advanced Studies of the Universe
Department of Physics
University of Illinois at Urbana-Champaign
61801UrbanaILUSA
Confinement On the Moose Lattice a.k.a. Party Trick Confinement
In this work we present a new class of N = 1 supersymmetric confining gauge theories, with strikingly simple infrared theories that descend from intricate interconnected networks of product gauge groups. A diagram of the gauge groups and the charged matter content of the ultraviolet theory has the structure of a triangular lattice, with SU (N ) or SU (3N ) gauge groups at each of the vertices, connected by bifundamental chiral superfields. This structure admits a U (1) R conserving superpotential with marginal (trilinear) operators. With the introduction of this superpotential, the SU (3N ) and SU (N ) gauge groups confine: in the far infrared limit of the supersymmetric theory, the relevant degrees of freedom are gauge invariant "mesons" and "baryons." In this paper we show how the properties of the infrared degrees of freedom depend on the topology and shape of the moose/quiver "lattice" of the original gauge theory. We investigate various deformations of the theory, and propose some phenomenological applications for BSM models.
Introduction
Strongly coupled gauge theories are highly relevant to our understanding of the universe, but challenging to analyze. Taking the strong nuclear force (QCD) as an example, perturbation theory is incapable of deriving the masses and interactions of the various hadrons in the Standard Model from high-energy observables. A first-principles calculation in the strongly coupled regime requires nonperturbative methods, such as lattice QCD. Aspects of supersymmetry make this problem more tractable: with a sufficiently high degree of symmetry, it may be possible to constrain the form of the low energy theory to the point that its constituents and interactions can be more precisely identified.
In this article we investigate product gauge groups of the form (SU (3N ) × SU (N ) 3 ) k in N = 1 supersymmetry (SUSY), with a set of chiral bifundamental "quark" matter fields arranged so that a moose diagram of the theory forms a triangular lattice. With the appropriate superpotential, the theory exhibits two stages of confinement, leading to an infrared theory of "baryons" and "mesons" that is surprisingly simple.
The ultraviolet phase of our theory is constructed from three ingredients: the SU (3N ) and SU (N ) gauge groups; chiral matter superfields, transforming in the bifundamental representations of SU (3N ) × SU (N ) or SU (N ) × SU (N ); and a chiral superpotential W , which includes the marginal gauge invariant trace operators of the form W ⊃ λTr (q 1 q 2 q 3 ) for three bifundamentals q i . To analyze the low energy theory we rely on the Seiberg dualities for supersymmetric QCD (SQCD) [1,2].
For the initial discussion, we restrict our attention to the cases where there is some high-energy scale M where all of the gauge couplings are perturbatively small, i.e. g_{3N}(µ), g_N(µ) ≲ O(1) at µ = M. As the SU(3N) and SU(N) gauge couplings run in opposite directions (i.e. their NSVZ β functions [3] have opposite signs), this situation is not entirely generic. We refer to this µ ∼ M regime as the ultraviolet theory (UV), even though the SU(N) gauge groups become strongly coupled in the extreme ultraviolet limit µ ≫ M. This theory exhibits confinement with chiral symmetry breaking at a scale characterized by Λ_{3N}, where the one-loop running coupling g_{3N} diverges. By describing the theory as confining, we mean that the perturbatively coupled (SU(3N) × SU(N)³)^k theory with the trilinear superpotential W at µ ≫ Λ_{3N} is Seiberg-dual to a theory of singlet "baryons" and SU(N) × SU(N) bifundamental "mesons" at scales µ ≪ Λ_{3N}. Some of the global symmetries of the UV theory are spontaneously broken by the O(Λ_{3N}) expectation values of the baryon operators.
A subset of the SU(N) × SU(N) bifundamentals acquire vectorlike masses m_i of order O(λ_i Λ_{3N}), inherited from the λ_i Tr(q_1 q_2 q_3) operators in the UV theory. Like the λ_i ≠ 0 UV superpotential terms, these vectorlike masses lift some of the flat directions that would otherwise be included in the moduli space. At scales µ < m_i, these massive degrees of freedom can be integrated out. This changes the sign of the β(g_N) function, so that the SU(N) gauge groups become strongly coupled at some new Λ_N < m_i. In the far infrared (IR) theory, µ ≪ Λ_N, the only light degrees of freedom are composite baryons and mesons, which can be mapped onto the set of gauge invariant operators of the original UV theory. Despite the formidable complexity of the original ultraviolet theory, this phase of the low energy effective theory is remarkably simple. Sections 2 and 4 present our main results, following the evolution of the theory from µ ∼ M down to µ ≪ Λ_N step by step, tracking the degrees of freedom and their global symmetries in each regime. Section 2 focuses on the aspects of the calculation that are the least dependent on the actual shape of the "moose lattice," and the easiest to generalize. Details about the boundary conditions become much more important in the infrared limit of the theory, as we show in Section 4. For example, if the moose lattice is given periodic boundary conditions, the SU(N) groups do not confine, but instead have an unbroken Coulomb phase in the IR. Section 4 explores several such variations on the boundary conditions. As an interlude between Sections 2 and 4, Section 3 follows the global symmetries of the theory from the UV to the IR.
Our focus in this work is restricted to four-dimensional spacetime, and the so-called moose lattice is simply a way to keep track of the gauge groups and matter fields -however, it is highly suggestive of a geometrical interpretation, consistent with the deconstruction [4][5][6][7] of a six-dimensional spacetime with two compact dimensions, as we discuss in our concluding remarks. This view is reinforced by the emergence of some bulk-like and brane-like features in the infrared theory.
In Section 1.1 we provide a review of the familiar Seiberg dualities and confinement in SQCD. Section 1.2 reviews some especially relevant literature on confinement in SU (N ) product gauge theories [8][9][10].
Review of Confinement in N = 1 Theories
Supersymmetry (SUSY) ameliorates some of the challenges of strongly coupled theories, making it possible to derive some infrared properties of a theory exactly. The conjectural Seiberg dualities [1,2] are central to this effort. Given a gauge group such as SU (N ) with some set of matter fields, one can sometimes identify a dual theory with a different gauge group, new matter fields, and possibly some superpotential that describes the interactions of the dual matter fields. In describing the theories as "dual," we mean that the two ultraviolet theories flow to the same infrared behavior, not that this duality is exact at all energy scales. Seiberg dualities have been identified not only for SU (N ) gauge groups, but also Sp(2N ), SO(N ), and the exceptional Lie groups.
A number of SUSY gauge theories have been shown to confine: that is, rather than being dual to another gauge theory, the dual theory has no gauge interactions. Well-known examples include SU(N_c) with F = N_c or F = N_c + 1 pairs of chiral superfields in the fundamental and antifundamental representations of the gauge group (a.k.a. "SQCD"), or SU(N_c) with one field in the two-index antisymmetric tensor representation and an appropriate number of fundamentals and antifundamentals [11][12][13].
In some cases the infrared theory has a "quantum deformed moduli space," where the classical constraint equations are modified by some terms that depend on the gauge couplings. We refer to these theories as "qdms-confining." The canonical example is the F = N c case of SQCD, where the infrared behavior is described by the gauge invariant operators
M_{ij} = Q^{\alpha}_{i} \bar{Q}_{\alpha j}, \qquad B = \det Q \equiv Q^{N_c}, \qquad \bar{B} = \det\bar{Q} \equiv \bar{Q}^{N_c},   (1.1)
where Q^{N_c} ≡ det Q is the completely antisymmetric product of i = 1, 2, ..., N_c distinct fields Q_i, each in the fundamental representation of SU(N_c). Classically, the mesons and baryons would obey the constraint equation det(Q\bar{Q}) = \det Q \det\bar{Q}, i.e.

\det M = B\bar{B},   (1.2)
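For F = N_c the meson matrix is the product of the square matrices Q and Q̄, so Eq. (1.2) follows in one line from det(AB) = det A det B (a check we add here for completeness):

\det M = \det(Q\bar{Q}) = \det Q\, \det\bar{Q} = B\,\bar{B}.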
but this classical constraint is modified quantum mechanically [14][15][16] by a term proportional to the holomorphic scale Λ:

\det M - B\bar{B} = \Lambda^{b}, \qquad \Lambda^{b} = \mu^{b} \exp\left(-\frac{8\pi^2}{g^2(\mu)} + i\theta_{\rm YM}\right), \qquad b = 3N_c - F.   (1.3)

A second class of theories, known as "s-confining" [17,18], confine smoothly without necessarily breaking any of the global symmetries, with the classical constraint equations enforced by a dynamically generated superpotential. For example, the infrared limit of SQCD with F = N + 1 flavors is described by the gauge invariants

B^{i} = (Q^{N})^{i} \equiv \frac{\epsilon^{i j_1 \ldots j_N}}{N!}\, \epsilon_{k_1 \ldots k_N}\, Q^{k_1}_{j_1} Q^{k_2}_{j_2} \cdots Q^{k_N}_{j_N}, \qquad \bar{B}_{i} = (\bar{Q}^{N})_{i}, \qquad M_{ij} = (Q\bar{Q})_{ij} \equiv Q^{\alpha}_{i} \bar{Q}_{\alpha j},   (1.4)

where the k indices refer to SU(N_c) gauge indices, i and j to SU(F) flavor indices, and ε is the completely antisymmetric tensor. The constraint equations between M, B and B̄ are not modified quantum mechanically. Instead, the infrared theory has a dynamically generated superpotential [19]

W = \frac{1}{\Lambda^{2N-1}} \left( B M \bar{B} - \det M \right),   (1.5)

which enforces the classical constraint equations

B^{i} M_{ij} = 0, \qquad M_{ij} \bar{B}^{j} = 0, \qquad (M_{ij})^{-1} \det M = B_{i} \bar{B}_{j}.   (1.6)
It is easy to show that this W has R charge +2 under any conserved U (1) R . The origin of moduli space is now a viable solution to the constraint equations, permitting confinement without chiral symmetry breaking.
The Seiberg dualities for SQCD survive a number of nontrivial consistency checks: the dimensionality of the moduli spaces of the UV and IR theories match, the two theories share the same set of global symmetries, and all of the 't Hooft anomaly matching conditions are satisfied. In Appendix A we demonstrate this for some F = N examples chosen to highlight some subtleties associated with anomaly matching on the quantum deformed superpotential.
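This duality passes a simple counting test that is easy to script. The sketch below (our addition; the N values are arbitrary) compares the dimension of the classical moduli space of the UV theory, 2N_cF chiral fields modulo the complexified gauge group, with the F² + 2 confined operators minus the single constraint:

def uv_moduli_dim(N, F):
    # Q and Qbar scalars, modulo the complexified SU(N) gauge group
    return 2 * N * F - (N**2 - 1)

def ir_moduli_dim(N):
    # F = N mesons M_ij, plus B and Bbar, minus the constraint det M - B Bbar = Lambda^b
    return N**2 + 2 - 1

for N in (2, 3, 5, 12):
    assert uv_moduli_dim(N, N) == ir_moduli_dim(N) == N**2 + 1
print("moduli space dimensions match for F = N_c")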
Superpotential Deformations
Each of the models described so far has been derived from a UV theory with no superpotential. In the case of s-confinement, a superpotential is dynamically generated for the IR theory, which respects all the global symmetries and enforces the classical constraints between the operators. For qdms confinement, there is no dynamically generated superpotential: the quantum modified constraint can only be implemented in a superpotential by the use of Lagrange multipliers.
In the SQCD example, one could perturb the theory by including the gauge-invariant superpotential operators
W \sim m_{ij} (Q\bar{Q})_{ij} + \frac{(Q\bar{Q})^2}{M} + \ldots\,.   (1.7)
This W explicitly breaks the SU(F)_ℓ × SU(F)_r symmetry, as well as U(1)_R. Note that if any of the mass terms m_{ij} are larger than Λ, then it is no longer appropriate to treat the problem as F = N SQCD. After integrating out the heavier quarks with m_{ij} ≫ Λ, the remaining F < N SQCD theory does not confine: its Seiberg dual has an SU(N − F) gauge group. The infrared theory thus bears no resemblance to the qdms-confining version of SQCD. If we are to treat the superpotential Eq. (1.7) as a small perturbation, it should be the case that m_{ij} ≪ Λ. In this case the global symmetries are still approximately conserved, and the infrared effective theory developed in Section 1.1 is still applicable at scales large compared to m_{ij} (and small compared to Λ).
As defined in Eq. (1.1), M, B and B̄ have mass dimensions 2, N and N, respectively. When matching superpotential terms it can be more convenient to normalize these by factors of Λ to give the operators canonical mass dimension,

M \rightarrow \frac{(Q\bar{Q})}{\Lambda}, \qquad B \rightarrow \frac{(Q^{N})}{\Lambda^{N-1}}, \qquad \bar{B} \rightarrow \frac{(\bar{Q}^{N})}{\Lambda^{N-1}},   (1.8)
so that a generic symmetry-violating superpotential for SQCD includes
W \sim A_{ij}\, \Lambda\, M_{ij} + \alpha_{ijkl} \frac{\Lambda^2}{M} M_{ij} M_{kl} + \ldots + \beta \frac{\Lambda^{N-1}}{M^{N-3}} B + \bar{\beta} \frac{\Lambda^{N-1}}{M^{N-3}} \bar{B} + \ldots\,.   (1.9)
For W ≈ 0 to be a good approximation in the near ultraviolet as well as the infrared, the mass scales associated with the irrelevant operators should satisfy M ≫ Λ, so that all of the global symmetries are approximately conserved above and below the scale Λ. For G_global to be approximately conserved below Λ, it must also be the case that A_{ij} Λ ≪ Λ². In the N = 2 special case, where B and B̄ are also dimension-1 operators, the same should be true for the coefficients of the Λ B and Λ B̄ tadpole terms.
Solving the equations of motion for M_{ij}, B and B̄, we find that the F² + 1 light degrees of freedom do not remain massless, but instead acquire some potential that lifts various directions of the moduli space. In the F = N case the addition of A_{ij}, β, and β̄ is sufficient to completely break the global symmetry group SU(F)_ℓ × SU(F)_r × U(1)_B × U(1)_R. The quadratic terms M², MB, MB̄ and BB̄ determine the location of the global minimum of the potential on the moduli space, up to corrections from further irrelevant operators that may be included in the superpotential.

Figure 1: The moose diagram for the qdms-confining SU(N)^k model [9], showing the k gauge groups G_i = SU(N)_i and the global SU(N)_ℓ × SU(N)_r symmetry. Each Q_j transforms as a bifundamental (N, \bar{N}) under the adjacent G_j × G_{j+1}. The quark charges under the global U(1)_B symmetry are indicated in the lower row. Arrows pointing into (out of) a group G_j indicate that a quark transforms in the (anti)fundamental representation of that group.
Confinement in Linear Moose Theories
Each of the complicated product gauge group models considered in this paper uses a collection of alternating SU (N )×SU (M )×SU (N )×SU (M )×. . . gauge groups as a building block. Dubbed the "linear moose" model [20], the matter content of this theory consists of one chiral bifundamental quark for each adjacent pair of SU (N )×SU (M ) or SU (M )×SU (N ) groups. This type of structure appears in the k site deconstruction of a five-dimensional theory [4][5][6][7], which in the presence of a Z 2 orbifold produces an SU (N ) k chiral gauge theory.
For the "even" linear moose with equal numbers of SU (N ) and SU (M ) gauge groups, (SU (N )× SU (M )) k , the anomaly matching conditions are saturated, indicating that even in a nonsupersymmetric theory the confinement can proceed without breaking chiral symmetry [20], while for the "odd" linear moose (SU (N )×SU (M )) k ×SU (N ) the chiral symmetry is necessarily spontaneously broken.
Additional information about the low energy behavior can be extracted from supersymmetric moose theories [8-10, 13, 21-23]. For example, with N = 1 supersymmetry, it is often possible to derive the exact form of the chiral superpotential. If an N = 1 theory can be shown to be the limit of N = 2 supersymmetry [2,21,[24][25][26][27], the Kähler potential may be similarly constrained based on the form of the holomorphic prepotential.
In the special case N = M , the Seiberg duality for F = N SQCD can be used to quantify aspects of the infrared theory for the supersymmetric linear mooses [9]. The moose (a.k.a "quiver" [28]) diagram for this theory is shown in Figure 1, together with the matter superfield charge assignments under the global symmetries. Below, we summarize the method and results of Ref. [9], which are utilized several times in Section 2.
Given some large hierarchy between confinement scales, e.g. Λ_1 ≫ Λ_2 ≫ ⋯, the gauge groups G_{2,3,...} in Figure 1 can be treated as global symmetries in the regime where only G_1 is strongly coupled. In this limit the degrees of freedom at intermediate scales Λ_{2,3,...} ≪ µ ≪ Λ_1 include the baryonic det Q_0 and det Q_1 operators, as well as the mesonic (Q_0 Q_1) that transforms as the bifundamental of SU(N)_ℓ × G_2. The baryon and meson operators satisfy the usual quantum-modified constraint,

\det(Q_0 Q_1) = \det Q_0 \det Q_1 - \Lambda_1^{b},   (1.10)
with b = 2N. If G_2 were not gauged, then det Q_0, det Q_1 and (Q_0 Q_1) would form the set of gauge-invariant operators that describe the flat directions on the moduli space [29]. Next, at µ ∼ Λ_2, the group G_2 = SU(N)_2 becomes strongly coupled, thus lifting the pseudo-flat (Q_0 Q_1) directions and selecting the det Q_0 det Q_1 = Λ_1^b vacuum. As the light degrees of freedom (Q_0 Q_1) and Q_2 resemble F = N SQCD, confinement of G_2 produces the gauge-invariant composite operators det(Q_0 Q_1), det Q_2, and (Q_0 Q_1 Q_2). However, given the constraint equation Eq. (1.10), the baryonic operator det(Q_0 Q_1) is not an independent degree of freedom, but is redundant with det Q_0 and det Q_1.
Continuing in this manner for G_{3,4,...}, and replacing the redundant operators det(Q_0 Q_1) and det(Q_0 Q_1 Q_2 ...) where possible, the constraint equations have the form [9]

\det(Q_0 Q_1) = Q_0^N Q_1^N - \Lambda_1^{b},
\det(Q_0 Q_1 Q_2) = Q_0^N Q_1^N Q_2^N - \Lambda_1^{b}\, Q_2^N - Q_0^N\, \Lambda_2^{b},
\det(Q_0 Q_1 Q_2 Q_3) = Q_0^N Q_1^N Q_2^N Q_3^N - \Lambda_1^{b}\, Q_2^N Q_3^N - Q_0^N\, \Lambda_2^{b}\, Q_3^N - Q_0^N Q_1^N\, \Lambda_3^{b} + \Lambda_1^{b} \Lambda_3^{b},   (1.11)

where in our shorthand Q_j^N ≡ det Q_j. Given k copies of the gauge group SU(N), the moduli space is spanned by the reduced set of gauge invariant operators (Q_0 Q_1 ⋯ Q_k)_{ij} and Q^N_{0,1,...,k}, where the mesonic operator (Q_0 ⋯ Q_k) is a bifundamental of the SU(N)_ℓ × SU(N)_r flavor symmetry, with the single constraint equation

\det(Q_0 \cdots Q_k) = \prod_{j=0}^{k} Q_j^N + \sum_{\text{neighbor contractions}} Q_0^N \cdots Q_{j-1}^N Q_j^N \cdots Q_k^N.   (1.12)
Following Ref. [9], "neighbor contraction" indicates the replacement Q_{j-1}^N Q_j^N → −Λ_j^{2N}.
The sum in Eq. (1.12) includes all possible contractions.
Here we see explicitly the difference between "even" and "odd" moose theories identified in Ref. [20]. If k is even, then the product (Q_0^N ⋯ Q_k^N) includes an odd number of factors, so it cannot be fully contracted, and the moduli space includes the origin, (Q_0 Q_1 ⋯ Q_k)_{ij} = 0, Q_j^N = 0, thus permitting confinement without chiral symmetry breaking. If instead k is odd, then the product (Q_0^N ⋯ Q_k^N) can be fully contracted, and the constraint equation includes the constant term Λ_1^b Λ_3^b ⋯ Λ_k^b, thus forcing at least a subset of the operators to acquire nonzero expectation values.
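The even/odd pattern is easy to verify combinatorially. Below is a minimal sketch (our addition; the 0-indexed contraction labels are ours) that enumerates the sets of disjoint neighbor contractions of the k + 1 factors Q_0^N ⋯ Q_k^N and checks whether a fully contracted constant term exists:

from itertools import combinations

# Contraction j (labeled 0..k-1 here) replaces the adjacent pair Q_j^N Q_{j+1}^N
# by -Lambda^{2N}; two contractions are compatible only if they do not share a
# factor, i.e. only if their labels differ by at least 2.
def contraction_sets(k):
    valid = []
    for r in range((k + 1) // 2 + 1):
        for combo in combinations(range(k), r):
            if all(b - a > 1 for a, b in zip(combo, combo[1:])):
                valid.append(combo)
    return valid

for k in (2, 3, 4, 5):
    # a constant term requires all k+1 factors contracted: 2r = k+1
    full = [c for c in contraction_sets(k) if 2 * len(c) == k + 1]
    print(k, "constant term" if full else "origin remains on the moduli space")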
Even though this analysis used the Λ_1 ≫ Λ_2 ≫ ⋯ ≫ Λ_k hierarchy as a simplification, the same conclusion Eq. (1.12) is reached in any other ordering of scales [9]. Furthermore, the model survives a number of consistency checks, including the Λ_j → 0 limit where any of the gauge groups is replaced by a global symmetry; the addition of mass terms, where possible; or spontaneous symmetry breaking, where an SU(N)_j is higgsed to one of its subgroups.
Modified Boundary Conditions
A closely related class of product gauge groups was shown in Ref. [10] to s-confine. In this model the N copies of Q_0 in Figure 1 were replaced by four quarks Q and one two-index antisymmetric tensor A. This product gauge group confines while dynamically generating a superpotential, with the same "even/odd" behavior identified in Ref. [20] based on the number of gauged SU(N) groups.
In another variation of the linear SU (N ) k theory, the linear moose of Ref. [9] is modified by gauging the diagonal SU (N ) subgroup of the global SU (N ) × SU (N ) r , so that the moose diagram forms a closed ring [8]. To analyze this theory it is easiest to begin with the limit where this gauged G 0 ⊂ SU (N ) × SU (N ) r is weakly coupled, with Λ 0 Λ 1,2,...,k . If G 0 were not gauged, then the mesonic operator M ij = (Q 0 . . . Q k ) ij would be gauge-invariant. With G 0 gauged, this bifundamental of SU (N ) × SU (N ) r decomposes into irreducible representations of G 0 = SU (N ): a singlet M 0 , and an adjoint M Ad , defined as
M_0 \equiv \mathrm{Tr}(Q_0 \cdots Q_k), \qquad (M_{\rm Ad})_{ij} \equiv (Q_0 \cdots Q_k)_{ij} - \frac{1}{N}\,\delta_{ij}\, \mathrm{Tr}(Q_0 \cdots Q_k).   (1.13)
The moduli space is spanned by the gauge invariant operators Q_j^N; M_0; and the traces Tr(M_{Ad}^m) for 2 ≤ m ≤ N − 1, with the upper limit on m due to the chiral ring being finitely generated [30,31]. With the adjoint M_{Ad} the only G_0-charged degree of freedom, it is not possible to completely higgs the gauge group. By giving M_{Ad} an arbitrary expectation value, SU(N) can be broken into any of its rank N − 1 subgroups, leaving at least an unbroken gauged U(1)^{N−1} Coulomb phase.
For the SU (N ) k+1 ring moose it is possible to identify a holomorphic prepotential that specifies both W and the Kähler potential. For example, in the Λ 0 Λ 1...k limit, the chiral matter field M Ad can be combined with the vector gauge superfield λ 0 and the antichiral M † Ad into an N = 2 supermultiplet, all of which transform in the adjoint representation of G 0 . In Ref. [8], the prepotential hyperelliptic curve is found by reducing the SU (N ) k product group down to SU (N ) i × SU (N ) j , where Λ i and Λ j are the two smallest holomorphic scales, in analogy with the SU (2) × SU (2) theory in Ref. [2].
Appealing to a geometric interpretation of this theory, we refer to these variants of the SU (N ) k linear moose as having different boundary conditions. In Ref. [9], the linear moose terminates with (anti)fundamental quarks with SU (N ) global symmetries at each end; in Ref. [10], the matter content is altered on one of the boundaries to include the antisymmetric representation; and in Ref. [8] the boundaries are made periodic, turning the moose line into a moose ring. In Section 4 we investigate similar variations to the boundary conditions on the moose lattice.
Confinement on the Moose Lattice
The discretization of Euclidean space in n ≥ 2 dimensions is complicated by the fact that there are multiple lattice arrangements that can be said to contain only nearest-neighbor interactions. This ambiguity is not present for the n = 1 dimensional lattice, where each vertex has precisely two neighbors (or one, if the vertex is on the boundary of the lattice). Two dimensional space, on the other hand, can be tiled by squares, triangles, hexagons, or any variety of non-regular polygons. Our decision to present a triangular (rather than rectangular) lattice is motivated by the fact that the triangular plaquette permits gauge-invariant marginal operators in the superpotential, so that all of the important mass scales in the problem are dynamically generated. This setup leads to a relatively simple analysis, where the confinement proceeds in two stages, at Λ 3N and Λ N .
From Section 2.1 to Section 2.4 we track the evolution of the theory from the ultraviolet (µ ∼ M ) to the infrared (µ Λ N ). This stage of the analysis can be completed without specifying the precise shape of the moose lattice, but for concreteness we will periodically refer to an [SU (3N ) × SU (N ) 3 ] k example with k = 3 × 3 as an illustration. The shape and topology of the moose lattice become much more important in Section 4, where we analyze the different kinds of behavior that can emerge in the far infrared limit of the theory.
The Weakly Coupled Regime
Our discussion begins at the ultraviolet scale µ ∼ M where we take all of the gauge couplings to be perturbatively small, i.e. g_i ≲ O(1) for all of the SU(3N) and SU(N) gauge groups. A moose diagram of one example is shown in Figure 2, with non-Abelian groups SU(N_c) represented by circles, and chiral superfields represented by lines.
The "unit cell" of the lattice consists of a central SU(3N) gauge group surrounded by six SU(N) groups; three pairs of bifundamental quarks Q + Q̄, respectively in the fundamental and antifundamental representations of SU(3N); and a bifundamental q for each neighboring SU(N) × SU(N) pair on the boundary of the hexagonal unit cell, for a total of six. Throughout this paper we will borrow lattice-related terms to describe the various components of the model, so that each gauge group is located at a "site"/"node"/"vertex"; the bifundamentals are "links" or edges; and the trilinear gauge invariants Tr(q_1 q_2 q_3) will be referred to as "plaquette" operators. Figure 2 shows a k = 3 × 3 example of the [SU(3N) × SU(N)³]^k model, with nine copies of the unit cell arranged in a parallelogram. Each SU(N_c) group on the boundary of the lattice is a global symmetry. At these sites the cubic anomaly coefficients are nonzero, so these SU(N) groups cannot be gauged without introducing additional matter fields.
It is not necessary to include an integer number of unit cells in the lattice. Although Figure 2 shows an example where every SU (3N ) group is gauged, and the lattice boundary passes only through SU (N ) sites, we could just as well have routed the lattice boundary through some of the SU (3N ) groups instead. Periodic boundary conditions are more restrictive. A periodic direction should include an integer number of unit cells, so that the gauged SU (3N ) and SU (N ) sites are all anomaly-free.
At the ultraviolet scale M , each of the gauge groups is taken to be weakly coupled. For the SU (3N ) groups this is a natural assumption: the matter content of Figure 2 supplies each SU (3N ) group with F = 3N pairs of (Q+Q) quarks, and SQCD with F = N c exhibits asymptotic freedom. Given a coupling g 3N (M ) defined at the short-distance scale M , the gauge coupling at another scale µ is given by the NSVZ β function,
\beta(g_{N_c}) = \frac{d g_{N_c}}{d \log\mu} = -\frac{g_{N_c}^{3}\, b}{16\pi^2}, \qquad b = 3N_c - F,   (2.1)
and β < 0 for F = N c . Dimensional transmutation defines the holomorphic scale Λ 3N in terms of the gauge coupling g and the CP violating Yang-Mills phase θ YM ,
\Lambda_{3N}^{b} = \mu^{b} \exp\left(-\frac{8\pi^2}{g_{3N}^2(\mu)} + i\theta_{\rm YM}\right),   (2.2)
which indicates the scale where the SU (N c ) gauge group becomes strongly coupled, i.e. g(Λ) → ∞.
If g_{3N}(M) ≪ 4π is perturbatively small, then there will be a hierarchy Λ_{3N} ≪ M between the two scales.
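For concreteness, Eq. (2.2) can be evaluated numerically; the sketch below uses assumed example values for M, N and g_{3N}(M), none of which come from the text:

import math

# Dimensional transmutation, Eq. (2.2): |Lambda|^b = M^b exp(-8 pi^2 / g^2(M)),
# so |Lambda| = M exp(-8 pi^2 / (b g^2)). Here b = 6N for N_c = F = 3N.
def holomorphic_scale(M, g, b):
    return M * math.exp(-8 * math.pi**2 / (b * g**2))

M, N = 1.0e16, 2      # M in arbitrary units, N = 2: both illustrative assumptions
b = 6 * N
for g in (0.5, 0.7, 1.0):
    print(g, holomorphic_scale(M, g, b))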
In a theory with multiple unit cells, the scales Λ_{3N}^{(i)} of the different SU(3N) gauge groups are generically distinct. Each SU(N) node, on the other hand, has 3N + N + N pairs of quarks in the N and N̄ representations, for a total of F = 5N flavors. In this case, b = −2N, the β(g_N) function is positive, and there is no asymptotic freedom. However, if we start with a weakly coupled g_N(M) ≪ 1 at µ = M, the SU(N) groups remain weakly coupled for all µ < M down to the phase transition at µ ∼ Λ_{3N}. In this work we do not explore the µ ≫ M limit of the theory, where the SU(N) become strongly coupled.
The final addition to the theory is a chiral superpotential W , composed of the marginal trace operators W ⊃ λTr (Q 1 Q 2 Q 3 ) that encompass each triangular plaquette. Eventually we anticipate introducing a full set of symmetry-violating marginal and irrelevant operators into W , but at this stage of the discussion we restrict W to include only the U (1) R preserving operators. For Figure 2 this includes the plaquettes surrounded by SU (3N ) × SU (N ) × SU (N ) groups, but not the SU (N ) × SU (N ) × SU (N ) plaquettes.
The moose notation is particularly helpful when constructing gauge invariant operators: one simply follows the arrows on the diagram so as to form closed loops. For example, within the bulk of the lattice, the next set of trace operators are the dimension-6, irrelevant operators W ⊃ α(Q 1 Q 2 . . . Q 6 )/M 3 . Another type of gauge invariant operator is formed by open "Wilson lines," which start and end on the boundaries of the lattice. These operators transform as bifundamentals under the SU (N f ) L × SU (N f ) R global symmetries associated with their endpoints, and have the form
W \supset \frac{\alpha_{ij}}{M^{k-2}}\, Q^{(1)}_{i\, n_1} Q^{(2)}_{n_1 n_2} Q^{(3)}_{n_2 n_3} \cdots Q^{(k)}_{n_{k-1} n_k} Q^{(k+1)}_{n_k\, j},   (2.3)
where i, j are indices for the global symmetries SU(N_f)_{L,R}, and the repeated n_{1...k} indices correspond to the gauged SU(N_c) groups. Simple examples on Figure 2 include the straight left-to-right lines, which pass either through alternating SU(N) and SU(3N) nodes, or exclusively through SU(N) nodes. In addition to the analogous straight lines along the ϕ = 120° and ϕ = 240° directions (where we define ϕ = 0 as left to right in the page), generic Wilson line gauge invariants can include any number of 60° corners. Most of these operators have no relevance in the infrared theory: as we show in Section 2.2, the degrees of freedom associated with the 60° corners become massive, so that the low energy theory is dominated by the straight Wilson lines that pass through SU(3N) sites.
An altogether different set of gauge invariant "baryon" operators is generated by the completely antisymmetric products

W \supset \frac{(q)^{N}}{M^{N-3}} + \frac{(\bar{q})^{N}}{M^{N-3}} + \frac{(Q_a)^{N} (Q_b)^{N} (Q_c)^{N}}{M^{3N-3}} + \frac{(\bar{Q}_a)^{N} (\bar{Q}_b)^{N} (\bar{Q}_c)^{N}}{M^{3N-3}},   (2.4)
where q and q̄ are any of the SU(N) × SU(N) bifundamentals, and Q_{a,b,c} and Q̄_{a,b,c} are SU(3N) × SU(N) bifundamentals in the (3N, \bar{N}) and (\overline{3N}, N) representations, respectively. In the N = 3 special case, q^N and q̄^N are marginal operators.
By invoking M as the mass scale associated with the irrelevant operators, we complete the promise made earlier in this section, that the theory is weakly coupled at scales µ ≲ M. This statement now applies to the superpotential couplings as well as the gauge interactions, as long as the dimensionless parameters α, λ, etc. are all O(1).
Theories of gravity are generally expected to violate global symmetries [32][33][34][35][36][37], so the flavor symmetry violating superpotential Eq. (2.3) and its baryon number violating cousin Eq. (2.4) may be generated by Planck-scale effects, i.e. with M → M p . A lower scale M S < M p may be appropriate if this theory is to be embedded within string theory, any other N > 1 version of supersymmetry, or any more than four continuous spacetime dimensions.
First Stage of Confinement
Approaching the scales µ → Λ^{(i)}_{3N}, the SU(3N)_i gauge groups become strongly coupled. To understand the behavior of the theory in the infrared, µ ≪ Λ_{3N}, we refer to the Seiberg duality for F = N_c = 3N SQCD, an S-duality that relates the weakly-coupled gauge theory at µ ≫ Λ_{3N} to a theory of gauge invariant mesons and baryons at µ ≪ Λ_{3N}. Everything we need to know about the µ ≪ Λ_{3N} regime of the theory can be deduced from studying a single unit cell of the lattice.
Thanks to the positive sign in the β(µ) function for N_c = N with F = 5N, the SU(N) gauge coupling g_N (already perturbative at µ = M) becomes even smaller as µ decreases from M towards Λ_{3N}. As far as the SU(3N) node is concerned, there are 3N flavors of quarks Q and antiquarks Q̄, with opposite charges under a U(1)_B baryon number, and transforming under approximate SU(3N)_Q and SU(3N)_Q̄ flavor symmetries. By gauging the SU(N) subgroups, these putative global SU(3N)_{Q,Q̄} flavor symmetries are explicitly broken, but at µ ∼ Λ_{3N} this effect is a small perturbation.
The gauge groups and chiral matter associated with the unit cell are shown in Figure 3. The superpotential includes six U (1) R conserving plaquette operators per unit cell,
W \supset \lambda_1 (q_1 Q_3 \bar{Q}_1) + \lambda_2 (q_2 Q_3 \bar{Q}_2) + \lambda_3 (q_3 Q_1 \bar{Q}_2) + \lambda_4 (q_4 Q_1 \bar{Q}_3) + \lambda_5 (q_5 Q_2 \bar{Q}_3) + \lambda_6 (q_6 Q_2 \bar{Q}_1),   (2.5)
where the trace over gauge indices is implied. Each λ c=1...6 is a dimensionless complex parameter. Under the simplest U (1) R charge assignment, each SU (3N )-charged Q i and Q i is neutral, while the SU (N ) × SU (N ) bifundamentals q i have R charges of +2.
Following Eq. (1.1), the Seiberg dual of the F = N c SQCD is described by F 2 meson operators and two baryon operators, with one constraint equation:
(M_{mn})_{ij} = (Q_m \bar{Q}_n)_{ij}, \qquad B = (Q_1^N Q_2^N Q_3^N), \qquad \bar{B} = (\bar{Q}_1^N \bar{Q}_2^N \bar{Q}_3^N), \qquad \det M - B\bar{B} = \Lambda_{3N}^{b},   (2.6)

with b = 6N, for indices m, n = 1, 2, 3 and i, j = 1, 2, ..., N. The determinant det M is a shorthand for the completely antisymmetric product of 3N (Q\bar{Q}) operators, which could also be expressed in terms of determinants of the nine distinct SU(N)-charged mesons, det M_{mn}.
Figure 3: The triangular/hexagonal unit cell (left); the mesons M_{ab} and edge quarks q_c at intermediate scales (middle); and the theory at scales µ < λΛ_{3N}, after integrating out the vectorlike quarks (right). In this infrared limit, the only light SU(N)-charged matter fields are the mesons that pass through the center of the unit cell.
In the g_N → 0 limit, where the specified SU(N) node becomes a global symmetry, M_{ab}, B and B̄ all represent flat directions on the moduli space subject to the constraint Eq. (2.6). At symmetry-enhanced points on the moduli space, the SU(F)_Q × SU(F)_Q̄ × U(1)_B × U(1)_R global symmetry may be spontaneously broken to SU(F)_diag × U(1)_B × U(1)_R, by the M_{ij} ∝ δ_{ij} expectation value (with i, j = 1 ... F). Or, the U(1)_B global symmetry can be broken by BB̄ = −Λ^b_{3N}. For gauged SU(N), many of these flat directions are lifted. The true moduli space is spanned by gauge-invariant operators [29], and the M_{mn} are not gauge invariant: they transform as bifundamentals of SU(N) × SU(N). Each gauged SU(N) introduces a D-term potential for the mesons M_{mn}, so that the vacuum of the theory lies on the BB̄ = −Λ^b_{3N} branch, with spontaneously broken baryon number and unbroken SU(N) symmetries.
One linear combination of the B and B̄ scalars, the "B + B̄" direction that changes the value of BB̄, acquires an O(Λ_{3N}) mass. The other linear combination, the "B − B̄" or "tan β" direction tangential to the BB̄ = −Λ^b_{3N} flat direction, remains massless. This flat direction is lifted if U(1)_B is explicitly broken; for example, by the gauge invariant irrelevant operators,
W \supset \frac{(Q_1^N Q_2^N Q_3^N)}{M^{3N-3}} + \frac{(\bar{Q}_1^N \bar{Q}_2^N \bar{Q}_3^N)}{M^{3N-3}} + \frac{(Q_1^N Q_2^N Q_3^N)(\bar{Q}_1^N \bar{Q}_2^N \bar{Q}_3^N)}{M^{6N-3}} + \ldots

W(\mu < \Lambda_{3N}) \sim \frac{\Lambda_{3N}^{3N-1}}{M^{3N-3}}\, B + \frac{\Lambda_{3N}^{3N-1}}{M^{3N-3}}\, \bar{B} + \frac{\Lambda_{3N}^{6N-2}}{M^{6N-3}}\, B\bar{B} + \ldots   (2.7)
which induce small tadpole operators and even smaller baryon mass terms into the superpotential.
Here we have rendered B and B̄ as operators with canonical mass dimension +1, by extracting the appropriate powers of the confinement scale Λ_{3N}:

B = \frac{(Q_1^N Q_2^N Q_3^N)}{\Lambda_{3N}^{3N-1}}, \qquad \bar{B} = \frac{(\bar{Q}_1^N \bar{Q}_2^N \bar{Q}_3^N)}{\Lambda_{3N}^{3N-1}}, \qquad M_{ab} = \frac{(Q_a \bar{Q}_b)}{\Lambda_{3N}}.   (2.8)
Applying the same (Q_a \bar{Q}_b) → Λ_{3N} M_{ab} mapping to the plaquette superpotential Eq. (2.5), we see each of the a ≠ b mesons acquires a vectorlike mass pairing with one of the edge quarks q_i:

W(\mu < \Lambda_{3N}) \supset m_1 q_1 M_{31} + m_2 q_2 M_{32} + m_3 q_3 M_{12} + m_4 q_4 M_{13} + m_5 q_5 M_{23} + m_6 q_6 M_{21},   (2.9)
where m_a = λ_a Λ_{3N}. Figure 3 illustrates the transition. The middle diagram shows the nine M_{ab} mesons together with the six q_c in one unit cell. This moose diagram describes the theory at the intermediate scales m_a < µ < Λ_{3N}. All of the mesons shown in Figure 3 are neutral under the spontaneously broken U(1)_B. However, as we show in Section 3, there are a number of unbroken U(1) global symmetries under which B and B̄ are neutral, and the mesons M_{ab} are charged.

Integrating Out Heavy Mesons and Quarks

Integrating Out: On the BB̄ = −Λ^b_{3N}, M_{ab} = 0 branch of the vacuum, all of the M_{ab} degrees of freedom correspond to approximately flat directions on the moduli space, at least if we ignore the D-term potential from the weakly gauged SU(N) groups. As can be seen from the supersymmetric Lagrangian, which includes terms of the form L ⊃ |∂W/∂Φ|² for each of the superfields Φ, many of the otherwise-flat directions are lifted by Eq. (2.9). To pick one example, the m_1 term in W contributes two terms to the so-called F-term potential,
-\mathcal{L}_F \supset \left|\frac{\partial W}{\partial q_1}\right|^2 + \left|\frac{\partial W}{\partial M_{31}}\right|^2 \approx |m_1|^2 \left( |M_{31}|^2 + |q_1|^2 \right),   (2.10)
In the absence of any other q_1-dependent terms in the superpotential, the scalar potential is minimized at the vacuum solution M_{31} = q_1 = 0. Not counting the dimension-6 irrelevant operators, the only other q_1-dependent term in the superpotential comes from the triangular plaquette operator involving q_1 and the q_b and q_c from the adjacent unit cells, W ∼ λ_{bc} (q_1 q_b q_c). Together with the m_1 mass term, the vacuum solution nominally shifts to
m_1 M_{31} + \lambda_{bc}\, q_b q_c = 0, \qquad q_1 = 0.   (2.11)
However, confinement on the b and c unit cells sets ⟨q_b⟩ = ⟨q_c⟩ = 0, so that the minimum of the scalar potential remains at M_{31} = 0. In principle the dimension-6 operators do have the potential to shift M_{ab} and q_i away from the origin of moduli space; however, thanks to the powers of M³ in the denominator of such operators, the resulting shift is small enough that it can be safely ignored.
Matching Holomorphic Scales: Before we move on to the strongly coupled regime of the SU (N ) gauge theory, let us take a moment to study the transition at µ ∼ λΛ 3N . This is where the sign of the SU (N ) β function changes, which is what causes SU (N ) to become strongly coupled at µ Λ 3N in the first place. By matching the gauge coupling at the scales µ = m c , the holomorphic scale Λ N for the F = N theory can be derived from g N (µ = M ). Specifically, it is the holomorphic gauge coupling τ that we match at each threshold,
2\pi i\, \tau(\mu) \equiv -\frac{8\pi^2}{g^2(\mu)} + i\theta_{\rm YM}.   (2.12)
We begin with the values of g_N and θ_YM evaluated at µ = M, and define a Λ_{F=5N} ≫ M holomorphic scale:

\Lambda_{N,F=5N}^{-2N} = M^{-2N} \exp\left(-\frac{8\pi^2}{g^2(M)} + i\theta_{F=5N}\right), \qquad \Lambda_{N,F=5N} = M \exp\left(+\frac{1}{2N}\frac{8\pi^2}{g^2(M)} - \frac{i\theta_{F=5N}}{2N}\right).   (2.13)
Although the CP-odd θ_YM parameter is invariant under the RG evolution, it can acquire threshold corrections at µ ∼ m, so we specify θ_YM(µ = M) ≡ θ_{F=5N}.
Allowing the four superpotential coupling constants λ a=1,2,3,4 to acquire distinct values, there are generally four distinct mass thresholds, m a > m b > m c > m d . Matching τ (µ = m a ) between the F = 5N and F = 4N theories, with b = −2N and b = −N , respectively, we find
|\Lambda_{F=5N}|^{-2N}\, e^{i\theta_{F=5N}}\, m_a^{2N} = e^{2\pi i\, \tau(m_a)} = |\Lambda_{F=4N}|^{-N}\, e^{i\theta_{F=4N}}\, m_a^{N},
|\Lambda_{F=5N}|^{-2N}\, m_a^{N}\, e^{i\theta_{F=5N}} = |\Lambda_{F=4N}|^{-N}\, e^{i\theta_{F=4N}}.   (2.14)
Applying the same matching procedure at m b,c,d , we find
|\Lambda_{F=5N}|^{-2N}\, (m_a m_b m_c m_d)^{N}\, e^{i\theta_{F=5N}} = |\Lambda_{F=N}|^{2N}\, e^{i\theta_{F=N}}.   (2.15)
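For completeness, the intermediate matchings behind Eq. (2.15) can be written out explicitly (our sketch; Λ^b is shorthand for |Λ|^b e^{iθ}, and b = −2N, −N, 0, +N, +2N for F = 5N, 4N, 3N, 2N, N):

\Lambda_{F=4N}^{-N} = \Lambda_{F=5N}^{-2N}\, m_a^{N} \qquad (\text{at } \mu = m_a),
e^{2\pi i \tau}\big|_{F=3N} = \Lambda_{F=4N}^{-N}\, m_b^{N} \qquad (\text{at } \mu = m_b;\ b = 0, \text{ so } \tau \text{ is constant}),
\Lambda_{F=2N}^{N} = e^{2\pi i \tau}\big|_{F=3N}\, m_c^{N} \qquad (\text{at } \mu = m_c),
\Lambda_{F=N}^{2N} = \Lambda_{F=2N}^{N}\, m_d^{N} \qquad (\text{at } \mu = m_d).

Chaining these together reproduces Eq. (2.15): \Lambda_{F=N}^{2N} = \Lambda_{F=5N}^{-2N} (m_a m_b m_c m_d)^{N}.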
So, the θ YM phase in the F = N theory is given by
\theta_{F=N} = \theta_{F=5N} + N \arg(m_a m_b m_c m_d) = \theta_{F=5N} + N \arg(\lambda_a \lambda_b \lambda_c \lambda_d).   (2.16)
Inspecting the real part of Eq. (2.15), we find
|\Lambda_{N,F=N}| = \left(\frac{m_a m_b m_c m_d}{\Lambda_{N,F=5N}^2}\right)^{1/2} = \frac{|\lambda_a \lambda_b \lambda_c \lambda_d|^{1/2}\, |\Lambda_{3N}|^2}{|\Lambda_{N,F=5N}|} = |\lambda_a \lambda_b \lambda_c \lambda_d|^{1/2}\, \frac{|\Lambda_{3N}|^2}{M} \exp\left(-\frac{8\pi^2/2N}{g_N^2(M)}\right),

|\Lambda_{N,F=N}| = |\lambda_a \lambda_b \lambda_c \lambda_d|^{1/2}\, M \exp\left(-\frac{1}{N}\frac{8\pi^2}{g_{3N}^2(M)} - \frac{1}{2N}\frac{8\pi^2}{g_N^2(M)}\right).   (2.17)
As expected, Λ_N for the F = N phase of the theory is exponentially small compared to Λ_{3N}, which is itself exponentially smaller than M. If any of the λ_i are much smaller than O(1), then Λ_{N,F=N} is suppressed by a further factor of √λ_i. Note that the ultimate expression for Λ_N is unaffected by changes to the assumed ordering, λ_a > λ_b > λ_c > λ_d.

Figure 4: Cartoon of the running inverse couplings 1/g_i²(µ) as functions of log µ. The g_{3N} coupling runs to strong coupling with b = 3N_c − F = 6N, while the g_N (in red) become weaker (with b = 3N − 5N = −2N). After SU(3N) confinement, 4N of the SU(N)-charged flavors acquire O(λΛ_{3N}) masses, instigating the F → N transition. For simplicity we have taken λ_{a,b,c,d} ≈ λ to be nearly equal, while also assuming an approximately uniform value of Λ_{3N} for all of the adjacent unit cells. After the last vectorlike pairs are integrated out, the slope has changed from b = −2N to b = +2N, and g_N runs to strong coupling as µ → Λ_N. In this graphic we follow two different SU(N) groups. If their mass thresholds are identical then the ratio between the g_N² couplings remains fixed, but for generic λ_i Λ^{(j)}_{3N} this is not so.
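As a numerical illustration of Eq. (2.17), the sketch below evaluates Λ_N/M with λ_a = ⋯ = λ_d ≡ λ, so that |λ_a λ_b λ_c λ_d|^{1/2} = λ². All inputs are assumed example values, not numbers taken from the text:

import math

def lambda_N_over_M(g3N, gN, lam, N):
    # exponent follows Eq. (2.17): -(1/N) 8pi^2/g_3N^2 - (1/2N) 8pi^2/g_N^2
    exponent = -(1.0 / N) * 8 * math.pi**2 / g3N**2 \
               - (1.0 / (2 * N)) * 8 * math.pi**2 / gN**2
    return lam**2 * math.exp(exponent)

for lam in (1.0, 0.1):
    print(lam, lambda_N_over_M(g3N=1.2, gN=1.2, lam=lam, N=2))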
For simplicity, we assumed in Eq. (2.17) that the two unit cells that border the SU(N) node have equal values of Λ_{3N}. Very little changes when this assumption is relaxed. If instead the SU(3N)_L and SU(3N)_R gauge groups have couplings g_{3N,L} ≠ g_{3N,R} at µ = M, Eq. (2.17) generalizes to

|\Lambda_{N,F=N}| = |(\lambda_a \lambda_b)(\lambda_c \lambda_d)|^{1/2}\, M \exp\left(-\frac{1}{2N}\frac{8\pi^2}{g_{3N,L}^2(M)} - \frac{1}{2N}\frac{8\pi^2}{g_{3N,R}^2(M)} - \frac{1}{2N}\frac{8\pi^2}{g_N^2(M)}\right).   (2.18)
Even if for some reason there is a large hierarchy between Λ 3N,L and Λ 3N,R , it is still true that the SU (N ) remains weakly coupled until after both of its neighboring SU (3N ) gauge groups have confined. Once the first of the strongly coupled SU (3N ) groups confines at (for example) Λ 3N,L , only two pairs of the bifundamental fields acquire O(Λ 3N,L ) masses. After these are integrated out, the F = 5N effective flavors of SU (N ) are reduced to F = 3N ; the coefficient of the one-loop β function switches from b = −2N to b = 0; and the gauge coupling g N (µ) remains fixed at a perturbatively small value. Only after the remaining SU (3N ) R group confines does the β(g N ) function become negative.
After integrating out the vectorlike pairs of quarks q and mesons M ab , the SU (N ) gauge groups become strongly coupled in the infrared. At this stage of the calculation, Λ N,F =N is the only relevant version of the SU (N ) holomorphic scale, so for the remainder of this section we take Λ b N to refer exclusively to
\Lambda_N^{b} \equiv \Lambda_{N,F=N}^{2N} = |\Lambda_{N,F=N}|^{2N}\, e^{i\theta_{F=N}},   (2.19)
where θ F =N and |Λ N,F =N | are given in Eq. (2.16) and Eq. (2.17).
In Figure 4 we show the running gauge couplings for an SU (3N ) and two adjacent SU (N ) groups. At µ = M the various g 2 i are taken to be of the same magnitude. In this example we take the simplifying limit where the neighboring Λ 3N are similar in size, as are the λ i superpotential coupling constants, so that the transition between F = 5N and F = N for the SU (N ) groups occurs sharply at µ ≈ λΛ 3N .
With b = 3N_c − F = 6N for the SU(3N) gauge group, g_{3N} runs relatively quickly towards strong coupling with decreasing µ < M, while the SU(N) more gradually become more weakly coupled. For g_N, the sharp transition from F = 5N to F = N at µ = λΛ_{3N} flips the sign of b, implying that only at µ ≈ (λΛ_{3N})²/M has g_N(µ) returned to its initial value at µ = M. Thus, Λ_N ≲ (λΛ_{3N})²/M is generally much smaller than Λ_{3N} and M. If Figure 4 were drawn to scale, the red lines corresponding to the different g_N would form mirror images in the vicinity of µ ∼ λΛ_{3N}, and Λ_N would be much further to the left on the plot.

After the massive off-diagonal mesons are integrated out, the determinant in the constraint Eq. (2.6) reduces to its diagonal part,

\det M \rightarrow (\det M_{11})(\det M_{22})(\det M_{33}).   (2.20)

With this substitution, the moduli space in the limit of weakly coupled SU(N) is given by

(\det M_{11})(\det M_{22})(\det M_{33}) - B\bar{B} = \Lambda_{3N}^{b},   (2.21)

for b = 2N_c = 6N.
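A worked check of this substitution (our addition): with the off-diagonal blocks M_{a≠b} set to zero, the 3N × 3N meson matrix is block diagonal, and its determinant factorizes,

\det M = \det\begin{pmatrix} M_{11} & 0 & 0 \\ 0 & M_{22} & 0 \\ 0 & 0 & M_{33} \end{pmatrix} = (\det M_{11})(\det M_{22})(\det M_{33}).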
Second Stage of Confinement
For this stage of the analysis we may restrict our attention to a single string of unit cells. In analogy with Section 1.2, the light diagonal mesons M^{(a)}_j of the jth unit cell in the string play the role of the quarks Q_j, with baryonic operators B^{(a)}_j ≡ det M^{(a)}_j, and the constraint equation takes the same form as Eq. (1.12):

\det(M^{(a)}_1 M^{(a)}_2 \cdots M^{(a)}_k) = \prod_{j=1}^{k} B^{(a)}_j + (\text{all neighbor contractions}).   (2.24)

Following Ref. [9], the term "neighbor contractions" refers to the replacement of (B^{(a)}_{j-1} B^{(a)}_j) by a factor of the holomorphic scale −Λ^b_N associated with the SU(N) node that connects the M^{(a)}_{j-1} and M^{(a)}_j mesons. For a string of k = 4 unit cells, for example,

\det(M^{(a)}_1 M^{(a)}_2 M^{(a)}_3 M^{(a)}_4) = B^{(a)}_1 B^{(a)}_2 B^{(a)}_3 B^{(a)}_4 - \Lambda_1^{b}\, B^{(a)}_3 B^{(a)}_4 - B^{(a)}_1\, \Lambda_2^{b}\, B^{(a)}_4 - B^{(a)}_1 B^{(a)}_2\, \Lambda_3^{b} + \Lambda_1^{b} \Lambda_3^{b},   (2.25)

where Λ^b_{j=1,2,3} refer to the jth gauged SU(N) group, which lies on the border between the jth and (j+1)th unit cells. The constant term Λ^b_1 Λ^b_3 removes the origin from the moduli space: either det M_a or some product of baryon operators must acquire an expectation value. If k is odd, then the product (B_1 ⋯ B_k) cannot be fully contracted: instead, every term in the sum contains an odd number of B_j factors. In this case M_a = B^{(a)}_j = 0 remains on the moduli space, and confinement can proceed without breaking the global symmetries.

This is the stage of the calculation where the boundary conditions, i.e. the shape and topology of the moose lattice, begin to have significant effects on the low energy theory. In this section we have taken the SU(N) nodes at the boundary of the lattice to be conserved global symmetries.
This kind of boundary is the easiest to analyze: unit cells can be added to or deleted from the k = 3 × 3 example without any change to the analysis above. With different boundary conditions come significantly altered infrared behaviors. In one example, diagonal subgroups of the global SU(N)_ℓ × SU(N)_r groups can be gauged: depending on the resulting topology of the moose lattice, this alteration may prevent the SU(N) gauge groups from confining. We explore these kinds of possibilities in Section 4.
Symmetry-Breaking Superpotentials: At the corners where the lattice is only one unit cell wide (in the ϕ = 120° direction), our imposition of SU(N)_ℓ × SU(N)_r global symmetry conservation is important for the consistency of the analysis. These cells admit gauge invariant mass terms of the form Q_2 Q̄_2, e.g. W ⊃ α_{ij} M (Q_2 Q̄_2)_{ij}. Unless the dimensionless α_{ij} is exponentially small, α ≲ Λ_{3N}/M, these quarks should be integrated out. The theory left behind, SU(3N) SQCD with F < N_c, is not described by the methods of Section 2.2.
Where the moose lattice is wider (m > 1), the analogous gauge invariant operators are irrelevant, with mass dimension 2m > 3. In the m = 2 case, the perturbation to the SU(3N) theory is small: after SU(3N) × SU(3N) confinement, the operator W ∼ α(Q Q̄ Q Q̄)/M maps onto a small vectorlike meson mass of order αΛ²_{3N}/M ≪ Λ_{3N}.
Conclusion
It is not unseemly to pause here in a spirit of celebration at the simplicity of the low-energy theory. In the µ ∼ M theory depicted in Figure 2, we started with 108 matter fields; nine SU (3N ) and sixteen SU (N ) gauge groups; a superpotential consisting of 62 plaquette operators; and a global symmetry group that includes 22 copies of SU (N ) and a similar, as-yet-uncounted number of U (1) symmetries. In the µ < Λ N limit shown in Figure 5, on the other hand, there are 11 mesons, each transforming as a bifundamental of a global SU (N ) × SU (N ), and a collection of light baryon operators associated with the spontaneously broken U (1) symmetries, subject to constraint equations of the form Eq. (2.24) and Eq. (2.27).
Tracking the Global Symmetries
There are several good reasons why we should keep track of the global symmetries. Matching the anomaly coefficients of the global symmetries in the UV and IR limits provides a consistency check for the Seiberg duality, for example. We may also want to embed the Standard Model within the moose lattice, or to test whether the moose theory descends from some higher dimensional QFT; tracking the global symmetries is important to either effort. A number of the (approximate) global symmetries are spontaneously broken during the various stages of confinement, generating (pseudo-) Nambu-Goldstone bosons and their superpartners. For phenomenological applications these details are important: the approximately massless degrees of freedom may be desirable, e.g. for QCD axion models, or they may be harmful, if for example their presence can be ruled out by existing observations.

For the moose lattice, the global symmetries can be split into two types. The first class consists of "localized" U(1) symmetries that are associated with a single unit cell. These symmetries have mixed SU(N_c)²U(1) anomalies that are cancelled using only the fields from that unit cell. Due to this independence from the neighboring cells and the lattice boundary, we associate these symmetries with the interior "bulk" of the moose lattice.
Figure 6: The unit cell, with the bifundamental quarks Q_{1,2,3} and Q̄_{1,2,3} of the central SU(3N) node and the edge quarks q_{1,...,6}.
The second type of global symmetry is associated with the lattice boundary. For these symmetries, the gauge anomalies generated by the matter fields on one boundary are cancelled by matter fields on another part of the boundary, flowing through some set of charged matter fields in the lattice bulk that generally span multiple unit cells.
Global Symmetries In the Bulk
A complete accounting of the anomaly-free global symmetries depends on the shape of the moose lattice. However, it is possible to identify some non-R U (1) symmetries that are properties of the unit cell: that is, where the only matter fields with U (1) charges are the 12 bifundamentals shown in Figure 3.
A simple counting exercise shows why this should be possible. Starting with 12 U(1) phases (one for each matter field), six of the linear combinations are broken by the R-conserving plaquette superpotential W of Eq. (2.5); of the remainder, imposing the cancellation of the mixed SU(N_c)²U(1) anomalies leaves two new non-R U(1) symmetries, which are anomaly-free and not broken by the plaquette superpotential.
To make this explicit, consider the following U(1)_a × U(1)_b charge assignment for the quarks in Figure 6:

U(1)_a: \hat{a}(Q_1) = \hat{a}(\bar{Q}_1) = 0, \quad \hat{a}(Q_2) = \hat{a}(Q_3) = \hat{a}(q_4) = \hat{a}(q_6) = +1, \quad \hat{a}(q_5) = -\hat{a}(q_2) = +2, \quad \hat{a}(\bar{Q}_2) = \hat{a}(\bar{Q}_3) = \hat{a}(q_1) = \hat{a}(q_3) = -1.   (3.2)

U(1)_b: \hat{b}(Q_1) = -\hat{b}(\bar{Q}_1) = +2, \quad \hat{b}(Q_2) = \hat{b}(Q_3) = -\hat{b}(\bar{Q}_2) = -\hat{b}(\bar{Q}_3) = +1, \quad \hat{b}(q_5) = \hat{b}(q_2) = 0, \quad \hat{b}(q_1) = \hat{b}(q_6) = -\hat{b}(q_3) = -\hat{b}(q_4) = +3.   (3.3)
Here we use a new notation, x̂(Φ), to concisely report the U(1)_x charge of a superfield Φ, or the charge of its scalar component if the U(1) is an R symmetry. Each unit cell has its own U(1)_a × U(1)_b global symmetry, acting only on the quarks associated with that cell. Transformations of this kind can be called "localized," to distinguish them from global symmetries like U(1)_B that act on matter fields from multiple unit cells. This is distinct from (but reminiscent of) a truly local (a.k.a. gauged) symmetry.
After SU(3N) confinement, the composite mesons and baryons inherit their charges from Eq. (3.2) and Eq. (3.3).

It is possible to gauge U(1)_a × U(1)_b or any of its U(1) subgroups without adding any additional matter fields, thanks to the cancellation of all of the mixed gauge anomalies involving U(1)_{a,b} and the various gauged SU(N_c) groups. In this case the localized nature of U(1)_a × U(1)_b in the moose lattice is now highly reminiscent of a gauged U(1) × U(1) symmetry from a 6d spacetime, where the discretization of two compact dimensions causes the local transformation to be realized as a separate U(1)_a × U(1)_b gauge group for each unit cell.
Plaquette Operators and R: If U(1)_a and U(1)_b are not gauged, then the superpotential Eq. (2.5) may be expanded to include trace operators from the SU(N) × SU(N) × SU(N) plaquettes. For any three mutually adjacent unit cells r, s, t, these plaquette operators take the form

W \supset \lambda_{rst}\, \mathrm{Tr}\left( q^{(r)} q^{(s)} q^{(t)} \right).   (3.4)

So, the addition of the (q_1 q_3 q_5) and (q_2 q_4 q_6) type plaquettes to W has replaced the k independent localized U(1)_a × U(1)_b with a single global U(1)_a × U(1)_b. This picture further reinforces the notion that the moose lattice reconstructs two extra dimensions: either U(1)_a × U(1)_b is locally conserved, as when W includes only the plaquettes of Eq. (2.5), or it is broken to the single diagonal U(1)_a × U(1)_b by the plaquettes of Eq. (3.4).
Global Symmetries From the Boundary
The U(1)_{a,b} type symmetries are distinct from U(1)_R and the baryon number U(1)_B that is spontaneously broken in the BB̄ ≠ 0 vacuum. Indeed, if we restrict our view to a single unit cell, we find that both U(1)_B and U(1)_R have nonzero mixed SU(N)²U(1) anomaly coefficients. This means that the anomaly-free versions of U(1)_B and U(1)_R on the moose lattice must involve matter fields from multiple unit cells. These are the simplest examples of global symmetries associated with the boundaries of the lattice: with the correct charge assignment for a conserved U(1)_B, all of the mixed SU(N)²U(1)_B anomalies cancel for the gauged SU(N) groups, but not for the SU(N)_{ℓ,r} on the boundary of the moose lattice.
In this section we describe a systematic method for enumerating the other boundary-associated global U (1) symmetries. Unlike Section 3.1, we restrict our attention to U (1) rotations that leave Eq. (3.4) as well as Eq. (2.5) invariant. A relatively simple charge basis can be constructed from strings of adjacent quarks with alternating ±1 charges, traversing the bulk of the moose lattice in a zigzag pattern. All plaquette operators are neutral under such a charge assignment. For the mixed SU (N c ) 2 U (1) anomalies to cancel for all gauged SU (N c ), this type of global symmetry needs to involve adjacent rows of unit cells, as shown in Figure 7. The result is a chevron-like charge assignment. The only nonzero SU (N ) 2 U (1) anomaly coefficients involve the global symmetries at the boundaries of the lattice, pairing two SU (N ) groups with their SU (N ) r counterparts on the opposite edge.
There are U (1) symmetries of this type along each of the ϕ = 0, 120 • , 240 • directions. In the (arbitrarily chosen) moose lattice of Figure 7, there are 3 + 5 + 5 = 13 distinct chevron charge assignments. This accounting includes the single-row zigzag versions that run along the edges of the moose lattice: for example, the dashed ϕ = 120 • line in Figure 7, but pushed all the way to the right edge of the lattice. In the k = 3 × 3 model of Figure 2, there are 4 + 4 + 6 = 14 of these U (1) symmetries. More generally, for an arbitrarily shaped moose, the number of chevron global symmetries parallel to ϕ depends on the number of distinct rows of unit cells, i.e.:
N_{\rm chev.} = \sum_{\varphi = 0,\, 120°,\, 240°} \left( N_{\rm rows}(\varphi) + 1 \right).   (3.6)
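Eq. (3.6) is easy to evaluate; the sketch below reproduces the two counts quoted above. The row numbers (2, 4, 4) and (3, 3, 5) along the ϕ = 0, 120°, 240° directions are our reading of Figures 7 and 2, not data quoted in the text:

def n_chevron(rows_by_direction):
    # Eq. (3.6): one chevron U(1) per row boundary, i.e. N_rows(phi) + 1 per direction
    return sum(n + 1 for n in rows_by_direction)

print(n_chevron((2, 4, 4)))  # 13 = 3 + 5 + 5, as quoted for Figure 7
print(n_chevron((3, 3, 5)))  # 14 = 4 + 4 + 6, as quoted for the k = 3x3 lattice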
Because the number of these U (1) symmetries is proportional to the number of rows in the moose lattice, it scales with the length of the perimeter of the lattice, rather than as the area (number of unit cells) of the lattice. These chevron U (1) symmetries can be thought of as partially localized, in analogy with Section 3.1. Rather than being confined to a single unit cell, these U (1)s act on the quarks within a specific horizontal band (or a band parallel to ϕ = ±120 • ) while ignoring the rest of the moose lattice. Unlike the U (1) a ×U (1) b symmetries, these chevron U (1)s mix with the SU (3N ) baryon number: that is, for any participating unit cell, the operators B = (Q N 1 Q N 2 Q N 3 ) and B = (Q
Global symmetry example
There is a global U (1)_B with a particularly simple charge assignment, uniform with respect to the various unit cells:

[moose-diagram charge assignment, drawn with the unit-cell conventions of Eq. (2.3)] , (3.7)
where as in Figure 7 we use blue and red to indicate ±1 charges for the quarks. All of the q_i are neutral. The plaquette operators are manifestly invariant under U (1)_B , and all the SU (N_c )^2 U (1)_B gauge anomalies cancel. In addition to B, there are three related conserved U (1) symmetries, obtained by shifting the U (1)_B charge assignment in the ϕ = 0, 120°, 240° directions. Rather than converging at the SU (3N ) nodes, these other B symmetries follow a triangular lattice with vertices at alternating SU (N ) nodes. After SU (3N ) confinement, U (1)_B and most linear combinations of chevron U (1)s are spontaneously broken. An alternating combination of chevron U (1)s is left unbroken: given the full set of chevron symmetries U (1)_j , j = 0, 1, 2, . . . , N_rows , for a fixed value of ϕ, the combination (0) − (1) + (2) − (3) + . . . leaves all of the B and B̄ operators invariant. To give an explicit example, the charge assignments under the unbroken ϕ = 0 chevron U (1) are:

[moose-diagram charge assignment] . (3.8)
In this example we added a third row of cells to illustrate the alternating pattern. Note that the charge assignment is symmetric with respect to shifts in the ϕ direction, and that the only SU (N ) sites with nonzero SU (N )^2 U (1) anomaly coefficients are those on the top and bottom edges, parallel to ϕ. This U (1) and its ϕ = ±120° analogues are particularly relevant to the phase of the theory outlined in Section 2.4: although B and B̄ are neutral under this U (1) and its ϕ = ±120° analogues, the meson operators M^{(j)}_{2,3} have nontrivial charges of ±2. In the last stage of confinement, where the SU (N ) gauge groups become strongly coupled, the chevron U (1) groups may or may not be spontaneously broken. Following Eq. (2.22), the effective SU (N )_ℓ × SU (N )_r × U (1)_B symmetry of a particular row in the moose lattice could be broken to the diagonal SU (N )_d × U (1)_B by a vev for the meson line operator M, while a vev for the baryonic operator (M )^N could spontaneously break U (1)_B to a discrete Abelian subgroup. The B in this scenario is a linear combination of the chevron U (1) symmetries. Alternatively, if the number of unit cells in each row is consistently an even number, then the deformed moduli space Eq. (2.24) may yet include the origin, M = B^{(a)} = . . . = 0, and the global symmetry group need not be broken. Most moose lattices will include some odd-length rows in one direction or another, forcing some spontaneous symmetry breaking, but an all-even moose lattice can be constructed from doubly periodic boundary conditions. As we discuss in Section 4, periodic boundary conditions lead to Coulomb phases rather than confinement.
Sources of Explicit Symmetry Breaking
Exactly conserved U (1) symmetries can pose phenomenological problems, especially in contexts (such as spontaneous symmetry breaking) where they correspond to exactly massless particles. For this reason alone, we should parameterize the sources of U (1) breaking which could introduce mass terms for otherwise massless Nambu-Goldstone bosons.
In this context, despite the fact that Λ_N is much smaller than Λ_3N , it is still (by definition) large compared to the ultimate IR limit of the theory, µ ≪ Λ_N . In particular, we have assumed that SUSY is preserved during the SU (N ) confinements. If SUSY is broken (which it must be at some point, if the theory is to describe anything resembling our universe) then the scale of soft SUSY breaking should satisfy m_s ≪ Λ_N . Though we can be glad to ignore any degrees of freedom with O(Λ_N ) masses, this does not necessarily extend to the approximate global symmetries that are explicitly broken by their mixed anomalies with SU (N ) gauge groups. Many of these anomalous U (1)_A are broken spontaneously by the BB̄ vevs from SU (3N ) confinement at a much higher scale, Λ_3N , where U (1)_A can be treated as approximately conserved. The ratio between Λ_N and Λ_3N suppresses the particle masses introduced by the triangle anomaly: indeed, this is exactly the setup of a typical axion model [38-45], where the axion mass m_a is related to f_A , the scale of spontaneous U (1)_A breaking, and Λ_QCD , the source of explicit U (1)_A violation, via
m_a^2 ∼ Λ_QCD^4 / f_A^2 ≡ m_π^2 f_π^2 / f_A^2 . (3.9)
In our examples f A is proportional to Λ 3N , while Λ N stands in for Λ QCD , i.e. m a ∼ Λ 2 N /Λ 3N . Incidentally, the fact that all of these mass scales are dynamically generated makes the moose lattice an ideal playground for model building. For example, Refs. [46,47] use similar features in simpler SUSY product gauge theories to construct composite QCD axion models.
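For orientation, here is a numerical sketch of this scaling with arbitrarily chosen scales; these numbers are illustrative assumptions of mine, not values taken from the paper:

```latex
% With f_A ~ Lambda_{3N} = 10^{12} GeV and Lambda_N = 10^{4} GeV standing
% in for Lambda_QCD, the composite axion mass is suppressed to
m_a \sim \frac{\Lambda_N^2}{\Lambda_{3N}}
    = \frac{(10^{4}\ \mathrm{GeV})^2}{10^{12}\ \mathrm{GeV}}
    = 10^{-4}\ \mathrm{GeV} .
```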
Global symmetries can also be broken by superpotential operators. We have seen this when adding plaquette operators to the superpotential; for example, the localized U (1)_a × U (1)_b global symmetries of Eq. (2.5) are broken by the addition of Eq. (3.4) to W . Other possible gauge-invariant operators include the moose lattice analogue of Wilson lines,
W_line ⊃ (α_ij / M_p^{2k−3}) (Q_1 Q̄_1 Q_2 Q̄_2 . . . Q_k Q̄_k )_ij , (3.10)
where the Wilson line runs through k different unit cells. The jth unit cell in the product should be adjacent to its (j ± 1)th neighbors, though the path through the lattice does not need to be in a straight line. We write Eq. (3.10) in terms of the Planck scale M_p , though depending on the context a lower UV scale (e.g. M ) may be more appropriate. Operators of this form are charged under the global SU (N )_ℓ and SU (N )_r where the Wilson line begins and ends on the lattice boundary, and the addition of W_line to the superpotential explicitly breaks the non-Abelian global symmetries. If k is large, however, the effect on the IR theory is generally small. After SU (3N ) confinement, the effective operator is suppressed by k factors of Λ_3N /M_p , so that the theory of SU (N ) charged mesons M_i = (Q_i Q̄_i ) takes the form:³

W_line → α_ij ( ∏_{i=1}^{k} Λ^{(i)}_{3N} / M_p^{k} ) (M_1 M_2 . . . M_k )_ij / M_p^{k−3} . (3.11)

For k ≥ 3, or Λ_3N ≪ M_p , this explicit breaking of the non-Abelian global symmetry may be quite small. In the furthest infrared limit, the straight line operators M_ij ∼ (M_1 M_2 . . . M_k )_ij may acquire an expectation value, subject to a constraint equation of the form Eq. (2.24), which includes vacua with det M ≠ 0. At an arbitrary point on the moduli space, the SU (N )_ℓ × SU (N )_r global symmetry is entirely (spontaneously) broken, and operators of the form Eq. (3.10) can generate masses for the pNGBs.
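The suppression factors in Eq. (3.11) follow from simple dimensional bookkeeping; the sketch below is my own check, using the (Q_i Q̄_i ) → Λ^{(i)}_{3N} M_i replacement (valid up to the O(1) normalization caveat of footnote 2):

```latex
\frac{\alpha_{ij}}{M_p^{2k-3}}\,(Q_1\bar{Q}_1 \cdots Q_k\bar{Q}_k)_{ij}
\;\longrightarrow\;
\alpha_{ij}\,\frac{\Lambda^{(1)}_{3N}\cdots\Lambda^{(k)}_{3N}}{M_p^{2k-3}}
\,(M_1\cdots M_k)_{ij}
= \alpha_{ij}\Big(\prod_{i=1}^{k}\frac{\Lambda^{(i)}_{3N}}{M_p}\Big)
  \frac{(M_1\cdots M_k)_{ij}}{M_p^{k-3}} ,
% i.e. exactly k powers of Lambda_{3N}/M_p, as claimed in the text.
```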
The other type of symmetry violation comes from baryonic superpotential operators, e.g.

W_bary ⊃ (β / M_p^{3N−3}) (Q^N_1 Q^N_2 Q^N_3 ) + (β̄ / M_p^{3N−3}) (Q̄^N_1 Q̄^N_2 Q̄^N_3 ) + (γ_i / M_p^{N−3}) (q^N_i ) + . . . . (3.12)
Each unit cell supplies a B and B̄ type operator, as well as six q^N_i = det q_i from the SU (N ) × SU (N ) bifundamentals. Applying the same mapping between canonically normalized degrees of freedom across the SU (3N ) transition,
W_bary ⊃ (Λ_3N / M_p )^{3N−3} [ β Λ_3N^2 B + β̄ Λ_3N^2 B̄ ] + Σ_{i=1}^{6} (γ_i / M_p^{N−3}) (q^N_i ) . (3.13)

As established in Section 2.3, the quarks q_i acquire O(λΛ_3N ) masses and are integrated out of the theory. The baryons are light degrees of freedom, subject to the constraint equation det M − BB̄ = Λ^{6N}_{3N} . Without Eq. (3.13), the BB̄ = 0 vacuum would create massless Nambu-Goldstone bosons for the various U (1)_B type global symmetries, but the β and β̄ terms introduce masses for these fields.

Gauged Abelian Symmetries: If any of the spontaneously broken U (1)s are locally conserved, then it is the (super) Higgs mechanism that saves us from phenomenologically unfriendly massless fields. This would be the case if a subgroup of U (1)_a × U (1)_b is gauged, for example. Although the baryons B and B̄ are neutral under a and b, the mesons M_i ∼ (Q_1 Q̄_1 ) are not. After SU (N ) confinement, the baryon operator B^{(i)} ∼ (M_i )^N can acquire a vev, spontaneously breaking U (1)_a × U (1)_b to a subgroup and endowing the gauge boson with a mass.

In principle any of the conserved U (1) symmetries can be gauged, as long as the anomaly cancellation conditions are satisfied. Take U (1)_B of Eq. (3.7) for example. With an integer number of unit cells included in the moose lattice, the U (1)^3_B and mixed gauge anomalies all cancel. From the perspective of the 4d theory, gauging U (1)_B is straightforward. Unlike U (1)_a × U (1)_b , however, there is no local U (1)_B conservation in the cells of the moose lattice: a U (1)_B rotation acting only on the Q_i and Q̄_i of a single unit cell has nonzero SU (N )^2 U (1) anomaly coefficients. For this 4d product gauge theory to descend from a higher dimensional theory with a locally conserved U (1)_B , the local U (1) transformations acting on the compact space need to be spontaneously broken as part of the 6d → 4d discretization scheme.
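Stepping back to Eq. (3.13), the Λ_3N^2 prefactors can be checked the same way as Eq. (3.11); this is again a sketch of my own, where the mapping (Q^N_1 Q^N_2 Q^N_3 ) → Λ^{3N−1}_{3N} B is an assumption fixed only by dimensional analysis, since B is canonically normalized to mass dimension one:

```latex
\frac{\beta}{M_p^{3N-3}}\,(Q_1^N Q_2^N Q_3^N)
\;\longrightarrow\;
\frac{\beta\,\Lambda_{3N}^{3N-1}}{M_p^{3N-3}}\,B
= \beta\left(\frac{\Lambda_{3N}}{M_p}\right)^{3N-3}\Lambda_{3N}^{2}\,B .
```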
Boundary Conditions
For most of Section 2, the shape of the moose lattice mattered very little to the analysis. In Section 2.4, where the SU (N ) groups become strongly coupled, we did make the assumption that the moose lattice was not periodic. Our discussion of global symmetries in Section 3 is more directly dependent on the shape of the lattice boundary, but our methods remain generic enough that we could switch between the k = 3 × 3 example and a k = 5 + 4 or k = 5 + 4 + 3 version with impunity, as in (3.7) and (3.8). Every example was constructed in the same basic way: by connecting an integer number of unit cells, gauging the SU (N ) nodes that connect adjacent unit cells, while leaving the SU (N ) nodes on the boundary of the moose lattice as global symmetries.
By altering the boundary of the moose lattice, we can construct several new types of gauge theories from this template, often by finding ways to gauge the SU (N ) boundary nodes. For example, we can add new matter fields charged under a single SU (N ) so as to cancel its cubic SU (N )^3 anomaly; we can add bifundamentals of SU (N )_ℓ × SU (N )_r to gauge a pair of boundary nodes; or, we could gauge the diagonal subgroup SU (N )_d ⊂ SU (N )_ℓ × SU (N )_r of two boundary nodes. None of these perturbations have much effect on SU (3N ) confinement, but can considerably alter the behavior of the SU (N )^m gauge theory.
Reflective Boundaries
The reader has probably noticed that the moose lattices depicted in Figures 2 and 5 have notches missing from the edges. Considering the matter further, the reader may have decided that the missing SU (N ) i × SU (N ) j bifundamentals have a minimal impact on the theory after all: if SU (N ) i,j are global symmetries, each of these edge fields is just N 2 chiral fields with no gauge charges. Naturally, if the edge fields are charged under some gauged U (1), or if they are coupled to the other quarks in the superpotential, they are not entirely irrelevant, but they are not especially interesting either.
If the addition of edge quarks allows the boundary SU (N ) nodes to be gauged, their impact on the theory can be much more interesting. Take for example the modified k = 3 × 3 theory shown in Figure 8. Here we have added eight bifundamentals to fill in the notches; however, these edge quarks have the opposite SU (N )_ℓ × SU (N )_r charges, i.e. in the (N, N̄) representation rather than (N̄, N), or vice versa. The arrows on the moose diagram for these edge quarks appear to be pointing in the wrong way: that is, unlike every other matter field in the moose lattice, the arrows of the new edge quarks point in the ϕ = ±60°, 180° directions.
From the perspective of the new gauged boundary nodes, there are 3N fundamentals (e.g. from an SU (3N ) node) and 3N antifundamentals (from the three adjacent SU (N ) nodes). This is F = 3N_c SQCD, for which the NSVZ β function vanishes, and asymptotic freedom is lost. With our working assumption that g_N (M ) ≲ O(1) is perturbatively small, the edge SU (N ) gauge couplings begin to run below µ = λ_i Λ_3N , where the vectorlike mass terms for the mesons and q_i quarks become relevant. The right panel of Figure 8 shows the theory after these fields have been integrated out. This is the same scale at which the β function for the SU (N ) nodes in the bulk switches sign. If the edge and bulk SU (N ) groups have similarly strong couplings at M , i.e. g^{(i)}_N (M ) ∼ g^{(j)}_N (M ), then the running of the different gauge couplings will tend to make the bulk SU (N ) groups more weakly coupled than the edge nodes at µ ∼ Λ_3N , so that the edge nodes are the first to confine.
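The loss of asymptotic freedom at the newly gauged boundary nodes can also be seen from the one-loop coefficient of standard N = 1 SQCD; this is a textbook formula, quoted here as a cross-check rather than taken from the paper:

```latex
% One-loop beta function coefficient for SQCD with N_c colors and F flavors:
% b = 3 N_c - F.  The gauged edge nodes see F = 3N flavors, so
b = 3N_c - F = 3N - 3N = 0 ,
% consistent with the vanishing NSVZ beta function noted above.
```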
Previously, the µ < λΛ 3N phase of the k = 3 × 3 theory involved eleven disjointed strings of SU (N ) charged mesons, as depicted in the right panel of Figure 5: three sets each in the ϕ = 0 • , 240 • directions, and five in the ϕ = 120 • direction. With the wrong-way edge quarks and gauged SU (N ) nodes on the boundaries, previously decoupled SU (N ) m sets are joined together. In Figure 8 there are now only three sets of gauge invariant Wilson line operators. One is a simple example of the form Eq. (3.10), running from corner to corner along the ϕ = 120 • direction. The other two appear to bounce off of the lattice boundaries: one originates and terminates in the lower left corner, the other begins and ends in the upper right corner. This is why we refer to the boundary conditions as "reflective."
Aside from this detail, the SU (N )^{m−1} groups confine in the manner described in Section 2.4 and Ref. [9], now with m = 3 or m = 10. In the far infrared limit the degrees of freedom include three composite mesons of the form M ∼ ∏_i^m M_i , with each M charged under a different SU (N )_ℓ × SU (N )_r .

Although we have added eight edge quarks to the k = 3 × 3 model, the reflective theory has fewer global U (1) symmetries. There is no gauge invariant plaquette superpotential associated with the edge quarks χ_i , because products of the form (χ_i q_a q_b ) are charged under the boundary SU (N ) nodes. However, by gauging two SU (N ) nodes for each new edge quark χ, some of the U (1) global symmetries acquire SU (N )^2 U (1) anomalies that cannot be canceled by assigning χ a charge under U (1).
Coulomb Phases: In the reflective k = 3 × 3 example, all of the IR mesons transform as the bifundamental of a different global SU (N )_ℓ × SU (N )_r . This feature is not generic, and is not even generic for k = n × n parallelogram arrangements. In the k = 4 × 4 parallelogram of Figure 9, for example, some of the strings of connected SU (N ) gauge groups form closed loops, with no SU (N )_{ℓ,r} endpoints on the lattice boundary. This arrangement is studied in Ref. [8], which we review in Section 1.2. These closed loop product groups appear especially frequently in the examples with periodic boundary conditions. At a generic point on the moduli space, each SU (N )_1 × SU (N )_2 × . . . SU (N )_m gauge theory is spontaneously broken to U (1)^{N−1} , a Coulomb phase with N − 1 massless photons. There are two disjoint closed loops of this form in the k = 4 × 4 theory, providing a total of 2(N − 1) unbroken U (1) gauge groups at an arbitrary point on the moduli space. As noted in Ref. [2], for theories in the Coulomb phase one can describe the Lagrangian using a holomorphic prepotential, borrowing methods from N = 2 supersymmetry. The hyperelliptic curves for the SU (N )^m theory are given in Ref. [8].

Irregular Boundaries: There is no requirement that the moose lattice should have a symmetric shape. We emphasize this point with the example shown on the right hand side of Figure 9. This example happens to have no closed loops, and a total of four open-ended Wilson lines of varying length. The shortest line is the ϕ = 0 gauged SU (N ) × SU (N ) in the topmost row; the longest begins in the lower right corner, passing through 29 gauged SU (N ) sites before finding the middle-left corner.

Naturally, this program of adding charged edge quarks and gauging SU (N )_ℓ × SU (N )_r can be extended to any (ℓ, r) pair of boundary nodes, even non-adjacent ones. In principle every global SU (N )_ℓ × SU (N )_r can be contracted in this way, until all of the SU (N ) nodes in the lattice are gauged.
Cylindrical Moose
All of the examples discussed so far have involved topologically trivial moose lattices, aside from the possibility floated in Section 4.1 of connecting non-adjacent SU (N )_ℓ × SU (N )_r boundary nodes. By making one of the dimensions in the moose lattice periodic, we can construct cylindrical lattices with S¹ topology.
Periodic lattices have additional discrete symmetries associated with reflections or translations. If the coupling constants λ_i , g_N and g_3N also respect these symmetries, e.g. g^{(i)}_{3N} = g^{(i+1)}_{3N} = g^{(i+2)}_{3N} = . . . along the periodic direction, then the discrete symmetries should be manifest in the low energy degrees of freedom.

Symmetric Cylindrical Moose

Figure 10 shows a k = 4 × 3 example, periodic in the ϕ = 0° direction. The gauge group is SU (3N )^12 × SU (N )^28 , and the non-Abelian global symmetry from the nodes on the top and bottom edges of the diagram is SU (N )^8_ℓ × SU (N )^8_r . Each pair of SU (N )_j nodes (j = 1 . . . 5) on the left and right edges corresponds to a single gauged SU (N ) group. The dashed lines in the moose diagram depict quarks that would otherwise be double counted; for example, the "2 → 1" edge quark is shown as a solid line on the left edge, and dashed on the right.

Thanks to the periodicity of the moose lattice, it is invariant under shifts of an integer number of unit cells in the ϕ = 0° direction. This symmetry of the lattice becomes a symmetry of the theory if the coupling constants within the unit cells of each row are identical. Each row in the bulk can still have three distinct SU (N ) coupling constants g^{(i)}_N and eight plaquette coupling constants λ_i (including the SU (N )^3 plaquettes), without spoiling the translation symmetry. If additional constraints are imposed on the λ_i and g^{(i)}_N , then the theory may also inherit the reflection symmetries of the lattice. More precisely, the theory can be made invariant with respect to the combination of global charge conjugation (N_c ↔ N̄_c at all nodes) with reflections across any vertical (ϕ = 90°) plane aligned with the lattice nodes.

After SU (3N ) confinement and the subsequent integrating-out of the massive quarks, the theory includes three sets of SU (N )^4 ring product gauge theories, with charged mesons from the 1 → 1, 3 → 3 and 5 → 5 rows. These SU (N ) groups do not confine, but each have an unbroken U (1)^{N−1} at a generic point on the moduli space. Following Ref. [8], the gauge-invariant degrees of freedom can be written in terms of the M_Ad operators of Eq. (1.13),

(M_Ad)_ij ≡ (M^{(1)}_a . . . M^{(4)}_a )_ij − (1/N ) δ_ij Tr(M^{(1)}_a . . . M^{(4)}_a ) , (4.1)
where M^{(i)}_a is the ϕ = 0° meson of the ith unit cell. This M_Ad is an adjoint of SU (N )_1 , and neutral under the other SU (N )_i . The moduli space is spanned by the gauge invariant products of M_Ad ,

u_k ≡ (1/k) Tr M_Ad^k , k ≥ 2 , (4.2)

as well as the usual baryon operators, (M^{(i)}_a )^N . Note that the values of u_k are insensitive to the choice of which SU (N ) group should correspond to i = 1. The cyclicality of the trace operator ensures that u_k is invariant under shifts of the form i → i + 1.
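A one-line check (my own) confirms that M_Ad as defined in Eq. (4.1) is traceless, as an adjoint of SU (N )_1 must be; the explicit δ_ij in Eq. (4.1) is assumed in this reconstruction precisely so that the subtraction removes the trace part:

```latex
\operatorname{Tr} M_{Ad}
= \operatorname{Tr}\big(M^{(1)}_a \cdots M^{(4)}_a\big)
 - \frac{1}{N}\,\underbrace{\delta_{ii}}_{=\,N}\,
   \operatorname{Tr}\big(M^{(1)}_a \cdots M^{(4)}_a\big)
= 0 .
```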
At k = N this u_k is related to the other degrees of freedom by a classical constraint,

u_N ∼ (M^{(1)}_a )^N (M^{(2)}_a )^N (M^{(3)}_a )^N (M^{(4)}_a )^N . (4.3)
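Mass dimensions provide a quick consistency check of Eq. (4.3) (my own bookkeeping): M_Ad is a product of four mesons of dimension one, so

```latex
% LHS: u_N = (1/N) Tr M_Ad^N, with [M_Ad] = 4  =>  [u_N] = 4N.
% RHS: four baryon factors (M_a^{(i)})^N, each of dimension N  =>  4N.
[u_N] = N \cdot [M_{Ad}] = 4N
      = \big[(M^{(1)}_a)^N (M^{(2)}_a)^N (M^{(3)}_a)^N (M^{(4)}_a)^N\big] ,
% so both sides of the classical constraint carry mass dimension 4N.
```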
This classical constraint receives quantum corrections of the form B^{(i)}_a B^{(i+1)}_a → −Λ^b_{i+1} , including a complete set of nearest neighbor replacements as in Eq. (1.12) [8, 48].
At scales µ where the SU (N ) groups are weakly coupled (e.g. µ ∼ λΛ_3N ), an arbitrary point on the moduli space sees the gauge invariant operators u_k acquire expectation values that spontaneously break SU (N )^4 to its maximal Abelian subgroup, U (1)^{N−1} . Ref. [8] uses methods from N = 2 supersymmetry to derive the kinetic terms of the supersymmetric Lagrangian in the infrared limit of the theory.
Along the ϕ = 240° direction we find four copies of SU (N )^2 gauge theories with open boundaries, as in Figure 1 or Ref. [9]. As described in Section 2.4, these SU (N ) groups confine, so that in the far infrared the theory is described by baryons B^{(i)}_a ∼ (Q^{(i)}_a Q̄^{(i)}_a )^N and mesons M_i ∼ (Q^{(1)}_i Q̄^{(1)}_i . . . Q^{(3)}_i Q̄^{(3)}_i ), where each M_i transforms as a bifundamental of an SU (N )_ℓ × SU (N )_r . These gauge invariants are related to each other by the modified constraint equation Eq. (1.12).

In the ϕ = 120° direction we find exactly the same behavior, except that two of the SU (N )^2 sets include the SU (N )_2 or SU (N )_4 depicted on the boundaries of Figure 10. The SU (N ) groups confine, and the light degrees of freedom include four sets of M mesons, each charged under its own SU (N )_ℓ × SU (N )_r global symmetry. Under the i → i + 1 shift symmetry, the four ϕ = 120° mesons are permuted cyclically with each other, as are their associated baryons. The same is true for the four ϕ = 240° mesons. There are three sets of u^{(j)}_k operators, one for each row in Figure 10. Each u^{(j)}_k transforms trivially under the shifts. Under reflection-plus-conjugation operations, on the other hand, each ϕ = 120° meson is interchanged with one of the ϕ = 240° mesons, while again the u^{(j)}_k transform trivially.

Each discrete symmetry of the lattice is only a symmetry of the particle model if the coupling constants cooperate. Unequal values of λ_i between different unit cells, for example, tend to break the translation or reflection symmetries.
Alternative Cylinder Boundaries
In Figure 10 we chose an example where the aperiodic boundaries (on the top and bottom rows) respected the shift symmetry in the ϕ = 0 • direction. A more generic example can have cylindrical topology without this "azimuthal" symmetry. For example, one could add or delete unit cells from the aperiodic edges. The periodic edge can be twisted, for example by matching the "1" and "3" on the right edge of Figure 10 to the "3" and "5" nodes (respectively) on the left edge. Other modifications preserve the shift symmetry: one could add wrong-way quark fields to the cylinder ends to recreate the reflective boundary conditions of Section 4.1.
Reflective: If the 4×3 example of Figure 10 is given reflective boundary conditions, it is possible for all 16 of the boundary SU (N ) nodes to be gauged. After SU (3N ) confinement, the charged matter content is given by:

[moose diagram] . (4.4)

The sets of ϕ = ±120° mesons merge into four rings of SU (N )^8 , following 0 → 0, 2 → 2, 4 → 4, and 6 → 6 (respectively). Horizontal shifts on the lattice induce cyclic permutations of these operators, and now none of the SU (N ) groups confine. Instead, there is a Coulomb phase with 7(N − 1) unbroken U (1) gauge groups; three from the three horizontal rows of SU (N )^4 , four from the four SU (N )^8 loops. The ϕ = 0° mesons are unaffected by the modification to the boundary conditions.
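The 7(N − 1) counting can be cross-checked against the node content quoted for Figure 10 (28 gauged SU (N ) nodes in the cylinder, plus the 16 boundary nodes gauged here); this is my own bookkeeping:

```latex
% Rings after SU(3N) confinement with reflective ends:
3 \times SU(N)^4 \ \text{(rows)} \;+\; 4 \times SU(N)^8 \ \text{(loops)}
\;\Rightarrow\; 3\cdot 4 + 4\cdot 8 = 44 = 28 + 16
% gauged SU(N) factors in total, with each of the 3 + 4 = 7 rings
% contributing an unbroken U(1)^{N-1}.
```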
Barbershop: The shift symmetry of the cylinder can be broken by adding an apparent rotation or twist to the periodic moose lattice. In Figure 10, the moose lattice is exactly periodic in the ϕ = 0° direction. We could instead have rolled up the lattice vertically in the page, rather than horizontally: for example,

[moose diagram] . (4.5)
The seven numbered edge nodes are gauged, so that the lattice is approximately periodic in the ϕ ≈ 90° direction. Unlike the Figure 10 example, this twisted cylinder does not generate any closed rings of SU (N ) gauge groups after SU (3N ) confinement. There are three disjoint open meson lines in the ϕ = 0 direction, one for each horizontal row, each with an SU (N )_ℓ × SU (N )_r type global symmetry. In the ϕ = 240° direction there is a single string of SU (N ) charged mesons, following ℓ → 6 → 4 → 2 → r through a total of 11 gauged SU (N ) sites. The ϕ = 120° direction hosts two open strings with SU (N )^5 gauge groups: one passes through ℓ → 7 → 3 → r , the other through SU (N )_5 and SU (N )_1 . Like the stripes on a barbershop pole, the gauge invariant line operators in the ϕ = ±120° directions wrap around the S¹ direction while traveling horizontally along the cylinder. The number of distinct line operators depends on the size of the cylinder, and on the degree to which it is twisted.
In the example of (4.4), the combination of periodic and reflective boundaries removed all of the global SU (N )_{ℓ,r} symmetries, ensuring that the IR limit of the theory exhibits a Coulomb phase for each of the ϕ directions. With the twisted cylindrical moose lattice of (4.5), we encounter the opposite behavior: there are no closed SU (N ) rings or Coulomb phases in any direction, but instead all of the SU (N ) sites confine as in Section 2.4.
Möbius: As a final S¹ related example, we can further modify the k = 4 × 3 cylinder by adding a 180° twist about the central row, to construct a Möbius strip rather than a simple cylinder. Taking reflective boundary conditions on the top and bottom edges (or rather, the single "top = bottom" edge), the moose diagram after SU (3N ) confinement takes the form:

[moose diagram] . (4.6)
Compared to (4.4), there are even fewer ϕ = ±120 • distinct gauge invariant line operators: one along 0 → 6 → 0, another following 2 → 4 → 2, each of which encounters 16 gauged SU (N ) groups. Also, the "5" and "1" ϕ = 0 meson lines now join together into a single 1 → 5 → 1. So, the SU (N ) charged mesons can be organized into four sets of SU (N ) m rings, with m = 4, 8 for ϕ = 0, and m = 16, 16 for ϕ = ±120 • . The infrared theory exhibits a Coulomb phase with just four copies of U (1) N −1 , rather than the U (1) 7(N −1) of (4.4).
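The Möbius ring lengths saturate the same node count as before; a small consistency sketch of my own:

```latex
% Four rings of lengths m = 4, 8, 16, 16:
4 + 8 + 16 + 16 = 44 ,
% the same 44 gauged SU(N) sites as in (4.4), now organized into only
% four rings, hence the Coulomb phase U(1)^{4(N-1)} quoted above.
```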
Toroidal Moose
Finally, let us discuss the toroidal topologies (T² = S¹ × S¹ ) that arise when periodic boundary conditions are imposed on all edges of the moose lattice. By construction, these geometries have no SU (N ) sites that are not gauged. A generic periodic flat torus can be represented by a k = m × n parallelogram, with some scheme for matching the nodes on opposite edges. That is, the boundaries can be twisted in the manner of (4.5), while preserving the two-dimensional shift symmetries of the lattice. As a first concrete example, we return to the k = 4 × 3 parallelogram, depicted in Figure 11 before and after SU (3N ) confinement. With this particular choice for the periodic boundaries, straight lines in the ϕ = 0 and ϕ = 240° directions wrap exactly once about each S¹, e.g. 3 → 3 or 5′ → 5′. Each node number j = 1 . . . 7 or j′ = 1′ . . . 5′ corresponds to a single gauged SU (N ), as usual; similarly, although the SU (N )_0 group appears at each of the four corners on the lattice, it is a single gauge group. So, the UV theory (perturbatively coupled at µ ∼ M ) is composed of an SU (3N )^12 × SU (N )^36 gauge group, with 12 × 6 bifundamentals of SU (3N ) × SU (N ) and an equal number of SU (N ) × SU (N ) quarks. Assuming no additional U (1) factors are gauged, the superpotential admits 12 × 8 trilinear plaquette operators.

Using the technology from Section 2, it is straightforward to follow the theory from µ ∼ M towards the infrared, past SU (3N ) confinement and the generation of masses for the vectorlike pairs of mesons and quarks. The remaining light SU (N ) charged degrees of freedom are shown in the right side of Figure 11. The ϕ = 0 mesons form four sets of SU (N )^3 , following j → j for j = 1, 3, 5, 7. Similarly, the ϕ = 240° mesons provide another three SU (N )^4 rings, with j′ → j′ for j′ = 1′, 3′, 5′. Lastly, the ϕ = 120° mesons form a single closed loop encompassing an SU (N )^12 gauge group, following 0 → 6 → 4′ → 4 → 2′ → 2 → 0.
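As a cross-check (my own arithmetic, using the SU (N )^36 content quoted above), the ring decomposition saturates the gauge group, with every node lying on exactly one meson line:

```latex
% phi = 0: four SU(N)^3 rings; phi = 240: three SU(N)^4 rings;
% phi = 120: one SU(N)^12 ring.
4\cdot 3 + 3\cdot 4 + 1\cdot 12 = 36 = 12 \times 3 ,
% giving 4 + 3 + 1 = 8 rings in total, hence a Coulomb phase U(1)^{8(N-1)}.
```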
The doubly periodic lattice has a two dimensional shift symmetry, spanned by unit cell translations in the ϕ = 0° and ϕ = 240° directions. Respectively, these operations cyclically permute the sets of ϕ = 240° and ϕ = 0° mesons. The single ϕ = 120° line transforms as the identity under both kinds of translation. The moduli space is spanned by the u_k type operators of Eq. (4.2), together with Tr (M^{(1)}_a . . . M^{(m)}_a ) and the various baryonic operators. At an arbitrary point on the moduli space the SU (N )^36 group is broken to U (1)^{8(N−1)} .

Either type of edge can be twisted, by acting on the node labels with cyclic permutations. As a simple example, we could shift the bottom row of labels four spaces to the right, 0 → 4 , so that the top and bottom rows now match along the ϕ = 90° vertical direction. The k = 3 × 4 version is shown in Figure 12, after SU (3N ) confinement. In this example the lattice is symmetric with respect to reflections about the vertical axis: this was not true of Figure 11. Now, both sets of ϕ = ±120° mesons form rings of SU (N )^12 . The ϕ = 120° example follows 0 → 4 → 2 → 6 → 4 → 2 → 0 ; for ϕ = 240° , the line operator follows 1 → 3 → 5 → 1 instead. The ϕ = 0° operators are not impacted by the twist: they still form four sets of SU (N )^3 rings. At a generic point on the moduli space, the SU (N ) groups are spontaneously broken to U (1)^{6(N−1)} .
In the present work we are content to restrict ourselves to T 2 and S 1 topologies for the moose lattice. More complicated topologies can of course be constructed by folding and connecting lattices of different shapes, but the methods for determining the low energy behavior remain the same.
Conclusion
This paper is dedicated to the N = 1 supersymmetric triangular moose lattice in four spacetime dimensions, with the [SU (3N ) × SU (N )^3 ]^k style product gauge group. Assuming that there is a high energy scale M at which the coupling constants are perturbatively small, we have shown that the SU (3N ) gauge groups confine, and that some of the SU (N ) charged quarks and mesons subsequently acquire vectorlike masses. Depending on the lattice boundary conditions, the SU (N ) gauge groups may either confine or form Coulomb phases.
Aspects of the infrared theory are highly suggestive of a higher dimensional interpretation, where the effectively two-dimensional moose lattice is associated with two compact extra dimensions. Some of the degrees of freedom, such as the baryon operators, are localized to specific segments of the moose lattice. Each gauged SU (3N ), for example, is associated with the B and B operators defined in Eq. (2.6), as if B and B were composite fields propagating in the bulk of the extra-dimensional theory. As we show in Section 3.1, some global symmetries can act in the same way, especially if some subgroup of the U (1) a × U (1) b is gauged. Other degrees of freedom, like the M of Eq. (2.23), are associated more closely with the boundaries of the lattice, as are the chevron-type global U (1) symmetries of Section 3.2.
Especially for the examples with periodic boundary conditions presented in Section 4, we see that the meson and baryon operators in the farthest infrared limit possess shift symmetries that recall a discretized version of translation invariance. An approximate version of this translation invariance appears in the bulk of the moose lattice even in non-periodic topologies. All of these signs point towards a geometric physical interpretation of the moose lattice gauge theory, providing a clear direction for future research on this topic.
The Coulomb phases associated with the periodic or reflective boundary conditions provide another area for future study. Aside from noting that Ref. [8] provides expressions for the holomorphic prepotential of the SU (N ) m ring theories, we have not yet taken advantage of the approximate N = 2 supersymmetry to constrain the Lagrangian for the infrared degrees of freedom.
The structure of the model also provides many opportunities for model building. The strong CP problem, and the associated axion quality problem, supply one such well-motivated target. The QCD axion provides an elegant mechanism to explain the otherwise confoundingly tiny value of the SU (3)_c CP violating θ parameter, where an approximate U (1)_PQ with nonzero SU (3)^2_c U (1)_PQ anomaly coefficient is spontaneously broken at a high scale, f_a ≫ Λ_QCD . Nonperturbative QCD effects generate a potential for the pseudo-Nambu-Goldstone boson of U (1)_PQ that sets the effective value of θ to zero.
This mechanism requires that U (1)_PQ should be classically conserved to an extremely high degree, broken only by QCD effects. However, gravitational effects are generally expected to break global symmetries [32-37], and even relatively tiny perturbations to the axion potential can ruin the solution to the strong CP problem. A successful "high quality" axion model protects U (1)_PQ against these gravitational intrusions by ensuring that all PQ-charged gauge-invariant operators permitted in the Lagrangian are sufficiently suppressed [46, 47, 49-56]. Especially for large moose lattices, or for the "barbershop" arrangements of Section 4.2.2, it can be relatively easy to embed SU (3)_c and a high quality QCD axion within this model. Indeed, compared to the relative simplicity of Ref. [47], invoking a whole moose lattice for the sole purpose of dealing with the axion quality problem may be seen as overly aggressive.
The automatically generated hierarchy between the scales Λ N and Λ 3N depicted in Figure 4 is another feature of the moose lattice that has possible model building applications. Due to the sign change in the β(g N ) function induced by SU (3N ) confinement, the scale Λ N is suppressed by a factor of Λ 2 3N /M . This inverse relationship, reminiscent of the seesaw mechanism for neutrinos, allows for an unusually small Λ N even if the SU (N ) and SU (3N ) couplings are of similar size at µ ∼ M .
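To make the seesaw-like suppression concrete, here is an illustration with an arbitrarily chosen ratio; the number is a placeholder of mine, not a scale advocated by the paper:

```latex
\Lambda_N \sim \frac{\Lambda_{3N}^2}{M}
         = \left(\frac{\Lambda_{3N}}{M}\right)\Lambda_{3N}
         \sim 10^{-3}\,\Lambda_{3N}
\quad\text{for}\quad \frac{\Lambda_{3N}}{M} \sim 10^{-3} ,
% so similar UV couplings still produce widely separated confinement scales.
```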
Finally, it may be worthwhile to generalize the two-dimensional triangular lattice beyond the [SU (3N ) × SU (N ) 3 ] k paradigm to include higher dimensions and alternative lattice arrangements.
Acknowledgements
I am grateful to Patrick Draper, Arvind Rajaraman, Yuri Shirman, and Tim M. P. Tait for several conversations during the development of this paper, and to Carlos Blanco, Aaron Friedman, Robert McGehee, and Pavel Maksimov for their patience at the social occasions where I have presented the principal results. Special thanks go to Patrick Draper for helpful feedback on this manuscript. This work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. This work was partially supported by a grant from the Simons Foundation. Some of this work was supported by NSF Grant No. PHY-1620638 and the Chair's Dissertation Fellowship from the Department of Physics & Astronomy at UC Irvine.
A Global Symmetries and 't Hooft Anomaly Matching
For the infrared theory of M , B and B to be dual to the SU (N ) gauge theory of Q and Q, it must satisfy a number of nontrivial constraints that arise from the global symmetries of the theory. At every point on the moduli space, the 't Hooft trace anomalies of the preserved global symmetries should match between the two theories. For qdms-confinement, we show that the full set of anomaly matching conditions is not satisfied at the origin of moduli space: but, the anomaly coefficients do match everywhere on the quantum-deformed moduli space Eq. (1.3), for those global symmetries which are not spontaneously broken. This provides a separate confirmation that the origin should be excised from the qdms-confined theories. In the case of s-confinement, the moduli space includes the origin, so the full set of 't Hooft anomaly coefficients must match in the infrared theory.
In SQCD with N colors and F flavors, the global symmetry group is
G global = SU (F ) L × SU (F ) R × U (1) B × U (1) R . (A.1)
In addition to G_global , there is also an approximate symmetry U (1)_A , which has a nonzero SU (N )^2 -U (1)_A anomaly coefficient. It is explicitly broken by SU (N ) instantons, at a scale characterized by Λ. As the F = N model features prominently in the product gauge groups introduced in this paper, let us take a moment to explore its infrared effective theory in more detail. For this special case the U (1)_R symmetry can be defined such that the scalar parts of the Q and Q̄ supermultiplets have zero R charge, as shown in Table 1. Using the canonical normalization for U (1)_R , the fermionic components of Q and Q̄ (and B, B̄ and M ) have R charges −1. This definition of U (1)_R is not unique: as in any theory with multiple conserved U (1) charges, it is possible to define a U (1)′_R or a U (1)′_B out of linear combinations of U (1)_R and U (1)_B .
To demonstrate the use of the anomaly matching conditions in the presence of spontaneous symmetry breaking, consider the U (1)^3_R cubic anomaly coefficient:

A_UV (U (1)^3_R ) = F · N (−1)³ + N · F (−1)³ + (N² − 1)(+1)³ = −N² − 1 ,
A^{{0}}_IR (U (1)^3_R ) = 1(−1)³ + 1(−1)³ + F²(−1)³ = −N² − 2 , (A.2)

where we have naively evaluated the anomaly coefficient A_IR at the origin of the moduli space. The two A do not match: this is because we have overcounted the IR degrees of freedom by neglecting the quantum modified constraint, Eq. (1.3). On the BB̄ = −Λ^b branch of the moduli space, where SU (F )_ℓ × SU (F )_r is preserved and U (1)_B is spontaneously broken, the only light degree of freedom between B and B̄ is the one tangential to BB̄ = −Λ^b . Excitations of B and B̄ that change the value of BB̄ acquire O(Λ) masses, and are not degrees of freedom of the infrared theory. Similarly, on the det M = Λ^b branch with B = B̄ = 0, the F² = N² nominal degrees of freedom in M are reduced to N² − 1. For example, at the symmetry enhanced point M_ij = Λ^{b/N} δ_ij , where the flavor symmetry is broken to its diagonal subgroup, SU (F )_ℓ × SU (F )_r → SU (F )_d , the N² − 1 dimensional adjoint representation of SU (F )_d remains light, while the Tr M degree of freedom acquires an O(Λ) mass. If we investigate any generic point on the moduli space, the result is the same: there are only F² + 1 degrees of freedom, and

A_IR (U (1)^3_R ) = (2 + F² − 1)(−1)³ = −N² − 1 = A_UV (U (1)^3_R ) . (A.3)

Similar subtleties arise in the mixed anomalies, such as SU (F )²_ℓ -U (1)_B . In this case

A_UV (SU (F )²_ℓ U (1)_B ) = N · 1(−1) = −N , A^{{0}}_IR (SU (F )²_ℓ U (1)_B ) = F · 1(0) = 0 . (A.4)

Our definition of A uses the normalization of the Dynkin index μ̃ such that μ̃ = 1 for the fundamental and antifundamental representations, and μ̃ = 2N_c for the adjoint of SU (N_c ). Again, we find that it is a mistake to evaluate A_IR at the origin {0}, and in fact there is no point on the moduli space where SU (F )_ℓ and U (1)_B are simultaneously unbroken. There is, however, the symmetry enhanced point M_ij ∝ δ_ij , B = B̄ = 0, where the global symmetry is SU (F )_d × U (1)_B × U (1)_R . The degrees of freedom are B, B̄, and M_Ad , which transforms as the adjoint of SU (F )_d , and the SU (F )²_d U (1)_B anomaly coefficients do match:

A_UV (SU (F )²_d U (1)_B ) = N · 1(−1) + N · 1(+1) = 0 , A_IR (SU (F )²_d U (1)_B ) = 1 · 2F (0) = 0 . (A.5)

For other anomaly coefficients there is less need for subtlety. Evaluating SU (F )²_r U (1)_R on the M = 0 branch of the moduli space, for example, we find A_UV (SU (F )²_r U (1)_R ) = N · 1(−1) and A_IR (SU (F )²_r U (1)_R ) = F · 1(−1), which agree. It is similarly easy to show that the mixed U (1)_B U (1)_R anomalies match when evaluated on the B = B̄ = 0 branch.

Table 1: Transformations of the superfields Q, Q̄, M , B and B̄ under the gauged and global symmetry groups for the F = N case of SQCD. Also shown are the R charge of the gauginos (λ) and the transformation of Λ^b under the spurious U (1)_A . Here N and N̄ indicate the fundamental and antifundamental representations of SU (N ), while Ad indicates the adjoint representation. The U (1)_R charges shown are those of the scalar component of the superfields. At scales well above |Λ|, U (1)_A is approximately conserved, but it is broken explicitly when SU (N )_c is gauged. The infrared theory is valid only for scales well below Λ, where U (1)_A is badly broken, so we do not list the U (1)_A charges of the gauge-invariant operators M , B, or B̄.

         SU (F )_ℓ   SU (N )_c   SU (F )_r   U (1)_B   U (1)_R   U (1)_A
  Q         F           N           1          −1         0        +1
  Q̄         1           N̄           F̄          +1         0        +1
  λ         1           Ad          1           0        +1         0
  Λ^b       1           1           1           0         0       +2N
  B         1           1           1          −N         0         —
  B̄         1           1           1          +N         0         —
  M         F           1           F̄           0         0         —
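For concreteness, the matching can be evaluated at F = N = 2; this is simple arithmetic on the coefficients above, added as a sketch:

```latex
A_{UV}(U(1)_R^3) = -N^2 - 1 = -5 , \qquad
A^{\{0\}}_{IR}(U(1)_R^3) = -N^2 - 2 = -6 ,
% while on the constrained moduli space there are F^2 + 1 = 5 light
% superfields with fermionic R charge -1:
A_{IR}(U(1)_R^3) = (F^2 + 1)(-1)^3 = -5 = A_{UV}(U(1)_R^3) .
```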
In the absence of any superpotential there is an SU (F )_Q × SU (F )_Q̄ global flavor symmetry, under which M transforms as a bifundamental and B and B̄ are singlets. The other global symmetries include a U (1)_B "baryon number" symmetry under which B and B̄ have opposite charges, and a U (1)_R under which the scalar parts of the Q, Q̄, B, B̄, and M superfields are all neutral.¹ This U (1)_R does not commute with the generators of the N = 1 supersymmetry; we follow the normalization where the chiral superpotential W has charge +2 under any conserved U (1)_R , while the gauginos have charge +1. The origin of the moduli space B = B̄ = M_ij = 0 is inconsistent with the modified constraint Eq. (1.3), so either det M or BB̄ must acquire some expectation value. A nonzero BB̄ spontaneously breaks the U (1)_B baryon number. If instead det M ≠ 0, then SU (F ) × SU (F ) is broken to a subgroup: in the most symmetric of these vacua, M_ij ∝ δ_ij , a diagonal subgroup SU (F )_d ⊂ SU (F ) × SU (F ) remains unbroken. On a more generic point in the moduli space, the SU (F ) × SU (F ) × U (1)_B symmetry is completely spontaneously broken.
Figure 2: Left: the moose diagram for the (SU (3N ) × SU (N )^3 )^k triangular theory, in the regime where the gauged SU (3N ) are weakly coupled. The SU (N ) nodes on the boundary of each figure (with nonzero cubic 't Hooft anomaly coefficients) are global symmetries. Each triangular plaquette can be encircled by gauge-invariant trace operators of the form W ∼ λTr(Q_1 Q_2 Q_3 ), where Q_{1...3} are the three bifundamental quarks that lie on the edges of the plaquette. Right: The moose diagram of the SU (3N ) invariant degrees of freedom, in the limit µ ≪ Λ_3N . The paired bifundamental quarks at the edge of each hexagon acquire vectorlike mass terms of order O(λΛ_3N ) from the trace terms in the superpotential, and are integrated out of the infrared theory.
the different SU (3N ) gauge groups are not necessarily identical, and can even include large hierarchies, with each Λ^{(i)}_{3N} smaller than M .
Figure 3: When the theory is probed at intermediate scales µ < Λ_3N , the unit cell evolves towards the moose diagram shown on the right. First, on the left, we show the Λ_3N ≤ µ < M unit cell for weakly coupled SU (3N ). After SU (3N ) confines, the light degrees of freedom include the SU (N ) × SU (N ) meson operators, shown in the middle diagram either crossing the center of the unit cell or connecting adjacent SU (N ) nodes on its boundary. Together with the original SU (N ) × SU (N ) bifundamentals q_i , the meson operators on the edge of the hexagon acquire O(λ_i Λ_3N ) masses from the plaquette operators W ∼ QQ̄q → (QQ̄)q. The rightmost diagram shows the remaining light degrees of freedom, the M_aa mesons, after these massive fields are integrated out.
From the perspective of any of the SU (N ) nodes, the confinement at Λ_3N does not pose a significant change to the theory. For example, the 3N quarks Q_1 in the N representation of the ϕ = 0° SU (N ) node in Figure 3 are replaced with N + N + N mesons M_11 , M_21 , and M_31 , also in the fundamental representation of SU (N ). For an SU (N ) node in the bulk of the moose lattice, the SU (3N ) confinement simply replaces F = 5N SQCD with a differently labelled F = 5N SQCD model. This is seen explicitly in the right panel of Figure 2, for any of the SU (N ) sites in the bulk. The main difference is that the triangle plaquette operators of Eq. (2.5) become vectorlike mass terms, Eq. (2.9), pairing the mixed mesons M_{a,b≠a} with the SU (N ) × SU (N ) bifundamental quarks q_i . Thus, of the F = 5N original flavors, 4N of these acquire O(λ_i Λ^{(j)}_{3N} ) masses, where j = 1, 2 refer to the SU (3N ) groups on either side of a given SU (N ) node. At scales below λΛ_3N the heavy quarks and mesons are integrated out, leaving only the SU (N ) charged M_aa mesons, as shown in the rightmost diagram of Figure 3.
Figure 4: An illustration (not exactly to scale) of the running couplings for one N_c = 3N and two N_c = N gauge groups, taken to have similar values near µ ∼ M . The narrative in this section follows the plot from right to left, UV to IR. Between M and Λ_3N , g_3N (in blue) runs quickly towards strong coupling, with the slope on the plot given by the one-loop β function coefficients.
Deformed Moduli Space: In light of the O(λΛ_3N ) masses for the mixed mesons, we can simplify the constraint equation Eq. (2.6) by discarding the contributions from the heavy degrees of freedom. Expanding M into its nine components M_ab , and setting M_ab → ⟨M_ab ⟩ = 0 for a ≠ b, the nonzero part of the determinant is det M → det M_11 det M_22 det M_33 .
Figure 5 shows the state of the theory where we left off in Section 2.3: the six vectorlike pairs of bifundamental quarks and mesons q_a M_bc at each unit cell have been integrated out, leaving behind three massless mesons M_aa = (Q_a Q̄_a ), which transform as bifundamentals of the SU (N ) × SU (N ) nodes on opposite corners of the unit cell. The three mesons are oriented along the ϕ = 0°, 120°, 240° lattice directions, which largely decouple from each other. Within the bulk of the lattice, there are no light degrees of freedom that couple mesons in the ϕ direction to any of the ϕ ± 120° mesons. Each SU (N ) gauge group couples exclusively to a single pair of mesons, and there are no couplings between parallel sets of meson operators. All of the Wilson lines that changed direction within the lattice incorporated the heavy degrees of freedom q or (Q_a Q̄_{b≠a} ). Ignoring the lattice boundaries, the SU (N ) groups and their charged matter fields can be organized into several copies of the SU (N )^k linear moose theory shown in Figure 1. At this stage, each unit cell has three meson types, M^{(γ)} = M_γγ = (Q_γ Q̄_γ ) for γ = a, b, c, oriented respectively in the ϕ = 0°, 120°, 240° directions. Given m = 4 unit cells, there are m − 1 gauged SU (N ) groups. This theory is described in detail by Chang and Georgi [9], and summarized in Section 1.2, so there is relatively little work left to do here. The mesons M^{(a)} charged under the strongly coupled SU (N ) gauge groups form gauge-invariant operators, B^{(a)}_1 . . . B^{(a)}_m .

Figure 5: After integrating out the quarks and mesons which have vectorlike masses of O(λΛ_3N ), the remaining light degrees of freedom are shown in the moose diagrams above. Left: At the intermediate scales Λ_N ≪ µ < λΛ_3N , each SU (N ) gauge group in the bulk of the lattice is connected by bifundamental mesons to the SU (N ) nodes found on the far sides of the two neighboring unit cells. Right: In the far infrared of the theory, µ ≪ Λ_N , the strongly coupled SU (N ) product gauge theory is replaced by its Seiberg dual. The degrees of freedom include "baryon" operators, (M^N_a ), and extended "meson" operators, shown here as Wilson lines stretching between opposing boundaries of the finite lattice, oriented in the ϕ = 0°, 120°, and 240° directions, labelled a, b, c respectively.
The superpotential coupling constants λ_i appear indirectly in Eq. (2.24): as established in Section 2.3, the holomorphic scales Λ_N depend on both λ_i and g_{N_c} (M ). Note that none of these mesons M^{(a)}_i is charged under "the" U (1)_B symmetry; but, it should be possible to identify some other B symmetry under which the M_i mesons have ±1 charges, as in the SU (N )^k model of Ref. [9]. There are indeed many such U (1) symmetries, each with some nonzero SU (N )^2 U (1) anomaly coefficients with the global SU (N )s of the top and bottom edges of Eq. (2.22). A complete discussion of this "meson's baryon number" is postponed to Section 3.2. If the number of unit cells in the row is even, then the product B_1 B_2 . . . B_m can be fully contracted. In the m = 4 example of Eq. (2.22), the constraint equation is expanded as det M_a = B^{(a)}_1 . . . B^{(a)}_4 + . . .
… included on the moduli space. A memory of Λ_3N is preserved in the baryonic operators through the spontaneous symmetry breaking. Recall from Eq. (2.6) that SU (3N ) confinement produced two baryon operators B and B̄, and that one linear combination of these remains light. As demonstrated in Section 2.3, the M_ab degrees of freedom for a ≠ b acquire O(λΛ_3N ) masses, and expectation values ⟨M_ab ⟩ = 0. This simplifies the det(QQ̄) determinant that appears in the constraint equation for B and B̄: after replacing M_{a≠b} with their expectation values, the only remaining term in det(QQ̄) is

BB̄ + Λ^b_{3N} = det(QQ̄) = det M_aa det M_bb det M_cc . (2.26)

In the language of Eq. (2.23), the SU (3N ) constraint equation from the jth unit cell becomes

(BB̄ + Λ^{6N}_{3N} )_j = (B^{(a)} B^{(b)} B^{(c)} )_j , (2.27)

relating the "B − B̄" flat direction to the more recently formed baryons B^{(γ)} = det M_γγ , for γ = a, b, c. For the k = 3 × 3 arrangement shown in Figures 2 and 5, the deformed moduli space includes a copy of Eq. (2.24) for each of the 11 strings of meson operators shown in Figure 5, combined with one copy of Eq. (2.27) for each of the nine SU (3N ) unit cells. Along the ϕ = 0° and ϕ = 240° directions, the constraint equations all follow the m = 3 form of Eq. (2.24), with SU (N ) × SU (N ) gauge groups. In the ϕ = 120° direction, on the other hand, the width of the moose lattice is not constant. At the lower left and upper right corners of Figure 5, there are no gauged SU (N ) groups, and only a single meson M^{(b)} = (Q_2 Q̄_2 ) in each cell, following the labelling convention of Figure 3. The off-center ϕ = 120° strings have a single gauged SU (N ), so that the effective infrared theory is simply F = N SQCD, while the string passing through the central SU (3N ) has two gauged SU (N ) groups, like the ϕ = 0° and ϕ = 240° lines.
The operator α(Q^{(1)} Q̄^{(1)} )(Q^{(2)} Q̄^{(2)} )/M induces a mass for the mesons M_1 and M_2 , which is of order αΛ²_{3N}/M . By assumption Λ_3N is small compared to M , so Λ²_{3N}/M ≪ Λ_3N . However, these meson masses can in principle be large compared to the scale of SU (N ) confinement, Λ_N , because Λ_N ≪ Λ_3N is also exponentially small. In order for the SU (N ) confinement to proceed as in Section 2.4 for the m = 2 wide moose lattice, even in the face of global symmetry violation, the associated meson mass scale αΛ²_{3N}/M should be small compared to Λ_N .
Figure 6: In the left diagram we label the matter superfields associated with the unit cell according to Figure 3. The SU (3N ) and SU (N ) labels are suppressed, replaced by filled and open circles (respectively) at the nodes of the diagram. In the central and right diagrams we list the charges of the superfields under the U (1)_a and U (1)_b symmetries, respectively.

… ruled out by cosmological data. Some of the global symmetries may be gauged, changing the low-energy degrees of freedom in the theory.
Eq. (2.5) is invariant under the remaining 6 linear combinations. The mixed SU (3N )^2 U (1) anomaly breaks another linear combination. At a generic point within the bulk of the lattice (the central SU (3N ) in Figure 2, for example), each SU (N ) node at the boundary of the unit cell is shared with one neighboring cell: so, although there are six SU (N ) groups appearing in Figure 3, on average only three linear combinations of U (1) phases are broken by the mixed SU (N )^2 U (1) anomalies. So, for each unit cell within the bulk of the moose lattice, we can identify

12 − 6 − 1 − 3 = 2 (3.1)

conserved U (1) symmetries.
Eq. (2.5) induces U (1)_a × U (1)_b conserving mass terms for the mixed mesons (Q_i Q̄_{j≠i} ) and the quarks q_i . The light degrees of freedom are the mesons M_ii = (Q_i Q̄_i ) and the baryons B and B̄, subject to the constraint Eq. (2.21). It is easy to see from Eqs. (3.2) and (3.3) that B, B̄, and M_ii are all neutral under U (1)_a × U (1)_b : the only superfields with nontrivial charges are the ones that acquired vectorlike masses, q_i and M_{i,j≠i} . Once these fields are integrated out, the light degrees of freedom are decoupled from a and b.
Adding such operators to the superpotential generally breaks each individual localized U (1)_{a,b} symmetry. On average, each unit cell in the bulk of the moose lattice comes with two new Tr (q_a q_b q_c ) plaquette operators, so that the number of U (1) symmetries in the moose lattice no longer scales as the number of unit cells. Below we show that once Eq. (3.4) is added to W , the number of U (1)s scales with the size of the perimeter of the lattice, rather than its area. A version of U (1)_a × U (1)_b remains unbroken by Eq. (3.4): the symmetric linear combinations of the localized U (1) symmetries from every unit cell in the moose lattice, a = a^{(r)} + a^{(s)} + a^{(t)} + . . . and b = b^{(r)} + b^{(s)} + b^{(t)} + . . . . This is easily seen from Eq. (3.2) and Eq. (3.3), by assigning the same charge to all q^{(r)}_i independently of the unit cell label r.
Figure 7: An illustration of the quark charge assignments under the boundary-associated conserved global symmetries, using the chevron basis referred to in the text. In the main example the quark charges are indicated by the line color, blue and red for ±1 charges. Quarks not charged under this U (1) are drawn with faint gray dashed lines. Solid and open circles at the nodes represent SU (3N ) and SU (N ) groups. Every plaquette operator is neutral under this U (1), and all of the SU (N_c )^2 U (1) mixed anomalies cancel for the gauge groups. As a counterexample, we also show an anomalous U (1)_A as the zigzag line along the ϕ = 120° direction, with black and red dashed lines for ±1 charges. This U (1)_A is unbroken by W and the SU (3N )^2 U (1)_A gauge anomaly; but, some of the SU (N )^2 U (1)_A anomaly coefficients are nonzero, so this U (1)_A is broken explicitly by Λ_N scale effects.
Figure 8: Reflective boundary conditions. Left: The k = 3 × 3 moose lattice from Figure 2 is modified by the addition of eight "wrong-way" bifundamental quarks on the boundaries of the lattice. The wrong-way quarks are depicted in black; the matter fields already present in Section 2 are shown in gray. Following the concise notation of the figures in Section 3, the SU (N ) and SU (3N ) sites are represented (respectively) by white and black nodes. The only SU (N ) sites with nonzero SU (N )^3 anomaly coefficients are those at the corners of the parallelogram. Right: After the SU (3N ) groups confine, and after the mesons and quarks with vectorlike O(Λ_3N ) masses are integrated out, the theory includes the SU (N ) charged quarks and mesons shown here. Thanks to the new edge quarks, the fields can be organized into three disjoint SU (N )^{m−1} gauge theories, two with m = 10 and one with m = 3.
Figure 9: Left: In the k = 4 × 4 parallelogram moose lattice with reflective boundary conditions, there are two sets of closed SU (N )^m loops, and three open lines with SU (N )_ℓ × SU (N )_r global symmetries. The open ended SU (N )^{m−1} theories confine, following Ref. [9], while the closed loop SU (N )^m theories have Coulomb phases, as in Ref. [8]. Right: Here we show an asymmetric moose lattice with reflective boundaries and an arbitrarily chosen shape. In this example there are no closed loops: the gauge invariant Wilson lines transform under the SU (N )_ℓ × SU (N )_r global symmetries of the nodes on the corners of the lattice.
Figure 10: A cylindrical moose lattice with k = 4 × 3 unit cells, periodic in the horizontal direction. The labels j = 1 . . . 5 indicate that the SU (N )_j nodes on the left and right boundaries are identical. Dashed lines indicate that a quark (for example, the bifundamental of SU (N )_1 × SU (N )_2 ) appears twice on the diagram.
Figure 11: Left: A T² torus, based on a k = 3 × 4 rectangular arrangement with periodic boundaries, is shown here in the UV limit where the gauged SU (3N ) are asymptotically free. The labels j = 1 . . . 7 and j′ = 1′ . . . 5′ indicate the connections between the boundary nodes. Each pair of SU (N )_j or SU (N )_{j′} nodes on the diagram corresponds to a single gauged SU (N ); likewise, the four "0" nodes represent a single gauged SU (N ). Right: After the SU (3N ) groups confine and the vectorlike pairs are integrated out, these SU (N ) × SU (N ) bifundamental mesons are the only charged degrees of freedom. Along the ϕ = 0° and 240° directions they form closed loops, with SU (N )^3 and SU (N )^4 product groups respectively. For this rectangular k = m × n torus with m ≠ n, the ϕ = 120° line wraps around the torus in a single SU (N )^12 ring, following 0 → 6 → 4′ → 4 → 2′ → 2 → 0.
Figure 12: Here we show another symmetric T² torus, periodic in the ϕ = 0° and ϕ = 90° directions. The j → j lines (j = 1, 3, 5, 7) are unaffected by the altered boundary conditions (compared to Figure 11), but now the ϕ = ±120° mesons each form one closed ring of SU (N )^12 . Reflections of the lattice in the ϕ = 0° or ϕ = 90° directions exchange the ϕ = 120° and ϕ = 240° operators with each other.
We use the common shorthand where each superfield and its scalar component are labelled with the same symbol, e.g. Q or Q.
As the exact form of the Kähler potential is not known for the dual theory, the mapping between UV gauge invariants and canonically normalized IR degrees of freedom includes some unknown numeric coefficients. Our (QQ) → ΛM mapping assumes these coefficients are O(1).
See comment in footnote 2 about matching IR degrees of freedom and UV gauge invariants.
Electric -magnetic duality in supersymmetric non-Abelian gauge theories. N Seiberg, 10.1016/0550-3213(94)00023-8arXiv:hep-th/9411149Nucl. Phys. 435hep-thN. Seiberg, "Electric -magnetic duality in supersymmetric non-Abelian gauge theories," Nucl. Phys. B435 (1995) 129-146, arXiv:hep-th/9411149 [hep-th].
Phases of N=1 supersymmetric gauge theories in four-dimensions. K A Intriligator, N Seiberg, 10.1016/0550-3213(94)90215-1arXiv:hep-th/9408155Nucl. Phys. 431hep-thK. A. Intriligator and N. Seiberg, "Phases of N=1 supersymmetric gauge theories in four-dimensions," Nucl. Phys. B431 (1994) 551-568, arXiv:hep-th/9408155 [hep-th].
Exact Gell-Mann-Low Function of Supersymmetric Yang-Mills Theories from Instanton Calculus. V A Novikov, M A Shifman, A I Vainshtein, V I Zakharov, 10.1016/0550-3213(83)90338-3Nucl. Phys. B. 229V. A. Novikov, M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, "Exact Gell-Mann-Low Function of Supersymmetric Yang-Mills Theories from Instanton Calculus," Nucl. Phys. B 229 (1983) 381-393.
De)constructing dimensions. N Arkani-Hamed, A G Cohen, H Georgi, 10.1103/PhysRevLett.86.4757arXiv:hep-th/0104005Phys. Rev. Lett. 86hep-thN. Arkani-Hamed, A. G. Cohen, and H. Georgi, "(De)constructing dimensions," Phys. Rev. Lett. 86 (2001) 4757-4761, arXiv:hep-th/0104005 [hep-th].
4-D constructions of supersymmetric extra dimensions and gaugino mediation. C Csaki, J Erlich, C Grojean, G D Kribs, 10.1103/PhysRevD.65.015003arXiv:hep-ph/0106044Phys. Rev. 6515003hep-phC. Csaki, J. Erlich, C. Grojean, and G. D. Kribs, "4-D constructions of supersymmetric extra dimensions and gaugino mediation," Phys. Rev. D65 (2002) 015003, arXiv:hep-ph/0106044 [hep-ph].
Exact results in 5-D from instantons and deconstruction. C Csaki, J Erlich, V V Khoze, E Poppitz, Y Shadmi, Y Shirman, 10.1103/PhysRevD.65.085033arXiv:hep-th/0110188Phys. Rev. 6585033hep-thC. Csaki, J. Erlich, V. V. Khoze, E. Poppitz, Y. Shadmi, and Y. Shirman, "Exact results in 5-D from instantons and deconstruction," Phys. Rev. D65 (2002) 085033, arXiv:hep-th/0110188 [hep-th].
Localized fermions and anomaly inflow via deconstruction. W Skiba, D Tucker-Smith, 10.1103/PhysRevD.65.095002arXiv:hep-ph/0201056Phys. Rev. 6595002hep-phW. Skiba and D. Tucker-Smith, "Localized fermions and anomaly inflow via deconstruction," Phys. Rev. D65 (2002) 095002, arXiv:hep-ph/0201056 [hep-ph].
N=1 supersymmetric product group theories in the Coulomb phase. C Csaki, J Erlich, D Z Freedman, W Skiba, 10.1103/PhysRevD.56.5209arXiv:hep-th/9704067Phys. Rev. 56hep-thC. Csaki, J. Erlich, D. Z. Freedman, and W. Skiba, "N=1 supersymmetric product group theories in the Coulomb phase," Phys. Rev. D56 (1997) 5209-5217, arXiv:hep-th/9704067 [hep-th].
Quantum modified mooses. S Chang, H Georgi, 10.1016/j.nuclphysb.2003.09.006arXiv:hep-th/0209038Nucl. Phys. 672hep-thS. Chang and H. Georgi, "Quantum modified mooses," Nucl. Phys. B672 (2003) 101-122, arXiv:hep-th/0209038 [hep-th].
Product group confinement in SUSY gauge theories. B Lillard, 10.1007/JHEP10(2017)060arXiv:1704.06282JHEP. 1060hep-thB. Lillard, "Product group confinement in SUSY gauge theories," JHEP 10 (2017) 060, arXiv:1704.06282 [hep-th].
The Dual of supersymmetric SU(2k) with an antisymmetric tensor and composite dualities. M Berkooz, 10.1016/0550-3213(95)00400-MarXiv:hep-th/9505067Nucl. Phys. 452hep-thM. Berkooz, "The Dual of supersymmetric SU(2k) with an antisymmetric tensor and composite dualities," Nucl. Phys. B452 (1995) 513-525, arXiv:hep-th/9505067 [hep-th].
Duality in SUSY SU(N) with an antisymmetric tensor. P Pouliot, 10.1016/0370-2693(95)01427-6arXiv:hep-th/9510148Phys. Lett. 367hep-thP. Pouliot, "Duality in SUSY SU(N) with an antisymmetric tensor," Phys. Lett. B367 (1996) 151-156, arXiv:hep-th/9510148 [hep-th].
Duality and exact results in product group theories. E Poppitz, Y Shadmi, S P Trivedi, 10.1016/S0550-3213(96)00464-6arXiv:hep-th/9605113Nucl. Phys. 480hep-thE. Poppitz, Y. Shadmi, and S. P. Trivedi, "Duality and exact results in product group theories," Nucl. Phys. B480 (1996) 125-169, arXiv:hep-th/9605113 [hep-th].
The Massless Limit of Supersymmetric QCD. A C Davis, M Dine, N Seiberg, 10.1016/0370-2693(83)91332-1Phys. Lett. 125A. C. Davis, M. Dine, and N. Seiberg, "The Massless Limit of Supersymmetric QCD," Phys. Lett. B125 (1983) 487-492.
Dynamical Supersymmetry Breaking in Four-Dimensions and Its Phenomenological Implications. I Affleck, M Dine, N Seiberg, 10.1016/0550-3213(85)90408-0Nucl. Phys. 256I. Affleck, M. Dine, and N. Seiberg, "Dynamical Supersymmetry Breaking in Four-Dimensions and Its Phenomenological Implications," Nucl. Phys. B256 (1985) 557-599.
Exact results on the space of vacua of four-dimensional SUSY gauge theories. N Seiberg, 10.1103/PhysRevD.49.6857arXiv:hep-th/9402044Phys. Rev. 49hep-thN. Seiberg, "Exact results on the space of vacua of four-dimensional SUSY gauge theories," Phys. Rev. D49 (1994) 6857-6863, arXiv:hep-th/9402044 [hep-th].
A Systematic approach to confinement in N=1 supersymmetric gauge theories. C Csaki, M Schmaltz, W Skiba, 10.1103/PhysRevLett.78.799arXiv:hep-th/9610139Phys. Rev. Lett. 78hep-thC. Csaki, M. Schmaltz, and W. Skiba, "A Systematic approach to confinement in N=1 supersymmetric gauge theories," Phys. Rev. Lett. 78 (1997) 799-802, arXiv:hep-th/9610139 [hep-th].
Confinement in N=1 SUSY gauge theories and model building tools. C Csaki, M Schmaltz, W Skiba, 10.1103/PhysRevD.55.7840arXiv:hep-th/9612207Phys. Rev. 55hep-thC. Csaki, M. Schmaltz, and W. Skiba, "Confinement in N=1 SUSY gauge theories and model building tools," Phys. Rev. D55 (1997) 7840-7858, arXiv:hep-th/9612207 [hep-th].
Lectures on supersymmetric gauge theories and electric-magnetic duality. K A Intriligator, N Seiberg, 10.1016/0920-5632(95)00626-5arXiv:hep-th/9509066Nucl. Phys. Proc. Suppl. 45hep-th]. [,157(1995)K. A. Intriligator and N. Seiberg, "Lectures on supersymmetric gauge theories and electric-magnetic duality," Nucl. Phys. Proc. Suppl. 45BC (1996) 1-28, arXiv:hep-th/9509066 [hep-th]. [,157(1995)].
A Tool Kit for Builders of Composite Models. H Georgi, 10.1016/0550-3213(86)90092-1Nucl. Phys. 266H. Georgi, "A Tool Kit for Builders of Composite Models," Nucl. Phys. B266 (1986) 274-284.
New Seiberg Dualities from N=2 Dualities. K Maruyoshi, M Taki, S Terashima, F Yagi, 10.1088/1126-6708/2009/09/086arXiv:0907.262586hep-thK. Maruyoshi, M. Taki, S. Terashima, and F. Yagi, "New Seiberg Dualities from N=2 Dualities," JHEP 09 (2009) 086, arXiv:0907.2625 [hep-th].
K Nii, arXiv:1603.085503d Deconfinement, Product gauge group, Seiberg-Witten and New 3d dualities. hep-thK. Nii, "3d Deconfinement, Product gauge group, Seiberg-Witten and New 3d dualities," arXiv:1603.08550 [hep-th].
A duality web of linear quivers. F Brünner, V P Spiridonov, 10.1016/j.physletb.2016.08.039arXiv:1605.06991Phys. Lett. 761hep-thF. Brünner and V. P. Spiridonov, "A duality web of linear quivers," Phys. Lett. B761 (2016) 261-264, arXiv:1605.06991 [hep-th].
Exactly marginal operators and duality in four-dimensional N=1 supersymmetric gauge theory. R G Leigh, M J Strassler, 10.1016/0550-3213(95)00261-ParXiv:hep-th/9503121Nucl. Phys. B. 447R. G. Leigh and M. J. Strassler, "Exactly marginal operators and duality in four-dimensional N=1 supersymmetric gauge theory," Nucl. Phys. B 447 (1995) 95-136, arXiv:hep-th/9503121.
The Moduli space of vacua of N=2 SUSY QCD and duality in N=1 SUSY QCD. P C Argyres, M Plesser, N Seiberg, 10.1016/0550-3213(96)00210-6arXiv:hep-th/9603042Nucl. Phys. B. 471P. C. Argyres, M. Plesser, and N. Seiberg, "The Moduli space of vacua of N=2 SUSY QCD and duality in N=1 SUSY QCD," Nucl. Phys. B 471 (1996) 159-194, arXiv:hep-th/9603042.
Deformations of N=2 dualities to N=1 dualities in SU, SO and USp gauge theories. T Hirayama, N Maekawa, S Sugimoto, 10.1143/PTP.99.843arXiv:hep-th/9705069Prog. Theor. Phys. 99T. Hirayama, N. Maekawa, and S. Sugimoto, "Deformations of N=2 dualities to N=1 dualities in SU, SO and USp gauge theories," Prog. Theor. Phys. 99 (1998) 843-874, arXiv:hep-th/9705069.
On inherited duality in N=1 d = 4 supersymmetric gauge theories. P C Argyres, K A Intriligator, R G Leigh, M J Strassler, 10.1088/1126-6708/2000/04/029arXiv:hep-th/9910250JHEP. 0429P. C. Argyres, K. A. Intriligator, R. G. Leigh, and M. J. Strassler, "On inherited duality in N=1 d = 4 supersymmetric gauge theories," JHEP 04 (2000) 029, arXiv:hep-th/9910250.
D-branes, quivers, and ALE instantons. M R Douglas, G W Moore, arXiv:hep-th/9603167M. R. Douglas and G. W. Moore, "D-branes, quivers, and ALE instantons," arXiv:hep-th/9603167.
Varieties of vacua in classical supersymmetric gauge theories. M A Luty, W Taylor, 10.1103/PhysRevD.53.3399arXiv:hep-th/9506098Phys. Rev. 53hep-thM. A. Luty and W. Taylor, "Varieties of vacua in classical supersymmetric gauge theories," Phys. Rev. D53 (1996) 3399-3405, arXiv:hep-th/9506098 [hep-th].
Chiral rings and anomalies in supersymmetric gauge theory. F Cachazo, M R Douglas, N Seiberg, E Witten, 10.1088/1126-6708/2002/12/071arXiv:hep-th/0211170JHEP. 1271F. Cachazo, M. R. Douglas, N. Seiberg, and E. Witten, "Chiral rings and anomalies in supersymmetric gauge theory," JHEP 12 (2002) 071, arXiv:hep-th/0211170.
Chiral ring of Sp(N) and SO(N) supersymmetric gauge theory in four-dimensions. E Witten, arXiv:hep-th/0302194E. Witten, "Chiral ring of Sp(N) and SO(N) supersymmetric gauge theory in four-dimensions," arXiv:hep-th/0302194.
Axion Induced Topology Change in Quantum Gravity and String Theory. S B Giddings, A Strominger, 10.1016/0550-3213(88)90446-4Nucl. Phys. 306S. B. Giddings and A. Strominger, "Axion Induced Topology Change in Quantum Gravity and String Theory," Nucl. Phys. B306 (1988) 890-907.
Planck scale physics and the Peccei-Quinn mechanism. M Kamionkowski, J March-Russell, 10.1016/0370-2693(92)90492-MarXiv:hep-th/9202003Phys. Lett. 282hep-thM. Kamionkowski and J. March-Russell, "Planck scale physics and the Peccei-Quinn mechanism," Phys. Lett. B282 (1992) 137-141, arXiv:hep-th/9202003 [hep-th].
Planck scale corrections to axion models. S M Barr, D Seckel, 10.1103/PhysRevD.46.539Phys. Rev. 46S. M. Barr and D. Seckel, "Planck scale corrections to axion models," Phys. Rev. D46 (1992) 539-549.
Gravity and global symmetries. R Kallosh, A D Linde, D A Linde, L Susskind, 10.1103/PhysRevD.52.912arXiv:hep-th/9502069Phys. Rev. 52hep-thR. Kallosh, A. D. Linde, D. A. Linde, and L. Susskind, "Gravity and global symmetries," Phys. Rev. D52 (1995) 912-935, arXiv:hep-th/9502069 [hep-th].
Wormholes and Global Symmetries. L F Abbott, M B Wise, 10.1016/0550-3213(89)90503-8Nucl. Phys. 325L. F. Abbott and M. B. Wise, "Wormholes and Global Symmetries," Nucl. Phys. B325 (1989) 687-704.
WORMHOLES MADE WITHOUT MASSLESS MATTER FIELDS. S R Coleman, K.-M Lee, 10.1016/0550-3213(90)90149-8Nucl. Phys. 329S. R. Coleman and K.-M. Lee, "WORMHOLES MADE WITHOUT MASSLESS MATTER FIELDS," Nucl. Phys. B329 (1990) 387-409.
Constraints Imposed by CP Conservation in the Presence of Instantons. R D Peccei, H R Quinn, 10.1103/PhysRevD.16.1791Phys. Rev. 16R. D. Peccei and H. R. Quinn, "Constraints Imposed by CP Conservation in the Presence of Instantons," Phys. Rev. D16 (1977) 1791-1797.
CP Conservation in the Presence of Instantons. R D Peccei, H R Quinn, 10.1103/PhysRevLett.38.1440Phys. Rev. Lett. 38R. D. Peccei and H. R. Quinn, "CP Conservation in the Presence of Instantons," Phys. Rev. Lett. 38 (1977) 1440-1443.
Problem of Strong p and t Invariance in the Presence of Instantons. F Wilczek, 10.1103/PhysRevLett.40.279Phys. Rev. Lett. 40F. Wilczek, "Problem of Strong p and t Invariance in the Presence of Instantons," Phys. Rev. Lett. 40 (1978) 279-282.
A New Light Boson?. S Weinberg, 10.1103/PhysRevLett.40.223Phys. Rev. Lett. 40S. Weinberg, "A New Light Boson?," Phys. Rev. Lett. 40 (1978) 223-226.
Weak Interaction Singlet and Strong CP Invariance. J E Kim, 10.1103/PhysRevLett.43.103Phys. Rev. Lett. 43103J. E. Kim, "Weak Interaction Singlet and Strong CP Invariance," Phys. Rev. Lett. 43 (1979) 103.
Can Confinement Ensure Natural CP Invariance of Strong Interactions?. M A Shifman, A I Vainshtein, V I Zakharov, 10.1016/0550-3213(80)90209-6Nucl. Phys. 166M. A. Shifman, A. I. Vainshtein, and V. I. Zakharov, "Can Confinement Ensure Natural CP Invariance of Strong Interactions?," Nucl. Phys. B166 (1980) 493-506.
A Simple Solution to the Strong CP Problem with a Harmless Axion. M Dine, W Fischler, M Srednicki, 10.1016/0370-2693(81)90590-6Phys. Lett. 104M. Dine, W. Fischler, and M. Srednicki, "A Simple Solution to the Strong CP Problem with a Harmless Axion," Phys. Lett. 104B (1981) 199-202.
On Possible Suppression of the Axion Hadron Interactions. A R Zhitnitsky, Sov. J. Nucl. Phys. 31260Yad. Fiz.A. R. Zhitnitsky, "On Possible Suppression of the Axion Hadron Interactions. (In Russian)," Sov. J. Nucl. Phys. 31 (1980) 260. [Yad. Fiz.31,497(1980)].
A Composite Axion from a Supersymmetric Product Group. B Lillard, T M P Tait, 10.1007/JHEP11(2017)005arXiv:1707.04261JHEP. 115hep-phB. Lillard and T. M. P. Tait, "A Composite Axion from a Supersymmetric Product Group," JHEP 11 (2017) 005, arXiv:1707.04261 [hep-ph].
A High Quality Composite Axion. B Lillard, T M P Tait, 10.1007/JHEP11(2018)199arXiv:1811.03089JHEP. 11199hep-phB. Lillard and T. M. P. Tait, "A High Quality Composite Axion," JHEP 11 (2018) 199, arXiv:1811.03089 [hep-ph].
Quantum moduli spaces of linear and ring mooses. G Hailu, 10.1016/S0370-2693(02)03160-XarXiv:hep-th/0209267Phys. Lett. B. 552G. Hailu, "Quantum moduli spaces of linear and ring mooses," Phys. Lett. B 552 (2003) 265-272, arXiv:hep-th/0209267.
Discrete gauge symmetries in axionic extensions of the SSM. E J Chun, A Lukas, 10.1016/0370-2693(92)91266-CarXiv:hep-ph/9209208Phys. Lett. B. 297E. J. Chun and A. Lukas, "Discrete gauge symmetries in axionic extensions of the SSM," Phys. Lett. B 297 (1992) 298-304, arXiv:hep-ph/9209208.
Composite axion models and Planck scale physics. L Randall, 10.1016/0370-2693(92)91928-3Phys. Lett. 284L. Randall, "Composite axion models and Planck scale physics," Phys. Lett. B284 (1992) 77-80.
Axions and a gauged Peccei-Quinn symmetry. H.-C Cheng, D E Kaplan, arXiv:hep-ph/0103346H.-C. Cheng and D. E. Kaplan, "Axions and a gauged Peccei-Quinn symmetry," arXiv:hep-ph/0103346.
A "gauged" U (1) Peccei-Quinn symmetry. H Fukuda, M Ibe, M Suzuki, T T Yanagida, 10.1016/j.physletb.2017.05.071arXiv:1703.01112Phys. Lett. B. 771hep-phH. Fukuda, M. Ibe, M. Suzuki, and T. T. Yanagida, "A "gauged" U (1) Peccei-Quinn symmetry," Phys. Lett. B 771 (2017) 327-331, arXiv:1703.01112 [hep-ph].
Accidental Peccei-Quinn symmetry protected to arbitrary order. L Di Luzio, E Nardi, L Ubaldi, 10.1103/PhysRevLett.119.011801arXiv:1704.01122Phys. Rev. Lett. 119111801hep-phL. Di Luzio, E. Nardi, and L. Ubaldi, "Accidental Peccei-Quinn symmetry protected to arbitrary order," Phys. Rev. Lett. 119 no. 1, (2017) 011801, arXiv:1704.01122 [hep-ph].
Axion quality from the (anti)symmetric of SU(N ). M Ardu, L Di Luzio, G Landini, A Strumia, D Teresi, J.-W Wang, 10.1007/JHEP11(2020)090arXiv:2007.12663JHEP. 1190hep-phM. Ardu, L. Di Luzio, G. Landini, A. Strumia, D. Teresi, and J.-W. Wang, "Axion quality from the (anti)symmetric of SU(N )," JHEP 11 (2020) 090, arXiv:2007.12663 [hep-ph].
Axion Quality from Superconformal Dynamics. Y Nakai, M Suzuki, 10.1016/j.physletb.2021.136239arXiv:2102.01329Phys. Lett. B. 816136239hep-phY. Nakai and M. Suzuki, "Axion Quality from Superconformal Dynamics," Phys. Lett. B 816 (2021) 136239, arXiv:2102.01329 [hep-ph].
Exact accidental U(1) symmetries for the axion. L Darmé, E Nardi, 10.1103/PhysRevD.104.055013arXiv:2102.05055Phys. Rev. D. 104555013hep-phL. Darmé and E. Nardi, "Exact accidental U(1) symmetries for the axion," Phys. Rev. D 104 no. 5, (2021) 055013, arXiv:2102.05055 [hep-ph].
| [] |
[] | [
"X I Ng \nC C A ST (W orl d Laboratory)\n\n",
"Li -R Ong \nC C A ST (W orl d Laboratory)\n\n",
"W En-G \nC C A ST (W orl d Laboratory)\n\n",
"Zhang R En-You \nC C A ST (W orl d Laboratory)\n\n",
"Ji Ang Y I B ,H \nC C A ST (W orl d Laboratory)\n\n",
"Li Ang \nC C A ST (W orl d Laboratory)\n\n",
"Lig Ang \nC C A ST (W orl d Laboratory)\n\n"
] | [
"C C A ST (W orl d Laboratory)\n",
"C C A ST (W orl d Laboratory)\n",
"C C A ST (W orl d Laboratory)\n",
"C C A ST (W orl d Laboratory)\n",
"C C A ST (W orl d Laboratory)\n",
"C C A ST (W orl d Laboratory)\n",
"C C A ST (W orl d Laboratory)\n"
] | [] | O . B ox 8730,B ei ji ng 100080,P. R . C hi na b D epartm ent ofM odern Physi cs,U ni versi ty ofSci ence and Technol ogy ofC hi na (U ST C ),H efei ,A nhui230027,P. R . C hi na A bstract W e have cal cul ated the ful lone-l oop el ectroweak (EW ) and Q C D correcti ons to the thi rd generati on scal ar-ferm i on pai r producti on processes e + e ! !f i f i (f = t;b; ) atan el ectron-posi tron l i nearcol l i der(LC )i n them i ni m alsupersym m etri cstandard m odel (M SSM ).W e anal yze the dependence ofthe radi ati ve correcti onson the param eterssuch as the col l i di ng energy pŝand the SU SY fundam entalparam eters A f ,tan , ,M S U S Y and so forth.T henum eri calresul tsshow thattheEW correcti onsto thesquark-,stau-pai r producti on processes and Q C D correcti ons to the squark-pai r producti on processes gi ve substanti alcontri buti onsi n som eparam eterspace.T heEW rel ati vecorrecti onsto squarkpai r producti on processes can be com parabl e w i th Q C D correcti ons at hi gh energi es. T herefore,these EW and Q C D correcti ons cannot be negl ected i n preci se m easurem ent ofsferm i on pai r producti ons vi a col l i si on at future l i near col l i ders. PA C S:12.60.Jv,14.80.Ly,12.15.Lk,12.38.B x K eyw ords: SU SY ,Q C D correction,electrow eak correction,photon collider Supported by N ati onalN aturalSci ence Foundati on ofC hi na. | 10.1103/physrevd.71.055009 | [
"https://export.arxiv.org/pdf/hep-ph/0503171v1.pdf"
] | 18,956,252 | hep-ph/0503171 | 8a27eda86a1bffa3055526f5090a5cf464c98411 |
arXiv:hep-ph/0503171v1 17 Mar 2005
X I Ng
C C A ST (W orl d Laboratory)
Li -R Ong
C C A ST (W orl d Laboratory)
W En-G
C C A ST (W orl d Laboratory)
Zhang R En-You
C C A ST (W orl d Laboratory)
Ji Ang Y I B ,H
C C A ST (W orl d Laboratory)
Li Ang
C C A ST (W orl d Laboratory)
Lig Ang
C C A ST (W orl d Laboratory)
arXiv:hep-ph/0503171v1 17 Mar 2005Ful lone-l oop Q CD and el ectroweak correcti onsto sf erm i on pai r producti on i n col l i si ons
O . B ox 8730,B ei ji ng 100080,P. R . C hi na b D epartm ent ofM odern Physi cs,U ni versi ty ofSci ence and Technol ogy ofC hi na (U ST C ),H efei ,A nhui230027,P. R . C hi na A bstract W e have cal cul ated the ful lone-l oop el ectroweak (EW ) and Q C D correcti ons to the thi rd generati on scal ar-ferm i on pai r producti on processes e + e ! !f i f i (f = t;b; ) atan el ectron-posi tron l i nearcol l i der(LC )i n them i ni m alsupersym m etri cstandard m odel (M SSM ).W e anal yze the dependence ofthe radi ati ve correcti onson the param eterssuch as the col l i di ng energy pŝand the SU SY fundam entalparam eters A f ,tan , ,M S U S Y and so forth.T henum eri calresul tsshow thattheEW correcti onsto thesquark-,stau-pai r producti on processes and Q C D correcti ons to the squark-pai r producti on processes gi ve substanti alcontri buti onsi n som eparam eterspace.T heEW rel ati vecorrecti onsto squarkpai r producti on processes can be com parabl e w i th Q C D correcti ons at hi gh energi es. T herefore,these EW and Q C D correcti ons cannot be negl ected i n preci se m easurem ent ofsferm i on pai r producti ons vi a col l i si on at future l i near col l i ders. PA C S:12.60.Jv,14.80.Ly,12.15.Lk,12.38.B x K eyw ords: SU SY ,Q C D correction,electrow eak correction,photon collider Supported by N ati onalN aturalSci ence Foundati on ofC hi na.
Introduction
T he standard m odel(SM )hasbeen successfuli n descri bi ng the strong,weak and el ectrom agneti c i nteracti on phenom ena at the energy scal e up to 10 2 G eV .A t the hi gher energy scal e, i t i s l i kel y that the m i ni m alsupersym m etri c standard m odel(M SSM ) i s the m ost attracti ve candi date am ong vari ousextensi onsofthe SM .In the M SSM ,the exi stence ofscal arpartners of al l ferm i ons i n the SM , nam el y, two chi ral scal ar ferm i onsf L andf R are requi red. A t future col l i ders runni ng i n TeV energy regi on, the supersym m etri c scal ar parti cl ef f pai r producti on processes are very prom i si ng channel s i n probi ng di rectl y the exi stence ofthese scal ar ferm i ons,si nce thei r producti on cross secti ons can be com parati vel y l arge,i fthe scal ar ferm i ons are not too heavy.
T he two chi ralSU SY statesf L andf R ofeach ferm i on f turn to thei r m ass ei genstates by m i xi ng w i th each other. T he m i xi ng si ze i s proporti onalto the m ass ofthe correspondi ng SM ferm i on [ 1] . G eneral l y,peopl e bel i eve thatthe sferm i onsofthe thi rd generati on are m ore i m portant i n di rect SU SY di scovery than those of the form er two generati ons, because the sferm i onsf L andf R ofthe thi rd generati on m i x strongl y to form the two m ass ei genstates f 1 andf 2 . W e assum e that the m ass ei genstatesf 1 (f = t;b; ) have l ower m asses thanf 2 . T herefore,f 1 i s very probabl y to be di scovered i n a rel ati vel y l ower col l i di ng energy range. A nothersi gni cance ofthe sferm i on pai rproducti on i sthati tgi vesaccessto one ofthe SU SY fundam entalparam eters A f ,the tri l i near coupl i ng param eter.
T he futurehi gherenergy e + e l i nearcol l i ders(LC )i sdesi gned to l ook forthe evi dencesof H i ggs boson and other new parti cl es beyond the SM .T here have been al ready som e detai l ed desi gns of l i near col l i ders,such as N LC [ 2] ,JLC [ 3] ,T ESLA [ 4]and C LIC [ 5] . B ecause of the cl eaner background of e + e col l i si on than pp( p) col l i si on,LC can produce m ore di sti ncti ve experi m ental si gnature of new physi cs. T he sl epton pai r producti on at LC are i ntensi vel y di scussed i n R efs. [ 6,7,8,9] . T he squark pai r produced by e + e anni hi l ati on has been studi ed thoroughl y,both at tree l eveland at next-to-l eadi ng order [ 10] [ 11] . In R ef. [ 12]the Q C D correcti on to stop pai r producti on vi a fusi on at e + e l i near col l i der i s i nvesti gated. T he scal ar ferm i on pai r producti on vi a e + e col l i si ons e + e !f i f j (f = ;t;b; i;j = 1;2) at one-l oop l evel ,has been studi ed i n detai li n [ 13,14] . T hey have consi dered the com pl ete SU SY -Q C D and el ectroweak (EW )one-l oop correcti ons. T hei rresul tsshow thatattheenergy of p s = 500 1000 G eV ,the Q C D correcti ons are dom i nated w hi l e the EW correcti ons are ofthe sam e m agni tude as the SU SY -Q C D correcti ons at the hi gher energy scal e.
H owever,the future e + e l i near col l i ders are desi gned to gi ve other faci l i ti es operati ng i n e e , and othercol l i si on m odesatthe energy of500 5000 G eV w i th a l um i nosi ty ofthe order 10 33 cm 2 s 1 [ 15] . T he future LC ' s can turn the hi gh energy el ectron-posi tron beam s i nto theC om pton backscatteri ng energeti c photon beam sw i th hi gh e ci ency i n thescatteri ng ofi ntense l aser photons. W i th the hel p ofthe new experi m entaltechni ques,i t i s feasi bl e to yi el d a scal er ferm i on pai r producti on di rectl y vi a the hi gh energy photon col l i si on. D i erent opti ons of the col l i di ng m ode are com pl em entary to each other and w i l ladd essenti al new i nform ati on to that obtai ned from the C ER N Large H adron C ol l i der (LH C ).T herefore,the sferm i on pai r producti on vi a fusi on provi des another i m portant m echani sm i n produci ng sferm i on pai r. M oreover, thei r producti on rates shoul d be l arger than those by the e + e anni hi l ati on because of the exi sti ng of the s-channelsuppressi on i n the l atter. A t the treel evel , the two nal sferm i ons produced i n col l i si ons shoul d be the sam e sferm i on m ass ei genstate, si nce onl y the el ectrom agneti c i nteracti on i s i nvol ved. A l though there are som e studi eson the e + e ! !f i f i (f = t;b; ; i= 1;2) attree l evel [ 16] ,the com pl ete one-l oop l evele ects of the EW and Q C D i n the sferm i on pai r producti on vi a col l i si ons are sti l l absentatpresent. In a word,the processofscal arferm i on pai rproducti on vi a photon-photon col l i si ons e + e ! !f i f i (f = t;b; ;i= 1;2) w i l lbe worthw hi l e to i nvesti gate preci sel y and can be accessi bl e i n accurate experi m ents.
In thi spaper,we w i l lcal cul ate the ful lone-l oop EW and Q C D correcti onsto thi sprocess. T he paper i s organi zed as fol l ow s: In Secti on 2,we gi ve the de ni ti ons ofthe notati ons and the anal yti cal cal cul ati ons of the cross secti ons i nvol vi ng the O ( ew ) EW and O ( s ) Q C D correcti ons. T he num eri cal resul ts and di scussi ons are presented i n Secti on 3. Fi nal l y, we gi ve a short sum m ery i n Secti on 4.
A nalyticalcalculations
In thi s secti on, we present the anal yti cal cal cul ati ons for the subprocesses !f i f i (f = ;t;b;i = 1;2) and thei r parent processes e + e ! !f i f i at the l owest order and the one-l oop l evel i n the M SSM .W e adopt the ' t H ooft-Feynm an gauge and the de ni ti ons of one-l oop i ntegralfuncti ons i n R ef. [ 17] . A s we know that for the subprocesses !q i q i (q = t;b;i= 1;2)there exi stboth Q C D and EW quantum correcti ons,w hi l e for !~ i ~ i (i= 1;2) subprocesses they have onl y EW quantum contri buti ons.
2.1 T he sferm ion sector and the low est order cross section of the subprocess !f i f i (f = ;t;b; i= 1;2)
In the M SSM ,the Lagrangi an m ass term ofthe scal ar ferm i onf can be w ri tten as
L m ass f = f Lf R M 2 f f L f R ; (f = ;t;b); (2. 1)
w here M 2 f i s the m ass m atri x off,expressed as
M 2 f = m 2 f L m f a f a y f m f m 2 f R ! (2. 2)
and
m 2 f L = M 2 fQ ;L g + (I 3L f Q f si n 2 W )cos2 m 2 Z + m 2 f ; m 2 f R = M 2 fŨ ;D ;Ẽ g + Q f si n 2 W cos2 m 2 Z + m 2 f ; a f = A f (tan ) 2I 3L f : (2. 3)
w here MQ ;ML ;MŨ ;MD and MẼ are the soft SU SY breaki ng m asses,I 3L f i s the thi rd component ofthe weak i sospi n ofthe ferm i on,Q f the el ectri c charge ofthe scal ar ferm i on, W the W ei nberg angl e,and A f i s the tri l i near scal ar coupl i ng param eters ofH i ggs boson w i th scal ar quarks, the hi ggsi no m ass param eter.
T he m ass m atri x Mf can be di agonal i zed by i ntroduci ng an uni tary m atri x Rf. T he m ass ei genstatesf 1 ,f 2 are de ned as
f 1 f 2 = Rf f L f R = cos f si n f si n f cos f ! f L f R (2. 4)
T hen the m ass term ofsferm i onf can be expressed
L m ass f = f 1f 2 Mf 2 D f 1 f 2 (f = ;t;b); (2. 5) w here Mf 2 D = RfM 2 f Rf y = m 2 f 1 0 0 m 2 f 2 ! :
(2. 6)
T he m asses off 1 ;f 2 and the angl e f are xed by the fol l ow i ng equati on
(m 2 f 1 ;m 2 f 2 )= 1 2 fm 2 f L + m 2 f R [ (m 2 f L m 2 f R ) 2 + 4j a f j 2 m 2 f ] 1=2 g; (2. 7) tan 2 f = 2j a f j m f m 2 f L m 2 f R (0 < f < ) (2. 8) W e denote the subprocess !f i f i as (p 1 )+ (p 2 )!f i (p 3 )+ f i (p 4 ) (f = ;t;b; i= 1;2): (2. 9)
w herep 1 and p 2 representthefour-m om enta ofthetwo i ncom i ng photons,p 3 and p 4 denotethe four-m om enta ofthe outgoi ng scal arferm i on and i tsanti -parti cl e,respecti vel y. T he m om enta p i (i= 1; ;4) obey the on-shel lequati ons,nam el y,p
Mt 0 = 4ie 2 Q 2 f t m 2 f i (p 1 ) (p 2 ) p 3 p 4 ; (2. 11) Mû 0 = Mt 0 (p 1 $ p 2 ); Mq 0 = 2ie 2 Q 2 f g (p 1 ) (p 2 ): (2. 12)
T he M andel stam vari abl est,û andŝ are de ned ast = (p 1
p 3 ) 2 ;û = (p 1 p 4 ) 2 ;ŝ = (p 1 + p 2 ) 2 = (p 3 + p 4 ) 2 . mf i (i= 1;
2) denotes the m asses ofthe m ass ei genstates ofscal ar ferm i ons.
T he cross secti on at tree-l evelcan be expressed aŝ
0 (ŝ)= 1 16 ŝ 2 Z tm ax t m in dt j M 0 j 2 ; (2. 13) w i th t m ax;m in = 1 2 h (2m 2 f i ŝ) q (2m 2 f i ŝ) 2 4m 4 f i i : (2. 14)
T he sum m ati on i s taken over the spi ns and col ors ofi ni ti aland nalstates,and the bar over the sum m ati on recal l s averagi ng over the i ni ti alspi ns. A fter i ntegrati on ofEq. (2. 13) we get the anal yti calexpressi ons ofthe cross secti on of !f i f i subprocess at the tree l evelaŝ
0 (ŝ)= 2 2 s Q 4 f N f C f1 + 16 4 1 2 + 4 2 (1 2 2 ) l ogvg: (2. 15) H ere 2 = m 2 f i =ŝ and = q 1 4m 2 f i =ŝ i2.2 O ( ew ) E W corrections to subprocess !f i f i (f = ;t;b; i= 1;2)
In the cal cul ati on ofthe one-l oop EW correcti ons,we adopt the di m ensi onalreducti on (D R ) regul ari zati on schem e,w hi ch i s supersym m etri c i nvari ant at l east at one-l oop l evel . W e assum e that there i s no quark m i xi ng, i . e. , the C K M -m atri x i s i denti ty m atri x, and use the com pl ete on-m ass-shel l(C O M S) renorm al i zati on schem e [ 19] . W e use FeynA rts 3 [ 20]package to generate the O ( ew ) Feynm an di agram s and am pl i tudes of the O ( ew ) EW vi rtual contri buti ons to !f i f i (f = ;t;b) subprocess. T here are total469 EW one-l oop Feynm an di agram s,and we cl assi ed them i nto fourgroups:sel f-energy,vertex,box di agram sand counter-term di agram s. T he rel evant renorm al i zati on constants are de ned as
e 0 = (1 + Z e )e; m 2 W ;0 = m 2 W + m 2 W ; m 2 Z ;0 = m 2 Z + m 2 Z ; A 0 = 1 2 Z A Z Z + (1 + 1 2 Z A A )A m 2 f i ;0 = m 2 f i + m 2 f i ;f 1;0 = Zf 1=2 11f 1 + Zf 1=2 12f 2 ;f 2;0 = Zf 1=2 22f 2 + Zf 1=2 21f 1 : (2. 16) w here Zf 1=2 ij = ij + 1 2 Zf ij :
W i th the on-m ass-shel lcondi ti ons,we can obtai n the renorm al i zed constants expressed as T he O ( ew ) vi rtual correcti ons contai n both ul travi ol et (U V ) and i nfrared (IR ) di vergences. A fter renorm al i zati on procedure,the U V di vergence shoul d vani sh. W e have checked the cancel l ati on ofthe U V di vergence both anal yti cal l y and num eri cal l y,and con rm ed that we got a U V ni te am pl i tude at the O ( ew ) order. T he IR si ngul ari ty i n the M E W vir i s ori gi nated from vi rtualphotoni c l oop correcti on. It can be cancel l ed by the contri buti on ofthe realphoton em i ssi on process. W e denote the realphoton em i ssi on as
m 2 W =R e W T (m 2 W ); m 2 Z =R e Z Z T (m 2 Z ); Z A A = R e @ A A T (p 2 ) @p 2 j p 2 = 0 ; Z Z A = 2R e Z A T (0) m 2 Z ; Z e = 1 2 Z A A + s W c W 1 2 Z Z A = 1 2R e @ A A T (p 2 ) @p 2 j p 2 = 0 + si n W cos WR e Z A T (0) m 2 Z ; (2. 17) m 2 f i =R e f ii (m 2 f i ); Zf ii = R e @ f ii (p 2 ) @p 2 j p 2 = m 2 f i ; Zf ij = R e 2 f ij (m 2 f j ) m 2 f j m 2 f i (i;j = 1;2 i6 = j):(p 1 )+ (p 2 )!f i (p 3 )+ f i (p 4 )+ (k) (f = ;t;b; i= 1;2);
(2. 20)
w here k = (k 0 ;k)i sthe fourm om entum ofthe radi ated photon,and p 1 ,p 2 ,p 3 ,p 4 are the four m om enta oftwo i ni ti alphotons and nalstate parti cl esf i f i ,respecti vel y. T he realphoton em i ssi on Feynm an di agram s for the process !f i f i are di spl ayed i n Fi g.2. In our paper, we adopt the general phase-space-sl i ci ng m ethod [ 21]to separate the soft photon em i ssi on si ngul ari ty from the realphoton em i ssi on process. B y usi ng thi s m ethod,the brem sstrahl ung phase space i s di vi ded i nto si ngul ar and non-si ngul ar regi ons. T hen the correcti on ofthe real photon em i ssi on i s broken dow n i nto correspondi ng soft and hard term s
^ E W real = ^ E W soft + ^ E W hard =^ 0 (^ E W soft +^ E W hard ): (2. 21)
In the c. m . s. fram e,the radi ated photon energy k 0 = fram e,the realcorrecti on ^ E W real i scuto i ndependent.In the cal cul ati on ofsoftterm ,we use the soft photon approxi m ati on. Si nce the di agram s i n Fi g. 2 w i th realphoton radi ati on from the i nternalsferm i on l i ne or photon-sferm i ons vertex do not l ead to IR -si ngul ari ty, we can negl ect them i n the cal cul ati on ofsoft photon em i ssi on subprocesses (2. 20) by usi ng the soft photon approxi m ati on m ethod. In thi sapproach the contri buti on ofthe softphoton em i ssi on subprocess i s expressed as [ 19,22]
d ^ E W soft = d^ 0 ew Q 2 f 2 2 Z j kj E d 3 k 2k 0 p 3 p 3 k p 4 p 4 k 2 (2. 22)
w herethe softphoton cuto E sati s esk 0 E pŝ . T hei ntegraloverthe softphoton phase space has been i m pl em ented i n R ef. [ 19] ,then one can obtai n the anal yti calresul t of the soft realphoton em i ssi on correcti on to !f ifi . A s m enti oned above,the IR di vergence ofthe vi rtualphotoni c correcti ons can be exactl y cancel l ed by that ofsoft realcorrecti on. T herefore, ^ E W vir+ soft ,the sum ofthe O ( ew ) vi rtual and soft contri buti ons, i s i ndependent of the IR regul ator m . In the fol l ow i ng num eri cal cal cul ati ons,we have checked the cancel l ati on ofIR di vergenci es and veri ed that the total contri buti onsofsoftphoton em i ssi on and the vi rtualcorrecti ons are num eri cal l y i ndependent of m . In addi ti on, we present the num eri cal veri cati on of that the total one-l oop l evel EW correcti on to the cross secti on of can be w ri tten as a sum m ati on ofthree parts as the fol l ow s:
!f i f i ,q ii (p 2 )= q(g) ii (p 2 )+ q(g) ii (p 2 )+ q(q) ii (p 2 ); (2. 23) w here q(g) ii , q(g) ii and q(q)
ii denote the scal ar quark sel f-energy parts correspondi ng to the di agram s w i th vi rtual gl uon, vi rtual gl ui no exchanges and the squark quarti c i nteracti ons respecti vel y. T he squark quarti c i nteracti ons are i ntroduced by the superpotenti al of the SU SY m odel . T he three parts from the squarkq i sel f-energy can be w ri tten expl i ci tl y as
q(g) ii (p 2 )= g 2 s C F 16 2 A 0 [ m g ]+ 4p 2 (B 0 + B 1 )[ p 2 ;m g ;mq i ]+ m 2 g B 0 [ p 2 ;m g ;mq i ] A 0 [ mq i ] ; (2. 24) q(g) ii (p 2 )= g 2 s C F 16 2 D fA 0 [ m q ]+ (m 2 g + m q mg si n 2 q )B 0 [ p 2 ;mg;m q ]+ p 2 B 1 [ p 2 ;mg;m q ] g; (2. 25) q(q) ii (p 2 )= g 2 s 12 2 fA 0 [ m 2 q 1 ]cos 2 2 q + A 0 [ m 2 q 2 ]sin 2 2 q g; (2. 26)
w here m g denotes the sm al lgl uon m ass, D = 4 2 i s the space-ti m e di m ensi on and the group C asi m i r operator has anal yti cal l y and num eri cal l y. T hen we get a U V ni te am pl i tude i ncl udi ng O ( s ) vi rtual radi ati ve correcti ons. T he IR di vergence of the Q C D vi rtualcorrecti ons of the subprocess !q i q i (q = t;b; i= 1;2) com i ng from vi rtualgl uoni c correcti on can be cancel l ed by the realsoft gl uoni c brem sstrahl ung,w hi ch i s anal ogous to the realsoft photoni c one. T he real gl uon em i ssi on di agram s ofthe process + !q i q i g are show n i n Fi g. 3. W e denote the real gl uon em i ssi on as
C F = 4 3 . T he one-l oop O ( s ) Q C D vi rtualcorrecti on ofsubprocess !q i q i (q = t;b; i = 1;2) can be expressed as ^ Q C D vir =^ 0^ Q C D vir = 1 16 ŝ 2 Zt m ax t m in dt 2R e X (M Q C D vir M y 0 ) (2.(p 1 )+ (p 2 )!q i (p 3 )+ q i (p 4 )+ g(k); (q = t;b; i= 1;2): (2. 28) 1 γ γq ĩ q i g q i 2 γ γq ĩ q i g q i 3 γ γq iq i g q i 4 γ γq iq i g q i 5 γ γq ĩ q i g q i 6 γ γq iq i g q i 7 γ γq iq i g q ĩ q i 8 γ γq iq i g q iq i 9 γ γq ĩ q i g q ĩ q i 10 γ γq iq i g q ĩ q i 11 γ γq ĩ q i g q iq i 12 γ γq ĩ q i g q iq i
Fi gure 3: T he realgl uon em i ssi on di agram s for the process !q i q i g (q = t;b; i= 1;2) A nal ogousl y, we use agai n the general phase-space-sl i ci ng m ethod to separate the soft gl uon em i ssi on si ngul ari ty from the realgl uon em i ssi on process. T herefore,the correcti on of the realgl uon em i ssi on i s di vi ded i nto soft and hard term s
^ Q C D real = ^ Q C D soft + ^ Q C D hard =^ 0 (^ Q C D soft +^ Q C D hard ) (2. 29)
B y usi ng the soft gl uon approxi m ati on, we get the contri buti on of the soft gl uon em i ssi on sunbprocess expressed as In thi s approach,we m ay agai n refer to R ef. [ 19]to get the anal yti calexpressi on ofthe soft gl uon correcti on. Fi nal l y we obtai n an U V and IR ni te
d ^ Q C D soft = d^ 0 s C F 2 2 Z j kj E g d 3 k 2k 0 p 3 p 3 kO ( s ) Q C D correcti on ^ Q C D to the subprocess !q i q i contai ni ng one-l oop O ( s ) Q C D correcti on ^ Q C D = ^ Q C D vir + ^ Q C D real =^ 0^ Q C D (2. 31) w here^ Q C D =^ Q C D vir +^ Q C D soft +^ Q C D hard i s the O ( s ) Q C D rel ati ve correcti on.
T he cross sections of parent processes
e + e ! !f i f i (f = ;t;b; i= 1;2)
T hef i f i pai r producti on vi a photon-photon fusi on i s onl y a subprocessofthe parent process e + e ! !f i f i . T he l aser back-scatteri ng techni que on el ectron beam can transform e + e beam s i nto photon beam s [ 23,24,25] . A fter i ntegrati ng over the photon l um i nosi ty i n an e + e l i near col l i der,we obtai n the totalcross secti on ofthe process
e + e ! !f i f i expressed (s)= Z xm ax 2mf i = p s dz dL dz^ ( !f i f i atŝ = z 2 s); (2. 32) w here p s and pŝ
are the e + e and c. m . s. energi es respecti vel y and dL =dz i s the di stri buti on functi on ofphoton l um i nosi ty,w hi ch i s expressed as
dL dz = 2z Z xm ax z 2 =xm ax dx x f =e (x)f =e (z 2 =x) (2. 33)
w here f =e i s the photon structure functi on of the el ectron beam [ 18,27] . For the i ni ti al unpol ari zed el ectrons and l aser photon beam s,the photon structure functi on i s gi ven by the m ost prom i si ng C om pton backscatteri ng as [ 18,28,29]
f =e = 1 D ( ) 1 x + 1 1 x 4x (1 x) + 4x 2 2 (1 x) 2 ; (2. 34) w here D ( )= (1 4 8 2 )l n (1 + )+ 1 2 + 8 1 2(1 + ) 2 ; = 2 p s! 0 m e 2 :
(2. 35) m e and p s=2 representthem assand energy oftheel ectron respecti vel y. ! 0 i sthel aser-photon energy and x i sthe fracti on ofthe energy ofthe i nci dentel ectron carri ed by the backscattered photon. T he m axi m um fracti on of energy carri ed by the backscattered photon i s x m ax = 2! m ax = p s = =(1 + ). In our cal cul ati ons, we choose ! 0 to m axi m i ze the backscattered photon energy w i thout spoi l i ng the l um i nosi ty through e + e pai r creati on. T hen we have = 2(1 + p 2),x m ax ' 0: 83,and D ( ) 1: 8397 [ 26] .
N um ericalresults
In thi s secti on,we present som e num eri calresul ts for the Z e = e 2 6(4 ) 2
<
:
4
X f N f C e 2 f + l og Q 2 x 2 f ! + Xf 2 X k= 1 N f C e 2 f + l og Q 2 m 2 f k ! + 4 2 X k= 1 + l og Q 2 m 2 k ! + 2 X k= 1 0 @ + l og Q 2 m 2 H + k 1 A 22 + l og Q 2 m 2 W ; (3. 1) w here we take x f = m Z w hen m f < m Z and x t = m t . Q f i= A l = A f .
Except above SM and M SSM i nput param eters,we shoul d have som e other param eters used i n our num eri calcal cul ati ons, for exam pl e, the Q C D renorm al i zati on scal e Q , the IR regul ari zati on param eter m (m g ) and the soft cuto E ;g =E b . In our fol l ow i ng num eri cal cal cul ati ons,we take the Q C D renorm al i zati on scal e Q to be 2mf i ,and set E ;g =E b = 10 3 , m ;g = 10 5 G eV ,i f there i s no other statem ent. A s we know ,the nalresul ts shoul d be i ndependent on IR regul ator m ;g and the cuto E ;g =E b . For dem onstrati on,we present the dependence ofthe O ( s )Q C D correcti ons to !t 1 t 1 (mt 1 = 148 G eV )i n condi ti ons of pŝ = 500 G eV and Set1 param eters(see bel ow )on the softcuto E g =E b i n Fi g. 4. T he ful l , dashed and dotted l i nes correspond to ^ Q C D vir+ soft , ^ Q C D hard and the totalcorrecti on ^ Q C D . A s show n i n thi s gure,the ful lO ( s ) one-l oop Q C D correcti on ^ Q C D i s i ndependent of the softcuto E g =E b as E g =E b runni ng from 10 5 to 10 2 ,al though both ^ Q C D vir+ soft and ^ Q C D hard depend on cuto strongl y. at the poi nt of pŝ = 2 TeV and be over 32% near the threshol d for Set1. Furtherm ore,w i th thesam ei nputparam etersasused i n [ 13] ,forexam pl e,Set2,ourcal cul ati on show sthat,w hen pŝ i s between 1200 G eV and 2000 G eV ,the EW rel ati ve correcti on to !t 1 t 1 subprocess i s about 24: 1 31: 8% ,w hi l e the EW rel ati ve correcti on to e + e !t 1 t 1 process i s about 10% [ 13] . W e noti ce that on the curves i n Fi g. 6(b) there are som e sm al lspi kes w hi ch are due to the resonance e ects. For exam pl e,the resonance e ect at the posi ti on of T he num eri calresul ts for the process e + e ! !t 1 t 1 are pl otted i n Fi g. 11. Fi g. 11(a) and Fi g. 11(c) di spl ay the B orn and ful lone-l oop EW and Q C D corrected cross secti ons as the functi ons ofM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y. A s we expect, the curves i n Fi g. 11 (a) G eV ,800G eV ,1000 G eV ,2000 G eV ,respecti vel y. F igure 10(b) T he ful lO ( ew )EW rel ati ve correcti ons to the e + e ! !~ 1 ~ 1 process asthe functi onsofM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y. F igure 10(c) he B orn and ful l O ( ew ) EW corrected cross secti ons for the e + e ! !~ 2 ~ 2 process as functi ons of the soft-breaki ng sferm i on m ass M S U S Y w i th p s = 500 G eV ,800G eV ,1000 G eV ,2000 G eV ,respecti vel y.
F igure 10(d) T he ful lO ( ew )EW rel ati ve correcti ons to the e + e ! !~ 2 ~ 2 process asthe functi onsofM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y. F igure 11(a) T he B orn and ful lO ( ew ) EW corrected cross secti ons for the e + e ! !t 1 t 1 processasthe functi onsofthe soft-breaki ng sferm i on m assM S U S Y w i th p s = 500, 800,1000,2000 G eV ,respecti vel y.
F igure 11(b) T he ful lO ( ew )EW rel ati ve correcti ons to the e + e ! !t 1 t 1 process asthe functi onsofM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y. F igure 11(c) T he B orn and ful lO ( s ) Q C D corrected cross secti ons for the e + e ! !t 1 t 1 processasthe functi onsofthe soft-breaki ng sferm i on m assM S U S Y w i th p s = 500, 800,1000,2000 G eV ,respecti vel y.
F igure 11(d) T he ful lO ( s ) Q C D rel ati ve correcti ons to the e + e ! !t 1 t 1 process asthe functi onsofM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y. F igure 12(a) T he B orn and ful lO ( ew ) EW corrected cross secti ons for the e + e ! !t 2 t 2 process as the functi ons of the soft-breaki ng sferm i on m ass M S U S Y w i th p s = 1000 G eV ,2000 G eV ,respecti vel y.
F igure 12(b) T he ful lO ( ew )EW rel ati ve correcti ons to the e + e ! !t 2 t 2 process as the functi ons ofM S U S Y w i th p s = 1000 G eV ,2000 G eV ,respecti vel y. F igure 12(c) T he B orn and ful lO ( s ) Q C D corrected cross secti ons for the e + e ! !t 2 t 2 process as the functi ons of the soft-breaki ng sferm i on m ass M S U S Y w i th p s = 1000 G eV ,2000 G eV ,respecti vel y.
F igure 12(d) T he ful lO ( s ) Q C D rel ati ve correcti ons to the e + e ! !t 2 t 2 process as the functi ons ofM S U S Y w i th p s = 1000 G eV ,2000 G eV ,respecti vel y. F igure 13(a) T he B orn and ful lO ( ew ) EW corrected cross secti ons for the e + e ! !b 1 b 1 processasthe functi onsofthe soft-breaki ng sferm i on m assM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y.
F igure 13(b) T he ful lO ( ew )EW rel ati ve correcti ons to the e + e ! !b 1 b 1 process asthe functi onsofM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y. F igure 13(c) T he B orn and ful lO ( s ) Q C D corrected cross secti ons for the e + e ! !b 1 b 1 processasthe functi onsofthe soft-breaki ng sferm i on m assM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y.
F igure 13(d) T he ful lO ( s )Q C D rel ati ve correcti ons to the e + e ! !b 1 b 1 process asthe functi onsofM S U S Y w i th p s = 500 G eV ,800 G eV ,1000 G eV ,2000 G eV ,respecti vel y.
are three Feynm an di agram s for thi s subprocess at the tree l evel ,w hi ch are show n i n Fi g. 1. T he correspondi ng tree l evelam pl i tudes ofthi s subprocess !f i f i are represented as 1: T he l owest order di agram s for the !f i f i (f = ;t;b) subprocess.
M 0 =
0Mt 0 + Mû 0 + Mq 0 (2. 10) w here Mt 0 , Mû 0 and Mq 0 represent the am pl i tudes of the t-channel , u-channel and quarti c coupl i ng di agram s respecti vel y. T he expl i ci t expressi ons can be w ri tten as
m eans taki ng the real part of the l oop i ntegral s appeari ng i n the sel f-energy. T he f ij ;(i;j = 1;2) appeared i n Eqs. (2. 18) denote the unrenorm al i zed sferm i on sel f-energy i nvol vi ng onl y the EW i nteracti ons. T he O ( ew ) one-l oop vi rtualcorrecti ons to !f i f i expressi ons oft m ax;m in have been presented i n Eq. (2. 14) and the sum m ati on w i th bar overhead representssam e operati on asthatappeared i n Eq. (2. 13). M E W vir i sthe renorm al i zed am pl i tude ofthe EW one-l oop Feynm an di agram s,w hi ch i ncl ude sel f-energy,vertex,box and counter-term di agram s.
q j kj 2 + m 2 i s cal l ed ' soft'i fk 0 E or ' hard' i f k 0 > E . H ere, m i s a sm al l photon m ass, w hi ch i s used to regul ate the i nfrared di vergences exi sti ng i n the soft term . A l though both ^ E W soft and ^ E W hard depend on the soft photon cuto E =E b , w here E b = pŝ 2 i s the el ectron beam energy i n the c. m . s.
2: T he realphoton em i ssi on di agram s for the process !f i f i (f = ;t;b)
2. 3 O
3( s ) Q C D correction to subprocess !q i q i (q = t;b; i= 1;2) In thi s subsecti on, we cal cul ate the supersym m etri c O ( s ) Q C D correcti ons. T he rel evant Feynm an di agram s and the correspondi ng am pl i tudes of the subprocess !q i q i ; (q = t;b; i = 1;2) both at tree-l evel and at one-l oop l evel , are agai n generated by the package FeynA rts 3 [ 20] . T he Feynm an di agram s ofthe one-l oop O ( s ) Q C D correcti ons al so can be cl assi ed i nto sel f-energy,vertex,box and counter-term di agram s. T he rel evant renorm al i zed constants used i n the cal cul ati on are si m i l ar w i th those i n the cal cul ati on of the one-l oop O ( ew ) EW correcti on, w hi ch are de ned and expressed as i n Eqs. (2. 16) and Eqs. (2. 18) respecti vel y,exceptal lthe EW one-l oop sel f-energi es are repl aced by the correspondi ng Q C D ones. T he SU SY Q C D unrenorm al i zed sel f-energy of the scal ar quarkq i (q = t;b;i = 1;2)
27) w here M Q C D vir i s the renorm al i zed am pl i tude ofthe one-l oop O ( s ) Q C D Feynm an di agram s, w hi ch i ncl ude sel f-energy,vertex,box and counter-term di agram s. T he vi rtual Q C D correcti ons contai n both ul travi ol et (U V ) and i nfrared (IR ) di vergences i n general . To regul ari ze the U V di vergences i n l oop i ntegral s, we adopt the dim ensi onal regul ari zati on i n w hi ch the di m ensi ons of spi nor and space-ti m e m ani fol ds are extended to D = 4 2 . W e have veri ed the cancel l ati on of the U V di vergence both
w hi ch E g i s the energy cuto ofthe soft gl uon and k 0 E g pŝ . k 0 = q j kj 2 + m 2 g i s the gl uon energy. p 3 and p 4 are the four m om enta of two nalstate parti cl esq i and q i .
one l oop O ( s ) Q C D and O ( ew ) EW correcti ons to subprocesses !f i f i and the parent processes e + e ! !f i f i . In our num eri cal cal cul ati on, the SM param eters are set to be s (m 2 Z ) = 0: 1190, m e = 0: 5110998902 M eV , m = 105: 658357 M eV , m = 1: 77699 G eV , m u = 66 M eV , m d = 66 M eV , m c = 1: 2 G eV , m s = 150 M eV , m b = 4: 3 G eV , m t = 174: 3 G eV , m Z = 91: 1876 G eV , m W = 80: 423 G eV [ 31] . T here we use the e ecti ve val ues of the l i ght quark m asses (m u and m d ) w hi ch can reproduce the hadron contri buti on to the shi ft i n the ne structure constant ew (m 2 Z )[ 32] . W e take the nestructure constantatthe Z 0 -pol e asi nputparam eter, ew (m 2 Z ) 1 j M S = 127: 918[ 31] . T hen from Eq. (2. 17) we get the counter-term ofthe el ectri c charge i n D R schem e expressed as [ 33,34,13]
s the el ectri c charge of(s)ferm i on and = 2= + l og4 . N f C i s col or factor, w hi ch equal to 1 and 3 for (s)l eptons and (s)quarks, respecti vel y. It i s obvi ous that there i s a l i ttl e di screpancy between our el ectri c charge counter-term expressi on( Eq. (3. 1)) and that i n subsecti on 3. 1 ofR ef.[ 13] .T he M SSM param eters are determ i ned by usi ng Form C al c package w i th fol l ow i ng i nput param eters[ 35] : (1) T he i nput param eters for M SSM H i ggs sector are the C P-odd m ass M A 0 and tan w i th the constrai nt tan 2: 5. T he m asses ofthe M SSM H i ggs sector are xed by taki ng i nto account the si gni cant radi ati ve correcti ons. (2) T he i nput param eters for the chargi no and neutral i no sector are the gaugi no m ass param eters M 1 , M 2 and the H i ggsi no-m ass param eter . W e adopt the grand uni cati on theory(G U T ) rel ati on M 1 = (5=3)tan 2 W M 2 for si m pl i cati on [30]and the gl ui no m ass mg i s eval uated by mg = s (Q )= ew (m Z )si n 2 W M 2 . (3) For the sferm i on sector, we assum e MQ = MŨ = MD = MẼ = ML = M S U S Y and take the soft tri l i near coupl i ngs for sferm i onsq andlbei ng equal ,i . e. ,A q
Fi gure 4: T he ful lO ( s ) Q C D correcti ons to !t 1 t 1 as a functi on ofthe soft gl uon cuto E g =E b i n condi ti ons of pŝ = 500 G eV and Set1 param eters. In order to show and di scuss the e ects ofthe radi ati ve correcti ons to the subprocess of !f i f i quanti tati vel y,we choose the fol l ow i ng four typi caldata sets: Set1: tan = 6, M A 0 = 250 G eV ,M S U S Y = 200 G eV , = 800 G eV ,M 2 = 200 G eV and A f = 400 G eV . T hen we have m~ 1;2 = (185;223) G eV ,mt 1;2 = (148;340) G eV and mb 1;2 = (146;250) G eV . Set2: tan = 20,M A 0 = 300 G eV ,M S U S Y = 400 G eV , = 1000 G eV ,M 2 = 200 G eV and A f = 500 G eV . T hen we have m~ 1;2 = (354;446) G eV ,mt 1;2 = (304;533) G eV and mb 1;2 = (256;508) G eV . Set3: tan = 30,M A 0 = 300 G eV ,M S U S Y = 250 G eV , = 200 G eV ,M 2 = 800 G eV and A f = 250 G eV . T hen we have m~ 1;2 = (231;275) G eV ,mt 1;2 = (215;368) G eV and mb 1;2 = (188;307) G eV .
Fi gure 5 :
5the i nputparam eters tan ,M A 0 ,M S U S Y , ,M 2 and A f i n above data sets,al lthe m assesofsupersym m etri cparti cl escan beobtai ned by usi ng packageForm C al c. Set1(orSet2) i sthecaseofgaugi no-l i kew i th sm al l (orm edi ate)tan ,butl i ghter(orheavi er)sferm i ons,w hi l e Set3 and Set4 are hi ggsi no-l i ke case w i th l arger tan . T he B orn and the ful lO ( ew ) EW corrected cross secti ons for the !~ 1 ~ 1 subprocess as the functi ons of c. m . s. energy of col l i der w i th above four data sets are di spl ayed i n Fi g. 5(a). T here^ 0;i ' s are the B orn cross secti ons and^ 1;i ' s are the ful lO ( ew )EW corrected crosssecti onsforthesubprocess !~ 1 ~ 1 .T hesubscri ptigoesfrom 1 to 4,w hi ch correspond to the data Set1,Set2,Set3 and Set4 respecti vel y. T he O ( ew ) EW corrected cross secti on w i th Set4 can achi eve the m axi m alval ue 0. 278 pb atthe energy nearthe threshol d pŝ 400 G eV .W hen pŝ approaches to 1. 5 TeV ,the EW corrected cross secti on w i th Set2 goes dow n to 27. 5 fb,buti ti ssti l lm uch l argerthan thatforthe processofe + e !~ 1 ~ 1 [ 13,14]w i th the sam e i nput param eters. In Fi g. 5(b),the rel ati ve O ( ew ) EW correcti ons w i th the four data sets are depi cted. A s i t can be seen i n thi s gure,the rel ati ve correcti ons^ al so have thei r m axi m alval ues at the posi ti on near the threshol d energi es and then decrease quanti tati vel y w i th the i ncrem ent of pŝ . W hen the c. m . s. energy pŝ goes from the threshol d val ue of~ 1 ~ 1 pai r producti on to 2 TeV ,the ful lO ( ew ) EW correcti ons can enhance or reduce the B orn cross secti on dependi ng on the col l i di ng energy. A t the posi ti on of col l i di ng energy pŝ = 2 TeV , the rel ati ve EW correcti on^ can reach 24: 6% , 24: 1% , 23: 5% and 23: 2% for Set1, Set2, Set3 and Set4 respecti vel y. Fi g. 5(c) show s the num eri cal resul ts of the cross secti ons of !~ 2 ~ 2 subprocess both at the B orn l eveland one-l oop l evel ,as the functi ons of the col l i di ng energy pŝ . Fi g. 5(d) di spl ays the rel ati ve O ( ew ) EW correcti on for~ 2 ~ 2pai r producti on as a functi on of pŝ . W e nd that the behavi or ofcurves i n Fi g. 5(c),w hi ch correspond to the B orn,the EW corrected crosssecti ons of~ 2 ~ 2 producti on,are qui te si m i l ar to thosei n Fi g. 5(a)for~ 1 ~ 1 producti on.B uttheval uesofthecrosssecti onsfor~ 2 ~ 2 producti on are al ways sm al l er due to the heavi er m ass of~ 2 ,and can reach 0. 173 pb near the threshol d energy of~ 2 ~ 2 pai r producti on i n the case ofSet1. T he m agni tude ofEW rel ati ve correcti on i s about-26. 1% or -24. 5% at the posi ti on of pŝ = 2 TeV ,w hi ch i s cl ose to that of !~ 1 ~ 1 subprocess. In Fi g. 6(a) and Fi g. 6(c),we depi ct the ful lO ( ew ) EW and O ( s ) Q C D corrected cross secti ons for the subprocess !t 1 t 1 . A nal ogousl y,^ 0;i (i = 1 ;4) m ean the tree-l evel cross secti ons correspondi ng to the four i nput data sets respecti vel y, and^ 1;i ' s are the ful l one-l oop corrected cross secti ons. Fi g. 6(a) dem onstrates that the correspondi ng two curves forB orn and O ( ew )EW corrected crosssecti onsi n the sam e condi ti on ofthe i nputdata set, have the sam e l i ne shape. W hi l e Fi g. 6(c) show s obvi ousl y that the O ( s ) Q C D correcti ons can be l arger than the O ( ew ) EW correcti ons,especi al l y near the threshol d. T he EW and Q C D rel ati ve correcti onstot 1 t 1 pai rproducti on subprocessare di spl ayed i n Fi g. 6(b)and (d), respecti vel y. From Fi g. 
6(b),we can see thatthe O ( ew )EW rel ati ve correcti onsto !t 1 t 1 subprocessvary from posi ti ve val uesto negati ve onesas pŝ runni ng from the threshol d val ue to 2 TeV .T he absol ute val ue ofthe rel ati ve correcti on j^ E W jfor Set3 can reach about34(a) T he B orn and ful l O ( ew ) EW corrected cross secti ons for the !~ 1 ~ 1 subprocess as the functi ons ofc. m . s. energy of col l i der pŝ w i th four di erent data sets, respecti vel y. (b) T he ful lO ( ew ) EW rel ati ve correcti on to the !~ 1 ~ 1 subprocess. T he sol i d , dashed, dotted and dash-dotted curves correspond to four di erent data set cases, respecti vel y. (c) T he B orn and ful lO ( ew ) EW corrected cross secti ons for the !~ 2 ~ 2 subprocess as the functi ons ofc. m . s. energy of col l i der pŝ w i th four di erent data sets, respecti vel y. (d) T he ful lO ( ew ) EW rel ati ve correcti on to the !~ 2 ~ 2 subprocess. T he sol i d , dashed, dotted and dash-dotted curves correspond to four di erent data set cases, respecti vel y.
Fi gure 6 :Fi gure 7 :
67(a) T he B orn and ful l O ( ew ) EW corrected cross secti ons for the !t 1 t 1 subprocess as the functi ons ofc. m . s. energy of col l i der pŝ w i th four di erent data sets, respecti vel y. (b) T he ful l O ( ew ) EW rel ati ve correcti on to !t 1 t 1 subprocess. Four di erent curves correspond to four di erent data sets, respecti vel y. (c) T he B orn and ful l O ( s ) Q C D corrected cross secti ons for the !t 1 t 1 subprocess as the functi ons ofc. m . s. energy of col l i der pŝ w i th four di erent data sets,respecti vel y. (d) T he ful lO ( s ) Q C D rel ati ve correcti on to !t 1 t 1 subprocess. (a) T he B orn and ful l O ( ew ) EW corrected cross secti ons for the !t 2 t 2 subprocess as the functi ons ofc. m . s. energy of col l i der pŝ w i th four di erent data sets, respecti vel y. (b) T he ful l O ( ew ) EW rel ati ve correcti on to !t 2 t 2 subprocess. Four di erent curves correspond to four di erent data sets, respecti vel y. (c) T he B orn and ful l O ( s ) Q C D corrected cross secti ons for the !t 2 t 2 subprocess as the functi ons ofc. m . s. energy of col l i der pŝ w i th four di erent data sets,respecti vel y. (d) T he ful lO ( s ) Q C D rel ati ve correcti on to !t 2 t 2 subprocess.
… ≈ 2m_{H^+} for input data Set1, while the resonance effect at the position of √ŝ ≈ 1066 GeV is caused by √ŝ ≈ 2m_{t̃_2} for input data Set2. Furthermore, when we observe Fig. 6(d), which shows the relative QCD correction as a function of c.m.s. energy √ŝ for γγ → t̃_1 t̃_1, we can see that the values of δ̂_QCD decrease rapidly to minimal values just after √ŝ goes up from the threshold energy, and then increase slowly to 23.8%, 11.4%, 19.3% and 27.6% at the position of √ŝ = 2 TeV for Set1, Set2, Set3 and Set4, respectively.

The results for the subprocess γγ → t̃_2 t̃_2 are drawn in Fig. 7(a-d). The full O(α_ew) EW and O(α_s) QCD corrected cross sections are plotted in Fig. 7(a) and Fig. 7(c), respectively. Comparing these two figures with Fig. 6(a) and Fig. 6(c), we can see that the cross sections for the γγ → t̃_2 t̃_2 subprocess are almost one order smaller than those for the γγ → t̃_1 t̃_1 subprocess quantitatively, due to m_{t̃_2} > m_{t̃_1}. The EW and QCD relative corrections to the subprocess γγ → t̃_2 t̃_2 are plotted in Fig. 7(b) and Fig. 7(d), respectively. From Fig. 7(d), we see that the values of the QCD relative corrections are rather large near the threshold, and at the position of √ŝ = 2 TeV they are 10.2%, 0.7%, 12.5% and 13.2% for data Set1, Set2, Set3 and Set4 respectively, which are less than the corresponding QCD relative corrections to the γγ → t̃_1 t̃_1 subprocess shown in Fig. 6(d). The absolute EW relative corrections are generally larger than the absolute QCD relative corrections shown in Fig. 7(d), except in the threshold energy regions. The values of the EW relative corrections are about 45% for the two gaugino-like data sets Set1 and Set2 and 43% for the two higgsino-like data sets Set3 and Set4 at the position of √ŝ = 2 TeV. The EW relative corrections to the γγ → t̃_2 t̃_2 cross section are negative in the range √ŝ = 700-2000 GeV with all four data sets and have minimal values near the position of √ŝ ≈ 800 GeV for Set1, Set3 and Set4, and in the vicinity of √ŝ ≈ 1200 GeV for Set2.

We also show the b̃_i b̃_i (i = 1, 2) pair productions in Fig. 8 and Fig. 9. Fig. 8(a) is plotted for the Born and full O(α_ew) EW corrected cross sections of the γγ → b̃_1 b̃_1 subprocess as the functions of √ŝ with four data sets respectively, and Fig. 8(c) for the Born and full O(α_s) QCD corrected cross sections. Fig. 8(b) and Fig. 8(d) display the EW and QCD relative corrections, respectively. These figures show that the behaviors of the curves for b̃_1 b̃_1 pair production are similar to those for t̃_1 t̃_1 pair production. Comparing Fig. 6(b) with Fig. 8(b), we notice that the full O(α_ew) EW relative correction to the γγ → b̃_1 b̃_1 subprocess is larger than that to the γγ → t̃_1 t̃_1 subprocess. The maximum absolute value of the former can reach 50.7% for Set2 at the position of √ŝ ≈ 2000 GeV, which is even larger than the QCD correction to γγ → b̃_1 b̃_1. Again, all of the small spikes appearing on the curves of Fig. 8(a-b) are due to resonance effects: the spikes at the positions of √ŝ ≈ 525 GeV for Set1 and √ŝ ≈ 621 GeV for Set2 are caused by √ŝ ≈ 2m_{H^+}, while the condition √ŝ ≈ 2m_{t̃_2} leads to the spike at the position of √ŝ ≈ 1066 GeV for Set2. In Fig. 8(d), the solid, dashed, dotted and dash-dotted lines correspond to the QCD relative corrections with parameter scenarios Set1, Set2, Set3 and Set4, respectively. Although Fig. 8(d) demonstrates that the QCD corrections in the region near the threshold energy of b̃_1 b̃_1 pair production are extremely large, these values are untrustworthy due to non-perturbative QCD effects. The values of the relative O(α_s) QCD corrections at the position of √ŝ = 2000 GeV are 23.1%, 13.4%, 21.5% and 28.7% for Set1, Set2, Set3 and Set4 respectively.

Figure 8: (a) The Born and full O(α_ew) EW corrected cross sections for the γγ → b̃_1 b̃_1 subprocess as the functions of c.m.s. energy √ŝ with four data sets, respectively. (b) The full one-loop O(α_ew) EW relative correction to the γγ → b̃_1 b̃_1 subprocess. (c) The Born and full O(α_s) QCD corrected cross sections for the γγ → b̃_1 b̃_1 subprocess as the functions of c.m.s. energy √ŝ with four data sets, respectively. (d) The full O(α_s) QCD relative correction to the γγ → b̃_1 b̃_1 subprocess.

The full O(α_ew) EW and O(α_s) QCD corrected cross sections for the γγ → b̃_2 b̃_2 subprocess are depicted in Fig. 9(a) and Fig. 9(c) separately, while their corresponding relative corrections are plotted in Fig. 9(b) and Fig. 9(d) respectively. Although the line-shapes in Fig. 9(a) and Fig. 9(c) are similar to the corresponding ones in Fig. 8(a) and (c) for b̃_1 b̃_1 pair production, the values of the corrected cross section in Fig. 9(a) and Fig. 9(c) are much smaller due to m_{b̃_2} > m_{b̃_1}. Nevertheless, Fig. 9(b) shows that when √ŝ is large enough, the absolute EW relative corrections to γγ → b̃_2 b̃_2 approach about 50% or beyond for all four data sets. The peak at the position of √ŝ ≈ 525 GeV for Set1 in Fig. 9(b) comes from the resonance effect of √ŝ ≈ 2m_{H^+}. We find also from Fig. 9(d) that the absolute QCD relative corrections to the γγ → b̃_2 b̃_2 subprocess are generally comparable with the EW corrections or even smaller than the EW ones, especially in the large colliding energy region.

Figure 9: (a) The Born and full O(α_ew) EW corrected cross sections for the γγ → b̃_2 b̃_2 subprocess as the functions of c.m.s. energy √ŝ with four data sets, respectively. (b) The full O(α_ew) EW relative corrections to the γγ → b̃_2 b̃_2 subprocess. (c) The Born and full O(α_s) QCD corrected cross sections for the γγ → b̃_2 b̃_2 subprocess as the functions of c.m.s. energy √ŝ with four data sets, respectively. (d) The full O(α_s) QCD relative correction to the γγ → b̃_2 b̃_2 subprocess.

In the following discussion, we present some numerical results for the parent process e⁺e⁻ → γγ → f̃_i f̃_i (f̃ = τ̃, t̃, b̃; i = 1, 2). For convenience, we denote the cross sections of the parent process e⁺e⁻ → γγ → f̃_i f̃_i containing the O(α_ew) EW and O(α_s) QCD corrections as

σ_EW = σ_0 + Δσ_EW = σ_0 (1 + δ_EW),   σ_QCD = σ_0 + Δσ_QCD = σ_0 (1 + δ_QCD),

where δ_EW and δ_QCD are the O(α_ew) EW and O(α_s) QCD relative corrections, respectively. In the following numerical calculations, we take the input parameters of the higgsino-like data Set3, but let M_SUSY run from 200 GeV to 400 GeV.

Fig. 10(a) and Fig. 10(c) show the Born and full O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → τ̃_1 τ̃_1 and e⁺e⁻ → γγ → τ̃_2 τ̃_2 processes as the functions of the soft-breaking sfermion mass M_SUSY. In Fig. 10(a), the solid and dashed curves correspond to the Born and full O(α_ew) EW corrected cross sections for √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. It is obvious that all the curves for the Born and EW corrected cross sections decrease rapidly as M_SUSY goes up from 200 to 400 GeV, but the damping decrement gets smaller with the increment of the colliding energy √s. We can read from Fig. 10(a) that the values of the EW corrected cross sections decrease from 28.3 fb, 98.9 fb, 110 fb and 92.8 fb to 2.25 fb, 0.1 fb, 1.28 fb and 20.8 fb for √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV respectively, when M_SUSY increases from 200 GeV to 400 GeV. In Fig. 10(c), the curves show the Born and O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → τ̃_2 τ̃_2 process with √s = 800 GeV, 1000 GeV, 2000 GeV respectively. All the curves have the analogous tendency as the curves in Fig. 10(a). The values of the corrected cross sections in Fig. 10(c) are smaller than those for e⁺e⁻ → γγ → τ̃_1 τ̃_1 in Fig. 10(a). The EW relative corrections to the e⁺e⁻ → γγ → τ̃_1 τ̃_1 process as the functions of M_SUSY are depicted in Fig. 10(b) for √s = 500 GeV, 800 GeV, 1000 GeV and 2000 GeV. From this figure, we can see that in the range of M_SUSY = 200 GeV to 400 GeV this relative correction can reach 5.46% at the position of M_SUSY = 340 GeV when we take √s = 800 GeV. If we take the e⁺e⁻ colliding energy √s = 2 TeV, we get an 18.69% relative correction to the e⁺e⁻ → γγ → τ̃_1 τ̃_1 process when M_SUSY = 400 GeV. Fig. 10(d) displays the EW relative corrections to the e⁺e⁻ → γγ → τ̃_2 τ̃_2 process. We can see from this figure that the numerical values of these relative corrections increase rapidly from −16.43% to −3.56% when M_SUSY …; the value of δ_EW is relatively stable and stays in the range [−18.93%, −19.72%] as M_SUSY varies from 200 GeV to 400 GeV.

Figure 10: (a) The Born and full O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → τ̃_1 τ̃_1 process as functions of the soft-breaking sfermion mass M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. (b) The full O(α_ew) EW relative corrections to the e⁺e⁻ → γγ → τ̃_1 τ̃_1 process as the functions of M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. (c) The Born and full O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → τ̃_2 τ̃_2 process as functions of the soft-breaking sfermion mass M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. (d) The full O(α_ew) EW relative corrections to the e⁺e⁻ → γγ → τ̃_2 τ̃_2 process as the functions of M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively.
The curves in Fig. 11(a), for the cross sections in the Born approximation and at the O(α_ew) EW one-loop level, have some similar behaviors with those for the τ̃_1 τ̃_1 production process shown in Fig. 10(a). We can find from Fig. 11(a) that the full O(α_ew) EW corrected cross sections decrease from 87.3 fb and 65.3 fb to 1.21 fb and 11.8 fb when M_SUSY goes from 200 GeV to 400 GeV for √s = 1000 GeV, 2000 GeV, respectively. In Fig. 11(c) the solid and dotted curves correspond to the Born and one-loop QCD corrected cross sections versus M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. From this figure, we can see that the value of the QCD corrected cross section reaches 118 fb for √s = 1000 GeV with our chosen parameters. In order to study the EW and QCD radiative corrections more clearly, we plot the EW and QCD relative corrections to the e⁺e⁻ → γγ → t̃_1 t̃_1 process in Fig. 11(b) and Fig. 11(d). In Fig. 11(b), the resonance effect at the position of M_SUSY = 386 GeV is due to the condition m_{t̃_1} ≈ m_t + m_{χ̃⁰_1}. Fig. 11(b) shows that the EW relative corrections for √s = 1000 GeV and 2000 GeV can reach 27.15% and 26.78% at the position of M_SUSY = 400 GeV. From Fig. 11(d) we find the curves for √s = 500 GeV, 800 GeV and 1000 GeV go up fleetly with the increment of M_SUSY, but for the curve of √s = 2000 GeV the relative correction is almost stable, varying in the range of [14.6%, 13.3%].

The results for e⁺e⁻ → γγ → t̃_2 t̃_2 are represented in Fig. 12. Fig. 12(a) shows the plot of the Born and full one-loop EW corrected cross sections versus M_SUSY. Fig. 12(b) describes the EW relative corrections as the functions of M_SUSY. The QCD corrected cross sections and QCD relative corrections are plotted in Fig. 12(c) and Fig. 12(d), respectively. In Fig. 12(a) the EW corrected cross section of the e⁺e⁻ → γγ → t̃_2 t̃_2 process decreases from 13.2 fb to 5.3 fb for √s = 2000 GeV, when M_SUSY increases from 200 GeV to 400 GeV. At the position of M_SUSY = 266 GeV in Fig. 12(a), there is a dithering on the curve of √s = 2000 GeV, which is due to the resonance effect of m_{t̃_2} ≈ m_t + m_{χ̃⁰_2}. The resonance effect on the EW relative correction curves at the position near M_SUSY = 266 GeV is also shown in Fig. 12(b). In Fig. 12(d), the QCD relative corrections for √s = 1000 GeV are rather large and vary in the region between 36.4% and 76.1% with the increment of M_SUSY, but the QCD relative corrections for √s = 2000 GeV are smaller, and have values in the range between 15.5% and 18.6%.

We also present the results of the e⁺e⁻ → γγ → b̃_1 b̃_1 process in Fig. 13. Fig. 13(a) and Fig. 13(c) show the Born and full one-loop EW and QCD corrected cross sections, respectively. In Fig. 13(a) and Fig. 13(c) we find that all the curves of cross sections at the Born level and involving EW and QCD one-loop contributions decrease with the increment of M_SUSY. For example, when M_SUSY varies from 200 to 400 GeV, the two curves of the cross sections including full one-loop EW corrections for √s = 1000 GeV and 2000 GeV in Fig. 13(a) go down from 11.1 fb and 6.6 fb to 0.13 fb and 0.65 fb respectively, while the two curves in Fig. 13(c), which represent the cross sections including QCD corrections for √s = 1000 GeV and 2000 GeV, decrease from 12.8 fb and 8.4 fb to 0.3 fb and 1.3 fb, respectively.

Figure 11: (a) The Born and full O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → t̃_1 t̃_1 process as the functions of the soft-breaking sfermion mass M_SUSY with √s = 500, 800, 1000, 2000 GeV, respectively. (b) The full O(α_ew) EW relative corrections to the e⁺e⁻ → γγ → t̃_1 t̃_1 process as the functions of M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. (c) The Born and full O(α_s) QCD corrected cross sections for the e⁺e⁻ → γγ → t̃_1 t̃_1 process as the functions of the soft-breaking sfermion mass M_SUSY with √s = 500, 800, 1000, 2000 GeV, respectively. (d) The full O(α_s) QCD relative corrections to the e⁺e⁻ → γγ → t̃_1 t̃_1 process as the functions of M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively.
Figure 12: (a) The Born and full O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → t̃_2 t̃_2 process as the functions of the soft-breaking sfermion mass M_SUSY with √s = 1000 GeV, 2000 GeV, respectively. (b) The full O(α_ew) EW relative corrections to the e⁺e⁻ → γγ → t̃_2 t̃_2 process as the functions of M_SUSY with √s = 1000 GeV, 2000 GeV, respectively. (c) The Born and full O(α_s) QCD corrected cross sections for the e⁺e⁻ → γγ → t̃_2 t̃_2 process as functions of the soft-breaking sfermion mass M_SUSY with √s = 1000 GeV, 2000 GeV, respectively. (d) The full O(α_s) QCD relative corrections to the e⁺e⁻ → γγ → t̃_2 t̃_2 process as the functions of M_SUSY with √s = 1000 GeV, 2000 GeV, respectively.

Figure 13: (a) The Born and full O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → b̃_1 b̃_1 process as the functions of the soft-breaking sfermion mass M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. (b) The full O(α_ew) EW relative corrections to the e⁺e⁻ → γγ → b̃_1 b̃_1 process as the functions of M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. (c) The Born and full O(α_s) QCD corrected cross sections for the e⁺e⁻ → γγ → b̃_1 b̃_1 process as the functions of the soft-breaking sfermion mass M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively. (d) The full O(α_s) QCD relative corrections to the e⁺e⁻ → γγ → b̃_1 b̃_1 process as the functions of M_SUSY with √s = 500 GeV, 800 GeV, 1000 GeV, 2000 GeV, respectively.
Figure 14: (a) The Born and full O(α_ew) EW corrected cross sections for the e⁺e⁻ → γγ → b̃_2 b̃_2 process as the functions of the soft-breaking sfermion mass M_SUSY with √s = 800 GeV, 1000 GeV, 2000 GeV, respectively. (b) The full O(α_ew) EW relative corrections to the e⁺e⁻ → γγ → b̃_2 b̃_2 process as the functions of M_SUSY with √s = 800 GeV, 1000 GeV, 2000 GeV, respectively. (c) The Born and full O(α_s) QCD corrected cross sections for the e⁺e⁻ → γγ → b̃_2 b̃_2 process as the functions of the soft-breaking sfermion mass M_SUSY with √s = 800 GeV, 1000 GeV, 2000 GeV, respectively. (d) The full O(α_s) QCD relative corrections to the e⁺e⁻ → γγ → b̃_2 b̃_2 process as the functions of M_SUSY with √s = 800 GeV, 1000 GeV, 2000 GeV, respectively.
Figure 1: The lowest order Feynman diagrams for the subprocess γγ → f̃_i f̃_i (f̃ = τ̃, t̃, b̃).

Figure 2: The real photon emission diagrams for the process γγ → f̃_i f̃_i (f̃ = τ̃, t̃, b̃).

Figure 3: The real gluon emission diagrams for the process γγ → q̃_i q̃_i g (q̃ = t̃, b̃).

Figure 4: The full O(α_s) QCD corrections to γγ → t̃_1 t̃_1 as a function of the soft gluon cutoff ΔE_g/E_b in conditions of √ŝ = 500 GeV and Set1 parameters.

Figure 5: (a) The Born and full O(α_ew) EW corrected cross sections for the γγ → τ̃_1 τ̃_1 subprocess as the functions of c.m.s. energy of the collider √ŝ with four different data sets, respectively. (b) The full O(α_ew) EW relative correction to the γγ → τ̃_1 τ̃_1 subprocess; the solid, dashed, dotted and dash-dotted curves correspond to the four different data set cases, respectively. (c) The Born and full O(α_ew) EW corrected cross sections for the γγ → τ̃_2 τ̃_2 subprocess as the functions of c.m.s. energy of the collider √ŝ with four different data sets, respectively. (d) The full O(α_ew) EW relative correction to the γγ → τ̃_2 τ̃_2 subprocess; the solid, dashed, dotted and dash-dotted curves correspond to the four different data set cases, respectively.

Figure 6: (a) The Born and full O(α_ew) EW corrected cross sections for the γγ → t̃_1 t̃_1 subprocess as the functions of c.m.s. energy of the collider √ŝ with four different data sets, respectively. (b) The full O(α_ew) EW relative correction to the γγ → t̃_1 t̃_1 subprocess; four different curves correspond to the four different data sets, respectively. (c) The Born and full O(α_s) QCD corrected cross sections for the γγ → t̃_1 t̃_1 subprocess as the functions of c.m.s. energy of the collider √ŝ with four different data sets, respectively. (d) The full O(α_s) QCD relative correction to the γγ → t̃_1 t̃_1 subprocess.

Figure 7: (a) The Born and full O(α_ew) EW corrected cross sections for the γγ → t̃_2 t̃_2 subprocess as the functions of c.m.s. energy of the collider √ŝ with four different data sets, respectively. (b) The full O(α_ew) EW relative correction to the γγ → t̃_2 t̃_2 subprocess; four different curves correspond to the four different data sets, respectively. (c) The Born and full O(α_s) QCD corrected cross sections for the γγ → t̃_2 t̃_2 subprocess as the functions of c.m.s. energy of the collider √ŝ with four different data sets, respectively. (d) The full O(α_s) QCD relative correction to the γγ → t̃_2 t̃_2 subprocess.
…is the velocity of the produced scalar fermion. The kinematical variable v is defined as v = (1 − β̄)/(1 + β̄). For squarks we have N_C^f = 3, while for sleptons N_C^f = 1.

Finally, we get a UV- and IR-finite O(α_ew) EW correction Δσ̂_EW, defined as Δσ̂_EW = Δσ̂_EW^vir + Δσ̂_EW^real, which is independent of the cutoff ΔE:

Δσ̂_EW = Δσ̂_EW^vir + Δσ̂_EW^real = σ̂_0 δ̂_EW ,

where δ̂_EW = δ̂_EW^vir + δ̂_EW^soft + δ̂_EW^hard is the O(α_ew) EW relative correction.
Fig. 13(b) shows the full one-loop EW relative corrections to the e⁺e⁻ → γγ → b̃_1 b̃_1 process … 133 GeV (corresponding to M_SUSY ≈ 212 GeV). Fig. 13(d) displays the full one-loop QCD relative corrections as the functions of M_SUSY with √s = 500, 800, 1000, 2000 GeV, respectively. The QCD relative corrections can be rather large for √s = 500 GeV and 800 GeV, and can reach 59% and 69% at the positions of M_SUSY = 250 GeV and 350 GeV, respectively.

Finally, we present the Born and full one-loop EW and QCD corrections to the e⁺e⁻ → γγ → b̃_2 b̃_2 process in Fig. 14. Comparing Fig. 14(a) with Fig. 13(a), we can see that the Born and full one-loop EW corrected cross sections for b̃_2 b̃_2 pair production are smaller than the corresponding ones for b̃_1 b̃_1 pair production because of m_{b̃_2} > m_{b̃_1}. But the EW relative corrections to the e⁺e⁻ → γγ → b̃_2 b̃_2 process are rather large and can be comparable with their QCD corrections, as shown in Fig. 14(b) and Fig. 14(d).

Acknowledgments: This work was supported in part by the National Natural Science Foundation of China and a special fund sponsored by the China Academy of Science.
| [] |
[
"Critical region of the finite temperature chiral transition",
"Critical region of the finite temperature chiral transition"
] | [
"J B Kogut \nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801-3080Urbana\n\nCenter for Theoretical Physics\nDepartment of Physics\nLaboratory for Nuclear Science\nMIT\n02139CambridgeMA\n",
"M A Stephanov \nInstitute for Theoretical Physics\nSUNY\n11794-3840Stony BrookNY\n",
"C G Strouthos \nDepartment of Physics\nUniversity of Illinois at Urbana-Champaign\n61801-3080Urbana\n"
] | [
"Department of Physics\nUniversity of Illinois at Urbana-Champaign\n61801-3080Urbana",
"Center for Theoretical Physics\nDepartment of Physics\nLaboratory for Nuclear Science\nMIT\n02139CambridgeMA",
"Institute for Theoretical Physics\nSUNY\n11794-3840Stony BrookNY",
"Department of Physics\nUniversity of Illinois at Urbana-Champaign\n61801-3080Urbana"
] | [] | We study a Yukawa theory with spontaneous chiral symmetry breaking and with a large number N of fermions near the finite temperature phase transition. Critical properties in such a system can be described by the mean field theory very close to the transition point. We show that the width of the region where non-trivial critical behavior sets in is suppressed by a certain power of 1/N. Our Monte Carlo simulations confirm these analytical results. We discuss implications for the chiral phase transition in QCD. | 10.1103/physrevd.58.096001 | [
"https://export.arxiv.org/pdf/hep-lat/9805023v2.pdf"
] | 15,576,432 | hep-lat/9805023 | d426110ff01e295313e1ac230bcf9999e7681567 |
Critical region of the finite temperature chiral transition
20 Jul 1998
J B Kogut
Department of Physics
University of Illinois at Urbana-Champaign
Urbana, IL 61801-3080
Center for Theoretical Physics
Department of Physics
Laboratory for Nuclear Science
MIT
Cambridge, MA 02139
M A Stephanov
Institute for Theoretical Physics
SUNY
Stony Brook, NY 11794-3840
C G Strouthos
Department of Physics
University of Illinois at Urbana-Champaign
Urbana, IL 61801-3080
We study a Yukawa theory with spontaneous chiral symmetry breaking and with a large number N of fermions near the finite temperature phase transition. Critical properties in such a system can be described by the mean field theory very close to the transition point. We show that the width of the region where non-trivial critical behavior sets in is suppressed by a certain power of 1/N. Our Monte Carlo simulations confirm these analytical results. We discuss implications for the chiral phase transition in QCD.
Introduction
The transition in QCD separating the high temperature quark-gluon plasma phase from the low temperature hadronic phase has been studied intensively in the last decade. Understanding the properties of this transition is becoming increasingly important in view of recent experimental progress in the physics of ultrarelativistic heavy ion collisions.
Since the u and d quark masses are small, the dynamics of the finite temperature transition is affected by the phenomenon of chiral symmetry restoration which occurs in the limit when the quark masses are put to zero. In this limit QCD with two massless quarks has a global SU(2)_L × SU(2)_R symmetry which is spontaneously broken to SU(2)_V at low temperatures. It can be argued that if the restoration of this spontaneously broken symmetry proceeds through a second-order phase transition, the critical properties of this transition are in the universality class of the classical O(4) spin model in three dimensions [1,2]. This means that the leading singular behavior of thermodynamic quantities can be predicted, i.e., it is given by universal O(4) critical exponents. However, universality does not answer more detailed questions such as how this criticality is approached and what is the width of the region of parameters in which this singular behavior sets in. These questions require more detailed knowledge of the dynamics of the theory.
In this paper we discuss a phenomenon which is related to the way the critical behavior sets in for a certain class of theories with a second-order chiral symmetry restoration transition at finite temperature, T_c. These are theories with a large number of fermion species N. On general grounds, one could expect that the universal critical, or scaling, behavior near T_c sets in as soon as the correlation length, ξ, of the fluctuations of the chiral condensate exceeds 1/T_c. However, as we show in this paper, if the number of fermions involved in the chiral symmetry breaking is large, the critical behavior which sets in when ξ exceeds 1/T_c is given by the mean-field theory, rather than by the arguments based on dimensional reduction and universality as in [1]. The critical behavior given by these latter arguments sets in much later, closer to T_c, when the correlation length ξ exceeds N^x/T_c. Below we shall determine the value of the positive exponent x.
The phenomenon of the non-trivial critical region suppression has been observed in the Yukawa model [3] and in the Gross-Neveu model [4] using the large-N expansion, and was confirmed by lattice Monte Carlo calculations in [4]. These large-N results, predicting mean-field critical behavior, were in an apparent contradiction with the general arguments of [1]. In this paper we show that both critical regimes are realized in the vicinity of T_c, but in separate scaling windows, one following the other.
Large N Yukawa theory near T c
In this section we consider a general Yukawa theory in d dimensions, 2 < d ≤ 4, with a large number N of fermion species and at finite temperature T . As argued in [5], in the continuum limit both Yukawa model and Gross-Neveu, or Nambu-Jona-Lasinio, models with a four-fermion interaction define the same theory. In the absence of a bare fermion mass there is a (chiral) symmetry, which can be broken spontaneously at low temperature with a suitable choice of couplings. This symmetry is restored at some finite temperature T c . We are interested in the nature of this phase transition.
In the absence of the fermions (or when the Yukawa coupling is zero) the nature of the transition is rather well known. It depends on the symmetry of the model which is restored at T c . In this paper, for simplicity, we consider a model with Z 2 symmetry. Similar results will also apply to theories with other symmetry groups (e.g., SU(2)×SU(2), as in QCD with 2 massless quarks), as long as the temperature driven symmetry restoration transition is of the second order.
One can argue that the critical behavior of a quantum theory of a scalar field near T c is the same as in a classical scalar theory with the same symmetry. The argument is based on two expected properties of the model: dimensional reduction and universality. In the Euclidean formulation the quantum scalar field is defined in a d-dimensional box with the extent in the imaginary (Matsubara) time dimension equal to 1/T . When the diverging correlation length, ξ, becomes much larger than 1/T c long wavelength fluctuations of the field become effectively (d − 1)-dimensional, i.e., on their scale the box looks like a (d − 1)-dimensional "pancake". Since such fluctuations determine the critical behavior in the model one can expect that, by universality, the critical exponents are the same as in a d − 1-dimensional theory with the same symmetry. This d − 1-dimensional theory is obviously a classical field theory at finite temperature. One can also understand this realizing that the classical thermal fluctuations whose energy is O(k B T ) dominate over the quantum fluctuations with energy O(hω) for soft modes of the field.
Another common way of describing this phenomenon in perturbation theory is to consider Fourier decomposition of the field into discrete Matsubara frequency components. Non-zero frequency acts as a mass term of order πT for the (d − 1)-dimensional components of the field. Near T c this mass is much larger than the mass of a component with zero Matsubara frequency. The dimensional reduction is then equivalent to the decoupling of the modes with nonzero frequencies.
We want to understand what happens in this theory near T c when one turns on the Yukawa coupling. Dimensional reduction and universality arguments suggest that the critical behavior of the theory should not change. This is based on the observation that there is no zero Matsubara frequency for the fermion fields due to antiperiodic boundary condition in the Euclidean time. Therefore all fermion modes should decouple at T c . In other words, the fermion fields do not have a classical limit and do not survive quantum-to-classical reduction at T c [6].
However, as was demonstrated in [4], the fermions do affect the behavior near T c in a certain way. We shall show that this happens when there are "too many" of them. The theory can be solved in the limit when the number of fermions, N, is large. In this limit the theory has mean-field critical behavior near T c [4,3]. This is different from the critical behavior of the corresponding classical scalar field theory. The mean-field behavior was also observed in numerical Monte Carlo calculations near T c [4].
Here we show that such mean-field behavior can be reconciled with the standard arguments of dimensional reduction and universality. The phenomenon which leads to an apparent contradiction is the suppression of the width of the non-mean-field critical region by a power of 1/N.
We consider the following model with a one-component scalar field and Z₂ symmetry in d-dimensional Euclidean space:

L = (1/2)(∂φ)² + (1/2)μ²φ² + λφ⁴ + Σ_{f=1}^{N} ψ̄_f (∂̸ + gφ) ψ_f .   (1)
We regularize the model by some momentum cutoff, Λ. The cutoff can be removed if d < 4, but we shall keep it finite to compare with a corresponding lattice theory. There are two other important scales in the theory: the temperature, T, and the physical mass, m, which we identify with the mass of thermal excitations of the scalar field. Near T_c this mass, m, is significantly different from the zero-temperature mass m_0, since m vanishes at the critical temperature. It is the mass m, or the correlation length 1/m, which is important for the critical behavior. The mass m is a natural measure (more natural than, say, T − T_c) of the distance from the criticality. Therefore, near the finite temperature phase transition we have the following hierarchy of scales: Λ ≫ T ≫ m.
Let us consider the renormalization group (RG) evolution of the couplings from the scale of Λ down to the scale of T, and then from T down to m. We focus on the quartic self-coupling of the scalar field, λ. The evolution from the scale Λ to the scale T is governed by the RG equations of the d-dimensional quantum Yukawa model. After that, at the scale of T, we pass through a crossover region due to the fact that the fermions and nonzero Matsubara frequencies of the scalar fields do not contribute to the evolution below T (the decoupling). The evolution below T is governed by the RG equations of the scalar φ⁴ theory in d − 1 dimensions.
If the window of scales between Λ and T is wide enough (as it is in the continuum limit Λ → ∞) the value of the renormalized coupling λ at the scale T , λ(T ), is close to the infrared fixed point of the d-dimensional Yukawa theory. In the large-N limit one can calculate this value [7]:
λ(T) ∼ (4 − d) T^{4−d} / N   for 2 < d < 4.   (2)
The case d = 4 is special. The infrared fixed point is trivial and is approached logarithmically as Λ/T → ∞:
λ(T) ∼ 1 / (N ln(Λ/T))   for d = 4.   (3)
This value provides the starting point, λ_{d−1} = Tλ(T), for the evolution of this coupling below T in the φ⁴ theory in d − 1 dimensions. For large N this coupling is small. As we shall see shortly, this is the reason why the critical region where one observes non-trivial critical behavior is reached only very close to the phase transition. The phenomenon of the suppression of the width of a non-trivial critical region is common in condensed matter physics [8]. BCS superconductors provide the most well-known example of a system where criticality near T_c is described by the Landau-Ginzburg mean-field theory.
The width, ∆T , of the region around T c where the mean-field description breaks down is tiny. There are systems where the width of the non-trivial critical region is small but measurable. In such systems one can observe the crossover between a mean-field and a non-trivial scaling.
The quantitative relation between the size of the non-trivial, non-mean-field critical region, the Ginzburg region, and certain parameters of a given system is known as the Ginzburg criterion. In superconductors such a parameter is the small ratio T/E_F, i.e., the width of the Ginzburg region is suppressed by a power of this parameter. In this paper, we show that in a field theory with a large number of fermions, such as (1), the width of the Ginzburg region is suppressed by a power of 1/N.
The Ginzburg criterion can be obtained by estimating the effects of fluctuations within the mean-field approximation. When the fluctuations become large, the mean-field approximation breaks down because of self-inconsistency. A measure of the importance of the fluctuations, or the size of the corrections to the mean field, is the value of the effective self-coupling of the scalar field, λ. If the coupling λ is small, the effect of the fluctuations is also small and the theory can be described by the mean-field approximation very well. However, for the φ⁴ theories in less than four dimensions, fluctuations always become important close enough to the phase transition. This happens roughly when the coupling λ_{d−1} on the scale of m is not small anymore. Since λ_{d−1} has nonzero dimension equal to 5 − d, it should be compared to m^{5−d}. In this way, and with the help of (2), one arrives at the following Ginzburg criterion for the applicability of the mean-field scaling:
m ≫ T / N^x ,   x = 1/(5 − d).   (4)
In the special case of d = 4, using (3) one finds:

m ≫ T / (N ln(Λ/T))   for d = 4.   (5)

Figure 1: An example of a graph whose contribution breaks the mean-field approximation and the large-N expansion near T_c.
Alternatively, one can compare the size of the one-loop correction in the effective φ⁴ theory in d − 1 dimensions to the bare λ_{d−1}. The loop correction becomes important when (4) is violated. Indeed, consider the contribution, Δλ, of a graph such as in Fig. 1 to the effective quartic coupling λ. This contribution diverges when m → 0:
Δλ ∼ T [λ(T)]² / m^{5−d} ,   (6)
where we integrated over fluctuations in the window between T and m, with m ≪ T. Thus, this one-loop contribution, Δλ, is negligible compared to λ(T) if Tλ(T) ≪ m^{5−d}, which together with (2) leads to (4). Note also that, since λ(T) ∼ 1/N, the contribution of the graph in Fig. 1 should be subleading in the large-N expansion. Therefore, the large-N expansion breaks down when (4) is violated. The Ginzburg criterion tells us that for masses m inside the window T ≫ m ≫ T/N^x the mean-field scaling holds, while for smaller masses m ≪ T/N^x (i.e., closer to the transition) the non-trivial (d − 1)-dimensional Ising scaling sets in. We see that the size of this latter, non-trivial critical region is suppressed at large N.
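To put numbers to this criterion, the following minimal sketch (our own illustration; all O(1) prefactors are set to one, so only the parametric dependence in N is meaningful) evaluates the crossover mass scale below which the mean-field description breaks down:

```python
# Sketch: crossover scale of the Ginzburg criterion (4)-(5), prefactors set to 1.
import math

def crossover_mass(T, N, d, Lambda=None):
    """Mass below which mean-field theory fails: T/N^{1/(5-d)}, or eq. (5) at d = 4."""
    if d == 4:
        return T / (N * math.log(Lambda / T))
    return T / N ** (1.0 / (5.0 - d))

T = 1.0
for N in (4, 12, 24):
    m_star = crossover_mass(T, N, d=3)
    print(f"d=3, N={N:2d}: non-trivial window only for m < {m_star:.3f} T")
```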
Lattice theory
In this section we analyze how the effect of the suppression of the non-trivial critical region manifests itself in the lattice formulation of the theory. For simplicity, we shall discuss the case d = 3; the generalization to arbitrary d, 2 < d ≤ 4, can be done as in the previous section. We consider the following lattice discretization of the theory (1) in d = 3:
S = Σ_x̃ [ −κ Σ_μ φ_x̃ φ_{x̃+μ̂} + λ_lat φ_x̃⁴ + (βN/4) φ_x̃² ] + Σ_{i=1}^{N/2} [ Σ_{x,y} χ̄ⁱ_x M_{x,y} χⁱ_y + (1/8) Σ_x χ̄ⁱ_x χⁱ_x Σ_{⟨x,x̃⟩} φ_x̃ ] ,   (7)
where χⁱ and χ̄ⁱ are Grassmann-valued staggered fermion fields defined on the lattice sites; the scalar field φ is defined on the dual lattice sites, and the symbol ⟨x, x̃⟩ denotes the set of 8 (i.e., 2^d) dual lattice sites x̃ surrounding the direct lattice site x. The fermion kinetic (hopping) matrix M is given by

M_{x,y} = (1/2) Σ_μ η_μ(x) [ δ_{y,x+μ̂} − δ_{y,x−μ̂} ] ,   (8)

where η_μ(x) are the Kawamoto-Smit phases (−1)^{x_1+···+x_{μ−1}}. The cubic lattice has L_s lattice spacings a in the spatial directions and L_t lattice spacings in the temporal direction. The cutoff scale can be defined as Λ = 1/a, the temperature is given by T = 1/(L_t a), and the mass is m = 1/(ξa), where ξ is the correlation length of the scalar field. To reach a continuum limit one has to satisfy the following two conditions: Λ ≫ T and Λ ≫ m. The condition Λ ≫ T requires a lattice with sufficiently large L_t ≫ 1. The parameters of the action should then be tuned towards their critical values where the correlation length ξ ≫ 1. This satisfies the condition Λ ≫ m. The condition ξ ≫ 1, or ma = 1/ξ → 0, specifies a 2-dimensional critical surface in the space of the 3 parameters κ, λ_lat and β. One expects that a continuum limit taken at any generic point of this surface defines the same theory [5,10]. (This is the meaning of the equivalence between Yukawa and four-fermion theories.) Therefore we can fix two out of three bare parameters, κ = λ_lat = 0, and tune a single parameter β to criticality: 1/ξ → 0.
The phase diagram as a function of β and Ta = 1/L_t looks like in Fig. 2a. A natural measure of the distance from the criticality is ma = 1/ξ. Trading the lattice parameter β for ma we obtain the phase diagram of Fig. 2b.

Figure 2: (a) The phase diagram as a function of β and Ta = 1/L_t; … show the location of points where the correlation length, ξ, reaches L_t. (b) The same phase diagram, but the distance from the critical line is expressed in terms of a natural variable ma = 1/ξ and only one side (either symmetric or broken) is shown. Various continuum limits correspond to approaching the origin in (b). The slope determines the ratio T/m in the resulting continuum theory. The values of L_t, and thus of Ta, are discrete, but this is of no importance to our discussion.
Thus the line of a crossover with a slope T a/ma = N x divides the region, T > m, of a 2-dimensional, or classical, behavior into two subregions, or windows. One window, where the mean-field approximation works (L t ≪ ξ ≪ L t N x ), and another window, where the non-trivial critical behavior sets in (ξ ≫ L t N x ). Fig. 2b for a Yukawa theory with large N. The trajectory ABC corresponds to changing some lattice parameter to approach criticality on a lattice with a given L t ≡ 1/T a. The point A corresponds to the correlation length ξ ≡ 1/ma ∼ 1, ξ(B) = L t , ξ(D) = L t N x , and ξ(C) = ∞. In d = 3: x = 1/2.
Monte Carlo
We performed Monte Carlo simulations of the Gross-Neveu model in d = 2 + 1 dimensions at finite temperature to test the results of the analysis of the previous section. We chose the Z 2 four-Fermi model because it is relatively easy to simulate. We used a Hybrid Monte Carlo method described in [11], which proved to be very efficient for our purposes. Since the chiral symmetry is discrete we were able to simulate the model directly in the chiral limit m = 0. This allowed particularly accurate determination of the critical properties. The action is that of the Yukawa lattice theory (7) with κ = λ = 0, and we tuned β to reach criticality. The long-wavelength (continuum limit) behavior of such a theory is determined by the infrared fixed point, which is the same [5,10] in the more general Yukawa model (7) and in the Gross-Neveu model.
We used the following two methods to optimize the performance of the Hybrid Monte Carlo procedure. The first method consisted of tuning the effective number of fermion flavors N′, which is used during the integration of the equations of motion along a microcanonical trajectory, so as to maximize the acceptance rate of the Monte Carlo procedure for a fixed microcanonical time-step dτ. As the lattice size was increased, the time step dτ had to be taken smaller and the optimal N′ approached N. For example, for an N = 4 theory on a 6 × 36² lattice the choices dτ = 0.15 and N′ = 4.036 gave acceptance rates greater than 95% for all couplings of interest. To maintain this acceptance rate on a 6 × 60² lattice we used dτ = 0.11 and N′ = 4.016, while on a 6 × 80² lattice we used dτ = 0.10 and N′ = 4.012. In the second method the Monte Carlo procedure was optimized by choosing the trajectory length τ at random from a Poisson distribution with the mean equal to τ̄. This method of optimization, which guarantees ergodicity, was found to decrease autocorrelation times dramatically [12]. For most of our runs we used the average trajectory length τ̄ ≃ 2.0. As usual, the errors were calculated by jackknife blocking, which accounts for correlations in a raw data set.
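Since this error analysis underlies all the fits below, here is a minimal sketch of blocked jackknife (the block count and the toy AR(1) data are our illustrative choices, not the paper's): averaging the time series in blocks tames autocorrelations, and jackknife resampling of the block means gives the quoted errors.

```python
# Sketch: blocked jackknife error of the mean for an autocorrelated time series.
import numpy as np

def jackknife_error(series, n_blocks=20):
    series = np.asarray(series, dtype=float)
    usable = (len(series) // n_blocks) * n_blocks
    blocks = series[:usable].reshape(n_blocks, -1).mean(axis=1)
    loo = (blocks.sum() - blocks) / (n_blocks - 1)   # leave-one-block-out means
    return blocks.mean(), np.sqrt((n_blocks - 1) * loo.var())

rng = np.random.default_rng(0)
x, data = 0.0, []
for _ in range(4000):
    x = 0.9 * x + rng.normal(scale=0.1)   # AR(1): mimics Monte Carlo autocorrelation
    data.append(0.35 + x)                 # fluctuations around a "measured" Sigma
mean, err = jackknife_error(data)
print(f"Sigma = {mean:.4f} +/- {err:.4f}")
```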
As will be seen below, we used values of the lattice coupling β sufficiently close to the critical value, β_c, so that we are close to the continuum limit Λ ≫ m, where m is the thermal mass, and the scaling behavior is not affected by lattice artifacts. In addition, we verified that also another important physical parameter, the zero-temperature mass, m_0, is sufficiently smaller than the cutoff Λ. We ran on lattices with large L_t, such as 20³ with N = 4 for β = 0.600−0.750, and 16³ with N = 24 for β = 0.725−0.875, and determined the value of the scaling exponent β_m. For N = 12 we can use the results from [11]. We found values in agreement with the analytical prediction β_m = 1 + O(1/N²) for the T = 0 scaling [11]. This confirms that for our values of the coupling β the lattice theory remains in the scaling window over the whole range of temperatures down to T = 0 and the effects of the lattice are negligible.
Exponents from finite size scaling
The finite size scaling (FSS) analysis is a well-established tool for studying critical properties of phase transitions [13]. The critical, singular behavior in a statistical system is caused by the divergence of the correlation length ξ. On a finite lattice the correlation length is limited by the size of the system and, consequently, no true criticality can be observed. However, if the size, L s , of the lattice is large, a qualitative change in the behavior of the system occurs when ξ ∼ L s . For 1 ≪ ξ ≪ L s the behavior of the system is almost the same as in the bulk (L s = ∞). However, when ξ ∼ L s the behavior of the system reflects the size and the shape of the box to which it is confined. The dependence of a given thermodynamic observable, P , on the size of the box, L s , is singular and, according to the FSS hypothesis, is given by:
P(t, L_s) = L_s^{ρ_P/ν} Q_P(t L_s^{1/ν}) ,   (9)
where t is the distance from the critical point: t = (β_c − β)/β_c; ν is the standard exponent of the correlation length: ξ ∼ t^{−ν}; and Q_P is a scaling function, which is not singular at zero argument. The exponent ρ_P is the standard critical exponent for the quantity P: P ∼ t^{−ρ_P}. Studying the dependence on the size of the box, L_s, and using (9) one can determine such exponents. We simulated the model with N = 12 fermion flavors at β close to the critical coupling β_c. The lattice sizes ranged from L_s = 12 to 40 for L_t = 4, and L_s = 18 to 50 for L_t = 6. Periodic boundary conditions in the spatial directions were used. Details of the L_t = 6 runs are listed in Table 1. To perform our study most effectively we used the histogram reweighting method [14], which enables us to calculate the observables in a region of β around the simulation coupling β_sim.
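A minimal single-histogram reweighting sketch (our own toy; it assumes one stores, per configuration, the observable O and the quantity E conjugate to β, i.e. E = ∂S/∂β evaluated on that configuration):

```python
# Sketch: single-histogram reweighting from beta_sim to a nearby beta.
import numpy as np

def reweight(O, E, beta_sim, beta):
    dS = (beta - beta_sim) * np.asarray(E, dtype=float)
    w = np.exp(-(dS - dS.min()))          # shift exponent for numerical stability
    return np.sum(w * np.asarray(O)) / w.sum()

# Toy usage with synthetic measurements:
rng = np.random.default_rng(1)
E = rng.normal(1000.0, 30.0, size=5000)
O = 0.3 - 0.001 * (E - 1000.0) + rng.normal(0.0, 0.01, size=5000)
for beta in (0.774, 0.776, 0.778):
    print(beta, round(reweight(O, E, beta_sim=0.776, beta=beta), 4))
```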
Exponent ν
We used the following thermodynamic observables [15] to determine the values of ν and the critical coupling β_c:

V₁ ≡ 4[φ³] − 3[φ⁴] ,   V₂ ≡ 2[φ²] − [φ⁴] ,   V₃ ≡ 3[φ²] − 2[φ³] ,   (10)
where
[φⁿ] ≡ ln( ∂⟨φⁿ⟩ / ∂β ) .   (11)
One can easily find that
V_j ≈ (1/ν) ln L_s + V_j(t L_s^{1/ν}) ,   (12)
for j = 1, 2, 3. At β_c, i.e., t = 0, the last term on the r.h.s., V_j(0), is a constant independent of L_s. Scanning over a range of β's and looking for the value of β at which the slope of V_j versus ln L_s is j-independent, as it is in Fig. 4, we found: ν = 1.00(3) and β_c = 0.7762(15) for L_t = 6, and ν = 1.00(2) and β_c = 0.682(2) for L_t = 4. These values of ν are in very good agreement with the two-dimensional Ising value ν = 1. This confirms that the behavior of the system sufficiently close to criticality is non-trivial (in the mean-field theory: ν = 1/2) and is given by the arguments of dimensional reduction and universality.
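The estimator behind (10)-(12) is easy to prototype; the sketch below (synthetic V_j values with slope 1, standing in for the measured ones) shows both the connected-correlator form of [φⁿ] and the linear fit of V_j against ln L_s whose common slope gives 1/ν:

```python
# Sketch: nu from the slope of V_j vs ln(L_s), cf. eqs. (10)-(12).
import numpy as np

def log_deriv(phi_n, E):
    # [phi^n] = ln|d<phi^n>/dbeta|, with d<O>/dbeta = -(<O E> - <O><E>), E = dS/dbeta
    return np.log(abs(np.mean(phi_n * E) - np.mean(phi_n) * np.mean(E)))

def V_observables(phi, E):
    b = {n: log_deriv(phi ** n, E) for n in (2, 3, 4)}
    return 4 * b[3] - 3 * b[4], 2 * b[2] - b[4], 3 * b[2] - 2 * b[3]

# Toy data: V_j(L_s) with the 2d-Ising slope 1/nu = 1 plus noise.
Ls = np.array([18, 24, 32, 40, 50])
rng = np.random.default_rng(2)
Vj = np.vstack([np.log(Ls) + c + rng.normal(0, 0.02, Ls.size)
                for c in (0.4, 0.7, 1.1)])
slopes = np.polyfit(np.log(Ls), Vj.T, 1)[0]   # one slope per observable
print("1/nu estimates from V_1, V_2, V_3:", np.round(slopes, 3))
```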
Exponents β_m and γ
In this subsection we consider the following two exponents: the exponent β_m of the order parameter, Σ ≡ ⟨φ⟩ ∼ t^{β_m}, and the exponent γ of the susceptibility, χ ∼ t^{−γ}. According to eq. (9), at the critical point, t = 0, the order parameter, Σ, and its susceptibility, χ, scale with L_s as Σ ∼ L_s^{−β_m/ν} and χ ∼ L_s^{γ/ν}. At each value of β we made a linear χ²-fit for ln Σ and ln χ versus ln L_s. The locations of the minima of χ²/d.o.f. as a function of β provide estimates of β_c. We estimated the error in β_c and the critical exponents by looking at the values of β that increase χ²/d.o.f. by one above its minimum, i.e., from the region where χ²/d.o.f. ≤ min(χ²/d.o.f.) + 1, which also gives the error on the critical exponents. Fits at L_t = 6 for the order parameter and susceptibility gave β_m/ν = 0.12(6), β_c = 0.7747(15) and γ/ν = 1.66(9), β_c = 0.7750(15), respectively. Similar results at L_t = 4 are: β_m/ν = 0.16(7), β_c = 0.6805(15), and γ/ν = 1.57(9), β_c = 0.6817(15).
The critical exponents which we found are in good agreement with the two-dimensional Ising exponents β_m/ν = 0.125 and γ/ν = 1.75 (in the mean-field theory: β_m/ν = 1 and γ/ν = 2). The values of β_c extracted from this analysis are also consistent with the estimate β_c = 0.7765(15) evaluated from the analysis of the V_j. Figs. 5 show the best linear fits of the finite-size dependence of ln Σ and ln χ at β = 0.7747 and β = 0.7753, respectively.
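The fits quoted above are weighted least squares in log-log variables with a χ²/d.o.f. quality measure; a compact sketch (toy data generated with γ/ν = 1.75, not the measured values):

```python
# Sketch: gamma/nu from chi ~ L_s^{gamma/nu} via a weighted log-log fit.
import numpy as np

def loglog_fit(Ls, chi, dchi):
    x, y, dy = np.log(Ls), np.log(chi), dchi / chi   # dy: error propagated into ln
    w = 1.0 / dy ** 2
    X = np.vstack([x, np.ones_like(x)])
    cov = np.linalg.inv(X @ np.diag(w) @ X.T)
    slope, const = cov @ (X @ (w * y))
    chi2 = np.sum(w * (y - slope * x - const) ** 2)
    return slope, np.sqrt(cov[0, 0]), chi2 / (len(Ls) - 2)

Ls = np.array([18, 24, 32, 40, 50])
rng = np.random.default_rng(3)
chi = 0.05 * Ls ** 1.75 * np.exp(rng.normal(0, 0.02, Ls.size))
dchi = 0.02 * chi
s, ds, q = loglog_fit(Ls, chi, dchi)
print(f"gamma/nu = {s:.3f} +/- {ds:.3f}, chi2/dof = {q:.2f}")
```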
Scaling in the broken phase
Our task in this section is to check the analytical result of eq. (4), which is equivalent to:
ξ ≪ L_t N^x ,   x = 1/2.   (13)
In other words, we want to determine the position of the crossover from the mean-field (MF) critical behavior to the non-trivial 2d-Ising critical behavior as a function of N.
A straightforward way to do this is to study the dependence of the order parameter, Σ, on β. Since Σ vanishes at the critical point, it can be thought of as a measure of the distance from criticality. We expect that for sufficiently small Σ, i.e., close to β_c, this dependence should be given by a power-law scaling with the exponent β_m = 1/8 of the 2d Ising model: Σ ∼ (const − β)^{1/8}. This corresponds to the region CD on the diagram of Fig. 3, i.e., ξ ≫ L_t N^x. For larger Σ, further away from criticality, the MF scaling holds: Σ ∼ (const − β)^{1/2} (with some other value of const). This is the region DB in Fig. 3, i.e., L_t ≪ ξ ≪ L_t N^x. For even larger Σ we should see the scaling corresponding to the fixed point of the 3d Gross-Neveu model [11]: Σ ∼ (const − β)^1. In this region 1 ≪ ξ ≪ L_t.
Since we are interested in the boundary of the mean-field scaling region we can find ξ from Σ using a well-known relation between them: ξ = 1/(2Σ) [16], which holds inside the mean-field region. Thus we avoid direct measurements of ξ, which are much harder than measurements of Σ.
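Combining ξ = 1/(2Σ) with the window boundaries gives a quick classifier for where a given measurement sits; a sketch (our own illustration, with boundaries only up to O(1) factors and illustrative Σ values):

```python
# Sketch: classify a measured Sigma into the three scaling windows via xi = 1/(2 Sigma).
def window(Sigma, Lt=6, N=12, x=0.5):
    xi = 1.0 / (2.0 * Sigma)   # mean-field relation, adequate near the MF boundary
    if xi < Lt:
        return "3d Gross-Neveu scaling (1 << xi << L_t)"
    if xi < Lt * N ** x:
        return "mean-field window (L_t << xi << L_t N^x)"
    return "2d Ising window (xi >> L_t N^x)"

for Sigma in (0.40, 0.05, 0.005):
    print(Sigma, "->", window(Sigma))
```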
We studied the dependence of Σ on β at three different values of N = 4, 12, 24 on lattices with fixed L_t = 6. Ideally, in order to resolve all three critical scaling regions, or windows, we need to provide 1 ≪ L_t ≪ L_t N^x ≪ L_s. For finite L_t, L_s, and N^x these regions are squeezed, but for the values we used one can clearly resolve the MF region with the crossover towards the 2d-Ising region.
In contrast to the FSS analysis, where ξ ∼ L_s, now we need to keep ξ ≪ L_s, since we are studying bulk critical behavior. We monitored each simulation run for vacuum tunneling events, which signal that L_s is not big enough. Away from the critical point such events were so rare that good measurements of Σ and its susceptibility χ were possible. At couplings near the phase transition we increased L_s to suppress tunneling. A large number of fermions, such as N = 24, suppresses fluctuations and allows an accurate study within a reasonable amount of computer resources. On the other hand, for smaller N, such as N = 4, the simulations were also very efficient because the crossover to the 2d-Ising behavior starts at a smaller correlation length. The data from these simulations are shown in Tables 2, 3 and 4.
Since β_m = 1/2 in the MF region, we fitted Σ² with a linear function of β. The linearity made the fitting procedure very efficient. We evaluated the goodness of fit for data in various ranges of β in order to find the boundaries of the MF region (see Tables 5, 6 and 7). The drop in the goodness of fit when new data are added implies that the new points deviate from the MF behavior and belong either to the Gross-Neveu/MF crossover region or to the MF/2d-Ising crossover region. Fig. 6 shows the data for Σ² versus β, where the straight lines represent the best linear fits in the MF region. All three graphs show the MF/2d-Ising crossovers. The graphs for N = 12, 24 also clearly show the Gross-Neveu/MF crossover. We used the histogram reweighting method in order to extract the value of Σ at the "border" of the MF region with the 2d-Ising region, where the quality of the fit drops very sharply. This allowed us to use a very conservative estimate of the error, namely the width of the region of Σ in which the quality of the fit drops down to 1% (from, e.g., 90% in the case of N = 4). The center of this region of the drop is taken as the boundary of the MF scaling window. We find Σ_MF = 0.377(11) for N = 4, Σ_MF = 0.213(4) for N = 12, and Σ_MF = 0.168(20) for N = 24. It is clear that the non-trivial 2d-Ising region is squeezed as N increases. In order to check the analytical prediction of eq. (4), or (13), we fitted our results to the form Σ_MF = const · N^{−x} (see Fig. 7). We found x = 0.51(3), which is in agreement with the analytical prediction x = 0.5.
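As a cross-check on the quoted exponent, the three boundary values Σ_MF above can be fit directly; the sketch below (our own weighted log-log fit to the central values and errors from the text) reproduces x ≈ 0.5:

```python
# Sketch: fit Sigma_MF = const * N^(-x) to the measured mean-field boundaries.
import numpy as np

N = np.array([4.0, 12.0, 24.0])
S = np.array([0.377, 0.213, 0.168])        # Sigma_MF for N = 4, 12, 24 (see text)
dS = np.array([0.011, 0.004, 0.020])

x_ln, y_ln, w = np.log(N), np.log(S), (S / dS) ** 2   # weights for ln(S)
X = np.vstack([x_ln, np.ones_like(x_ln)])
cov = np.linalg.inv(X @ np.diag(w) @ X.T)
slope, const = cov @ (X @ (w * y_ln))
print(f"x = {-slope:.3f} +/- {np.sqrt(cov[0, 0]):.3f}")   # expect ~0.5
```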
Discussion and conclusions
In this paper we demonstrated, using analytical arguments and Monte Carlo simulations, that the width of the region near the finite temperature chiral phase transition where a non-trivial critical behavior sets in is suppressed in theories with a large number of fermions, N. This phenomenon explains an apparent contradiction between the large-N expansion [3,4], predicting mean-field scaling, and more general arguments based on dimensional reduction and universality [1], predicting the non-trivial scaling of a scalar theory in d − 1 dimensions. A tentative explanation of [3] was based, somewhat implicitly, on the expectation that the large-N expansion breaks down and cannot predict correct non-trivial exponents. Our key point is that we can say when, and as a function of what parameter (i.e., as a function of the distance from the criticality, m/T), the large-N expansion breaks down. What is, perhaps, even more important, is that we show that there does exist a region where the large-N expansion is valid. This region (in the space of m/T) of the mean-field behavior squeezes out the region of the true non-trivial critical behavior.⁶

One can also look at the whole problem as a question of the order of limits. If the limit N → ∞ is taken before m/T → 0, the non-trivial region disappears, and all critical behavior is given by mean-field theory. If, on the other hand, the limit m/T → 0 is taken at fixed N, the true non-trivial scaling will hold. These effects can be clearly seen in our Monte Carlo studies.

⁶ It was conjectured in [17] that the Ginzburg region of non-trivial scaling could turn out to be small.
We applied our arguments to a specific example of a Yukawa theory, but the mechanism responsible for this phenomenon is clearly more universal and may apply, in particular, to QCD, provided that the number of colors N c is large. The Yukawa, or the Nambu-Jona-Lasinio, theory is well-known to provide a very good description for the phenomenon of chiral symmetry breaking and restoration. One can then think of the theory we considered as an effective description of the degrees of freedom participating in chiral symmetry breaking.
As we have seen, the role of the fermions is to screen the effective self-coupling of the scalar field, λ. The strength of this effect depends on two factors: (i) large N, and (ii) large window of scales between the cutoff of the effective theory, Λ, and the temperature, T . Let us see if these conditions are fulfilled in QCD. The scale of the spontaneous symmetry breaking is of order Λ ∼ 1 GeV, while T c ≈ 160 MeV. Thus, there is almost an order of magnitude window between Λ and T , which is presumably sufficient to drive the effective self-coupling of the scalar field to its infrared fixed point at the scale of T c . How small this value is now depends on the number of the fermions (the condition (i)). In QCD the value N c = 3, though not very large, can be considered large in some cases. It would be interesting to see if this phenomenon could be rigorously shown to occur in the limit N c → ∞, which is very plausible.
The effect of the suppression of the width of the non-trivial critical region in QCD may lead to the following prediction: only mean-field (but no non-trivial O(4)) scaling behavior could be seen because of non-zero quark masses, m q . On the one hand, the mean-field scaling would hold until relatively large thermal correlation length. On the other hand, this correlation length in the real world is limited by the quark masses, m q .
The following crude estimates can serve to illustrate this point. 7 A true criticality is never reached near T c , because m q plays the same role as the external ordering magnetic field in a ferromagnet. The largest thermal correlation length in units of 1/T , T /m, can be estimated using the analogy to a ferromagnet and the mean-field value of ν/(βδ) = 1/3, as 8 :
$$\frac{T}{m} \;\sim\; \left(\frac{T_c}{m_q}\right)^{\nu/(\beta\delta)} \approx \left(\frac{160\ \mathrm{MeV}}{5\ \mathrm{MeV}}\right)^{1/3} \sim 3. \tag{14}$$
The fact that this number is not large could be guessed by observing that the zero temperature pion masses, which are driven by m_q, are as large as T_c. This largest correlation length may turn out to be smaller than the one required for the crossover to the non-trivial scaling region: T/m ∼ N_c ln(Λ/T_c) ∼ 6, according to (5). In this case, the (near-)critical behavior observed in the window allowed by non-zero quark masses (according to (14), roughly 1 < T/m < 3) will be given entirely by the mean-field scaling.

7. To make these estimates quantitative one needs to find numerical coefficients, which are determined by non-universal dynamical properties of QCD and by the definition of the boundary of the mean-field region.
8. We use the definition of the exponents: m ∼ t^ν, Σ ∼ t^β, and Σ ∼ m_q^{1/δ}, together with the central postulate of the scaling theory: critical properties are determined by the correlation length, 1/m.
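For concreteness, the arithmetic behind the two estimates just quoted (our own check, using Λ ∼ 1 GeV and T_c ≈ 160 MeV as stated in the text):
$$\left(\frac{T_c}{m_q}\right)^{1/3} = \left(\frac{160\ \mathrm{MeV}}{5\ \mathrm{MeV}}\right)^{1/3} = 32^{1/3} \approx 3.2,
\qquad
N_c\,\ln\frac{\Lambda}{T_c} \approx 3\,\ln\frac{1000\ \mathrm{MeV}}{160\ \mathrm{MeV}} \approx 3\times 1.83 \approx 5.5,$$
consistent with the quoted values T/m ∼ 3 and T/m ∼ 6.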
Figure 2: A schematic phase diagram (a) of a lattice Yukawa theory in the plane of Ta ≡ 1/L_t and the lattice action parameter β. The solid line is the phase boundary. The dashed lines …

Figure 3: The same phase diagram as in …

Figure 4: Finite size dependence of V_j at β = 0.7762 for L_t = 6. All three lines have almost equal slopes.

Figure 5: Left: the best linear fit for ln Σ vs. ln L_s in the minimum of χ² (β = 0.7747). Right: the same for ln χ vs. ln L_s (β = 0.7753).

Figure 6: Order parameter squared vs. β for lattice theories with N = 4, 12, 24. The straight lines are the fits to the data in the mean-field regions.

Figure 7: The best power-law fit (goodness 0.26) for the dependence of the boundary of the mean-field region, Σ_MF, on the number of fermions, N.
Table 1: Simulations for the FSS analysis with L_t = 6 and N = 12.

L_s   β_sim    trajectories
18    0.7744   130,000
18    0.7763   100,000
24    0.7744   100,000
24    0.7764    80,000
30    0.7744   120,000
30    0.7764   130,000
40    0.7754   140,000
50    0.7754   152,000
Table 2: The data for the scaling of Σ in the broken phase. N = 4.

β        Σ            L_s   trajectories
0.595    0.5008(4)    36    30,000
0.600    0.4871(4)    36    40,000
0.605    0.4724(4)    36    40,000
0.610    0.4578(4)    36    40,000
0.615    0.4426(4)    36    50,000
0.620    0.4268(4)    36    54,000
0.625    0.4106(3)    36    70,000
0.630    0.3928(4)    36    64,000
0.635    0.3736(7)    36    70,000
0.640    0.3524(5)    36    110,000
0.645    0.3297(8)    36    130,000
0.6475   0.3162(9)    36    150,000
0.650    0.3016(10)   48    54,000
0.652    0.2858(15)   60    41,000
0.654    0.2647(26)   80    25,400
0.656    0.2398(30)   80    30,000

Table 3: The data for the scaling of Σ in the broken phase. N = 12.
Table 4: The data for the scaling of Σ in the broken phase. N = 24.

β        Σ           L_s   trajectories
0.6700   0.4756(4)   24    3,000
0.6800   0.4560(4)   24    3,000
0.6900   0.4359(4)   24    3,000
0.7000   0.4162(4)   24    3,000
0.7100   0.3963(4)   24    3,000
0.7200   0.3760(5)   24    3,000
0.7300   0.3551(5)   24    3,000
0.7400   0.3325(5)   24    3,000
0.7500   0.3101(5)   24    3,000
Table 5: Goodness of linear fits of Σ² vs. β for various ranges of β. N = 4.

no. of points   β range         goodness
4               0.595 - 0.610   0.92
5               0.595 - 0.615   0.95
6               0.595 - 0.620   0.98
7               0.595 - 0.625   0.99
8               0.595 - 0.630   0.96
9               0.595 - 0.635   0.21
10              0.595 - 0.640   10⁻⁵
11              0.595 - 0.645   10⁻¹¹
Table 6: The same as Table 5 but for N = 12.
Table 7: The same as Table 5 but for N = 24.

no. of points   β range          goodness
7               0.74 - 0.795     6 × 10⁻⁴
8               0.74 - 0.800     1.2 × 10⁻³
6               0.75 - 0.795     0.28
7               0.75 - 0.800     0.24
3               0.76 - 0.780     0.68
4               0.76 - 0.790     0.91
5               0.76 - 0.795     0.70
6               0.76 - 0.800     0.38
7               0.76 - 0.8025    0.17
8               0.76 - 0.8050    1.2 × 10⁻⁸
9               0.76 - 0.8075    5 × 10⁻¹⁶
1. In this paper we shall use the terms "Gross-Neveu model" and "Nambu-Jona-Lasinio model" interchangeably, especially when it concerns a theory with a four-fermion interaction in 2+1 dimensions.
2. The whole theory of critical scaling is based on this observation. We shall also use this fact more explicitly when we consider a lattice theory.
3. Here again the case of four dimensions is special. However, we are now discussing theories in d − 1 dimensions and 1 < d − 1 ≤ 3.
4. In fact, the criterion (5) is well-known in the form λT_c/m ≪ 1 [9] as the criterion for the applicability of perturbation theory near a finite temperature phase transition.
5. Note that there is no proportionality constant in the Ginzburg criterion (4). This constant would depend on the definition of the boundary of the mean-field region, which is naturally ambiguous. The Ginzburg criterion tells us how this boundary moves as a function of N.
Acknowledgments

Discussions with A. Kocic, R. Pisarski and T. Tran are greatly appreciated. We learned that a result similar to (4) had been independently derived by Pisarski [20]. This work was supported in part by the NSF grants PHY96-05199 and PHY97-22101.
References

[1] R. Pisarski and F. Wilczek, Phys. Rev. D29 (1984) 338.
[2] K. Rajagopal and F. Wilczek, Nucl. Phys. B399 (1993) 395.
[3] B. Rosenstein, A.D. Speliotopoulos, and H.L. Yu, Phys. Rev. D49 (1994) 6822.
[4] A. Kocic and J. Kogut, Phys. Rev. Lett. 74 (1995) 3109; Nucl. Phys. B455 (1995) 229.
[5] A. Hasenfratz, P. Hasenfratz, K. Jansen, J. Kuti, and Y. Shen, Nucl. Phys. B 365 (1991) 79.
[6] M.A. Stephanov, Phys. Rev. D52 (1995) 3746.
[7] J. Zinn-Justin, Quantum Field Theory and Critical Phenomena (Clarendon Press, Oxford, 1996).
[8] P. Pfeuty and G. Toulouse, Introduction to the Renormalization Group and to Critical Phenomena (Wiley, London, New York, 1977).
[9] S. Weinberg, Phys. Rev. D9 (1974) 3357.
[10] E. Focht, W. Franzki, J. Jersak, and M.A. Stephanov, Nucl. Phys. B 429 (1994) 431.
[11] S. Hands, A. Kocic, and J.B. Kogut, Ann. Phys. 224 (1993) 29.
[12] S.J. Hands, A. Kocic, J.B. Kogut, R.L. Renken, D.K. Sinclair, and K.C. Wang, Nucl. Phys. B413 (1994) 503.
[13] M.N. Barber, in Phase Transitions and Critical Phenomena, vol. 8, ed. by C. Domb and J. Lebowitz (Academic Press, New York, 1983).
[14] A.M. Ferrenberg and R.H. Swendsen, Phys. Rev. Lett. 61 (1988) 2635.
[15] K. Chen, A.M. Ferrenberg, and D.P. Landau, Phys. Rev. B48 (1993) 3249.
[16] Y. Nambu and G. Jona-Lasinio, Phys. Rev. 122 (1961) 345.
[17] K. Rajagopal, in Quark-Gluon Plasma, vol. 2, ed. by R. Hwa (World Scientific, 1995).
[18] D.J. Gross and A. Neveu, Phys. Rev. D 10 (1974) 3235.
[19] E. Witten, Nucl. Phys. B 145 (1978) 110.
[20] R. Pisarski, private communication.
[
"Moduli for Equivariant Vector Bundles of Rank Two on Smooth Toric Surfaces",
"Moduli for Equivariant Vector Bundles of Rank Two on Smooth Toric Surfaces",
"Moduli for Equivariant Vector Bundles of Rank Two on Smooth Toric Surfaces",
"Moduli for Equivariant Vector Bundles of Rank Two on Smooth Toric Surfaces"
] | [
"Markus Perling perling@mathematik.uni-kl.de \nDepartment of Mathematics\nUniversity of Kaiserslautern\nGermany\n",
"Markus Perling perling@mathematik.uni-kl.de \nDepartment of Mathematics\nUniversity of Kaiserslautern\nGermany\n"
] | [
"Department of Mathematics\nUniversity of Kaiserslautern\nGermany",
"Department of Mathematics\nUniversity of Kaiserslautern\nGermany"
] | [] | We give a complete classification of equivariant vector bundles of rank two over smooth complete toric surfaces and construct moduli spaces of such bundles. This note is a direct continuation of an earlier note where we developed a general description of equivariant sheaves on toric varieties. Here we give a first application of that description. | 10.1002/mana.200310137 | [
"https://arxiv.org/pdf/math/0205320v1.pdf"
] | 16,344,264 | math/0205320 | 00979e64978d15a6b09c20b9c4bbf69387db6c61 |
Moduli for Equivariant Vector Bundles of Rank Two on Smooth Toric Surfaces
May 2002 February 2002
Markus Perling perling@mathematik.uni-kl.de
Department of Mathematics
University of Kaiserslautern
Germany
Moduli for Equivariant Vector Bundles of Rank Two on Smooth Toric Surfaces
May 2002 February 2002
We give a complete classification of equivariant vector bundles of rank two over smooth complete toric surfaces and construct moduli spaces of such bundles. This note is a direct continuation of an earlier note where we developed a general description of equivariant sheaves on toric varieties. Here we give a first application of that description.
Introduction
This note is a direct continuation of [Per02] where, based on earlier work of Klyachko ([Kly90], [Kly91]), we have developed a formalism to describe equivariant sheaves on toric varieties in terms of families of vector spaces and filtrations. In [Per02] we also started to consider free resolutions of fine-graded modules over the homogeneous coordinate ring of a toric variety. In fact, filtrations and free resolutions of fine-graded modules over the homogeneous coordinate ring are in some sense dual notions. In this note we want to give first examples of such a duality for the case of equivariant vector bundles of rank 2 on smooth complete toric surfaces, which is the first nontrivial case one can consider. Given a smooth complete toric surface X which is determined by a fan ∆, we generalize a result of Kaneyama ([Kan75]) and we will show that every vector bundle E of rank 2 can be realized as the cokernel in a generalized equivariant Euler type short exact sequence (see Theorem 5.3):
$$0 \longrightarrow \mathcal O^{s-2} \xrightarrow{\ A\ } \bigoplus_{i=1}^{s} \mathcal O\Big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\Big) \longrightarrow E \longrightarrow 0.$$
The precise shape of this sequence is determined by certain combinatorial data associated to E which will be expressed in terms of a partition {Π i } 1≤i≤s of the set of rays in ∆. Fixing this combinatorial data allows us to consider families of equivariant torsion free sheaves of rank 2 on X by varying matrices A as in the sequence above. On the other hand, the above resolution of E is derived from its representation in terms of filtrations of a 2-dimensional vector space and indeed there is an immediate relation between the space of variations of A and the configuration space of filtrations describing E. Plan of the paper: After recalling in Sections 2 and 3 preliminaries on toric geometry and equivariant sheaves, we briefly review in Section 4 general results on equivariant sheaves from [Per02]. In Sections 5 and 6, we will use this description to construct and analyze resolutions for general equivariant locally free sheaves of rank 2. Section 7 is devoted to duality of configurations of points in the GIT setting which we will use in Section 8 to give a GIT-classification of equivariant vector bundles of rank 2 on smooth complete toric surfaces.
Acknowledgements: I want to express my gratitude to Prof. G. Trautmann for many discussions and valuable advice.
Toric Prerequisites
The prerequisites for toric geometry are the same as in [Per02] and we only want to recall briefly some notions and to fix notation for the rest of this work. For the general theory of toric varieties and standard notation we refer to the textbooks [Oda88] and [Ful93].
• All algebraic varieties will be defined over a fixed algebraically closed field k,
• $T \cong (k^*)^2$ denotes the 2-dimensional algebraic torus over k, $M \cong \mathbb Z^2$ its character group, and N the $\mathbb Z$-module dual to M; the canonical pairing between M and N is denoted $\langle\,.\,,\,.\,\rangle$,
• X always denotes a smooth complete toric surface, described by a fan ∆,
• cones in ∆ are denoted by small Greek letters ρ, σ, τ, etc.; the natural order among cones is denoted by τ < σ, and $\Delta(i) := \{\sigma\in\Delta \mid \dim\sigma = i\}$ is the set of all cones of fixed dimension i,
• elements of ∆(1) are called rays; the torus invariant Cartier divisor associated to the ray $\rho\in\Delta(1)$ is denoted $D_\rho$, and $n(\rho)\in N$ denotes the primitive lattice element spanning the ray ρ,
• $\check\sigma := \{m\in M_{\mathbb R} \mid \langle m, n\rangle \ge 0 \text{ for all } n\in\sigma\}$ is the cone dual to σ, and $\sigma_M := \check\sigma\cap M$ denotes the subsemigroup of M associated to σ,
• the affine T-invariant subset associated to $\sigma\in\Delta$ is denoted $U_\sigma \cong \operatorname{spec}(k[\sigma_M])$.
We will use quotient presentations $\tilde X \xrightarrow{\ \pi\ } X$, due to Cox ([Cox95]). We consider the affine space $k^{\Delta(1)}$ as a toric variety on which the torus $\tilde T = (k^*)^{\Delta(1)}$ acts, and denote $S = k[x_\rho \mid \rho\in\Delta(1)]$ the ring of regular functions over $k^{\Delta(1)}$. $\tilde X$ is the quasi-affine toric subvariety of $k^{\Delta(1)}$ whose complement in $k^{\Delta(1)}$ is described by the $\tilde T$-invariant ideal B in S, the so-called irrelevant ideal. The codimension of V(B) in $k^{\Delta(1)}$ is at least two, so that regular functions over $\tilde X$ extend to regular functions over $k^{\Delta(1)}$. A theorem of Cox states that there is a morphism $\tilde X \xrightarrow{\ \pi\ } X$ which is a categorical quotient with respect to the action of a diagonalizable subgroup G of $\tilde T$ (see [Cox95] for details). The actions of $\tilde T$ and G both induce gradings on S with respect to their character groups $X(\tilde T)$ and $X(G)$, which are isomorphic to $\mathbb Z^{\Delta(1)}$ and to the Chow group $A_{n-1}(X)$, respectively. Using this, one can consider homogeneous elements of S with respect to the $A_{n-1}(X)$-grading as global functions over X, and therefore S is called the homogeneous coordinate ring of X. Note that by the surjection $\mathbb Z^{\Delta(1)} \twoheadrightarrow A_{n-1}(X)$, $\mathbb Z^{\Delta(1)}$-homogeneity implies $A_{n-1}(X)$-homogeneity. We will call the $\mathbb Z^{\Delta(1)}$-grading of S the fine grading of S.

In our case of interest, where X is a complete surface, we will make use of the following explicit description of B. Let $\Delta(1) = \{\rho_0,\dots,\rho_{m-1}\}$ be the set of rays, counted clockwise by the cyclic group $\mathbb Z_m$. Then each 2-dimensional cone $\sigma\in\Delta$ is spanned by rays $\rho_k$ and $\rho_{k+1}$ for $k\in\mathbb Z_m$, and B is generated by the elements $x_{\hat\sigma}$, i.e. $B = \langle x_{\hat\sigma} \mid \sigma\in\Delta(2)\rangle$, where $x_{\hat\sigma} := \prod_{i=k+2}^{k-1} x_{\rho_i}$ (the product over all rays except $\rho_k$ and $\rho_{k+1}$, indices taken mod m).
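To make the combinatorics explicit, here is a minimal Python sketch (our own illustration; string monomials stand in for elements of S) listing the generators $x_{\hat\sigma}$ of B for a complete fan with m cyclically ordered rays:

```python
def irrelevant_ideal_generators(m):
    """Generators x_{sigma-hat} of the irrelevant ideal B for a complete
    fan with rays rho_0, ..., rho_{m-1} in cyclic (clockwise) order:
    the 2-cone spanned by rho_k, rho_{k+1} contributes the product of
    all variables except x_k and x_{k+1}."""
    gens = []
    for k in range(m):
        omit = {k, (k + 1) % m}
        gens.append("*".join(f"x{i}" for i in range(m) if i not in omit))
    return gens

# A surface with four rays (e.g. a Hirzebruch surface):
print(irrelevant_ideal_generators(4))
# -> ['x2*x3', 'x0*x3', 'x0*x1', 'x1*x2']
```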
Equivariant Sheaves
Let G be an algebraic group which acts on a variety X. Following [MFK94], §3, we call a sheaf E on X equivariant or G-linearized if for each g ∈ G considered as automorphism of X, there is an isomorphism Φ g : g * E ∼ = E such that the following diagram commutes for all g, g ′ ∈ G:
$$(g'g)^*E \;\cong\; g^*g'^*E \xrightarrow{\ g^*\Phi_{g'}\ } g^*E \xrightarrow{\ \Phi_g\ } E, \qquad \text{i.e.}\quad \Phi_{g'g} = \Phi_g \circ g^*\Phi_{g'}$$
under the canonical identification $(g'g)^*E \cong g^*g'^*E$.
We consider the case where G is the algebraic torus T and X is a smooth complete toric surface over k.
One way to describe torus equivariant sheaves over X is to consider fine-graded modules over the homogeneous coordinate ring S of X. The sheafification operation, which associates to each $A_{n-1}(X)$-graded S-module F a quasicoherent sheaf $\tilde F$ over X, maps fine-graded S-modules to equivariant sheaves over X. We will use the following facts (see [Per02] for detailed references):

Theorem 3.1: The map $F \mapsto \tilde F$ is a covariant additive exact functor from the category of finitely generated fine-graded S-modules to the category of coherent equivariant $\mathcal O_X$-modules. Moreover, every coherent equivariant sheaf is of the form $\tilde F$ for some finitely generated, fine-graded S-module F.

In particular, fine-graded free S-modules of rank 1, which are of the form $S(\mathbf n)$, where $\mathbf n = (n_\rho \mid \rho\in\Delta(1)) \in \mathbb Z^{\Delta(1)}$ and $S(\mathbf n)$ denotes the shift of degree by $\mathbf n$, precisely correspond to equivariant line bundles $\mathcal O(\mathbf n) := \mathcal O\big(\sum_{\rho\in\Delta(1)} n_\rho\cdot D_\rho\big)$ over X. Then an equivariant homomorphism of vector bundles
$$\bigoplus_{j=1}^{m}\mathcal O(\mathbf m_j) \xrightarrow{\ \tilde A\ } \bigoplus_{i=1}^{n}\mathcal O(\mathbf n_i)$$
can be described by an $n\times m$-matrix $A = (a_{ij})$ whose entries are monomials $a_{ij} = \alpha_{ij}\, x^{\mathbf m_j - \mathbf n_i}$, where $\alpha_{ij}\in k$ and $\alpha_{ij} = 0$ whenever $\mathbf m_j - \mathbf n_i \notin \mathbb N^{\Delta(1)}$.
∆-families and Filtrations
We recall from [Per02] some results on equivariant sheaves on toric varieties. For each σ ∈ ∆ we can define a preorder ≤ σ on M by setting m ≤ σ m ′ for m, m ′ ∈ M, iff m ′ − m ∈ σ M . In case that m ≤ σ m ′ but not m ′ ≤ σ m we write m < σ m ′ .
Definition 4.1: For a fixed σ let {E σ m } m∈M be a family of k-vector spaces. Let to each relation m ≤ σ m ′ a vector space homomorphism χ σ m,m ′ : E σ m −→ E σ m ′ be given such that χ σ m,m = 1 for all m ∈ M and χ σ m,m ′′ = χ σ m ′ ,m ′′ • χ σ m,m ′ for each triple m ≤ σ m ′ ≤ σ m ′′ . We denote such a family byÊ σ and call it a σ-family.
Any quasicoherent equivariant sheaf E over U σ gives rise to a σ-family by its isotypical decomposition which is induced by the action of T on Γ(U σ , E):
$$\Gamma(U_\sigma, E) = \bigoplus_{m\in M} \Gamma(U_\sigma, E)_m.$$
One just sets E σ m = Γ(U σ , E) m and χ σ m,m ′ the monomial multiplication by χ(m ′ − m) which is canonically induced by the ring structure. By [Per02], Prop. 5., the category of equivariant quasicoherent sheaves over U σ is equivalent to the category of σ-families. This way, there exists a natural process of gluing σ-families which mirrors the gluing of quasicoherent sheaves. A collection {Ê σ } σ∈∆ of σ-families which glues is called a ∆family and denotedÊ ∆ (for details see [Per02]). From the functorial properties of gluing it follows that there exists an equivalence of categories between ∆-families and equivariant quasicoherent sheaves over X.
Going one step further, given a ∆-family $\hat E^\Delta$, for any $\sigma\in\Delta$ we can consider the σ-family $\hat E^\sigma$ as a directed system, and we can form the direct limit $\varinjlim \hat E^\sigma =: E^\sigma$. Gluing then corresponds to vector space isomorphisms $E^\sigma \xrightarrow{\ \cong\ } E^\tau$ whenever τ < σ. In particular, for each $\sigma\in\Delta$ there is an isomorphism $E^\sigma \xrightarrow{\ \cong\ } E^0$, where 0 is the minimal cone in ∆. If $\hat E^\Delta$ defines a coherent sheaf E, then the $E^\sigma$ are finite-dimensional of dimension equal to the rank of E. If we restrict E to the dense torus T, then $E|_T$ is locally free and we can think of $E^0$ as a general fibre of the associated vector bundle over T.
Let us assume thatÊ ∆ represents a torsion free, coherent equivariant sheaf over X. In that case, all the morphisms χ σ m,m ′ are injective and the E σ m become subvector spaces of E σ such that morphisms χ σ m,m ′ transform to inclusions E σ m ⊂ E σ m ′ ⊂ E σ whenever m ≤ σ m ′ . This allows us to identify all the vector spaces E σ m as subvector spaces of the limit vector space E 0 , and the category of torsion free, coherent equivariant sheaves over X is equivalent to the category of embedded ∆-families. This category can be characterized by the notion of multifiltered vector spaces, for whose somewhat lengthy definition we refer to [Per02], definition 5.20. In this paper we will only need explicitly the special case of reflexive sheaves. In that case, for σ ∈ ∆ we can reconstruct a σ-familyÊ σ from the ρ-families associated to the 1-dimensional cones ρ of σ, via E σ m = ρ∈σ(1) E ρ m . In [Per02] it is shown that a ρ-family for ρ ∈ ∆(1) is equivalent to a full filtration of the vector space E 0 :
$$\cdots \subset E^\rho(i) \subset E^\rho(i+1) \subset \cdots \subset E^0,$$
where 'full' means that there exist numbers i 1 < i 2 such that E ρ (i) = 0 for i < i 1 and E ρ (i) = E 0 for i > i 2 . There is an identification of E ρ m with E ρ ( m, n(ρ) ), where n(ρ) is the primitive lattice vector of the ray ρ.
By the functoriality of direct limits an equivariant homomorphism of torsion free sheaves E −→ E ′ is equivalent to a vector space homomorphism E 0 −→ (E ′ ) 0 compatible with the multifiltrations of E 0 and (E ′ ) 0 . We summarize all this in:
Theorem 4.2 ([Per02], Theorem 5.22): let X be a toric variety. The category of equivariant reflexive sheaves over X is equivalent to the category of vector spaces with full filtrations associated to each ray in ∆(1) and whose morphisms are vector space homomorphisms which are compatible with the associated ∆-families.
In particular, if X is a smooth toric surface, then all reflexive sheaves are locally free, and thus the category of filtrations is equivalent to that of equivariant locally free sheaves.
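To make the filtration picture concrete, the following Python sketch (our own toy illustration with hypothetical data, not code accompanying [Per02]) encodes a rank-2 bundle by triples as in Section 5 and recovers a σ-family term as the intersection of two filtration steps:

```python
import numpy as np

# Hypothetical toy data: each ray carries a triple (i1, i2, E_rho);
# the filtration is 0 below i1, the line E_rho on [i1, i2), and
# E^0 = k^2 from i2 on (normalized with i2 = 0 as in Section 5).
filtration = {
    "rho0": (-2, 0, np.array([1.0, 0.0])),
    "rho1": (-1, 0, np.array([1.0, 1.0])),
}

def step(ray, i):
    """Row basis of the filtration step E^rho(i) inside E^0 = k^2."""
    i1, i2, line = filtration[ray]
    if i < i1:
        return np.zeros((0, 2))
    if i < i2:
        return line[None, :]
    return np.eye(2)

def intersect(U, V, tol=1e-12):
    """Intersection of two subspaces of k^2 given by row bases."""
    if len(U) == 0 or len(V) == 0:
        return np.zeros((0, 2))
    if len(U) == 2:
        return V
    if len(V) == 2:
        return U
    # both are lines: they agree iff the 2x2 determinant vanishes
    if abs(U[0, 0] * V[0, 1] - U[0, 1] * V[0, 0]) < tol:
        return U
    return np.zeros((0, 2))

# E^sigma_m = intersection of the two ray filtrations, here evaluated
# at pairing values (-1, -1): two distinct lines meet in 0.
print(intersect(step("rho0", -1), step("rho1", -1)).shape)   # (0, 2)
```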
Partitions and Resolutions of Equivariant Bundles of Rank Two
In this section we want to construct resolutions for general equivariant vector bundles of rank 2 on toric surfaces. Let E be such a bundle on X given by filtrations of a 2-dimensional vector space $E^0$. Any such filtration $E^\rho(i)$ can be described as follows. There are two integers $i_1^\rho \le i_2^\rho$ such that
$$E^\rho(i) = \begin{cases} 0 & \text{for } i < i_1^\rho,\\ E_\rho & \text{for } i_1^\rho \le i < i_2^\rho,\\ E^0 & \text{for } i \ge i_2^\rho,\end{cases}$$
where $E_\rho$ is a 1-dimensional subvector space of $E^0$. Thus in the case that $i_1^\rho < i_2^\rho$ a filtration can irredundantly be described by an ordered triple $(i_1^\rho, i_2^\rho, E_\rho)$. If $i_1^\rho = i_2^\rho$, the filtration is degenerate in the sense that at $i_1^\rho$ the dimension jumps by two and no $E_\rho$ occurs.
Twisting E by an equivariant line bundle $\mathcal O(\mathbf n)$ for some $\mathbf n = (n_\rho) \in \mathbb Z^{\Delta(1)}$ has the effect that the numbers $i_1^\rho, i_2^\rho$ are shifted to $i_1^\rho + n_\rho$, $i_2^\rho + n_\rho$ for all ρ (see [Per02], §6). In our further considerations such twists do not play any role and we may assume for simplicity that $i_1^\rho = -i_\rho$ and $i_2^\rho = 0$ for nonnegative integers $i_\rho$. In [Per02] we have obtained the following result which we are going to generalize:
Theorem 5.1 ([Per02], Theorem 6.1): Let X be a complete smooth toric surface and let E be an equivariant rank 2-vector bundle over X determined by filtrations
$\{(-i_\rho, 0, E_\rho)\}_{\rho\in\Delta(1)}$. If $i_\rho > 0$ for all $\rho\in\Delta(1)$ and $E_\rho \neq E_{\rho'}$ whenever there is a $\sigma\in\Delta(2)$ such that $\rho, \rho'\in\sigma(1)$, then there exists an Euler type equivariant short exact sequence:
$$0 \longrightarrow \mathcal O^{n-2} \xrightarrow{\ A\ } \bigoplus_{\rho\in\Delta(1)} \mathcal O(i_\rho\cdot D_\rho) \longrightarrow E \longrightarrow 0,$$
where $n = \#\Delta(1)$ and $A = (\alpha_{\rho j}\cdot x_\rho^{i_\rho})$, $1\le j\le n-2$, $\alpha_{\rho j}\in k$, is an $n\times(n-2)$-matrix of monomials.
This theorem yields resolutions for generic equivariant vector bundles E of rank 2. In order to obtain a complete classification and moduli, one also has to consider the two ways in which a set of filtrations may degenerate: namely, the case where $i_\rho = 0$ for some $\rho\in\Delta(1)$, and the case where $E_\rho = E_{\rho'}$ for two adjacent rays ρ and ρ′. In order to do this, we consider partitions of subsets $\Pi\subset\Delta(1)$ as follows. A partition of Π is a collection $\mathcal P = \{\Pi_1,\dots,\Pi_s\}$ of disjoint subsets of Π such that $\Pi = \bigcup_{i=1}^{s}\Pi_i$. Among the partitions of Π we define a partial order ≤ as follows: given two partitions
P = {Π 1 , . . . , Π s } and P ′ = {Π ′ 1 , . . . , Π ′ s ′ } we say P ≤ P ′ iff P is a refinement of P ′ , i.e. there exists a map {1, . . . , s} −→ {1, . . . s ′ }, i → j such that Π i ⊂ Π ′ j .
We call the map given by a refinement
π : P −→ P ′ , π(Π i ) = Π ′ j ,
projection map, and any map s : P ′ −→ P such that π • s is the identity a section with respect to π. The partial order ≤ has unique minimal and maximal elements, namely the partitions {{ρ}} ρ∈Π and {Π}.
Partitions associated to an equivariant bundle of rank two: For an equivariant bundle E of rank 2 we denote by Π = Π(E) ⊂ ∆(1) the subset of those ρ for which i ρ > 0. We assume that the rays {ρ 0 , . . . ρ m−1 } = Π are enumerated clockwise with respect to their circular order in the fan by the cyclic group Z m . On Π we can define a partition as follows.
Definition 5.2: Let Π 1 , . . . Π s be the unique partition of Π with the following properties:
(i) if $\rho_i, \rho_j\in\Pi_k$ then $E_{\rho_i} = E_{\rho_j}$,
(ii) if, for some $1\le k < s$, $\rho_i\in\Pi_k$ and $\rho_j\in\Pi_{k+1}$, or if $\rho_i\in\Pi_s$ and $\rho_j\in\Pi_1$, then $E_{\rho_i} \neq E_{\rho_j}$,
(iii) if Π k contains ρ i and ρ i+l then it contains the interval ρ i+1 , . . . , ρ i+l−1 or the interval ρ i+l+1 , . . . , ρ i−1 or both.
One can think of this partition as the set of maximal intervals in the circularly ordered set Π on which the E ρ coincide. We assume that the Π i are enumerated clockwise. We denote the partition so defined P E and call it the coarse partition of Π with respect to E.
In the situation of Theorem 5.1, the coarse partition P E of ∆(1) of a bundle E is precisely the partition {{ρ}} ρ∈∆(1) . Using the definition of coarse partitions, we can extend Theorem 5.1 to the case of any equivariant bundle of rank 2 on X.
Theorem 5.3: Let E be an arbitrary equivariant vector bundle of rank 2 on a smooth complete toric surface X, defined by filtrations
$\{(-i_\rho, 0, E_\rho)\}_{\rho\in\Delta(1)}$ of a two-dimensional vector space $E^0$. Let $\Pi = \{\rho\in\Delta(1) \mid i_\rho > 0\}$ and let $\mathcal P_E = \{\Pi_1,\dots,\Pi_s\}$ be the coarse partition of Π with respect to E. If s > 2, then there exists a short exact sequence
$$0 \longrightarrow \mathcal O^{s-2} \xrightarrow{\ A\ } \bigoplus_{i=1}^{s} \mathcal O\Big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\Big) \longrightarrow E \longrightarrow 0,$$
where A is a matrix of monomials whose exponents are determined by the partition P E . Moreover, the (s − 2)-minors A i,i+1 of A, 1 ≤ i < s, which consist of all rows of A except the i-th and the (i + 1)-st, are of full rank. If s ≤ 2, then E splits.
Remark 5.4: The proof explains the precise relationship between A and the filtrations associated to E, see also Proposition 6.1.
Proof. Let first s ≤ 2. Then we can decompose the vector space $E^0$ into a direct sum $E^0 = E^1\oplus E^2$ and the filtrations decompose into direct sums of filtrations for $E^1$ and $E^2$, respectively. Consequently, the associated bundle E splits into a direct sum of line bundles. Now assume that s > 2. Consider the Cox quotient presentation $\tilde X \to X$ and let $\{n_\rho\}_{\rho\in\Delta(1)}$ be the standard basis of the lattice $\tilde N \cong \mathbb Z^{\Delta(1)}$. Let $\{i_1,\dots,i_{s-2}\}\subset\{1,\dots,s\}$; then we set
$$x_{\hat\Pi_i} := \prod_{\rho\in\Pi_i} x_\rho^{i_\rho}, \qquad x_{\hat\Pi_{i_1\dots i_{s-2}}} := \prod_{k=1}^{s-2} x_{\hat\Pi_{i_k}}.$$
We can define a morphism of fine-graded free S-modules
$$0 \longrightarrow S^{s-2} \xrightarrow{\ A\ } \bigoplus_{i=1}^{s} S\Big(\sum_{\rho\in\Pi_i} i_\rho\cdot n_\rho\Big)$$
which is an s × (s − 2)-matrix A with monomial entries:
$$A = (\alpha_{ij}\cdot x_{\hat\Pi_i}),$$
where i runs from 1 to s and j from 1 to s − 2. We require that, for i = 1, . . . , s − 1, the (s − 2)-minors A i,i+1 of A which consist of all rows of A except the i-th and the (i + 1)-st, have full rank over S. After applying the sheafification functor˜to this sequence we obtain a short exact sequence of sheaves
$$0 \longrightarrow \mathcal O^{s-2} \xrightarrow{\ A\ } \bigoplus_{i=1}^{s} \mathcal O\Big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\Big) \longrightarrow Q \longrightarrow 0,$$
where by abuse of notation we write A instead of $\tilde A$. The matrix A defines an equivariant injective morphism of coherent sheaves, but it is not necessarily an injective vector bundle homomorphism. This is the case if and only if the rank of A(x) equals s − 2 at all points $x\in\tilde X$. This in turn means that A is an inclusion of vector bundles if and only if there exists a k > 0 such that $B^k\subset\mathrm{Fitt}_2(A)$, where B is the irrelevant ideal associated to the quotient presentation $\tilde X \to X$. If this is the case, then the cokernel Q is a vector bundle as well.
Let $\{i_1,\dots,i_{s-2}\}\subset\{1,\dots,s\}$ and let $A_{i_1\dots i_{s-2}}$ be the $(s-2)\times(s-2)$-minor of A which contains the rows corresponding to $\{i_1,\dots,i_{s-2}\}$. Moreover, let $A' := (\alpha_{i,j})$ be the matrix of coefficients of A and $(A')_{i_1\dots i_{s-2}}$ the corresponding minor. The second Fitting ideal $\mathrm{Fitt}_2(A)$ of A is generated by the determinants of all the $A_{i_1\dots i_{s-2}}$:
$$\mathrm{Fitt}_2(A) = \big\langle \det A_{i_1\dots i_{s-2}}\big\rangle, \qquad \det A_{i_1\dots i_{s-2}} = \det (A')_{i_1\dots i_{s-2}}\cdot x_{\hat\Pi_{i_1\dots i_{s-2}}}.$$
Thus Fitt 2 (A) is a monomial ideal generated by the xΠ i 1 ...i s−2 . To show that B k ⊂ Fitt 2 (A) for some k > 0, it suffices to show that for each generator xσ of B there exists a generator of Fitt 2 (A) which divides some power of xσ. Without loss of generality we may assume that i ρ = 1 for all ρ ∈ Π. Then the problem is equivalent to the question whether a given xσ with σ(1) = {ρ k , ρ k+1 } is divided by some xΠ i 1 ...i s−2 which in turn is equivalent to finding i 1 , . . . i s−2 such that Π i 1 ∪ · · · ∪ Π i s−2 is contained in the interval {ρ k+2 . . . ρ k−1 } (with indices modulo m). But because the complement of this interval is {ρ k , ρ k+1 }, this complement intersects at most two of the intervals Π i , say, after renumbering, Π s−1 and Π s . Thus we choose (i 1 , . . . i s−2 ) = (1, . . . , s − 2) which is nonempty because s > 2. Moreover, Π 1 ∪ · · · ∪ Π s−2 , is contained in {ρ k+2 , . . . , ρ k−1 }, and so xΠ i 1 ,...,i s−2 divides xσ. Hence, chosing a matrix A as above ensures that the quotient Q of A is locally free. Now we have to show that any E with associated coarse partition P E can be resolved this way. We do this by explicitly writing down the filtrations for O s−2 and s i=1 O( ρ∈Π i i ρ · D ρ ) and by constructing with their help a homomorphism of the associated limit vector spaces which we will lift to a morphism of locally free sheaves. Denote F 0 the (s − 2)-dimensional filtered k-vector space associated to the vector bundle O s−2 , and G 0 the s-dimensional k-vector space associated to s i=1 O( ρ∈Π i i ρ · D ρ ). We will identify G 0 with k P E ∼ = k s and label its standard basis e 1 , . . . , e s . The filtrations are:
$$F^\rho(i) = \begin{cases} 0 & \text{for } i < 0,\\ F^0 & \text{otherwise,}\end{cases}
\qquad
G^\rho(i) = \begin{cases} 0 & \text{for } i < -i_\rho,\\ k\cdot e_i & \text{for } -i_\rho \le i < 0 \text{ and } \rho\in\Pi_i,\\ G^0 & \text{otherwise.}\end{cases}$$
The matrix A induces a vector space homomorphism from F 0 to G 0 which can be naturally identified with the matrix A ′ . We can define filtrations for the quotient vector space E 0 := G 0 /F 0 simply by taking the quotient filtrations
E ρ (i) = G ρ (i)/F ρ (i)
with respect to A ′ . These filtrations are of the form
$$E^\rho(i) = (-i_\rho,\, 0,\, k\cdot\bar e_j),$$
where $\rho\in\Pi_j$ and $\bar e_j$ is the image of $e_j$ in $E^0$. If we assume that $B^k\subset\mathrm{Fitt}_2(A)$ for some k > 0, these filtrations become in a natural way the filtrations associated to the cokernel E of A. On the other hand, if we define a homomorphism from $G^0$ to some 2-dimensional k-vector space $E^0$ by fixing the images $\bar e_j \neq 0$ of the basis vectors $e_j$, $j = 1,\dots,s$, of $G^0$, we immediately obtain a homomorphism of filtered vector spaces whose kernel is a filtered vector space $F^0 \overset{A'}{\hookrightarrow} G^0$. The corresponding matrix A with monomial entries then defines a sheaf homomorphism $0\to\mathcal O^{s-2}\xrightarrow{A}\bigoplus_{i=1}^{s}\mathcal O\big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\big)$. As we have seen before, the cokernel of A is locally free if and only if $\det(A)_{i,i+1}\neq 0$ for $i = 1,\dots,s-1$ and $\det(A)_{s,1}\neq 0$. Now it is a lemma from linear algebra that $\bar e_i$ and $\bar e_{i+1}$ are linearly independent if and only if $\det(A')_{i,i+1}\neq 0$.
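The nondegeneracy condition on the adjacent minors is easy to test numerically; a minimal sketch (our own illustration) checks all cyclically adjacent (s−2)-minors of a coefficient matrix A′:

```python
import numpy as np

def adjacent_minors_nondegenerate(Aprime, tol=1e-12):
    """Theorem 5.3 condition on the coefficient matrix A' (s x (s-2)):
    for every cyclically adjacent pair of rows (i, i+1), the minor
    obtained by deleting those two rows must have nonzero determinant."""
    s = Aprime.shape[0]
    assert Aprime.shape == (s, s - 2), "expect an s x (s-2) matrix"
    for i in range(s):
        rows = [r for r in range(s) if r not in (i, (i + 1) % s)]
        if abs(np.linalg.det(Aprime[rows])) < tol:
            return False
    return True

# Random coefficients are almost surely admissible (here s = 5):
rng = np.random.default_rng(0)
print(adjacent_minors_nondegenerate(rng.standard_normal((5, 3))))
```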
More on Partitions and Resolutions
Let us fix numbers i ρ > 0 for ρ ∈ Π ⊂ ∆(1) and a partition P of Π. In this section we consider short exact sequences of type
$$0 \longrightarrow \mathcal O^{s-2} \xrightarrow{\ A\ } \bigoplus_{i=1}^{s} \mathcal O\Big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\Big) \longrightarrow E \longrightarrow 0,$$
where A is given by a monomial matrix. By Theorem 5.3, there are conditions such that the cokernel E is a locally free sheaf whose associated coarse partition P E coincides with P. In general, if A is arbitrary and has just maximal rank, we have the following as an immediate corollary from the constructions of the previous section:
Proposition 6.1: Fix a set of numbers $I := \{i_\rho \ge 0\}_{\rho\in\Delta(1)}$, let $\Pi = \Pi_I = \{\rho \mid i_\rho > 0\}\subset\Delta(1)$ and let $\mathcal P = \{\Pi_1,\dots,\Pi_s\}$, $s\ge 2$, be a partition of Π. Let $E = E(I, \mathcal P, A)$ be a sheaf defined by a short exact sequence
$$0 \longrightarrow \mathcal O^{s-2} \xrightarrow{\ A\ } \bigoplus_{i=1}^{s} \mathcal O\Big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\Big) \longrightarrow E \longrightarrow 0.$$
Then E is a torsion free sheaf of rank 2 over X and we can consider the short exact sequence induced on the limit vector spaces
$$0 \longrightarrow k^{s-2} \xrightarrow{\ A^0\ } k^{\mathcal P} \xrightarrow{\ \check A^0\ } E^0 \longrightarrow 0,$$
with $\check A^0 = (A_1,\dots,A_s)$ a $2\times s$-matrix.
If E is locally free, then P ≤ P E is a refinement of the coarse partition associated to E, and the filtrations for E are given by
$$\{(-i_\rho,\, 0,\, \langle A_i\rangle) \mid \rho\in\Pi_i\}_{\Pi_i\in\mathcal P},$$
where $\langle A_i\rangle$ denotes the 1-dimensional subvector space of $E^0$ spanned by the i-th column of $\check A^0$.
Remark 6.2: Observe that for A and E as in Proposition 6.1 with E locally free, and for any refinement $\{\Pi'_1,\dots,\Pi'_{s'}\} = \mathcal P' \le \mathcal P$ with corresponding projection π, we can write the filtrations as $\{(-i_\rho, 0, \langle A_{\pi(i)}\rangle) \mid \rho\in\Pi'_i\}_{\Pi'_i\in\mathcal P'}$. Using the fact that each torsion free sheaf E embeds into its bidual, $0\to E\to E^{\vee\vee}$, we can now completely describe torsion free equivariant sheaves of rank 2 over X without explicitly considering ∆-families:
Theorem 6.3: Let $E' = E'(I, \mathcal P', B)$ be a cokernel
$$0 \longrightarrow \mathcal O^{s'-2} \xrightarrow{\ B\ } \bigoplus_{i=1}^{s'} \mathcal O\Big(\sum_{\rho\in\Pi'_i} i_\rho\cdot D_\rho\Big) \longrightarrow E' \longrightarrow 0,$$
and let $\check B^0$ be defined by
$$0 \longrightarrow k^{s'-2} \xrightarrow{\ B^0\ } k^{\mathcal P'} \xrightarrow{\ \check B^0\ } k^2 \longrightarrow 0.$$
Let then E be the bundle defined by the filtrations $\{(-i_\rho, 0, E_\rho)\}_{\rho\in\Pi}$ associated to $\check B^0$ by
$$E_\rho = \begin{cases} 0 & \text{if } \rho\notin\Pi,\\ \langle B_i\rangle & \text{if } \rho\in\Pi'_i.\end{cases}$$
Then $E\cong E'^{\vee\vee}$, $\mathcal P'\le\mathcal P_E$, and we have an exact diagram
$$\begin{array}{ccccccc}
0 \to & \mathcal O^{s'-2} & \xrightarrow{\ B\ } & \bigoplus_{i=1}^{s'}\mathcal O\big(\sum_{\rho\in\Pi'_i} i_\rho D_\rho\big) & \xrightarrow{\ \check B\ } & E' & \to 0\\
 & \downarrow & & \big\downarrow{\scriptstyle\pi} & & \big\downarrow & \\
0 \to & \mathcal O^{s-2} & \xrightarrow{\ A\ } & \bigoplus_{i=1}^{s}\mathcal O\big(\sum_{\rho\in\Pi_i} i_\rho D_\rho\big) & \xrightarrow{\ \check A\ } & E & \to 0\\
 & & & \big\downarrow & & \big\downarrow & \\
 & & 0 \to & C & = & C & \to 0
\end{array}$$
The cokernel sheaf C is a skyscraper sheaf whose support is contained in the set of 0-dimensional orbits of X. More precisely,
$$\mathrm{supp}(C) = \dot{\bigcup_{\sigma\in\delta(2)}}\ \mathrm{orb}(\sigma),$$
where $\delta(2)\subset\Delta(2)$ is the set of cones $\sigma\in\Delta(2)$ such that for $\{\rho_i, \rho_{i+1}\} = \sigma(1)$ it is true that $\rho_i\in\Pi'_k$ and $\rho_{i+1}\in\Pi'_l$ for some $k\neq l$ with $\pi(\Pi'_k) = \pi(\Pi'_l)$.
Proof. Denote $\mathcal P := \mathcal P_E$ and let $\mathcal P'\le\mathcal P$ be any refinement with projection π. Using a section $t : \mathcal P \to \mathcal P'$ of π we fix a choice of elements in the preimage of π. We define a matrix $A^0 := (B_{t(1)},\dots,B_{t(s)})$ and $\bar\pi : k^{\mathcal P'}\to k^{\mathcal P}$ the morphism induced by π over k. This way we obtain in the category of k-vector spaces a commutative diagram
$$\begin{array}{ccccccccc}
0 & \to & k^{s'-2} & \xrightarrow{\ B^0\ } & k^{\mathcal P'} & \xrightarrow{\ \check B^0\ } & E^0 & \to & 0\\
 & & \downarrow & & \big\downarrow{\scriptstyle\bar\pi} & & \big\downarrow{\scriptstyle\mathrm{id}^0} & & \\
0 & \to & k^{s-2} & \xrightarrow{\ A^0\ } & k^{\mathcal P} & \xrightarrow{\ \check A^0\ } & E^0 & \to & 0
\end{array}$$
where $\mathrm{id}^0$ is the identity homomorphism on $E^0$ and $A^0$ the kernel homomorphism of $\check A^0$. The morphisms in the left square of the diagram can immediately be lifted to morphisms of locally free sheaves by considering them as matrices of coefficients of the entries of matrices of monomials. So we obtain the diagram
$$\begin{array}{ccccccccc}
0 & \to & \mathcal O^{s'-2} & \xrightarrow{\ B\ } & \bigoplus_{i=1}^{s'}\mathcal O\big(\sum_{\rho\in\Pi'_i} i_\rho D_\rho\big) & \xrightarrow{\ \check B\ } & E' & \to & 0\\
 & & \downarrow & & \big\downarrow{\scriptstyle\pi} & & \big\downarrow & & \\
0 & \to & \mathcal O^{s-2} & \xrightarrow{\ A\ } & \bigoplus_{i=1}^{s}\mathcal O\big(\sum_{\rho\in\Pi_i} i_\rho D_\rho\big) & \xrightarrow{\ \check A\ } & E & \to & 0
\end{array}$$
where we interpret the matrices $\check A^0$ and $\check B^0$ as sheaf homomorphisms. The injectivity of the homomorphism $E'\to E$ is an immediate consequence of the fact that after restriction to the open sets $U_\rho$, $\rho\in\Delta(1)$, it induces the identity homomorphism. It follows that the cokernel C is a skyscraper sheaf whose support must be contained in the set of 0-dimensional orbits of X, and its description is immediate.
Duality for Configuration Spaces of Points in Projective Spaces
Let m < n, T n ∼ = (k * ) n the n-dimensional algebraic torus, GL m the group of automorphisms of k m and denote G := GL m ×T n , which is a reductive group. Denote M n,m the space Hom k (k m , k n ) of n×m-matrices over k and let G act on M n,m by (g, t).A := t•A•g −1 .
We first want to consider the actions of the two subgroups $\mathrm{GL}_m$ and $T^n$ of G separately. Because the representations of $\mathrm{GL}_m$ and $T^n$ in $\mathrm{GL}(M_{n,m})$ both contain the homotheties, their actions induce actions on the projective space $\mathbb P M_{n,m}$ and linearizations of the ample line bundle $\mathcal O_{\mathbb P M_{n,m}}(1)$, so that we are able to perform GIT-quotients of $\mathbb P M_{n,m}$ by $\mathrm{GL}_m$ and $T^n$, respectively. $\mathrm{GL}_m$ acts from the right on the matrices $M_{n,m}$, and the set of semistable points in $\mathbb P M_{n,m}$ is precisely the set of points represented by matrices which have maximal rank m:
$$\mathbb P M^{ss}_{n,m}(\mathrm{GL}_m) = \{\langle A\rangle \mid \operatorname{rank} A = m\}.$$
Furthermore, $\mathrm{GL}_m$ acts freely on this set, so that $\mathbb P M^{s}_{n,m}(\mathrm{GL}_m) = \mathbb P M^{ss}_{n,m}(\mathrm{GL}_m)$, and there exists the geometric quotient $\mathbb P M^{ss}_{n,m}(\mathrm{GL}_m)/\!/\mathrm{GL}_m \cong \mathrm{Gr}(m,n)$, where $\mathrm{Gr}(m,n)$ is the Grassmannian of m-dimensional linear subspaces of $k^n$. Similarly, with help of the eigenspace decomposition of the left action of $T^n$ on $M_{n,m}$, it is easy to see that $\mathbb P M^{ss}_{n,m}(T^n) = \{\langle A\rangle \mid \text{no row of } A \text{ is zero}\}$. $T^n$ acts freely on this set, so that stable and semistable points of $\mathbb P M_{n,m}$ coincide and we obtain a geometric quotient $\mathbb P M^{ss}_{n,m}(T^n)/T^n \cong (\mathbb P^{m-1})^n$ which is given by the map $\langle A\rangle \mapsto (\langle A_1\rangle,\dots,\langle A_n\rangle)$, where $A_i$ denote the row vectors of the matrix A. The action of the group G descends to actions of the groups $G/\mathrm{GL}_m \cong T^n$ on $\mathrm{Gr}(m,n)$ and $G/T^n \cong \mathrm{GL}_m$ on $(\mathbb P^{m-1})^n$, respectively. Both actions are textbook examples from GIT and there are the following criteria for stability:

Proposition 7.1 ([MFK94], Proposition 4.3):
1. An n-tuple $(p_1,\dots,p_n)$ of points in $(\mathbb P^{m-1})^n$ is (semi-)stable with respect to the diagonal action of $\mathrm{GL}_m$ if and only if for every proper linear subspace L of $k^m$
$$\#\{i \mid p_i\in L\} < \tfrac{n}{m}\dim L \tag{1}$$
(respectively ≤).
2. Consider the action of $\mathrm{GL}_n$ on $\mathrm{Gr}(m,n)$. Then a point $A\in\mathrm{Gr}(m,n)$ is (semi-)stable with respect to this action if and only if, for every proper linear subspace L of $k^n$,
$$\dim(A\cap L) < \tfrac{m}{n}\dim L \tag{2}$$
(respectively ≤).

We need to modify the second statement only slightly for the case of the action of a maximal subtorus of $\mathrm{GL}_n$ on $\mathrm{Gr}(m,n)$:
Corollary 7.2: Consider the action of a maximal subtorus T n of GL n on Gr(m, n). Then a point A ∈ Gr(m, n) is (semi-)stable with respect to this action if and only if, for every proper linear subspace L of k n which is spanned by eigenspaces of the action of T n on k n , inequality (2) holds.
These results imply that the preimages of $(\mathbb P^{m-1})^{n,ss}(\mathrm{GL}_m)$ and $\mathrm{Gr}(m,n)^{ss}(T^n)$ in $\mathbb P M_{n,m}$ coincide, and we denote this set by $\mathbb P M^{o}_{n,m}$. Then the sets $(\mathbb P^{m-1})^{n,ss}(\mathrm{GL}_m)$ and $\mathrm{Gr}(m,n)^{ss}(T^n)$ both are geometric quotients of $\mathbb P M^{o}_{n,m}$, and their quotients $\mathrm{Gr}(m,n)^{ss}(T^n)/\!/T^n$ and $(\mathbb P^{m-1})^{n,ss}(\mathrm{GL}_m)/\!/\mathrm{GL}_m$ are good quotients of $\mathbb P M^{o}_{n,m}$, as each is a good quotient of a good quotient. By the universal property of good quotients, these two spaces coincide with the good quotient $\mathbb P M^{o}_{n,m}/\!/G$, and thus are isomorphic. In particular, there is a commutative diagram consisting of good quotients. We want to extend this correspondence using the well-known isomorphism $\mathrm{Gr}(m,n)\cong\mathrm{Gr}(n-m,n)$, which can be interpreted as saying that an $n\times m$-matrix A of rank m representing a point in $\mathrm{Gr}(m,n)$ is mapped to an $(n-m)\times n$-matrix $\check A$ representing a point in $\mathrm{Gr}(n-m,n)$ such that both matrices fit into a short exact sequence
$$0 \longrightarrow k^m \xrightarrow{\ A\ } k^n \xrightarrow{\ \check A\ } k^{n-m} \longrightarrow 0.$$
This correspondence is compatible with the action of the torus T n on both sides: Lemma 7.3: Let T n be a maximal subtorus of GL n . Consider the actions of T n on Gr(m, n) and Gr(n − m, n), induced by its natural actions on k n and the dual vector space (k n )ˇ, respectively. Then the canonical isomorphism between Gr(m, n) and Gr(n − m, n), which is induced by the canonical isomorphism between k n and (k n )ˇ, is T n -equivariant and maps the (semi-)stable points as specified in Proposition 7.1, to (semi-)stable points.
There exists a natural isomorphism
$$\mathrm{Gr}(m,n)^{ss}(T^n)/\!/T^n \;\cong\; \mathrm{Gr}(n-m,n)^{ss}(T^n)/\!/T^n.$$
Proof. A little bit of linear algebra shows that for $A\in\mathrm{Gr}(m,n)$ and for all linear subspaces $L\subset k^n$ the following holds:
$$\dim(A\cap L) < \tfrac{m}{n}\dim L \quad\text{if and only if}\quad \dim(\check A\cap\check L) < \tfrac{n-m}{n}\dim\check L \quad(\text{respectively } \le),$$
where $\check A$ and $\check L$ are the annihilators of A and L in $(k^n)^{\vee}$.
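Numerically, the assignment $A\mapsto\check A$ is just a left-kernel computation; the following sketch (our own illustration over the real numbers) constructs the dual matrix and verifies the exactness of the resulting sequence:

```python
import numpy as np
from scipy.linalg import null_space

def dual_matrix(A):
    """For an n x m matrix A of rank m, return an (n-m) x n matrix
    A_check with A_check @ A = 0 and rank n-m, so that
    0 -> k^m --A--> k^n --A_check--> k^{n-m} -> 0 is exact."""
    n, m = A.shape
    A_check = null_space(A.T).T          # rows span the left kernel of A
    assert A_check.shape == (n - m, n)
    assert np.allclose(A_check @ A, 0.0)
    return A_check

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))          # a point of Gr(3, 5)
print(dual_matrix(A).shape)              # (2, 5): a point of Gr(2, 5)
```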
Because of this we can extend our correspondences to the following diagram:
[Diagram: $\mathbb P M^{o}_{n,m}$ maps down to its two quotients $(\mathbb P^{m-1})^{n,ss}$ and $\mathrm{Gr}(m,n)^{ss}$, $\mathbb P M^{o}_{n-m,n}$ maps down to $\mathrm{Gr}(n-m,n)^{ss}$ and $(\mathbb P^{n-m-1})^{n,ss}$, and the middle terms are identified via $\mathrm{Gr}(m,n)^{ss}\cong\mathrm{Gr}(n-m,n)^{ss}$.]

Moduli of Equivariant Sheaves

Let us fix a tuple of nonnegative numbers $I = (i_\rho \mid \rho\in\Delta(1))$ and a partition $\mathcal P = \{\Pi_1,\dots,\Pi_s\}$ of the set $\Pi = \{\rho\in\Delta(1) \mid i_\rho > 0\}$ where $s\ge 2$. In Section 5 we have identified such data as a set of typical discrete parameters for equivariant vector bundles of rank 2 on a toric surface X. We have shown in Theorem 5.3 that for each such bundle E whose equivariant first Chern class in $\mathbb Z^{\Delta(1)}$ and whose coarse partition $\mathcal P_E$ coincide with I and $\mathcal P$, respectively, there exists a short exact sequence of the form
$$0 \longrightarrow \mathcal O^{s-2} \xrightarrow{\ A\ } \bigoplus_{i=1}^{s} \mathcal O\Big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\Big) \longrightarrow E \longrightarrow 0,$$
which corresponds to a short exact sequence of vector spaces
$$0 \longrightarrow k^{s-2} \xrightarrow{\ A^0\ } k^{\mathcal P} \xrightarrow{\ \check A^0\ } E^0 \longrightarrow 0.$$
In order to obtain moduli spaces, we ask for spaces which parametrize isomorphism classes of equivariant vector bundles E of rank 2 with fixed coarse partition $\mathcal P_E = \mathcal P$. The conditions on A in Theorem 5.3 imply that the set of matrices A whose cokernel is such a vector bundle is dense in $M_{s,s-2}$. So by varying matrices A we have a natural candidate for a parameter space of vector bundles E with fixed I and $\mathcal P$, which is given by $M_{s,s-2}$ modulo the equivariant automorphisms of $\mathcal O^{s-2}$ and $\bigoplus_{i=1}^{s}\mathcal O\big(\sum_{\rho\in\Pi_i} i_\rho\cdot D_\rho\big)$, i.e. $\mathrm{GL}_{s-2}$ and $T^s$, respectively.
Another natural parameter space is the set of configurations of s points in $\mathbb P E^0 \cong \mathbb P^1$ which can be given by the columns of the matrix $\check A^0$. In that case equivariant isomorphism classes of bundles are determined by configurations modulo linear transformations by $\mathrm{GL}_2$. This is the sort of moduli space which has already been suggested by Klyachko in [Kly90].
By the results of the previous section both spaces can be compared in terms of the GIT-quotients $M_{s,s-2}$ and $M_{2,s}$. By Theorem 6.3 the isomorphism $M_{s,s-2}\xrightarrow{\ \cong\ } M_{2,s}$, which is given by the map $A\mapsto\check A$, respectively $A^0\mapsto\check A^0$, can be interpreted as the map $E\mapsto E^{\vee\vee}$ which is defined for any cokernel E represented by some GIT-semistable matrix A in $\mathbb P M_{s,s-2}$.
Let us now investigate the semistable points of $\mathbb P M_{s,s-2}$ and $\mathbb P M_{2,s}$. Recall from Proposition 7.1 that a point $(p_1,\dots,p_s)\in(\mathbb P^1)^s$ is properly semistable with respect to the action of $\mathrm{GL}_2$ iff precisely s/2 of the $p_i$ coincide. Thus properly semistable points exist only in the case s = 2t even.
Proposition 8.1: Let $E = E(I, \mathcal P, A)$ be a torsion free sheaf given by a short exact sequence as above. Let $A_{i_1}$ be the $i_1$-th column of $\check A$, let $\mathcal P_1 = \{\Pi_{i_1},\dots,\Pi_{i_r}\}$ be the maximal subset of $\mathcal P$ with $\langle A_{i_k}\rangle = \langle A_{i_1}\rangle$ for $1\le k\le r$, and let $\mathcal P_2$ be the complement of $\mathcal P_1$ in $\mathcal P$.
Then the torsion free sheaf E defined by A is an extension
0 −→ E 1 −→ E −→ E 2 −→ 0
where E 1 and E 2 are torsion free sheaves of rank 1 with
$$E_1^{\vee\vee} \cong \mathcal O\Big(\sum_{\Pi\in\mathcal P_1}\sum_{\rho\in\Pi} i_\rho\cdot D_\rho\Big) \quad\text{and}\quad E_2^{\vee\vee} \cong \mathcal O\Big(\sum_{\Pi\in\mathcal P_2}\sum_{\rho\in\Pi} i_\rho\cdot D_\rho\Big).$$
Proof. We obtain this extension by partition of the matrix $\check A$ via the following diagram with exact rows and columns:
$$\begin{array}{ccccccc}
0 \to & \mathcal O^{r-1} & \xrightarrow{\ A_1\ } & \bigoplus_{\Pi\in\mathcal P_1}\mathcal O\big(\sum_{\rho\in\Pi} i_\rho D_\rho\big) & \xrightarrow{\ \check A_1\ } & E_1 & \to 0\\
0 \to & \mathcal O^{s-2} & \xrightarrow{\ A\ } & \bigoplus_{\Pi\in\mathcal P}\mathcal O\big(\sum_{\rho\in\Pi} i_\rho D_\rho\big) & \xrightarrow{\ \check A\ } & E & \to 0\\
0 \to & \mathcal O^{s-r-1} & \xrightarrow{\ A_2\ } & \bigoplus_{\Pi\in\mathcal P_2}\mathcal O\big(\sum_{\rho\in\Pi} i_\rho D_\rho\big) & \xrightarrow{\ \check A_2\ } & E_2 & \to 0
\end{array}$$
where A 1 is represented by the submatrix of A consisting of the rows corresponding to P 1 .
Corollary 8.2: Let E and $\mathcal P$ be as above and let $F\subset E$ be any torsion free equivariant subsheaf of rank 1. Then there exists a subset $\mathcal P'\subset\mathcal P$ such that $F^{\vee\vee}\cong\mathcal O\big(\sum_{\Pi\in\mathcal P'}\sum_{\rho\in\Pi} i_\rho\cdot D_\rho\big)$.
Corollary 8.3: Let s = 2t be even, let $\check A$ represent a properly semistable point in $(\mathbb P^1)^{ss}$ and let $0\to E_1\to E\to E_2\to 0$ be the corresponding extension. Then the image of A in $M_{s,s-2}$ represents all matrices whose corresponding extensions are in $\mathrm{Ext}^1(E_2, E_1)$ or
Ext 1 (E 1 , E 2 ), i.e. E is GIT-equivalent to the direct sum E 1 ⊕ E 2 .
Proof. Each orbit in $(\mathbb P^1)^s$ which contains a point $(p_1,\dots,p_s)$ such that some t points $p_{i_1},\dots,p_{i_t}$ coincide contains in its closure the points of the form $(p_{i_1},\dots,p_{i_t},p,\dots,p)$ for some $p\in\mathbb P^1$ with $p\neq p_{i_1}$.
In the generic case, we have in particular:
Corollary 8.4: Let n = #∆(1) be even and i ρ > 0 for all ρ ∈ ∆(1), let P = {{ρ}} ρ∈∆(1) be the fine partition of ∆(1), and let n = #P. Then there exists precisely one point in M n,n−2 which can be represented by a direct sum E 1 ⊕ E 2 such that E 1 and E 2 are locally free.
Proof. There exists precisely one partition $\Pi_1\,\dot\cup\,\Pi_2 = \Delta(1)$ such that the $\Pi_i$ do not contain two adjacent rays, and it is given by $\Pi_1 = \{\rho_{2i} \mid 1\le i\le n/2\}$.
Definition 8.5: Let I and P as before and let E be a torsion free equivariant sheaf of rank 2 over X such that P is a refinement of the coarse partition P Eˇˇa ssociated to the locally free sheaf Eˇˇ. Let F ⊂ E be a torsion free equivariant subsheaf of rank 1. Then by 8.2 Fˇˇ∼ = O( Π∈P ′ ρ∈Π i ρ · D ρ ) with a unique subset P ′ ⊂ P. We say that E is P-stable (respectively P-semistable) if for every equivariant torsion free subsheaf F ⊂ E of rank 1 #P ′ < 1 2 #P (respectively #P ′ ≤ 1 2 #P).
Theorem 8.6: Let i ρ > 0 for ρ ∈ Π ⊂ ∆(1) and let P = {Π 1 , . . . , Π s } be a partition of Π. Consider short exact sequences
0 −→ O s−2 A −→ s i=1 O( ρ∈Π i i ρ · D ρ ) −→ E −→ 0.
Then E is P-stable (respectively P-semistable) if and only if A represents a GIT-stable (respectively GIT-semistable) point in (P s−3 ) s with respect to the action of GL s−2 .
Proof. This follows from the fact that we can represent A by a configuration $(p_1,\dots,p_s)$ of points in $(\mathbb P^1)^s$. Then Definition 8.5 is equivalent to the fact that at most s/2 of the points $p_i$ coincide. Now we can define an equivalence relation on the set of $\mathcal P$-semistable sheaves as follows:
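In coordinates this stability test is a simple multiplicity count. A minimal sketch (our own illustration; points of $\mathbb P^1$ are encoded as elements of k, with None standing for the point at infinity):

```python
from collections import Counter

def p1_configuration_type(points):
    """GIT type of a configuration of s points in P^1 under GL_2
    (Proposition 7.1 with m = 2): stable iff every point occurs
    strictly fewer than s/2 times, properly semistable at exactly s/2."""
    s = len(points)
    worst = max(Counter(points).values())
    if 2 * worst < s:
        return "stable"
    if 2 * worst == s:
        return "properly semistable"
    return "unstable"

print(p1_configuration_type([0, 1, None, 5]))   # stable (all distinct)
print(p1_configuration_type([0, 0, 1, None]))   # properly semistable
print(p1_configuration_type([0, 0, 0, 1]))      # unstable
```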
Definition 8.7: Let E and E ′ be P-semistable sheaves. Then we say that E and E ′ are P-equivalent iff one of the following conditions holds:
(i) E and E′ both are $\mathcal P$-stable and equivariantly isomorphic, $E\cong E'$;
(ii) E and E′ both are $\mathcal P$-semistable and the following holds: let $\Psi\in\mathcal P_E$ be such that $\#\Psi = s/2$; then either $\Psi\in\mathcal P_{E'}$ or $\Pi\setminus\Psi\in\mathcal P_{E'}$.
The last condition implies that if $0\to E_1\to E\to E_2\to 0$ is the extension of E corresponding to Ψ as in Proposition 8.1, then by Corollary 8.3 E is $\mathcal P$-equivalent to $E_1\oplus E_2$. From this definition follows:

Theorem 8.8: Fix numbers $\{i_\rho\ge 0\}_{\rho\in\Delta(1)}$, let $\Pi = \{\rho \mid i_\rho > 0\}\subset\Delta(1)$ and let $\mathcal P = \{\Pi_1,\dots,\Pi_s\}$. Then $M_{s,s-2}$ is the set of $\mathcal P$-equivalence classes of $\mathcal P$-semistable torsion free equivariant sheaves of rank 2 on X.
Definition 8.9: If P is fixed, we denote M P := M s,s−2 and call M P moduli space of P-equivalence classes.
It remains to show that the spaces $M_{\mathcal P}$ are moduli spaces of suitably defined $\mathcal P$-families, e.g. in the sense of [New78]. A detailed treatment of this problem would require generalizing all our constructions to families. We hope to come back to this question in a more general context in future work.
Remark 8.10: There is the following result of Klyachko:

Proposition 8.11 ([Kly90], Corollary 1.2.5): Let E and E′ be two equivariant vector bundles over a smooth complete toric variety. If there exists an arbitrary isomorphism $E\cong E'$ of vector bundles, then there is an $m\in M$ such that there is an equivariant isomorphism $E\cong E'\otimes\mathcal O(\chi(m))$, where $\mathcal O(\chi(m))$ denotes the structure sheaf endowed with the action by the character $\chi(m)$.

This means that in our situation, where X is complete, equivariant isomorphism classes and isomorphism classes of vector bundles coincide up to a twist with a character. So after fixing numbers $i_\rho$, $\rho\in\Delta(1)$, the subspace $M'_{\mathcal P}\subset M_{\mathcal P}$ consisting of isomorphism classes of vector bundles even classifies non-equivariant isomorphism classes.
Remark 8.12: We want to point out that our moduli depend only on the combinatorial structure of the underlying toric variety, that is, the number of rays ∆(1) in the fan of X, but not on the concrete realization of the fan ∆ inside the lattice N.
Example 8.13: Let X = P 2 (k), then we have to consider the quotient of PM 3,1 by the group G ∼ = GL 1 ×T 3 ∼ = k * × (k * ) 3 . This quotient is just a point, i.e. the set of equivariant isomorphism classes of indecomposable equivariant vector bundles of rank 2 on P 2 (k) is discrete. This reproduces the original result of Kaneyama ([Kan75]).
Example 8.14: Let a ≥ 0 and $X = \mathbb F_a$ a Hirzebruch surface. Assume that the rays $\rho_1,\dots,\rho_4$ are enumerated clockwise. The set $(\mathbb P^1)^{4,s}$ of stable points of $(\mathbb P^1)^4$ with respect to the diagonal action of $\mathrm{GL}_2$ is $\{(p_1,\dots,p_4)\in(\mathbb P^1)^4 \mid p_i\neq p_j \text{ for all } i\neq j\}$, i.e. the set of four-point configurations in $\mathbb P^1$ no two points of which coincide. There is an isomorphism $(\mathbb P^1)^{4,s}\cong\mathrm{PGL}_2\times(\mathbb P^1\setminus\{0,1,\infty\})$ which we choose to be $(p_1,p_2,p_3,p_4)\mapsto(g, g.p_4)$, where $g\in\mathrm{PGL}_2$ is the unique element which moves the points $p_1, p_2, p_3$ to the positions 0, 1, and ∞, respectively. The inverse map is given by $(g,p)\mapsto(g^{-1}0,\, g^{-1}1,\, g^{-1}\infty,\, g^{-1}p)$.
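The normalizing element g can be written down in closed form, and $g(p_4)$ is the classical cross-ratio; a small sketch (our own illustration, for finite, pairwise distinct complex inputs):

```python
def normalize(p1, p2, p3):
    """Matrix of the unique Moebius transformation g with
    g(p1) = 0, g(p2) = 1, g(p3) = infinity:
    g(z) = (z - p1)(p2 - p3) / ((z - p3)(p2 - p1))."""
    return ((p2 - p3, -p1 * (p2 - p3)),
            (p2 - p1, -p3 * (p2 - p1)))

def apply(g, z):
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

g = normalize(2.0, 3.0, 5.0)
print(apply(g, 2.0), apply(g, 3.0), apply(g, 7.0))  # 0.0, 1.0, g(p4)
```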
The quotient $\mathbb P M^{ss}_{4,2}(\mathrm{GL}_2\times T^4)/\!/\,\mathrm{GL}_2\times T^4$ has a completion by semistable points:
$$(\mathbb P^1)^{4,ss} = \{(p_1,p_2,p_3,p_4) \mid \text{no three of the points } p_i \text{ coincide}\}.$$
In terms of $4\times 2$ matrices this means that each semistable but not stable matrix can be brought into one of six standard forms with at most one zero in a row and two zeros in a column:
$$A_{34} = \begin{pmatrix} * & * \\ * & * \\ 0 & * \\ 0 & * \end{pmatrix},\quad A_{24} = \begin{pmatrix} * & * \\ 0 & * \\ * & * \\ 0 & * \end{pmatrix},\quad \dots,\quad A_{12} = \begin{pmatrix} 0 & * \\ 0 & * \\ * & * \\ * & * \end{pmatrix}.$$
The image of a matrix $A_{ij}$ in $M_{4,2}$ represents the $\mathcal P$-equivalence class of $E_1\oplus E_2$ where $E_1^{\vee\vee}\cong\mathcal O(i_{\rho_i}\cdot D_{\rho_i} + i_{\rho_j}\cdot D_{\rho_j})$ and $E_2^{\vee\vee}\cong\mathcal O(i_{\rho_k}\cdot D_{\rho_k} + i_{\rho_l}\cdot D_{\rho_l})$ with $\{i,j,k,l\} = \{0,1,2,3\}$. By our choice of coordinates the matrices of types $A_{13}$ and $A_{24}$ with locally free cokernels are mapped to the point $1\in\mathbb P^1$.
Remark 8.15: Let s > 4, let $X_{ij}$, $1\le i\le s$, $1\le j\le s-2$, be the coordinates of $M_{s,s-2}$ and let $X = (X_{ij})$. Then the determinants of the minors $X^{ij}$ of X describe $\frac12 s(s-1)$ hypersurfaces $T^{ij}_{\mathcal P}$ in $M_{s,s-2}$. Via the duality of Section 7, the minors $X^{ij}$ describe precisely the configurations of points $(p_1,\dots,p_s)$ in $\mathbb P^1$ where the points $p_i$ and $p_j$ coincide. Denote $\bar T^{ij}_{\mathcal P}$ the image of $T^{ij}_{\mathcal P}$ in $M_{\mathcal P}$. $M_{\mathcal P}$ is a good quotient of $\mathbb P M_{s,s-2}$, and thus $\bar T^{ij}_{\mathcal P}$ is a closed subset of $M_{\mathcal P}$. Moreover, because s > 4, by Proposition 7.1 we see that each $T^{ij}_{\mathcal P}$ contains a dense subset whose preimage in $\mathbb P M_{s,s-2}$ consists of stable points. Thus, the $\bar T^{ij}_{\mathcal P}$ describe $\frac12 s(s-1)$ hypersurfaces of $M_{\mathcal P}$, and the locus in $M_{\mathcal P}$ consisting of torsion free sheaves which are not locally free is described by the s divisors $\bar T^{s,1}_{\mathcal P}$ and $\bar T^{i,i+1}_{\mathcal P}$ for $1\le i\le s-1$.
Example 8.16: Let X be a toric surface which has six rays. The space $M_{2,6}$ has been calculated in [Dol94] and is a cubic hypersurface in $\mathbb P^4$ defined by the equation
$$X_1X_2X_4 - X_3X_0X_4 + X_3X_1X_2 + X_3X_0X_1 + X_3X_0X_2 - X_3X_0^2 = 0.$$
This hypersurface has ten nodes representing precisely the ten $\mathcal P$-equivalence classes of $\mathcal P$-semistable but not $\mathcal P$-stable sheaves.
References

[Cox95] D. A. Cox. The Homogeneous Coordinate Ring of a Toric Variety. J. Algebr. Geom. 4, 1:17-50, 1995.
[Dol94] I. Dolgachev. Introduction to Geometric Invariant Theory, volume 25 of Lecture Notes Series, Seoul. Seoul National University, 1994.
[Ful93] W. Fulton. Introduction to Toric Varieties. Princeton University Press, 1993.
[Kan75] T. Kaneyama. On equivariant vector bundles on an almost homogeneous variety. Nagoya Math. J., 57:65-86, 1975.
[Kly90] A. A. Klyachko. Equivariant Bundles on Toral Varieties. Math. USSR Izvestiya, 35(2):337-375, 1990.
[Kly91] A. A. Klyachko. Vector Bundles and Torsion Free Sheaves on the Projective Plane. Preprint, Max Planck Institut für Mathematik, 1991.
[MFK94] D. Mumford, J. Fogarty, and F. Kirwan. Geometric Invariant Theory, volume 34 of Ergebnisse der Mathematik und ihrer Grenzgebiete. Springer-Verlag, Berlin, Heidelberg, New York, 1994.
[New78] P. E. Newstead. Lectures on Introduction to Moduli Problems and Orbit Spaces, volume 51 of Tata Institute of Fundamental Research Lectures on Mathematics and Physics. Springer, 1978.
[Oda88] T. Oda. Convex Bodies and Algebraic Geometry. Springer, 1988.
[Per02] M. Perling. Graded Rings and Equivariant Sheaves on Toric Varieties. Preprint, 2002. math.AG/0205311.
[
"Electronic Raman scattering in Tl 2 Ba 2 CuO 6+δ : symmetry of the order parameter, oxygen doping effects, and normal state scattering",
"Electronic Raman scattering in Tl 2 Ba 2 CuO 6+δ : symmetry of the order parameter, oxygen doping effects, and normal state scattering"
] | [
"L V Gasparov ",
"P Lemmens ",
"N N Kolesnikov ",
"G Güntherodt ",
"\nInstitute for Solid State Physics\nPhysikalisches Institut\nRWTH-Aachen\n52056, 142432Aachen, ChernogolovkaMoscow districtGermany, Russia\n",
"\nInstitute for Solid State Physics 142432\nPhysikalisches Institut, RWTH-Aachen\n52056Aachen, ChernogolovkaMoscow districtGermany, Russia\n",
"\nPhysikalisches Institut, RWTH-Aachen\n52056AachenGermany\n"
] | [
"Institute for Solid State Physics\nPhysikalisches Institut\nRWTH-Aachen\n52056, 142432Aachen, ChernogolovkaMoscow districtGermany, Russia",
"Institute for Solid State Physics 142432\nPhysikalisches Institut, RWTH-Aachen\n52056Aachen, ChernogolovkaMoscow districtGermany, Russia",
"Physikalisches Institut, RWTH-Aachen\n52056AachenGermany"
] | [] | Single crystals of the optimally doped, moderately and strongly overdoped high temperature superconductor Tl2Ba2CuO 6+δ (Tl-2201) with Tc=80, 56 and 30 K, respectively, have been investigated by polarized Raman scattering. By taking the peak position of the B1g component of electronic Raman scattering as 2∆0 we found that the reduced gap value (2∆0/kBTc) strongly decreases with increasing doping. The behavior of the low frequency scattering for the B1g and B2g scattering components is similar for optimally doped and overdoped crystals and can be described by a ω 3 -and ω -law, respectively, which is consistent with a d-wave symmetry of the order parameter. In contrast to the optimally doped Tl-2201 in both, moderately and strongly overdoped Tl-2201, the relative (compared to the B1g) intensity of the A1g scattering component is suppressed. We suggest that the van Hove singularity is responsible for the observed changes of Raman intensity and reduced gap value with doping. Electronic Raman scattering in the normal state is discussed in the context of the scattering from impurities and compared to the existing infrared data. The scattering rate evaluated from the Raman measurements is smaller for the overdoped samples, compared to the moderately overdoped samples. 74.25.Gz, 74.72.Fq, | 10.1103/physrevb.58.11753 | [
"https://arxiv.org/pdf/cond-mat/9809159v1.pdf"
] | 118,997,184 | cond-mat/9809159 | 1ce9adc40fdabe96fa47137b7d5f2505a3901b79 |
Electronic Raman scattering in Tl 2 Ba 2 CuO 6+δ : symmetry of the order parameter, oxygen doping effects, and normal state scattering
10 Sep 1998 (August 11, 2018)
L V Gasparov
P Lemmens
N N Kolesnikov
G Güntherodt
Institute for Solid State Physics, 142432 Chernogolovka, Moscow district, Russia
Physikalisches Institut, RWTH-Aachen, 52056 Aachen, Germany
Electronic Raman scattering in Tl 2 Ba 2 CuO 6+δ : symmetry of the order parameter, oxygen doping effects, and normal state scattering
10 Sep 1998 (August 11, 2018)
Single crystals of the optimally doped, moderately and strongly overdoped high temperature superconductor Tl2Ba2CuO6+δ (Tl-2201) with Tc=80, 56 and 30 K, respectively, have been investigated by polarized Raman scattering. By taking the peak position of the B1g component of electronic Raman scattering as 2∆0 we found that the reduced gap value (2∆0/kBTc) strongly decreases with increasing doping. The behavior of the low frequency scattering for the B1g and B2g scattering components is similar for optimally doped and overdoped crystals and can be described by an ω³- and an ω-law, respectively, which is consistent with a d-wave symmetry of the order parameter. In contrast to the optimally doped Tl-2201, in both moderately and strongly overdoped Tl-2201 the relative (compared to the B1g) intensity of the A1g scattering component is suppressed. We suggest that the van Hove singularity is responsible for the observed changes of Raman intensity and reduced gap value with doping. Electronic Raman scattering in the normal state is discussed in the context of scattering from impurities and compared to the existing infrared data. The scattering rate evaluated from the Raman measurements is smaller for the strongly overdoped samples than for the moderately overdoped ones. 74.25.Gz, 74.72.Fq
INTRODUCTION
The symmetry of the order parameter is one of the most important questions for the high temperature superconductors (HTSC). This issue is especially interesting as a function of the doping level. Electronic Raman scattering (ELRS) plays a special role in addressing this problem 1-7. The symmetry properties of the order parameter can be determined by investigating the anisotropy of the scattering cross section for the different symmetry components. Different scattering components originate from different areas of the Fermi surface (FS). The ratio of one scattering component to another reflects the changes of the FS topology with doping 8. There are several theoretical attempts 3-7 to describe the electronic Raman scattering in HTSC at T < Tc, but there is still no consensus concerning the exact mechanism of the scattering.
In the optimally doped HTSC the electronic Raman scattering from single crystals in the superconducting state reveals several common features 3-7,9-16. The superconducting transition manifests itself in a redistribution of the ELRS continuum into a broad peak (pair-breaking peak), the intensity and frequency position Ω of which differ for the different symmetry components. For the optimally doped samples one has Ω(B1g) > Ω(B2g) > Ω(A1g) 3-7,9-16. The scattering on the low frequency side of the pair-breaking peak does not reveal additional peaks or a cut-off, which would be an indication of an anisotropic s-wave component. In contrast, a power-law decrease of the scattering intensity toward zero frequency shift is observed. In the B1g scattering component this power law is close to an ω³-dependence, while in the A1g and B2g scattering components a linear-in-ω decrease is observed. The above mentioned features were first described by Devereaux et al. 3 in the framework of a d-wave order parameter, i.e. using the gap function ∆(k) = ∆max cos 2φ, where φ is the angle between k and the Cu-O bond direction within the CuO2 plane. The general description of the Raman scattering cross section follows from the fluctuation-dissipation theorem. For the case of nonresonant scattering the Raman scattering cross section is given by the Raman response function χγγ(q, ω):
$$\frac{\partial^2\sigma}{\partial\omega\,\partial\Omega} \propto \left[1 + n(\omega)\right]\,\mathrm{Im}\,\chi_{\gamma\gamma}(\vec q,\omega), \tag{1}$$
where n(ω) = 1/(exp(ω/T) − 1) is the Bose factor and ω = ωI − ωS is the Stokes Raman shift, ωI (ωS) being the frequency of the incident (scattered) photon.
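As a small illustration of Eq. (1), the following Python sketch converts a measured Stokes spectrum into the imaginary part of the Raman response by dividing out the thermal factor [1 + n(ω)]. It assumes Raman shifts in cm−1 and intensities in arbitrary units; the arrays omega and counts are hypothetical placeholders, and instrumental corrections other than dark-count subtraction are ignored.

import numpy as np

K_B = 0.695  # Boltzmann constant in cm^-1 per kelvin, so omega/(K_B*T) is dimensionless

def bose_correct(omega, counts, T):
    # Eq. (1): measured intensity ~ [1 + n(omega)] * Im chi(omega),
    # with n(omega) = 1/(exp(omega/(K_B*T)) - 1)
    n = 1.0 / np.expm1(omega / (K_B * T))
    return counts / (1.0 + n)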
The Raman response function due to the breaking of Cooper pairs in a superconductor and including Coulomb repulsion can be written as 1,2 :
$$\chi_{\gamma\gamma}(\vec q,\omega) = \left\langle \gamma_k^2\,\lambda_k \right\rangle - \frac{\left\langle \gamma_k\,\lambda_k \right\rangle^2}{\left\langle \lambda_k \right\rangle}, \tag{2}$$
where γk is the Raman vertex, which describes the strength of the corresponding Raman transition, λk is the Tsuneto function 17, and the brackets ⟨· · ·⟩ denote the average of the momentum k over the Fermi surface.
The Tsuneto function is determined as:
$$\lambda(\vec k, i\omega) = \frac{\Delta(\vec k)^2}{E(\vec k)^2}\,\tanh\!\left(\frac{E(\vec k)}{2T}\right)\left[\frac{1}{2E(\vec k) + i\omega} + \frac{1}{2E(\vec k) - i\omega}\right], \tag{3}$$
with excitation energy E²(k) = ξ²(k) + ∆²(k), conduction band ξ(k) = ε(k) − µ, chemical potential µ, and superconducting energy gap ∆(k).
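A minimal numerical sketch of Eq. (3), assuming all energies (ξ, ∆, ω, T) are given in the same units and that the real-frequency response is obtained by the standard substitution iω → ω + iη with a small broadening η (our choice, not stated in the text):

import numpy as np

def tsuneto(xi, delta, omega, T, eta=1e-2):
    # Eq. (3) with E^2 = xi^2 + delta^2; returns a complex value
    E = np.sqrt(xi**2 + delta**2)
    w = omega + 1j * eta
    return (delta**2 / E**2) * np.tanh(E / (2.0 * T)) * (
        1.0 / (2.0 * E + w) + 1.0 / (2.0 * E - w))

# d-wave gap on the Fermi surface (xi = 0): delta = delta_max * cos(2*phi)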
An important consequence of Eqs. (2) and (3) is that the Raman response is proportional to the square of the superconducting order parameter; therefore, as already mentioned in Ref. 5, Raman scattering is not sensitive to the sign of the order parameter.
In the case of nonresonant scattering one can describe the Raman vertex through the curvature of the energy bands ε(k):
$$\gamma_k = \sum_{\alpha\beta} e^{I}_{\alpha}\,\frac{\partial^2 \varepsilon(\vec k)}{\partial k_\alpha\,\partial k_\beta}\,e^{S}_{\beta}, \tag{4}$$
where e^I (e^S) is the polarization vector of the incident (scattered) photon, and α and β are summation indices which correspond to the different projections of k. If one assumes 3 that the Raman vertex does not depend on the frequency of the incident photon, one can use symmetry considerations to evaluate the corresponding Raman scattering components. In this case the Raman vertex can be described in terms of Brillouin zone (BZ) or Fermi surface harmonics ΦL(k) 3, which transform according to the point group transformations of the crystal.
$$\gamma_k(\omega_I, \omega_S) = \sum_{L} \gamma_L(\omega_I, \omega_S)\,\Phi_L(\vec k). \tag{5}$$
For tetragonal symmetry one gets the following form of the Raman vertices 3 :
$$\gamma_{B_{1g}} \propto \cos 2\phi, \qquad \gamma_{B_{2g}} \propto \sin 2\phi, \qquad \gamma_{A_{1g}} \propto 1 + \cos 4\phi. \tag{6}$$
Let us analyze the Raman response (Eq. 2). For simplicity we have drawn in Fig. 1 the corresponding polar plots of the functions contained in each of the two terms of the Raman response. The first term is the "bare" Raman response, which reflects the attractive interaction in the Cooper pair, whereas the second term ("screening") is due to the Coulomb repulsion. Let us start with the "screening" term. This term is proportional to the squared FS average of the product of the Raman vertex γ and the Tsuneto function λ. The Tsuneto function in turn is proportional to the square of the gap function. Following Devereaux and Einzel 3 we assume a d-wave gap in the form of ∆(k) = ∆max cos 2φ, which has B1g symmetry. When squared it becomes totally symmetric (A1g). Therefore an averaged product of the Raman vertex and the Tsuneto function will be nonzero only if the vertex function is totally symmetric. This is not the case for the B1g (γ ∼ cos 2φ) and B2g (γ ∼ sin 2φ) Raman vertices, but only for the A1g (γ ∼ 1 + cos 4φ) vertex, as seen in Fig. 1. Therefore the A1g scattering component is the only component strongly affected or "screened" by the long range Coulomb interaction 3-7. Let us now look at the bare Raman response. This term is proportional to the FS average of the product of the squared Raman vertex γ² and the Tsuneto function λ (∝ ∆(k)²). Both γ² and λ are totally symmetric. One sees from Fig. 1 that the maxima and nodes of the squared B1g Raman vertex coincide with those of the squared d-wave gap. This leads to the highest relative peak position for the B1g scattering component and an ω³-dependence of the low frequency scattering. In contrast, the maxima of the B2g Raman vertex coincide with the nodes of the squared d-wave order parameter, resulting in a lower relative peak position and a linear-in-ω low frequency dependence for this component. The A1g scattering component is the only one which is screened. The "screening" term shifts the peak position of the A1g scattering component to a frequency smaller than that of the B1g. Because of the "screening" term, one could expect the A1g ELRS peak to be the weakest one 5-7. Nevertheless, in all optimally doped HTSC (YBCO 9-11, Bi-2212 12, Tl-2223 13, La-214 14, Tl-2201 15,16) the relative intensity of the A1g ELRS peak is strong and comparable to that of the B1g peak. This contradicts existing LDA-based calculations of the electronic Raman scattering cross section 5. However, resonance effects 18,19 may alter these calculations. This picture qualitatively describes the experimental results for all optimally doped HTSCs. The only exception is the n-type superconductor (Nd,Ce)-214, which demonstrates a behavior consistent with an s-wave type of order parameter 20.
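The symmetry argument above can be checked with a few lines of Python. Using the d-wave weight λ ∝ cos² 2φ as a static stand-in for the Tsuneto function and the vertices of Eq. (6), the Fermi-surface averages show that the screening term ⟨γλ⟩²/⟨λ⟩ vanishes for B1g and B2g but not for A1g. This is a schematic illustration of the angular averages only, not the full frequency-dependent response:

import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 4001)
lam = np.cos(2.0 * phi) ** 2  # squared d-wave gap as a static weight
vertices = {"B1g": np.cos(2.0 * phi),
            "B2g": np.sin(2.0 * phi),
            "A1g": 1.0 + np.cos(4.0 * phi)}

def fs_avg(f):
    # average over the cylindrical Fermi surface parameterized by phi
    return np.trapz(f, phi) / (2.0 * np.pi)

for name, g in vertices.items():
    bare = fs_avg(g**2 * lam)                      # <gamma^2 lambda>
    screening = fs_avg(g * lam)**2 / fs_avg(lam)   # <gamma lambda>^2 / <lambda>
    print(f"{name}: bare={bare:.3f}, screening={screening:.3f}")
# -> screening is zero for B1g and B2g and nonzero only for A1g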
For the overdoped or underdoped samples the above mentioned universality of the experimental results does not hold anymore. For instance, C. Kendziora et al. 21 reported for overdoped Tl2Ba2CuO6+δ (Tl-2201) a similar peak position for the different symmetry components of the electronic Raman scattering. The authors pointed out that the gap does not scale with Tc but rather decreases with an increase of doping, yielding 2∆0/kBTc = 3.9. This led them to suggest that in the overdoped Tl-2201 the order parameter has s-wave symmetry. One should note, however, that existing calculations of the ELRS peak positions (especially for the A1g scattering component 3-7) strongly depend on the chosen electronic structure and gap function parameters. For the optimally doped Tl-2201 the difference between the peak positions of the B1g and B2g components is only about 10% 16. One can estimate the expected difference between the corresponding peak positions for strongly overdoped Tl-2201 by scaling the peak position of the B1g scattering component in optimally doped Tl-2201 (≈ 430 cm−1) to that reported for the strongly overdoped Tl-2201 (≈ 80-100 cm−1). Such an estimate places the peak of the B2g scattering component in the strongly overdoped crystal at only 8-10 cm−1 lower frequency than that of the B1g component. This is actually within experimental error. Therefore the same position of the peaks cannot prove s-wave pairing.
According to Devereaux et al. 3, the low frequency power-law behavior of the ELRS intensity is more "robust" with respect to changes of the FS topology as a result of overdoping and underdoping. In particular, the ω³-law for the low frequency scattering in the B1g scattering component and the ω-law for the A1g and B2g scattering components should not change with doping in a d-wave superconductor. Unfortunately, the ELRS peaks in strongly overdoped Tl-2201 have their maxima at rather low frequencies, which makes it difficult to determine their low-frequency tails precisely. Additionally, the low frequency scattering for the A1g component is easily obscured by Rayleigh scattering. In order to test the low frequency behavior in the overdoped Tl-2201 it is therefore necessary to investigate moderately overdoped samples with a pair-breaking peak not at too low a frequency.
In addition to the scattering in the superconducting state the normal state scattering provides important information about carrier dynamics. Raman scattering in the normal state in channel L and assuming a single impurity scattering lifetime τ can be described by a Lorentzian:
$$\mathrm{Im}\,\chi_L(\omega, T > T_c) = 2 N_F\,\gamma_L^2\,\frac{\omega\tau}{(\omega\tau)^2 + 1}, \tag{7}$$
where Γ = 1/τ is the scattering rate, γL is the Raman vertex, and NF is the carrier density of states at the Fermi level 22,23. Generally speaking, τ is a function of the scattering channel L and the momentum k 24. Im χL(ω, T > Tc) has a peak at the frequency ω = 1/τ, and the spectrum falls off as 1/ω. Using this fact one can analyze Raman spectra in the normal state and determine how the scattering rates change with doping. Hackl et al. 25 fitted their data for Bi-2212 using Eq. 7 and a frequency dependence of Γ given by the nested Fermi liquid model 26. The scattering rates at T ≈ 100 K were found to be Γ(B1g) ≈ 600 cm−1, Γ(B2g) ≈ 170 cm−1 for the nearly optimally doped Bi-2212 and Γ(B1g) ≈ 160 cm−1, Γ(B2g) ≈ 120 cm−1 for overdoped Bi-2212 25.
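For illustration, Eq. (7) can be recast with Γ = 1/τ and fitted to a normal-state spectrum to extract the scattering rate. The sketch below assumes hypothetical arrays omega and im_chi (Raman shift in cm−1 and the Bose-corrected response); the amplitude a lumps together 2 N_F γ_L²:

from scipy.optimize import curve_fit

def normal_state_response(omega, a, gamma):
    # Eq. (7) with x = omega/gamma = omega*tau and amplitude a = 2*N_F*gamma_L^2
    x = omega / gamma
    return a * x / (1.0 + x**2)

# popt, _ = curve_fit(normal_state_response, omega, im_chi, p0=(1.0, 50.0))
# gamma_est = popt[1]  # scattering rate Gamma = 1/tau in cm^-1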
In this paper we present electronic Raman scattering experiments on moderately overdoped Tl2Ba2CuO6+δ with Tc=56 K. These are compared with measurements on optimally doped (Tc=80 K) and strongly overdoped (Tc=30 K) crystals. We show that, similarly to optimally doped Tl-2201, moderately overdoped Tl-2201 samples also show an ω³ low frequency behavior of the B1g scattering component and a linear low frequency behavior of the B2g scattering component. The above mentioned power laws are consistent with a d-wave symmetry of the order parameter. Additionally, we discuss the changes of the relative intensities of the pair-breaking peaks in the A1g and B1g scattering components with doping, as well as the electronic Raman scattering in the normal state.
EXPERIMENTAL
We investigated the electronic Raman scattering in the single-CuO2 layered compound Tl-2201. This compound provides a single-sheeted Fermi surface 27. Therefore the inter-valley scattering due to a multi-sheeted Fermi surface 5, invoked for the explanation of the unexpectedly large A1g scattering intensity, does not play a role. Our samples had the shape of rectangular platelets with a size of 2×2×0.15 mm. Moderately overdoped and strongly overdoped crystals of Tl-2201 were characterized by a SQUID magnetometer; Tc was found to be 56±2 K (moderately overdoped) and 30±2 K (strongly overdoped), respectively. The orientation of the crystals was controlled by X-ray diffraction. The Raman measurements were performed in quasi-backscattering geometry. Raman scattering was excited using an Ar+-ion laser. The laser beam with 3 mW power was focused into a spot of 50 µm diameter. The laser-induced heating was estimated by increasing the laser power level at a fixed temperature (5 K) and comparing the dependence of the ELRS B1g-peak intensity on laser power with the temperature dependence of the intensity of this peak measured at fixed laser power (3 mW). The estimated additional heating was found to be about 12.5±2.5 K (all data are plotted with respect to the estimated temperature). In order to analyze pure scattering geometries we extracted the A1g scattering component from the X'X' (A1g+B2g) and XY (B2g) scattering geometries. The X' and Y' axes are rotated by 45° with respect to the X- and Y-axes. The X- and Y-axes are parallel to the Cu-O bonds in the CuO2 plane of the Tl-2201 unit cell. After subtraction of the dark counts of the detector the spectra were corrected for the Bose factor in order to obtain the imaginary part of the Raman response function. In order to analyze the low frequency behavior of the B1g scattering component in moderately overdoped Tl-2201 with Tc=56 K we performed measurements in superfluid He (T=1.8 K). This gives us several advantages: Because of the huge thermal conductivity of superfluid helium we do not have any overheating of the sample due to laser radiation. The absence of overheating allows us to precisely determine the real temperature of the excited volume. For T=1.8 K the Bose factor is equal to zero down to at least 10 cm−1. Therefore down to 10 cm−1 we actually measure the imaginary part of the Raman response function.
RESULTS AND DISCUSSION
The Raman spectrum of Tl-2201 shows several phonons and a broad electronic continuum. The superconducting transition leads to a redistribution of the continuum into a broad peak. In Figs. 2-4 we show the B1g, A1g and B2g scattering components of the Raman scattering for T ≪ Tc (solid line) and T > Tc (dashed line) for the Tl-2201 single crystals with Tc=80 K (Fig. 2), Tc=56 K (Fig. 3), and Tc=30 K (Fig. 4). In order to emphasize the redistribution of the scattering intensity in the superconducting state compared to the normal state we show not only the Bose-factor-corrected raw spectra (Figs. 2, 3 and 4, upper panel) but also the spectra above Tc subtracted from the spectra well below Tc (Figs. 2, 3 and 4, lower panel). The positions of the ELRS peaks in the superconducting state for the different scattering components as a function of doping are summarized in Table I.
It is generally accepted that the B1g scattering component reflects much of the properties of the superconducting density of states 6. Therefore it is reasonable to analyze the intensities of the other components relative to the B1g scattering component.
There are several differences between optimally doped and overdoped crystals. i) If one identifies the peak in the B1g ELRS component as 2∆0, one obtains the reduced gap value 2∆0/kBTc ≈ 7.8 for the optimally doped crystal, while in the overdoped crystals 2∆0/kBTc is close to 3 (see Table I).
ii) For the optimally doped crystals the peak positions of the B2g and A1g scattering components are lower than that of the B1g (see Fig. 2 and Table I). In the overdoped crystals the B2g component peaks at a frequency very close to that of the B1g scattering component (see Figs. 3, 4 and Table I), although its peak position is still about 10±2% lower (similar to the optimally doped Tl-2201, see Table I). The A1g peak position is close to that of the B1g peak as well, although an exact determination of the pair-breaking peak position for the A1g scattering component is difficult due to the A1g phonon at 127 cm−1 in moderately overdoped Tl-2201 (see Fig. 3) or due to the superimposed Rayleigh scattering in strongly overdoped Tl-2201 (see Fig. 4).
iii) The most drastic changes of the relative ELRS peak intensity with doping are seen in the A1g scattering component. For the optimally doped crystal we observe a strong peak, which is comparable in intensity to that of the B1g component (see Fig. 2a and b, lower panel). In contrast, for the two overdoped crystals (Figs. 3, 4 a and b, lower panel) the relative intensity of the ELRS peak in the A1g scattering component is weak.
iv) In contrast to the A1g scattering component, the intensity of the B1g scattering component is stronger in the moderately overdoped sample (Fig. 3a) than in the optimally doped one (Fig. 2a). For the strongly overdoped sample an exact determination of the relative intensity of the pair-breaking peak is difficult in all scattering components. The pair-breaking peak is at too low a frequency (≈ 60 cm−1); therefore its intensity is very sensitive to the Bose-factor correction, which in turn depends on the uncertainty in the estimated temperature. Additionally, Rayleigh scattering and impurity-induced scattering 3 may obscure the evaluated difference between the corresponding spectra below and above Tc.
According to Devereaux et al. 3, the ω³-law for the low frequency scattering in the B1g scattering component and the ω-law for the A1g and B2g scattering components should not change with doping in d-wave superconductors. In order to check these power laws for the moderately overdoped Tl-2201 we have performed measurements in superfluid helium (T=1.8 K). To illustrate the low frequency behavior of the imaginary part of the Raman response function in the B1g and B2g scattering components on the same frequency scale we have scaled the Raman shift by the corresponding peak position, as shown in Fig. 5a. The fit of the low frequency scattering in the B1g scattering component with an ω^n function leads to exponents n=2.9 and 3.5 for the optimally doped and moderately overdoped Tl-2201, respectively. An even better fit to the low frequency scattering intensity in moderately overdoped Tl-2201 was obtained with a linear term added to the ω^n function, similarly to overdoped Bi-2212 25. The appearance of such a crossover from linear to a power law in the B1g scattering component indicates the presence of impurities 3. For the B2g scattering component one can easily fit the low frequency scattering of optimally to overdoped samples with a linear-in-ω law, as shown in Fig. 5b. Unfortunately, in the Tc=30-K crystal the expected ELRS peak is too close to zero frequency to draw a definite conclusion about its low frequency behavior. The observed power laws (Fig. 5) lead to the conclusion that even overdoped Tl-2201 has a d-wave symmetry of the order parameter.
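The low-frequency fits described here correspond to a power law with an optional impurity-induced linear term. A hedged Python sketch (the data arrays omega and im_chi are hypothetical, and the fit window below the pair-breaking peak is our own choice):

from scipy.optimize import curve_fit

def low_freq_b1g(omega, a, n, b):
    # a*omega^n for the clean d-wave B1g response, plus b*omega for
    # impurity-induced scattering (b = 0 recovers the pure power law)
    return a * omega**n + b * omega

# mask = omega < 0.5 * omega_peak   # fit only well below the pair-breaking peak
# popt, _ = curve_fit(low_freq_b1g, omega[mask], im_chi[mask], p0=(1e-4, 3.0, 0.0))
# a_fit, n_fit, b_fit = popt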
Let us now discuss the temperature induced spectral changes in the overdoped crystal. A detailed temperature dependence for the Tl-2201 (Tc=56 K) sample is shown for the B1g component in Fig. 6. With increasing temperature the intensity of the pair-breaking peak decreases and its position shifts toward lower frequency. This dependence differs slightly from that predicted by the BCS theory, as shown in the inset of Fig. 6, i.e. the gap opens more abruptly. At the same time the intensity of the pair-breaking peak decreases nearly linearly with increasing temperature (see inset in Fig. 7), whereas the intensity of the low frequency scattering (at, for instance, ≈50 cm−1) increases. At a temperature close to Tc both intensities match. From these data one can determine the ratio of the superconducting response to the normal state response in the static limit ("static ratio"), i.e. when ω → 0, and compare it with calculations of this ratio in the presence of impurities 4. From such a comparison we found for the moderately overdoped Tl-2201 a scattering rate of Γ/∆(0) ≈ 0.5. This leads to Γ ≈ 60 cm−1. In the normal state spectra (we discuss the imaginary part of the Raman response function) one sees an increase of the intensity towards zero frequency with a broad peak at ≈50 cm−1 (Figs. 3 and 7). This peak is more pronounced in the B1g scattering component. Such a peak can be attributed to impurity-induced scattering. According to Eq. 7 the frequency of the peak corresponds to the scattering rate Γ = 1/τ of the normal state 22,23. The position of the peak depends strongly on doping. It is roughly 35 or 50 cm−1 for strongly and moderately overdoped Tl-2201, respectively. There is practically no anisotropy of the peak position between the B1g and B2g scattering components. Note that the scattering rates calculated from the peak positions are very close to those evaluated from the "static ratio" and significantly smaller than those found by Hackl et al. 25 using a frequency dependence of Γ given by the nested Fermi liquid model. Scattering rates may also be determined using the frequency-dependent conductivity from infrared measurements. One finds for many HTSC scattering rates 1/τ of about 100-200 cm−1 at T ≈ 100 K 28. Additionally and very surprisingly, the scattering rates decrease with increasing overdoping 29. From our Raman measurements we found scattering rates Γ = 1/τ = 35 or 50 cm−1 for strongly and moderately overdoped Tl-2201, not too far from the infrared data, and a similar decrease of Γ with increasing overdoping.
We would like to sum up the effects of overdoping that are also partly observed in other HTSC: In the nearly optimally doped regime (YBCO 9-11 , Bi-2212 12 , Tl-2223 13 , La-214 14 , Tl-2201 15,16 ) the ELRS peak positions scale with T c for all scattering components. The B 1g scattering component is most sensitive to changes of T c . The relative intensity of the ELRS A 1g peak is stronger or at least comparable to that of the B 1g component. The relative intensity of the B 2g peak is always the weakest one.
For the overdoped crystals (Tl-2201, Bi-2212) 21,25 the peak position of the B 1g scattering component decreases faster than T c so that 2∆ 0 /k B T c decreases with overdoping from 7.4 to ≈3 (data of this paper), or from 8 to 5 in Bi-2212 25 . The relative intensity of the A 1g ELRS peak as compared to B 1g decreases when the system becomes more overdoped 30 . This is an important point concerning the influence of the Fermi surface topology changes on Raman scattering and will be discussed further below.
We will now discuss some reasons which may explain the shift of the B1g peak position with doping. The decrease of the B1g ELRS peak position and of 2∆0/kBTc with doping is connected to the fact that the crossing of the Fermi surface with the Brillouin zone moves away from the (0,±π), (±π,0) points with doping. Therefore the FS average ⟨γ²kλk⟩ of the Raman vertex with the Tsuneto function in Eq. 2 gives a ∆0 smaller than ∆max. A detailed discussion of this point is given in the work of Branch and Carbotte 6. In the case of optimum doping it is supposed that the Fermi level is close to the van Hove singularity (vHs), so that the FS pinches at the (0,±π), (±π,0) points of the BZ 31, leading to ∆0 ≈ ∆max. Now let us turn to the decrease of the A1g vs. B1g intensities of the ELRS with doping. In contrast to B1g and B2g, the A1g scattering component is affected by the screening term. We suppose that "screening" itself is connected with the FS anisotropy, which is in turn affected by the van Hove singularity. In optimally doped crystals the vHs is close to the Fermi level (FL), leading to a strongly anisotropic FS. By overdoping we move the FL away from the vHs. This leads to a more isotropic FS with larger "screening". Therefore the increase of "screening" with doping would be a plausible explanation for the observed decrease of the A1g scattering component with doping. This suggestion has a consequence for the intensity of the B1g scattering component. Namely, the "screening" term for the A1g scattering component has the same symmetry as the bare term for the B1g scattering component (see Fig. 1). If we suppose that the "screening" increases, the B1g response should also increase. This is in agreement with our results (see Figs. 2a and 3a, lower panel).
In conclusion, we have presented measurements of the electronic Raman scattering on optimally doped as well as moderately and strongly overdoped Tl-2201 single crystals. A strong decrease of the A1g scattering intensity with increasing overdoping has been observed. We connect this effect with the changes of the FS topology related to the existence of a van Hove singularity. We propose investigations on other overdoped HTSC in order to check this idea. Our measurements of the low frequency behavior of the electronic Raman scattering in optimally doped and moderately overdoped Tl-2201 confirmed a d-wave symmetry of the order parameter, in contrast to earlier reports 21. The scattering rates we have evaluated from the normal state Raman spectra, as well as their decrease with overdoping, are consistent with the existing infrared data.

FIG. 1. Schematic representation of the Raman response function in Eq. 2 due to the breaking of Cooper pairs and including Coulomb repulsion. The Tsuneto function λk is represented as a squared gap function ∆²(k) = ∆²max cos² 2φ. Raman vertices are chosen as γB1g ∝ cos 2φ, γB2g ∝ sin 2φ, γA1g ∝ 1 + cos 4φ, where φ is the angle between k and the Cu-O bond direction within the CuO2 plane.

FIG. 2. Imaginary part of the Raman response in optimally doped Tl-2201 (Tc=80±5 K) at T=20 K (solid curve) and T=110 K (dashed curve) for the a) B1g, b) A1g, and c) B2g scattering components (upper panel) and the corresponding subtracted spectra Im χ(T=20 K) − Im χ(T=110 K) (lower panel).

FIG. 3. Imaginary part of the Raman response in moderately overdoped Tl-2201 (Tc=56±2 K) at T=20 K (solid curve) and T=75 K (dashed curve) for the a) B1g, b) A1g, and c) B2g scattering components (upper panel) and the corresponding subtracted spectra Im χ(T=20 K) − Im χ(T=75 K) (lower panel).

FIG. 4. Imaginary part of the Raman response in strongly overdoped Tl-2201 (Tc=30±2 K) at T=15 K (solid curve) and T=50 K (dashed curve) for the a) B1g, b) A1g, and c) B2g scattering components (upper panel) and the corresponding subtracted spectra Im χ(T=15 K) − Im χ(T=50 K) (lower panel).

FIG. 5. Imaginary part of the Raman response in optimally doped (Tc=80±5 K, dashed line, at T=20 K), moderately overdoped (Tc=56±2 K, dash-dotted line, at T=1.8 K), and strongly overdoped (Tc=30±5 K, dotted line, at T=15 K) Tl-2201 for the a) B1g and b) B2g scattering components. For each doping and scattering component the frequency axis is rescaled to the position of the respective pair-breaking peak. The solid curves show fits to the low frequency scattering with a) the Im χ ∼ ω^n function (n=2.9 and 3.5 for the crystals with Tc=80±5 K and 56±2 K, respectively) and b) the Im χ ∼ ω function.

FIG. 6. Temperature dependence of the Raman scattering intensity of the B1g scattering component in the moderately overdoped Tl-2201 (Tc=56±2 K) without correction for the Bose factor. With increasing temperature the pair-breaking peak position of the ELRS shifts to lower frequency and its intensity decreases. The inset shows the temperature dependence of the pair-breaking peak position (solid circles) and the expected dependence from BCS theory (solid line). Note that at T=59 K > Tc=56 K one sees a characteristic increase of the intensity towards zero frequency which is attributed to impurity-induced scattering.

FIG. 7. Imaginary part of the Raman response function at T ≤ Tc, with T=55 K (solid line), T=48 K (dashed line) and T=45 K (dotted line) for the B1g scattering component of moderately overdoped Tl-2201. Arrows show the pair-breaking peak (≈ 105 cm−1 at T=45 K) and the peak due to scattering on the "normal excitations" (≈ 50 cm−1). The inset shows the temperature dependence of the pair-breaking peak intensity (solid squares) and the intensity of the "normal excitation" scattering (open squares) in the B1g scattering component of the moderately overdoped Tl-2201 (Tc=56±2 K).
TABLE I. Peak positions of the B1g, A1g and B2g electronic Raman scattering components for optimally doped and overdoped Tl-2201. A question mark (?) indicates no detection of a pair-breaking peak.

Compound Tl-2201       Tc [K]   B1g [cm−1]   A1g [cm−1]   B2g [cm−1]   2∆max/kBTc
optimally doped        80±5     430±15       345±20       380±35       7.8±0.4
moderately overdoped   56±2     125±10       110±20       120±10       3.3±0.3
strongly overdoped     30±2     60±5         ?            50±5         2.9±0.4
1. A.A. Abrikosov and V.M. Genkin, Sov. Phys.-JETP 38, 417 (1973).
2. M.V. Klein and S.B. Dierker, Phys. Rev. B 29, 4976 (1984).
3. T.P. Devereaux and D. Einzel, Phys. Rev. B 51, 16336 (1995).
4. T.P. Devereaux and A. Kampf, unpublished.
5. T. Strohm and M. Cardona, Phys. Rev. B 55, 12725 (1997).
6. D. Branch and J.P. Carbotte, Phys. Rev. B 54, 13288 (1996).
7. F. Wenger and M. Käll, Phys. Rev. B 55, 97 (1997).
8. X.K. Chen, J.G. Naeini, K.C. Hewitt, J.C. Irwin, R. Liang, and W.N. Hardy, unpublished.
9. R. Hackl, W. Glaser, P. Müller, D. Einzel, and K. Andres, Phys. Rev. B 38, 7133 (1988).
10. S.L. Cooper, F. Slakey, M.V. Klein, J.P. Rice, E.D. Bukowski, and D.M. Ginsberg, Phys. Rev. B 38, 11934 (1988).
11. X.K. Chen, E. Altendorf, J.C. Irwin, R. Liang, and W.N. Hardy, Phys. Rev. B 48, 10530 (1993).
12. T. Staufer, R. Nemetschek, R. Hackl, P. Müller, and H. Veith, Phys. Rev. Lett. 68, 1069 (1992).
13. A. Hoffmann, P. Lemmens, G. Güntherodt, V. Thomas, and K. Winzer, Physica C 235-240, 1897 (1994).
14. X.K. Chen, J.C. Irwin, H.J. Trodahl, T. Kimura, and K. Kishio, Phys. Rev. Lett. 73, 3290 (1994).
15. R. Nemetschek, O.V. Misochko, B. Stadlober, and R. Hackl, Phys. Rev. B 47, 3450 (1993).
16. L.V. Gasparov, P. Lemmens, M. Brinkman, N.N. Kolesnikov, and G. Güntherodt, Phys. Rev. B 55, 1223 (1997).
17. T. Tsuneto, Phys. Rev. 118, 1029 (1960).
18. M. Kang, G. Blumberg, M.V. Klein, and N.N. Kolesnikov, Phys. Rev. Lett. 77, 4434 (1996).
19. E. Ya. Sherman, unpublished. It has been shown there that the anisotropy of the Raman vertex does not follow the effective-mass anisotropy even away from a resonance.
20. B. Stadlober, G. Krug, R. Nemetschek, and R. Hackl, Phys. Rev. Lett. 74, 4911 (1995).
21. C. Kendziora, R. Kelley, and M. Onellion, Phys. Rev. Lett. 77, 727 (1996).
22. A. Zavadowski and M. Cardona, Phys. Rev. B 42, 10732 (1990).
23. V.N. Kostur, Z. Phys. B 89, 149 (1992).
24. O.V. Misochko and E. Ya. Sherman, Int. J. Mod. Phys. B 24, 3371 (1994).
25. R. Hackl, M. Opel, P. Müller, G. Krug, B. Stadlober, R. Nemetschek, H. Berger, and L. Forro, J. Low Temp. Phys. 105, 733 (1996).
26. A. Virosztek et al., Phys. Rev. B 45, 347 (1992).
27. V.V. Tatarskii, M. Paranthaman, and A.M. Hermann, Phys. Rev. B 47, 14489 (1993).
28. …, in Physical Properties of High Temperature Superconductors III, edited by D.M. Ginsberg (World Scientific, Singapore, New Jersey, London, Hong Kong, 1992).
29. A.V. Puchkov, P. Fournier, T. Timusk, and N.N. Kolesnikov, Phys. Rev. Lett. 77, 1853 (1996).
30. G. Blumberg, private communication.
31. D.L. Novikov and A.J. Freeman, in Recent Developments in High Temperature Superconductivity, edited by J. Klamut et al., Springer Lecture Notes in Physics, Vol. 475, p. 17 (1995).
| [] |
[
"The vehicle routing problem with drones and drone speed selection",
"The vehicle routing problem with drones and drone speed selection"
] | [
"Felix Tamke \nFaculty of Business and Economics\n01062Dresden, DresdenTUGermany\n",
"Udo Buscher \nFaculty of Business and Economics\n01062Dresden, DresdenTUGermany\n"
] | [
"Faculty of Business and Economics\n01062Dresden, DresdenTUGermany",
"Faculty of Business and Economics\n01062Dresden, DresdenTUGermany"
] | [] | Joint parcel delivery by trucks and drones has enjoyed significant attention for some time, as the advantages of one delivery method offset the disadvantages of the other. This paper focuses on the vehicle routing problem with drones and drone speed selection (VRPD-DSS), which considers speed-dependent energy consumption and drone-charging in detail. For this purpose, we formulate a comprehensive mixed-integer problem that aims to minimize the operational costs consisting of fuel consumption costs of the trucks, labor costs for the drivers, and energy costs of the drones. The speed at which a drone performs a flight must be selected from a discrete set. We introduce preprocessing steps to eliminate dominated speeds for a flight to reduce the problem size and use valid inequalities to accelerate the solution process. The consideration of speed-dependent energy consumption leads to the fact that it is advisable to perform different flights at different speeds and not to consistently operate a drone at maximum speed. Instead, drone speed should be selected to balance drone range and speed of delivery. Our extensive computational study of a rural real-world setting shows that, by modeling energy consumption realistically, the savings in operational costs compared to truck-only delivery are significant but smaller than those identified in previously published work. Our analysis further reveals that the greatest savings stem from the fact that overall delivery time decreases compared to truck-only delivery, allowing costly truck-driver time to be reduced. The additional energy costs of the drone, however, are largely negligible. | 10.1016/j.cor.2022.106112 | [
"https://arxiv.org/pdf/2111.13050v1.pdf"
] | 244,709,004 | 2111.13050 | cf5a948d199d789679bc58d2ecf5e045f3d6429b |
The vehicle routing problem with drones and drone speed selection
Felix Tamke
Faculty of Business and Economics
TU Dresden, 01062 Dresden, Germany
Udo Buscher
Faculty of Business and Economics
TU Dresden, 01062 Dresden, Germany
The vehicle routing problem with drones and drone speed selection
Vehicle routing problem, Drones, Energy consumption
Joint parcel delivery by trucks and drones has enjoyed significant attention for some time, as the advantages of one delivery method offset the disadvantages of the other. This paper focuses on the vehicle routing problem with drones and drone speed selection (VRPD-DSS), which considers speed-dependent energy consumption and drone-charging in detail. For this purpose, we formulate a comprehensive mixed-integer problem that aims to minimize the operational costs consisting of fuel consumption costs of the trucks, labor costs for the drivers, and energy costs of the drones. The speed at which a drone performs a flight must be selected from a discrete set. We introduce preprocessing steps to eliminate dominated speeds for a flight to reduce the problem size and use valid inequalities to accelerate the solution process. The consideration of speed-dependent energy consumption leads to the fact that it is advisable to perform different flights at different speeds and not to consistently operate a drone at maximum speed. Instead, drone speed should be selected to balance drone range and speed of delivery. Our extensive computational study of a rural real-world setting shows that, by modeling energy consumption realistically, the savings in operational costs compared to truck-only delivery are significant but smaller than those identified in previously published work. Our analysis further reveals that the greatest savings stem from the fact that overall delivery time decreases compared to truck-only delivery, allowing costly truck-driver time to be reduced. The additional energy costs of the drone, however, are largely negligible.
Introduction
This paper introduces the vehicle routing problem with drones and drone speed selection (VRPD-DSS). The integration of drones into transportation systems for parcel delivery has attracted significant attention in recent years [9,27,28,32,37]. In contrast to delivery trucks, drones are not restricted to road networks and are, therefore, considered faster. However, they have several drawbacks. Drones have very limited capacities, e. g., in terms of the number of parcels or maximum payload, as well as limited range. Therefore, they are not well-suited as a stand-alone solution for delivery scenarios involving longer distances. One way to offset the disadvantages of drones is to combine trucks and drones into tandems [1,10,29,48]. In these cases, a truck carries one or more drones atop its roof and operates as a kind of mobile warehouse for drones but it also makes deliveries to customers.
Truck-drone tandems are considered a viable option for last-mile delivery, especially in rural areas [5,19], where distances between customers are greater because the population density is lower. Thus, trucks have to travel long distances to make deliveries to remote customers, while drones can fly more-direct routes. However, direct drone delivery from a warehouse or store, as tested by various companies for urban and suburban areas, is usually not possible due to the long distances. Hence, the trucks are able to work as range extenders for the drones. Moreover, drones can usually operate in rural areas without interference from tall buildings or other obstacles. This allows for less-stringent regulations concerning their use. Another aspect to consider is a potential customer-delivery zone. Residences in rural areas are mostly stand-alone houses. This allows a drone to drop the package somewhere on the customer's property, for example with a winch, which simplifies the delivery process.
One of the most important factors of drone use is their limited range. In the literature on truck-drone tandems, the range is often only an approximation based on a maximum distance or a time limit [27,49]. However, both approaches are simplifications for the maximum energy consumption of a drone. The energy consumed by a drone depends on several factors. Zhang et al. [49] grouped these factors with respect to drone design, environment, drone dynamics, and delivery operations. These authors showed that, in particular, the total weight of the drone at takeoff, which consists of the payload plus drone and battery weight, and the airspeed have a major impact on energy consumption and, thus, on range. For a given drone configuration and just a single customer delivery per flight, the total weight at takeoff is fixed, and only the speed of the flight can be adjusted to vary the range.
However, in most optimization approaches for truck-drone tandems, the range is either independent of the speed, or the speed is the same for all flights. Thus, speed is not included in the decision-making process. In the context of the problem considered here, i.e., both trucks and drones make deliveries, to the best of our knowledge, variable drone speeds affecting the range are presented only in [35] for a single truck with multiple drones. The authors introduced a heuristic approach in which drone speeds can be dynamically adjusted and demonstrated that significant time-savings can be achieved with variable drone speeds.
In contrast to [35], we present for the first time an exact approach for routing truck-drone tandems that takes into account speed-dependent energy consumption and other relevant aspects such as recharging. Here, considering drone speed as a continuous decision variable is not practical because the speed affects the energy expended in a nonlinear manner [49]. Therefore, we perform a discretization of the speed to obtain different discrete levels. The energy consumption for each discrete level can be determined in advance and, thus, a speed has to be selected for a flight. We call this new problem the vehicle routing problem with drones and drone speed selection and make the following additional contributions in this paper:
• We conduct numerical experiments on realistic instances for a rural scenario. The instances incorporate real-world routing between locations and realistic drone parameters. The results show that substantial cost-savings can be achieved by truck-drone tandems in comparison to traditional truck-only delivery for the given rural test instances. However, because of the more-realistic assumptions, the savings are not as high as those shown in other publications.
• We show in our experiments that the greatest savings can be achieved by shortening delivery times, while the power costs of the drones are almost negligible.
• We find that the speed selected in advance has a large impact on the costs of the VRPD when only a single speed is available. In contrast, the VRPD-DSS is independent of this pre-selected speed and achieves at least the minimal costs of all VRPDs but usually provides better solutions.
• We present and prove preprocessing methods to eliminate dominated speeds for drone flights and unnecessary variables to efficiently reduce the problem size without excluding optimal solutions.
The paper is structured as follows. A brief overview of the related literature is provided in Section 2. We then present the assumptions made to define the VRPD-DSS and explain the energy-consumption model used in this paper in detail in Section 3. A mixed-integer linear programming formulation for the VRPD-DSS is introduced in Section 4. Section 5.1 presents the newly developed preprocessing methods. Known and new valid inequalities to strengthen the model formulation are described in Section 5.2. The generation of the rural test instances and the results of our computational studies are reported in Section 6. Finally, we conclude the paper in Section 7.
Related literature
The literature on optimization problems associated with drones is growing rapidly as shown in recent surveys [9,27,28,32,37]. Therefore, we focus our brief review of the relevant literature mostly on the drone range concepts used and on problems where trucks and drones can perform deliveries. Macrina et al. [27] distinguished this class of problems from problem classes where only drones are able to make deliveries, either from stationary depots (drone delivery problem) or from mobile ground vehicles (carrier-vehicle problem with drones). We refer to the most recent surveys in Chung et al. [9], Macrina et al. [27], and Moshref-Javadi and Winkenbach [28] for a more detailed review of these problems.
Combined delivery by trucks and drones as tandems was introduced by Murray and Chu [29] as the flying sidekick traveling salesman problem (FSTSP) for a single truck with a single drone. They limited the range of the drone by a maximum time it can be airborne and presented a mixed-integer linear programming (MILP) model as well as a heuristic approach to solve the FSTSP. Agatz et al. [1] presented a variant where drones can perform loops, i.e., start and end a flight at the same node. They called this problem the traveling salesman problem with drones (TSPD) and introduced two heuristics. In contrast to [29], the drone range is limited by a maximum flight distance. In addition, they varied the speed of the drone in the computational experiments and showed that a faster drone leads to significantly reduced costs. However, the range is independent of the speed. Many additional heuristic algorithms, e.g., [6,12,22,23,34], and exact approaches, e.g., [3,4,14,15,36,44], have been designed to solve these two basic problems or closely related variants. Currently, the best-known exact approach for a single tandem with one drone is a branch-and-price algorithm introduced in [36]. Unlike the other approaches, Jeong et al. [23] used an approximation of a simple energy-consumption function that takes into account the loaded weight. In a recent heuristic approach [6], the authors used a more detailed energy-consumption model that considers the parcel weight and a fixed speed.
A natural extension of the basic problems FSTSP and TSPD is to consider multiple drones for one truck, as in [7,13,30,35], for example. Murray and Raj [30] introduced the multiple flying sidekicks traveling salesman problem (mFSTSP) and focused on the scheduling of launch and retrieval operations of multiple drones to deal with the small space on a delivery truck. They present an MILP model to solve small instances and a heuristic algorithm for larger instances. In addition, they investigated different approaches to drone endurance, including an energy-consumption model based on parcel weight and speed. However, the speed is not part of the decision-making process. In their computational experiments, they demonstrated that not using an actual energy-consumption model often leads to under-utilization of resources or infeasible solutions in terms of energy consumption. In a subsequent paper [35], the authors included the drone speed as a decision variable and called the resulting problem the mFSTSP with variable drone speeds. They introduced a heuristic approach to solve the problem and showed that variable drone speeds lead to substantial time-savings.
Another extension of the FSTSP and the TSPD arises from using multiple tandems with one or more drones per truck. This problem is introduced in Wang et al. [46] as the vehicle routing problem with drones (VRPD). As for the problem with one tandem, several heuristic algorithms, e.g. [11,17,24,26,38,39,45], and exact approaches, e.g. [16,40,43,47], have been presented for the VRPD and several related problems. Several approaches consider time windows [16,17], allow multiple visits to customers on the same flight [26,47], or enable the launch and retrieval of drones at discrete points on an arc [39]. However, only the approach presented by Liu et al. [26] considers a range that is not limited by a maximum time or distance. Instead, they used an approximation of a drone's energy consumption that depends on the loaded weight.
Different concepts for a drone's range are also applied in problems without combined deliveries, e.g., in [8,33,18]. Cheng et al. [8] developed an exact algorithm for a drone delivery problem with multiple trips and with non-linear energy consumption based on the payload. Poikonen and Golden [33] studied a problem with one truck and multiple drones and proposed a heuristic algorithm. Drones are allowed to visit multiple customers on the same flight, but the truck cannot visit a node when a drone is in the air. The energy consumption used in their approach takes into account the loaded weight of the parcels. Two different drone speeds are tested in their computational experiments, but the speed has no impact on the expended energy. Dukkanci et al. [18] considered a problem where drones are first transported to launch points by trucks, serve a customer, and then return to the truck to start the next service. The trucks remain at the launch points and perform no deliveries, while the drones serve the customers. The authors determined the energy consumption explicitly and used the speed of the drones as continuous decision variables. To solve this problem, they reformulated the non-linear model into a second-order cone-programming problem.
In summary, to the best of our knowledge, there is currently no approach for the VRPD that takes into account the actual energy consumption of drones and incorporates drone speed into the decision-making process. Additionally, in contrast to battery-switching, recharging the battery while the drone is on the truck has not yet been included. Therefore, we adapt and extend one of the currently best exact approaches for the VRPD, as presented in [43], to address these relevant and important additional features.
Preliminary considerations
Assumptions on drone operations and truck-drone interaction
To model the VRPD-DSS, we make the following assumptions:
(a) A truck can be equipped with one or more drones. However, each drone is associated with one truck exclusively. Therefore, that drone may not be launched or received by any other truck. This is reasonable since the technological effort required to coordinate multiple drones on one truck is high.
(b) Trucks and drones do not have to use the same distance metric in a network because trucks are bound by the road network whereas drones are not.
(c) Each drone operation comprises three steps. First, the drone has to be launched from the truck. Next, it performs a delivery to exactly one customer, and then it returns to its associated truck.
(d) A drone must be launched and retrieved at nodes of the given network, i.e., the depot or customer locations. A drone must not start and end a flight at the same customer location. In addition, a truck must not return to an already-visited customer to retrieve a drone. Likewise, trucks and drones may return to the depot only once.
(e) We consider service times at customer locations for truck as well as drone deliveries. We also take into account the time needed to prepare a launch. The time required to retrieve a drone is not considered since we assume that the drones operate autonomously.
(f) A drone can fly at different speeds. The speed for a flight can be selected from a discrete set of available speeds and is constant during the whole flight (steady flight). The speed of the truck is given and is not part of the decision-making process.
(g) A drone expends energy by flying and hovering. Its energy consumption while flying depends on the selected speed and the weight. Hovering occurs in two cases: first, if the drone has to wait at the retrieval location and, second, while it is serving a customer. Other operations like climb and descent are not taken into consideration. We assume that the amount of time not spent hovering or in steady flight is negligible.
(h) The energy that can be expended during flight and hovering is limited by the available energy of the drone battery. However, the battery can be charged at a constant rate when the drone is on top of the truck. It cannot be recharged while the drone is being prepared for a launch.
Energy-consumption model
An energy-consumption model for drone operations is essential in our study. We use the model presented in [41] for two reasons. First, multi-rotor drones are considered, corresponding to the technology currently used in truck-drone tandems. Second, the model provides different power functions for steady flight and hovering. Thus, we are able to distinguish between these two operations and can include the speed of a flight into energy considerations and decision-making. We use the octocopter presented in [41] in all our tests (see Table A.1 for parameters) because octocopters are capable of carrying heavier packages and, thereby, are suitable for truck-drone tandems.
A drone consumes energy when hovering and flying because it has to resist both gravity and drag forces. The latter are caused by the forward motion of the drone and the wind. However, we assume perfect ambient conditions such as no wind. All of a drone's activities in the air are accomplished by adjusting the speed of each rotor. This leads to the required thrust and pitch to perform an operation, e.g., moving forward at a desired speed. The thrust of the individual rotors differs depending on the type of activity. On average, however, they are almost equal and, together, exactly balance gravity and drag forces. Therefore, the total required thrust T can be described as the sum
$$T = F_g + F_d \tag{1}$$
of gravity F_g and the total drag force F_d. As described in [41], it is convenient to divide the drone into three relevant components: drone body (db), battery (b), and a customer's package (p). All three components i ∈ DP = {db, b, p} have the same attributes: mass m_i, drag coefficient c_{d_i}, and projected area A_i perpendicular to the direction of travel. Hence, gravity F_g is equal to
$$F_g = g \sum_{i \in DP} m_i, \tag{2}$$
where g is the standard acceleration due to gravity. The total drag force F d for steady flight with speed v can be estimated with the equation
$$F_d = \frac{1}{2}\,\rho\, v^2 \sum_{i \in DP} c_{d_i} A_i, \tag{3}$$
where ρ is the density of air. For a drone with n rotors of diameter D, we are now able to calculate the theoretical minimum power required to hover as
$$P_{H,\min} = \frac{T^{3/2}}{\sqrt{\frac{1}{2}\,\pi\, n\, D^2\, \rho}}. \tag{4}$$
Note that for hovering, v = 0 and, therefore, F d = 0 and T = F g . The theoretical minimum power for steady flight with speed v is given by
$$P_{F,\min} = T\left(v \sin\alpha + v_i\right) \tag{5}$$
with pitch angle α and induced speed v i . Pitch angle α is the tilt of the drone in the direction of travel and can be determined by
$$\alpha = \arctan\left(\frac{F_d}{F_g}\right). \tag{6}$$
The induced speed at the rotors v i can be computed by solving the implicit equation
$$v_i - \frac{2T}{\pi n D^2 \rho\,\sqrt{(v \cos\alpha)^2 + (v \sin\alpha + v_i)^2}} = 0, \tag{7}$$
which can easily be done numerically. Finally, we consider an overall power efficiency of the drone η and also use a safety coefficient σ. The latter is used to reflect circumstances not considered in the energy-consumption function, such as wind and temperature, and to prevent an underestimation of the power consumption. Hence, the expended power during hover P H and forward flight P F is expressed as
$$P_H = \frac{P_{H,\min}}{\eta}\,(1 + \sigma) \quad\text{and}\quad P_F = \frac{P_{F,\min}}{\eta}\,(1 + \sigma). \tag{8}$$
In our studies, we vary two parameters that are relevant for computing the energy consumption of a given drone configuration: the speed v and the mass of the package for a customer m_p. Hence, we introduce the expended power for hovering and steady forward flight as functions P_H(m_p) and P_F(m_p, v). Note that the drag coefficient c_{d_p} and the projected area A_p also change with different packages due to their different shapes. However, these effects are negligible compared to those of the other two factors.
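The complete chain of Eqs. (1)-(8) is easy to evaluate numerically. The following Python sketch implements P_H(m_p) and P_F(m_p, v), solving the implicit momentum-theory equation (7) for the induced speed with a bracketing root finder. All parameter values here are illustrative placeholders and not the octocopter data of Table A.1:

import numpy as np
from scipy.optimize import brentq

G, RHO = 9.81, 1.225          # gravity [m/s^2], air density [kg/m^3]
N_ROT, D = 8, 0.43            # number of rotors, rotor diameter [m]
ETA, SIGMA = 0.7, 0.1         # power efficiency, safety coefficient
# (mass [kg], drag coefficient, projected area [m^2]) of body and battery
M_DB, CD_DB, A_DB = 10.0, 1.5, 0.22
M_B,  CD_B,  A_B  = 10.0, 1.0, 0.02

def power(m_p, v, cd_p=2.2, a_p=0.09):
    # Expended power [W]: P_F(m_p, v) for v > 0, P_H(m_p) for v = 0
    f_g = G * (M_DB + M_B + m_p)                                   # Eq. (2)
    f_d = 0.5 * RHO * v**2 * (CD_DB*A_DB + CD_B*A_B + cd_p*a_p)    # Eq. (3)
    t = f_g + f_d                                                  # Eq. (1)
    if v == 0.0:
        p_min = t**1.5 / np.sqrt(0.5 * np.pi * N_ROT * D**2 * RHO)  # Eq. (4)
    else:
        alpha = np.arctan(f_d / f_g)                               # Eq. (6)
        def residual(v_i):                                         # Eq. (7)
            return v_i - 2.0 * t / (np.pi * N_ROT * D**2 * RHO *
                np.hypot(v * np.cos(alpha), v * np.sin(alpha) + v_i))
        v_i = brentq(residual, 1e-6, 1e3)
        p_min = t * (v * np.sin(alpha) + v_i)                      # Eq. (5)
    return p_min / ETA * (1.0 + SIGMA)                             # Eq. (8)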
In the following, we analyze the trade-off between drone speed and energy consumed for the given model. Consider the examples shown in Figure 1 to better understand the relationship between package mass, speed, and corresponding expended power (1a) and energy consumption (1b, 1c) for the introduced model. We distinguish between two different energy-consumption scenarios. First, we take into account only the energy consumption for flying 1000 m (1b). Secondly, we assume that the drone flies 1000 m and then has to hover and wait for the truck, which arrives 180 s after the drone takes off (1c). This is a relevant scenario for truck-drone tandems and is required for synchronization. Since all flights are faster than 180 s, the drone must hover regardless of the speed. However, hovering time increases with increasing drone speed.
The expended power increases monotonically with the drone speed and is higher for larger package masses. In contrast, the energy consumption for steady flight initially decreases and then increases again with increasing speed; this is due to the trade-off between expended power and flight duration. At low speeds, less power is expended, but the flight duration is longer. In the reverse case, exactly the opposite is true; higher speeds expend more power, but the flight is faster. As a result, the range of the drone depends significantly on the selected speed, and faster drone speeds are not automatically better. Additionally, if waiting for the truck and hovering were not necessary in general, we could determine an optimal speed for a given package mass m_p and exclude all slower speeds. Speeds slower than the optimal speed cause higher energy consumption and a longer flight duration. However, hovering might be necessary for truck-drone tandems. If we include energy consumption for hovering, slower speeds usually have a smaller total energy consumption, as shown in Figure 1c. Hence, faster drones may lead to faster deliveries but have a smaller range and higher energy consumption. The latter is especially important if the battery has to be recharged on the truck before the next flight and is not swapped. Therefore, it is essential to include the speed of the drones in the route-planning process of truck-drone tandems when their energy consumption is considered.
Mixed-integer linear programming formulation for the VRPD-DSS
Notation

4.1.1. Sets and parameters
We distinguish five different but partly overlapping sets of nodes. First, we denote the set of all customers by C = {1, . . . , c}. Not all customers can be served by a drone due, for example, to weight restrictions or customer preferences. Therefore, we introduce the subset C̃ ⊆ C as the set of all customers that can be served by a drone. Furthermore, we introduce nodes 0 and c + 1 as start depot and end depot for the same physical location. We then define N = {0} ∪ C ∪ {c + 1} as the set of all nodes, N_0 = N \ {c + 1} as the set of all departure nodes, and N^+ = N \ {0} as the set of all arrival nodes.
A homogeneous fleet of truck-drone tandems F is available to supply all customers. Each tandem f ∈ F consists of a single truck f and a set of drones D. The distance from node i ∈ N to node j ∈ N for a truck is denoted by δ T ij and the corresponding travel time by τ T ij . The distance between two nodes i and j for the drone is represented by δ D ij . In contrast to a truck, a drone can travel at different speeds v ∈ V , where V is the set of possible speeds. We introduce τ D,v ij = δ D ij /v as the travel time of a drone from node i to node j at speed v. In addition to travel times, we consider service times τ S,T j and τ S,D j for truck and drone deliveries for each node j ∈ N . The amount of time needed to prepare a launch is represented by τ L . We also introduce the maximum amount of time a drone is allowed to hover before retrieval as τ MH and the maximum time a truck is allowed to remain stationary at a node as τ MS . Both times can be limited to reflect more-realistic scenarios. The maximum duration of a route is denoted by M .
A drone flight is defined as a triple (i, j, k) with node i ∈ N_0 as the launch node, j ∈ C̃ as the customer node, and k ∈ N^+ as the retrieval node. An operation (i, j, k)^v represents the execution of the flight (i, j, k) with speed v. Assuming that both legs of the flight (i to j and j to k) are executed at the same speed v, we can determine the time τ^v_{ijk} of an operation with
τ^v_{ijk} = τ^{D,v}_{ij} + τ^{S,D}_j + τ^{D,v}_{jk}. (9)
The energy consumption of an operation can be computed in a similar manner. During the flight to the customer j, the drone must carry the package with mass m j , whereas no package is transported on the way to retrieval node k. We assume that the drone hovers at customer j's property to deliver the package and that the mass is constant (m j ) over the delivery time to take additional energy consumption, e.g., for using the winch, into account. Therefore, the energy consumption for operation (i, j, k) v corresponds to
e^v_{ijk} = τ^{D,v}_{ij} · P_F(m_j, v) + τ^{S,D}_j · P_H(m_j) + τ^{D,v}_{jk} · P_F(0, v). (10)
The battery of a drone has a nominal energy of E. However, in order to increase its service life, a battery should usually not be fully discharged. Hence, we use ε as the maximum depth of discharge (DoD) in percent. A DoD of 0% means the battery is fully charged, while at a DoD of 100%, the battery is empty. Therefore, the maximum available energy is εE. Furthermore, the battery can be recharged with a fixed charging rate of P^C.
W^v is the set of feasible drone operations for speed v ∈ V. Each drone speed v leads to a different set of feasible operations since the speed has a large impact on the range. The set of all feasible drone operations is W = ∪_{v∈V} W^v. An operation (i, j, k)^v is feasible only under three conditions: (i) all nodes have to be pairwise different; (ii) customer j can be supplied by a drone, i.e., j ∈ C̃; and (iii) the minimum energy consumption of operation (i, j, k)^v does not exceed the maximum available energy εE of the battery. In addition to the energy consumption e^v_{ijk}, we can take into account the minimum hovering time at retrieval node k, as the truck could arrive after the drone, although it travels directly from i to k.
Thus, operation (i, j, k)^v is feasible for i ∈ N_0, j ∈ C̃, k ∈ N^+, i ≠ j, i ≠ k, j ≠ k, if

e^v_{ijk} + max(τ^T_{ik} − τ^v_{ijk}, 0) · P_H(0) ≤ εE. (11)
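Building on the power functions sketched above, the following hedged Python fragment computes the duration (9) and energy (10) of an operation and checks the feasibility condition (11). Here, e_max stands for the available energy εE, and all names are illustrative assumptions rather than the authors' code.

```python
def operation_time(dist_ij, dist_jk, v, service_j):
    """Duration tau^v_ijk of operation (i, j, k)^v per eq. (9) [s]."""
    return dist_ij / v + service_j + dist_jk / v

def operation_energy(dist_ij, dist_jk, m_j, v, service_j):
    """Energy e^v_ijk per eq. (10): loaded out-leg, hovering delivery, empty return [J]."""
    return (dist_ij / v * power_flight(m_j, v)
            + service_j * power_hover(m_j)
            + dist_jk / v * power_flight(0.0, v))

def is_feasible(dist_ij, dist_jk, m_j, v, service_j, tau_truck_ik, e_max,
                drone_eligible=True):
    """Condition (11): operation energy plus minimum hover at k within budget e_max = eps*E."""
    if not drone_eligible:                       # condition (ii): j must be in C-tilde
        return False
    tau_op = operation_time(dist_ij, dist_jk, v, service_j)
    e_op = operation_energy(dist_ij, dist_jk, m_j, v, service_j)
    min_hover = max(tau_truck_ik - tau_op, 0.0)  # drone may have to wait for the truck at k
    return e_op + min_hover * power_hover(0.0) <= e_max
```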
Finally, we define the cost parameters: (i) λ as fuel cost per distance unit traveled by a truck, (ii) β as cost per time unit of working time of a truck driver, and (iii) γ as cost per energy unit expended by the drone.
Decision variables
Several decision variables are required to describe the problem as an MILP:
• x^f_{ij} = 1 if truck f ∈ F drives directly from node i ∈ N_0 to node j ∈ N^+ and, otherwise, 0.
• y^{fdv}_{ijk} = 1 if drone d of tandem f ∈ F performs operation (i, j, k)^v and, otherwise, 0.
• q^f_i = 1 if node i ∈ N is visited by truck f ∈ F and, otherwise, 0.
• b^f_{ij} = 1 if nodes i, j ∈ C with j > i are visited by truck f ∈ F and, otherwise, 0.
• u^f_i ∈ ℕ_0 specifies the position of customer i ∈ C on the route of truck f ∈ F.
• p^f_{ij} = 1 if node i ∈ N_0 precedes node j ∈ N^+ in the tour of truck f ∈ F and, otherwise, 0.
Figure 2: Relation between variables for resources time and energy
• z f d lik = 1 if drone d ∈ D of tandem f ∈ F performs a flight with launch node i ∈ N 0 and retrieval node k ∈ N + and truck f visits node l ∈ C in between and, otherwise, 0.
• att^f_i ∈ ℝ^+ represents the arrival time of truck f ∈ F at node i ∈ N.
• dtt^f_i ∈ ℝ^+ represents the departure time of truck f ∈ F from node i ∈ N.
• atd^{fd}_i ∈ ℝ^+ represents the arrival time of drone d ∈ D of tandem f ∈ F at node i ∈ N.
• dtd^{fd}_i ∈ ℝ^+ represents the departure time of drone d ∈ D of tandem f ∈ F from node i ∈ N.
• htd^{fd}_i ∈ ℝ^+ represents the amount of time that drone d ∈ D of tandem f ∈ F hovers at node i ∈ N.
• ltd^{fd}_i ∈ ℝ^+ represents the amount of time that is used for drone d ∈ D of tandem f ∈ F to be loaded at node i ∈ N.
• r^{fd}_i ∈ [(1 − ε)E, E] represents the residual energy of drone d ∈ D of tandem f ∈ F when arriving at node i ∈ N or at reunion with truck f if i is a retrieval node.
• w^{fd}_{ij} ∈ [0, 1] represents the share of travel time τ^T_{ij} on arc (i, j) that is used by drone d ∈ D of tandem f ∈ F for recharging.
• tec^{fd} ∈ ℝ^+ represents the total energy consumption of drone d of tandem f.
The relationship between time variables of trucks and drones as well as energy-related variables of drones is shown in Figure 2. Note that we omit the indices for truck and drone as we consider only one vehicle of each type in this example. Both vehicles leave node i at the same time (dtt_i = dtd_i). The drone performs operation (i, j, k)^v and arrives at retrieval node k at time atd_k = dtd_i + τ^v_{ijk}. Its residual energy decreases from r_i to r_i − e^v_{ijk} on arrival at node k. The truck arrives at node k at att_k = dtt_i + τ^T_{ik} > atd_k. Thus, the drone must hover for htd_k = att_k − atd_k time units. The residual energy of the drone at its reunion with the truck at node k is r_k = r_i − e^v_{ijk} − htd_k P_H(0). After the arrival of the truck, customer k is served by the driver. The drone is recharged on the truck for ltd_k = τ^{S,T}_k time units during the service to customer k. Finally, the truck departs from customer location k at time dtt_k = att_k + τ^{S,T}_k and travels to node l. The drone is recharged during the complete travel time τ^T_{kl} (w_{kl} = 1) while atop the truck, and its residual energy is r_l = r_k + (ltd_k + τ^T_{kl}) P^C when the tandem reaches node l.
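As a small numerical illustration of these relations (with invented values, reusing power_hover from the earlier sketch):

```python
# Invented example values: a 300 s operation, a 360 s direct truck leg i -> k,
# an operation energy of 400 kJ, and a full battery budget of eps * E.
EPS, E_NOM = 0.8, 3516.48e3              # maximum DoD, nominal battery energy [J]
dtt_i = dtd_i = 0.0                      # tandem leaves node i together
tau_op, tau_truck, e_op = 300.0, 360.0, 400e3

atd_k = dtd_i + tau_op                   # drone arrival at retrieval node k
att_k = dtt_i + tau_truck                # truck arrival at k
htd_k = max(att_k - atd_k, 0.0)          # hover time while waiting for the truck
r_k = EPS * E_NOM - e_op - htd_k * power_hover(0.0)
print(f"hover {htd_k:.0f} s, residual energy {r_k / 1e3:.1f} kJ at reunification")
```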
Model
The VRPD-DSS can be formulated as an MILP with the notation and decision variables introduced above. To facilitate understanding, the objective function and the various groups of constraints are presented in several sections.
Objective function
The objective function
min λ Σ_{f∈F} Σ_{i∈N_0} Σ_{j∈N^+} δ^T_{ij} x^f_{ij} + β Σ_{f∈F} att^f_{c+1} + γ Σ_{f∈F} Σ_{d∈D} tec^{fd} (12)
minimizes the total operational costs. The first term of (12) corresponds to the fuel-consumption costs of the total distance traveled by all trucks. The second term represents the total working-time costs of all drivers, and the total energy costs of all drones are determined by the last term.
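Although the paper's implementation is in C#, the objective (12) might be assembled in gurobipy along the following lines; the variable dictionaries x, att, and tec are assumed to have been created elsewhere with m.addVars, and consistent units are the implementer's responsibility.

```python
import gurobipy as gp
from gurobipy import GRB

def set_vrpd_dss_objective(m, x, att, tec, delta_T, F, D, N0, Nplus,
                           lam=0.16, beta=20.0, gamma=0.09):
    """Objective (12) with the cost rates of Section 6.1 (lam in $/km, beta in $/h,
    gamma in $/kWh); x[f, i, j], att[f], tec[f, d] are gurobipy variables."""
    fuel = lam * gp.quicksum(delta_T[i, j] * x[f, i, j]
                             for f in F for i in N0 for j in Nplus if i != j)
    wages = beta * gp.quicksum(att[f] for f in F)   # att[f]: completion time att^f_{c+1}
    energy = gamma * gp.quicksum(tec[f, d] for f in F for d in D)
    m.setObjective(fuel + wages + energy, GRB.MINIMIZE)
```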
Complete demand satisfaction

Constraints
Σ_{f∈F} ( q^f_j + Σ_{i∈N_0} Σ_{k∈N^+} Σ_{d∈D} Σ_{v∈V} y^{fdv}_{ijk} ) = 1  ∀ j ∈ C (13)
guarantee that all packages must be delivered and ensure that each customer is visited only once by truck or drone.
Truck routing
We introduce constraints
Σ_{i∈N_0} x^f_{ij} = q^f_j  ∀ j ∈ N^+, f ∈ F  (14)

Σ_{j∈N^+} x^f_{ij} = q^f_i  ∀ i ∈ N_0, f ∈ F  (15)

u^f_i − u^f_j + c · x^f_{ij} + (c − 2) · x^f_{ji} ≤ c − 1  ∀ i, j ∈ C, f ∈ F  (16)

c · p^f_{ij} − (c − 1) ≤ u^f_j − u^f_i  ∀ i, j ∈ C, f ∈ F  (17)

p^f_{0,i} = q^f_i  ∀ i ∈ C, f ∈ F  (18)

p^f_{i,c+1} = q^f_i  ∀ i ∈ C, f ∈ F  (19)
to ensure feasible truck routes. Constraints (14) and (15) preserve the flow of a vehicle f . Inequalities (16) are lifted Miller-Tucker-Zemlin subtour elimination constraints. However, their primary purpose is not to prevent subtours but to correctly determine variables u.
Variables u are used in constraints (17) to set the precedence variables p. If p^f_{ij} = 1, then node j is visited after node i by truck f and u^f_j is at least u^f_i + 1. Equations (18) and (19) guarantee that the depot precedes (node 0) and succeeds (node c + 1) customer i on the route of truck f if and only if customer i is visited by truck f.
Additionally, we ensure that either p f ij or p f ji equals 1 if and only if both nodes i and j are visited by the truck. Hence, we impose
p f ij + p f ji = q f i · q f j ∀i, j ∈ C, j > i.
As the right-hand side of this inequality is nonlinear, we use variables b f ij and the following inequalities to linearize this relationship:
p^f_{ij} + p^f_{ji} = b^f_{ij}  ∀ i, j ∈ C, j > i, f ∈ F  (20)

b^f_{ij} ≤ q^f_i  ∀ i, j ∈ C, j > i, f ∈ F  (21)

b^f_{ij} ≤ q^f_j  ∀ i, j ∈ C, j > i, f ∈ F  (22)

q^f_i + q^f_j ≤ 1 + b^f_{ij}  ∀ i, j ∈ C, j > i, f ∈ F  (23)
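A quick exhaustive check confirms that the linearization (21)-(23) pins b^f_{ij} to the product q^f_i · q^f_j for all binary inputs; the snippet below is only a sanity test, not part of the model.

```python
from itertools import product

# Sanity check: (21)-(23) leave exactly b = q_i * q_j feasible for every 0/1 input.
for q_i, q_j in product((0, 1), repeat=2):
    feasible_b = [b for b in (0, 1)
                  if b <= q_i and b <= q_j and q_i + q_j <= 1 + b]
    assert feasible_b == [q_i * q_j], (q_i, q_j, feasible_b)
print("linearization reproduces q_i * q_j for all binary inputs")
```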
Coordination of drone actions
A drone can perform various actions. Considering our assumptions, it can
• start or end a flight at a node,
• be airborne while the truck visits a node,
• recharge while the truck is traveling from one node to another,
• recharge at a node, or
• idle on the truck either at a node or on an arc.
However, it can never perform multiple activities at the same time, and its actions must be coordinated with the activities of its truck. Since times when the drone is idle do not need to be modeled separately and recharging at a node is included in the energy-consumption constraints in Section 4.2.6, we need to introduce only constraints
Σ_{i∈N_0} Σ_{j∈C̃} Σ_{v∈V} y^{fdv}_{ijk} ≤ q^f_k  ∀ k ∈ N^+, f ∈ F, d ∈ D  (24)

w^{fd}_{ik} ≤ x^f_{ik}  ∀ i, k ∈ N, f ∈ F, d ∈ D  (25)

z^{fd}_{lik} ≥ p^f_{il} + p^f_{lk} + Σ_{j∈C̃} Σ_{v∈V} y^{fdv}_{ijk} − 2  ∀ l ∈ C, i ∈ N_0, k ∈ N^+, f ∈ F, d ∈ D  (26)
to model the coordination of the remaining activities. Constraints (24) assure that the retrieval node of a drone flight has to be visited by the truck. Constraints (25) guarantee that a charging activity of drone d on arc (i, k) may occur only if truck f travels from i to k. Constraints (26) determine whether drone d is in the air at node l while the truck is visiting l. Thus, these constraints determine whether a drone is available to start an activity at node l or not. Consider Figure 3 for a better understanding. Truck f visits customer l between nodes i and k; therefore, p^f_{il} = 1 and p^f_{lk} = 1. At the same time, drone d starts a flight with any speed v at node i, visits a customer j, and is retrieved by the truck at k. Thus, drone d is in the air at node l, and constraints (26) enforce z^{fd}_{lik} = 1. Now, multiple actions that take place simultaneously can be prevented with

Σ_{m∈N^+} w^{fd}_{lm} + Σ_{v∈V} Σ_{m∈C̃} Σ_{n∈N^+} y^{fdv}_{lmn} ≤ q^f_l − Σ_{i∈N_0} Σ_{k∈N^+} z^{fd}_{lik}  ∀ l ∈ C, f ∈ F, d ∈ D. (27)
Constraints (27) guarantee that drone d can either be charged on an arc leaving node l or can start an operation from node l. However, these actions are possible only if node l is visited by truck f and drone d is not already in the air at node l.
Temporal synchronization between trucks and drones
We introduce constraints
att^f_k ≥ dtt^f_i + τ^T_{ik} · x^f_{ik} − M_i (1 − x^f_{ik})  ∀ i ∈ N_0, k ∈ N^+, f ∈ F  (28)

atd^{fd}_k ≥ dtd^{fd}_i + Σ_{v∈V} Σ_{j∈C̃} τ^v_{ijk} y^{fdv}_{ijk} − M_i (1 − Σ_{v∈V} Σ_{j∈C̃} y^{fdv}_{ijk})  ∀ f ∈ F, d ∈ D, i ∈ N_0, k ∈ N^+  (29)

atd^{fd}_k ≤ dtd^{fd}_i + Σ_{v∈V} Σ_{j∈C̃} τ^v_{ijk} y^{fdv}_{ijk} + M_i (1 − Σ_{v∈V} Σ_{j∈C̃} y^{fdv}_{ijk})  ∀ f ∈ F, d ∈ D, i ∈ N_0, k ∈ N^+  (30)

htd^{fd}_i ≥ att^f_i − atd^{fd}_i  ∀ i ∈ C, f ∈ F, d ∈ D  (31)

htd^{fd}_i ≤ q^f_i τ^{MH}_i  ∀ i ∈ C, f ∈ F, d ∈ D  (32)

ltd^{fd}_i ≤ q^f_i τ^{MS}_i  ∀ i ∈ C, f ∈ F, d ∈ D  (33)
to synchronize the activities of trucks and their associated drones with respect to time.
Constraints (28) bound the arrival time att^f_k of truck f at node k if truck f travels directly from node i to node k, where M_i = M − τ^T_{i,c+1} is the latest possible departure time from node i. Constraints (29) and (30) set the arrival time atd^{fd}_k of drone d belonging to tandem f at reunification node k if it performs operation (i, j, k)^v. Hover time htd^{fd}_k of drone d at node k is defined by constraints (31). In case drone d arrives before its corresponding truck f at node i (atd^{fd}_i < att^f_i), it is equal to the difference between the truck arrival time att^f_i and the drone arrival time atd^{fd}_i; otherwise, it is 0. Inequalities (32) and (33) can be used to limit the maximum time a drone is allowed to hover at node i and the maximum time a drone can be recharged at customer node i. They also ensure that drone d of tandem f can only hover or be recharged at customer i if truck f visits customer i.
Constraints
dtt^f_i ≥ att^f_i + τ^{S,T}_i  ∀ i ∈ N, f ∈ F  (34)

dtt^f_i ≥ dtd^{fd}_i  ∀ i ∈ N, f ∈ F, d ∈ D  (35)

dtt^f_i ≤ att^f_i + τ^{MS}  ∀ i ∈ C, f ∈ F  (36)

dtd^{fd}_i ≥ atd^{fd}_i + htd^{fd}_i + ltd^{fd}_i + τ^L Σ_{j∈C̃} Σ_{k∈N^+} Σ_{v∈V} y^{fdv}_{ijk}  ∀ i ∈ N, f ∈ F, d ∈ D  (37)
set the departure times of trucks and drones. The earliest departure time of truck f from node i is determined by constraints (34) and (35). In addition, constraints (36) limit the maximum time a truck can remain stationary at a node. The departure time of a drone is determined with constraints (37). Drone d must not depart from node i before it has finished hovering and loading and is prepared for launch if it starts an operation at node i. Thus, all truck-and drone-related activities at a node must be completed before the truck can leave that node.
Energy consumption of drones
The total energy consumption tec f d of drone d belonging to tandem f is determined by constraints
tec^{fd} = Σ_{i∈N_0} Σ_{j∈C̃} Σ_{k∈N^+} Σ_{v∈V} e^v_{ijk} y^{fdv}_{ijk} + P_H(0) Σ_{i∈C} htd^{fd}_i  ∀ f ∈ F, d ∈ D  (38)
and is equal to the energy expended for all flights plus the energy expended for hovering. The residual energy of a drone is computed by constraints
r^{fd}_k ≤ r^{fd}_i + ltd^{fd}_i P^C − Σ_{v∈V} Σ_{j∈C̃} e^v_{ijk} y^{fdv}_{ijk} − htd^{fd}_k P_H(0) + E (1 − Σ_{v∈V} Σ_{j∈C̃} y^{fdv}_{ijk})  ∀ f ∈ F, d ∈ D, i ∈ N_0, k ∈ N^+  (39)

r^{fd}_i + ltd^{fd}_i P^C ≤ E  ∀ i ∈ N_0, f ∈ F, d ∈ D  (40)

r^{fd}_j ≤ r^{fd}_i + ltd^{fd}_i P^C + τ^T_{ij} P^C w^{fd}_{ij} + E (1 − x^f_{ij})  ∀ i ∈ N_0, j ∈ N^+, f ∈ F, d ∈ D.  (41)
Constraints (39) restrict the residual energy of drone d when reuniting with truck f at node k if a flight is performed between nodes i and k. The residual energy at reunification is the residual energy at departure from node i minus the expended energy. The residual energy at departure from node i consists of the residual energy at arrival r f d i plus the recharged energy while stationary at node i. The expended energy comprises the energy e v ijk for performing operation (i, j, k) v and the energy needed for hovering at reunification node k. Constraints (40) limit the residual energy at departure to the maximum value E since this is not guaranteed by constraints (39). Finally, constraints (41) represent the reloading of drone d while traveling on truck f from node i to node j. However, this is valid only if truck f uses arc (i, j).
Model strengthening
Preprocessing

5.1.1. Elimination of dominated drone operations
Identifying drone speeds that will never be used in an optimal solution for a flight (i, j, k) and eliminating them can substantially reduce the number of possible drone operations. Consequently, the number of variables in our model is also reduced, thereby improving the performance. In general, we have to consider the two different resources, time and energy consumption. An operation is referred to as dominated by another operation for the same flight if it is not beneficial with respect to either time or energy consumption.
General dominance rules.
Proposition 1. For flight (i, j, k) ∈ P, operation (i, j, k)^v is dominated by operation (i, j, k)^s with s > v and v, s ∈ V, if

e^s_{ijk} + (τ^v_{ijk} − τ^s_{ijk}) P_H(0) ≤ e^v_{ijk}. (42)
Proof. Operation (i, j, k) s is faster than operation (i, j, k) v because τ s ijk < τ v ijk if s > v. However, since the truck can arrive after the drone, we can state only that operation (i, j, k) s is always at least as good as operation (i, j, k) v with respect to time.
The energy consumption of operation (i, j, k) v is e v ijk . Since the truck can arrive after the drone, we also have to consider the maximum additional hover time τ v ijk − τ s ijk to reach the same point in time with operation (i, j, k) s as with operation (i, j, k) v . After that, the energy expended by hovering is the same for both speeds. Therefore, the maximum energy consumption of operation (i, j, k) s to reach the same point in time as operation (i, j, k) v is
e^s_{ijk} + (τ^v_{ijk} − τ^s_{ijk}) P_H(0). (43)
Thus, operation (i, j, k) s dominates operation (i, j, k) v , if (42) is true.
However, due to the drone power-usage models for flight and hover mode introduced in Section 3.2, it is unlikely that the dominance rule in Proposition 1 applies. This also highlights the trade-off between energy consumption and execution time. Nevertheless, we are able to construct two special cases to eliminate dominated operations. In the following, we first show how a slower operation can dominate a faster one. Secondly, we demonstrate the reverse case, where faster operations are superior to slower operations.
Elimination of operations with faster speeds.
Proposition 2. For flight (i, j, k) ∈ P, operation (i, j, k)^s is dominated by operation (i, j, k)^v with s > v and v, s ∈ V if

τ^v_{ijk} ≤ τ^T_{ik} ∧ e^v_{ijk} ≤ e^s_{ijk} + (τ^v_{ijk} − τ^s_{ijk}) · P_H(0). (44)
Proof. As stated above, operation (i, j, k)^s is always at least as good with respect to time as (i, j, k)^v. However, if τ^v_{ijk} ≤ τ^T_{ik}, then there is no benefit in using the faster speed s. Thus, both operations are equal with respect to time. Now, we can eliminate operation (i, j, k)^s if its energy consumption is at least as high as the energy consumption of the slower operation (i, j, k)^v. Analogous to the general case, the energy consumption of operation (i, j, k)^v is e^v_{ijk}, and the energy consumption of operation (i, j, k)^s to reach the same point in time as operation (i, j, k)^v can be determined with (43). Thus, operation (i, j, k)^s is dominated by operation (i, j, k)^v if (44) is true.
Elimination of operations with slower speeds.
Proposition 3. For flight (i, j, k) ∈ P, operation (i, j, k)^v is dominated by operation (i, j, k)^s with s > v and v, s ∈ V if

¬∃ l ∈ C, l ≠ j s.t. e^v_{ijk} + max(τ^T_{il} + τ^{S,T}_l + τ^T_{lk} − τ^v_{ijk}, 0) · P_H(0) ≤ εE (45)

∧ ¬∃ l ∈ C, l ≠ j s.t. e^s_{ijk} + max(τ^T_{il} + τ^{S,T}_l + τ^T_{lk} − τ^s_{ijk}, 0) · P_H(0) ≤ εE (46)

∧ τ^v_{ijk} > τ^T_{ik} ∧ e^s_{ijk} + max(τ^T_{ik} − τ^s_{ijk}, 0) · P_H(0) ≤ e^v_{ijk}. (47)
Proof. Conditions (45) and (46) ensure that operations (i, j, k)^v and (i, j, k)^s require a direct trip of the truck from the launch node i to the retrieval node k. Here, a direct trip is necessary if the truck cannot serve a customer l ∈ C between i and k, since this detour via l would increase the hover time of the drone at k, resulting in energy consumption that is too high. Taking this special case into account, the amount of hover time is known, as the truck travels directly from the launch to the retrieval node. Therefore, we are able to determine the expended energy, including hovering, exactly. In addition, we assume that the drone always arrives after the truck if operation (i, j, k)^v is performed (τ^v_{ijk} > τ^T_{ik}); hence, it never needs to hover. This also means that, in contrast to the general rule, if the drone performs operation (i, j, k)^s, it can always be retrieved by the truck before reaching the same point in time as operation (i, j, k)^v. Thus, the maximum additional hover time is max(τ^T_{ik} − τ^s_{ijk}, 0), and operation (i, j, k)^s dominates operation (i, j, k)^v if conditions (45)-(47) hold true.
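The dominance rules of Propositions 1 and 2 translate directly into a pairwise pruning pass over the speed-indexed operations of a flight. The following Python sketch, with an assumed data layout, illustrates this; it is not the authors' implementation.

```python
def prune_dominated(ops, tau_truck_ik, p_h0):
    """Speeds surviving Propositions 1-2 for one flight (i, j, k).
    ops: dict speed v -> (tau_op, e_op); tau_truck_ik: direct truck time i -> k;
    p_h0: hover power P_H(0) of the empty drone."""
    keep = set(ops)
    for v in sorted(ops):
        for s in sorted(ops):
            if s <= v or v not in keep or s not in keep:
                continue
            tau_v, e_v = ops[v]
            tau_s, e_s = ops[s]
            extra_hover = (tau_v - tau_s) * p_h0      # hover s needs to match v in time
            if e_s + extra_hover <= e_v:              # Proposition 1: s dominates v
                keep.discard(v)
            elif tau_v <= tau_truck_ik and e_v <= e_s + extra_hover:
                keep.discard(s)                       # Proposition 2: v dominates s
    return keep
```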
Elimination of variables z
Different modeling approaches for the VRPD prohibit, in different ways, the launch of a drone when it is already in flight. We introduce variables z^{fd}_{lik} to check whether drone d of tandem f performs a flight from i to k and is, therefore, not available at node l. This leads to a large number of variables for larger instances. However, we can eliminate several unnecessary variables to reduce the problem size without excluding any optimal solutions.

Proposition 4. Variables z^{fd}_{lik} for three pairwise different nodes l ∈ C, i ∈ N_0, k ∈ N^+ can be eliminated for all f ∈ F, d ∈ D if there is no drone operation with launch node i and retrieval node k or

¬∃ (i, j, k)^v ∈ W^v, v ∈ V, j ≠ l s.t. e^v_{ijk} + max(τ^T_{il} + τ^{S,T}_l + τ^T_{lk} − τ^v_{ijk}, 0) · P_H(0) ≤ εE. (48)
Proof. Following the definition of the variables, it is obvious that z f d lik can be eliminated if there is no drone operation with launch node i and retrieval node k. Condition (48) states that there is no feasible operation with launch node i and retrieval node k if the truck performs a detour via node l, since the energy consumption of the operation plus the additional energy consumption while hovering at node k exceeds the drone's available energy. Thus, the truck cannot visit node l between i and k if a drone performs any operation with launch node i and retrieval node k and variables z f d lik can be eliminated.
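Analogously, Proposition 4 can be applied as a preprocessing filter over the index set of the z variables. The sketch below assumes the feasible operations have already been grouped by launch and retrieval node; all names are illustrative.

```python
def kept_z_indices(N0, Nplus, C, ops_by_flight, tau_T, tau_S, e_max, p_h0):
    """Index triples (l, i, k) whose z variables survive Proposition 4.
    ops_by_flight[(i, k)]: list of (j, tau_op, e_op) for feasible operations from i to k;
    tau_T: truck travel times, tau_S: truck service times, e_max = eps * E."""
    kept = []
    for i in N0:
        for k in Nplus:
            ops = ops_by_flight.get((i, k), [])
            for l in C:
                if l == i or l == k:
                    continue
                detour = tau_T[i, l] + tau_S[l] + tau_T[l, k]
                # keep (l, i, k) only if some operation tolerates the detour via l
                if any(e + max(detour - t, 0.0) * p_h0 <= e_max
                       for j, t, e in ops if j != l):
                    kept.append((l, i, k))
    return kept
```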
Valid inequalities
Most of the valid inequalities used in this paper are similar to the valid inequalities introduced in [43]. Since they are explained in detail there, we refer to that work for a more detailed discussion.
Lower bounds on arrival and departure times
The lower bounds on arrival times at nodes and departure times from nodes are modified in comparison to [43] to include the additional aspects considered in this paper. However, the operating principle is similar. The following inequalities set lower bounds on the completion time of a truck f and a drone d belonging to f :
att^f_{c+1} ≥ Σ_{i∈N_0} Σ_{j∈N^+} (τ^T_{ij} + τ^{S,T}_j) x^f_{ij}  ∀ f ∈ F  (49)

atd^{fd}_{c+1} ≥ Σ_{i∈N_0} Σ_{j∈C̃} Σ_{k∈N^+} Σ_{v∈V} (τ^L + τ^v_{ijk}) y^{fdv}_{ijk} + Σ_{i∈N} (htd^{fd}_i + ltd^{fd}_i) + Σ_{i∈N_0} Σ_{j∈N^+} w^{fd}_{ij} τ^T_{ij}  ∀ f ∈ F, d ∈ D  (50)

att^f_{c+1} ≥ dtt^f_i + Σ_{k∈N^+} (τ^T_{ik} + τ^{S,T}_k + τ^T_{k,c+1}) x^f_{ik}  ∀ i ∈ C, f ∈ F.  (51)
The first two consider the total active time of the vehicles. The active time of a truck consists of travel and service times (49). In addition to these, hovering and recharging times at nodes and on arcs must also be taken into consideration for drones (50). In contrast to the first two inequalities, inequalities (51) determine the completion time based on the minimum travel time from a customer i via another node k back to the depot. If truck f travels directly from i to k, then the earliest arrival time at the depot is the departure time at node i, plus the travel time from i to k, the service time at node k, and the travel time from node k to the depot.
att^f_k ≥ Σ_{i∈N_0} (τ^T_{0,i} + τ^{S,T}_i + τ^T_{ik}) x^f_{ik}  ∀ k ∈ C, f ∈ F  (52)

dtd^{fd}_k ≥ Σ_{i∈N_0} Σ_{j∈C̃} Σ_{v∈V} (τ^T_{0,i} + τ^L + τ^v_{ijk}) y^{fdv}_{ijk} + htd^{fd}_k + ltd^{fd}_k  ∀ k ∈ C, f ∈ F, d ∈ D.  (53)
Inequalities (52) establish lower bounds on the arrival time at a customer k. As in (51), the detour via another node is considered, but now the truck starts at the depot and travels directly to detour node i. Inequalities (53) set lower bounds on the departure time of drone d associated with truck f at node k. Drone d travels atop truck f from the depot to detour node i and performs an operation with retrieval node k.
Problem-specific cuts
In addition to the lower bounds on arrival and departure times, we use the VRPD-specific cuts introduced in [43]:
Σ_{i∈C} Σ_{f∈F} q^f_i ≥ (|C| − |D| · |F|) / (|D| + 1)  (54)

x^f_{ik} ≤ p^f_{ik}  ∀ i ∈ N_0, k ∈ N^+, f ∈ F  (55)

Σ_{j∈C̃} Σ_{v∈V} y^{fdv}_{ijk} ≤ p^f_{ik}  ∀ i ∈ N_0, k ∈ N^+, f ∈ F, d ∈ D  (56)

x^f_{0,c+1} + q^f_j ≤ 1  ∀ j ∈ C, f ∈ F.  (57)
First, we set a lower bound on the number of customers that can be visited by all trucks with inequality (54). Inequalities (55) state that, if truck f travels directly from node i to node k, then i has to precede k in the route of f . Inequalities (56) ensure that, if drone d performs any flight with launch node i and retrieval node k, then i must be visited before k by truck f . Inequalities (57) prohibit the artificial trip between the two depot nodes 0 and c + 1 if any customer is visited by the truck. Furthermore, we use the extended subtour elimination constraints (ESECs)
Σ_{f∈F} ( Σ_{i∈S} Σ_{j∈S} x^f_{ij} + Σ_{i∈S} Σ_{j∈S̄} Σ_{k∈S} Σ_{d∈D} Σ_{v∈V} y^{fdv}_{ijk} + Σ_{i∈S} Σ_{j∈S} Σ_{k∈S} Σ_{d∈D} Σ_{v∈V} y^{fdv}_{ijk} ) ≤ |S| − 1  ∀ S ⊆ C, |S| ≥ 2  (58)
introduced in [43] as well. Since there is an exponential number of ESECs, we cannot add them at the beginning but, rather, have to detect violated cuts during the optimization. Therefore, we use the separation algorithm presented in [43]. Finally, we introduce the following new cuts, which have been proven to be useful:
Σ_{j∈C̃} Σ_{v∈V} y^{fdv}_{ijk} ≤ Σ_{l∈C} z^{fd}_{lik} + x^f_{ik}  ∀ i ∈ N_0, k ∈ N^+, f ∈ F, d ∈ D  (59)

r^{fd}_i + ltd^{fd}_i P^C − (1 − ε) E ≥ Σ_{v∈V} Σ_{j∈C̃} Σ_{k∈N^+} e^v_{ijk} y^{fdv}_{ijk}  ∀ i ∈ C, f ∈ F, d ∈ D  (60)

tec^{fd} = Σ_{i∈N_0} Σ_{j∈N^+} w^{fd}_{ij} τ^T_{ij} P^C + Σ_{i∈C} ltd^{fd}_i P^C + r^{fd}_0 − r^{fd}_{c+1}  ∀ f ∈ F, d ∈ D.  (61)
Inequalities (59) state that, if drone d of tandem f performs any operation with launch node i and retrieval node k, then it has to be in the air at any node l visited by truck f between i and k, or truck f has to travel directly from i to k. Inequalities (60) set a lower bound on the available energy, consisting of the residual energy on arrival and the recharged energy while stationary, at node i. The available energy must be sufficient for a drone operation starting at node i. Finally, equations (61) represent a second variant to determine the total energy consumption of a drone. Constraints (38) take into account the energy used for flying and hovering. In contrast, equations (61) consider the energy that is used to recharge the battery of a drone. However, the battery need not be fully charged at the end of the tour. Therefore, we have to additionally consider the difference between the residual energy at the beginning and at the end to determine the drone's total energy consumption.
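[43] describes the exact separation routine for the ESECs; as a rough illustration only, one common building block is to extract the customer components of the support graph of the current integral solution with a union-find structure and generate a cut candidate per component, as sketched below with assumed data structures.

```python
def customer_components(arcs, customers):
    """Connected components (size >= 2) of the support graph restricted to customers;
    each component is a candidate set S for an ESEC (58). arcs: (i, j) pairs with
    positive x- or y-flow in the current integral solution."""
    parent = {c: c for c in customers}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]     # path halving
            a = parent[a]
        return a

    for i, j in arcs:
        if i in parent and j in parent:       # skip depot-incident arcs
            parent[find(i)] = find(j)

    comps = {}
    for c in customers:
        comps.setdefault(find(c), set()).add(c)
    return [s for s in comps.values() if len(s) >= 2]
```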
Computational studies
The algorithm is implemented in C# with .NET Framework 4.6.1, and Gurobi 9.0 is used as the MILP solver. All tests are performed on a Windows Server 2012 R2 with Intel(R) Xeon(R) CPU E5-4627 v2 3.3 GHz processors with 32 cores and 768 GB RAM. We use 12 cores to solve each instance, and the memory consumption is very low. As in [43], the extended subtour elimination constraints (58) are not added at every node of the branch-and-bound tree. Here, they are added at every 100th node.
Generation of real-world rural-area test instances
We generate test instances that represent a real-world, rural-area based scenario for the use of truck-drone tandems to test our approach and gain managerial insights. All instances are created with Python 3.7 and are available at [42].
Depot and customer locations. The basis of all test instances is a rectangular area of approximately 20 km by 30 km located in Minnehaha County, South Dakota, USA. We have selected approximately 700 possible customer locations and a UPS Customer Center in Sioux Falls as the depot. The map in Figure 4 shows the distribution of the selected customer locations (dots) and the depot (triangle). To create a single instance, we randomly select |C| customers out of all customer locations.

Specifications of drone model and battery. As in the examples in Section 3.2, we use the octocopter model presented in [41]. We assume an overall power efficiency of the drone of η = 0.7 and a safety coefficient of σ = 0.2. Similar to [38], we assume that all drones can transport packages weighing up to 5 kg. In addition, we use an existing lithium polymer (LiPo) battery from Grepow Inc. to power the drone [21]. Since large drones have higher energy consumption, they also require large batteries. The LiPo battery selected for our experiments weighs six kilograms and consists of 12 cells with a total nominal voltage of 44.4 V and a nominal capacity of 22 000 mAh. Thus, it has a nominal energy of E = 976.8 Wh = 3516.48 kJ. In addition, we set the maximum DoD to ε = 0.80. Finally, we assume a charge rate of 1 C, which means that the battery can be completely charged in one hour and P^C = 3516.48 kJ/h.

Selection of drone customers. A customer j ∈ C can only be supplied by a drone if the mass of the package m_j is less than the payload of the drone. We assume that 90% of all packages are below the drone's payload and range from 0.05 kg to 5 kg. The other 10% range from 5 kg to 50 kg. Hence, with probability p ∈ [0, 1), we draw m_j from the interval [0.05, 5] if p ≤ 0.9 and from the interval (5, 50) otherwise. However, a package may not be eligible for drone delivery even though its weight is below the payload. This can occur, for example, if a customer is not willing to be supplied by a drone, which may well be the case when a new technology is introduced. In our computational studies, we assume that 75% of all customers allow drone delivery of their packages.
Distances, travel times, and time parameters. We use openrouteservice.org [31] to obtain actual road network distances and travel times between all locations. The beeline distances for drone flights are determined with GeoPy [20]. Both are free-to-use Python packages. The maximum route duration M is eight hours. For each customer j ∈ C, we set the service time of a truck delivery τ S,T j at 120 seconds and the service time of drone delivery τ S,D j at 90 seconds. For depot nodes 0 and c + 1 the times are fixed at zero. The time needed to prepare the launch of a drone τ L is 60 seconds. Unless otherwise noted, the maximum time a truck is allowed to remain stationary at a node τ MS and the maximum time a drone is allowed to hover at retrieval τ MH j are set high enough that they are not constraining.
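For illustration, beeline drone distances and package masses can be generated as follows with GeoPy and Python's standard library; the coordinates below are placeholders, not actual instance data.

```python
import random
from geopy.distance import geodesic

def beeline_km(a, b):
    """Great-circle drone distance between two (lat, lon) pairs [km]."""
    return geodesic(a, b).km

def sample_package_mass(rng):
    """Package mass per Section 6.1: 90% in [0.05, 5] kg, otherwise in (5, 50) kg."""
    return rng.uniform(0.05, 5.0) if rng.random() <= 0.9 else rng.uniform(5.0, 50.0)

rng = random.Random(42)
depot = (43.5446, -96.7311)        # placeholder coordinates near Sioux Falls
customer = (43.6300, -96.6000)     # placeholder customer location
print(f"{beeline_km(depot, customer):.2f} km, {sample_package_mass(rng):.2f} kg")
```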
Costs. We consider fuel costs of the trucks, wages of the truck drivers, and energy costs of the drones, as described in the objective function (12). These costs differ between truck types, regions, and companies and vary over time. The cost per distance unit traveled by truck, λ, in our experiments is based on a typical P70 UPS truck. We assume a fuel consumption of 11 mpg (0.214 l/km) for rural areas [25] and a diesel price of $0.76/l. Thus, the distance cost parameter λ is approximately $0.16/km. Furthermore, we assume that a driver costs approximately β = $20/h and the electricity rate is γ = $0.09/(kWh) ($0.025/MJ).
Results for small instances
We use 10 small instances with 20 customers to assess the following:
1) the impact of our preprocessing methods on the runtime, 2) the influence of varying drone speeds on the costs in the VRPD, and 3) the benefits of speed selection in comparison to a single fixed speed.
We have chosen five possible drone speeds ranging from 8 m/s to 16 m/s in steps of 2 m/s. Therefore, we have five VRPDs with |V| = 1 and one VRPD-DSS with V = {8, 10, 12, 14, 16}. Table 1 shows the characteristics of each of the 20 customer instances. It includes the number of customers that are available for drone deliveries |C̃| and the number of operations |W| for each VRPD with |V| = 1 and for the VRPD-DSS with |V| = 5. For the VRPD-DSS, we also show the number of operations without the elimination of dominated drone operations (No OE), with operation elimination (OE), and the percentage of dominated operations that can be eliminated (∆W).
Performance improvements through preprocessing
The average results of all 10 instances for each set of speeds are presented in Table 2. We solve each instance with two algorithm configurations. First, we apply only the model plus the cuts introduced in Section 5.2 (Model + Cuts). The second configuration includes our preprocessing methods (PP + Model + Cuts). We perform five runs per instance to deal with the performance variability in the MILP solution process. The number of nonzero matrix elements (#NZ) following presolve performed by Gurobi is used to represent the size of a problem. In addition, we use the run time to optimality in seconds (Time) as the performance indicator and show the optimal costs (Costs). Finally, the relative change ∆ between the two configurations for #NZ and Time is displayed as a percentage. The results show that the number of nonzero elements in the constraint matrix can be reduced significantly by the preprocessing steps. However, optimal solutions are not excluded since costs are the same with and without preprocessing for all instances. The problem size reduction is larger for the VRPD-DSS than for a single-speed VRPD. In the VRPD, only unnecessary variables z can be eliminated, while in the case of the VRPD-DSS, dominated drone operations are also removed. On average, 23.10% of the drone operations are removed by applying our dominance rules, as shown in Table 1. Table 2 also highlights that a reduced problem size leads to significantly faster run times. However, in contrast to the problem size reductions, the run-time reductions are smaller for the VRPD-DSS than for the single VRPDs. These results demonstrate that our preprocessing methods introduced in Section 5.1 are highly effective for the considered test instances and, therefore, will be used in all further tests.

Impact of different drone speeds for the VRPD

Table 1 clearly shows that the number of feasible drone operations is heavily dependent on the selected speed. The number of operations first increases with drone speed and, then, it decreases again. Thus, flying faster than a certain threshold reduces the range of a drone due to the nonlinear energy-consumption function (see Figure 1b). For the chosen drone model, using a speed of 10 m/s leads to the largest number of operations on average. However, in some instances, a speed of 12 m/s generates the most feasible operations.
Nevertheless, the results in Table 2 demonstrate that more feasible operations do not necessarily lead to lower costs. On average across all 10 instances, the lowest costs considering a single-speed problem can be obtained for both one and two drones with a drone speed of 12 m/s. A faster drone is, therefore, not necessarily advantageous and can result in higher costs. This supports the findings in [35]. Although using a speed of 12 m/s leads to the lowest costs on average, this does not apply to each individual instance. Figure 5 shows the costs for each speed for all 20 customer instances separated by the number of available drones. In some instances, there is almost no difference between two or more speeds (e.g., SF_20_5), whereas in other instances, the difference between best and second-best speed is fairly high (e.g., SF_20_6). Moreover, the speed that results in the lowest costs for an instance can depend on the number of drones available. For example, with instance SF_20_4, 16 m/s leads to minimal cost with one drone, while with two drones, 12 m/s is the best choice. In general, it can be summarily stated that the speed selected in advance for the VRPD can have a significant impact on the costs. To address this issue, drone speed should be included in the decision-making process, such as in the VRPD-DSS. In addition, although performing all flights at speed of 16 m/s leads to substantially fewer operations than a speed of 8 m/s, the average costs are lower. This further illustrates the trade-off between the ability to serve more customers with drones by flying slower, and the savings that can be achieved through shorter delivery times by flying faster.
VRPD vs. VRPD-DSS
The optimal solution of the VRPD-DSS is always at least as good as the best solution of all VRPDs with a single speed. In addition, the costs of the VRPD-DSS are often lower because using different speeds is beneficial in terms of energy consumption or delivery time. However, the cost deviations between the VRPD and VRPD-DSS vary. Table 3 shows the minimum, average, and maximum percentage cost deviation (∆Costs) of all 10 instances for each VRPD compared to the VRPD-DSS. In addition, the total number of operations (#OP) at each speed used in all 10 optimal solutions of the VRPD-DSS is given.
Of course, the average costs of the VRPD with speed 12 m/s deviate the least from the optimal solution of the VRPD-DSS since it has the lowest costs, on average, of all VRPDs. They are, on average, 0.59% (one drone) and 0.78% (two drones) worse than the optimal costs of the VRPD-DSS. For two instances in the case of a single drone and for one instance when two drones are available, the costs are the same, i.e., all flights are performed at 12 m/s although other speeds are available. In contrast, the deviation is over 1% in some instances. All other speeds lead to higher cost deviations on average and in the best and worst cases. As a result, most flights in the speed selection problem are performed at a speed of 12 m/s for both one and two drones. The slowest speed, 8 m/s, is never used in any solution, and the fastest speed, 16 m/s, is never selected if the tandem has only one drone. We also observe that the deviations between the VRPDs and the VRPD-DSS are larger with two drones. Thus, it is especially important to consider different drone speeds when multiple drones are available.
Results for larger instances
In our further studies, we use larger instances with 30, 40, and 50 customers to gain insights into the benefits of truck-drone tandems under the realistic circumstances presented in this paper. In contrast to the small instances, we limit the maximum time a truck can stop at a node to τ^MS = 4 min, which is twice the service time for a customer visited by a truck. Moreover, the maximum hover time at a retrieval node is restricted to τ^MH = 2 min. Preliminary tests on 20 customer instances have shown that these values improve computational performance compared to the unrestricted case but increase costs only slightly. We focus on using the MILP solver as a heuristic rather than as an exact approach in the tests with larger instances. Today, state-of-the-art solvers contain powerful primal heuristics to find good feasible solutions quickly [2]. To test Gurobi's ability to provide good solutions quickly, we conduct two different experiments.
In the first experiment (Experiment 1), we set the Gurobi parameter MIPFocus to 1 and Heuristics to 0.75. The former modifies the high-level solution strategy to focus on finding good feasible solutions, while the latter lets Gurobi spend even more time on primal heuristics. Using this setting, we perform five runs with different seed values for each instance and limit the maximum time per run to one hour. In the second experiment (Experiment 2), we attempt to achieve a good lower bound. For this purpose, we use the default values of MIPFocus and Heuristics and provide the best solution found in the first experiment as the starting solution. In addition, we increase the maximum run time to eight hours but perform only a single run per instance. Detailed results for both experiments are shown in Table A.2 in the appendix. Table 4 displays the average over all 10 instances per instance class of: the average objective function value at termination over all five runs of Experiment 1 (Obj); the coefficient of variation (CV), i.e., the ratio of the standard deviation to the mean, of the objective function value as a percentage; the objective function value of the best known solution (BKS); the optimality gap of BKS at termination (Gap) as a percentage; and the relative percentage deviation RPD = (Obj − BKS)/BKS · 100 at different points in time. Note that BKS corresponds to the objective function value at the termination of Experiment 2.
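Although the implementation described above is in C#, the Experiment 1 configuration corresponds to parameter settings like the following Python sketch; the helper names are ours.

```python
import gurobipy as gp

def heuristic_run(model: gp.Model, seed: int) -> float:
    """Experiment 1 settings: bias Gurobi toward good incumbents within one hour."""
    model.Params.MIPFocus = 1         # high-level strategy: feasible solutions first
    model.Params.Heuristics = 0.75    # spend 75% of exploration effort on heuristics
    model.Params.TimeLimit = 3600     # one hour per run [s]
    model.Params.Seed = seed
    model.optimize()
    return model.ObjVal

def rpd(obj: float, bks: float) -> float:
    """Relative percentage deviation RPD = (Obj - BKS) / BKS * 100."""
    return (obj - bks) / bks * 100.0
```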
The results show that it is difficult to prove optimality with the given approach for VRPD-DSS instances with just 30 customers, and it becomes more difficult as the number of customers and drones increases. However, the solver is able to consistently provide good solutions in a reasonable amount of time. Similar to the optimality gap, the coefficient of variation of the objective function value increases with a growing number of customers and drones. This means that the spread of the objective function values at the end of Experiment 1 increases and the consistency decreases noticeably. Nevertheless, we consider an average coefficient of variation of 1 % as a small and acceptable spread. In addition to consistency, we assess Gurobi as providing good-quality solutions for the given instances. For instances with 30 and 40 customers, the average RPD is less than one percent within 600 s. After one hour of run time, the average RPD is between 0.13 % and 1.50 % for all instance sizes and only greater than 1 % for 50 customers and two drones per tandem. Note that the best known solution is almost always identical to the best solution found in Experiment 1 (RPD * in Table A.2 is almost always 0). Thus, the best solution of an instance in Experiment 1 can very rarely be improved in eight hours in Experiment 2.
Benefits of truck-drone tandems
Finally, we analyze the benefits and cost-savings of truck-drone tandems for the VRPD-DSS compared to traditional truck-only delivery (TO). We use the best solution for each instance obtained in the experiments to determine potential savings. Detailed information on the results is provided in the appendix. Table A.3 presents information on the instances and truck-only delivery, while Table A.4 and Table A.5 show detailed results for tandems with one and two drones, respectively. Table 5 displays the average solution information for different numbers of customers and drones. It includes the operating time of trucks (Time); the distance traveled by trucks (Dist); the number of operations per drone (#OP); the distance covered per drone (Dist); the number of charge cycles per drone (#CC); the different cost components, i.e., wages, fuel, and power; the total costs (Total); and, finally, the relative change in the total costs compared to truck-only delivery (∆TO) as a percentage.
The results show that significant cost-savings can be achieved by using truck-drone tandems and that savings increase with an additional drone. However, the benefit of the second drone is less than the benefit of the first drone. As expected, the total number of drone operations and the total distance traveled by all drones increase when two drones are used instead of one drone per tandem. Thus, more customers are served by drones, which results in lower costs. However, the workload per drone decreases, which may result in longer life spans of drones and batteries. Furthermore, savings increase with the number of customers, i.e., with higher customer density, since higher customer density leads to more feasible drone operations (see |W| in Table A.3). Yet the increase from 40 to 50 customers is very small, so perhaps there is a saturation effect that limits the positive impact of customer density on savings, or the heuristic solutions for instances with 50 customers have poorer quality. Finally, Figure 6 presents a more detailed insight into the cost components and savings. First, we observe that wages account for the largest share of the costs. Fuel costs are less than a quarter of the total operational costs, while power costs are almost negligible. Note that, although power costs have little impact on the total costs, proper consideration of the energy consumption is critical for feasibility, and including drone operations also reduces wages and fuel costs. Moreover, the average reduction in wages is greater than the average reduction in fuel costs. For example, for instances with 30 customers, the use of tandems with a single drone can reduce wages by 13.7%, while fuel costs can be reduced only by 9.5%. Hence, the application of drones can reduce working hours more than the traveled distance of trucks. This highlights expedited delivery through parallelization of services as one of the key benefits of truck-drone tandems. Therefore, it is advisable to include some element of time in the evaluation of truck-drone tandems when comparing them to traditional truck-only delivery.
Conclusion and future research
In this paper, combined parcel delivery by trucks and drones is studied, where the speed of a drone flight can be selected from a discrete set of different speeds. We call this problem the vehicle routing problem with drones and drone speed selection, and the following trade-off in speed selection is considered: On one hand, a faster speed shortens delivery times; on the other, it leads to increased energy consumption and, thereby, to a shorter range. We introduce an MILP for this problem, as well as preprocessing methods to eliminate dominated drone speeds and unnecessary variables, and valid inequalities to further strengthen the formulation. We test our approach on instances that closely resemble a real-world scenario in a rural area. The results clearly demonstrate the effectiveness of the preprocessing methods. We also show that, if only a single speed is available for each drone flight, increasing the speed above a threshold does not usually lead to lower costs. However, this threshold differs between instances. Therefore, from a cost perspective, it is always beneficial to consider multiple speeds. The results further indicate that a general solver such as Gurobi can consistently provide high-quality solutions for larger instances of the VRPD-DSS with up to 50 customers. Finally, our results show that truck-drone tandems can achieve significant savings compared to truck-only delivery for the rural scenario considered here. In addition, truck-driver wages account for the largest share of costs but can also be reduced the most by using tandems. In contrast, electricity costs for the drones are almost negligible. However, considering the energy consumption of drones with different speeds is crucial for the feasibility of solutions. There are many potential avenues for future research. For example, heuristic algorithms can be developed for the VRPD-DSS, and the solutions obtained with the exact approach presented here can be used to evaluate the algorithms. In addition, VRPD-DSS heuristics can be compared to heuristics for the mFSTSP-VDS to investigate the effects of discrete speed levels instead of continuous drone-speed decision variables. Other exact approaches could also be developed to consider continuous drone speeds. A further interesting area of research could be the incorporation of external circumstances such as weather in order to derive more-robust routing decisions.
Figure 1: Expended power, energy consumption to fly 1000 m, and energy consumption to fly 1000 m plus hovering until 180 s are reached, for different speeds and package masses m_p

Figure 3: Visualization of constraints (26) that determine whether drone d of tandem f is in the air at node l

Figure 4: Distribution of all possible customers (dots) and the depot (triangle in the lower right corner)

Figure 5: Costs for each 20 customer instance and varying speeds

Figure 6: Cost structure of different delivery systems and average savings of a tandem with one drone (T-1D) and two drones (T-2D) compared to truck-only delivery (TO)
Table 2: Average results for different drone speeds for instances with 20 customers

Table 3: Comparison of solutions with a single speed and solutions with speed selection for 20 customers
6.3.1. The MILP solver as a heuristic
Table 4: Aggregated results for experiments with larger instances

|C| |D| | Truck: Time [min], Dist [km] | Drone: #OP, Dist [km], #CC | Costs [$]: Wages, Fuel, Power, Total | ∆TO [%]
30 1 | 270.73 160.76 | 5.90 37.15 2.92 | 90.25 25.72 0.26 116.23 | -12.56
30 2 | 250.42 153.26 | 4.90 32.36 2.52 | 83.48 24.52 0.45 108.44 | -18.36
40 1 | 304.29 170.06 | 8.00 43.68 3.48 | 101.44 27.21 0.31 128.95 | -14.68
40 2 | 277.51 160.96 | 6.60 37.27 2.96 | 92.51 25.76 0.52 118.79 | -21.41
50 1 | 352.78 189.52 | 10.50 50.03 4.12 | 117.60 30.33 0.36 148.29 | -14.75
50 2 | 321.69 176.18 | 8.15 43.33 3.48 | 107.24 28.19 0.61 136.04 | -21.89

Table 5: Aggregated information on solutions for tandems with one and two drones
Appendix A. Tables

Symbol | Description | Value
n | number of rotors | 8
D | diameter of rotor | 0.432 m
m_db | mass of drone frame | 10 kg
m_b | mass of battery | 6 kg
c_db | drag coefficient of drone body | 1.49
c_b | drag coefficient of battery | 1.00
c_p | drag coefficient of package | 2.20
A_db | projected area of drone body | 0.224 m²
A_b | projected area of battery | 0.015 m²
A_p | projected area of package | 0.0929 m²
g | standard acceleration due to gravity | 9.81 m/s²
ρ | density of air | 1.2250 kg/m³

Table A.1: Parameters of the octocopter energy model as in [41]

Legend for Table A.2: Obj - average objective function value at termination (Experiment 1); Obj* - best objective function value at termination (Experiment 1); CV - coefficient of variation; BKS - best known solution, corresponds to the objective function value at termination of Experiment 2; Gap - optimality gap of BKS at termination in percent (Experiment 2); RPD - relative percentage deviation of Obj with respect to BKS; RPD* - relative percentage deviation of Obj* with respect to BKS.

Instance | |D| = 1: Obj, Obj*, CV, BKS, Gap, RPD, RPD* | |D| = 2: Obj, Obj*, CV, BKS, Gap, RPD, RPD*
SF_30_1 | 108.51 108.51 0.00 108.51 2.16 0.00 0.00 | 104.82 104.53 0.35 104.53 5.15 0.28 0.00
SF_30_2 | 122.89 122.33 0.23 122.33 0.00 0.46 0.00 | 115.91 115.86 0.09 115.01 0.00 0.78 0.74
SF_30_3 | 119.95 119.78 0.17 119.78 4.43 0.14 0.00 | 112.05 110.90 0.60 110.90 6.72 1.04 0.00
SF_30_4 | 122.51 122.42 0.06 121.95 4.96 0.46 0.39 | 110.22 110.22 0.00 110.22 5.95 0.00 0.00
SF_30_5 | 117.20 117.20 0.00 117.20 0.00 0.00 0.00 | 109.42 109.09 0.61 109.09 1.39 0.30 0.00
SF_30_6 | 119.68 119.67 0.00 119.67 1.52 0.01 0.00 | 111.30 111.15 0.26 111.15 2.48 0.13 0.00
SF_30_7 | 113.36 113.36 0.00 113.36 4.70 0.00 0.00 | 105.62 105.56 0.11 105.56 6.33 0.06 0.00
SF_30_8 | 104.96 104.96 0.00 104.96 2.71 0.00 0.00 | 98.58 98.08 0.41 98.08 3.45 0.51 0.00
SF_30_9 | 104.39 104.39 0.00 104.39 2.12 0.00 0.00 | 98.97 98.96 0.01 98.96 3.86 0.01 0.00
SF_30_10 | 130.42 130.14 0.11 130.14 3.79 0.22 0.00 | 121.18 120.94 0.24 120.94 5.60 0.20 0.00
Avg (30) | 116.39 116.28 0.06 116.23 2.64 0.13 0.04 | 108.81 108.53 0.27 108.44 4.09 0.33 0.07
SF_40_1 | 127.71 127.71 0.00 127.71 7.74 0.00 0.00 | 119.85 119.15 0.42 119.15 12.32 0.59 0.00
SF_40_2 | 126.90 126.63 0.28 126.63 6.08 0.21 0.00 | 117.70 117.63 0.09 117.63 8.43 0.06 0.00
SF_40_3 | 132.23 132.23 0.00 132.23 6.74 0.00 0.00 | 122.76 122.76 0.00 122.76 7.85 0.00 0.00
SF_40_4 | 119.02 118.29 0.38 118.29 7.97 0.62 0.00 | 109.38 108.83 0.43 108.83 18.51 0.51 0.00
SF_40_5 | 127.12 127.12 0.00 127.12 5.72 0.00 0.00 | 116.87 116.48 0.28 116.48 11.28 0.33 0.00
SF_40_6 | 131.04 130.44 0.56 130.44 8.33 0.46 0.00 | 121.57 120.04 0.89 120.04 12.82 1.27 0.00
SF_40_7 | 134.06 133.90 0.24 133.90 8.68 0.12 0.00 | 125.34 124.70 0.33 124.70 12.21 0.51 0.00
SF_40_8 | 128.36 127.99 0.26 127.99 6.21 0.29 0.00 | 118.94 118.42 0.38 118.42 13.72 0.44 0.00
SF_40_9 | 134.37 134.20 0.26 134.20 8.92 0.13 0.00 | 121.97 121.41 0.68 121.41 14.23 0.46 0.00
SF_40_10 | 131.52 131.02 0.55 131.02 7.89 0.38 0.00 | 118.57 118.44 0.23 118.44 9.22 0.11 0.00
Avg (40) | 129.23 128.95 0.25 128.95 7.43 0.22 0.00 | 119.30 118.79 0.37 118.79 12.06 0.43 0.00
SF_50_1 | 151.61 150.63 0.34 150.63 12.91 0.65 0.00 | 135.10 133.91 1.05 133.91 14.91 0.89 0.00
SF_50_2 | 154.33 153.62 0.38 153.62 9.06 0.46 0.00 | 143.82 143.11 0.53 143.11 19.36 0.50 0.00
SF_50_3 | 147.26 145.66 1.24 145.66 8.01 1.10 0.00 | 141.81 134.83 2.74 134.83 15.53 5.18 0.00
SF_50_4 | 150.35 150.18 0.23 150.18 8.76 0.11 0.00 | 137.67 137.13 0.43 137.13 11.88 0.39 0.00
SF_50_5 | 147.10 146.02 0.97 146.02 8.90 0.74 0.00 | 136.86 134.99 1.31 134.99 17.54 1.39 0.00
SF_50_6 | 147.32 146.94 0.15 146.94 8.35 0.26 0.00 | 135.97 134.42 0.77 133.71 13.28 1.69 0.53
SF_50_7 | 162.37 161.46 0.66 161.46 8.50 0.56 0.00 | 151.34 149.99 0.49 149.99 11.70 0.90 0.00
SF_50_8 | 143.42 142.36 0.81 142.36 7.91 0.74 0.00 | 131.78 130.77 0.49 130.77 15.03 0.77 0.00
SF_50_9 | 139.75 139.75 0.00 139.75 9.55 0.00 0.00 | 130.28 127.69 1.37 127.69 16.58 2.03 0.00
SF_50_10 | 146.61 146.28 0.26 146.28 11.02 0.23 0.00 | 135.91 134.24 0.82 134.24 17.84 1.24 0.00
Avg (50) | 149.01 148.29 0.50 148.29 9.30 0.49 0.00 | 138.05 136.11 1.00 136.04 15.37 1.50 0.05
Table A.2: Detailed results for the MILP solver as a heuristic for experiments with larger instances
Instance | |C̃|, |W| | Truck: Time [min], Dist [km] | Costs [$]: Wages, Fuel, Total
SF_30_1 | 22 5789 | 290.03 153.15 | 96.69 24.50 121.19
SF_30_2 | 18 1393 | 330.72 187.26 | 110.25 29.96 140.21
SF_30_3 | 19 1466 | 315.10 178.29 | 105.04 28.53 133.57
SF_30_4 | 23 1236 | 337.82 184.52 | 112.61 29.52 142.14
SF_30_5 | 21 1938 | 327.00 190.07 | 109.01 30.41 139.42
SF_30_6 | 18 1588 | 324.42 183.67 | 108.15 29.39 137.54
SF_30_7 | 20 2714 | 306.67 176.89 | 102.23 28.30 130.53
SF_30_8 | 18 3901 | 281.42 163.23 | 93.81 26.12 119.93
SF_30_9 | 20 4023 | 280.03 158.53 | 93.35 25.37 118.72
SF_30_10 | 19 1594 | 343.77 200.57 | 114.60 32.09 146.69
Avg (30) | 19.8 2564.2 | 313.70 177.62 | 104.57 28.42 132.99
SF_40_1 | 26 7012 | 351.95 193.77 | 117.33 31.00 148.33
SF_40_2 | 29 6291 | 351.08 184.20 | 117.04 29.47 146.51
SF_40_3 | 25 6055 | 359.02 199.14 | 119.68 31.86 151.54
SF_40_4 | 28 8452 | 337.88 171.67 | 112.64 27.47 140.10
SF_40_5 | 26 5026 | 363.95 197.57 | 121.33 31.61 152.94
SF_40_6 | 25 4796 | 375.78 202.95 | 125.27 32.47 157.74
SF_40_7 | 22 5642 | 370.98 190.17 | 123.67 30.43 154.10
SF_40_8 | 29 7113 | 367.85 187.38 | 122.63 29.98 152.61
SF_40_9 | 37 7139 | 377.13 203.96 | 125.72 32.63 158.35
SF_40_10 | 27 5144 | 358.42 186.91 | 119.48 29.91 149.39
Avg (40) | 27.4 6267.0 | 361.40 191.77 | 120.48 30.68 151.16
SF_50_1 | 29 9792 | 407.40 211.05 | 135.81 33.77 169.58
SF_50_2 | 41 10169 | 455.78 230.59 | 151.94 36.89 188.83
SF_50_3 | 33 11278 | 412.35 213.13 | 137.46 34.10 171.56
SF_50_4 | 33 10573 | 416.78 213.74 | 138.94 34.20 173.14
SF_50_5 | 34 16652 | 414.42 213.56 | 138.15 34.17 172.32
SF_50_6 | 36 18007 | 401.65 205.79 | 133.89 32.93 166.82
SF_50_7 | 31 7793 | 453.87 229.16 | 151.30 36.67 187.97
SF_50_8 | 29 9707 | 406.98 212.18 | 135.67 33.95 169.62
SF_50_9 | 31 19755 | 406.72 199.94 | 135.58 31.99 167.57
SF_50_10 | 36 15933 | 419.12 207.15 | 139.72 33.14 172.86
Avg (50) | 33.3 12965.9 | 419.51 213.63 | 139.85 34.18 174.03

Table A.3: Information on instances and detailed results for truck usage and costs for truck-only delivery
Table A.4: Detailed information on truck and drone usage and costs for tandems with one drone.

                 Truck                  Drone                   Costs [$]                        ∆TO [%]
|C|  Instance    Time [min]  Dist [km]  #OP    Dist [km]  #CC   Wages    Fuel    Power  Total    Time    Dist    Total
30   SF_30_1     254.97      145.46     6.00   32.72      2.70   85.00   23.27   0.24   108.51   -12.09   -5.02  -10.46
     SF_30_2     285.51      168.06     6.00   37.31      2.97   95.18   26.89   0.26   122.33   -13.67  -10.25  -12.75
     SF_30_3     278.23      167.54     5.00   32.58      2.58   92.75   26.81   0.23   119.78   -11.70   -6.03  -10.32
     SF_30_4     288.59      159.08     5.00   43.41      3.37   96.20   25.45   0.30   121.95   -14.57  -13.79  -14.20
     SF_30_5     274.23      159.48     6.00   36.31      3.04   91.42   25.52   0.27   117.20   -16.14  -16.08  -15.94
     SF_30_6     279.59      163.71     6.00   39.93      3.12   93.20   26.19   0.27   119.67   -13.82  -10.89  -12.99
     SF_30_7     261.63      161.68     7.00   41.20      3.15   87.22   25.87   0.28   113.36   -14.68   -8.59  -13.15
     SF_30_8     243.97      146.39     6.00   29.32      2.32   81.33   23.42   0.20   104.96   -13.30  -10.34  -12.48
     SF_30_9     240.87      149.09     6.00   34.41      2.71   80.30   23.86   0.24   104.39   -13.98   -5.95  -12.07
     SF_30_10    299.70      187.15     6.00   44.30      3.28   99.91   29.94   0.29   130.14   -12.82   -6.70  -11.28
     Avg         270.73      160.76     5.90   37.15      2.92   90.25   25.72   0.26   116.23   -13.68   -9.36  -12.56
40   SF_40_1     301.64      167.78     9.00   42.17      3.54  100.56   26.84   0.31   127.71   -14.29  -13.42  -13.90
     SF_40_2     300.20      164.11     8.00   42.09      3.46  100.07   26.26   0.30   126.63   -14.50  -10.89  -13.57
     SF_40_3     310.97      176.87     7.00   39.47      2.97  103.67   28.30   0.26   132.23   -13.38  -11.17  -12.74
     SF_40_4     280.11      153.82     8.00   39.60      3.40   93.38   24.61   0.30   118.29   -17.10  -10.41  -15.57
     SF_40_5     298.85      170.02     8.00   41.18      3.33   99.62   27.20   0.29   127.12   -17.89  -13.95  -16.88
     SF_40_6     307.60      172.33     7.00   46.97      3.67  102.54   27.57   0.32   130.44   -18.14  -15.09  -17.31
     SF_40_7     316.63      175.07     7.00   49.53      3.83  105.55   28.01   0.34   133.90   -14.65   -7.95  -13.11
     SF_40_8     301.53      169.67     9.00   46.79      3.64  100.52   27.15   0.32   127.99   -18.03   -9.44  -16.13
     SF_40_9     315.48      179.51     8.00   44.42      3.49  105.17   28.72   0.31   134.20   -16.35  -11.98  -15.25
     SF_40_10    309.85      171.45     9.00   44.56      3.43  103.29   27.43   0.30   131.02   -13.55   -8.29  -12.30
     Avg         304.29      170.06     8.00   43.68      3.48  101.44   27.21   0.31   128.95   -15.79  -11.26  -14.68
50   SF_50_1     356.03      197.39     10.00  49.44      4.15  118.68   31.58   0.36   150.63   -12.61   -6.49  -11.17
     SF_50_2     367.72      191.66     9.00   53.40      4.18  122.58   30.67   0.37   153.62   -19.32  -16.86  -18.65
     SF_50_3     348.02      182.93     11.00  52.00      4.24  116.01   29.27   0.37   145.66   -15.60  -14.16  -15.10
     SF_50_4     356.84      192.77     11.00  53.63      4.31  118.95   30.84   0.38   150.18   -14.39   -9.82  -13.26
     SF_50_5     348.54      184.23     10.00  49.66      3.99  116.19   29.48   0.35   146.02   -15.90  -13.73  -15.26
     SF_50_6     349.67      187.73     10.00  46.47      3.86  116.56   30.04   0.34   146.94   -12.94   -8.78  -11.92
     SF_50_7     383.20      208.28     11.00  53.96      4.41  127.74   33.33   0.39   161.46   -15.57   -9.11  -14.10
     SF_50_8     336.55      186.18     10.00  49.22      4.30  112.19   29.79   0.38   142.36   -17.31  -12.25  -16.07
     SF_50_9     333.30      176.90     11.00  45.44      3.82  111.11   28.30   0.34   139.75   -18.05  -11.53  -16.60
     SF_50_10    347.95      187.16     12.00  47.09      3.91  115.99   29.95   0.34   146.28   -16.98   -9.63  -15.38
     Avg         352.78      189.52     10.50  50.03      4.12  117.60   30.33   0.36   148.29   -15.87  -11.24  -14.75
Table A.5: Detailed information on truck and drone usage and costs for tandems with two drones.

                 Truck                  Drone                   Costs [$]                        ∆TO [%]
|C|  Instance    Time [min]  Dist [km]  #OP    Dist [km]  #CC   Wages    Fuel    Power  Total    Time    Dist    Total
30   SF_30_1     245.28      139.65     5.00   29.18      2.36   81.77   22.34   0.42   104.53   -15.43   -8.82  -13.75
     SF_30_2     266.55      160.80     4.50   31.24      2.42   88.86   25.73   0.43   115.01   -19.40  -14.12  -17.97
     SF_30_3     259.20      150.09     4.50   35.85      2.73   86.41   24.01   0.48   110.90   -17.74  -15.84  -16.97
     SF_30_4     256.93      150.56     4.50   32.89      2.73   85.65   24.09   0.48   110.22   -23.94  -18.39  -22.46
     SF_30_5     251.92      154.17     4.50   32.37      2.52   83.98   24.67   0.44   109.09   -22.96  -18.88  -21.75
     SF_30_6     254.83      160.57     5.50   37.92      2.89   84.95   25.69   0.51   111.15   -21.45  -12.59  -19.19
     SF_30_7     240.90      155.17     5.00   32.22      2.44   80.31   24.83   0.43   105.56   -21.44  -12.26  -19.13
     SF_30_8     224.10      143.60     5.50   28.71      2.25   74.71   22.98   0.40    98.08   -20.36  -12.02  -18.22
     SF_30_9     226.10      144.94     5.50   28.53      2.25   75.37   23.19   0.40    98.96   -19.26   -8.59  -16.64
     SF_30_10    278.35      173.06     4.50   34.70      2.62   92.79   27.69   0.46   120.94   -19.03  -13.71  -17.55
     Avg         250.42      153.26     4.90   32.36      2.52   83.48   24.52   0.45   108.44   -20.10  -13.52  -18.36
40   SF_40_1     279.58      158.84     6.50   38.19      3.04   93.20   25.42   0.53   119.15   -20.57  -18.00  -19.67
     SF_40_2     273.50      161.93     8.00   37.25      3.11   91.18   25.91   0.55   117.63   -22.10  -12.08  -19.71
     SF_40_3     285.22      170.34     6.00   29.94      2.40   95.08   27.25   0.42   122.76   -20.55  -14.47  -18.99
     SF_40_4     257.80      140.10     6.00   34.85      2.70   85.94   22.42   0.47   108.83   -23.70  -18.38  -22.32
     SF_40_5     269.82      162.64     6.50   36.70      2.91   89.95   26.02   0.51   116.48   -25.86  -17.68  -23.84
     SF_40_6     279.72      164.11     6.00   39.21      3.06   93.25   26.26   0.54   120.04   -25.56  -19.13  -23.90
     SF_40_7     292.78      165.86     6.50   39.65      3.17   97.60   26.54   0.56   124.70   -21.08  -12.78  -19.08
     SF_40_8     276.57      160.56     7.00   39.34      3.05   92.20   25.69   0.54   118.42   -24.92  -14.51  -22.53
     SF_40_9     282.51      166.69     7.00   39.64      3.20   94.18   26.67   0.56   121.41   -25.09  -18.27  -23.33
     SF_40_10    277.64      158.55     6.50   37.88      2.93   92.56   25.37   0.52   118.44   -22.53  -15.18  -20.72
     Avg         277.51      160.96     6.60   37.27      2.96   92.51   25.76   0.52   118.79   -23.20  -16.05  -21.41
50   SF_50_1     317.33      172.32     6.50   41.43      3.16  105.78   27.57   0.55   133.91   -22.11  -18.36  -21.03
     SF_50_2     338.44      185.44     8.00   45.45      3.53  112.82   29.67   0.62   143.11   -24.98  -20.20  -23.70
     SF_50_3     320.17      171.51     9.00   46.13      3.71  106.73   27.44   0.65   134.83   -23.74  -19.93  -22.61
     SF_50_4     324.07      177.92     7.50   44.85      3.60  108.03   28.47   0.63   137.13   -22.25  -16.75  -20.80
     SF_50_5     319.27      174.94     8.00   39.97      3.23  106.43   27.99   0.57   134.99   -22.96  -18.09  -21.66
     SF_50_6     313.15      179.69     8.50   40.57      3.26  104.39   28.75   0.57   133.71   -22.03  -12.69  -19.85
     SF_50_7     352.63      198.73     8.50   45.71      3.66  117.55   31.80   0.64   149.99   -22.31  -13.28  -20.21
     SF_50_8     307.89      171.80     8.50   44.53      3.66  102.64   27.49   0.64   130.77   -24.35  -19.03  -22.90
     SF_50_9     304.77      159.27     8.00   42.20      3.47  101.60   25.48   0.61   127.69   -25.06  -20.35  -23.80
     SF_50_10    319.18      170.19     9.00   42.41      3.49  106.40   27.23   0.61   134.24   -23.85  -17.83  -22.34
     Avg         321.69      176.18     8.15   43.33      3.48  107.24   28.19   0.61   136.04   -23.36  -17.65  -21.89
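The ∆TO [%] columns in Tables A.4 and A.5 are consistent with percentage changes of completion time, truck distance, and total cost relative to the truck-only results in Table A.3; a minimal check, using the SF_30_1 row of Table A.4 (the helper and its argument names are ours):

    def delta_to(tandem, truck_only):
        """Percentage change of a tandem quantity vs. the truck-only baseline."""
        return 100.0 * (tandem - truck_only) / truck_only

    # SF_30_1, tandem with one drone (Table A.4) vs. truck only (Table A.3):
    print(round(delta_to(254.97, 290.03), 2))  # completion time: -12.09
    print(round(delta_to(145.46, 153.15), 2))  # truck distance:   -5.02
    print(round(delta_to(108.51, 121.19), 2))  # total cost:      -10.46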
TASK ADAPTIVE FEATURE TRANSFORMATION FOR ONE-SHOT LEARNING

Imtiaz Masud Ziko, Freddy Lecue, Ismail Ben Ayed
Thales Canada; JPMorgan Chase; ÉTS Montreal

arXiv:2304.06832v1 [cs.LG], 13 Apr 2023 | DOI: 10.48550/arxiv.2304.06832

Index Terms: Few-Shot Learning, Domain adaptation

We introduce a simple non-linear embedding adaptation layer, which is fine-tuned on top of fixed pre-trained features for one-shot tasks, improving significantly transductive entropy-based inference for low-shot regimes. Our norm-induced transformation could be understood as a re-parametrization of the feature space to disentangle the representations of different classes in a task-specific manner. It focuses on the relevant feature dimensions while hindering the effects of non-relevant dimensions that may cause overfitting in a one-shot setting. We also provide an interpretation of our proposed feature transformation in the basic case of few-shot inference with K-means clustering. Furthermore, we give an interesting bound-optimization link between K-means and entropy minimization. This emphasizes why our feature transformation is useful in the context of entropy minimization. We report comprehensive experiments, which show consistent improvements over a variety of one-shot benchmarks, outperforming recent state-of-the-art methods.
INTRODUCTION
Deep learning models have achieved impressive success in a breadth of applications. However, these successes mostly rely on learning from huge amounts of annotated data, which requires a time-consuming and expensive process. Deep learning models still have difficulty generalizing to novel classes unseen during training, given only a few labeled instances for these new classes. In this context, few-shot learning research has attracted wide interest recently. For example, in a one-shot learning setting, a model is first trained on substantial labeled data over an initial set of classes, commonly called the base classes. Then, supervision is confined to one labeled example per novel class, which is not observed during base training. The model is then fine-tuned on these labeled examples from the novel classes (the support set) and evaluated on the unlabeled samples (the query set). Traditional fine-tuning would result in over-fitting in such low-data regimes. A large body of work has investigated few-shot learning via meta-learning strategies, such as the very popular prototypical networks [1]. Meta-learning creates a set of few-shot tasks (or episodes), with support and query samples that simulate generalization difficulties during testing, and trains the model to generalize well on these tasks.
Related works: Our method is in line with recent transductive methods in the few-shot learning literature, e.g. [2,3,4,5,6], among others. Transductive inference performs class predictions jointly for all the unlabeled query samples of the task, rather than one sample at a time as in inductive inference. For instance, TPN [3] uses label propagation along with episodic training and a specific network architecture; the goal was to learn how to propagate labels from the support to the query samples. CAN-T [7] is another meta-learning based transductive method, which uses attention mechanisms to propagate labels to unlabeled samples. The authors of [5] proposed a method based on graph clustering, which regularizes the inductive predictions of query samples with a Laplacian term.
Many transductive few-shot methods focused on strategies for fine-tuning a pre-trained model during inference. For instance, the entropy fine-tuning in [4] re-trains the whole network, performing costly gradient updates over all the parameters during inference. Transductive Information Maximization (TIM) [6] proposes an entropy-based fine-tuning loss, which maximizes the mutual information between the query features and their label predictions for a few-shot task at inference, while minimizing the cross-entropy loss on the support set. However, instead of retraining the whole network, [6] only fine-tunes the softmax layer on top of the fixed pre-trained features. This showed substantial improvements over retraining the whole network. In addition to its recent successful use in transductive few-shot classification [6,4], it is worth noting that entropy minimization is widely used in semi-supervised learning [8,9], and has been successfully used recently in unsupervised domain adaptation [10] and unsupervised representation learning [11].
Our Contribution: Fine-tuning the classifier on top of fixed pre-trained features from the base classes may not take full advantage of the expressive power of the task-specific feature space. A standard linear transformation causes overfitting when dealing with limited supervision. In this regard, we propose a simple yet effective norm-induced feature transformation, which is fine-tuned to emphasize class-specific feature dimensions while hindering the effect of non-relevant dimensions that may cause overfitting in a few-shot setting. Our non-linear transformation could be understood as a re-parametrization of the feature space, which disentangles the representations of the different classes in a task-specific manner. While our motivation is conceptually similar to early kernel-based metric-learning methods [12], in which non-linear transformations are implicit, our transformation is explicit. We provide an interpretation of our transformation in the basic case of few-shot inference with K-means clustering. Furthermore, we give an interesting bound-optimization link between K-means and entropy minimization. This emphasizes why our feature transformation is useful in the context of entropy minimization, which is widely used in learning, even beyond few-shot classification. We report comprehensive experiments, showing that the proposed transformation could yield consistent improvements over various one-shot benchmarks, outperforming recent state-of-the-art methods.

[Fig. 1. TSNE plots depicting the feature space with or without the proposed feature transformation in (1). The support images of the 1-shot task are provided (leftmost); panels (a)-(c) correspond to the miniImageNet support set and panels (d)-(f) to the Aircraft support set. The bigger markers correspond to the support image in each class.]
TASK ADAPTIVE FEATURE TRANSFORMATION FOR ONE-SHOT LEARNING
In the one-shot setting, we are given a labeled support set $\mathcal{S}$ with $C$ novel test classes, where each novel class has one labeled example. The objective is to accurately classify the unlabeled, unseen query sample set $\mathcal{Q}$ from these $C$ classes. Let $f_\phi$ denote the embedding function of a deep convolutional neural network with parameters $\phi$, and let $x_i = f_\phi(y_i) \in \mathbb{R}^d$ be the feature vector of a given sample $y_i$. $f_\phi$ is pre-trained on a labeled set $X_{\text{base}}$, via a standard cross-entropy loss, with base classes that are different from the test classes of $\mathcal{S}$ and $\mathcal{Q}$. Proposed Transformation: The proposed feature transformation is performed during fine-tuning and is derived by minimizing an entropy-based loss function for the target one-shot task. We will detail the entropy-based loss function below, and draw an interesting connection to the basic K-means objective through bound optimization. For now, let us introduce our non-linear transformation, which reads as follows for each $L_2$-normalized pre-trained feature vector $x_i$ of a given target few-shot task:
$$g(x_i, W) = -\frac{1}{2}\left[\,\|x_i - w_1\|^2, \ldots, \|x_i - w_d\|^2\,\right]^T \tag{1}$$
where $x_i$ is the initial feature vector, either from the support set $\mathcal{S}$ or the query set $\mathcal{Q}$, and superscript $T$ denotes the transpose operator. We introduce a learnable transformation matrix $W = [w_1^T \ldots w_d^T] \in \mathbb{R}^{d \times d}$, which is updated during the fine-tuning procedure. To understand the effect of our transformation, let us first consider a transductive inference with a basic K-means clustering of the transformed features of the query set. This could be done by optimizing the following mixed objective:
$$\mathcal{J}(W, \theta, Q) = \sum_{i \in \mathcal{Q}} \sum_{c=1}^{C} q_{ic}\, \|\theta_c - g(x_i, W)\|^2 \tag{2}$$
where $\theta = (\theta_c)_{1 \le c \le C}$ are the class prototypes, and $Q$ is the $|\mathcal{Q}|$-by-$C$ matrix whose rows are given by binary assignment simplex vectors $q_i = (q_{ic})_{1 \le c \le C} \in \{0,1\}^C$: $q_{ic} = 1$ if sample $x_i$ is assigned to class $c$ and $q_{ic} = 0$ otherwise. Alternating iterative minimization of the mixed K-means objective in Eq. (2) with respect to $W$, $\theta$, and $Q$ could be viewed as joint task-adaptive metric learning and clustering, and has a clear interpretation of the effect of the proposed transformation. Optimization with respect to the transformation parameters $W$ encourages the new features $g(x_i, W)$ to approach their current class prototypes (or means) $\theta_c$, thereby disentangling the class representations; see the TSNE-plot illustrations in Figure 1. Clearly, given the current assignments $q_{ic}^j$ at iteration $j$, the optimal $\theta_c$ minimizing (2) corresponds to the mean of the features within class $c$:
$$\theta_c^j = \frac{\sum_{i \in \mathcal{Q}} q_{ic}^j\, g(x_i, W)}{\sum_{i \in \mathcal{Q}} q_{ic}^j}.$$
Also, given both $q_{ic}^j$ and $\theta_c^j$, it is clear that the objective in Eq. (2) contains, for each sample, an $L_2$ distance between the transformed feature of the sample and its current-class prototype. Therefore, optimization with respect to $W$ encourages the transformed feature to align with its current-class prototype.
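To make the alternating scheme concrete, here is a minimal NumPy sketch of the transformation in (1) and of one K-means update of (2); the array shapes, the hard-assignment step, and the function names are our illustrative assumptions rather than the authors' exact implementation (updating $W$ itself would be done by gradient descent on (2), or on the loss (3) below).

    import numpy as np

    def transform(X, W):
        """Norm-induced transformation of Eq. (1).

        X: (n, d) array of L2-normalized features; W: (d, d) array whose
        rows are w_1, ..., w_d.  Returns the (n, d) array of transformed
        features with components g_k(x_i) = -0.5 * ||x_i - w_k||^2.
        """
        sq_dists = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)  # (n, d)
        return -0.5 * sq_dists

    def kmeans_step(X, W, theta):
        """One alternating update of the mixed objective in Eq. (2):
        hard assignments q, then prototype means over the assigned features."""
        G = transform(X, W)                                        # (n, d)
        d2 = ((G[:, None, :] - theta[None, :, :]) ** 2).sum(-1)    # (n, C)
        q = np.eye(theta.shape[0])[d2.argmin(axis=1)]              # one-hot (n, C)
        theta_new = (q.T @ G) / np.maximum(q.sum(axis=0), 1.0)[:, None]
        return q, theta_new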
It is important to note that the specific norm-induced form of $g$ that we propose in (1) implicitly constrains the transformation, hindering the effects of non-relevant dimensions that may cause over-fitting. An unconstrained transformation $g$, such as a neural network, trained jointly with K-means might yield trivial solutions, i.e., bringing all of the transformed features into the same cluster. This difficulty is well known in the context of deep clustering [13]. In fact, the norm-based form of each component in transformation (1) forces some dimensions to approach zero when aligning with the prototypes. The identity map, for instance, is not recoverable under this constrained form, unlike an unconstrained neural-net transformation.
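To see the constraint explicitly, one can expand (1) using the fact that the input features are $L_2$-normalized, i.e., $\|x_i\| = 1$:
$$g_k(x_i) = -\tfrac{1}{2}\|x_i - w_k\|^2 = w_k^T x_i - \tfrac{1}{2}\left(1 + \|w_k\|^2\right),$$
so each output dimension couples a linear response $w_k^T x_i$ with a bias $-\frac{1}{2}(1 + \|w_k\|^2)$ that is tied to the same row $w_k$ and cannot be chosen independently; for example, $W = I$ yields $g(x_i) = x_i - 1$ rather than the identity.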
Let us now give more details regarding the TSNE plots in Figure 1. We randomly sample a 1-shot task with 5 test classes from each of the miniImageNet and Aircraft datasets. The task from miniImageNet contains three fine-grained classes sampled from the generic dog category: 'Golden retriever', 'Dalmatian', and 'African hunting dog'. The other two classes are from generic categories: 'Nematode' and 'Crate'. The task from the Aircraft dataset contains the fine-grained categories of 5 different airplane models, which are visually quite similar to each other. The leftmost plots refer to the entropy-based loss in (3) fine-tuned on top of the initial fixed pre-trained feature vectors $x_i$ without the feature transformation. In this case, only the classifier weights $\{\theta_c\}_{c=1}^C$ are updated, which results in accuracies of 77.3% for miniImageNet and 69.33% for Aircraft. Note that the support images (bigger markers) are not well separated along with their corresponding query samples in the pre-trained feature space learned from the base classes. A linear transformation of the features during fine-tuning causes overfitting with limited supervision, as can be seen in Figs. 1(b) and 1(e). In this case, features from different classes are brought into the same cluster, and the resulting errors can be read from the respective classification accuracies given at the top of each plot. Finally, if we utilize the proposed transformation in Eq. (1) on top of the initial pre-trained features, we achieve a better spread-out, task-adaptive feature space, as shown in the rightmost TSNE plots 1(c) and 1(f), with boosted accuracies of 93.33% (16% improvement) and 84.00% (13% improvement) for the miniImageNet and Aircraft datasets, respectively.
Entropy-based loss function: In our case, the transformation matrix $W$ is learned by fine-tuning a transductive information maximization (TIM) loss [6]. The TIM loss is a combination of a cross-entropy defined over the support set $\mathcal{S}$ and a mutual information term, which includes two Shannon entropies: the entropy of the posterior predictions (i.e., the softmax outputs of the network) and the entropy of the marginal probabilities of the classes, both defined over the query set $\mathcal{Q}$:
$$\mathcal{L}(W, \theta) = \underbrace{-\frac{\lambda}{|\mathcal{S}|} \sum_{i \in \mathcal{S}} \sum_{c=1}^{C} y_{ic} \log(p_{ic})}_{\text{cross-entropy}} \;\underbrace{-\;\frac{\alpha}{|\mathcal{Q}|} \sum_{i \in \mathcal{Q}} \sum_{c=1}^{C} p_{ic} \log(p_{ic})}_{\text{conditional entropy}} \;+\; \underbrace{\sum_{c=1}^{C} \widehat{p}_c \log \widehat{p}_c}_{\text{marginal entropy}} \tag{3}$$
where
$$p_{ic} = s(\theta_c, W, x_i) = \frac{\exp\left(-\frac{\tau}{2} \|\theta_c - g(x_i, W)\|^2\right)}{\sum_k \exp\left(-\frac{\tau}{2} \|\theta_k - g(x_i, W)\|^2\right)}$$
denotes the softmax probability outputs, $\widehat{p}_c = \frac{1}{|\mathcal{Q}|} \sum_{i \in \mathcal{Q}} p_{ic}$ is the marginal probability of class $c$, $y_{ic} \in \{0, 1\}$ are the ground-truth labels for the support samples, and $\tau$ is the temperature parameter. Minimizing the conditional entropy pushes the network probability predictions toward the vertices of the simplex, yielding confident predictions. The marginal entropy term avoids the trivial single-class solutions that might result from conditional entropy minimization.
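For concreteness, the following is a minimal PyTorch sketch of the loss in (3), computed on transformed support and query features; the hyperparameter defaults are placeholders rather than the paper's tuned values.

    import torch

    def ft_tim_loss(g_s, g_q, y_s, theta, tau=15.0, lam=0.1, alpha=1.0):
        """Loss of Eq. (3): weighted cross-entropy on the support set, plus
        conditional entropy minus marginal entropy on the query set.

        g_s: (|S|, d) transformed support features, g_q: (|Q|, d) transformed
        query features, y_s: (|S|,) support labels, theta: (C, d) prototypes.
        """
        def probs(g):
            d2 = torch.cdist(g, theta) ** 2                 # (n, C) squared distances
            return torch.softmax(-0.5 * tau * d2, dim=1)    # p_ic of Eq. (3)

        p_s, p_q = probs(g_s), probs(g_q)
        eps = 1e-12
        ce = -torch.log(p_s[torch.arange(len(y_s)), y_s] + eps).mean()
        cond_ent = -(p_q * torch.log(p_q + eps)).sum(dim=1).mean()
        p_hat = p_q.mean(dim=0)                             # marginal class distribution
        marg_ent = -(p_hat * torch.log(p_hat + eps)).sum()
        return lam * ce + alpha * cond_ent - marg_ent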
On the link between entropy and K-means: We now change gears and show an interesting bound-optimization link between the conditional entropy in Eq. (3) and the K-means objective in Eq. (2). This link further clarifies why our feature transformation is useful in the context of entropy minimization. In addition to its recent successful use in few-shot classification [6,4], entropy is widely used in semi-supervised learning [8,9], and has recently been used successfully in unsupervised domain adaptation [10] and unsupervised representation learning [11]. Therefore, connecting entropy minimization to K-means could provide interesting insights even beyond few-shot classification. To show the link, let us first decompose the conditional entropy in (3):
$$\underbrace{\sum_{i,c} s(\theta_c, W, x_i)\, \|\theta_c - g(x_i, W)\|^2}_{\mathcal{H}(W,\theta):\ \text{Clustering}} \;+\; \underbrace{\sum_{i} l(\theta, W, x_i)}_{\text{Prototype dispersion}} \tag{4}$$
where $l(\theta, W, x_i) = \log \sum_c \exp\left(-\frac{\tau}{2} \|\theta_c - g(x_i, W)\|^2\right)$.
Minimizing the prototype dispersion encourages large distances between the prototypes and the features of all data points. The term $\mathcal{H}(W, \theta)$ in Eq. (4) is closely related to the basic K-means objective (2) from a bound-optimization perspective, although it seems more complex. The following shows that optimizing a soft K-means could be viewed as an approximate Majorize-Minimize (MM) algorithm for optimizing $\mathcal{H}(W, \theta)$.
Given a function $\mathcal{H}(W, \theta)$, the general MM paradigm minimizes iteratively a tight upper bound on $\mathcal{H}$:
$$\mathcal{H}(W, \theta) \le \mathcal{A}^j(W, \theta) \quad \forall\, W, \theta; \qquad \mathcal{H}(W^j, \theta^j) = \mathcal{A}^j(W^j, \theta^j) \tag{5}$$
where $j$ is the current iteration index. An upper bound satisfying the tightness condition in (5) is often referred to as an auxiliary function of the original objective $\mathcal{H}$. It is straightforward to verify that minimizing $\mathcal{A}^j$ iteratively guarantees that the original objective does not increase:
$$\mathcal{H}(W^{j+1}, \theta^{j+1}) \le \mathcal{A}^j(W^{j+1}, \theta^{j+1}) \le \mathcal{A}^j(W^j, \theta^j) = \mathcal{H}(W^j, \theta^j).$$

Proposition 1. $\mathcal{H}(W, \theta)$ is upper bounded by the following soft K-means objective for any set of soft simplex assignment variables $q_i = (q_{ic})_{1 \le c \le C} \in [0, 1]^C$, $i \in \mathcal{Q}$:
$$\mathcal{H}(W, \theta) \le \mathcal{J}(W, \theta, Q) + \frac{2}{\tau} \sum_{i} q_i^T \log q_i \tag{6}$$
Furthermore, given parameters $W^j$ and prototypes $\theta^j = (\theta_c^j)_{1 \le c \le C}$ at iteration $j$, choosing the specific expressions $q_{ic} = s(\theta_c^j, W^j, x_i)$ in upper bound (6) yields an approximate auxiliary function on $\mathcal{H}(W, \theta)$ when $\tau$ is small ($\tau \to 0$).

Proof. The upper bound in (6) is convex w.r.t. $Q$, as it is the sum of linear and convex functions. Solving the KKT conditions for minimizing this bound, subject to the simplex constraint on each $q_i$, yields the closed-form solutions $q_{ic} = s(\theta_c, W, x_i)$. The inequality in (6) follows directly from plugging these optimal solutions into the upper bound in (6) and using the fact that $\tau$ is small ($\tau \to 0$). Finally, it is straightforward to verify that the specific choice $q_{ic} = s(\theta_c^j, W^j, x_i)$ makes the upper bound in (6) tight at the current solution and, hence, an auxiliary function, when the temperature $\tau \to 0$.
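A short worked version of this KKT step may be helpful; we minimize the right-hand side of (6) over each simplex vector $q_i$, writing $d_{ic} = \|\theta_c - g(x_i, W)\|^2$ and introducing a multiplier $\mu_i$ for the constraint $\sum_c q_{ic} = 1$ (the multiplier notation is ours):
$$d_{ic} + \tfrac{2}{\tau}\left(\log q_{ic} + 1\right) + \mu_i = 0 \;\Longrightarrow\; q_{ic} \propto \exp\!\left(-\tfrac{\tau}{2}\, d_{ic}\right) \;\Longrightarrow\; q_{ic} = \frac{\exp\!\left(-\tfrac{\tau}{2}\, d_{ic}\right)}{\sum_k \exp\!\left(-\tfrac{\tau}{2}\, d_{ik}\right)} = s(\theta_c, W, x_i).$$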
EXPERIMENTS
Datasets: We used four one-shot benchmarks, covering both fine-grained classification settings (CUB and Aircraft) and the standard one-shot classification setting (miniImageNet and tieredImageNet). miniImageNet is a subset of the larger ILSVRC-12 dataset [23]. We use the standard split of 64 classes for base training, 16 for validation, and 20 for testing. tieredImageNet [24] is also a subset of the ILSVRC-12 dataset, but with 608 classes instead. We split the dataset into 351 classes for base training, 97 for validation, and 160 for testing. CUB [25] is a fine-grained image classification dataset with 200 categories. We split it into 100 classes for base training, 50 for validation, and 50 for testing. Aircraft, or FGVC-Aircraft [26], is a fine-grained image classification dataset with 100 airplane models. Following the same ratio as CUB, we split the classes into 50 base classes for training, 25 for validation, and 25 for testing. Images are resized to 84 × 84 pixels.

Algorithm 1 FT-TIM inference
Input: Pre-trained encoder $f_\phi$, one-shot task $\{\mathcal{S}, \mathcal{Q}\}$
TIM parameters: number of fine-tuning iterations $I$, temperature $\tau$, loss weights $\{\lambda, \alpha\}$, learning rate for $\{\theta_c\}_{c=1}^C$
FT parameters: learning rate for the transformation matrix $W$, iteration number $I_W$ at which to start the transformation
1: L2 normalization: compute $x_i = f_\phi(x_i)/\|f_\phi(x_i)\|_2$, $\forall x_i \in \mathcal{S} \cup \mathcal{Q}$
2: Initialize $W = X_s^T X_s$, where $X_s = \{x_s\}$, and $p_{ic} = \mathrm{softmax}(-\frac{\tau}{2}\|\theta_c - x_i\|^2)$
   ⋮
   Update $\{\theta_c\}_{c=1}^C$
17: end while
18: return query predictions $y_i = \arg\max_c p_{ic}$, $\forall i \in \mathcal{Q}$

Implementation Details: The results of the proposed FT-TIM are reproduced and evaluated in the same settings as in [5,6] for fair comparisons. The network models are trained with a cross-entropy loss on the base classes. We utilize the same publicly available pre-trained models of [5,6] for miniImageNet, tieredImageNet, and CUB. For the Aircraft dataset, we train the model according to the same protocol. The evaluation is done on two different setups of the 5-way one-shot benchmark: 1) the standard one-shot benchmark, with 15 samples per class in the query set for each task, for which the average accuracy over the query sets is reported; 2) the semi-supervised one-shot benchmark, where we treat 15 samples per class as additional unlabeled samples alongside the support set and report the accuracy on a separate held-out test set containing 5 test samples from each class. In this setup, we compare the results with and without the proposed task adaptive feature transformation while fine-tuning the entropy loss (TIM) [6] in (3). The average accuracy over 600 one-shot tasks is reported. In the case of FT-TIM, the transformation matrix $W$ is fine-tuned with a 0.01 learning rate, starting from the 200th fine-tuning iteration, which we decide from the miniImageNet validation set accuracy. The feature transformation weights $W$ are initialized from the cosine similarity matrix formed with the $L_2$-normalized initial pre-trained support set features.
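Below is a schematic rendering of the inference loop for a single task; the optimizer choice, the per-group learning rates, the class-ordering of the support set, and the prototype initialization are our assumptions for illustration (ft_tim_loss refers to the sketch given after Eq. (3)), and only the steps shown in Algorithm 1 are mirrored.

    import torch
    import torch.nn.functional as F

    def transform_t(X, W):
        # Torch version of the norm-induced map g of Eq. (1).
        return -0.5 * torch.cdist(X, W) ** 2

    def ft_tim_inference(encoder, S_x, S_y, Q_x, n_iters=1000, start_W=200,
                         lr_theta=1e-4, lr_W=0.01):
        """Schematic version of Algorithm 1 for one one-shot task."""
        with torch.no_grad():
            x_s = F.normalize(encoder(S_x), dim=1)       # step 1: L2 normalization
            x_q = F.normalize(encoder(Q_x), dim=1)

        W = (x_s.T @ x_s).clone().requires_grad_(True)   # step 2: W = X_s^T X_s
        # One support sample per class, assumed ordered by class label.
        theta = x_s.clone().detach().requires_grad_(True)

        opt = torch.optim.Adam([{"params": [theta], "lr": lr_theta},
                                {"params": [W], "lr": lr_W}])
        for it in range(n_iters):
            W.requires_grad_(it >= start_W)              # start updating W at I_W
            g_s, g_q = transform_t(x_s, W), transform_t(x_q, W)
            loss = ft_tim_loss(g_s, g_q, S_y, theta)
            opt.zero_grad()
            loss.backward()
            opt.step()

        with torch.no_grad():
            # Temperature omitted: the argmax of the softmax is unaffected.
            d2 = torch.cdist(transform_t(x_q, W), theta) ** 2
        return d2.argmin(dim=1)                          # step 18: argmax_c p_ic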
Results
The results of the general one-shot classification are highlighted in Table 1. It can be observed that, for each of the datasets and network models, the proposed FT-TIM, which includes the proposed feature transformation, consistently improves the 1-shot accuracy by 1-3% in comparison to the baseline TIM [6] without the proposed transformation. Note that the proposed FT-TIM also outperforms other recent transductive methods such as ICIR [27] and RAP-LaplacianShot [2] by simply fine-tuning the feature transformation during evaluation. A similar consistent improvement is also reflected in the case of fine-grained classification on both the CUB and Aircraft datasets in Table 1. These results clearly demonstrate that the proposed feature transformation can bring out the expressive power of the task-adaptive feature space in one-shot learning. We again evaluate the efficacy of the proposed feature transformation in semi-supervised one-shot tasks, where additional unlabeled samples are provided along with the one-shot labeled data per novel class. The transformation weights and the classifier weights are updated during fine-tuning with the labeled data and the additional unlabeled data in the one-shot task. Finally, the inference is performed on a separate held-out test set. To observe the benefit of plugging the proposed feature transformation into the fine-tuning of the entropy-based loss, we compare the proposed FT-TIM with the baseline TIM [6] without the proposed transformation. From the results in Table 2, we can observe that consistent improvements are achieved by FT-TIM across different datasets, numbers of shots, and network models. These results indicate that the proposed transformation layer, while fine-tuned on top of pre-trained features jointly with the classifier, helps to disentangle the representations of different classes in a task-specific manner.
CONCLUSION
In this paper, we present a simple yet effective feature transformation layer, which brings consistent improvements in transductive one-shot learning while fine-tuned on top of pre-trained features. The proposed transformation takes full advantage of the expressive power of the task-specific feature space. It could be understood as a re-parametrization of the feature space, which disentangles the representations of different classes in a task-specific manner. We further provided an interpretation of our transformation in the basic case of few-shot inference with K-means clustering, along with an interesting bound-optimization link between K-means and entropy minimization. This emphasizes why our feature transformation is useful in the context of entropy minimization, which is widely used in learning.
Table 1. Average one-shot accuracy (in %) for the standard benchmark.

Methods                  Network     miniImageNet   tieredImageNet   CUB     Aircraft
MAML [14]                ResNet-18   49.61          -                68.42   -
TPN [3]                  ResNet-12   59.46          -                -       -
Entropy-min [4]          ResNet-12   62.35          68.36            -       -
DPGN [15]                ResNet-18   66.63          70.46            -       -
CAN+T [7]                ResNet-18   67.19          73.21            -       -
DSN-MR [16]              ResNet-18   64.60          67.39            -       -
MetaoptNet [17]          ResNet-18   62.64          65.99            -       -
LaplacianShot [5]        ResNet-18   70.89          77.60            79.93   -
TIM [6]                  ResNet-18   72.77          80.80            82.24   83.06
RAP-LaplacianShot [2]    ResNet-12   74.29          -                83.59   -
FT-TIM (ours)            ResNet-18   75.00          83.45            85.54   84.47
AWGIM [18]               WRN         63.12          67.69            -       -
Entropy-min [4]          WRN         65.73          73.34            -       -
SIB [19]                 WRN         70.0           70.90            -       -
BD-CSPN [20]             WRN         70.31          78.74            -       -
SIB+E³BM [21]            WRN         71.4           75.6             -       -
LaplacianShot [5]        WRN         73.44          78.80            -       -
IFSL [22]                WRN         73.51          83.07            -       -
TIM [6]                  WRN         77.8           82.1             -       -
FT-TIM (ours)            WRN         79.22          85.06            -       -
Table 2. Average accuracy (in %) for the semi-supervised one-shot learning setup. The best results are highlighted in bold font.

Methods          miniImageNet   tieredImageNet   CUB
TIM [6]          72.40          80.70            82.60
FT-TIM (ours)    73.96          83.45            84.69
REFERENCES

[1] J. Snell, K. Swersky, and R. Zemel, "Prototypical networks for few-shot learning," in NeurIPS, 2017.
[2] J. Hong, P. Fang, W. Li, T. Zhang, C. Simon, M. Harandi, and L. Petersson, "Reinforced attention for few-shot learning and beyond," in CVPR, 2021.
[3] Y. Liu, J. Lee, M. Park, S. Kim, E. Yang, S. Hwang, and Y. Yang, "Learning to propagate labels: Transductive propagation network for few-shot learning," in ICLR, 2019.
[4] G. S. Dhillon, P. Chaudhari, A. Ravichandran, and S. Soatto, "A baseline for few-shot image classification," in ICLR, 2020.
[5] I. M. Ziko, J. Dolz, E. Granger, and I. B. Ayed, "Laplacian regularized few-shot learning," in ICML, 2020.
[6] M. Boudiaf, I. M. Ziko, J. Rony, J. Dolz, P. Piantanida, and I. B. Ayed, "Transductive information maximization for few-shot learning," in NeurIPS, 2020.
[7] R. Hou, H. Chang, M. Bingpeng, S. Shan, and X. Chen, "Cross attention network for few-shot classification," in NeurIPS, 2019.
[8] Y. Grandvalet and Y. Bengio, "Semi-supervised learning by entropy minimization," in NeurIPS, 2005.
[9] D. Berthelot, N. Carlini, I. Goodfellow, N. Papernot, A. Oliver, and C. A. Raffel, "Mixmatch: A holistic approach to semi-supervised learning," in NeurIPS, 2019.
[10] J. Liang, D. Hu, and J. Feng, "Do we really need to access the source data? Source hypothesis transfer for unsupervised domain adaptation," in ICML, 2020.
[11] Y. M. Asano, C. Rupprecht, and A. Vedaldi, "Self-labelling via simultaneous clustering and representation learning," in ICLR, 2020.
[12] M. Fink, "Object classification from a single example utilizing class relevance metrics," in NeurIPS, 2005.
[13] M. Jabi, M. Pedersoli, A. Mitiche, and I. B. Ayed, "Deep clustering: On the link between discriminative models and K-means," TPAMI, 2021.
[14] C. Finn, P. Abbeel, and S. Levine, "Model-agnostic meta-learning for fast adaptation of deep networks," in ICML, 2017.
[15] L. Yang, L. Li, Z. Zhang, X. Zhou, E. Zhou, and Y. Liu, "DPGN: Distribution propagation graph network for few-shot learning," in CVPR, 2020.
[16] C. Simon, P. Koniusz, R. Nock, and M. Harandi, "Adaptive subspaces for few-shot learning," in CVPR, 2020, pp. 4136-4145.
[17] K. Lee, S. Maji, A. Ravichandran, and S. Soatto, "Meta-learning with differentiable convex optimization," in CVPR, 2019.
[18] Y. Guo and N. Cheung, "Attentive weights generation for few shot learning via information maximization," in CVPR, 2020.
[19] S. X. Hu, P. G. Moreno, Y. Xiao, X. Shen, G. Obozinski, N. D. Lawrence, and A. Damianou, "Empirical Bayes transductive meta-learning with synthetic gradients," in ICLR, 2020.
[20] J. Liu, L. Song, and Y. Qin, "Prototype rectification for few-shot learning," in ECCV, 2020.
[21] Y. Liu, B. Schiele, and Q. Sun, "An ensemble of epoch-wise empirical Bayes for few-shot learning," in ECCV, 2020.
[22] Z. Yue, H. Zhang, Q. Sun, and X. Hua, "Interventional few-shot learning," in NeurIPS, 2020.
[23] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and F. Li, "ImageNet large scale visual recognition challenge," IJCV, 2015.
[24] M. Ren, E. Triantafillou, S. Ravi, J. Snell, K. Swersky, J. B. Tenenbaum, H. Larochelle, and R. S. Zemel, "Meta-learning for semi-supervised few-shot classification," in ICLR, 2018.
[25] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie, "The Caltech-UCSD Birds-200-2011 dataset," 2011.
[26] S. Maji, J. Kannala, E. Rahtu, M. Blaschko, and A. Vedaldi, "Fine-grained visual classification of aircraft," Tech. Rep., 2013.
[27] Y. Wang, L. Zhang, Y. Yao, and Y. Fu, "How to trust unlabeled data? Instance credibility inference for few-shot learning," TPAMI, 2021.
Prescission neutron multiplicity and fission probability from Langevin dynamics of nuclear fission

Gargi Chaudhuri* and Santanu Pal†
Variable Energy Cyclotron Centre, 1/AF Bidhan Nagar, Kolkata 700 064, India

arXiv:nucl-th/0105010v2, 1 Apr 2002 | DOI: 10.1103/physrevc.65.054612

A theoretical model of one-body nuclear friction which was developed earlier, namely the chaos-weighted wall formula, is applied to a dynamical description of compound nuclear decay in the framework of the Langevin equation coupled with statistical evaporation of light particles and photons. We have used both the usual wall formula friction and its chaos-weighted version in the Langevin equation to calculate the fission probability and prescission neutron multiplicity for the compound nuclei $^{178}$W, $^{188}$Pt, $^{200}$Pb, $^{213}$Fr, $^{224}$Th, and $^{251}$Es. We have also obtained the contributions of the presaddle and postsaddle neutrons to the total prescission multiplicity. A detailed analysis of our results leads us to conclude that the chaos-weighted wall formula friction can adequately describe the fission dynamics in the presaddle region. This friction, however, turns out to be too weak to describe the postsaddle dynamics properly. This points to the need for a suitable explanation for the enhanced neutron emission in the postsaddle stage of nuclear fission.

*Electronic address: gargi@veccal.ernet.in
†Electronic address: santanu@veccal.ernet.in
I. INTRODUCTION
The emission of light particles and photons during the prescission stage of a fissioning nucleus has proved to be a useful source of information regarding the dynamics of nuclear fission [1]. In particular, the multiplicity of prescission neutrons, measured over a wide range of excitation energies for a number of compound nuclei, has confirmed [2] that the fission lifetime of a hot nucleus is substantially longer than that determined from the statistical model of Bohr and Wheeler [3]. It is, therefore, natural to expect that a dissipative dynamical model would provide an appropriate description of nuclear fission at high excitation energies. This has given rise to a renewed interest in the works of Kramers [4], who considered the dynamics of nuclear fission to be similar to that of a Brownian particle floating in a viscous heat bath. Though the Fokker-Planck equation was initially used to describe such dissipative fission dynamics [5,6], the application of the Langevin equation was found to be more convenient in later works [1,7].
The Langevin equation has been used extensively in the recent years [1,[7][8][9][10][11] in order to explain the prescission neutron multiplicity and fission probability of highly excited (typically a few tens of MeV and above) compound nuclei formed in heavy-ion induced fusion reactions. In these calculations, evaporation of the neutrons and photons (and other light particles) is considered at each instant of time evolution of the fission degrees of freedom. One of the most important inputs to such Langevin dynamical calculations is the dissipative property of the nucleus since it accounts for both the dissipative and the random forces acting on the fission degrees of freedom. While the other inputs to the Langevin equation such as the potential and inertia can be obtained from standard nuclear models, the strength of the dissipative force is still not an unambiguously defined quantity and is often fixed empirically in order to fit the experimental data. In this paper, we shall be mainly concerned with the choice of a dissipative force, based on physical arguments, which can be used in a dynamical description of nuclear fission.
Fröbrich et al. [10] made a detailed study of the fission dynamics and prescission particle emission using the Langevin equation. A comparison of the calculated fission probability and prescission neutron multiplicity excitation functions for a number of nuclei with the experimental data led to a phenomenological shape-dependent nuclear friction in this work. The phenomenological friction turned out to be considerably smaller (∼ 10%) than the standard wall formula value for nuclear friction for near-spherical shapes of the compound nucleus whereas a strong increase of this friction was found to be necessary at large deformations. Similar observations were also reported by other workers [11] who obtained a better agreement with the experimental values of prescission neutron multiplicity by reducing the strength of the wall friction. Earlier, Nix and Sierk [12,13] also suggested in their analysis of mean fragment kinetic energy data that the dissipation is about 4 times weaker than that predicted by the wall-and-window formula of one-body dissipation.
The wall formula for nuclear dissipation was invented long ago in a simple classical picture by extending the mean field concepts to the domain of dissipative dynamics [14]. One crucial assumption of the wall formula concerns the randomization of the particle (nucleon) motion due to the successive collisions it suffers at the nuclear surface. The derivation of the wall formula assumes that the particle motion is fully randomized. It was early realized that any deviation from this full randomization assumption would give rise to a reduction in the strength of the wall formula friction [14,15]. However, it is only recently that a modification of the wall formula has been proposed in which the full randomization assumption is relaxed in order to make it applicable to systems with partly chaotic single-particle motion [16]. In what follows, we shall use the term "chaos-weighted wall formula" (CWWF) for this modified friction in order to distinguish it from the original wall formula (WF) friction. As was shown in Ref. [16], the CWWF friction coefficient η cwwf will be given as
$$\eta_{\mathrm{cwwf}} = \mu\,\eta_{\mathrm{wf}}, \tag{1.1}$$
where η wf is the friction coefficient as was given by the original wall formula [14] and µ is a measure of chaos (chaoticity) in the single-particle motion and depends on the instantaneous shape of the nucleus. The value of chaoticity µ changes from 0 to 1 as the nucleus evolves from a spherical shape to a highly deformed one. The CWWF friction is thus much smaller than the WF friction for compact nuclear shapes while they become closer at large deformations. Thus the suppression of the strength of wall formula friction achieved in the chaos-weighted wall formula suggests that a lack of full randomization or chaos in single-particle motion can provide a physical explanation for the reduction in strength of friction for compact nuclear shapes as required in the phenomenological friction of Ref. [10].
The main motivation of the present work is to verify to what extent the chaos-weighted wall formula can account for the experimental prescission neutron multiplicity and fission probability data. To this end, we shall use both the CWWF and WF frictions as input to the Langevin equation. The Langevin equation will be solved by coupling it with neutron and γ evaporation at each step of its time evolution. Following the work of Fröbrich et al. [10], we shall use a combined dynamical and statistical model for our calculation in which a switching over to a statistical model description will be made when the fission process reaches the stationary regime. The prescission neutron multiplicity and fission probability will be obtained by sampling over a large number of Langevin trajectories. We shall perform calculations at a number of excitation energies for each of the compound nuclei 178 W, 188 Pt, 200 Pb, 213 Fr, 224 Th, and 251 Es. A detailed comparison of the calculated values with the experimental data will be presented.
It is worthwhile here to point out a special feature of the present work. We do not have any adjustable parameter in our entire calculation. All the input parameters except the friction coefficients are fixed by standard nuclear models. The chaos weighted wall friction coefficient is obtained following a specific procedure [16] which explicitly considers particle dynamics in phase space in order to calculate the chaoticity factor µ in Eq.1.1. There is no free parameter in this calculation of friction. In fact, our main aim in this paper is to calculate observable quantities using the theoretically predicted friction and compare them with experimental values in order to draw conclusions regarding the validity of the theoretical model of nuclear friction. As it would turn out, our calculation would not only confirm the theoretical model of the chaos weighted wall friction, it would also provide physical justification for the empirical values of friction used in other works [10]. The present work is thus expected to contribute significantly to our understanding of the dissipative mechanism in nuclear fission.
The paper is organized as follows. The dynamical model along with the necessary input as used in the present calculation will be given in the next section. The details of the calculation will also be given here. The calculated prescission neutron multiplicities and fission probabilities will be compared with the experimental values in Sec.III. A summary of the results along with the conclusions will be presented in the last section.
II. DETAILS OF THE MODEL
A. Collective coordinates, potential and inertia
We have discussed the Langevin equation along with the various inputs as used in the present calculation in a recent publication [17]. We shall use the same definitions and notations in the present work, a brief description of which is as follows. The shape parameters c, h and α as suggested by Brack et al. [18] will be taken as the collective coordinates for the fission degree of freedom. However, we will simplify the calculation by considering only symmetric fission (α = 0). We shall further assume in the present work that fission would proceed along the valley of the potential landscape in (c, h) coordinates, though we shall consider the Langevin equation in the elongation (c) coordinate alone in order to simplify the computation. Consequently, the one-dimensional potential in the Langevin equation will be defined as V(c) = V(c, h) along the valley. Other quantities such as inertia and friction will also be similarly defined. Since our main concern in the present work is to distinguish between CWWF and WF frictions, which give rise to fission rates differing by more than 100%, and it has already been noted that the fission rates in two-dimensional and one-dimensional cases differ by not more than 15% [19], our approximation of considering fission dynamics in one dimension can be considered adequate for our purpose. Moreover, we have also checked that the prescission neutron multiplicity and fission probability change by less than 5% when the input fission rates are changed by 15%. Therefore, we estimate that the uncertainty associated with our calculation is rather small, allowing us to compare our results with the experimental data.
We shall, therefore, proceed by considering c and its conjugate momentum p as the dynamical variables for fission for our present study and the coupled Langevin equations in one dimension will be given [8] as
$$\frac{dp}{dt} = -\frac{p^2}{2}\frac{\partial}{\partial c}\!\left(\frac{1}{m}\right) - \frac{\partial F}{\partial c} - \eta\dot{c} + R(t), \qquad \frac{dc}{dt} = \frac{p}{m}. \tag{2.1}$$
The shape-dependent collective inertia and the friction coefficient in the above equations are denoted by m and η respectively. The free energy of the system is denoted by F while R(t) represents the random part of the interaction between the fission degree of freedom and the rest of the nuclear degrees of freedom considered collectively as a thermal bath in the present picture. The collective inertia, m, will be obtained by assuming an incompressible irrotational flow and making the Werner-Wheeler approximation [20]. The driving force in a thermodynamic system should be derived from its free energy for which we will use the following expression [10] considering the nucleus as a noninteracting Fermi gas:
$$F(c, T) = V(c) - a(c)T^2, \tag{2.2}$$
where T is the temperature of the system and a(c) is the coordinate-dependent level density parameter which will be chosen following Ref. [10]. The surface of a nucleus of mass number A with elongation and neck coordinates, c and h, is defined as
$$\rho^2(z) = \left(1 - \frac{z^2}{c_0^2}\right)\left(a_0 c_0^2 + b_0 z^2\right), \tag{2.3}$$
where $c_0 = cR$, $R = 1.16 A^{1/3}$, and
$$a_0 = \frac{1}{c^3} - \frac{b_0}{5}, \qquad b_0 = 2h + \frac{c-1}{2},$$
in cylindrical coordinates. The potential energy V(c) is obtained from the finite-range liquid drop model [21], where we calculate the generalized nuclear energy by double folding the uniform density within the above surface with a Yukawa-plus-exponential potential. The Coulomb energy is obtained by double folding another Yukawa function with the density distribution. The various input parameters are taken from Ref. [21], where they were determined from fitting fission barriers of a wide range of nuclei. The centrifugal part of the potential is calculated using the rigid body moment of inertia. The potential is calculated over a grid of (c, h) values and the valley of minimum potential is located. Potential values along this valley are used in solving the Langevin equation. The instantaneous random force R(t) is modeled after that of a typical Brownian motion and is assumed to have a stochastic nature with a Gaussian distribution whose average is zero [7]. It is further assumed that R(t) has an extremely short correlation time, implying that the intrinsic nuclear dynamics is Markovian. Consequently, the strength of the random force can be obtained from the fluctuation-dissipation theorem and the properties of R(t) can be written as
$$\langle R(t)\rangle = 0, \qquad \langle R(t)R(t')\rangle = 2\eta T\,\delta(t - t'). \tag{2.4}$$

B. Dissipation
One-body dissipation is usually considered to be more successful in describing fission dynamics than two-body viscosity [7,8]. We shall, therefore, use the one-body wall-and-window dissipation [14] in the Langevin equation. For the one-body wall dissipation, we shall use the chaos-weighted wall formula (Eq.1.1) introduced in the preceding section. The chaoticity µ in Eq.1.1 is a measure of chaos in the single-particle motion of the nucleons within the nuclear volume and in the present classical picture, this will be given as the average fraction of the nucleon trajectories that are chaotic when the sampling is done uniformly over the nuclear surface. A trajectory is identified either as a regular or as a chaotic one by considering the magnitude of its Lyapunov exponent and the nature of its variation with time. The details of this procedure are given in Ref. [22]. The chaoticity is calculated for all possible shapes up to the scission configuration. A plot of the variation of chaoticity with elongation can be found in Ref. [17].
In the wall-and-window model of one-body dissipation, the window friction is expected to be effective after a neck is formed in the nuclear system [23]. Further, the radius of the neck connecting the two future fragments should be sufficiently narrow in order to enable a particle that has crossed the window from one side to the other to remain within the other fragment for a sufficiently long time. This is necessary to allow the particle to undergo a sufficient number of collisions within the other side and make the energy transfer irreversible. It therefore appears that the window friction should be very nominal when neck formation just begins. Its strength should increase as the neck becomes narrower reaching its classical value when the neck radius becomes much smaller than the typical radii of the fragments. We however know very little regarding the detailed nature of such a transition. We shall therefore refrain from making any further assumption regarding the onset of window friction. Instead, we shall define a transition point in the elongation coordinate c win beyond which the window friction will be switched on. We shall also assume that the compound nucleus evolves into a binary system beyond c win and accordingly correction terms for the motions of the centers of mass of the two halves will be applied to the wall formula for c > c win [23].
The choice of a suitable value for the transition point requires some consideration. We first note that while the window friction makes a positive contribution to the total wall-and-window friction for c > c_win, the center of mass motion correction reduces the wall friction. Therefore, these two contributions cancel each other to a certain extent. Consequently, the resulting wall-and-window friction is not very sensitive to the choice of the transition point. We shall further explore this point quantitatively as follows. When a nucleus moves along the fission path, neck formation just begins at c = 1.5. Thus the transition point can lie anywhere beyond this point up to the scission configuration. We have performed a few calculations for prescission neutron multiplicity and fission probability with different values of c_win beyond 1.5, and the calculated values agree within 5%. Therefore, the value of c_win is not very critical for our purpose. We shall choose a value for c_win at which the nucleus has a binary shape and the neck radius is half of the radius of either of the would-be fragments. This value of c_win is thus halfway between its lower and upper limits in terms of the neck radius. Though such a consideration to choose a value of c_win is still arbitrary, we have just demonstrated that it will have little influence on our results.
We shall use the following expressions to calculate the wall-and-window friction coefficients (η wf will henceforth stand for the full wall-and-window friction) [23]:
$$\eta_{\mathrm{wf}}(c < c_{\mathrm{win}}) = \eta_{\mathrm{wall}}(c < c_{\mathrm{win}}), \tag{2.5}$$
where
$$\eta_{\mathrm{wall}}(c < c_{\mathrm{win}}) = \frac{1}{2}\pi\rho_m\bar{v}\int_{z_{\min}}^{z_{\max}} \left(\frac{\partial\rho^2}{\partial c}\right)^{2} \left[\rho^2 + \left(\frac{1}{2}\frac{\partial\rho^2}{\partial z}\right)^{2}\right]^{-\frac{1}{2}} dz, \tag{2.6}$$
and
$$\eta_{\mathrm{wf}}(c \geq c_{\mathrm{win}}) = \eta_{\mathrm{wall}}(c \geq c_{\mathrm{win}}) + \eta_{\mathrm{win}}(c \geq c_{\mathrm{win}}), \tag{2.7}$$
where
$$\eta_{\mathrm{wall}}(c \geq c_{\mathrm{win}}) = \frac{1}{2}\pi\rho_m\bar{v}\left\{\int_{z_{\min}}^{z_N} \left(\frac{\partial\rho^2}{\partial c} + \frac{\partial\rho^2}{\partial z}\frac{\partial D_1}{\partial c}\right)^{2} \left[\rho^2 + \left(\frac{1}{2}\frac{\partial\rho^2}{\partial z}\right)^{2}\right]^{-\frac{1}{2}} dz + \int_{z_N}^{z_{\max}} \left(\frac{\partial\rho^2}{\partial c} + \frac{\partial\rho^2}{\partial z}\frac{\partial D_2}{\partial c}\right)^{2} \left[\rho^2 + \left(\frac{1}{2}\frac{\partial\rho^2}{\partial z}\right)^{2}\right]^{-\frac{1}{2}} dz\right\}, \tag{2.8}$$
and
$$\eta_{\mathrm{win}}(c \geq c_{\mathrm{win}}) = \frac{1}{2}\rho_m\bar{v}\left(\frac{\partial R}{\partial c}\right)^{2}\Delta\sigma. \tag{2.9}$$
In the above equations, ρ_m is the mass density of the nucleus, v̄ is the average nucleon speed inside the nucleus, and D_1, D_2 are the positions of the centers of mass of the two parts of the fissioning system relative to the center of mass of the whole system. z_min and z_max are the two extreme ends of the nuclear shape along the z axis and z_N is the position of the neck plane that divides the nucleus into two parts. In the window friction coefficient, R is the distance between the centers of mass of the future fragments and Δσ is the area of the window between the two parts of the system.
The wall friction coefficients given by (Eqs.2.6 and 2.8) are obtained [14] under the assumption of a fully chaotic nucleon motion within the nuclear volume. However, a fully chaotic motion is achieved only when the nuclear shape is extremely irregular whereas the nucleon motion is partly chaotic in varying degrees for typical nuclear shapes through which a nucleus evolves when it undergoes fission. We have already argued in the preceding section that for such cases, the chaos weighted wall friction (η cwwf ) should be employed instead of the original wall friction. Accordingly, we shall replace Eqs.2.6 and 2.8 by their chaos weighted versions and the chaos-weighted wall-and-window friction (denoted henceforth by η cwwf ) is subsequently obtained as
$$\eta_{\mathrm{cwwf}}(c < c_{\mathrm{win}}) = \mu(c)\,\eta_{\mathrm{wall}}(c < c_{\mathrm{win}}), \tag{2.10}$$
and
$$\eta_{\mathrm{cwwf}}(c \geq c_{\mathrm{win}}) = \mu(c)\,\eta_{\mathrm{wall}}(c \geq c_{\mathrm{win}}) + \eta_{\mathrm{win}}(c \geq c_{\mathrm{win}}). \tag{2.11}$$
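To make these expressions concrete, the following minimal numerical sketch evaluates the wall term of Eq. 2.6 for the shape parametrization of Eq. 2.3 and then applies the chaos weighting of Eq. 2.10. It is only an illustration, not the code behind the results reported here: the mass density rho_m, the average nucleon speed vbar, and the chaoticity function mu are assumed inputs, derivatives are taken by finite differences, and the integration endpoints are trimmed slightly.

```python
import numpy as np

def rho2(z, c, h, A=213):
    """Squared surface profile of Eq. 2.3 for shape parameters (c, h)."""
    R = 1.16 * A ** (1.0 / 3.0)
    c0 = c * R
    b0 = 2.0 * h + (c - 1.0) / 2.0
    a0 = 1.0 / c ** 3 - b0 / 5.0
    return (1.0 - z ** 2 / c0 ** 2) * (a0 * c0 ** 2 + b0 * z ** 2)

def eta_wall(c, h, rho_m, vbar, A=213, npts=4000, dc=1e-6):
    """Quadrature of the wall-friction integral, Eq. 2.6 (case c < c_win).

    Endpoints are trimmed slightly to stay inside the rho^2 > 0 region;
    rho_m and vbar are assumed (dimensionful) inputs.
    """
    c0 = c * 1.16 * A ** (1.0 / 3.0)
    z = np.linspace(-0.999 * c0, 0.999 * c0, npts)
    r2 = rho2(z, c, h, A)
    dr2_dc = (rho2(z, c + dc, h, A) - rho2(z, c - dc, h, A)) / (2.0 * dc)
    dr2_dz = np.gradient(r2, z)
    integrand = dr2_dc ** 2 / np.sqrt(r2 + (0.5 * dr2_dz) ** 2)
    # trapezoidal rule, written out for portability across NumPy versions
    return 0.5 * np.pi * rho_m * vbar * np.sum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))

def eta_cwwf(c, h, rho_m, vbar, mu):
    """Chaos-weighted wall friction of Eq. 2.10; mu(c) is the chaoticity."""
    return mu(c) * eta_wall(c, h, rho_m, vbar)
```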
Defining a quantity β(c) = η(c)/m(c) as the reduced friction coefficient, its dependence on the elongation coordinate is shown in Fig.1 for both the WF and CWWF frictions for the 213 Fr nucleus. A strong suppression of the original wall formula friction for compact shapes of the nucleus can be immediately noticed in the CWWF friction. This implies that the chaoticity is very small for near spherical shapes (c ∼ 1), the physical picture behind which is as follows. A particle moving in a spherical mean field represents a typical integrable system and its dynamics is completely regular. When the boundary of the mean field is set into motion (as in fission), the energy gained by the particle at one instant as a result of a collision with the moving boundary is eventually fed back to the boundary motion in the course of later collisions. An integrable system thus becomes completely nondissipative in this picture resulting in a vanishing friction coefficient. This aspect has been investigated extensively on earlier occasions [15,16] and has been found to be valid for any generic integrable system. The reduction in the strength of the wall friction as shown in Fig.1 thus becomes evident from chaos considerations. The phenomenological reduced friction obtained in Ref. [10] is also shown in this figure. Though the one-body friction with the CWWF agrees qualitatively with the phenomenological friction for c < 1.5, it is beyond its scope to explain the steep increase of phenomenological friction for c > 1.5. We shall discuss this point further while presenting the results.
C. Combined dynamical and statistical model calculation
In our calculation, we first specify the entrance channel through which a compound nucleus is formed. Assuming complete fusion of the target with the projectile, the spin distribution of the compound nucleus is usually found to follow the analytical form
$$\frac{d\sigma(l)}{dl} = \frac{\pi}{k^2}\,\frac{2l + 1}{1 + \exp\!\left[(l - l_c)/\delta l\right]} \tag{2.12}$$
where the parameters l c and δl should be obtained by fitting the experimental fusion cross sections. It is however found that these parameters for different systems follow an approximate scaling [1] and we shall, therefore, use the scaled values of these parameters. The initial spin of the compound nucleus will be obtained by sampling the above spin distribution function. The initial distribution of the coordinates and momenta (c, p) is assumed to be close to equilibrium and hence their initial values are chosen from sampling random numbers following the Maxwell-Boltzmann distribution. With these initial conditions, the Langevin equations (Eq.2.1) are numerically integrated following the procedure outlined in Ref. [7]. The total excitation energy (E * ) of the compound nucleus can easily be obtained from the beam energy of the projectile and energy conservation in the form
$$E^* = E_{\mathrm{int}} + V(c) + p^2/2m \tag{2.13}$$
gives the intrinsic excitation energy E_int and the corresponding nuclear temperature T = (E_int/a)^{1/2} at each time step of integration. The centrifugal potential is included in V(c) in the above equation. We shall also consider neutron and giant dipole γ evaporation at each Langevin time step τ in the following manner [10]. We shall first calculate the neutron and γ decay widths, Γ_n and Γ_γ, by using the inverse cross-section formula as given in Ref. [1]. These widths depend upon the temperature, spin, and mass number of the compound nucleus and hence are to be evaluated at each interval of the time evolution of the compound nucleus. We shall next decide whether any evaporation takes place during the interval by first calculating the ratio x = τ/τ_tot, where τ_tot = ℏ/Γ_tot and Γ_tot = Γ_n + Γ_γ. We shall then choose a random number r by sampling from a uniform distribution between 0 and 1. If we find r < x, it will be interpreted as emission of either a neutron or a γ during that interval. The type of the emitted particle is next decided by a Monte Carlo selection, where it is considered a neutron if 0 ≤ r ≤ Γ_n/Γ_tot, r being again sampled from a uniform distribution of random numbers (0 ≤ r ≤ 1), and a γ otherwise. This procedure simulates the law of radioactive decay for the emitted particles. The energy of the emitted particle is then obtained by another Monte Carlo sampling of its energy spectrum. The intrinsic excitation energy, mass, and spin of the compound nucleus are recalculated after each emission. The spin of the compound nucleus is reduced only in an approximate way, by assuming that each neutron or γ carries away 1ℏ of angular momentum. A Langevin trajectory will be considered to have undergone fission if it reaches the scission point (c_sci) in the course of its time evolution. Alternatively, it will be counted as an evaporation residue event if the intrinsic excitation energy becomes smaller than either the fission barrier or the binding energy of a neutron. The calculation proceeds until the compound nucleus undergoes fission or ends up as an evaporation residue. The number of emitted neutrons and photons is recorded for each fission event. This calculation is repeated for a large number of Langevin trajectories, and the average number of neutrons emitted in the fission events gives the required prescission neutron multiplicity. The fission probability is obtained as the fraction of the trajectories which have undergone fission.
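For illustration, the per-time-step logic just described (a discretized update of Eq. 2.1 followed by the Monte Carlo emission decision) may be sketched as follows. This is a simplified illustration rather than the actual code behind the results: the callables potential_V, inertia_m, friction_eta, level_density_a, Gamma_n, and Gamma_gamma are placeholders for the model ingredients described above, the −T²∂a/∂c part of the driving force from Eq. 2.2 is omitted for brevity, and a simple Euler discretization is used (widths in MeV, dt in seconds).

```python
import numpy as np

HBAR = 6.582e-22  # MeV s, so that tau_tot = HBAR / Gamma_tot

def langevin_emission_step(c, p, E_int, dt, rng, potential_V, inertia_m,
                           friction_eta, level_density_a, Gamma_n, Gamma_gamma):
    """One Langevin time step (Eq. 2.1) plus the emission decision of Sec. II C."""
    T = np.sqrt(max(E_int, 0.0) / level_density_a(c))   # T = (E_int / a)^(1/2)

    # Discretized Langevin update; by Eq. 2.4 the random force over a step
    # dt is sqrt(2*eta*T/dt) times a standard normal draw.
    m, eta = inertia_m(c), friction_eta(c)
    dV = (potential_V(c + 1e-6) - potential_V(c - 1e-6)) / 2e-6
    dinv_m = (1.0 / inertia_m(c + 1e-6) - 1.0 / inertia_m(c - 1e-6)) / 2e-6
    R = np.sqrt(2.0 * eta * T / dt) * rng.standard_normal()
    p = p + dt * (-0.5 * p ** 2 * dinv_m - dV - eta * p / m + R)
    c = c + dt * p / m

    # Monte Carlo emission decision: x = tau / tau_tot, tau_tot = hbar / Gamma_tot
    Gn, Gg = Gamma_n(T), Gamma_gamma(T)
    Gtot = Gn + Gg
    emitted = None
    if rng.random() < dt * Gtot / HBAR:
        # particle type by a second uniform draw (radioactive-decay law)
        emitted = 'neutron' if rng.random() <= Gn / Gtot else 'gamma'
    return c, p, emitted
```

A full trajectory repeats this step, recalculating E_int from Eq. 2.13 and reducing the mass and spin after each emission, until either c reaches c_sci (a fission event) or the intrinsic excitation drops below the fission barrier or the neutron binding energy (an evaporation residue).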
The above scheme can, however, take an extremely long computing time, particularly for those compound nuclei whose fission probability is small. We shall therefore follow a combined dynamical and statistical model, first proposed by Mavlitov et al. [9], in the present calculation. In this model, we shall first follow the time evolution of a compound nucleus according to the Langevin equations as described above for a sufficiently long period during which a steady flow across the fission barrier is established. Beyond this period, a statistical model for compound nucleus decay is expected to be equally valid and more economical in terms of computation. We shall therefore switch over to a statistical model description after the fission process reaches the stationary regime. We shall, however, require the fission width along with the neutron and γ widths in the statistical branch of the calculation. This fission width should be the stationary limit of the fission rate as determined by the Langevin equation. Though analytic solutions for fission rates can be obtained in special cases [4,24] assuming a constant friction, this is not the case with the CWWF friction, which is not constant and is strongly shape dependent. Thus it becomes necessary to find a suitable parametric form of the numerically obtained stationary fission widths using the CWWF (and also WF) frictions in order to use them in the statistical branch of our calculation. The details of this procedure are given in Ref. [17], following which we shall calculate all the required fission widths for the present work.
III. RESULTS
We have calculated the prescission neutron multiplicity and the fission probability for a number of compound nuclei formed in heavy-ion induced fusion reactions. We have used both the CWWF and WF frictions in our calculation. Figure 2 shows the results for prescission neutron multiplicity along with the experimental data. A number of systematic features can be observed from these results. First, the prescission neutron multiplicity values calculated with the CWWF and WF frictions are very close at smaller excitation energies, though at higher excitation energies, the WF predictions are larger than those obtained with the CWWF. This aspect is present in the decay of all the compound nuclei which we consider here and can be qualitatively understood as follows. The magnitude of the CWWF friction being smaller than that of the WF friction, fission rate with the CWWF friction is higher than that obtained with the WF friction. We have shown earlier [17] that the stationary fission width with the CWWF friction is about twice of that with the WF friction. However at a low excitation energy where a compound nucleus is formed with a low value of spin, the fission barrier is high and both the CWWF and WF fission widths turn out to be many times smaller than the neutron width. The neutrons, therefore, have enough time to be emitted long before a compound nucleus undergoes fission irrespective of its dynamics being controlled by either the CWWF or the WF frictions. Thus the prescission neutron multiplicities are rather insensitive to fission time scales at lower excitation energies. On the other hand, a compound nucleus is formed with a larger spin at higher excitation energies resulting in a reduction of the fission barrier. The fission time scales and the neutron lifetimes start becoming comparable at higher excitation energies and less neutrons are predicted from calculations with the CWWF than those with the WF. The prescission neutron multiplicity thus becomes capable of discriminating between different models of nuclear friction at higher excitation energies of the compound nucleus.
A similar explanation also holds for the systematic variation of the calculated prescission neutron multiplicities with respect to the mass number of the compound nucleus. We find that the WF prediction for prescission neutrons starts to deviate from that of the CWWF at smaller values of the excitation energy with increasing mass number of the compound nucleus. Since the fission barrier decreases with the increasing mass of a compound nucleus, the fission time scales and the neutron lifetimes become comparable for heavier compound nuclei at lower excitation energies. This results in fewer neutrons from calculations with the CWWF than those with the WF as one considers heavier compound nuclei.
A number of interesting points can be noted while comparing the calculated values with the experimental data. For the compound nucleus 178 W, the available experimental points [25] are at low excitation energies and therefore cannot distinguish between the calculated values using the CWWF and WF frictions, which are almost identical. The calculated values slightly overestimate the prescission neutron multiplicity compared to the experimental data. A more extensive set of experimental values for prescission neutron multiplicity is available for the compound nuclei 188 Pt, 200 Pb, 213 Fr, and 224 Th [25-27], covering a wider range of excitation energy in which the calculated values with the CWWF and WF differ. Clearly, the CWWF predicted values give excellent agreement with the experimental data for these compound nuclei, whereas the WF predictions are considerably higher. However, similar conclusions cannot be drawn for the heavier nucleus 251 Es. It appears that the WF predictions are closer to the experimental data [25,26,28], whereas the CWWF predictions are somewhat lower. We shall return to this point later for a detailed discussion. For the present, we shall consider the results of fission probability calculations.
The calculated and experimental values of fission probability are shown in Fig. 3 for four compound nuclei. Experimental data for 224 Th are rather scanty and the fission probability for 251 Es is almost 100%. Hence they are excluded from the present discussion. The calculated values of fission probability complement the picture of fission dynamics which was obtained while discussing the prescission neutron data. The fission probability is found to be more sensitive to the choice of friction at lower excitation energies than at higher excitations. The CWWF predicted fission probabilities are larger than those from the WF predictions. Moreover, the CWWF predictions are consistently closer to the experimental values of fission probability than those from the WF predictions.
In order to gain further insight into the dynamics of fission, we have also calculated the presaddle and postsaddle (saddle to scission) contributions to the multiplicity of prescission neutrons. Figure 4 shows the results obtained with both the CWWF and WF frictions. For all the cases, starting from almost zero multiplicity at small excitation energies, the postsaddle contribution increases at higher excitation energies. It is further observed that the postsaddle neutron multiplicities calculated with the CWWF and WF frictions are almost same for all the compound nuclei over the range of excitation energies considered here. This would be due to the fact that the number of postsaddle neutrons depends on the time scale of descent from the saddle to the scission. This, in turn, will depend upon the strength of the friction between the saddle and the scission and we have already seen in Fig. 1 that the CWWF and WF frictions are indeed close at large deformations. We shall next compare the presaddle contributions calculated with the CWWF and WF frictions for each of the nuclei under consideration. We immediately notice that the WF predictions are consistently larger than those from the CWWF at higher excitation energies. This gives rise to the enhancement of the WF prediction for total prescission multiplicity compared to that from the CWWF prediction, which we have already noticed in Fig. 2 and have discussed earlier. Since the CWWF predicted neutron multiplicities agree with the experimental values for the nuclei 178 W, 188 Pt, 200 Pb, 213 Fr, and 224 Th, we conclude that the chaos-weighted wall formula provides the right kind of friction to describe the presaddle dynamics of nuclear fission.
While comparing the relative importance of the presaddle and postsaddle neutrons, we further note that the postsaddle neutrons are more frequently emitted from heavier compound nuclei. For 251 Es, most of the prescission neutrons predicted by the CWWF are accounted for by the postsaddle neutrons. The underlying physical picture can be described as follows. When a compound nucleus is formed in a heavy-ion induced fusion reaction, its spin distribution is assumed to be given by Eq. 2.12. If the compound nucleus is formed with a spin at which there is no fission barrier, its transition to the scission point will be essentially considered as postsaddle dynamics. In order to simplify our discussion, let us assume that most of the compound nuclei at a given excitation energy are formed with the spin l_0 of Eq. 2.12, and let l_b be the limiting spin value at which the fission barrier vanishes. We can then find a critical excitation energy, E_crit, above which l_0 becomes greater than l_b, and most of the fission dynamics at excitations above this critical value can be considered as comprising only postsaddle trajectories. In Fig. 5, we have plotted the fraction of neutrons emitted in the postsaddle stage as a function of the excitation energy for a number of compound nuclei. The critical excitation energy for each nucleus is also given in this plot. We have used the CWWF predicted neutron multiplicities for this plot, where we find that the critical excitation energy decreases with increasing compound nuclear mass. Thus the dominance of postsaddle neutrons sets in at lower excitation energies for heavier nuclei which, in turn, gives rise to the increase in the fraction of postsaddle neutrons with increasing mass of the compound nucleus.
Though the above discussion clearly establishes the importance of postsaddle neutrons for a very heavy compound nucleus, the number of postsaddle neutrons calculated with the CWWF friction still falls short of making the total prescission multiplicity equal to the experimental values for 251 Es. We consider the apparent better agreement between the WF predicted prescission neutron multiplicity and the experimental data for 251 Es as shown in Fig. 2 as a mere coincidence and we do not find any physical justification for abandoning the chaos-weighted factor in one-body friction for such heavy nuclei. Instead, we feel that the mechanism of neutron emission in the postsaddle stage requires a closer scrutiny essentially because the nucleus becomes strongly deformed beyond the saddle point. The neutron decay width of such a strongly deformed nucleus could be quite different from that of the equilibrated near-spherical nucleus which we use in our calculation. In particular, the neutron-to-proton ratio is expected to be higher in the neck region than that in the nuclear bulk and this can cause more neutrons to be emitted. Further, dynamical effects such as inclusion of the neck degree of freedom in the Langevin equation can influence the time scale of the postsaddle dynamics and hence the number of emitted neutrons. Such possibilities should be examined in future for a better understanding of the postsaddle dynamics of nuclear fission.
IV. SUMMARY AND CONCLUSIONS
We have applied a theoretical model of one-body nuclear friction, namely the chaos-weighted wall formula, to a dynamical description of compound nuclear decay where fission is governed by the Langevin equation coupled with the statistical evaporation of light particles and photons. We have used both the normal wall formula and its modified form with the chaos-weighted factor in our calculation in order to find its effect on the fission probabilities and prescission neutron multiplicities for a number of compound nuclei. The strength of the chaos-weighted wall formula friction being much smaller than that of the wall formula, the fission probabilities calculated with the CWWF are found to be larger than those predicted with the WF friction. On the other hand, the prescission neutron multiplicities predicted with the CWWF friction turn out to be smaller than those using the WF friction. Both the prescission neutron multiplicity and fission probability calculated with the CWWF friction for the compound nuclei 178 W, 188 Pt, 200 Pb, 213 Fr, and 224 Th agree much better with the experimental data compared to the predictions of the WF friction.
We have subsequently investigated the role of presaddle and postsaddle neutrons at different excitation energies for different compound nuclei. It has been shown that the majority of the prescission neutrons are emitted in the postsaddle stage for a very heavy nucleus like 251 Es. The CWWF friction, however, cannot produce enough neutrons to match the experimental prescission multiplicities for such a nucleus. It is, therefore, possible that in the postsaddle region, either the fission dynamics gets considerably slowed down or the neutrons are more easily emitted. These aspects require further studies before we draw conclusions regarding the postsaddle dynamics of nuclear fission.
The presaddle neutrons are however found to account for most of the prescission neutrons for lighter nuclei at lower excitation energies. On the basis of the comparison of the calculated prescission multiplicities with experimental data as given in the preceding section, we can conclude that the chaos-weighted wall formula friction can adequately describe the fission dynamics in the presaddle region.
FIG. 1. Reduced one-body friction coefficient β with chaos-weighted wall formula (solid line) and wall formula (dashed line) frictions for 213 Fr. The phenomenological reduced coefficient (dotted line) from Ref. [10] is also shown.
FIG. 2. Prescission neutron multiplicities calculated with the CWWF friction are shown as points connected by solid lines whereas those calculated with the WF friction are shown as points connected by dashed lines. The experimental data for 178 W, 188 Pt, 200 Pb, 213 Fr, 224 Th, and 251 Es are from Refs. [25], [25,26], [25-27], [25-27], [26,28], and [25,26], respectively.
FIG. 3. Fission probabilities calculated with the CWWF friction are shown as points connected by solid lines whereas those calculated with the WF friction are shown as points connected by dashed lines. The experimental data for 178 W, 188 Pt, 200 Pb, and 213 Fr are from Refs. [29], [29], [30], and [31], respectively.
FIG. 4. Neutrons emitted during the presaddle and postsaddle (saddle to scission) stages of fission. Figures in the left panel show values calculated with the CWWF friction whereas those in the right panel are obtained with the WF friction. In each plot, the solid circles, the solid squares, and the solid triangles represent the total number of prescission neutrons, the number of presaddle neutrons, and the number of postsaddle neutrons, respectively.
ACKNOWLEDGMENTS

The authors are grateful to Nicolas Carjan for making valuable suggestions during the course of the work.
[1] P. Fröbrich and I.I. Gontchar, Phys. Rep. 292, 131 (1998).
[2] M. Thoennessen and G.F. Bertsch, Phys. Rev. Lett. 71, 4303 (1993).
[3] N. Bohr and J.A. Wheeler, Phys. Rev. 56, 426 (1939).
[4] H.A. Kramers, Physica (Amsterdam) 4, 284 (1940).
[5] P. Grange and H.A. Weidenmüller, Phys. Lett. 96B, 26 (1980).
[6] P. Grange, Q. Li-Jang, and H.A. Weidenmüller, Phys. Rev. C 27, 2063 (1983).
[7] Y. Abe, S. Ayik, P.-G. Reinhard, and E. Suraud, Phys. Rep. 275, 49 (1996).
[8] T. Wada, Y. Abe, and N. Carjan, Phys. Rev. Lett. 70, 3538 (1993).
[9] N.D. Mavlitov, P. Fröbrich, and I.I. Gonchar, Z. Phys. A 342, 195 (1992).
[10] P. Fröbrich, I.I. Gontchar, and N.D. Mavlitov, Nucl. Phys. A556, 281 (1993).
[11] K. Pomorski, B. Nerlo-Pomorska, A. Surowiec, M. Kowal, J. Bartel, K. Dietrich, J. Richert, C. Schmitt, B. Benoit, E. de Goes Brennand, L. Donadille, and C. Badimon, Nucl. Phys. A679, 25 (2000).
[12] J.R. Nix and A.J. Sierk, in Proceedings of the International School-Seminar on Heavy Ion Physics, Dubna, USSR, 1986, Report No. JINR-D7-87-68 (1987), p. 453.
[13] J.R. Nix and A.J. Sierk, in Proceedings of the 6th Adriatic Conference on Nuclear Physics: Frontiers of Heavy Ion Physics, Dubrovnik, Yugoslavia, 1990, edited by N. Cindro et al. (World Scientific, Singapore, 1990), p. 333.
[14] J. Blocki, Y. Boneh, J.R. Nix, J. Randrup, M. Robel, A.J. Sierk, and W.J. Swiatecki, Ann. Phys. (N.Y.) 113, 330 (1978).
[15] S.E. Koonin and J. Randrup, Nucl. Phys. A289, 475 (1977).
[16] S. Pal and T. Mukhopadhyay, Phys. Rev. C54, 1333 (1996).
[17] Gargi Chaudhuri and S. Pal, Phys. Rev. C63, 064603 (2001).
[18] M. Brack, J. Damgard, A.S. Jensen, H.C. Pauli, V.M. Strutinsky, and C.Y. Wong, Rev. Mod. Phys. 44, 320 (1972).
[19] T. Wada, N. Carjan, and Y. Abe, Nucl. Phys. A538, 283c (1992).
[20] K.T.R. Davies, A.J. Sierk, and J.R. Nix, Phys. Rev. C13, 2385 (1976).
[21] A.J. Sierk, Phys. Rev. C33, 2039 (1986).
[22] J. Blocki, F. Brut, T. Srokowski, and W.J. Swiatecki, Nucl. Phys. A545, 511c (1992).
[23] A.J. Sierk and J.R. Nix, Phys. Rev. C21, 982 (1980).
[24] I.I. Gontchar, P. Fröbrich, and N.I. Pischasov, Phys. Rev. C47, 2228 (1993).
[25] J.O. Newton, D.J. Hinde, R.J. Charity, R.J. Leigh, J.J.M. Bokhorst, A. Chatterjee, G.S. Foote, and S. Ogaza, Nucl. Phys. A483, 126 (1988).
[26] D.J. Hinde, D. Hilscher, H. Rossner, B. Gebaure, M. Lehmann, and M. Wilpert, Phys. Rev. C45, 1229 (1992).
[27] D.J. Hinde, H. Ogata, M. Tanaba, T. Shimoda, N. Takahashi, A. Shinohara, S. Wakamatsu, K. Katori, and H. Okamura, Phys. Rev. C39, 2268 (1989).
[28] H. Rossner, D.J. Hinde, J.R. Leigh, J.P. Lestone, J.O. Newton, J.X. Wei, and S. Elfstrom, Phys. Rev. C45, 719 (1992).
[29] R.J. Charity, J.R. Leigh, J.J.M. Bokhorst, A. Chatterjee, G.S. Foote, D.J. Hinde, J.O. Newton, S. Ogaza, and D. Ward, Nucl. Phys. A457, 441 (1986).
[30] J.S. Forster, L.V. Mitchell, J.U. Andersen, A.S. Jensen, E. Laegsgard, W.M. Gibson, and K. Reichelt, Nucl. Phys. A464, 497 (1987).
[31] D.J. Hinde, R.J. Charity, G.S. Foote, J.R. Leigh, J.O. Newton, S. Ogaza, and A. Chatterjee, Nucl. Phys. A452, 550 (1986).
FIG. 5. Fraction of neutrons emitted between saddle and scission is shown as a function of excitation energy for different compound nuclei. The open square, the solid square, the open circle, and the solid circle represent the calculated values for 251 Es, 224 Th, 213 Fr, and 178 W, respectively. The critical excitation energy (in units of MeV), as defined in the text, is indicated for each nucleus.
| [] |
[
"Composite local low-rank structure in learning drug sensitivity",
"Composite local low-rank structure in learning drug sensitivity"
] | [
"Tien The ",
"Mai \nOslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (\n",
"Leiv Rønneberg \nOslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (\n",
"Zhi Zhao \nOslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (\n",
"Manuela Zucknick \nOslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (\n",
"Jukka Corander \nOslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (\n\nDepartment of Mathematics and Statistics\nUniversity of Helsinki\nFinland\n"
] | [
"Oslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (",
"Oslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (",
"Oslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (",
"Oslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (",
"Oslo Centre for Biostatistics and Epidemiology\nDepartment of Biostatistics\nUniversity of Oslo\nNorway. (",
"Department of Mathematics and Statistics\nUniversity of Helsinki\nFinland"
] | [] | The molecular characterization of tumor samples by multiple omics data sets of different types or modalities (e.g. gene expression, mutation, CpG methylation) has become an invaluable source of information for assessing the expected performance of individual drugs and their combinations. Merging the relevant information from the omics data modalities provides the statistical basis for determining suitable therapies for specific cancer patients. Different data modalities may each have their own specific structures that need to be taken into account during inference. In this paper, we assume that each omics data modality has a low-rank structure with only few relevant features that affect the prediction and we propose to use a composite local nuclear norm penalization for learning drug sensitivity. Numerical results show that the composite low-rank structure can improve the prediction performance compared to using a global low-rank approach or elastic net regression. | 10.1007/978-3-030-63061-4_7 | [
"https://export.arxiv.org/pdf/1905.00095v2.pdf"
] | 141,495,521 | 1905.00095 | 29835983b1f032d51b4af8662d434cdd88b43792 |
Composite local low-rank structure in learning drug sensitivity
5 Sep 2019
Tien The
Mai
Oslo Centre for Biostatistics and Epidemiology
Department of Biostatistics
University of Oslo
Norway. (
Leiv Rønneberg
Oslo Centre for Biostatistics and Epidemiology
Department of Biostatistics
University of Oslo
Norway. (
Zhi Zhao
Oslo Centre for Biostatistics and Epidemiology
Department of Biostatistics
University of Oslo
Norway. (
Manuela Zucknick
Oslo Centre for Biostatistics and Epidemiology
Department of Biostatistics
University of Oslo
Norway. (
Jukka Corander
Oslo Centre for Biostatistics and Epidemiology
Department of Biostatistics
University of Oslo
Norway. (
Department of Mathematics and Statistics
University of Helsinki
Finland
Composite local low-rank structure in learning drug sensitivity
5 Sep 2019. Proceedings of CIBB 2019. Keywords: local low-rank; drug sensitivity; multi-omics; nuclear norm penalization
The molecular characterization of tumor samples by multiple omics data sets of different types or modalities (e.g. gene expression, mutation, CpG methylation) has become an invaluable source of information for assessing the expected performance of individual drugs and their combinations. Merging the relevant information from the omics data modalities provides the statistical basis for determining suitable therapies for specific cancer patients. Different data modalities may each have their own specific structures that need to be taken into account during inference. In this paper, we assume that each omics data modality has a low-rank structure with only few relevant features that affect the prediction and we propose to use a composite local nuclear norm penalization for learning drug sensitivity. Numerical results show that the composite low-rank structure can improve the prediction performance compared to using a global low-rank approach or elastic net regression.
Introduction
In recent years, large-scale in-vitro pharmacological profiling of cancer drugs on a panel of cancer cell lines, which are well characterised by multiple omics data sets, has been proposed as a promising route to precision medicine [1,2,3,4]. The omics data can for example consist of genome-wide measurements of mRNA expression, DNA copy numbers, DNA single point and other mutations, or CpG methylation of cell lines. These measurements reflect different heterogeneous molecular profiles of the cancer cell lines with respect to driver effects, intra-correlations, measurement scales and background noise [5]. The response or sensitivity of a cell line to a drug is characterized by parameters estimated from the dose-response curve, for example by the half maximal inhibitory concentration (IC50) [2].
Various supervised machine learning methods have been proposed for the problem of drug response prediction. For example, [6] utilize a multiple kernel learning (MKL) approach to combine different datasets and explicitly incorporate prior biological information. [7] combine a MKL approach with matrix factorization, using the model to uncover the latent relationships between drug targets and intracellular pathways. Some other related works can be found in a recent review by [8].
Combining different data sources and prior biological knowledge can clearly help to shed light on the complexities of cancer drug sensitivity prediction. Most of the previous approaches based on combined multiple omics data employ a global structure for parameter inference, such as low-rank or sparsity. However, in this application each data source has its own specific structure that is important for capturing the effects of drug sensitivity. Borrowing motivation from the recent work [9] in another application domain that explores a composite local low-rank structure, we propose to use a local low-rank model for predicting drug sensitivity with multi-omics data. To the best of our knowledge, this is the first time that a composite local low-rank structure has been applied in the context of drug sensitivity prediction with multi-omics data.
Materials and Methods
The pharmacological data $Y = \{y_{ij}\} \in \mathbb{R}^{n \times q}$ represent the sensitivities of n samples (e.g. cell lines or patients) to q drugs. We observe high-dimensional (multi-omics) data that contain $p_k$ features, $X_k \in \mathbb{R}^{n \times p_k}$ for $k = 1, \ldots, K$; in total $p = \sum_{k=1}^{K} p_k$ features are available for the n samples across the K different data modalities. Here $X = (X_1, \ldots, X_K) \in \mathbb{R}^{n \times p}$, and let us denote the linear model mapping from high-dimensional covariate data to multivariate responses as
$$Y = \sum_{k=1}^{K} X_k B_k + E = XB + E \tag{1}$$
where $B = (B_1^\top, \ldots, B_K^\top)^\top \in \mathbb{R}^{p \times q}$ is the unknown regression coefficient matrix, partitioned corresponding to the predictor groups. The random errors E are assumed to be zero-mean, where specific correlation structures are examined in the simulation study.
Under the model (1), we assume each omics data set $X_k$ has its own low-rank coefficient matrix $B_k$. Note that this local low-rank assumption does not necessarily imply a low-rank structure of the whole coefficient matrix B. We propose to use a composite nuclear-norm penalization to estimate a local low-rank structure:
$$\hat{B}_{\mathrm{CLR}} = \arg\min_{B \in \mathbb{R}^{p \times q}} \; \frac{1}{2n}\|Y - XB\|_2^2 + \lambda \sum_{k=1}^{K} w_k \|B_k\|_{*}, \tag{2}$$
where $\lambda > 0$ is a tuning parameter and the $w_k$ are pre-specified weights. Here $\|A\|_s = (\sum_{ij} |A_{ij}|^s)^{1/s}$ denotes the matrix $\ell_s$-norm and $\|A\|_* = \sum_{j=1}^{\mathrm{rank}(A)} \sigma_j(A)$ is the nuclear norm, with $\sigma_j(\cdot)$ denoting the jth largest singular value of the enclosed matrix. As studied in [9], the weights are used to adjust for the dimension and scale differences of the $X_k$, and the choice
$$w_k = \sigma_1(X_k)\left(\sqrt{q} + \mathrm{rank}(X_k)\right)/n \tag{3}$$
balances the penalization of different views and allows us to use only a single tuning parameter.

Remark 1: Note that problem (2) covers other well-known problems as special cases, such as:

i) global low-rank structure, also known as nuclear norm penalized regression: with $p_1 = p$, $K = 1$, the penalty in (2) becomes the nuclear norm penalty of the whole parameter matrix B.
ii) multi-task learning: with $p_k = 1$, $p = K$, (2) becomes a special case of multi-task learning in which all the tasks share the same set of features and samples.
Remark 2: Some theoretical results for the composite local low-rank estimator (2) have been laid out in [9] for a specific case where E has i.i.d. Gaussian entries. More specifically, non-asymptotic bounds for estimation and prediction are given in Theorems 1 and 2 of [9].
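Problem (2) is convex, and the proximal operator of a weighted nuclear norm is singular value soft-thresholding applied block-wise to each $B_k$. As a minimal sketch of how (2) can be solved, assuming a fixed step size and iteration budget (the authors' own R implementation is linked in the next section), consider the following proximal gradient descent:

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def composite_lowrank(X_list, Y, lam, n_iter=500):
    """Proximal gradient descent for the composite nuclear-norm problem (2)."""
    n, q = Y.shape
    X = np.hstack(X_list)
    p_list = [Xk.shape[1] for Xk in X_list]
    # weights of Eq. (3)
    w = [np.linalg.svd(Xk, compute_uv=False)[0]
         * (np.sqrt(q) + np.linalg.matrix_rank(Xk)) / n for Xk in X_list]
    B = np.zeros((sum(p_list), q))
    step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the 1/(2n) squared loss
    for _ in range(n_iter):
        Z = B - step * (X.T @ (X @ B - Y) / n)   # gradient step
        blocks, i0 = [], 0
        for pk, wk in zip(p_list, w):            # block-wise prox: SVT per B_k
            blocks.append(svt(Z[i0:i0 + pk], step * lam * wk))
            i0 += pk
        B = np.vstack(blocks)
    return B
```

In practice λ would be chosen by cross-validation, as in the experiments below.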
3 Numerical study

In this section, we conduct simulation studies to examine the efficacy of the proposed composite local low-rank method. The R code to reproduce the results is available at https://github.com/zhizuio/LowRankLearning.
Simulation setups and details
Using the dimensionalities q = 24, n = 90, K = 2, $p_1 = p_2 = 100$, the data are simulated as in the linear model (1), where $X = [X_1, X_2]$. Each $X_k$ (k = 1, 2) is generated from a multivariate normal distribution with mean 0 and covariance matrix $\Sigma_X$. The covariance matrix $\Sigma_X$ has diagonal values equal to 1 and all off-diagonal elements equal to $\rho_X \geq 0$. To take into account the correlation between the drugs, we simulate the noise E from a multivariate normal distribution with mean 0 and covariance matrix $\Sigma_\epsilon$. The covariance matrix $\Sigma_\epsilon$ has diagonal values equal to 1 and all off-diagonal elements equal to $\rho_\epsilon \geq 0$.
We vary different correlation setups in omics data X and the drugs as follows:
(a) fix $\rho_X = 0$ and vary $\rho_\epsilon$ over 0.0, 0.3, 0.6;

(b) fix $\rho_\epsilon = 0$ and vary $\rho_X$ over 0.3, 0.6, 0.9.
Then, for each of the above setups, we consider various settings for the true coefficient
matrix $B = (B_1^\top, B_2^\top)^\top$ as follows:

S1: each $B_k$, k = 1, 2, is a low-rank matrix with $\mathrm{rank}(B_1) = 4$ and $\mathrm{rank}(B_2) = 6$, generated as $B_k = L_k R_k^\top$ with the entries of $L_k \in \mathbb{R}^{p_k \times \mathrm{rank}(B_k)}$ and $R_k \in \mathbb{R}^{q \times \mathrm{rank}(B_k)}$ both generated from N(0, 1).
S2: $B_1$ is low-rank as in S1 and $B_2$ is a sparse matrix where 50% of the elements are non-zero and simulated from N(0, 1).
S3: global low-rank, the whole matrix B is a rank-2 matrix simulated as in S1.
S4: global sparsity, the whole matrix B is sparse, where 20% of the elements are non-zero and simulated from N(0, 1).

Table 1: MSPE with fixed $\rho_X = 0$ and varying $\rho_\epsilon$. The composite low-rank (CLR) method returns the smallest prediction errors. In general, the prediction errors of the three methods increase as the correlation between drugs increases.

Table 2: MSEE with fixed $\rho_X = 0$ and varying $\rho_\epsilon$. The composite low-rank (CLR) method returns the smallest error when there is local low-rank structure in the data, while the reduced-rank regression (GLR) returns the smallest error when there is a global structure.

We compare the composite local low-rank estimator (CLR) in (2) with a global low-rank estimator (GLR) for the reduced-rank regression,
$$\hat{B}_{\mathrm{GLR}} = \arg\min_{B \in \mathbb{R}^{p \times q}} \; \frac{1}{2n}\|Y - XB\|_2^2, \quad \text{s.t. } \mathrm{rank}(B) \leq r,$$
and the elastic-net (sparsity-inducing) estimator (Enet)
$$\hat{B}_{\mathrm{Enet}} = \arg\min_{B \in \mathbb{R}^{p \times q}} \; \frac{1}{2n}\|Y - XB\|_2^2 + \lambda\left(\alpha\|B\|_1 + 0.5(1 - \alpha)\|B\|_2^2\right).$$
We use an implementation of the reduced-rank regression from the R package 'rrpack', where the rank is chosen by cross-validation. For the elastic net, we use the R package 'glmnet', with λ chosen by 10-fold cross-validation and α = 0.2. The evaluations are done using the mean squared estimation error (MSEE) and the mean squared prediction error (MSPE):
$$\mathrm{MSEE} := \frac{1}{pq}\|\hat{B} - B\|_2^2, \qquad \mathrm{MSPE} := \frac{1}{nq}\|Y - X\hat{B}\|_2^2.$$
Note that in real-world applications, where the true B is not known, we can only access the prediction errors. We repeat each experiment setting 30 times and report the mean of the outputs.
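To make the simulation protocol concrete, the following minimal sketch generates one replicate of setting S1 with the equicorrelated covariance matrices described above and computes the two error metrics; composite_lowrank is the illustrative solver sketched in Section 2, and the fixed λ = 0.1 is a placeholder for a properly tuned value.

```python
import numpy as np

def equicorr(dim, rho):
    """Equicorrelation matrix: 1 on the diagonal, rho off the diagonal."""
    return (1.0 - rho) * np.eye(dim) + rho * np.ones((dim, dim))

def simulate_S1(n=90, q=24, p=100, ranks=(4, 6), rho_x=0.0, rho_eps=0.0, seed=0):
    """One replicate of setting S1: low-rank B_1, B_2 and correlated noise."""
    rng = np.random.default_rng(seed)
    X_list, B_blocks = [], []
    for r in ranks:
        Xk = rng.multivariate_normal(np.zeros(p), equicorr(p, rho_x), size=n)
        L = rng.standard_normal((p, r))
        Rm = rng.standard_normal((q, r))
        X_list.append(Xk)
        B_blocks.append(L @ Rm.T)                    # rank-r coefficient block
    B = np.vstack(B_blocks)
    E = rng.multivariate_normal(np.zeros(q), equicorr(q, rho_eps), size=n)
    Y = np.hstack(X_list) @ B + E
    return X_list, Y, B

X_list, Y, B = simulate_S1(rho_eps=0.3)
B_hat = composite_lowrank(X_list, Y, lam=0.1)        # solver sketched in Sec. 2
mspe = np.mean((Y - np.hstack(X_list) @ B_hat) ** 2)   # (1/nq)||Y - X B_hat||_2^2
msee = np.mean((B_hat - B) ** 2)                       # (1/pq)||B_hat - B||_2^2
```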
Numerical results
A first conclusion from the numerical results is that the proposed composite local low-rank (CLR) method has the smallest prediction error. This can be seen from Table 1 and Table 3. In terms of estimation error, the reduced-rank regression (global low-rank method, GLR) seems to be better and also more robust to the correlation between the drugs (Table 2) and between the covariates (Table 4). On the other hand, the elastic net works well neither for estimation nor for prediction.
Regarding the prediction error, besides the fact that the CLR method returns the smallest prediction errors, it is also robust to the correlation between the covariates (the omics data), as in Table 3. This can be easily seen, as the composite local low-rank method does take the local structure of each omics data set into account. On the other hand, as the correlation between the drugs increases, the prediction errors of the CLR also increase. Therefore, it would be desirable to incorporate the correlation between drugs into the proposed prediction method; this idea has been studied in a different model in [10].
On the estimation error, the global low-rank (GLR) method works better than CLR and Enet, see Table 2 and Table 4. In particular, the proposed composite local low-rank (CLR) method returns very high estimation errors in all simulation settings (except S4) when the correlation between the covariates increases in Table 4. This could be due to the weights w_k in (3) being calculated based on the theoretical study for independent and identically distributed errors, and a restricted eigenvalue condition on X, see [9].
4 Real data analysis: GDSC data

To test our approach on a real dataset, we use data from a large-scale pharmacogenomic study, the Genomics of Drug Sensitivity in Cancer (GDSC) [4], made available online by Garnett et al. [2] (ftp://ftp.sanger.ac.uk/pub4/cancerrxgene/releases/release-5.0/). The dataset consists of drug sensitivity data (IC50) for a large panel of cancer drugs screened on multiple cell lines, in addition to various omics measurements from the cancer cell lines.
We select a subset of the data, consisting of 97 screened drugs and 498 human cancer cell lines from 13 cancer tissue types, such that the response matrix Y ∈ R^{498×97} is fully observed. For each cell line, the mutation status for a panel of known cancer genes, genome-wide copy number variation, and gene expression were measured. For data preprocessing and feature selection, we follow the procedure in [10], which results in preselecting 68 binary mutation features (X_1 ∈ R^{498×68}), 426 integer copy number features (X_2 ∈ R^{498×426}), and 2602 continuous gene expression features (X_3 ∈ R^{498×2602}), respectively, with the drug sensitivity measured as log IC50. Note that we did not consider the 13 cancer tissue indicators in our analysis, as they are non-omics data.

The performances of the 3 methods on the GDSC data set are given in Table 5. The estimated ranks of the coefficient matrices for the omics data sources, B_1 ∈ R^{2602×97}, B_2 ∈ R^{426×97}, and B_3 ∈ R^{68×97}, from the composite local low-rank model are 97, 54, and 61, respectively. For the reduced-rank regression (GLR), we fit the model with maximum rank 50, and the best selected rank is 50. GLR returns the smallest prediction error; as observed in the simulations, it is more robust to the correlation among the drugs. For the elastic net, 99% of the coefficients are estimated to be zero. More specifically, CLR reports some drugs with smaller prediction errors compared to GLR: for example, the prediction error for the drug Bicalutamide is 0.003 under CLR, while GLR returns 0.046.

Table 3: MSPE with fixed ρ_ǫ = 0 and ρ_X varied (columns: CLR, GLR, Enet for each of ρ_X = 0.3, 0.6, 0.9; rows: settings S1-S4). The composite low-rank (CLR) method returns the smallest prediction errors.

Table 4: MSEE with fixed ρ_ǫ = 0 and ρ_X varied. Overall, the reduced-rank regression (GLR) returns the smallest error.

                         ρ_X = 0.3               ρ_X = 0.6               ρ_X = 0.9
                         CLR    GLR    Enet      CLR    GLR    Enet      CLR    GLR    Enet
S1 (local low-rank)      2.165  2.755  7.640     4.691  2.752  7.855     14.60  2.726  7.692
S2 (low-rank, sparse)    1.153  1.298  3.581     1.828  1.243  3.507     5.447  1.186  3.266
S3 (global low-rank)     1.213  1.131  3.014     1.235  1.054  2.860     4.225  1.137  3.167
S4 (global sparsity)     0.121  0.112  0.331     0.120  0.112  0.331     0.119  0.112  0.335
Discussion and Conclusion
In this paper, we have studied the problem of drug sensitivity prediction with multi-omics data. Under the assumption that each omics data modality affects the drug sensitivity prediction only through a few latent features (low-rankness), we propose to use a composite local low-rank model that takes into account this local structure. Our numerical results illustrate beneficial performance regarding the prediction errors of the proposed method compared to global methods, such as reduced-rank regression and elastic net.
This paper represents an initial take on drug prediction based on local low-rank structures. There are some clear limitations in our approach, such as: (i) incorporating correlations between drugs and the heterogeneity of multi-omics data, as in [10], into our model would help to make our method more robust; (ii) incorporating other local structures rather than low-rankness could help our method to become more flexible; (iii) extending the proposed method by including full-rank "mandatory" non-omics data sources (e.g. clinical variables). These problems open further avenues of research in this area in the future.
Table 5: MSPE with real data.

            CLR     GLR (rank-50)   Enet
real data   0.3340  0.1257          0.7475
Acknowledgments

The first two authors contributed equally. L.R. is supported by The Norwegian Research Council 237718 through the Big Insight Center for research-driven innovation. The research of T.T.M. and J.C. is supported by the European Research Council (SCARABEE, no. 742158).
[1] Barretina, J., Caponigro, G., Stransky, N., Venkatesan, K., Margolin, A.A., Kim, S., Wilson, C.J., et al. (2012). "The Cancer Cell Line Encyclopedia enables predictive modelling of anticancer drug sensitivity". Nature 483, 603-607.
[2] Garnett, M., Edelman, E., Heidorn, S., Greenman, C.D., Dastur, A., Lau, K.W., Greninger, P., et al. (2012). "Systematic identification of genomic markers of drug sensitivity in cancer cells". Nature 483, 570-575.
[3] Ali, M., Khan, S.A., Wennerberg, K. and Aittokallio, T. (2018). "Global proteomics profiling improves drug sensitivity prediction: results from a multi-omics, pan-cancer modeling approach". Bioinformatics 34, 1353-1362.
[4] Yang, W., Soares, J., Greninger, P., Edelman, E.J., Lightfoot, H., et al. (2013). "Genomics of Drug Sensitivity in Cancer (GDSC): a resource for therapeutic biomarker discovery in cancer cells". Nucleic Acids Research 41, D955-D961.
[5] Hasin, Y., Seldin, M. and Lusis, A. (2017). "Multi-omics approaches to disease". Genome Biology 18, 83.
[6] Costello, J.C., Heiser, L.M., Georgii, E., Gönen, M., et al. (2014). "A community effort to assess and improve drug sensitivity prediction algorithms". Nature Biotechnology 32(12), 1202.
[7] Ammad-ud-din, M., Khan, S.A., Malani, D., Murumägi, A., Kallioniemi, O., Aittokallio, T. and Kaski, S. (2016). "Drug response prediction by inferring pathway-response associations with kernelized Bayesian matrix factorization". Bioinformatics 32(17), i455-i463.
[8] Ali, M. and Aittokallio, T. (2019). "Machine learning and feature selection for drug response prediction in precision oncology applications". Biophysical Reviews 11(1), 31-39.
[9] Li, G., Liu, X. and Chen, K. (2018). "Integrative Multi-View Regression: Bridging Group-Sparse and Low-Rank Models". Biometrics.
[10] Zhao, Z. and Zucknick, M. (2019). "Structured penalized regression for drug sensitivity prediction". arXiv:1902.04996.
| [
"https://github.com/zhizuio/LowRankLearning."
] |
[
"Simple and explicit bounds for multi-server queues with 1 1−ρ scaling",
"Simple and explicit bounds for multi-server queues with 1 1−ρ scaling"
] | [
"Yuan Li \nCornell ORIE\n\n",
"Amazon David \nCornell ORIE\n\n",
"A Goldberg \nCornell ORIE\n\n"
] | [
"Cornell ORIE\n",
"Cornell ORIE\n",
"Cornell ORIE\n"
] | [] | We consider the FCFS GI/GI/n queue, and prove the first simple and explicit bounds that scale as 1/(1−ρ) under only the assumption that inter-arrival times have finite second moment, and service times have finite 2 + ǫ moment for some ǫ > 0. Here ρ denotes the corresponding traffic intensity. Conceptually, our results can be viewed as a multi-server analogue of Kingman's bound. Our main results are bounds for the tail of the steady-state queue length and the steady-state probability of delay. The strength of our bounds (e.g. in the form of tail decay rate) is a function of how many moments of the service distribution are assumed finite. Our bounds scale gracefully even when the number of servers grows large and the traffic intensity converges to unity simultaneously, as in the Halfin-Whitt scaling regime. Some of our bounds scale better than 1/(1−ρ) in certain asymptotic regimes. In these same asymptotic regimes we also prove bounds for the tail of the steady-state number in service. Our main proofs proceed by explicitly analyzing the bounding process which arises in the stochastic comparison bounds of Gamarnik and Goldberg [49] for multi-server queues. Along the way we derive several novel results for suprema of random walks and pooled renewal processes which may be of independent interest. We also prove several additional bounds using drift arguments (which have much smaller pre-factors), and point out a conjecture which would imply further related bounds and generalizations. We also show that when all moments of the service distribution are finite and satisfy a mild growth rate assumption, our bounds can be strengthened to yield explicit tail estimates decaying as O(exp(−x^α)), with α ∈ (0, 1) depending on the growth rate of these moments. | null | [
"https://export.arxiv.org/pdf/1706.04628v3.pdf"
] | 259,075,158 | 1706.04628 | 6f04f1eff91bac5ba071062d7926b80d8acf0731 |
Simple and explicit bounds for multi-server queues with 1/(1−ρ) scaling
2 Jun 2023
Yuan Li
Cornell ORIE
Amazon
David A. Goldberg
Cornell ORIE
Simple and explicit bounds for multi-server queues with 1/(1−ρ) scaling
2 Jun 2023
Keywords: many-server queues, stochastic comparison, Kingman's bound, renewal process, Halfin-Whitt
We consider the FCFS GI/GI/n queue, and prove the first simple and explicit bounds that scale as 1/(1−ρ) under only the assumption that inter-arrival times have finite second moment, and service times have finite 2 + ǫ moment for some ǫ > 0. Here ρ denotes the corresponding traffic intensity. Conceptually, our results can be viewed as a multi-server analogue of Kingman's bound. Our main results are bounds for the tail of the steady-state queue length and the steady-state probability of delay. The strength of our bounds (e.g. in the form of tail decay rate) is a function of how many moments of the service distribution are assumed finite. Our bounds scale gracefully even when the number of servers grows large and the traffic intensity converges to unity simultaneously, as in the Halfin-Whitt scaling regime. Some of our bounds scale better than 1/(1−ρ) in certain asymptotic regimes. In these same asymptotic regimes we also prove bounds for the tail of the steady-state number in service. Our main proofs proceed by explicitly analyzing the bounding process which arises in the stochastic comparison bounds of Gamarnik and Goldberg [49] for multi-server queues. Along the way we derive several novel results for suprema of random walks and pooled renewal processes which may be of independent interest. We also prove several additional bounds using drift arguments (which have much smaller pre-factors), and point out a conjecture which would imply further related bounds and generalizations. We also show that when all moments of the service distribution are finite and satisfy a mild growth rate assumption, our bounds can be strengthened to yield explicit tail estimates decaying as O(exp(−x^α)), with α ∈ (0, 1) depending on the growth rate of these moments.
Introduction.
The multi-server queue with independent and identically distributed (i.i.d.) inter-arrival and service times, and first-come-first-serve (FCFS) service discipline, is a fundamental object of study in Operations Research and Applied Probability. Its study was originally motivated by the design of telecommunication networks in the early 20th century (Janssen et al. [81]). Since that time the model has found many additional applications across a wide range of domains (Worthington [148]).
A key result for GI/GI/1 (i.e. single server) queues came in the seminal 1962 paper of John Kingman (Kingman [89]), in which a simple and explicit upper bound was given for the steady-state expected waiting time E[W ] (and thus also for E[L] by Little's Law, see e.g. Bertsimas [16]). This bound, now referred to as Kingman's bound, states the following (with A a random variable with the same distribution as an inter-arrival time, and S a random variable with the same distribution as a service time):
\[ E[W] \le \frac{\sigma_A^2 + \sigma_S^2}{2E[A]} \times \frac{1}{1-\rho}, \qquad E[L] \le \frac{\sigma_A^2 + \sigma_S^2}{2(E[A])^2} \times \frac{1}{1-\rho}. \]
Here σ_A (respectively σ_S) denotes the standard deviation of the inter-arrival (respectively service) distribution, E[A] denotes the mean inter-arrival time, and ρ denotes the traffic intensity. For GI/GI/1 queues, ρ = E[S]/E[A]. We now show that one can rewrite Kingman's bound for E[L] in terms of only ρ and certain quantities derived from the distributions of A and S which are invariant under scaling A or S by a constant. In particular, for a general random variable (r.v.) X with finite mean and variance, let µ_X ≜ 1/E[X], and c_X denote the coefficient of variation (i.e. c_X ≜ µ_X σ_X). Note that for a given r.v. X, c_X is invariant under scaling X by a constant. Then it follows from some straightforward algebra that Kingman's bound for E[L] may be equivalently written as follows: E[L] ≤ ½(c_A² + ρ²c_S²) × 1/(1−ρ). Importantly, note that since c_A² and c_S² are invariant under scaling A and S by any constants, the above bound is in some sense insensitive / unchanged / scale-free as one passes to heavy-traffic. The term 1/(1−ρ) dictates how E[L] scales as ρ ↑ 1 in a broad sense. Note that the following slight weakening of Kingman's bound (which is essentially equivalent as ρ ↑ 1),
\[ E[L] \le \frac{1}{2}\big(c_A^2 + c_S^2\big) \times \frac{1}{1-\rho}, \tag{1} \]
captures the fundamental essence of Kingman's bound in heavy traffic. As most performance metrics of the general GI/GI/1 queue have no simple closed-form solution, this combination of simplicity, accuracy, and scalability has made Kingman's bound very popular in the analysis of queueing systems. In the same paper, Kingman established that (under appropriate technical conditions) this bound becomes tight as ρ ↑ 1, and the bound was later tightened in Whitt [136,137], Daley [37].
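For the interested reader, the "straightforward algebra" referenced above can be written out explicitly; the following short derivation is our own addition, using only the definitions µ_X = 1/E[X], c_X = µ_X σ_X, and the single-server relation ρ = µ_A/µ_S:

```latex
\begin{align*}
\frac{\sigma_A^2 + \sigma_S^2}{2(E[A])^2} \cdot \frac{1}{1-\rho}
&= \frac{1}{2}\left(\mu_A^2\sigma_A^2 + \mu_A^2\sigma_S^2\right)\cdot\frac{1}{1-\rho}
   && \text{(since } E[A] = 1/\mu_A\text{)}\\
&= \frac{1}{2}\left(c_A^2 + \frac{\mu_A^2}{\mu_S^2}\cdot\mu_S^2\sigma_S^2\right)\cdot\frac{1}{1-\rho}
   && \text{(since } c_A = \mu_A\sigma_A\text{)}\\
&= \frac{1}{2}\left(c_A^2 + \rho^2 c_S^2\right)\cdot\frac{1}{1-\rho}
   && \text{(since } \rho = \mu_A/\mu_S \text{ and } c_S = \mu_S\sigma_S\text{)},
\end{align*}
```

and bounding ρ² ≤ 1 then yields the weakened form (1).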
Soon after Kingman's seminal work, other authors had begun using the tools of weak convergence to attempt to extend this analysis to the more complicated multi-server queue. Indeed, Kingman himself conjectured such a result in 1965 for a sequence of GI/GI/n queues which approaches heavy-traffic (with n held fixed) as the traffic intensity ρ (defined as µ_A/(nµ_S) for GI/GI/n queues) approaches 1 (Kingman [91]). Such a weak convergence result was proven in Kollerstrom [93].
Namely, it was proven in Kollerstrom [93] that if one considers an appropriate sequence of GI/GI/n queues in heavy-traffic (indexed by their traffic intensity ρ, with W_ρ and L_ρ the corresponding steady-state waiting time and queue length), with n held fixed as ρ ↑ 1, then (letting ⇒ denote weak convergence and E_1 denote a mean one exponentially distributed r.v.)
\[ \{(1-\rho)W_\rho,\ \rho \uparrow 1\} \Rightarrow \frac{E[A]}{2} \times \big(c_A^2 + c_S^2\big) \times E_1; \qquad \{(1-\rho)L_\rho,\ \rho \uparrow 1\} \Rightarrow \frac{1}{2}\big(c_A^2 + c_S^2\big) \times E_1. \tag{2} \]
Related results are also proven in Borovkov [18], Iglehart and Whitt [80], Loulou [99], Kollerstrom [94], Kennedy [87], Nagaev [106,107]. However, all these results and bounds were premised on keeping the number of servers fixed while ρ ↑ 1. Other approaches (based on techniques including stochastic comparison) ran into similar obstacles (see e.g. Kingman [92], Wolff [146], Mori [104], Scheller-Wolf [118], Scheller-Wolf et al. [119], Rein and Scheller-Wolf [114]).
That said, there has been some partial progress towards a non-asymptotic multi-server analogue of Kingman's bound (with all prefactors independent of the number of servers), with early progress for restricted asymptotic regimes and distribution classes appearing in Makino [101], Brumelle [24], Suzuki et al. [126], Arjas et al. [5], Oliver [108], Seshadri [121]. Later progress was made using a range of techniques, including convexity (e.g. Rolski and Stoyan [116], Whitt [134,139], Daley and Rolski [40]), other modified service disciplines (e.g. Chawla et al. [30], Smith and Whitt [125], Harchol-Balter et al. [71]), large deviations theory (e.g. Sadowsky [117]), and robust optimization (e.g. Whitt [141], Whitt et al. [142], Whitt [143], Bandi et al. [10]). However, to date, none of those approaches have been able to make substantial progress towards proving a non-asymptotic bound such as (1) for general multi-server queues.
Other recent works have focused on proving bounds on the error of heavy-traffic approximations in the Halfin-Whitt asymptotic scaling regime (Dai et al. [34], Braverman and Dai [21,22,23], Braverman et al. [20], Gurvich et al. [67,68], Mandelbaum et al. [102], Janssen et al. [82,84], Jin et al. [85]). However, outside the case of Markovian service times, all of these results suffer from the presence of non-explicit constants, which may depend on the underlying service distribution in a complicated and unspecified way. Furthermore, in the Halfin-Whitt setting, the relevant limiting quantities themselves generally have no explicit representation (Reed [112], Aghajani and Ramanan [2], Gamarnik and Momcilovic [50]). Also, at least regarding multiserver queues, essentially all such results are restricted to a particular asymptotic regime (although more universal results are known for single-server systems, see Huang et al. [78], Braverman et al. [20], Braverman and Dai [22,23], Gaunt and Walton [53]).
Lyapunov function and drift arguments have also been used to yield bounds (Gamarnik and Zeevi [52], Gamarnik and Momcilovic [50], Gamarnik and Stolyar [51], Dai et al. [34], Hokstad [76], Grosof et al. [61], Scully et al. [120], Grosof et al. [62]). However, these works generally have additive error terms that scale with the number of servers, making them not amenable to proving bounds which hold in heavy-traffic regimes where ρ and n vary together (as in the Halfin-Whitt regime), and/or involve non-explicit constants or restrict to certain asymptotic regimes. An exception is the very interesting recent work Wang et al. [132], which uses Lyapunov function arguments to provide a general bound for the steady-state probability of delay (s.s.p.d.), i.e. the probability that all servers are busy, in certain multi-server systems with hyper-exponentially distributed service times and Markovian inter-arrival times. Although that work considers a more general type of queueing model in which jobs can occupy more than one server, and is thus incomparable to our own, when restricted to our setting of FCFS GI/GI/n queues, it implies the following bound.
For systems in which service times are hyper-exponentially distributed (as a mixture of K exponentials) and inter-arrival times are exponentially distributed, the s.s.p.d. is at most (1/(1−ρ)) × (3K/√n + 1/n). Related techniques were used in Hong et al. [77] to provide asymptotic upper and lower bounds for E[L] scaling like Kingman's bound for certain sequences of queues (up to some non-explicit constants) in a super-Halfin-Whitt scaling regime, in which (1−ρ)√n log(n) → 0. The authors also show that for such sequences of multi-server queues, but in a sub-Halfin-Whitt regime in which (1−ρ)√n log(n) → ∞, the s.s.p.d. is asymptotically at most exp(−Cn(1−ρ)²) for some non-explicit C > 0.
For further discussion regarding the massive literature on this problem, we refer the interested reader to the surveys Doig [44], Ovuworie [111], Whitt [144]. We especially point the reader to Whitt [144] for an overview of the many heuristic approximations available in the literature.
We note that this body of work is further complicated by several mistakes in the literature, as discussed in Daley [37], Daley and Rolski [40], Daley [42], Wolff [146]. Indeed, even the popular textbook Gross et al. [63] seems to incorrectly claim the bound (1) for general multi-server queues in its Section 7.1.3.
1.0.1. Why 1/(1−ρ)? In the above discussion, we several times referenced the fact that certain bounds "scaled as 1/(1−ρ)", or "did not scale correctly" because they did not scale as 1/(1−ρ). It is of course reasonable to ask why, and in what precise sense, 1/(1−ρ) should be the bar. There are at least two fundamental justifications here. First, it follows from well-known results for the M/M/n queue (Halfin and Whitt [70]) that for any fixed B > 0, there exists ζ_B ∈ (0, 1), independent of n and ρ, such that any M/M/n queue for which ρ ∈ (1 − Bn^{−1/2}, 1) satisfies ζ_B × 1/(1−ρ) ≤ E[L] ≤ 1/(1−ρ). Thus, in a fairly general sense, this is the correct scaling for the M/M/n queue in heavy-traffic. Second, this is the correct scaling in classical heavy-traffic, as well as the Halfin-Whitt and non-degenerate slowdown scaling regimes (Kollerstrom [93], Gamarnik and Goldberg [49], Aghajani and Ramanan [2], Atar et al. [8]), and in single-server queues. As referenced above, this is also the correct scaling for certain sequences of queues in a super-Halfin-Whitt regime (Hong et al. [77]). It has also been known for some time that this is the correct scaling when service times are deterministic, as in that setting certain lower and upper bounds proved earlier by Kingman coincide asymptotically (Kingman [92]). More broadly, several known lower bounds exhibit such a scaling across multiple heavy-traffic regimes (e.g. Kingman [92], Gamarnik and Goldberg [49], Goldberg [55], Hong et al. [77]). Indeed, the 1/(1−ρ) scaling is a guiding meta-principle throughout much of the literature on multi-server queues. This includes not just whether the exact bound (1) holds, but whether any bound representable as a simple function of a few normalized moments (of A and S) multiplied by 1/(1−ρ) holds. Over the years, Daley has several times lamented this state of affairs, and we refer the reader to Daley [37], Daley and Rolski [40], Daley [42], Allen [4] for some directly related discussion.
There Daley conjectures that such a bound should indeed be possible (and even that (1) should hold). However, in spite of Daley's optimism, other works bring this into question. For example, the results of Gupta et al. [65] prove that two queues whose inter-arrival and service times have the same first two moments can still have very different mean waiting times. Similarly, the known results for queues in the Halfin-Whitt regime (e.g. Aghajani and Ramanan [2], Gamarnik and Momcilovic [50], Dai et al. [34]) suggest that simple bounds may not hold in that regime. Indeed, those works show that the limiting behavior of the steady-state waiting time may depend in a very complex way on the underlying service distribution. That stands in contrast to the classical heavy-traffic setting in which only one simple limiting behavior is possible (as dictated by (2)).
Moreover, when service times only have a few finite moments, deriving uniform tail bounds is very subtle even in the single-server setting (Olvera-Cravioto et al. [109,110]). In light of such results, it is unclear whether a simple bound which scales as 1/(1−ρ) across different notions of heavy-traffic and depends only on a few normalized moments even exists.
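To make the first justification above concrete, the following short Python sketch (our addition; the Erlang-C formula and the identity E[L] = P(delay) × ρ/(1−ρ) for M/M/n are standard) lets one check numerically that (1 − ρ)E[L] remains of constant order along a Halfin-Whitt-style sequence ρ = 1 − Bn^{−1/2}:

```python
import math

def erlang_c(n, a):
    """Delay probability in M/M/n with offered load a = n * rho, computed via the
    numerically stable Erlang-B recursion followed by the B-to-C conversion."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)          # Erlang-B blocking probability B(k, a)
    rho = a / n
    return b / (1 - rho * (1 - b))

def mean_queue_length(n, rho):
    """E[L] = P(delay) * rho / (1 - rho) for the M/M/n queue."""
    return erlang_c(n, n * rho) * rho / (1 - rho)

for n in (10, 100, 1000, 10000):
    rho = 1 - 1 / math.sqrt(n)           # spare capacity parameter B = 1
    print(n, round((1 - rho) * mean_queue_length(n, rho), 4))  # stays of constant order
```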
Our contribution.
In this paper, we use stochastic comparison arguments (combined with several novel bounds for associated random walks) to prove the first such Kingman-like bound for general multi-server queues, only requiring the assumption that E[A²] < ∞ and E[S^{2+ǫ}] < ∞ for some ǫ > 0. Our bounds for the steady-state queue length and probability of delay are simple, explicit, and scale as a simple function of a few normalized moments (of the inter-arrival and service distributions) multiplied by 1/(1−ρ), regardless of the particular notion of heavy-traffic considered (and including both the classical and Halfin-Whitt scalings). Some of our bounds scale better than 1/(1−ρ), and in these same asymptotic regimes we also prove bounds for the tail of the steady-state number in service. We also prove several additional bounds using drift arguments (which have much smaller pre-factors).
In addition, we prove much stronger tail bounds under the assumption that all moments of the service distribution are finite and satisfy a mild growth rate assumption.
Outline of paper.
The remainder of our paper proceeds as follows. In Section 2, we state our main results and provide a few illustrative implications and extensions. More precisely, we state our most central results Theorem 1 and 2 (bounds for the tail of the queue length and steady-state probability of delay under minimal assumptions) in Subsection 2.2. Then, we state a handful of illustrative implications and extensions in Subsection 2.3, and provide a separate outline of those results in Subsubsection 2.3.1. Let us point out that this includes our stronger results (under mild growth assumptions on the moments of S) in Subsubsection 2.3.3. We also provide a discussion of the prefactors arising in our results, and some of the limitations of our results, in Subsection 2.4. We state our results derived using simplified drift arguments (with no large prefactors), as well as an intuitive conjecture which would imply even stronger such results, in Section 3. Section 4 is devoted to the proofs of our most central results Theorems 1 and 2, which constitute the majority of the technical analysis of the manuscript. As the proofs are somewhat involved, we proceed as follows to improve readability. First, we sketch a high-level outline of the proof in Subsection 4.1. Second, we provide a more detailed proof (but still without most technical details), containing all of the most important auxiliary results and main flow of logic (albeit in many cases without their proofs), in Subsection 4.2. Third, we provide many of the most important technical details of the proofs (but still with many of the finer subarguments omitted) in the technical appendix Section 7. Finally, we defer many of the finer subarguments of these proofs to the supplemental appendix Section 8.
We prove our bounds for the steady-state probability of delay and number of busy servers in Section 5, and provide some concluding remarks and directions for future research in Section 6.
Let us also point out that in addition to providing many of the finer subarguments of the proofs of our main results Theorems 1 and 2, and the proofs of our results with no large prefactors based on simple drift arguments, our supplemental appendix also includes: implications of our main results for higher order moments in Subsection 8.1; implications of our main results for queues in the Halfin-Whitt regime (and an open question of Chawla et al. [30]) in Subsection 8.2; a more in-depth discussion of the prefactors arising in our main results in Subsection 8.5; and a sketch of a plausible approach to generalizing our main results to the network setting in Subsection 8.14.

2. Main Results.

Notation.

Let us fix an arbitrary FCFS GI/GI/n queue with inter-arrival times having the same distribution as r.v. A, and service times having the same distribution as r.v. S, and denote this queueing system by Q^n. Let N^o (respectively A^o) denote an ordinary renewal process with renewals distributed as S (respectively A), and N^o(t) respectively A^o(t) the number of renewals in [0, t]. In general, we will use script font (e.g. N^o, A^o) to refer to the corresponding stochastic process, with notation such as N^o(t), A^o(t) referring to the associated counting process evaluated at a particular time. Let {N_{e,i}, i ≥ 1} respectively {N_{o,i}, i ≥ 1} denote a mutually independent collection of equilibrium (respectively ordinary) renewal processes with renewals distributed as S; A_e an independent equilibrium renewal process with renewals distributed as A; and {N_{e,i}(t), i ≥ 1} respectively {N_{o,i}(t), i ≥ 1} the associated counting processes. Here we recall that an equilibrium renewal process is one in which the first renewal interval is distributed as the equilibrium distribution of S. For a r.v. X, recall that a r.v. R is distributed according to the equilibrium distribution of X if P(R > y) = (1/E[X]) ∫_y^∞ P(X > z) dz for all y > 0. For a r.v. X, we let R(X) denote a r.v. distributed as the equilibrium distribution of X. We note that such a process captures the long-run behavior of a renewal process, since under quite general assumptions the "time until next renewal" in a renewal process with renewals distributed as X converges in distribution (as time grows large) to R(X). Noting that in heavy traffic any given server of a multi-server queue behaves like a renewal process for long stretches of time (as there is some job waiting in queue to replace any job that completes service), it is intuitive that (at least in heavy-traffic) the residual service time of a busy server would have the same distribution (at least approximately). Interestingly, it can be shown that this is true generally for GI/GI/n queues, i.e. under mild technical conditions the steady-state residual work on a busy server has the equilibrium distribution of a service time (see e.g. Hokstad [76]). It is also well-known that the same phenomenon manifests in infinite-server models and loss models with Markovian arrival processes (Sevastyanov [122], Eick et al. [46]).
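As a side note for readers less familiar with equilibrium distributions, the following sketch (ours, not part of the paper's argument) generates approximate samples of R(X) from i.i.d. samples of X, using the standard identity that R(X) is distributed as U · X̂, where X̂ is the size-biased version of X and U is an independent Uniform(0, 1) r.v.:

```python
import numpy as np

def equilibrium_sample(x_samples, size, rng=None):
    """Approximate samples from the equilibrium distribution of X: resample the
    data proportionally to x (size-biasing), then multiply by independent uniforms."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x_samples, dtype=float)
    size_biased = rng.choice(x, size=size, p=x / x.sum())
    return size_biased * rng.uniform(size=size)

rng = np.random.default_rng(1)
s = rng.exponential(scale=1.0, size=200_000)
r = equilibrium_sample(s, size=200_000, rng=rng)
# The exponential is its own equilibrium distribution, so both means are close to 1.
print(round(s.mean(), 3), round(r.mean(), 3))
```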
Let {A_i, i ≥ 1} (respectively {S_i, i ≥ 1}) denote the sequence of inter-event times in A^o (respectively N^o).
Let us evaluate all empty summations to zero, and all empty products to unity; and as a convention take 1/∞ = 0 and 1/0 = ∞. For an event E, let I(E) denote the corresponding indicator function. Unless stated otherwise, all processes should be assumed right-continuous with left limits (r.c.l.l.), as is standard in the literature. For our results involving steady-state queue lengths, we will generally require that the total number of jobs in Q^n (number in service + number waiting in queue) converges in distribution (as time goes to infinity, independent of the particular initial condition) to a steady-state r.v. Q^n(∞). As a shorthand, we will denote this assumption by saying "Q^n(∞) exists", and refer the interested reader to Asmussen [7] for a discussion of technical conditions ensuring this property holds. We will adopt a parallel convention when talking about the steady-state waiting time of an arriving job. Namely, we will generally require that the distribution of the waiting time (in queue, not counting time in service) of the kth arrival to the system converges in distribution (as k → ∞, independent of the particular initial condition) to a steady-state r.v. W^n(∞). As a shorthand, we will denote this assumption by saying "W^n(∞) exists". Supposing that Q^n(∞) exists, let L^n(∞) denote a r.v. distributed as the steady-state number of jobs waiting in queue, i.e. L^n(∞) is distributed as max(0, Q^n(∞) − n).
In addition, for some of our results (i.e. those based on simple drift arguments) we will require that the (sorted) vector representing the residual service times of the set of jobs currently in service converges in distribution (independent of the particular initial condition) to a steady-state random vector W n service (∞), and denote this by saying "W n service (∞) exists". In such a setting, we let Num n service (∞) denote the corresponding steady-state number in service i.e. number of non-zero components of W n service (∞) , and Work n service (∞) denote the corresponding steady-state amount of work in service (i.e. sum of components of W n service (∞)). In these settings we will also require that the total amount of work in system (remaining work of those in service + service times of those in queue) converges to a steady-state r.v. Work n (∞) (again independent of initial conditions), and denote this by saying "Work n (∞) exists".
For a general r.v. X, let X^+ denote max(0, X). For k ≥ 1, let ρ_k ≜ µ_A/(kµ_S). Whenever there is no ambiguity as regards a particular GI/GI/n system, we will let L(∞), W(∞), Q(∞), W_service(∞), Work_service(∞), Num_service(∞), Work(∞), ρ denote L^n(∞), W^n(∞), Q^n(∞), W^n_service(∞), Work^n_service(∞), Num^n_service(∞), Work^n(∞), ρ_n. Note that for any GI/GI/n queue, one can always rescale both the service and inter-arrival times so that E[S] = µ_S = 1, without changing either ρ or the distribution of Q^n(∞). As doing so will simplify (notationally) several arguments and statements, sometimes we impose the additional assumption that E[S] = µ_S = 1, and will point out whenever this is the case. For x > 0, we let Γ(x) ≜ ∫_0^∞ t^{x−1} e^{−t} dt denote the standard gamma function (see e.g. Batir [11]). Finally, we will sometimes use the standard Bachman-Landau (i.e. "big-O") asymptotic notation to describe the growth rate of functions, often to informally build intuition for more formal and explicit statements and bounds. Let us recall that two functions f, g of the same parameter (say r) are said to satisfy the asymptotic relation f = O(g) if there exists an absolute finite constant C > 0 s.t. f(r) ≤ C × g(r) for all r (over some appropriate unbounded domain). Similarly, the relation f = Ω(g) indicates that there exists an absolute finite constant c > 0 s.t. f(r) ≥ c × g(r) for all r (again over an appropriate domain).
Also, the relation f = Θ(g) indicates that both f = O(g) and f = Ω(g). We note that these notations can sometimes be composed with other functions. Thus for example the statement f = r^{O(r)} would indicate that there exists C > 0 s.t. f(r) ≤ r^{C×r} for all r in some appropriate domain, while f = r^{Ω(r)} would indicate that there exists c > 0 s.t. f(r) ≥ r^{c×r} for all r in that domain.
Main results.
Our main results are the following novel, explicit, and general tail bounds for multi-server queues, which scale as 1/(1−ρ), along with corresponding bounds for the steady-state probability of delay. Our bounds only require that E[A²] < ∞ and E[S^{2+ǫ}] < ∞ for some ǫ > 0, although in general the more moments of S assumed finite, the stronger the bounds become. Let r* ≜ sup{r : E[S^r] < ∞}, where we note that r* may equal ∞. As our bounds scale quite differently as r* ↓ 2 and r* ↑ ∞, we state our results by breaking into two cases: r* ≤ 2.5 and r* > 2.5.
Theorem 1 (Tail bounds when r* ≤ 2.5, i.e. S has few finite moments). Suppose that for a GI/GI/n queue with inter-arrival times having the same distribution as r.v. A, and service times having the same distribution as r.v. S, the following is true: (1) E[A²] < ∞; (2) r* ∈ (2, 2.5]; (3) µ_A < nµ_S; (4) Q(∞) exists. Then for all x > 0, P(L(∞) ≥ x/(1−ρ)) is at most
\[ \inf_{r \in (2, r^*)} \Big\{ 3 \times 10^{19} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{r-1} + E[(S\mu_S)^r] \Big) \times \Big(\frac{r}{2} - 1\Big)^{-(r+1)} \times x^{-\frac{r}{2}} \Big\} + 1.1 \times \exp\Big( -.0225\, \big(E[(A\mu_A)^2]\big)^{-1} x \Big); \]
and the steady-state probability of delay (s.s.p.d.), P(Q(∞) ≥ n), is at most
\[ \inf_{r \in (2, r^*)} \Big\{ 4 \times 10^{20} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{r-1} + E[(S\mu_S)^r] \Big) \times \Big(\frac{r}{2} - 1\Big)^{-(r+1)} \times \big(n(1-\rho)^2\big)^{-\frac{r}{2}} \Big\} + 1.1 \times \exp\Big( -.0028\, \big(E[(A\mu_A)^2]\big)^{-1}\, n(1-\rho)^2 \Big). \]
Theorem 2 (Tail bounds when r* > 2.5, i.e. S has more finite moments). Suppose that for a GI/GI/n queue with inter-arrival times having the same distribution as r.v. A, and service times having the same distribution as r.v. S, the following is true: (1) E[A²] < ∞; (2) r* > 2.5 (where r* may equal ∞); (3) µ_A < nµ_S; (4) Q(∞) exists. Then for all x > 0, P(L(∞) ≥ x/(1−ρ)) is at most
\[ \inf_{r \in [2.5, r^*)} \Big\{ 2 \times 10^{4} \times E[(S\mu_S)^2] \times 10^{6r} \times \big(E[(S\mu_S)^2]\big)^{r-1} \times \Big( r^{2.5r} + r^{1.5r}\, E[(S\mu_S)^r] \Big) \times x^{-\frac{r}{2}} \Big\} + 1.1 \times \exp\Big( -.0225\, \big(E[(A\mu_A)^2]\big)^{-1} x \Big); \]
and the steady-state probability of delay (s.s.p.d.), P(Q(∞) ≥ n), is at most
\[ \inf_{r \in [2.5, r^*)} \Big\{ 2 \times 10^{4} \times E[(S\mu_S)^2] \times 10^{7r} \times \big(E[(S\mu_S)^2]\big)^{r-1} \times \Big( r^{2.5r} + r^{1.5r}\, E[(S\mu_S)^r] \Big) \times \big(n(1-\rho)^2\big)^{-\frac{r}{2}} \Big\} + 1.1 \times \exp\Big( -.0028\, \big(E[(A\mu_A)^2]\big)^{-1}\, n(1-\rho)^2 \Big). \]
The proof of our bounds for L(∞) in Theorems 1-2 constitutes the largest part of our technical analysis. The proofs are first sketched at a high level in Section 4.1, then in greater depth in Section 4.2, with additional details appearing in the technical appendix Section 7 and supplemental appendix Section 8. The proof of our bounds for the s.s.p.d. in Theorems 1-2 appears in Section 5.
Some additional comments are in order.
• The inf terms appearing in Theorems 1 and 2 can be replaced by evaluating the associated expression at any r in the given range, yielding an explicit bound with decay rate x^{−r/2} for any such r (where for any given x there is a trade-off between x^{−r/2} and the pre-factor; a numerical illustration of this trade-off appears in the sketch following this list). We illustrate this explicitly (for r = 3) in Corollary 1 below, and later (in Theorem 3) "optimize this bound" (solving for the optimal r for any given x) under the assumption that E[(Sµ_S)^r] satisfies a mild growth rate assumption, yielding a much stronger tail bound.
• Our bounds have a very different behavior depending on whether r * is very close to 2 or r * is significantly larger than 2. The choice of setting a cutoff at 2.5 was somewhat arbitrary, and simply to make the statement of our results more clear. Note that the relevant function of r appearing within the corresponding infimum diverges (albeit in different ways) as r ↓ 2 and as r → ∞. Due to the fact that (in Theorem 2) one takes the inf of the resulting expression over r ∈ [2.5, r * ), the divergence as r → ∞ is not as fundamental of a problem (as one can, for each x, apply the bound for any r ∈ [2.5, r * )). In contrast, as r * ↓ 2, the inf does not remedy the situation, and we leave it as a very interesting open question whether such a degradation is fundamental or merely an artifact of our approach. Let us again point out that in the single-server case, no such degradation occurs, and one need only assume E[S 2 ] < ∞.
• Note that as one assumes the existence of more moments for S, the bounds generally become tighter, as the infimum is over a larger range. In our analysis we prove a bound for each assumption of the form E[S^r] < ∞, and the infimum thus arises since E[S^r] < ∞ implies E[S^{r′}] < ∞ for all r′ ∈ [0, r].
• Let us point out that our analysis actually implies that in Theorem 2 (i.e. the setting r * > 2.5), we could have taken the infimum to also include the bounds of Theorem 1 (i.e. the setting r * ≤ 2.5), in which case the infimum would have been over a more complex piecewise function (we have not stated the results this way to improve readability).
• Note that if r* < ∞ and E[S^{r*}] < ∞, then the (easily verified) continuity of our bounds implies one can "plug in" r* to derive a bound with tail decay rate x^{−r*/2}, even though in principle r* itself is excluded from the range over which the inf is taken.
• Note the asymmetry of our bounds in A (only finite second moment required, term involving A exhibits exponential decay) and S (finite r * > 2 moment required, term involving S exhibits power law decay depending on r * ). Related discrepancies appear in past results on e.g. existence of moments for the queue length in single and multi-server queues (see e.g. Kiefer and Wolfowitz [88], Scheller-Wolf [118], Scheller-Wolf et al. [119]). Intuitively, this manifests because an occasional very large inter-arrival time actually helps the system in some sense, while a large service time will cause the queue to build. More formally, in our proof we first use a union bound (Lemma 2) to separate our analysis into a term involving the arrivals and a term involving the services, and then observe that the term involving the arrivals contains the supremum of a random walk in which all jumps up are uniformly bounded (even if A itself is not).
• The precise relationship between existence of moments of the queue length / tail decay rate, existence of moments of A, S, and scaling in 1/(1−ρ) remains a very interesting open question. We note that even for the single-server queue, such questions can become very subtle when S has a sufficiently heavy tail, and to our knowledge such simple and explicit bounds for the tail of the queue length have not appeared before in the literature even in the single-server case. We refer the interested reader to Olvera-Cravioto et al. [109,110], Whitt [145], Abate et al. [1] for some related discussion, and to Gaunt and Walton [53], Kingman [90], Huang et al. [78], Kollerstrom [95] for related results in the single-server setting. We note that the tail decay rate implied by our main results has several natural implications for the existence and scaling of higher order moments of L(∞), and for completeness we state and discuss such an implication in Section 8.1 of the supplemental appendix.
• Note that the bound for the s.s.p.d. appearing in Wang et al. [132] for systems with hyper-exponentially distributed service times (with K components) and Markovian inter-arrival times has a related dependence on n(1−ρ)² as our bounds for the s.s.p.d., as the bound in that work equals 3K(n(1−ρ)²)^{−1/2} + (n(1−ρ))^{−1}. In our results, the inter-arrival and service times may be from a general distribution, and the demonstrated decay can be an arbitrarily high power of n(1−ρ)². Our optimized bounds (in Theorem 3 below) under stronger assumptions lead to a much faster decay in n(1−ρ)².
• Note that for multi-server queues in the Halfin-Whitt scaling regime, n(1−ρ)² is exactly the square of the spare capacity parameter B. That the s.s.p.d. would grow small as n(1−ρ)² grows large is thus consistent with past results in the Halfin-Whitt scaling regime, in which the s.s.p.d. grows small as the spare capacity parameter B grows large (see e.g. Halfin and Whitt [70], Goldberg [55]). For completeness, we state and discuss some implications of our main results for queues in the Halfin-Whitt regime in Section 8.2 of the supplemental appendix.
• Let us also note that although the prefactor arising in the bounds of Theorem 2 involves large constants, these constants (and terms scaling only exponentially in r) are asymptotically dominated by a term scaling roughly as r^{2.5r} + r^{1.5r} × E[S^r]. As E[S^r] will scale as r^{Θ(r)} for many S of interest, this fact will allow us to "optimize" our bounds (by selecting the best r for each x) to yield much stronger results. These results appear later in Section 2.3.3, and we include an in-depth discussion of the r^{Ω(r)} scaling, its necessity in closely related bounds, and interesting related open questions in Section 2.4.2. Let us also note that we take great care in our results and analysis to separate out and treat the terms that scale as r^{Ω(r)} or E[S^r] (which will have the same r^{Ω(r)} scaling in many cases), as these terms will dominate our bounds asymptotically (in contrast to terms scaling as c^r or (E[S²])^r).
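As noted in the first bullet point above, for any fixed x there is a trade-off between the decay term x^{−r/2} and the r-dependent prefactor. The following Python sketch (ours; it evaluates only the service-time term of the Theorem 2 bound, on a log10 scale to avoid overflow, for the concrete choice S ~ Exp(1)) carries out the minimization over a grid of r:

```python
import math

def theorem2_log10_bound(x, log10_moment, r_grid):
    """Minimize, over a grid of admissible r, the log10 of the service-time term in
    the Theorem 2 tail bound; log10_moment(r) should return log10 E[(S*mu_S)**r]."""
    def term_log10(r):
        a = 2.5 * r * math.log10(r)                     # log10 of r**(2.5 r)
        b = 1.5 * r * math.log10(r) + log10_moment(r)   # log10 of r**(1.5 r) E[(S mu_S)^r]
        bracket = max(a, b) + math.log10(1 + 10 ** (-abs(a - b)))
        return (math.log10(2e4) + r * log10_moment(2.0) + 6 * r
                + bracket - (r / 2) * math.log10(x))
    return min((term_log10(r), r) for r in r_grid)

log10_moment = lambda r: math.lgamma(r + 1) / math.log(10)   # S ~ Exp(1): E[S^r] = Gamma(r+1)
grid = [2.5 + 0.1 * k for k in range(500)]
for x in (1e10, 1e20, 1e40):
    val, best_r = theorem2_log10_bound(x, log10_moment, grid)
    print(f"x = {x:.0e}: best r = {best_r:.1f}, log10(service term) = {val:.1f}")
```

Consistent with the discussion of prefactors above, the optimized term is vacuous for moderate x and only becomes small for very large x.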
Additional implications of main results.
We now present several implications and extensions of our main results for illustrative purposes.
2.3.1. Outline of our presentation of additional implications of main results. We now briefly overview the additional implications of our main results which we will present in the sections below. In Section 2.3.2, we state two explicit and concrete bounds implied by our main results for illustrative purposes (which do not involve an infimum over r), as well as the implied bounds for the expected queue length (which scale as 1/(1−ρ)). In Section 2.3.3, we actually compute the infima appearing in Theorem 2 under additional assumptions on the moments of S (which will hold for all but very heavy-tailed service distributions), which implies much stronger tail bounds. In Section 2.3.4, we show that our results actually imply a scaling better than 1/(1−ρ) in certain asymptotic regimes. In Section 2.3.5, we show that our results also imply bounds for the number of busy servers.
2.3.2.
Illustrative corollaries and bounds for the expected queue length. An important point is that Theorems 1 and 2 also imply bounds for the expected queue length E[L(∞)], by integrating the bounds for P(L(∞) ≥ x/(1−ρ)) (also using the fact that any probability is always bounded by 1, so the divergence of the bounds as x ↓ 0 is not a problem). To most accurately state the implied bounds for E[L(∞)] (which scale as 1/(1−ρ), as in Kingman's bound), the relevant expression should be the integral of an infimum over r (to derive a bound for each x, which is then integrated). The associated statement is somewhat cumbersome, and thus we do not state it here out of considerations of readability. Instead, we present two illustrative implications of our main results, both of which follow from Theorems 1-2 and simple integration.
First, in the corollary below we concretely illustrate our results (also the implied bound for E[L(∞)]) under the assumption that E[S 3 ] < ∞ (where higher moments may or may not exist).
These results follow by replacing the infimum (over r) in Theorem 2 by the choice r = 3 (implicitly using continuity as r = 3 would not technically be included in the infimum in the borderline case r * = 3). To be clear, Theorem 2 actually implies a stronger result when E[S 3 ] < ∞ (due to the infimum). However, the infimum can be a bit hard to interpret without additional assumptions (although becomes quite interpretable under additional assumptions on the moment sequence of S, as in our Theorem 3).
Corollary 1 (Illustration of main results when S has finite third moment).
Suppose that for a GI/GI/n queue with inter-arrival times having the same distribution as r.v.
A, and service times having the same distribution as r.v. S, the following is true: (1) E[A²] < ∞; (2) E[S³] < ∞; (3) µ_A < nµ_S; (4) Q(∞) exists. Then for all x > 0, P(L(∞) ≥ x/(1−ρ)) is at most
\[ 8 \times 10^{25} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{2} + E[(S\mu_S)^3] \Big) \times x^{-1.5} + 1.1 \times \exp\Big(-.0225\,\big(E[(A\mu_A)^2]\big)^{-1} x\Big); \]
the s.s.p.d. is at most
\[ 8 \times 10^{28} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{2} + E[(S\mu_S)^3] \Big) \times \big(n(1-\rho)^2\big)^{-1.5} + 1.1 \times \exp\Big(-.0028\,\big(E[(A\mu_A)^2]\big)^{-1} n(1-\rho)^2\Big); \]
and E[L(∞)] is at most
\[ \Big( 1.61 \times 10^{26} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{2} + E[(S\mu_S)^3] \Big) + 49\,E[(A\mu_A)^2] \Big) \times \frac{1}{1-\rho}. \]
Of course, there was nothing special about the number 3 in Corollary 1, and an analogous result can be easily derived for any number strictly greater than 2.
As a second illustration, we now state the bounds implied by Theorem 1 when one only assumes that E[S^{2+ǫ}] < ∞ for some small ǫ (where higher moments may or may not exist). Here our result follows directly from Theorem 1 and some straightforward algebra, along with the easily verified fact that (1/ǫ)^ǫ < 1.5 for all ǫ > 0.
Corollary 2 (Illustration of main results when S has finite 2 + ǫ moment). Suppose that for a GI/GI/n queue with inter-arrival times having the same distribution as r.v. A, and service times having the same distribution as r.v. S, the following is true: (1) E[A²] < ∞; (2) there exists ǫ ∈ (0, .5) s.t. E[S^{2+ǫ}] < ∞; (3) µ_A < nµ_S; (4) Q(∞) exists. Then for all x > 0, P(L(∞) ≥ x/(1−ρ)) is at most
\[ 5.1 \times 10^{20} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{1+\epsilon} + E[(S\mu_S)^{2+\epsilon}] \Big) \times \Big(\frac{1}{\epsilon}\Big)^{3} \times x^{-(1+\frac{\epsilon}{2})} + 1.1 \times \exp\Big(-.0225\,\big(E[(A\mu_A)^2]\big)^{-1} x\Big); \]
the s.s.p.d. is at most
\[ 6.8 \times 10^{21} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{1+\epsilon} + E[(S\mu_S)^{2+\epsilon}] \Big) \times \Big(\frac{1}{\epsilon}\Big)^{3} \times \big(n(1-\rho)^2\big)^{-(1+\frac{\epsilon}{2})} + 1.1 \times \exp\Big(-.0028\,\big(E[(A\mu_A)^2]\big)^{-1} n(1-\rho)^2\Big); \]
and E[L(∞)] is at most
\[ \Big( 2.1 \times 10^{21} \times E[(S\mu_S)^2] \times \Big( \big(E[(S\mu_S)^2]\big)^{1+\epsilon} + E[(S\mu_S)^{2+\epsilon}] \Big) \times \Big(\frac{1}{\epsilon}\Big)^{4} + 49\,E[(A\mu_A)^2] \Big) \times \frac{1}{1-\rho}. \]
As mentioned before, we leave it as a very interesting open question whether the divergence of these bounds (in ǫ as ǫ ↓ 0) is fundamental or merely an artifact of our approach.
Let us point out that our main results also imply bounds for the higher moments of L(∞) (scaling as an appropriate power of 1/(1−ρ)), and for completeness we provide such a bound in the supplemental appendix Section 8.1.
2.3.3.
Stronger explicit bounds when r * = ∞ and S is not very heavy tailed. We now show that when all moments of S exist, and this moment sequence satisfies a mild technical condition (which in general will hold for all but very heavy-tailed distributions), we can derive much tighter bounds. These results follow by actually computing the infima appearing in Theorem 2, i.e. optimizing the bound (over r) for each fixed x.
To motivate the precise conditions we will impose on the moments of S, let us first make an observation pointing out that all but very heavy-tailed distributions satisfy a natural growth condition on their moment sequence. Namely, under this mild condition one will have E[S^r] = r^{O(r)}. The observation follows from standard results for the moments of a Weibull distribution and some straightforward algebra, and for completeness we include a proof in the supplemental appendix.

Observation 1. Suppose there exist a, b > 0 and c ∈ (0, 1] s.t. P(Sµ_S > x) ≤ a exp(−bx^c) for all x > 0. Then there exist constants a′, b′, c′ > 0 s.t. E[(Sµ_S)^r] ≤ a′ × b′^r × r^{c′r} = r^{O(r)} for all r ≥ 2.
These bounds thus apply (for example) if S has exponentially decaying tails (in which case c = 1), or even much heavier tails with decay rate comparable to that of a heavier-tailed Weibull distribution (in which case c ∈ (0, 1)).
We note that the case P(Sµ_S > x) ∼ a exp(−bx^c) for c ∈ (0, 1) puts the associated queueing model beyond the scope of several past results which assume that sup_{t>0} E[Sµ_S − t | Sµ_S > t] < ∞ (Downey [45], Grosof et al. [62]).
The next result shows that if the moments of S can be bounded as in Observation 1, our results imply a much stronger tail bound for GI/GI/n queues. We include a proof in the supplemental appendix Section 8.3.
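The growth rate in Observation 1 is also easy to check numerically: for a Weibull-type tail P(X > x) = exp(−x^c) one has E[X^r] = Γ(1 + r/c), and the following sketch (ours) confirms that log E[X^r] / (r log r) approaches 1/c, i.e. that E[X^r] = r^{(1/c + o(1))r}:

```python
import math

def log_weibull_moment(r, c):
    """log E[X^r] when P(X > x) = exp(-x**c), i.e. log Gamma(1 + r / c)."""
    return math.lgamma(1 + r / c)

for c in (1.0, 0.5):
    for r in (10, 100, 1000, 10000):
        # Ratio tends to 1/c as r grows.
        print(c, r, round(log_weibull_moment(r, c) / (r * math.log(r)), 3))
```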
Theorem 3 (Stronger bounds if moments of S scale as r^{O(r)}). Suppose there exist a, b, c ≥ 1 s.t. E[(Sµ_S)^r] ≤ a × b^r × r^{cr} for all r ≥ 2. Suppose also that the assumptions of Theorem 2 hold. Let f ≜ 4 × 10⁴ × a, g ≜ 10⁷ × a × b³ × 4^c, and δ ≜ (1/e) × (1.5 + c) × (1/g)^{1/(1.5+c)}. Then for all x > 0, P(L(∞) ≥ x/(1−ρ)) is at most
\[ f \times \exp\big(-\delta x^{\frac{1}{3+2c}}\big) + 1.1 \times \exp\Big(-.0225\,\big(E[(A\mu_A)^2]\big)^{-1} x\Big); \]
and the s.s.p.d. is at most
\[ f \times \exp\Big(-\delta \big(n(1-\rho)^2\big)^{\frac{1}{3+2c}}\Big) + 1.1 \times \exp\Big(-.0028\,\big(E[(A\mu_A)^2]\big)^{-1}\, n(1-\rho)^2\Big). \]
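To give a feel for the constants in Theorem 3, the following sketch (ours) evaluates f, g, and δ for the concrete choice a = b = c = 1, which is valid e.g. for S ~ Exp(1) since E[S^r] = Γ(r + 1) ≤ r^r for r ≥ 2, and reports the smallest x at which the service-time term drops below one:

```python
import math

a, b, c = 1.0, 1.0, 1.0
f = 4e4 * a
g = 1e7 * a * b**3 * 4**c
delta = (1 / math.e) * (1.5 + c) * (1 / g) ** (1 / (1.5 + c))
print(f"f = {f:.1e}, delta = {delta:.2e}, tail exponent = 1/{3 + 2 * c:.0f}")
# Service-time term f * exp(-delta * x**(1/(3+2c))) first drops below 1 at:
x_star = (math.log(f) / delta) ** (3 + 2 * c)
print(f"x* = {x_star:.2e}")
```

This echoes the discussion below: the bound is asymptotically strong, but the prefactors make it vacuous for moderate x.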
Let us note that although f and δ involve large constants, c does not, and will in general be a small integer (depending on the actual growth rate of the moments of S). Thus (for example) if S has exponentially decaying tails, then c = 1 and 3 + 2c = 5. Thus in this case, our results imply an explicit tail decay rate which scales asymptotically as exp(−x^{1/5}) (note that the term arising from the arrival process will always have an asymptotically faster decay rate). Let us also point out that although for illustrative purposes we have focused here on the assumption that E[(Sµ_S)^r] ≤ a × b^r × r^{cr}, analogous results could be derived under related growth conditions on the moments of S. These results are also related to an open question of Chawla et al. [30] on distribution-independent bounds for queues in the Halfin-Whitt regime. We defer a more formal discussion of the connection to Chawla et al. [30] to the supplemental appendix Section 8.2.

2.3.4. Better than 1/(1−ρ) scaling in certain asymptotic regimes. We now observe that by integrating the minimum of our bound for P(L(∞) ≥ x/(1−ρ)) and our bound for the s.s.p.d. (which also yields a bound for the tail of L(∞)), we can obtain a scaling better than 1/(1−ρ). For illustrative purposes and clarity of exposition, we state this result under the assumption that E[S³] < ∞, although analogous results (which decay as different powers of n(1−ρ)²) can be derived for any r s.t. E[S^r] < ∞, and even stronger results could be derived under assumptions analogous to those of Theorem 3. For completeness, we include a proof in the supplemental appendix Section 8.4.
To make a comparison to our previous bound for E[L(∞)] appearing in Corollary 1 easier, let us define a_1 ≜ 1.61 × 10²⁶ × E[(Sµ_S)²] × ((E[(Sµ_S)²])² + E[(Sµ_S)³]) and a_2 ≜ 49 E[(Aµ_A)²]. Note that our previous Corollary 1 asserted that, under appropriate assumptions, E[L(∞)] ≤ (a_1 + a_2) × 1/(1−ρ).
Corollary 3 (Better than 1/(1−ρ) scaling). Under the same assumptions as Corollary 1, and supposing also that n(1−ρ)² ≥ 10⁶ × (E[(Aµ_A)²])², it holds that
\[ E[L(\infty)] \le (a_1 + a_2) \times \frac{1}{1-\rho} \times 1000\,\big(n(1-\rho)^2\big)^{-.5}. \]
Corollary 3 goes beyond the 1/(1−ρ) scaling, with an additional correction term 1000(n(1−ρ)²)^{−.5} which converges to 0 as n(1−ρ)² grows large. Such a bound can be interpreted as a generalization of the fact that in an M/M/n queue, E[L(∞)] = P(Q(∞) ≥ n) × ρ/(1−ρ), with the term (n(1−ρ)²)^{−.5} acting as a proxy for P(Q(∞) ≥ n). We note that such a term will be significant in certain asymptotic scaling regimes, e.g. in the Halfin-Whitt regime when the spare capacity parameter B is large, or in the quality-driven scaling regime for multi-server queues (in which n → ∞ for a fixed ρ, see e.g. Borst et al. [19], Maglaras et al. [100], Whitt [138], Halfin and Whitt [70]).
2.3.5.
Bound for number of busy servers in certain scaling regimes. In certain scaling regimes, the primary metric of interest will be the number of busy servers, as with high probability there is no queue. Our next result provides bounds in exactly this setting, and we include a proof in Section 5.
Theorem 4 (Bound for number of busy servers). Under the same assumptions as Theorem 2, and supposing also that ρ ∈ [1/n, 1 − 2/n], the following is true. For all x ∈ [4, ½ min(µ_A/µ_S, (n − µ_A/µ_S)/√(µ_A/µ_S))], it holds that P(Num_service(∞) ≥ µ_A/µ_S + x√(µ_A/µ_S)) is at most
\[ \inf_{r \in [2.5, r^*)} \Big\{ 2 \times 10^{4} \times E[(S\mu_S)^2] \times \big(4 \times 10^{6}\big)^{r} \times \big(E[(S\mu_S)^2]\big)^{r-1} \times \Big( r^{2.5r} + r^{1.5r}\, E[(S\mu_S)^r] \Big) \times x^{-r} \Big\} + 1.1 \times \exp\Big(-.00015\,\big(E[(A\mu_A)^2]\big)^{-1} x^2\Big). \]
We note that in the quality-driven regime (i.e. n → ∞ for fixed ρ), for any given M > 0, min(µ_A/µ_S, (n − µ_A/µ_S)/√(µ_A/µ_S)) will be greater than M for all sufficiently large n. Furthermore, µ_A/µ_S + x√(µ_A/µ_S) is at most n for x in the same range. Thus for any fixed r* and all sufficiently large x (with "sufficiently large" a function of r* and the moments of Sµ_S, Aµ_A), Theorem 4 will yield meaningful (i.e. strictly less than unity) bounds on the probability that the number of busy servers exceeds µ_A/µ_S + x√(µ_A/µ_S) in the quality-driven regime. Such a result is consistent with the known Poisson approximations for the number in service in these scaling regimes (see e.g. Borst et al. [19], Maglaras et al. [100], Whitt [138], Halfin and Whitt [70]). By a similar logic, our results can also yield meaningful (i.e. strictly less than unity) bounds for the number of busy servers in the Halfin-Whitt scaling regime when the spare capacity parameter B is sufficiently large. We further note that a stronger result could again be proven under the same moment assumptions as Theorem 3, although we do not formalize that here.
Additional discussion of prefactors and "universality" of our main results.
In this section we present some additional discussion of whether our main results provide "universal" bounds (spoiler -they do not), and provide some additional context and supporting results to help explain the (large) prefactors appearing in our main results.
2.4.1. Do these results provide "universal" bounds? One can ask whether our bounds are in a meaningful sense "universally accurate" across all scalings of n and ρ. Although our bounds do represent a step in that direction, they unfortunately fall short in this regard.
• The tail decay rate implied by our bounds (also our Theorem 3) does not match known results in several cases where the asymptotic behavior of the tail of L(∞) is well-understood (including for single-server queues).
• It follows from known results for queues in the Halfin-Whitt regime (Halfin and Whitt [70], Goldberg [55]), super-Halfin-Whitt regime (Hong et al. [77]), and light traffic regimes (Burman et al. [26], Daley and Rolski [41], Gupta et al. [66]), that a faster decay in n(1 − ρ) 2 holds for the s.s.p.d. in certain scaling regimes.
• Our bounds are not applicable in a very light traffic regime where ρ ↓ 0 for fixed n, as they do not converge to zero as ρ ↓ 0.
• The large prefactors appearing in our bounds render them meaningless for certain parameter ranges, and no doubt extremely conservative in many cases (e.g. one can compare our bounds to known results for the M/M/1 queue).
2.4.2.
On the large prefactors appearing in our bounds. Let us now address the proverbial "elephant in the room" -namely, the massive prefactors in our results. These prefactors involve both large numerical constants, as well as functions of r whose asymptotics (at least for large r) are dominated by terms of the form r cr for some small constant c. More precisely, c = max(2.5, 1.5 + γ)
for some explicit γ depending on the growth rate of the moments of S, where γ = 1 if S has exponentially decaying tails, and will equal α if the tail of S decays as exp(−x 1 α ) as discussed in Observation 1. It is natural to ask whether such a large prefactor is fundamental, or merely an artifact of our approach. We provide some relevant discussion here, and include a more detailed discussion in Section 8.5 of the supplemental appendix.
First, let us point out that for any given queueing system, the prefactors appearing in our bounds may be very loose (for example comparing our bounds to known results for M/M/1 queues). Second, regarding the explicit numerical constants (e.g. 10 4 or 10 6 ) appearing in our bounds, these arise not from any one or two particular aspects of the proof, but instead from the composition of many bounds and results from the literature. Although we did take some care to try and control these constants in our analysis, it is likely that they could be further improved by an even more careful analysis, or possibly by a completely different type of analysis (see Section 3).
The asymptotic scaling (in r) of the terms and prefactors appearing within the infima arising in our main results leads to some subtle and interesting questions. It is easy to see that in our main results, those functions of r scale as r O(r) for large r (so long as E[(Sµ S ) r ] scales as r O(r) ). Although the fact that one takes an infimum over these terms means that these prefactors do not in general appear in the bound for any given x, they do appear in the bound if one insists on acheiving the highest possible tail decay rate (as a function of r) under minimal assumptions (i.e. how many moments of S exist). For example, our main result Theorem 2 directly implies the following (we formally prove this in Section 8.5 of the supplemental appendix).
all GI/GI/n queues in which E[A r ] < ∞, E[S r ] < ∞, µ A < nµ S , and Q(∞) exists, it holds that for all x > 0, P L(∞) ≥ x 1−ρ ≤ c r E[(Aµ A ) 2 ] r × E[(Aµ A ) r ] + E[(Sµ S ) 2 ] r × E[(Sµ S ) r ] × x − r 2 .
In addition, one can take c r < 4 × 10 4 × 10 6 r × r 2.5r for all r > 2.5.
Note that 4 × 10 4 × 10 6 r × r 2.5r is asymptotically dominated by the r 2.5r term. As for any δ > 0 the function r δr grows quite rapidly (faster than any exponential in r), it is reasonable to ask whether such a growth is fundamental in the prefactors arising in e.g. Corollary 4, or is merely an artifact of our analysis. It turns out that such a scaling is indeed fundamental, as indicated by the following theorem which we prove in Section 8.5 of the supplemental appendix. The intuition for the above is actually quite straightforward, and arises from the fact that when A, S are uniformly bounded it holds that E[A r ], E[S r ] scale only exponentially in r, while in a singleserver queue the rth moment of the steady-state queue length (with inter-arrival times distributed as A and service times distributed as S) scales as r Ω(r) , inherited from the scaling of the moments of an exponential distribution.
In the proofs of our main results, the prefactors scaling as r Ω(r) arise largely from our bounds for the higher order central moments of (pooled) renewal processes, i.e. E |N e (t) − µ S t| r and It turns out that the lower bound analysis used in our proof of Theorem 5 does not apply to our actual bounds from our main result Theorem 2, with the discrepancy arising from the fact that in our actual bounds there is a term of the form 1.
E | n i=1 N e,i (t) − µ S nt| r ,1 × exp − .0225 E[(Aµ A ) 2 ] −1 x instead of a term of the form E[(Aµ A ) 2 ] r × E[(Aµ A ) r ] × x − r 2
, and we provide further details in Section 8.5 of the supplemental appendix. More broadly, the required prefactors could in principle be quite sensitive to the exact form of the desired bound. This may be especially true if the desired bounds are to capture the "best possible" tail decay rate. For example, it is shown in Abate et al. [1] that in this case the relevant prefactor may depend on explicit terms arising in the asymptotics of the tail of the service time distribution, which may not be boundable (even in principle) in terms of the moments of S. Also, as noted before, such uniform tail bounds are known to be very complex even in the single-server setting, when S has a heavy tail (see e.g. Olvera-Cravioto et al. [109,110]).
We leave the question of deriving tighter bounds, possibly under different assumptions and/or in the study of qualitatively different types of bounds, as a very interesting direction for future research. We take some steps along these lines with our results in Section 3.
Towards bounds with no large prefactors.
Although it remains an interesting open question whether bounds analagous to our main results (including for the mean queue length) in GI/GI/n queues exist with no large prefactors, here we provide some partial evidence that this may indeed be possible. In particular, we prove such bounds in three related settings of interest : (1) the s.s.p.d. when the arrival process is Markovian;
(2) the tail of the number of busy servers when the arrival process is Markovian; and (3) the mean queue length when the arrival process is Markovian and there are Markovian abandonments. We also provide an intuitive conjecture which would imply such a bound for the mean queue length in M/GI/n queues. Our conjecture is closely inspired by a known relationship for the workload in M/GI/n queues, and is similar to related conjectures appearing previously in the literature.
Our proofs of these results are fundamentally different from those of our other results. These results follow from a simplified version of drift arguments very similar to those appearing in Wang et al. [132], Hong et al. [77], Hokstad [76], Scully et al. [120], Grosof et al. [62]. We note that several of these works were primarily focused on the study of more general models in which jobs can utilize more than one server and/or use other service disciplines. We defer all proofs to Section 8.6 of the supplemental appendix.
Our first result, for the s.s.p.d., is as follows. We defer the proof to Section 8.6.1 of the supplemental appendix.
Theorem 6. Consider an M/GI/n queue Q n with Markovian inter-arrival times having the same distribution as r.v. A, and service times having the same distribution as r.v. S satisfying
E[S] < ∞. Suppose also that µ A < nµ S , and that Q(∞) and W n service (∞) exist. Then the s.s.p.d. is at most 1 2 √ ρ n(1 − ρ) 2 − 1 2 .
The bound also has a similar qualitative dependence on n(1 − ρ) 2 as several of our previous results.
In contrast to our previous results for the s.s.p.d., here no assumption on S is needed except that E[S] < ∞, and the bound converges to zero as ρ ↓ 0 for fixed n. However, the bound does not decay faster in n(1 − ρ) 2 under the assumption of additional moments (on S), and requires inter-arrival times to be Markovian.
Our second result, for the steady-state number of busy servers, is as follows. We defer the proof to Section 8.6.2 of the supplemental appendix.
Theorem 7. Under the same assumptions as Theorem 6, for all x ∈ (0,
n− µ A µ S µ A µ S ], P Num service (∞) ≥ µ A µ S + x µ A µ S ≤ 1 2x .
The bound again has no large prefactors, but is restricted to Markovian arrivals, and unable to demonstrate faster decay rates in x under the assumption that more moments of S are finite.
Our next result is for M/P H/n + M queues, i.e. multi-server queues with Markovian interarrival times, Markovian abandonments, and phase-type service times. For this result we restrict to the class of phase-type service distributions, which are well-known to be dense within the space of all distributions (see Asmussen et al. [6]). This restriction enables us to apply certain technical results regarding e.g. existence and construction of the relevant stationary measures and processes. However, our actual bounds will not depend on any parameters of the phase-type distribution beyond the mean-interarrival and service time and abandonment rate, and could likely be extended to general service times by simple continuity arguments (using e.g. the results of Whitt [133]). Several past works have studied such queueing systems with abandonments using Lyapunov arguments (see e.g. Gamarnik and Stolyar [51], Dai et al. [34], Braverman and Dai [21]), but to our knowledge such a simple and explicit bound has not appeared previously in the literature.
First, let us more formally describe the relevant system. Let Q n a be a M/P H/n + M multiserver queue with Markovian inter-arrival times distributed as (the exponentially distributed r.v.)
A, service times distributed as the (phase-type) r.v. S, and patience times distributed as the
(exponentially distributed) r.v. B with θ = 1 E[B]
. Suppose the queue is initially empty. For a more formal description of such multi-server queues with abandonments, we refer the reader to e.g.
Gamarnik and Stolyar [51], Dai et al. [34], Dai and He [35], Braverman and Dai [21], Mandelbaum et al. [103]. It follows from the results of Dai et al. [34] that such a system is positive Harris recurrent, and the total number of jobs in Q n a (number in service + number waiting in queue) converges in distribution (as time goes to infinity, independent of the particular initial condition) to a steady-state r.v. Q n a (∞). Also, let L n a (∞) denote a r.v. distributed as the steady-state number of jobs waiting in queue. For such a system with abandonments, we again define ρ = µ A nµ S . We note that in general a system with abandonments will be stable even when ρ > 1, although here consider only the case ρ < 1 in analogy to our results without abandonments.
Then our result is as follows. We defer the proof to Section 8.6.3 of the supplemental appendix.
Theorem 8. Consider an M/P H/n + M queue Q n a with Markovian inter-arrival times having the same distribution as r.v. A, service times having the same distribution as phase-type r.v. S, and patience times having the same distribution as exponentially distributed r.v.
B with θ = 1 E[B] . Suppose also that µ A < nµ S . Then E L n a (∞) ≤ 2 π µ S θ ρ √ n. If in addition ρ ∈ [ 3 4 , 1 − 4 n ], then E L n a (∞) ≤ 2 µ S θ √ n exp − 1 4 n(1 − ρ) 2 .
Note that the first result above is most applicable for values of ρ in the Halfin-Whitt scaling (when 1 1−ρ scales as Θ( √ n)), while the second result is applicable in heavy-traffic more broadly. Our second result recaptures a scaling qualitatively similar to that previously shown for other related metrics in different particular asymptotic scaling regimes (see e.g. Wang et al. [132] and Goldberg [55]). The second result is also meaningful (i.e. yields bounds strictly less than unity) in the socalled quality-driven regime, with ρ ≥ 3 4 fixed and n → ∞. We also note the restriction ρ ∈ [ 3 4 , 1 − 4 n ] was chosen as a technical convenience, and different bounds (with a different exponent) could be derived under different assumptions.
Unfortunately, as the system with abandonments approaches a system without abandonments (i.e. θ ↓ 0), the bound of Theorem 8 becomes meaningless. For M/GI/n queues without abandonments, we now present a conjecture which would imply an analogous result. To state the conjecture, we first state a particular equation for E L(∞) . This equation follows directly (after some straightforward algebra) from a known equation for the steady-state expected work-in-system studied via drift arguments in several past works (see e.g. Hokstad [76], Scully et al. [120], Grosof et al. [62], Wang et al. [132], Hong et al. [77]), and for completeness we provide a proof in Section 8.6.4 of the supplemental appendix.
Theorem 9. Consider an M/GI/n queue Q n with with Markovian inter-arrival times having the same distribution as r.v. A, service times having the same distribution as r.v. S satisfying
E[S 2 ] < ∞, s.t. the c.d.f. of S is absolutely continuous. Suppose also that µ A < nµ S , and that Q(∞), Work(∞), and W service (∞) exist. Then E L(∞) equals 1 2 E[(Sµ S ) 2 ] ρ 1 − ρ + E Num service (∞) × E Work service (∞) − E Num service (∞) × Work service (∞) n(1 − ρ)E[S] . Theorem 8 would imply the simple bound E L(∞) ≤ 1 2 E[(Sµ S ) 2 ] ρ 1−ρ if the term E Num service (∞) × E Work service (∞) − E Num service (∞) × Work service (∞)(3)
was negative. We note that 1 2 E[(Sµ S ) 2 ] ρ 1−ρ closely resembles the steady-state expected number in queue in an M/GI/1 queue with inter-arrival times having the same distribution as A, and service times having the same distribution as S n . Indeed, that expectation equals
1 2 E[(Sµ S ) 2 ] ρ 2 1−ρ .
The fact that such relationships generally elucidate connections to such a sped-up single-server queue is well-known, see e.g. Hokstad [76], Scully et al. [120], Grosof et al. [62]. However, (3) is simply the negative of the covariance between the total work in service and the total number in service.
One would intuitively expect this covariance to be positive for non-pathological FCFS M/GI/n systems, and we indeed conjecture that such a result holds for a broad class of M/GI/n queues.
E Num service (∞) × E Work service (∞) ≤ E Num service (∞) × Work service (∞) , and hence E L(∞) ≤ 1 2 E[(Sµ S ) 2 ] ρ 1−ρ .
We leave a further study as an interesting open question, and note that tools from the theory of associated r.v.s (which have been used to prove related results in the literature) may be relevant here (see e.g. Baccelli et al. [9]). We note that bounds for this covariance are implicit (or in some cases explicit) in past work (see e.g. Hokstad [76], Scully et al. [120], Grosof et al. [62], Wang et al. [132], Hong et al. [77]), but do not seem to have the desired scaling when n → ∞ and ρ ↑ 1 simultaneously (at least in certain parameter regimes). We also note that closely related conjectures for multi-server queues involving similar (albeit perhaps less interpretable) covariance terms appear throughout the literature, and we refer the reader to Mori [104], Daley [42] for additional discussion.
Proof of our main results : bounds for L(∞) in Theorems 1 -2.
In this section we prove our central main results, i.e. the first part of Theorems 1 -2 in which
P L(∞) ≥ x 1−ρ is bounded.
To maximize readability, we proceed as follows. First, we sketch a high-level outline of the proof in Section 4.1. Second, we provide a more detailed proof (but still without most technical details), containing all of the most important auxiliary results and main flow of logic (albeit in many cases without their proofs), in Section 4.2. Third, we provide many of the most important technical details of the proofs (but still with many of the finer subarguments omitted) in the technical appendix Section 7. Finally, we defer many of the finer subarguments of these proofs to the supplemental appendix Section 8.
High-level outline.
We begin by sketching the high-level outline of our proof of the bounds for P L(∞) ≥ x 1−ρ appearing in Theorems 1 -2.
1. Use stochastic comparison results of Gamarnik and Goldberg [49]
to bound P L(∞) ≥ x by P sup t≥0 A e (t) − n i=1 N e,i (t) ≥ x .(4)
2. Use a union bound, and the connection between A e and A o , to bound (4) by
P sup t≥0 A o (t) − µ A t − 1 2 (n − µ A )t ≥ 1 2 x − 1 (5) + P sup t≥0 nt − n i=1 N e,i (t) − 1 2 (n − µ A )t ≥ 1 2 x ,(6)
thus separating the proof into an analysis of a supremum arising from the arrival process (5) and a supremum arising from a centered pooled equilibrium renewal process with renewal intervals distributed as service times (6). 3. Bound (5) by relating the supremum to a simple single-server queue and using known bounds for that setting (specifically a martingale inequality proven in Kingman [90]).
Conditionally bound (6) by proving that IF one could suitably bound
E[ nt − n i=1
N e,i (t)| r for some r > 2 (and all t) THEN one could bound (6) as required (using modifications of known maximal inequalities).
5. Combine the bound for (5) and conditional bound for (6) to conditionally bound (4). in Theorems 1 -2, as well as our bound for the number of busy servers (Theorem 4 ), follow from a very similar logic, and those proofs can be found in Section 5. We structure our proofs so the arguments used in proving our bounds for P L(∞) ≥ x 1−ρ in Theorems 1 -2 can be easily ported over to these other settings. Our bounds under additional assumptions on the moments of S (Theorem 3) follows by optimizing our bounds (by solving for the "best r" for each x) from Theorem 2, and we defer the proof to the supplemental appendix Section 8.3. Our results from Section 3 (i.e. Theorems 6 -9), whose proofs appear in the supplemental appendix Section 8.6, are derived using very different (and simpler) drift arguments. (4). In Gamarnik and Goldberg [49], the authors proved that L(∞) is stochastically dominated by the supremum of a certain one-dimensional random walk. This random walk arises from analyzing a modified queueing system in which an artificial arrival is added to the system whenever a server would otherwise go idle. To simplify notation the authors of Gamarnik and Goldberg [49] imposed the restriction that P(A = 0) = P(S = 0) = 0 (to preclude having to deal with simultaneous events). However, this restriction is unnecessary and the proofs of Gamarnik and Goldberg [49] can be trivially modified to accomodate this setting. As such, we state the relevant stochastic-comparison result of Gamarnik and Goldberg [49] without that unnecessary assumption.
Prove that one can indeed bound
E[ nt − n i=1
Lemma 1 (Gamarnik and Goldberg [49]). Suppose that µ A < nµ S , and that Q(∞) exists.
Then for all x > 0,
P L(∞) ≥ x ≤ P sup t≥0 A e (t) − n i=1 N e,i (t) ≥ x . 4.2.2. 2.
: Use a union bound, and the connection between A e and A o , to bound (4) by the sum of (5) and (6). Note that we may construct A e , A o on the same probability
space s.t. w.p.1, A e (t) ≤ 1 + A o (t) for all t ≥ 0.(7)
The above inequality follows by observing that the set of event times in A e , after the first event, is an ordinary renewal process. Next, we apply a straightforward union bound to reduce the problem of bounding (4) to that of bounding (5) and (6). We defer a formal proof to the supplemental appendix Section 8.7.
Lemma 2.
Suppose that E[S] = 1 and µ A < n. Then for all x > 2,
P sup t≥0 A e (t) − n i=1 N e,i (t) ≥ x ≤ P sup t≥0 A o (t) − µ A t − 1 2 (n − µ A )t ≥ 1 2 x − 1 + P sup t≥0 nt − n i=1 N e,i (t) − 1 2 (n − µ A )t ≥ 1 2 x . 4.2.3. 3.
: Bound (5) by relating the supremum to a simple single-server queue.
We now bound (5). Here we bound the corresponding supremum for general positive linear drift ν, but will later plug in 1 2 (n − µ A ). In particular, we prove the following.
Lemma 3. Suppose that E[A 2 ] < ∞. Then for all ν > 0 and x > 0, P sup t≥0 A o (t) − µ A t − νt ≥ x ≤ exp − .09 ν ν + µ A E[(Aµ A ) 2 ] −1 x .
Our proof proceeds by relating the supremum to the steady-state waiting time in a certain singleserver queue, and then applying a result of Kingman [90] bounding the relevant tail probabilities.
We defer the proof to the technical appendix Section 7.1. Lemma 4. Suppose that E[S] = 1, and that for some fixed integer n ≥ 1 and constants C 1 , C 2 > 0; r 1 > s > 1; and r 2 > 2:
(i) For all t ≥ 1, E | n i=1 N e,i (t) − nt| r 1 ≤ C 1 n r 1 2 t s . (ii) For all t ∈ [ 2 n , 1], E | n i=1 N e,i (t) − nt| r 2 ≤ C 2 (nt) r 2 2 .
Then for all ν > 0 and x ≥ 8,
P sup t≥0 nt − n i=1 N e,i (t) − νt ≥ x is at most 3.6 × (1 + 1 r 1 − s ) × (16 r 1 + 1 s − 1 ) r 1 +1 × C 1 n r 1 2 ν −s x −(r 1 −s) + (23 r 2 + 1 r 2 2 − 1 ) r 2 +1 × C 2 n r 2 2 (xν) − r 2 2 .
4.2.5. 5. : Conditionally bound (4). In this section, we prove that IF one could suitably
bound E[ nt − n i=1
N e,i (t)| r (for some r > 2 and all t), THEN one could bound (4) by combining our previous bound for (5) with our previous conditional bound for (6), as we have already bounded (4) by the sum of (5) and (6). The proof follows in a straightforward manner by using Lemma 4 to bound (6), and Lemma 3 to bound (5), combined with Lemma 2 and some straightforward algebra, and we omit the details.
Theorem 10. Suppose that E[S] = 1, and that for some fixed integer n ≥ 1 and constants C 1 , C 2 > 0; r 1 > s > 1; and r 2 > 2, the following conditions hold:
(i) µ A < n. (ii) For all t ≥ 1, E | n i=1 N e,i (t) − nt| r 1 ≤ C 1 n r 1 2 t s . (iii) For all t ∈ [ 2 n , 1], E | n i=1 N e,i (t) − nt| r 2 ≤ C 2 (nt) r 2 2 .
Then for all
x ≥ 18, P sup t≥0 A e (t) − n i=1 N e,i (t) ≥ x is at most 1.8 × (1 + 1 r 1 − s ) × (32 r 1 + 1 s − 1 ) r 1 +1 × C 1 n r 1 2 (n − µ A ) −s x −(r 1 −s) + 1.8 × (1 + 1 r 1 − s ) × (46 r 2 + 1 r 2 2 − 1 ) r 2 +1 × C 2 n r 2 2 (n − µ A ) − r 2 2 x − r 2 2 + 1.1 × exp − .045 n − µ A n + µ A E[(Aµ A ) 2 ] −1 x .E n i=1 N e,i (t) − nt r is at most .76 × E[S 2 ] + 1 r × 1032 r × r 2r + .21 × (E[S 2 ] + 1) × 516 r × r 1.5r × (E[S r ] + 1) × (nt) r 2 .
We defer the proofs to the supplemental appendix Section 8.12.
Lemma 6. Suppose that E[S] = 1, and E[S 2 ] < ∞. Then for all n ≥ 1, r ≥ 2, t ∈ [ 2 n , 1], E n i=1 N e,i (t) − nt r ≤ 5.2 × 35(1 + E[S 2 ]) r × r 2.5r × (nt) r 2 .(8)
We defer the proofs to the supplemental appendix Section 8.13. Thus suppose E[S] = 1. Then combining Lemmas 5 and 6 with some straightforward algebra, we conclude the following. For each integer n ≥ 1 s.t. n > µ A , the conditions of Theorem 10 are met with the following parameters:
r 1 = r 2 = r , s = r 2 , C 1 = .76 × E[S 2 ] + 1 r × 1032 r × r 2r + .21 × (E[S 2 ] + 1) × 516 r × r 1.5r × (E[S r ] + 1), C 2 = 5.2 × 35(1 + E[S 2 ]) r × r 2.5r .
Thus, applying Theorem 10 and some straightforward algebra (also using the fact that n n−µ A = 1
1−ρ and n−µ A n+µ A = n n+µ A (1 − ρ) ≥ 1 2 (1 − ρ))
, we find that for all x ≥ 18 and r ∈ (2, r * ), (4) is at most
906 × ( r + 1 r 2 − 1 ) r+1 × E[S 2 ] + 1 × 33024 r × E[S 2 ] + 1 r−1 × r 2.5r + r 1.5r × E[S r ] × x(1 − ρ) − r 2 +1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 (1 − ρ)x .
We now break into 2 cases, r ≤ 2.5 and r > 2.5. For r ≤ 2.5, we find (after some straightforward algebra) that for x ≥ 18 and r ∈ 2, min(2.5, r * ) , (4) is at most
5 × 10 18 × E[S 2 ] + 1 × E[S 2 ] + 1 r−1 + E[S r ] × ( r 2 − 1) −(r+1) × (1 − ρ)x − r 2 + 1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 (1 − ρ)x ;
while for r ∈ (2.5, r * ), using the easily verified fact that r+1 r 2 −1 ≤ 14 for r ≥ 2.5, we find that for x ≥ 18 and r ∈ (2.5, r * ), (4) is at most
1.3 × 10 4 × (E[S 2 ] + 1) × 4.6 × 10 5 r × (E[S 2 ] + 1) r−1 × r 2.5r + r 1.5r × E[S r ] × (1 − ρ)x − r 2 + 1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 (1 − ρ)x .
Noting that the bounds are anyways at least one for x ∈ (0, 18), and combining with the fact that Here we show that a certain enhancement to the bounds of Gamarnik and Goldberg [49] can overcome this problem. The enhancement is based on the intuition that the probability that an n-server queueing system has at least n jobs in system is at most the probability that an n ′ -server queueing system has at least n jobs in system for n ′ < n. But the bounds of Lemma 1 DO yield meaningful bounds for the latter quantity, as it is equivalent to the probability that an n ′ -server system has at least n − n ′ jobs waiting in queue. We defer the proof to the technical appendix Section 7.3.
Lemma 7.
Suppose that for the GI/GI/n queue with inter-arrival times distributed as the r.v.
A and service times distributed as the r.v. S, it holds that µ A < nµ S , and Q(∞) exists. Then for all n ′ ∈ {1, . . . , n} and x ≥ n ′ − n, In this section, we again suppose without loss of generality (by rescaling) that E[S] = 1. As a direct corollary of Lemma 7 (by plugging in x = 0, n ′ = n − ⌊ 1 2 (n − µ A )⌋), we conclude the following bound for the s.s.p.d.
P Q(∞) − n ≥ x ≤ P sup t≥0 A e (t) − n ′ i=1 N e,i (t) ≥ x + (n − n ′ ) .P Q(∞) ≥ n ≤ P sup t≥0 A e (t) − n−⌊ 1 2 (n−µ A )⌋ i=1 N e,i (t) ≥ ⌊ 1 2 (n − µ A )⌋ .
Note that Corollary 5 reduces bounding the s.s.p.d. to bounding P sup t≥0 A e (t) − n ′ i=1 N e,i (t) ≥ x ′ for x ′ = ⌊ 1 2 (n − µ A )⌋ and n ′ = n − ⌊ 1 2 (n − µ A )⌋ some integer different from the number of servers n in the original system. None-the-less, we can still apply those parts of Theorems 1 -2 which we have already proven, i.e. the tail bounds for the queue length, with these different parameters to complete the proofs of our bounds for the s.s.p.d. Here we implicitly use the fact that the relevant bounds from Theorems 1 -2 are actually bounds for sup t≥0 A e (t) − n i=1 N e,i (t) . Proof of bounds for s.s.p.d. in Theorems 1 -2. First, suppose ρ ≤ 1 − 4 n , which implies that
1 4 (n − µ A ) ≤ ⌊ 1 2 (n − µ A )⌋ ≤ 1 2 (n − µ A ). Let x ′ = ⌊ 1 2 (n − µ A )⌋ and n ′ = n − ⌊ 1 2 (n − µ A )⌋.
Then it follows from some straightforward algebra that x ′ (1 − ρ n ′ ) is at least
1 4 (n − µ A ) × 1 − µ A n − 1 2 (n − µ A ) = 1 4 (n − µ A ) 2 n + µ A ≥ 1 8 (n − µ A ) 2 n = n 8 (1 − ρ) 2 .
Combining with some straightforward algebra, Corollary 5, and Theorems 1 -
x ′ = µ A − n + x √ µ A , n ′ = ⌈µ A + x 2 √ µ A ⌉. Noting that x ′ ≥ n ′ − n by construction (as x √ µ A > x 2 √ µ A )
, and supposing that n ′ ≤ n (which we will enforce by requiring x ∈ [0, 2 n−µ A −1 √ µ A ]), we may thus apply Lemma 7 with n,
x ′ = µ A − n + x √ µ A , n ′ = ⌈µ A + x 2 √ µ A ⌉. As Q(∞) − n ≥ x ′ ↔ Q(∞) ≥ µ A + x √ µ A , and x ′ + (n − n ′ ) ≥ x 2 √ µ A − 1,x ∈ [0, 2 n−µ A −1 √ µ A ], it holds that P Q(∞) ≥ µ A + x √ µ A ≤ P sup t≥0 A e (t) − ⌈µ A + x 2 √ µ A ⌉ i=1 N e,i (t) ≥ x 2 √ µ A − 1 .
Note that Corollary 6 reduces bounding the number of busy servers to bounding ; and (2) µ A ≥ 1. Next, note that
P sup t≥0 A e (t) − n ′′ i=1 N e,i (t) ≥ x ′′ for x ′′ = x 2 √ µ A − 1 and n ′′ = ⌈µ A + x 2 √ µ A ⌉.x ′′ (1 − ρ n ′′ ) = x 2 √ µ A − 1 × 1 − µ A ⌈µ A + x 2 √ µ A ⌉ ≥ x 4 √ µ A × 1 − µ A µ A + x 2 √ µ A by our assumptions that x ≥ 4 and µ A ≥ 1 = x 4 √ µ A × x 2 √ µ A µ A + x 2 √ µ A ≥ x 4 √ µ A × x 2 √ µ A µ A + 1 2 √ µ A × √ µ A by our assumption that x ≤ 1 2 √ µ A ≥ x 2 16 .
The desired result then follows by using the tail bounds for the queue length of Theorem 2 with
parameters n ′′ , x ′′ to bound P sup t≥0 A e (t) − n ′′ i=1 N e,i (t) ≥
x ′′ , along with some straightforward algebra.
Conclusion and future research directions.
In this paper, we proved the first simple and explicit bounds for GI/GI/n queues which scale as and we obtain even stronger results under the assumption that all moments of S exist and satisfy a mild growth rate assumption. Our bounds scale gracefully even when the number of servers grows large and the traffic intensity converges to unity simultaneously, as in the Halfin-Whitt scaling regime. Some of our bounds scale better than 1 1−ρ in certain asymptotic regimes, for which we also prove bounds for the tail of the steady-state number in service. We also prove several additional bounds using drift arguments (which have much smaller pre-factors), and point out a conjecture which would imply further related bounds and generalizations.
Our results leave many interesting directions for future research.
• Our demonstrated tail decay rates are suboptimal, and a bound which is uniformly accurate (in n, ρ, and x) across all scaling regimes remains elusive. This is also true regarding our bounds for the s.s.p.d., which are similarly suboptimal. 1 • We conjecture that our approach can be modified to yield general and explicit bounds with an appropriate analogue of 1 1−ρ scaling for a broad range of queuing networks. Although we are not aware of any simple and explicit analogues of Kingman's bound for queueing networks conjectured in the literature, we note that past work on heavy-traffic in queueing networks suggests that the number in queue at each station i should scale as 1 1−ρ i with ρ i the effective traffic intensity at that station (as dictated by the so-called traffic equations, see e.g. Reiman [113], Mandelbaum et al. [102], Gamarnik and Zeevi [52], Dai et al. [36]). We leave a formal investigation along these lines as an interesting direction for future research, but do provide the sketch of a plausible approach in the supplemental appendix Section 8.14. It is also interesting to ask for what other queueing systems our approach can be implemented. For example, in the parallel work Goldberg [59], the authors extend the stochastic comparison approach of Gamarnik and Goldberg [49] to certain multi-server systems with abandonments and hyper-exponentially distributed service times. The authors have also extended this approach to heavy-tailed systems (in the Halfin-Whitt regime) in the parallel work Goldberg [57]. 2 Understanding the complete set of systems to which our stochastic comparison approach (or a suitable modification thereof) can be applied to derive simple and explicit bounds remains an interesting direction for future research.
Broader connections to the applied probability and operations research communities.
Taking a broader view of the literature not just on queueing, but on applied probability and operations research more broadly, let us reflect on some of the high-level take-aways of this work. [55] (with the present manuscript modifying and using those results). In this version of the present manuscript these results instead appear in the present manuscript, and will be cited and used as appropriate by a new version of Goldberg [55], making the present manuscript self-contained. We note that in contrast to the present work, Goldberg [55] is restricted to asymptotic results in the Halfin-Whitt regime (and furthermore does not study the tail or mean of the queue length).
• Our work provides an example of how "quantifying" a simple coupling argument yields explicit bounds for a fundamental model. Such an approach has been sucessful in several operations research models recently, and we point the reader to Xin et al. [149], Vera et al. [131].
• Our work provides an example of a setting where a powerful inuition/scaling for a very simple "base model" (here 1 1−ρ scaling for single-server queues) carries over to natural generalizations more relevant in practice (here multi-server queues).
• Our work contextualizes and compares many approaches taken to multi-server queues, including stochastic comparison and Lyapunov drift, also surveying the relevant results. As these different approaches appear in the study of many stochastic models, lessons learned from the queueing setting can inform the study of other stochastic models. Our work also provides a useful reference for the vast literature on multi-server queues.
• Our work touches on meta-questions about how to conceptualize the trade-off between simplicity/explicitness, and accuracy, in approximations for operations research models.
Final thoughts on the trade-off between simplicity and accuracy in operations research models. Several pressing big-picture questions along these lines remain unresolved in the study of stochastic models broadly.
• What is the right notion of "complexity" in approximations for such models?
• How should one compare analytical bounds with results derived from simulation and numerical procedures?
• What is the formal algorithmic complexity of both numerical computation, and simulation, for the limiting processes which arise?
• And last, but by no means least, which types of approximations may be most useful in practice? 7. Technical Appendix. 7.1. Bound (5) by relating to a GI/GI/1 queue, and proof of Lemma 3.
In this section we fill in the details in our proof of Lemma 3, which we used to bound the supremum associated with the arrival process, (5). Our proof proceeds as follows. First, we relate the desired supremum to a discrete-time supremum associated with k − k i=1 (µ A A i ). Second, we observe that this supremum is the steady-state waiting time in a certain single-server queue, and then apply a result of Kingman [90] bounding the relevant tail probabilities. We also prove a novel result showing that a certain exponent appearing in Kingman's results can be bounded only in terms of the first two moments of A and the associated drift parameter, which may be of independent interest.
We begin with the following lemma relating the supremum appearing in (5) to the steady-state waiting time in a certain single-server queue, whose proof we defer to the supplemental appendix Section 8.8. We note that the result follows in a straightforward manner by applying the basic definitions of renewal processes and some standard transformations.
Lemma 8. For all ν > 0 and x > 0,
P sup t≥0 A o (t) − µ A t − νt ≥ x (9) equals P sup k≥0 µ A µ A + ν k − k i=1 (µ A A i ) ≥ x(1 + ν µ A ) −1 .(10)
We note that the supremum term appearing in (10) is exactly the steady-state waiting time in a single-server queue with inter-arrival times i.i.d. distributed as Aµ A , and service times the
constant µ A µ A +ν .
Next, we recall the relevant result of Kingman for bounding the tails of such suprema. We state the result in a specific form that we will need it, but note that the results of Kingman [90] hold in a more general setting. We note that the results of Kingman [90]
P sup k≥0 (c × k − Z k ) ≥ x ≤ exp(−θx) for all x > 0.
Next, we show that one can explicitly characterize a θ s.t. E exp θ(c − Z 1 ) ≤ 1 in terms of c and the first two moments of Z 1 , using a Taylor expansion. To our knowledge the result is novel, and may be of independent interest in the analysis of single-server queues. We defer the proof to the supplemental appendix Section 8.9.
= µ A µ A +ν , {Z i , i ≥ 1} = {µ A A i , i ≥ 1}
to Lemma 8, after some straightforward algebra.
Conditionally bound (6) and proof of Lemma 4.
In this section we fill in the details in our proof of Lemma 4, which provides conditional bounds on the supremum associated with the pooled renewal process, (6). We proceed as follows.
• First,we prove a conditional result, which asserts that if the supremum of a centered continuous-time stochastic process can be controlled over : (1) sets of consecutive integers; and
(2) intervals of size at most 1, then one can bound the tail of the all-time supremum of the same centered stochastic process with any negative linear drift. We will ultimately apply this to the • Third, we combine the above to complete the proof of Lemma 4.
centered process nt − n i=1 N e,i (t) with drift − 1 2 (n − µ A ). • Second,
7.2.1.
Proof that controlling the supremum of a centered stochastic process over sets of consecutive integers, and short intervals, implies control of its all-time supremum with linear drift. In this section we prove a conditional result, which converts bounds for the supremum of a suitable stochastic process over sets of consecutive integers, and intervals of length at most one, to bounds for the general all-time supremum (with negative drift). We will ultimately use this result to bound (6), the supremum associated with nt − n i=1 N e,i (t). We note that similar arguments have been used to bound all-time suprema of stochastic processes (Szczotka [127]), also in the heavy-tailed setting (Szczotka and Woyczynski [128]). We include a self-contained exposition and proof in the supplemental appendix Section 8.10.
Lemma 11. Let {φ(t), t ≥ 0} be a stochastic process with stationary increments such that φ(0) = 0. Here, stationary increments means that for all s 0 ≥ 0, {φ(s + s 0 ) − φ(s 0 ), s ≥ 0} has the same distribution (on the process level) as {φ(s), s ≥ 0}. Suppose there exist strictly positive finite constants H 1 , H 2 , s, r 1 , r 2 and Z ≥ 0 such that r 1 > s > 1 and r 2 > 2, and the following two conditions hold:
(i) For all integers m ≥ 1 and real numbers x ≥ Z,
P( max j∈{0,...,m} φ(j) ≥ x) ≤ H 1 m s x −r 1 .
(ii) For all t 0 ∈ (0, 1] and x ≥ Z,
P( sup 0≤t≤t 0 φ(t) ≥ x) ≤ H 2 t r 2 2 0 x −r 2 .
Then for any drift parameter ν > 0, and all x ≥ 4Z,
P sup t≥0 (φ(t) − νt) ≥ x ≤ 12(1 + 1 r 1 − s ) H 1 4 r 1 x −(r 1 −s) ν −s + H 2 4 r 2 (xν) − r 2 2 .
Proof that if the central moments of nt
− n i=1 N e,i (t)
can be suitably bounded then the conditions of Lemma 11 hold. We now prove that if the central moments of nt − n i=1 N e,i (t) can be suitably bounded then the conditions of Lemma 11 hold, i.e. one can control the supremum of the centered pooled renewal process s.t. one can plug in nt − n i=1 N e,i (t) for φ(t) in Lemma 11. Our proof proceeds as follows.
First, we prove a relevant conditional bound for the supremum of of centered pooled renewal process over consecutive integers, as required by the first condition of Lemma 11. We defer the proof to the supplemental appendix Section 8.11.
Lemma 12. Suppose that E[S] = 1, and that for some fixed n ≥ 1, C 1 > 0, s > 1, and r 1 ≥ s, the following condition holds:
(i) For all t ≥ 1, E | n i=1 N e,i (t) − nt| r 1 ≤ C 1 n r 1 2 t s .
Then it also holds that for all non-negative integers k and x > 0,
P max j∈{1,...,k} nj − n i=1 N e,i (j) ≥ x ≤ 2.25 × 2 s × (2 r 1 + 1 s − 1 ) r 1 +1 × C 1 n r 1 2 k s x −r 1 .
Second, we prove a relevant conditional bound for the supremum of of centered pooled renewal process over small intervals, as required by the second condition of Lemma 11. We defer the proof to the supplemental appendix Section 8.11.
Lemma 13. Suppose that E[S] = 1, and that for some fixed n ≥ 1, C 2 > 0 and r 2 > 2, the following condition holds:
(i) For all t ∈ [ 2 n , 1], E | n i=1 N e,i (t) − nt| r 2 ≤ C 2 (nt) r 2 2 .
Then it also holds that for all t 0 ∈ [0, 1] and x ≥ 4,
P sup t∈[0,t 0 ] nt − n i=1 N e,i (t) ≥ x(11)
is at most .8 × (5.7 r 2 +1 r 2
2 −1 ) r 2 +1 × C 2 × (nt 0 ) r 2 2 x −r 2 .
We note that both Lemmas 12 and 13 will follow from a general maximal inequality of Longnecker and Serfling [98] which converts bounds on the moments/tail of the partial sums of a stochastic process to bounds on the supremum of that process (see Lemma 18 of the supplemental appendix Section 8.11). Let us also note that although pooled renewal processes are a special family of stochastic processes, they can still exhibit complex behaviors, and it is not clear how to prove the necessary explicit bounds (at the level of precision required to prove the desired 1 1−ρ scaling) without such general tools from probability theory. Let us also point out that intuitively, statements of the form E | n i=1 N e,i (t) − nt| r ≤ C(nt) r 2 capture a notion that the correlations/fluctuations of the process n i=1 N e,i (t) can be sufficiently controlled, which holds here due to the nice properties of renewal processes.
Proof of Lemma 4.
Proof of Lemma 4: By our assumptions and Lemma 12, for all non-negative integers k and
x > 0, P max j∈{1,...,k} nj − n i=1 N e,i (j) ≥ x ≤ 2.25 × 2 s × (2 r 1 + 1 s − 1 ) r 1 +1 × C 1 n r 1 2 k s x −r 1 .
Next, by our assumptions and Lemma 13, for all t 0 ∈ [0, 1] and x ≥ 4,
P sup t∈[0,t 0 ] nt − n i=1 N e,i (t) ≥ x ≤ .8 × (5.7 r 2 + 1 r 2 2 − 1 ) r 2 +1 × C 2 × (nt 0 ) r 2 2 x −r 2 .
It then follows from our assumptions that the conditions of Lemma 11 are met with φ(t) = nt − n i=1 N e,i (t), s, r 1 , r 2 , ν their given values, Z = 4,
H 1 = 2.25 × 2 s × (2 r 1 + 1 s − 1 ) r 1 +1 × C 1 n r 1 2 , H 2 = .8 × (5.7 r 2 + 1 r 2 2 − 1 ) r 2 +1 C 2 n r 2 2 .
Combining the above with the implications of Lemma 11 and some straightforward algebra completes the proof.
Proof of Lemma 7.
To prevent having to make additional unnecessary (albeit minor) assumptions about the existence of steady-state distributions for different number of servers (both n and n ′ as opposed to only n),
we first state a small variant of Lemma 1 which also follows directly from the results of Gamarnik and Goldberg [49]. Let Q n ′ res be the FCFS GI/GI/n ′ queue with inter-arrival times having the same distribution as r.v. A, service times having the same distribution as r.v. S, and the following initial conditions. The time until the first arrival is distributed as R(A), and there are exactly n ′ jobs in service, with initial residual service times drawn i.i.d. distributed as R(S), independent from the arrival process. Let Q n ′ res (t) denote the number in system at time t in Q n ′ res .
Lemma 14 (Gamarnik and Goldberg [49]
P Q n ′ res (t) − n ′ ≥ x ≤ P sup 0≤s≤t A e (s) − n ′ i=1 N e,i (s) ≥ x .
We will also need to define an additional queueing system. Let Q n,n ′ res be the FCFS GI/GI/n queue with inter-arrival distribution A, service time distribution S, and the following initial conditions. The time until the first arrival is distributed as R(A). There are exactly n ′ jobs in service (note here n − n ′ servers are initially empty in Q n,n ′ res ), with initial residual service times drawn i.i.d. distributed as R(S), independent from the arrival process. Let Q n,n ′ res (t) denote the number in system at time t in Q n,n ′ res . Finally, we will need a well-known stochastic comparison result for multi-server queues. In particular, it follows from known results in the stochastic comparison of multi-server queues as one varies the number of servers, see e.g. Berger et al. [14] Theorem 1 and also Whitt [135], that a pathwise stochastic comparison holds between Q n,n ′ res and Q n ′ res . Here we only use the weaker implied distributional comparison, i.e. that P Q n,n ′ res (t) ≥ z ≤ P Q n ′ res (t) ≥ z for all t, z ≥ 0.
With Lemma 14 and (12) in hand, we now complete the proof of Lemma 7.
Proof of Lemma 7: Notice that Q n ′ res satisfies the conditions of Lemma 14, and thus for all t ≥ 0 and z ≥ 0,
P Q n ′ res (t) − n ′ ≥ z ≤ P sup 0≤s≤t A e (s) − n ′ i=1 N e,i (s) ≥ z .
Combining the above, we conclude that for all t ≥ 0 and x ≥ n ′ − n,
P Q n,n ′ res (t) − n ≥ x ≤ P Q n ′ res (t) − n ≥ x = P Q n ′ res (t) − n ′ ≥ x + n − n ′ ≤ P sup 0≤s≤t A e (s) − n ′ i=1 N e,i (s) ≥ x + n − n ′ .
As our assumption that Q n (∞) exists (which by our definitions must be independent of initial conditions) implies that {Q n,n ′ res (t), t ≥ 0} converges in distribution to Q n (∞), and applying the monotonicity of the supremum operator and continuity of probability measures, completes the proof.
8. Supplemental Appendix.
Bounds for higher order moments.
Theorems 1 and 2 also imply bounds for E[L s (∞)] for all s < r * 2 , by integrating the corresponding bounds for the tail of the queue length (and using the tail integral form for higher moments, see e.g. Nadarajah et al. [105]). To most accurately state the implied bounds for higher moments, the relevant expression should be the integral of an infimum over r (to derive a bound for each
x, which is then integrated). The associated statement is somewhat cumbersome, and thus we do not state it here out of considerations of readability. Instead, we present an illustrative implication of our main results. The result follows essentially immediately from our main result Theorem 2, the fact that E[X r ] = r ∞ 0 x r−1 P (S > x)dx for any non-negative r.v. X (see e.g. Nadarajah et al. [105]), the fact that Γ(1 + x) ≤ 1.5x x for all x ≥ 0 (which follows from the bounds of Batir [11] Theorem 2.3), and some straightforward algebra and calculus, and we omit the details.
Corollary 7.
Under the same assumptions as Theorem 2, and supposing in addition that r * <
∞ and E[S r * ] < ∞, for all ǫ ∈ (0, 1 4 ), it holds that E L r * 2 −ǫ (∞) is at most 4 × 10 4 × r * ǫ × E[(Sµ S ) 2 ] × 10 6 r * × E[(Sµ S ) 2 ] r * −1 × r * 2.5r * + r * 1.5r * × E[(Sµ S ) r * ] + 1.5 × 15 E[(Aµ A ) 2 ] r * × r * .5r * +1 × ( 1 1 − ρ ) r * 2 −ǫ .
Let us make some additional clarifications regarding the above results.
• Analogous results for E[L s (∞)] also follow from our results (for all s) when r * = ∞, although we do not present those results here. Indeed, to present those results most accurately would require integrating an infimum, leading to somewhat combersome expressions.
• The ( 1 1−ρ ) s scaling of E[L s (∞)] is consistent with known results for the single-server queue, and those limited settings of the multi-server queue where results are available.
• It is easily verified that all prefactors appearing in Corollary 7 are asymptotically dominated by terms of the form r * O(r * ) . Let us note that such an r Ω(r) scaling is in fact unavoidable, as even for the M/M/1 queue with ρ = 1 e , one has that (for integer r ≥ 2)
E L r (∞) = (1 − 1 e ) ∞ k=1 k r × e −k ≥ (1 − 1 e ) ∞ 1 x r × e −x dx ≥ (1 − 1 e ) ∞ 0 x r × e −x dx − 1 = (1 − 1 e ) ∞ 0 x r × e −x dx − 1 = (1 − 1 e ) r! − 1 ,
where it follows from well-known bounds for the factorial function that r! = r Ω(r) (Beesack [13]).
• It is known that under the assumptions of Corollary 7, E[L s (∞)] < ∞ for a range of s strictly greater than r * 2 , and in fact the interplay between which moments of S are finite, the number of servers, and which moments of L(∞) are finite is quite subtle (Scheller-Wolf [118], Scheller-Wolf et al. [119]). However, it is not known how these higher moments scale with 1 1−ρ . It is possible that at the level of generality considered in our results (where e.g. only few moments of S may be finite), a complex behavior could in principle arise where some of these higher moments (say the sth moment) would no longer scale as ( 1 1−ρ ) s due to a complex interplay between s, r * , and n. Better understanding the scaling of higher moments, and the relation of our results to those of Scheller-Wolf [118], Scheller-Wolf et al. [119], remains an interesting open question.
• Although for the single-server queue simple recursive schemes are known for expressing higher moments of the steady-state waiting time in terms of A and S (see e.g. Takacs [130], Gong et al. [60]), even in that setting it is not clear that explicit bounds (not given as recursive formulas) have appeared previously in the literature, especially in the setting where only few moments of S are finite. Chawla et al. [30].
Implications for the Halfin-Whitt scaling regime and connections to
In this section, we state several implications for queues in the Halfin-Whitt regime, in which ρ scales as 1 − Bn − 1 2 for some excess capacity parameter B > 0. Although all of our main results can be customized to this setting, here we present the illustrative example of our Theorem 3. For all x > 0, P L(∞) ≥ xn 1 2 is at most
f × exp − δ(Bx) 1 3+2c + 1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 Bx ;
and the s.s.p.d. is at most g. [105], it follows that
f × exp − δB 2 3+2c + 1.1 × exp − .0028 E[(Aµ A ) 2 ] −1 B 2 .E[S r ] = r ∞ 0 x r−1 P (S > x)dx ≤ r ∞ 0 x r−1 a exp(−bx c )dx = ar ∞ 0 x r−1 exp(−bx c )dx = ar ∞ 0 x r−1 exp(−( x b − 1 c ) c )dx.
Let Z denote a Weibull r.v. with scale parameter α = b − 1 c , shape parameter β = c, and location parameter 0, see Lehman [97].
Then P (Z > x) = exp(−( x b − 1 c ) c )
for all x > 0, and combining with the above we find that [97]). Combining with the fact that Γ(1 + x) ≤ 1.5x x for all x ≥ 0 (which follows from the bounds of Batir [11] Theorem 2.3), we find that
E[S r ] = aE[Z r ] = a × (b − 1 c ) r × Γ(1 + r c ) (LehmanE[S r ] ≤ a × (b − 1 c ) r × 1.5 × ( r c ) r c = 1.5 × a × (bc) − 1 c r × r 1 c r ,
completing the proof of the first part of the observation. For the part regarding the fact that sup t>0 E[S − t|S > t] = ∞ for a r.v. S satisfying P (S > x) = exp(−bx c ) for all x > 0 and some c < 1, this follows from the results of Downey [45], which imply that any r.v. with uniformly bounded mean residual life must have an exponentially decaying tail (in contrast to the Weibull distribution with c < 1 whose tails are heavier than any exponential).
Proof of Theorem 3
We first prove the bound for P L(∞) ≥ x 1−ρ . First, let us derive a slightly simpler bound than that of Theorem 2, to simplify our analysis. Note that as c ≥ 1, the dominant term E[(Sµ S ) 2 ] r−1 × r 2.5r +r 1.5r E[(Sµ S ) r ] will scale as r (1.5+c)r . It is easily verified (from Theorem 2) that our results also imply P L(∞) ≥ x 1−ρ is at most inf r>2.5
4 × 10 4 × a × 10 6 E[(Sµ S ) 2 ]b r r (1.5+c)r × x − r 2(13)+1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 x ,
where the terms optimized over in (13) have the same r (1.5+c)r scaling. This is the bound which we will use in our analysis. Let f
∆ = 4 × 10 4 × a, g ∆ = 10 6 E[(Sµ S ) 2 ]b, h ∆ = 1.5 + c.
Then we may rewrite the above as the fact that for all
x > 0, P L(∞) ≥ x 1−ρ is at most inf r>2.5 f × g r × r hr × x − r 2 (14) +1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 x , For a given x > 0, letr x ∆ = e −1 g − 1 h x 1
2h . It turns out that looking atr x is motivated by the fact that it can be shown to be the optimal solution to (14) when the infimum is taken over r > 0 (instead of r > 2.5). We will not need this fact here, although for completeness we prove this at the end of this section. Note that for all x s.t.r x > 2.5, the above implies that P L(∞) ≥ x 1−ρ is at most
f × gr x ×r hrx x × x −r x 2 +1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 x , = f exp − e −1 hg − 1 h x 1 2h (15) +1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 x ,
completing the proof of Theorem 3 in that case. However, note thatr x ≤ 2.5 iff x ≤ (2.5e) 2h g 2 .
However, for any x ≤ (2.5e) 2h g 2 , it holds that
f exp − e −1 hg − 1 h x 1 2h ≥ f exp − e −1 hg − 1 h × (2.5eg 1 h ) = f exp(−2.5h) ≥ 1,
where the final inequality can be easily verified for our particular choice of f, h. As P L(∞) ≥
4 × 10 4 × a × 10 7 E[(Sµ S ) 2 ]b r r (1.5+c)r × n(1 − ρ) 2 − r 2 +1.1 × exp − .0028 E[(Aµ A ) 2 ] −1 n(1 − ρ) 2 ,
and we omit the details. Combining the above completes the proof.
8.3.1.
Proof thatr x is the claimed minimizer. Here, for completeness (and to motivate our use ofr x ), we prove thatr x is the the claimed minimizer.
Lemma 15. Suppose f, g, h ≥ 1 are some real numbers. Then for all x > 1,r x ∈ arg min r>0 f × g r × r hr × x − r 2 .
Proof : As it will have the same minimizer, let us take logarithms, and consider finding the r minimizing log(f ) + r log(g) + hr log(r) − r 2 log(x). As d dr
log(f ) + r log(g) + hr log(r) − r 2 log(x) = log(g) + h log(r) + h − 1 2 log(x),
we find that log(f ) + r log(g) + hr log(r) − r 2 log(x) is a convex function of r, which will attain its minimum where log(g) + h log(r) + h − 1 2 log(x) = 0, i.e. at
r ∆ = exp 1 2h log(x) − log(g) h − 1 =r x ,
completing the proof.
Proof of Corollary 3.
Proof of Corollary 3 : Combining the two bounds of Corollary 1, along with some straightforward algebra and the tail integral formula for expected value, we conclude that
(1 − ρ)E[L(∞)] is at most n(1−ρ) 2 0 8 × 10 28 × E[(Sµ S ) 2 ] × E[(Sµ S ) 2 ] 2 + E[(Sµ S ) 3 ] × n(1 − ρ) 2 −1.5 + 1.1 × exp − .0028 E[(Aµ A ) 2 ] −1 n(1 − ρ) 2 dx + ∞ n(1−ρ) 2 8 × 10 25 × E[(Sµ S ) 2 ] × E[(Sµ S ) 2 ] 2 + E[(Sµ S ) 3 ] × x −1.5 + 1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 x dx ≤ 8 × 10 28 × E[(Sµ S ) 2 ] × E[(Sµ S ) 2 ] 2 + E[(Sµ S ) 3 ] × n(1 − ρ) 2 −.5 + 1.1 × n(1 − ρ) 2 × exp − .0028 E[(Aµ A ) 2 ] −1 n(1 − ρ) 2 + 1.6 × 10 26 × E[(Sµ S ) 2 ] × E[(Sµ S ) 2 ] 2 + E[(Sµ S ) 3 ] × n(1 − ρ) 2 −.5 + 49E[(Aµ A ) 2 ] exp − .0225 E[(Aµ A ) 2 ] −1 n(1 − ρ) 2 ≤ 8.001 × 10 28 × E[(Sµ S ) 2 ] × E[(Sµ S ) 2 ] 2 + E[(Sµ S ) 3 ] × n(1 − ρ) 2 −.5 + 1.1 × n(1 − ρ) 2 + 49E[(Aµ A ) 2 ] exp − .0028 E[(Aµ A ) 2 ] −1 n(1 − ρ) 2 ,
and thus E[L(∞)] is at most
8.001 × 10 28 × E[(Sµ S ) 2 ] × E[(Sµ S ) 2 ] 2 + E[(Sµ S ) 3 ] × n(1 − ρ) 2 −.5 + 1.1 × n(1 − ρ) 2 + 49E[(Aµ A ) 2 ] exp − .0028 E[(Aµ A ) 2 ] −1 n(1 − ρ) 2 × 1 1 − ρ .
Combining with the fact that e −x ≤ 2 x 2 for all x > 0 (by a simple Taylor series expansion), which implies (after some straightforward algebra) that
1.1 × n(1 − ρ) 2 + 49E[(Aµ A ) 2 ] exp − .0028 E[(Aµ A ) 2 ] −1 n(1 − ρ) 2 is at most 2 × 49E[(Aµ A ) 2 ] × n(1 − ρ) 2 −.5 if n(1 − ρ) 2 ≥ 10 6 × E[(Aµ A ) 2 ]
2 completes the proof. Proof of Corollary 4 : Since E[S r ] < ∞, Theorem 2 implies that P L(∞) ≥ x 1−ρ is at most
2 × 10 4 × E[(Sµ S ) 2 ] × 10 6 r × E[(Sµ S ) 2 ] r−1 × r 2.5r + r 1.5r × E[(Sµ S ) r ] × x − r 2 + 1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 x .
A Taylor expansion implies that
1.1 exp − .0225 E[(Aµ A ) 2 ] −1 x ≤ ≤ 1.1 × ⌈ r 2 ⌉! × ( 1 .0225 ) r 2 +1 × E[(Aµ A ) 2 ] ⌈ r 2 ⌉ × x − r 2 .
Further applying the fact that ⌈x⌉! ≤ 2x x for all x ≥ 1 (which follows from known bounds for the factorial function and some straightforward algebra, see e.g. Beesack [13], Batir [11]), we conclude that
1.1 × exp − .0225 E[(Aµ A ) 2 ] −1 x ≤ 108 × 5 r × r r 2 × E[(Aµ A ) 2 ] r 2 .
Further combining with some straightforward algebra completes the proof. sequence of an exponential distribution, since even when A, S are uniformly bounded the queue length itself is not and will have an exponential decay by known resuilts for single-server queues).
The desired result will immediately follow, since if c r did not similarly scale as r Ω(r) a contradiction would be reached. Before proving Theorem 5, let us recall a lower bound on the tail of the waiting time in a single-server queue proven in Kingman [92].
P sup n≥0 n i=1 (S i − A i ) ≥ x ≥ exp(−θ * B) exp(−θ * x).
With Lemma 16 in hand, we now complete the proof of Theorem 5.
Proof of Theorem 5 : Consider the single-server queue in which A is distributed uniformly on (2) P L(∞) ≥ x ≥ P ⌊Work(∞)⌋ − 1 ≥ x ≥ P Work(∞) ≥ x + 2 . It follows that for all x > 0,
P L(∞) ≥ x ≥ P W (∞) ≥ x + 4 .
Combining with Lemma 16, we conclude that for all x > 0,
P L(∞) ≥ x ≥ e −θ * (x+5) ≥ exp(−14.
3) × exp(−2.86x). It then follows from the tail integral form for higher moments ( [105]) and known bounds for the factorial function (Beesack [13], Batir [11]) that for all r > 16,
E[L r 4 (∞)] ≥ r 4 ∞ 0 x r 4 −1 exp(−14.3) × exp(−2.86x)dx ≥ exp(−14.3) × r 4 × 2.86 − r 4 × ⌊ r 4 − 1⌋! ≥ exp(−14.3) × r 4 × 2.86 − r 4 × ⌊ r 4 − 1⌋ ⌊ r 4 −1⌋ × exp(−⌊ r 4 − 1⌋) ≥ exp(−14.3) × r 4 × (2.86e) − r 4 × ( r 8 ) r 8 ≥ exp(−14.3) × 2.2 −r × r 1 8 r .
Now, suppose that for all r > 16 and x > 0, it holds that P L(
∞) ≥ x 1−ρ ≤ c r × E[(Aµ A ) 2 ] r × E[(Aµ A ) r ] + E[(Sµ S ) 2 ] r × E[(Sµ S ) r ] × x − r 2
for ρ, A, S as in the above single-server queue. Note that ρ = 1 2 , and using standard results for the moments of a uniform r.v.
we have E[(Aµ A ) r ] = E[(Sµ S ) r ] = 2 r r+1 ≤ 2 r , and E[(Aµ A ) 2 ] r = E[(Sµ S ) 2 ] r = ( 4 3 ) r .
It then follows from some straightforward algebra that for all
x > 0, P L(∞) ≥ x ≤ 2 × c r × ( 8 3 ) r × x − r 2 .
Applying the tail integral form for higher moments, see e.g. [105], we thus have that E L scaling can ultimately be attributed to our having to explicitly bound E | n i=1 N e,i (t) − nµ S t| r , which we do in our Lemma 5 for the case t ≥ 1. It follows from some straightforward algebra that our Lemma 5 can be loosely interpreted as asserting the following (which our Lemma 5 can indeed be formally shown to imply using some straightforward algebra, the details of which we omit).
r 4 (∞)] ≤ r 4 + 2 × c r × r 4 × ( 8 3 ) r × ∞ 1 x r 4 −1 × x − r 2 dx ≤ r 4 + c r × 2 × ( 8 3 ) r .t. E[S r ] < ∞, it holds hat E | n i=1 N e,i (t) − nµ S t| r ≤ c r × E[(Sµ S ) 2 ] r × E[(Sµ S ) r ] × (nt) r 2 .
Furthermore, one can take c r = r O(r) .
We now show that the r Ω(r) scaling in Corollary 9 is in fact unavoidable, even for the case n = 1, t = 1. Intuitively, this will follow from the simple fact that athough the rth moment of a r.v.
which puts probability on 0 and 2 scales at most exponentially in r, the rth central moment of the associated renewal process scales as r Ω(r) (inherited from a related geometrically distributed r.v.).
As such bounds for pooled renewal processes (with the (nt) r 2 scaling) are essential to implementing our overall approach, this is further suggestive of the fact that avoiding constants scaling in this way may require fundamentally different approaches, and/or imposing additional assumptions on S. Proof : Let S be the r.v. s.t. P (S = 0) = P (S = 2) = 1 2 . Note that R(S) is uniformly distributed on the interval [0,2]. Note also that P N o (1) ≥ k ≥ 2 −k for all k ≥ 1, and it follows that P N e (1) − 1 ≥ k ≥ 2 −(k+3) for all k ≥ 1. Thus applying the tail integral form for higher moments (Nadarajah et al. [105]), it follows that for all r > 2,
E |N e (1) − 1| r ≥ ∞ 0 rx r−1 2 −⌈x+3⌉ ≥ r 16 ∞ 0 x r−1 2 −x = log e (2) −r Γ(r).
The desired result then follows from some straightforward asymptotics and standard bounds for the Gamma function (Beesack [13], Batir [11]), the details of which we omit.
We next comment briefly on where precisely in our proofs associated with the centered moments of pooled renewal processes the r Ω(r) scaling arises. First, in bounding E[|N o (t) − µ S t| r ], we apply the Burkholder-Rosenthal ineuqality in our Lemma 21, to bound the higher moments of certain martingale-related terms. Our application of this inequality is consistent with the approach sketched in Gut [69], which our analysis makes completely explicit. Second, in bounding E | n i=1 N e,i (t) − nµ S t| r for t ≥ 1, we apply the Marcinkiewicz-Zygmund inequality in our Lemma 26 to convert out bounds on E |N e (t) − µ s t| r into bounds for the corresponding pooled process. This inequality is a powerful tool for converting bounds for the higher moments of individual mean-zero r.v.s into bounds for the higher moments of the sums of those r.v.s. Although the family of renewal processes have special structure, and much is known about their general asymptotic scaling, the associated (pooled) counting processes can still exhibit complex behaviors, where the analyses of these behaviors has been at the core of several recent analyses of multi-server queues (see e.g.
Bazhba et al. [12]). As the 1 1−ρ bounds we prove are very sensitive to how all aspects of our proof scale in n and t, it is not clear whether it is possible to explicitly and non-asymptotically bound the central moments of pooled renewal processes at the level of generality we consider without applying such general inequalities from probability theory. Let us point out that additional r Ω(r) scaling arises in the analysis for t ≤ 1, e.g. in our Lemma 28, again due to the application of general inequalities from probability theory.
Summary of discussion of prefactors arising in our bounds. In summary, it
remains an interesting open question whether the r Ω(r) prefactors arising in our main results are fundamental, or an artifact of our analysis. That said, we have proven that for very closely related bounds implied by our main results, the r Ω(r) results are indeed fundamental. This arises at least in part due to the level of generality of our main results, e.g. that they hold for both the setting that A, S are uniformly bounded, as well as the setting that A, S have quite heavy tails with few finite moments. It would be very interesting to derive other qualitatively different bounds for multi-server queues with 1 1−ρ scaling, possibly under different and/or stronger assumptions and using fundamentally different methods of analysis. We note that our Theorem 3, in which we prove stronger tail bounds with fundamentally different behavior and no dependence on any r parameter, as well as our Theorems 6, 7, 8, and 9 based on drift arguments, represent a step in this direction. 8.6. Proofs of Theorems 6, 7, 8, and 9. 8.6.1. Proof of Theorem 6. Our proof proceeds by noting that
E[Num service (∞)] = µ A µ S implies E max 0, Num service (∞) − µ A µ S = E max 0, µ A µ S − Num service (∞)
, and then relating the left hand side of this equality to the s.s.p.d. and the right hand side to the well-understood M/GI/∞ queue. We again note that the proof is very similar to those of Wang et al. [132], Hong et al. [77], which were primarily focused on the study of more general models.
Proof of Theorem 6: Let N denote Num n service (∞), the steady-state number of busy servers. It follows from standard Little's Law type conservation arguments (Wolff [147], Heyman et al. [74],
Whitt [140]) that E[N ] = µ A µ S . Since for any integrable r.v. X, it holds that E[X] = E max(0, X) − E max(0, −X) , we conclude that E max(0, N − µ A µ S ) = E max(0, µ A µ S − N ) .(16)
Recall that Q n (∞), which we will denote simply by Q, is a r.v. distributed as the steady-state total number in system. Since µ A µ S < n, and the basic dynamics of the FCFS GI/GI/n queue imply that one can construct N and Q on a common probability space s.t.
I(N = k) = I(Q = k) for k ∈ {0, . . . , n − 1}, it follows that E max(0, µ A µ S − N ) = E max(0, µ A µ S − Q) .(17)
Let Q ∞ denote a r.v. distributed as the steady-state total number in system in an M/GI/∞ queue with the same inter-arrival and service time distribution as Q n . It follows from standard and wellknown stochastic comparison results between multi-server and infinite-server queues, see e.g. Whitt [145], Gamarnik and Goldberg [49], Wang et al. [132], Hong et al. [77], that P Q ≥ x ≥ P Q ∞ ≥
x for all x ∈ R. As the function f (x)
∆ = max(0, µ A µ S − x)
is non-increasing in x, we obseve (as in Wang et al. [132], Hong et al. [77]) that the basic properties of stochastic dominance (see e.g.
Brumelle et al. [25]) thus imply
E max(0, µ A µ S − Q) ≤ E max(0, µ A µ S − Q ∞ ) .
Combining with (16) -(17), we conclude that
E max(0, N − µ A µ S ) ≤ E max(0, µ A µ S − Q ∞ ) .(18)
Next, very similar to Wang et al. [132], Hong et al. [77], we observe that since by assumption µ A µ S < n, non-negativity implies
E max(0, N − µ A µ S ) = E (N − µ A µ S )I(N ≥ µ A µ S ) ≥ E (N − µ A µ S )I(N ≥ n) = E (n − µ A µ S )I(N = n) = (n − µ A µ S )P (N ≥ n),(19)
where we have used the fact that the dynamics of a FCFS GI/GI/n queue imply {N ≥ n} iff {N = n}. Combining (18) - (19), we conclude that
P (N ≥ n) ≤ (n − µ A µ S ) −1 E max(0, µ A µ S − Q ∞ ) .(20)
Using the well-known fact that Q ∞ has a Poisson distribution with mean µ A µ S , and letting Poi denote a r.v. with this distribution, we have that
P (N ≥ n) ≤ (n − µ A µ S ) −1 E max(0, µ A µ S − Poi) .(21)
Next, as in Wang et al. [132], Hong et al. [77], we use a simple application of Jensen's inequality to bound E max(0, µ A µ S − Poi) in terms of the standard deviation of Poi, which equals µ A µ S . More formally, we proceed as follows. Recall that E[X] = E max(0, X) − E max(0, −X) for a general integrable r.v. X, and for essentially identical reasons
E[|X|] = E max(0, X) + E max(0, −X) for a general integrable r.v. X. Applying with X = Poi − µ A µ S , we conclude that E max(0, µ A µ S − Poi) = 1 2 E Poi − µ A µ S . As Jensen's inequality implies E Poi − µ A µ S ≤ E Poi − µ A µ S 2
, and the variance of Poi equals µ A µ S , we may combine with (21) to conclude that
E max(0, µ A µ S − Poi) ≤ 1 2 µ A µ S ,(22)
and
P (N ≥ n) ≤ 1 2 µ A µ S n − µ A µ S .(23)
Combining with the fact that the s.s.p.d. equals P (N ≥ n), along with some straightforward algebra which yields
µ A µ S n − µ A µ S = µ A nµ S √ n − √ n µ A nµ S = √ ρ √ n(1 − ρ) ,
completes the proof.
8.6.2. Proof of Theorem 7. It follows from our proof of Theorem 6, specifically Equations (18) and (22), that E max(0,
N − µ A µ S ) ≤ 1 2 µ A µ S
. The desired result then follows from Markov's inequality.
8.6.3. Proof of Theorem 8. Our proof proceeds by relating the long-run fraction of jobs that abandon the system to θ × E L n a (∞) , and then using known stochastic comparison results for multi-server systems with abandonments to further bound this quantity in terms of the wellunderstood Erlang loss model. Let us also point out that although one might think that the case of abandonments would be "more challenging", in this case it is actually simpler for such drift arguments, as the queue length manifests more directly in the rate at which jobs depart.
Proof of Theorem 8: Let L n a (t) denote the number of jobs waiting in queue in Q n a at time t, and Aban(t) denote the number of jobs that abandon from Q n a on [0, t]. It follows from standard Poisson constructions for M/P H/n + M queues, see e.g. Dai et al. [33], that E[Aban(t)] = θE[
Let Q n loss denote an n-server Erlang loss model with the same inter-arrival and service distribution as Q n a , also initially empty. Note that Q n loss is equivalent to a M/P H/n +GI queue in which patience times are w.p.1 equal to zero. For a more formal review of this family of well-studied systems, we refer the reader to Davis et al. [43], Sevastyanov [122], Franken et al. [48]. Let Loss(t) denote the number of jobs that abandon from Q n loss on [0, t]. It follows from the stochastic comparison results of Bhattacharya et al. [17], specifically Theorem 3.1 of that work, that E[Aban(t)] ≤ E[Loss(t)] for all t ≥ 0, and thus by (24)
E[L n a (∞)] ≤ θ −1 lim sup t→∞ E[Loss(t)] t .(25)
Let Poi denote a r.v. with a Poisson distribution, with mean µ A µ S . Then it follows from well-known insensitivity results for the Erlang loss model, see e.g. Davis et al. [43], Sevastyanov [122], Franken et al. [48], that
lim t→∞ E[Loss(t)] t = µ A P Poi = n|Poi ≤ n = µ A exp(− µ A µ S )( µ A µ S ) n n! n k=0 exp(− µ A µ S )( µ A µ S ) k k! .(26)
Although many bounds exist for this blocking probability, see e.g. Hariel [72,73], Janssen et al. [83], to prove our first bound we provide a self-contained and very simple bound. As our assumptions imply µ A µ S < n, and it follows from Chen et al. [31] that the median of Poi is at most ⌈ µ A µ S ⌉ ≤ n, we conclude that (26) is at most
2µ A exp(− µ A µ S )( µ A µ S ) n n! .(27)
Applying Stirling's inequality, which implies n! ≥ √ 2πn( n e ) n , we conclude that (26) is at most
2 π µ A exp(n − µ A µ S )(1 − n− µ A µ S n ) n √ n .(28)
Noting that 1 − x ≤ e −x for all x ≥ 0, we conclude that (26) is at most
2 π µ A √ n .
Combining with (25), we conclude that
E[L n a (∞)] ≤ 2 π θ −1 µ A √ n = 2 π θ −1 µ A nµ S nµ S √ n = 2 π ρ µ S θ √ n,
completing the proof of the first bound. For the second part, we note that since it is easily verified that e −x x n is increasing on [0, n], we may upper bound
exp(− µ A µ S )( µ A µ S ) n n! by exp(−⌈ µ A µ S ⌉)(⌈ µ A µ S ⌉) n n! . Further supposing ρ ∈ [ 3 4 , 1 − 4 n ] (also implying n ≥ ⌈ µ A µ S ⌉ + 2) , then it follows from Glynn [54] Proposition 2 that exp(−⌈ µ A µ S ⌉)(⌈ µ A µ S ⌉) n n! is at most (2π⌈ µ A µ S ⌉) − 1 2 exp − (n − ⌈ µ A µ S ⌉)(n − ⌈ µ A µ S ⌉ − 1) 2⌈ µ A µ S ⌉ + (n − ⌈ µ A µ S ⌉)(n − ⌈ µ A µ S ⌉ − 1) 2(n − ⌈ µ A µ S ⌉) − 1 12(⌈ µ A µ S ⌉) 2 ≤ (2π⌈ µ A µ S ⌉) − 1 2 exp − (n − ⌈ µ A µ S ⌉)(n − ⌈ µ A µ S ⌉ − 1) 2⌈ µ A µ S ⌉ 1 − n − ⌈ µ A µ S ⌉ 3⌈ µ A µ S ⌉ ≤ (2π⌈ µ A µ S ⌉) − 1 2 exp − 8 9 (n − ⌈ µ A µ S ⌉)(n − ⌈ µ A µ S ⌉ − 1) 2⌈ µ A µ S ⌉ since our assumptions imply 1 − n − ⌈ µ A µ S ⌉ 3⌈ µ A µ S ⌉ ≥ 8 9 = (2π⌈ µ A µ S ⌉) − 1 2 exp − 4 9 (n − ⌈ µ A µ S ⌉) 2 − (n − ⌈ µ A µ S ⌉) ⌈ µ A µ S ⌉ ≤ (2π⌈ µ A µ S ⌉) − 1 2 exp − 4 9 (n − ⌈ µ A µ S ⌉) 2 n + 4 9 × 4 3 × n − ⌈ µ A µ S ⌉ n ≤ (2π⌈ µ A µ S ⌉) − 1 2 exp − 1 4 √ n(1 − ρ) 2 + (1 − ρ) ,
the final inequality following from the fact that
(n−⌈ µ A µ S ⌉) 2 n = √ n(1 − ρ) 2 × ( n−⌈ µ A µ S ⌉ n− µ A µ S
) 2 , and our assumption that ρ ≤ 1 − 4 n implies that ( 16 . We conclude that (26) is at most
n−⌈ µ A µ S ⌉ n− µ A µ S ) 2 ≥ 92 π √ µ A µ S exp − 1 4 √ n(1 − ρ) 2 + (1 − ρ)
. Combining with (25), we conclude that
E[L n a (∞)] ≤ 2 π θ −1 √ µ A µ S exp − 1 4 √ n(1 − ρ) 2 + (1 − ρ) = 2 π √ ρ µ S θ √ n exp − 1 4 √ n(1 − ρ) 2 + (1 − ρ) ≤ 2 π µ S θ √ n exp − 1 4 √ n(1 − ρ) 2 + (1 − ρ) .
Combining the above with the fact that ρ ≥ 3 4 implies 2 π exp(1 − ρ) ≤ 2 completes the proof.
8.6.4. Proof of Theorem 9. We begin by stating the aforementioned equation for the steadystate expected work-in-system studied in several past works (see e.g. Hokstad [76], Scully et al.
[120], Grosof et al. [62], Wang et al. [132], Hong et al. [77]), which arises when applying the Lyapunov drift method to the square of the total work in system. We impose the technical condition that S has absolutely continuous distribution function to be consistent with the assumptions and arguments of Hokstad [76], although as noted in Hokstad [76] the result holds in greater generality.
Theorem 11 (Hokstad [76], Scully et al. [120], Grosof et al. [62]). Consider an M/GI/n queue Q n with Markovian inter-arrival times having the same distribution as r.v. A, service times having the same distribution as r.v. S satisfying E[S 2 ] < ∞, such that the c.d.f. of S is absolutely continuous. Suppose also that µ A < nµ S , and that Q(∞), Work(∞), and W service (∞)
exist. Then
E Work(∞) = 1 2 µ A E[S 2 ] + E n − Num service (∞) Work service (∞) × 1 n(1 − ρ)
.
With Theorem 11 in hand, our result will follow from a straightforward rearranging of terms.
Proof of Theorem 9: By the basic properties of a GI/GI/n queue, and decomposing the work in system into that in service and that waiting in queue, note that
E Work(∞) = E Work service (∞) + E[S]E L(∞) .
Combining with Theorem 11 and some straightforward algebra then yields that E L(∞) equals
1 2 µ A E[S 2 ] n(1 − ρ)E[S] + E n − Num service (∞) Work service (∞) − n(1 − ρ)E Work service (∞) n(1 − ρ)E[S] = 1 2 E[(Sµ S ) 2 ] ρ 1 − ρ + E − Num service (∞) × Work service (∞) + n × ρ × E Work service (∞) n(1 − ρ)E[S] = 1 2 E[(Sµ S ) 2 ] ρ 1 − ρ + E Num service (∞) × E Work service (∞) − E Num service (∞) × Work service (∞) n(1 − ρ)E[S] ,
the final equality using the fact that E Num service (∞) = nρ (a fact also used in our proof of Theorem 6). Combining the above completes the proof.
Proof of Lemma 2.
Proof of Lemma 2:
P sup t≥0 A e (t) − n i=1 N e,i (t) ≥ x = P sup t≥0 A e (t) − 1 2 (n + µ A )t + 1 2 (n + µ A )t − n i=1 N e,i (t) ≥ x = P sup t≥0 A e (t) − µ A + 1 2 (n − µ A ) t + n − 1 2 (n − µ A ) t − n i=1 N e,i (t) ≥ x = P sup t≥0 A e (t) − µ A t − 1 2 (n − µ A )t + nt − n i=1 N e,i (t) − 1 2 (n − µ A )t ≥ x ≤ P sup t≥0 A e (t) − µ A t − 1 2 (n − µ A )t ≥ x 2 + P sup t≥0 nt − n i=1 N e,i (t) − 1 2 (n − µ A )t ≥ x 2 .
Using (7) to bound A e (t) by A o (t) + 1 completes the proof.
Proof of Lemma 8.
Proof of Lemma 8:
As {A o (t) − µ A t − νt, t ≥ 0} jumps up only at times { k i=1 A i , k ≥ 1}
and at all other times drifts downward at linear rate −(µ A + ν), we conclude that we may examine the relevant supremum only at times { k i=1 A i , k ≥ 0}, from which it follows that (9) equals
P sup k≥0 k − k i=1 (µ A + ν)A i ≥ x .(29)
Further observing that
k − (µ A + ν) k i=1 A i = (1 + ν µ A )k − (µ A + ν) k i=1 A i − ν µ A k = (1 + ν µ A ) k − µ A k i=1 A i − ν µ A + ν k = (1 + ν µ A ) µ A µ A + ν k − k i=1 (µ A A i )
completes the proof.
Proof of Lemma 10.
Proof of Lemma 10: First, note that E[exp θ(c − Z 1 ) ] exists for all θ > 0 since Z 1 is nonnegative and c is a fixed constant. We consider two cases, since the desired scaling is somewhat different as c ↓ 0. First, suppose c ≥ 1 2 (and thus c ∈ [ 1 2 , 1)). In this case, we argue that if E[exp θ(c − Z 1 ) ] > 1, then it must hold that θ > 1−c some θ ∈ (0, 1) (here it suffices to consider θ ∈ (0, 1) as c ≥ 1 2 implies 1−c 2.75E[Z 2 1 ] < 1). It follows from the fact that θc ∈ (0, 1), and a straightforward calculus exercise (the details of which we omit), that (1) : exp(θc) < 1 + θc + .75θ 2 c 2 ; and (2) : exp(−θZ 1 ) < 1 1+θZ 1 .
(2) implies
E[exp(−θZ 1 )] < E[ 1 1 + θZ 1 ] = E[1 − θZ 1 + (θZ 1 ) 2 1 + θZ 1 ] ≤ 1 − θE[Z 1 ] + θ 2 E[Z 2 1 ],
where we note that a related logic appears in Goldberg [56]. Thus E[exp θ(c − Z 1 ) ] > 1, combined with our other assumptions, implies
1 + θc + .75θ 2 c 2 × 1 − θE[Z 1 ] + θ 2 E[Z 2 1 ] > 1,
which by some straightforward algebra is equivalent to
(c − E[Z 1 ])θ + E[Z 2 1 ] − cE[Z 1 ] + .75c 2 θ 2 + cE[Z 2 1 ] − .75c 2 E[Z 1 ] θ 3 + .75c 2 E[Z 2 1 ]θ 4 > 0.
Combining with the fact that c, θ ∈ (0, 1) and Thus suppose c ∈ (0, 1 2 ), and E[exp θ(c − Z 1 ) ] > 1 for some θ ∈ 0, (2c) −1 . Here it suffices to consider θ ∈ 0, (2c) −1 since 11cE[Z 2 1 ] −1 < (2c) −1 . It follows from the non-negativity of Z 1 and fact that c ∈ (0, 1 2 ) that for all θ > 0, w.
p.1 θ(c − Z 1 ) < 2θc( 1 2 − Z 1 ), and thus E[exp θ(c − Z 1 ) ] < E[exp 2θc( 1 2 − Z 1 ) ]. Thus E[exp θ(c − Z 1 ) ] > 1 for some θ ∈ 0, (2c) −1 implies E[exp 2θc( 1 2 − Z 1 ) ] > 1 for some θ ∈ 0, (2c) −1 .
As θ ∈ 0, (2c) −1 implies 2θc ∈ (0, 1), it then follows from an argument nearly identical to that used in our previous analysis of the c ≥ 1 2 case (and the details of which we omit) that 2θc > Proof of Lemma 11: Note that for λ > 0, P sup t≥0 φ(t) − νt ≥ λ equals
P ∞ k=0 φ(t) − νt ≥ λ for some t ∈ [2 k , 2 k+1 ] φ(t) − νt ≥ λ for some t ∈ [0, 1] ≤ ∞ k=0 P sup t∈[2 k ,2 k+1 ] φ(t) − νt ≥ λ(30)+ P sup t∈[0,1] φ(t) − νt ≥ λ .(31)
We now bound (30), and proceed by bounding (for each k ≥ 0)
P sup t∈[2 k ,2 k+1 ] φ(t) − νt ≥ λ .(32)
Since t ∈ [2 k , 2 k+1 ] implies νt ≥ ν2 k , we conclude that (32) is at most P sup t∈[2 k ,2 k+1 ] φ(t) ≥ λ + ν2 k , which by adding and subtracting φ(2 k ), and applying stationary increments and a union bound, is at most
P sup t∈[2 k ,2 k+1 ] φ(t) − φ(2 k ) + φ(2 k ) ≥ λ + ν2 k ≤ P sup t∈[2 k ,2 k+1 ] φ(t) − φ(2 k ) ≥ 1 2 (λ + ν2 k ) + P φ(2 k ) ≥ 1 2 (λ + ν2 k ) = P sup t∈[0,2 k ] φ(t) ≥ 1 2 (λ + ν2 k ) + P φ(2 k ) ≥ 1 2 (λ + ν2 k ) ≤ 2P sup t∈[0,2 k ] φ(t) ≥ 1 2 (λ + ν2 k ) .(33)
We proceed to bound (33) by breaking the supremum into two parts, one part taken over integer points, one part taken over intervals of length one corresponding to the regions between these integer points. In particular, the assumptions of the lemma, combined with a union bound and stationary increments, ensure that P sup
t∈[0,2 k ] φ(t) ≥ 1 2 (λ + ν2 k ) ≤ P sup j∈{0,...,2 k } φ(j) + sup j∈{0,...,2 k −1} t∈[0,1] φ(j + t) − φ(j) ≥ 1 2 (λ + ν2 k ) ≤ P sup j∈{0,...,2 k } φ(j) ≥ 1 4 (λ + ν2 k ) + 2 k P sup t∈[0,1] φ(t) ≥ 1 4 (λ + ν2 k ) ≤ H 1 4 r 1 2 ks (λ + ν2 k ) r 1 + H 2 4 r 2 2 k (λ + ν2 k ) r 2 ,(34)
where the final inequality is applicable since λ ≥ 4Z implies 1 4 (λ + ν2 k ) ≥ Z, in which case the inequality follows from our assumptions. Combining (33) and (34), we conclude that (30) is at
most 2 ∞ k=0 H 1 4 r 1 2 ks (λ + ν2 k ) r 1 + 2 ∞ k=0 H 2 4 r 2 2 k (λ + ν2 k ) r 2 .(35)
We now treat two cases. First, suppose λ > ν. Then (35) is at most
2H 1 4 r 1 ⌈log 2 ( λ ν )⌉−1 k=0 2 ks λ r 1 + 2H 2 4 r 2 ⌈log 2 ( λ ν )⌉−1 k=0 2 k λ r 2 + 2H 1 4 r 1 ∞ k=⌈log 2 ( λ ν )⌉ 2 −(r 1 −s)k ν r 1 + 2H 2 4 r 2 ∞ k=⌈log 2 ( λ ν )⌉ 2 −(r 2 −1)k ν r 2 = 2H 1 4 r 1 λ −r 1 2 ⌈log 2 ( λ ν )⌉s − 1 2 s − 1 + 2H 2 4 r 2 λ −r 2 (2 ⌈log 2 ( λ ν )⌉ − 1) + 2H 1 4 r 1 ν −r 1 2 −(r 1 −s)⌈log 2 ( λ ν )⌉ 1 − 2 −(r 1 −s) + 2H 2 4 r 2 ν −r 2 2 −(r 2 −1)⌈log 2 ( λ ν )⌉ 1 − 2 −(r 2 −1) ≤ 4H 1 4 r 1 λ −r 1 ( λ ν ) s + 4H 2 4 r 2 λ −r 2 λ ν + 2H 1 1 − 2 −(r 1 −s) −1 4 r 1 ν −r 1 ( λ ν ) −(r 1 −s) + 2H 2 1 − 2 −(r 2 −1) −1 4 r 2 ν −r 2 ( λ ν ) −(r 2 −1) ,
with the first line of the final inequality following from the fact that 2 ⌈log 2 ( λ ν )⌉s − 1 ≤ 2 s ( λ ν ) s and 2 s − 1 ≥ 2 s−1 . Combining with the fact that r 2 > 2 implies 1 − 2 −(r 2 −1) −1 ≤ 2, we conclude that if λ > ν, then (35) is at most
6H 1 1 − 2 −(r 1 −s) −1 4 r 1 λ −(r 1 −s) ν −s(36)+ 8H 2 4 r 2 λ −(r 2 −1) ν −1 .
Combining with the fact that λ > ν > 0 and r 2 > 2 implies λ −(r 2 −1) ν −1 ≤ (λν) − r 2 2 , we conclude that if λ > ν, then (35) is at most
6H 1 1 − 2 −(r 1 −s) −1 4 r 1 λ −(r 1 −s) ν −s(37)+ 8H 2 4 r 2 (λν) − r 2 2 .
Alternatively, suppose λ ≤ ν. Then (35) is at most
2H 1 4 r 1 ∞ k=0 2 −(r 1 −s)k ν r 1 + 2H 2 4 r 2 ∞ k=0 2 −(r 2 −1)k ν r 2 ≤ 2H 1 4 r 1 ν −r 1 1 − 2 −(r 1 −s) −1 + 4H 2 4 r 2 ν −r 2 ≤ 2H 1 1 − 2 −(r 1 −s) −1 4 r 1 λ −(r 1 −s) ν −s(38)+ 4H 2 4 r 2 (λν) − r 2 2 ,
the final inequality following from the fact that ν ≥ λ, r 1 > s, r 2 > 2 implies ν −r 1 ≤ λ −(r 1 −s) ν −s , and ν −r 2 ≤ (λν) − r 2 2 . Next, we claim that 1 − 2 −(r 1 −s) −1 ≤ 2(1 + 1 r 1 −s ). Indeed, first, suppose r 1 − s < 1. In this case, as it is easily verified that 1 − 2 −z ≥ z 2 for all z ∈ (0, 1), the result follows. Alternatively, if r 1 − s ≥ 1, then 1 − 2 −(r 1 −s) −1 ≤ 2, completing the proof. Combining with (37) and (38), and our assumptions, it follows that in all cases (35), and hence (30), is at most
12H 1 (1 + 1 r 1 − s )4 r 1 λ −(r 1 −s) ν −s + 8H 2 4 r 2 (λν) − r 2 2 .(39)
We next bound (31). First, suppose λ ≥ ν. Then our assumptions (applied with t 0 = 1) imply that (31) is at most
H 2 λ −r 2 ≤ H 2 (λν) − r 2 2 .(40)
Alternatively, suppose that λ < ν. Then applying our assumptions with t 0 = λ ν , along with a union bound, we conclude that P sup
t∈[0,1] φ(t) − νt ≥ λ ≤ P sup t∈[0, λ ν ] φ(t) − νt ≥ λ(41)+ P sup t∈[ λ ν ,1] φ(t) − νt ≥ λ .(42)
It follows from our assumptions that (41) is at most
P sup t∈[0, λ ν ] φ(t) ≥ λ ≤ H 2 ( λ ν ) r 2 2 λ −r 2 = H 2 (λν) − r 2 2 .(43)
We next bound (42), which by stationary increments, a union bound, and our assumptions is at
most P sup t∈[ λ ν ,1] φ( λ ν ) + φ(t) − φ( λ ν ) − ν(t − λ ν ) ≥ 2λ ≤ P φ( λ ν ) ≥ 1 2 λ + P sup t∈[ λ ν ,1] φ(t) − φ( λ ν ) − ν(t − λ ν ) ≥ 3 2 λ = P φ( λ ν ) ≥ 1 2 λ + P sup s∈[0,1− λ ν ] φ(s + λ ν ) − φ( λ ν ) − νs ≥ 3 2 λ ≤ P sup t∈[0, λ ν ] φ(t) ≥ 1 2 λ + P sup s∈[0,1− λ ν ] φ(s + λ ν ) − φ( λ ν ) − νs ≥ 3 2 λ ≤ H 2 ( λ ν ) r 2 2 ( λ 2 ) −r 2 + P sup s∈[0,1− λ ν ] φ(s) − νs ≥ 3 2 λ ≤ 2 r 2 H 2 (λν) − r 2 2(44)+ P sup t∈[0,1] φ(t) − νt ≥ 3 2 λ .(45)
Let us define f (z)
∆ = P sup t∈[0,1] φ(t) − νt ≥ z .
Then using (43) to bound (41), and (44) - (45) to bound (42), we conclude that for all z ∈ [2Z, ν),
f (z) ≤ 2 r 2 +1 H 2 (zν) − r 2 2 + f ( 3 2 z).(46)
Let j * ∆ = sup{j ∈ Z + : ( 3 2 ) j λ < ν}. Then it follows from (46) that for all j ∈ [0, j * ],
f ( 3 2 ) j λ ≤ 2 r 2 +1 H 2 (λν) − r 2 2 ( 3 2 ) r 2 2 −j + f ( 3 2 ) j+1 λ .(47)
Combining (47) with a straightforward induction and our assumptions, and noting that f is a non-increasing function, we conclude that for all λ ∈ [2Z, ν),
f (λ) ≤ 2 r 2 +1 H 2 (λν) − r 2 2 j * j=0 ( 3 2 ) r 2 2 −j + f (ν) ≤ 2 r 2 +1 H 2 (λν) − r 2 2 ∞ j=0 ( 3 2 ) −j + H 2 ν −r 2 ≤ 2 r 2 +3 H 2 (λν) − r 2 2 + H 2 ν −r 2 ≤ 2 r 2 +4 H 2 (λν) − r 2 2 ,(48)
the final inequality following from the fact that by assumption ν > λ and thus ν −r 2 ≤ (λν) − r 2 2 by the same logic as in (40). Thus using (40) to bound (31) in the case λ ≥ ν, and (48) to bound (31) in the case λ < ν, we conclude that in all cases (31) is at most
2 r 2 +4 H 2 (λν) − r 2 2 .(49)
Using (39) to bound (30), and (49) to bound (31), combined with some straightforward algebra, demonstrates that for all λ ≥ 4Z, P sup t≥0 φ(t) − νt ≥ λ is at most
12H 1 (1 + 1 r 1 − s )4 r 1 λ −(r 1 −s) ν −s + 8H 2 4 r 2 (λν) − r 2 2 + 2 r 2 +4 H 2 (λν) − r 2 2 ≤ 12H 1 (1 + 1 r 1 − s )4 r 1 λ −(r 1 −s) ν −s + 12H 2 4 r 2 (λν) − r 2 2 ≤ 12(1 + 1 r 1 − s ) H 1 4 r 1 λ −(r 1 −s) ν −s + H 2 4 r 2 (λν) − r 2 2 ,
completing the proof.
Proof of Lemmas 12 and 13.
In this section we prove Lemmas 12 and 13.
8.11.1. A maximal inequality we will use in the proof of both Lemmas 12 and 13.
As mentioned previously, our proofs of both Lemmas 12 and 13 will rely heavily on a maximal inequality of Longnecker and Serfling [98]. We begin by stating (a variant of) the relevant maximal inequality of Longnecker and Serfling [98] which we will use in both proofs.
Lemma 18 (Longnecker and Serfling [98] Theorem 2). Let {X l , 1 ≤ l ≤ L} be a completely general sequence of r.v.s. Suppose that for some fixed γ > 1, ν ≥ γ, and C > 0 the following condition holds:
(i) For all x > 0 and non-negative integers 1 ≤ i ≤ j ≤ L,
P | j k=i X k | ≥ x ≤ C(j − i + 1) γ x −ν .
Then it must also hold that P max i∈{1,...,L}
| i k=1 X k | ≥ x ≤ 2.25 × 2 γ × (2 ν + 1 γ − 1 ) ν+1 (CL) γ x −ν .
For completeness we show how Lemma 18 follows from the results of Longnecker and Serfling [98]. First, we state the relevant result of Longnecker and Serfling [98].
Lemma 19 (Longnecker and Serfling [98] Theorem 2). Let {X l , 1 ≤ l ≤ L} be a completely general sequence of r.v.s. Suppose there exist ν > 0, γ > 1, and C > 0 such that for all x > 0 and non-negative integers 1 ≤ i ≤ j ≤ L, it holds that
P | j k=i X k | ≥ x ≤ C(j − i + 1) γ x −ν .
Then it must also hold that
P max i∈{1,...,L} | i k=1 X k | ≥ x ≤ 2 γ 1 + 2 − 1 ν+1 − 2 − γ ν+1 −(ν+1) (CL) γ x −ν .
Here let us point out that although Longnecker and Serfling [98]
2 γ 1 + 2 − 1 ν+1 − 2 − γ ν+1 −(ν+1) ≤ 2.4 × 2 γ × (2 ν + 1 γ − 1 ) ν+1 .(50)
Note that
2 − 1 ν+1 − 2 − γ ν+1 = 2 − 1 ν+1 1 − 2 − γ−1 ν+1 .(51)
As our assumptions imply 0 < γ−1 ν+1 < 1, and it is easily verified that 1 − 2 −z ≥ z 2 for all z ∈ [0, 1], we conclude that
1 − 2 − γ−1 ν+1 −(ν+1) ≤ (2 ν + 1 γ − 1 ) ν+1 .(52)
Combining (51) and (52) with the fact that (by our assumptions) ( ν+1 γ−1 ) ν+1 ≥ 1, we conclude that
2 γ 1 + 2 − 1 ν+1 − 2 − γ ν+1 −(ν+1) = 2 γ 1 + 2 − 1 ν+1 1 − 2 − γ−1 ν+1 −(ν+1) ≤ 2 γ 1 + 2(2 ν + 1 γ − 1 ) ν+1 .
As our assumptions imply that (2 ν+1 γ−1 ) ν+1 ≥ 4, the desired result then follows from some straightforward algebra.
Proof of Lemma 12.
Proof of Lemma 12: We proceed by verifying that for each fixed k ≥ 1, the conditions of Lemma 18 hold for n − n i=1 N e,i (j) − N e,i (j − 1) , j = 1, . . . , k . Let us fix some k ≥ 1, and non-negative integers l ≤ m ≤ k. Then for any x > 0, it follows from the fact that the given sequence of r.v.s is centered and stationary, the independence of {N e,i (t), i ≥ 1}, our assumptions, and Markov's inequality (after raising both sides to the r 1 power), that for all 1 ≤ l ≤ m ≤ k, Proof of Lemma 13: We begin by noting that it suffices to bound the supremum of interest over a suitable mesh, which follows immediately from the fact that w.p.1 nt − n i=1 N e,i (t) can increase by at most 2 over any interval of length at most 2 n . We thus take our mesh to be { 2k n , k = 0, . . . , ⌊ nt 0 2 ⌋}, and conclude that (11) is at most
P m j=l n − n i=1 N e,i (j) − N e,i (j − 1) ≥ x ≤ E m j=l n − n i=1 N e,i (j) − N e,i (j − 1) r 1 x −r 1 = E n(m − l + 1) − n i=1 N e,i (m) − N e,i (l − 1) r 1 x −r 1 = E n i=1 N e,i (m − l + 1) − n(m − l + 1) r 1 x −r 1 ≤ C 1 n r 1 2 (m − l + 1) s x −r 1 .P 2 + max k∈{0,...,⌊ nt 0 2 ⌋} 2k − n i=1 N e,i ( 2k n ) ≥ x .(53)
We now verify that the conditions of Lemma 18 hold for 2 − n i=1 N e,i ( 2k n ) − N e,i ( 2(k−1) n ) , k = 0, . . . , ⌊ nt 0 2 ⌋ . Let us fix some non-negative integers m ≤ j ≤ ⌊ nt 0 2 ⌋. Then for any x > 0, it follows from stationary increments, centeredness, and Markov's inequality (after raising both sides to the
r 2 power) that P j l=m 2 − n i=1 N e,i ( 2l n ) − N e,i ( 2(l − 1) n ) ≥ x(54)
is at most
E j l=m 2 − n i=1 N e,i ( 2l n ) − N e,i ( 2(l − 1) n ) r 2 x −r 2 = E 2(j − m + 1) − n i=1 N e,i ( 2(j − m + 1) n ) r 2 x −r 2 ,
which by our assumptions (and noting that in this case the nt appearing in our assumptions equals 2(j − m + 1)) is at most C 2 2(j − m + 1) r 2 2 x −r 2 . We thus find that the conditions of Lemma
18 are met with L = ⌊ nt 0 2 ⌋, {X l , 1 ≤ l ≤ L} = 2 − n i=1 N e,i ( 2l n ) − N e,i ( 2(l−1) n ) , l = 1, . . . , ⌊ nt 0 2 ⌋ , C = 2(C 2 ) 2 r 2 , ν = r 2 , γ = r 2 2 . Thus for all x > 0, P max k∈{0,...,⌊ nt 0 2 ⌋} 2k − n i=1 N e,i ( 2k n ) ≥ x ≤ 2.25 × 2 r 2 2 × (2 r 2 + 1 r 2 2 − 1 ) r 2 +1 (CL) r 2 2 x −r 2 .
It then follows from (53), and the fact that x ≥ 4 implies (x − 2) −r 2 ≤ 2 r 2 x −r 2 , that (11) is at most
2 r 2 × 2.25 × 2 r 2 2 × (2 r 2 + 1 r 2 2 − 1 ) r 2 +1 × C 2 × 2 r 2 2 × ( nt 0 2 ) r 2 2 x −r 2 ≤ .8 × (5.7 r 2 + 1 r 2 2 − 1 ) r 2 +1 × C 2 × (nt 0 ) r 2 2 x −r 2 ,
completing the proof. showing that this moment scales (with t) like t r 2 and providing a completely explicit bound along these lines. Second, we modify these bounds to instead yield bounds for the rth central moment of N e (t). Third, we apply general results from the literature for converting bounds on the moments of zero-mean r.v.s to bounds on the moments of sums of those r.v.s to convert the above bounds for individual cenetered renewal processes into bounds for centered pooled renewal processes.
We begin by bounding the rth central moment of N o (t). Our approach can essentially be viewed as "making completely explicit", e.g. all constants explicitly worked out, the approach to bounding the central moments of a renewal process sketched in Gut [69]. As noted in Gut [69] (and used in Gamarnik and Goldberg [49]), a non-explicit bound proving that the rth central moment indeed scales asymptotically (with t) like t r 2 was first proven in Chao et al. [29]. To our knowledge such a completely explicit bound is new, and may prove useful in other settings. In particular, we begin by proving the following.
E ∞ i=1 X i r 1 r ≤ 10r E ∞ i=1 E[X 2 i |F i−1 ] r 2 1 r + 10r E sup i≥1 |X r i | 1 r .
Since for any sequence of r.v.s {Z i , i = 1, . . . , n} and r ≥ 1 it follows from convexity that w.p.1
n i=1 Z i r ≤ n r−1 n i=1 |Z i | r ,(55)
we deduce the following corollary.
Corollary 10. Under the same definitions and assumptions as Lemma 21, for all r ≥ 2,
E ∞ i=1 X i r ≤ (20r) r E ∞ i=1 E[X 2 i |F i−1 ] r 2 + (20r) r E sup i≥1 |X r i | .
We next recall a certain inequality for the non-central moments of N o (t), proven in Gut [
E N o (t) + 1 r ≤ (2t) r E N o (1) + 1 r .
Next, we prove several explicit bounds for the higher moments of a renewal process. Although there is a large literature on bounds for renewal processes, including explicit bounds for the mean and variance of the number of renewals and non-explicit/asymptotic bounds for the higher moments of these processes (see e.g. Daley [38,39], Smith [124], Hunter [79], Leadbetter [96], Grubel et al.
[64], Taga [129]), we believe our bounds to be novel, and potentially of independent interest.
Lemma 23. Suppose that E[S] = 1. Then for all r ≥ 1 and θ > 0,
E N o (1) + 1 r ≤ 1 + 1.5 × r × (r − 1) r−1 × exp(2θ) × 1 − E[exp(−θS)] −r .
Proof: Note that for all j ≥ 1 and θ > 0,
P N o (1) + 1 ≥ j = P( j−1 i=1 S i ≤ 1) = P exp − θ j−1 i=1 S i ≥ exp(−θ) ≤ exp(θ) × E j−1 [exp(−θS)] by Markov's inequality.(56)
Applying the tail integral form for higher moments (Nadarajah et al. [105]), it follows that
E[(N o (1) + 1) r ] = r ∞ 0 x r−1 P N o (1) + 1 ≥ x dx ≤ 1 + r exp(θ) ∞ 1 x r−1 E x−1 [exp(−θS)]dx ≤ 1 + r exp(θ) E[exp(−θS)] −1 ∞ 0 x r−1 E[exp(−θS)] x dx = 1 + r exp(θ) E[exp(−θS)] −1 Γ(r) × log −r 1 E[exp(−θS)] ≤ 1 + r exp(2θ)Γ(r) 1 − E[exp(−θS)] −r ,
with the final inequality following from the fact that log( 1
x ) ≥ 1 − x for all x ∈ (0, 1),= 1 − θS + θ 2 S 2 1 + θS ≤ 1 − θS + θ 2 S 2 .
It follows that for all θ > 0,
1 − E[exp(−θS)] ≥ θE[S] − θ 2 E[S 2 ].(57)
Taking θ = E[S] 2E[S 2 ] and recalling that E[S] = 1, we find that
1 1 − E[exp(−θS)] ≤ 4E[S 2 ],(58)
completing the proof.
Let us note that since in general P (S = 0) may be positive (in which case E[exp(−θS)] will not shrink to 0 as θ ↑ ∞), it seems dificult to derive generic bounds using large values of θ without introducing additional parameters related to the distribution of S, and thus we have instead taken advantage of the fact that Taylor series may be applied when θ is small.
Combining Lemmas 23 and 24 with some straightforward algebra, we come to the following corollary.
Corollary 11. Suppose that E[S] = 1. Then for all r ≥ 1,
E N o (1) + 1 r ≤ 4.4 × 4E[S 2 ]r) r .
Further combining with Lemma 22, we come to the following corollary.
Corollary 12. Suppose that E[S] = 1. Then for all r ≥ 1 and t ≥ 1,
E N o (t) + 1 r ≤ 4.4 × 8E[S 2 ]r) r × t r .
As it will arise in some of our calculations, let us also state a tighter known explicit bound for the first moment of N o (t) + 1, which follows directly from Daley [39].
Lemma 25 (Daley [39]
N o (t) − t = N o (t) + 1 − t − 1 ≤ N o (t) + 1 − t + 1 ≤ No(t)+1 i=1 S i − N o (t) + 1 + No(t)+1 i=1 S i − t + 1.(59)
It then follows from (55) and (59) that
E N o (t) − t r ≤ 3 r−1 E No(t)+1 i=1 S i − N o (t) + 1 r(60)+ 3 r−1 E No(t)+1 i=1 S i − t r(61)+ 3 r−1 .(62)
We next bound
E No(t)+1 i=1 S i − N o (t) + 1 r ,(63)
and proceed by applying the Burkholder-Rosenthal Inequality. In particular, we will use Corollary 10 to bound (63). First, we rewrite (63) in terms of an appropriate martingale difference sequence.
Namely, note that (63) equals
E ∞ i=1 S i I N o (t) + 1 ≥ i − ∞ i=1 I N o (t) + 1 ≥ i r = E ∞ i=1 (S i − 1)I N o (t) + 1 ≥ i r .(64)
We now prove that {(S i − 1)I N o (t) + 1 ≥ i , i ≥ 1} is a martingale difference sequence w.r.t.
the filtration {σ(S 1 , . . . , S i ), i ≥ 1}. Finite expectations and measurability are trivial. Furthermore, since I N o (t) + 1 ≥ i is σ(S 1 , . . . , S i−1 )-measurable (due to the greater than or equal to sign), it follows from independence and the basic properties of conditional expectation that w.p.1
E (S i − 1)I N o (t) + 1 ≥ i |σ(S 1 , . . . , S i−1 ) = I N o (t) + 1 ≥ i E (S i − 1)|σ(S 1 , . . . , S i−1 ) = I N o (t) + 1 ≥ i E S i − 1 = 0.
Thus we find that the conditions of Corollary 10 are satisfied with X i = (S i − 1)I N o (t) + 1 ≥ i , F i = σ(S 1 , . . . , S i ). Before stating the given implication, we first show that several resulting terms can be simplified. First, note that
E ∞ i=1 E (S i − 1)I N o (t) + 1 ≥ i 2 σ(S 1 , . . . , S i−1 ) r 2 = E ∞ i=1 E (S i − 1) 2 I N o (t) + 1 ≥ i σ(S 1 , . . . , S i−1 ) r 2 = E ∞ i=1 I N o (t) + 1 ≥ i E (S i − 1) 2 σ(S 1 , . . . , S i−1 ) r 2 = E ∞ i=1 I N o (t) + 1 ≥ i E (S − 1) 2 r 2 = E (S − 1) 2 r 2 E ∞ i=1 I N o (t) + 1 ≥ i r 2 ≤ E[S 2 ] + 1 r 2 E N o (t) + 1 r 2 .(65)
Second, note that
E sup i≥1 (S i − 1)I N o (t) + 1 ≥ i r ≤ E ∞ i=1 I N o (t) + 1 ≥ i S i − 1 r = ∞ i=1 E I N o (t) + 1 ≥ i S i − 1 r = ∞ i=1 P N o (t) + 1 ≥ i E S i − 1 r by independence = E N o (t) + 1 E S − 1 r ,(66)
the final inequality following since {S i , i ≥ 1} is i.i.d. Combining (65) and (66) we conclude that (63) is at most
(20r) r E[S 2 ] + 1 r 2 E N o (t) + 1 r 2 + (20r) r E N o (t) + 1 E S − 1 r(67)
Combining (67), Corollary 12, and Lemma 25, we conclude (after some straightforward algebra) that (63) is at most
4.4 × E[S 2 ] + 1 r 2 × 40 E[S 2 ]r 1.5 ) r × t r 2 + (20r) r (1 + E[S 2 ])E S − 1 r t.(68)
We next bound (61), by bounding
E No(t)+1 i=1 S i − t r .(69)
By definition,
No(t)+1 i=1
S i − t is the residual life of the renewal process N o at time t, i.e. the remaining time until the next renewal (at time t), and it follows that w.p.1
No(t)+1 i=1 S i − t r ≤ S r No(t)+1 ≤ No(t)+1 i=1 S r i .
Combining with Wald's identity, we conclude that (69)
E N e (t) − t r is at most .76 × E[S 2 ] + 1 r × 240 r × r 1.5r + .21 × (E[S 2 ] + 1) × 120 r × r r × (E[S r ] + 1) × t r 2 .
Proof: Let S e denote the first renewal interval in N e , and f S e its density function, whose existence is guaranteed by the basic properties of the equilibrium distribution. Observe that we may construct N e and N o on the same probability space so that N o is independent of S e , and for
all t ≥ 0, w.p.1 N e (t) − t = N o (t − S e ) + − t − S e + I(S e < t) + I(S e ≤ t) − t − (t − S e ) + .
Fixing some t ≥ 1, it follows from (55) and the triangle inequality that
E |N e (t) − t| r ≤ 2 r−1 E N o (t − S e ) + − t − S e + r I(S e < t)(70)+2 r−1 E |I S e ≤ t − t − (t − S e ) + | r .(71)
We now bound the term E N o (t − S e ) + − t − S e + r I(S e < t) appearing in (70), which equals
t−1 0 E |N o t − s − t − s | r f S e (s)ds + t t−1 E |N o t − s − t − s | r f S e (s)ds.(72)
Lemma 20 and Markov's inequality (after raising both sides to the rth power), combined with our assumptions on r and t, implies that the first summand of (72) is at most Combining the above, we find that (70) is at most
1.5 × E[S 2 ] + 1 r × 120 r × r 1.5r + .4 × (E[S 2 ] + 1) × 60 r × r r × (E[S r ] + 1) × t−1 0 (t − s) r 2 f S e (s)2 r−1 × 1.5 × E[S 2 ] + 1 r × 120 r × r 1.5r + .4 × (E[S 2 ] + 1) × 60 r × r r × (E[S r ] + 1) × t r 2 + 2 r−1 × 4.4 × 4E[S 2 ]r r ≤ .76 × E[S 2 ] + 1 r × 240 r × r 1.5r + .2 × (E[S 2 ] + 1) × 120 r × r r × (E[S r ] + 1) × t r 2 .(73)
We now bound (71), which is at most
2 2r−2 1 + E | t − (t − S e ) + | r ≤ 2 2r−2 1 + t 0 s r f S e (s)ds + ∞ t t r f S e (s)ds .(74)
It follows from the basic properties of the equilibrium distribution and Markov's inequality that for all s ≥ 0,
f S e (s) = P(S > s) ≤ E[S r ]s −r .
Thus the term t 0 s r f S e (s)ds + ∞ t t r f S e (s)ds appearing in (74) is at most
t 0 s r E[S r ]s −r ds + t r ∞ t E[S r ]s −r ds = E[S r ] t 0 ds + t r ∞ t s −r ds = E[S r ] t + t r (r − 1) −1 t 1−r ≤ 2E[S r ]t.(75)
Using (73) to bound (70), and (75) and (74) to bound (71), and combining with some straightforward algebra completes the proof. We note that the bounds of Lemma 26, in particular the r r 2 scaling, are tight even in the i.i.d. case (Ren and Liang [115]). Combining with Corollary 13 and some straightforward algebra completes the proof.
8.13. Proof of Lemma 6.
We will prove Lemma 6 by breaking up the term n i=1 N e,i (t) in such a manner that our bounds for moments of sums, such as Lemma 26, can be applied to achieve the desired 1 1−ρ scaling. This requires us to show that E n i=1 N e,i (t) − nt r scales (jointly in n, t) as (nt) r 2 when t may be near 0, and may also scale non-trivially with n. Note that our analysis for the t ≥ 1 case used heavily results for the scaling of the higher central moments of renewal processes for t ≥ 1, quantifying explicitly the asymptotic scaling for large t, and those results are no longer applicable when t is very small. Instead, we will proceed as follows. We rewrite E N l (t) − t .
The two essential properties are the following. First, n 1 (t) scales roughly as nt, which means that if we apply Lemma 26 to n 1 (t) m=1 n 2 (t) l=1 N (m−1)n 2 (t)+l (t) − t by thinking of each n 2 (t) l=1 N (m−1)n 2 (t)+l (t) − t term as its own r.v. Y m (and thus thinking of the overall sum as n 1 (t) m=1 Y m with {Y m , m = 1, . . . , n 1 (t)} i.i.d.), Lemma 26 would yield a (nt) r 2 scaling as long as we could sufficiently control the moments of each Y m . Second, n 2 (t) scales roughly as 1 t , which will allow us to think of each such Y m term as the sum of 1 t independent terms each of which is 0 with probability roughly 1 − t, and some modest value with probability t, since an equilibrium renewal process over a small interval t has no events with probability roughly 1 − t. This intuition will allow us to tightly bound the moments of each Y m roughly by thinking of Y m as a modified Binomial r.v.
(with mean which can be bounded independent of n, t), in such a way that the higher moments of each Y m will not scale with nt. By showing the remainder term k l=n 1 (t)n 2 (t)+1 N l (t) − t consists of so few terms that its moments can also be sufficiently bounded, combining all of the above will yield the desired (nt) r 2 scaling.
To implement the above approach and prove the desired (nt) r 2 scaling and Lemma 6, we first prove a bound for E k i=1 N e,i (t) − kt r which will be applicable for the inner terms of the aforementioned double sum, for which k scales roughly as 1 t . As mentioned above, the proof proceeds by interpreting k i=1 N e,i (t) as a modified binomial random variable. To make the overall proof of Lemma 6 more readable, we defer the proof to the end of this section, here only stating the relevant result.
Lemma 27. Suppose that E[S] = 1. Then for all k ≥ 1, r ≥ 2, t ∈ [0, min( 2 k , 1)],
E k i=1 N e,i (t) − kt r ≤ 9.1 × E[S 2 ] + 1 r × 4 r × r 2r .(76)
Proof of Lemma 6: Let n 1 (t) ∆ = ⌊nt⌋. Noting that t ≥ 2 n implies n 1 (t) > 0, in this case we may define n 2 (t) ∆ = ⌊ n n 1 (t) ⌋. Then the left-hand-side of (8) equals N e,l (t) − t r .
As n 1 (t) ≤ nt, it thus follows from Lemma 26 that (78) is at most 2 r−1 × 4.3r 1 2 r × (nt) r 2 × E |N e (t) − t| r .
Noting that |N e (t) − t| is stochastically dominated by N o (1) + 1 for all t ≤ 1 (using the basic relationship between equilibrium and ordinary renewal processes), we may use Corollary 11 to bound E |N e (t) − t| r by 4.4 × 4E[S 2 ]r) r . We conclude that (78) Combining the above with some straightforward algebra completes the proof.
E k i=1 X i r ≤ r r max (kE[X 1 ]) r , kE[X r 1 ] .
We note that the bounds of Berend et al. [15] in fact show that the r r scaling of Lemma 28 can be improved slightly (for large r) to ( r log(r) ) r , but that this ( r log(r) ) r scaling is essentially tight, even for the moments of a binomial distribution (Ahle [3]). For simplicity, and as it will not substantially change the asymptotics of our final bounds since either way a term scaling as r r persists, here we use the simpler bound r r .
Proof of Lemma 27: Since |a − b| r ≤ a r + b r for any a, b ∈ R + , the left-hand-side of (76) is at
most E k i=1 N e,i (t) r + (kt) r .(83)
We now bound the term E
= E E M t i=1 1 + N o,i (t) r |M t ≤ E r r max M t E[1 + N o (t)] r , M t E[ 1 + N o (t) r ] ≤ r r E (M t ) r × E[1 + N o (1)] r + E[M t ]E 1 + N o (1) r ] ≤ r r E (M t ) r × 1 + E[S 2 ] r + r r E[M t ] × 4.4 × 4E[S 2 ]r) r .
It follows from Lemma 28 that
E[M r t ] ≤ r r max (kp t ) r , kp t ,
and E[M t ] = kp t since M t has a binomial distribution. We may combine the above and find that E k i=1 N e,i (t) r is at most
r 2r × 1 + E[S 2 ] r max (kp t ) r , kp t + 4.4 × r r × 4E[S 2 ]r) r × kp t .
Since it follows from the definition of the equilibrium distribution and p t that p t ≤ t (as here we are assuming E[S] = 1), and as kt ≤ 2, the desired result then follows from straightforward algebra.
8.14. Sketch of plausible approach generalizing our results to queueing networks.
It is natural and interesting to ask whether our approach extends to more complex queueing systems, such as queueing networks. Here we explore this question at a somewhat informal / conjectural level, sketching a possible approach, and leaving a formal investigation as an interesting direction for future research. For simplicity, let us restrict our discussion to a tandem system of two n-server queues, one upstream and one downstream, although note that (as we will see) this setting already captures many of the complexities of such an extension. Furthermore, suppose all external arrivals are to the upstream queue, that arrival process is Markovian with rate λ, and all service times are i.i.d. with mean one. In this setting, the natural extension of our approach would be to consider a modified system in which an extra arrival is added to the upstream queue whenever a server would otherwise have gone idle in the upstream system, and an extra arrival is added to the downstream queue whenever a server would otherwise have gone idle in the downstream system. It seems likely our approach could be extended to this setting to yield bounds in terms of certain suprema of processes which are the difference of pooled renewal processes, or for general networks the splitting and merging of appropriate renewal processes. Intuitively, the "input" at certain queues would now be the splitting and merging of pooled renewal processes representing the departures from other queues. The relevant monotonocities necessary for such a modification to yield upper bounds (on e.g. the total number in system) should follow from known results for stochastic comparison of queueing networks (Shanthikumar et al. [123], Chen et al. [32]).
However, a naive implementation of such a bounding methodology does not work in the network setting, as we now explain. In particular, our bounding methodology would cause the departure process from the upstream system to become the pooling of n renewal processes. However, in general this will drive the downstream system into instability. Indeed, the SLLN for renewal processes implies that the long-run rate at which work departs the upstream system (and heads to the downstream system) under the modifications required for our approach will be n, not λ as in the original system, which will overload (critically load, to be more precise) the downstream system.
We note that this "overloading phenomena" only arises in the network setting, since when there is a single multi-server queue the "extra arrivals" only occur when a server would have anyways gone idle. Alternatively, in the network setting, there is not "sufficient coordination" between the "extra arrivals" at different queues to prevent instability.
Perhaps surprisingly, this issue is not insurmountable. As explored in Chang et al. [27,28] for networks of single-server queues, a viable approach to overcome this problem is as follows.
First, one "slows down" the service times at the upstream station (e.g. by simply multiplying all service times at the first station by some constant inflation factor greater than one), in such a way that stability at the upstream station is maintained. Then, on this modified system (in which service times at the upstream station are now stochastically larger than at the downstream system), one implements our approach. With the upstream station services "slowed down", the departure process from the upstream station (under the modifications required for our approach) will no longer induce instability at the downstream station. Interestingly, it is shown in Chang et al. [27,28] that for a broad class of single-server queueing networks, it is always possible to implement such an approach (i.e. slowing down service times at each queue by an appropriate factor) such that under this construction stability is maintained for the overall network. Although those works considered networks of single-server queues, the general methodology seems likely to directly extend to multi-server queues. Furthermore, such a transformation will again lead to an upper bound, using the same standard results for comparison of queueing networks (Shanthikumar et al. [123], Chen et al. [32]). However, those works only show that such a transformation can be implemented to preserve stability, without studying how this would effect the scaling of queue lengths.
We conjecture that such an approach can indeed be implemented to yield general and explicit bounds with an appropriate analogue of 1 1−ρ scaling for a broad range of queuing networks. Although we are not aware of any simple and explicit analogues of Kingman's bound for queueing networks conjectured in the literature, we note that past work on heavy-traffic in queueing networks suggests that the number in queue at each station i should scale as 1 1−ρ i with ρ i the effective traffic intensity at that station (as dictated by the so-called traffic equations, see e.g. Reiman [113], Mandelbaum et al. [102], Gamarnik and Zeevi [52], Dai et al. [36]). For example, for the simple 2-queue tandem queue described above, such a scaling can be acheived by "slowing down" service times at the upstream station by multiplying service times at that station by n λ . Under such a transformation, the upstream station becomes an n-server queueing system with arrival rate λ ′ = λ and service rate µ ′ = λ n (with additional arrivals as appropriate when a server would go idle), and the downstream station becomes an n-server queue with arrival rate nµ ′ (coming from the departure process at the upstream station) and service rate 1 (again with additional arrivals when servers would go idle).
Thus both the upstream and downstream queues effectively become n-server queues with traffic intensity λ n √ λ n = n √ λ n n = λ n , with extra arrivals when a server would otherwise go idle. But as the effective traffic intensity at both stations in the original system is easily seen to be λ n , and as it is easily verified that 1 1− √ λ n ≤ 2 × 1 1− λ n for all n ≥ 0 and λ ∈ (0, n), we find that such a transformation indeed preserves the desired 1 1−ρ scaling at each station in an appropriate sense. We leave a formal investigation along these lines as an interesting direction for future research, and point out that for more general (e.g. multi-class) queueing networks questions of stability in networks can indeed be quite subtle (Dai et al. [36]).
of state-of-the-art. In summary, the question of whether a multi-server analogue of Kingman's bound exists remains an open problem despite over 50 years of research.
{N o,i (t), i ≥ 1} and A e (t) the respective number of renewals in [0, t]. We also let N e be a separate independent equilibrium renewal process with renewals distributed as S, with N e (t) the corresponding number of renewals in [0, t].
Observation 1 (If S is not very heavy tailed then moment sequence scales as r O(r) ) Suppose there exist a, b > 0 and c ≤ 1 s.t. P (Sµ S > x) ≤ a exp(−bx c ) for all x > 0. Then E[(Sµ S ) r ] ≤ 1.5 × a × (bc) − 1 c r × r 1 c r for all r ≥ 2. It follows that as long as there exists α > 0 s.t. the tail of Sµ S decays (asymptotically) at least as fast as exp − x α ), then there will exist
analogous results (with different tail decay properties) could be derived under any growth rate assumption on the relevant moment sequence. In addition, we note that these results regarding the s.s.p.d. (as well as our other results for the s.s.p.d.), when applied to queues in the Halfin-Whitt regime, imply progresss on an open question posed in Chawla et al.
Corollary 4 .
4For each r > 2.5, there exists a finite constant c r (depending only on r) s.t. for
Theorem 5 .
5For each r > 2.5, let c r denote the infimum of all constants c r for which the bound of Corollary 4 holds. Then there exists an absolute constant ǫ > 0 s.t. c r ≥ ǫ × r 1 9 r for all r > 16.
often inherited from other results in the literature. We also show in Section 8.5 of the supplemental appendix that the relevant prefactors in these intermediate bounds must indeed have a r Ω(r) scaling.
Conjecture 1 .
1In any M/G/n queue in which E[S 2 ] < ∞ and all relevant steady-state distributions exist, it holds that
N e,i (t)| r as needed in the conditional bounds (by making completely explicit, and enhancing, an approach to bounding centered renewal processes sketched in Gut[69]).7. Combine all of the above to yield the desired result .
-level outline of proofs of other results. Our bounds for the s.s.p.d. appearing
4. 2 .
2Proof of our main results : bounds for L(∞) in Theorems 1 -2 . We now prove our main results, the bounds for the tail of L(∞) in Theorems 1 -2, by implementing the proof outlined in Section 4.1 above. 4.2.1. 1. : Use stochastic comparison results of Gamarnik and Goldberg [49] to bound P L(∞) ≥ x by
bound(6). In this section, we prove that IF one could suitably bound E[ nt − n i=1 N e,i (t)| r (for some r > 2 and all t), THEN one could bound (6) as required (using known maximal inequalities) We defer all proofs to the technical appendix Section 7.2, and instead simply state the most relevant result. We restrict to the setting E[S] = 1, which suffices since one can derive the general case by simply rescaling time (i.e. multiplying both the service and inter-arrival times by µ S ). This follows from the fact that such a rescaling does not change the distribution of Q(∞), and only impacts the proven bounds by replacing terms of the form E[S k ] by E[(Sµ S ) k ], leaving all other quantities unchanged (as E[(Aµ A ) k ] and ρ are unchanged by such a rescaling).
of our main results, bounds for L(∞) in Theorems 1 -2. In this section, we prove our main results, the first part of Theorems 1 -2 in which P L(∞) the conditional bounds of Theorem 10. Proof of the bounds for L(∞) in Theorems 1 -2 : First, we again note that it suffices to prove the result for the case E[S] = 1, by a simple rescaling argument (in which S is replaced by Sµ S ).
E[S 2
2], E[S r ] ≥ 1 and some straightforward algebra, completes the proof of our main results, the bounds for P L(∞) ≥ x 1−ρ appearing in Theorems 1 -2.5. Proofs of the bounds for the s.s.p.d. in Theorems 1 -2 and number of busy servers in Theorem 4. In this section we complete the bounds for the s.s.p.d. in the proofs of Theorems 1 -2, and for the number of busy servers in Theorem 4. Note that the stochastic comparison result Lemma 1, upon which our entire analysis is premised, can only provide trivial bounds for the s.s.p.d. (i.e. plugging in x = 0 yields a trivial bound of 1), and similarly cannot be used to bound the number of busy servers.
5. 1 .
1Proofs of the bounds for the s.s.p.d. in Theorems 1 -2.
Corollary 5 .
5Under the same assumptions as Lemma 7, and supposing also E[S] = 1, it holds that
2 proves the desired result for all n ≥ 5 s.t. ρ ≤ 1 − 4 n . Noting that if either n ≤ 4, or n ≥ 5 and ρ > 1 − 4 n , then all relevant bounds are at least one (and hence hold for the s.s.p.d.) completes the proof of the bounds for the s.s.p.d. of Theorems 1 -2.
5. 2 .
2Proof of bound for the number of busy servers in Theorem 4. Again suppose E[S] = 1. Similar to our logic in bounding the s.s.p.d, let
(analogous to Kingman's bound for single-server queues), assuming only that E[A 2 ] < ∞ and E[S 2+ǫ ] < ∞ for some ǫ > 0. Our main results are bounds for the tail of the steady-state queue length and the steady-state probability of delay. The strength of our bounds (e.g. in the form of tail decay rate) is a function of how many moments of the service distribution are assumed finite,
•
The necessity of r Ω(r) prefactors in our bounds (as well as what numerical constants are actually required) remains an open question, as does the question of whether tighter bounds (with smaller prefactors, possibly of a qualitatively different nature) can be derived (possibly under additional assumptions on A, S and/or using a different proof technique).
•
Kingman's bound for single-server queues requires only that E[S 2 ] < ∞, while our bounds require E[S 2+ǫ ] < ∞ for some ǫ > 0. Closing this gap remains an interesting open question.• Conjecture 1, which would imply a very simple bound for the expected queue length in M/GI/n queues, remains a very interesting open question.
Lemma 10 .1
10Under the same assumptions asLemma 9, and supposing in addition that E[Z 2 1 ] < ∞, it holds that E exp 1(c − Z 1 ) ≤ 1. Namely, one can take θ = ] scales as Ω(1 − c) as c ↑ 1, and as Ω( 1 c ) as c ↓ 0. With Lemmas 9 and 10 in hand, we now complete the proof of the desired result Lemma 3.Proof of Lemma 3:The result follows by applying Lemmas 9 and 10 with c
we prove a conditional result that if the moments of |nt − n i=1 N e,i (t)| can be bounded as in the conditions of Lemma 4, then the supremum of nt − n i=1 N e,i (t) over both sets of consecutive integers and intervals of size at most one can indeed by suitably controlled.
Corollary 8 .
8Under the same assumptions and definitions as Theorem 3, and supposing in addition that ρ ≤ 1 − Bn − 1 2 for some B > 0, the following holds.
These results regarding the s.s.p.d. (as well as the other results for the s.s.p.d. in the Halfin-Whitt regime implied by our main results) make progress on an open question posed in Chawla et al. [30] related to distribution-independent bounds for the s.s.p.d. of multi-server queues in the Halfin-Whitt regime. Indeed, those authors pose the question of whether the s.s.p.d. scales as exp − Ω(B 2 ) for general service time distributions in the Halfin-Whitt regime. Our results imply a bound of exp − Ω(B α ) for some α ∈ (0, 1) for quite general service time distributions, thus representing partial progress, but falling short of the exp − Ω(B 2 ) scaling. The question in its original form remains an interesting open problem, and we refer the reader to Goldberg [55] for further related progress on this problem. 8.3. Proof of Observation 1 and Theorem 3. Proof of Observation 1 For simplicity (and as noted without loss of generality by a simple rescaling argument), suppose E[S] = 1. Suppose P (S > x) ≤ a exp(−bx c ) for all x > 0. Then by the tail integral form for higher moments, see e.
. 8 . 5 .
85Proofs of Corollary 4 and Theorem 5, and further discussion of prefactors and r Ω(r) scaling. 8.5.1. Proof of Corollary 4. First, let us complete the proof of Corollary 4, a natural implication of our main results very similar to the types of bounds which appeared in an earlier version of this manuscript (Goldberg [58]).
Lemma 16 (
16Kingman [92]). Suppose that : (1) n = 1;(2) there exist θ * > 0 s.t. E exp θ * (S − A) = 1; and (3) there exists a finite constant B s.t. P (S ≤ B) = 1. Then for all x > 0,
[0, 2 ]
2, and S is distributed uniformly on [0, 1]. It is easily verified that there exists a unique strictly positive θ * ∼ 2.851 s.t. E exp θ * (S − A) = 1. It follows from standard queueing results appearing in e.g. Asmussen [7] that Work(∞), W (∞), and Q(∞) exist; and for all x > 0, P Work(∞) > x = P W (∞) + S − R(A) > x , with W (∞), S, R(A) independent r.v.s. Using the fact that P (S ≤ 1) = 1 and P (A ≤ 2) = 1, it easily follows that for all x > 0 : (1) P Work(∞) ≥ x ≥ P W (∞) ≥ x + 2 ;
Combining the above lower and upper bounds for E[L_4^r(∞)], it follows that r^4 + c_r × 2 × (8/3)^r ≥ exp(−14.3) × 2.2^{−r} × r^{r/8}. A straightforward contradiction argument then implies that there exists ε > 0 s.t. c_r ≥ ε × r^{r/9} for all r > 16, which completes the proof.
8.5.3. Where in our proofs does the r^{Ω(r)} scaling arise? We now comment explicitly on where in our proofs the r^{Ω(r)} scaling arises. Although our proofs involve several moving parts and multiple bounds which are composed together, the vast majority of those bounds do not actually contribute terms scaling as r^{Ω(r)}. Essentially all aspects of our proof which contribute to the r^{Ω(r)} scaling
Corollary 9. For each r > 2, there exists a finite constant c_r (depending only on r) s.t. for all integers n ≥ 1, all t ≥ 1, and all S s.t. E[S] = 1 and E[S^r] < ∞,
Lemma 17. Let c_r denote the infimum of all constants c_r for which the bound of Corollary 9 holds. Then there exists an absolute constant ε > 0 s.t. c_r ≥ ε × r^{r/2} for all r > 16.
t ≥ 0. The positive Harris recurrence proven in Dai et al. [34], along with standard implications of positive Harris recurrence for countable-state continuous-time Markov chains, implies that lim_{t→∞} t^{−1} E[Aban(t)] = θ E[L_a^n(∞)].
Thus we find that the conditions of Lemma 18 are met with L = k, {X_l, 1 ≤ l ≤ L} = { n − Σ_{i=1}^n (N_{e,i}(l) − N_{e,i}(l − 1)), l = 1, . . . , k }, an appropriate constant C, ν = r, and γ = s, and the desired result follows. 8.11.3. Proof of Lemma 13.
8.12. Proof of Lemma 5. Our proof of Lemma 5 proceeds in several steps. First, we bound the rth central moment of N_o(t).
with the fact that the conditions of Corollary 10 are satisfied with X_i = (S_i − 1) I{N_o(t) + 1 ≥ i} and F_i = σ(S_1, . . . , S_i). We then decompose (Σ_{i=1}^n N_{e,i}(t) − nt)^r as a double-sum (plus remainder term), in such a way that two essential properties hold. Let n_1(t)
the term E[(Σ_{i=1}^k N_{e,i}(t))^r] appearing in (83). Let {B_i, i ≥ 1} denote a sequence of i.i.d. Bernoulli r.v.s s.t. P(B_i = 1) = p_t ≜ P(R(S) ≤ t) and P(B_i = 0) = 1 − p_t. Note that we may construct {N_{e,i}(t), i ≥ 1}, {N_{o,i}(t), i ≥ 1}, {B_i, i ≥ 1} on the same probability space s.t. w.p.1 N_{e,i}(t) ≤ B_i (1 + N_{o,i}(t)) for all i ≥ 1, with {N_{o,i}(t), i ≥ 1} and {B_i, i ≥ 1} mutually independent. Let M_t ≜ Σ_{i=1}^k B_i, i.e. M_t is the corresponding binomially distributed r.v. Then it follows from Lemma 28, Corollary 12, Lemma 25, the fact that t ≤ 1, and Jensen's inequality that E[(Σ_{i=1}^k N_{e,i}(t))^r] ≤ E[(Σ_{i=1}^{M_t} (1 + N_{o,i}(t)))^r]
4.2.6. Step 6: Prove that one can indeed bound E[|nt − Σ_{i=1}^n N_{e,i}(t)|^r] as needed to apply the conditional bounds of Lemma 4 and Theorem 10. We now show that one can indeed appropriately bound the central moments of Σ_{i=1}^n N_{e,i}(t) s.t. the conditional bounds of Lemma 4 and Theorem 10 can be applied. We state two results, one for t ≥ 1, and one for t ∈ [2/n, 1]. In both cases, we defer the proofs to the supplemental appendix Sections 8.12 and 8.13. Lemma 5. Suppose that E[S] = 1, and E[S^r] < ∞ for some r ≥ 2. Then for all n ≥ 1 and t ≥ 1,
we conclude the following bound for the tail of the number of busy servers (by applying Lemma 7). Corollary 6. Under the same assumptions as Lemma 7, for all
As in the proof of our bounds for the s.s.p.d., we can still apply the tail bounds for the queue length of Theorem 2 with these different parameters to complete the proof of our bound for the number of busy servers. Proof of Theorem 4: Let x′′ = (x/2)√μ_A − 1 and n′′ = ⌈μ_A + (x/2)√μ_A⌉. First, let us point out that requiring ρ ∈ [1/n, 1 − 2/n] may be seen to imply two properties (after some straightforward algebra): (1) 2(n − μ_A − 1)/√μ_A ≥ (1/2)(n − μ_A)/√μ_A, and hence Corollary 6 is applicable to all x in the range [4, (1/2) min(√μ_A, (n − μ_A)/√μ_A)]
follow from standard martingale techniques, i.e. looking at an appropriate exponential martingale and applying a maximal inequality for martingales. We state the relevant result in terms of a more general supremum sup_{k≥0}(c × k − Σ_{i=1}^k Z_i) for c ∈ (0, 1) and {Z_k, k ≥ 1} an i.i.d. sequence of non-negative mean one r.v.s.
Lemma 9 (Kingman [90]). Suppose {Z_i, i ≥ 1} is an i.i.d. sequence of non-negative r.v.s with E[Z_1] = 1, and c ∈ (0, 1) is some constant. Suppose E[exp(θ(c − Z_1))] ≤ 1 for some θ > 0. Then P( sup_{k≥0}(c × k − Σ_{i=1}^k Z_i) ≥ x ) ≤ exp(−θx) for all x > 0.
Suppose that 0 < E[A], E[S] < ∞. Then for all t, x > 0,
Since the relevant ratio is at most 1 in all cases, it follows that the desired bound holds for all x, completing the proof. We note that in our actual result we have presented a slightly weaker bound so we do not need to define different constants in the statement of our result for the queue length and the s.s.p.d., and plugged in the bound implied by our assumptions for E[(Sμ_S)²]. The proof for the s.s.p.d. follows by applying a nearly identical argument to the slightly different bound (implied by Theorem 2 for the s.s.p.d.) inf_{r>2.5}
8.5.2. Proof of Theorem 5. The intuition of our proof is quite simple: we will show that for single-server queues in which A and S are uniformly bounded, E[A^r] and E[S^r] grow only as exp(O(r)), while E[L^r(∞)] grows as r^{Ω(r)} (essentially inheriting the moment growth of an exponential tail).
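The mechanism behind the r^{Ω(r)} growth is the standard moment growth of exponential tails; for instance (a one-line illustration):
\[
  P(L > x) = e^{-\theta x} \;\Longrightarrow\;
  \mathbb{E}[L^r] = \int_0^\infty r x^{r-1} e^{-\theta x}\,dx
  = \theta^{-r}\,\Gamma(r+1) = r^{\Omega(r)} \quad (r \to \infty),
\]
while P(A ≤ B) = P(S ≤ B) = 1 forces E[A^r], E[S^r] ≤ B^r = exp(O(r)).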
With Lemma 19 in hand, we now complete the proof of Lemma 18. Proof of Lemma 18: As the conditions of Lemma 19 and Lemma 18 are identical, it suffices to prove that ν ≥ γ (along with the other assumptions of Lemma 19) implies the desired bound. Although the literature presents several related bounds, they all seem to ultimately yield bounds scaling similar to those of Lemma 18 in our setting. We leave it as an interesting open question whether bounds which exhibit significantly tighter scaling (e.g. as r ↓ 2 and r ↑ ∞) are possible for such maximal inequalities in our setting.
Lemma 20. Suppose that E[S] = 1, and that E[S^r] < ∞ for some r ≥ 2. Then for all t ≥ 1, E[|N_o(t) − t|^r] is at most 1.5 × (E[S²] + 1)^r × 120^r × r^{1.5r} + .4 × (E[S²] + 1) × 60^r × r^r × (E[S^r] + 1) × t^{r/2}.
8.12.1. Preliminary results for the proof of Lemma 20. Before proving Lemma 20, let us prove some preliminary technical results. First, we recall the celebrated Burkholder-Rosenthal inequality for bounding the moments of a martingale. We state a particular variant given in Hitczenko [75]. Lemma 21 (Burkholder-Rosenthal inequality, Hitczenko [75]). Let {X_i, i ≥ 1} be a martingale difference sequence w.r.t. the filtration {F_i, i ≥ 0}. Namely, we have that {X_i, i ≥ 1} is adapted to {F_i, i ≥ 0}; E[|X_i|] < ∞ for all i ≥ 1; and E[X_i | F_{i−1}] = 0 for all i ≥ 1. Suppose also that {Σ_{i=1}^n X_i, n ≥ 1} converges a.s. to a limiting r.v. which we denote Σ_{i=1}^∞ X_i. Then for all r ≥ 2,
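For orientation, the Burkholder-Rosenthal inequality in its commonly cited shape reads as follows (a standard form with an unspecified finite constant C_r; Hitczenko's variant concerns the best possible growth of C_r in r, which we do not reproduce here):
\[
  \mathbb{E}\Big|\sum_{i=1}^{\infty} X_i\Big|^r
  \;\le\; C_r \left( \mathbb{E}\Big(\sum_{i=1}^{\infty} \mathbb{E}[X_i^2 \mid \mathcal{F}_{i-1}]\Big)^{r/2}
  + \mathbb{E}\sup_{i \ge 1} |X_i|^r \right).
\]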
Lemma 22 (Gut [69], Equation 5.11). For all r ≥ 1 and t ≥ 1,
and the fact that Jensen's inequality implies E[exp(−θS)]^{−1} ≤ exp(θ) since E[S] = 1. Combining with the fact that Γ(1 + x) ≤ 1.5x^x for all x ≥ 0 (which follows from the bounds of Batir [11] Theorem 2.3) completes the proof. To bound those terms involving θ in Lemma 23, we now prove the following result, which allows us to bound those terms (for an appropriate choice of θ) purely in terms of the moments of S. The proof uses a similar strategy to that used in our proof of Lemma 10.
Lemma 24. Suppose that E[S] = 1, and that E[S²] < ∞. Then for θ = (2E[S²])^{−1}, (1 − E[exp(−θS)])^{−1} ≤ 4E[S²].
Proof: Note that for all θ > 0, w.p.1, exp(θS) ≥ 1 + θS (by the exponential inequality), and hence exp(−θS) ≤ 1/(1 + θS) ≤ 1 − θS + (θS)². Taking expectations and using E[S] = 1 then yields E[exp(−θS)] ≤ 1 − θ + θ²E[S²], which for θ = (2E[S²])^{−1} equals 1 − (4E[S²])^{−1}, and the claim follows.
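A quick Monte Carlo illustration of Lemma 24 (a sketch; the distributions below are chosen arbitrarily, each normalized to E[S] = 1):

# Check (1 - E[exp(-theta*S)])^{-1} <= 4*E[S^2] with theta = 1/(2*E[S^2]).
import numpy as np

rng = np.random.default_rng(1)
samples = {
    "Exp(1)":       rng.exponential(1.0, 10**6),
    "Unif[0,2]":    rng.uniform(0.0, 2.0, 10**6),
    "LogN(mean 1)": rng.lognormal(-0.5, 1.0, 10**6),  # E[S] = 1
}
for name, S in samples.items():
    m2 = np.mean(S**2)
    theta = 1.0 / (2.0 * m2)
    lhs = 1.0 / (1.0 - np.mean(np.exp(-theta * S)))
    print(f"{name}: lhs={lhs:.2f}  4*E[S^2]={4*m2:.2f}")
# In each case lhs <= 4*E[S^2], as the lemma asserts.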
Corollary 12. Suppose that E[S] = 1. Then for all t ≥ 1, E[N_o(t) + 1] ≤ (1 + E[S²])t.
8.12.2. Proof of Lemma 20. We now complete the proof of Lemma 20. Proof of Lemma 20: By definition (as is well-known), N_o(t) + 1 = min{n ≥ 1 : Σ_{i=1}^n S_i > t} is a stopping time w.r.t. the natural filtration generated by {S_i, i ≥ 1}. By the triangle inequality, w.p.1
is at most E[N_o(t) + 1] E[S^r], also providing a bound for (61). Again using Corollary 12 to bound E[N_o(t) + 1], applying (68) to bound (63), and combining with some straightforward algebra completes the proof.
8.12.3. Extend Lemma 20 to the corresponding equilibrium renewal process. We now extend Lemma 20 to the corresponding equilibrium renewal process. We note that given the results of Lemma 20, such an extension follows nearly identically to the proof of Lemma 8 of Gamarnik and Goldberg [49], although we include a self-contained proof for completeness. Corollary 13. Suppose that E[S] = 1, and E[S^r] < ∞ for some r ≥ 2. Then for all t ≥ 1,
E[|N_e(t) − t|^r] ≤ 1.5 × (E[S²] + 1)^r × 120^r × r^{1.5r} + .4 × (E[S²] + 1) × 60^r × r^r × (E[S^r] + 1) × t^{r/2}. In the proof, the first summand of (72) is bounded via Lemma 20 and the fact that f_{S_e} is a probability density: ∫_0^{t−1} E[|N_o(t − s) − (t − s)|^r] f_{S_e}(s) ds ≤ 1.5 × (E[S²] + 1)^r × 120^r × r^{1.5r} + .4 × (E[S²] + 1) × 60^r × r^r × (E[S^r] + 1) × t^{r/2}. Since t − s ≤ 1 implies w.p.1 |N_o(t − s) − (t − s)|^r ≤ (N_o(1) + 1)^r, it follows from Corollary 11 that the second summand of (72) is at most 4.4 × (4E[S²]r)^r.
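The first-moment renewal bound of Corollary 12 used above is easy to probe by simulation (a rough Monte Carlo sketch; N_o(t) denotes the ordinary renewal count):

# Monte Carlo check of E[N_o(t) + 1] <= (1 + E[S^2]) * t for t >= 1,
# where N_o(t) = max{n : S_1 + ... + S_n <= t}, here with S ~ Exp(1).
import numpy as np

rng = np.random.default_rng(2)

def renewal_count(t, nmax=200):
    cs = np.cumsum(rng.exponential(1.0, nmax))  # renewal epochs
    assert cs[-1] > t, "increase nmax"
    return int(np.searchsorted(cs, t, side="right"))

t, reps = 5.0, 20_000
mean_count = np.mean([renewal_count(t) for _ in range(reps)])
print(mean_count + 1, "<=", (1 + 2.0) * t)  # ~6.0 <= 15.0, since E[S^2] = 2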
8.12.4. Result from literature to convert bounds for moments of zero-mean r.v.s to bounds for moments of sums of zero-mean r.v.s. Before completing the proof of Lemma 5, we recall the celebrated Marcinkiewicz-Zygmund inequality, a close relative of the Rosenthal inequality. The precise result which we will use follows immediately from Ren and Liang [115] Theorem 2, and we refer the interested reader to Figiel et al. [47] for a further overview of related results. We note that for several results which we will state, it is not required that the r.v.s be identically distributed, although we only state the results for that setting.
Lemma 26 (Ren and Liang [115] Theorem 2). Suppose that for some r ≥ 2, {X_i, i ≥ 1} is a collection of i.i.d. zero-mean r.v.s s.t. E[|X_1|^r] < ∞. Then for all k ≥ 1, E[|Σ_{i=1}^k X_i|^r] ≤ (4.3 r^{1/2})^r × E[|X_1|^r] × k^{r/2}.
is at most 2.2 × (35E[S²])^r × r^{1.5r} × (nt)^{r/2}.
8.13.1. Proof of Lemma 27. We now complete the proof of Lemma 27. First, let us state a bound for the uncentered moments of sums of i.i.d. non-negative random variables from the literature, which implies a simple and explicit bound on the moments of a binomially distributed r.v. The result follows immediately from the results of Berend et al. [15]. Lemma 28 (Berend et al. [15]). Suppose that for some r ≥ 2, {X_i, i ≥ 1} is a collection of i.i.d. non-negative r.v.s s.t. E[X_1^r] < ∞. Then for all k ≥ 1,
We refer the reader to Goldberg [55] for further progress on asymptotic bounds for the s.s.p.d. In the previous version of Goldberg [55] and the present manuscript, a version of Lemma 7 and Corollary 5 instead appeared in Goldberg
We note that the work Goldberg [59] is restricted to the Halfin-Whitt scaling regime, and does not yield explicit non-asymptotic bounds. We also note that the analysis of Goldberg [57] for multi-server queues with heavy tails (in the Halfin-Whitt regime) relies heavily on the bounds in the present manuscript, combined with a novel analysis of heavy-tailed renewal processes.
Acknowledgements. The authors gratefully acknowledge support from NSF grant no. 1333457, as well as the anonymous referees for very thoughtful feedback, and helpful conversations with Mor Harchol-Balter, Isaac Grosof, Jamol Pender, Alan Scheller-Wolf, Ziv Scully, and Weina Wang.
We now bound (77). First, let us apply Lemma 26 to conclude a bound for (77). Next, we show that we may apply Lemma 27 to E[(Σ_{l=1}^{n_2(t)} (N_{e,l}(t) − t))^r], by arguing that t ≤ 2/n_2(t). Since t ≥ 2/n implies nt ≥ 2, and g(z) ≜ z/(z − 1) is a decreasing function of z on (1, ∞), it follows from (80) that t n_2(t) ≤ 2. Thus we may apply Lemma 27 (with k = n_2(t), along with the facts that n_1(t) ≤ nt, t n_2(t) ≤ 2, and t ≤ 1) to conclude a bound for (77). We now bound (78). Note that the sum Σ_{l=n_1(t)n_2(t)+1}^n (N_l(t) − t) appearing in (78) is taken over n − n_1(t)n_2(t) terms. Furthermore, n − n_1(t)n_2(t) = n − n_1(t)⌊n/n_1(t)⌋ ≤ n − n_1(t)(n/n_1(t) − 1) = n_1(t).
Waiting-time tail probabilities in queues with long-tail service-time distributions. J Abate, L C Gagan, W Whitt, Queueing systems. 16Abate, J., L. C. Gagan, W. Whitt. "Waiting-time tail probabilities in queues with long-tail service-time distributions." Queueing systems 16 (1994): 311-338.
The limit of stationary distributions of many-server queues in the Halfin-Whitt regime. R Aghajani, K Ramanan, Mathematics of Operations Research. 453Aghajani, R., K. Ramanan. "The limit of stationary distributions of many-server queues in the Halfin-Whitt regime." Mathematics of Operations Research 45, no. 3 (2020): 1016-1055.
Sharp and simple bounds for the raw moments of the binomial and Poisson distributions. T Ahle, Statistics and Probability Letters. 182109306Ahle, T. "Sharp and simple bounds for the raw moments of the binomial and Poisson distributions." Statistics and Probability Letters 182 (2022): 109306.
Probability, statistics, and queueing theory. A O Allen, Academic PressAllen, A.O. Probability, statistics, and queueing theory. Academic Press, 2014.
Approximating many server queues by means of single server queues. E Arjas, T Lehtonen, Mathematics of Operations Research. 3Arjas, E., T. Lehtonen. "Approximating many server queues by means of single server queues." Mathe- matics of Operations Research 3.3 (1978): 205-223.
Fitting phase-type distributions via the EM algorithm. S Asmussen, M O. Nerman, Olsson, Scandinavian Journal of Statistics. Asmussen, S., O. Nerman, M. Olsson. "Fitting phase-type distributions via the EM algorithm." Scandi- navian Journal of Statistics (1996): 419-441.
Applied probability and queues. S Asmussen, Springer Science and Business Media51Asmussen, S. Applied probability and queues. Vol. 51. Springer Science and Business Media, 2008.
Asymptotically optimal interruptible service policies for scheduling jobs in a diffusion regime with nondegenerate slowdown. R Atar, N Solomon, Queueing Systems. 69Atar, R., N. Solomon. "Asymptotically optimal interruptible service policies for scheduling jobs in a diffusion regime with nondegenerate slowdown." Queueing Systems 69.3 (2011): 217-235.
Elements of queueing theory: Palm Martingale calculus and stochastic recurrences. F Baccelli, P Bremaud, Springer Science and Business Media26Baccelli, F., P. Bremaud. Elements of queueing theory: Palm Martingale calculus and stochastic recur- rences. Vol. 26. Springer Science and Business Media, 2013.
Robust queueing theory. C Bandi, D Bertsimas, N Youssef, Operations Research. 63Bandi, C., D. Bertsimas, N. Youssef. "Robust queueing theory." Operations Research 63.3 (2015): 676- 700.
Bounds for the Gamma Function. N Batır, Results in Mathematics. 72Batır, N. "Bounds for the Gamma Function." Results in Mathematics 72 (2017): 865-874.
Queue length asymptotics for the multiple-server queue with heavy-tailed Weibull service times. M Bazhba, J Blanchet, C Rhee, B Zwart, Queueing Systems. 93Bazhba, M., J. Blanchet, C. Rhee, B. Zwart. "Queue length asymptotics for the multiple-server queue with heavy-tailed Weibull service times." Queueing Systems 93 (2019): 195-226.
Improvements of Stirling's formula by elementary methods. P Beesack, Publikacije Elektrotehničkog fakulteta. Serija Matematika i fizika. 274Beesack, P. "Improvements of Stirling's formula by elementary methods." Publikacije Elektrotehničkog fakulteta. Serija Matematika i fizika 274/301 (1969): 17-21.
Comparisons of multi-server queues with finite waiting rooms. A Berger, W Whitt, Stochastic Models. 8Berger, A., W. Whitt. "Comparisons of multi-server queues with finite waiting rooms." Stochastic Models 8, no. 4 (1992).
Improved bounds on Bell numbers and on moments of sums of random variables. D Berend, T Tassa, Probability and Mathematical Statistics. 302Berend, D., T. Tassa. "Improved bounds on Bell numbers and on moments of sums of random variables." Probability and Mathematical Statistics 30, no. 2 (2010): 185-205.
The distributional Little's law and its applications. D Bertsimas, D Nakazato, Operations Research. 43Bertsimas, D., D. Nakazato. "The distributional Little's law and its applications." Operations Research 43.2 (1995): 298-310.
Stochastic monotonicity properties of multiserver queues with impatient customers. P Bhattacharya, A Ephremides, Journal of Applied Probability. 283Bhattacharya, P., A. Ephremides. "Stochastic monotonicity properties of multiserver queues with impa- tient customers." Journal of Applied Probability 28, no. 3 (1991): 673-682.
Some Limit Theorems in the Theory of Mass Service, II Multiple Channels Systems. A A Borovkov, Theory of Probability and Its Applications. 10Borovkov, A.A. "Some Limit Theorems in the Theory of Mass Service, II Multiple Channels Systems." Theory of Probability and Its Applications 10.3 (1965): 375-400.
Dimensioning large call centers. S Borst, A Mandelbaum, M Reiman, Operations research. 521Borst, S., A. Mandelbaum, M. Reiman. "Dimensioning large call centers." Operations research 52, no. 1 (2004): 17-34.
Stein's method for steady-state diffusion approximations: an introduction through the Erlang-A and Erlang-C models. A Braverman, J G Dai, J Feng, Stochastic Systems. 62Braverman, A., J. G. Dai, and J. Feng. "Stein's method for steady-state diffusion approximations: an introduction through the Erlang-A and Erlang-C models." Stochastic Systems 6, no. 2 (2017): 301-366.
Stein's method for steady-state diffusion approximations of M/P h/n + M systems. A Braverman, J G Dai, Annals of applied probability. 271Braverman, A., J. G. Dai. "Stein's method for steady-state diffusion approximations of M/P h/n + M systems." Annals of applied probability 27(1) : 550 -581.
High order steady-state diffusion approximation of the Erlang-C system. A Braverman, J G Dai, arXiv:1602.02866arXiv preprintBraverman, A., J. G. Dai. "High order steady-state diffusion approximation of the Erlang-C system." arXiv preprint arXiv:1602.02866 (2016).
High order steady-state diffusion approximations. A Braverman, J G Dai, X Fang, arXiv:2012.02824arXiv preprintBraverman, A., J. G. Dai, X. Fang. "High order steady-state diffusion approximations." arXiv preprint arXiv:2012.02824 (2020).
Bounds on the Wait in a GI/M/k Queue. S Brumelle, Management Science. 19Brumelle, S. "Bounds on the Wait in a GI/M/k Queue." Management Science 19.7 (1973): 773-777.
A unified approach to stochastic dominance. S L Brumelle, R G Vickson, Stochastic optimization models in finance. Academic PressBrumelle, S. L., R. G. Vickson. "A unified approach to stochastic dominance." In Stochastic optimization models in finance, pp. 101-113. Academic Press, 1975.
A light-traffic theorem for multi-server queues. D Burman, Donald, Smith, Mathematics of Operations Research. 8Burman, D., Donald. Smith. "A light-traffic theorem for multi-server queues." Mathematics of Opera- tions Research 8, no. 1 (1983): 15-25.
Stability, queue length and delay. I. Deterministic queueing networks. C Chang, [1992] Proceedings of the 31st IEEE Conference on Decision and Control. IEEEChang, C. "Stability, queue length and delay. I. Deterministic queueing networks." In [1992] Proceedings of the 31st IEEE Conference on Decision and Control, pp. 999-1004. IEEE, 1992.
On the stability of open networks: a unified approach by stochastic dominance. C Chang, J Thomas, S Kiang, Queueing systems 15Chang, C., J. Thomas, S. Kiang. "On the stability of open networks: a unified approach by stochastic dominance." Queueing systems 15, no. 1 (1994): 239-260.
Extended renewal theory and moment convergence in Anscombe's theorem. Y Chao, C Hsiung, T Lai, The Annals of Probability. 72Chao, Y., C. Hsiung, T. Lai, "Extended renewal theory and moment convergence in Anscombe's theo- rem." The Annals of Probability 7 (1979), no. 2, 304-318.
Stability of Service under Timeof-Use Pricing. S Chawla, N Devanur, A Holroyd, A Karlin, J Martin, B Sivan, Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing. the 49th Annual ACM SIGACT Symposium on Theory of ComputingChawla, S., N. Devanur, A. Holroyd, A. Karlin, J. Martin, B. Sivan. "Stability of Service under Time- of-Use Pricing." In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pp. 184-197. 2017.
Bounds for the difference between median and mean of gamma and Poisson distributions. J Chen, H Rubin, Statistics and probability letters. 46Chen, J., H. Rubin. "Bounds for the difference between median and mean of gamma and Poisson distributions." Statistics and probability letters 4, no. 6 (1986): 281-283.
H Chen, D Yao, Fundamentals of queueing networks: Performance, asymptotics, and optimization. Springer4Chen, H., D. Yao. Fundamentals of queueing networks: Performance, asymptotics, and optimization. Vol. 4. New York: Springer, 2001.
Many-server diffusion limits for G/Ph/n+ GI queues. J G Dai, S He, T Tezcan, The Annals of Applied Probability. 205Dai, J. G., S. He, T. Tezcan. "Many-server diffusion limits for G/Ph/n+ GI queues." The Annals of Applied Probability 20, no. 5 (2010): 1854-1890.
Validity of heavy-traffic steady-state approximations in many-server queues with abandonment. J G Dai, A B Dieker, X Gao, Queueing Systems. 78Dai, J. G., A. B. Dieker, X. Gao. "Validity of heavy-traffic steady-state approximations in many-server queues with abandonment." Queueing Systems 78.1 (2014): 1-29.
Many-server queues with customer abandonment: Numerical analysis of their diffusion model. J G Dai, S He, Stochastic Systems. 3Dai, J. G., S. He. "Many-server queues with customer abandonment: Numerical analysis of their diffusion model." Stochastic Systems 3.1 (2013): 96-146.
J G Dai, J M Harrison, Processing Networks: Fluid Models and Stability. essing Networks: Fluid Models and StabilityCambridge University PressDai, J. G., J.M. Harrison. Processing Networks: Fluid Models and Stability. Cambridge University Press, 2020.
Inequalities for moments of tails of random variables, with a queueing application. D Daley, Probability Theory and Related Fields. 41Daley, D. "Inequalities for moments of tails of random variables, with a queueing application." Proba- bility Theory and Related Fields 41.2 (1977): 139-143.
Bounds for the variance of certain stationary point processes. D Daley, Stochastic Processes and their Applications. 7Daley, D. "Bounds for the variance of certain stationary point processes." Stochastic Processes and their Applications 7, no. 3 (1978): 255-264.
Tight bounds for the renewal function of a random walk. D Daley, The Annals of Probability. 8Daley, D. "Tight bounds for the renewal function of a random walk." The Annals of Probability 8, no.
Some Comparability Results for Waiting Times in Single-and Many-Server Queues. D Daley, T Rolski, Journal of Applied Probability. 214Daley, D., T. Rolski. "Some Comparability Results for Waiting Times in Single-and Many-Server Queues." Journal of Applied Probability, vol. 21, no. 4, 1984, pp. 887-900.
Light traffic approximations in many-server queues. D Daley, T Rolski, Advances in applied probability. 24Daley, D., T. Rolski. "Light traffic approximations in many-server queues." Advances in applied prob- ability 24, no. 1 (1992): 202-218.
Some results for the mean waiting-time and workload in GI/GI/k queues. D Daley, Frontiers in Queueing: Models and Applications in Science and Engineering. Dshalalow, J.H.Boca Raton, FL, USADaley, D.: Some results for the mean waiting-time and workload in GI/GI/k queues. In: Dshalalow, J.H. (ed.) Frontiers in Queueing: Models and Applications in Science and Engineering, Boca Raton, FL, USA, pp. 35-59 (1997).
Sensitivity to the service-time distribution in the nonstationary Erlang loss model. J L Davis, W Massey, W Whitt, Management Science. 416Davis, J.L., W. Massey, W. Whitt. "Sensitivity to the service-time distribution in the nonstationary Erlang loss model." Management Science 41, no. 6 (1995): 1107-1116.
A bibliography on the theory of queues. A Doig, Biometrika. Doig, A. "A bibliography on the theory of queues." Biometrika (1957): 490-514.
Bounding Synchronization Overhead for Parallel Iteration. P Downey, ORSA Journal on Computing. 34Downey, P. "Bounding Synchronization Overhead for Parallel Iteration." ORSA Journal on Computing 3, no. 4 (1991): 288-298.
The physics of the Mt/G/∞ queue. S Eick, W Massey, W Whitt, Operations Research. 414Eick, S., W. Massey, W. Whitt. "The physics of the Mt/G/∞ queue." Operations Research 41, no. 4 (1993): 731-742.
Extremal properties of Rademacher functions with applications to the Khintchine and Rosenthal inequalities. T Figiel, P Hitczenko, W Johnson, G Schechtman, J Zinn, Transactions of the American Mathematical Society. 349Figiel, T., P. Hitczenko, W. Johnson, G. Schechtman, and J. Zinn. "Extremal properties of Rademacher functions with applications to the Khintchine and Rosenthal inequalities." Transactions of the American Mathematical Society 349.3 (1997): 997-1027.
Queues and Point Processes. P Franken, D Konig, U Arndt, V Schmidt, John Wiley and Sons. Franken, P., D. Konig, U. Arndt, V. Schmidt. "Queues and Point Processes." John Wiley and Sons, Inc., 1 Wiley Drive, Somerset, N.J. 08873, 1982.
Steady-state GI/G/n queue in the Halfin-Whitt regime. D Gamarnik, D A Goldberg, The Annals of Applied Probability. 23Gamarnik, D., D.A. Goldberg. "Steady-state GI/G/n queue in the Halfin-Whitt regime." The Annals of Applied Probability 23.6 (2013): 2382-2419.
Steady-state analysis of a multiserver queue in the Halfin-Whitt regime. D Gamarnik, P Momcilovic, Advances in Applied Probability. 40Gamarnik, D., P. Momcilovic. "Steady-state analysis of a multiserver queue in the Halfin-Whitt regime." Advances in Applied Probability 40.2 (2008): 548-577.
Multiclass multiserver queueing system in the Halfin-Whitt heavy traffic regime: asymptotics of the stationary distribution. D Gamarnik, A Stolyar, Queueing Systems. 712Gamarnik, D., A. Stolyar. "Multiclass multiserver queueing system in the Halfin-Whitt heavy traffic regime: asymptotics of the stationary distribution." Queueing Systems 71.1-2 (2012): 25-51.
Validity of heavy traffic steady-state approximations in generalized Jackson networks. D Gamarnik, A Zeevi, The Annals of Applied Probability. Gamarnik, D., A. Zeevi. "Validity of heavy traffic steady-state approximations in generalized Jackson networks." The Annals of Applied Probability (2006): 56-90.
Stein's method for the single server queue in heavy traffic. R Gaunt, N Walton, Statistics and Probability Letters. 156108566Gaunt, R., N. Walton. "Stein's method for the single server queue in heavy traffic." Statistics and Probability Letters 156 (2020): 108566.
Upper bounds on Poisson tail probabilities. P Glynn, Operations research letters. 61Glynn, P. "Upper bounds on Poisson tail probabilities." Operations research letters 6, no. 1 (1987): 9-14.
On the steady-state probability of delay and large negative deviations for the GI/GI/n queue in the Halfin-Whitt regime. D A Goldberg, arXiv:1307.0241Under revision. previous version available at arXiv preprintGoldberg, D.A. "On the steady-state probability of delay and large negative deviations for the GI/GI/n queue in the Halfin-Whitt regime." Under revision, previous version available at arXiv preprint arXiv:1307.0241 (2016). https://arxiv.org/abs/1307.0241v2
Asymptotic optimality of constant-order policies for lost sales inventory models with large lead times. D A Goldberg, D Katz-Rogozhnikov, Y Lu, M Sharma, M Squillante, Mathematics of Operations Research. 413Goldberg, D.A., D. Katz-Rogozhnikov, Y. Lu, M. Sharma, and M. Squillante. "Asymptotic optimality of constant-order policies for lost sales inventory models with large lead times." Mathematics of Operations Research 41, no. 3 (2016): 898-913.
Heavy-tailed queues in the Halfin-Whitt regime. D A Goldberg, Y Li, arXiv:1707.07775Under revision. previous version available at arXiv preprintGoldberg, D.A., Y. Li. "Heavy-tailed queues in the Halfin-Whitt regime." Under revision, previous version available at arXiv preprint arXiv:1707.07775 (2017). https://arxiv.org/abs/1707.07775
Simple and explicit bounds for multi-server queues with universal 1/(1-rho) scaling. D A Goldberg, Y Li, arXiv:1706.04628arXiv preprintGoldberg, D.A., Y. Li. "Simple and explicit bounds for multi-server queues with universal 1/(1-rho) scaling." arXiv preprint arXiv:1706.04628 (2017).
Large deviations analysis for the M/H 2 /n + M queue in the Halfin-Whitt regime. D A Goldberg, D Mukherjee, Y Li, arXiv:1803.01082Under revision. previous version available at arXiv preprintGoldberg, D.A., D. Mukherjee, Y. Li. "Large deviations analysis for the M/H 2 /n + M queue in the Halfin-Whitt regime." Under revision, previous version available at arXiv preprint arXiv:1803.01082 (2018). https://arxiv.org/abs/1803.01082
The MacLaurin series for the GI/G/1 queue. W Gong, J Hu, Journal of Applied Probability. 291Gong, W., J. Hu. "The MacLaurin series for the GI/G/1 queue." Journal of Applied Probability 29, no. 1 (1992): 176-184.
SRPT for multiserver systems. I Grosof, Z Scully, M Harchol-Balter, Performance Evaluation. 127Grosof, I., Z. Scully, M. Harchol-Balter. "SRPT for multiserver systems." Performance Evaluation 127 (2018): 154-175.
The Finite-Skip Method for Multiserver Analysis. I Grosof, M Harchol-Balter, A Scheller-Wolf, arXiv:2109.12663arXiv preprintGrosof, I., M. Harchol-Balter, A. Scheller-Wolf. "The Finite-Skip Method for Multiserver Analysis." arXiv preprint arXiv:2109.12663 (2021).
D Gross, J Shortle, J Thompson, C Harris, Fundamentals of Queueing Theory. John Wiley and Sons627Gross, D., J. Shortle, J. Thompson, C. Harris. Fundamentals of Queueing Theory. Vol. 627. John Wiley and Sons, 2011.
On the moments of the number of renewal epochs. R Grubel, U Jensen, ZAMM? Zeitschrift f r Angewandte Mathematik und Mechanik. 61Grubel, R., and U. Jensen. "On the moments of the number of renewal epochs." ZAMM? Zeitschrift f r Angewandte Mathematik und Mechanik 61, no. 10 (1981): 531-532.
On the inapproximability of M/G/K: why two moments of job size distribution are not enough. V Gupta, M Harchol-Balter, J G Dai, B Zwart, Queueing Systems. 64Gupta, V., M. Harchol-Balter, J. G. Dai, B. Zwart. "On the inapproximability of M/G/K: why two moments of job size distribution are not enough." Queueing Systems 64.1 (2010): 5-48.
Tight moments-based bounds for queueing systems. V Gupta, T Osogami, Proceedings of the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems. the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systemsGupta, V., T. Osogami. "Tight moments-based bounds for queueing systems." In Proceedings of the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems, pp. 133-134. 2011.
Excursion-based universal approximations for the Erlang-A queue in steady-state. I Gurvich, J Huang, A Mandelbaum, Mathematics of Operations Research. 39Gurvich, I., J. Huang, A. Mandelbaum. "Excursion-based universal approximations for the Erlang-A queue in steady-state." Mathematics of Operations Research 39.2 (2013): 325-373.
Diffusion models and steady-state approximations for exponentially ergodic Markovian queues. I Gurvich, The Annals of Applied Probability. 24Gurvich, I. "Diffusion models and steady-state approximations for exponentially ergodic Markovian queues." The Annals of Applied Probability 24.6 (2014): 2527-2559.
Stopped random walks. A Gut, Springer-VerlagNew York IncorporatedGut, A. Stopped random walks. Springer-Verlag New York Incorporated, 2009.
Heavy-traffic limits for queues with many exponential servers. S Halfin, W Whitt, Operations research. 29Halfin, S., W. Whitt. "Heavy-traffic limits for queues with many exponential servers." Operations research 29.3 (1981): 567-588.
Surprising results on task assignment in server farms with high-variability workloads. M Harchol-Balter, A Scheller-Wolf, A Young, ACM SIGMETRICS Performance Evaluation Review. 37Harchol-Balter, M., A. Scheller-Wolf, A. Young. "Surprising results on task assignment in server farms with high-variability workloads." ACM SIGMETRICS Performance Evaluation Review 37.1 (2009): 287- 298.
Sharp bounds and simple approximations for the Erlang delay and loss formulas. A Harel, Management Science. 348Harel, A. "Sharp bounds and simple approximations for the Erlang delay and loss formulas." Manage- ment Science 34, no. 8 (1988): 959-972.
Sharp and simple bounds for the Erlang delay and loss formulae. A Harel, Queueing Systems. 642Harel, A. "Sharp and simple bounds for the Erlang delay and loss formulae." Queueing Systems 64, no. 2 (2010): 119-143.
D P Heyman, M J Sobel, Stochastic Models in Operations Research. New York: McGraw-Hill. D. P. Heyman and M. J. Sobel, Stochastic Models in Operations Research, Vol. I, New York: McGraw-Hill, 1982.
Best constants in martingale version of Rosenthal's inequality. P Hitczenko, The Annals of Probability. Hitczenko, P. "Best constants in martingale version of Rosenthal's inequality." The Annals of Probability (1990): 1656-1668.
Relations for the workload of the GI/G/s queue. P Hokstad, Advances in applied probability. 174Hokstad, P. "Relations for the workload of the GI/G/s queue." Advances in applied probability 17, no. 4 (1985): 887-904.
Y Hong, W Wang, arXiv:2109.05343Sharp Waiting-Time Bounds for Multiserver Jobs. arXiv preprintHong, Y., W. Wang. "Sharp Waiting-Time Bounds for Multiserver Jobs." arXiv preprint arXiv:2109.05343 (2021).
Beyond heavy-traffic regimes: Universal bounds and controls for the single-server queue. J Huang, I Gurvich, Operations Research. 664Huang, J., I. Gurvich. "Beyond heavy-traffic regimes: Universal bounds and controls for the single-server queue." Operations Research 66, no. 4 (2018): 1168-1188.
On the moments of Markov renewal processes. J Hunter, Advances in Applied Probability. 12Hunter, J. "On the moments of Markov renewal processes." Advances in Applied Probability 1, no. 2 (1969): 188-210.
Multiple channel queues in heavy traffic. II: Sequences, networks, and batches. D Iglehart, W Whitt, Advances in Applied Probability. 2Iglehart, D., W. Whitt. "Multiple channel queues in heavy traffic. II: Sequences, networks, and batches." Advances in Applied Probability 2.02 (1970): 355-369.
Back to the roots of the M/D/s queue and the works of Erlang, Crommelin and Pollaczek. A Janssen, J S H Van Leeuwaarden, Statistica Neerlandica. 62Janssen, A., J. S. H. Van Leeuwaarden. "Back to the roots of the M/D/s queue and the works of Erlang, Crommelin and Pollaczek." Statistica Neerlandica 62.3 (2008): 299-313.
Corrected asymptotics for a multi-server queue in the Halfin-Whitt regime. A Janssen, J S H Van Leeuwaarden, B Zwart, Queueing Systems. 58261Janssen, A., J. S. H. Van Leeuwaarden, B. Zwart. "Corrected asymptotics for a multi-server queue in the Halfin-Whitt regime." Queueing Systems 58.4 (2008): 261.
Gaussian expansions and bounds for the Poisson distribution applied to the Erlang B formula. A Janssen, J S H Van Leeuwaarden, B Zwart, Advances in Applied Probability. 401Janssen, A., J. S. H. Van Leeuwaarden, B. Zwart. "Gaussian expansions and bounds for the Poisson distribution applied to the Erlang B formula." Advances in Applied Probability 40, no. 1 (2008): 122-143.
Refining square-root safety staffing by expanding Erlang C. A Janssen, J S H Van Leeuwaarden, B Zwart, Operations Research. 59Janssen, A., J.S.H. Van Leeuwaarden, B. Zwart. "Refining square-root safety staffing by expanding Erlang C." Operations Research 59.6 (2011): 1512-1522.
An approximation to steady-state of M/Ph/n+ M queue. X Jin, G Pang, L Xu, X Xu, arXiv:2109.03623arXiv preprintJin, X., G. Pang, L. Xu, X. Xu. "An approximation to steady-state of M/Ph/n+ M queue." arXiv preprint arXiv:2109.03623 (2021).
Best constants in moment inequalities for linear combinations of independent and exchangeable random variables. W Johnson, G Schechtman, J Zinn, The Annals of Probability. Johnson, W., G. Schechtman, J. Zinn. "Best constants in moment inequalities for linear combinations of independent and exchangeable random variables." The Annals of Probability (1985): 234-253.
Rates of convergence for queues in heavy traffic. II: Sequences of queueing systems. D Kennedy, Advances in Applied Probability. 4Kennedy, D. "Rates of convergence for queues in heavy traffic. II: Sequences of queueing systems." Advances in Applied Probability 4.02 (1972): 382-391.
On the characteristics of the general queueing process, with applications to random walk. J Kiefer, J Wolfowitz, The Annals of Mathematical Statistics. Kiefer, J., and J. Wolfowitz. "On the characteristics of the general queueing process, with applications to random walk." The Annals of Mathematical Statistics (1956): 147-161.
On queues in heavy traffic. J F Kingman, Journal of the Royal Statistical Society. Series B (Methodological). Kingman, J. F. C. "On queues in heavy traffic." Journal of the Royal Statistical Society. Series B (Methodological) (1962): 383-392.
A martingale inequality in the theory of queues. J F Kingman, Mathematical Proceedings of the Cambridge Philosophical Society. Cambridge University Press60Kingman, J.F.C. "A martingale inequality in the theory of queues." In Mathematical Proceedings of the Cambridge Philosophical Society, vol. 60, no. 2, pp. 359-361. Cambridge University Press, 1964.
The heavy traffic approximation in the theory of queues. J F Kingman, Proceedings of the Symposium on Congestion Theory. the Symposium on Congestion TheoryChapel Hill, NCUniversity of North Carolina PressKingman, J. F. C. "The heavy traffic approximation in the theory of queues." Proceedings of the Symposium on Congestion Theory. No. 2. University of North Carolina Press, Chapel Hill, NC, 1965.
Inequalities in the Theory of Queues. J F Kingman, Journal of the Royal Statistical Society. Series B (Methodological). 321Kingman, J. F. C. "Inequalities in the Theory of Queues." Journal of the Royal Statistical Society, Series B (Methodological), vol. 32, no. 1, 1970, pp. 102-110. www.jstor.org/stable/2984406.
Heavy Traffic Theory for Queues with Several Servers. I. J Kollerstrom, Journal of Applied Probability. Kollerstrom, J. "Heavy Traffic Theory for Queues with Several Servers. I." Journal of Applied Proba- bility (1974): 544-552.
Heavy Traffic Theory for Queues with Several Servers. II. J Kollerstrom, Journal of Applied Probability. Kollerstrom, J. "Heavy Traffic Theory for Queues with Several Servers. II." Journal of Applied Proba- bility (1979): 393-401.
A second-order heavy traffic approximation for the queue GI/G/1. J Kollerstrom, Advances in Applied Probability. 131Kollerstrom, J. "A second-order heavy traffic approximation for the queue GI/G/1." Advances in Applied Probability 13, no. 1 (1981): 167-185.
On series expansions for the renewal moments. M R Leadbetter, Biometrika. 501-2Leadbetter, M. R. "On series expansions for the renewal moments." Biometrika 50, no. 1-2 (1963): 75-80.
Shapes, moments and estimators of the Weibull distribution. E Lehman, IEEE Transactions on Reliability. 123Lehman, E. "Shapes, moments and estimators of the Weibull distribution." IEEE Transactions on Reliability 12, no. 3 (1963): 32-38.
General moment and probability inequalities for the maximum partial sum. M Longnecker, R J Serfling, Acta Mathematica Hungarica. 30Longnecker, M., R. J. Serfling. "General moment and probability inequalities for the maximum partial sum." Acta Mathematica Hungarica 30.1-2 (1977): 129-133.
Multi-channel queues in heavy traffic. R Loulou, Journal of Applied Probability. 10Loulou, R. "Multi-channel queues in heavy traffic." Journal of Applied Probability 10.04 (1973): 769-777.
Optimal price and delay differentiation in large-scale queueing systems. C Maglaras, J Yao, A Zeevi, Management science. 645Maglaras, C., J. Yao, A. Zeevi. "Optimal price and delay differentiation in large-scale queueing sys- tems." Management science 64, no. 5 (2018): 2427-2444.
Investigation of the mean waiting time for queueing system with many servers. T Makino, Annals of the Institute of Statistical Mathematics. 21Makino, T. "Investigation of the mean waiting time for queueing system with many servers." Annals of the Institute of Statistical Mathematics 21.1 (1969): 357-366.
Strong approximations for Markovian service networks. A Mandelbaum, W Massey, M Reiman, Queueing Systems. 30Mandelbaum, A., W. Massey, M. Reiman. "Strong approximations for Markovian service networks." Queueing Systems 30.1 (1998): 149-201.
Queues with many servers and impatient customers. A Mandelbaum, P Momcilovic, Mathematics of Operations Research. 371Mandelbaum, A., P. Momcilovic. "Queues with many servers and impatient customers." Mathematics of Operations Research 37, no. 1 (2012): 41-65.
Some bounds for queues. M Mori, J. Operat. Res. Soc. Japan. 18Mori, M. "Some bounds for queues." J. Operat. Res. Soc. Japan 18 (1975): 152-181.
On the tail integral formulae for real-valued random variables. Saralees Nadarajah, Idika E Okorie, The Mathematical Gazette. 106567Nadarajah, Saralees, and Idika E. Okorie. "On the tail integral formulae for real-valued random vari- ables." The Mathematical Gazette 106, no. 567 (2022): 487-493.
On the speed of convergence in a boundary problem. I. S Nagaev, Theory of Probability and Its Applications. 15Nagaev, S. "On the speed of convergence in a boundary problem. I." Theory of Probability and Its Applications 15.2 (1970): 163-186.
On the speed of convergence in a boundary problem. II. S Nagaev, Theory of Probability and Its Applications. 15Nagaev, S. "On the speed of convergence in a boundary problem. II." Theory of Probability and Its Applications 15.3 (1970): 403-429.
Stochastic bounds for heterogeneous-server queues with Erlang service times. S Y Oliver, Journal of Applied Probability. 11Oliver, S.Y. "Stochastic bounds for heterogeneous-server queues with Erlang service times." Journal of Applied Probability 11.04 (1974): 785-796.
Uniform approximations for the M/G/1 queue with subexponential processing times. M Olvera-Cravioto, P Glynn, Queueing Systems. 681Olvera-Cravioto, M., P. Glynn. "Uniform approximations for the M/G/1 queue with subexponential processing times." Queueing Systems 68, no. 1 (2011): 1-50.
On the transition from heavy traffic to heavy tails for the M/G/1 queue: the regularly varying case. M Olvera-Cravioto, J Blanchet, P Glynn, Olvera-Cravioto, M., J. Blanchet, P. Glynn. "On the transition from heavy traffic to heavy tails for the M/G/1 queue: the regularly varying case." (2011): 645-668.
Multi-channel queues: a survey and bibliography. G Ovuworie, International Statistical Review/Revue Internationale de Statistique. Ovuworie, G. "Multi-channel queues: a survey and bibliography." International Statistical Review/Revue Internationale de Statistique (1980): 49-71.
The G/GI/N queue in the Halfin-Whitt regime. J Reed, The Annals of Applied Probability. 19Reed, J. "The G/GI/N queue in the Halfin-Whitt regime." The Annals of Applied Probability 19.6 (2009): 2211-2269.
Open queueing networks in heavy traffic. M Reiman, Mathematics of operations research. 9Reiman, M. "Open queueing networks in heavy traffic." Mathematics of operations research 9, no. 3 (1984): 441-458.
Delay Moment Bounds for Multiserver Queues with Infinite Variance Service Times. R Vesilo, A Scheller-Wolf, INFOR: Information Systems and Operational Research. 51Vesilo, R., A. Scheller-Wolf. "Delay Moment Bounds for Multiserver Queues with Infinite Variance Service Times." INFOR: Information Systems and Operational Research 51.4 (2013): 161-174.
On the best constant in Marcinkiewicz-Zygmund inequality. Y Ren, H Liang, 53Statistics and probability lettersRen, Y., H. Liang. "On the best constant in Marcinkiewicz-Zygmund inequality." Statistics and prob- ability letters 53.3 (2001): 227-233.
On the comparison of waiting times in GI/G/1 queues. T Rolski, D Stoyan, Oper. Res. 24Rolski, T., D. Stoyan, "On the comparison of waiting times in GI/G/1 queues". Oper. Res. 24, 197-200 (1976).
Large deviations theory and efficient simulation of excessive backlogs in a GI/GI/m queue. J Sadowsky, IEEE Transactions on Automatic Control. 36Sadowsky, J. "Large deviations theory and efficient simulation of excessive backlogs in a GI/GI/m queue." IEEE Transactions on Automatic Control 36.12 (1991): 1383-1394.
Necessary and sufficient conditions for delay moments in FIFO multiserver queues with an application comparing s slow servers with one fast one. A Scheller-Wolf, Operations Research. 51Scheller-Wolf, A. "Necessary and sufficient conditions for delay moments in FIFO multiserver queues with an application comparing s slow servers with one fast one." Operations Research 51.5 (2003): 748-758.
Structural interpretation and derivation of necessary and sufficient conditions for delay moments in FIFO multiserver queues. A Scheller-Wolf, R Vesilo, Queueing Syst. 543Scheller-Wolf, A., R. Vesilo. "Structural interpretation and derivation of necessary and sufficient con- ditions for delay moments in FIFO multiserver queues." Queueing Syst. 54(3), 221-232 (2006).
The Gittins policy is nearly optimal in the M/G/k under extremely general conditions. Z Scully, I Grosof, M Harchol-Balter, Proceedings of the ACM on Measurement and Analysis of Computing Systems. 43Scully, Z., I. Grosof, M. Harchol-Balter. "The Gittins policy is nearly optimal in the M/G/k under extremely general conditions." Proceedings of the ACM on Measurement and Analysis of Computing Systems 4, no. 3 (2020): 1-29.
A sample path analysis of the delay in the M/G/C system. S Seshadri, J. Appl. Prob. 33Seshadri, S. "A sample path analysis of the delay in the M/G/C system." J. Appl. Prob 33 (1996): 256-266.
An ergodic theorem for Markov processes and its application to telephone systems with refusals. B A Sevastyanov, Theory of Probability and Its Applications. 2Sevastyanov, B.A. "An ergodic theorem for Markov processes and its application to telephone systems with refusals." Theory of Probability and Its Applications 2, no. 1 (1957): 104-112.
Stochastic monotonicity in general queueing networks. J G Shanthikumar, D Yao, Journal of Applied Probability. 262Shanthikumar, J.G., D. Yao. "Stochastic monotonicity in general queueing networks." Journal of Applied Probability 26, no. 2 (1989): 413-417.
On the cumulants of renewal processes. W Smith, Biometrika. 461Smith, W. "On the cumulants of renewal processes." Biometrika 46, no. 1/2 (1959): 1-29.
Resource sharing for efficiency in traffic systems. D Smith, W Whitt, Bell System Technical Journal. 60Smith, D., W. Whitt. "Resource sharing for efficiency in traffic systems." Bell System Technical Journal 60.1 (1981): 39-55.
Inequalities for many-server queue and other queues. T Suzuki, Y Yoshida, J. Oper. Res. Soc. Japan. 13Suzuki, T., Y. Yoshida. "Inequalities for many-server queue and other queues." J. Oper. Res. Soc. Japan 13 (1970): 59-77.
Tightness of the stationary waiting time in heavy traffic. W Szczotka, Advances in Applied Probability. Szczotka, W. "Tightness of the stationary waiting time in heavy traffic." Advances in Applied Proba- bility (1999): 788-794.
Heavy-tailed dependent queues in heavy traffic. W Szczotka, W A Woyczynski, Probability and Mathematical Statistics -Wroclaw University. 24. 167Szczotka, W., W. A. Woyczynski. "Heavy-tailed dependent queues in heavy traffic." Probability and Mathematical Statistics -Wroclaw University. 24.1 (2004): 67.
On high order moments of the number of renewals. Y Taga, Annals of the Institute of Statistical Mathematics. 15Taga, Y. "On high order moments of the number of renewals." Annals of the Institute of Statistical Mathematics 15 (1963): 187-196.
A single-server queue with Poisson input. L Takacs, Operations research. 103Takacs, L. "A single-server queue with Poisson input." Operations research 10, no. 3 (1962): 388-394.
The bayesian prophet: A low-regret framework for online decision making. A Vera, S Banerjee, Management Science. 673Vera, A., and S. Banerjee. "The bayesian prophet: A low-regret framework for online decision making." Management Science 67, no. 3 (2021): 1368-1391.
Zero queueing for multi-server jobs. W Wang, Q Xie, M Harchol-Balter, Abstract Proceedings of the 2021 ACM SIGMETRICS/International Conference on Measurement and Modeling of Computer Systems. Wang, W., Q. Xie, M. Harchol-Balter. "Zero queueing for multi-server jobs." In Abstract Proceedings of the 2021 ACM SIGMETRICS/International Conference on Measurement and Modeling of Computer Systems, pp. 13-14. 2021.
The continuity of queues. W Whitt, Advances in Applied Probability. 61Whitt, W. "The continuity of queues." Advances in Applied Probability 6, no. 1 (1974): 175-183.
The effect of variability in the GI/G/s queue. W Whitt, J. Appl. Probab. 17Whitt, W. "The effect of variability in the GI/G/s queue". J. Appl. Probab. 17, 1062-1071 (1980).
Comparing counting processes and queues. W Whitt, Advances in Applied Probability. 131Whitt, W. "Comparing counting processes and queues." Advances in Applied Probability 13, no. 1 (1981): 207-220.
Refining diffusion approximations for queues. W Whitt, Operations Research Letters. 1Whitt, W. "Refining diffusion approximations for queues." Operations Research Letters 1.5 (1982): 165-169.
The Marshall and Stoyan bounds for IMRL/G/1 queues are tight. W Whitt, Operations Research Letters. 1Whitt, W. "The Marshall and Stoyan bounds for IMRL/G/1 queues are tight." Operations Research Letters 1.6 (1982): 209-213.
On the heavy-traffic limit theorem for GI/G/∞ queues. W Whitt, Advances in Applied Probability. 141Whitt, W. "On the heavy-traffic limit theorem for GI/G/∞ queues." Advances in Applied Probability 14, no. 1 (1982): 171-190.
Comparison conjectures about the M/G/s queue. W Whitt, Operations Research Letters. 25Whitt, W. "Comparison conjectures about the M/G/s queue." Operations Research Letters 2.5 (1983): 203-209.
The queueing network analyzer. W Whitt, The Bell System Technical Journal. 62Whitt, W. "The queueing network analyzer." The Bell System Technical Journal 62, no. 9 (1983): 2779-2815.
On approximations for queues, I: Extremal distributions. W Whitt, ATT Bell Laboratories Technical Journal. 63Whitt, W. "On approximations for queues, I: Extremal distributions." ATT Bell Laboratories Technical Journal 63.1 (1984): 115-138.
On approximations for queues, II: Shape constraints. W Whitt, J G Kuncewicz, ATT Bell Laboratories Technical Journal. 63Whitt, W., J.G. Kuncewicz. "On approximations for queues, II: Shape constraints." ATT Bell Labo- ratories Technical Journal 63.1 (1984): 139-161.
On approximations for queues, III: Mixtures of exponential distributions. W Whitt, ATT Bell Laboratories Technical Journal. 63Whitt, W. "On approximations for queues, III: Mixtures of exponential distributions." ATT Bell Laboratories Technical Journal 63.1 (1984): 163-175.
Approximations for the GI/G/m queue. W Whitt, Production and Operations Management2Whitt, W. "Approximations for the GI/G/m queue." Production and Operations Management 2.2 (1993): 114-161.
The impact of a heavy-tailed service-time distribution upon the M/GI/s waiting-time distribution. W Whitt, Queueing Systems. 361Whitt, W. "The impact of a heavy-tailed service-time distribution upon the M/GI/s waiting-time distribution." Queueing Systems 36, no. 1 (2000): 71-87.
Upper bounds on work in system for multichannel queues. R W Wolff, Journal of applied probability. 24Wolff, R.W. "Upper bounds on work in system for multichannel queues." Journal of applied probability 24.02 (1987): 547-551.
Little's law and related results. R W Wolff, Wiley encyclopedia of operations research and management science. 4Wolff, R.W. "Little's law and related results." Wiley encyclopedia of operations research and manage- ment science 4 (2011): 2828-2841.
Reflections on queue modelling from the last 50 years. D Worthington, Journal of the Operational Research Society. 60Worthington, D. "Reflections on queue modelling from the last 50 years." Journal of the Operational Research Society 60.1 (2009): S83-S92.
Optimality gap of constant-order policies decays exponentially in the lead time for lost sales models. L Xin, D A Goldberg, Operations Research. 646Xin, L., D.A. Goldberg. "Optimality gap of constant-order policies decays exponentially in the lead time for lost sales models." Operations Research 64, no. 6 (2016): 1556-1565.
| [] |
[
"SPECTRA OF QUOTIENT MODULES",
"SPECTRA OF QUOTIENT MODULES"
] | [
"Michael Didas ",
"Jörg Eschmeier ",
"ANDMichael Hartz ",
"Marcel Scherer "
] | [] | [] | We determine the Taylor spectra of quotient tuples of the d-shift on Drury-Arveson spaces with finite-dimensional coefficient spaces. We show the the Taylor spectrum can be described in terms of the approximate zero set of the annihilator ideal, and in terms of the pointwise behavior of the inner multiplier associated with the quotient tuple.f ∈I(M )AZ(f ).Moreover, the Taylor spectrum and the right spectrum coincide in this case. | null | [
"https://export.arxiv.org/pdf/2306.02997v1.pdf"
] | 259,075,395 | 2306.02997 | 3c62ec795707aca7f8d84c534d71abf8d659151a |
SPECTRA OF QUOTIENT MODULES
5 Jun 2023
Michael Didas
Jörg Eschmeier
ANDMichael Hartz
Marcel Scherer
SPECTRA OF QUOTIENT MODULES
5 Jun 2023
We determine the Taylor spectra of quotient tuples of the d-shift on Drury-Arveson spaces with finite-dimensional coefficient spaces. We show the the Taylor spectrum can be described in terms of the approximate zero set of the annihilator ideal, and in terms of the pointwise behavior of the inner multiplier associated with the quotient tuple.f ∈I(M )AZ(f ).Moreover, the Taylor spectrum and the right spectrum coincide in this case.
Introduction and main results
This result is motivated by recent work of Clouâtre and Timko [3], which in particular contains the equality of the Taylor spectrum and the approximate zero set in the case of one-dimensional D. The inclusion "⊂" for finite-dimensional D can be deduced from the results of Clouâtre and Timko (see the discussion preceding Proposition 6 below for details). This relies on the corona theorem for H 2 d . Related spectral inclusion theorems can also be found in [6].
For the reverse inclusion, we establish an alternative description of the spectrum in terms of an operator-valued multiplier that generates M . To be more precise, if M ∈ Lat(M z , H 2 d (D)), then by the McCullough-Trent version of Beurling's invariant subspace theorem (Theorem 4.1 in [11]), there exist a Hilbert space E and a holomorphic multiplier
θ : B d → L(E, D) from H 2 d (E) to H 2 d (D) such that M = θH 2 d (E)
and θ is inner, which means by definition that the induced multiplication operator M θ :
H 2 d (E) → H 2 d (D)
is a partial isometry. A result of Greene, Richter and Sundberg (Theorem 3.2 in [9]) then guarantees that for almost every z ∈ ∂B d the non-tangential boundary value θ(z) ∈ L(E, D) exists (in the SOT) and is a partial isometry.
To formulate our result appropriately, we need the following generalized notion of pointwise surjectivity for operator-valued maps: Given a holomorphic operator-valued map θ : B d → L(E, D), we say that θ is surjective at a point λ of the closed unit ball if either λ ∈ B d and θ(λ)E = D, or λ ∈ ∂B d and there exists an extension of θ to a holomorphic map
θ : U → L(E, D) on some open set U ⊃ B d ∪ {λ}
such that θ(λ)E = D. Our proof of the reverse inclusion "⊃" in Theorem 1 relies on the following result of independent interest. Theorem 2. Let D be a finite-dimensional Hilbert space, M ∈ Lat(M z , H 2 d (D)) and θ :
B d → L(E, D) an inner multiplier from H 2 d (E) to H 2 d (D) with M = θH 2 d (E). Then σ(M z , H 2 d (D)/M ) = {λ ∈ B d ; θ is not surjective at λ}.
The inclusion "⊂" is established in Proposition 6. The reverse inclusion is finally settled as Corollary 8. The main ingredients in the proof are a result of Greene [10] (to handle the part inside B d ) and structure theory of pure row contractions (in particular their characteristic function [2]) applied to T = P M ⊥ M z |M ⊥ . We will also see that the Taylor spectrum σ(M z , H 2 d (D)/M ) agrees with the right spectrum σ r (M z , H 2 d (D)/M ). Note that the set appearing on the right-hand side in the statement of the preceding theorem extends the classical notion of support of an inner function θ : D → C on the unit disc: Recall that λ ∈ D belongs to supp(θ) if either θ(λ) = 0 or θ does not holomorphically extend across λ. This concept has also been one of the starting points for [3], but was generalized in another direction there.
In the scalar-valued case D = C, the set of points λ in the closed ball where θ(λ) : E → C is not surjective is easily seen to coincide with the common zero set Z(M) of all functions in M; thus the statement of Theorem 2 specializes to a result of [9]. It was conjectured in [13] that S(M) = {λ ∈ ∂B d ; lim inf_{z→λ} ‖θ(z)‖ = 0}. This equality would follow if the corona theorem of Costea, Sawyer and Wick [5] held for bounded row multipliers. Since this is not known, we must leave the question of Gleason, Richter and Sundberg open here.
Calculating the spectrum inside B d
We begin by setting up the necessary notation from multivariable spectral theory. Let T ∈ L(X) d be a commuting d-tuple of operators on a complex Banach space X. We write K • (T, X) for the Koszul complex
\[
0 \longrightarrow \Lambda^{d}(X) \xrightarrow{\ \delta_{d,T}\ } \Lambda^{d-1}(X) \xrightarrow{\ \delta_{d-1,T}\ } \cdots \xrightarrow{\ \delta_{2,T}\ } \Lambda^{1}(X) \xrightarrow{\ \delta_{1,T}\ } \Lambda^{0}(X) \longrightarrow 0
\]
consisting of the spaces
\[
K_p(T, X) = \Lambda^{p}(X) = X \otimes \Lambda^{p}\mathbb{C}^{d} \cong X^{\binom{d}{p}}
\]
and the boundary maps defined by the formula
\[
\delta_{p,T}(x \otimes e_I) = \sum_{\alpha=1}^{p} (-1)^{\alpha-1}\, T_{i_\alpha} x \otimes e_{I_\alpha} \qquad (x \in X,\ e_I = e_{i_1} \wedge \cdots \wedge e_{i_p}),
\]
where $I = (i_1, \ldots, i_p) \in \mathbb{N}^p$ is a multi-index with $i_1 < i_2 < \cdots < i_p$. Here $\Lambda^{p}\mathbb{C}^{d}$ stands for the $p$-fold exterior product of $\mathbb{C}^d$ with itself, $(e_1, \ldots, e_d)$ is the standard basis of $\mathbb{C}^d$, and the multi-index $I_\alpha \in \mathbb{N}^{p-1}$ arises from $I \in \mathbb{N}^p$ by dropping the $\alpha$-th entry.
The Taylor spectrum of T (and its various subsets) are explained in terms of the homology groups of K • (T, X),
H p (T, X) = ker δ p,T /ran δ p+1,T (p = 0, . . . , d).
The Taylor spectrum of T is defined as the set of points in C d for which the Koszul complex of λ − T is not exact, i.e.,
σ(T ) = {λ ∈ C d ; H p (λ − T, X) = 0 for some p ∈ {1, . . . , d}},
where λ − T stands for the operator tuple with entries
λ i · 1 X − T i (1 ≤ i ≤ d).
It is well known that σ(T ) ⊂ C d is compact. As usual, we write ρ(T ) = C d \ σ(T ) for the resolvent set. A particular role for our calculations is played by the right spectrum
σ r (T ) = {λ ∈ C d ; H 0 (λ − T, X) = 0}.
The right essential spectrum σ re (T ) consists of all λ ∈ σ r (T ) where even dim H 0 (λ − T, X) = ∞. Note that, modulo the identifications Λ 0 (X) ∼ = X and Λ 1 (X) ∼ = X d , we have
\[
\delta_{1,\lambda-T}\big((x_i)_{i=1}^{d}\big) = \sum_{i=1}^{d} (\lambda_i - T_i)\, x_i \qquad \big((x_i)_{i=1}^{d} \in X^{d}\big),
\]
and therefore $H_0(\lambda - T, X) \cong X \big/ \sum_{i=1}^{d} (\lambda_i - T_i)X$. Similarly, up to isomorphy, $\delta_{d,T}$ acts as $\delta_{d,T}\, x = (T_i x)_{i=1}^{d}$ $(x \in X)$, and hence $H_d(\lambda - T, X) \cong \bigcap_{i=1}^{d} \ker(\lambda_i - T_i)$.
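For orientation, it may help to record how these definitions read in the single-variable case d = 1; the following display is only an illustration of the notions just introduced and is not needed in the sequel. The Koszul complex of a single operator reduces to the two-term complex
\[
0 \longrightarrow \Lambda^{1}(X) \cong X \xrightarrow{\ \lambda - T\ } \Lambda^{0}(X) \cong X \longrightarrow 0,
\]
so that
\[
H_1(\lambda - T, X) \cong \ker(\lambda - T), \qquad H_0(\lambda - T, X) \cong X/(\lambda - T)X,
\]
and exactness at both spots means precisely that $\lambda - T$ is bijective. Hence for $d = 1$ the Taylor spectrum $\sigma(T)$ and the right spectrum $\sigma_r(T)$ reduce to the usual spectrum and the surjectivity spectrum of a single operator.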
We recall a result of Devin Greene [10] which leads to a description of the points in the Taylor spectrum of the quotient tuple M z /M in B d . This result relates the homology of the Koszul complex of a multiplication tuple to the homology of a localized resolution.
Let D be complex Hilbert space. Given an M z -invariant subspace M ∈ Lat(M z , H 2 d (D)), we apply the McCullough-Trent version of Beurling's invariant subspace theorem (Theorem 4.1 in [11]) inductively to obtain Hilbert spaces
D i (i ≥ 0) starting with D 0 = D together with multipliers θ i ∈ M(H 2 d (D i ), H 2 d (D i−1 )
) for i ≥ 1 such that the induced multiplication operators form an exact sequence
. . . −→ H 2 d (D 2 ) M θ 2 −→ H 2 d (D 1 ) M θ 1 −→ H 2 d (D) q −→ H 2 d (D)/M → 0. Localizing the right-truncated sequence to a point λ ∈ B d , we obtain a complex . . . −→ D 2 θ 2 (λ) −→ D 1 θ 1 (λ) −→ D −→ 0 denoted by (D • , θ • (λ)).
The following result is due to Greene [10]. For completeness sake, we indicate a shortened version of the original proof based on standard homological algebra.
Theorem 3. Given λ ∈ B d and M ∈ Lat(M z , H 2 d (D)) for some complex Hilbert space D, there are vector space isomorphisms H p (λ − M z , H 2 d (D)/M ) ∼ = H p (D • , θ • (λ)) (p ≥ 0). Proof. Let ε λ : H 2 d (D) → D, f → f (λ)
, denote the point evaluation at λ. It is well known that the augmented Koszul complex
K • (λ − M z , H 2 d (D)) ε λ −→ D −→ 0
is exact in the case D = C (see, e.g., [8, Proposition 2.6]). Since tensoring with 1 D preserves exactness, it remains exact in the general case.
We consider the double complex
K = (K p,q , ∂ ′ , ∂ ′′ ) with spaces K p,q = K q (λ − M z , H 2 d (D p )), p-th row (K p,• , ∂ ′′ • ) equal to (−1) p times the augmented Koszul complex of the commuting tuple λ − M z ∈ L(H 2 d (D p )) d and q-th column (K •,q , ∂ ′ • ) given by the n q -fold direct sum of the complex (H 2 d (D • ), M θ • ), respectively (D • , θ • (λ)) as the last column:

[Diagram: the rows are the augmented complexes (−1) p · K • (λ − M z , H 2 d (D p )) −→ D p −→ 0 for p = 0, 1, 2, . . ., connected vertically by the maps M θ p (respectively θ p (λ)) and by ±ε λ in the last column, with the augmented row K • (λ − M z , H 2 d (D)/M ) −→ 0 at the bottom, reached via the quotient map q.]
Then K is a double complex with anti-commuting squares and bounded diagonals, and all but the last column and all but the last row are exact. In this setting, standard double complex arguments (Lemma A2.6 in [7]) show that there are induced vector space isomorphisms
H p (λ − M z , H 2 d (D)/M ) ∼ = H ′′ p H ′ 0 (K) ∼ = H ′ p H ′′ 0 (K) = H p (D • , θ • (λ)),
as we claimed.
As an immediate consequence, we have:
Corollary 4. Let D be a complex Hilbert space, M ∈ Lat(M z , H 2 d (D)), and θ : B d → L(E, D) be an inner multiplier from H 2 d (E) to H 2 d (D) with M = M θ H 2 d (E). Then we have σ r (M z , H 2 d (D)/M ) ∩ B d = {λ ∈ B d : θ(λ)E ≠ D}. Moreover, if D is finite-dimensional, then σ re (M z , H 2 d (D)/M ) ⊂ ∂B d .
Proof. Note that we may choose θ 1 = θ and D 1 = E in the preceding theorem to obtain, for λ ∈ B d , that H 0 (λ − M z , H 2 d (D)/M ) ∼ = H 0 (D • , θ • (λ)) ∼ = D/(θ(λ)E). This implies the statement for the right spectrum, as well as
σ re (M z , H 2 d (D)/M ) ∩ B d = ∅ if D is finite-dimensional.
Upper estimates for the spectrum
The aim of this section is to provide a proof of both inclusions "⊂" from the statements of our main Theorems 1 and 2. As a preparatory result, we state the following observation, which can be seen as a partial extension of a result of Sz.-Nagy and Foiaş (Theorem VI.5.2 in [14]) to the multivariable case.

Lemma 5. Let D be a finite-dimensional Hilbert space with orthonormal basis (d 1 , . . . , d N ) and M ∈ Lat(M z , H 2 d (D)) a closed invariant subspace. Let E be a Hilbert space and θ : B d → L(E, D) a multiplier from H 2 d (E) into H 2 d (D) with θH 2 d (E) ⊂ M . Fix vectors e 1 , . . . , e N ∈ E and denote by Θ = (θ ij ) 1≤i,j≤N ∈ M N (M(H 2 d )) the matrix whose coefficients are determined by θ(z)e j = ∑ N i=1 θ ij (z) d i (j = 1, . . . , N , z ∈ B d ). Then det(Θ) ∈ I(M ). If λ ∈ B d is a point such that θ is surjective at λ, then there is a multiplier f ∈ I(M ) with lim z→λ, z∈B d f (z) = 1.
Proof. Choose a matrix R = (r ij ) 1≤i,j≤N ∈ M N (M(H 2 d )) such that ΘR = det(Θ) · 1 N . (It is standard linear algebra that R can be obtained pointwise as the transpose of the so-called cofactor matrix C of Θ, whose components consist -except for the sign -of the determinants of all possible (N − 1) × (N − 1) submatrices of Θ.)
To prove the first assertion, we may suppose that det(Θ) does not vanish identically on B d . Then the vectors e 1 , . . . , e N form a basis of their linear span F . It is elementary to check that the composition of the operators

R : H 2 d (D) → H 2 d (F ), ∑ N i=1 f i d i ↦ ∑ N i=1 ( ∑ N j=1 r ij f j ) e i ,

and

θ : H 2 d (F ) → H 2 d (D), ∑ N i=1 g i e i ↦ ∑ N i=1 ( ∑ N j=1 θ ij g j ) d i = θ ( ∑ N j=1 g j e j ),

satisfies det(Θ)f = θRf ∈ M for all f ∈ H 2 d (D), i.e., det(Θ) ∈ I(M ), as desired. For the remaining part of the assertion, fix λ ∈ B d and a holomorphic extension θ : U → L(E, D) of θ to U ⊃ B d ∪ {λ} such that θ(λ)E = D. Then there are vectors e 1 , . . . , e N ∈ E with θ(λ)e j = d j (j = 1, . . . , N ). Let Θ = (θ ij ) 1≤i,j≤N be the matrix formed as above with respect to the vectors e 1 , . . . , e N chosen in this way. Then Θ, viewed as a map B d → M N (C), continuously extends to U and satisfies lim z→λ, z∈B d Θ(z) = 1 N . Hence f = det(Θ) defines a function in I(M ) as in the statement of the lemma.

Now we prove the announced inclusions. The first one can be deduced from a result of Clouâtre and Timko [3, Corollary 3.14] that depends on the corona theorem for H 2 d due to Costea, Sawyer and Wick [5]. Alternatively, we can argue directly with the help of the corona theorem.

Proposition 6. Let D be a finite-dimensional Hilbert space and M ∈ Lat(M z , H 2 d (D)) a closed invariant subspace. Let E be a Hilbert space and θ : B d → L(E, D) an inner multiplier from H 2 d (E) into H 2 d (D) with θH 2 d (E) ⊂ M . Then we have the inclusions

σ(M z , H 2 d (D)/M ) ⊂ ⋂ f ∈I(M ) AZ(f ) ⊂ {λ ∈ B d ; θ is not surjective at λ}.
Proof. Note that the second inclusion readily follows from the preceding lemma, which says that, if λ does not belong to the set on the right-hand side, then there is a function f ∈ I(M ) with λ ∉ AZ(f ).
Towards a proof of the first inclusion, let λ ∉ ⋂ f ∈I(M ) AZ(f ). Then there exists h ∈ I(M ) with λ ∉ AZ(h), hence
\[
\inf_{z \in B_d} \Big( \sum_{i=1}^{d} |\lambda_i - z_i| + |h(z)| \Big) > 0.
\]
By the corona theorem for H 2 d [5], there exist f 1 , . . . , f d , f ∈ M(H 2 d ) such that ∑ d i=1 (λ i − z i )f i + f h = 1. Let g = f h ∈ I(M ). From the very definition of I(M ) it follows that M g /M = 0 and hence ∑ d i=1 (λ i − M z i /M )(M f i /M ) = 1 H 2 d (D)/M . Lemma 2.2.4 in [7] then shows that λ ∉ σ(M z , H 2 d (D)/M ), as desired.
Lower estimate for the spectrum and proof of main results
In view of Proposition 6, both Theorems 1 and 2 will follow as soon as we can show the missing inclusion
{λ ∈ B d ; θ is not surjective at λ} ⊂ σ(M z /M ).
To achieve this, we make use of the characteristic function θ T of the pure row contraction T = P H M z |H ∈ L(H) d where H = M ⊥ . Let us first establish the necessary notations and recall some basic facts.
Let H be a complex Hilbert space, and let T ∈ L(H) d be a commuting row contraction, which means by definition that T 1 T * 1 + . . . + T d T * d ≤ 1 H or, equivalently, that the row operator
T = [T 1 , . . . , T d ] : H d → H, (x i ) d i=1 ↦ ∑ d i=1 T i x i ,
is a contraction. Note that (modulo the canonical identifications) the row operator T : H d → H is nothing else than the boundary map δ 1,T in the Koszul complex of T . Similarly, the adjoint T * : H → H d acts as δ d,T * .
Following [2] we define the defect operators
D T = (1 H d − T * T ) 1/2 ∈ L(H d ) and D T * = (1 H − T T * ) 1/2 ∈ L(H),
and the respective defect spaces as
D T = D T H d ⊂ H d and D T * = D T * H ⊂ H.
The intertwining relations (Lemma 2.1 in [2])
T D T = D T * T and T * D T * = D T T * yield the inclusions T D T ⊂ D T * and T * D T * ⊂ D T .
It is well known (Lemma 2.2 in [2]) that the so-called characteristic function of T defined by
θ T : B d → L(D T , D T * ), θ T (z) = −T + D T * (1 H − ZT * ) −1 ZD T
is an analytic function that induces a well-defined contractive multiplier

M θ T : H 2 d (D T ) → H 2 d (D T * ), f ↦ θ T f.

Here, the symbol Z stands for the row operator Z = [z 1 1 H , . . . , z d 1 H ] : H d → H associated with z = (z 1 , . . . , z d ) ∈ C d . Details on characteristic functions of commuting row contractions and their properties can be found in [2].

Fix z ∈ ρ(T ) ∩ ∂B d . By the polynomial spectral mapping theorem for T * applied to p(w) = 1 − ∑ d i=1 z i w i ∈ C[w], we obtain 0 = 1 − |z| 2 ∉ {1 − ⟨z, w⟩ ; w ∈ σ(T )} = σ(1 H − ZT * ), since ⟨z, w⟩ = 1 with |z| = 1 and |w| ≤ 1 would force w = z by the Cauchy-Schwarz inequality, while z ∉ σ(T ).
Hence the characteristic function θ T of T extends to a holomorphic map θ T : U → L(D T , D T * ) given by the same formula that defines θ T on the open set
U = {z ∈ C d : 1 H − ZT * invertible} ⊃ B d ∪ (ρ(T ) ∩ ∂B d ) ⊃ B d ∩ ρ(T ).
If T ∈ L(H) is a single contraction with ρ(T ) ∩ ∂D = ∅, then the values of the extended characteristic function are unitary operators θ T : D T → D T * for each point λ ∈ ρ(T )∩ ∂D.
In particular, dim D T = dim D T * (see Chapter VI.1 in [14]). In the multivariable case the situation is quite different. Nevertheless, we obtain at least a partial result of the same type.
Theorem 7. Let T ∈ L(H) d be a commuting row contraction such that dim D T * < ∞.
Then the characteristic function
θ T : B d → L(D T , D T * )
of T is surjective at every point λ ∈ B d ∩ ρ(T ).
Proof. Let U and θ T : U → L(D T , D T * ) be defined as above and let λ ∈ U ∩ ρ(T ) be given. We show that θ T (λ) is surjective. Towards this, we first observe that
θ T (z) * = −T * + D T Z * (1 H − T Z * ) −1 D T * ∈ L(D T * , D T )
for all z ∈ U and hence
\begin{align*}
D_T\,\theta_T(z)^* &= \big({-T^*} + (1_{H^d} - T^*T)\,Z^*(1_H - TZ^*)^{-1}\big)D_{T^*} \\
&= \big({-T^*} + Z^*(1_H - TZ^*)^{-1} - T^*(TZ^*)(1_H - TZ^*)^{-1}\big)D_{T^*} \\
&= \big({-T^*} + Z^*(1_H - TZ^*)^{-1} + T^* - T^*(1_H - TZ^*)^{-1}\big)D_{T^*} \\
&= (Z^* - T^*)(1_H - TZ^*)^{-1}D_{T^*}
\end{align*}
for $z \in U$. If $y = D_{T^*}x$ $(x \in H)$, then
\[
D_T\,\theta_T(z)^* y = (Z^* - T^*)(1_H - TZ^*)^{-1}(1_H - TT^*)x
\]
for $z \in U$. In particular, for $z \in U \cap \rho(T)$, we have that $Z^* - T^* \cong \delta_{d,\,\overline{z} - T^*}$ is injective, so for $x \in H$, $y = D_{T^*}x$ with $\theta_T(z)^* y = 0$, we obtain that
\[
\|y\|^2 = \langle D_{T^*}^2 x, x \rangle = \langle (1_H - TT^*)x, x \rangle = 0.
\]
Hence the condition that dim D T * < ∞ implies that θ T (z) * ∈ L(D T * , D T ) is injective for z ∈ U ∩ ρ(T ). But then θ T (z) ∈ L(D T , D T * ) is surjective for z ∈ U ∩ ρ(T ).
Let T be a row contraction. We say that T is pure if the completely positive map P T : B(H) → B(H), X ↦ ∑ d i=1 T i XT * i , associated with T satisfies SOT−lim m→∞ P m T (1 H ) = 0. For a pure row contraction T , the map

j : H → H 2 d (D T * ), j(x) = ∑ α∈N d (|α|!/α!) (D T * T * α x) z α ,

yields an isometry intertwining T * ∈ L(H) d and M * z ∈ L(H 2 d (D T * )) d componentwise such that M θ T M * θ T + jj * = 1 H 2 d (D T * ) ; see [2, Lemma 3.6]. Since M θ T is a partial isometry, this leads to the orthogonal direct sum decomposition

H 2 d (D T * ) = θ T H 2 d (D T ) ⊕ jH .
We will subsequently refer to the map j from above as the canonical dilation of T .
Let us return to our default setting now: Define H = H 2 d (D) ⊖ M and T = P H M z |H ∈ L(H) d , which is known to be a pure row contraction. In view of Proposition 6, the following missing inclusion settles the proof of our main results from Section 1, with the exception of the statement about the right spectrum.

Corollary 8. Let D be a finite-dimensional Hilbert space, M ∈ Lat(M z , H 2 d (D)) and θ : B d → L(E, D) an inner multiplier from H 2 d (E) to H 2 d (D) with M = θH 2 d (E). Then σ(M z , H 2 d (D)/M ) ⊃ {λ ∈ B d ; θ is not surjective at λ}.

Proof. By Corollary 4, it suffices to show that
σ(T ) ∩ ∂B d ⊃ {λ ∈ ∂B d ; θ is not surjective at λ},
or equivalently, that θ is surjective at λ for all λ ∈ ρ(T ) ∩ ∂B d .
Towards this, fix such a point λ. Then, Theorem 7 guarantees that the characteristic function θ T is surjective at λ. The rest of the proof is about establishing a connection between θ T and θ.
Let R ⊂ H 2 d (D) be the smallest reducing subspace for M z with R ⊃ H . Then (see [1, Section 2])

R = ∑ α∈N d z α (R ∩ D) = H 2 d (R ∩ D).

Since the inclusion map i : H → H 2 d (R ∩ D) and the canonical dilation j : H → H 2 d (D T * ) are both minimal dilations for T , there is a unitary operator U : D T * → R ∩ D such that (1 ⊗ U ) ∘ j = i; see [1, Theorem 3.1]. Define D̃ = D ⊖ (R ∩ D). Then

H 2 d (D̃) = H 2 d (D) ⊖ H 2 d (R ∩ D) = H 2 d (D) ⊖ R ⊂ M

is the largest reducing subspace for M z contained in M . Note that

(1 ⊗ U )θ T H 2 d (D T ) = (1 ⊗ U )(H 2 d (D T * ) ⊖ Im j) = H 2 d (R ∩ D) ⊖ H = M ∩ H 2 d (D̃) ⊥ .

Hence we obtain the orthogonal decomposition

M = H 2 d (D̃) ⊕ (M ∩ H 2 d (D̃) ⊥ ) = H 2 d (D̃) ⊕ (1 ⊗ U )(θ T H 2 d (D T )).

The operator-valued map θ̃ : B d → L(D̃ ⊕ D T , D), θ̃(z) = 1 D̃ ⊕ (U θ T (z)), defines an inner multiplier from H 2 d (D̃ ⊕ D T ) into H 2 d (D) with θ̃H 2 d (D̃ ⊕ D T ) = M . Since θ T is surjective at λ, so is θ̃. Known uniqueness results about inner multipliers show that there exists a partial isometry V : D̃ ⊕ D T → E such that θ̃(z) = θ(z)V and θ(z) = θ̃(z)V * for all z ∈ B d ; see [11, Theorem 4.2] or [4, Proposition 2.3]. The second equality shows that θ extends to a holomorphic function in a neighborhood of λ, and the first equality then shows that θ is surjective at λ.

Remark 9. A general result from multivariable spectral theory (Corollary 3.5 in [15]) says that, for a commuting tuple T ∈ L(H) d with σ(T ) ⊂ B d , we have σ r (T ) ∩ ∂B d = σ(T ) ∩ ∂B d . Moreover, by Theorem 2 and Corollary 4, we have

σ(M z , H 2 d (D)/M ) ∩ B d = {λ ∈ B d ; θ is not surjective at λ} = σ r (M z , H 2 d (D)/M ) ∩ B d .

Therefore, in the setting of Theorem 2, we have σ(M z , H 2 d (D)/M ) = σ r (M z , H 2 d (D)/M ).
Applications to row contractions
Since every pure commuting row contraction T ∈ L(H) d is unitarily equivalent to a quotient tuple of the form M z /M ∈ L(H 2 d (D T * )/M ) d , Theorem 2 yields a description of the Taylor spectrum of T in terms of its characteristic function.
Corollary 10. Let T ∈ L(H) d be a pure commuting row contraction such that dim D T * < ∞. Then σ(T ) = σ r (T ) = {λ ∈ B d ; θ T is not surjective at λ}.
Proof. Recall from the discussion following the proof of Theorem 7 that, since T is a pure row contraction, the canonical dilation j : H → H 2 d (D T * ) intertwines T * and M * z componentwise and that, since M θ T is a partial isometry, this leads to the orthogonal direct sum decomposition H 2 d (D T * ) = θ T H 2 d (D T ) ⊕ jH . Hence T is unitarily equivalent to the quotient tuple M z /M on H 2 d (D T * )/M with M = θ T H 2 d (D T ), and the assertion follows from Theorem 2 together with Remark 9.

In the single variable case d = 1 there is a natural extension of the result stated in Corollary 8 to the case of completely non-unitary contractions T ∈ L(H) with no restriction on the defect space D T * (Theorem VI.4.1 in [14]). At this moment it remains open whether Corollary 10 remains true without the condition that the defect space D T * is finite dimensional.

As a consequence of Corollary 10 we obtain a dichotomy for pure commuting row contractions T ∈ L(H) d with dim D T * < ∞ whose characteristic function extends to an open neighbourhood of the closed ball B d . Indeed, for such T , [9] implies that θ T (λ)D T = D T * for all λ ∈ ∂B d . Corollary 10 now shows that σ(T ) ⊂ B d . On the other hand, since dim D T * < ∞, Corollary 4 implies that σ re (T ) ⊂ ∂B d , hence σ re (T ) = ∅. Therefore, dim H < ∞ (see e.g. Theorems 9 and 17 in [12, Section 19]), and hence σ(T ) ⊂ B d is a finite set.
[1] M. Bhattacharjee, J. Eschmeier, Dinesh K. Keshari, and Jaydeb Sarkar. Dilations, wandering subspaces, and inner functions. Linear Algebra Appl., 523:263-280, 2017. doi:10.1016/j.laa.2017.02.032.
[2] T. Bhattacharyya, J. Eschmeier, and J. Sarkar. Characteristic function of a pure commuting contractive tuple. Integral Equations Oper. Theory, 53(1):23-32, 2005. doi:10.1007/s00020-004-1309-5.
[3] R. Clouâtre and E. J. Timko. Localizable points in the support of a multiplier ideal and spectra of constrained operators. arXiv preprint, 2019. arXiv:1911.03525.
[4] Raphaël Clouâtre, Michael Hartz, and Dominik Schillo. A Beurling-Lax-Halmos theorem for spaces with a complete Nevanlinna-Pick factor. Proc. Amer. Math. Soc., 148:731-740, 2020.
[5] Ş. Costea, E. T. Sawyer, and B. D. Wick. The corona theorem for the Drury-Arveson Hardy space and other holomorphic Besov-Sobolev spaces on the unit ball in C^n. Anal. PDE, 4(4):499-550, 2011. doi:10.2140/apde.2011.4.499.
[6] R. G. Douglas and J. Eschmeier. Spectral inclusion theorems. In Mathematical methods in systems, optimization and control, number 222 in Oper. Theory: Adv. Appl. Birkhäuser/Springer Basel AG, 2012.
[7] J. Eschmeier and M. Putinar. Spectral Decompositions and Analytic Sheaves, volume 10 of LMS Monographs, New Series. Clarendon Press, Oxford, 1996.
[8] J. Gleason, S. Richter, and C. Sundberg. On the index of invariant subspaces in spaces of analytic functions of several complex variables. J. Reine Angew. Math., 587:49-76, 2005. doi:10.1515/crll.2005.2005.587.49.
[9] D. C. V. Greene, S. Richter, and C. Sundberg. The structure of inner multipliers on spaces with complete Nevanlinna Pick kernels. J. Funct. Anal., 194(2):311-331, 2002. doi:10.1006/jfan.2002.3928.
[10] Devin C. V. Greene. Free resolutions in multivariable operator theory. J. Funct. Anal., 200(2):429-450, 2003. doi:10.1016/S0022-1236(02)00043-5.
[11] S. McCullough and T. Trent. Invariant subspaces and Nevanlinna-Pick kernels. J. Funct. Anal., 178(1):226-249, 2000. doi:10.1006/jfan.2000.3664.
[12] V. Müller. Spectral Theory of Linear Operators and Spectral Systems in Banach Algebras. Operator Theory: Advances and Applications. Birkhäuser Basel, 2007. doi:10.1007/978-3-0348-7788-6.
[13] S. Richter and C. Sundberg. Cyclic vectors in the Drury-Arveson space, 2012. Slides from a talk.
[14] B. Sz.-Nagy and C. Foias. Harmonic analysis of operators on Hilbert space. North-Holland Publishing Company, Amsterdam, 1970.
[15] M. Wernet. On semi-Fredholm theory and essential normality. PhD thesis, Saarland University, 2014. doi:10.22028/D291-26577.
| [] |
[
"Fate of bubble clusters rising in a quiescent liquid",
"Fate of bubble clusters rising in a quiescent liquid"
] | [
"Tian Ma \nInstitute of Fluid Dynamics\nHelmholtz-Zentrum Dresden -Rossendorf\n01328DresdenGermany\n",
"Hendrik Hessenkemper \nInstitute of Fluid Dynamics\nHelmholtz-Zentrum Dresden -Rossendorf\n01328DresdenGermany\n",
"Dirk Lucas \nInstitute of Fluid Dynamics\nHelmholtz-Zentrum Dresden -Rossendorf\n01328DresdenGermany\n",
"Andrew D Bragg \nDepartment of Civil and Environmental Engineering\nDuke University\n27708DurhamNCUSA\n"
] | [
"Institute of Fluid Dynamics\nHelmholtz-Zentrum Dresden -Rossendorf\n01328DresdenGermany",
"Institute of Fluid Dynamics\nHelmholtz-Zentrum Dresden -Rossendorf\n01328DresdenGermany",
"Institute of Fluid Dynamics\nHelmholtz-Zentrum Dresden -Rossendorf\n01328DresdenGermany",
"Department of Civil and Environmental Engineering\nDuke University\n27708DurhamNCUSA"
] | [] | We use experiments to study the evolution of bubble clusters in a swarm of freely rising, deformable bubbles. A new machine learning-aided algorithm allows us to identify and track bubbles in clusters and measure the cluster lifetimes. The results indicate that contamination in the carrier liquid can enhance the formation of bubble clusters and prolong the cluster lifetimes. The mean bubble rise velocities conditioned on the bubble cluster size are also explored, and we find a positive correlation between the cluster size and the rise speed of the bubbles in the cluster, with clustered bubbles rising up to 20% faster than unclustered bubbles. | null | [
"https://export.arxiv.org/pdf/2306.02101v2.pdf"
] | 259,075,479 | 2306.02101 | e50067f729f479574590aa09348af0023bce9475 |
Fate of bubble clusters rising in a quiescent liquid
Tian Ma
Institute of Fluid Dynamics
Helmholtz-Zentrum Dresden -Rossendorf
01328DresdenGermany
Hendrik Hessenkemper
Institute of Fluid Dynamics
Helmholtz-Zentrum Dresden -Rossendorf
01328DresdenGermany
Dirk Lucas
Institute of Fluid Dynamics
Helmholtz-Zentrum Dresden -Rossendorf
01328DresdenGermany
Andrew D Bragg
Department of Civil and Environmental Engineering
Duke University
27708DurhamNCUSA
Fate of bubble clusters rising in a quiescent liquid
We use experiments to study the evolution of bubble clusters in a swarm of freely rising, deformable bubbles. A new machine learning-aided algorithm allows us to identify and track bubbles in clusters and measure the cluster lifetimes. The results indicate that contamination in the carrier liquid can enhance the formation of bubble clusters and prolong the cluster lifetimes. The mean bubble rise velocities conditioned on the bubble cluster size are also explored, and we find a positive correlation between the cluster size and the rise speed of the bubbles in the cluster, with clustered bubbles rising up to 20% faster than unclustered bubbles.
case of isolated bubbles. Hallez & Legendre (2011) showed that the side-by-side configuration maximizes the drag force acting on a pair of bubbles, while the in-line bubble configuration minimizes the drag due to wake entrainment for the trailing bubble.
These studies on bubble pair dynamics have provided much insight, however, there are many open questions concerning the behaviour of bubble swarms where two or more bubbles may be clustered together, whose motion may also be affected by the wakes of other bubbles and bubble clusters in the flow. Indeed, while the rise velocity of bubble pairs in a quiescent liquid is well understood, its behaviour in the context of bubble swarms is debated. For example, the experiments of Stewart (1995) and Brücker (1999) for large deformable bubbles in a swarm found that the mean rise velocity was considerably larger than that for a single bubble. However, this contradicts other experimental (Ishii & Zuber 1979) and numerical (Roghair et al. 2011) studies for large bubbles which argue that the mean bubble rise velocity decreases monotonically as the gas void fraction is increased.
Several fundamental questions remain mostly unexplored: What is the probability to form clusters involving a given number of bubbles? What is the lifetime of these clusters? How does the rise velocity of bubbles in a cluster depend on the cluster size? How do the answers to these questions depend on contaminants in the liquid? In this paper we explore these questions experimentally by tracking thousands of deformable bubbles in a vertical column, using a recently developed machine-learning algorithm to detect and follow the evolution of bubble clusters, and we explore how the bubble rise velocities depend on the cluster size. We also consider the effect of surfactants to provide a more complete picture for real systems, where contaminants may cause behaviour that differs substantially from that of an idealized clean system.
Experimental method
Experimental set-up
The experimental apparatus is identical to that in Ma et al. (2022), and we therefore refer the reader to that paper for additional details; here, we summarize. The experiments were conducted in a rectangular bubble column (depth 50 mm and width 112.5 mm), with a water fill height of 1,000 mm. Air bubbles are injected through 11 spargers which are homogeneously distributed at the bottom of the column.
We use tap water in the present work as the base liquid and consider two different bubble sizes by using spargers with different inner diameters. For each bubble size, we manipulate the gas flow rate and ensure that no case is in the heterogeneous regime of dispersed bubbly flows. In total, we have six mono-dispersed cases (see supplementary movies 1-6) labelled as SmTapLess, SmTap, SmPen+, LaTapLess, LaTap, and LaPen+ in table 1, which includes some basic characteristic dimensionless numbers for the bubbles. Here, "Sm/La" stand for smaller/larger bubbles, "Pen+" stands for the corresponding cases with 1,000 ppm 1-Pentanol added, and "Less" stands for a lower gas void fraction than the corresponding Tap/Pen+ cases for smaller/larger bubbles, respectively. It should be noted that the three cases with larger bubble sizes have higher gas void fractions than the three cases with smaller bubbles. This is because in our setup it is not possible to have the same flow rate for two different spargers while also maintaining a homogeneous gas distribution for mono-dispersed bubbles. Furthermore, the bubble size is slightly reduced when adding 1-Pentanol for both types of sparger. This is due to the influence of the surfactants, which reduce the surface tension and hence affect the bubble formation at the rigid orifice.
To identify and track bubble clusters, we use planar shadow images obtained by recording the flow with a high-speed camera and illuminating the setup with an LED. The measurement resolution in time and space is 250 fps and 59.9 µm/Px, respectively, with a field of view (FOV) of 90 mm × 76 mm. For each case, we record 1,000 sequences, each having 70-75 frames, approximately the time a bubble takes to pass through the complete image height.
Bubble identification and tracking
In our study only one camera is used; however, as will be shown in § 2.3, we are nevertheless able to perform quasi-3D tracking of bubbles. Independent of the number of cameras used, the task of tracking bubbles in image sequences can be done in a detect-to-track or in a track-to-detect fashion. While the former links previously detected bubbles in each frame to form suitable tracks, the latter uses extrapolations of already established tracks to detect bubbles in follow-up images. We use the former detect-to-track strategy, which allows us to incorporate detections among multiple frames to establish tracks, at the cost of relying more strongly on an accurate detector that finds bubbles in individual frames.
Even for low gas volume fractions, detecting bubbles in individual images is a challenging task since bubbles can overlap in the images. Fully overlapping bubbles cannot be detected, but partially overlapping bubbles can be dealt with and deep-learning-based strategies for this have recently shown very promising results (e.g. Kim & Park 2021). In our previous work (Hessenkemper et al. 2022), we developed such an approach that used a trained convolutional neural network (CNN) to segment overlapping bubbles. Furthermore, the contour of each detected bubble is reconstructed using 64 radial vectors pointing from the segmentation centre to the boundary (figure 1a), and the radial vectors of partly occluded bubbles are corrected using an additional multi-layer perceptron (MLP).
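To make this contour representation concrete, the following minimal sketch (our illustration in Python, not the implementation of Hessenkemper et al. (2022); all names are hypothetical) converts a detection stored as a centre plus 64 radial vectors back into 2D contour points:

import numpy as np

def radial_to_contour(center, radii):
    """Convert a bubble stored as (centre, 64 radii) to 2D contour points.

    center : (2,) array, bubble centre in pixel coordinates (x, y)
    radii  : (64,) array, distance from the centre to the boundary at
             equally spaced angles
    """
    angles = np.linspace(0.0, 2.0 * np.pi, len(radii), endpoint=False)
    x = center[0] + radii * np.cos(angles)
    y = center[1] + radii * np.sin(angles)
    return np.stack([x, y], axis=1)  # (64, 2) contour points

# Example: a circular bubble of radius 10 px centred at (50, 50)
contour = radial_to_contour(np.array([50.0, 50.0]), np.full(64, 10.0))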
The subsequent tracking of multiple detected bubbles in close proximity poses further challenges, as the tracker not only has to be robust against inaccuracies of the detector, i.e. missing or false detections, but also has to be able to track bubbles that are fully occluded, even for multiple time steps, while at the same time having numerous possible associations in the near vicinity. To solve these issues, a graph-based tracking formalism is used. Specifically, we follow the framework of Brasó & Leal-Taixé (2020), utilizing multiple MLPs to predict valid connections of detections on graph-structured data. The four main aspects of this tracking framework are described as follows. Details on network architectures, the created training dataset as well as validation tests are provided in the Supplementary Materials.
Graph construction: To track the bubbles, each sequence is modelled as a graph, with detections (bubbles) being the nodes of the graph and possible connections in time being the edges of the graph, i.e. a pair of detections forward or backward in time that possibly belong to the same bubble (figure 1b). The task is then to classify the edges into active and non-active edges, which at the end form a set of valid tracks that fulfil the so-called 'flow conservation constraints': each node has an active edge to at most one node forward in time and to at most one node backward in time.
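A minimal sketch of this graph construction step (our illustration; the thresholds and fields are placeholders, not the values used in this work) could look as follows:

import itertools

def build_tracking_graph(detections, max_dt=3, max_dist=50.0):
    """Build candidate edges between detections in nearby frames.

    detections : list of dicts with keys 'frame', 'id', 'x', 'y'
    max_dt     : connect detections at most this many frames apart
    max_dist   : prune edges between detections further apart than this (px)
    Returns a list of (id_a, id_b) candidate edges, with id_a at the
    earlier frame.
    """
    edges = []
    for a, b in itertools.combinations(detections, 2):
        da, db = (a, b) if a['frame'] < b['frame'] else (b, a)
        dt = db['frame'] - da['frame']
        if 0 < dt <= max_dt:
            dist = ((da['x'] - db['x'])**2 + (da['y'] - db['y'])**2) ** 0.5
            if dist <= max_dist:
                edges.append((da['id'], db['id']))
    return edges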
Feature encoding: For both the nodes and the edges of the graph, features are encoded with two separate MLPs (figure 1c). The node embeddings represent the appearance features of the detections, which are usually encoded with a CNN (Brasó & Leal-Taixé 2020). However, monochromatic bubble images show few distinct features, with the size and the shape of the bubble image being the most relevant ones. Thus, we have chosen the 64 radial vectors from the bubble detector as input for the node feature encoder, providing not only a more accurate description of the relevant features, bubble size and shape, but also a better 2D bubble contour in the case of overlapping bubbles, due to the additional correction of the radial locations. The edge embeddings represent tracking-related features. For each pair of detections in nearby frames, the time increment as well as the relative coordinates and sizes are fed into the edge feature encoding MLP to generate the edge embeddings.
Message Passing Network: The core of the tracking algorithm is the Message Passing Network (MPN), whose main purpose is to update node and edge embeddings w.r.t. their surrounding nodes and edges in the graph; this is done iteratively using message passing steps. First, the edge embeddings are updated by combining their embeddings with the embeddings of the adjacent pair of nodes and feeding them into an edge-update MLP (figure 1c). Then, a time-aware node update step is applied by aggregating edge embeddings of adjacent edges, which already contain information of connected nodes due to the previous edge update step. The time-awareness is introduced by first aggregating and updating separately incoming edges, i.e. connections backward in time, and outgoing edges, i.e. connections forward in time, with individual MLPs, and then concatenating the outcome to finally update the node embeddings with a node update MLP (figure 1d). For each iteration, information of nodes one step further in time is passed through the network to the node/edge to be updated. Thus, the number of iterations defines the time increment of the information of other nodes that are supplied to the current node.

Edge classification and post-processing: After updating all node and edge embeddings with the MPN, the edges are classified with a classifier MLP (figure 1e). The predictions are post-processed and remaining violations of the flow conservation constraints are treated with an exact rounding scheme (Brasó & Leal-Taixé 2020). Lastly, missing links in the trajectories are interpolated using bilinear interpolation and each trajectory is smoothed with a uniform filter.
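The time-aware update can be sketched schematically as follows (a PyTorch sketch with sum aggregation and placeholder dimensions, assuming edges are stored as index arrays src/dst pointing from the earlier to the later detection; the actual architectures follow Brasó & Leal-Taixé (2020) and are detailed in the supplementary materials):

import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    # small two-layer perceptron used for all update functions
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, out_dim))

class MessagePassingStep(nn.Module):
    def __init__(self, node_dim=32, edge_dim=16):
        super().__init__()
        self.edge_update = mlp(2 * node_dim + edge_dim, edge_dim)
        self.msg_past = mlp(node_dim + edge_dim, node_dim)
        self.msg_future = mlp(node_dim + edge_dim, node_dim)
        self.node_update = mlp(2 * node_dim, node_dim)

    def forward(self, h_node, h_edge, src, dst):
        # src[e] / dst[e]: node indices of the earlier / later detection of edge e
        h_edge = self.edge_update(torch.cat([h_node[src], h_node[dst], h_edge], dim=1))
        past = torch.zeros_like(h_node)
        future = torch.zeros_like(h_node)
        # messages along an edge arrive at the later node from the past ...
        past.index_add_(0, dst, self.msg_past(torch.cat([h_node[dst], h_edge], dim=1)))
        # ... and at the earlier node from the future
        future.index_add_(0, src, self.msg_future(torch.cat([h_node[src], h_edge], dim=1)))
        # concatenate both directions and update the node embeddings
        return self.node_update(torch.cat([past, future], dim=1)), h_edge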
Identification and tracking of bubble clusters
The detection of bubble clusters at each time step follows a distance criterion between neighbouring bubbles whose centres in the 2D image domain are closer than a predefined threshold of 2d b , with d b the equivalent bubble diameter of each individual case (figure 1g). This value is mainly based on the work of Legendre et al. (2003), who observed a considerable drag enhancement for a bubble pair rising side-by-side within this distance. Tests for different thresholds (2d b ± 0.5d b ) were conducted and the trends of the results in § 3 were found to be insensitive to the choice of this parameter. Furthermore, since we attempt to detect the bubble clusters in a quasi-3D manner, we keep the in-focus region in the depth direction to also be 2d b (figure 1f,h). To estimate this depth distance to the centre plane we use the gray value gradient of the detected bubbles and consider only sharp bubbles in the shallow Depth of Field (DoF) region (see supplementary materials for more detail). In summary, we utilize a cylindrical search volume for the cluster identification, of radius 2d b in the 2D image domain and extent 2d b in the depth direction. For all the cases, the mean inter-bubble distance based on the global void fraction (table 1) is much larger than the search radius 2d b , indicating that the bubble clusters to be discussed are dynamically significant.
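The identification step amounts to computing connected components of this pairwise linking relation; the following is a minimal sketch (our illustration; the depth handling is simplified relative to the gray-value-gradient estimate described above):

import numpy as np

def find_clusters(xy, depth, d_b):
    """Group bubbles into clusters with the cylindrical search criterion.

    xy    : (n, 2) bubble centres in the image plane (same units as d_b)
    depth : (n,) estimated distance of each bubble to the focal centre plane
    d_b   : mean equivalent bubble diameter of the case

    Two bubbles are linked if their in-plane distance is below 2*d_b and both
    lie within the in-focus depth region; clusters are the connected
    components of this linking relation (computed with union-find).
    """
    n = len(xy)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            in_plane = np.linalg.norm(xy[i] - xy[j]) < 2 * d_b
            in_focus = abs(depth[i]) < 2 * d_b and abs(depth[j]) < 2 * d_b
            if in_plane and in_focus:
                parent[find(i)] = find(j)  # union

    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    # keep only groups of two or more bubbles
    return [c for c in clusters.values() if len(c) > 1]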
The cluster tracking strategy is inspired by the work of Liu et al. (2020) for characterising the temporal evolution of inertial particle clusters in turbulence. Considering two clusters identified in two consecutive time steps (Δt = 1/250 s), we take both to be successive realizations of the same cluster if the number of bubbles they share is above a given threshold. The shared bubbles across clusters in successive time steps are termed connections. We consider forward-in-time and backward-in-time connections, and apply thresholds on the fraction of connected bubbles over the total number of bubbles in each cluster. We illustrate an example in figure 2(a): Cluster A (identified in time step 1) shares all its bubbles with cluster C (identified in time step 2), while C shares 2/3 of its bubbles with A. Therefore, the fractions of forward and backward connections between A and C are 1 and 2/3, respectively. On the other hand, B shares 1/3 of its bubbles with C, and C shares 1/3 of its bubbles with B. Thus, the forward and backward connections between B and C are 1/3 and 1/3, respectively. Following Liu et al. (2020), two clusters in consecutive time steps are identified as the same cluster when the fractions of their backward and forward connections are both at least 1/2. In the example of figure 2(a), A and C are recognized as belonging to the same cluster. The cluster lifetime is defined as the time elapsed between birth (the first instance a cluster is identified) and death (the last time it is recognized). Here, we explicitly include the lower threshold of 1/2, since many 2-bubble clusters appear and require an additional criterion for tracking. In figure 2(b) we give an example where cluster A splits at time step 2 into B and C. To decide whether B or C should be regarded as the continuation of cluster A for the purposes of tracking, we consider whether cluster B or C persists longer into the future. In this example, while cluster C survives until time step n, B does not. Therefore, we regard C, D, and A as belonging to the same cluster, while cluster B is considered to be a newborn cluster at time step 2. This approach eliminates ambiguities since it ensures that a cluster at any instant can only be associated with at most one cluster either in the past or the future.
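The matching rule can be summarized in a few lines; the example reproduces the connection fractions of figure 2(a) (our illustration, with hypothetical bubble IDs):

def same_cluster(prev, curr):
    """Decide whether two bubble clusters in consecutive frames match.

    prev, curr : sets of bubble track IDs in the earlier/later time step.
    Following the rule described above, the clusters are taken to be
    successive realizations of the same cluster when both the forward
    fraction (shared bubbles over |prev|) and the backward fraction
    (shared bubbles over |curr|) are at least 1/2.
    """
    shared = len(prev & curr)
    forward = shared / len(prev)
    backward = shared / len(curr)
    return forward >= 0.5 and backward >= 0.5

# Example from figure 2(a): A = {1, 2}, C = {1, 2, 3}
A, C = {1, 2}, {1, 2, 3}
print(same_cluster(A, C))  # True: forward fraction 1.0, backward fraction 2/3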
Results
Probability to be clustered
We first consider the percentage of bubbles in the flow that are clustered, and the results in figure 3(a) show that this percentage increases in the order LaTapLess, LaTap, LaPen+ for the larger bubbles. While the increase from LaTapLess to LaTap is quite understandable due to the increase of the gas void fraction, the result for LaPen+ (whose gas void fraction is slightly less than that of LaTap) shows that the surfactant promotes the formation of clusters. We obtained similar results (not shown) for the three cases with smaller bubbles.
In figure 3(b-g) we consider the probability to find a given number of bubbles within a cluster for all cases. The results show that this probability decreases with increasing cluster size and, consistent with previous studies, 2-bubble clusters are the most common for all 6 cases (Zenit et al. 2001; Bunner & Tryggvason 2003). However, the results also show that 3- and 4-bubble clusters occur with non-negligible probability, and there are even rare events with 8-bubble clusters. The results also show that adding contaminants decreases the probability to form 2-bubble clusters and increases the probability to form larger clusters, although the dependence is not too strong. For a fixed contaminant level, increasing the gas void fraction has the same effect.
Lifetime
We now turn to consider the mean lifetime of the clusters as a function of the cluster size (only the results for clusters of up to 5 bubbles are shown, as the statistics for larger clusters are not converged). Figure 4(a,b) shows the mean lifetime normalized by a characteristic timescale of BIT based on the equivalent bubble diameter and the mean vertical slip velocity between the bubble and liquid at the column centre. The values of the normalized lifetime are order unity, suggesting that this is indeed a dynamically relevant timescale for the cluster lifetimes. The results also reveal a systematic dependence on the cluster size and on the liquid contamination. First, the mean lifetime decreases monotonically with increasing cluster size, such that larger bubble clusters are not only rarer (see § 3.1), but also more unstable. While this may not seem surprising, it is in fact the opposite of what has been observed for inertial particles, where the cluster size and its lifetime are positively correlated (Liu et al. 2020). The difference could be simply due to the fact that the most common sizes of our clusters are much smaller than those of the inertial particle clusters in Liu et al. (2020), and as a result relatively small changes in the bubble configurations can result in the formation or destruction of a given cluster. The other significant difference is that our bubbles hydrodynamically interact, unlike the numerically simulated inertial particles in Liu et al. (2020), where a one-way coupling assumption is used. Second, increasing the gas void fraction not only leads to the formation of larger clusters, but also to slightly longer mean lifetimes for the clusters, although the lifetimes of 2-bubble clusters are the least sensitive to it. Third, the mean lifetimes of the bubble clusters notably increase with increasing contamination levels. In a recent paper we showed that increased contamination leads to a reduction of the bubble Reynolds number and an increase in BIT (Ma et al. 2023). The reduction in the bubble Reynolds number causes the bubble trajectories to be less chaotic, and this may explain why the cluster lifetimes increase with increasing contamination. Figure 4(c,d) shows the probability density functions (PDFs) of the cluster lifetimes, which have been computed using clusters of all sizes. The general dependence on the flow variables is similar to that observed for the mean cluster lifetime, with the PDF tails becoming increasingly heavy in the order TapLess, Tap and Pen+ for both the small and large bubbles. The majority of the bubble clusters survive for times of the order of the BIT timescale; however, there are extreme cases in the Pen+ cases where clusters survive for up to roughly 15 such timescales. The central regions of the PDFs are well described by stretched exponential functions with parameters that vary between the cases.
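As an illustration of the last point, a stretched exponential of the form f(t) = a exp(−(t/τ)^β) can be fitted to the central region of a lifetime PDF as sketched below (our Python sketch; the specific parametrization and fitting ranges used for figure 4 are assumptions here):

import numpy as np
from scipy.optimize import curve_fit

def stretched_exponential(t, a, tau, beta):
    """f(t) = a * exp(-(t/tau)**beta); beta < 1 gives a heavy tail."""
    return a * np.exp(-(t / tau) ** beta)

def fit_lifetime_pdf(lifetimes, bins=30):
    """Bin the measured (normalized) cluster lifetimes and fit the PDF."""
    pdf, edges = np.histogram(lifetimes, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mask = pdf > 0  # avoid empty bins
    popt, _ = curve_fit(stretched_exponential, centers[mask], pdf[mask],
                        p0=(pdf[mask][0], 1.0, 0.8), maxfev=10000)
    return popt  # (a, tau, beta)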
Mean rise velocity of bubbles in clusters
We finally consider the role that clustering plays in the mean bubble rise velocity. Figure 5 shows the mean rise velocity of bubbles in clusters conditioned on the cluster size; the results for unclustered bubbles are also shown for reference. Consistent with our previous results based on averaging over all bubbles (Ma et al. 2023), the results show that for almost all cluster sizes, increasing the liquid contamination leads to a reduction in the rise velocity, due to the modification of the bubble boundary conditions. For the larger bubbles we also observe a clear increase in the rise velocity with increasing cluster size, with an increase of up to 20% when going from unclustered bubbles to bubble pairs, while the increase is more moderate when the cluster size is increased beyond 2. The enhancement when going from unclustered bubbles to bubble pairs is also observed in the SmPen+ case, with only slight enhancements for larger clusters. However, for the SmTapLess and SmTap cases, the rise velocity varies only weakly with the cluster size, even in going from unclustered bubbles to bubble pairs. What is the physical explanation for why the clustered bubbles rise faster than unclustered bubbles? We begin by considering the case of bubble pairs and plot in figure 6 the mean inclination angle of the bubble pair centreline with respect to the vertical direction (see the sketch in the figure). (It should be noted that since our measurements are only quasi-3D, an inclination angle of zero does not necessarily mean that the bubbles are in-line, because they may nevertheless be separated in the depth direction by up to a distance 2d b .) For all cases an almost uniform distribution of the inclination angle is observed, i.e. there is no preferential alignment for bubble pairs. This is consistent with visual inspection of the experimental images (see supplementary movies), which show that the bubble pair orientations are not persistent; instead, the bubbles continually trade places in a 'leapfrog' fashion. This observation was also made in many 3D experiments (Stewart 1995; Riboux et al. 2010) and DNS of bubble swarms (Bunner & Tryggvason 2003; Esmaeeli & Tryggvason 2005). It is, however, strikingly different from the behaviour observed for isolated bubble pairs, where a stable configuration is observed for two clean spherical bubbles rising side-by-side (Hallez & Legendre 2011), while for contaminated systems their stable configuration is to be in-line, due to the lift reversal experienced by the trailing bubble (Atasi et al. 2023). One possible reason for this discrepancy is that in our experiments the bubbles have oscillating and/or chaotic rising paths, for which the probability that two bubbles will rise in a stable arrangement is very low. By contrast, in Hallez & Legendre (2011) the bubbles are fixed at various positions, and in Atasi et al. (2023) the Galileo number is small enough that the bubbles have straight rising paths. Another reason is that in our experiments the bubble pairs are not isolated and can experience fluctuations and turbulence due to the wakes of other bubbles in the flow, and this will readily suppress any preferential orientation that might have occurred were the bubble pairs isolated.
Although the bubble pair orientation is almost random, the impact of their interaction on the rise velocity will nevertheless depend on the orientation, especially in the present bubble regime (deformable bubbles with bubble Reynolds numbers of order 100-1000). For example, in the side-by-side configuration the two bubbles are outside of each other's wakes and the modification to the drag force on each bubble is minimal (Kong et al. 2019). On the other hand, for the in-line configuration the trailing bubble is sheltered by the leading bubble, and the reduced pressure behind the leading bubble causes the trailing bubble to be sucked towards it, increasing the rise velocity of the trailing bubble while the leading bubble is almost unaffected (Zhang et al. 2021). These effects mean that only the rise velocity of trailing bubbles will be significantly affected by the clustering, and hence, when averaged over all orientations, the increased vertical velocity of the trailing bubbles leads to an overall increase in the mean rise velocity. This explains the increased mean rise velocity for bubble pairs compared to the unclustered results in figure 5. The increase is, however, minimal for the cases SmTapLess and SmTap. This is most likely due to the bubble wakes being weaker for these cases, a result of which is that the bubble interaction and the associated effect on the rise velocity are also weaker. The results in figure 5 show that the rise velocity further increases as the cluster size is increased beyond 2. This can be understood in terms of the enhanced opportunity for bubbles to be sheltered by other bubbles as the cluster size increases. However, the increase is not as strong as when going from unclustered bubbles to bubble pairs, because the greatest effect of sheltering occurs when two bubbles are in-line; if the in-line bubbles are part of a cluster, the additional bubbles in the cluster must be displaced in the horizontal direction due to the finite size of the bubbles, and they will therefore be less effective in sheltering the trailing bubble. It is interesting to note that for experiments on heavy particles settling in a quiescent fluid, similar behaviour was also found, with clustered particles falling faster (Huisman et al. 2016). In that case, the enhanced settling velocity was also attributed to a sheltering effect, i.e. reduced drag on particles falling in the wake of other particles. However, in that context, the particle clusters were found to exhibit strong alignment with the vertical direction, unlike our bubble clusters, whose orientations are almost random (at least for bubble pairs).
Conclusions
We conducted experiments on the temporal evolution of bubble clusters with the aid of a new bubble tracking method for crowded swarms. Our results show that 2-bubble clusters are the most common; however, 3- and 4-bubble clusters also often occur. The clusters persist on average for a time of the order of the characteristic BIT timescale, although rare clusters persisting for an order of magnitude longer are also observed. Furthermore, surfactants are observed to enhance the cluster sizes and their lifetimes. A positive correlation between cluster size and bubble rise speed is observed, with clustered bubbles rising up to 20% faster than unclustered bubbles. Finally, while our cluster tracking method is only quasi-3D, a fully 3D method for dense, deformable bubbles could be developed by combining our bubble identification method with the recent tracking algorithm of Tan et al. (2023), which currently applies to spherical bubbles.
Figure 1: Steps of the tracking algorithm: (a) detections represented with 64 radial vectors, (b) graph construction, (c) feature encoding with node encoder MLP (dark blue rectangle) and edge encoder MLP (orange rectangle) together with edge update MLP (green rectangle), (d) time-aware node update MLPs (light blue rectangle), (e) predicted active edges. Cluster search region: view from (f) the top, (g) the front and (h) the side. Note that (f,h) are only schematic representations, showing possible bubble arrangements.
Figure 2: Method of tracking clusters: (a) illustrates an example of two clusters merging into one cluster; (b) illustrates an example of one cluster separating into two clusters.
Figure 3: (a) Percentage of bubbles in clusters for the 3 cases with larger bubbles. (b-g) Probability of the number of bubbles within a cluster for the different cases.
Figure 4: Mean lifetime of 2-, 3-, 4- and 5-bubble clusters (a,b) and PDF of the bubble cluster lifetime (c,d): the smaller bubble cases (a,c) and the larger bubble cases (b,d).
Figure 5: Mean bubble rise velocity as a function of the number of bubbles in the cluster: (a) smaller bubbles and (b) larger bubbles. A cluster size of 1 denotes unclustered bubbles.
Figure 6: Orientation of bubble pairs for the different cases (e.g. an in-line bubble pair corresponds to an inclination angle of 0° and a side-by-side pair to 90°).
Parameter                             SmTapLess   SmTap   SmPen+   LaTapLess   LaTap   LaPen+
Gas void fraction                     0.51%       0.79%   0.71%    1.2%        1.98%   1.91%
Equivalent bubble diameter d_b (mm)   3           3.1     2.7      4           4.3     3.8
Aspect ratio                          1.9         1.9     1.2      1.9         2.0     1.3
Inter-bubble distance (mm)            14.1        12.7    11.4     14.1        13.0    11.2
Galileo number Ga                     512         538     437      788         879     730
Eötvös number Eo                      1.29        1.38    1.05     2.30        2.66    2.08
Bubble Reynolds number Re             755         739     493      912         1022    782
Drag coefficient C_D                  0.61        0.70    1.04     0.98        0.97    1.14

Table 1: Selected characteristics of the six bubble swarm cases: the averaged gas void fraction, the equivalent bubble diameter d_b, the bubble aspect ratio, the mean inter-bubble distance, the Galileo number Ga ≡ √(|ρ_g/ρ_l − 1| g d_b³)/ν_l, and the Eötvös number Eo ≡ Δρ g d_b²/σ. The bubble Reynolds number Re and the drag coefficient C_D are based on d_b and the bubble-to-fluid relative velocity.
| [] |
[
"Rank-heterogeneous preference models for school choice",
"Rank-heterogeneous preference models for school choice"
] | [
"Amel Awadelkarim ameloa@stanford.edu \nStanford University Stanford\nCAUSA\n",
"Arjun Seshadri \nAmazon San Francisco\nCAUSA\n",
"Itai Ashlagi iashlagi@stanford.edu \nStanford University Stanford\nCAUSA\n",
"Irene Lo \nStanford University Stanford\nCAUSA\n",
"Johan Ugander jugander@stanford.edu \nStanford University Stanford\nCAUSA\n"
] | [
"Stanford University Stanford\nCAUSA",
"Amazon San Francisco\nCAUSA",
"Stanford University Stanford\nCAUSA",
"Stanford University Stanford\nCAUSA",
"Stanford University Stanford\nCAUSA"
] | [] | School choice mechanism designers use discrete choice models to understand and predict families' preferences. The most widelyused choice model, the multinomial logit (MNL), is linear in school and/or household attributes. While the model is simple and interpretable, it assumes the ranked preference lists arise from a choice process that is uniform throughout the ranking, from top to bottom. In this work, we introduce two strategies for rank-heterogeneous choice modeling tailored for school choice. First, we adapt a contextdependent random utility model (CDM), considering down-rank choices as occurring in the context of earlier up-rank choices. Second, we consider stratifying the choice modeling by rank, regularizing rank-adjacent models towards one another when appropriate. Using data on household preferences from the San Francisco Unified School District (SFUSD) across multiple years, we show that the contextual models considerably improve our out-of-sample evaluation metrics across all rank positions over the non-contextual models in the literature. Meanwhile, stratifying the model by rank can yield more accurate first-choice predictions while down-rank predictions are relatively unimproved. These models provide performance upgrades that school choice researchers can adopt to improve predictions and counterfactual analyses. | 10.1145/3580305.3599484 | [
"https://export.arxiv.org/pdf/2306.01801v1.pdf"
] | 259,075,518 | 2306.01801 | 9432ef135acaa1460a7372fcb48ac12973fb41fd |
Rank-heterogeneous preference models for school choice
Amel Awadelkarim ameloa@stanford.edu
Stanford University Stanford
CAUSA
Arjun Seshadri
Amazon San Francisco
CAUSA
Itai Ashlagi iashlagi@stanford.edu
Stanford University Stanford
CAUSA
Irene Lo
Stanford University Stanford
CAUSA
Johan Ugander jugander@stanford.edu
Stanford University Stanford
CAUSA
Rank-heterogeneous preference models for school choice
CCS CONCEPTS: Information systems → Rank aggregation; Applied computing → Economics. KEYWORDS: school choice, discrete choice, preference modeling, ranking models
School choice mechanism designers use discrete choice models to understand and predict families' preferences. The most widely used choice model, the multinomial logit (MNL), is linear in school and/or household attributes. While the model is simple and interpretable, it assumes the ranked preference lists arise from a choice process that is uniform throughout the ranking, from top to bottom. In this work, we introduce two strategies for rank-heterogeneous choice modeling tailored for school choice. First, we adapt a context-dependent random utility model (CDM), considering down-rank choices as occurring in the context of earlier up-rank choices. Second, we consider stratifying the choice modeling by rank, regularizing rank-adjacent models towards one another when appropriate. Using data on household preferences from the San Francisco Unified School District (SFUSD) across multiple years, we show that the contextual models considerably improve our out-of-sample evaluation metrics across all rank positions over the non-contextual models in the literature. Meanwhile, stratifying the model by rank can yield more accurate first-choice predictions while down-rank predictions are relatively unimproved. These models provide performance upgrades that school choice researchers can adopt to improve predictions and counterfactual analyses.
INTRODUCTION
Large school districts around the world employ school choice mechanisms to assign students to K-12 schools. In many of these systems, families submit ranked preference lists over school programs to their district, and the district in turn assigns children to schools via a centralized mechanism. School choice researchers employ discrete choice models, statistical models of choices made from slates of discrete options, to describe the preference-generation process by breaking a ranking into a sequence of choices from dwindling choice sets.
Such models are useful for explanation, indirectly identifying the most influential school characteristics in the decision-making of families, saving time and resources in surveying families. They can also be used for forecasting and planning potential changes in the district offerings. Finally, these models are also central to evaluating changes in school choice mechanisms themselves, as policymakers propose changes to assignment mechanisms with the hope of improving district outcomes. In the latter contexts, these models play a role in simulating preferences and assignments, and/or evaluating the resulting welfare of assignment under the proposed mechanism. Put simply, better preference models lead to better school choice analyses, and better analyses lead to better childhood educational outcomes.
The widely-used ranked preference models in this space, including the Plackett-Luce "exploded logit" model [22,28], model the process of constructing a preference ranking as a series of independent discrete choices (conditional multinomial logit (MNL) in the case of Plackett-Luce) based on school, program, and household attributes. While many such models are simple and interpretable, there is long-standing evidence in the discrete choice literature for ranking behavior that is rank-heterogeneous, meaning that the sequence of choices is driven by different considerations as individuals work down a preference list [10,12,16]. The criteria an agent uses for selecting top-ranked alternatives may differ from those at lower ranks, either due to true preference shifts or behavioral mechanisms such as decision fatigue.
In this work, we present and evaluate two strategies for incorporating rank-heterogeneity in choice models for school choice. One strategy achieves heterogeneity through a sequential dependence using context effects, while the other relies on regularized model stratification.
Rank-heterogeneity via context effects. Context effects describe the influence of a particular decision context, including the available or previously-chosen options, on an individual's relative preferences between alternatives. We adapt a previous model of context effects, the context-dependent random utility model (CDM) [30], to the school ranking setting. The CDM has been used to study ranked preferences [31] by decomposing the ranking process as a series of choices in the context of the dwindling set of items yet to be chosen. We consider a variation of the CDM more natural to the school choice setting: modeling the ranking instead as a series of choices in the context of the already chosen items. Surprisingly, we show that the two modeling approaches (respectively, forward-dependence and backwards-dependence) are equivalent, and opt to use the latter variation when interpreting our results.
Rank-heterogeneity via model stratification. An alternative approach to inducing rank-heterogeneity is stratifying the modeling problem by rank position. Simply learning a series of independent models for each rank position, however, can split the data too finely and result in poor generalization. To avoid this pitfall, we apply Laplacian regularization [37] to the independent models, with carefully tuned regularization graphs that bring models of adjacent choices close together.
Incorporating context effects and model stratification are not mutually exclusive, and we also evaluate the combination of both approaches in our analysis. Moreover, we perform a series of ablation studies to demonstrate the independent contributions of each approach. We evaluate these new tools by modeling the preferences for the San Francisco Unified School District (SFUSD) kindergarten programs during the 2017-18 and 2018-19 assignment years. We find that the first strategy (context effects) dramatically lowers out-of-sample negative log likelihood, particularly on down-rank choices, when compared to rank-homogeneous models. The second strategy (model stratification) delivers more accurate prediction in top choices than a rank-homogeneous model, essentially by modeling them separately, but otherwise does not appear to produce any significant improvements over the non-stratified baseline. Furthermore, we evaluate the performance of our context effect model against a nested MNL model and demonstrate sizable advantages in the school choice setting.
Outline. Section 2 introduces notation and definitions used throughout the work. Section 3 explains the SFUSD assignment system, its inputs and outputs, and summarizes the data we use for training and evaluation. In Section 4, we describe the choice models studied in this work, presenting the backwards-dependent contextdependent model (CDM) and the stratified approach with Laplacian regularization. Section 5 addresses identifiability of the models and details our model optimization framework. In Section 6, we present and discuss the performance of our models; Section 7 concludes.
Related Work
The present work closely relates to various prior works that develop or apply preference models in school choice. Laverde [20] uses an MNL choice model to simulate counterfactual assignments in Boston in 2010-2013, quantifying the role of distance and unequal access on stated preferences. Agarwal & Somaini [2] develop a procedure for estimating an MNL model in the presence of strategic reporting. Abdulkadiroğlu et al. [1] use MNL models to find links between preferences, school effectiveness and peer quality in New York City in 2003-2013. For an in-depth review of prior applications of preference models in school choice, see Agarwal & Somaini [3].
Meanwhile, many works have studied the relative suitability of different choice models in school choice, evaluating accuracy and prediction errors of preference models. For example, Pathak & Shi [27] examine out-of-sample estimates for three models after a large-scale policy change in Boston. They develop several model evaluation metrics, and we adapt one to our work. Calsamiglia et al. [9] similarly estimate a full choice system and evaluate it out-of-sample using administrative data from the 2006 and 2007 school years in Barcelona. Several prior efforts aim to understand preference heterogeneity between various demographic groups. For example, Laverde [20] estimates MNL models for White, Black, and Hispanic families by including indicator variables for these features in the chosen MNL utility. Hastings et al. [15] apply mixed-logit models [24] to data from Charlotte-Mecklenburg, North Carolina, learning separate model coefficients by race and SES status. In contrast to these examples of heterogeneity between groups, the present work focuses instead on preference heterogeneity within participants as they assemble their rankings.
Our idea is inspired by prior works in psychology, economics and marketing research, all of which cite inconsistent agent behavior in the assembly of rankings. Under the observation that individuals are generally more careful in reporting their top choices than lower ranked ones, Hausman & Ruud [16] model structured rank-heterogeneity through a common choice model with increasing variance as choosers proceed down the ranks, Chapman & Staelin [10] drop ranked alternatives after a threshold, and Allison & Christakis [4] interact model covariates with indicators for early (top-4) or late (5+) rank choices. Our work extends this last idea by fully stratifying models by rank position of choice, interacting all model parameters with indicators for the first ranks. We say more on our stratification (and regularization) framework in Section 4.
Finally, our work applies recent advances from the discrete choice and preference learning literatures to the school choice domain. The MNL model satisfies the axiom of independence of irrelevant alternatives (IIA): the relative probability of selecting any one item over another from a choice set is independent of the other items in that set. However, this axiom is highly restrictive and often not representative of the true choice process [38,39]. We adopt strategies for going beyond the IIA assumption from Seshadri et al. [30], in turn adapted from Batsell & Polking [7], extending that framework from a previously-studied forward-dependent model of ranking [31] to backwards-dependent ranking. Other recent work extending the CDM includes studies of salient features [8] and feature-based context effects [34]; we leave the evaluation of such model extensions as future work. Further, we benchmark the performance of our approaches against the nested MNL model [23], which also goes beyond the restrictive IIA assumption, in Section 6.1.
CHOICE PRELIMINARIES
We begin by introducing our notation for viewing school choice through the lens of discrete choice. For a specific school year, let U = [m] = {1, ..., m} denote the universe set of all offerings, or alternatives, in the district, labeled 1 through m, and let n be the number of students seeking assignment in the choice system. Throughout this work, we use "household" and "student" interchangeably to represent the decision-maker, as enrollment pertains to the student but rankings are often submitted by caretakers. Further, let PO(U) denote the set of all partial orders on the alternatives in U. A preference list R_i ∈ PO(U) is household i's partial ranking of the alternatives in U, and we denote by k_i ≤ m the length of that ranking. The vector of observable covariates on student i and offering j ∈ U is given by x_ij, containing demographic, socioeconomic, geographic, and performance-related information on the pair. Then, a school choice dataset, (D, X), is defined as the collection of all participating households' partial rankings submitted to the district, D = {R_1, ..., R_n}, and observed student-program covariates, X ∈ R^{n×m×d}, where x_ij is a length-d vector of attributes pertaining to student i and alternative j.
To learn a model of rank data, researchers typically transform rankings to choices and then apply discrete choice models such as the MNL, resulting in what is known (equivalently) as the rank-ordered logit [16], exploded logit [10,29], or Plackett-Luce [22,28] model for rankings, which we present in Section 4. The generality of converting rankings to choices is non-obvious, but the most powerful and widespread transformation is motivated by the theory of L-decomposable ranking distributions [11,21] (L as in Left). A ranking distribution is said to be L-decomposable if the probability of observing ranking R = (r_1, ..., r_m) can be decomposed into probabilities of choices from dwindling choice sets, from most to least preferred:
P(R) = P(r_1 | {r_1, ..., r_m}) P(r_2 | {r_2, ..., r_m}) ⋯ P(r_{m−1} | {r_{m−1}, r_m}).
This unraveling-from-the-left decomposition is sometimes also referred to as repeated selection [31]. In the present work, we apply repeated selection to ranking data throughout, simplifying the name of the ranking model to just the enlisted choice model employed after unraveling.
Encoding the unraveled choices as (agent, choice, choice set) triples, the rank data D then becomes a choice dataset, C:

C = ⋃_{R_i ∈ D} ⋃_{t ∈ [k_i]} (i, c_it, C_it),    (1)

where c_it represents the t-th selection by agent i on ranking R_i, and C_it ⊆ U is the slate of available alternatives, or choice set, when choosing position t of ranking R_i. The size of the resulting dataset is |C| = Σ_{i ∈ [n]} k_i.
To concretely illustrate the decomposition at the level of a data point, given a universe of alternatives U = {a, b, c, d}, consider a dataset made up of one ranking, by agent 1, D = {R_1}, where R_1 = (c, a, d). Following Eq. (1), the choice dataset becomes C = {(1, c_11, C_11), (1, c_12, C_12), (1, c_13, C_13)} = {(1, c, {a, b, c, d}), (1, a, {a, b, d}), (1, d, {b, d})}.
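A minimal sketch of this unraveling in Python (the function name and dictionary interface are our own, not from the paper's released code):

```python
# Explode partial rankings into (agent, choice, choice set) triples per Eq. (1).
def explode_rankings(rankings, universe):
    """rankings: {agent: ordered list of chosen alternatives};
    universe: the full set U of offered alternatives."""
    choices = []
    for agent, ranking in rankings.items():
        available = set(universe)              # full slate before any choice
        for chosen in ranking:
            choices.append((agent, chosen, frozenset(available)))
            available.remove(chosen)           # chosen items leave the choice set
    return choices

# Reproduces the worked example: U = {a, b, c, d}, R_1 = (c, a, d).
print(explode_rankings({1: ["c", "a", "d"]}, {"a", "b", "c", "d"}))
```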
SAN FRANCISCO SCHOOL CHOICE
In this section, we present the assignment process implemented within the San Francisco Unified School District (SFUSD) from 2014 to the present. SFUSD is made up of 130 schools with 150+ unique program offerings. Students enroll in these programs via an annual assignment lottery where families submit ranked preferences over available offerings to the district, and the district tries to honor family choices while satisfying capacity constraints. The algorithm performing this constrained assignment is the student-proposing deferred acceptance algorithm [13]; a sketch follows the priority list below. Participation may occur across all grade levels, but kindergarten enrollment is by far the largest participating group each year, making up over a third of all annual participants. As such, and following suit with many other studies of school choice, we focus solely on kindergarten assignment. In the face of overly-demanded program seats, the district uses the following priority hierarchy to make assignments:
(1) Sibling: Highest priority. Given to younger siblings of students enrolled at the school.
(2) PreK/TK: Given to students who (1) live in the attendance area of the school (if applicable), and (2) are enrolled in a PreK or TK program at the school itself or in the attendance area of the school (if applicable).
(3) Test score area ("CTIP1"): Given to students living in neighborhoods with low average test scores. Grants priority across the district, not just to one program or school.
(4) Attendance area (AA): Given to students living in the attendance area of the school.
(5) No priority: The absence of any of the above priorities.
For each program, a student is considered in the highest priority category for which they qualify. Within each priority tier, ties are broken by the next highest tier if applicable, or by a random number drawn uniformly at random for each student-school pair. Once all submitted preference lists have been exhausted by the matching algorithm, there may be students left without any assignment, in which case the district administratively assigns these students to a program not on their list. In this work, as we are solely interested in modeling the preferences submitted by families in the first stage, such assignments fall outside the scope of our analysis.
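For concreteness, a compact sketch of student-proposing deferred acceptance with program capacities is given below; the dictionary interface and scalar priority scores (encoding tiers plus lottery numbers) are our own illustrative simplifications, not the district's implementation.

```python
def deferred_acceptance(prefs, priority, capacity):
    """prefs: {student: ranked program list}; priority: {program: {student: score}}
    (higher is better); capacity: {program: seats}. Returns student -> program."""
    next_pick = {s: 0 for s in prefs}          # next list position to propose to
    held = {p: [] for p in capacity}           # tentatively held students per program
    free = [s for s in prefs if prefs[s]]
    while free:
        s = free.pop()
        if next_pick[s] >= len(prefs[s]):
            continue                           # list exhausted; remains unassigned
        p = prefs[s][next_pick[s]]
        next_pick[s] += 1
        held[p].append(s)
        held[p].sort(key=lambda t: -priority[p][t])
        if len(held[p]) > capacity[p]:
            free.append(held[p].pop())         # lowest-priority student is rejected
    return {s: p for p, studs in held.items() for s in studs}
```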
Dataset
To understand families' stated preferences, we study data from both the 2017-18 and 2018-19 school years, principally training models on the 2017-18 data and evaluating out-of-sample on the 2018-19 data. We opted for this train-test split, following other work in school choice [9,27], to prevent data leakage. This split also mimics real use cases of school choice preference models, where models are used to simulate future years' outcomes.
Within each year, we collapse all three rounds of stated preference elicitation (instead of focusing only on the first and largest round), to better capture the full district demographics. We exclude programs that are newly offered in the 2018-19 school year, dropping 804 households from the test dataset, as our model's fixed effects do not extrapolate to never-before-seen alternatives. Handling these out-of-distribution preferences is a known limitation of the MNL class of models and an area of future work. Summary statistics for each school year's data are found in Table 1.
The student-program covariates used in this work were selected by domain experts at SFUSD:
• Distance: scalar, in miles,
• Square-root distance: scalar, in sqrt. miles,
• Square-root distance × CTIP1: scalar, in sqrt. miles,
• Within 0.5 miles: indicator,
• Bus route: indicator for whether the district has bus routes between student ZIP code and school ZIP code,
• Sibling match: indicator for whether the student has one or more sibling(s) already enrolled at the school (not necessarily same program),
• Language match: indicator for whether a language program is in a student's (non-English) home language,
• Attendance area school: indicator for whether the student lives in the attendance area of the school,
• PreK/TK continuation: indicator for whether or not the student is enrolled in an SFUSD Pre-K or transitional kindergarten in the same attendance area as or within the school.
We consider the following school-specific features as well, modeled as interacting with the CTIP1-status of the student.
• Average color: state-defined metric quantifying the school's absolute performance and improvement in English/language arts, math, chronic absenteeism, and suspension rates. Ordinal color code in each category, encoded as 1-5, and averaged (higher is better),
• Fraction reduced lunch: fraction of the school's population that qualifies for free or reduced-price lunch by the district,
• Before/after school programs: indicator for whether or not the school offers before- or after-school programs.
We acknowledge that additional attention to feature engineering can likely yield measurable performance improvements, but we consider the above features adequate and realistic for our purposes, namely evaluating the value of modeling rank-heterogeneity.
CHOICE MODELS FOR SCHOOL CHOICE
A choice model models probability distributions over subsets of a collection. More formally, let S = {C : C ⊆ U, |C| ≥ 2} denote the set of all subsets of size at least two of a collection U. Let P(j | C, x_i) describe, for each agent i ∈ [n] and each set C ∈ S, the probability of agent i selecting item j from set C. Recall from Section 2 that in the SFUSD school choice mechanism, households submit partial rankings R_1, R_2, ..., R_n with student-program covariates X. Each partial ranking is decomposed into choices per Equation (1), obtaining a dataset C of choices.
We begin by considering a random utility model (RUM) of choice. The utility to agent i of alternative j in choice set C is given by

U(j | C, x_i) = u(j | C, x_i) + ε_ij,
decomposed into a part, the representative utility u, that is known by the researcher up to some parameters, and an unknown part ε_ij that is treated as random [35]. Under the assumption of independent Gumbel noise ε_ij, agent i's probability of choosing alternative j from choice set C is given in closed form by
P(j | C, x_i) = exp(u(j | C, x_i)) / Σ_{ℓ∈C} exp(u(ℓ | C, x_i)),    (2)
deriving the most ubiquitous RUM, especially in school choice: the conditional multinomial logit (MNL) [21]. Taking the noise instead to be jointly Gumbel distributed with correlation yields variations such as the mixed MNL [24] or nested MNL [35] model, the latter featuring correlations across pre-specified clusters of alternatives. We benchmark our performance against a nested MNL model in Section 6. Mixed MNL models have performed comparably to ordinary MNL in several prior school choice studies [15,26], so we do not benchmark against them in this work.
Under the MNL model, the task of the researcher is to define a representative utility function, typically a parametric model, denoted u(· ; θ). We select our model from the chosen model class using regularized maximum likelihood, selecting parameters θ to minimize

L(θ; C) = ℓ(θ; C) + r(θ),    (3)

where ℓ(θ; C) is the negative log-likelihood (NLL) loss,

ℓ(θ; C) = −(1/|C|) Σ_{(i,j,C) ∈ C} log P(j | C, x_i; θ),

r(θ) = γ ||θ||²₂ is the ℓ2 penalty, and γ is the regularization gain.
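A hedged PyTorch sketch of Eqs. (2)-(3), masking unavailable alternatives out of the softmax; tensor layouts and names are our assumptions:

```python
import torch

def mnl_nll(utilities, chosen, availability, l2_gain=1e-5, params=None):
    """utilities: (batch, m) representative utilities u(j | C, x_i; theta);
    chosen: (batch,) indices of chosen alternatives;
    availability: (batch, m) boolean mask encoding the choice sets C."""
    masked = utilities.masked_fill(~availability, float("-inf"))
    log_probs = torch.log_softmax(masked, dim=1)        # Eq. (2) in log space
    nll = -log_probs[torch.arange(len(chosen)), chosen].mean()
    if params is not None:                              # r(theta) term of Eq. (3)
        nll = nll + l2_gain * sum((p ** 2).sum() for p in params)
    return nll
```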
Basic utilities
At this point, our task is to define the representative utility, u.
Assigning inherent utilities to each alternative,
u(j | C, x_i) = α_j,    (4)
reduces the model to the basic Plackett-Luce model [21]. Here, α ∈ R^m̃, where m̃ < m is defined as the number of unique schools s plus the number of unique program types p offered in the district², m̃ = s + p. See Table 1 for district summary statistics. We will refer to this model as the fixed-effect MNL.
Adding user-alternative specific covariates yields a linear MNL,
u(j | C, x_i) = α_j + β^T x_ij,    (5)
the most common utility structure in the school choice literature. Note that the covariates contained in x_ij, detailed in the previous section, are indexed both by student i and alternative j; school- or program-specific features (only indexed by j) are absorbed into the fixed effects α, and student-specific features (only indexed by i) cancel out in the expression of the MNL choice probability, Eq. (2).
As an auxiliary benchmark, we implement a nested MNL model, using the same expression of representative utility as the linear MNL in Eq. (5), for benchmarking in Section 6. In the nested MNL, alternatives are explicitly assigned to one of several nests (non-overlapping subsets of U), and choice probabilities are defined to be correlated within nests. See Appendix C.3 for a full discussion and presentation of the nested MNL choice probabilities. The nests we implement in this work are 'Chinese Language', 'Filipino Language', 'General Education', 'Japanese Language', 'Korean Language', 'Spanish Language', and 'Special Education' offerings. Each nest is associated with a number of unique program offerings; see Table 1 for more details.
² Each alternative j ∈ U in the choice universe has an associated school, s(j), and program type, p(j). Example program types are general education, special education, and language program offerings. As such, our fixed effect α_j is actually shorthand for α_{s(j)} + α_{p(j)}, reducing degrees of freedom while allowing our models to better generalize to new offerings at existing schools. We refer the interested reader to our code for the exact implementation.
The fixed-effect and linear MNL models presented in this section satisfy IIA (see Section 1.1). As a result, they are rank-homogeneous, relying on a constant representative utility u(j | C, x_i; θ) for each alternative regardless of when the choice is being made in the ranking process. The contextual choice model that follows does not satisfy IIA and thus leads to rank-heterogeneous choice distributions.
Context effects
The context-dependent model (CDM) [31] is our first strategy for incorporating rank-heterogeneity into the traditional MNL models above. The CDM relaxes the strict IIA assumption; it was initially crafted to model "choice set effects" [36] whereby the slate of alternatives under consideration impacts the choice probabilities of the agent. We modify this modeling framework to suit the school choice sequential ranking problem. Specifically, under the standard CDM, each choice from choice set C occurs within the context of the choice set C itself. For our purposes, we generalize this framework to consider the choices as occurring within the context of a generic and possibly different context set of alternatives, K.
The representative utility of this generalized CDM models context effects as a linear dependence between items, interpretable as "push" and "pull" factors, with items in K pushing and pulling on each alternative in the choice set C,

u(j | C, K, x_i) = α_j + β^T x_ij + (1/|K|) Σ_{k∈K} u_jk,    ∀ j ∈ C.
When K = C \ {j} we recover the standard CDM. The pairwise parameters u_jk are defined for all j, k ∈ U where j ≠ k. The generalized CDM has the same parameter complexity as the standard CDM, requiring m(m−1) parameters beyond the linear model, arranged in a matrix-like structure with undefined diagonal. To reduce the parametric complexity of the model, the pairwise effects can be factorized as products of two low-rank matrices, u_jk = t_j^T z_k with T, Z ∈ R^{m×r} serving as target and context embeddings, respectively, analogous to word2vec-type methods [25]. The low-rank CDM representative utility is then written as
u(j | C, K, x_i) = α_j + β^T x_ij + (1/|K|) Σ_{k∈K} t_j^T z_k.    (6)
We proceed with the factorized form of the CDM in this work. The low-rank CDM introduces a hyperparameter in the form of the embedding dimension r; see Section 5 for a discussion of hyperparameter tuning.
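A sketch of a batched computation of Eq. (6); the tensor shapes and the embedding names T and Z are our conventions:

```python
import torch

def cdm_utility(alpha, beta, T, Z, x, context_mask):
    """alpha: (m,) fixed effects; beta: (d,) covariate weights; T, Z: (m, r)
    target/context embeddings; x: (batch, m, d) covariates; context_mask:
    (batch, m) boolean, True for already-chosen alternatives in K."""
    linear = alpha + (x * beta).sum(-1)                       # alpha_j + beta^T x_ij
    k_size = context_mask.sum(1, keepdim=True).clamp(min=1)   # |K| (avoid div by 0)
    z_bar = (context_mask.unsqueeze(-1) * Z).sum(1) / k_size  # mean context embedding
    return linear + z_bar @ T.T                               # + (1/|K|) sum t_j^T z_k
```

Note that an empty context mask zeroes the context term, so the first choice reduces to the linear MNL, matching the observation in Section 6.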
To accompany this change in model, the structure of the data described in Eq. (1) must be generalized to include a generic context set for each choice, resulting in the following choice dataset

C = ⋃_{R_i ∈ D} ⋃_{t ∈ [k_i]} (i, c_it, C_it, K_it),    (7)

where K_it is the context set when agent i chose item c_it. Table 2 summarizes the representative utilities of the three models (the fixed-effect MNL, linear MNL, and CDM) and their parameters.
Forward vs. backward-dependence. In the original formulation of the CDM for rankings [31], the context set was assumed to be the choice set itself, K = C \ {j}, a formulation we refer to as the forward-dependent contextual ranking model.
Considering the generalized CDM above, we consider instead a model where the context set is the set of already-chosen alternatives. Equivalently, let K = U \ C, the complement of the current choice set. We introduce this model as the backward-dependent contextual ranking model. Rather than modeling context effects between alternatives in the choice set, it measures how well each alternative fits with the choices already made. This conceptual shift is better suited to the psychology of the school choice selection process than the former framing, and yields a more interpretable model in ranking settings where choice sets are large, such as in school choice.
Considering these two different approaches to modeling rankings as a sequence of contextual choices, it seems as though these formulations constitute different model classes. However, we find that the forward- and backward-dependent CDM ranking model classes are in fact equivalent, and provide a bijection between the spaces of parameters for both the unfactorized and factorized models. See Appendix A for proofs of Theorems 1 and 2.
Theorem 1. Let θ = {α, β, u} denote the model parameters of the unfactorized forward-dependent CDM ranking model, and θ' those of the backward-dependent model. The forward- and backward-dependent parameters are equivalent under the bijection θ' = f(θ), where

f(θ) = { α_j + Σ_{k ∈ U \ {j}} u_jk ∀ j,  β,  −u }.

The inverse map is the map itself, f⁻¹ = f.
Theorem 2. Let θ = {α, β, T, Z} denote the model parameters of the low-rank forward-dependent CDM ranking model, and θ' those of the low-rank backward-dependent model. These model parameters are equivalent under the bijection θ' = f(θ), where

f(θ) = { α_j + Σ_{k ∈ U \ {j}} t_j^T z_k ∀ j,  β,  T,  −Z }.

The inverse map is the map itself, f⁻¹ = f.
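The bijection can be checked numerically; below is a small self-contained verification of Theorem 1 for the unfactorized model (toy sizes of our choosing; the shared covariate term is omitted since f leaves β unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 5
alpha = rng.normal(size=m)
U_mat = rng.normal(size=(m, m))          # pairwise effects u_jk
np.fill_diagonal(U_mat, 0.0)             # diagonal is undefined / unused

def probs(a, u, choice_set, context_of):
    logits = {j: a[j] + sum(u[j, k] for k in context_of(j)) for j in choice_set}
    z = sum(np.exp(v) for v in logits.values())
    return {j: np.exp(v) / z for j, v in logits.items()}

C = {0, 2, 3}                            # an arbitrary choice set
full = set(range(m))
alpha_b = alpha + U_mat.sum(axis=1)      # alpha'_j = alpha_j + sum_{k != j} u_jk
fwd = probs(alpha, U_mat, C, lambda j: C - {j})        # forward-dependent
bwd = probs(alpha_b, -U_mat, C, lambda j: full - C)    # backward-dependent
assert all(abs(fwd[j] - bwd[j]) < 1e-9 for j in C)     # identical probabilities
```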
In Section 6, as a supplemental analysis, we consider results for truncated top-b-dependent context sets, K_it = {c_i1, ..., c_iτ} for τ = min(t − 1, b), to evaluate whether a more limited dependence (and thus, a simpler model) performs as well as full backward-dependence. We find that even the truncated top-1 CDM, with knowledge only of the agent's first choice, makes considerable gains over the linear MNL model, but the full (top-m) backwards-dependent CDM exhibits the best performance.
Stratifying across ranks
It has been generically noted [10,12,16] that the criteria individuals use for selecting top-ranked alternatives differ from those for lower-ranked alternatives, either due to decision fatigue or a true preference shift. As such, we consider the possibility of within-agent preference shift by stratifying the choice model by rank, and learning independent choice models at each rank position. A possible concern with this approach is that we end up with considerably less data for each model in this stratification, compared to estimating a single common model. To address this concern, we encourage models at neighboring ranks to be close to one another via Laplacian regularization, resulting in a Laplacian-regularized stratified model [37]. The methods of Laplacian-regularized stratification are closely related to popular methods for smoothing (ℓ2) and trend filtering (ℓ1) in temporal [17] and general graphical [32,40] domains, where the underlying idea of parameter fusion dates back to at least the work of Land and Friedman [19,33].

Table 2: Representative utilities and parameters of the three base models. Here m denotes the number of alternatives (school and program pairs) offered by the district, m̃ = s + p denotes the total number of unique schools and program types, d is the length of x_ij, and r is the embedding dimension of the low-rank CDM.

Model | u(j | C, K, x_i) | θ | # params
Fixed | α_j | {α} | m̃
Linear | α_j + β^T x_ij | {α, β} | m̃ + d
CDM | α_j + β^T x_ij + (1/|K|) Σ_{k∈K} t_j^T z_k | {α, β, T, Z} | m̃ + d + 2mr
The stratification builds upon a base choice model, in this work one of the three models summarized in Table 2. Taking the number of strata to be S, a stratified choice model is then the composition of S sub-models with parameters θ = {θ_1, ..., θ_S} ∈ R^{S×p}, where p is the number of parameters in the chosen base model. The models are regularized towards each other as dictated by an accompanying regularization graph [37]. In our case, rank-based stratification lends itself well to a common "path graph" for regularization, where models of adjacent ranks are connected by edges and thus regularized towards each other. Laplacian regularization here is then defined as:
r_L(θ) = γ_L Σ_{s=2}^{S} ||θ_s − θ_{s−1}||²₂,
where γ_L is a chosen Laplacian regularization strength, and r_L is convex in θ. Compared to the non-stratified objective in Eq. (3), the regularized, stratified objective function is the sum of decoupled model losses (each with a local ℓ2 regularization) and the Laplacian regularization term:
L(θ | C) = Σ_{s=1}^{S} [ ℓ(θ_s; C_s) + r(θ_s) ] + r_L(θ).    (8)
Regularized stratified models feature two additional hyperparameters over their base models, the number of strata S and the Laplacian regularization gain γ_L; see Section 5 for a discussion of hyperparameter tuning.
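A minimal sketch of the path-graph penalty r_L, assuming the per-rank parameter vectors are stacked into one tensor:

```python
import torch

def laplacian_penalty(theta, gain):
    """theta: (S, p) per-rank parameter vectors; penalizes squared differences
    between models at adjacent rank positions (a path regularization graph)."""
    return gain * ((theta[1:] - theta[:-1]) ** 2).sum()
```

A zero gain recovers fully independent per-rank models, while a very large gain collapses all strata toward one shared model.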
MODEL SELECTION AND OPTIMIZATION
We briefly discuss the identifiability of the presented models, alongside details about hyperparameter tuning and optimization. A model is identifiable if no two distinct sets of parameters, θ and θ', produce the same probability distributions over all choice sets C ∈ S. Identifiability is crucial in settings where decisions are made based on interpreting parameter estimates. If the goal is solely to make decisions based on the resulting distributions only, e.g., from predictions or simulations, identifiability is not strictly necessary.
The traditional MNL family of ranking distributions is non-identifiable due to its shift-invariance. In this case, strategies for achieving identifiability are to fix one of the parameters, constrain their sum, or to apply regularization and obtain the minimum-norm parameter estimates [41]. In our work, we employ the latter strategy for the MNL and all other models, applying non-zero ℓ2 regularization, r(θ), in the objective function and achieving identifiability by obtaining the minimum-norm solution.
The models in this work introduce additional hyperparameters; the low-rank CDM requires the selection of the embedding dimension r, and a stratified model is specified by S and γ_L, the number of strata and the amount of Laplacian regularization, respectively. We tune these hyperparameters via 5-fold cross validation within our training dataset, selecting the values that minimize validation loss. Figures illustrating our search over these hyperparameters are found in Appendix B, with Table 3 summarizing the chosen values.
With hyperparameter values selected and regularization in place, the models are fully specified and we proceed to train our models on the full 2017-18 dataset for testing on the 2018-19 dataset.
We run Adam [18], implemented in PyTorch, with default parameters, lr = 0.001, β = (0.9, 0.999), ε = 1e−8, adding ℓ2 regularization with weight γ = 1e−5 in accordance with our hyperparameter selection. Model parameters are updated over batches of training data until reaching max_epoch = 1000 or convergence, i.e., when the absolute difference in losses is less than 1e−4. See Appendix C for a discussion on the learned model parameters.
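A training-loop sketch consistent with these settings; `model`, `batches`, and `loss_fn` are placeholders, and the ℓ2 term is approximated here with Adam's weight_decay rather than an explicit penalty:

```python
import torch

def fit(model, batches, loss_fn, max_epoch=1000, tol=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=0.001,
                           betas=(0.9, 0.999), eps=1e-8, weight_decay=1e-5)
    prev = float("inf")
    for _ in range(max_epoch):
        total = 0.0
        for batch in batches:
            opt.zero_grad()
            loss = loss_fn(model, batch)   # e.g., the masked NLL sketched earlier
            loss.backward()
            opt.step()
            total += loss.item()
        if abs(prev - total) < tol:        # convergence on absolute loss change
            break
        prev = total
    return model
```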
RESULTS
In this section, we evaluate and examine eight models-non-stratified and stratified versions of the fixed-effect MNL, linear MNL, CDM, and nested MNL models-trained on 2017-18 preference data and evaluated out-of-sample on 2018-19 data. We observe unique advantages of the context effects modeled by the CDM when benchmarked against the other models, and find that stratifying any model results in strictly (but marginally) better predictions, mostly for top (first) choices. Figure 1 depicts train and test negative log likelihood (NLL) losses on the left, and test losses disaggregated by rank on the right. We include a "null" model in the plots, representing uniform choices over programs, as a baseline reference point. We see that the CDM models, stratified or not, result in considerably lower test losses than the fixed-effect, linear, and nested MNL models overall. Stratifying provides modest decreases in overall test loss across all models.
Goodness of fit
On top choices, many families have priority access to one school in the district (e.g., sibling or PreK/TK priorities), and in most cases, rank these schools first. The linear and CDM models incorporate these priorities into the model, and therefore model top choices better than the fixed-effect model. The CDM leverages no additional information in the first choice as the context set is empty (i.e., no choices have been made). As such, there is negligible difference between the linear MNL and CDM models at position 1. However, after the relatively easy task of predicting top choices, the CDM is able to leverage the choices made and separates itself from the lower-fidelity models. Stratifying yields a lower test loss for top choice across all three models, but quickly loses its advantage at lower ranks, likely due to diminishing training data at those positions (cf. Table 1; households rank fewer than 10 programs on average).
Truncated top-b-dependent CDM. Recall from Eq. (6) that the CDM utility differs from the linear MNL model via a sum of pairwise interactions between alternatives and a context set, K. Throughout this work, the context set is taken to be the set of all previously-chosen alternatives, which has a powerful equivalence in expressivity (Theorems 1 and 2) to the standard CDM. As a robustness check, it is reasonable to ask which prior choices are most relevant to the context set. We evaluate several variations on the CDM model with the context being defined as the set of top-b alternatives. Specifically, the utility is given by Eq. (6) with K_it = {c_i1, ..., c_iτ} for agent i's t-th choice, where τ = min(t − 1, b). In other words, for choices made after position b, only the first b choices constitute the context. Figure 2 presents the losses for these top-b-dependent CDM models. When b = 0, the context set is always empty and the model is equivalent to the linear MNL. When b = m, the number of offered programs, we recover the backwards-dependent CDM considered everywhere else in this work. We find that even a minimal context set, e.g., the top choice only (b = 1), provides considerable improvement compared to the no-context linear model. That is, the information of what an agent chose first supplies the model with meaningful signal in making all down-rank predictions. That said, letting the context effect be linear in the full set of prior choices (b = m) has measurable advantages.
Interpreting context effects. In Figure 3, we show the pairwise interactions u_jk = t_j^T z_k estimated for the (non-stratified) backwards-dependent CDM. Element u_jk is the utility boost that program j receives from chosen program k being in the context set. In the heatmap, programs on the x- and y-axes were arranged first by program type and then by (descending) popularity within each. We see significant block structure in the matrix, suggesting that the CDM primarily (but not only) uses the context set to learn program type affinities. For example, the third block along the diagonal corresponds to Special Education programs, where we see a strong positive context effect. That is, once a family has ranked a special education program, it becomes much more likely that the family will rank other special education programs. This model behavior is highly intuitive, and is also beyond the behavior of an MNL model or any other model assuming independence. Put simply, the CDM's use of context effects enables it to pick up on household signals, from the second choice and onward, that are otherwise not available a priori at the household level.
The block structure of the context effects may seem to suggest good performance from a nested MNL model, as the latter explicitly clusters similar programs. Instead, in Figure 1 we find that the nested MNL shows only marginal gains over the linear MNL model and is not competitive with the CDM. This result sounds surprising, but is fairly intuitive; in models obeying IIA (such as the fixed-effect and linear MNL models), when an item is removed from the choice set, that item's probability is proportionally redistributed to the remaining alternatives for follow-up choices. The nested model instead allows the removed alternative's probability to be non-proportionally distributed to the remaining items, specifically by favoring the alternatives in its nest (see Appendix C.3 for details). However, in this setting, the choice universe and nests are relatively large, so the impact of redistributing already-small choice probabilities is marginal.
To illustrate how choice probabilities are redistributed in different models, Figure 4 showcases first- and second-choice probabilities by the non-stratified linear MNL, nested MNL, and CDM models over the special education subset for an example household in the district who first chose a special education program. We see that the CDM drastically alters its second-choice distribution, (correctly) boosting the likelihood of this household choosing another special education program, while the nested model's top- and second-choice distributions are almost indistinguishable. Special education programs are low-probability selections in the data at large, and the nested paradigm can only marginally influence future predictions when one such item is removed from the choice set. The CDM has a far greater ability to update its future distributions in the context of rare chosen items.
Down-rank prediction accuracy
Beyond in- and out-of-sample goodness of fit, we now consider the prediction quality of the models on the test dataset. Specifically, we task the models with making a prediction at rank position t, conditional on the first t − 1 choices made, resulting in an "accuracy in t-th prediction" evaluation metric. Recall that R_i denotes the true ranking of household i, where c_it is the t-th item in the ranking for t ≤ k_i. Let R_i^t = {c_i1, c_i2, ..., c_it} be the set of their true top-t choices. Denote by H_t = {i : k_i ≥ t} the set of households who have ranked at least t alternatives. Then, given a choice model, denote by ĉ_it the modal prediction by the model at position t, i.e., the highest-probability alternative over remaining programs, in the context of the previous t − 1 choices,

ĉ_it = argmax_{j ∈ U \ R_i^{t−1}} u(j | U \ R_i^{t−1}, R_i^{t−1}, x_i),

where the representative utilities u(·) are defined in Section 4. The metric is then given by

Accuracy in t-th Prediction = (1/|H_t|) Σ_{i ∈ H_t} 1(ĉ_it = c_it).

Figure 5 summarizes model performances on this metric. The CDM models are significantly more accurate in making down-rank predictions when given earlier choices, which is precisely the use case of the contextual model. Stratification leads to improvements in down-rank predictions made by the fixed-effect MNL model, but has limited effect on the linear, nested, and CDM models. It appears to learn that if a household has not already ranked the most popular programs, they won't be adding them later, as seen in Figure 8 of Appendix C. Doing so, it outperforms its non-stratified counterpart beyond position 5.
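A sketch of the metric; `predict_modal` stands in for a model's highest-probability completion given the true prefix:

```python
def accuracy_at_t(rankings, predict_modal, t):
    """rankings: {household: true ranked list}; predict_modal(i, prefix) returns
    the model's highest-probability alternative given the first t-1 true choices."""
    eligible = {i: r for i, r in rankings.items() if len(r) >= t}     # H_t
    hits = sum(predict_modal(i, r[:t - 1]) == r[t - 1]
               for i, r in eligible.items())
    return hits / len(eligible)
```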
We can also disaggregate these accuracies by sub-populations of interest; see Appendix D. We find that the groups receiving sibling and PreK/TK priorities have top choices that are relatively easy to predict, as their preferences are concentrated on their (typically singular) priority schools. All models generally under-perform on CTIP1, Hispanic/Latino, and Black student populations, relative to the broader population, for one of two reasons: either the subgroups demonstrate more varied preferences than other subgroups, or the training data was relatively small. Lastly, we see in Figure 12c that the CDM specializes in predicting down-rank choices for households with non-mainstream initial preferences.
CONCLUSION
In this work, we introduce rank-heterogeneous preference modeling for school choice and present two strategies, discrete choice context effects via a backwards-dependent CDM, and model stratification by rank position. Rank-heterogeneous models have the potential to leverage already-chosen alternatives when making down-rank predictions, or to broadly capture evolving household values down a ranking. We define and evaluate several metrics, finding that incorporating context terms in the utility dramatically decreases test loss over the linear and nested MNL models, capturing signals not present in covariates alone while also seeing particular improvements in modeling rare choices. The contextual model also generates more accurate predictions for list-completion tasks. Stratifying by rank yields improvements in top-choice accuracy across all models, but otherwise does not result in significant improvements or additional predictive power down-rank.
While rank-heterogeneous models enable school choice researchers to improve predictions and perform counterfactual analysis, our methods do not come without limitations. For one, the increased parametric complexity of the CDM and regularized stratification strategies raises, albeit mildly, model training times and data requirements relative to the MNL. Recent developments to the CDM [34] mitigate this problem by leveraging the model's block structure and learning interactions between program attributes rather than the programs themselves. Applying this work to the school choice setting would reduce complexity while uncovering context effects, a promising direction for future work. Another limitation stems from our model failing to generalize to new program offerings with undefined fixed-effects. Here, applying strategies for out-of-distribution prediction, such as establishing a prior on the fixed-effects of new program offerings based on the values for similar offerings, provides further directions for future work. Despite these limitations, we strongly encourage school choice researchers to consider rank-heterogeneous models in their preference modeling tasks for improved down-rank and rare-event prediction. Reproducibility. The SFUSD data used in this work is not public, but implementations of all models as well as notebooks used to generate plots in this paper are available at: https://github.com/ameloa/rankingmodels.
A CDM FORWARDS AND BACKWARDS EQUIVALENCE
Theorem 1. Let θ = {α, β, u} denote the model parameters of the unfactorized forward-dependent CDM ranking model, and θ' those of the backward-dependent model. The forward- and backward-dependent parameters are equivalent under the bijection θ' = f(θ), where

f(θ) = { α_j + Σ_{k ∈ U \ {j}} u_jk ∀ j,  β,  −u }.

The inverse map is the map itself: f⁻¹ = f.
Proof. The latter statement follows immediately from Lemma 3. Consider the full forward-dependent CDM: given a set of model parameters {α, β, u} with α ∈ R^m, β ∈ R^d, and pairwise effects u_jk defined for all j ≠ k, we have that

P(j | C, x_i) = exp(α_j + β^T x_ij + Σ_{k ∈ C\{j}} u_jk) / Σ_{ℓ∈C} exp(α_ℓ + β^T x_iℓ + Σ_{k ∈ C\{ℓ}} u_ℓk),

where u_jk is the pairwise effect of item k on item j. The context of relevance here is the other items in the choice set, and the choice of an item from a set is related to how that item interacts with that context, in addition to the item fixed-effect and interactions with agent covariates via the linear term.
Consider now the full backward-dependent CDM: given a set of model parameters {α', β', u'}, we have that

P(j | C, x_i) = exp(α'_j + β'^T x_ij + Σ_{k ∈ U\C} u'_jk) / Σ_{ℓ∈C} exp(α'_ℓ + β'^T x_iℓ + Σ_{k ∈ U\C} u'_ℓk).

Unlike before, the context of relevance here is the items not in the choice set, in our case those that were already chosen, and the choice of an item from a set is related to how that item interacts with that context (along with the item fixed-effect and linear interactions, as before).
The forward-dependent CDM may seem like a model class that models different choice probabilities than the backward-dependent model, but the two classes are, in fact, identical. That is, the two model classes model the same collection of choice systems, and have a mapping from one to another. We show this below. Beginning with the forward-dependent model:

P(j | C, x_i)
= exp(α_j + β^T x_ij + Σ_{k ∈ C\{j}} u_jk) / Σ_{ℓ∈C} exp(α_ℓ + β^T x_iℓ + Σ_{k ∈ C\{ℓ}} u_ℓk)
= exp(α_j + β^T x_ij + Σ_{k ∈ U\{j}} u_jk − Σ_{k ∈ U\{j}} u_jk + Σ_{k ∈ C\{j}} u_jk) / Σ_{ℓ∈C} exp(α_ℓ + β^T x_iℓ + Σ_{k ∈ U\{ℓ}} u_ℓk − Σ_{k ∈ U\{ℓ}} u_ℓk + Σ_{k ∈ C\{ℓ}} u_ℓk)
= exp(α_j + Σ_{k ∈ U\{j}} u_jk + β^T x_ij − Σ_{k ∈ U\C} u_jk) / Σ_{ℓ∈C} exp(α_ℓ + Σ_{k ∈ U\{ℓ}} u_ℓk + β^T x_iℓ − Σ_{k ∈ U\C} u_ℓk)
= exp(α'_j + β'^T x_ij + Σ_{k ∈ U\C} u'_jk) / Σ_{ℓ∈C} exp(α'_ℓ + β'^T x_iℓ + Σ_{k ∈ U\C} u'_ℓk),
where the first statement is the definition of the forward-dependent model, the second adds and subtracts Σ_{k ∈ U\{j}} u_jk from the numerator and Σ_{k ∈ U\{ℓ}} u_ℓk from the denominator, the third collects and rearranges terms, and the final line follows by setting

α'_j := α_j + Σ_{k ∈ U\{j}} u_jk ∀ j,    β' := β,    u' := −u.

We observe that the last line is the backwards-dependent CDM, showing that any forwards-dependent CDM can be mapped to a backwards-dependent CDM. Moreover, since the mapping between {α, β, u} and {α', β', u'} is f, from Lemma 3, we know that f⁻¹ exists (and is f), and hence any backwards-dependent CDM can be mapped to a forwards-dependent CDM. This concludes the proof.
Theorem 2. Let θ = {α, β, T, Z} denote the model parameters of the low-rank forward-dependent CDM ranking model, and θ' those of the low-rank backward-dependent model. These model parameters are equivalent under the bijection θ' = f(θ), where

f(θ) = { α_j + Σ_{k ∈ U \ {j}} t_j^T z_k ∀ j,  β,  T,  −Z }.

The inverse map is the map itself: f⁻¹ = f.
Proof. The proof follows the same structure as Theorem 1. The latter statement of the theorem follows immediately from Lemma 4. We begin with the factorized forward-dependent CDM: given a set of model parameters {α, β, T, Z} with α ∈ R^m, β ∈ R^d, and T, Z ∈ R^{m×r}, consider the low-rank forward-dependent CDM model:

P(j | C, x_i) = exp(α_j + β^T x_ij + Σ_{k ∈ C\{j}} t_j^T z_k) / Σ_{ℓ∈C} exp(α_ℓ + β^T x_iℓ + Σ_{k ∈ C\{ℓ}} t_ℓ^T z_k).

We have

P(j | C, x_i)
= exp(α_j + β^T x_ij + Σ_{k ∈ U\{j}} t_j^T z_k − Σ_{k ∈ U\{j}} t_j^T z_k + Σ_{k ∈ C\{j}} t_j^T z_k) / Σ_{ℓ∈C} exp(α_ℓ + β^T x_iℓ + Σ_{k ∈ U\{ℓ}} t_ℓ^T z_k − Σ_{k ∈ U\{ℓ}} t_ℓ^T z_k + Σ_{k ∈ C\{ℓ}} t_ℓ^T z_k)
= exp(α_j + Σ_{k ∈ U\{j}} t_j^T z_k + β^T x_ij − Σ_{k ∈ U\C} t_j^T z_k) / Σ_{ℓ∈C} exp(α_ℓ + Σ_{k ∈ U\{ℓ}} t_ℓ^T z_k + β^T x_iℓ − Σ_{k ∈ U\C} t_ℓ^T z_k)
= exp(α'_j + β'^T x_ij + Σ_{k ∈ U\C} t'_j^T z'_k) / Σ_{ℓ∈C} exp(α'_ℓ + β'^T x_iℓ + Σ_{k ∈ U\C} t'_ℓ^T z'_k),
where the first statement adds and subtracts Σ_{k ∈ U\{j}} t_j^T z_k from the numerator and Σ_{k ∈ U\{ℓ}} t_ℓ^T z_k from the denominator, the second collects and rearranges terms, and the final line follows by setting

α'_j := α_j + Σ_{k ∈ U\{j}} t_j^T z_k ∀ j,    β' := β,    T' := T,    Z' := −Z.

We observe that the last line is the backwards-dependent factorized CDM, showing that any forwards-dependent factorized CDM can be mapped to a backwards-dependent factorized CDM. Moreover, since the mapping between {α, β, T, Z} and {α', β', T', Z'} is f, from Lemma 4, we know that f⁻¹ exists (and is f), and hence any backwards-dependent factorized CDM can be mapped to a forwards-dependent factorized CDM. This concludes the proof.

Lemma 3. Let θ = {α, β, u} denote the model parameters of the unfactorized forward-dependent CDM ranking model, and let f be the map defined in Theorem 1. The inverse map is the map itself: f⁻¹ = f.

Proof. If f(f(θ)) = θ for all θ, then f⁻¹ = f. We show the former to be true. Let θ := {α, β, u} and let θ' := f(θ) = (α', β', u'). We have,
f(f(θ)) = f(θ') = f({α', β', u'})
= { α'_j + Σ_{k ∈ U\{j}} u'_jk ∀ j,  β',  −u' }
= { α_j + Σ_{k ∈ U\{j}} u_jk + Σ_{k ∈ U\{j}} (−u_jk) ∀ j,  β,  −(−u) }
= { α_j ∀ j,  β,  u }
= { α, β, u } = θ,
where the first line follows from applying the definition of f, the second from applying the definitions of α', β', and u', the third from canceling terms, and the last from the definition of θ. Since θ was chosen arbitrarily, we have shown f(f(θ)) = θ for all θ, and thus f⁻¹ = f.
Lemma 4. Let θ = {α, β, T, Z} denote the model parameters of the factorized forward-dependent CDM ranking model, and let f be the map defined in Theorem 2. The inverse map is the map itself: f⁻¹ = f.

Proof. The proof follows the same form and steps as the previous lemma. If f(f(θ)) = θ for all θ, then f⁻¹ = f. We show the former to be true. Let θ := {α, β, T, Z} and let θ' := f(θ) = (α', β', T', Z'). We have,

f(f(θ)) = f(θ') = f({α', β', T', Z'})
= { α'_j + Σ_{k ∈ U\{j}} t'_j^T z'_k ∀ j,  β',  T',  −Z' }
= { α_j + Σ_{k ∈ U\{j}} t_j^T z_k + Σ_{k ∈ U\{j}} t_j^T (−z_k) ∀ j,  β,  T,  −(−Z) }
= { α_j ∀ j,  β,  T,  Z }
= { α, β, T, Z } = θ,
where the first line follows from applying the definition of f, the second from applying the definitions of α', β', T', and Z', the third from canceling terms, and the last from the definition of θ. Since θ was chosen arbitrarily, we have shown f(f(θ)) = θ for all θ, and thus f⁻¹ = f.
B HYPERPARAMETER TUNING
The models in this work each require the selection of various hyperparameters; the low-rank CDM requires the selection of the embedding dimension r, a stratified model is specified by S and γ_L, the number of strata and the amount of Laplacian regularization, respectively, and all models apply non-zero ℓ2 regularization, γ > 0. We tune these hyperparameters via 5-fold cross validation within our training dataset, selecting the values that minimize validation loss. A minimal amount of local regularization, γ = 10⁻⁵, is applied, only towards achieving identifiability of the parameters, and the embedding dimension of the low-rank CDM is selected to be r = 10 (Figure 6).
In Figure 7, we see that all training errors (top row) are minimized with the most strata and least regularization, resulting in a model with maximum flexibility to fit the training data. However, in validation (bottom row), the stratified CDM pays a large price with more stratification and less regularization. The multiplicative increase in parameters by the stratification leads to more significant over-fitting to the training data for the CDM than the linear and fixed-effect models, so more regularization is needed in this case. Thus, the selected stratification hyperparameters, (S, γ_L), are (10, 10⁻⁴) for the fixed-effect MNL and linear MNL models, and (10, 10⁻³) for the CDM and nested models. We summarize the tuned model hyperparameters in Table 3.

C PARAMETER ESTIMATES

C.1 Stratified fixed-effects, α̂

In Figure 8, we plot the fixed-effects learned by the stratified fixed-effect MNL at ranks s = {1, 10}. We sort the schools and program-types on the x-axes by the s = 1 model's parameter estimates, α̂_1. We see that the later distributions shift weight away from the top-choice-popular alternatives. This redistribution yields improved performance for the fixed-effect model down-rank. See Figure 5 for evidence of this result.

C.2 Stratified vs. non-stratified β̂

Next we report the coefficient estimates β̂ from the non-stratified (Figure 9) and stratified (Figure 10) linear MNL and CDM models. In Figure 9, the signs of most coefficients align with intuition in both linear MNL and CDM models. For example, the coefficients on distance are negative, signaling that there is a preference for proximity to home. Meanwhile the parameters for before/after school programs, PreK/TK continuation, language match, and sibling match are all positive.

Figure 9: Coefficient estimates β̂ from the non-stratified linear MNL and CDM models, estimated on 2017-18 school year preference data.

Previous rank-heterogeneous models [16] assume that coefficient magnitudes contract towards zero down-rank, but relaxing this assumption we find coefficients frequently show non-linear/non-monotonic behaviors down rank.
When stratifying the models into S = 10 strata for both the linear and CDM models respectively, we see in Figure 10 that the parameter magnitudes mostly diminish towards zero as we model down-rank choices. This contraction is consistent with either a less confident model (the choice datasets shrink in size as we subset on lower-rank choices, since the number of families ranking at least t alternatives decreases with increasing t) or less confident assembly of down-rank preferences by households. The latter is the main hypothesis of Hausman and Ruud [16]; these authors developed a heteroscedastic model with uniformly diminishing coefficients at each rank position to model this effect.
However, there are a few examples of non-monotonically shrinking coefficients in the stratified linear model's coefficients. Most notably, the coefficient on the fraction eligible for reduced lunch actually becomes more negative down-rank within the CTIP1 population and stays constant for the non-CTIP1 population. In this way, the regularized stratified model allows us to learn from truly evolving, not simply vanishing, preferences. This finding is an example where Hausman and Ruud's approach alone is insufficient in modeling household values.
C.3 Nested MNL
The nested model extends the MNL to allow groups of alternatives, called nests, to be "similar" to each other in an unobserved way; that is, to have correlated error terms. To define the model, let B be the number of predefined nests. Denote by U_b ⊆ U the set of alternatives assigned to nest b for b ∈ [B], and b(j) ∈ [B] the unique nest membership of alternative j. Given these nests and memberships, the nested MNL choice probability is given in closed form by the following formula:
P(j | C, x_i)
= [ e^{u_j/λ_{b(j)}} ( Σ_{k ∈ C_{b(j)}} e^{u_k/λ_{b(j)}} )^{λ_{b(j)}−1} ] / [ Σ_{ℓ=1}^{B} ( Σ_{k ∈ C_ℓ} e^{u_k/λ_ℓ} )^{λ_ℓ} ]
= [ e^{u_j/λ_{b(j)}} / Σ_{k ∈ C_{b(j)}} e^{u_k/λ_{b(j)}} ] · [ ( Σ_{k ∈ C_{b(j)}} e^{u_k/λ_{b(j)}} )^{λ_{b(j)}} / Σ_{ℓ=1}^{B} ( Σ_{k ∈ C_ℓ} e^{u_k/λ_ℓ} )^{λ_ℓ} ]
= P(j | C_{b(j)}, x_i) · P(C_{b(j)} | C, x_i),
where C_ℓ = C ∩ U_ℓ denotes the nest-ℓ alternatives available in C, u_k is shorthand for u(k | C, x_i), and λ_b is a measure of independence in nest b. When λ_b = 1, the model is identical to the standard MNL and nests are abandoned, and λ_b < 1 indicates positive correlation amongst nest alternatives.
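A direct transcription of this formula (a sketch; the dictionary inputs are our convention):

```python
import math
from collections import defaultdict

def nested_probs(util, nest_of, lam, choice_set):
    """util: {alternative: representative utility}; nest_of: {alternative: nest};
    lam: {nest: scale lambda_b}; returns {alternative: choice probability}."""
    by_nest = defaultdict(list)
    for j in choice_set:
        by_nest[nest_of[j]].append(j)
    # per-nest sums: sum_{k in C_b} exp(u_k / lambda_b)
    nest_sum = {b: sum(math.exp(util[k] / lam[b]) for k in ks)
                for b, ks in by_nest.items()}
    denom = sum(s ** lam[b] for b, s in nest_sum.items())
    probs = {}
    for b, ks in by_nest.items():
        p_nest = nest_sum[b] ** lam[b] / denom                   # P(nest b | C)
        for j in ks:
            p_within = math.exp(util[j] / lam[b]) / nest_sum[b]  # P(j | nest b)
            probs[j] = p_within * p_nest
    return probs
```

Setting every lam value to 1 collapses the computation to the standard MNL softmax, mirroring the remark above.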
We implement a nested MNL in our setting by nesting the program offerings by program type. Specifically, by 'Chinese Language', 'Filipino Language', 'General Education', 'Japanese Language', 'Korean Language', 'Spanish Language', and 'Special Education' offerings, for a total of B = 7 nests spanning the full menu of available programs in SFUSD. Representative utilities are taken to be identical to the linear MNL specification in Eq. (5) with the same covariates. As with the fixed-effect, linear MNL, and CDM models, we run Adam with default parameters, adding ℓ2 regularization in accordance with our strength selection of 1e−5 in Table 3. Model parameters are updated over batches of training data until reaching max_epoch = 1000 or convergence, i.e., when the absolute difference in losses is less than 1e−4. Learned scale parameters λ ∈ R^B are given in Table 4. In Figure 1, we find that the nested MNL performs almost identically to the uncorrelated linear model, with only marginal performance improvement. The CDM model outperforms even the nested model as it learns a more nuanced similarity amongst the alternatives, and does so implicitly rather than through explicitly defined subsets.
To investigate this effect further, we plot top-and second-choice probabilities of the linear, nested, and CDM models for a select household in the test data in Figure 11. Specifically, this household selected a special education program for their student in their top-position. One would expect both the nested MNL and CDM models to make use of this information in their second choice distribution. However, the 1st and 2nd probability distributions are effectively identical in the nested model. The nested model "redistributes" the selected choice's first-choice probability to the remaining programs in a way that favors other special education offerings, but the effect is minimal as the selection of any special education program first is relatively rare in our data. The CDM, on the other hand, makes great use of this intel and dramatically increases the likelihood of choosing other special education programs. The CDM is capable of modeling behavioral signals that are not present in household or program covariates, and therefore presents measurable advantages over the rank-homogeneous models studied in this work. S p e c ia l E d u c a t io n G e n e r a l E d u c a t io n S p a n is h L a n g u a g e C h in e s e L a n g u a g e F il ip in o L a n g u a g e Ja p a n e s e L a n g u a g e K o r e a n L a n g u a g e
D MODEL ACCURACY BY SUBPOPULATION
In addition to reporting goodness of fit and overall accuracy of the models, it is important to also evaluate model performances by (1) their ability to predict the choice of easy-to-predict subgroups (e.g. sibling), serving as a sanity check, and (2) their ability to predict the choices of subgroups of interest to the decision-maker (e.g. Black, Hispanic/Latino, CTIP1), since the choice model will eventually be used to predict outcomes of policies on these subgroups. We drop the null and fixed effect models from these plots as they are relatively inaccurate, evidenced in Figure 5.
In Figure 12a, we see that sibling and PreK/TK priority groups show the highest accuracy in top choice, as these groups gain strong priority to specific schools and tend to rank those schools first. The models systematically underperform in accuracy on CTIP1 (Figure 12a), Hispanic/Latino, and Black/African-American populations (Figure 12b). These groups are either highly varied in their demonstrated preferences, making them harder to predict for, or there was not enough training data present for these populations. Finally, the CDM demonstrates the largest lead in predicting second and third choices for households who ranked a special education or language program first (Figure 12c).
E MODEL CONSISTENCY
Here we note the sampling consistency-how similar sampled choices are-via two metrics: weighted Kendall's correlations amongst generated lists per household, and sampling consistency when completing at position . We first present weighted Kendall's tau correlations between generated preferences and then report how often the model agrees with itself when predicting the -th choice when given the true first − 1 choices, , −1 .
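Both diagnostics are straightforward to compute; a sketch using SciPy's weighted Kendall's τ (the input encodings are our assumptions):

```python
from itertools import combinations
from scipy.stats import weightedtau

def mean_weighted_tau(samples):
    """samples: list of sampled rankings, each encoded as a vector of rank
    positions per alternative; returns the mean pairwise weighted tau."""
    taus = [weightedtau(a, b).correlation for a, b in combinations(samples, 2)]
    return sum(taus) / len(taus)

def prediction_consistency(predictions):
    """predictions: repeated t-th choice predictions for one student; returns
    the fraction of prediction pairs that agree with each other."""
    pairs = list(combinations(predictions, 2))
    return sum(a == b for a, b in pairs) / len(pairs)
```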
To compute the Kendall's τ statistic, we sample choices sequentially from each model and generate 100 full rankings, total orderings of the elements of U, for each household i. Figure 13 shows the weighted Kendall's τ correlation between these generated preferences across all model pairs, averaged over all students. As expected, the null model generates preferences that are completely uncorrelated with itself and the rest. Unsurprisingly, the CDM model class generates preference samples that are more unlike the other samples, as seen in the two CDM rows. The CDM models are susceptible to a snow-ball effect when generating full preferences from scratch: the top choices have strong down-stream effects on later choices, whereas the linear and fixed-effect (and effectively nested, in this case) MNL models are identically and effectively independently sampled down rank, resulting in lists more similar to themselves and each other.
The right plot of Figure 13 displays the consistency of the model predictions at the t-th position when given the true first t − 1 choices made. To measure consistency, we make 100 t-th choice predictions for student i, compute the fraction of the (100 choose 2) pairs that agree with each other, and average these fractions over all students in the test set. The null and fixed-effect classes of models remain similarly (in)consistent throughout, indicating similar probability distributions across the ranking, regardless of prior choices. The nested, linear, and CDM models predict very consistently (more pairs of predictions at t agree with one another) in the top few choices as they learn from priority statuses and program affiliations. The CDM shows strong consistency in position 2: the introduction of the first context effect skews the distribution per household to be more modal in follow-up program selections. The effect gradually diminishes after the second rank position, however, as the context effects get averaged out by the growth of the context set, Eq. (6).

Figure 13: Left: weighted Kendall's τ correlations between preferences generated by each pair of models. Preferences generated by the null distribution are unlike the rest, as expected, and those by the CDM models are more unlike fixed-effect, linear, and nested samples than they are unlike each other. Right: consistency vs. rank position for the studied models. Null and fixed-effect remain fairly (in)consistent, whereas the nested, linear, and CDM models report more consistent predictions in top positions. The CDM is most consistent from position 2 onward due to the information of the top choice.
Theorem 1. Let θ denote the model parameters of the unfactorized forward-dependent CDM ranking model, and θ′ denote those of the backward-dependent model. The forward- and backward-dependent parameters are equivalent under a bijection θ′ = f(θ).
Figure 2: Negative log likelihoods of truncated top-dependent CDMs where the context set is only the top-k chosen alternatives. Here k = 0 is equivalent to the linear MNL, and k equal to the total number of offered programs is equivalent to the standard CDM. From top/left to bottom/right, the program types are General Education (65), Spanish Language (32), Special Education (27), Chinese Language (24), and Miscellaneous Language (6) programs.
Figure 3: The context effect matrix of the CDM.
Figure 4: Top- and second-choice probabilities of the non-stratified linear, nested, and CDM models over special education programs for a sample household whose first choice was a special education program. The CDM model learns a drastically updated second-choice probability given the context of the top choice.
Let Y = {y1, y2, ..., ym} be the set of their true top choices. Denote by Ak = {i : ℓi ≥ k} the set of households who have ranked at least k alternatives. Then, given a choice model, denote by ŷi,k the modal prediction by the model at position k, i.e., the highest-probability alternative over the remaining programs, in the context of the previous k − 1 choices.
Figure 5: Accuracy of the k-th prediction. The CDM model makes use of the provided context and generates the most accurate predictions at lower rank positions.
Theorem 1. Let θ denote the model parameters of the unfactorized forward-dependent CDM ranking model, and θ′ denote those of the backward-dependent model. The forward- and backward-dependent parameters are equivalent under a bijection f(θ) = θ′.
Theorem 2. Let θ denote the model parameters of the low-rank forward-dependent CDM ranking model, and θ′ denote those of the low-rank backward-dependent model. These model parameters are equivalent under a bijection f(θ) = θ′.

We observe that the last line is the backwards-dependent factorized CDM, showing that any forwards-dependent factorized CDM can be mapped to a backwards-dependent factorized CDM. Moreover, since the mapping between the forward parameters θ and the backward parameters θ′ is f, from Lemma 4 we know that f⁻¹ exists (and is f itself), and hence any backwards-dependent factorized CDM can be mapped to a forwards-dependent factorized CDM. This concludes the proof.

Lemma 3. Let θ denote the model parameters of the unfactorized forward-dependent CDM ranking model, and let f denote the corresponding parameter map. The inverse map is the map itself: f⁻¹ = f.

Proof. If f(f(θ)) = θ for all θ, then f⁻¹ = f. We show the former to be true. Let θ denote the forward parameters and let θ′ := f(θ).

Lemma 4. Let θ denote the model parameters of the factorized forward-dependent CDM ranking model, and let f denote the corresponding parameter map. The inverse map is the map itself: f⁻¹ = f.

Proof. The proof follows the same form and steps as the previous lemma. If f(f(θ)) = θ for all θ, then f⁻¹ = f. We show the former to be true. Let θ denote the factorized forward parameters and let θ′ := f(θ).
Figure 6: Left: amount of local ℓ2 regularization, λ, tuning, used on all model parameters. Right: tuning of the CDM embedding dimension, r.
Figure 7: Hyperparameter tuning for the number of stratification buckets, K, and the amount of stratified regularization, λ_L, for the three main models plus nested. The top row denotes training loss, the bottom row shows validation loss. (K, λ_L) = [(10, 10^-4), (10, 10^-4), (10, 10^-3), (10, 10^-3)] minimizes validation loss for the fixed-effect MNL, linear MNL, CDM, and nested models, respectively.
Figure 8: Stratified fixed-effects for K = {1, 10}. Schools (left) and program types (right) are sorted on the x-axes according to the K = 1 fixed-effects. Later distributions shift away from top-choice-popular programs.
Figure 9: Learned linear weights, β̂, from training the linear MNL and CDM models on 2017-18 school year preference data. The sign of most coefficients aligns with intuition in both the linear MNL and CDM models. For example, the coefficient on (square-root) distance is negative, signaling that there is a preference for proximity to home. Meanwhile, the parameters for before/after school programs, PreK/TK continuation, language match, and sibling match are all positive.
Figure 10: Learned linear model estimates, β̂, from training the stratified linear MNL and CDM models on 2017-18 school year preference data.
Figure 11: Linear MNL, nested MNL, and CDM choice probabilities in the top choice (no context) and second choice (one chosen program) across all available alternatives, for an example household that chose a special education program first. In the context of the selected alternative, the linear and nested MNL models do not significantly redistribute second-choice probabilities, whereas the CDM distribution is more adaptive to the information of chosen alternatives.
(a) Accuracy by priority categories. See Section 3 for definitions of all priority categories.
(c) Accuracy by which program type was ranked in the first position.
Figure 12: Prediction accuracy at k = [1, 2, 3] over key sub-populations. The first column corresponds to top-choice prediction (k = 1), the last to third-choice prediction (k = 3). Groups are ordered largest to smallest from left to right on the x-axes.
Figure 13: Correlation and consistency statistics for the 8 studied models. Left: average weighted Kendall's tau correlations between simulated preferences. Preferences generated by the null distribution are unlike the rest, as expected, and those by the CDM models are more unlike fixed-effect, linear, and nested samples than they are unlike each other. Right: consistency vs. rank position for the studied models. Null and fixed-effect remain fairly (in)consistent, whereas the nested, linear, and CDM models report more consistent predictions in top positions. The CDM is most consistent from position 2 onward due to the information of the top choice.
Table 1: Summary statistics of the SFUSD dataset, by school year (columns: 2017-18 and 2018-19).
Table 2: Summary of models and the number of degrees of freedom.
Table 3: Tuned model hyperparameters.

    Hyperparameter                    Applies to    Fixed-effect   Linear   CDM     Nested
    Local regularization, λ           All           10^-5 (same value for all models)
    CDM embedding dimension, r        CDM           -              -        10      -
    Stratification buckets, K         Stratified    10             10       10      10
    Stratified regularization, λ_L    Stratified    10^-4          10^-4    10^-3   10^-3
Table 4: Learned independence parameters, λ̂, of our nested MNL model. Lowest λ̂, and therefore highest within-nest correlation, in bold (Special Education).

    Nest                 Nest size    Parameter, λ̂
    General Education    65           0.4271
    Spanish Language     32           0.6562
    Special Education    27           0.3634
    Chinese Language     24           0.7709
    Korean Language      2            0.5118
    Filipino Language    2            0.5549
    Japanese Language    2            0.5708
sesarjun@amazon.com, Amazon, San Francisco, CA, USA; Itai Ashlagi, iashlagi@stanford.edu, Stanford University, Stanford, CA, USA; Irene Lo, ilo@stanford.edu, Stanford University, Stanford, CA, USA.
This lottery design is known as the multiple tie-breaking rule (MTB), as students receive multiple lottery values, one for each ranked school. By contrast, the single tie-breaking rule (STB) assigns a single lottery value to each student, used across all desired schools. For more on the analysis of tie-breaking rules, see [5, 6, 14].
ACKNOWLEDGMENTS
We thank the San Francisco Unified School District for providing us access to choice data, specifically thanking SFUSD representatives Joseph Monardo, Jennifer Lebarre, Lauren Koehler, and Reed Levitt for helpful discussions. This work was supported in part by a gift from the Koret Foundation.
[1] Atila Abdulkadiroğlu, Parag A. Pathak, Jonathan Schellenberg, and Christopher R. Walters. 2020. Do Parents Value School Effectiveness? American Economic Review 110, 5 (May 2020), 1502-39.
[2] Nikhil Agarwal and Paulo Somaini. 2018. Demand Analysis Using Strategic Reports: An Application to a School Choice Mechanism. Econometrica 86, 2 (2018), 391-444.
[3] Nikhil Agarwal and Paulo Somaini. 2020. Revealed Preference Analysis of School Choice Models. Annual Review of Economics 12, 1 (2020), 471-501.
[4] Paul D. Allison and Nicholas A. Christakis. 1994. Logit Models for Sets of Ranked Items. Sociological Methodology 24 (1994), 199-228.
[5] Itai Ashlagi and Afshin Nikzad. 2020. What matters in school choice tie-breaking? How competition guides design. Journal of Economic Theory 190 (2020), 105120.
[6] Itai Ashlagi, Afshin Nikzad, and Assaf Romm. 2019. Assigning more students to their top choices: A comparison of tie-breaking rules. Games and Economic Behavior 115 (2019), 167-187.
[7] Richard R. Batsell and John C. Polking. 1985. A New Class of Market Share Models. Marketing Science 4, 3 (1985), 177-198.
[8] Amanda Bower and Laura Balzano. 2020. Preference modeling with context-dependent salient features. In International Conference on Machine Learning. PMLR, 1067-1077.
[9] Caterina Calsamiglia, Chao Fu, and Maia Güell. 2020. Structural Estimation of a Model of School Choices: The Boston Mechanism versus Its Alternatives. Journal of Political Economy 128, 2 (2020), 642-680.
[10] Randall G. Chapman and Richard Staelin. 1982. Exploiting Rank Ordered Choice Set Data within the Stochastic Utility Model. Journal of Marketing Research 19, 3 (1982), 288-301.
[11] Douglas E. Critchlow, Michael A. Fligner, and Joseph S. Verducci. 1991. Probability models on rankings. Journal of Mathematical Psychology 35, 3 (1991), 294-318.
[12] Dennis Fok, Richard Paap, and Bram Van Dijk. 2012. A rank-ordered logit model with unobserved heterogeneity in ranking capabilities. Journal of Applied Econometrics 27, 5 (2012), 831-846.
[13] D. Gale and L. S. Shapley. 1962. College Admissions and the Stability of Marriage. The American Mathematical Monthly 69, 1 (1962), 9-15.
[14] Monique De Haan, Pieter Gautier, Hessel Oosterbeek, and Bas Van der Klaauw. 2015. The Performance of School Assignment Mechanisms in Practice. Journal of Political Economy (2015).
[15] Justine S. Hastings, Thomas J. Kane, and Douglas O. Staiger. 2008. Heterogeneous Preferences and the Efficacy of Public School Choice.
[16] Jerry A. Hausman and Paul A. Ruud. 1987. Specifying and testing econometric models for rank-ordered data. Journal of Econometrics 34, 1 (1987), 83-104.
[17] Seung-Jean Kim, Kwangmoo Koh, Stephen Boyd, and Dimitry Gorinevsky. 2009. ℓ1 trend filtering. SIAM Review 51, 2 (2009), 339-360.
[18] Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization.
[19] Stephanie R. Land and Jerome H. Friedman. 1997. Variable fusion: A new adaptive signal regression method.
[20] Mariana Laverde. 2022. Distance to Schools and Equal Access in School Choice Systems. Working Papers 2022-002, Human Capital and Economic Opportunity Working Group.
[21] R. Duncan Luce. 1959. Individual Choice Behavior: A Theoretical Analysis. Wiley, New York, NY, USA.
[22] R. Duncan Luce. 1977. The choice axiom after twenty years. Journal of Mathematical Psychology 15, 3 (1977), 215-233.
[23] Daniel McFadden. 1978. Modelling the choice of residential location. Spatial Interaction Theory and Planning Models (1978).
[24] Daniel McFadden and Kenneth Train. 2000. Mixed MNL models for discrete response. Journal of Applied Econometrics 15, 5 (2000), 447-470.
[25] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space.
[26] Parag A. Pathak and Peng Shi. 2014. Demand Modeling, Forecasting, and Counterfactuals, Part I.
[27] Parag A. Pathak and Peng Shi. 2017. How Well Do Structural Demand Models Work? Counterfactual Predictions in School Choice. Working Paper 24017, National Bureau of Economic Research.
[28] R. L. Plackett. 1968. Random Permutations. Journal of the Royal Statistical Society: Series B (Methodological) 30, 3 (1968), 517-534.
[29] Girish N. Punj and Richard Staelin. 1983. A Model of Consumer Information Search Behavior for New Automobiles. Journal of Consumer Research 9, 4 (1983), 366-380.
[30] Arjun Seshadri, Alexander Peysakhovich, and Johan Ugander. 2019. Discovering Context Effects from Raw Choice Data. CoRR abs/1902.03266 (2019). arXiv:1902.03266
[31] Arjun Seshadri, Stephen Ragain, and Johan Ugander. 2020. Learning Rich Rankings. In Advances in Neural Information Processing Systems, Vol. 33, 9435-9446.
[32] Alexander J. Smola and Risi Kondor. 2003. Kernels and regularization on graphs. In Learning Theory and Kernel Machines. Springer, 144-158.
[33] Robert Tibshirani, Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. 2005. Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 67, 1 (2005), 91-108.
[34] Kiran Tomlinson and Austin R. Benson. 2021. Learning interpretable feature context effects in discrete choice. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 1582-1592.
[35] Kenneth Train. 2009. Discrete Choice Methods with Simulation (second edition). Cambridge University Press, New York, NY, USA.
[36] Jennifer S. Trueblood, Scott D. Brown, Andrew Heathcote, and Jerome R. Busemeyer. 2013. Not just for consumers: Context effects are fundamental to decision making. Psychological Science 24, 6 (2013), 901-908.
[37] Jonathan Tuck, Shane Barratt, and Stephen Boyd. 2021. A Distributed Method for Fitting Laplacian Regularized Stratified Models. Journal of Machine Learning Research 22, Article 60 (January 2021), 37 pages.
[38] Amos Tversky. 1969. Intransitivity of Preferences. Psychological Review 76, 1 (1969), 31-48.
[39] Amos Tversky and Itamar Simonson. 1993. Context-Dependent Preferences. Management Science 39, 10 (1993), 1179-1189.
[40] Yu-Xiang Wang, James Sharpnack, Alex Smola, and Ryan Tibshirani. 2015. Trend filtering on graphs. In Artificial Intelligence and Statistics. PMLR, 1042-1050.
[41] Lirong Xia. 2019. Learning and Decision-Making from Rank Data. Synthesis Lectures on Artificial Intelligence and Machine Learning 13 (2019), 1-159. https://doi.org/10.2200/S00876ED1V01Y201810AIM040
| [] |
[
"Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments",
"Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments"
] | [
"Taeho Kang ",
"Christian Wallraven "
] | [] | [] | Virtual Reality (VR) is seeing increased adoption across many fields. The field of experimental cognitive science is also testing utilization of the technology combined with physiological measures such as electroencephalography (EEG) and eye tracking. Quantitative measures of human behavior and cognition process, however, are sensitive to minuscule time resolutions that are often overlooked in the scope of consumer-level VR hardware and software stacks. In this preliminary study, we implement VR testing environments in two prominent 3D Virtual Reality frameworks (Unity and Unreal Engine) to measure latency values for stimulus onset execution code to Head-Mount Display (HMD) pixel change, as well as the latency between human behavioral response input to its registration in the engine environment under a typical cognitive experiment hardware setup. We find that whereas the specifics of the latency may further be influenced by different hardware and software setups, the variations in consumer hardware is apparent regardless and report detailed statistics on these latencies. Such consideration should be taken into account when designing VR-based cognitive experiments that measure human behavior. | null | [
"https://export.arxiv.org/pdf/2306.02637v1.pdf"
] | 259,075,531 | 2306.02637 | f48047c6be71aba6fc76016fbf386b3f8641f7fb |
Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments
JULY 2023 1
Taeho Kang
Christian Wallraven
Gotta Go Fast: Measuring Input/Output Latencies of Virtual Reality 3D Engines for Cognitive Experiments
JULY 2023 1Index Terms-Virtual realityVREEGcognitive experimentshuman behavioralbehavioral measurementseye-trackingla- tencyresponse time
Virtual Reality (VR) is seeing increased adoption across many fields. The field of experimental cognitive science is also testing utilization of the technology combined with physiological measures such as electroencephalography (EEG) and eye tracking. Quantitative measures of human behavior and cognition process, however, are sensitive to minuscule time resolutions that are often overlooked in the scope of consumer-level VR hardware and software stacks. In this preliminary study, we implement VR testing environments in two prominent 3D Virtual Reality frameworks (Unity and Unreal Engine) to measure latency values for stimulus onset execution code to Head-Mount Display (HMD) pixel change, as well as the latency between human behavioral response input to its registration in the engine environment under a typical cognitive experiment hardware setup. We find that whereas the specifics of the latency may further be influenced by different hardware and software setups, the variations in consumer hardware is apparent regardless and report detailed statistics on these latencies. Such consideration should be taken into account when designing VR-based cognitive experiments that measure human behavior.
I. Introduction
The idea of utilizing naturalistic stimuli in cognitive experiments is increasingly gaining traction, and its importance has been recognized in an increasing number of studies [1]-[7]. 3D environments such as Virtual and Mixed Reality (VR/MR) can provide an excellent platform for implementing experimental paradigms where immersive, interactive, and naturalistic stimulus presentation is desired [8]-[10]. Especially for VR, behavioral and cognitive investigative experiments performed in virtual reality have the advantage of being able to control and manipulate environmental variables that in real-life settings would be nearly impossible to control [9], [11]. Possibly due to this, virtual reality has seen increased utilization in the area of behavioral and cognitive investigations, from simple behavioral experiments [12], to neuroimaging studies [?], [13], to even timing-sensitive studies involving physiological signal measurements such as EEG [14]-[21].
Due to the nature of the cognitive processes of interest, behavioral and cognitive studies investigating timing-critical brain processes have historically been sensitive to latency in experimental hardware and software [22], [23]. It has been suggested, however, that specialized behavioral input devices may not be as crucial even for time-critical experiments, as the variability of human behavior itself is generally larger in scale than the input lag contributed by individual hardware devices [24]. For ease of experimental equipment acquisition, which in turn is relevant to the easy replication of studies, usage of adequately performant consumer hardware may be preferable to limited, costly specialized equipment for behavioral input.
Nonetheless, especially in studies measuring time-sensitive behaviors, there is importance in measuring the expected latency of the hardware and software setups used for experimental paradigms. Wimmer et al. [25] used opto-couplers to measure the latency of 36 different serial input devices connected to a Raspberry Pi device, formed probability distribution models for each of the devices, and reported different input latency distributions per device, suggesting the need to measure input latency levels in interactive experimental setups that make use of serial-device-based user input. Furthermore, due to higher graphical computation requirements than conventional displays, arising not only from generally higher refresh rates but also from other factors such as needing to render twice for stereo vision, the current state of VR hardware suffers from latency greater than that of conventional user interface devices [26].
A final point of consideration in this context concerns the use of higher-level APIs for generating three-dimensional, interactive environments that afford realistic levels of sensory realism and interactivity. While it is possible to create well-controlled low-level stimuli with relative ease in computer graphics languages, the amount of work necessary to create, from scratch, environments which, for example, contain objects that interact with each other in a physically realistic fashion is beyond the capabilities of standard cognitive and behavioral research labs. For this reason, many researchers have increasingly turned to 3D game engines for creating such environments. One drawback of this development is that these engines offer only a reduced degree of control over their timing internals, given that much of the behind-the-scenes calculation remains hidden from the API user (examples include the calculation of graphics primitives, the determination of collisions in physics-aware simulations, etc.). This raises the question of how much timing accuracy and precision is possible in game engine simulation programming environments. In this context, Wiesing et al. [27] measured stimulus duration and onset timing in Unreal Engine with a dedicated response pad and reported increased average latency compared to dedicated cognitive experiment software such as PsychoPy and Psychtoolbox. While Unreal Engine as a serious 3D engine has been used for VR-based behavioral experiments [28], Unity Engine, possibly due to its relative ease of implementation in comparison to the former, has been seeing increased application in cognitive experiments [17], [29]-[34]. While they are suited for similar purposes, due to differences in implementation detail Unity and Unreal Engine often exhibit different behaviors even when the same effect is intended, especially in frame- and I/O-related latency performance [35].
In light of these considerations, to ultimately implement and execute experiments investigating brain processes in a naturalistic VR environment, we deem it worth investigating the expected latency values for hardware and software setups that would (commonly) be used in VR-based behavioral experiments. In this study, we aim to achieve this by utilizing a measuring apparatus with oscilloscopes, as well as a bare-bone experimental paradigm implemented in two widely-used VR-capable 3D engines, Unity Engine and Unreal Engine. In the bare-bone paradigm, we create stimulus onsets that send trigger codes to the measuring apparatus before the actual displaying of stimuli is performed in the Head-Mounted Display (HMD), and measure the latency between the onset code and the actual pixel change in the HMD. Furthermore, we measure the latency between a physical input action on consumer-level user interface hardware (a keyboard) that can be used for behavioral response, and the registration of the input in the 3D engines. Lastly, we measure the latency between the physical input action and the pixel change resulting from the feedback code execution. We were interested in the measurement of the following events: 1) the latency between stimulus onset code execution in the 3D engine and the actual HMD pixel changes as the stimulus was presented (Stim2Disp), 2) the latency between a participant's behavioral response by keypress and the code execution performed immediately upon the registration of the key event in the 3D engine (Key2Led), and 3) the latency between a participant's behavioral response by keypress and the pixel change in the HMD caused by the 3D engine code that presents a visual feedback stimulus upon the key event registration (Key2Disp). To measure these events, we implemented experimental paradigms capable of measuring them in both Unity Engine and Unreal Engine, as can be seen in Figure 1.
II. Materials and Methods
A. Experiment Design
We decided to measure both 3D engines as their implementations differ, as does the scripting language for user code: Unity utilizes C# for this, whereas Unreal uses C++. Both engines have seen usage in cognitive experimental designs in 3D environments (see Introduction).
The 3D VR experiment environment consists of a basic 3D spherical object that can move around based on the user's input. Upon execution of a stimulus onset code or recognition of a participant's behavioral response on the keyboard, a chess-board grid of black and white covers the entire screen for one frame, followed by the same grid with the colors inverted for another frame. The experiment code embodies a bare-bone form of a cognitive experimental paradigm using 3D engines, and we send programmatic triggers upon stimulus onset and behavioral input registration, as one would for experiments requiring high temporal resolution, such as EEG or other cognitive experiments involving some form of time-series physiological measurement. We measure the latency of the above three scenarios (Stim2Disp, Key2Disp, Key2Led) by sending programmatic markers to the Arduino upon stimulus onset code execution and upon behavioral response registration in the 3D engine, which triggers an LED. Furthermore, we use pressure sensors and photodiodes connected to the Arduino board to obtain quantifiable measures of when participant behavior and stimulus display happen in the real world.
1) Unity-specific setup: For Unity, we implemented the paradigm in Unity Engine version 2019.4.20f. Following Unity Engine's manual on the order of execution for event functions (see https://docs.unity3d.com/Manual/ExecutionOrder.html), the FixedUpdate() function handles ticks of computations focused on physics engine calculations and may run more than once per rendered frame depending on the computation load and settings, while the Update() function ticks once per rendered frame (after the FixedUpdate() calls are complete). The events are called serially, and between the Update() call and the actual rendering of the scene on the display there are several other calls; as such, in the interest of sending the trigger for stimulus onset to the experimental behavior measurement database as close to the onset of the actual stimulus on the display as possible, it is preferable to send the marker code sometime after the Update() call but before the actual display rendering process. We achieve this by calling a coroutine that waits to send the stimulus onset trigger until the rendering computation is complete, but before the displaying is performed:

    private IEnumerator HandleTriggers(GameObject grid1, GameObject grid2)
    {
        // because the first EOF is still on the same frame, wait
        yield return new WaitForEndOfFrame();
        // send LED signal RIGHT BEFORE FRAME RENDERING
        duino_port.Write(sevent_keypress, 0, 1);
        yield return new WaitForEndOfFrame();
        // the stimuli is two frames of visual stimuli;
        // as such, disable the first frame and enable the other
        grid1.SetActive(false);
        grid2.SetActive(true);
        grid2.SetActive(false);
        yield return null;
    }

Furthermore, as the FixedUpdate() call executes at the beginning of the game tick and executes at a higher rate, it is preferable to process keyboard input events (i.e., the behavioral response) there. For the Key2Led and Key2Disp events, the function handling keyboard input events can also call the coroutines that send the markers subsequently; the corresponding FixedUpdate() listing is shown in Fig. 3.

2) Unreal-specific setup: For the Unreal Engine, we implemented the paradigm in version 4.26. Unreal Engine logic ticks are separated into tick groups (PrePhysics, DuringPhysics, PostPhysics, and PostUpdateWork) that are run serially as per the documentation (https://docs.unrealengine.com/5.1/en-US/actor-ticking-in-unreal-engine/). Processing of user input is handled in the PrePhysics segment of the tick, and as such binding the input of specific keys to a method that sends a trigger is sufficient. By calling stimulus onset triggers in code that is executed in the PostPhysics or PostUpdateWork segment, we can also set the trigger to be as close to the timing of the actual display as possible. To tweak Unreal Engine for optimal performance, several project settings were changed in addition: First, in the Rendering->VR settings, Instanced Stereo was enabled while Mobile HDR was disabled, as per recommendations by [27]. Second, the following console variables were changed: R.GTSyncType to 1, R.Vsync to 0, and R.OneFrameThreadLag to 0. R.GTSyncType determines which thread game processes sync to: if 0, they sync with the rendering thread; if 1, they sync to the RHI (render hardware interface, e.g., D3D or OpenGL) thread. As per the Unreal documentation, syncing to the RHI thread helps with input latency, so we set it to 1. VSync renders frames at the pace the display device is capable of, but it often leads to more dropped frames when enabled than otherwise [36]. When OneFrameThreadLag is enabled, the graphics drivers keep the game thread from processing more than one frame's worth of computations beyond what is currently being displayed. We deemed this undesirable, as our purpose was to minimize lags stemming from computations not being far enough ahead, along with minimizing input latency.
B. Measuring apparatus
To measure the timings of 1) behavioral response onset, 2) stimulus onset code execution, 3) feedback code execution in response to behavior, and 4) pixel changes on the HMD as precisely as possible, the circuit apparatus that can be seen in Figure 5 was implemented. The inspiration for the circuit board was a schematic from class material in Aachen University's system design course [37]. Specific components of the circuitry included an Arduino Uno Rev. 3, a BPW-34 photosensitive diode developed by Vishay Semiconductors, and an FSR402 pressure sensor developed by Interlink Electronics. For registering the behavioral response, a Wooting One keyboard developed by Wooting was used. For running the 3D-engine-based experimental paradigms, a Windows 10 computer with an AMD 5900X CPU and an Nvidia RTX 3090 Ti was used.
As the actual display of the VR environment, an Oculus DK2 headset from Meta Inc. was used, with the photodiode attached next to the display. A USB-based oscilloscope developed by Pico Technology (PicoScope series 2205A) with two probes was used for measuring the changes in voltage. The oscilloscope's sampling frequency was set to 240 kHz. The ground clamps of both probes were connected to the ground pin cable of the Arduino board. As the Oculus DK2's refresh rate is 75 Hz, we band-pass filtered the probe connected to the photodiode to [60, 80] Hz. The sample collection length per trial was set to 200 ms, with 20 ms pre-trigger and 180 ms post-trigger. For the Stim2Disp measurements, the first probe was clamped to the LED diode that would be toggled on and off by the stimulus control code in the 3D engine, while the second probe was clamped to the cable connected to the HMD-attached photodiode. The scope data collection trigger was set to a rising threshold of 1.5 V for Unity and 115 mV for Unreal, with a hysteresis of 5.87%, on the first probe. All scope trigger thresholds were set manually after trial and error for catching the events of interest; the difference in thresholds was due to the probe attenuator settings being changed between the two sets of measurements. The difference in thresholds, however, did not interfere with the trigger adequately capturing the event of interest, which was verified after the measurements by visually inspecting the probe waveforms for peaks from LED and keypress actions. In the Key2Disp and Key2Led measurements, the first probe was clamped to the cable attached to the pressure sensor on the keyboard. Here again, due to different probe attenuator settings, the probe trigger threshold was set to a 450 mV rise with 2.44% hysteresis for Unreal, and a 4.7 V rise for Unity. The second probe was connected to the LED in the Key2Led measurements, and to the photodiode in the Key2Disp measurements. The Arduino was connected to the experiment PC via USB, through which LED trigger communications were sent from the 3D engines via serial communication. For each latency event of interest, we made at least 300 repetitions of the measurement in order to collect a sufficient sample size.
C. Data processing
Data preprocessing and analysis were performed with MATLAB 2021b by MathWorks Inc. As the scope sampling rate was rather high considering our time epochs of interest, the data was first downsampled to 20 kHz. As the data epochs were temporally zero-centered to the triggering event of the first probe, the timing of the events of interest on the second probe (photodiode voltage change, LED power-on) had to be found by peak detection, as the onset of the events of interest would result in significant changes in the probe voltage. In the photodiode measurements this meant voltage troughs that were far greater than for the baseline pixels (as the black and white grids would trigger a greater change in the luminosity of the display, leading to greater voltage changes). For all measurements on the second probe, as we were looking for the latency of the onset of the event of interest, finding the timing of only the first significant detected peak was necessary. Finding the position of the peaks was performed with the findpeaks() function provided in the Signal Processing Toolbox of MATLAB. The resultant peaks were plotted and manually inspected for a sufficient number of trials (>100) in each condition to ensure the function was performing as desired. Once the timings of the events of interest were found, we calculated the latency for the three events of interest.
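For readers without Matlab, the same pipeline (downsample, then locate the first significant deflection on the second probe) can be sketched in Python with SciPy; the sign flip for photodiode troughs and the height threshold are assumptions that would need tuning per probe, as described above.

    import numpy as np
    from scipy.signal import decimate, find_peaks

    def first_event_latency(probe2, fs=240_000, target_fs=20_000,
                            pre_trigger_s=0.020, height=0.1):
        # Latency (s) from the scope trigger (t = 0) to the first
        # significant deflection on the second probe of one trial.
        q = fs // target_fs                 # decimation factor, here 12
        x = decimate(probe2, q)             # anti-aliased downsampling
        t = np.arange(x.size) / target_fs - pre_trigger_s
        # photodiode onsets appear as troughs, so search the negated
        # trace; an LED probe would be searched on the raw trace
        peaks, _ = find_peaks(-x, height=height)
        return t[peaks[0]] if peaks.size else np.nan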
III. Results
From stimulus onset marker code execution to the actual onset of the chess-board grid on the HMD pixels, on Unity Engine there was an average latency of 10.777 ms (SD 0.672), while on Unreal Engine an average latency of 21.059 ms (SD 0.671) was observed. From behavioral keypress onset detection to chess-board grid onset on the HMD, an average latency of 47.026 ms (SD 6.156) was observed on Unity, while 46.682 ms (SD 4.499) was observed on Unreal Engine. In a separate session measuring the latency between physical keypress detection and LED onset upon keypress registration in the 3D engine, we found an average latency of 36.948 ms (SD 4.911) on Unity and 25.161 ms (SD 5.087) on Unreal. Table I shows the summarized results. Figures 6 (Stim2Disp), 7 (Key2Disp), and 8 (Key2Led) each show the probe measurements for all individual trials superimposed in the top plot (with the detected response peaks as black scatter points), and the averaged measurements in the lower plot.
Two-sample t-tests between the two 3D engines for the Stim2Disp condition showed a significant difference in latency between Unity and Unreal (t(df = 641) = 60.537, p < 10^-100, pooled SD = 0.665). Similarly, a significant difference was observed between Unity's and Unreal's latency in the Key2Led condition (t(df = 735) = 31.900, p < 10^-100, pooled SD = 4.991). No significant difference was found in latency between Unity and Unreal for the Key2Disp condition (t(df = 713) = 0.833, p = 0.405, pooled SD = 5.484).
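The reported comparisons correspond to standard pooled-variance two-sample t-tests; a small sketch with placeholder arrays standing in for the extracted per-trial latencies (the group sizes are chosen only so that the degrees of freedom match the first reported test, and the data are simulated, not our measurements):

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    unity = rng.normal(10.777, 0.672, 320)   # placeholder trials
    unreal = rng.normal(21.059, 0.671, 323)  # 320 + 323 - 2 = 641 df

    t, p = ttest_ind(unity, unreal)          # pooled-variance t-test
    pooled_sd = np.sqrt(((unity.size - 1) * unity.var(ddof=1)
                         + (unreal.size - 1) * unreal.var(ddof=1))
                        / (unity.size + unreal.size - 2))
    print(t, p, pooled_sd)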
IV. Discussion
This study aimed to make precise measurements of the latencies that may occur during time-critical cognitive behavioral experiments in 3D-engine-based virtual reality environments. To achieve this, we implemented a bare-bone 3D environment in both Unity and Unreal Engine, two prominent 3D engines that are used to develop virtual reality and 3D scenarios for general purposes. We implemented latency measurements in three different scenarios: a scenario in which a marker event for stimulus presentation was sent and rendered on the display, a scenario in which a physical key press by a participant happened and was registered in the 3D engine, and a scenario in which a key press happened and the resultant feedback occurred on the display. We used oscilloscope probes combined with photodiodes for display events, serial-communication-triggered LEDs for software events, and pressure sensors for physical keypress events in order to make precise timing measurements of when each of these events was occurring.
We first discuss the difference in average latency values between the Key2Disp and Key2Led conditions: although measurements for the two conditions were made separately (as Key2Led involved sending an LED trigger to the Arduino upon the 3D engine's recognition of the key event), considering that our experiments were designed to be as basic as possible in terms of code implementation (as well as the sufficient sample size for each condition), at a glance one would expect the sum of the Stim2Disp and Key2Led conditions to match up with or be somewhat less than the average Key2Disp values. In Unity, the sum of the averages of the two conditions exceeds Key2Disp slightly. We believe this is understandable considering the communication time between the PC software and the actual Arduino interface itself. In a previous study by Schubert et al. [38], downstream communication from an experimental computer to an Arduino was measured to average 1.251 ms with a low standard deviation. Considering the gap between the sum of the mean Key2Led and Stim2Disp latencies and the Key2Disp condition itself, this communication overhead appears to be enough to explain the somewhat larger mean combined latency.
A. Registering behavioral response without Unity or Unreal with LSL
From our current set of results, it appears the largest issue in maintaining a reliable latency for cognitive experiments arises from registering behavioral responses from the user I/O device. As can be observed from the results in the Key2Led condition, in both 3D engines this latency, from the physical response event to its registration in the software stack, is the largest and the most variable.
It is possible that the nature of serial port devices contributes a large part of this variance: parallel-port-connected devices have been known to be favored over serial port connections for participant I/O in timing-critical experimental designs [22], [39], [40]. However, serial port device technology has come a long way, and it has been suggested that the imprecision arising from user input devices may not be as critical as previously believed [24]. In older studies comparing serial and PS/2 devices, serial input devices were reported to have a much higher input latency with high variance [40]. In modern devices, however, the latency gap between serial port based devices and parallel port devices may have become less considerable: response pad hardware specifically used for cognitive experiments, such as Cedrus pads, uses serial USB connections. Furthermore, the keyboard used in our experiment was a mechanical keyboard with optic-based switches for faster input recognition on the hardware's part, along with high polling rates over 100 Hz.
The software stack plays as large a part in the mean and variation of the latency as the hardware stack does. It has been reported that the experimental software framework as well as the operating system can contribute to differently distributed latency and missed frame counts [23]. In this section, we look into lowering the input variance and latency further by utilizing a software stack independent of the 3D engines.
In light of these considerations, we performed another set of measurements, this time using software outside of the 3D engines for key event recognition. Lab Streaming Layer (LSL) [41] is a C++ based system for synchronizing experimental data from multiple sources through a unified clock (see https://github.com/sccn/labstreaminglayer for more information). LSL has been used for cognitive studies involving physiological signal measurements in which timing was critical [42]-[46]. It supports language bindings in multiple programming languages, as well as writing functions for adding custom data sources to the data streaming system. We modified a C++ callback code available in the LSL GitHub repository to catch certain key events and send Arduino LED events similarly to the Key2Led condition, but bypassing the 3D engines for the key event recognition and capturing the events directly at the OS level (see the listing in Fig. 9).
The results from the set of measurements using LSL and a C++ callback function for key events can be seen in Figure 10. With a mean latency of 9.950 ms (SD 1.700) from the physical key press event to the Arduino sensor trigger, we see much lower average latency levels, comparable to older PS/2 devices, as well as more stable variation in the latency. By logging keypress events or participant behavioral input through separately run programs such as LSL, we believe some of the issues regarding input lag variation in experiments using graphics- and compute-heavy 3D engines can be alleviated somewhat.
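For completeness, the marker push itself can also be done from Python via the pylsl bindings; the following minimal sketch is not the code we measured (that is the C++ hook in Fig. 9), and the stream name, source id, and key-event hook are illustrative assumptions.

    from pylsl import StreamInfo, StreamOutlet

    # one irregular-rate string channel carrying behavioral markers
    info = StreamInfo(name="BehaviorMarkers", type="Markers",
                      channel_count=1, nominal_srate=0,
                      channel_format="string", source_id="exp-pc-01")
    outlet = StreamOutlet(info)

    def on_key_event(key_name, pressed):
        # push one marker per key transition; LSL stamps the sample
        # with its synchronized clock at push time
        state = "pressed" if pressed else "released"
        outlet.push_sample([key_name + " " + state])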
B. Stimulus presentation code to auditory stimulus onset delay in Unity
In interactive experiments utilizing VR technology, especially in those that aim to create a naturalistic, immersive experimental environment, it is often worth considering multisensory stimulus presentation. The addition of auditory components to the visual stimulus presentation creates a much more immersive VR simulation. And like visual stimuli, the presentation of auditory stimuli needs to be considerably precise in timing for event-related design experiments measuring physiological and behavioral responses to stimuli. We deemed it worth investigating the latency between the auditory stimulus onset code and the physical propagation of the stimulus sound as well. In the case of Unity, one can utilize the base sound functionality provided by the engine, or use 3rd-party sound engines that are compatible with the 3D engine, such as FMOD (https://www.fmod.com/). While the default sound library from Unity does not provide many tweaking options to optimize for performance, FMOD allows manually setting sound playback buffer sizes. For this study, we used a sound file from [47] as the playback stimulus, either using Unity's default sound library or using FMOD with a buffer size of either 512 or 1024. A line-out cable (3.5 mm M/M) was plugged into the speaker jack of the experimental computer, with the other end connected to the oscilloscope probe, as can be seen in Figure 11. We made measurements similar to the Stim2Disp condition, measuring the latency between the stimulus onset code and the actual propagation of the sound in the sound cable. We report the results in Figure 12. While using FMOD with a buffer size of 512 yielded the best results, we observe that the latency for auditory stimulus presentation is much worse than for visual stimuli, both in mean accuracy and in variation. Based on this observation, we believe caution is warranted when using auditory stimuli, especially in the absence of concurrent visual stimuli.
C. Future works
As Wimmer et al. [25] reported, specific latency measures for an experimental setup are strongly dependent on the specifics of the hardware and software one acquires for the experiment. Considering the continuously developing landscape of VR and its related hardware/software stack, simply measuring the latency of each setup is not only insufficient, but a fruitless endeavor long-term. Instead, it would be more prudent to develop a framework capable of measuring delays for a configurable setup on the go: this is our most immediate next step. Furthermore, the purpose of establishing latency value distributions is to ultimately utilize them in the development of VR-based behavioral experiments in event-related designs, to collect synchronized time-dependent behavioral and physiological data; for our purposes of investigating underlying brain processes, we are especially interested in utilizing these findings to create latency-optimized VR EEG experiments in immersive, naturalistic 3D.
Fig. 1. Diagram of the experimental data collection paradigm trial design.
Fig. 2. Code for sending a trigger to the Arduino in Unity (the HandleTriggers() coroutine shown above).

Fig. 3. Unity code for sending a trigger on behavioral response:

    private void FixedUpdate()
    {
        // updating the speed for the cueball
        mvVec3.Set(mvVec.x, 0.0f, mvVec.y);
        // actual position update and physics
        // are handled internally by Unity
        rb.AddForce(mvVec3 * mvspeed);
        // check for keypresses in FixedUpdate for the fastest response
        // possible (the listing is truncated in the source from this
        // point on; the key check below is a reconstruction, and the
        // exact key and coroutine arguments are assumptions)
        if (Input.GetKeyDown(KeyCode.Space))
        {
            StartCoroutine(HandleTriggers(grid1, grid2));
        }
    }
Fig. 5. Circuit diagram of the latency measuring apparatus and setup pictures.
Fig. 6. Latency measured between the stimulus onset marker execution code and the actual stimulus onset pixel change in the HMD (Stim2Disp), for both Unity and Unreal Engine. The higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab's findpeaks() function. The lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe.
Fig. 7. Latency measured between the keyboard press behavior and the feedback stimulus onset on the HMD pixels in the 3D engine (Key2Disp), for both Unity and Unreal Engine. The higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab's findpeaks() function. The lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe.
Fig. 8. Latency measured between the keyboard press behavior and the marker execution code for keyboard event recognition in the 3D engine (Key2Led), for both Unity and Unreal Engine. The higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab's findpeaks() function. The lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe.
Fig. 10. Latency from keyboard input to key event registration in the C++ LSL code. The higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab's findpeaks() function. The lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe.
Fig. 11. Circuit diagram of the latency measuring apparatus and setup pictures.
Acknowledgments
This study was supported by the National Research Foundation of Korea under project BK21 FOUR and grants NRF-2022R1A2C2092118, NRF-2022R1H1A2092007, and NRF-2019R1A2C2007612, as well as by Institute of Information & Communications Technology Planning & Evaluation (IITP) grants funded by the Korea government (No. 2017-0-00451, Development of BCI based Brain and Cognitive Computing Technology for Recognizing User's Intentions using Deep Learning; No. 2019-0-00079, Department of Artificial Intelligence, Korea University; No. 2021-0-02068, Artificial Intelligence Innovation Hub).

Fig. 12. Latency from stimulus onset code to the actual auditory stimulus propagation. The higher plot in each set of plots shows all measurements from the two probes superimposed, along with the delayed response peaks determined by Matlab's findpeaks() function. The lower plot shows the average waveforms, as well as the average of the automatically detected first onset of peaks in the second probe.
Fig. 4. Code for sending an Arduino trigger in Unreal:

    void Aball3d_426Ball::TriggerStim()
    {
        FTimerHandle dispTrigger;
        // send trigger to arduino
        if (!WriteFile(hSerial, &ledcode, 1, &bytesw, 0))
        {
            // In case it don't work, get comm error and return false
            std::cout << "Writefile failed" << std::endl;
        }
        //! Stim1TimerFlag
        GridPlane1->SetVisibility(true);
        Stim1TimerFlag = true;
        lastRender = FDateTime::Now();
        lastRender64 = lastRender.ToUnixTimestamp();
        lastRender32 = lastRender.GetMillisecond();
    }
TABLE I. Average and standard deviation values for the measured latency in all conditions, for both 3D engines, shown in milliseconds.
Fig. 9. Code for sending markers after catching key events with LSL rather than Unreal or Unity:

    // LSL outlet definition
    static lsl::stream_outlet *outlet = nullptr;
    // Keyboard hook
    static HHOOK kbdHook = nullptr;
    static bool isPressed[256] = {0};
    // const WCHAR FileFullPath[] = { L"COM4" };
    HANDLE hSerial;
    char ledcode;
    DCB dcbSerialParams = { 0 }; // FILE_ATTRIBUTE_NORMAL
    DWORD byteswritten;

    LRESULT CALLBACK keyboard_callback(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code >= 0 && outlet) {
            unsigned char key = 0;
            switch (wParam) {
            case WM_KEYDOWN:
            case WM_SYSKEYDOWN:
                key = ((KBDLLHOOKSTRUCT *)lParam)->vkCode & 0xFF;
                if (!isPressed[key]) {
                    std::string evstr{ key_names[key] + " pressed" };
                    // push key event to LSL stream
                    outlet->push_sample(&evstr);
                    std::cout << evstr << std::endl;
                    isPressed[key] = true;
                    // send LED trigger to Arduino
                    if (!WriteFile(hSerial, &key, 1, &byteswritten, 0)) {
                        // In case it don't work, get comm error and return false
                        std::cout << "Writefile failed" << std::endl;
                    }
                }
                break;
            case WM_KEYUP:
            case WM_SYSKEYUP:
                key = ((KBDLLHOOKSTRUCT *)lParam)->vkCode & 0xFF;
                {
                    std::string evstr{ key_names[key] + " released" };
                    // push key release event to LSL outlet
                    outlet->push_sample(&evstr);
                }
                isPressed[key] = false;
                break;
            default:;
            }
        }
        return CallNextHookEx(kbdHook, code, wParam, lParam);
    }
| [] |
[
"The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation",
"The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation"
] | [
"Saurabh Saxena ",
"Charles Herrmann irwinherrmann@google.com ",
"Junhwa Hur junhwahur@google.com ",
"Abhishek Kar ",
"Mohammad Norouzi ",
"Deqing Sun deqingsun@google.com ",
"David J Fleet davidfleet@google.com "
] | [] | [] | Denoising diffusion probabilistic models have transformed image generation with their impressive fidelity and diversity. We show that they also excel in estimating optical flow and monocular depth, surprisingly, without task-specific architectures and loss functions that are predominant for these tasks. Compared to the point estimates of conventional regression-based methods, diffusion models also enable Monte Carlo inference, e.g., capturing uncertainty and ambiguity in flow and depth. With self-supervised pre-training, the combined use of synthetic and real data for supervised training, and technical innovations (infilling and step-unrolled denoising diffusion training) to handle noisy, incomplete training data, plus a simple form of coarse-to-fine refinement, one can train state-of-the-art diffusion models for depth and optical flow estimation. Extensive experiments focus on quantitative performance against benchmarks, ablations, and the model's ability to capture uncertainty and multimodality, and to impute missing values. Our model, DDVM (Denoising Diffusion Vision Model), obtains a state-of-the-art relative depth error of 0.074 on the indoor NYU benchmark and an Fl-all outlier rate of 3.26% on the KITTI optical flow benchmark, about 25% better than the best published method. For an overview see diffusion-vision.github.io | null | [
"https://export.arxiv.org/pdf/2306.01923v1.pdf"
] | 259,075,602 | 2306.01923 | 842182174ebd9e070101c85aa16c0818e5363c42 |
The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation
Saurabh Saxena
Charles Herrmann irwinherrmann@google.com
Junhwa Hur junhwahur@google.com
Abhishek Kar
Mohammad Norouzi
Deqing Sun deqingsun@google.com
David J Fleet davidfleet@google.com
The Surprising Effectiveness of Diffusion Models for Optical Flow and Monocular Depth Estimation
Google DeepMind and Google Research
Denoising diffusion probabilistic models have transformed image generation with their impressive fidelity and diversity. We show that they also excel in estimating optical flow and monocular depth, surprisingly, without the task-specific architectures and loss functions that are predominant for these tasks. Compared to the point estimates of conventional regression-based methods, diffusion models also enable Monte Carlo inference, e.g., capturing uncertainty and ambiguity in flow and depth. With self-supervised pre-training, the combined use of synthetic and real data for supervised training, and technical innovations (infilling and step-unrolled denoising diffusion training) to handle noisy, incomplete training data, plus a simple form of coarse-to-fine refinement, one can train state-of-the-art diffusion models for depth and optical flow estimation. Extensive experiments focus on quantitative performance against benchmarks, ablations, and the model's ability to capture uncertainty and multimodality, and to impute missing values. Our model, DDVM (Denoising Diffusion Vision Model), obtains a state-of-the-art relative depth error of 0.074 on the indoor NYU benchmark and an Fl-all outlier rate of 3.26% on the KITTI optical flow benchmark, about 25% better than the best published method. For an overview see diffusion-vision.github.io

* DF is also affiliated with the University of Toronto and the Vector Institute. This work is an extension of "Monocular Depth Estimation using Diffusion Models". Preprint. Under review.

Figure 1: Examples of multi-modal prediction on depth (NYU) and optical flow (Sintel and KITTI). Each row shows an input image (or two overlaid images for optical flow), a variance heat map from 8 samples, and 3 individual samples. Our model produces multi-modal samples in uncertain or ambiguous cases, such as reflective (e.g. mirror on NYU), transparent (e.g. vehicle window on KITTI), and translucent (e.g. fog on Sintel) regions. High variance also occurs near object boundaries due to inaccurate estimates, which are often challenging cases for optical flow, and which partially originate from noisy ground truth measurements for depth. See Figures 8, 9, 10 and 11 for more examples.

One key barrier to training useful diffusion models for monocular depth and optical flow inference concerns the amount and quality of available training data. Given the limited availability of labelled training data, we propose a training pipeline comprising multi-task self-supervised pre-training followed by supervised pre-training using a combination of real and synthetic data. Multi-task self-supervised pre-training leverages the strong performance of diffusion models on tasks like colorization and inpainting [e.g., 52]. We also find that supervised (pre-)training with a combination of real and large-scale synthetic data improves performance significantly.

A further issue concerns the fact that many existing real datasets for depth and optical flow have noisy and incomplete ground truth annotations. This presents a challenge for the conventional training framework and iterative sampling in diffusion models, leading to a problematic distribution shift between training and inference. To mitigate these issues we propose the use of an L1 loss for robustness, infilling of missing depth values during training, and the introduction of step-unrolled denoising diffusion. These elements of the model are shown through ablations to be important for both depth and flow estimation.

Our contributions are as follows:

1. We formulate optical flow and monocular depth estimation as image-to-image translation with generative diffusion models, without specialized loss functions and model architectures.
2. We identify and propose solutions to several important issues w.r.t. data. For both tasks, to mitigate distribution shift between training and inference with noisy, incomplete data, we propose infilling, step-unrolling, and an L1 loss during training. For flow, to improve generalization, we introduce a new dataset mixture for pre-training, yielding a RAFT [72] baseline that outperforms all published methods in zero-shot performance on the Sintel and KITTI training benchmarks.
3. Our diffusion model is competitive with or surpasses SOTA for both tasks. For monocular depth estimation we achieve a SOTA relative error of 0.074 on the NYU dataset and perform competitively on KITTI. For flow, diffusion surpasses the stronger RAFT baseline by a large margin in pre-training, and our fine-tuned model achieves an Fl-all outlier rate of 3.26% on the public KITTI test benchmark, ∼25% lower than the best published method [68].
4. Our diffusion model is also shown to capture flow and depth uncertainty, and the iterative denoising process enables zero-shot, coarse-to-fine refinement and imputation.
Introduction
Diffusion models have emerged as powerful generative models for high fidelity image synthesis, capturing rich knowledge about the visual world [19,46,53,60]. However, at first glance, it is unclear whether these models can be as effective on many classical computer vision tasks. For example, consider two dense vision estimation tasks, namely, optical flow, which estimates frame-toframe correspondences, and monocular depth perception, which makes depth predictions based on a single image. Both tasks are usually treated as regression problems and addressed with specialized architectures and task-specific loss functions, e.g., cost volumes, feature warps, or suitable losses for depth. Without these specialized components or the regression framework, general generative techniques may be ill-equipped and vulnerable to both generalization and performance issues.
In this paper, we show that these concerns, while valid, can be addressed and that, surprisingly, a generic, conventional diffusion model for image-to-image translation works impressively well on both tasks, often outperforming the state of the art. In addition, diffusion models provide valuable benefits over networks trained with regression; in particular, diffusion allows for approximate inference with multi-modal distributions, capturing uncertainty and ambiguity (e.g. see Figure 1).

Figure 2: Training architecture. Given ground truth flow/depth, we first infill missing values using interpolation. Then, we add noise to the label map and train a neural network to model the conditional distribution of the noise given the RGB image(s), noisy label, and time step. One can optionally unroll the denoising step(s) during training (with stop gradient) to bridge the distribution gap between training and inference for the noisy latent yt.
Related work

Optical flow and depth estimation have been extensively studied. Here we briefly review only the most relevant work, and refer the interested reader to the references cited therein.

Optical flow. The predominant approach to optical flow is regression-based, with a focus on specialized network architectures that exploit domain knowledge, e.g., cost volume construction [12,20,21,36,66,79,81,83], coarse-to-fine estimation [66,75,80], occlusion handling [22,25,65], or iterative refinement [23,24,72], as evidenced by public benchmark datasets [4,42]. Some recent work has also advocated for generic architectures: Perceiver IO [26] introduces a generic transformer-based model that works for any modality, including optical flow and language modeling. Regression-based methods, however, only give a single prediction of the optical flow and do not readily capture uncertainty or ambiguity in the flow. Our work introduces a surprisingly simple, generic architecture for optical flow using a denoising diffusion model.
We find that this generic generative model is surprisingly effective for optical flow, recovering fine details on motion boundaries while capturing the multi-modality of the motion distribution.

Monocular depth. Monocular depth estimation has been a long-standing problem in computer vision [56,57], with recent progress focusing on specialized loss functions and architectures [1,5,13,29] such as the use of multi-scale networks [10,11], adaptive binning [3,33] and weighted scale-shift-invariant losses [11]. Large-scale in-domain pre-training has also been effective for depth estimation [47,48,50], which we find to be the case here as well. We build on this rich literature, but with a simple, generic architecture, leveraging recent advances in generative models.
Diffusion models. Diffusion models are latent-variable generative models trained to transform a sample of a Gaussian noise into a sample from a data distribution [19,60]. They comprise a forward process that gradually annihilates data by adding noise, as 'time' t increases from 0 to 1, and a learned generative process that reverses the forward process, starting from a sample of random noise at t = 1 and incrementally adding structure (attenuating noise) as t decreases to 0. A conditional diffusion model conditions the steps of the reverse process (e.g., on labels, text, or an image).
Central to the model is a denoising network $f_\theta$ that is trained to take a noisy sample $y_t$ at some time step $t$, along with a conditioning signal $x$, and predict a less noisy sample. Using Gaussian noise in the forward process, one can express the training objective over the sequence of transitions (as $t$ slowly decreases) as a sum of non-linear regression objectives, with the L2 loss (here with the $\epsilon$-parameterization):

$$\mathbb{E}_{(x,\,y)}\,\mathbb{E}_{(t,\,\epsilon)}\,\Big\| f_\theta\big(x,\ \underbrace{\sqrt{\gamma_t}\, y + \sqrt{1-\gamma_t}\,\epsilon}_{y_t},\ t\big) - \epsilon \Big\|_2^2 \tag{1}$$

where $\epsilon \sim \mathcal{N}(0, I)$, $t \sim \mathcal{U}(0, 1)$, and where $\gamma_t > 0$ is computed with a pre-determined noise schedule. For inference (i.e., sampling), one draws a random noise sample $y_1$, and then iteratively uses $f_\theta$ to estimate the noise, from which one can compute the next latent sample $y_s$, for $s < t$.
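To make the sampling loop concrete, one reverse step can be sketched in plain Python/NumPy as below. This is an illustrative DDPM-style ancestral step under the ϵ-parameterization above, not the exact sampler used in our experiments; denoise_fn and gamma are placeholders for the trained network f_θ and its noise schedule.

import numpy as np

def ddpm_reverse_step(denoise_fn, x, y_t, t, s, gamma, rng):
    # One ancestral sampling step from time t to s < t (epsilon-parameterization).
    g_t, g_s = gamma(t), gamma(s)
    eps = denoise_fn(x, y_t, t)                               # predicted noise
    y0_hat = (y_t - np.sqrt(1.0 - g_t) * eps) / np.sqrt(g_t)  # implied clean map
    y0_hat = np.clip(y0_hat, -1.0, 1.0)                       # targets live in [-1, 1]
    alpha = g_t / g_s                                         # per-step signal ratio
    # mean and variance of the Gaussian posterior q(y_s | y_t, y0_hat)
    mean = (np.sqrt(g_s) * (1.0 - alpha) * y0_hat
            + np.sqrt(alpha) * (1.0 - g_s) * y_t) / (1.0 - g_t)
    var = (1.0 - alpha) * (1.0 - g_s) / (1.0 - g_t)
    return mean + np.sqrt(var) * rng.standard_normal(y_t.shape) if s > 0 else y0_hat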
Self-supervised pre-training. Prior work has shown that self-supervised tasks such as colorization [31,84] and masked prediction [78] serve as effective pre-training for downstream vision tasks. Our work also confirms the benefit of self-supervised pre-training [52] for diffusion-based image-to-image translation, establishing a new SOTA on optical flow estimation and monocular depth estimation while also representing multi-modality and supporting zero-shot coarse-to-fine refinement and imputation.
Model Framework
In contrast to conventional monocular depth and optical flow methods, which make rich use of specialized domain knowledge in their architecture designs, we introduce simple, generic architectures and loss functions. We replace the inductive bias of state-of-the-art architectures and losses with a powerful generative model, along with a combination of self-supervised pre-training and supervised training on both real and synthetic data.
The denoising diffusion model (Figure 2) takes a noisy version of the target map (i.e., a depth map or flow field) as input, along with the conditioning signal (one RGB image for depth and two RGB images for flow). The denoiser effectively provides a noise-free estimate of the target map (i.e., ignoring the specific loss parameterization used). The training loss penalizes residual error in the denoised map, which is quite distinct from the typical image reconstruction losses used in optical flow estimation.
Synthetic pre-training data and generalization
Given that we train these models with a generic denoising objective, without task-specific inductive biases in the form of specialized architectures, the choice of training data becomes critical. Below we discuss the datasets used and their contributions in detail. Because training data with annotated ground truth is limited for many dense vision tasks, here we make extensive use of synthetic data in the hope that the geometric properties acquired from synthetic data during training will transfer to different domains, including natural images.
AutoFlow [67] has recently emerged as a powerful synthetic dataset for training flow models. We were surprised to find that training on AutoFlow alone is insufficient, as the diffusion model appears to devote a significant fraction of its representation capacity to represent the shapes of AutoFlow regions, rather than solving for correspondence. As a result, models trained on AutoFlow alone exhibit a strong bias to generate flow fields with polygonal shaped regions, much like those in AutoFlow, often ignoring the shapes of boundaries in the two-frame RGB inputs (e.g. see Figure 3).
To mitigate bias induced by AutoFlow in training, we further mix in three synthetic datasets during training, namely, FlyingThings3D [38], Kubric [17] and TartanAir [74]. Given a model pre-trained on AutoFlow, for compute efficiency, we use a greedy mixing strategy where we fix the relative ratio of the previous mixture and tune the proportion of the newly added dataset. We leave further exploration of an optimal mixing strategy to future work. Zero-shot testing of the model on Sintel and KITTI (see Table 1 and Fig. 3) shows substantial performance gains with each additional synthetic dataset.
We find that pre-training is similarly important for depth estimation (see Table 7). We learn separate indoor and outdoor models. For the indoor model we pre-train on a mix of ScanNet [7] and SceneNet RGB-D [39]. The outdoor model is pre-trained on the Waymo Open Dataset [69].
Algorithm 1 Denoising diffusion train step with infilling and step unrolling
1:  x ← conditioning images, y ← flow or depth map, mask ← binary mask of known values
2:  t ∼ U(0, 1), ϵ ∼ N(0, 1)
3:  y = fill_holes_with_interpolation(y)
4:  y_t = √γ_t · y + √(1 − γ_t) · ϵ
5:  if unroll_step then
6:      ϵ_pred = stop_gradient(f_θ(x, y_t, t))
7:      y_pred = (y_t − √(1 − γ_t) · ϵ_pred) / √γ_t
8:      y_t = √γ_t · y_pred + √(1 − γ_t) · ϵ
9:      ϵ = (y_t − √γ_t · y) / √(1 − γ_t)
10: end if
11: ϵ_pred = f_θ(x, y_t, t)
12: loss = reduce_mean(|ϵ − ϵ_pred|[mask])
3.2 Real data: Challenges with noisy, incomplete ground truth

Ground truth annotations for real-world depth or flow data are often sparse and noisy, due to highly reflective surfaces, light-absorbing surfaces [63], dynamic objects [41], etc. While regression-based methods can simply compute the loss on pixels with valid ground truth, corruption of the training data is more challenging for diffusion models. Diffusion models perform inference through iterative refinement of the target map y conditioned on RGB image data x. Inference starts with a sample of Gaussian noise y_1 and terminates with a sample from the predictive distribution p(y_0 | x). A refinement step from time t to s, with s < t, proceeds by sampling from the parameterized distribution p_θ(y_s | y_t, x); i.e., each step operates on the output from the previous step. During training, however, the denoising steps are decoupled (see Eqn. 1): the denoising network operates on a noisy version of the ground truth map instead of the output of the previous iteration (reminiscent of teacher forcing in RNN training [77]). Thus there is a distribution shift between the marginals over the noisy target maps during training and inference, because the ground truth maps have missing annotations and heavy-tailed sensor noise while the noisy maps obtained from the previous time step at inference should not. This distribution shift has a very negative impact on model performance. Nevertheless, we find that the problems can be mitigated effectively with the following modifications during training.
Infilling. One way to reduce the distribution shift is to impute the missing ground truth. We explored several ways to do this, including simple interpolation schemes and inference using our model (trained with nearest neighbor interpolation). We find that nearest neighbor interpolation is sufficient to impute missing values in the ground truth depth maps and flow fields in the training data.

Despite the imputation of missing ground truth depth and flow values, note that the training loss is only computed and backpropagated from pixels with known (not infilled) ground truth. We refer to this as the masked denoising loss (see Figure 2).
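As a concrete illustration, nearest neighbor infilling can be implemented with SciPy's Euclidean distance transform, as sketched below. This is one plausible implementation of the interpolation described above, not our exact code; note that the loss mask still uses the original validity map.

import numpy as np
from scipy.ndimage import distance_transform_edt

def infill_nearest(values, valid):
    # Fill invalid pixels with the value of the nearest valid pixel.
    # values: (H, W) or (H, W, C) map; valid: (H, W) boolean mask of known pixels.
    idx = distance_transform_edt(~valid, return_distances=False, return_indices=True)
    return values[tuple(idx)] if values.ndim == 2 else values[idx[0], idx[1]]

# The masked denoising loss still ignores infilled pixels:
# loss = np.mean(np.abs(eps - eps_pred)[valid])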
Step-unrolled denoising diffusion training. A second way to mitigate the distribution shift in the y_t marginals between training and inference is to construct y_t from model outputs rather than ground truth maps. One can do this by slightly modifying the training procedure (see Algorithm 1) to run one forward pass of the model and build y_t by adding noise to the model's output rather than to the training map. We do not propagate gradients through this forward pass. This process, called step-unrolled denoising diffusion, slows training only marginally (∼15% on a TPU v4). Interestingly, this problem of training/inference distribution shift resembles that of exposure bias [49] in autoregressive models, where the mismatch is caused by teacher forcing during training [77]. Several solutions have been proposed for this problem in the literature [2,30,82].
Step-unrolled denoising diffusion also closely resembles the approach in [55] for training denoising autoencoders on text.
We only perform step-unrolled denoising diffusion during model fine-tuning. Early in training the denoising predictions are inaccurate, so the latent marginals over the noisy target maps built from ground truth will be closer to the desired true marginals than those produced by adding noise to denoiser network outputs. One might consider a curriculum for gradually introducing step-unrolled denoising diffusion in the later stages of supervised pre-training, but this introduces additional hyper-parameters, so we simply invoke step-unrolled denoising diffusion during fine-tuning and leave an exploration of curricula to future work.

L1 denoiser loss. While the L2 loss in Eqn. 1 is ideal for Gaussian noise and noise-free ground truth maps, in practice real ground truth depth and flow fields are noisy and heavy-tailed, e.g., for distant objects, near object boundaries, and near pixels with missing annotations. We hypothesize that the robustness afforded by the L1 loss may therefore be useful in training the neural denoising network. (See Tables 10 and 11 in the supplementary material for an ablation of the loss function for monocular depth estimation.)
Coarse-to-fine refinement
Training high-resolution diffusion models is often slow and memory intensive, but estimation accuracy has been shown to improve with resolution [16]. A simple solution is to perform inference in a coarse-to-fine manner, first estimating the flow over the entire field of view at low resolution, and then refining the estimates in a patch-wise manner. For refinement we first up-sample the low-resolution map to the target resolution using bicubic interpolation. Patches are cropped from the up-scaled map, denoted z, along with the corresponding RGB inputs. Then we run diffusion model inference starting at time $t'$ with a noisy map

$$y_{t'} \sim \mathcal{N}\big(y_{t'};\ \sqrt{\gamma_{t'}}\, z,\ (1 - \gamma_{t'})\, I\big).$$

For simplicity, $t'$ is a fixed hyper-parameter, set based on a validation set. This process is carried out for multiple overlapping patches. Following Perceiver IO [26], the patch estimates are then merged using weighted masks, with lower weight near the patch boundaries since predictions there are more prone to errors.
(See Section H.5 for more details.)
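A minimal sketch of this patch-wise refinement is given below, assuming a helper run_diffusion_from(rgb, y_init, gamma_tp) that runs the reverse process from the intermediate time t'; the patch grid, overlap, and raised-cosine weight mask are illustrative choices rather than our exact settings.

import numpy as np

def _positions(total, size, stride):
    # patch start offsets; the last patch is kept flush with the image edge
    pos = list(range(0, max(total - size, 0) + 1, stride))
    if pos[-1] != total - size:
        pos.append(total - size)
    return pos

def coarse_to_fine_refine(run_diffusion_from, rgb, coarse_map, gamma_tp,
                          patch=(320, 448), stride=(240, 336), rng=None):
    # Refine an up-sampled coarse map patch-wise with a partial reverse process.
    rng = rng or np.random.default_rng()
    H, W = coarse_map.shape[:2]
    out = np.zeros_like(coarse_map, dtype=np.float64)
    wsum = np.zeros((H, W, 1))
    # separable raised-cosine weights, downweighting patch borders
    wy = 0.5 - 0.5 * np.cos(2 * np.pi * (np.arange(patch[0]) + 0.5) / patch[0])
    wx = 0.5 - 0.5 * np.cos(2 * np.pi * (np.arange(patch[1]) + 0.5) / patch[1])
    w = (wy[:, None] * wx[None, :])[..., None] + 1e-6
    for top in _positions(H, patch[0], stride[0]):
        for left in _positions(W, patch[1], stride[1]):
            sl = (slice(top, top + patch[0]), slice(left, left + patch[1]))
            z = coarse_map[sl]
            # forward-diffuse the coarse estimate to the intermediate time t'
            y_tp = (np.sqrt(gamma_tp) * z
                    + np.sqrt(1.0 - gamma_tp) * rng.standard_normal(z.shape))
            out[sl] += w * run_diffusion_from(rgb[sl], y_tp, gamma_tp)
            wsum[sl] += w
    return out / wsum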
Experiments
As our denoiser backbone, we adopt the Efficient UNet architecture [53], pretrained with Palette-style [52] self-supervised pretraining, and slightly modified to have the appropriate input and output channels for each task. Since diffusion models expect inputs and generate outputs in the range [−1, 1], we normalize depths using a max depth of 10 meters and 80 meters for the indoor and outdoor models, respectively. We normalize the flow using the height and width of the ground truth. Refer to Section H for more details on the architecture, augmentations and other hyper-parameters.
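The normalization can be made concrete with the helpers below; mapping depth to [−1, 1] via the task-specific max depth and dividing flow components by the image width and height is our reading of the description above, so treat the exact mapping (including the (u, v) channel order) as an assumption.

import numpy as np

def normalize_depth(depth_m, max_depth):
    # map metric depth in [0, max_depth] to [-1, 1]
    return np.clip(depth_m / max_depth, 0.0, 1.0) * 2.0 - 1.0

def normalize_flow(flow, height, width):
    # scale pixel-displacement flow (H, W, 2) by image size; (u, v) order assumed
    return flow / np.array([width, height], dtype=np.float64)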
Optical flow. We pre-train on the mixture described in Section 3.1 at a resolution of 320×448 and report zero-shot results on the widely used Sintel [4] and KITTI [42] datasets. We further fine-tune this model on the standard mixture consisting of AutoFlow [67], FlyingThings [38], VIPER [51], HD1K [28], Sintel and KITTI at a resolution of 320×768 and report results on the test set from the public benchmark. We use a standard average end-point error (AEPE) metric that calculates L2 distance between ground truth and prediction. On KITTI, we additionally use the outlier rate, Fl-all, which reports the outlier ratio in % among all pixels with valid ground truth, where an estimate is considered as an outlier if its error exceeds 3 pixels and 5% w.r.t. the ground truth.
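For reference, both flow metrics follow directly from these definitions, as in the sketch below (with a validity mask to handle sparse KITTI ground truth).

import numpy as np

def flow_metrics(pred, gt, valid):
    # pred, gt: (H, W, 2) flow fields; valid: (H, W) boolean mask
    epe = np.linalg.norm(pred - gt, axis=-1)[valid]  # per-pixel endpoint error
    mag = np.linalg.norm(gt, axis=-1)[valid]         # ground-truth flow magnitude
    aepe = epe.mean()
    outliers = (epe > 3.0) & (epe > 0.05 * mag)      # > 3 px and > 5% of magnitude
    fl_all = 100.0 * outliers.mean()                 # outlier rate in %
    return aepe, fl_all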
Depth. We separately pre-train indoor and outdoor models on the respective pre-training datasets described in Section 3.1. The indoor depth model is then fine-tuned and evaluated on the NYU depth v2 dataset [59], and the outdoor model on the KITTI depth dataset [15]. We follow the standard evaluation protocol used in prior work [33]. For both NYU depth v2 and KITTI, we report the absolute relative error (REL), root mean squared error (RMS) and the accuracy metric δ1 (fraction of pixels whose predicted-to-true depth ratio, in either direction, is below 1.25).
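Analogously, the depth metrics can be computed as follows (masking to pixels with valid ground truth; the δ1 threshold follows the usual convention).

import numpy as np

def depth_metrics(pred, gt, valid):
    p, g = pred[valid], gt[valid]
    rel = np.mean(np.abs(p - g) / g)                    # absolute relative error (REL)
    rms = np.sqrt(np.mean((p - g) ** 2))                # root mean squared error (RMS)
    delta1 = np.mean(np.maximum(p / g, g / p) < 1.25)   # accuracy delta_1
    return rel, rms, delta1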
Evaluation on benchmark datasets
Depth. Table 3 reports the results on NYU depth v2 and KITTI (see Section D for more detailed results and Section B for qualitative comparison with DPT on NYU). We achieve a state-of-the-art absolute relative error of 0.074 on NYU depth v2. On KITTI, our method performs competitively with prior work. We report results with averaging depth maps from one or more samples. Note that most prior works use post processing that averages two samples, one from the input image, and the other based on its reflection about the vertical axis.
Flow. Table 1 reports the zero-shot results of our model on the Sintel and KITTI Train datasets, where ground truth is provided. The model is trained on our newly proposed pre-training mixture (Section 3.1).

Table 5: Coarse-to-fine refinement improves zero-shot optical flow estimation results on Sintel and KITTI, along with the qualitative improvements shown in Figure 6. Our method demonstrates finer details on both object and motion boundaries. Especially on KITTI, our model recovers fine details remarkably well, e.g. on trees and the layered motion between tree and background.
We further fine-tune our model on a mixture of the following datasets: AutoFlow, FlyingThings, HD1K, KITTI, Sintel, and VIPER. Table 2 reports the comparison to state-of-the-art optical flow methods on the public benchmark datasets, Sintel and KITTI. On KITTI, our method outperforms all existing optical flow methods by a substantial margin (even most scene flow methods that use stereo inputs), and sets a new state of the art. On the challenging Sintel final pass, our method is competitive with other state-of-the-art models. Except for methods using warm-start strategies, our method is only behind FlowFormer, which, unlike our generic model, adopts strong domain knowledge for optical flow (e.g. cost volumes, iterative refinement, and attention layers for larger context). Interestingly, we find that our model outperforms FlowFormer on 11/12 Sintel test sequences, and our worse overall performance can be attributed to a much higher AEPE on a single (possibly out-of-distribution) test sequence. We discuss this in more detail in Section I. On KITTI, our diffusion model outperforms FlowFormer by a large margin (30.34%).
Ablation study
Infilling and step-unrolling. We study the effect of infilling and step-unrolling in Table 4. For depth, we report results for fine-tuning our pre-trained model on the NYU and KITTI datasets with the same resolution and augmentations as our best results. For flow, we fine-tune on the KITTI train set alone (with nearest neighbor resizing to the target resolution being the only augmentation) at a resolution of 320×448 and report metrics on the KITTI val set [37]. We report results with a single sample and no coarse-to-fine refinement. We find that training on raw sparse data without infilling and step unrolling leads to poor results, especially on KITTI where the ground truth is quite sparse.
Step-unrolling helps to stabilize training without requiring any extra data pre-processing. However, we find that most of the gains come from interpolating missing values in the sparse labels. Infilling and step-unrolling compose well: our best results use both, since infilling (being an approximation) does not completely bridge the training/inference distribution shift of the noisy latent.
Coarse-to-fine refinement. Figure 6 shows that coarse-to-fine refinement (Section 3.3) substantially improves fine-grained details in estimated optical flow fields. It also improves the metrics for zero-shot optical flow estimation on both KITTI and Sintel, as shown in Table 5.
Datasets. When using different mixtures of datasets for pretraining, we find that diffusion models sometimes capture region boundaries and shape at the expense of local textural variation (e.g. see Figure 3). The model trained solely on AutoFlow tends to provide very coarse flow, and mimics the object shapes found in AutoFlow. The addition of FlyingThings, Kubric, and TartanAir removes this hallucination and significantly improves the fine details in the flow estimates (e.g., shadows, trees, thin structures, and motion boundaries), together with a substantial boost in accuracy (cf. Table 6). Similarly, we find that mixing in SceneNet RGB-D [39], a synthetic dataset, along with ScanNet [7] provides a performance boost for fine-tuning results on NYU depth v2, as shown in Table 7.

Figure 6: Visual results with and without coarse-to-fine refinement. For our pretrained model, refinement helps correct wrong flow and adds details to correct flow.

Figure 7: Application of zero-shot depth completion with our model by incorporating it into an iterative 3D scene generation pipeline. Starting with an initial image (optionally generated from a text-to-image model), we sample an image-only-conditioned depth map using our model. The image-depth pair is added to a point cloud. We then iteratively render images and depth maps (with holes) from this point cloud by moving the camera. We then fill image holes using an existing image inpainter (optionally text conditioned), and then use our model with replacement guidance to impute missing depths (conditioned on the filled RGB image and known depth).
Interesting properties of diffusion models
Multimodality. One strength of diffusion models is their ability to capture complex multimodal distributions. This can be effective in representing uncertainty, especially where natural ambiguities admit multiple plausible predictions, e.g. for transparent, translucent, or reflective surfaces. Figure 1 presents multiple samples on the NYU, KITTI, and Sintel datasets, showing that our model captures multimodality and provides plausible samples when ambiguities exist. More details and examples are available in Section A.
Imputation of missing labels. A diffusion model trained to model the conditional distribution p(y | x) can be leveraged zero-shot to sample from p(y | x, y_partial), where y_partial is the partially known label. One approach for doing this, known as the replacement method for conditional inference [61], is to replace the known portion of the latent y_t at each inference step with the noisy latent built by applying the forward process to the known label. We qualitatively study the results of leveraging replacement guidance for depth completion and find it to be surprisingly effective. We illustrate this by building a pipeline for iteratively generating 3D scenes (conditioned on a text prompt), as shown in Figure 7, leveraging existing models for text-to-image generation and text-conditional image inpainting. While a more thorough evaluation of depth completion and novel view synthesis against existing methods is warranted, we leave that exploration to future work. (See Section C for more details and examples.)
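The replacement step itself is tiny; a sketch of the operation applied inside each reverse iteration is shown below (helper names and the way the schedule is passed in are assumptions for illustration).

import numpy as np

def replace_known(y_t, y_known, known_mask, gamma_t, rng):
    # Replacement guidance: overwrite the known region of the latent y_t with a
    # forward-diffused version of the known label at the same time step.
    eps = rng.standard_normal(y_known.shape)
    y_known_t = np.sqrt(gamma_t) * y_known + np.sqrt(1.0 - gamma_t) * eps
    return np.where(known_mask[..., None], y_known_t, y_t)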
Limitations. We adopt standard practices from image-generation models, leading to larger models and slower running times than RAFT. However, we are excited by the recent progress on progressive distillation [40,54] and consistency models [62] to improve inference speed in diffusion models.
Conclusion
We introduced a simple denoising diffusion model for monocular depth and optical flow estimation using an image-to-image translation framework. Our generative approach obtains state-of-the-art results without task-specific architectures or loss functions. In particular, our model achieves an Fl-all score of 3.26% on KITTI, about 25% better than the best published method [68]. Further, our model captures the multi-modality and uncertainty through multiple samples from the posterior. It also allows imputation of missing values, which enables iterative generation of 3D scenes conditioned on a text prompt. Our work suggests that diffusion models could be a simple and generic framework for dense vision tasks, and we hope to see more work in this direction.
A Multimodal prediction
We provide more qualitative examples for multimodal prediction. Figures 8 and 9 illustrate multimodal depth predictions on NYU and KITTI respectively. Multimodality of the posterior distribution exists in regions where there are multiple plausible predictions. For example, this includes reflective and transparent surfaces (mirrors and glass surfaces in rows 1 to 5 of Figure 8 and windows of cars in Figure 9). We further find that the model captures uncertainty in depth estimates in the vicinity of object boundaries, some of which arise due to noise in ground truth measurements in the training data. This can be observed at the boundaries of cars in Figure 9 and around edges of objects in Figure 8 (most clearly visible in the last row). Figure 10 illustrates different samples on KITTI from the optical flow diffusion model, also capturing multiple modes of the predictive posterior. Multimodality exists on transparent surfaces and near occlusions. As shown in Figure 11, on Sintel, multimodality also exists on occluded or out-of-bounds pixels where multiple predictions are plausible.
C More samples for zero-shot imputation of depth

Figure 13 provides samples generated using our iterative text-to-3D pipeline. We note that such pipelines for iteratively generating 3D scenes have been previously proposed in the literature [35,58,76]. However, these methods explicitly learn networks to refine the color [35,58,76] and the depth map [35,58]. In contrast, we propose leveraging the text-conditioned image prior from existing large-scale text-to-image [53] and text-conditional image completion [73] models, and use our depth estimation model zero-shot for depth completion. One caveat with our current approach of using the replacement method for conditional inference [61] for imputing depth is that it does not allow one to fix errors in the depth predicted at the previous step. One approach to fix such artifacts would be noising-denoising, like that used for coarse-to-fine refinement. We leave further exploration of this to future work.

Figure 13: Text-to-3D samples (prompts: a living room; a library; a meeting room; a kitchen; a warehouse; a movie theatre). Given a text prompt, an image is first generated using Imagen [53] (first row of first column), after which depth is estimated (second row of first column). Subsequently the camera is moved to reveal new parts of the scene, which are infilled using an image completion model and our model (which conditions on both the incomplete depth map and the filled image). At each step, newly generated RGBD points are added to a global point cloud, which is visualized in the rightmost column.

D Complete depth results on NYU and KITTI

Tables 8 and 9 provide detailed results on the val sets of the NYU depth v2 and KITTI depth datasets. We follow the standard evaluation protocol used in prior work [33]. For both the NYU depth v2 and KITTI datasets we report the absolute relative error (REL), root mean squared error (RMS) and accuracy metrics (δi < 1.25^i for i ∈ {1, 2, 3}). For NYU we also report the absolute error of log depths (log10). For KITTI we additionally report the squared relative error (Sq-rel) and the root mean squared error of log depths (RMS log). The predicted depth is up-sampled to the full resolution using bilinear interpolation before evaluation. For the indoor model we evaluate on the cropped region proposed by [11], and for the outdoor model the cropped region proposed by [14], as is standard in prior work.

Figure 12: Qualitative comparison of our model with DPT-Hybrid [48] (fine-tuned on NYU) on the NYU depth v2 val set. Our method infers better depth for both scene structure (walls, floors, etc.) and individual objects. Specific differences are highlighted with red arrows.

E Ablations

Tables 10 and 11 show that an L1 loss in training the diffusion model performs much better than an L2 loss for monocular depth estimation on NYU and KITTI. Tables 12 and 13 show the effectiveness of Palette-style [52] self-supervised pretraining for monocular depth estimation on NYU and KITTI, respectively. All results use a single sample. Because these findings are reasonable and expected to generalize to other dense vision tasks, we do not further ablate them for optical flow estimation, for compute efficiency.

F Coarse-to-fine refinement for depth

Figure 14 demonstrates the performance of coarse-to-fine refinement on the NYU depth v2 dataset. While refinement improves fine-scale details in the estimated depth maps, the qualitative improvements are small and we do not find significant quantitative improvements. Hence the results reported in this work do not use coarse-to-fine refinement for depth estimation. Further work is needed to develop a coarse-to-fine algorithm capable of more robust gains in depth estimation.
Figure 14: Samples with coarse-to-fine refinement on the NYU depth v2 dataset. We find that refinement adds sharpness and detail to the depth estimates but does not provide quantitative improvements.
G Coarse-to-fine optical flow refinement for RAFT

For a fair comparison on optical flow estimation, we also apply our coarse-to-fine refinement scheme to RAFT [72], to determine whether our performance gains translate to RAFT as well. We first estimate flow at a low resolution, 320 × 448, upsample the low-resolution flow to the original resolution, divide the original-resolution input images into 2 × 5 overlapping patches of size 320 × 448, and then estimate flow on the cropped patches using the upsampled flow field as the initial guess for the recurrent refinement (12 steps in total) of RAFT [72]. After estimating the flow of each patch, we merge the estimates using weighted masks [26]. Table 14 reports the result. Unlike our diffusion-based method, the coarse-to-fine scheme actually hurts the accuracy of RAFT on Sintel Clean and KITTI, and only marginally improves the accuracy on Sintel Final. Further exploration into better approaches for coarse-to-fine refinement for RAFT is warranted; we leave that to future work.
H Training and inference details
H.1 Architecture

UNet. The predominant architecture for diffusion models is the U-Net developed for the DDPM model [19], and later improved in several respects [9,43,61]. Here we adapt the Efficient U-Net architecture that was developed for Imagen [53]. It is more efficient than the U-Nets used in prior work owing to the use of fewer self-attention layers, fewer parameters, and less computation at higher resolutions, along with other adjustments that make it well suited to training medium-resolution diffusion models.

Figure 15: Overview of the Efficient UNet architecture proposed in [53]. CH_IN and CH_OUT refer to the number of input and output channels respectively. t refers to the time embedding. FiLM refers to the modulation layers proposed in [45]. N is the number of ResNet + self-attention blocks.
Specifically, we adopt the configuration of the 64×64 → 256×256 super-resolution model (see Figure 15 for an overview), with several changes. We drop the text cross-attention layers but preserve the self-attention in the lowest-resolution layers dblock4 and ublock4 (see Figure 15). For supervised training of the flow model, we find it beneficial to additionally enable self-attention for the last-but-one layers dblock3 and ublock3. The number of input and output channels differs between self-supervised pre-training and supervised pre-training, and is also different for the flow and depth models. For self-supervised pre-training, CH_IN=6 and CH_OUT=3 (see Figure 15), since the input consists of a 3-channel source RGB image and a 3-channel noisy target image concatenated along the channel dimension, and the output is an RGB image. The supervised depth model has CH_IN=4 (RGB image + noisy depth) and CH_OUT=1. The supervised optical flow model has CH_IN=8 (2 RGB images + noisy flow along x and y) and CH_OUT=2. Note that this means we need to reinitialize the input and output convolutional kernels and biases before the supervised pretraining stage. All other weights are re-used.
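As a sketch of how these per-task channel counts might be wired up (the dataclass and names below are illustrative, not our actual codebase):

from dataclasses import dataclass

@dataclass
class UNetIOConfig:
    ch_in: int   # conditioning channels + noisy-target channels
    ch_out: int  # channels of the predicted noise / target

CONFIGS = {
    "self_supervised": UNetIOConfig(ch_in=6, ch_out=3),  # RGB cond + noisy RGB
    "depth":           UNetIOConfig(ch_in=4, ch_out=1),  # RGB + noisy depth
    "flow":            UNetIOConfig(ch_in=8, ch_out=2),  # 2x RGB + noisy (u, v)
}

# when switching tasks, only the first and last convolutions are re-initialized;
# all other pretrained weights are re-used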
Resolution. Our self-supervised model was trained at a resolution of 256 × 256. The indoor depth model is trained at 240×320. For Waymo we use 256×384 and for KITTI depth 256×832. Flow pretraining is done at a resolution of 320×448, and finetuning at 320×768.
H.2 Datasets and augmentation
For unsupervised pre-training, we use the ImageNet-1K [8] and Places365 [86] datasets and train on the self-supervised tasks of colorization, inpainting, uncropping, and JPEG decompression, following [52]. Throughout, we mix datasets at the batch level.
Flow. For supervised flow pretraining we use a mix of AutoFlow (native resolution 448×576), FlyingThings (540×960), Kubric (512×512) and TartanAir (480×640) synthetic datasets. We finetune on the standard mixture consisting of AutoFlow, FlyingThings, Viper (540×960), HD1K (540×1280), Sintel (436×1024), and KITTI (375×1242).
We follow the same photometric and geometric augmentation schemes as [68], comprising random affine transformation, flipping, and cropping.

Depth. For supervised pre-training of the indoor model we mix the following datasets. ScanNet [7] is a dataset of 2.5M examples captured using a Kinect v1-like sensor; it provides depth maps at 480×640 and RGB images at 968×1296. SceneNet RGB-D [39] is a synthetic dataset of 5M images generated by rendering ShapeNet [6] objects in scenes from SceneNet [18] at a resolution of 240×320. For training the outdoor model we use the Waymo Open Dataset [69], a large-scale driving dataset consisting of about 200k frames. Each frame provides RGB images from 5 cameras and LiDAR maps. We use the RGB images from the FRONT, FRONT_LEFT and FRONT_RIGHT cameras and the TOP LiDAR only, to build about 600k aligned RGB depth maps.

For indoor fine-tuning and evaluation we use NYU depth v2 [59], a commonly used dataset for evaluating indoor depth prediction models. It provides aligned image and depth maps at 480×640 resolution. We use the official split comprising 50k images for training and 654 for evaluation.
For outdoor fine-tuning and evaluation, we use KITTI [15], an outdoor driving dataset which provides RGB images and LiDAR scans at resolutions close to 370×1226. We use the training/test split proposed by [11], comprising 26k training images and 652 test images.
We use random horizontal flip data augmentation, which is common in prior work. Where needed, images and dense depth maps are resized to the model's resolution using bilinear interpolation for training, and nearest neighbor interpolation is used for sparse maps.
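A small sketch of this resizing rule, assuming PyTorch tensors of shape (B, 1, H, W); the helper name is ours.

```python
import torch.nn.functional as F

def resize_depth(depth, size, sparse):
    """Resize a depth map; sparse maps use nearest-neighbor so holes
    are not smeared into valid pixels, dense maps use bilinear."""
    if sparse:
        return F.interpolate(depth, size=size, mode="nearest")
    return F.interpolate(depth, size=size, mode="bilinear",
                         align_corners=False)
```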
H.3 Step-unrolling and interpolation of missing depth and flow
As discussed in Section 3.2 of the main paper, infilling and step-unrolling are used to mitigate distribution shift between training and inference with diffusion models. The problem arises due to the missing data in the training depth maps and flow fields.
For indoor depth maps, we use nearest neighbor interpolation during training (see Section 3.2 in the main paper). For the outdoor depth data we use nearest neighbor interpolation except for sky regions, as they are often large and are much further from the camera than adjacent objects in the image. We use an off-the-shelf sky segmenter [34], and then set all sky pixels to be the maximum modeled depth (here, 80m). For missing optical flow ground truth we employ a simple sequence of 1D nearest neighbor interpolations first along rows, and then along columns.
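A minimal sketch of the nearest-neighbor infilling described above, using SciPy's Euclidean distance transform; the function name and the simplified sky-mask handling are illustrative, not the paper's exact code.

```python
import numpy as np
from scipy import ndimage

def infill_depth(depth, valid, sky_mask=None, max_depth=80.0):
    """Nearest-neighbor infilling of sparse ground-truth depth maps.

    `valid` marks pixels with ground truth; sky pixels (if given) are
    set to the maximum modeled depth instead. Assumes at least one
    valid pixel. For optical flow, the analogous fill is done as two
    1D passes: along rows first, then along columns.
    """
    # For every pixel, the indices of the nearest valid pixel.
    _, nearest = ndimage.distance_transform_edt(~valid,
                                                return_indices=True)
    filled = depth[tuple(nearest)]
    if sky_mask is not None:
        filled = np.where(sky_mask, max_depth, filled)
    return filled
```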
Finally, while we use infilling and step-unrolling, there are other ways in which one might try to mitigate the problem. One such approach was taken by [44], which faced a similar problem when training a vector-quantizer on depth data. Their approach was to synthetically add more holes following a carefully chosen masking ratio. We prefer our approach since nearest neighbor infilling is hyper-parameter free and step-unrolled denoising diffusion could be more generally applicable to other tasks with sparse data.
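For intuition, here is a minimal sketch of one step-unrolled training iteration, assuming a DDPM-style `scheduler` exposing an `add_noise(x0, noise, t)` method and a `denoiser` that predicts the clean target; all names are illustrative, not the actual training code.

```python
import torch

def unrolled_l1_loss(denoiser, x0, cond, scheduler):
    """One step-unrolled L1 training step (sketch).

    Instead of denoising y_t built from the clean target x0, we first
    run the model once, rebuild y_t from its own prediction, and
    compute the loss on that second pass -- so training sees the
    slightly off-manifold inputs it produces at inference time.
    """
    b = x0.shape[0]
    t = torch.randint(0, scheduler.num_steps, (b,), device=x0.device)
    y_t = scheduler.add_noise(x0, torch.randn_like(x0), t)
    with torch.no_grad():                    # no gradient through the unroll
        x0_hat = denoiser(y_t, cond, t)      # model's own estimate of x0
    y_t = scheduler.add_noise(x0_hat, torch.randn_like(x0), t)
    return (denoiser(y_t, cond, t) - x0).abs().mean()  # L1 to true target
```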
H.4 Hyper-parameters
Self-supervised. The self-supervised model is trained for 2.8M steps with an L2 loss and a mini-batch size of 512. Other hyper-parameters are the same as those in the original Palette paper [52].
Supervised. The supervised flow and depth models are trained with an L1 loss. Usually a constant learning rate of 1×10^-4 with a warm-up over 10k steps is used; however, for depth fine-tuning we find that a lower learning rate of 3×10^-5 achieves slightly better results. All models are trained with a mini-batch size of 64. The indoor depth model is pre-trained for 2M steps and then fine-tuned on NYU for 40k steps. The outdoor depth model is pre-trained for 0.9M steps and fine-tuned on KITTI for 40k steps. For flow, we pretrain for 3.7M steps, followed by finetuning for 50k steps. Other details, such as the optimizer and the use of EMA, are the same as in [52].
H.5 Inference
Sampler. We use the DDPM ancestral sampler [19] with 128 denoising steps for monocular depth models and 64 steps for optical flow models. Increasing the number of denoising steps further did not greatly improve performance.

Coarse-to-fine refinement. We use 2×5 overlapping patches ({top, bottom} × {left, center-left, center, center-right, right}) for coarse-to-fine refinement. For Sintel we use t′ = 32/64 and for KITTI t′ = 8/64.

Efficiency. Inference speed with diffusion models is a well-known issue, as multiple denoising steps are used to transform noise into a target signal. This can be prohibitive for vision tasks where near real-time latency is often desired. Table 15 compares the inference speed of our diffusion model for depth against DPT [48]. Despite having an efficient denoiser backbone (∼8.5 ms per denoising step on a TPU v4), the diffusion model is considerably slower than DPT in total wall time. The most obvious way to reduce inference latency is to reduce the number of denoising steps, which can be done with only a moderate reduction in performance. As shown in Table 15, we perform comparably with DPT with as few as 24 denoising steps. However, a more thorough study into optimizing the inference speed of these models while preserving generation quality is warranted. With the use of progressive distillation [40, 54] it is likely possible to reduce latency even further, as this approach has been shown to successfully distill generative image models with over 1000 denoising steps into those with just 2-4 steps.
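A minimal sketch of the DDPM ancestral sampler used above, assuming an ε-prediction parameterization and a conditioning input `cond` carrying the RGB frames; passing a shorter, re-spaced `betas` schedule reduces the number of denoising steps. Names are illustrative, not the actual inference code.

```python
import torch

@torch.no_grad()
def ddpm_ancestral_sample(denoiser, cond, shape, betas):
    """DDPM ancestral sampling loop (sketch)."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    y = torch.randn(shape, device=betas.device)        # start from pure noise
    for t in range(len(betas) - 1, -1, -1):
        t_batch = torch.full((shape[0],), t, device=y.device,
                             dtype=torch.long)
        eps = denoiser(y, cond, t_batch)               # predicted noise
        mean = (y - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) \
               / torch.sqrt(alphas[t])
        # Last step is deterministic; earlier steps add fresh noise.
        y = mean if t == 0 else mean + torch.sqrt(betas[t]) * torch.randn_like(y)
    return y
```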
I Limitations
Sintel fine-tuning. Under the zero-shot setting, our method achieves state-of-the-art results on both Sintel.final and KITTI. Under the fine-tuning setting, ours is state-of-the-art on KITTI but is behind FlowFormer [21] on Sintel.final. One interesting question is why the zero-shot performance does not transfer to fine-tuning on Sintel.
There are several possible reasons.
• We follow the fine-tuning procedure in [68]. While their zero-shot RAFT results are comparable to FlowFormer on Sintel and KITTI, the fine-tuned RAFT-it is significantly better on KITTI but less accurate on Sintel than FlowFormer. It is possible that the fine-tuning procedure (e.g., dataset mixture or augmentations) developed in [68] is more suited to KITTI than to Sintel.
• Another possible reason is that there is a larger domain gap between the training and test data on Sintel than on KITTI. On the Sintel test set there is a particular sequence, "Ambush 1", where the girl's right arm moves out of the image boundary. Our method has an AEPE close to 30 while FlowFormer's is lower than 10. It is likely that FlowFormer's attention over the cost volume can better reason about the motion globally and handles this particular sequence well. This sequence may account for the major difference in the overall results; among the 12 results available on the Sintel website, ours has a lower AEPE on 11 sequences but a higher AEPE on the "Ambush 1" sequence, as shown in Table 16. Figure 16 provides further visualization.

Uncertainty in depth estimation. We observe certain cases where the model is uncertain about the depth estimates. Interestingly, this uncertainty appears to be well captured in the predictive posterior, as illustrated in Figure 17.
Figure 3: Effects of adding synthetic datasets in pretraining. Diffusion models trained only with AutoFlow (AF) tend to provide very coarse flow estimates and can hallucinate shapes. The addition of FlyingThings (FT), Kubric (KU), and TartanAir (TA) removes the AF-induced bias toward polygonal-shaped regions and significantly improves flow quality on fine detail, e.g., trees, thin structures, and motion boundaries.
Figure 4: Visual results comparing RAFT with our method after pretraining. Note that our method does much better on fine details and ambiguous regions.
Figure 5: Visual results comparing RAFT with our method after finetuning. Ours does much better on fine details and ambiguous regions.
Figure 4 provides a qualitative comparison of pre-trained models.
Figure 8: Qualitative examples of multimodal estimation on the NYU depth dataset. Our model is able to output multiple plausible depth maps where ambiguity exists. Rows 1 to 5 show transparent or reflective surfaces where two answers exist. In all samples (especially the last row) we observe the model's ability to capture uncertainty in depth near object boundaries (see the areas of high variance).
Figure 9: Qualitative examples of multimodal depth estimation on KITTI. Our model is able to predict multimodal samples, especially on windows of cars and object boundaries. Please refer to the areas with high variance.
Figure 10: Qualitative examples of multimodal optical flow estimation on KITTI. Multimodality exists on transparent surfaces (e.g., windows of cars) and shadows, where our model estimates layered motion in different samples (see the last row).
Figure 11: Qualitative examples of multimodal optical flow estimation on Sintel. Multimodality also exists on examples with challenging occlusion or out-of-bound cases.

B Qualitative comparison of depth estimation with DPT

Figure 12 provides a qualitative comparison of our model with DPT-Hybrid [48] finetuned on the NYU depth v2 [59] dataset. The depth estimates of our diffusion model are more accurate both on coarse-scale scene structure (walls, floors, etc.) and on individual objects.
Figure 16: Visual results on the Sintel test set. We compare with FlowFormer [21] and provide a flow visualization and an error map for each scene. On Ambush 1, FlowFormer can better predict the motion of the girl's right arm that moves out of the image boundary, likely due to the global reasoning capability of attention. On Cave 3 and Market 1, our method provides much finer detail on motion boundaries with lower end-point error (EPE).
Figure 17: Qualitative examples of multimodal estimation on the NYU depth dataset showing examples where the model's uncertainty gets captured in the multimodal posterior. In the examples above, the model confuses the play mat (farther from the viewpoint) for a table (closer to the viewpoint).
Table 1: Zero-shot optical flow estimation results on Sintel and KITTI. We provide a new RAFT baseline using our proposed pre-training mixture and substantially improve the accuracy over the original. Our diffusion model outperforms even this much stronger baseline and achieves state-of-the-art zero-shot results on Sintel.final and KITTI.

Model | Dataset | Sintel.clean AEPE | Sintel.final AEPE | KITTI AEPE | KITTI Fl-all
FlowFormer | Chairs→Things | 1.01 | 2.40 | 4.09 | 14.72%
RAFT | Chairs→Things | 1.68 | 2.80 | 5.92 | -
Perceiver IO | AutoFlow | 1.81 | 2.42 | 4.98 | -
RAFT | AutoFlow | 1.74 | 2.41 | 4.18 | 13.41%
RAFT (ours) | AF→AF+FT+KU+TA | 1.27 | 2.28 | 2.71 | 9.16%
DDVM (ours) | AF→AF+FT+KU+TA | 1.24 | 2.00 | 2.19 | 7.58%
Table 2: Optical flow finetuning evaluation on public benchmark datasets (AEPE↓ for Sintel and Fl-all↓ for KITTI). Bold indicates the best and underline the 2nd-best. § uses extra datasets (AutoFlow and VIPER) on top of the defaults (FlyingThings, HD1K, KITTI, and Sintel). * uses warm start on Sintel.

Method | Sintel.clean | Sintel.final | KITTI
SKFlow [70] * | 1.30 | 2.26 | 4.84%
CRAFT [64] * | 1.44 | 2.42 | 4.79%
FlowFormer [21] | 1.14 | 2.18 | 4.68%
RAFT-OCTC [27] * | 1.51 | 2.57 | 4.33%
RAFT-it [68] § | 1.55 | 2.90 | 4.31%
DDVM (ours) § | 1.75 | 2.48 | 3.26%
Table 3: Performance comparison on the NYU-Depth-v2 and KITTI datasets. ⊤ indicates the method uses unsupervised pretraining, † indicates supervised pretraining, and ‡ indicates use of auxiliary supervised depth data. Best / second best results are bolded / underlined respectively. ↓: lower is better; ↑: higher is better.

Method | Architecture | NYU δ1↑ | NYU REL↓ | NYU RMS↓ | KITTI δ1↑ | KITTI REL↓ | KITTI RMS↓
TransDepth [85] | Res-50+ViT-B † | 0.900 | 0.106 | 0.365 | 0.956 | 0.064 | 2.755
DPT [48] | Res-50+ViT-B † ‡ | 0.904 | 0.110 | 0.357 | 0.959 | 0.062 | 2.573
BTS [32] | DenseNet-161 † | 0.885 | 0.110 | 0.392 | 0.956 | 0.059 | 2.756
AdaBins [3] | E-B5+Mini-ViT † | 0.903 | 0.103 | 0.364 | 0.964 | 0.058 | 2.360
BinsFormer [33] | Swin-Large † | 0.925 | 0.094 | 0.330 | 0.974 | 0.052 | 2.098
PixelFormer [1] | Swin-Large † | 0.929 | 0.090 | 0.322 | 0.976 | 0.051 | 2.081
MIM [78] | SwinV2-L ⊤ | 0.949 | 0.083 | 0.287 | 0.977 | 0.050 | 1.966
AiT-P [44] | SwinV2-L ⊤ | 0.953 | 0.076 | 0.279 | - | - | -
DDVM (samples=1) | Efficient U-Net ⊤ ‡ | 0.944 | 0.075 | 0.324 | 0.964 | 0.056 | 2.700
DDVM (samples=2) | Efficient U-Net ⊤ ‡ | 0.944 | 0.074 | 0.319 | 0.965 | 0.055 | 2.660
DDVM (samples=4) | Efficient U-Net ⊤ ‡ | 0.946 | 0.074 | 0.315 | 0.965 | 0.055 | 2.613
Table 4: Ablation on infilling and step-unrolling. Without either one, performance deteriorates. Without both, optical flow models fail to train on KITTI.

Variant | NYU val REL↓ | NYU val RMS↓ | KITTI val REL↓ | KITTI val RMS↓ | KITTI val AEPE↓ | KITTI val Fl-all↓
Baseline | 0.079 | 0.331 | 0.222 | 3.770 | - | -
Step-unroll | 0.076 | 0.324 | 0.085 | 2.844 | 1.84 | 6.16%
Infill | 0.077 | 0.338 | 0.057 | 2.744 | 1.53 | 5.24%
Step-unroll & infill | 0.075 | 0.324 | 0.056 | 2.700 | 1.47 | 4.74%
Table 6: The addition of optical flow synthetic datasets substantially improves the zero-shot results on Sintel and KITTI.

Dataset | Sintel.clean | Sintel.final | KITTI AEPE | KITTI Fl-all
AF pretraining | 2.04 | 2.55 | 4.47 | 16.59%
AF→AF+FT | 1.48 | 2.22 | 3.71 | 14.07%
AF→AF+FT+KU | 1.33 | 2.04 | 2.82 | 9.27%
AF→AF+FT+KU+TA | 1.24 | 2.00 | 2.19 | 7.58%
Table 7: The addition of synthetic depth data in pre-training substantially improves fine-tuning performance on NYU.

Dataset | REL | RMS
SceneNet RGB-D | 0.089 | 0.362
ScanNet | 0.081 | 0.346
SceneNet RGB-D + ScanNet | 0.075 | 0.324

We pre-train on a mixture of AutoFlow (AF), FlyingThings (FT), Kubric (KU), and TartanAir (TA). For a fair comparison, we re-train RAFT on this pre-training mixture; this new RAFT model significantly outperforms the original RAFT model, and our diffusion model outperforms the stronger RAFT baseline, achieving state-of-the-art zero-shot results on both the challenging Sintel Final and KITTI datasets.
Table 8: Comparison of performance on the NYU-Depth-v2 dataset. ⊤ indicates the method uses unsupervised pretraining, † indicates supervised pretraining, and ‡ indicates use of auxiliary supervised depth data. Best / second best results are bolded / underlined respectively. ↓: lower is better; ↑: higher is better.

Method | Architecture | δ1↑ | δ2↑ | δ3↑ | REL↓ | RMS↓ | log10↓
TransDepth [85] | Res-50+ViT-B † | 0.900 | 0.983 | 0.996 | 0.106 | 0.365 | 0.045
DPT [48] | Res-50+ViT-B † ‡ | 0.904 | 0.988 | 0.998 | 0.110 | 0.357 | 0.045
AdaBins [3] | E-B5+Mini-ViT † | 0.903 | 0.984 | 0.997 | 0.103 | 0.364 | 0.044
BinsFormer [33] | Swin-Large † | 0.925 | 0.989 | 0.997 | 0.094 | 0.330 | 0.040
PixelFormer [1] | Swin-Large † | 0.929 | 0.991 | 0.998 | 0.090 | 0.322 | 0.039
MIM [78] | SwinV2-L ⊤ | 0.949 | 0.994 | 0.999 | 0.083 | 0.287 | 0.035
AiT-P [44] | SwinV2-L ⊤ | 0.953 | 0.993 | 0.999 | 0.076 | 0.279 | 0.033
DDVM (samples=1) | Efficient U-Net ⊤ ‡ | 0.944 | 0.986 | 0.995 | 0.075 | 0.324 | 0.032
DDVM (samples=2) | Efficient U-Net ⊤ ‡ | 0.944 | 0.987 | 0.996 | 0.074 | 0.319 | 0.032
DDVM (samples=4) | Efficient U-Net ⊤ ‡ | 0.946 | 0.987 | 0.996 | 0.074 | 0.315 | 0.032
Table 9: Comparison of performance on the KITTI dataset. ⊤ indicates the method uses unsupervised pretraining, † indicates supervised pretraining, and ‡ indicates use of auxiliary supervised depth data. Best / second best results are bolded / underlined respectively. ↓: lower is better; ↑: higher is better. E-B5: EfficientNet-B5 [71].

Method | Backbone | δ1↑ | δ2↑ | δ3↑ | REL↓ | Sq-rel↓ | RMS↓ | RMS log↓
BTS [32] | DenseNet-161 † | 0.956 | 0.993 | 0.998 | 0.059 | 0.245 | 2.756 | 0.096
TransDepth [85] | ResNet-50+ViT-B † | 0.956 | 0.994 | 0.999 | 0.064 | 0.252 | 2.755 | 0.098
DPT [48] | ResNet-50+ViT-B † ‡ | 0.959 | 0.995 | 0.999 | 0.062 | - | 2.573 | 0.092
AdaBins [3] | E-B5+mini-ViT † | 0.964 | 0.995 | 0.999 | 0.058 | 0.190 | 2.360 | 0.088
BinsFormer [33] | Swin-Large † | 0.974 | 0.997 | 0.999 | 0.052 | 0.151 | 2.098 | 0.079
PixelFormer [1] | Swin-Large † | 0.976 | 0.997 | 0.999 | 0.051 | 0.149 | 2.081 | 0.077
MIM [78] | SwinV2-L ⊤ | 0.977 | 0.998 | 1.000 | 0.050 | 0.139 | 1.966 | 0.075
DDVM (samples=1) | Efficient U-Net ⊤ ‡ | 0.964 | 0.994 | 0.998 | 0.056 | 0.339 | 2.700 | 0.091
DDVM (samples=2) | Efficient U-Net ⊤ ‡ | 0.965 | 0.994 | 0.998 | 0.055 | 0.325 | 2.660 | 0.090
DDVM (samples=4) | Efficient U-Net ⊤ ‡ | 0.965 | 0.994 | 0.998 | 0.055 | 0.292 | 2.613 | 0.089
Table 10: Ablation for the choice of loss function on the NYU depth v2 dataset.

Loss | δ1↑ | δ2↑ | δ3↑ | REL↓ | RMS↓ | log10↓
L2 | 0.932 | 0.981 | 0.994 | 0.085 | 0.349 | 0.037
L1 | 0.944 | 0.986 | 0.995 | 0.075 | 0.324 | 0.032
Table 11: Ablation for the choice of loss function on the KITTI dataset.

Loss | δ1↑ | δ2↑ | δ3↑ | REL↓ | Sq-rel↓ | RMS↓ | RMS log↓
L2 | 0.954 | 0.993 | 0.998 | 0.065 | 0.321 | 2.773 | 0.099
L1 | 0.964 | 0.994 | 0.998 | 0.056 | 0.339 | 2.700 | 0.091
Table 12: Ablation for self-supervised pretraining on the NYU depth v2 dataset.

 | δ1↑ | δ2↑ | δ3↑ | REL↓ | RMS↓ | log10↓
Table 13: Ablation for self-supervised pretraining on the KITTI depth dataset.

 | δ1↑ | δ2↑ | δ3↑ | REL↓ | Sq-rel↓ | RMS↓ | RMS log↓
No self-supervised pre-training | 0.952 | 0.990 | 0.997 | 0.064 | 0.389 | 2.998 | 0.104
With self-supervised pre-training | 0.965 | 0.994 | 0.998 | 0.055 | 0.332 | 2.696 | 0.091
Table 14: Our coarse-to-fine refinement scheme marginally improves the performance of RAFT on Sintel Final while hurting performance on Sintel Clean and KITTI. We report the EPE on the Sintel and KITTI datasets.

 | Sintel Clean | Sintel Final | KITTI
Table 15: Inference speed comparison of our method with DPT [48] on the indoor depth model finetuned on NYU. Diffusion model inference is bottlenecked by the large number of denoising steps. We show that some efficiency gains can be achieved by simply reducing the number of denoising steps. Our model with 24 denoising steps is comparable in performance to DPT while being ∼5x slower (modulo differences in hardware). *We use the step time reported in the DPT paper at a resolution of 384×384; however, the DPT performance metrics on NYU are from a model trained at a resolution of 480×640, for which the step time will be higher.

Method | Hardware | Resolution | Total time [ms] | Inference steps | REL↓ | RMS↓
DPT-Hybrid | Nvidia RTX 2080 | 384×384* | 38* | - | 0.110 | 0.357
DDVM | TPU v4 | 240×320 | 204 | 24 | 0.104 | 0.378
DDVM | TPU v4 | 240×320 | 272 | 32 | 0.086 | 0.342
DDVM | TPU v4 | 240×320 | 544 | 64 | 0.077 | 0.324
DDVM | TPU v4 | 240×320 | 1089 | 128 | 0.075 | 0.324
Table 16: Average end-point error (AEPE) on 12 Sintel test sequences available from the public website.

Sequence | Ours | FlowFormer [21]
Perturbed Market 3 | 0.787 | 0.869
Perturbed Shaman 1 | 0.219 | 0.252
Ambush 1 | 29.33 | 8.141
Ambush 3 | 2.855 | 2.973
Bamboo 3 | 0.415 | 0.577
Cave 3 | 2.042 | 2.352
Market 1 | 0.719 | 1.174
Market 4 | 5.517 | 8.768
Mountain 2 | 0.176 | 0.518
Temple 1 | 0.452 | 0.612
Tiger | 0.413 | 0.596
Wall | 1.639 | 1.723
AcknowledgementsWe thank Ting Chen, Daniel Watson, Hugo Larochelle and the rest of the Brain team for feedback on this work. Thanks to Klaus Greff and Andrea Tagliasacchi for their help with the Kubric generator, and to Chitwan Saharia for help training the Palette model.
[1] Ashutosh Agarwal and Chetan Arora. Attention Attention Everywhere: Monocular depth prediction with skip attention. In WACV, 2023.
[2] Samy Bengio, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. Scheduled sampling for sequence prediction with recurrent neural networks. In NIPS, 2015.
[3] Shariq Farooq Bhat, Ibraheem Alhashim, and Peter Wonka. AdaBins: Depth estimation using adaptive bins. In CVPR, pages 4009-4018, 2021.
[4] Daniel J. Butler, Jonas Wulff, Garrett B. Stanley, and Michael J. Black. A naturalistic open source movie for optical flow evaluation. In ECCV, pages 611-625, 2012.
[5] Yuanzhouhan Cao, Zifeng Wu, and Chunhua Shen. Estimating depth from monocular images as classification using deep fully convolutional residual networks. IEEE T-CSVT, 28(11):3174-3182, 2017.
[6] Angel X. Chang, Thomas Funkhouser, Leonidas Guibas, Pat Hanrahan, Qixing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. ShapeNet: An information-rich 3D model repository. arXiv:1512.03012, 2015.
[7] Angela Dai, Angel X. Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In CVPR, 2017.
[8] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In CVPR, pages 248-255, 2009.
[9] Prafulla Dhariwal and Alex Nichol. Diffusion models beat GANs on image synthesis. In NeurIPS, 2022.
[10] David Eigen and Rob Fergus. Predicting depth, surface normals and semantic labels with a common multi-scale convolutional architecture. In ICCV, pages 2650-2658, 2015.
[11] David Eigen, Christian Puhrsch, and Rob Fergus. Depth map prediction from a single image using a multi-scale deep network. In NIPS, volume 27, 2014.
[12] Philipp Fischer, Alexey Dosovitskiy, Eddy Ilg, Philip Häusser, Caner Hazirbas, Vladimir Golkov, Patrick van der Smagt, Daniel Cremers, and Thomas Brox. FlowNet: Learning optical flow with convolutional networks. In ICCV, pages 2758-2766, 2015.
[13] Huan Fu, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao. Deep ordinal regression network for monocular depth estimation. In CVPR, pages 2002-2011, 2018.
[14] Ravi Garg, Vijay Kumar BG, Gustavo Carneiro, and Ian Reid. Unsupervised CNN for single view depth estimation: Geometry to the rescue. In ECCV, pages 740-756, 2016.
[15] Andreas Geiger, Philip Lenz, Christoph Stiller, and Raquel Urtasun. Vision meets Robotics: The KITTI dataset. The International Journal of Robotics Research, 32(11):1231-1237, 2013.
[16] Clément Godard, Oisin Mac Aodha, Michael Firman, and Gabriel J. Brostow. Digging into self-supervised monocular depth estimation. In ICCV, pages 3828-3838, 2019.
[17] Klaus Greff, Francois Belletti, Lucas Beyer, Carl Doersch, Yilun Du, Daniel Duckworth, David J. Fleet, Dan Gnanapragasam, Florian Golemo, Charles Herrmann, Thomas Kipf, Abhijit Kundu, Dmitry Lagun, Issam Laradji, Hsueh-Ti (Derek) Liu, Henning Meyer, Yishu Miao, Derek Nowrouzezahrai, Cengiz Oztireli, Etienne Pot, Noha Radwan, Daniel Rebain, Sara Sabour, Mehdi S. M. Sajjadi, Matan Sela, Vincent Sitzmann, Austin Stone, Deqing Sun, Suhani Vora, Ziyu Wang, Tianhao Wu, Kwang Moo Yi, Fangcheng Zhong, and Andrea Tagliasacchi. Kubric: A scalable dataset generator. In CVPR, pages 3749-3761, 2022.
[18] Ankur Handa, Viorica Patraucean, Vijay Badrinarayanan, Simon Stent, and Roberto Cipolla. Understanding real world indoor scenes with synthetic data. In CVPR, pages 4077-4085, 2016.
[19] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising Diffusion Probabilistic Models. In NeurIPS, 2020.
[20] Asmaa Hosni, Christoph Rhemann, Michael Bleyer, Carsten Rother, and Margrit Gelautz. Fast cost-volume filtering for visual correspondence and beyond. IEEE T-PAMI, 35(2):504-511, 2012.
[21] Zhaoyang Huang, Xiaoyu Shi, Chao Zhang, Qiang Wang, Ka Chun Cheung, Hongwei Qin, Jifeng Dai, and Hongsheng Li. FlowFormer: A transformer architecture for optical flow. In ECCV, pages 668-685, 2022.
[22] Junhwa Hur and Stefan Roth. MirrorFlow: Exploiting symmetries in joint optical flow and occlusion estimation. In ICCV, pages 312-321, 2017.
[23] Junhwa Hur and Stefan Roth. Iterative residual refinement for joint optical flow and occlusion estimation. In CVPR, pages 5754-5763, 2019.
[24] Eddy Ilg, Nikolaus Mayer, Tonmoy Saikia, Margret Keuper, Alexey Dosovitskiy, and Thomas Brox. FlowNet 2.0: Evolution of optical flow estimation with deep networks. In CVPR, pages 2462-2470, 2017.
[25] Serdar Ince and Janusz Konrad. Occlusion-aware optical flow estimation. IEEE T-IP, 17(8):1443-1451, 2008.
[26] Andrew Jaegle, Sebastian Borgeaud, Jean-Baptiste Alayrac, Carl Doersch, Catalin Ionescu, David Ding, Skanda Koppula, Daniel Zoran, Andrew Brock, Evan Shelhamer, et al. Perceiver IO: A general architecture for structured inputs & outputs. In ICLR, 2022.
[27] Jisoo Jeong, Jamie Lin, Fatih Porikli, and Nojun Kwak. Imposing consistency for optical flow estimation. In CVPR, 2022.
[28] Daniel Kondermann, Rahul Nair, Katrin Honauer, Karsten Krispin, Jonas Andrulis, Alexander Brock, Burkhard Gussefeld, Mohsen Rahimimoghaddam, Sabine Hofmann, Claus Brenner, et al. The HCI Benchmark Suite: Stereo and flow ground truth with uncertainties for urban autonomous driving. In CVPR Workshops, pages 19-28, 2016.
[29] Iro Laina, Christian Rupprecht, Vasileios Belagiannis, Federico Tombari, and Nassir Navab. Deeper depth prediction with fully convolutional residual networks. In 3DV, pages 239-248, 2016.
[30] Alex Lamb, Anirudh Goyal, Ying Zhang, Saizheng Zhang, Aaron Courville, and Yoshua Bengio. Professor Forcing: A new algorithm for training recurrent networks. In NIPS, 2016.
[31] Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. In ECCV, pages 577-593, 2016.
[32] Jin Han Lee, Myung-Kyu Han, Dong Wook Ko, and Il Hong Suh. From big to small: Multi-scale local planar guidance for monocular depth estimation. arXiv:1907.10326, 2019.
[33] Zhenyu Li, Xuyang Wang, Xianming Liu, and Junjun Jiang. BinsFormer: Revisiting adaptive bins for monocular depth estimation. arXiv:2204.00987, 2022.
[34] Orly Liba, Longqi Cai, Yun-Ta Tsai, Elad Eban, Yair Movshovitz-Attias, Yael Pritch, Huizhong Chen, and Jonathan T. Barron. Sky Optimization: Semantically aware image processing of skies in low-light photography. In CVPR Workshops, 2020.
[35] Andrew Liu, Richard Tucker, Varun Jampani, Ameesh Makadia, Noah Snavely, and Angjoo Kanazawa. Infinite Nature: Perpetual view generation of natural scenes from a single image. In ICCV, 2021.
[36] Ao Luo, Fan Yang, Kunming Luo, Xin Li, Haoqiang Fan, and Shuaicheng Liu. Learning optical flow with adaptive graph reasoning. In AAAI, pages 1890-1898, 2022.
[37] Zhaoyang Lv, Kihwan Kim, Alejandro Troccoli, Deqing Sun, James M. Rehg, and Jan Kautz. Learning rigidity in dynamic scenes with a moving camera for 3D motion field estimation. In ECCV, pages 468-484, 2018.
[38] Nikolaus Mayer, Eddy Ilg, Philip Hausser, Philipp Fischer, Daniel Cremers, Alexey Dosovitskiy, and Thomas Brox. A large dataset to train convolutional networks for disparity, optical flow, and scene flow estimation. In CVPR, 2016.
[39] John McCormac, Ankur Handa, Stefan Leutenegger, and Andrew J. Davison. SceneNet RGB-D: Can 5M synthetic images beat generic ImageNet pre-training on indoor segmentation? In ICCV, 2017.
[40] Chenlin Meng, Ruiqi Gao, Diederik P. Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. On distillation of guided diffusion models. In NeurIPS 2022 Workshop on Score-Based Methods, 2022.
[41] Moritz Menze, Christian Heipke, and Andreas Geiger. Joint 3D estimation of vehicles and scene flow. In ISPRS Workshop on Image Sequence Analysis (ISA), 2015.
[42] Moritz Menze, Christian Heipke, and Andreas Geiger. Object scene flow. ISPRS Journal of Photogrammetry and Remote Sensing (JPRS), 2018.
[43] Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In ICML, pages 8162-8171, 2021.
[44] Jia Ning, Chen Li, Zheng Zhang, Zigang Geng, Qi Dai, Kun He, and Han Hu. All in Tokens: Unifying output space of visual tasks via soft token. arXiv:2301.02229, 2023.
[45] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. FiLM: Visual reasoning with a general conditioning layer. In AAAI, 2018.
[46] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv:2204.06125, 2022.
[47] René Ranftl, Katrin Lasinger, David Hafner, Konrad Schindler, and Vladlen Koltun. Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer. IEEE T-PAMI, 44(3):1623-1637, 2020.
[48] René Ranftl, Alexey Bochkovskiy, and Vladlen Koltun. Vision transformers for dense prediction. In ICCV, pages 12179-12188, 2021.
[49] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. In ICLR, 2016.
[50] Zhongzheng Ren and Yong Jae Lee. Cross-Domain self-supervised multi-task feature learning using synthetic imagery. In CVPR, 2018.
[51] Stephan R. Richter, Zeeshan Hayder, and Vladlen Koltun. Playing for benchmarks. In ICCV, pages 2213-2222, 2017.
[52] Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, and Mohammad Norouzi. Palette: Image-to-Image Diffusion Models. In SIGGRAPH, 2022.
[53] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding. In NeurIPS, 2022.
[54] Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. In ICLR, 2022.
[55] Nikolay Savinov, Junyoung Chung, Mikolaj Binkowski, Erich Elsen, and Aaron van den Oord. Step-unrolled denoising autoencoders for text generation. In ICLR, 2022.
[56] Ashutosh Saxena, Sung Chung, and Andrew Ng. Learning depth from single monocular images. In NIPS, 2005.
[57] Ashutosh Saxena, Min Sun, and Andrew Y. Ng. Make3D: Learning 3D scene structure from a single still image. IEEE T-PAMI, 31(5):824-840, 2009.
[58] Meng-Li Shih, Shih-Yang Su, Johannes Kopf, and Jia-Bin Huang. 3D photography using context-aware layered depth inpainting. In CVPR, 2020.
[59] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from RGBD images. In ECCV, pages 746-760, 2012.
[60] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In ICML, pages 2256-2265, 2015.
[61] Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. In ICLR, 2021.
[62] Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models. arXiv:2303.01469, 2023.
[63] Martin Stommel, Michael Beetz, and Weiliang Xu. Inpainting of missing values in the Kinect sensor's depth maps based on background estimates. IEEE Sensors Journal, 14(4):1107-1116, 2014.
[64] Xiuchao Sui, Shaohua Li, Xue Geng, Yan Wu, Xinxing Xu, Yong Liu, Rick Goh, and Hongyuan Zhu. CRAFT: Cross-attentional flow transformer for robust optical flow. In CVPR, pages 17602-17611, 2022.
[65] Deqing Sun, Erik B. Sudderth, and Michael J. Black. Layered segmentation and optical flow estimation over time. In CVPR, pages 1768-1775, 2012.
[66] Deqing Sun, Xiaodong Yang, Ming-Yu Liu, and Jan Kautz. PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In CVPR, pages 8934-8943, 2018.
[67] Deqing Sun, Daniel Vlasic, Charles Herrmann, Varun Jampani, Michael Krainin, Huiwen Chang, Ramin Zabih, William T. Freeman, and Ce Liu. AutoFlow: Learning a better training set for optical flow. In CVPR, pages 10093-10102, 2021.
[68] Deqing Sun, Charles Herrmann, Fitsum Reda, Michael Rubinstein, David J. Fleet, and William T. Freeman. Disentangling architecture and training for optical flow. In ECCV, pages 165-182, 2022.
[69] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, Vijay Vasudevan, Wei Han, Jiquan Ngiam, Hang Zhao, Aleksei Timofeev, Scott Ettinger, Maxim Krivokon, Amy Gao, Aditya Joshi, Sheng Zhao, Shuyang Cheng, Yu Zhang, Jonathon Shlens, Zhifeng Chen, and Dragomir Anguelov. Scalability in perception for autonomous driving: Waymo Open Dataset. In CVPR, 2020.
[70] Shangkun Sun, Yuanqi Chen, Yu Zhu, Guodong Guo, and Ge Li. SKFlow: Learning optical flow with super kernels. In NeurIPS, 2022.
[71] Mingxing Tan and Quoc Le. EfficientNet: Rethinking model scaling for convolutional neural networks. In ICML, pages 6105-6114, 2019.
[72] Zachary Teed and Jia Deng. RAFT: Recurrent all-pairs field transforms for optical flow. In ECCV, pages 402-419, 2020.
[73] Su Wang, Chitwan Saharia, Ceslee Montgomery, Jordi Pont-Tuset, Shai Noy, Stefano Pellegrini, Yasumasa Onoe, Sarah Laszlo, David J. Fleet, Radu Soricut, Jason Baldridge, Mohammad Norouzi, Peter Anderson, and William Chan. Imagen Editor and EditBench: Advancing and evaluating text-guided image inpainting. In CVPR, 2023.
[74] Wenshan Wang, Delong Zhu, Xiangwei Wang, Yaoyu Hu, Yuheng Qiu, Chen Wang, Yafei Hu, Ashish Kapoor, and Sebastian Scherer. TartanAir: A dataset to push the limits of visual SLAM. In IROS, 2020.
[75] Philippe Weinzaepfel, Jerome Revaud, Zaid Harchaoui, and Cordelia Schmid. DeepFlow: Large displacement optical flow with deep matching. In ICCV, pages 1385-1392, 2013.
[76] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. SynSin: End-to-end view synthesis from a single image. In CVPR, 2020.
[77] Ronald J. Williams and David Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270-280, 1989.
[78] Zhenda Xie, Zigang Geng, Jingcheng Hu, Zheng Zhang, Han Hu, and Yue Cao. Revealing the dark secrets of masked image modeling. In CVPR, 2023.
[79] Haofei Xu, Jing Zhang, Jianfei Cai, Hamid Rezatofighi, and Dacheng Tao. GMFlow: Learning optical flow via global matching. In CVPR, pages 8121-8130, 2022.
[80] Li Xu, Jiaya Jia, and Yasuyuki Matsushita. Motion detail preserving optical flow estimation. IEEE T-PAMI, 34(9):1744-1757, 2011.
[81] Gengshan Yang and Deva Ramanan. Volumetric correspondence networks for optical flow. In NeurIPS, 2019.
[82] Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. SeqGAN: Sequence generative adversarial nets with policy gradient. arXiv:1609.05473, 2016.
[83] Feihu Zhang, Oliver J. Woodford, Victor Adrian Prisacariu, and Philip H.S. Torr. Separable flow: Learning motion cost volumes for optical flow estimation. In ICCV, pages 10807-10817, 2021.
[84] Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In ECCV, pages 649-666, 2016.
[85] Jiawei Zhao, Ke Yan, Yifan Zhao, Xiaowei Guo, Feiyue Huang, and Jia Li. Transformer-based dual relation graph for multi-label image recognition. In ICCV, pages 163-172, 2021.
[86] Bolei Zhou, Agata Lapedriza, Aditya Khosla, Aude Oliva, and Antonio Torralba. Places: A 10 million image database for scene recognition. IEEE T-PAMI, 2017.
| [] |
[
"Accelerating Personalized PageRank Vector Computation",
"Accelerating Personalized PageRank Vector Computation"
] | [
"Zhen Chen zhenchen21@m.fudan.edu.cn",
"Xingzhi Guo xingzguo@cs.stonybrook.edu",
"Baojian Zhou bjzhou@fudan.edu.cn",
"Deqing Yang yangdeqing@fudan.edu.cn",
"Steven Skiena skiena@cs.stonybrook.edu"
] | [
"Shanghai Key Laboratory of Data Science, Fudan University, Shanghai, China",
"Stony Brook University, Stony Brook, USA"
] | [
"Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '23)"
] | Personalized PageRank Vectors (PPVs) are widely used as fundamental graph-learning tools for learning graph embeddings, training graph neural networks, and detecting anomalous spammers. The well-known local FwdPush algorithm [5] approximates PPVs and has a sublinear rate of O(1/(αε)). A recent study [51] found that when high precision is required, FwdPush is similar to the power iteration method, and its run time is pessimistically bounded by O((m/α) log(1/ε)). This paper looks closely at calculating PPVs for both directed and undirected graphs. By leveraging the linear invariant property, we show that FwdPush is a variant of Gauss-Seidel and propose a Successive Over-Relaxation based method, FwdPushSOR, to speed it up by slightly modifying FwdPush. Additionally, we prove FwdPush has a local linear convergence rate of O((vol(S)/α) log(1/ε)), enjoying the advantages of two existing bounds. We also design a new local heuristic push method that reduces the number of operations by 10-50 percent compared to FwdPush. For undirected graphs, we propose two momentum-based acceleration methods that can be expressed as one-line updates and speed up non-acceleration methods by O(1/√α). Our experiments on six real-world graph datasets confirm the efficiency of FwdPushSOR and the acceleration methods for directed and undirected graphs, respectively. | null | [
"https://export.arxiv.org/pdf/2306.02102v2.pdf"
] | 259,075,643 | 2306.02102 | 00870781a4ea20a951317cef32275b05121e12a6 |
Accelerating Personalized PageRank Vector Computation

Zhen Chen (Fudan University), Xingzhi Guo (Stony Brook University), Baojian Zhou (Fudan University), Deqing Yang (Fudan University), Steven Skiena (Stony Brook University)

Shanghai Key Laboratory of Data Science, Fudan University, Shanghai, China
Stony Brook University, Stony Brook, USA

KDD '23: Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, August 6-10, 2023, Long Beach, CA, USA. https://doi.org/10.1145/1122445.1122456

Keywords: Personalized PageRank; large-scale graph; local linear convergence; Successive Over-Relaxation
Personalized PageRank Vectors (PPVs) are widely used as fundamental graph-learning tools for learning graph embeddings, training graph neural networks, and detecting anomalous spammers. The well-known local FwdPush algorithm [5] approximates PPVs and has a sublinear rate of O(1/(αε)). A recent study [51] found that when high precision is required, FwdPush is similar to the power iteration method, and its run time is pessimistically bounded by O((m/α) log(1/ε)). This paper looks closely at calculating PPVs for both directed and undirected graphs. By leveraging the linear invariant property, we show that FwdPush is a variant of Gauss-Seidel and propose a Successive Over-Relaxation based method, FwdPushSOR, to speed it up by slightly modifying FwdPush. Additionally, we prove FwdPush has a local linear convergence rate of O((vol(S)/α) log(1/ε)), enjoying the advantages of two existing bounds. We also design a new local heuristic push method that reduces the number of operations by 10-50 percent compared to FwdPush. For undirected graphs, we propose two momentum-based acceleration methods that can be expressed as one-line updates and speed up non-acceleration methods by O(1/√α). Our experiments on six real-world graph datasets confirm the efficiency of FwdPushSOR and the acceleration methods for directed and undirected graphs, respectively.
1 INTRODUCTION
As fundamental graph-learning tools, Personalized PageRank Vectors (PPVs) [31] have been widely used in classic graph mining tasks such as detecting anomalous spammers [3,4,9,45], and modern graph representation learning methods such as graph embeddings [26,43,49] and graph neural networks [11,12,14,17,20,21,28,34,47,48,55]. PPVs effectively capture the local proximity of graph nodes, making them useful for training improved graph neural network models and designing effective clustering algorithms [5]. As a result, efficient computation of PPVs is crucial for the current field of graph representation learning.
The well-known FwdPush algorithm [5, 10] is a widely used tool for computing PPVs due to its effectiveness in approximating PPVs, its easy implementation, and its local nature. The cost of each iteration of FwdPush depends only on the volumes of nodes near the target node, and its total run time complexity can be bounded by O(1/(αε)), where α is the damping factor and the precision parameter ε controls the per-entry precision of the PPV. This time complexity bound is independent of the graph structure, making FwdPush a preferred method over the power iteration method, which requires access to the entire graph in each iteration. However, a recent study [51] showed that when high precision is required, FwdPush behaves more like the power iteration method, with a pessimistically bounded run time complexity of O((m/α) log(1/ε)), where m is the number of edges in the graph. This bound only holds for ε < (2m)^{-1}, and it is unclear whether there exists a logarithmic-factor bound for ε ≥ (2m)^{-1}.
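To fix ideas, here is a minimal Python sketch of the local FwdPush procedure for a directed graph; the data-structure choices are ours for illustration, and every node is assumed to have out-degree at least one (dangling nodes need separate handling).

```python
from collections import deque

def fwd_push(adj, s, alpha=0.15, eps=1e-6):
    """Local FwdPush / APPR sketch. `adj` maps node -> list of
    out-neighbors. Pushes residual mass from the source s until
    every node u satisfies r[u] < eps * deg(u); returns the
    per-node PPV estimates p."""
    p, r = {}, {s: 1.0}
    queue, in_queue = deque([s]), {s}
    while queue:
        u = queue.popleft()
        in_queue.discard(u)
        r_u, d_u = r[u], len(adj[u])
        if r_u < eps * d_u:            # residual too small to push
            continue
        p[u] = p.get(u, 0.0) + alpha * r_u   # keep the alpha fraction
        r[u] = 0.0
        share = (1.0 - alpha) * r_u / d_u    # spread the rest
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share
            if v not in in_queue and r[v] >= eps * len(adj[v]):
                queue.append(v)
                in_queue.add(v)
    return p
```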
A natural question we address in this paper is Q1: whether FwdPush has a locally linear convergence rate, meaning that the per-epoch complexity is locally dependent on the graph while the total number of epochs is still bounded by a logarithmic factor O(log(1/ε)). To answer this question, a key observation is that when α is close to 1, the local push method will not explore nodes far from the current target node, and thus the total run time per epoch remains local. It has been proven that when the graph is undirected, FwdPush is essentially a coordinate descent method, and computing an approximate PPV corresponds to an ℓ1-regularized optimization problem, which has a sparse solution [18]. This suggests that
FwdPush is truly local. However, this equivalence is based on the assumption that the graph is undirected, and it is still unknown Q2: whether a similar optimization-algorithm equivalence exists for directed graphs.
Questions Q1 and Q2 motivate us to study the PPV computation further. In this paper, we show for the first time that the well-known FwdPush algorithm is a variant of Gauss-Seidel when the underlying graph is directed. This is due to the linear invariant property of FwdPush, which means that the Gauss-Seidel updates for each coordinate of the target linear system are equal to the residual updates of FwdPush. We then propose to use a Successive Over-Relaxation (SOR) based method to speed up FwdPush, namely FwdPushSOR. The advantage of FwdPushSOR is that it speeds up the original method and can be naturally applied to other variants of FwdPush [51], even ones for dynamic graph settings [54]. Furthermore, to study the convergence rate better, we prove a locally linear convergence rate of O((vol(S)/α) log(1/ε)), where the precision ε is any positive number and vol(S) is the expected volume of the subset of active nodes explored. Our analysis is simplified by adding a dummy node to the queue and considering only the nodes with non-zero residuals. For undirected graphs with small α, momentum-based acceleration methods can accelerate by a factor of O(1/√α). Both acceleration-based methods can be implemented as one-line iteration updates. Our contributions are summarized as follows:
• For the first time, we show that the well-known FwdPush algorithm is a variant of Gauss-Seidel. To further improve its performance, we propose FwdPushSOR, a speed-up local method of the original FwdPush based on the SOR technique.
• We prove a locally linear convergence rate of FwdPush with a complexity of $O\big(\overline{\operatorname{vol}}(S)\cdot\frac{1}{\alpha}\log\frac{1}{\epsilon}\big)$, which combines the advantages of existing bounds: the expected volume of explored nodes, $\overline{\operatorname{vol}}(S)$, is locally dependent on the underlying graph. Based on this insight, we design FwdPush-Mean, a new local push variant that reduces the number of operations by 10-50% for most target nodes.
• For undirected graphs, we propose using the Heavy-Ball (HB) and Nesterov Accelerated Gradient (NAG) methods to calculate high-precision PPR vectors; they are $O(1/\sqrt{\alpha})$ times faster than power iteration and the FwdPush variants. Our methods can be implemented in just one line of code.
• We conduct experiments on six real-world graphs and find that FwdPushSOR and the acceleration methods significantly reduce the run time and the number of operations required. For example, the local SOR methods are about 3 times faster on undirected graphs and about 2 times faster on directed graphs when $\alpha = 0.15$.
The rest of the paper is organized as follows: In Sec. 2, we discuss related work. Notations and the definition of PPV are presented in Sec. 3. Sec. 4 gives the locality analysis of FwdPush. The accelerated algorithms for PPVs are presented in Sec. 5. Finally, we present our experimental results and conclusions in Sec. 6 and Sec. 7, respectively. Our code and datasets are available at https://github.com/ccczhen/AccPPR.
RELATED WORK
PPVs and power iteration-based accelerations. The Personalized PageRank computation traces its roots to the work of Jeh and Widom [31], who proposed using a personalized vector as a starting point for PageRank calculation instead of the uniform distribution used in the original PageRank algorithm [41]. Acceleration methods for directed graphs are mainly based on power iteration, accessing the whole graph once per iteration and thus incurring $O(m)$ per-iteration updates. Arasu et al. [6] used the Gauss-Seidel method to globally accelerate power iteration. Kamvar et al. [33] proposed to use Aitken extrapolation to speed up the PPV calculation, but its effectiveness largely depends on the underlying graph and can sometimes lead to insignificant improvements. Other acceleration techniques, including the inner-outer loop approach [23], and methods proposed by Lee et al. [36] and systematically studied by Langville and Meyer [35], have also been proposed to improve the power iteration-based method. Different from these methods, this paper mainly focuses on accelerating local methods.
FwdPush and its variants. The work of Andersen et al. [5] proposed the FwdPush (also known as Approximate Personalized PageRank, APPR) algorithm for approximating PageRank personalization vectors. Later, it was used to approximate columns of the PPV matrix [3]. The algorithm has a sublinear time complexity bound of $O\big(\frac{1}{\alpha\epsilon}\big)$ because a significant amount of residual is pushed out of the residual vector per iteration. It is worth mentioning that essentially the same idea as the local push method was also proposed in Berkhin [10]. Recent work by Wu et al. [51] showed that for $\epsilon < (2m)^{-1}$, FwdPush converges more like power iteration methods, leading to a total run time complexity of $O\big(m\log\frac{1}{\epsilon}\big)$. However, it remains unknown whether there is an exponential improvement when $\epsilon > (2m)^{-1}$, which corresponds to local approximation. We take a step further in this direction.
A recent study by Fountoulakis et al. [18] found that computing a PPV is equivalent to solving an $\ell_1$-regularized problem that can be treated via a variant of the coordinate descent algorithm. The reason is that the linear system can be reformulated as a quadratic, strongly convex optimization problem when the graph is undirected. However, this analysis only works for undirected graphs, as the objective of the optimization requires a symmetric matrix, and it is unclear how to apply the analysis to directed graph settings. When the graph is undirected, alternative global methods, such as the conjugate gradient method, exist but are complex to implement. Therefore, we explore using NAG- and HB-based methods to speed up the computation. The question of whether there exists a locally dependent bound of order $O\big(\frac{1}{\sqrt{\alpha}\,\epsilon}\big)$ for accelerated methods such as Accelerated Coordinate Descent [2, 37] or linear coupling [1] remains open, as asked by Fountoulakis and Yang [19].
Other related methods. Many Personalized PageRank-related algorithms [22, 29, 31, 39, 50] have been proposed, including for dynamic graph settings [8, 54]. These works mainly focus on computing the PPR for a single entry or a subset of entries. The generalized PageRank problem has been reviewed in [22] and systematically studied in [35, 38]. One can find more details in [22, 35, 38] and the references therein. Our technique may be of independent interest to these directions.
PRELIMINARY
Notations. We consider an unweighted simple graph G = (V, E), where V = {1, 2, ..., n} is the set of nodes and E ⊆ V × V is the set of edges with m = |E|. The underlying graph G is either directed or undirected depending on the context. Bold lower-case letters are column vectors, e.g., $\mathbf{x} \in \mathbb{R}^n$; bold capital letters, e.g., $\mathbf{A} \in \mathbb{R}^{n\times n}$, are matrices. $\mathbf{D} = \operatorname{diag}(d_1, d_2, \ldots, d_n)$ is the diagonal out-degree matrix of G.¹ The minimal and maximal degrees are denoted $d_{\min}$ and $d_{\max}$. Nei(u) is the set of neighbors of u. $\mathbf{A}$ denotes the adjacency matrix of G. The column-stochastic matrix associated with G is defined as $\mathbf{P} := \mathbf{A}^\top\mathbf{D}^{-1}$.² The teleportation parameter (a.k.a. damping factor) is $\alpha \in (0, 1)$ (usually $\alpha \in (0.0, 0.5)$ in practice). A vector at time t is denoted $\mathbf{x}^t$, and $\mathbf{x}^\top$ is its transpose. The volume of S ⊆ V is defined as $\operatorname{vol}(S) \triangleq \sum_{u\in S} d_u$. The support of $\mathbf{x}$ is the set of its nonzero indices, i.e., $\operatorname{supp}(\mathbf{x}) = \{u : x_u \ne 0, u \in V\}$. For any matrix $\mathbf{M}$, $M_{ij}$ is the element of $\mathbf{M}$ at the i-th row and j-th column, and $\mathbf{M}_{i,:}$ is the i-th row of $\mathbf{M}$. An indicator vector $\mathbf{e}_s \in \{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$ has value 1 in its s-th entry and 0 otherwise.
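For concreteness, the following small Python/SciPy sketch (our own code, not part of the paper) builds the out-degree matrix $\mathbf{D}$ and the column-stochastic $\mathbf{P} = \mathbf{A}^\top\mathbf{D}^{-1}$ from a toy edge list, following the notation above:

```python
import numpy as np
import scipy.sparse as sp

edges = np.array([[0, 1], [1, 2], [2, 0], [0, 2]])  # toy directed edge list (u -> v)
n = int(edges.max()) + 1

A = sp.csr_matrix((np.ones(len(edges)), (edges[:, 0], edges[:, 1])), shape=(n, n))
d = np.asarray(A.sum(axis=1)).ravel()               # out-degrees d_u
P = A.T @ sp.diags(1.0 / d)                          # column stochastic: P = A^T D^{-1}

assert np.allclose(P.sum(axis=0), 1.0)               # each column of P sums to one
```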
Personalized PageRank Vector
Given an underlying graph G = (V, E), an initial indicator vector $\mathbf{e}_s$, and a teleportation parameter $\alpha \in (0, 1)$, the PPV of a target node s is a probability vector $\boldsymbol{\pi}_s$ such that

$$\boldsymbol{\pi}_s = \alpha\,\mathbf{e}_s + (1-\alpha)\,\mathbf{A}^\top\mathbf{D}^{-1}\boldsymbol{\pi}_s, \qquad (1)$$

where we call $\boldsymbol{\pi}_s$ satisfying Equ. (1) a PPV. The above equation amounts to accessing the s-th column of a nonnegative matrix, that is,

$$\boldsymbol{\pi}_s = \alpha\big(\mathbf{I} - (1-\alpha)\,\mathbf{A}^\top\mathbf{D}^{-1}\big)^{-1}\mathbf{e}_s.$$

The definition of the PPV is a generalization of the Google matrix computation, where $\mathbf{e}_s = \mathbf{1}/n$ is used [41]. The standard method of solving the linear system (1) is the fixed-point iteration

$$\mathbf{x}^{t+1} = (1-\alpha)\,\mathbf{A}^\top\mathbf{D}^{-1}\mathbf{x}^t + \alpha\,\mathbf{e}_s.$$
As shown in [24], if we use $\mathbf{x}^0 = \mathbf{0}$, one immediately obtains $\|\mathbf{x}^t - \boldsymbol{\pi}^*_s\|_1 = (1-\alpha)^t$, where we denote $\boldsymbol{\pi}^*_s$ as the true solution of the PPV for the target node s. However, the above iteration needs to access the whole graph, thus incurring $O(m)$ run time per iteration. Next, we introduce the local FwdPush method and explain how it can obtain a good approximation of the PPV by exploring only a small set of nodes.
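As a minimal sketch (assuming the matrix P built as in the earlier snippet, and using our own function name), the fixed-point iteration is a few lines of Python; each step touches the whole graph, i.e. $O(m)$ work:

```python
import numpy as np

def power_iteration(P, s, alpha=0.15, t_max=100):
    """Fixed-point iteration x^{t+1} = (1-alpha) P x^t + alpha e_s."""
    n = P.shape[0]
    x = np.zeros(n)
    e_s = np.zeros(n); e_s[s] = 1.0
    for _ in range(t_max):          # l1 error decays as (1 - alpha)^t
        x = (1.0 - alpha) * (P @ x) + alpha * e_s
    return x
```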
FwdPush algorithm
An efficient first-in-first-out queue-based implementation of FwdPush is presented in Algo. 1. At a high level, FwdPush iteratively accesses nodes and their neighbors and moves mass from a distribution $\mathbf{r}$ to another distribution $\mathbf{p}$. At each iteration, each node u in Q satisfies $r_u \ge \epsilon\,d_u$; we call such a node an active node. Similarly, if a node has a low residual, i.e., $r_u < \epsilon\,d_u$, it is inactive. Initially, FwdPush has the source node s in Q. At each iteration, it maintains the updates of two vectors $\mathbf{p}$ and $\mathbf{r}$, where $\mathbf{r}$ is the residual vector and $\mathbf{p}$ is the estimation vector. For each active node u, it pushes $\alpha\,r_u$ of the residual to $p_u$ and spreads $(1-\alpha)\,r_u$ uniformly to Nei(u). It terminates when all $r_u < \epsilon\,d_u$. During these push operations one always has $\mathbf{p}, \mathbf{r} \ge 0$ and $\|\mathbf{p}\|_1 + \|\mathbf{r}\|_1 = 1$. It can be shown that the returned $\mathbf{p}$ is guaranteed to satisfy $|p_u - \pi^*_u| \le \epsilon\,d_u$ for all $u \in V$ (see details in [5]). The essential effectiveness of FwdPush comes from the fact that entries indexed by nodes near s have large magnitudes, and the entries of $\boldsymbol{\pi}^*$ follow a power-law distribution, as demonstrated in Fig. 7 in the Appendix. Therefore, FwdPush quickly approximates $\boldsymbol{\pi}^*$ by exploring only these nearby nodes. We aim to improve the speed of this local method; in the following section, we present our key findings.

¹ See Appendix A for dealing with dangling nodes.
² In case $d_u = 0$ for some u, $\mathbf{D}^{-1} = \mathbf{D}^{+}$, where $\mathbf{D}^{+}$ is the Moore-Penrose inverse of $\mathbf{D}$.
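The following minimal Python sketch mirrors the queue-based description of Algo. 1; it is our own illustration (not the paper's Numba implementation), and the adjacency lists `neighbors` and out-degrees `deg` are assumed inputs:

```python
from collections import deque

def fwd_push(neighbors, deg, s, alpha=0.15, eps=1e-6):
    n = len(neighbors)
    p, r = [0.0] * n, [0.0] * n
    r[s] = 1.0
    queue, in_queue = deque([s]), [False] * n
    in_queue[s] = True
    while queue:
        u = queue.popleft(); in_queue[u] = False
        res = r[u]
        p[u] += alpha * res                     # push alpha * r_u to the estimate
        r[u] = 0.0
        share = (1.0 - alpha) * res / deg[u]    # spread (1 - alpha) * r_u uniformly
        for v in neighbors[u]:
            r[v] += share
            if r[v] >= eps * deg[v] and not in_queue[v]:  # v became active
                queue.append(v); in_queue[v] = True
    return p, r                                 # |p_u - pi_u| <= eps * d_u at exit
```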
LOCALITY ANALYSIS OF FWDPUSH
This section presents the equivalence between FwdPush and a variant of Gauss-Seidel (G-S) and introduces a faster method based on the SOR technique. We demonstrate that FwdPush has a locally linear convergence rate and offer insights that could help find more effective variants.
FwdPush is a variant of Gauss-Seidel
To show that FwdPush is a variant of the G-S iteration (see Section 10.1.1 of [25]), recall that for solving the linear system

$$\mathbf{M}\mathbf{x} = \mathbf{b}, \qquad (2)$$
the G-S iteration updates $\mathbf{x}$ using the following online iteration

$$x_i^{t+1} = \frac{1}{M_{ii}}\Big(b_i - \sum_{j=1}^{i-1} M_{ij}\,x_j^{t+1} - \sum_{j=i+1}^{n} M_{ij}\,x_j^{t}\Big), \quad i \in S_t, \qquad (3)$$
where t indexes the current super-iteration, $x_j^{t+1}$ are the elements that have been updated up to time t+1, and $x_j^t$ are the entries yet to be updated. $S_t$ denotes the set of indices of $\mathbf{x}$ updated in the t-th super-iteration; note that $S_t = V$ for all t in the standard G-S iteration. The G-S iteration is usually used for solving system (2) when $\mathbf{M}$ is strictly diagonally dominant, and we note that $\mathbf{I} - (1-\alpha)\mathbf{A}^\top\mathbf{D}^{-1}$ is a strictly diagonally dominant matrix for a simple graph. The following theorem presents the equivalence between this variant of the Gauss-Seidel iteration and the FwdPush algorithm.

Theorem 1 (FwdPush is Gauss-Seidel). Each iteration update of $\mathbf{p}$ and $\mathbf{r}$ in Algo. 1 of the FwdPush(G, $\epsilon$, $\alpha$, s) algorithm is equivalent to an iteration of the Gauss-Seidel iteration defined in (3) with $\mathbf{b} = \alpha\,\mathbf{e}_s$ and $\mathbf{M} = \mathbf{I} - (1-\alpha)\mathbf{A}^\top\mathbf{D}^{-1}$. Furthermore, $S_t$ corresponds to the set of active nodes processed by Algo. 1 in the t-th epoch.³

Proof. The key to our proof is the linear invariant property, stated as follows: let $\mathbf{p}^t$ and $\mathbf{r}^t$ be the estimation and residual vectors of FwdPush(G, $\epsilon$, $\alpha$, s) at time t; then for all $t \ge 0$ we have the linear invariant property
$$\mathbf{r}^t = \mathbf{r}^0 - \big(\mathbf{I} - (1-\alpha)\,\mathbf{A}^\top\mathbf{D}^{-1}\big)\,\frac{\mathbf{p}^t}{\alpha}. \qquad (4)$$
To verify Equ. (4), note that it is trivially true at the initial time = 0 where 0 = 0. For all ≥ 1 and any active node , notice that FwdPush updates −1 and −1 as the following
= −1 + −1 · (5) = −1 − −1 · + (1 − ) −1 ⊤ −1 ,(6)
where Equ. (5) corresponds to Line 5 and Equ. (6) represents Line6-10 of Algo. 1 with the initial setup 0 = 0, 0 = . To simplify Equ. (6), one can reformulate it as
−1 = − (1 − ) ⊤ −1 −1 ( −1 − ).
is thus the sum of the left-hand side of the above over , that is,
= ∑︁ =1 −1 = − (1 − ) ⊤ −1 −1 ∑︁ =1 ( −1 − ) = − (1 − ) ⊤ −1 −1 0 − .
Move the above-inverted matrix to the left; we see the linear invariant property (4) is valid. Next, we show FwdPush is a variant type of G-S iteration defined in (3). Since we assume = − (1− ) ⊤ −1 and = , then = 1 for the simple graph. The G-S iteration of (3) can be rewritten as
+1 = + − ∑︁ =1 , // = +1 for <
where we can use to represent the updated up to time . Hence, each update can be represented as a vector form as the following +1 = + − ,:
= + 0 − − (1 − ) ⊤ −1 = + ,
where the second equality is from the definition of and , and the last equality follows from the linear invariant property (4). □
When $\alpha$ is small, $\mathbf{M}$ has a large condition number, which corresponds to slow convergence of FwdPush and G-S. Fortunately, Thm. 1 immediately tells us that acceleration techniques used for G-S can also be applied to FwdPush. To speed up the G-S procedure, we propose to use the Successive Over-Relaxation (SOR) technique [27, 53], a well-known method for accelerating G-S when solving diagonally dominant systems. To update $x_i$ using SOR, we have (note $M_{ii} = 1$)
$$x_i^{t+1} = (1-\omega)\,x_i^t + \omega\Big(b_i - \sum_{j=1}^{i-1} M_{ij}\,x_j^{t+1} - \sum_{j=i+1}^{n} M_{ij}\,x_j^{t}\Big) = (1-\omega)\,x_i^t + \omega\big(x_i^t + b_i - \mathbf{M}_{i,:}\,\mathbf{x}^t\big) = x_i^t + \omega\,\alpha\,r_i^t,$$

where the relaxation parameter $\omega \in (0, 2)$. The relaxed method simply differs by a factor of $\omega$. The next key point is maintaining the invariant property for $\mathbf{p}$ and $\mathbf{r}$: to do so, we apply Equ. (5) and (6) with $\omega$ times the original magnitudes. Hence, the corresponding updates of FwdPush become

$$\mathbf{p}^{t} = \mathbf{p}^{t-1} + \omega\,\alpha\,r_u^{t-1}\,\mathbf{e}_u, \qquad (7)$$
$$\mathbf{r}^{t} = \mathbf{r}^{t-1} - \omega\,r_u^{t-1}\,\mathbf{e}_u + \omega\,(1-\alpha)\,r_u^{t-1}\,\mathbf{A}^\top\mathbf{D}^{-1}\mathbf{e}_u, \qquad (8)$$

where we relax the assumption on $\mathbf{r}^t$ so that its entries may become negative.
Algorithm 2 FwdPushSOR(G, $\epsilon$, $\alpha$, s, $\omega$)
1: Initialization: $\mathbf{r} = \mathbf{e}_s$, $\mathbf{p} = \mathbf{0}$
2: Q = [s]
3: while Q ≠ ∅ do
4:   u = Q.pop()
5:   $p_u = p_u + \omega\,\alpha\,r_u$
6:   for v ∈ Nei(u) do
7:     $r_v = r_v + \omega\,(1-\alpha)\,r_u/d_u$
8:     if $|r_v| \ge \epsilon\,d_v$ and v ∉ Q then Q.push(v)
9:   end for
10:  $r_u = (1-\omega)\,r_u$; if $|r_u| \ge \epsilon\,d_u$ then Q.push(u)

This violation of nonnegativity enables us to move more residual mass at once to $\mathbf{p}$, thereby speeding up the entire procedure. Thus, based on the over-relaxed Equ. (7) and (8), we propose FwdPushSOR, shown in Algo. 2, which is simple to implement and requires only the extra relaxation parameter $\omega$. The key invariant of FwdPushSOR is that $\omega$ times the usual magnitude is removed from $r_u$ and distributed to u's out-neighbors and to the estimate $p_u$. Furthermore, this SOR-based method is still local. It is worth noting that this approach can generally be applied to other variants of FwdPush [51, 54]. For instance, the PwrPush algorithm proposed in [51] can easily be modified to incorporate the SOR technique; we call this method PwrPushSOR.
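To make the over-relaxed push concrete, here is a minimal Python sketch of Algo. 2 (our own naming, not the paper's Numba code); it differs from the plain FwdPush sketch above only in that every push moves $\omega$ times the usual mass and residuals are allowed to turn negative:

```python
from collections import deque

def fwd_push_sor(neighbors, deg, s, alpha=0.15, eps=1e-6, omega=1.5):
    n = len(neighbors)
    p, r = [0.0] * n, [0.0] * n
    r[s] = 1.0
    queue, in_queue = deque([s]), [False] * n
    in_queue[s] = True
    while queue:
        u = queue.popleft(); in_queue[u] = False
        res = r[u]
        p[u] += omega * alpha * res            # over-relaxed push, Equ. (7)
        r[u] = (1.0 - omega) * res             # residual may become negative, Equ. (8)
        share = omega * (1.0 - alpha) * res / deg[u]
        for v in neighbors[u]:
            r[v] += share
            if abs(r[v]) >= eps * deg[v] and not in_queue[v]:
                queue.append(v); in_queue[v] = True
        if abs(r[u]) >= eps * deg[u] and not in_queue[u]:
            queue.append(u); in_queue[u] = True
    return p, r
```

Setting `omega = 1` recovers plain FwdPush; Eq. (9) below gives the optimal $\omega$ for undirected graphs (about 1.31 for $\alpha = 0.15$).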
Parameter choice for $\omega$. First of all, FwdPushSOR exactly recovers FwdPush when $\omega = 1$. Note that $\mathbf{M}$ is symmetric and positive definite for undirected graphs, and SOR converges for any $\omega \in (0, 2)$ (see Thm. 4.4.12 of [27]). The optimal $\omega$ is

$$\omega^* = 1 + \bigg(\frac{1-\alpha}{1 + \sqrt{1-(1-\alpha)^2}}\bigg)^2. \qquad (9)$$
For directed graphs, choosing $\omega$ is more difficult since the matrix $\mathbf{M}$ is not easy to characterize. However, one can use the following heuristic: start $\omega$ from the optimal value (9); if the method fails, decrease $\omega$ by a constant step until it reaches 1. In practice, $\omega$ can reach about 1.5 when $\alpha = 0.15$, which makes the method more than two times faster than the existing FwdPush. The convergence of SOR-based methods on directed graphs remains an open problem for future research. Unlike the standard G-S iteration, FwdPush updates a subset $S_t$ of V at each epoch. In recent work, Wu et al. [51] proved that the queue-based implementation of FwdPush is similar to the power iteration method; hence, obtaining the final solution with precision $\epsilon < (2m)^{-1}$ requires $O\big(m\log\frac{1}{\epsilon}\big)$ operations. In the following, we improve this analysis and show that FwdPush converges locally linearly to $\boldsymbol{\pi}^*$.
Local linear convergence of FwdPush
To establish the locally linear convergence rate with a clearer illustration, we present a slightly modified FwdPush that adds a dummy node ‡, as presented in Algo. 3, where the dummy node helps identify the super epochs. The parameter t indexes the epoch id, and t' indexes the active-node processing time, as shown in Fig. 1.
Algorithm 3 FwdPush(G, $\epsilon$, $\alpha$, s) with a dummy node
1: Initialization: $\mathbf{r} = \mathbf{e}_s$, $\mathbf{p} = \mathbf{0}$
2: Q = [s, ‡] // Dummy node ‡ at the end of Q
3: t = 0, t' = 0
4: while Q.size() > 1 do
5:   u = Q.pop()
6:   if u = ‡ then
7:     t = t + 1, Q.push(‡) // A new epoch begins
8:     continue
9:   end if
10:  $p_u = p_u + \alpha\,r_u$
11:  for v ∈ Nei(u) do
12:    $r_v = r_v + (1-\alpha)\,r_u/d_u$
13:    if $r_v \ge \epsilon\,d_v$ and v ∉ Q then Q.push(v)
14:  end for
15:  $r_u = 0$, t' = t' + 1

Figure 1: The t-th epoch of FwdPush in Algo. 3. FwdPush maintains a queue Q that contains all active nodes. At the beginning of the t-th epoch, $S_t$ contains all active nodes (red), which will be processed in the t-th epoch; new active nodes (blue) generated in the current epoch will be processed in the next.
The only difference between Algo. 3 and Algo. 1 is that Algo. 3 always keeps ‡ in the queue until no active nodes remain to be pushed into Q. A new epoch begins whenever ‡ pops out and is pushed into Q again. In the next theorem, we prove that FwdPush admits a locally linear convergence rate and that its total run time is only locally dependent on G. We use the following notation: at the beginning of the t-th epoch, the sets of active and inactive nodes are denoted $S_t = \{u : r_u^t \ge \epsilon\,d_u, u \in V\}$ and $U_t = \{u : 0 < r_u^t < \epsilon\,d_u, u \in V\}$, respectively. We also denote the support of $\mathbf{r}^t$ as $I_t = \operatorname{supp}(\mathbf{r}^t)$. By these definitions, the total number of operations of FwdPush is the volume of all active nodes over all epochs, i.e., $\sum_{t=1}^{T}\operatorname{vol}(S_t)$.

Theorem 2 (Local linear convergence of FwdPush). Let $S_t$ and $U_t$ be the sets of active and inactive nodes, respectively, and let $\mathbf{p}^t$ be the estimated PPR vector updated by FwdPush after the t-th epoch, with $\mathbf{p}^T =$ FwdPush(G, $\epsilon$, $\alpha$, s). Then, for all $t \ge 0$, the $\ell_1$ estimation error $\|\mathbf{p}^{t+1} - \boldsymbol{\pi}^*\|_1$ has a locally linear convergence rate, that is,

$$\|\mathbf{p}^{t+1} - \boldsymbol{\pi}^*\|_1 \le (1 - \alpha\gamma_t)\,\|\mathbf{p}^{t} - \boldsymbol{\pi}^*\|_1, \qquad (10)$$

where $\gamma_t$ is the local convergence factor $\gamma_t := \sum_{u\in S_t} d_u \big/ \sum_{v\in I_t} d_v$. Furthermore, the total run time of FwdPush is locally dependent on G and is bounded by

$$\sum_{t=1}^{T}\operatorname{vol}(S_t) \le \overline{\operatorname{vol}}(S_{1:T})\cdot\frac{1}{\alpha\,\bar\gamma_{1:T}}\,\log\epsilon_{\alpha,T}, \qquad (11)$$

where $\overline{\operatorname{vol}}(S_{1:T})$ is the average volume, $\overline{\operatorname{vol}}(S_{1:T}) = \sum_{t=1}^{T}\operatorname{vol}(S_t)/T$, $\bar\gamma_{1:T} = \sum_{t=1}^{T}\gamma_t/T$, and $\epsilon_{\alpha,T}$ is defined in Lemma 4.
Before proving the theorem, we introduce a key lemma, a refinement of Wu et al. [51], as follows.

Lemma 3 (Locally linear decay of $\|\boldsymbol{\pi}^* - \mathbf{p}^t\|_1$). Let $S_t$ and $U_t$ be the sets of active and inactive nodes at the beginning of the t-th epoch, $t \in \{0, 1, \ldots, T\}$. Then, after the t-th epoch, we have $\|\boldsymbol{\pi}^* - \mathbf{p}^{t+1}\|_1 \le (1 - \alpha\gamma_t)\,\|\boldsymbol{\pi}^* - \mathbf{p}^{t}\|_1$, where $\gamma_t$ is the local convergence factor $\gamma_t := \sum_{u\in S_t} d_u / \sum_{v\in I_t} d_v$.

Proof. We prove this lemma by showing that a significant amount of residual is pushed out of $\mathbf{r}^t$ into $\mathbf{r}^{t+1}$, with a corresponding gain from $\mathbf{p}^t$ into $\mathbf{p}^{t+1}$. The set of active nodes $S_t$ is processed in order; we use the time t' to index these nodes. For the i-th active node $u_{t'_i}$, the updates from Line 10 to Line 15 of Algo. 3 give the following iterations
$$\mathbf{r}^{t} = \mathbf{r}^{t'_0} \rightarrow \mathbf{r}^{t'_1} \rightarrow \cdots \rightarrow \mathbf{r}^{t'_{|S_t|}} = \mathbf{r}^{t+1}, \qquad \mathbf{p}^{t} = \mathbf{p}^{t'_0} \rightarrow \mathbf{p}^{t'_1} \rightarrow \cdots \rightarrow \mathbf{p}^{t'_{|S_t|}} = \mathbf{p}^{t+1}.$$

For the t-th epoch, the total amount of residual pushed out is at least $\alpha\sum_{u\in S_t} r_u^{t}$ (Line 10). That is,

$$\|\mathbf{r}^{t}\|_1 - \|\mathbf{r}^{t+1}\|_1 \ge \alpha\sum_{u\in S_t} r_u^{t}. \qquad (12)$$

By the definition of $S_t$ and $U_t$, we have $r_u^t \ge \epsilon\,d_u$ for all $u \in S_t$ and $0 < r_v^t < \epsilon\,d_v$ for all $v \in U_t$. Summing these inequalities over all active nodes u and inactive nodes v, we have

$$\frac{\sum_{u\in S_t} r_u^t}{\sum_{u\in S_t} d_u} \ge \epsilon > \frac{\sum_{v\in U_t} r_v^t}{\sum_{v\in U_t} d_v},$$

which, by the mediant inequality, indicates

$$\frac{\sum_{u\in S_t} r_u^t}{\sum_{u\in S_t} d_u} > \frac{\sum_{u\in S_t} r_u^t + \sum_{v\in U_t} r_v^t}{\sum_{u\in S_t} d_u + \sum_{v\in U_t} d_v} = \frac{\sum_{w\in I_t} r_w^t}{\sum_{w\in I_t} d_w} = \frac{\|\mathbf{r}^t\|_1}{\sum_{w\in I_t} d_w}, \qquad (13)$$

where the last equality is due to the fact that $I_t$ indexes all nonzero entries of $\mathbf{r}^t$, i.e., $\|\mathbf{r}^t\|_1 = \sum_{w\in I_t} r_w^t$. On the other hand, consider the t-th iteration error of FwdPush, $\|\boldsymbol{\pi}^* - \mathbf{p}^t\|_1$; clearly, when t = 0, $\|\boldsymbol{\pi}^* - \mathbf{p}^0\|_1 = 1$. For $t \ge 0$, we have

$$\|\boldsymbol{\pi}^* - \mathbf{p}^{t+1}\|_1 = \|\boldsymbol{\pi}^* - \mathbf{p}^{t}\|_1 - \alpha\sum_{t'\in S_t} r_{u_{t'}}^{t'} = \bigg(1 - \frac{\alpha\sum_{t'\in S_t} r_{u_{t'}}^{t'}}{\|\mathbf{r}^t\|_1}\bigg)\,\|\boldsymbol{\pi}^* - \mathbf{p}^{t}\|_1 \le \bigg(1 - \frac{\alpha\sum_{u\in S_t} d_u}{\sum_{v\in I_t} d_v}\bigg)\,\|\boldsymbol{\pi}^* - \mathbf{p}^{t}\|_1,$$

where the last inequality follows from (13) together with $\sum_{t'\in S_t} r_{u_{t'}}^{t'} \ge \sum_{u\in S_t} r_u^t$. □
Our new local convergence factor $\gamma_t = \sum_{u\in S_t} d_u / \sum_{v\in I_t} d_v$ is strictly greater than the corresponding factor $\sum_{u\in S_t} d_u / m$ implicit in Wu et al. [51], meaning a better convergence rate. The other key ingredient of our theorem is an estimate of the total number of epochs needed. The observation is that the total amount of residual left in $\mathbf{r}^T$ is still relatively significant, so the residual of the last epoch, $\|\mathbf{r}^T\|_1$, is lower bounded. We state the upper bound of T as follows.
Lemma 4. Let T be the total number of epochs used by FwdPush(G, $\epsilon$, $\alpha$, s); then it can be bounded by

$$T \le \frac{1}{\alpha\,\bar\gamma_{1:T}}\,\log\epsilon_{\alpha,T}, \qquad (14)$$

where $\epsilon_{\alpha,T} = 1/\big((1-\alpha)\,\epsilon\,|I_T|\big)$ and $\bar\gamma_{1:T} = \sum_{t=1}^{T}\gamma_t/T$.
Proof. After the last epoch T, for each node u with nonzero residual there was an active neighbor of u, denoted $v_u$, which pushed some residual $(1-\alpha)\,r_{v_u}/d_{v_u}$ to u. Denoting each such amount of residual as $\tilde r_u$, for all $u \in I_T$ we have

$$\|\mathbf{r}^T\|_1 = \sum_{u\in I_T} r_u^T \ge \sum_{u\in I_T}\tilde r_u = \sum_{u\in I_T}(1-\alpha)\,\frac{r_{v_u}}{d_{v_u}} \ge \sum_{u\in I_T}(1-\alpha)\,\epsilon = (1-\alpha)\,\epsilon\,|I_T|.$$

From (10) of Lemma 3, the upper bound of $\|\mathbf{r}^T\|_1$ is

$$\|\mathbf{r}^T\|_1 \le \prod_{t=1}^{T}\bigg(1 - \frac{\alpha\sum_{u\in S_t} d_u}{\sum_{v\in I_t} d_v}\bigg)\,\|\boldsymbol{\pi}^* - \mathbf{p}^0\|_1 = \prod_{t=1}^{T}(1-\alpha\gamma_t).$$

Combining the lower and upper bounds, we obtain

$$(1-\alpha)\,\epsilon\,|I_T| \le \prod_{t=1}^{T}(1-\alpha\gamma_t).$$
Taking the logarithm of both sides and using the fact $\log(1-x) < -x$, we reach

$$\alpha\,\bar\gamma_{1:T}\,T \le \log\frac{1}{(1-\alpha)\,\epsilon\,|I_T|} = \log\epsilon_{\alpha,T},$$

where we simply denote $1/((1-\alpha)\,\epsilon\,|I_T|)$ as $\epsilon_{\alpha,T}$, which depends only on $\epsilon$ and T when $\alpha$ and G are fixed. □

Proof of Theorem 2. To obtain our main theorem, we apply Lemma 3 and Lemma 4 and notice that

$$\sum_{t=1}^{T}\operatorname{vol}(S_t) = T\cdot\overline{\operatorname{vol}}(S_{1:T}) \le \overline{\operatorname{vol}}(S_{1:T})\cdot\frac{1}{\alpha\,\bar\gamma_{1:T}}\,\log\epsilon_{\alpha,T}. \qquad \square$$

Our locality analysis provides intuition about the performance of local FwdPush. The total convergence rate is determined by $\gamma_t$, which is always a positive number; note that $\gamma_0 = 1$ for the first epoch. The average volume accessed by FwdPush is controlled by

$$\operatorname{vol}\big(\operatorname{supp}(\boldsymbol{\pi}^{\epsilon}_{\alpha,s})\big) \le \overline{\operatorname{vol}}(S_{1:T}) \le \operatorname{vol}\big(\operatorname{supp}(\boldsymbol{\pi}_{\alpha,s})\big),$$

where $\boldsymbol{\pi}^{\epsilon}_{\alpha,s}$ denotes the $\epsilon$-truncated PPV. This inequality indicates that when the solution is sparse, the accessed volume will be much less than m. Another observation is that when $\epsilon \to 0$, we have $\bar\gamma_{1:T} \to 1$ and $\overline{\operatorname{vol}}(S_{1:T}) \to m$; hence, the method recovers the power-iteration-like behavior studied in [51]. To see how well the bound estimates the total operations, we conduct experiments on the dblp graph applying FwdPush over different $\epsilon$, as illustrated in Fig. 2. Compared to the two known bounds, our parameterized local bound is tighter, and we find similar patterns on other graph datasets, detailed in the appendix. This analysis can also guide us in finding better variants. Noticing that $\gamma_t$ trades off the ratio of active nodes processed in each epoch, we can postpone pushing nodes whose degrees are large or whose residual magnitudes are small. Doing so saves operations and lets those nodes accumulate more residual for the next epoch.
To make this idea concrete, at the beginning of each epoch we push only the subset of active nodes with a large magnitude ratio, where the magnitude factor of node u is $r_u/d_u$. At each epoch, we check whether the current magnitude factor of a node exceeds the mean of these ratios (an easy quantity to compute at the beginning of each super epoch). We postpone $u \in S_t$ to the next epoch if its current magnitude factor satisfies

$$\frac{r_u}{d_u} < \bar\theta \triangleq \frac{1}{|S_t|}\sum_{v\in S_t}\frac{r_v}{d_v},$$
which means it is not yet worth pushing u; deferring u to the next epoch lets $r_u$ accumulate more residual from u's neighbors. We implemented this idea, called FwdPush-Mean, and present it in Algo. 4 of the appendix. Interestingly, FwdPush-Mean effectively reduces the number of operations of FwdPush: Fig. 3 illustrates the number of operations saved by the proposed FwdPush-Mean (see details of Algo. 4 in the appendix). Although the run time may not be significantly reduced (as observed in our experiments), this is still valuable in resource-limited scenarios, since a significant number of operations are saved; this method could also aid in finding better heuristics for improving FwdPush. A detailed description of Algo. 4 can be found in the appendix. Our local linear convergence guarantee for FwdPush allows for improvement through a trade-off between the number of active nodes explored in each epoch, $S_t$, and the total number of epochs, $O\big(\frac{1}{\alpha\bar\gamma_{1:T}}\log\epsilon_{\alpha,T}\big)$. If $\bar\gamma_{1:T}$ is large, the total number of epochs is expected to be small, and $\log\epsilon_{\alpha,T}$ has a minimal effect. However, the total number of epochs greatly depends on $\alpha$: if $\alpha$ is small, FwdPush becomes slow, and SOR can speed it up. In the next section, we demonstrate that we can save a further $1/\sqrt{\alpha}$ factor of run time by employing global acceleration-based methods when the underlying graph is undirected.
MOMENTUM-BASED METHODS FOR PPVS
This section analyzes the calculation of PPV when G is undirected. It reformulates the computation of PPV as a convex optimization problem and employs an acceleration-based technique to solve the linear system.
Quadratic optimization lens
We can rewrite the linear system (1) as a strongly convex optimization problem. Recall that our target linear system is $(\mathbf{I} - (1-\alpha)\,\mathbf{A}\mathbf{D}^{-1})\boldsymbol{\pi} = \alpha\,\mathbf{e}_s$ (for an undirected graph, $\mathbf{A}^\top = \mathbf{A}$). We multiply both sides by $\mathbf{D}^{-1/2}$ and rewrite the system so that the matrix is symmetric. Denote $\tilde{\mathbf{W}} := \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$, $\mathbf{x} := \mathbf{D}^{-1/2}\boldsymbol{\pi}$, and $\tilde{\mathbf{s}} := \alpha\,\mathbf{D}^{-1/2}\mathbf{e}_s$; then (1) can be reformulated as

$$\big(\mathbf{I} - (1-\alpha)\,\tilde{\mathbf{W}}\big)\,\mathbf{x} = \tilde{\mathbf{s}}.$$

Notice that $\mathbf{I} - (1-\alpha)\,\tilde{\mathbf{W}}$ is the symmetric normalized Laplacian parameterized by $(1-\alpha)$; that is, we shall solve the above linear system.
We define the minimization of a quadratic objective function as

$$\arg\min_{\mathbf{x}\in\mathbb{R}^n} f(\mathbf{x}) \triangleq \frac{1}{2}\,\mathbf{x}^\top\big(\mathbf{I} - (1-\alpha)\,\tilde{\mathbf{W}}\big)\mathbf{x} - \tilde{\mathbf{s}}^\top\mathbf{x}, \qquad (16)$$

where $\tilde{\mathbf{W}}$ is no longer a stochastic matrix, but $\mathbf{I} - (1-\alpha)\,\tilde{\mathbf{W}}$ is still positive definite. Clearly, f is a strongly convex function; taking the gradient $\nabla f(\mathbf{x}) := (\mathbf{I} - (1-\alpha)\,\tilde{\mathbf{W}})\mathbf{x} - \tilde{\mathbf{s}}$ and setting it to zero, we see that f has the unique solution $\mathbf{x}^* = \mathbf{D}^{-1/2}\boldsymbol{\pi}^*$. Therefore, the original solution $\boldsymbol{\pi}^*$ can be recovered as $\mathbf{D}^{1/2}\mathbf{x}^*$. To characterize the strong convexity and smoothness parameters of f (see the definitions in Appendix B), denote $\lambda_1, \ldots, \lambda_n$ as the eigenvalues of $\tilde{\mathbf{W}}$ with $1 = \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n \ge -1$, and let $\tilde\lambda_1, \ldots, \tilde\lambda_n$ be the eigenvalues of the normalized Laplacian. By a fact from spectral graph theory [15], the normalized Laplacian $\mathbf{I} - \mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$ has eigenvalues $0 = \tilde\lambda_1 \le \cdots \le \tilde\lambda_n \le 2$. Therefore, the eigenvalues of $\mathbf{I} - (1-\alpha)\,\tilde{\mathbf{W}}$ lie in the range

$$\lambda\big(\mathbf{I} - (1-\alpha)\,\tilde{\mathbf{W}}\big) \in [\alpha,\, 2-\alpha].$$
The above reformulation is commonly used in the optimization community and has also been studied in Fountoulakis et al. [18], where it was observed that, when the graph is undirected, FwdPush is a special case of coordinate descent. The coordinate descent iteration for the above problem (16) is

$$\mathbf{x}^{t+1} = \mathbf{x}^t - \eta_i\,\nabla_i f(\mathbf{x}^t)\,\mathbf{e}_i, \qquad (17)$$

where each step size $\eta_i$ should be chosen such that $\eta_i \le 1/L_i$, with $L_i$ the coordinate-wise Lipschitz constant satisfying $|\nabla_i f(\mathbf{x} + \eta\,\mathbf{e}_i) - \nabla_i f(\mathbf{x})| \le L_i\,\eta$ for all $\mathbf{x} \in \mathbb{R}^n$. Clearly, $L_i$ corresponds to the i-th diagonal entry of $\mathbf{I} - (1-\alpha)\,\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}$, so we always have $L_i \le 1$. Setting $\eta_i = 1$ and noting that $-\nabla f(\mathbf{x}^t) = \alpha\,\mathbf{D}^{-1/2}\mathbf{r}^t$ in (17), we recover FwdPush accordingly. The coordinate descent algorithm can be accelerated by choosing a momentum strategy [2, 37].
The challenge of obtaining a faster iteration based on the accelerated coordinate descent method remains open due to the lack of a linear-invariant property for the momentum method [19]. Nevertheless, we use standard momentum techniques and leverage the advantage of contiguous memory access. Our momentum-based methods can each be expressed in a single line, and the local linear convergence suggests that the number of iterations is $O\big(\frac{1}{\alpha\bar\gamma_{1:T}}\log\epsilon_{\alpha,T}\big)$. In the next section, we demonstrate that one can save a $1/\sqrt{\alpha}$ factor of run time.
Accelerated methods
Nesterov's Accelerated Gradient (NAG) method. Given a strongly convex function f, the NAG method [40] updates two vectors $\mathbf{x}^t$ and $\mathbf{y}^t$ at the t-th iteration; it updates $\mathbf{x}$ with the help of $\mathbf{y}$ (with $\mathbf{y}^0 = \mathbf{x}^0$). If we set $\eta = 1/(2-\alpha)$ and $\theta = (1-c)/(1+c)$, with the inverse square root of the condition number $c = \sqrt{\alpha/(2-\alpha)}$, then we can write the iteration in one line as the following procedure

$$\text{NAG:}\quad \mathbf{x}^{t+1} = 2\bar\eta\,\big(\mathbf{x}^t + \tilde{\mathbf{W}}\mathbf{x}^t\big) - (1-c)\,\bar\eta\,\big(\mathbf{x}^{t-1} + \tilde{\mathbf{W}}\mathbf{x}^{t-1}\big) + \frac{\tilde{\mathbf{s}}}{2-\alpha}, \qquad (18)$$

where $\bar\eta = \frac{1-\alpha}{(2-\alpha)(1+c)}$, $\mathbf{x}^0 = \mathbf{0}$, and $\mathbf{x}^1 = \tilde{\mathbf{s}}/(2-\alpha)$. Notice that $(\mathbf{x}^t + \tilde{\mathbf{W}}\mathbf{x}^t)$ can be reused in the next iteration; hence, the per-iteration cost is at most m operations. We then have the following convergence rate for the $\ell_1$ estimation error of $\boldsymbol{\pi}^*$.
Theorem 6. Let $\mathbf{x}^{t+1}$ be the vector returned by the NAG method using iteration (18) (with $\mathbf{x}^0 = \mathbf{y}^0 = \mathbf{0}$), and let $\mathbf{p}^{t+1} = \mathbf{D}^{1/2}\mathbf{x}^{t+1}$; then the estimation error of $\boldsymbol{\pi}^*$ is upper bounded by

$$\|\mathbf{p}^{t} - \boldsymbol{\pi}^*\|_1 \le \sqrt{d_{\max}\,n}\,\sqrt{\frac{2}{\alpha}}\,\exp\bigg(-\frac{t-1}{2\sqrt{(2-\alpha)/\alpha}}\bigg). \qquad (19)$$

And the total number of operations required for $\epsilon$-precision per entry of $\mathbf{p}^t - \boldsymbol{\pi}^*$, i.e., $\|\mathbf{p}^t - \boldsymbol{\pi}^*\|_\infty \le \epsilon$, is

$$O\bigg(\frac{m}{\sqrt{\alpha}}\,\log\bigg(\frac{1}{\epsilon}\sqrt{\frac{2\,d_{\max}\,n}{\alpha}}\bigg)\bigg). \qquad (20)$$
Proof. The proof can be found in Appendix C. □

Polyak's Heavy Ball (HB) method. Similar to NAG, the other popular momentum-based acceleration method is the Heavy Ball method [42]. Different from NAG, the HB method updates $\mathbf{x}$ as

$$\mathbf{x}^{t+1} = \mathbf{x}^t - \eta\,\nabla f(\mathbf{x}^t) + \theta\,(\mathbf{x}^t - \mathbf{x}^{t-1}),$$

where we set $\eta = 4/\big(\sqrt{2-\alpha}+\sqrt{\alpha}\big)^2$ and $\theta = \big(\frac{1-c}{1+c}\big)^2$; we then reach the following update method:

$$\mathbf{x}^{t+1} = \frac{2(1-\alpha)(1+c^2)}{(1+c)^2}\,\tilde{\mathbf{W}}\mathbf{x}^{t} - \frac{(1-c)^2}{(1+c)^2}\,\mathbf{x}^{t-1} + \frac{2(1+c^2)}{(1+c)^2}\,\tilde{\mathbf{s}}.$$

By the change of variable $\mathbf{p}^{t+1} = \mathbf{D}^{1/2}\mathbf{x}^{t+1}$, we have

$$\text{HB:}\quad \mathbf{p}^{t+1} = \frac{2(1-\alpha)(1+c^2)}{(1+c)^2}\,\mathbf{A}\mathbf{D}^{-1}\mathbf{p}^{t} - \frac{(1-c)^2}{(1+c)^2}\,\mathbf{p}^{t-1} + \frac{2\alpha(1+c^2)}{(1+c)^2}\,\mathbf{e}_s.$$
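As a concrete illustration, here is a minimal NumPy sketch (our own code and naming, not the paper's Numba implementation) of the one-line HB iteration above for an undirected graph with column-stochastic $\mathbf{P} = \mathbf{A}\mathbf{D}^{-1}$; the NAG iteration (18) can be implemented analogously by caching $(\mathbf{x}^t + \tilde{\mathbf{W}}\mathbf{x}^t)$:

```python
import numpy as np

def ppr_heavy_ball(P, s, alpha=0.15, t_max=200):
    """One-line HB iteration; its fixed point satisfies
    (I - (1 - alpha) P) p = alpha * e_s, i.e. the PPV."""
    n = P.shape[0]
    c = np.sqrt(alpha / (2.0 - alpha))        # inverse sqrt of condition number
    k1 = 2.0 * (1.0 - alpha) * (1.0 + c**2) / (1.0 + c)**2
    k2 = ((1.0 - c) / (1.0 + c))**2
    k3 = 2.0 * alpha * (1.0 + c**2) / (1.0 + c)**2
    e_s = np.zeros(n); e_s[s] = 1.0
    p_prev, p = np.zeros(n), alpha * e_s       # warm start (an assumption of ours)
    for _ in range(t_max):
        p, p_prev = k1 * (P @ p) - k2 * p_prev + k3 * e_s, p
    return p
```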
Compared with local linear methods such as CD and FwdPush, the NAG and HB methods admit a better convergence rate: the total number of iterations scales as $O(1/\sqrt{\alpha})$, compared with $O(1/\alpha)$ for the linear ones. Hence, they are $O(1/\sqrt{\alpha})$ times faster than the methods presented in Sec. 4. One could also consider the popular conjugate gradient method [44], which likewise has a better convergence rate than standard gradient descent and matches our momentum-based methods; however, our two methods are easier to implement.
EXPERIMENTS
We conduct experiments on 6 real-world benchmark graphs to evaluate our proposed PPV algorithms, including HB, NAG, FwdPushSOR, and PwrPushSOR, the combination of PwrPush [51] and SOR. In the experiments, we aim to answer the following question: how fast are the proposed methods compared with the baselines in terms of run time and the number of operations needed for different settings of $\alpha$? The results demonstrate the efficiency boost from SOR, achieving 2-3 times speedups over strong baselines when $\alpha = 0.15$.

Datasets. We use undirected graphs (dblp [52], products [30], and orkut [52]) as well as directed graphs (web-Stanford [32], pokec [46], livejournal [7]). These are the most common datasets for benchmarking PPR algorithms. We remove nodes with no in-degree or out-degree (dangling nodes), relabel the remaining nodes, and use two directed edges to represent an undirected edge. Table 1 presents the detailed statistics.

Baselines. We compare our proposed algorithms with three state-of-the-art baselines: FwdPush [5], PowItr, and PwrPush [51], a variant of FwdPush. We use the proposed SOR technique to implement both FwdPushSOR and PwrPushSOR. For choosing the relaxation parameter $\omega$: 1) for undirected graphs, we directly use the optimal value in (9); 2) for directed graphs, we take the adaptive strategy described in Sec. 4, searching from the minimal $\omega = 1$ with step size 0.1 up to the maximal value defined in Sec. 4. We only record the time consumed by the best $\omega$.

Experiment settings. For each graph, we uniformly sample 50 nodes for PPV calculation with $\epsilon = \min\{10^{-8}, 1/m\}$ (same as [51]) and repeat each experiment 5 times, recording the average running time, the number of residual updates, and the corresponding $\ell_1$ error. Note that the number of residual updates reflects the theoretical complexity regardless of the overheads incurred by various data structures. We measure PPV precision using the $\ell_1$ error $\|\mathbf{p} - \boldsymbol{\pi}^*\|_1$, which can be measured by $\|\mathbf{r}\|_1$.

Infrastructure and implementation. All experiments were conducted on a machine equipped with an Intel Xeon Gold 5218R CPU @ 2.10GHz (80 cores) and 256GB of memory. All algorithms are implemented in Python with the Numba library.⁴
Experimental results
SOR-based methods are faster and need far fewer operations on both undirected and directed graphs. Panels A-F of Fig. 4 and Fig. 5 present the run time and the number of operations of the PPV methods when $\alpha = 0.15$, respectively. First, the SOR-based methods are more than 2 times faster than their counterparts on both undirected and directed graphs, as shown in Fig. 4 (A-F). This confirms that our SOR-based methods effectively speed up their counterparts. Note that, by using a contiguous memory-access strategy, PwrPush and PwrPushSOR are in general faster than FwdPush and FwdPushSOR, even though the number of operations for both is similar, as shown in Fig. 5 (A-F). Indeed, our SOR-based methods save half of the total operations. As shown in the appendix, we observed even more significant speedups when $\alpha = 0.05$, and the speedup advantages still exist when $\alpha$ is large, i.e., $\alpha = 0.2$ and 0.25.

Acceleration-based methods are faster and need fewer operations on undirected graphs. Panels (A, B, C) of Fig. 4 and Fig. 5 present results on undirected graphs. Among all methods, HB and NAG are not only faster than the non-accelerated methods but also use fewer operations. When $\alpha = 0.05$ is small, the gap is more significant, as seen in Fig. 10 and 11. These results verify the $O(1/\sqrt{\alpha})$ speedup predicted by our theorem. However, compared with the local methods, the speedup of the acceleration-based methods is less dramatic; one may expect a local version of the acceleration-based methods to further improve HB and NAG. PwrIter, as a global method, uses more operations than FwdPush but requires less run time, as shown in the results on directed graphs. This is, again, because PwrIter uses a contiguous memory-access strategy while the nodes in the queue of FwdPush are randomly ordered, which slows down the process.
Local linear convergence rate of FwdPush. To empirically answer Q1 from Sec. 1, we show that FwdPush has a locally linear convergence rate even when $\epsilon > (2m)^{-1}$. To do this, we simply set $\epsilon = 1/n$, randomly pick a node from each of the three directed graphs, and run FwdPush. The convergence rates are illustrated in Fig. 6; these linear decay rates of the estimation error are consistent with Thm. 2. We found similar patterns on undirected graphs.
DISCUSSION
This paper examines the calculation of PPVs for directed and undirected graphs. We show that the commonly used local method, FwdPush, is a variant of Gauss-Seidel, and to improve its efficiency we propose the SOR technique; our SOR-based methods successfully and significantly speed up current local methods. Our SOR-based and acceleration methods could help build large-scale graph neural networks, and it is worth investigating whether the SOR technique can be applied to the dynamic-graph computation of PPVs. Additionally, we demonstrate that momentum-based acceleration methods can be used for the PPV calculation on undirected graphs, providing an $O(1/\sqrt{\alpha})$ acceleration; both acceleration methods are easy to implement and perform faster than other local methods when $\alpha$ is small. As future work, it is interesting to see whether the $O(n)$ factor in the bound derived from Thm. 8 can be reduced to a local quantity.
ACKNOWLEDGEMENT
The authors would like to thank the anonymous reviewers for their helpful comments.
A STOCHASTIC MATRIX OF G AND DANGLING NODES
Given the directed graph G, we present several standard ways to construct the stochastic matrix $\mathbf{P}$. Recall that $\mathbf{D}$ is the diagonal out-degree matrix of G and $\mathbf{A}$ is the associated adjacency matrix of G.

Case 1. If no node in V is a dangling node, that is, each node has at least one outgoing edge, then $\mathbf{P} = \mathbf{A}^\top\mathbf{D}^{-1}$.

Case 2. If some nodes are dangling nodes, there are two popular ways to create $\mathbf{P}$. Let $S_0 = \{u : d_u = 0, u \in V\}$ be the set of dangling nodes.

• For each dangling node, we create edges pointing to all nodes; the augmented degree matrix is $\mathbf{D}' = \mathbf{D} + n\cdot\operatorname{diag}(\mathbf{1}_{S_0})$, so that $\mathbf{P} = (\mathbf{A}')^\top(\mathbf{D}')^{-1}$ with the augmented adjacency matrix $\mathbf{A}' = \mathbf{A} + \mathbf{1}_{S_0}\mathbf{1}^\top$.

• We add a dummy node w and create $|S_0|$ edges pointing from the dangling nodes to w, meanwhile adding a self-loop for node w. Hence,

$$\mathbf{P} = \big[\tilde{\mathbf{A}}^\top\tilde{\mathbf{D}}^{-1};\ [\mathbf{1}_{S_0}^\top, 1]\big],$$

where ';' denotes row append and $\tilde{\mathbf{A}}$, $\tilde{\mathbf{D}}$ are the correspondingly augmented adjacency and degree matrices. For more options for creating $\mathbf{P}$, one can refer to Section 3 of Gleich [22].
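As a minimal sketch of the first option (with our own helper name, not a library routine), the dense rank-one part introduced by the dangling nodes can be applied implicitly, so a matrix-vector product stays O(m + n) and the dense correction is never materialized:

```python
import numpy as np
import scipy.sparse as sp

def pagerank_matvec(A, x):
    """Apply P = (A')^T (D')^{-1} to x, where each dangling node is
    implicitly connected to all n nodes (Case 2, first option)."""
    n = A.shape[0]
    d = np.asarray(A.sum(axis=1)).ravel()      # out-degrees of the original A
    dangling = (d == 0)
    y = np.asarray(A.T @ (x / np.maximum(d, 1.0))).ravel()  # real edges only
    y += x[dangling].sum() / n                 # dangling mass spread uniformly
    return y
```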
B STRONG CONVEXITY AND SMOOTHNESS OF f, AND THE CONVERGENCE OF THE NAG METHOD
Given a convex function $f: \mathbb{R}^n \to \mathbb{R}$, we say f is $\mu$-strongly convex if for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ we have

$$f(\mathbf{x}) - f(\mathbf{y}) \le \nabla f(\mathbf{x})^\top(\mathbf{x} - \mathbf{y}) - \frac{\mu}{2}\,\|\mathbf{x} - \mathbf{y}\|_2^2.$$

We say f is L-smooth if for all $\mathbf{x}, \mathbf{y} \in \mathbb{R}^n$ we have

$$f(\mathbf{x}) - f(\mathbf{y}) - \nabla f(\mathbf{y})^\top(\mathbf{x} - \mathbf{y}) \le \frac{L}{2}\,\|\mathbf{x} - \mathbf{y}\|_2^2.$$

Define Nesterov's Accelerated Gradient descent (NAG) method as the following iteration procedure:

$$\mathbf{y}^{t+1} = \mathbf{x}^t - \frac{1}{L}\,\nabla f(\mathbf{x}^t), \qquad \mathbf{x}^{t+1} = \bigg(1 + \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\bigg)\,\mathbf{y}^{t+1} - \frac{\sqrt{\kappa}-1}{\sqrt{\kappa}+1}\,\mathbf{y}^{t},$$

where $\kappa = L/\mu$ is the condition number of f.
Theorem 8 ([13]). Let f be $\mu$-strongly convex and L-smooth; then the NAG method has the following convergence rate:

$$f(\mathbf{x}^t) - f^* \le \frac{\mu + L}{2}\,\|\mathbf{x}^1 - \mathbf{x}^*\|_2^2\,\exp\bigg(-\frac{t-1}{\sqrt{\kappa}}\bigg). \qquad (21)$$
C PROOF OF THEOREM 6
Proof. Notice that f defined in (16) is $\alpha$-strongly convex and $(2-\alpha)$-smooth. Hence, letting $\mu = \alpha$ and $L = 2-\alpha$ and applying Thm. 8, we have

$$f(\mathbf{x}^t) - f^* \le \frac{\mu+L}{2}\,\|\mathbf{x}^1 - \mathbf{x}^*\|_2^2\,\exp\bigg(-\frac{t-1}{\sqrt{\kappa}}\bigg) = \|\mathbf{x}^1 - \mathbf{x}^*\|_2^2\,\exp\bigg(-\frac{t-1}{\sqrt{(2-\alpha)/\alpha}}\bigg).$$

Note that for any $\mu$-strongly convex function, the optimization error can also be lower bounded as $\frac{\mu}{2}\|\mathbf{x}^t - \mathbf{x}^*\|_2^2 \le f(\mathbf{x}^t) - f(\mathbf{x}^*)$; we then reach

$$\frac{\alpha}{2}\,\|\mathbf{x}^t - \mathbf{x}^*\|_2^2 \le \|\mathbf{x}^1 - \mathbf{x}^*\|_2^2\,\exp\bigg(-\frac{t-1}{\sqrt{(2-\alpha)/\alpha}}\bigg),$$

and $\|\mathbf{x}^0 - \mathbf{x}^*\|_2 = \|\mathbf{x}^*\|_2 \le \|\mathbf{x}^*\|_1 = \|\mathbf{D}^{-1/2}\boldsymbol{\pi}^*\|_1 \le \sqrt{1/d_{\min}} \le 1$. Noticing $\mathbf{p}^{t+1} = \mathbf{D}^{1/2}\mathbf{x}^{t+1}$, we then have

$$\|\mathbf{D}^{-1/2}(\mathbf{p}^{t+1} - \boldsymbol{\pi}^*)\|_2 \le \sqrt{\frac{2}{\alpha}}\,\exp\bigg(-\frac{t}{2\sqrt{(2-\alpha)/\alpha}}\bigg).$$

Note that for any $\mathbf{x} \in \mathbb{R}^n$, $\|\mathbf{x}\|_1 \le \sqrt{n}\,\|\mathbf{x}\|_2$. Then we have

$$\|\mathbf{p}^{t+1} - \boldsymbol{\pi}^*\|_1 \le \sqrt{d_{\max}\,n}\,\sqrt{\frac{2}{\alpha}}\,\exp\bigg(-\frac{t}{2\sqrt{(2-\alpha)/\alpha}}\bigg).$$

Using the fact $\|\mathbf{p}^{t+1} - \boldsymbol{\pi}^*\|_\infty \le \|\mathbf{p}^{t+1} - \boldsymbol{\pi}^*\|_1$ and multiplying by the per-iteration cost of at most m operations, we obtain the operation-complexity bound (20). □
D MORE EXPERIMENTAL RESULTS

D.1 Power law distribution of PPVs
As shown in Fig. 7, we present the magnitudes of $\boldsymbol{\pi}^*$ as a function of their ranks: we sort all magnitudes in descending order and label them from rank 1 to rank n. Evidently, these magnitudes adhere to a power-law distribution with a cutoff, as described by Clauset et al. [16]. More specifically, let $\pi(k)$ represent a magnitude with associated ranking ID k; this yields the relation

$$\pi(k) \propto f(k)\,k^{-\gamma}, \quad\text{where } f(k) = e^{-\lambda k}. \qquad (22)$$

One can find suitable parameters $\gamma$ and $\lambda$ to fit these curves using (22).
D.2 Comparison of empirical bounds
To further validate the effectiveness of our parameterized bound, we carry out a series of experiments on two additional graph datasets, livejournal and pokec, shown in Fig. 8 and Fig. 9, respectively. When $\epsilon$ is relatively small, our bound is similar to $B_1$, and it is empirically tighter than $B_2$ irrespective of whether $\epsilon$ is small or large; as $\alpha$ increases, the comparative tightness of our bound becomes markedly more pronounced. Compared to the setting with $\alpha = 0.15$, the SOR-based methods exhibit an even more significant improvement when $\alpha = 0.05$; for example, on the products dataset, our PwrPushSOR is more than 5 times faster than the FwdPush method.

Observation of superlinear behavior. During the course of our experiments, we discerned that both PwrPushSOR and FwdPushSOR can exhibit superlinear behavior during the final few iterations. For example, with $\alpha = 0.05$, the run time required by PwrPushSOR displayed superlinearity with respect to the $\ell_1$ error (see panel E of Fig. 10 and 11). Interestingly, this phenomenon mirrors the well-known superlinear convergence behavior observed when employing the conjugate gradient method to solve large symmetric systems of equations; this intriguing pattern warrants further study.

Significant speedup of local SOR methods even with large $\alpha$. As depicted in Fig. 14 and Fig. 15, the local SOR methods still require fewer run time operations to reach equivalent approximate solutions even when $\alpha$ is large. This consistently efficient performance underlines the effectiveness and versatility of our method across a broad spectrum of settings.
.
, = 1/((1 − )|I |) and 1: = =1 / . □ Proof of Theorem 2. To obtain our main theory, we apply Lemma 3 and Lemma 4, and notice that∑︁ =1 vol( ) = · vol( 1: ) ≤vol( Our locality analysis provides intuition on the performance of local FwdPush. The total convergence rate is determined by , which is always a positive number. Note that 0 = 1 for the first epoch. The average volume accessed by FwdPush is certainly controlled by the following vol(supp( , )) ≤ vol( 1: ) ≤ vol(supp( , )),
Figure 2: The number of operations estimated for the dblp dataset as a function of $\epsilon$. 'Real' stands for the actual number of operations used by FwdPush, i.e., Real $= \sum_{t=1}^{T}\operatorname{vol}(S_t)$; $B_1$ is the $m\log\frac{1}{\epsilon}$-type bound provided in [51], $B_2 = \frac{1}{\alpha\epsilon}$, and Ours is our new local bound. Left: $\alpha = 0.15$, Middle: $\alpha = 0.5$, Right: $\alpha = 0.85$. We randomly selected 100 nodes for each experiment and averaged the estimated operations.
Figure 3: The percentage of total operations reduced by FwdPush-Mean as a function of the target node, over all six graphs. We fix $\epsilon = 10^{-6}$, $\alpha = 0.2$ and run both FwdPush-Mean and FwdPush on 1,000 randomly selected nodes from the six graphs. The reduced percentage of operations is defined as the difference in the number of operations between the two methods divided by the operations of FwdPush.
Figure 4: Estimation error v.s. run time (seconds), $\alpha = 0.15$.
Figure 5: Estimation error v.s. #residue updates (total operations), $\alpha = 0.15$.
Figure 6: Locally linear convergence of FwdPush. For each directed graph, we randomly pick a node and run FwdPush with $\epsilon = 1/n$.
Figure 7: The power-law distribution of the magnitudes of $\boldsymbol{\pi}^*$ on six graphs. We randomly pick one node from each graph and run the power iteration algorithm to obtain a high-precision $\boldsymbol{\pi}^*$. Note that in this particular setting, the distribution is a power law with a cutoff, as discussed in Clauset's work [16].
Figure 8: The bounds on the livejournal dataset as a function of $\epsilon$. The vertical line is where $\epsilon = (2m)^{-1}$. Left: $\alpha = 0.15$, Middle: $\alpha = 0.5$, Right: $\alpha = 0.85$.
Figure 9: The bounds on the pokec dataset as a function of $\epsilon$.

D.3 More experiments on the run time and the number of operations

Panels A-F of Fig. 10, 11, 12, and 13 present the run time and the number of operations of the PPV methods when $\alpha = 0.05$ and $\alpha = 0.2$, respectively.
Figure 10: Actual $\ell_1$-error v.s. execution time (seconds), $\alpha = 0.05$.
Figure 11: Actual $\ell_1$-error v.s. #residue updates, $\alpha = 0.05$.
Figure 12: Actual $\ell_1$-error v.s. execution time (seconds), $\alpha = 0.2$.
Figure 13: Actual $\ell_1$-error v.s. #residue updates, $\alpha = 0.2$.
Figure 14: Actual $\ell_1$-error v.s. execution time (seconds), $\alpha = 0.25$.

Figure 15: Actual $\ell_1$-error v.s. #residue updates, $\alpha = 0.25$.
Table 1: Dataset statistics

Dataset        n          m            Type of G
dblp           317,080    1,049,866    undirected
products       2,449,029  123,718,280  undirected
orkut          3,072,441  117,185,083  undirected
web-Stanford   281,903    2,312,497    directed
livejournal    4,847,571  68,993,773   directed
pokec          1,632,803  30,622,564   directed
The work of Baojian Zhou is sponsored by Shanghai Pujiang Program (No. 22PJ1401300). The work of Deqing Yang is supported by Chinese NSF Major Research Plan No.92270121, Shanghai Science and Technology Innovation Action Plan No.21511100401. Steven Skiena was partially supported by NSF grants IIS-1926781, IIS-1927227, IIS-1546113, OAC-191952, and a New York State Empire Innovation grant.
We will define an epoch of FwdPush by adding a dummy node presented in the next subsection.
https://numba.pydata.org/
E FWDPUSH-MEAN ALGORITHM

We present FwdPush-Mean in Algo. 4. Compared with FwdPush, it computes the average magnitude factor $\bar\theta$ of the active nodes at Line 8; nodes whose magnitude factors are less than this average are postponed to the next epoch. Note that the run time of computing $\bar\theta$ is not larger than $|S_t|$; hence, the total run time complexity is the same as that of FwdPush. In practice, we found this strategy helps to reduce the total number of push operations, as shown in Fig. 3. The postponement lines of Algo. 4 are:

13:   Q.push(u) // Postpone the current active node to the next epoch
14:   continue
15: $p_u = p_u + \alpha\,r_u$
16: for v ∈ Nei(u) do ...
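To illustrate the postponement rule, here is a minimal epoch-based Python sketch (our own naming and structure; Algo. 4 is the authoritative version): at the start of each epoch, active nodes whose magnitude factor $r_u/d_u$ falls below the epoch mean are deferred to the next epoch.

```python
from collections import deque

def fwd_push_mean(neighbors, deg, s, alpha=0.15, eps=1e-6):
    n = len(neighbors)
    p, r = [0.0] * n, [0.0] * n
    r[s] = 1.0
    active = deque([s])
    while active:
        # epoch boundary: mean magnitude factor of the current active set
        theta_bar = sum(r[u] / deg[u] for u in active) / len(active)
        next_active = deque()
        for u in active:
            if r[u] / deg[u] < theta_bar:        # postpone low-ratio nodes
                if r[u] >= eps * deg[u]:
                    next_active.append(u)
                continue
            res = r[u]; r[u] = 0.0
            p[u] += alpha * res
            share = (1.0 - alpha) * res / deg[u]
            for v in neighbors[u]:
                was_active = r[v] >= eps * deg[v]
                r[v] += share
                if not was_active and r[v] >= eps * deg[v]:
                    next_active.append(v)        # v newly becomes active
        active = next_active
    return p, r
```

Since at least one active node (the one attaining the maximum ratio) is processed per epoch, the loop always makes progress.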
Zeyuan Allen-Zhu and Lorenzo Orecchia. 2014. Linear coupling: An ultimate unification of gradient and mirror descent. arXiv preprint arXiv:1407.1537.
Zeyuan Allen-Zhu, Zheng Qu, Peter Richtárik, and Yang Yuan. 2016. Even faster accelerated coordinate descent using non-uniform sampling. In International Conference on Machine Learning. PMLR, 1110-1119.
Reid Andersen, Christian Borgs, Jennifer Chayes, John Hopcroft, Vahab S. Mirrokni, and Shang-Hua Teng. 2007. Local computation of PageRank contributions. In WAW, Vol. 4863. Springer, 150-165.
Reid Andersen, Christian Borgs, Jennifer Chayes, John Hopcroft, Kamal Jain, Vahab Mirrokni, and Shanghua Teng. 2008. Robust PageRank and locally computable spam detection features. In Proceedings of the 4th International Workshop on Adversarial Information Retrieval on the Web. 69-76.
Reid Andersen, Fan Chung, and Kevin Lang. 2006. Local graph partitioning using PageRank vectors. In 2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06). IEEE, 475-486.
Arvind Arasu, Jasmine Novak, Andrew Tomkins, and John Tomlin. 2002. PageRank computation and the structure of the web: Experiments and algorithms. In Proceedings of the Eleventh International World Wide Web Conference, Poster Track. 107-117.
Lars Backstrom, Dan Huttenlocher, Jon Kleinberg, and Xiangyang Lan. 2006. Group formation in large social networks: membership, growth, and evolution. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 44-54.
Bahman Bahmani, Abdur Chowdhury, and Ashish Goel. 2010. Fast incremental and personalized PageRank. Proceedings of the VLDB Endowment 4, 3 (2010).
Andras A. Benczur, Karoly Csalogany, Tamas Sarlos, and Mate Uher. 2005. SpamRank: fully automatic link spam detection (work in progress). In Proceedings of the First International Workshop on Adversarial Information Retrieval on the Web. 1-14.
Pavel Berkhin. 2006. Bookmark-coloring algorithm for personalized PageRank computing. Internet Mathematics 3, 1 (2006), 41-62.
Aleksandar Bojchevski, Johannes Klicpera, Bryan Perozzi, Martin Blais, Amol Kapoor, Michal Lukasik, and Stephan Günnemann. 2019. Is PageRank all you need for scalable graph neural networks?. In ACM KDD, MLG Workshop.
Aleksandar Bojchevski, Johannes Klicpera, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, and Stephan Günnemann. 2020. Scaling graph neural networks with approximate PageRank. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2464-2473.
Sébastien Bubeck et al. 2015. Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning 8, 3-4 (2015), 231-357.
Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, and Yaliang Li. 2020. Simple and deep graph convolutional networks. In International Conference on Machine Learning. PMLR, 1725-1735.
Fan R. K. Chung. 1997. Spectral Graph Theory. Number 92. American Mathematical Society.
Aaron Clauset, Cosma Rohilla Shalizi, and Mark E. J. Newman. 2009. Power-law distributions in empirical data. SIAM Review 51, 4 (2009), 661-703.
Alessandro Epasto, Vahab Mirrokni, Bryan Perozzi, Anton Tsitsulin, and Peilin Zhong. 2022. Differentially private graph learning via sensitivity-bounded personalized PageRank. In NeurIPS 2022 Workshop: New Frontiers in Graph Learning. https://openreview.net/forum?id=dzVZGSe0NoJ
Kimon Fountoulakis, Farbod Roosta-Khorasani, Julian Shun, Xiang Cheng, and Michael W. Mahoney. 2019. Variational perspective on local graph clustering. Mathematical Programming 174, 1 (2019), 553-573.
Kimon Fountoulakis and Shenghao Yang. 2022. Open problem: Running time complexity of accelerated ℓ1-regularized PageRank. In Conference on Learning Theory. PMLR, 5630-5632.
Johannes Gasteiger, Aleksandar Bojchevski, and Stephan Günnemann. 2019. Predict then propagate: Graph neural networks meet personalized PageRank. In International Conference on Learning Representations.
Johannes Gasteiger, Stefan Weißenberger, and Stephan Günnemann. 2019. Diffusion improves graph learning. Advances in Neural Information Processing Systems 32 (2019).
David F. Gleich. 2015. PageRank beyond the web. SIAM Review 57, 3 (2015), 321-363.
David F. Gleich, Andrew P. Gray, Chen Greif, and Tracy Lau. 2010. An inner-outer iteration for computing PageRank. SIAM Journal on Scientific Computing 32, 1 (2010), 349-371.
David F. Gleich, Lek-Heng Lim, and Yongyang Yu. 2015. Multilinear PageRank. SIAM J. Matrix Anal. Appl. 36, 4 (2015), 1507-1541.
Gene H. Golub and Charles F. Van Loan. 2013. Matrix Computations. JHU Press.
Xingzhi Guo, Baojian Zhou, and Steven Skiena. 2021. Subset node representation learning over large dynamic graphs. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 516-526.
Wolfgang Hackbusch. 1994. Iterative Solution of Large Sparse Systems of Equations. Vol. 95. Springer.
Kaveh Hassani and Amir Hosein Khasahmadi. 2020. Contrastive multi-view representation learning on graphs. In International Conference on Machine Learning. PMLR, 4116-4126.
Taher H. Haveliwala. 2003. Topic-sensitive PageRank: A context-sensitive ranking algorithm for web search. IEEE Transactions on Knowledge and Data Engineering 15, 4 (2003), 784-796.
Weihua Hu, Matthias Fey, Marinka Zitnik, Yuxiao Dong, Hongyu Ren, Bowen Liu, Michele Catasta, and Jure Leskovec. 2020. Open Graph Benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems 33 (2020), 22118-22133.
Glen Jeh and Jennifer Widom. 2003. Scaling personalized web search. In Proceedings of the 12th International Conference on World Wide Web. 271-279.
Jinhong Jung, Namyong Park, Sael Lee, and U Kang. 2017. BePI: Fast and memory-efficient method for billion-scale random walk with restart. In Proceedings of the 2017 ACM International Conference on Management of Data. 789-804.
Sepandar D. Kamvar, Taher H. Haveliwala, Christopher D. Manning, and Gene H. Golub. 2003. Extrapolation methods for accelerating PageRank computations. In Proceedings of the 12th International Conference on World Wide Web. 261-270.
Johannes Klicpera, Chandan Yeshwanth, and Stephan Günnemann. 2021. Directional message passing on molecular graphs via synthetic coordinates. In Thirty-Fifth Conference on Neural Information Processing Systems.
Amy N. Langville and Carl D. Meyer. 2011. Google's PageRank and Beyond. Princeton University Press.
Chris Pan-Chi Lee, Gene H. Golub, and Stefanos A. Zenios. 2003. A fast two-stage algorithm for computing PageRank and its extensions. Technical Report. Citeseer.
Yin Tat Lee and Aaron Sidford. 2013. Efficient accelerated coordinate descent methods and faster algorithms for solving linear systems. In 2013 IEEE 54th Annual Symposium on Foundations of Computer Science. IEEE, 147-156.
Peter Lofgren. 2015. Efficient Algorithms for Personalized PageRank. Stanford University.
Peter Lofgren, Siddhartha Banerjee, and Ashish Goel. 2016. Personalized PageRank estimation and search: A bidirectional approach. In Proceedings of the Ninth ACM International Conference on Web Search and Data Mining. 163-172.
Yurii Nesterov. 2003. Introductory Lectures on Convex Optimization: A Basic Course. Vol. 87. Springer Science & Business Media.
Lawrence Page, Sergey Brin, Rajeev Motwani, and Terry Winograd. 1999. The PageRank citation ranking: Bringing order to the web. Technical Report 1999-66. Stanford InfoLab.
Boris T. Polyak. 1964. Some methods of speeding up the convergence of iteration methods. USSR Computational Mathematics and Mathematical Physics 4, 5 (1964), 1-17.
Ştefan Postăvaru, Anton Tsitsulin, Filipe Miguel Gonçalves de Almeida, Yingtao Tian, Silvio Lattanzi, and Bryan Perozzi. 2020. InstantEmbedding: Efficient local node representations. arXiv preprint arXiv:2010.06992.
Jonathan Richard Shewchuk. 1994. An introduction to the conjugate gradient method without the agonizing pain.
Nikita Spirin and Jiawei Han. 2012. Survey on web spam detection: principles and algorithms. ACM SIGKDD Explorations Newsletter 13, 2 (2012), 50-64.
Lubos Takac and Michal Zabovsky. 2012. Data analysis in public social networks. In International Scientific Conference and International Workshop Present Day Trends of Innovations, Vol. 1.
Zekun Tong, Yuxuan Liang, Henghui Ding, Yongxing Dai, Xinke Li, and Changhu Wang. 2021. Directed graph contrastive learning. Advances in Neural Information Processing Systems 34 (2021).
Zekun Tong, Yuxuan Liang, Changsheng Sun, Xinke Li, David Rosenblum, and Andrew Lim. 2020. Digraph inception convolutional networks. Advances in Neural Information Processing Systems 33 (2020), 17907-17918.
Anton Tsitsulin, Davide Mottin, Panagiotis Karras, and Emmanuel Müller. 2018. VERSE: Versatile graph embeddings from similarity measures. In Proceedings of the 2018 World Wide Web Conference. 539-548.
Hanzhi Wang, Zhewei Wei, Junhao Gan, Sibo Wang, and Zengfeng Huang. 2020. Personalized PageRank to a target node, revisited. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 657-667.
Hao Wu, Junhao Gan, Zhewei Wei, and Rui Zhang. 2021. Unifying the global and local approaches: An efficient power iteration with forward push. In Proceedings of the 2021 International Conference on Management of Data. 1996-2008.
Jaewon Yang and Jure Leskovec. 2012. Defining and evaluating network communities based on ground-truth. In Proceedings of the ACM SIGKDD Workshop on Mining Data Semantics. 1-8.
David Young. 1954. Iterative methods for solving partial difference equations of elliptic type. Trans. Amer. Math. Soc. 76, 1 (1954), 92-111.
Approximate personalized PageRank on dynamic graphs. Hongyang Zhang, Peter Lofgren, Ashish Goel, Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. the 22nd ACM SIGKDD international conference on knowledge discovery and data miningHongyang Zhang, Peter Lofgren, and Ashish Goel. 2016. Approximate person- alized PageRank on dynamic graphs. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 1315-1324.
Link prediction based on graph neural networks. Muhan Zhang, Yixin Chen, Advances in Neural Information Processing Systems. 31Muhan Zhang and Yixin Chen. 2018. Link prediction based on graph neural networks. Advances in Neural Information Processing Systems 31 (2018), 5165- 5175.
| [
"https://github.com/ccczhen/AccPPR."
] |
[
"ReCo: Reliable Causal Chain Reasoning via Structural Causal Recurrent Neural Networks",
"ReCo: Reliable Causal Chain Reasoning via Structural Causal Recurrent Neural Networks"
] | [
"Kai Xiong kxiong@ir.hit.edu.cn \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n",
"Xiao Ding xding@ir.hit.edu.cn \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n",
"Zhongyang Li lizhongyang6@huawei.com \nHuawei Cloud\nChina\n",
"Li Du \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n",
"Ting Liu tliu@ir.hit.edu.cn \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n",
"Bing Qin qinb@ir.hit.edu.cn \nResearch Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina\n",
"Yi Zheng zhengyi29@huawei.com \nHuawei Cloud\nChina\n",
"Baoxing Huai huaibaoxing@huawei.com \nHuawei Cloud\nChina\n"
] | [
"Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina",
"Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina",
"Huawei Cloud\nChina",
"Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina",
"Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina",
"Research Center for Social Computing and Information Retrieval\nHarbin Institute of Technology\nChina",
"Huawei Cloud\nChina",
"Huawei Cloud\nChina"
] | [
"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing"
] | Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs. However, CCR suffers from two main transitive problems: threshold effect and scene drift. In other words, the causal pairs to be spliced may have a conflicting threshold boundary or scenario. To address these issues, we propose a novel Reliable Causal chain reasoning framework (ReCo), which introduces exogenous variables to represent the threshold and scene factors of each causal pair within the causal chain, and estimates the threshold and scene contradictions across exogenous variables via structural causal recurrent neural networks (SRNN). Experiments show that ReCo outperforms a series of strong baselines on both Chinese and English CCR datasets. Moreover, by injecting reliable causal chain knowledge distilled by ReCo, BERT can achieve better performances on four downstream causal-related tasks than BERT models enhanced by other kinds of knowledge. | 10.48550/arxiv.2212.08322 | [
"https://www.aclanthology.org/2022.emnlp-main.431.pdf"
] | 254,823,424 | 2212.08322 | 215806bfd49e42dd57c94ba893eba7c69f6bfad8 |
ReCo: Reliable Causal Chain Reasoning via Structural Causal Recurrent Neural Networks
December 7-11, 2022
Kai Xiong kxiong@ir.hit.edu.cn
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
China
Xiao Ding xding@ir.hit.edu.cn
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
China
Zhongyang Li lizhongyang6@huawei.com
Huawei Cloud
China
Li Du
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
China
Ting Liu tliu@ir.hit.edu.cn
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
China
Bing Qin qinb@ir.hit.edu.cn
Research Center for Social Computing and Information Retrieval
Harbin Institute of Technology
China
Yi Zheng zhengyi29@huawei.com
Huawei Cloud
China
Baoxing Huai huaibaoxing@huawei.com
Huawei Cloud
China
ReCo: Reliable Causal Chain Reasoning via Structural Causal Recurrent Neural Networks
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, December 7-11, 2022
Causal chain reasoning (CCR) is an essential ability for many decision-making AI systems, which requires the model to build reliable causal chains by connecting causal pairs. However, CCR suffers from two main transitive problems: threshold effect and scene drift. In other words, the causal pairs to be spliced may have a conflicting threshold boundary or scenario. To address these issues, we propose a novel Reliable Causal chain reasoning framework (ReCo), which introduces exogenous variables to represent the threshold and scene factors of each causal pair within the causal chain, and estimates the threshold and scene contradictions across exogenous variables via structural causal recurrent neural networks (SRNN). Experiments show that ReCo outperforms a series of strong baselines on both Chinese and English CCR datasets. Moreover, by injecting reliable causal chain knowledge distilled by ReCo, BERT can achieve better performances on four downstream causal-related tasks than BERT models enhanced by other kinds of knowledge.

† Corresponding Author
* This work was conducted during the internship of Kai Xiong at Huawei Cloud
Introduction
Causal chain reasoning aims at understanding the long-distance causal dependencies between events and building reliable causal chains. Here, reliable means that the events in a causal chain can naturally occur in the order of causal evolution within some circumstance, based on commonsense (Roemmele et al., 2011). Causal chain knowledge is of great importance for various artificial intelligence applications, such as question answering (Asai et al., 2019) and abductive reasoning (Du et al., 2021a). Many studies focus on the reliability of causal pair knowledge but ignore that of causal chain knowledge, especially in the natural language processing (NLP) community. Previous works mainly acquire causal chain knowledge by first extracting precise causal pairs from text with rule-based (Heindorf et al., 2020; Li et al., 2020) or neural-based (Ding et al., 2019; Zhang et al., 2020) methods, and then connecting these causal pairs into causal chains based on the textual or semantic similarity between events. However, this straightforward approach may introduce transitive problems (Johnson and Ahn, 2015), leading to unreliable causal chains, which hinder causal-enhanced models from achieving higher performance. For example, given a cause event "playing basketball" and two candidate effect events "gets a technical foul" and "gets a red card", an unreliable causal chain ("playing basketball" → "dispute" → "gets a red card") would mislead the model to choose the less plausible effect "gets a red card".
Among these transitive problems (Johnson and Ahn, 2015), threshold effect and scene drift are the two most salient ones. As shown in Figure 1 (a), given two causal pairs (A causes B, and B causes C), the threshold effect problem arises when the influence of A on B is not enough for B to cause C. We can notice that "swimming in the sea" can only result in tens of milliliters of "salt water intake", while "dehydration" is caused by hundreds of milliliters of "salt water intake". Therefore, "salt water intake" conditioned on "swimming in the sea" cannot lead to "dehydration". Similarly, as shown in Figure 1 (b), the scene drift problem means that A → B and B → C do not happen within the same specific scene. The two "dispute" events are wrongly joined together by their surface forms: a "dispute" that happened in a video game scene cannot lead to "gets a red card" in a football match scene. We therefore find that the threshold effect and scene drift problems are caused by contradictions between the threshold factors and between the scene factors, respectively.
To address these two issues, in ReCo, we first build a structural causal model (SCM) (Pearl, 2009) for each causal chain, and the SCM introduces exogenous variables to represent the threshold and scene factors of the causal pairs within the causal chain. Then, we conduct an exogenous-aware conditional variational autoencoder (EA-CVAE) to implicitly learn the semantic representations of exogenous variables according to the contexts of the causal pairs. Subsequently, we devise a novel causal recurrent neural network named SRNN to estimate the contradictions between the exogenous variables by modeling the semantic distance between them. Finally, we present a task-specific logic loss to better optimize ReCo.
Extensive experiments show that our method outperforms a series of baselines on both Chinese and English CCR datasets. Comparative experiments on different lengths of causal chains further illustrate the superiority of our method. Moreover, BERT (Devlin et al., 2019) injected with reliable causal chains distilled by ReCo achieves better results on four downstream causal-related tasks, which indicates that ReCo can provide more effective and reliable causal knowledge. The code is available at https://github.com/Waste-Wood/ReCo.
Background
Problem Definition
In this paper, the CCR task is defined as a binary classification problem. Specifically, given a reliable antecedent causal chain (x_1 → ⋯ → x_n) and a causal pair (x_n → x_{n+1}), the model needs to output whether the causal chain x_1 → ⋯ → x_n → x_{n+1} is reliable or not.
Structural Causal Model
The Structural Causal Model (SCM) was proposed by Pearl (2009); it is a probabilistic graphical model that represents causality within a single system. An SCM is defined as an ordered triple ⟨U, V, E⟩, where U is a set of exogenous variables determined by external (implicit) factors of the system, V is a set of endogenous variables determined by internal (explicit) factors of the system, and E is a set of structural equations, each of which represents the probability of an endogenous variable given the variables in U and V. As shown in Figure 2 (a), given two causal pairs (x → y and y → z), they can be connected into a causal chain (x → y → z). We construct an SCM for this causal chain: the events (x, y, and z) are the endogenous variables, and the exogenous variables (U_xy, U_yz) contain the threshold and scene factors of the causal pairs. Each structural equation represents the probability of an endogenous variable in V = {x, y, z} (e.g., P(y | x, U_xy)). As shown in Figure 2 (b), if the causal chain possesses a threshold effect or scene drift problem, there are contradictions between U_xy and U_yz.
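To make the triple ⟨U, V, E⟩ concrete, below is a minimal plain-Python sketch of how such an SCM could be represented for a causal chain; the class, field, and variable names are illustrative and not taken from the paper's released code.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ChainSCM:
    """Sketch of an SCM <U, V, E> built over a causal chain x_1 -> ... -> x_n."""
    events: List[str]                                   # V: endogenous variables
    exogenous: List[str] = field(default_factory=list)  # U: one per causal pair

    def __post_init__(self):
        # Each adjacent pair (x_i, x_{i+1}) gets one exogenous variable that
        # carries its implicit threshold and scene factors.
        self.exogenous = [f"U_{i}{i + 1}" for i in range(1, len(self.events))]

    def structural_equations(self) -> List[Tuple[str, str, str]]:
        # E: each triple stands for a term like P(x_{i+1} | x_i, U_{i,i+1}).
        return [(self.events[i + 1], self.events[i], self.exogenous[i])
                for i in range(len(self.events) - 1)]

scm = ChainSCM(["swimming in the sea", "salt water intake", "dehydration"])
print(scm.structural_equations())
# [('salt water intake', 'swimming in the sea', 'U_12'),
#  ('dehydration', 'salt water intake', 'U_23')]
```

Contradictions between adjacent exogenous variables (here U_12 and U_23) are exactly what breaks the chain.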
[Figure 3 appears here: panels (a) Reliable Multi-hop Causal Reasoning Framework (ReCo), (b) Exogenous-aware CVAE, and (c) Structural Causal Recurrent Neural Networks (SRNN); the inputs are the causal chain and the contexts of the causal pairs.]
Method
Overview
In this paper, we devise ReCo to estimate the reliability of the input causal chain. Figure 3 (a) shows the architecture of ReCo, which consists of four components: (1) an encoder to encode causal events and their contexts into dense vectors; (2) an exogenous-aware CVAE to capture the exogenous variables with contexts; (3) an SRNN to understand the causal chain along the direction of causality in the constructed SCM and solve the two transitive problems with two designed estimators; (4) a predictor to predict the existence of the two transitive problems and the reliability of the causal chain.
Encoder
Given a reliable antecedent causal chain (X_1 → ⋯ → X_4) and a causal pair (X_4 → X_5), the inputs of ReCo are a causal chain (X_1 → ⋯ → X_5) with 5 events and their 4 corresponding contexts (C_1, ⋯, C_4), where C_i denotes the context of the causal pair X_i → X_{i+1}. We first construct an SCM for each causal chain, which introduces exogenous variables U = {U_1, ⋯, U_4} to represent the threshold and scene factors of the causal pairs. The endogenous variables are the events X = {X_1, ⋯, X_5} in the causal chain. Then we use BERT to encode the input events and contexts.
Specifically, we concatenate the events and their contexts into two sequences:
[CLS] X_1 [SEP] X_2 [SEP] X_3 [SEP] X_4 [SEP] X_5 [SEP], and [CLS] C_1 [SEP] C_2 [SEP] C_3 [SEP] C_4 [SEP].
The final hidden states of the [SEP] tokens are set as the initial representations of the corresponding events and contexts. Then we scale them to m dimensions. Finally, we acquire event embeddings H_X = {h_1, h_2, h_3, h_4, h_5} and context embeddings H_C = {h^C_1, h^C_2, h^C_3, h^C_4}, where h_i, h^C_i ∈ R^m denote the i-th event and context, respectively.
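As a rough illustration of this encoding step, the sketch below uses the HuggingFace transformers library with PyTorch; the checkpoint name, the three toy events, and the projection layer are illustrative stand-ins, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")

# Build "[CLS] X1 [SEP] X2 [SEP] X3 [SEP]" for a 3-event chain;
# the tokenizer recognizes "[SEP]" in the text as the special token.
events = ["swimming in the sea", "salt water intake", "dehydration"]
enc = tokenizer(" [SEP] ".join(events), return_tensors="pt")

with torch.no_grad():
    hidden = bert(**enc).last_hidden_state[0]           # (seq_len, 768)

# Take the hidden state at each [SEP] as the event representation,
# then scale it to m = 256 dimensions with a linear layer.
sep_positions = enc["input_ids"][0] == tokenizer.sep_token_id
proj = nn.Linear(bert.config.hidden_size, 256)
H_X = proj(hidden[sep_positions])                        # (3, 256): one vector per event
```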
Exogenous-aware CVAE
Since the exogenous variables are hard to capture explicitly, and CVAEs have shown the ability to estimate variables implicitly (Chen et al., 2021; Du et al., 2021a), we devise an EA-CVAE to capture the exogenous variables based on each causal pair and its corresponding context.
The EA-CVAE takes a causal pair and its corresponding context as inputs, and outputs the distribution of the exogenous variable. For example, as shown in Figure 3 (b), given a causal pair h_i → h_{i+1} and the corresponding context h^C_i, we first concatenate them into V_i = [h_i; h_{i+1}] ∈ R^{2m} and V'_i = [h_i; h^C_i; h_{i+1}] ∈ R^{3m}. Hereafter, V_i and V'_i are fed into linear layers to estimate the mean and standard deviation of the exogenous variable distribution:

μ_i = W_1 V_i + b_1,    σ_i = exp(W_2 V_i + b_2),
μ'_i = W_3 V'_i + b_3,    σ'_i = exp(W_4 V'_i + b_4),    (1)

where W_1, W_2 ∈ R^{2m×m} and W_3, W_4 ∈ R^{3m×m} are trainable parameters. The size of the multivariate normal distribution is set to m. Finally, we obtain two multivariate normal distributions N_i(μ_i, σ_i^2) and N'_i(μ'_i, σ'_i^2).
After that, we conduct the reparameterization trick to sample exogenous variables from N_i(μ_i, σ_i^2) and N'_i(μ'_i, σ'_i^2). First, we sample a value ε from the standard normal distribution N(0, I), and then we obtain the representation u_i ∈ R^m of the exogenous variable U_i based on ε:

u_i = μ'_i + ε × σ'_i   (training),
u_i = μ_i + ε × σ_i   (prediction).    (2)

Hereafter, for each causal pair, we get the representation of its corresponding exogenous variable and obtain u = {u_1, u_2, u_3, u_4 | u_i ∈ R^m}. In the training stage, u_i ∈ u is sampled from N'_i(μ'_i, σ'_i^2), while in the prediction stage, u_i is sampled from N_i(μ_i, σ_i^2).
Thus, contexts are not required as inputs in the prediction stage.
Finally, we obtain the representations of the endogenous and exogenous variables in the SCM.
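A minimal PyTorch sketch of this sampling step (Eqs. 1-2) is given below; the module and layer names are illustrative, and the prior/posterior split mirrors the prediction/training distinction described above.

```python
import torch
import torch.nn as nn

class EACVAE(nn.Module):
    """Sketch of the EA-CVAE sampling step (Eqs. 1-2)."""
    def __init__(self, m: int = 256):
        super().__init__()
        self.prior = nn.Linear(2 * m, 2 * m)      # W1/W2 over V_i = [h_i; h_{i+1}]
        self.posterior = nn.Linear(3 * m, 2 * m)  # W3/W4 over V'_i = [h_i; h^C_i; h_{i+1}]

    def forward(self, h_i, h_next, h_ctx=None):
        if h_ctx is not None:   # training: context-aware posterior N'(mu', sigma'^2)
            out = self.posterior(torch.cat([h_i, h_ctx, h_next], dim=-1))
        else:                   # prediction: prior N(mu, sigma^2), no context needed
            out = self.prior(torch.cat([h_i, h_next], dim=-1))
        mu, log_sigma = out.chunk(2, dim=-1)
        sigma = log_sigma.exp()                   # sigma = exp(W V + b), as in Eq. 1
        eps = torch.randn_like(sigma)             # eps ~ N(0, I)
        return mu + eps * sigma                   # reparameterized u_i (Eq. 2)

u_i = EACVAE()(torch.randn(1, 256), torch.randn(1, 256), torch.randn(1, 256))
```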
Structural Causal Recurrent Neural Networks
We propose the SRNN to measure the reliability of the causal chain and to estimate the two transitive problems by measuring the semantic distance between the exogenous variables. As shown in Figure 3 (c), the SRNN consists of the following five components. The input of the SRNN in the first recurrent step is the quintuple ⟨h_1, h_2, h_3, u_1, u_2⟩.
Scene Drift Estimator
We design this component to estimate the scene drift problem between two exogenous variables:

α_1 = σ(W_{m1} u_1 + b_{m1} − W_{m2} u_2 − b_{m2}),    (3)

where α_1 ∈ R^m is the measurement of the scene drift problem, W_{m1}, W_{m2} ∈ R^{m×m} are trainable parameters, and σ is the sigmoid function.
Hidden Gate
The hidden gate aggregates the information within the endogenous variables for the next recurrent step of the SRNN and for estimating the threshold effect problem:

h'_2 = tanh(W_h [h_1; h_2] + b_h),    (4)
h'_3 = tanh(W_h [h_2; h_3] + b_h),    (5)

where h'_2, h'_3 ∈ R^m are the aggregated endogenous variables, and W_h ∈ R^{2m×m} is a trainable parameter.
Threshold Effect Estimator
Since the threshold effect problem can be discussed only when the scene is consistent, and threshold factors are event-specific, we estimate the threshold effect problem with the endogenous and exogenous variables, conditioned on the result of the scene drift estimator:

β_1 = σ(W_β([u_2; h'_3] − [u_1; h'_2]) ⊙ (1 − α_1)),    (6)

where β_1 ∈ R^m estimates whether the threshold effect problem exists, and W_β ∈ R^{2m×m} is a trainable parameter.
Exogenous Gate
u_1 contradicts u_2 if there is a threshold effect or scene drift problem. We can learn the contradiction of u_1 on u_2 by:

E_1 = tanh(W_e(u_2 + ((α_1 + β_1)/2) ⊙ u_1) + b_e),    (7)

where E_1 ∈ R^m is the representation of the contradiction of u_1 on u_2, and W_e ∈ R^{m×m} is a trainable parameter. If there are no threshold effect and scene drift problems, α_1 and β_1 are equal to 0, and E_1 is close to u_2.
Output Gate
For the inputs of the next recurrent step of the SRNN, we compose u_1 and E_1 into u'_2:

u'_2 = tanh(W_o [u_1; E_1] + b_o),    (8)

where u'_2 ∈ R^m is the aggregated exogenous variable, and W_o ∈ R^{2m×m} is a trainable parameter (the concatenation [u_1; E_1] is 2m-dimensional). Finally, we denote ⟨h'_2, h'_3, h_4, u'_2, u_3⟩ as the input to the next recurrent step of the SRNN.
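Putting the five components together, one recurrent step could look like the following PyTorch sketch; it is a simplified re-implementation of Eqs. 3-8 for illustration, not the authors' released code, and the layer names are chosen here for readability.

```python
import torch
import torch.nn as nn

class SRNNCell(nn.Module):
    """Sketch of one SRNN recurrent step (Eqs. 3-8)."""
    def __init__(self, m: int = 256):
        super().__init__()
        self.Wm1 = nn.Linear(m, m)                  # scene projections (Eq. 3)
        self.Wm2 = nn.Linear(m, m)
        self.Wh = nn.Linear(2 * m, m)               # hidden gate (Eqs. 4-5)
        self.Wb = nn.Linear(2 * m, m, bias=False)   # threshold estimator (Eq. 6)
        self.We = nn.Linear(m, m)                   # exogenous gate (Eq. 7)
        self.Wo = nn.Linear(2 * m, m)               # output gate (Eq. 8)

    def forward(self, h1, h2, h3, u1, u2):
        alpha = torch.sigmoid(self.Wm1(u1) - self.Wm2(u2))          # scene drift (Eq. 3)
        h2p = torch.tanh(self.Wh(torch.cat([h1, h2], -1)))          # Eq. 4
        h3p = torch.tanh(self.Wh(torch.cat([h2, h3], -1)))          # Eq. 5
        diff = torch.cat([u2, h3p], -1) - torch.cat([u1, h2p], -1)
        beta = torch.sigmoid(self.Wb(diff) * (1 - alpha))           # threshold (Eq. 6)
        E = torch.tanh(self.We(u2 + (alpha + beta) / 2 * u1))       # contradiction (Eq. 7)
        u2p = torch.tanh(self.Wo(torch.cat([u1, E], -1)))           # Eq. 8
        return alpha, beta, h2p, h3p, E, u2p

cell = SRNNCell()
h = [torch.randn(1, 256) for _ in range(5)]   # h_1..h_5
u = [torch.randn(1, 256) for _ in range(4)]   # u_1..u_4
alpha, beta, h2p, h3p, E, u2p = cell(h[0], h[1], h[2], u[0], u[1])
# the next step consumes <h2p, h3p, h[3], u2p, u[2]>, matching the text above
```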
Problems and Reliability Predictor
After the SRNN, we obtain the final output ⟨α_3, β_3, h'_4, h'_5, E_3⟩. First, we measure the existence of the threshold effect and scene drift problems based on β_3 and α_3, respectively:

P^T = Softmax(W_T β_3 + b_T),    P^S = Softmax(W_S α_3 + b_S),    (9)

where P^T = [P^T_0; P^T_1], P^S = [P^S_0; P^S_1] ∈ R^2 are the probability distributions of the threshold effect and scene drift problems, respectively. The subscripts 0 and 1 of P^T and P^S denote the non-existence and existence probabilities of the corresponding problems. W_S, W_T ∈ R^{m×2} are trainable parameters. Therefore, we can explain why a causal chain breaks according to P^T and P^S.
Finally, we measure the reliability of the causal chain as follows:

P_1 = tanh(W_1 [h'_4; u_3] + b_1),
P_2 = tanh(W_2 [h'_5; E_3] + b_2),
P^C = Softmax(W_C [P_1; P_2] + b_C),    (10)

where P_1, P_2 ∈ R^m are intermediate representations, P^C = [P^C_0; P^C_1] ∈ R^2 is the probability distribution of the reliability of the causal chain X_1 → ⋯ → X_5, and P^C_0, P^C_1 denote the probabilities that the causal chain is unreliable and reliable, respectively. W_1, W_2 ∈ R^{2m×m} and W_C ∈ R^{2m×2} are trainable parameters (the concatenation [P_1; P_2] is 2m-dimensional).
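A minimal PyTorch sketch of the two problem heads and the reliability head (Eqs. 9-10) follows; the layer names are illustrative.

```python
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Sketch of the problems-and-reliability predictor (Eqs. 9-10)."""
    def __init__(self, m: int = 256):
        super().__init__()
        self.WT = nn.Linear(m, 2)       # threshold-effect head (Eq. 9)
        self.WS = nn.Linear(m, 2)       # scene-drift head (Eq. 9)
        self.W1 = nn.Linear(2 * m, m)   # Eq. 10
        self.W2 = nn.Linear(2 * m, m)
        self.WC = nn.Linear(2 * m, 2)   # chain-reliability head

    def forward(self, alpha3, beta3, h4p, h5p, u3, E3):
        P_T = self.WT(beta3).softmax(-1)                      # threshold-effect probs
        P_S = self.WS(alpha3).softmax(-1)                     # scene-drift probs
        P1 = torch.tanh(self.W1(torch.cat([h4p, u3], -1)))
        P2 = torch.tanh(self.W2(torch.cat([h5p, E3], -1)))
        P_C = self.WC(torch.cat([P1, P2], -1)).softmax(-1)    # [unreliable, reliable]
        return P_T, P_S, P_C
```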
Optimizing with a Logic Loss
We design a logic loss that reduces the loss function from four parts to three. For example, if the causal chain is reliable, the probability that neither problem exists should equal the probability that the chain is reliable. Therefore, the logic loss is:

L_Logic = |log(P^T_0 × P^S_0) − log(P^C_1)|,    (11)

where P^T_0 and P^S_0 are the probabilities that the threshold effect and scene drift problems do not exist, and P^C_1 is the probability that the causal chain is reliable. Moreover, if the causal chain is unreliable due to the scene drift problem, the logic loss is L_Logic = |log(P^S_1 × P^T_0) − log(P^C_0)|. Finally, the loss function is:

L = L_Chain + λ_1 L_Logic + λ_2 L_kl,
L_Chain = CrossEntropy(Y, P^C),
L_kl = Σ_{i=1}^{4} KL(N_i(μ_i, σ_i^2) || N'_i(μ'_i, σ'_i^2)),    (12)

where L_Chain is the loss of the causal chain reliability prediction, L_Logic is the logic loss, L_kl is the Kullback-Leibler divergence loss (Hershey and Olsen, 2007) of the EA-CVAE, and λ_1 and λ_2 are loss coefficients.
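The full objective (Eq. 12) could be sketched as follows in PyTorch; the case analysis of Eq. 11 is simplified here to the reliable case only (the unreliable cases swap the indices as described above), and the function and argument names are illustrative.

```python
import torch
import torch.nn.functional as F
from torch.distributions import Normal, kl_divergence

def reco_loss(P_T, P_S, P_C, y, priors, posteriors, lam1=1.0, lam2=0.01):
    """P_T, P_S, P_C: (batch, 2) probabilities; y: (batch,) long labels (1 = reliable).
    priors / posteriors: lists of (mu, sigma) pairs, one per causal pair (Eq. 1)."""
    eps = 1e-12
    l_chain = F.nll_loss(torch.log(P_C + eps), y)            # CrossEntropy(Y, P^C)
    # Logic loss, written for the reliable case of Eq. 11:
    # |log(P^T_0 * P^S_0) - log(P^C_1)|.
    l_logic = (torch.log(P_T[:, 0] * P_S[:, 0] + eps)
               - torch.log(P_C[:, 1] + eps)).abs().mean()
    # KL between prior N(mu, sigma^2) and context posterior N'(mu', sigma'^2).
    l_kl = sum(kl_divergence(Normal(mu, sd), Normal(mu_p, sd_p)).mean()
               for (mu, sd), (mu_p, sd_p) in zip(priors, posteriors))
    return l_chain + lam1 * l_logic + lam2 * l_kl
```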
Experiments
CCR Datasets Construction
We choose the Chinese causal event graph CEG (Ding et al., 2019) and the English CauseNet (Heindorf et al., 2020) to obtain unlabeled causal chain reasoning examples. We first use Breadth-First Search on CEG and CauseNet to retrieve 2,911 and 1,400 causal chains with contexts, respectively. Each causal chain has 5 events, and no more than three events overlap between any two causal chains. Then, we label the causal chains through crowdsourcing. Professional annotators are asked to label the first causal relationship where the causal chain breaks, and which problem (threshold effect or scene drift) causes this break. Each chain is labeled by three annotators; the Cohen's agreement scores are κ = 78.21% and 75.69% for the Chinese and English CCR datasets, respectively.
We split the causal chains into different lengths of training examples (Instance-3, Instance-4, Instance-5), as sketched below. If a causal chain of length 5 breaks at the third causal relationship, 1 positive and 1 negative training example are constructed (positive: X_1 → X_2 → X_3; negative: X_1 → X_2 → X_3 → X_4). The statistics of the two CCR datasets are shown in Table 1. Refer to Appendix B for Chinese and English CCR examples.
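The splitting rule can be written down directly; below is a minimal plain-Python sketch, where break_at marks the first broken causal relationship (1-based) and None means the whole chain is reliable. The function name and interface are illustrative.

```python
def split_chain(chain, break_at):
    """chain: list of 5 events; returns (instance_prefix, label) pairs."""
    instances = []
    for n in range(3, len(chain) + 1):           # Instance-3, -4, -5 prefixes
        if break_at is None or n - 1 < break_at:
            instances.append((chain[:n], 1))     # all relations reliable -> positive
        elif n - 1 == break_at:
            instances.append((chain[:n], 0))     # ends at the first break -> negative
        # longer prefixes past the break are discarded; a chain broken at the
        # first relationship yields nothing, since the antecedent must be reliable

    return instances

print(split_chain(["X1", "X2", "X3", "X4", "X5"], break_at=3))
# [(['X1', 'X2', 'X3'], 1), (['X1', 'X2', 'X3', 'X4'], 0)]
```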
Baselines
We compare the performance of ReCo against a variety of sequence modeling methods and causal reasoning methods developed in recent years. In Embedding and ExCAR, for a causal chain X_1 → ⋯ → X_n, we treat X_1 → ⋯ → X_{n−1} and X_n as the cause and effect, respectively.
Embedding (Xie and Mu, 2019) measures word-level causality through causal embeddings. We choose the maximum causality score between cause and effect words and apply a threshold for prediction.
LSTM (Hochreiter and Schmidhuber, 1997) is a recurrent neural network. We use a BiLSTM to represent the causal chains for binary classification. BERT (Devlin et al., 2019; Cui et al., 2020) is pre-trained unsupervised on massive unlabeled data. Specifically, we use BERT-base to represent the causal chains for the reliability classification.
ExCAR (Du et al., 2021b) introduces evidence events for explainable causal reasoning. We introduce evidence events to the cause-effect pair for ExCAR experiments.
CausalBERT (Li et al., 2021) injects massive causal pair knowledge into BERT. We use CausalBERT to represent the causal chain for experiments.
We use precision, recall, F1 score, and accuracy to measure the performance of each method.
Training Details
For ReCo, we use the pre-trained BERT-base (Devlin et al., 2019; Cui et al., 2020) as the encoder to encode events and contexts. The batch size is set to 24, the dimension m is 256, and we choose Adam (Kingma and Ba, 2014) as the optimizer with a learning rate of 1e-5. The loss coefficients λ_1 and λ_2 are 1 and 0.01, respectively. ReCo runs for 50 epochs on two Tesla-P100-16GB GPUs.
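For orientation, the setup above corresponds roughly to the following PyTorch configuration; the model here is a placeholder standing in for the full encoder + EA-CVAE + SRNN + predictor stack, and only the numeric values come from the text.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 256), nn.Tanh(), nn.Linear(256, 2))  # placeholder
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # Adam, lr = 1e-5

BATCH_SIZE = 24                  # batch size
M_DIM = 256                      # hidden dimension m
EPOCHS = 50                      # training epochs
LAMBDA_1, LAMBDA_2 = 1.0, 0.01   # loss coefficients in Eq. 12
```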
Overall Results
We implement Embedding, LSTM, BERT, ExCAR, CausalBERT and ReCo on both Chinese and English CCR datasets. The overall results are shown in Table 2, from which we can observe that:
(1) Comparing the word-level method (Embedding) to the event-level methods (LSTM, BERT, ExCAR, CausalBERT, and ReCo), the event-level methods achieve absolute advantages, which indicates that considering only the causality between words while ignoring the semantics of events is inadequate for CCR.
(2) Knowledge-enhanced methods (ExCAR and CausalBERT) achieve comparable results to BERT.
This is mainly because not all the evidence events in ExCAR are reliable, and CausalBERT only possesses causal pair knowledge, making ExCAR and CausalBERT struggle with the CCR tasks.
(3) ReCo outperforms BERT, ExCAR, and CausalBERT in F1 score and accuracy, which shows that the exogenous variables captured by the EA-CVAE are important for conducting CCR tasks and that the SRNN is important for addressing the two transitive problems. Moreover, the advantage of ReCo is mainly reflected in precision; this is because capturing the threshold and scene factors is effective for measuring the transitive problems and estimating the reliability of the causal chains.
(4) All six methods get lower precision scores on the Chinese CCR test set than on the English CCR test set. This is mainly because all events in the Chinese CCR dataset are sentences, while most events in the English CCR dataset consist of only one word, making the Chinese CCR task more challenging and complex.
Moreover, we also compare ReCo with baselines on different lengths of causal chains. Results are illustrated in Figure 4. We can observe that:
(1) Most of the methods perform worse as the chain gets longer, which indicates that longer instances require stronger CCR ability.
(2) ReCo performs best on almost all instance levels of both CCR datasets. This is mainly because the threshold and scene factors captured by the EA-CVAE are important for CCR tasks, and the SRNN can properly capture the two transitive problems by estimating the semantic distance between threshold factors or between scene factors. However, CausalBERT achieves the best performance on Instance-5 of the English CCR test set, which is mainly because Instance-5 in the English CCR dataset might rely more on massive external causal pair knowledge.
(3) Compared with the results on Instance-4, results drop more on the English Instance-5 than on the Chinese Instance-5. The reason is that conducting CCR on causal chains with five or more word-level events may require additional information to reason from the first event to the last.
Causal Knowledge Injection
To further investigate the effectiveness of ReCo, we inject different kinds of causal knowledge into BERT. Then, following Du et al. (2022), we test the causal-enhanced BERT models on four NLP benchmark datasets: a causal extraction dataset, Event StoryLine v0.9 (Caselli and Vossen, 2017); two causal reasoning datasets, BeCAUSE 2.1 (Dunietz et al., 2017) and COPA (Roemmele et al., 2011); as well as a commonsense reasoning dataset, CommonsenseQA (Talmor et al., 2019). To give a careful analysis, we inject causal knowledge into BERT in the following four different ways (the details of knowledge injection are given in Appendix C):
• BERT_O: injected with no external knowledge.
• BERT_P: injected with causal pair knowledge.
• BERT_C: injected with unfiltered causal chain knowledge.
• BERT_R: injected with causal chain knowledge distilled by ReCo.
The results are shown in Table 3, from which we can observe that:
(1) Methods (BERT_P, BERT_C, BERT_R) enhanced with causal knowledge outperform the original BERT (BERT_O) on all four tasks, which indicates that causal knowledge can provide extra information for causal-related tasks.
(2) Comparisons between the causal chain knowledge enhanced methods (BERT_C, BERT_R) and the causal pair knowledge enhanced method (BERT_P) show that BERT_C and BERT_R push the model to a higher level than BERT_P on all four tasks. The main reason is that causal chains contain more abundant knowledge than causal pairs.
(3) The unfiltered causal chain knowledge enhanced method (BERT_C) performs worse than BERT_R, which is injected with causal chain knowledge distilled by ReCo. The main reason is that some unfiltered causal chains are unreliable due to the threshold effect or scene drift problem, which can mislead the model to choose the wrong answer.
Ablation Study
We provide ablation studies to show the superiority and effectiveness of ReCo. First, we provide the contexts of the causal pairs to BERT to prove the advantages of the EA-CVAE and SRNN in ReCo. Second, we remove the EA-CVAE in ReCo and set the contexts as the exogenous variables to investigate the effect of the EA-CVAE. Third, we remove the extra supervised signals of the problem estimators to study their effect. Finally, we replace the logic loss with cross-entropy losses to validate the effectiveness of the logic loss. Overall results are shown in Table 4, from which we can find that:
(1) After providing contexts to BERT, the performance of BERT increases slightly, which shows that there is effective information in the contexts to conduct CCR tasks, but BERT cannot use it sufficiently. This proves that properly utilizing information in the contexts is of great importance.
(2) After removing EA-CVAE, the performance of ReCo drops and is worse than BERT. This is because the contexts are not proper estimations of the exogenous variables and there is also noise in the contexts which has negative impacts on ReCo.
(3) After removing the supervised signals of the problem estimators, ReCo performs worse, which indicates that the problem estimators supervised by the extra problem labels are important for measuring the existence of the two transitive problems. Moreover, ReCo without the extra supervised signals still outperforms contexts-enhanced BERT, which indicates that the EA-CVAE in ReCo can properly estimate the exogenous variables with contexts, and that the SRNN plays an important role in deeply understanding the causal chain.
(4) After replacing the logic loss with cross-entropy losses, the performance of ReCo drops by 1.63 in accuracy, which indicates that the logic constraints applied by the logic loss guide ReCo to better generalization.
Case Study
To intuitively investigate whether ReCo can discover the right problem when the causal chain is unreliable, we provide an example made by ReCo. As shown in Table 5, "bacteria" in a cosmetic scene caused by "acne" cannot lead to "salmonellosis". ReCo gives the right label and the right problem which makes the causal chain unreliable. Refer to Appendix E for more cases.

Table 5: An example made by ReCo. ReCo makes the right prediction and gives the reason why this chain breaks: "salmonellosis" will not happen in the scene where "acne" causes "bacteria".
Chain: production of sebum → acne → bacteria → salmonellosis
ReCo prediction: Unreliable; Scene Drift: True; Threshold Effect: False
Related Work
Causal Knowledge Acquisition
Causal knowledge is crucial for various artificial intelligence applications. Many works (Heindorf et al., 2020; Zhang et al., 2020) extract large-scale and precise causal pairs through neural or symbolic ways, and then connect the causal pairs into causal chains or graphs based on the textual or semantic similarity between events (Chang and Choi, 2004; Li et al., 2020; Hashimoto et al., 2014). Luo et al. (2016) used linguistic patterns (Chang and Choi, 2004) to construct CausalNet. Heindorf et al. (2020) built CauseNet from web resources. Rashkin et al. (2018) constructed Event2Mind and Sap et al. (2019) built ATOMIC, both through crowdsourcing. Zhang et al. (2020) proposed a large-scale eventuality knowledge graph called ASER. Li et al. (2020) built CausalBank to improve the coverage of the causal knowledge base. Previous studies mainly focused on extracting high-precision causal pairs, while ignoring the transitive problems that arise when connecting event pairs into causal chains. We instead aim to solve the two transitive problems in generating reliable causal chains.
Causal Reasoning
Causal reasoning aims at grasping the causal dependency between cause and effect; existing methods are either statistical-based or neural-based.

As for statistical-based methods, Gordon et al. (2011) measured PMI on a personal story corpus and then measured the causality between words with PMI. Luo et al. (2016) and Sasaki et al. (2017) introduced direction information into the causal strength index, and then inferred the causality between events by combining the causality of word pairs.
Many neural-based methods introduce the semantics of events to measure the causality of causal pairs. Of late, Xie and Mu (2019) proposed to measure word-level causality with an attention-based mechanism. Wang et al. (2019) and Li et al. (2019) finetuned pre-trained language models to resolve causal reasoning tasks and achieved impressive results. Li et al. (2021) injected a vast amount of causal pair knowledge into the pre-trained language model and obtained a noticeable improvement on the COPA (Roemmele et al., 2011) causal reasoning task. Du et al. (2021b) introduced evidence events to the causal pairs and used a conditional Markov neural logic network to model the causal paths between cause and effect events, to achieve stable and self-explainable causal reasoning. Du et al. (2022) introduced general truth to event pairs for investigating explainable causal reasoning.
Most of the above causal reasoning studies focus on causal pair reasoning, while we address reliable causal chain reasoning.
Conclusion
We explore the problem of causal chain reasoning and propose a novel framework called ReCo to overcome the two main transitive problems of threshold effect and scene drift. ReCo first constructs an SCM for each causal chain, where the SCM introduces exogenous variables to represent the threshold and scene factors of the causal pairs; it then conducts an EA-CVAE to implicitly learn the representations of the exogenous variables from the contexts. Finally, ReCo devises an SRNN to estimate the threshold and scene contradictions across the exogenous variables. Experiments show that ReCo achieves the best CCR performances on both Chinese and English datasets.
Acknowledgments
We would like to thank Bibo Cai and Minglei Li for their valuable feedback and advice, and the anonymous reviewers for their constructive comments, and gratefully acknowledge the support of the Technological Innovation "2030 Megaproject" -New Generation Artificial Intelligence of China (2018AAA0101901), and the National Natural Science Foundation of China (62176079, 61976073).
Limitations
This study has several limitations. First, the threshold and scene factors are hard to capture explicitly, which might hinder ReCo from achieving higher performance. Second, due to the three-component loss function and the nature of CVAEs, training requires more attempts to reach convergence. Third, due to the nature of the CEG, each causal pair has only one context; having multiple contexts for each causal pair would cover more conditions and capture the threshold effect and scene drift problems more precisely. Moreover, larger CCR datasets would be beneficial. Future research should explore a more efficient and general model architecture and obtain larger Chinese and English CCR datasets with higher agreement and multiple contexts.
A CEG Construction
CEG (Chinese Event Graph) (Ding et al., 2019) is a large-scale, open-domain causal event graph, which consists of more than 1.6 million events and 3.6 million cause-effect edges. We list the steps of constructing CEG as follows (step 3 is sketched in code after this list):
1) Crawling news documents from news websites (such as Netease news† and Tencent news‡).
2) Conducting causal pair extraction through sequence labeling (B_Cause, I_Cause, B_Effect, I_Effect, O); the training data are annotated through crowdsourcing.
3) Computing event similarity with the Jaccard similarity coefficient (Niwattanakul et al., 2013) and clustering events using a threshold.
4) Extracting common elements from the events in the same cluster (making sure that at least a verb and a noun are kept) to generalize events.
5) Connecting event pairs into causal chains and the CEG.

† https://news.163.com/
‡ https://news.qq.com/
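A minimal sketch of step 3 follows, assuming word-level Jaccard similarity over event strings; the threshold value here is illustrative, not taken from the paper.

```python
def jaccard(event_a: str, event_b: str) -> float:
    """Jaccard coefficient |A ∩ B| / |A ∪ B| over the word sets of two events."""
    a, b = set(event_a.split()), set(event_b.split())
    return len(a & b) / len(a | b) if a | b else 0.0

def same_cluster(event_a: str, event_b: str, threshold: float = 0.5) -> bool:
    # Events above the similarity threshold land in the same cluster.
    return jaccard(event_a, event_b) >= threshold

print(jaccard("gets a red card", "gets a technical foul"))  # 0.333...
print(same_cluster("gets a red card", "gets a technical foul"))  # False
```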
B CCR Examples
Examples of Chinese and English CCR are shown in Table 6 and Table 7, respectively. The labels 2 and 4 indicate that the causal chain breaks at the second and the fourth causal relationship, respectively, and the problem types are threshold effect and scene drift for the Chinese and English examples, respectively.
C Causal Knowledge Injection
We use the English CCR dataset for the different knowledge injections: causal pair knowledge (BERT_P), unfiltered causal chain knowledge (BERT_C), and causal chain knowledge distilled by ReCo (BERT_R). All the models are based on BERT-base (Devlin et al., 2019).
C.1 Knowledge Injection Settings
For BERT_P, we split the causal chains in the English CCR dataset into causal pairs, and for each causal pair, we randomly sample a cause event or effect event from other causal pairs to obtain negative samples. The cause together with the effect event is concatenated and sent into the pre-trained BERT; we then use the representation of the [CLS] token in the last hidden state for binary classification. For BERT_C, we split the causal chains in the English CCR dataset into causal chains of length 2 to 5. As for negative samples, for each causal chain, we sample an event from another causal chain to replace the first or last event of the causal chain. The events in a causal chain are concatenated into a sequence and sent into the pre-trained BERT; we then use the representation of the [CLS] token in the last hidden state for binary classification.
For BERT_R, we split the causal chains filtered by ReCo into causal chains of length 2 to 5. As for negative samples, for each causal chain, we sample an event from another causal chain to replace the first or last event of the causal chain. The events in a causal chain are concatenated into a sequence and sent into the pre-trained BERT; we then use the representation of the [CLS] token in the last hidden state for binary classification.
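The shared input construction across these settings might look like the following sketch using HuggingFace transformers; the checkpoint name, the toy pairs, and the negative-sampling line are illustrative, and BertForSequenceClassification's head (over the pooled [CLS] representation) stands in for the [CLS]-based classifier described above.

```python
import random
import torch
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

pairs = [("acne", "bacteria"), ("tired at work", "relax")]   # toy causal pairs
cause, effect = pairs[0]
neg_effect = random.choice([e for c, e in pairs if (c, e) != pairs[0]])  # corrupted effect

# "[CLS] cause [SEP] effect [SEP]" -> binary plausibility label via the [CLS] head.
pos = tokenizer(cause, effect, return_tensors="pt")
neg = tokenizer(cause, neg_effect, return_tensors="pt")
with torch.no_grad():
    logits = model(**pos).logits   # (1, 2); trained with labels 1 (real) / 0 (corrupted)
```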
C.2 Knowledge Injection Details
For all methods (BERT_P, BERT_C, and BERT_R), we use the base version of BERT (Devlin et al., 2019). The batch size is 36, and we use the Adam (Kingma and Ba, 2014) optimizer with a learning rate of 1e-5. All three models are pre-trained for 2 epochs.

C.3 Downstream Tasks Finetuning

C.3.1 Dataset Settings

• Event StoryLine v0.9 (Caselli and Vossen, 2017): We only keep the intra-sentence causal pairs and ensure that the cause event precedes the effect event. Finally, we randomly split the filtered causal pairs into train, dev, and test sets.
• BeCAUSE 2.1 (Dunietz et al., 2017): We first extract event pairs from the annotated data, then manually split the event pairs into train, dev, and test sets.
• COPA (Roemmele et al., 2011): Since COPA does not have a training set, we randomly sample 90% of the dev set for training and use the remaining 10% as the new dev set.
• CommonsenseQA (Talmor et al., 2019): We use the dev set for testing, because the test set of CommonsenseQA is a blind set.

The statistics of the four datasets are shown in Table 8.
C.3.2 Finetuning
We finetune BERT_O, BERT_P, BERT_C, and BERT_R on the above four downstream tasks.
For Event StoryLine v0.9 and BeCAUSE 2.1, we concatenate the event pair into a sequence and send it into the above four models, then the representation of [CLS] in the last hidden state is used for binary classification. We use F1 score and accuracy as the evaluation metrics of Event StoryLine v0.9 and BeCAUSE 2.1, respectively.
For COPA and CommonsenseQA tasks, we concatenate the premise (question) together with one of the hypotheses (alternatives) and feed it into all four models, then we use the [CLS] token in the last hidden state for classification. We use accuracy as the evaluation metric for both COPA and CommonsenseQA.
As for the finetuning settings of the above four models, the batch size is set to 40, and we use Adam (Kingma and Ba, 2014) optimizer with the learning rate of 1e-5. An early-stopping mechanism is applied for finetuning.
D Ablation Study
D.1 EA-CVAE
For investigating the importance of the EA-CVAE in ReCo, we remove the EA-CVAE component and, for constructing the SCM (Pearl, 2009), use the contexts of the causal pairs as the exogenous variables. The other components of ReCo are unchanged, and the training settings are the same as for the original ReCo.
D.2 Problems Estimators
We devise two problem estimators to estimate the threshold effect and scene drift problems. For investigating their importance, we remove the supervised signal of the threshold and scene estimators in the SRNN (by removing L_Logic from the loss), so that the parameters of the two problem estimators are tuned only by the final reliability prediction task (note that the EA-CVAE is kept for training, and tuning the Kullback-Leibler divergence loss (Hershey and Olsen, 2007) does not change the parameters of the two problem estimators). The model architecture and the training settings are the same as for the original ReCo.
D.3 Logic Loss
The logic loss is used to apply a logic constraint on the predictions of ReCo. When the causal chain is reliable, neither the threshold effect nor the scene drift problem should exist; when the causal chain is unreliable, one of the transitive problems should exist. To investigate the effect of the logic loss, we replace it with two cross-entropy losses: one supervises the threshold effect problem, and the other supervises the scene drift problem.
E Case Study
We provide another two English examples predicted by ReCo. The examples of the threshold effect and scene drift problems are shown in Table 9 and Table 10, respectively.

Table 9: An example made by ReCo. ReCo makes the right prediction and gives the reason why this chain is unreliable: "problems" conditioned on "reading" and "myopia" is not enough to lead to "stress".
Chain: reading → myopia → problems → stress
ReCo prediction: Unreliable; Threshold Effect: True; Scene Drift: False

Table 10: An example made by ReCo. ReCo makes the right prediction and gives the reason why this chain is unreliable: "energy savings" will not happen in the scene of "volume growth" → "revenue growth" → "improvement".
Chain: volume growth → revenue growth → improvement → energy savings
ReCo prediction: Unreliable; Scene Drift: True; Threshold Effect: False
Figure 1: Causal chains with (a) threshold effect and (b) scene drift problems, which can be estimated by the contradictions of threshold and scene factors in the contexts, respectively.

Figure 2: (a) Constructing an SCM based on an antecedent causal chain and a causal pair. (b) If there is a threshold effect or scene drift problem, then U_xy would contradict U_yz. The threshold effect problem is only worth discussing when the scenes are consistent.

Figure 3: (a) The overall architecture of ReCo. (b) The detailed structure of the EA-CVAE. (c) The detailed structure of the SRNN, which is a kind of recurrent neural network.

Figure 4: Accuracy on (a) Chinese and (b) English CCR test sets, categorized by the lengths of the causal chains.
Table 1: Statistics of the CCR datasets. Chain denotes the causal chains retrieved from the causal event graphs; Instance-3, Instance-4, and Instance-5 denote instances with chain lengths of 3, 4, and 5, respectively.

CCR                 Train    Dev    Test
Zh  Chain           2,131    290     490
    Instance-3      2,131    290     490
    Instance-4      1,552    207     324
    Instance-5      1,077    139     188
    Total           4,760    636   1,002
En  Chain           1,037    139     224
    Instance-3      1,037    139     224
    Instance-4        829    109     164
    Instance-5        612     80     105
    Total           2,478    328     493
Table 2: Overall results on the CCR test sets.
Table 3: Overall results of causal knowledge injection. The evaluation metrics are computed on manually split test sets (Event StoryLine v0.9, BeCAUSE 2.1), the official test set (COPA), and the dev set (CommonsenseQA).

Datasets                                                  BERT_O  BERT_P  BERT_C  BERT_R
Event StoryLine v0.9* (Caselli and Vossen, 2017) (F1 %)    66.84   68.08   69.05   70.66
BeCAUSE 2.1 (Dunietz et al., 2017) (Accuracy %)            79.17   81.94   83.33   83.80
COPA (Roemmele et al., 2011) (Accuracy %)                  73.80   74.00   74.20   75.40
CommonsenseQA (Talmor et al., 2019) (Accuracy %)           54.71   54.87   55.04   55.12

* Only the intra-sentence event pairs are kept for experiments, and the train, dev, and test sets are split randomly. We also ensure the cause event precedes the effect event.
Table 4: Overall results of the ablation study on the English CCR test set. "w" and "w/o" denote "with" and "without", respectively.

Methods                       Accuracy %
BERT                          69.17
  - w context                 69.37
ReCo                          71.81
  - w/o EA-CVAE               68.97
  - w/o Problems Estimators   70.18
  - w/o Logic Loss            70.18
Table 6: An example in the Chinese CCR dataset (English translations added in parentheses).

Events:
A: 销量下滑 (declining sales)
B: 市场竞争加剧 (intensified market competition)
C: 深圳发展 (development of Shenzhen)
D: 城市化进程快 (rapid urbanization)
E: 水源水质差 (poor water-source quality)
Contexts:
A → B: 销量下滑导致了终端市场竞争加剧 (The decline in sales led to intensified competition in the terminal market.)
B → C: 通信市场竞争加剧将有助于深圳的通信设备业发展 (Intensified competition in the communications market will help the development of Shenzhen's communications equipment industry.)
C → D: 深圳的向西发展使得宝安的城市化进程越来越快 (Shenzhen's westward development makes Bao'an's urbanization faster and faster.)
D → E: 水源水质极差的原因是周边城市化进程较快 (The extremely poor water-source quality is caused by the rapid urbanization of the surrounding area.)
Label: 2
Wrong Type: Threshold Effect

Table 7: An example in the English CCR dataset.

Events:
A: Tired at work
B: Relax
C: Playing games
D: Dispute
E: Sent off by a red card
Contexts:
A → B: Tired at work makes me need to relax at weekends.
B → C: Tom wants to relax by playing games.
C → D: Jack and Mike dispute because of playing games.
D → E: David Silver gets a red card because of the dispute with the referee.
Label: 4
Wrong Type: Scene Drift
Table 8: Statistics of the Event StoryLine v0.9 (Caselli and Vossen, 2017), BeCAUSE 2.1 (Dunietz et al., 2017), COPA (Roemmele et al., 2011), and CommonsenseQA (Talmor et al., 2019) datasets.
References

Akari Asai, Kazuma Hashimoto, Hannaneh Hajishirzi, Richard Socher, and Caiming Xiong. 2019. Learning to retrieve reasoning paths over wikipedia graph for question answering. In International Conference on Learning Representations.

Tommaso Caselli and Piek Vossen. 2017. The event storyline corpus: A new benchmark for causal and temporal relation extraction. In Proceedings of the Events and Stories in the News Workshop, pages 77-86.

Du-Seong Chang and Key-Sun Choi. 2004. Causal relation extraction using cue phrase and lexical pair probabilities. In International Conference on Natural Language Processing, pages 61-70. Springer.

Wenqing Chen, Jidong Tian, Yitian Li, Hao He, and Yaohui Jin. 2021. De-confounded variational encoder-decoder for logical table-to-text generation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5532-5542.

Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 657-668, Online. Association for Computational Linguistics.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.

Xiao Ding, Zhongyang Li, Ting Liu, and Kuo Liao. 2019. ELG: An event logic graph. arXiv preprint arXiv:1907.08015.

Li Du, Xiao Ding, Ting Liu, and Bing Qin. 2021a. Learning event graph knowledge for abductive reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5181-5190.

Li Du, Xiao Ding, Kai Xiong, Ting Liu, and Bing Qin. 2021b. ExCAR: Event graph knowledge enhanced explainable causal reasoning. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 2354-2363.

Li Du, Xiao Ding, Kai Xiong, Ting Liu, and Bing Qin. 2022. e-CARE: A new dataset for exploring explainable causal reasoning. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 432-446, Dublin, Ireland. Association for Computational Linguistics.

Jesse Dunietz, Lori Levin, and Jaime G Carbonell. 2017. The BECauSE corpus 2.0: Annotating causality and overlapping relations. In Proceedings of the 11th Linguistic Annotation Workshop, pages 95-104.

Andrew S Gordon, Cosmin A Bejan, and Kenji Sagae. 2011. Commonsense causal reasoning using millions of personal stories. In Twenty-Fifth AAAI Conference on Artificial Intelligence.

Chikara Hashimoto, Kentaro Torisawa, Julien Kloetzer, Motoki Sano, István Varga, Jong-Hoon Oh, and Yutaka Kidawara. 2014. Toward future scenario generation: Extracting event causality exploiting semantic relation, context, and association features. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 987-997.

Stefan Heindorf, Yan Scholten, Henning Wachsmuth, Axel-Cyrille Ngonga Ngomo, and Martin Potthast. 2020. CauseNet: Towards a causality graph extracted from the web. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pages 3023-3030.

John R Hershey and Peder A Olsen. 2007. Approximating the Kullback Leibler divergence between Gaussian mixture models. In 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '07), volume 4, pages IV-317. IEEE.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Samuel GB Johnson and Woo-kyoung Ahn. 2015. Causal networks or causal islands? The representation of mechanisms and the transitivity of causal judgment. Cognitive Science, 39(7):1468-1503.

Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Zhongyang Li, Tongfei Chen, and Benjamin Van Durme. 2019. Learning to rank for plausible plausibility. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4818-4823.

Zhongyang Li, Xiao Ding, Kuo Liao, Bing Qin, and Ting Liu. 2021. CausalBERT: Injecting causal knowledge into pre-trained models with minimal supervision. arXiv preprint arXiv:2107.09852.

Zhongyang Li, Xiao Ding, Ting Liu, J. Edward Hu, and Benjamin Van Durme. 2020. Guided generation of cause and effect. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20.

Zhiyi Luo, Yuchen Sha, Kenny Q Zhu, Seung-won Hwang, and Zhongyuan Wang. 2016. Commonsense causal reasoning between short texts. In Fifteenth International Conference on the Principles of Knowledge Representation and Reasoning.

Suphakit Niwattanakul, Jatsada Singthongchai, Ekkachai Naenudorn, and Supachanun Wanapu. 2013. Using of Jaccard coefficient for keywords similarity. In Proceedings of the International MultiConference of Engineers and Computer Scientists, volume 1, pages 380-384.
Causality. Judea Pearl, Cambridge university pressJudea Pearl. 2009. Causality. Cambridge university press.
Event2mind: Commonsense inference on events, intents, and reactions. Maarten Hannah Rashkin, Emily Sap, Allaway, A Noah, Yejin Smith, Choi, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Hannah Rashkin, Maarten Sap, Emily Allaway, Noah A Smith, and Yejin Choi. 2018. Event2mind: Com- monsense inference on events, intents, and reactions. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 463-473.
Choice of plausible alternatives: An evaluation of commonsense causal reasoning. Melissa Roemmele, Andrew S Cosmin Adrian Bejan, Gordon, 2011 AAAI Spring Symposium Series. Melissa Roemmele, Cosmin Adrian Bejan, and An- drew S Gordon. 2011. Choice of plausible alter- natives: An evaluation of commonsense causal rea- soning. In 2011 AAAI Spring Symposium Series.
Atomic: An atlas of machine commonsense for ifthen reasoning. Maarten Sap, Emily Ronan Le Bras, Chandra Allaway, Nicholas Bhagavatula, Hannah Lourie, Brendan Rashkin, Roof, A Noah, Yejin Smith, Choi, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Maarten Sap, Ronan Le Bras, Emily Allaway, Chan- dra Bhagavatula, Nicholas Lourie, Hannah Rashkin, Brendan Roof, Noah A Smith, and Yejin Choi. 2019. Atomic: An atlas of machine commonsense for if- then reasoning. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 33, pages 3027-3035.
Handling multiword expressions in causality estimation. Shota Sasaki, Sho Takase, Naoya Inoue, Naoaki Okazaki, Kentaro Inui, IWCS 2017-12th International Conference on Computational Semantics. Short papersShota Sasaki, Sho Takase, Naoya Inoue, Naoaki Okazaki, and Kentaro Inui. 2017. Handling mul- tiword expressions in causality estimation. In IWCS 2017-12th International Conference on Computa- tional Semantics-Short papers.
Commonsenseqa: A question answering challenge targeting commonsense knowledge. Alon Talmor, Jonathan Herzig, Nicholas Lourie, Jonathan Berant, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. Commonsenseqa: A question answering challenge targeting commonsense knowl- edge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4149-4158.
Superglue: a stickier benchmark for general-purpose language understanding systems. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, Samuel R Bowman, Proceedings of the 33rd International Conference on Neural Information Processing Systems. the 33rd International Conference on Neural Information Processing SystemsAlex Wang, Yada Pruksachatkun, Nikita Nangia, Aman- preet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. 2019. Superglue: a stickier benchmark for general-purpose language understand- ing systems. In Proceedings of the 33rd International Conference on Neural Information Processing Sys- tems, pages 3266-3280.
| [
"https://github.com/Waste-Wood/ReCo."
] |
[
"Coarse-graining fast linkers Coarse-grained dynamics of transiently-bound fast linkers",
"Coarse-graining fast linkers Coarse-grained dynamics of transiently-bound fast linkers"
] | [
"Sophie Marbach \nCNRS\nSorbonne Université\nPhysicochimie des Electrolytes et Nanosystèmes InterfaciauxF-75005ParisFrance\n\nCourant Institute of Mathematical Sciences\nNew York University\n10012NYU.S.A\n",
"Christopher E Miles \nDepartment of Mathematics\nUniversity of California\n92697Irvine, IrvineUSA\n"
] | [
"CNRS\nSorbonne Université\nPhysicochimie des Electrolytes et Nanosystèmes InterfaciauxF-75005ParisFrance",
"Courant Institute of Mathematical Sciences\nNew York University\n10012NYU.S.A",
"Department of Mathematics\nUniversity of California\n92697Irvine, IrvineUSA"
] | [] | Transient bonds between fast linkers and slower particles are widespread in physical and biological systems. In spite of their diverse structure and function, a commonality is that the linkers diffuse on timescales much faster compared to the overall motion of the particles they bind to. This limits numerical and theoretical approaches that need to resolve these diverse timescales with high accuracy. Many models, therefore, resort to effective, yet ad-hoc, dynamics, where linker motion is only accounted for when bound. This paper provides a mathematical justification for such coarse-grained dynamics that preserves detailed balance at equilibrium. Our derivation is based on multiscale averaging techniques and is broadly applicable. We verify our results with simulations on a minimal model of fast linker binding to a slow particle. We show how our framework can be applied to various systems, including those with multiple linkers, stiffening linkers upon binding, or slip bonds with force-dependent unbinding. Importantly, the preservation of detailed balance only sets the ratio of the binding to the unbinding rates, but it does not constrain the detailed expression of binding kinetics. We conclude by discussing how various choices of binding kinetics may affect macroscopic dynamics.Transient bonds between fast molecules and other slower molecules are found ubiquitously throughout physical systems. Such bonds enable momentum transfer at microscopic scales and are at the root of diverse phenomena, including linkers that tune the material properties of polymers 1-4 , shape spatial organization and function of biological systems 5-10 , and determine the rate of tethered chemical reactions 11 . Stochastic modeling of transient binding of linkers via numerical or theoretical approaches is, therefore, of widespread interest.The diversity of phenomena goes hand in hand with a variety of systems and hence of lexical terms: examples include cross-links in polymer meshes 1,2 , metal-ligand bonds in self-assembled porous materials 12,13 , complementary DNA pairs hybridizing between colloids 14-21 , and fast myosin motors binding to slender actin fibers. 5,6,22 Henceforth, the word linker will refer to any molecule with (a) a binding end that can form a bond with another molecule and (b) that relaxes rapidly to a finite length. In the previous examples, the linker is the polymer linker, ligand, DNA, or myosin molecule. The bond will refer to the chemical bond formed between a linker and a slower molecule (or another linker). In the previous examples, the bond is the weak polymer-polymer adhesion, the metal-ligand chemical bond, the hybridized DNA section, or the high-affinity myosin head after ATP hydrolysis.Despite enormous variations in mechanochemical properties of linking molecules, one unifying feature is their diffusion on characteristic timescales much faster than the overall motion of the fibers or objects they link. 10,23-25 . This rapid diffusion of (often many) linking molecules creates disparate time and length scales that must be resolved, often creating a bottleneck for numerical or theoretical investigations.12,13,26 | 10.1063/5.0139036 | [
"https://export.arxiv.org/pdf/2212.08777v3.pdf"
] | 254,854,367 | 2212.08777 | ec6f2d2fde5b26540230a77e7e0a863aa573bd89 |
Coarse-graining fast linkers Coarse-grained dynamics of transiently-bound fast linkers
Sophie Marbach
CNRS
Sorbonne Université
Physicochimie des Electrolytes et Nanosystèmes InterfaciauxF-75005ParisFrance
Courant Institute of Mathematical Sciences
New York University
10012NYU.S.A
Christopher E Miles
Department of Mathematics
University of California
92697Irvine, IrvineUSA
Coarse-graining fast linkers Coarse-grained dynamics of transiently-bound fast linkers
; Both authors contributed equally to this work.) (Dated: April 25, 2023)
Transient bonds between fast linkers and slower particles are widespread in physical and biological systems. In spite of their diverse structure and function, a commonality is that the linkers diffuse on timescales much faster than the overall motion of the particles they bind to. This limits numerical and theoretical approaches that need to resolve these diverse timescales with high accuracy. Many models, therefore, resort to effective, yet ad-hoc, dynamics, where linker motion is only accounted for when bound. This paper provides a mathematical justification for such coarse-grained dynamics that preserves detailed balance at equilibrium. Our derivation is based on multiscale averaging techniques and is broadly applicable. We verify our results with simulations on a minimal model of fast linker binding to a slow particle. We show how our framework can be applied to various systems, including those with multiple linkers, stiffening linkers upon binding, or slip bonds with force-dependent unbinding. Importantly, the preservation of detailed balance only sets the ratio of the binding to the unbinding rates, but it does not constrain the detailed expression of binding kinetics. We conclude by discussing how various choices of binding kinetics may affect macroscopic dynamics.
Transient bonds between fast molecules and other slower molecules are found ubiquitously throughout physical systems. Such bonds enable momentum transfer at microscopic scales and are at the root of diverse phenomena, including linkers that tune the material properties of polymers, 1-4 shape the spatial organization and function of biological systems, 5-10 and determine the rate of tethered chemical reactions. 11 Stochastic modeling of transient binding of linkers via numerical or theoretical approaches is, therefore, of widespread interest.
The diversity of phenomena goes hand in hand with a variety of systems and hence of lexical terms: examples include cross-links in polymer meshes, 1,2 metal-ligand bonds in self-assembled porous materials, 12,13 complementary DNA pairs hybridizing between colloids, 14-21 and fast myosin motors binding to slender actin fibers. 5,6,22 Henceforth, the word linker will refer to any molecule with (a) a binding end that can form a bond with another molecule and (b) that relaxes rapidly to a finite length. In the previous examples, the linker is the polymer linker, ligand, DNA, or myosin molecule. The bond will refer to the chemical bond formed between a linker and a slower molecule (or another linker). In the previous examples, the bond is the weak polymer-polymer adhesion, the metal-ligand chemical bond, the hybridized DNA section, or the high-affinity myosin head after ATP hydrolysis.
Despite enormous variations in mechanochemical properties of linking molecules, one unifying feature is their diffusion on characteristic timescales much faster than the overall motion of the fibers or objects they link. 10,23-25 This rapid diffusion of (often many) linking molecules creates disparate time and length scales that must be resolved, often creating a bottleneck for numerical or theoretical investigations. 12,13,26 To alleviate this, it is common to rely on coarse-grained descriptions of the linkers, either for their dynamics, binding kinetics, or both. In these coarse-grained scenarios, crosslinkers are replaced by effective laws so that their detailed dynamics need not be considered directly. 22,27-36 For example, if a fast linker binds transiently to a slow particle, a coarse-grained simulation or model would only specify the dynamics of the slow particle; see illustration in Fig. 1. While the proposed effective dynamics may be relatively intuitive in some scenarios, there does not seem to be a systematic procedure for justifying and comparing this coarse-graining across different systems. 22,29,31 This paper provides a mathematical justification for such dynamics and, more importantly, a framework to derive effective dynamics in various settings.

FIG. 1. (a) Illustration of coarse-grained simulations where fast linkers (pink) between slow fibers (blue) are modeled only when they are bound. (b) In this work, we study a minimal system where a fast jiggling linker (pink) binds and unbinds rapidly to a binding site on a slow particle, with rates $q_{on}$ and $q_{off}$ that depend on the relative distance. The 1D position of the linker is $x_\ell$ and that of the slow particle $x_s$. In this paper, we use multiscale averaging techniques to coarse-grain the dynamics of the fast linker while preserving detailed balance. In the coarse-grained model, only the slow particle dynamics are specified, with effective kinetic rates $q^{\rm eff}_{on}(x_s)$ and $q^{\rm eff}_{off}(x_s)$ and forces that only depend on the position of the slow particle $x_s$.
In establishing the justification for the effective dynamics with fast linkers, several questions arise. Importantly, when linker dynamics are in thermodynamic equilibrium, with binding and unbinding rates that depend on mutual distance, detailed balance has to be enforced. 24,25,37 An immediate consequence is that the equilibrium distribution of the system is determined by the binding and unbinding rates. As these detailed rates are often unmeasured, a variety of choices is taken in the literature. 22,27-34 These choices of specific forms of binding and unbinding spatial dependence introduce ambiguity in interpreting the resulting dynamics. We therefore briefly discuss choices of kinetic rates and their consequences in the coarse-graining procedure.
The paper is organized as follows. We first consider a minimal system (Fig. 1-b) made of a single fast linker binding to a slow particle (Sec. I) and catalog the variety of modeling choices at this microscopic level. Next, we rigorously coarse-grain the fast linker dynamics to obtain an effective model for the slow particle (Sec. II), which we validate with numerical simulations (Sec. III). We then show, with 3 examples, how we can apply our formalism to more complex yet common setups (Sec. IV): we investigate (i) 2 fast reactive linkers connecting to each other, (ii) a linker that stiffens upon binding, and (iii) a slip bond with force-dependent unbinding. Finally, we discuss how the choice of microscopic kinetic rates can affect the coarse-grained dynamics both at short and long timescales in nontrivial ways (Sec. V). We hope our framework will help to investigate systems with fast transient crosslinkers more systematically.
I. GENERAL SYSTEM WITH TRANSIENT CROSSLINKERS
A. Microscopic dynamics
We consider the motion of a relatively slow particle representing, for example, a slender actin filament, a cell, or a colloid. 22,24,25 The slow particle diffuses, for simplicity, in one spatial dimension (see Fig. 1-b, blue particle), and we will later discuss how to extend the results to 3D. The position of the slow particle at time $t$ is $x_s(t)$. The diffusion coefficient of the particle is $D_s = k_B T/\gamma_s$, where $k_B$ is Boltzmann's constant, $T$ is temperature, and $\gamma_s$ is the friction coefficient of the particle. The particle evolves in an external potential $U_s(x_s)$ that could represent connections with other particles. The force on the particle in the unbound state is thus simply
$$F_u = -\partial_s U_s(x_s).$$
We also track the motion of a relatively fast linker, for example, a myosin head 22 or the sticky ends of a single-stranded DNA filament. 24,29 The fast linker's position is $x_\ell$, and the linker diffuses with diffusion coefficient $D_\ell = k_B T/\gamma_\ell$, where $\gamma_\ell$ is the friction coefficient of the fast particle (see Fig. 1-b, pink linker). Here, the linker is attached to another slow object or an immobile surface; for now, we will consider it connected to a fixed point. This assumption ensures that the binding is localized in space. The linker usually resists extension, as it is made of a polymer or a protein that resists uncoiling. 23,24 It is hence reasonable to assume the linker is subjected to a recoil force, $-k_\ell(x_\ell - x_{\ell,0})$, where $k_\ell$ is a spring constant 38 and $x_{\ell,0}$ is the rest length of the linker. Note that, as long as the force is conservative, extending our approach to other expressions is straightforward.
The unbound dynamics are
$$\frac{dx_s}{dt} = \frac{F_u(x_s, x_\ell)}{\gamma_s} + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t) = -\frac{\partial_s U_s(x_s)}{\gamma_s} + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t),$$
$$\frac{dx_\ell}{dt} = -\frac{k_\ell}{\gamma_\ell}\left(x_\ell - x_{\ell,0}\right) + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_\ell(t), \tag{1}$$
where the $\eta_i(t)$ are uncorrelated Gaussian white noises, with $\langle\eta_i(t)\rangle = 0$ and $\langle\eta_i(t)\eta_j(t')\rangle = \delta_{ij}\delta(t-t')$,
where $\delta_{ij}$ is the Kronecker symbol and $\langle\cdot\rangle$ denotes an average over realizations of the noise. Without loss of generality, we will shift the domain such that the rest length of the fast variable is at the center of the domain, namely $x_{\ell,0} = 0$. The slow particle may transiently bind to the linker (see Fig. 1-b, orange bond). In this entire paper, we will consider that when the bond is formed, it corresponds to a stiff spring with spring constant $k_b$ added between the particle and the linker. Hence, in the bound state, the force on the slow particle is $F_b = -k_b(x_s - x_\ell) - \partial_s U_s(x_s)$. The bound dynamics are thus
$$\frac{dx_s}{dt} = \frac{F_b(x_s,x_\ell)}{\gamma_s} + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t) = -\frac{k_b}{\gamma_s}(x_s - x_\ell) - \frac{\partial_s U_s(x_s)}{\gamma_s} + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t),$$
$$\frac{dx_\ell}{dt} = \frac{k_b}{\gamma_\ell}(x_s - x_\ell) - \frac{k_\ell}{\gamma_\ell}x_\ell + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_\ell(t). \tag{2}$$
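To make the two-state dynamics concrete, here is a minimal sketch in Python of one Euler-Maruyama step of Eqs. (1)-(2); the parameter values are illustrative (not taken from the text), and we assume no external potential, $U_s \equiv 0$:

```python
import numpy as np

kBT = 1.0                  # thermal energy
gs, gl = 1.0, 0.01         # slow and fast friction coefficients (eps = gl/gs)
kl, kb = 1.0, 10.0         # linker and bond spring constants
dt = 1e-4                  # time step
rng = np.random.default_rng(0)

def step(xs, xl, bound):
    """One Euler-Maruyama step of Eqs. (1)-(2) in the current binding state."""
    Fs = -kb * (xs - xl) if bound else 0.0               # force on the slow particle (U_s = 0)
    Fl = -kl * xl + (kb * (xs - xl) if bound else 0.0)   # force on the fast linker
    xs = xs + Fs / gs * dt + np.sqrt(2 * kBT * dt / gs) * rng.normal()
    xl = xl + Fl / gl * dt + np.sqrt(2 * kBT * dt / gl) * rng.normal()
    return xs, xl
```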
Our model for the bound dynamics is not unique. For example, one could consider that instead of a stiff spring, the bond formed is a rigid rod constraining the dynamics. 24,32,33,39 Yet, we choose this spring bond model as it is a simple starting point and because it is physically satisfying. Indeed, with the spring model, the linker and the particle relax towards each other, while in the rigid rod model, they can stay unnaturally far apart. Ultimately we will consider the dynamics in the limit where the bond is very stiff, corresponding to a so-called soft constraint. 40-42
The equilibrium distribution corresponding to these choices of dynamics can be decomposed over the 2 states (bound and unbound) as $\pi = (\pi_u, \pi_b)^T$. The component of the equilibrium distribution corresponding to the unbound state is
$$\pi_u(x_s, x_\ell) = \frac{1}{Z_u}\,e^{-U_s(x_s)/k_BT - k_\ell x_\ell^2/2k_BT} \tag{3}$$
and the bound one is
$$\pi_b(x_s, x_\ell) = \frac{1}{Z_b}\,e^{-k_b(x_s-x_\ell)^2/2k_BT - U_s(x_s)/k_BT - k_\ell x_\ell^2/2k_BT}, \tag{4}$$
where $Z_u$ and $Z_b$ are constant prefactors that are set by a normalization condition on the total equilibrium distribution, $\int dx_\ell\, dx_s\,(\pi_u + \pi_b) = 1$, and by detailed balance, which we turn to now.
B. Possible kinetic rates and detailed balance
We consider that the linker and the particle bind to each other with rate $q_{on}$ and unbind with rate $q_{off}$. To be physically accurate, it is reasonable to assume that both rates may depend, a priori, on the spatial variables $(x_s, x_\ell)$. While the exact expression of the rates does not matter for our coarse-graining approach, it is crucial to recall how these rates should be specified to satisfy detailed balance.
If the system we consider is at equilibrium, the rates must satisfy detailed balance. 24,37 Here this means the probability flux at equilibrium of going from one state to the other is equal to the inverse flux, namely
$$\pi_u(x_s, x_\ell)\, q_{on}(x_s, x_\ell) = \pi_b(x_s, x_\ell)\, q_{off}(x_s, x_\ell). \tag{5}$$
Here this relation simplifies to
$$\frac{q_{on}}{q_{off}} = \frac{Z_u}{Z_b}\,e^{-k_b(x_s - x_\ell)^2/2k_BT}. \tag{6}$$
To make this expression more explicit, we can redefine the constants $Z_u = Z$ and $Z_b = Z q^0_{off}/q^0_{on}$, where $Z$ is a global normalization constant such that $\int dx_\ell\, dx_s\,(\pi_u + \pi_b) = 1$. Here $q^0_{off}$ and $q^0_{on}$ set the typical range of the kinetic rates and are related via the typical free energy of bond formation $E_0$, such that $q^0_{on}/q^0_{off} \equiv e^{-E_0/k_BT}$. We obtain
$$\frac{q_{on}}{q_{off}} = \frac{q^0_{on}}{q^0_{off}}\,e^{-k_b(x_s - x_\ell)^2/2k_BT}. \tag{7}$$
Since this is the only relation that constrains $q_{on}$ and $q_{off}$, it is clear that the choice of $q_{on}$ and $q_{off}$ is not unique and that at least one of the rates has to depend on space.
1. Possible expressions of the rates
Therefore, there are infinite possibilities in how we can specify rates consistent with detailed balance, which reflects the diversity of choices made in the literature. 22,25,27-34,36,43-45 However, we can catalog a few simple, commonly chosen examples (see Fig. 2) and discuss to what extent they are consistent with physical intuition.
• (model 0) Unbinding is faster further away from the target, but the binding rate is constant,
$$q_{on} = q^0_{on}, \qquad q_{off}(x_s, x_\ell) = q^0_{off}\,e^{k_b(x_s - x_\ell)^2/2k_BT}. \tag{8}$$
This binding term provides the convenient feature of avoiding the need to resolve detailed dynamics of the fast linker. Further, the unbinding form is motivated by the intuitive observation that bonds break faster when a larger force is exerted. One such example is molecular motor detachment, 46-48 although this is often modeled as a slip bond, which we discuss in Sec. IV. However, we note that this choice suffers from the numerically undesirable feature of the off rate increasing exponentially as a function of distance.
• (model 1) Binding is faster closer to the target, but unbinding is constant,
$$q_{on}(x_s, x_\ell) = q^0_{on}\,e^{-k_b(x_s - x_\ell)^2/2k_BT}, \qquad q_{off} = q^0_{off}, \tag{9}$$
which is a model used in Refs. 28, 31, and 36. Constant unbinding seems quite unphysical, since one would expect the bond to break if the linker and the particle are brought too far apart. We can instead opt for an expression where both rates depend on space, for example:
• (model 2) Binding is faster closer to the target, unbinding is faster further away from the target,
$$q_{on}(x_s, x_\ell) = \frac{q^0_{on}}{1 + e^{k_b(x_s - x_\ell)^2/2k_BT}}, \qquad q_{off}(x_s, x_\ell) = \frac{q^0_{off}}{1 + e^{-k_b(x_s - x_\ell)^2/2k_BT}}, \tag{10}$$
which solves the issues raised above.
While these are possible choices of the binding rates that agree with detailed balance, they are not the only ones, and one could consider various other kinetics, possibly with a kinetic barrier to overcome to unbind or to bind.

One might then wonder if the choice of the microscopic kinetic rates affects the long-time dynamics of the system. We raise the question here, but we will not attempt to answer it thoroughly. Rather, our goal is, once the microscopic dynamics are properly chosen, to show how to systematically coarse-grain the dynamics of the fast linker.
2. Binding models with uniform kinetic rates
Inspired by a previous model, 49 we also discuss an alternative binding scheme. Specifically, if one wanted the binding and unbinding rates to be uniform, at least over a certain lengthscale, the only option is that the dynamics are not specified via Eqs. (1) and (2) but have to be changed. We illustrate this point briefly below.

For example, one could define uniform rates, as
$$q_{on} = q^0_{on}, \qquad q_{off} = q^0_{off}, \tag{11}$$
where the rates $q^0_{on}$ and $q^0_{off}$ are non-zero. In that case, the bound $\pi_b$ and unbound $\pi_u$ parts of the equilibrium distribution must be the same, which constrains the forces to be the same in each state. Otherwise, detailed balance is broken. For example, the bound and unbound dynamics could both satisfy
$$\frac{dx_s}{dt} = -\frac{k_b}{\gamma_s}(x_s - x_\ell) - \frac{\partial_x U_s(x_s)}{\gamma_s} + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t),$$
$$\frac{dx_\ell}{dt} = \frac{k_b}{\gamma_\ell}(x_s - x_\ell) - \frac{k_\ell}{\gamma_\ell}x_\ell + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_\ell(t). \tag{12}$$
As a result, the dynamics are not very interesting, as they simply amount to a global quadratic confining potential with spring constant $k_b$. Tethered dynamics are more in line with physical intuition if the confining potential associated with the bond is restricted to a patch in space. Following Refs. 49 and 50, this can be achieved by defining kinetic rates that are non-zero only when the linker and the particle are close, typically
$$q_{on} = q^0_{on}\,H(|x_s - x_\ell| < R), \qquad q_{off} = q^0_{off}\,H(|x_s - x_\ell| < R), \tag{13}$$
where $R$ is a characteristic distance that sets a reasonable maximum binding distance. 32,33 One can define $R = \sqrt{2k_BT/k_b}$, where $k_b$ is the spring constant of the bond; $R$ corresponds, therefore, to thermal vibrations around that bond. When the linker and the particle are nearby, $|x_s - x_\ell| < R$, the dynamics follow Eq. (12), while they are given by Eqs. (1) and (2) otherwise. This ensures that detailed balance is satisfied in both regions, since when $|x_s - x_\ell| \geq R$ the kinetic rates are zero. We refer to models with uniform kinetic rates as in Eq. (13) as Doi models, 37,51,52 in analogy with reaction models without linkers. However, such uniform kinetic rates are unphysical in 2 ways: (i) bond dissociation is not allowed when the bond is stretched beyond $R$, which seems rather unlikely; and (ii) when they are close, the interaction between the linker and particle is the same regardless of the state of the bond (see Eq. (12)), which also seems rather unlikely.
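A sketch of such Doi-type rates in Python, with the capture radius $R = \sqrt{2k_BT/k_b}$ suggested above (parameters illustrative):

```python
import numpy as np

kBT, kb = 1.0, 10.0
q0_on, q0_off = 1.0, 1.0
R = np.sqrt(2 * kBT / kb)   # capture radius set by thermal vibrations of the bond

def doi_rates(xs, xl):
    """Uniform rates inside |xs - xl| < R, zero outside (Eq. 13)."""
    inside = abs(xs - xl) < R
    return (q0_on if inside else 0.0), (q0_off if inside else 0.0)
```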
We will briefly show in Sec. V that, also at the coarse-grained level, important physical differences arise between Doi models with uniform kinetic rates, as in Eq. (13), and models with spatially dependent kinetic rates, as, say, in Eq. (10). This means, again, that the choice of kinetic rates affects at least the short-time dynamics in non-trivial ways, and hence is an important assumption of any model.
II. COARSE-GRAINED DYNAMICS WITH FAST LINKERS
Our aim is now to rigorously coarse-grain the linker dynamics to obtain the effective long-time dynamics of the particle. This is useful both to gain analytic insight and to propose consistent and fast simulation schemes. We use multiscale averaging 53 to coarse-grain the dynamics, a technique broadly used in the field to properly average over short length scales and timescales. 10,24,27,28,39,47 We will show the approach is valid over a broad range of parameters in Sec. III. We provide Table II as a summary of the main notations used throughout the discussion.
A. Set up of the dynamics
The set of stochastic Eqs. (1)-(2) defines a Markov process that is conveniently studied via the Kolmogorov backward equation 53,54 on the functions $f(x_s, x_\ell, t) = (f_u(x_s, x_\ell, t), f_b(x_s, x_\ell, t))^T$,
$$\partial_t f = \mathcal{L} f, \qquad f(x_s, x_\ell, 0) = g(x_s, x_\ell), \tag{14}$$
where $\mathcal{L}$ is the generator of the system and $g$ is any scalar function. Here the functions $f(x_s, x_\ell, t) = \int p(x_s', x_\ell', t\,|\,x_s, x_\ell)\, g(x_s', x_\ell')\, dx_\ell'\, dx_s'$ give the expectation of any scalar function $g(x_s(t), x_\ell(t))$, given an initial condition $x_s(0) = x_s$, $x_\ell(0) = x_\ell$, where $p$ is the probability density that the system evolved from the initial condition to $(x_s', x_\ell')$ at time $t$. Once we know how such functions $f$ evolve, we may calculate any statistic $g$ of our stochastic process.
The generator $\mathcal{L}$ can be calculated from the forward equation, the Fokker-Planck equation, associated with the dynamics Eqs. (1)-(2); see the Supplementary Information Sec. 1 for further details. The full generator can be written as
$$\mathcal{L} = Q + V, \tag{15}$$
where
$$Q = \begin{pmatrix} -q_{on} & q_{on} \\ q_{off} & -q_{off} \end{pmatrix}, \qquad V = \begin{pmatrix} V_u & 0 \\ 0 & V_b \end{pmatrix},$$
with
$$V_u = -\frac{k_\ell}{\gamma_\ell}\, x_\ell\, \partial_\ell + \frac{k_BT}{\gamma_\ell}\,\partial_{\ell\ell} - \frac{\partial_{x_s} U(x_s)}{\gamma_s}\,\partial_s + \frac{k_BT}{\gamma_s}\,\partial_{ss},$$
$$V_b = V_u - \frac{k_b}{\gamma_\ell}(x_\ell - x_s)\,\partial_\ell - \frac{k_b}{\gamma_s}(x_s - x_\ell)\,\partial_s.$$
B. Nondimensionalization
To highlight the ratio between the different temporal and spatial scales at play, we non-dimensionalize our equations. Since the particle's motion is much slower than that of the linker, we can identify a small number $\varepsilon = \gamma_\ell/\gamma_s$. We seek the dynamics of the slow particle over a typical long timescale $\tau_0$ such that its motion is diffusive, extending over the range $x_0 = \sqrt{D_0 \tau_0}$. We now can define $\tilde q_{on} = q_{on}\tau_0$ and similarly $\tilde q_{off} = q_{off}\tau_0$, remembering that these are both functions of space. We consider for now that we observe the dynamics over timescales $\tau_0$ where transient binding and unbinding are still relevant, such that $\tilde q_{on} = O(1)$. Typical slow dynamics are captured by the timescale $\tau_0$, so again we consider $\kappa = k_\ell \tau_0/\gamma_s = O(1)$. We also write $\lambda = k_b \tau_0/\gamma_s = O(1)$ for now. Since $\varepsilon = \gamma_\ell/\gamma_s$, we have $k_\ell \tau_0/\gamma_\ell = (k_\ell \tau_0/\gamma_s)/\varepsilon = \kappa/\varepsilon$.
In this scaling, we have the nondimensional generator $\tilde{\mathcal{L}} = \frac{1}{\varepsilon}\mathcal{L}_0 + \mathcal{L}_1$, such that
$$\mathcal{L}_0 = \begin{pmatrix} -\kappa x_\ell \partial_\ell + \partial_{\ell\ell} & 0 \\ 0 & -\kappa x_\ell \partial_\ell - \lambda(x_\ell - x_s)\partial_\ell + \partial_{\ell\ell} \end{pmatrix}$$
and
$$\mathcal{L}_1 = \begin{pmatrix} -\tilde q_{on} - \frac{\partial_s U(x_s)}{k_BT}\partial_s + \partial_{ss} & \tilde q_{on} \\ \tilde q_{off} & -\tilde q_{off} - \lambda(x_\ell - x_s)\partial_s - \frac{\partial_s U(x_s)}{k_BT}\partial_s + \partial_{ss} \end{pmatrix}.$$
In the following, we will drop the $\tilde{\cdot}$ notation for simplicity.
C. Averaging procedure
We then look for a solution to $\partial_t f = \mathcal{L} f$ of the form $f = f_0 + \varepsilon f_1 + \varepsilon^2 f_2$. To first order, we have
$$\mathcal{L}_0 f_0 = \begin{pmatrix} -\kappa x_\ell \partial_\ell + \partial_{\ell\ell} & 0 \\ 0 & -\kappa x_\ell \partial_\ell - \lambda(x_\ell - x_s)\partial_\ell + \partial_{\ell\ell} \end{pmatrix} f_0 = 0. \tag{16}$$
Notice that $-\kappa x_\ell - \lambda(x_\ell - x_s) = -(\kappa + \lambda)\left(x_\ell - \frac{\lambda}{\kappa+\lambda}x_s\right)$, such that the general solution to Eq. (16) is
$$f_0 = \begin{pmatrix} g_1(x_s,t) \\ g_2(x_s,t) \end{pmatrix} + h_1(x_s,t)\begin{pmatrix} \int_0^{x_\ell} e^{+\kappa x^2/2}\,dx \\ 0 \end{pmatrix} + h_2(x_s,t)\begin{pmatrix} 0 \\ \int_0^{x_\ell} e^{+(\kappa+\lambda)\left(x - \frac{\lambda}{\kappa+\lambda}x_s\right)^2/2}\,dx \end{pmatrix}, \tag{17}$$
and the associated equilibrium distribution is of the form
$$\pi_0 = \frac{1}{\sqrt{2\pi}}\begin{pmatrix} \alpha\, e^{-\kappa x_\ell^2/2} \\ \beta\, e^{-(\kappa+\lambda)\left(x_\ell - \frac{\lambda}{\kappa+\lambda}x_s\right)^2/2} \end{pmatrix}. \tag{18}$$
Here $\alpha$ and $\beta$ are free parameters and are not constrained by detailed balance; they can be any real numbers. In fact, there is no exchange between the bound and the unbound states at these very short timescales. Notice that this means that the averaging approach does not know a priori that it should preserve detailed balance.
In any case, we require $\langle f_0, \pi_0 \rangle$ to be bounded, where $\langle f, p \rangle$ is the inner product in which the integration is only carried over the fast variable $x_\ell$. This imposes $h_1 \equiv 0$ and $h_2 \equiv 0$, and $f_0 = (g_1(x_s,t), g_2(x_s,t))^T$ is actually independent of the fast variable.
Seeking the next order,
$$\mathcal{L}_0 f_1 = -\mathcal{L}_1 f_0 + \partial_t f_0.$$
A solution exists for $f_1$ if the Fredholm alternative is satisfied, 53 namely if $\langle(\partial_t f_0 - \mathcal{L}_1 f_0), \pi_0\rangle = 0$ is true for any $\pi_0$ in the nullspace of $\mathcal{L}_0^\dagger$. This corresponds to any real-valued combinations of $\alpha$ and $\beta$, and hence we may pick the convenient choices $(\alpha, \beta) = (1, 0)$ and $(\alpha, \beta) = (0, 1)$. We obtain
$$\partial_t g_1 = -\frac{\int e^{-\kappa x_\ell^2/2}\, q_{on}\, dx_\ell}{\int e^{-\kappa x_\ell^2/2}\, dx_\ell}\,(g_1 - g_2) - \frac{\partial_s U(x_s)}{k_BT}\,\partial_s g_1 + \partial_{ss} g_1,$$
$$\partial_t g_2 = \frac{\int e^{-(\kappa+\lambda)\left(x_\ell - \frac{\lambda}{\kappa+\lambda}x_s\right)^2/2}\, q_{off}\, dx_\ell}{\int e^{-(\kappa+\lambda)\left(x_\ell - \frac{\lambda}{\kappa+\lambda}x_s\right)^2/2}\, dx_\ell}\,(g_1 - g_2) - \frac{\partial_s U(x_s)}{k_BT}\,\partial_s g_2 + \partial_{ss} g_2 - \frac{\int \lambda(x_\ell - x_s)\, e^{-(\kappa+\lambda)\left(x_\ell - \frac{\lambda}{\kappa+\lambda}x_s\right)^2/2}\, dx_\ell}{\int e^{-(\kappa+\lambda)\left(x_\ell - \frac{\lambda}{\kappa+\lambda}x_s\right)^2/2}\, dx_\ell}\,\partial_s g_2. \tag{19}$$
D. Effective kinetic rates and dynamics
We have just obtained the coarse-grained backward equations for the dynamics. We can carry out the last line's integral and return to dimensional units, to directly read off the effective on and off rates
$$q^{\rm eff}_{on}(x_s) = \frac{\int e^{-k_\ell x_\ell^2/2k_BT}\, q_{on}(x_\ell, x_s)\, dx_\ell}{\int e^{-k_\ell x_\ell^2/2k_BT}\, dx_\ell}, \qquad q^{\rm eff}_{off}(x_s) = \frac{\int e^{-(k_\ell + k_b)\left(x_\ell - \frac{k_b}{k_\ell + k_b}x_s\right)^2/2k_BT}\, q_{off}(x_\ell, x_s)\, dx_\ell}{\int e^{-(k_\ell + k_b)\left(x_\ell - \frac{k_b}{k_\ell + k_b}x_s\right)^2/2k_BT}\, dx_\ell}. \tag{20}$$
We find that, coherently, the effective on and off rates are weighted averages over the distribution of positions of the fast variable in the respective states. A similar expression for the coarse-grained rates was obtained from a first-principles derivation at equilibrium. 25
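The integrals in Eq. (20) are one-dimensional Gaussian-weighted averages and are easy to evaluate numerically. A minimal sketch in Python with scipy (illustrative parameters), using the model-1 on-rate of Eq. (9) and checking against the closed form $q^0_{on}\sqrt{K/k_b}\,e^{-Kx_s^2/2k_BT}$ that this choice yields (see Eq. (28) below):

```python
import numpy as np
from scipy.integrate import quad

kBT, kl, kb = 1.0, 1.0, 10.0
q0_on = 1.0

def q_on(xs, xl):  # model-1 binding rate, Eq. (9)
    return q0_on * np.exp(-kb * (xs - xl) ** 2 / (2 * kBT))

def q_eff_on(xs):
    """Effective on-rate of Eq. (20): average over the unbound linker profile."""
    w = lambda xl: np.exp(-kl * xl ** 2 / (2 * kBT))
    num = quad(lambda xl: q_on(xs, xl) * w(xl), -np.inf, np.inf)[0]
    den = quad(w, -np.inf, np.inf)[0]
    return num / den

K = kl * kb / (kl + kb)
xs = 0.8
assert np.isclose(q_eff_on(xs), q0_on * np.sqrt(K / kb) * np.exp(-K * xs**2 / (2 * kBT)))
```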
Importantly, beyond the coarse-grained rates, from Eq. (19) we can also read off the coarse-grained dynamics of the slow particle at $O(1)$ in the small parameter $\varepsilon = \gamma_\ell/\gamma_s$, in the unbound and bound states:
$$\frac{dx_s}{dt} = -\frac{\partial_s U(x_s)}{\gamma_s} + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_u(t) \quad \text{(unbound)},$$
$$\frac{dx_s}{dt} = -\frac{\partial_s U(x_s)}{\gamma_s} - \frac{K}{\gamma_s}x_s + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_b(t) \quad \text{(bound)}, \tag{21}$$
where $K = k_\ell k_b/(k_\ell + k_b)$ is an effective spring constant. In Appendix B we verify that this coarse-graining maintains detailed balance at the macroscopic level.
Overall, the coarse-grained dynamics we obtain are consistent with physical intuition, at this lowest order in $\varepsilon$. The unbound dynamics of the slow particle are not changed by the coarse-graining approach. However, in the bound state, an extra recoil force is exerted on the particle, corresponding to a force averaged over the fast-moving linker. Interestingly, the spring constant $K$ of the bond formed at the coarse-grained level corresponds to the effective spring constant of two springs in series, with spring constants $k_\ell$ and $k_b$. This is precisely what is expected physically and from the sketch of the setup; see Fig. 1-b. One novelty of this calculation is that it allows one to give meaning to the spring constant $K$ used in effective models such as those of Refs. 28, 31, and 36. Both in the bound and unbound states, the coarse-grained dynamics are damped by the friction coefficient $\gamma_s$. Although this is rather intuitive, notice again that this is only true when $\varepsilon$ is small enough; otherwise the effective friction would be increased by the presence of the linker. 24,27 All in all, the lowest-order dynamics evolve as if the linker were moving so fast that it loses memory of previous binding and unbinding events.
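At this order, the coarse-grained model is cheap to simulate: only $x_s$ and the binding state need to be tracked. A minimal sketch in Python (illustrative parameters, $U_s \equiv 0$), with the effective rates passed in as functions of $x_s$ alone:

```python
import numpy as np

kBT, gs = 1.0, 1.0
kl, kb = 1.0, 10.0
K = kl * kb / (kl + kb)     # effective spring constant of Eq. (21)
dt = 1e-3
rng = np.random.default_rng(1)

def cg_step(xs, bound, q_eff_on, q_eff_off):
    """One step of the coarse-grained dynamics, Eq. (21)."""
    if bound and rng.random() < q_eff_off(xs) * dt:
        bound = False
    elif not bound and rng.random() < q_eff_on(xs) * dt:
        bound = True
    F = -K * xs if bound else 0.0
    xs = xs + F / gs * dt + np.sqrt(2 * kBT * dt / gs) * rng.normal()
    return xs, bound
```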
Beyond these $O(1)$ terms in the separation of scales $\varepsilon = \gamma_\ell/\gamma_s$, we can derive further terms at $O(\varepsilon)$ and beyond, which do account for more and more memory between binding events. This can be done by carrying the coarse-graining approach explained above to further terms in the expansion. We report in the Supplementary Information Sec. 2 the coarse-grained dynamics at $O(\varepsilon)$. The effect of memory is 2-fold: it modifies the binding rates, which now contain $q_{on}q_{off}$, $q_{on}^2$ and $q_{off}^2$ couplings; and it also modifies the forces. In particular, at $O(\varepsilon)$ there is now a force in the unbound state, which arises from a remnant memory of the recoil force on the bound fast linker as the linker unbinds. Finally, at $O(\varepsilon)$, diffusion in the bound state is now damped by the presence of the linker. Our coarse-graining approach is thus a robust tool to systematically derive coarse-grained dynamics to any order.
E. General coarse-grained dynamics
The averaging approach can be extended in a straightforward way, following the averaging steps above, to more general dynamics, and we summarize the effective dynamics in full generality in Table I. The initial forces on the slow particle in the bound, $F_b(x_s, x_\ell)$, and the unbound, $F_u(x_s, x_\ell)$, states can be arbitrary. All forces on the particle and linker should be conservative so as to define an equilibrium distribution. Table I then reports the general formulas for the effective dynamics, effective binding and unbinding rates, as well as the effective force on the particle in the unbound and bound states. The diffusive part of the slow particle motion is not affected by the coarse-graining procedure, since we assume diffusion coefficients do not depend on space. Finally, the formulas can be extended in a straightforward way to 3D dynamics, and to multiple fast linkers. We will show in Sec. IV how to use these formulas with specific examples.
Notice how each formula in Eqs. (22)-(26) can be interpreted intuitively. The effective force in a given state, or the rate of switching from that state, is simply the spatial average weighted by the local probability distribution of being in that state. While this seems consistent a posteriori, and consistent with detailed balance at the coarse-grained level, the formulas in Eqs. (23)-(24) are not the only possible expressions for the effective rates that obey detailed balance and accurately describe the dynamics at lowest order in $\varepsilon$. The results of Eqs. (22)-(26) are therefore not trivial. Notice that in a related work a similar expression for the coarse-grained rates $q^{\rm eff}_{off}$ and $q^{\rm eff}_{on}$ was obtained using averaging techniques (Eq. (6) or Eq. (19) of Ref. 25). However, a crucial addition here is that we also provide the coarse-grained dynamics of the slow particle, through the effective equations of motion Eq. (22) and effective forces Eqs. (25)-(26).
Interestingly, the formulas in Table I are straightforward to compute. One simply needs to know the equilibrium probability distribution of the free and bound linker to obtain the effective dynamics. Specifically, one only needs the dependence of the equilibrium distribution on the linker coordinate $x_\ell$, and not the details of the landscape for the slow particle $x_s$. In practice, one could then simulate (or calculate) a free and a bound linker, obtain their local probability distributions, and integrate them to obtain the effective dynamics, as sketched below. We note the caveat that the full probability distribution needs to be evaluated if direct interactions between linkers exist.
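A minimal sketch of this recipe in Python (illustrative parameters): sample the bound linker with the slow particle held at a fixed $x_s$, then evaluate Eq. (26) as an empirical average. The result can be compared with $-Kx_s$:

```python
import numpy as np

kBT, kl, kb, gl = 1.0, 1.0, 10.0, 0.01
dt, nsteps, xs = 1e-4, 200_000, 0.5
rng = np.random.default_rng(2)

def sample_bound_linker():
    """Sample x_l from the bound equilibrium profile, slow particle held at xs."""
    xl, out = 0.0, np.empty(nsteps)
    for i in range(nsteps):
        F = -kl * xl + kb * (xs - xl)
        xl += F / gl * dt + np.sqrt(2 * kBT * dt / gl) * rng.normal()
        out[i] = xl
    return out

xl_b = sample_bound_linker()
F_eff_b = np.mean(-kb * (xs - xl_b))       # Eq. (26) as an empirical average
print(F_eff_b, -kl * kb / (kl + kb) * xs)  # should agree up to sampling noise
```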
F. Limit regimes
We will now comment on the effective dynamics and kinetic rates obtained for the minimal system of Fig. 1-b (Eqs. (1) and (2)) in a few limiting regimes of the system parameters $k_\ell$ and $k_b$. In the following, it will be useful to specify the expression of the rates, and we will use for example Eq. (9).
In all limiting regimes, some variables such as $q^0_{on}/q^0_{off}$ have to be specified with respect to the limits. In practice, the macroscopic probability of being bound, $\Pi_b$ (or unbound, $\Pi_u$), can be measured experimentally 24,55 and is easier to probe than spatially dependent kinetic rates. Hence, we constrain the different functional forms for the kinetic rates to predict the same $\Pi_b$. The macroscopic probability of being bound (respectively unbound) is $\Pi_b = \int dx_s\, dx_\ell\, \pi_b(x_s, x_\ell)$ (resp. $\Pi_u = \int dx_s\, dx_\ell\, \pi_u(x_s, x_\ell)$). It is easy to show that here
$$\frac{\Pi_b}{\Pi_u} = e^{-E_0/k_BT}\,\sqrt{\frac{K}{k_b}}\,\frac{\int e^{-U_s(x_s)/k_BT}\, e^{-K x_s^2/2k_BT}\, dx_s}{\int e^{-U_s(x_s)/k_BT}\, dx_s}, \tag{27}$$
such that the macroscopic bound and unbound probabilities measure in particular the energy of the bond, weighted by the geometry of the system. Of course, this does not constrain the local values of the kinetic rates. Rather, it sets the value of some parameters, here of $E_0$.
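In practice, Eq. (27) can be evaluated by quadrature and inverted to calibrate $E_0$ (equivalently $q^0_{on}/q^0_{off}$) from a measured $\Pi_b/\Pi_u$. A sketch in Python with scipy, assuming for illustration a harmonic confinement $U_s(x_s) = k_s x_s^2/2$ (an assumption, not a choice made in the text):

```python
import numpy as np
from scipy.integrate import quad

kBT, kl, kb, ks = 1.0, 1.0, 10.0, 0.5
E0 = -1.0                              # bond formation free energy (illustrative)
K = kl * kb / (kl + kb)
Us = lambda x: 0.5 * ks * x ** 2       # assumed confining potential

num = quad(lambda x: np.exp(-(Us(x) + 0.5 * K * x ** 2) / kBT), -np.inf, np.inf)[0]
den = quad(lambda x: np.exp(-Us(x) / kBT), -np.inf, np.inf)[0]
ratio = np.exp(-E0 / kBT) * np.sqrt(K / kb) * num / den   # Eq. (27)
print("Pi_b / Pi_u =", ratio)
```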
TABLE I. Summary of the effective dynamics considering a coarse-grained fast linker ($x_\ell$) that can bind to a slow particle ($x_s$). The bare dynamics of the slow particle are damped by a friction coefficient $\gamma_s$. For 3D coordinates, the integrals have to be carried over all the spatial dimensions of the fast variable $x_\ell$, and the force field on the slow particle has to be obtained coordinate by coordinate.

Effective dynamics:
$$\frac{dx_s}{dt} = \frac{1}{\gamma_s}F^{\rm eff}_{u/b}(x_s) + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_{u/b}(t) \tag{22}$$

Effective binding rate:
$$q^{\rm eff}_{on}(x_s) = \frac{\int q_{on}(x_s, x_\ell)\,\pi_u(x_s, x_\ell)\,dx_\ell}{\int \pi_u(x_s, x_\ell)\,dx_\ell} \tag{23}$$

Effective unbinding rate:
$$q^{\rm eff}_{off}(x_s) = \frac{\int q_{off}(x_s, x_\ell)\,\pi_b(x_s, x_\ell)\,dx_\ell}{\int \pi_b(x_s, x_\ell)\,dx_\ell} \tag{24}$$

Effective force on the slow unbound particle:
$$F^{\rm eff}_u(x_s) = \frac{\int F_u(x_\ell, x_s)\,\pi_u(x_s, x_\ell)\,dx_\ell}{\int \pi_u(x_s, x_\ell)\,dx_\ell} \tag{25}$$

Effective force on the slow bound particle:
$$F^{\rm eff}_b(x_s) = \frac{\int F_b(x_\ell, x_s)\,\pi_b(x_s, x_\ell)\,dx_\ell}{\int \pi_b(x_s, x_\ell)\,dx_\ell} \tag{26}$$

1. Stiff bond.

We first investigate the limit regime where the bond is very stiff, i.e. $k_b \gg k_\ell$. This is the so-called "soft constraint" limit. 40-42 In that case, we expect the fast and slow particles to be constrained to move synchronously in the bound state. For this limit to make sense, we need, according to Eq. (27), to constrain $q^0_{on} \sim \sqrt{k_b}\, q^0_{off}$. The effective force in the bound state converges to $F^{\rm eff}_b = -k_\ell x_s - \partial_s U_s$, i.e. simply to a recoil force exerted with a spring constant corresponding to that of the fast linker (and the force deriving from the external potential). This makes sense: in the limit $k_b \gg k_\ell$, since the springs are in series, we expect the weakest spring to take over, and $K \sim k_\ell$.
The effective kinetic rates, according to Eq. (20) (and choosing the kinetic rates as in Eq. (9)), are
$$q^{\rm eff}_{on} = q^0_{on}\,\sqrt{\frac{K}{k_b}}\,e^{-K x_s^2/2k_BT}, \qquad q^{\rm eff}_{off} = q^0_{off}. \tag{28}$$
In the limit regime where $k_b \gg k_\ell$, we have $q^{\rm eff}_{on} \sim q^0_{off}\,e^{-k_\ell x_s^2/2k_BT}$ and $q^{\rm eff}_{off} = q^0_{off}$. Hence the on rate remains spatially dependent, and the particle binds more likely when it is closer to the average linker position. It unbinds, though, at the same rate regardless of the linker position. Similar results may be found with other initial choices of kinetic rates, such as Eq. (8) or (10).
A few models in the literature actually take the spring constant for the binding kinetics to be the linker's spring constant, $K = k_\ell$, 28,31,36 making the implicit, albeit rather physical, assumption that the bond's spring constant is much stiffer than the linker's. This underlines that the meaning of the parameters in the kinetic rates is underappreciated in the field.
Notice that here we explored a limit regime after taking the limit of fast linker dynamics. This is not an issue here, since the limits commute. Indeed, one could initially consider an infinitely stiff bond, and then take the limit of fast linker dynamics, and get the same result; see the Supplementary Information Sec. 1 for further details.
2. Stiff linker.

When the fast linker is stiff, $k_\ell \gg k_b$. We then simply have that the force in the bound state is determined by the spring constant of the bond, $F^{\rm eff}_b = -k_b x_s - \partial_s U_s$, as expected since now $K \sim k_b$ in that limit. With the same choice of kinetic rates as in Eq. (9), we simply have that the effective on rate is faster near the average linker position, $q^{\rm eff}_{on} = q^0_{on}\,e^{-k_b x_s^2/2k_BT}$, and the effective off rate is constant. All in all, this is a simple consequence of the dynamics of 2 springs in series, when one of the spring constants is stiff compared to the other.
III. VALIDATION OF THE COARSE-GRAINING APPROACH
We now use numerical simulations to test the derived effective forces and kinetic rates, and determine the range of parameters over which the averaging procedure is valid.
A. Simulation set up
We simulate the dynamics specified through Eqs. (1)-(2) (with no confining potential on the slow particle, $U(x_s) \equiv 0$). We use the same nondimensional variables $\tau_0$ and $x_0 = \sqrt{D_0\tau_0} = \sqrt{k_BT\tau_0/\gamma_s}$, such that the problem is fully characterized by 5 non-dimensional numbers: $\varepsilon$, $\kappa$, $\lambda$, $\tilde q_{off}$ and $\Pi_b/\Pi_u$. Here, $\varepsilon = \gamma_\ell/\gamma_s$ is not necessarily small and represents the ratio between friction coefficients; $\kappa = k_\ell\tau_0/\gamma_s$ is the non-dimensional spring constant of the linker; $\lambda = k_b\tau_0/\gamma_s$ is that of the bond; and $\tilde q_{off} = q^0_{off}\tau_0$ is the non-dimensional typical off-rate. Finally, the on rate is set by the conservation of macroscopic probability through Eq. (27), so by the ratio $\Pi_b/\Pi_u$, once a choice of functional forms for the kinetic rates has been made. Here, we will keep Eq. (10) as an example.
We discretize the dynamics of both slow and fast particles with a standard Euler-Maruyama scheme with time step $\Delta t$, until a terminal time of $T = 1000\tau_0$, with $M = 1000$ simulations for each set of parameters. Initial configurations are sampled from the equilibrium distribution of the system. We impose periodic boundary conditions on the slow particle at a non-dimensional distance $L$, so that the slow particle does not escape far from the domain. The domain size $L = 10x_0$ and time step $\Delta t = 0.01\tau_0$ are chosen sufficiently large and small, respectively, such that our results do not depend on their specific values.
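A minimal sketch of this scheme in Python, in the nondimensional units defined above and with the model-2 rates of Eq. (10); here the on-rate prefactor is set by hand rather than through $\Pi_b/\Pi_u$, which is an assumption for illustration:

```python
import numpy as np

eps, kappa, lam = 0.1, 1.0, 10.0   # gamma_l/gamma_s, k_l*tau0/gamma_s, k_b*tau0/gamma_s
q0_on, q0_off = 5.0, 1.0           # nondimensional rate prefactors (illustrative)
dt, T, L = 0.01, 1000.0, 10.0      # time step, final time, periodic box size
rng = np.random.default_rng(3)

def rates(xs, xl):  # model-2 rates, Eq. (10), in nondimensional form
    u = lam * (xs - xl) ** 2 / 2
    return q0_on / (1 + np.exp(u)), q0_off / (1 + np.exp(-u))

xs, xl, bound = 0.0, 0.0, False
traj, states = [], []
for _ in range(int(T / dt)):
    qon, qoff = rates(xs, xl)
    if bound and rng.random() < qoff * dt:
        bound = False
    elif not bound and rng.random() < qon * dt:
        bound = True
    Fs = -lam * (xs - xl) if bound else 0.0
    Fl = -kappa * xl + (lam * (xs - xl) if bound else 0.0)
    xs += Fs * dt + np.sqrt(2 * dt) * rng.normal()
    xl += Fl / eps * dt + np.sqrt(2 * dt / eps) * rng.normal()
    xs = (xs + L / 2) % L - L / 2   # periodic boundary condition on the slow particle
    traj.append(xs); states.append(bound)
```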
To estimate the effective rates from simulation, we note that this is a doubly-stochastic Poisson process, or Cox process, 56 because the stochastic dynamics of the particle positions drive the stochastic Poisson events of binding and unbinding. While sophisticated methods for inference on Cox intensities exist, 56 a simple binning approach suffices here. We discretize $x_s$ into bins of width $\Delta x$ and count the number of occurrences of each event in each bin. For bin $i$ with center $x_i$, the estimated flux to the other state is then $\hat J(x_i) = N_i/(TM\Delta x)$, where the $\hat\cdot$ notation means estimate and $N_i$ is the number of events (either binding or unbinding) occurring in bin $i$. Notably, this estimates the macroscopic kinetic rates; for instance, for unbinding, $\hat J^{\rm eff}_{off}(x_s) \approx q^{\rm eff}_{off}(x_s)\,\pi^{\rm eff}_{on}(x_s)$. The marginal densities of being bound and unbound, $\pi^{\rm eff}_{on}(x_s)$ and $\pi^{\rm eff}_{off}(x_s)$, are straightforwardly computed from the fraction of time in each state and bin for $x_s$. Then, the microscopic rates are estimated by $\hat q^{\rm eff}(x_s) = \hat J^{\rm eff}(x_s)/\hat\pi^{\rm eff}(x_s)$. Lastly, the forces $F^{\rm eff}(x_s)$ are taken as the average of $x_s(t+\Delta t) - x_s(t)$, accumulated in the corresponding bin for $x_s(t)$.
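A sketch of these estimators in Python, operating on the `traj` and `states` arrays produced by the simulation sketch above (an assumption; any trajectory with a boolean binding state would do), here for the unbinding rate:

```python
import numpy as np

xs_t = np.asarray(traj)            # slow-particle trajectory
b_t = np.asarray(states)           # True when bound
dt, T, L, M = 0.01, 1000.0, 10.0, 1
edges = np.linspace(-L / 2, L / 2, 51)
dx = edges[1] - edges[0]

# unbinding events: bound at step n, unbound at step n+1
events = b_t[:-1] & ~b_t[1:]
N_off, _ = np.histogram(xs_t[:-1][events], bins=edges)
J_off = N_off / (T * M * dx)                    # estimated unbinding flux per bin
occ_b, _ = np.histogram(xs_t[b_t], bins=edges)  # samples spent in the bound state
pi_b = occ_b * dt / (T * M * dx)                # marginal bound density
q_off_hat = np.divide(J_off, pi_b, out=np.zeros_like(J_off), where=pi_b > 0)
```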
B. Agreement between simulations and averaging approach
We first compare effective rates and forces at long times. In Fig. 3-a and b, the kinetic rates and forces obtained numerically and with the averaging theory of Eqs. (20) and (21) (alternatively, with the formulas in Table I) are in excellent agreement. The effective probability distribution function is also in agreement with the marginal distribution of Eq. (B1) (Fig. 3-c). Overall, at the coarse-grained level, detailed balance holds numerically as well as analytically (Fig. 3-d). While we present results in Fig. 3 for a small value of $\varepsilon = 0.1$, similarly good agreement can be obtained for large values of $\varepsilon \sim 10$, as long as the simulation times are long enough, which is expected from the coarse-graining procedure.
The results shown in Fig. 3 compare the effective rates and forces averaged over long times, but do not necessarily validate the short- and intermediate-time dynamics of the coarse-graining procedure. To validate this, we numerically compute the autocorrelation from trajectories of the explicit microscopic model and of the coarse-grained one as a function of $\varepsilon$. In Fig. 4, the dynamics of the coarse-grained model agree with those of the explicit model only for small $\varepsilon$. At $\varepsilon \gtrsim 1$, the autocorrelation displays significant deviation. Intuitively, the coarse-grained model loses the memory of recent binding and unbinding events, and therefore has a faster decay in correlation. This highlights that the coarse-grained dynamics of Table I are only valid when $\varepsilon \ll 1$, i.e. when the timescales associated with fast-linker and slow-particle relaxation are disparate enough. To account for these memory effects, one could add $O(\varepsilon)$ terms to the coarse-grained dynamics, which we derive in the Supplementary Information Sec. 2 by continuing the coarse-graining procedure of Sec. II. Altogether, our numerical results show the averaging approach is robust for inferring coarse-grained effective dynamics.
IV. APPLICATIONS
Having shown the validity of the averaging approach, we now explore how to use the formulas in Table I in specific situations. We investigate (i) a pair of connecting linkers, (ii) a linker stiffening upon binding, and (iii) a slip bond with force-dependent unbinding. Depending on the specific example under scrutiny, we will focus either on the effective forces or on the effective kinetic rates, and comment on the physical meaning of the results.
A. Pair of connecting linkers
FIG. 4. A comparison between the detailed and coarse-grained model of the autocorrelation of the slow particle position $\langle x_s(t+\tau)\,x_s(t)\rangle_t$. In the limit of separation of timescales corresponding to small $\varepsilon$, the dynamics approach those of the coarse-grained model. Parameter values are the same as in Fig. 3, with purple curves corresponding to $\varepsilon = \{10^{-2}, 10^{-1}, 10^0, 10^1, 10^2\}$. The curves for the lowest two values of $\varepsilon$ are indistinguishable.

We first consider 2 fast linkers that can connect to each other (see Fig. 5). This could represent, for example, two complementary DNA strands transiently hybridizing, which finds applications in the field of DNA-coated colloids. 20,24,29,39,57,58 We consider that one of the linkers, in position $x_2$, is tethered to a fixed plate, while the other, in position $x_1$, is tethered to a mobile slow particle, itself in position $x_s$. Both linkers are described by springs with the same spring constant $k_\ell$ and fluctuate rapidly.

Unbound dynamics. The unbound dynamics are
$$\frac{dx_s}{dt} = -\frac{k_\ell}{\gamma_s}(x_s - x_1) + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t),$$
$$\frac{dx_1}{dt} = -\frac{k_\ell}{\gamma_\ell}(x_1 - x_s) + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_1(t),$$
$$\frac{dx_2}{dt} = -\frac{k_\ell}{\gamma_\ell}\,x_2 + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_2(t), \tag{29}$$
where the $\eta_i(t)$ are uncorrelated Gaussian white noises, and here we identify the force on the slow particle in the unbound state as
$$F_u = -k_\ell(x_s - x_1). \tag{30}$$
The unbound term of the equilibrium distribution corresponding to this choice of dynamics is
$$\pi_u(x_s, x_1, x_2) = \frac{1}{Z_u}\,e^{-\frac{k_\ell x_2^2 + k_\ell(x_s - x_1)^2}{2k_BT}}. \tag{31}$$
Bound dynamics. The linkers can transiently form a bond with spring constant $k_b$:
$$\frac{dx_s}{dt} = -\frac{k_\ell}{\gamma_s}(x_s - x_1) + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t),$$
$$\frac{dx_1}{dt} = -\frac{k_\ell}{\gamma_\ell}(x_1 - x_s) + \frac{k_b}{\gamma_\ell}(x_2 - x_1) + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_1(t),$$
$$\frac{dx_2}{dt} = -\frac{k_\ell}{\gamma_\ell}\,x_2 - \frac{k_b}{\gamma_\ell}(x_2 - x_1) + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_2(t), \tag{32}$$
and we identify the force on the slow particle in the bound state as identical to that in the unbound state,
$$F_b = -k_\ell(x_s - x_1). \tag{33}$$
The bound term of the equilibrium distribution corresponding to this choice of dynamics is
$$\pi_b(x_s, x_1, x_2) = \frac{1}{Z_b}\,e^{-\frac{k_\ell x_2^2 + k_\ell(x_s - x_1)^2 + k_b(x_2 - x_1)^2}{2k_BT}}. \tag{34}$$
Kinetic rates and detailed balance. Here we will not specify the kinetic rates in detail. However, since we specify the dynamics, the binding and unbinding rates must satisfy detailed balance:
$$\frac{q_{on}}{q_{off}} = \frac{q^0_{on}}{q^0_{off}}\,e^{-k_b(x_2 - x_1)^2/2k_BT}.$$
Effective force in the unbound state. We can now use the expressions in Table I to obtain the effective force. Here, compared to our foundational example with 1 fast linker in Sec. I, we have 2 fast linkers, hence we need to carry the integral over those 2 fast degrees of freedom. With Eq. (25) we find
$$F^{\rm eff}_u(x_s) = \frac{\int F_u(x_1, x_2, x_s)\,\pi_u(x_1, x_2, x_s)\,dx_1\,dx_2}{\int \pi_u(x_1, x_2, x_s)\,dx_1\,dx_2} = \frac{\int \left(-k_\ell(x_s - x_1)\right) e^{-k_\ell x_2^2/2k_BT}\, e^{-k_\ell(x_s - x_1)^2/2k_BT}\,dx_1\,dx_2}{\int e^{-k_\ell x_2^2/2k_BT}\, e^{-k_\ell(x_s - x_1)^2/2k_BT}\,dx_1\,dx_2} = 0 \tag{35}$$
for symmetry reasons. In the unbound state, quite logically, at the coarse-grained level the particle doesn't feel any effective force from its unbound fast linker.
Effective force in the bound state. In the bound state, using Eq. (26) we find
$$F^{\rm eff}_b(x_s) = \frac{\int F_b(x_1, x_2, x_s)\,\pi_b(x_1, x_2, x_s)\,dx_1\,dx_2}{\int \pi_b(x_1, x_2, x_s)\,dx_1\,dx_2} = -\frac{\frac{k_\ell}{2}\,k_b}{\frac{k_\ell}{2} + k_b}\,x_s. \tag{36}$$
In the bound state we thus obtain that the effective force on the particle is a spring force. The force is centered around 0, as this is the average position of both springs. The spring constant at the coarse-grained level is $\frac{(k_\ell/2)\,k_b}{(k_\ell/2) + k_b} = \frac{k_\ell k_b}{k_\ell + 2k_b}$, which corresponds, logically, to the effective spring constant of 3 springs in series, with spring constants $k_\ell$, $k_b$, $k_\ell$.
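This Gaussian average can be verified symbolically. A sketch with sympy (an illustration, not from the text):

```python
import sympy as sp

x1, x2, xs = sp.symbols("x1 x2 xs", real=True)
kl, kb, kBT = sp.symbols("k_l k_b k_BT", positive=True)

# Bound Boltzmann weight of Eq. (34) and bound force of Eq. (33)
E = (kl * x2**2 + kl * (xs - x1) ** 2 + kb * (x2 - x1) ** 2) / 2
w = sp.exp(-E / kBT)
num = sp.integrate(-kl * (xs - x1) * w, (x1, -sp.oo, sp.oo), (x2, -sp.oo, sp.oo))
den = sp.integrate(w, (x1, -sp.oo, sp.oo), (x2, -sp.oo, sp.oo))
print(sp.simplify(num / den))   # expect -k_l*k_b*xs/(k_l + 2*k_b), i.e. Eq. (36)
```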
B. Stiffening linker upon binding
We now turn to another example where the linker stiffens when bound -see Fig. 6-a. Such stiffening occurs when the linker undergoes conformational changes upon binding 28 or else upon single-stranded DNA hybridizing into doublestranded, resulting in a stiffer connection 16,18,21,24,59,60 . x s Unbound dynamics The unbound dynamics are still given by Eq. (1).
Bound dynamics The bound dynamics are now changed compared to Eq. (2), since the linker now has a stiffer spring constant, say k'_ℓ > k_ℓ,
$$\frac{dx_s}{dt} = -\frac{k_b}{\gamma_s}(x_s - x_\ell) + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t), \quad
\frac{dx_\ell}{dt} = -\frac{k'_\ell}{\gamma_\ell}x_\ell - \frac{k_b}{\gamma_\ell}(x_\ell - x_s) + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_\ell(t). \tag{37}$$
The bound term of the equilibrium distribution corresponding to this choice of bound dynamics is
$$\pi_b(x_s, x_\ell) = \frac{1}{Z_b}\, e^{-k'_\ell x_\ell^2/2k_BT}\, e^{-k_b(x_s - x_\ell)^2/2k_BT}. \tag{38}$$
Kinetic rates Again without specifying the kinetic rates in detail, since we set the dynamics, the binding and unbinding rates must satisfy detailed balance and hence are related via
$$\frac{q_{\rm on}}{q_{\rm off}} = \frac{q^0_{\rm on}}{q^0_{\rm off}}\, e^{-k_b(x_s - x_\ell)^2/2k_BT}\, e^{-(k'_\ell - k_\ell)x_\ell^2/2k_BT}.$$
This relation favors binding when the tether is not too extended. Otherwise, the energetic price to pay to "stiffen" is too costly. Reciprocally the bond is more likely to break when the tether is quite extended, and the energetic gain to loosen the bond is quite high. One could thus choose, in line with physical intuition,
$$q_{\rm on} \sim q^0_{\rm on}\, e^{-k_b(x_s - x_\ell)^2/2k_BT}, \qquad q_{\rm off} \sim q^0_{\rm off}\, e^{(k'_\ell - k_\ell)x_\ell^2/2k_BT}. \tag{39}$$
The detailed prefactors should be specified in agreement with detailed balance.
Effective force in the bound state The effective force in the unbound state naturally vanishes. Hence we focus on the effective force in the bound state, again obtained via Eq. (26)
$$F^{\rm eff}_b(x_s) = -\frac{k'_\ell k_b}{k'_\ell + k_b}\, x_s. \tag{40}$$
The coarse-grained force is that associated with 2 springs in series with spring constants k'_ℓ and k_b. However, here it was not obvious a priori which of the spring constants for the linker, either k_ℓ or k'_ℓ or a mix of both, should contribute to the force.
Effective rates We now use the rates defined in Eq. (39) to study coarse-grained kinetic rates as well. Let K = k_ℓ k_b/(k_ℓ + k_b) and K' = k'_ℓ k_b/(k'_ℓ + k_b). Then we obtain
$$q^{\rm eff}_{\rm on}(x_s) \sim e^{-Kx_s^2/2k_BT}, \qquad q^{\rm eff}_{\rm off}(x_s) \sim e^{(K' - K)x_s^2/2k_BT}. \tag{41}$$
The effective binding rate is determined by the typical, unstiff, radius of the interaction, √(k_BT/K), similarly to our foundational example in Sec. I. The unbinding rate is now larger at larger distances, according to how much stiffening occurred. Notice that even if the bond is really stiff, meaning k_b ≫ k_ℓ, k'_ℓ, then K' − K ≃ k'_ℓ − k_ℓ still converges to a finite value.
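The K-scaling of the effective binding rate can be checked by direct quadrature. In the sketch below (illustrative parameters), q_on from Eq. (39) is averaged over the unbound Gaussian distribution; the resulting x_s-dependence matches e^{-Kx_s²/2k_BT} from Eq. (41).

```python
# Sketch: averaging q_on(x_s, x_l) = q0 * exp(-kb (x_s - x_l)^2 / 2kBT) over
# pi_u ~ exp(-kl x_l^2 / 2kBT) reproduces the K-scaling with K = kl*kb/(kl+kb).
import numpy as np
from scipy.integrate import quad

kBT, kl, kb, q0 = 1.0, 1.0, 4.0, 1.0   # illustrative values
K = kl * kb / (kl + kb)

def q_eff_on(xs):
    num = quad(lambda x: q0 * np.exp(-kb * (xs - x)**2 / (2 * kBT))
                          * np.exp(-kl * x**2 / (2 * kBT)), -np.inf, np.inf)[0]
    den = quad(lambda x: np.exp(-kl * x**2 / (2 * kBT)), -np.inf, np.inf)[0]
    return num / den

for xs in (0.0, 0.5, 1.0, 2.0):
    ratio = q_eff_on(xs) / q_eff_on(0.0)
    print(xs, ratio, np.exp(-K * xs**2 / (2 * kBT)))  # columns 2 and 3 agree
```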
C. Slip bonds with force-dependent unbinding
Another quite common situation is that of slip bonds with force-dependent unbinding 28,43 -see Fig. 6-b. In these cases the unbinding rate scales as 43
$$q_{\rm off}(x_s, x_\ell) = q^0_{\rm off}\, e^{k_{\rm slip}|x_s - x_\ell|/F_0} \tag{42}$$
where k_slip is a characteristic bond spring constant and F_0 a characteristic force threshold. Above this threshold, the unbinding rate is indeed greatly enhanced.
Kinetic rates and satisfying detailed balance How shall we proceed to model the dynamics with such force-dependent unbinding rate, when the system is still at equilibrium? Compared to our foundational example in Sec. I, here we might modify the binding rate to satisfy detailed balance. Notice that this is not the only possibility to satisfy detailed balance and one could also consider alternative bound dynamics. For the sake of the example, here we will consider
$$q_{\rm on}(x_s, x_\ell) = q^0_{\rm on}\, e^{-k_b(x_s - x_\ell)^2/2k_BT} \tag{43}$$
and an additional force on the bound particle that we specify in Eq. (44) below. The expressions for the rates are satisfactory since they are coherent with faster binding and slower unbinding when the particle and linker are closer. Similar choices motivated by their intuitive behavior date back to Bell and Dembo 43,44 and are derived in Ref. 25 as a coarse-graining of a microscopic model for binding. The detailed prefactors should be specified in agreement with detailed balance.
Unbound dynamics The unbound dynamics are still given by Eq. (1).
Bound dynamics The bound dynamics are now changed compared to Eq. (2), since the linker and particle are now subjected to additional forces.
$$\frac{dx_s}{dt} = -\frac{k_b}{\gamma_s}(x_s - x_\ell) - \frac{k_{\rm slip}}{\gamma_s}\frac{k_BT}{F_0}\,\mathrm{sgn}(x_s - x_\ell) + \sqrt{\frac{2k_BT}{\gamma_s}}\,\eta_s(t), \quad
\frac{dx_\ell}{dt} = -\frac{k_\ell}{\gamma_\ell}x_\ell - \frac{k_b}{\gamma_\ell}(x_\ell - x_s) + \frac{k_{\rm slip}}{\gamma_\ell}\frac{k_BT}{F_0}\,\mathrm{sgn}(x_s - x_\ell) + \sqrt{\frac{2k_BT}{\gamma_\ell}}\,\eta_\ell(t). \tag{44}$$
In higher dimensions, the sign function generalizes to sgn(x) = x/‖x‖, where x = x_s − x_ℓ.
The bound term of the equilibrium distribution corresponding to this choice of bound dynamics is
$$\pi_b(x_s, x_\ell) = \frac{1}{Z_b}\, e^{-k_\ell x_\ell^2/2k_BT}\, e^{-k_b(x_s - x_\ell)^2/2k_BT}\, e^{-k_{\rm slip}|x_s - x_\ell|/F_0}. \tag{45}$$
Effective force in the bound state The effective force in the unbound state naturally vanishes. Hence we focus on the effective force in the bound state, which has a lengthy expression that we do not report here. In the case of a small slip force, k_slip k_BT/F_0 ≪ k_ℓ|x_s|, we obtain
$$F^{\rm eff}_b(x_s) \simeq -\frac{k_\ell k_b}{k_\ell + k_b}\, x_s - \frac{k_\ell}{k_\ell + k_b}\frac{k_{\rm slip} k_BT}{F_0}\,\mathrm{erf}\left(\sqrt{\frac{k_\ell}{k_\ell + k_b}\frac{k_\ell x_s^2}{2k_BT}}\right). \tag{46}$$
The coarse-grained force contains now 2 contributions. The first one is that associated with 2 springs in series with spring constants k_ℓ and k_b that we have seen before. The second one corresponds to the slip bond, which grows stronger when the particle goes further away from the target. Notice that this latter slip bond friction force is screened by a factor k_ℓ/(k_ℓ + k_b) in the coarse-grained state. In the case of a stiff bond, k_b ≫ k_ℓ, the slip force is entirely screened, and the particle "slips": the linker essentially accommodates changing configurations by rapidly adjusting its length.
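The small-slip-force approximation, Eq. (46), can be compared against a direct numerical average of the bound force appearing in Eq. (44) over π_b of Eq. (45). The sketch below does this by quadrature; parameter values are illustrative and chosen so that k_slip k_BT/F_0 is small.

```python
# Sketch: Eq. (46) versus a direct average of the bound force over Eq. (45).
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

kBT, kl, kb, F0, kslip = 1.0, 1.0, 2.0, 10.0, 0.5   # illustrative values
fs = kslip * kBT / F0                                # slip force scale (small)

def pi_b(x, xs):   # unnormalized bound weight, Eq. (45)
    return np.exp(-(kl*x**2 + kb*(xs - x)**2) / (2*kBT) - kslip*abs(xs - x)/F0)

def F_num(xs):     # <-kb(xs - x) - fs*sgn(xs - x)> over pi_b
    num = quad(lambda x: (-kb*(xs - x) - fs*np.sign(xs - x)) * pi_b(x, xs),
               -15.0, 15.0)[0]
    den = quad(lambda x: pi_b(x, xs), -15.0, 15.0)[0]
    return num / den

def F_approx(xs):  # Eq. (46)
    return (-kl*kb/(kl + kb)*xs
            - kl/(kl + kb)*fs*erf(np.sqrt(kl/(kl + kb)*kl*xs**2/(2*kBT))))

for xs in (0.2, 0.5, 1.0):
    print(xs, F_num(xs), F_approx(xs))   # agree to O(fs^2)
```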
Effective rates We now use the rates defined in Eq. (42) and Eq. (43) to study coarse-grained kinetic rates as well. Let K = k_ℓ k_b/(k_ℓ + k_b). Then we obtain, in the case of a small slip force k_slip k_BT/F_0 ≪ k_ℓ|x_s|,
$$q^{\rm eff}_{\rm on}(x_s) \sim e^{-Kx_s^2/2k_BT}, \qquad q^{\rm eff}_{\rm off}(x_s) \sim e^{\frac{k_\ell}{k_\ell + k_b}\frac{k_{\rm slip}|x_s|}{F_0}}. \tag{47}$$
The effective binding rate is determined by the typical, unstiff, radius of the interaction, √(k_BT/K), similarly to our foundational example in Sec. I. The unbinding rate is now larger at larger distances, with force-dependent unbinding.
V. MACROSCOPIC CONSEQUENCES FOR THE CHOICE OF BINDING KINETICS
A. Coarse-graining various microscopic binding models: the Doi model

How might the effective dynamics change from our foundational example in Sec. I when we consider the Doi model for binding?
Effective force in the unbound state According to Eq. (25) and with Eq. (12) and Eq. (13) describing the Doi model in our context, the unbound friction force is simply
$$F^{\rm eff}_u(x_s) = \frac{-k_b \int_{x_s - R}^{x_s + R}(x_s - x_\ell)\, e^{-\frac{k_b(x_s - x_\ell)^2 + k_\ell x_\ell^2}{2k_BT}}\, dx_\ell}{\int \pi_u(x_s, x_\ell)\, dx_\ell} \tag{48}$$
which has a cumbersome, non-vanishing, expression that we do not report here. However, we can make a finite perturbation of the obtained expression in the limit of a stiff bond, when k_b ≫ k_ℓ, and to simplify further the expression we assume that the radius to bind R is given by the typical spatial scale of the bond, R = √(2k_BT/k_b), such that
$$F^{\rm eff}_u(x_s) \simeq -0.43\, k_\ell\sqrt{\frac{k_\ell}{k_b}}\, x_s. \tag{49}$$
Since x_ℓ wiggles around 0, even in the unbound state the particle feels a recoil force when it is close to the linker; it then makes sense that the particle can now feel a recoil force everywhere. The magnitude of this force is slightly decreased because the bond can only form when the particle is close enough to the linker. Notice how the scaling of the effective spring constant is entirely nontrivial, as k_ℓ√(k_ℓ/k_b).
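The numerical prefactor in Eq. (49) can be recovered by evaluating Eq. (48) by quadrature in the stiff-bond regime, as in the sketch below (illustrative parameters; the denominator is taken as the unbound Gaussian weight, an assumption of this check).

```python
# Sketch: recovering the ~0.43 prefactor of Eq. (49) by quadrature of Eq. (48)
# with R = sqrt(2 kBT / kb) and kb >> kl.
import numpy as np
from scipy.integrate import quad

kBT, kl, kb = 1.0, 1.0, 400.0   # stiff-bond regime (illustrative)
R = np.sqrt(2 * kBT / kb)

def F_eff_u(xs):
    num = quad(lambda x: -kb * (xs - x)
               * np.exp(-(kb * (xs - x)**2 + kl * x**2) / (2 * kBT)),
               xs - R, xs + R)[0]
    den = quad(lambda x: np.exp(-kl * x**2 / (2 * kBT)), -np.inf, np.inf)[0]
    return num / den

xs = 0.1
print(F_eff_u(xs) / xs, -0.43 * kl * np.sqrt(kl / kb))  # slopes roughly agree
```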
Effective force in the bound state According to Eq. (26) the bound friction force is simply
$$F^{\rm eff}_b(x_s) = -\frac{k_\ell k_b}{k_\ell + k_b}\, x_s \tag{50}$$
which is similar to the results obtained with our foundational example.
Effective kinetic rates Again, the expression for the effective binding and unbinding rates obtained from Eqs. (23) and (24) is rather cumbersome. When k_b ≫ k_ℓ, and assuming R = √(2k_BT/k_b), the kinetic rates are smoothly decaying to 0 (instead of a sharp Heaviside function as in the microscopic equations), at a characteristic distance
$$x_s \simeq \sqrt{\frac{k_BT}{k_\ell}\sqrt{\frac{k_\ell}{k_b}}}.$$
Again, one sees how this scaling is entirely nontrivial. Systematic coarse-graining is therefore essential for faster numerical simulations and enhanced theoretical investigations.
B. Macroscopic consequences
We finish by briefly exploring how the microscopic choices for q_on(x_s, x_ℓ) and q_off(x_s, x_ℓ) can affect the macroscopic dynamics - see Sec. I B. To simplify the exploration, here we directly simulate the coarse-grained equations for the slow particle, using the coarse-grained forces and kinetic rates in Table I. We test the impact of different initial choices of microscopic binding, specifically with model 1 corresponding to Eq. (9) with spatially dependent binding only and model 2 from Eq. (10) with both spatially dependent binding and unbinding. To compare different microscopic models in a sensible fashion, we constrain the macroscopic bound probability Π_b = 0.5 (see Fig. 7-c) for all microscopic models. We then calculate the macroscopic mean binding and unbinding times for the particle (Fig. 7-a and b), as well as the particle's long-time diffusion coefficient (Fig. 7-d), determined by its mean-squared displacement. Importantly, we vary the effective bond spring constant K = k_ℓ k_b/(k_b + k_ℓ) to probe how model parameters affect macroscopic dynamics.
We find that both the choice of microscopic binding model and the parameter K significantly affect the macroscopic binding and unbinding times (Figs. 7-a and b). Since the macroscopic binding probability Π_b is fixed, the mean binding and unbinding times are identical, and hence Fig. 7-a and b appear the same. At long times, we expect, as for the coarse-graining over x_ℓ, that the kinetic rates do not depend on the position x_s anymore and would verify
$$Q_{\rm off} = \frac{\int q^{\rm eff}_{\rm off}(x_s)\, \pi^{\rm eff}_b(x_s)\, dx_s}{\int \pi^{\rm eff}_b(x_s)\, dx_s}, \tag{51}$$
and similarly for the binding rate Q_on, in agreement with other works 25. Eq. (51) gives the mean unbinding time as 1/Q_off and reproduces perfectly the numerical results (lines in Fig. 7-a).
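Eq. (51) is straightforward to evaluate numerically once the coarse-grained inputs are specified. The sketch below uses placeholder Gaussian forms for q_eff_off and π_eff_b (hypothetical, with an assumed spatial-sensitivity parameter alpha; these are not the specific model-2 expressions) to illustrate the computation of Q_off and of the mean unbinding time 1/Q_off.

```python
# Sketch: evaluating the macroscopic unbinding rate of Eq. (51) by quadrature.
# q_eff_off and pi_eff_b below are illustrative placeholders, not the paper's
# model-specific expressions.
import numpy as np
from scipy.integrate import quad

kBT, K, q0_off, alpha = 1.0, 2.0, 1.0, 0.5   # alpha: assumed sensitivity

def q_eff_off(xs):            # hypothetical spatially dependent unbinding rate
    return q0_off * np.exp(alpha * K * xs**2 / (2 * kBT))

def pi_eff_b(xs):             # coarse-grained bound weight ~ e^{-K xs^2/2kBT}
    return np.exp(-K * xs**2 / (2 * kBT))

num = quad(lambda x: q_eff_off(x) * pi_eff_b(x), -np.inf, np.inf)[0]
den = quad(pi_eff_b, -np.inf, np.inf)[0]
Q_off = num / den
print("macroscopic Q_off =", Q_off, "; mean unbinding time =", 1 / Q_off)
```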
For the different microscopic models, the response to various binding constants K is non-trivial. For model 1, the macroscopic binding and unbinding times remain constant. In model 1, q_eff_off(x_s) does not depend on space, and, therefore, its macroscopic counterpart Q_off, which verifies Eq. (51), seems to have no direct dependence on K. However, q_eff_on(x_s) is inherently spatially dependent and hence its macroscopic counterpart may depend on K as well, via the associated length scale √(k_BT/K). In contrast, with model 2 the macroscopic kinetic rates both increase with the binding constant K. When K is larger, the bond is stronger, which decreases the binding time but increases the unbinding time. Hence, the macroscopic dependence where both kinetic rates increase with K is interesting. In fact, to properly coarse-grain to this now macroscopic scale, one should also take into account the probability distribution of being in each state with a given extension. Such entangled behavior requires proper integration beyond simple intuition. Eventually, we find that the macroscopic binding times strongly depend on microscopic choices for the kinetic rates, which means one should use caution when designing such models.
In addition, for both microscopic model choices, the long-time self-diffusion coefficient of the particle depends on K (Fig. 7-d). In fact, at small K we find D → k_BT/γ_s since the bond is weak enough that it barely affects particle motion. At larger K values, we find D → k_BT/2γ_s since, when it is bound, the bond is strong enough that it prevents any motion, and the particle is bound half of the time (Π_b = 0.5). The dependence on K appears to be similar for both microscopic binding models 1 and 2. In more varied models for binding, there is no reason this should stay true. We leave the general exploration of macroscopic transport properties, such as diffusion coefficients, and their dependence on microscopic binding kinetics, for future work.
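The limiting behaviors of D can be reproduced with a minimal two-state simulation. In the sketch below (illustrative, not the paper's protocol), the particle diffuses freely when unbound and, when a binding event occurs, a spring of stiffness K is anchored at the instantaneous particle position (an assumption of this sketch); with constant rates giving Π_b ≈ 0.5, the MSD-based D interpolates between k_BT/γ_s and k_BT/2γ_s.

```python
# Sketch: long-time diffusion coefficient from the MSD of a coarse-grained
# two-state particle. Anchoring the bond where binding occurred is an
# assumption of this sketch; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
kBT, gs, K = 1.0, 1.0, 50.0
q_on, q_off = 10.0, 10.0          # constant rates -> Pi_b ~ 0.5
dt, nsteps, nwalkers = 1e-3, 50_000, 500

x = np.zeros(nwalkers)
anchor = np.zeros(nwalkers)
bound = np.zeros(nwalkers, dtype=bool)
for _ in range(nsteps):
    force = np.where(bound, -K * (x - anchor), 0.0)
    x += force / gs * dt + np.sqrt(2 * kBT / gs * dt) * rng.standard_normal(nwalkers)
    flip = rng.random(nwalkers) < np.where(bound, q_off, q_on) * dt
    anchor = np.where(flip & ~bound, x, anchor)   # new bond anchored at x
    bound ^= flip

t = nsteps * dt
D = np.mean(x**2) / (2 * t)
print(D, "; limits kBT/2gs =", kBT / (2 * gs), ", kBT/gs =", kBT / gs)
```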
CONCLUSION
In this work, we have attempted to unify and justify various coarse-graining approaches for linker dynamics 22,25,27-34,36,43-45.
In these earlier approaches, we have identified many different choices for how binding and unbinding depend on distance, yet (1) these are often heuristically motivated, or (2) they only provide coarse-grained binding rates and not the full dynamics of the slow particles including forces and friction. Here, we have addressed all of these issues by providing a systematic derivation of effective dynamics, including effective friction, forces and binding kinetics, for linker molecules that obey any detailed microscopic description. Our coarse-graining approach is based on averaging techniques and preserves detailed balance, in line with our assumption of equilibrium dynamics. We have verified our approach with numerical simulations and found excellent agreement in both the effective kinetic rates and dynamics after averaging. This averaging analysis hinges on ε = γ_ℓ/γ_s being small, corresponding to a large separation of timescales between the fast linker and slow particle relaxation. When ε ≳ 1, the coarse-grained dynamics diverge from the detailed model. We showed how our general framework may be applied to diverse microscopic scenarios, including 2 linkers binding to each other, a linker stiffening upon binding, or a slip bond with force-dependent unbinding. Finally, we showed that different microscopic kinetic rates result in fundamentally different dynamics at the macroscopic scale, raising caution in making these choices without care.
Many choices for effective dynamics in the literature clearly violate detailed balance and hence operate out-of-equilibrium 28,61. The coarse-graining in Ref. 25 computes the effective rates in a mechanical model with non-equilibrium fluctuations, but relating non-equilibrium rates and dynamics seems to be missing a unified framework. Our systematic approach to coarse-graining may provide a first step toward addressing this. The authors in Ref. 62 interestingly note that effective equilibria can arise by switching between non-conservative systems, providing hope that this pursuit is both fruitful and interesting. Thus, extending our method to out-of-equilibrium systems would be of broad relevance, in particular, to explore transient bonds with activated cleaving that are common in viral linker-mediated motion 63,64 and also artificial nano-motors 7.
Although we have investigated relatively simple setups here, the tools we introduce and the lessons learned are applicable across many more complex systems. The formulas we derived for effective dynamics can be applied with ease via Table I and provide a baseline for more systematic coarse-grainings found in simulations and theoretical studies across the literature. One such example is molecular motor binding in intracellular transport. There, coarse-graining ranges from non-spatial effective kinetic rates 65,66, motor linkers obeying a worm-like chain model 67,68, or Doi-like motors that bind when within a specific radius of interaction 48. Coarse-grained cross-linked cytoskeletal networks are studied extensively 5,6,69-71, especially in self-organization in the mitotic spindle 72-75 and actomyosin network mechanics 76,77. Transient dynamics of (typically coarse-grained) cross-linkers are also fundamental in controlling viral responses in biogels 78,79 and building chromosomal territories 80. Our framework also readily extends to systems that are not "cross"-linked, such as membrane-filament interactions that drive cell protrusion 81,82 and adhesions 9,25,45.
ACKNOWLEDGMENTS
We wish to acknowledge fruitful discussions with Aleksandar Donev and Miranda Holmes-Cerfon. S.M. received funding from the European Union's Horizon 2020 research and inno- Notation Meaning x s coordinate of the slow particle x coordinate of the fast linker x 0 characteristic length scale for nondimensionalisation γ s friction coefficient on the slow particle γ friction coefficient on the fast linker ε = γ /γ s ratio of friction coefficients k spring constant describing the linker k b spring constant describing the bond between the particle and the linker k B T thermal energy τ 0 characteristic timescale for nondimensionalisation π u/b (x s , x ) microscopic unbound (resp. bound) probability distribution π eff u/b (x s ) coarse-grained unbound (resp. bound) probability distribution Π u/b macroscopic unbound (resp. bound) probability F u/b (x s , x ) microscopic force on the slow particle in the unbound (resp. bound) state F eff u/b (x s ) coarse-grained force on the slow particle in the unbound (resp. bound) state q on/off (x s , x ) microscopic binding (resp. unbinding) rate q eff on/off (x s ) coarse-grained binding (resp. unbinding) rate Q on/off macroscopic binding (resp. unbinding) rate vation programme under the Marie Skłodowska-Curie grant agreement 839225, MolecularControl.
SUPPLEMENTAL INFORMATION
See the supplementary information for further mathematical details of the coarse-graining procedure, including calculations of higher order corrections.
DATA AVAILABILITY STATEMENT
The data generated by this paper is available upon reasonable request to the authors.
Appendix A: Main notations
We report in Table II the main notations used in the paper.
Appendix B: Detailed balance at the coarse-grained level

Finally, for physical consistency, we need to check that the marginal equilibrium distribution, i.e. the equilibrium distribution integrated over the fast degrees of freedom, π^eff(x_s) = ∫π(x_s, x_ℓ) dx_ℓ, is indeed a stationary solution of the effective dynamics obtained. The marginal distribution is
$$\pi^{\rm eff}(x_s) = \begin{pmatrix} \pi^{\rm eff}_u(x_s) \\ \pi^{\rm eff}_b(x_s) \end{pmatrix} = \frac{\sqrt{2\pi}\, e^{-U_s(x_s)/k_BT}}{Z} \begin{pmatrix} \dfrac{1}{\sqrt{k_\ell}} \\ \dfrac{q^0_{\rm on}}{q^0_{\rm off}} \dfrac{1}{\sqrt{k_\ell + k_b}}\, e^{-\frac{K x_s^2}{2k_BT}} \end{pmatrix}. \tag{B1}$$
It is clear that the marginal distribution of either state is indeed a stationary solution of the dynamics in each state as specified in Eq. (21).
To check that the marginal distribution is a stationary distribution of the dynamics as a whole, we still need to check that detailed balance is satisfied at the coarse-grained level. This is not guaranteed a priori since the averaging technique does not use at any point that it should preserve detailed balance. At this point, it is important to notice that in fact the effective rates are related to the equilibrium probability distribution as
$$q^{\rm eff}_{\rm on}(x_s) = \frac{\int q_{\rm on}(x_s, x_\ell)\, \pi_u(x_s, x_\ell)\, dx_\ell}{\int \pi_u(x_s, x_\ell)\, dx_\ell} \tag{B2}$$
and similarly for q^eff_off. Hence, we simply have that
$$\pi^{\rm eff}_u\, q^{\rm eff}_{\rm on}(x_s) = \pi^{\rm eff}_u\, \frac{\int q_{\rm on}(x_s, x_\ell)\pi_u(x_s, x_\ell)\,dx_\ell}{\int \pi_u(x_s, x_\ell)\,dx_\ell} = \int q_{\rm on}(x_s, x_\ell)\pi_u(x_s, x_\ell)\,dx_\ell = \int q_{\rm off}(x_s, x_\ell)\pi_b(x_s, x_\ell)\,dx_\ell = \frac{\int q_{\rm off}(x_s, x_\ell)\pi_b(x_s, x_\ell)\,dx_\ell}{\int \pi_b(x_s, x_\ell)\,dx_\ell} \times \pi^{\rm eff}_b = \pi^{\rm eff}_b\, q^{\rm eff}_{\rm off}(x_s).$$
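The chain of equalities above can be illustrated numerically: for the foundational one-linker model with q_on = q⁰_on e^{-k_b(x_s-x_ℓ)²/2k_BT} and constant q_off, the two fluxes π_eff_u q_eff_on and π_eff_b q_eff_off agree pointwise. A minimal consistency check (illustrative parameters):

```python
# Sketch: numerical check of detailed balance at the coarse-grained level,
# pi_eff_u * q_eff_on = pi_eff_b * q_eff_off, for the one-linker model.
import numpy as np
from scipy.integrate import quad

kBT, kl, kb, q0_on, q0_off = 1.0, 1.0, 3.0, 2.0, 1.0   # illustrative values

def pi_u(x, xs):   # unbound Boltzmann weight (up to a constant)
    return np.exp(-kl * x**2 / (2 * kBT))

def pi_b(x, xs):   # bound weight; microscopic detailed balance fixes the ratio
    return (q0_on / q0_off) * pi_u(x, xs) * np.exp(-kb * (xs - x)**2 / (2 * kBT))

def lhs(xs):       # pi_eff_u(xs) * q_eff_on(xs) = int q_on pi_u dx
    return quad(lambda x: q0_on * np.exp(-kb * (xs - x)**2 / (2 * kBT))
                          * pi_u(x, xs), -np.inf, np.inf)[0]

def rhs(xs):       # pi_eff_b(xs) * q_eff_off(xs) = int q_off pi_b dx
    return quad(lambda x: q0_off * pi_b(x, xs), -np.inf, np.inf)[0]

for xs in (0.0, 0.5, 1.5):
    print(xs, lhs(xs), rhs(xs))   # the two columns agree
```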
Detailed balance is therefore also true at the coarse-grained level. Since the marginal distribution is consistent with the dynamics in each state and with detailed balance, it is indeed the stationary solution to the effective dynamics.

From the microscopic dynamics, Eqns. (1)-(2) in the main text, the probability density p = (p_u, p_b)^T satisfies the Fokker-Planck equation
$$\partial_t p = \mathcal{L}\, p, \tag{1}$$
with $\mathcal{L} = Q + V$, where
$$Q = \begin{pmatrix} -q_{\rm on} & q_{\rm off} \\ q_{\rm on} & -q_{\rm off} \end{pmatrix}, \qquad V = \begin{pmatrix} V_u & 0 \\ 0 & V_b \end{pmatrix},$$
$$V_u = \partial_\ell\!\left(\frac{k_\ell}{\gamma_\ell}x_\ell\,\cdot\right) + \frac{k_BT}{\gamma_\ell}\partial_{\ell\ell} + \partial_s\!\left(\frac{\partial_s U_s(x_s)}{\gamma_s}\,\cdot\right) + \frac{k_BT}{\gamma_s}\partial_{ss},$$
$$V_b = \partial_\ell\!\left(\left[\frac{k_\ell}{\gamma_\ell}x_\ell - \frac{k_b}{\gamma_\ell}(x_s - x_\ell)\right]\cdot\right) + \frac{k_BT}{\gamma_\ell}\partial_{\ell\ell} + \partial_s\!\left(\left[\frac{\partial_s U_s(x_s)}{\gamma_s} + \frac{k_b}{\gamma_s}(x_s - x_\ell)\right]\cdot\right) + \frac{k_BT}{\gamma_s}\partial_{ss},$$
with an appropriate initial condition. Additionally, we require the flux in either state to vanish at infinity, to conserve total probability. The stationary solution of Eq. (1) is π = (π u , π b ) T , the equilibrium probability density of the system. As we have seen, it satisfies detailed balance.
While probability densities have an intuitive physical meaning, when averaging it is often easier - and mathematically better posed - to consider the adjoint of the Fokker-Planck equation and the corresponding dual functions. These are functions f(x_s, x_ℓ, t) = ∫ p(x'_s, x'_ℓ, t | x_s, x_ℓ) g(x'_s, x'_ℓ) dx'_ℓ dx'_s that give the expectation of any scalar function g(x_s(t), x_ℓ(t)), given an initial condition x_s(0) = x_s, x_ℓ(0) = x_ℓ.

We now show the limits of a stiff spring constant for the bond and of coarse-graining fast linker dynamics commute. Since in the text we started with coarse-graining, i.e. the limit of a fast linker, we here first take the limit of a stiff spring constant for the bond, k_b ≫ k_ℓ. This limit corresponds to a so-called soft constraint, where the linker and the slow particle move synchronously when they are bound. The dynamics in the bound state in this case can be readily obtained by projecting the unbound dynamics [3-5]:
$$\frac{dx_s}{dt} = \frac{dx_\ell}{dt} = -\frac{k_\ell x_\ell + \partial_{x_s}U(x_s)}{\gamma_s + \gamma_\ell} + \sqrt{\frac{2k_BT}{\gamma_s + \gamma_\ell}}\,\eta_s(t). \tag{2}$$
We obtain the generator
$$\mathcal{L}^\dagger = Q + V \tag{3}$$
where
$$V_u = -\frac{k_\ell}{\gamma_\ell}x_\ell\,\partial_\ell + \frac{k_BT}{\gamma_\ell}\partial_{\ell\ell} - \frac{\partial_{x_s}U(x_s)}{\gamma_s}\,\partial_s + \frac{k_BT}{\gamma_s}\partial_{ss} \tag{4}$$
$$V_b = -\frac{k_\ell x_\ell + \partial_{x_s}U(x_s)}{\gamma_\ell + \gamma_s}(\partial_\ell + \partial_s) + \frac{k_BT}{\gamma_\ell + \gamma_s}(\partial_\ell + \partial_s)^2 \tag{5}$$
and we have to be careful that the bound state is only defined on the line x_ℓ = x_s. As a result, the equilibrium probability distribution in the bound state contains a contribution δ(x_ℓ − x_s), and hence the force in the bound state is simply
$$F^{\rm eff}_b(x_s) = -k_\ell x_s. \tag{6}$$
This corresponds exactly to the effective force in the bound state when the limits are performed the other way around; see Sec. II.F.1 in the main text on the model with a stiff bond.
II. COARSE-GRAINING AT THE SUBSEQUENT ORDER
In this section, we extend the averaging procedure to determine the dynamics in the next order of ε.
a. Framework to obtain O(ε) coarse-grained dynamics. Recall from Eq. (19) in the main text that the dynamics at O(1) in ε satisfy, for any test function f_0 = (g_1(x_s, t), g_2(x_s, t))^T,
$$\partial_t g_1 = -q^{\rm eff}_{\rm on}(g_1 - g_2) - \frac{\partial_s U(x_s)}{k_BT}\partial_s g_1 + \partial_{ss} g_1, \qquad
\partial_t g_2 = q^{\rm eff}_{\rm off}(g_1 - g_2) - \frac{\partial_s U(x_s)}{k_BT}\partial_s g_2 + \partial_{ss} g_2 - \frac{\lambda\kappa}{\kappa + \lambda}\, x_s\, \partial_s g_2. \tag{7}$$
To obtain dynamics at the following order, namely O(ε), we need to find f_1 such that L_0 f_1 = −L_1 f_0 + ∂_t f_0. We know that such a solution f_1 exists since f_0 satisfies the closure equation, or Fredholm alternative, (L_1 f_0 − ∂_t f_0) · π_0 = 0, which is equivalent to Eq. (7). Writing f_1 = (m_1, m_2)^T (see paragraph b below), the equation for m_2 reads
$$-\kappa x_\ell\, \partial_\ell m_2 - \lambda(x_\ell - x_s)\partial_\ell m_2 + \partial_{\ell\ell} m_2 = (q_{\rm off}(x_s, x_\ell) - q^{\rm eff}_{\rm off}(x_s))(g_1 - g_2) - \lambda\left(x_\ell - \frac{\lambda}{\kappa + \lambda}x_s\right)\partial_s g_2.$$
To find m_1 and m_2 we essentially need to solve second order equations of the form
$$-k(x_\ell - x_0)\,\partial_\ell m + \partial_{\ell\ell} m = f(x_\ell) \tag{8}$$
where f(x_ℓ) is a smooth function of x_ℓ and m(x_s, x_ℓ, t) is the unknown. We fix m through the condition m · π_0 = 0, where here π_0 ∼ e^{−k(x_ℓ−x_0)²/2}. We obtain
$$m(x_\ell) = \int_{x_0}^{x_\ell}\left[\int_{-\infty}^{x''} f(x')\, e^{-k(x'-x_0)^2/2}\, dx'\right] e^{k(x''-x_0)^2/2}\, dx'' - \sqrt{\frac{k}{2\pi}}\int_{-\infty}^{\infty}\left\{\int_{x_0}^{x}\left[\int_{-\infty}^{x''} f(x')\, e^{-k(x'-x_0)^2/2}\, dx'\right] e^{k(x''-x_0)^2/2}\, dx''\right\} e^{-k(x-x_0)^2/2}\, dx,$$
which can alternatively be written in a more compact form as
$$m(x_\ell) = \int_{-\infty}^{\infty}\left\{\int_{x_0}^{x}\left[\int_{-\infty}^{x''} f(x')\, e^{-k(x'-x_0)^2/2}\, dx'\right] e^{k(x''-x_0)^2/2}\, dx''\right\}\left[\delta(x - x_\ell) - \sqrt{\frac{k}{2\pi}}\, e^{-k(x-x_0)^2/2}\right] dx.$$
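The nested-integral solution can be validated numerically. For f(x_ℓ) = x_ℓ − x_0 the exact solution of Eq. (8) is m = −(x_ℓ − x_0)/k, which the formula (with the constant fixed so that m(x_0) = 0) should reproduce; the sketch below checks this on a grid (the test function and parameters are arbitrary choices).

```python
# Sketch: checking the special solution of Eq. (8) via cumulative integrals.
# For f(x) = x - x0 the exact solution is m(x) = -(x - x0)/k.
import numpy as np

k, x0 = 2.0, 0.3
x = np.linspace(x0 - 5, x0 + 5, 20001)
dx = x[1] - x[0]
f = x - x0
w = np.exp(-k * (x - x0) ** 2 / 2)

inner = np.cumsum(f * w) * dx                 # int_{-inf}^{x''} f e^{-k(.)^2/2}
mprime = inner * np.exp(k * (x - x0) ** 2 / 2)
i0 = np.searchsorted(x, x0)
m = (np.cumsum(mprime) - np.cumsum(mprime)[i0]) * dx   # int_{x0}^{x} (.) dx''

mask = np.abs(x - x0) <= 3.0                  # avoid boundary truncation error
print(np.max(np.abs(m - (-(x - x0) / k))[mask]))       # small residual
```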
We obtain a full expression for f_1 by inserting the right variables for k and x_0 in the above expression. We find
$$m_1(x_s, x_\ell, t) = -\delta Q_{\rm on}(x_s, x_\ell)\,(g_1(x_s, t) - g_2(x_s, t)) \tag{9}$$
where
$$\delta Q_{\rm on}(x_s, x_\ell) = \int_{-\infty}^{\infty}\left\{\int_{0}^{x}\left[\int_{-\infty}^{x''} \delta q_{\rm on}(x', x_s)\, e^{-\kappa x'^2/2}\, dx'\right] e^{\kappa x''^2/2}\, dx''\right\}\left[\delta(x - x_\ell) - \sqrt{\frac{\kappa}{2\pi}}\, e^{-\kappa x^2/2}\right] dx \tag{10}$$
using δq_on = q_on(x_s, x_ℓ) − q^eff_on(x_s) and similarly for δq_off. For m_2 we find
$$m_2 = \delta Q_{\rm off}(x_s, x_\ell)\,(g_1 - g_2) + \frac{\lambda}{\lambda + \kappa}\left(x_\ell - x_s\frac{\lambda}{\lambda + \kappa}\right)\partial_s g_2(x_s, t) \tag{11}$$
where
$$\delta Q_{\rm off}(x_s, x_\ell) = \int_{-\infty}^{\infty}\left\{\int_{x_0}^{x}\left[\int_{-\infty}^{x''} \delta q_{\rm off}(x', x_s)\, e^{-(\kappa+\lambda)\left(x' - x_s\frac{\lambda}{\lambda+\kappa}\right)^2/2}\, dx'\right] e^{(\kappa+\lambda)\left(x'' - x_s\frac{\lambda}{\lambda+\kappa}\right)^2/2}\, dx''\right\}\left[\delta(x - x_\ell) - \sqrt{\frac{\kappa+\lambda}{2\pi}}\, e^{-(\kappa+\lambda)\left(x - x_s\frac{\lambda}{\lambda+\kappa}\right)^2/2}\right] dx, \tag{12}$$
with here x_0 = x_s λ/(λ+κ).
c. Fredholm alternative to obtain the O(ε) dynamics. We now calculate (∂_t f_1 − L_1 f_1) · π_0. It is rather straightforward to show
$$\partial_t f_1 \cdot \pi_0 = 0 \tag{13}$$
since we imposed f_1 · π_0 = 0. Similarly any term such as ∂_s m_1 · π_0 = 0 or ∂_ss m_1 · π_0 = 0, and ∂_s m_2 · π_0 = 0 or ∂_ss m_2 · π_0 = 0. The remaining terms yield the O(ε) dynamics as (∂_t f_0 + ε∂_t f_1 − L_1 f_0 − εL_1 f_1) · π_0 = 0.
The modified dynamics for g_1, i.e. the unbound dynamics, are
$$\partial_t g_1 = O(1) + \varepsilon\, \frac{\int \pi_u(x_s, x_\ell)\left(-q_{\rm on}(x_s, x_\ell)\left[\delta Q_{\rm on}(x_s, x_\ell) - \delta Q_{\rm off}(x_s, x_\ell)\right]\right) dx_\ell}{\int \pi_u(x_s, x_\ell)\, dx_\ell}\, (g_1 - g_2) + \varepsilon\, \frac{\int \pi_u(x_s, x_\ell)\, q_{\rm on}(x_s, x_\ell)\, \frac{\lambda}{\lambda+\kappa}\left(x_\ell - x_s\frac{\lambda}{\lambda+\kappa}\right) dx_\ell}{\int \pi_u(x_s, x_\ell)\, dx_\ell}\, \partial_s g_2 \tag{14}$$
where we abbreviated by O(1) the O(1) dynamics for g 1 reported on the right hand side of Eq. (7).
We analyze the additional terms in O(ε) briefly.
The first term at O(ε) is a contribution to the binding rate that is modified compared to a simple integral of q on (x s , x ) by δ Q on (x s , x ) − δ Q off (x s , x ). This latter factor accounts roughly for states that are close to the average state and that are not bound yet, and therefore quantifies how memory affects binding.
Interestingly, in the unbound state, there is now the appearance of a source term, akin to a force (notice that it goes as ∂_s g_2 and not ∂_s g_1), corresponding to the second term in O(ε). This "force" is a remnant of the recoil force on the bound fast linker, because again there is a slight memory in the system. This additional force in the unbound state
$$F^{\rm eff,1}_u = \varepsilon\, \frac{\int \pi_u(x_s, x_\ell)\, q_{\rm on}(x_s, x_\ell)\, \frac{\lambda}{\lambda+\kappa}\left(x_\ell - x_s\frac{\lambda}{\lambda+\kappa}\right) dx_\ell}{\int \pi_u(x_s, x_\ell)\, dx_\ell} \tag{15}$$
averages, when q on is uniform, simply to ∼ −kx s where k is some effective spring constant whose lengthy expression we do not report here.
FIG. 1. Coarse-graining principle for a fast linker forming a transient bond. (a) Cartoon of a common coarse-graining procedure.
FIG. 2. Possible choices of binding and unbinding kinetics agreeing with detailed balance: (a) constant binding rate, (b) constant unbinding rate, and (c) both rates varying; see text for detailed expressions. Here we chose a bond spring constant k_b L²/k_BT = 40 and a fixed macroscopic probability Π_b = 0.5.
FIG. 3. Numerical validation of the coarse-graining procedure, here in the case of a fast linker transiently binding to a slow particle in 1D. Both binding and unbinding are chosen to be spatially dependent, as in model 2, Eq. (10), with numerical parameters ε = 0.1, κ = 1, λ = 10, q⁰_off = 1/τ₀, and q⁰_on chosen such that Π_b/Π_u = 1, determined by the relation (27). 1000 simulations are run until T = 1000τ₀. a: microscopic binding and unbinding effective rates. b: effective force. c: effective marginal probabilities of being bound or unbound. d: macroscopic transition rates. Theory curves are obtained with the expressions summarized in Table I.
FIG. 4. A comparison between the detailed and coarse-grained model of the autocorrelation of the slow particle position x s (t + τ)x s (t) t . In the limit of separation of timescales corresponding to small ε, the dynamics approach those of the coarse grained-model. Parameter values are the same as in Fig. 3, with purple curves corresponding to ε = {10 −2 , 10 −1 , 10 0 , 10 1 , 10 2 }. The curves for the lowest two values of ε are indistinguishable.
FIG. 5. (a) Cartoon of 2 fast linkers that can connect to each other, representing for example (b) 2 complementary DNA strands transiently hybridizing, one of them being attached to a colloid whose motion is of interest.
FIG. 6. (a) Cartoon of a fast linker stiffening upon binding and (b) a fast linker connecting to the particle via a slip bond with force-dependent unbinding.
FIG. 7. Different choices of q_on(x) and q_off(x) yield different macroscopic dynamics. Model 1 corresponds to Eq. (9) with spatially dependent binding only and model 2 corresponds to Eq. (10) with both spatially dependent binding and unbinding. Simulation parameters are q⁰_off = 0.1 τ₀⁻¹ and q⁰_on chosen such that Π_b/Π_u = 1. 1000 simulations are run until T = 1000τ₀. a: mean unbinding time as a function of the effective spring constant K, showing divergence between models 1 and 2. Theory curves are obtained with Eq. (51). b: mean binding time as a function of the effective spring constant K, with similar trend. c: macroscopic bound probabilities Π_b are set to be fixed and are the same for both models. d: macroscopic diffusion coefficient D = lim_{t→∞} ⟨x_s(t)²⟩/(2t) for both models.
Obtaining the generator from the dynamics. The set of stochastic Eqns. (1)-(2) in the main text defines a Markov process that is conveniently studied via the Fokker-Planck equation and its adjoint, the Kolmogorov backward equation [1,2]. Let p(x_s, x_ℓ, t) = (p_u(x_s, x_ℓ, t), p_b(x_s, x_ℓ, t))^T be the probability density function of finding the system at time t and positions x_s, x_ℓ in the unbound or bound states. We obtain from Eqns. (1)-(2) the Fokker-Planck equation, Eq. (1) above.
Writing f(x_s, x_ℓ, t) = (f_u(x_s, x_ℓ, t), f_b(x_s, x_ℓ, t))^T, we have that f satisfies the Kolmogorov backward equation [1], Eq. (14) in the main text. Here, L^† is the adjoint operator of L, defined as the operator that satisfies ⟨f, L p⟩ = ⟨L^† f, p⟩ for any probability density p and statistic f, where ⟨f, p⟩ = ∫(f_u p_u + f_b p_b) dx_ℓ dx_s is the inner product.

B. Commuting the limits of effective dynamics and stiff bond
Then, we will seek a solution for f_2 as L_0 f_2 = ∂_t f_1 − L_1 f_1, which exists only if f_1 satisfies Fredholm's alternative, (∂_t f_1 − L_1 f_1) · π_0 = 0. This final closure equation, together with the previous order, will yield the dynamics at O(ε) as (∂_t f_0 + ε∂_t f_1 − L_1 f_0 − εL_1 f_1) · π_0 = 0.

b. Derivation of f_1. We look for f_1 as f_1 = (m_1(x_s, x_ℓ, t), m_2(x_s, x_ℓ, t))^T where m_1 and m_2 are 2 functions to be determined. We have L_0 f_1 = ∂_t f_0 − L_1 f_0. Using Eq. (7) to expand ∂_t f_0, we obtain 2 separate equations for m_1 and m_2; the one for m_1 reads
$$-\kappa x_\ell\,\partial_\ell m_1 + \partial_{\ell\ell} m_1 = -(q_{\rm on}(x_s, x_\ell) - q^{\rm eff}_{\rm on}(x_s))(g_1 - g_2),$$
For m_1, notice that k = κ and x_0 = 0, while for m_2 we have k = κ + λ and x_0 = x_s λ/(κ+λ). A special solution of Eq. (8) is
$$m(x_\ell) = \int_{x_0}^{x_\ell}\left[\int_{-\infty}^{x''} f(x')\, e^{-k(x'-x_0)^2/2}\, dx'\right] e^{k(x''-x_0)^2/2}\, dx'' + A,$$
where A is a constant. To avoid terms in f_1 that are already contained at O(1) in f_0, we should verify that f_1 · π_0 = 0. Continuing on with the generic Eq. (8), we can check the condition for
TABLE II. Summary of the main notations used.

x_s: coordinate of the slow particle
x_ℓ: coordinate of the fast linker
x_0: characteristic length scale for nondimensionalisation
γ_s: friction coefficient on the slow particle
γ_ℓ: friction coefficient on the fast linker
ε = γ_ℓ/γ_s: ratio of friction coefficients
k_ℓ: spring constant describing the linker
k_b: spring constant describing the bond between the particle and the linker
k_BT: thermal energy
τ_0: characteristic timescale for nondimensionalisation
π_{u/b}(x_s, x_ℓ): microscopic unbound (resp. bound) probability distribution
π^eff_{u/b}(x_s): coarse-grained unbound (resp. bound) probability distribution
Π_{u/b}: macroscopic unbound (resp. bound) probability
F_{u/b}(x_s, x_ℓ): microscopic force on the slow particle in the unbound (resp. bound) state
F^eff_{u/b}(x_s): coarse-grained force on the slow particle in the unbound (resp. bound) state
q_{on/off}(x_s, x_ℓ): microscopic binding (resp. unbinding) rate
q^eff_{on/off}(x_s): coarse-grained binding (resp. unbinding) rate
Q_{on/off}: macroscopic binding (resp. unbinding) rate
The modified dynamics for g_2, i.e. the bound dynamics, are given by Eq. (16), where we abbreviated by O(1) the O(1) dynamics for g_2 reported on the right hand side of Eq. (7). We analyze these 6 additional terms briefly. The 1st term and the 4th correspond to modified unbinding rates due to memory effects. The 2nd, 3rd and 5th terms (the latter of which simplifies to +ελ(λ/(λ+κ))³ x_s ∂_s g_2) correspond to forces on the bound particle, which modify the O(1) force term by accounting for some degree of memory of unbound relaxation of the spring. Notice again that one of these force terms contains a ∂_s g_1 coupling which acts as a source for g_2 and translates memory effects of the force on the unbound linker feeding into the bound dynamics.

The last term can be integrated to
$$-\varepsilon\left(\frac{\lambda}{\lambda+\kappa}\right)^2 \partial_{ss} g_2, \tag{17}$$
which means the diffusion in the bound state is slowed down. Taking the usual limit of a very stiff bond, λ ≫ κ, we obtain the effective diffusion in the bound state, back in dimensional units,
$$D_b \simeq \frac{k_BT}{\gamma_s}\left(1 - \frac{\gamma_\ell}{\gamma_s}\right). \tag{18}$$
Notice that this updated diffusion coefficient corresponds to the order-1 Taylor series of k_BT/(γ_s + γ_ℓ), which, through the projection method detailed in Sec. I B, is indeed the diffusion coefficient in the bound state, before the assumption γ_ℓ ≪ γ_s is made. This additional friction on the particle is also visible in the forces, namely through the 5th term of Eq. (16). In contrast, when the bond is weak, κ ≫ λ, there is no additional friction on the bound particle and the particle diffuses as fast in the bound and unbound states. In summary, due to the springs-in-series structure, λ can be seen as a filter for friction on the particle: if stiff (λ ≫ κ), increased friction is felt on the particle, but otherwise (κ ≫ λ) the additional friction is absorbed into the bond and not felt.
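The friction interpretation can be tested directly: simulating a particle rigidly connected by a stiff bond k_b to a fast linker with friction γ_ℓ (with the linker's anchoring spring removed in this sketch so that free diffusion is exposed; an assumption made only for this check), the measured long-time diffusion coefficient of x_s approaches k_BT/(γ_s + γ_ℓ).

```python
# Sketch: bound-state friction check. A particle and a linker connected by a
# stiff bond (no anchoring spring, assumed here) diffuse jointly with
# long-time D = kBT/(gamma_s + gamma_l). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(3)
kBT, kb, gs, gl = 1.0, 100.0, 1.0, 0.5
dt, nsteps, nwalkers = 2e-4, 100_000, 200

xs = np.zeros(nwalkers)
xl = np.zeros(nwalkers)
for _ in range(nsteps):
    fb = -kb * (xs - xl)
    xs += fb / gs * dt + np.sqrt(2 * kBT / gs * dt) * rng.standard_normal(nwalkers)
    xl += -fb / gl * dt + np.sqrt(2 * kBT / gl * dt) * rng.standard_normal(nwalkers)

t = nsteps * dt
print(np.mean(xs**2) / (2 * t), kBT / (gs + gl))   # measured vs predicted D
```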
1. A. Schallamach, "A theory of dynamic rubber friction," Wear 6, 375-382 (1963).
2. L. Leibler, M. Rubinstein, and R. H. Colby, "Dynamics of reversible networks," Macromolecules 24, 4701-4707 (1991).
3. X.-Z. Cao and M. G. Forest, "Rheological tuning of entangled polymer networks by transient cross-links," The Journal of Physical Chemistry B 123, 974-982 (2019).
4. Q.-L. Lei, X. Xia, J. Yang, M. Pica Ciamarra, and R. Ni, "Entropy-controlled cross-linking in linker-mediated vitrimers," Proceedings of the National Academy of Sciences 117, 27111-27115 (2020).
5. T. C. Bidone, W. Jung, D. Maruri, C. Borau, R. D. Kamm, and T. Kim, "Morphological transformation and force generation of active cytoskeletal networks," PLoS Computational Biology 13, e1005277 (2017).
6. C. P. Descovich, D. B. Cortes, S. Ryan, J. Nash, L. Zhang, P. S. Maddox, F. Nedelec, and A. S. Maddox, "Cross-linkers both drive and brake cytoskeletal remodeling and furrowing in cytokinesis," Molecular Biology of the Cell 29, 622-631 (2018).
7. C. S. Korosec and N. R. Forde, "The lawnmower: an artificial protein-based burnt-bridge molecular motor," arXiv preprint arXiv:2109.10293 (2021).
8. L. Li, M. A. Kamal, B. H. Stumpf, F. Thibaudau, K. Sengupta, and A.-S. Smith, "Biomechanics as driver of aggregation of tethers in adherent membranes," Soft Matter 17, 10101-10107 (2021).
9. T. Bihr, U. Seifert, and A.-S. Smith, "Nucleation of ligand-receptor domains in membrane adhesion," Physical Review Letters 109, 258101 (2012).
10. P. C. Bressloff and J. M. Newby, "Stochastic models of intracellular transport," Reviews of Modern Physics 85, 135 (2013).
11. J. Goyette, C. S. Salas, N. Coker-Gordon, M. Bridge, S. A. Isaacson, J. Allard, and O. Dushek, "Biophysical assay for tethered signaling reactions reveals tether-controlled activity for the phosphatase shp-1," Science Advances 3, e1601692 (2017).
12. R. Gaillac, P. Pullumbi, K. A. Beyer, K. W. Chapman, D. A. Keen, T. D. Bennett, and F.-X. Coudert, "Liquid metal-organic frameworks," Nature Materials 16, 1149-1154 (2017).
13. S. R. Balestra and R. Semino, "Computer simulation of the early stages of self-assembly and thermal decomposition of zif-8," The Journal of Chemical Physics 157, 184502 (2022).
14. C. A. Mirkin, R. L. Letsinger, R. C. Mucic, and J. J. Storhoff, "A dna-based method for rationally assembling nanoparticles into macroscopic materials," Nature 382, 607-609 (1996).
15. L. Di Michele, F. Varrato, J. Kotar, S. H. Nathan, G. Foffi, and E. Eiser, "Multistep kinetic self-assembly of dna-coated colloids," Nature Communications 4, 2007 (2013).
16. L. Feng, L.-L. Pontani, R. Dreyfus, P. Chaikin, and J. Brujic, "Specificity, flexibility and valence of dna bonds guide emulsion architecture," Soft Matter 9, 9816-9823 (2013).
17. Y. Wang, Y. Wang, X. Zheng, É. Ducrot, J. S. Yodh, M. Weck, and D. J. Pine, "Crystallization of dna-coated colloids," Nature Communications 6, 1-8 (2015).
18. W. B. Rogers, W. M. Shih, and V. N. Manoharan, "Using dna to program the self-assembly of colloidal nanoparticles and microparticles," Nature Reviews Materials 1, 1-14 (2016).
19. X. Xia, H. Hu, M. P. Ciamarra, and R. Ni, "Linker-mediated self-assembly of mobile dna-coated colloids," Science Advances 6, eaaz6921 (2020).
20. F. Cui, S. Marbach, J. A. Zheng, M. Holmes-Cerfon, and D. J. Pine, "Comprehensive view of microscopic interactions between dna-coated colloids," Nature Communications 13, 1-10 (2022).
21. E. W. Gehrels, W. B. Rogers, Z. Zeravcic, and V. N. Manoharan, "Programming directed motion with DNA-grafted particles," ACS Nano (2022).
22. O. Maxian, R. P. Peláez, A. Mogilner, and A. Donev, "Simulations of dynamically cross-linked actin networks: morphology, rheology, and hydrodynamic interactions," PLoS Computational Biology 17, e1009240 (2021).
23. C. S. Korosec, L. Jindal, M. Schneider, I. C. de la Barca, M. J. Zuckermann, N. R. Forde, and E. Emberly, "Substrate stiffness tunes the dynamics of polyvalent rolling motors," Soft Matter 17, 1468-1479 (2021).
24. S. Marbach, J. A. Zheng, and M. Holmes-Cerfon, "The nanocaterpillar's random walk: diffusion with ligand-receptor contacts," Soft Matter 18, 3130-3146 (2022).
25. J. A. Janeš, C. Monzel, D. Schmidt, R. Merkel, U. Seifert, K. Sengupta, and A.-S. Smith, "First-principle coarse-graining framework for scale-free bell-like association and dissociation rates in thermal and active systems," Physical Review X 12, 031030 (2022).
26. F. Müller-Plathe, "Coarse-graining in polymer simulation: from the atomistic to the mesoscopic scale and back," ChemPhysChem 3, 754-769 (2002).
27. B. Fogelson and J. P. Keener, "Enhanced nucleocytoplasmic transport due to competition for elastic binding sites," Biophysical Journal 115, 108-116 (2018).
28. B. Fogelson and J. P. Keener, "Transport facilitated by rapid binding to elastic tethers," SIAM Journal on Applied Mathematics 79, 1405-1422 (2019).
29. P. K. Jana and B. M. Mognetti, "Translational and rotational dynamics of colloidal particles interacting through reacting linkers," Physical Review E 100, 060601 (2019).
30. C. Fröhner and F. Noé, "Reversible interacting-particle reaction dynamics," The Journal of Physical Chemistry B 122, 11240-11250 (2018).
31. O. Maxian, A. Donev, and A. Mogilner, "Interplay between brownian motion and cross-linking controls bundling dynamics in actin networks," Biophysical Journal 121, 1230-1245 (2022).
32. C. S. Korosec, M. J. Zuckermann, and N. R. Forde, "Dimensionality-dependent crossover in motility of polyvalent burnt-bridges ratchets," Physical Review E 98, 032114 (2018).
33. A. Kowalewski, N. R. Forde, and C. S. Korosec, "Multivalent diffusive transport," The Journal of Physical Chemistry B 125, 6857-6863 (2021).
34. M. J. Olah and D. Stefanovic, "Superdiffusive transport by multivalent molecular walkers moving under load," Physical Review E 87, 062713 (2013).
35. S. Merminod, J. R. Edison, H. Fang, M. F. Hagan, and W. B. Rogers, "Avidity and surface mobility in multivalent ligand-receptor binding," Nanoscale (2021).
36. G. Mitra, C. Chang, A. McMullen, D. Puchall, J. Brujic, and G. M. Hocky, "A coarse-grained simulation model for self-assembly of liquid droplets featuring explicit mobile binders," arXiv preprint arXiv:2212.11946 (2022).
37. Y. Zhang and S. A. Isaacson, "Detailed balance for particle models of reversible reactions in bounded domains," The Journal of Chemical Physics 156, 204105 (2022).
38. M. Rubinstein, R. H. Colby, et al., Polymer Physics, Vol. 23 (Oxford University Press, New York, 2003).
39. S. Marbach and M. Holmes-Cerfon, "Mass changes the diffusion coefficient of particles with ligand-receptor contacts in the overdamped limit," Physical Review Letters 129, 048003 (2022).
40. D. C. Morse, "Theory of constrained brownian motion," in Advances in Chemical Physics (John Wiley & Sons, Ltd, 2003) Chap. 2, pp. 65-189.
41. G. Ciccotti, T. Lelievre, and E. Vanden-Eijnden, "Projection of diffusions on submanifolds: Application to mean force computation," Communications on Pure and Applied Mathematics 61, 371-408 (2008).
42. M. Holmes-Cerfon, "Stochastic disks that roll," Physical Review E 94, 052112 (2016).
43. G. I. Bell, "Models for the specific adhesion of cells to cells: a theoretical framework for adhesion mediated by reversible bonds between cell surface molecules," Science 200, 618-627 (1978).
44. M. Dembo, D. Torney, K. Saxman, and D. Hammer, "The reaction-limited kinetics of membrane-to-surface adhesion and detachment," Proceedings of the Royal Society of London. Series B. Biological Sciences 234, 55-83 (1988).
45. T. Bihr, U. Seifert, and A.-S. Smith, "Multiscale approaches to protein-mediated interactions between membranes-relating microscopic and macroscopic dynamics in radially growing adhesions," New Journal of Physics 17, 083016 (2015).
46. F. Berger, S. Klumpp, and R. Lipowsky, "Force-dependent unbinding rate of molecular motors from stationary optical trap data," Nano Letters 19, 2598-2602 (2019).
47. J. J. Klobusicky, J. Fricks, and P. R. Kramer, "Effective behavior of cooperative and nonidentical molecular motors," Research in the Mathematical Sciences 7, 1-49 (2020).
48. M. Bovyn, B. R. Janakaloti Narayanareddy, S. Gross, and J. Allard, "Diffusion of kinesin motors on cargo can enhance binding and run lengths during intracellular transport," Molecular Biology of the Cell 32, 984-994 (2021).
49. C. P. Goodrich, M. P. Brenner, and K. Ribbeck, "Enhanced diffusion by binding to the crosslinks of a polymer gel," Nature Communications 9, 1-8 (2018).
50. M. Doi, "Stochastic theory of diffusion-controlled reaction," Journal of Physics A: Mathematical and General 9, 1479 (1976).
51. I. C. Agbanusi and S. A. Isaacson, "A comparison of bimolecular reaction models for stochastic reaction-diffusion systems," Bulletin of Mathematical Biology 76, 922-946 (2014).
52. Y. Zhang, L. Clemens, J. Goyette, J. Allard, O. Dushek, and S. A. Isaacson, "The influence of molecular reach and diffusivity on the efficacy of membrane-confined reactions," Biophysical Journal 117, 1189-1201 (2019).
53. G. Pavliotis and A. Stuart, Multiscale Methods: Averaging and Homogenization (Springer Science & Business Media, 2008).
54. C. W. Gardiner et al., Handbook of Stochastic Methods, Vol. 3 (Springer, Berlin, 1985).
55. V. Reiter-Scherer, J. L. Cuellar-Camacho, S. Bhatia, R. Haag, A. Herrmann, D. Lauster, and J. P. Rabe, "Force spectroscopy shows dynamic binding of influenza hemagglutinin and neuraminidase to sialic acid," Biophysical Journal 116, 1037-1048 (2019).
56. J. Møller and R. P. Waagepetersen, "Statistical inference for cox processes," in Spatial Cluster Modelling (Chapman & Hall, 2002).
57. R. E. Spinney, L. Lee, and R. G. Morris, "Geometrical patterning of receptor sites controls kinetics via many-body effects in bivalent systems," Physical Review Research 4, L042028 (2022).
58. J. Lowensohn, L. Stevens, D. Goldstein, and B. M. Mognetti, "Sliding across a surface: particles with fixed and mobile ligands," The Journal of Chemical Physics 156, 164902 (2022).
59. J. P. Lee-Thorp and M. Holmes-Cerfon, "Modeling the relative dynamics of dna-coated colloids," Soft Matter 14, 8147-8159 (2018).
60. Q. Xu, L. Feng, R. Sha, N. Seeman, and P. Chaikin, "Subdiffusion of a sticky particle on a surface," Physical Review Letters 106, 228102 (2011).
61. S. Lalitha Sridhar, J. Dunagin, K. Koo, L. Hough, and F. Vernerey, "Enhanced diffusion by reversible binding to active polymers," Macromolecules 54, 1850-1858 (2021).
62. B. L. Walker and K. A. Newhall, "Numerical computation of effective thermal equilibria in stochastically switching langevin systems," Physical Review E 105, 064113 (2022).
63. M. Müller, D. Lauster, H. H. Wildenauer, A. Herrmann, and S. Block, "Mobility-based quantification of multivalent virus-receptor interactions: New insights into influenza a virus binding mode," Nano Letters 19, 1875-1882 (2019).
64. F. Ziebert and I. M. Kulić, "How influenza's spike motor works," Physical Review Letters 126, 218101 (2021).
65. C. E. Miles, S. D. Lawley, and J. P. Keener, "Analysis of nonprocessive molecular motor transport using renewal reward theory," SIAM Journal on Applied Mathematics 78, 2511-2532 (2018).
66. Y. Park, P. Singh, and T. G. Fai, "Coarse-grained stochastic model of myosin-driven vesicles into dendritic spines," SIAM Journal on Applied Mathematics 82, 793-820 (2022).
67. A. J. Spakowitz and Z.-G. Wang, "End-to-end distance vector distribution with fixed end orientations for the wormlike chain model," Physical Review E 72, 041802 (2005).
68. S. S. Mogre, J. R. Christensen, C. S. Niman, S. L. Reck-Peterson, and E. F. Koslover, "Hitching a ride: mechanics of transport initiation through linker-mediated hitchhiking," Biophysical Journal 118, 1357-1369 (2020).
69. F. Nedelec and D. Foethke, "Collective langevin dynamics of flexible cytoskeletal fibers," New Journal of Physics 9, 427 (2007).
70. C. A. Lugo, E. Saikia, and F. Nedelec, "A typical workflow to simulate cytoskeletal systems with cytosim," arXiv preprint arXiv:2205.13852 (2022).
71. O. T. Cohen and A. G. Hendricks, "Two dominant timescales of cytoskeletal crosslinking in the viscoelastic response of the cytoplasm," Physical Review Research 4, 043167 (2022).
72. E. J. Peterman and J. M. Scholey, "Mitotic microtubule crosslinkers: insights from mechanistic studies," Current Biology 19, R1089-R1094 (2009).
73. A. R. Lamson, C. J. Edelmaier, M. A. Glaser, and M. D. Betterton, "Theory of cytoskeletal reorganization during cross-linker-mediated mitotic spindle assembly," Biophysical Journal 116, 1719-1731 (2019).
74. J. Hannabuss, M. Lera-Ramirez, N. I. Cade, F. J. Fourniol, F. Nédélec, and T. Surrey, "Self-organization of minimal anaphase spindle midzone bundles," Current Biology 29, 2120-2130 (2019).
75. I. Gaska, M. E. Armstrong, A. Alfieri, and S. Forth, "The mitotic crosslinking protein prc1 acts like a mechanical dashpot to resist microtubule sliding," Developmental Cell 54, 367-378 (2020).
76. K. Popov, J. Komianos, and G. A. Papoian, "Medyan: Mechanochemical simulations of contraction and polarity alignment in actomyosin networks," PLoS Computational Biology 12, e1004877 (2016).
77. S. L. Freedman, G. M. Hocky, S. Banerjee, and A. R. Dinner, "Nonequilibrium phase diagrams for actomyosin networks," Soft Matter 14, 7740-7747 (2018).
78. A. Chen, S. A. McKinley, S. Wang, F. Shi, P. J. Mucha, M. G. Forest, and S. K. Lai, "Transient antibody-mucin interactions produce a dynamic molecular shield against viral invasion," Biophysical Journal 106, 2028-2036 (2014).
79. J. Newby, J. L. Schiller, T. Wessler, J. Edelstein, M. G. Forest, and S. K. Lai, "A blueprint for robust crosslinking of mobile species in biogels with weakly adhesive molecular anchors," Nature Communications 8, 1-10 (2017).
80. B. Walker, D. Taylor, J. Lawrimore, C. Hult, D. Adalsteinsson, K. Bloom, and M. G. Forest, "Transient crosslinking kinetics optimize gene cluster interactions," PLoS Computational Biology 15, e1007124 (2019).
81. A. Mogilner and G. Oster, "Force generation by actin polymerization ii: the elastic ratchet and tethered filaments," Biophysical Journal 84, 1591-1605 (2003).
82. E. S. Welf, C. E. Miles, J. Huh, E. Sapoznik, J. Chi, M. K. Driscoll, T. Isogai, J. Noh, A. D. Weems, T. Pohlkamp, et al., "Actin-membrane release initiates cell protrusions," Developmental Cell 55, 723-736 (2020).
felt on the particle, but otherwise (κ λ) the additional friction is absorbed into the bond and not felt.
Overall, our coarse-graining method can be used systematically to derive dynamics at any order in ε. To investigate the relevance of the terms to capture the dynamics, one could conduct simulations with and without these additional terms, and with the full dynamics, and compare their relative relaxation. These simulations may also shed more light on the relevance of these cross-force terms. We leave a more detailed investigation for future work.
| [] |
[
"Network Regression with Graph Laplacians",
"Network Regression with Graph Laplacians"
] | [
"Yidong Zhou ydzhou@ucdavis.edu \nDepartment of Statistics\nDepartment of Statistics\nUniversity of California Davis\n95616CAUSA\n",
"Hans-Georg Müller hgmueller@ucdavis.edu \nUniversity of California Davis\n95616CAUSA\n"
] | [
"Department of Statistics\nDepartment of Statistics\nUniversity of California Davis\n95616CAUSA",
"University of California Davis\n95616CAUSA"
] | [
"Journal of Machine Learning Research"
] | Network data are increasingly available in various research fields, motivating statistical analysis for populations of networks, where a network as a whole is viewed as a data point. The study of how a network changes as a function of covariates is often of paramount interest. However, due to the non-Euclidean nature of networks, basic statistical tools available for scalar and vector data are no longer applicable. This motivates an extension of the notion of regression to the case where responses are network data. Here we propose to adopt conditional Fréchet means implemented as M-estimators that depend on weights derived from both global and local least squares regression, extending the Fréchet regression framework to networks that are quantified by their graph Laplacians. The challenge is to characterize the space of graph Laplacians to justify the application of Fréchet regression. This characterization then leads to asymptotic rates of convergence for the corresponding M-estimators by applying empirical process methods. We demonstrate the usefulness and good practical performance of the proposed framework with simulations and with network data arising from resting-state fMRI in neuroimaging, as well as New York taxi records. | null | [
"https://export.arxiv.org/pdf/2109.02981v3.pdf"
] | 250,072,078 | 2109.02981 | 5af62fb7a3b8a795a95c55b174763e643fbb4dc7 |
Network Regression with Graph Laplacians
2022
Yidong Zhou ydzhou@ucdavis.edu
Department of Statistics
Department of Statistics
University of California Davis
95616CAUSA
Hans-Georg Müller hgmueller@ucdavis.edu
University of California Davis
95616CAUSA
Network Regression with Graph Laplacians
Journal of Machine Learning Research
23 (2022). Submitted 6/22; Revised 11/22; Editor: Michael Mahoney. Keywords: Fréchet mean, graph Laplacian, neuroimaging, power metric, sample of networks
Network data are increasingly available in various research fields, motivating statistical analysis for populations of networks, where a network as a whole is viewed as a data point. The study of how a network changes as a function of covariates is often of paramount interest. However, due to the non-Euclidean nature of networks, basic statistical tools available for scalar and vector data are no longer applicable. This motivates an extension of the notion of regression to the case where responses are network data. Here we propose to adopt conditional Fréchet means implemented as M-estimators that depend on weights derived from both global and local least squares regression, extending the Fréchet regression framework to networks that are quantified by their graph Laplacians. The challenge is to characterize the space of graph Laplacians to justify the application of Fréchet regression. This characterization then leads to asymptotic rates of convergence for the corresponding M-estimators by applying empirical process methods. We demonstrate the usefulness and good practical performance of the proposed framework with simulations and with network data arising from resting-state fMRI in neuroimaging, as well as New York taxi records.
Introduction
Advances in modern science have led to the increasing availability of large collections of networks where a network is viewed as a fundamental unit of observation. Such data are encountered, for example, in the analysis of brain connectivity (Fornito et al., 2016), where interconnections among brain regions are collected for each patient under study, and traffic mobility (Von Ferber et al., 2009), where volumes of traffic among transit stations are recorded for each day. Several recent studies focus on the analysis of collections of networks. For example, a geometric framework for inference concerning a population of networks was introduced and complemented by asymptotic theory for network averages in Ginestet et al. (2017). A similar framework for studying populations of networks, where the graph space is viewed as the quotient space of a Euclidean space with respect to a finite group action, was studied in Calissano et al. (2020), and a flexible Bayesian nonparametric approach for modeling the population distribution of network-valued data has also been developed. Recently various distance-based methods for collections of networks have also been proposed (Donnat and Holmes, 2018; Josephs et al., 2020; Wills and Meyer, 2020; Lunagómez et al., 2021).
A challenging and commonly encountered problem is to model the relationship between network objects and one or more explanatory variables. Such regression problems arise, for instance, when one is interested in varying patterns of brain connectivity networks across covariates of interest, such as age and cognitive ability of subjects. As the elderly population increases, an emerging research topic is brain aging and especially age-related cognitive decline (Ferreira and Busatto, 2013; Sala-Llonch et al., 2014). With advances in neuroimaging techniques, and specifically fMRI, one is able to model the human brain as a network, where nodes correspond to anatomical regions and edges to the functional or structural connections among them. Normal brain aging can then be studied by network-response regression, where the brain connectivity network of a subject is the response and the subject's age the predictor. This setting is different from the time-series case where the emphasis is on modeling a sequence of networks (Kim et al., 2018; Jiang et al., 2020; Dubey and Müller, 2021; Wang et al., 2021).
Although various approaches have been proposed for regression with non-Euclidean responses, such as Jain (2016), Cornea et al. (2017) or Dai and Müller (2018) among others, there are only very few studies on network-response regression. Matrix representations of networks, such as adjacency matrices and graph Laplacians, are commonly used characterizations of the space of networks. A Bayesian network-response regression with a single scalar predictor has been proposed by vectorizing adjacency matrices. This approach is restricted to binary networks and is computationally intensive due to the MCMC procedure involved in the posterior computation. Another approach for network-response regression is the tensor-response regression model (Zhang et al., 2022; Chen and Fan, 2021), where one models adjacency matrices locally by extending generalized linear models to the matrix case and imposes low-rank and sparse assumptions on the coefficients. A general framework for the statistical analysis of populations of networks was developed by embedding the space of graph Laplacians in a Euclidean feature-space, where linear regression was applied using extrinsic methods (Severn et al., 2022). Nonparametric network-response regression based on the same framework was proposed subsequently by adopting Nadaraya-Watson kernel estimators (Severn et al., 2021). However, embedding methods suffer from losing much of the relational information due to the non-Euclidean structure of the space of networks and assigning nonzero probability to points in the embedding space that do not represent networks. Another practical issue in this context is the need to estimate a covariance matrix which has a very large number of parameters. Based on the adjacency matrix of a network, Calissano et al. (2022) proposed a network-response regression model by implementing linear regression in the Euclidean space and then projecting back to the "graph space" through a quotient map (Calissano et al., 2020). This model is widely applicable for various kinds of unlabeled networks, but for the regression case there is no supporting theory and this approach may not be suitable for labeled networks, which are prevalent in applications, for example brain connectivity networks (Fornito et al., 2016).
Figure 1: A toy example where networks $G_1, G_2, G_3, G_4$, each with 3 nodes, are independently observed, associated with one-dimensional covariates $X_1 = 2$, $X_2 = 4$, $X_3 = 6$, $X_4 = 8$. The weight of each edge is marked beside the corresponding edge.

To circumvent the problems of embedding methods and provide theoretical support for network-response regression, we introduce a unifying intrinsic framework for network-response regression, by viewing each network as a random object (Müller, 2016) lying in the space of graph Laplacians and adopting conditional Fréchet means (Fréchet, 1948). Specifically, let G = (V, E) be a network with a set of nodes V and a set of edge weights E. Under the assumption that G is labeled and simple (i.e., there are no self-loops or multi-edges), one can uniquely associate each network G with its graph Laplacian L. Consider random pairs $(X, L) \sim F$, where X takes values in $\mathbb{R}^p$, L is a graph Laplacian and F indicates a suitable probability law. We investigate the dependence of L on covariates of interest X by adopting the general framework of Fréchet regression (Petersen and Müller, 2019; Chen and Müller, 2022). A toy example is introduced in Figure 1 to illustrate the idea more comprehensively. The relationship between the observed networks and associated covariates in this toy example will be investigated using the proposed network-response regression. The contributions of this paper are as follows: First, we provide a precise characterization of the space of graph Laplacians, laying a foundation for the analysis of populations of networks represented as graph Laplacians, where we adopt the power metric, with the Frobenius metric as a special case. Second, we demonstrate that this characterization makes it possible to adopt Fréchet regression for graph Laplacians. Third, the resulting network regression approach is shown to be competitive in finite sample situations when compared with previous network regression approaches (Severn et al., 2022, 2021; Zhang et al., 2022). Fourth, our methods are supported by theory, including pointwise and uniform rates of convergence. Fifth, we demonstrate the utility and flexibility of the proposed network-response regression model with fMRI data obtained from the ADNI neuroimaging study and also with New York taxi data.
The organization of this paper is as follows. In Section 2, we provide a precise characterization of the space of graph Laplacians and discuss metrics for this space. The proposed regression model for network responses and vector covariates is introduced in Section 3. Pointwise and uniform rates of convergence for the estimators are established in Section 4. Computational details and simulation results for a sample of networks are presented in Section 5. The proposed framework is illustrated in Section 6 using the New York yellow taxi records and rs-fMRI data from the ADNI study. Finally, we conclude with a brief discussion presented in Section 7. Detailed theoretical proofs and auxiliary results are in the Appendix.
Preliminaries
Characterization of the Space of Graph Laplacians
Let G = (V, E) be a network with a set of nodes V = {v 1 , . . . , v m } and a set of edge weights E = {w ij : w ij ≥ 0, i, j = 1, . . . , m}, where w ij = 0 indicates v i and v j are unconnected. Some basic and mild restrictions on the networks G we consider here are as follows.
(C1) G is simple, i.e., there are no self-loops or multi-edges.
(C2) G is weighted, undirected, and labeled.
(C3) The edge weights w ij are bounded above by W ≥ 0, i.e., 0 ≤ w ij ≤ W .
Condition (C1) is required for the one-to-one correspondence between a network G and its graph Laplacian, which is our central tool to represent networks. Condition (C2) guarantees that the adjacency matrix A = (w ij ) is symmetric, i.e., w ij = w ji for all i, j. Condition (C3) puts a limit on the maximum strength of the connection between two nodes and prevents extremes. Any network satisfying Conditions (C1)-(C3) can be uniquely associated with its graph Laplacian L = (l ij ),
$$l_{ij} = \begin{cases} -w_{ij}, & i \neq j, \\ \sum_{k \neq i} w_{ik}, & i = j, \end{cases}$$
for i, j = 1, . . . , m, which motivates characterizing the space of networks through the corresponding space of graph Laplacians given by
$$\mathcal{L}_m = \{L = (l_{ij}) : L = L^T;\; L\mathbf{1}_m = \mathbf{0}_m;\; -W \le l_{ij} \le 0 \text{ for } i \neq j\}, \tag{1}$$
where $\mathbf{1}_m$ and $\mathbf{0}_m$ are the m-vectors of ones and zeroes, respectively. Another well-known property of graph Laplacians is their positive semi-definiteness, $x^T L x \ge 0$ for all $x \in \mathbb{R}^m$, which immediately follows if $L \in \mathcal{L}_m$, as any such L is diagonally dominant, i.e., $l_{ii} = \sum_{j \neq i} |l_{ij}|$ (De Klerk, 2006, p. 232). A precise geometric characterization of the space of graph Laplacians with fixed rank can be found in Ginestet et al. (2017). However, the fixed rank assumption is not practicable when considering network-response regression where the rank often will change in dependence on predictor levels. This necessitates and motivates the study of the space of graph Laplacians without rank restrictions.
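To make the characterization (1) concrete, the following base R sketch (the helper names are ours, not taken from the authors' repository) builds a graph Laplacian from a weight matrix and checks the defining properties of $\mathcal{L}_m$ numerically:

```r
# Graph Laplacian L = D - A of a simple, undirected, weighted network,
# where A is a symmetric m x m weight matrix with zero diagonal.
graph_laplacian <- function(A) diag(rowSums(A)) - A

# Check membership in L_m as per (1), up to a numerical tolerance.
in_Lm <- function(L, W, tol = 1e-10) {
  off <- L[row(L) != col(L)]
  max(abs(L - t(L))) < tol &&            # L = L^T
    max(abs(rowSums(L))) < tol &&        # L 1_m = 0_m
    all(off >= -W - tol & off <= tol)    # -W <= l_ij <= 0 for i != j
}

A <- matrix(c(0, 0.3, 0.5,
              0.3, 0, 0.0,
              0.5, 0, 0), 3, 3, byrow = TRUE)
in_Lm(graph_laplacian(A), W = 1)         # TRUE
```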
Proposition 1 The space $\mathcal{L}_m$, defined in (1), is a bounded, closed, and convex subset in $\mathbb{R}^{m^2}$ of dimension m(m − 1)/2.
All proofs are given in Appendix B. The convexity and closedness of L m as shown in Proposition 1 ensures the existence and uniqueness of projections onto L m (Deutsch, 2012, chap. 3) that we will use in the proposed regression approach.
Choice of Metrics
One can choose between several metrics for the space of graph Laplacians L m . A common choice is the Frobenius metric, defined as
$$d_F(L_1, L_2) = \|L_1 - L_2\|_F = \{\mathrm{tr}[(L_1 - L_2)^T(L_1 - L_2)]\}^{1/2},$$
which corresponds to the usual Euclidean metric if the matrix is viewed as a vector of length m 2 . While d F is the simplest of the possible metrics on the space of graph Laplacians, a downside of d F is the swelling effect, notably for positive semi-definite matrices (Arsigny et al., 2007;Lin, 2019).
Denote the space of real symmetric positive semi-definite m × m matrices by S + m . Note that the space of graph Laplacians L m is a subset of S + m . Let U ΛU T be the usual spectral decomposition of S ∈ S + m , with U ∈ O m an orthogonal matrix and Λ diagonal with nonnegative entries. Defining matrix power maps
$$F_\alpha(S) = S^\alpha = U \Lambda^\alpha U^T : S_m^+ \to S_m^+, \tag{2}$$
where the power α > 0 is a constant and noting that F α is bijective with inverse F 1/α , the power metric (Dryden et al., 2009) between graph Laplacians is
$$d_{F,\alpha}(L_1, L_2) = d_F[F_\alpha(L_1), F_\alpha(L_2)].$$
For α = 1, $d_{F,\alpha}$ reduces to the Frobenius metric $d_F$. For larger α, there is more emphasis on larger entries of graph Laplacians, while for small α, large and small entries are treated more evenly and there is less sensitivity to outliers. In particular, $d_{F,\alpha}$ with 0 < α < 1 is associated with a reduced swelling effect, while $d_{F,\alpha}$ with α > 1 in contrast will amplify it and thus often will be unfavorable. For α = 1/2 the square root metric $d_{F,1/2}$ is a canonical choice that has been widely studied (Dryden et al., 2009, 2010; Zhou et al., 2016; Severn et al., 2022; Tavakoli et al., 2019). For example, Dryden et al. (2010) examined different values of α in the context of diffusion tensor imaging and ended up with the choice α = 1/2, and Tavakoli et al. (2019) also demonstrated the advantages of using $d_{F,1/2}$ when spatially modeling linguistic object data. In our applications, we likewise focus on $d_{F,1/2}$ due to its promising properties and compare its performance with the Frobenius metric $d_F$; see also Petersen and Müller (2016) regarding the choice of α.
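For illustration, the matrix power map (2) and the resulting power metric can be computed via the spectral decomposition in a few lines of base R (a sketch with our own function names):

```r
# Matrix power map F_alpha(S) = U diag(lambda^alpha) U^T as per (2).
mat_power <- function(S, alpha) {
  eig <- eigen(S, symmetric = TRUE)
  lam <- pmax(eig$values, 0)             # guard against round-off negatives
  eig$vectors %*% diag(lam^alpha, nrow = length(lam)) %*% t(eig$vectors)
}

# Power metric d_{F,alpha}; alpha = 1 recovers d_F, alpha = 1/2 gives the
# square root metric used in the data applications.
d_power <- function(L1, L2, alpha = 1/2) {
  norm(mat_power(L1, alpha) - mat_power(L2, alpha), type = "F")
}
```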
Network Regression
Consider a random object Y ∼ F Y taking values in a metric space (Ω, d). Under appropriate conditions, the Fréchet mean and Fréchet variance of random objects in metric spaces (Fréchet, 1948), as generalizations of usual notions of mean and variance, are defined as
$$\omega_\oplus = \operatorname*{argmin}_{\omega \in \Omega} E[d^2(Y, \omega)], \qquad V_\oplus = E[d^2(Y, \omega_\oplus)],$$
where the existence and uniqueness of the minimizer depends on structural properties of the underlying metric space and is guaranteed for Hadamard spaces. A general approach for regression of metric space-valued responses on Euclidean predictors is Fréchet regression, for which both linear and locally linear versions have been developed (Petersen and Müller, 2019;Chen and Müller, 2022), and which we adopt as a tool to model regression relationships between responses which are L m -valued, i.e., graph Laplacians of fixed dimension m, and vectors of real-valued predictors. Equipped with a proper metric d, L m becomes a metric space (L m , d), and an analysis of the properties of this space is key to establish theory for the proposed network regression models.
Suppose (X, L) ∼ F is a random pair, where X and L take values in R p and L m ≡ (L m , d), respectively, and F is the joint distribution of (X, L) on R p × L m . We denote the marginal distributions of X and L as F X and F L , respectively, and assume that µ = E(X) and Σ = var(X) exist, with Σ positive definite. The conditional distributions F X|L and F L|X are also assumed to exist. The conditional Fréchet mean, which corresponds to the regression function of L given X = x, is
$$m(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} M(\omega, x), \qquad M(\cdot, x) = E[d^2(L, \cdot) \mid X = x], \tag{3}$$
where M(·, x) is referred to as the conditional Fréchet function. Observing that the conventional conditional mean satisfies $E(Y \mid X = x) = \operatorname{argmin}_{y \in \mathbb{R}} E[(Y - y)^2 \mid X = x]$ for real-valued responses Y, the conditional Fréchet mean can be viewed as a natural extension of the notion of a conditional mean to network-valued and other metric space-valued responses, where $(Y - y)^2$ is replaced by $d^2(L, \omega)$. Further suppose that $(X_k, L_k) \sim F$, k = 1, . . . , n, are independent and define

$$\bar{X} = n^{-1} \sum_{k=1}^n X_k, \qquad \hat{\Sigma} = n^{-1} \sum_{k=1}^n (X_k - \bar{X})(X_k - \bar{X})^T.$$
The global and local network regressions are generalizations of multiple linear regression and local linear regression to network-valued responses. The key idea is to characterize the original regression functions as minimizers of weighted least squares problems and then to leverage the weights to define a weighted barycenter as an M-estimator, using the metric that is adopted for the space of graph Laplacians.
The global network regression given X = x is defined as
$$m_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} M_G(\omega, x), \qquad M_G(\cdot, x) = E[s_G(x)\, d^2(L, \cdot)], \tag{4}$$
where
$s_G(x) = 1 + (X - \mu)^T \Sigma^{-1}(x - \mu)$. The corresponding sample version is

$$\hat{m}_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} \hat{M}_G(\omega, x), \qquad \hat{M}_G(\cdot, x) = \frac{1}{n} \sum_{k=1}^n s_{kG}(x)\, d^2(L_k, \cdot), \tag{5}$$
where $s_{kG}(x) = 1 + (X_k - \bar{X})^T \hat{\Sigma}^{-1}(x - \bar{X})$. For local network regression, we present details only for the case of a scalar predictor X ∈ R, where the extension to $X \in \mathbb{R}^p$ with p > 1 is straightforward but may be subject to the curse of dimensionality. Consider a smoothing kernel K(·) corresponding to a one-dimensional probability density and $K_h(\cdot) = h^{-1}K(\cdot/h)$ with h a bandwidth. The local network regression given X = x is
$$m_{L,h}(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} M_{L,h}(\omega, x), \qquad M_{L,h}(\cdot, x) = E[s_L(x, h)\, d^2(L, \cdot)], \tag{6}$$

where $s_L(x, h) = K_h(X - x)[\mu_2 - \mu_1(X - x)]/\sigma_0^2$ with $\mu_j = E[K_h(X - x)(X - x)^j]$
for j = 0, 1, 2 and $\sigma_0^2 = \mu_0\mu_2 - \mu_1^2$. The corresponding sample version is
$$\hat{m}_{L,n}(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} \hat{M}_{L,n}(\omega, x), \qquad \hat{M}_{L,n}(\cdot, x) = \frac{1}{n} \sum_{k=1}^n s_{kL}(x, h)\, d^2(L_k, \cdot). \tag{7}$$
Here $s_{kL}(x, h) = K_h(X_k - x)[\hat{\mu}_2 - \hat{\mu}_1(X_k - x)]/\hat{\sigma}_0^2$, where $\hat{\mu}_j = n^{-1}\sum_{k=1}^n K_h(X_k - x)(X_k - x)^j$ for j = 0, 1, 2 and $\hat{\sigma}_0^2 = \hat{\mu}_0\hat{\mu}_2 - \hat{\mu}_1^2$.
The dependence on n is through the bandwidth sequence $h = h_n$. The local network regression estimate $\hat{m}_{L,n}(x)$, similar to global network regression, is an M-estimator that depends on the weights $s_{kL}(x, h)$.
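A minimal base R sketch of the empirical local weights for a scalar predictor, using a Gaussian kernel as one possible choice of K (the function name and kernel choice are ours):

```r
# Empirical local regression weights s_kL(x, h) for a scalar predictor,
# here with a Gaussian kernel; the weights average to 1 over the sample.
local_weights <- function(X, x, h) {
  Kh   <- dnorm((X - x) / h) / h
  mu0  <- mean(Kh)
  mu1  <- mean(Kh * (X - x))
  mu2  <- mean(Kh * (X - x)^2)
  sig0 <- mu0 * mu2 - mu1^2
  Kh * (mu2 - mu1 * (X - x)) / sig0
}
```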
For the case where $X \in \mathbb{R}^p$ with p > 1, the weight function for local network regression takes a slightly different form,

$$s_L(x, h) = \frac{1}{\mu_0 - \mu_1^T \mu_2^{-1} \mu_1} K_h(X - x)[1 - \mu_1^T \mu_2^{-1}(X - x)],$$

where $\mu_0 = E[K_h(X - x)]$, $\mu_1 = E[K_h(X - x)(X - x)]$, and $\mu_2 = E[K_h(X - x)(X - x)(X - x)^T]$ is nondegenerate.
The sample version $s_{kL}(x, h)$ can be defined similarly. While global network regression relies on stronger model assumptions, it does not require a tuning parameter and is applicable for categorical predictors. Local network regression, by contrast, is more flexible and may be preferable as long as the regression relation is smooth, the covariate dimension is low and covariates are continuous. Consider the toy example in Figure 1. In the case of global network regression, a simple calculation shows that the weight function is $s_{kG}(x) = 1 + (2k - 5)(x - 5)/5$ for k = 1, 2, 3, 4. For local network regression, one can also obtain an explicit form of the weight function $s_{kL}(x, h)$ if a smoothing kernel K(·) and a bandwidth h are specified. Endowed with the Frobenius metric $d_F$, the global and local network regression estimates for the toy example are then obtained as per (5) and (7).
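Since $\mathcal{L}_m$ is convex, the unconstrained minimizer of the weighted criterion (5) under $d_F$ is simply the weighted average $n^{-1}\sum_k s_{kG}(x)L_k$, which is then projected onto $\mathcal{L}_m$. For the toy example, the global weights can be computed as follows (a sketch; the helper names are ours):

```r
# Global Fréchet regression weights for the toy example of Figure 1:
# X = (2, 4, 6, 8), so that s_kG(x) = 1 + (2k - 5)(x - 5)/5, k = 1,...,4.
global_weights <- function(X, x) {
  Xbar <- mean(X)
  v <- mean((X - Xbar)^2)                # sample variance (1/n version)
  1 + (X - Xbar) * (x - Xbar) / v
}
X <- c(2, 4, 6, 8)
s <- global_weights(X, x = 3)            # weights for prediction at x = 3
# With graph Laplacians stored in a list L, the unconstrained minimizer of
# (5) under d_F is the weighted average, to be projected onto L_m afterwards:
# B <- Reduce(`+`, Map(`*`, s, L)) / length(L)
```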
Asymptotic Properties
We establish the consistency of both global and local network regression estimates as per (5) and (7), using the metrics discussed in Section 2.2. Both pointwise and uniform rates of convergence are obtained under the framework of M-estimation. For both the Frobenius metric $d_F$ and the power metric $d_{F,\alpha}$ with 0 < α < 1, the pointwise rates of convergence are optimal for both global and local network regression in the sense that, as generalizations of multiple linear regression and local linear regression for the case of Euclidean responses, they correspond to the known optimal rates for the Euclidean special case. These rates are validated by simulations in Section 5.2, where we find that the empirical rates of convergence with increasing sample size are entirely consistent with the theoretical predictions. We first consider the case where $\mathcal{L}_m$ is endowed with the Frobenius metric $d_F$. The convexity and closedness of $\mathcal{L}_m$ implies that the minimizers $\hat{m}_G(x)$ and $\hat{m}_{L,n}(x)$ defined in (5) and (7) exist and are unique for any x. The following result formalizes the consistency of the proposed global network regression estimate and provides rates of convergence, where $\|\cdot\|_E$ denotes the Euclidean norm in $\mathbb{R}^p$.
Theorem 2 Adopting the space of graph Laplacians $\mathcal{L}_m$ endowed with the Frobenius metric $d_F$, for a fixed $x \in \mathbb{R}^p$, it holds for $m_G(x)$ and $\hat{m}_G(x)$ as per (4) and (5) that

$$d_F[m_G(x), \hat{m}_G(x)] = O_p(n^{-1/2}). \tag{8}$$
Furthermore, for a given B > 0 and any given ε > 0,
$$\sup_{\|x\|_E \le B} d_F[m_G(x), \hat{m}_G(x)] = O_p(n^{-1/[2(1+\varepsilon)]}).$$
The pointwise rate of convergence for conventional multiple linear regression is well known to be O p (n −1/2 ). The pointwise rate of convergence in Theorem 2 is thus optimal in the sense that it remains the same as the optimal rate of multiple linear regression. Analogous to the Euclidean case, with its well-known bias-variance trade-off for nonparametric regression, the rate of convergence for the local network regression estimator depends both on the metric equivalent of bias, which is d F [m(x), m L,h (x)], as well as on the stochastic deviation d F [m L,h (x),m L,n (x)]; see the following result. The kernel and distributional assumptions (A1)-(A4) in the Appendix are standard for local regression estimation.
Theorem 3 If the space of graph Laplacians $\mathcal{L}_m$ is endowed with the Frobenius metric $d_F$ and Assumptions (A1), (A2) hold, then for a fixed x ∈ R and $m(x)$, $m_{L,h}(x)$, and $\hat{m}_{L,n}(x)$ as per (3), (6), and (7),

$$d_F[m(x), m_{L,h}(x)] = O(h^2), \qquad d_F[m_{L,h}(x), \hat{m}_{L,n}(x)] = O_p[(nh)^{-1/2}],$$

and with $h \sim n^{-1/5}$,

$$d_F[m(x), \hat{m}_{L,n}(x)] = O_p(n^{-2/5}). \tag{9}$$

Under Assumptions (A3), (A4) for a closed interval $\mathcal{T} \subset \mathbb{R}$, if $h \to 0$, $nh^2(-\log h)^{-1} \to \infty$ as $n \to \infty$, then for any ε > 0,

$$\sup_{x \in \mathcal{T}} d_F[m(x), m_{L,h}(x)] = O(h^2),$$
$$\sup_{x \in \mathcal{T}} d_F[m_{L,h}(x), \hat{m}_{L,n}(x)] = O_p(\max\{(nh^2)^{-1/(2+\varepsilon)}, [nh^2(-\log h)^{-1}]^{-1/2}\}),$$

and with $h \sim n^{-1/(6+2\varepsilon)}$,

$$\sup_{x \in \mathcal{T}} d_F[m(x), \hat{m}_{L,n}(x)] = O_p(n^{-1/(3+\varepsilon)}).$$
The proofs of these results rely on the fact that $\mathcal{L}_m$ is convex, bounded, and of finite dimension. We represent global and local network regressions as projections $P_{\mathcal{L}_m}$ onto $\mathcal{L}_m$,

$$m_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[B_G(x), \omega] = P_{\mathcal{L}_m}[B_G(x)], \qquad m_{L,h}(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[B_{L,h}(x), \omega] = P_{\mathcal{L}_m}[B_{L,h}(x)],$$

where $B_G(x) = E[s_G(x) L]$ and $B_{L,h}(x) = E[s_L(x, h) L]$.
The corresponding sample versions are

$$\hat{m}_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[\hat{B}_G(x), \omega] = P_{\mathcal{L}_m}[\hat{B}_G(x)], \qquad \hat{m}_{L,n}(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[\hat{B}_{L,n}(x), \omega] = P_{\mathcal{L}_m}[\hat{B}_{L,n}(x)],$$
where $\hat{B}_G(x) = n^{-1}\sum_{k=1}^n s_{kG}(x) L_k$ and $\hat{B}_{L,n}(x) = n^{-1}\sum_{k=1}^n s_{kL}(x, h) L_k$. Next considering the power metric $d_{F,\alpha}$, recall that the graph Laplacian L is centered and the off-diagonal entries are bounded by W as per (1). By the equivalence of the Frobenius norm and the $l_2$-operator norm in $\mathbb{R}^{m^2}$, it immediately follows that the largest eigenvalue of L is bounded, say by D, a nonnegative constant depending on m and W. Define the embedding space $\mathcal{M}_m$ as
$$\mathcal{M}_m = \{S \in S_m^+ : \lambda_1(S) \le D^\alpha\}, \tag{10}$$
where $\lambda_1(S)$ is the largest eigenvalue of S. The image of $\mathcal{L}_m$ under the matrix power map $F_\alpha$, i.e., $F_\alpha(\mathcal{L}_m)$, is a subset of $\mathcal{M}_m$ as a consequence of the bound D on the largest eigenvalue of each graph Laplacian. After applying the matrix power map $F_\alpha$, the image of $\mathcal{L}_m$ is embedded in $\mathcal{M}_m$, where network regression is carried out using the Frobenius metric $d_F$. When transforming back from the embedding space $\mathcal{M}_m$ to $\mathcal{L}_m$, we first apply the inverse matrix power map $F_{1/\alpha}$ and then a projection $P_{\mathcal{L}_m}$ onto $\mathcal{L}_m$. The general idea involving embedding, mapping and projections is shown schematically in Figure 3.

Figure 3: Schematic of the embedding approach, with $\mathcal{L}_m$ and $\mathcal{M}_m$ as per (1) and (10), respectively. Network regression is carried out using the Frobenius metric $d_F$ in the embedding space $\mathcal{M}_m$. Here $F_\alpha$ is the matrix power map defined in (2) and $F_{1/\alpha}$ its inverse, while $F_\alpha(\mathcal{L}_m)$ is the image of $\mathcal{L}_m$ under $F_\alpha$ and $F_{1/\alpha}(\mathcal{M}_m)$ is the image of $\mathcal{M}_m$ under $F_{1/\alpha}$. The identity map Id embeds $F_\alpha(\mathcal{L}_m)$ into $\mathcal{M}_m$, where $P_{\mathcal{L}_m}$ is the projection onto $\mathcal{L}_m$.

The global network regression in the embedding space $\mathcal{M}_m$ using the Frobenius metric $d_F$ also uses the projection $P_{\mathcal{M}_m}$ onto $\mathcal{M}_m$,

$$m_G^\alpha(x) = \operatorname*{argmin}_{\omega \in \mathcal{M}_m} d_F^2[B_G^\alpha(x), \omega] = P_{\mathcal{M}_m}[B_G^\alpha(x)], \tag{11}$$

where $B_G^\alpha(x) = E[s_G(x) F_\alpha(L)]$.
Then the global network regression in the space of graph Laplacians L m using the power metric d F,α is obtained by applying the inverse matrix power map F 1/α and projection P Lm successively to m α G (x), i.e.,
$$m_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2\{F_{1/\alpha}[m_G^\alpha(x)], \omega\} = P_{\mathcal{L}_m}\{F_{1/\alpha}[m_G^\alpha(x)]\}. \tag{12}$$
The corresponding sample versions are

$$\hat{m}_G^\alpha(x) = \operatorname*{argmin}_{\omega \in \mathcal{M}_m} d_F^2[\hat{B}_G^\alpha(x), \omega] = P_{\mathcal{M}_m}[\hat{B}_G^\alpha(x)], \qquad \hat{m}_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2\{F_{1/\alpha}[\hat{m}_G^\alpha(x)], \omega\} = P_{\mathcal{L}_m}\{F_{1/\alpha}[\hat{m}_G^\alpha(x)]\}, \tag{13}$$

where $\hat{B}_G^\alpha(x) = n^{-1}\sum_{k=1}^n s_{kG}(x) F_\alpha(L_k)$.
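Schematically, the estimate (13) amounts to a weighted average in the embedding space, eigenvalue truncation for $P_{\mathcal{M}_m}$, the inverse power map, and a final projection onto $\mathcal{L}_m$; a base R sketch omitting that last quadratic-programming step (the helper names are ours):

```r
# Sketch of m_hat_G(x) under the power metric d_{F,alpha}, eqs. (11)-(13);
# the final projection P_Lm via the quadratic program (19) is omitted here.
power_global_fit <- function(Ls, X, x, alpha, D) {
  fpow <- function(S, p, hi = Inf) {     # F_p with eigenvalue clamping
    eig <- eigen((S + t(S)) / 2, symmetric = TRUE)
    lam <- pmin(pmax(eig$values, 0), hi)
    eig$vectors %*% diag(lam^p, nrow = length(lam)) %*% t(eig$vectors)
  }
  Xbar <- mean(X)
  s <- 1 + (X - Xbar) * (x - Xbar) / mean((X - Xbar)^2)  # weights s_kG(x)
  B <- Reduce(`+`, Map(`*`, s, lapply(Ls, fpow, p = alpha))) / length(Ls)
  fpow(B, p = 1 / alpha, hi = D^alpha)   # P_Mm, then F_{1/alpha}
}
```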
Similarly for the local network regression,
$$m(x) = P_{\mathcal{L}_m}(F_{1/\alpha}\{P_{\mathcal{M}_m}[B^\alpha(x)]\}), \tag{14}$$
$$m_{L,h}(x) = P_{\mathcal{L}_m}(F_{1/\alpha}\{P_{\mathcal{M}_m}[B_{L,h}^\alpha(x)]\}), \tag{15}$$
$$\hat{m}_{L,n}(x) = P_{\mathcal{L}_m}(F_{1/\alpha}\{P_{\mathcal{M}_m}[\hat{B}_{L,n}^\alpha(x)]\}), \tag{16}$$

where $B^\alpha(x) = E[F_\alpha(L) \mid X = x]$, $B_{L,h}^\alpha(x) = E[s_L(x, h) F_\alpha(L)]$ and $\hat{B}_{L,n}^\alpha(x) = n^{-1}\sum_{k=1}^n s_{kL}(x, h) F_\alpha(L_k)$. To obtain rates of convergence for power metrics $d_{F,\alpha}$, an auxiliary result on the Hölder continuity of the matrix power map $F_\alpha$ is needed. For U a set in $\mathbb{R}^{n_1}$, E a nonempty subset of U and 0 < β ≤ 1, a function $g : U \to \mathbb{R}^{n_2}$ is uniformly Hölder continuous with order β and coefficient H on E, i.e., (β, H)-Hölder continuous, if there exists H ≥ 0 such that
$$\|g(x) - g(y)\|_F \le H \|x - y\|_F^\beta,$$
for all x, y ∈ E. For β = 1 the function g is said to be H-Lipschitz continuous on E.
Proposition 4 For $E_m = \{S \in S_m^+ : \lambda_1(S) \le C\}$, where $\lambda_1(S)$ is the largest eigenvalue of S and C ≥ 0 is a constant, the matrix power map $F_\alpha$ as per (2) is

(i) $(\alpha, m^{(1-\alpha)/2})$-Hölder continuous on $S_m^+$ for 0 < α < 1;

(ii) $\alpha C^{\alpha-1}$-Lipschitz continuous on $E_m$ for α ≥ 1.
Proposition 4 leads to rate of convergence results for the global and local network regression estimators defined in (13) and (16), where the population targets for the global and local network regression under the power metric d F,α are defined in (12) and (14).
Theorem 5 If the space of graph Laplacians $\mathcal{L}_m$ is endowed with the power metric $d_{F,\alpha}$, for any fixed $x \in \mathbb{R}^p$, it holds for $m_G(x)$ and $\hat{m}_G(x)$ as per (12) and (13) that

$$d_F[m_G(x), \hat{m}_G(x)] = \begin{cases} O_p(n^{-1/2}), & 0 < \alpha \le 1, \\ O_p(n^{-1/(2\alpha)}), & \alpha > 1. \end{cases} \tag{17}$$
Furthermore, for any B > 0 and any ε > 0
$$\sup_{\|x\|_E \le B} d_F[m_G(x), \hat{m}_G(x)] = \begin{cases} O_p(n^{-1/[2(1+\varepsilon)]}), & 0 < \alpha \le 1, \\ O_p(n^{-1/[2(1+\varepsilon)\alpha]}), & \alpha > 1. \end{cases}$$
Theorem 6 Suppose the space of graph Laplacians $\mathcal{L}_m$ is endowed with the power metric $d_{F,\alpha}$. Under Assumptions (A1), (A2), for a fixed x ∈ R, it holds for $m(x)$, $m_{L,h}(x)$, and $\hat{m}_{L,n}(x)$ as per (14), (15), and (16), respectively, that

$$d_F[m(x), m_{L,h}(x)] = \begin{cases} O(h^2), & 0 < \alpha \le 1, \\ O(h^{2/\alpha}), & \alpha > 1, \end{cases} \qquad d_F[m_{L,h}(x), \hat{m}_{L,n}(x)] = \begin{cases} O_p[(nh)^{-1/2}], & 0 < \alpha \le 1, \\ O_p[(nh)^{-1/(2\alpha)}], & \alpha > 1, \end{cases}$$

and with $h \sim n^{-1/5}$,

$$d_F[m(x), \hat{m}_{L,n}(x)] = \begin{cases} O_p(n^{-2/5}), & 0 < \alpha \le 1, \\ O_p(n^{-2/(5\alpha)}), & \alpha > 1. \end{cases} \tag{18}$$
If Assumptions (A3), (A4) hold for a given closed interval $\mathcal{T} \subset \mathbb{R}$ and $h \to 0$, $nh^2(-\log h)^{-1} \to \infty$ as $n \to \infty$, then for any ε > 0,

$$\sup_{x \in \mathcal{T}} d_F[m(x), m_{L,h}(x)] = \begin{cases} O(h^2), & 0 < \alpha \le 1, \\ O(h^{2/\alpha}), & \alpha > 1, \end{cases}$$
$$\sup_{x \in \mathcal{T}} d_F[m_{L,h}(x), \hat{m}_{L,n}(x)] = \begin{cases} O_p(\max\{(nh^2)^{-1/(2+\varepsilon)}, [nh^2(-\log h)^{-1}]^{-1/2}\}), & 0 < \alpha \le 1, \\ O_p(\max\{(nh^2)^{-1/[(2+\varepsilon)\alpha]}, [nh^2(-\log h)^{-1}]^{-1/(2\alpha)}\}), & \alpha > 1, \end{cases}$$

and with $h \sim n^{-1/(6+2\varepsilon)}$,

$$\sup_{x \in \mathcal{T}} d_F[m(x), \hat{m}_{L,n}(x)] = \begin{cases} O_p(n^{-1/(3+\varepsilon)}), & 0 < \alpha \le 1, \\ O_p(n^{-1/[(3+\varepsilon)\alpha]}), & \alpha > 1. \end{cases}$$
For the power metric $d_{F,\alpha}$, rates of convergence for both the global and local network regression depend on the choice of power α. Specifically, rates of convergence are the same as those for the Frobenius metric $d_F$ if 0 < α < 1. When α > 1, we observe that rates of convergence are sub-optimal, as α appears in the denominator of the exponent. Therefore, the power metric $d_{F,\alpha}$ with α > 1 is generally not recommended, while the use of a power metric with 0 < α < 1 does not affect the convergence rates and thus remains a matter of interpretation and other application-specific considerations. The convexity of the target space is crucial in the proof of existence and uniqueness of the minimizers in (3)-(7). Since the image $F_\alpha(\mathcal{L}_m)$ is not guaranteed to be convex, so that uniqueness over it cannot be ensured, we embed $F_\alpha(\mathcal{L}_m)$ in the convex set $\mathcal{M}_m$ as defined in (10), which ensures uniqueness of the minimizers.
Implementation and Simulations
Implementation Details
Implementation of the proposed method involves two projections P Lm and P Mm as mentioned in Section 4. Due to the convexity and closedness of L m and M m , P Lm and P Mm exist and are unique.
To implement $P_{\mathcal{L}_m}(B)$ where $B = (b_{ij})$ is a constant m × m matrix, one needs to solve

$$\begin{aligned} \text{minimize} \quad & f(L) = d_F^2(B, L) = \sum_{i=1}^m \sum_{j=1}^m (b_{ij} - l_{ij})^2 \\ \text{subject to} \quad & l_{ij} - l_{ji} = 0, \quad i, j = 1, \ldots, m, \\ & \textstyle\sum_{j=1}^m l_{ij} = 0, \quad i = 1, \ldots, m, \\ & -W \le l_{ij} \le 0, \quad i, j = 1, \ldots, m;\ i \neq j, \end{aligned} \tag{19}$$
where $L = (l_{ij})$ is a graph Laplacian. The objective function f(L) is convex quadratic since its Hessian $2I_{m^2}$ is strictly positive definite. The three constraints, corresponding to the definition of $\mathcal{L}_m$ in (1), are all linear, so that (19) is a convex quadratic optimization problem, for the practical solution of which we use the osqp (Stellato et al., 2020) package in R (R Core Team, 2022). Note that $\mathcal{M}_m$ is a bounded subset of the positive semi-definite cone $S_m^+$. To implement $P_{\mathcal{M}_m}$, we first project on $S_m^+$ and then truncate the eigenvalues to ensure that the largest eigenvalue is less than or equal to $D^\alpha$. The projection $P_{S_m^+}$ on $S_m^+$ is straightforward and has been studied in Boyd et al. (2004, p. 399). The unique solution for
$P_{S_m^+}(B)$ is $\sum_{i=1}^m \max\{0, \lambda_i\} v_i v_i^T$, where $B = \sum_{i=1}^m \lambda_i v_i v_i^T$ is the spectral decomposition of a constant m × m symmetric matrix B.
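A base R sketch of this eigenvalue-truncation step (our own implementation; the projection $P_{\mathcal{L}_m}$ via the quadratic program (19) is handled separately, e.g., with osqp):

```r
# Projection onto M_m as per (10): clamp the eigenvalues of the symmetric
# part of B to the interval [0, D^alpha] in its own eigenbasis.
proj_Mm <- function(B, D_alpha) {
  eig <- eigen((B + t(B)) / 2, symmetric = TRUE)
  lam <- pmin(pmax(eig$values, 0), D_alpha)
  eig$vectors %*% diag(lam, nrow = length(lam)) %*% t(eig$vectors)
}
```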
R codes for the proposed regression models and numerical simulations are available at https://github.com/yidongzhou/Network-Regression-with-Graph-Laplacians.
Simulation Assessing Rates of Convergence
To assess the performance of the global and local network regression estimates in (5) and (7) through simulations, we need to devise a generative model. Denote the half vectorization excluding the diagonal of a symmetric and centered matrix by vech, with inverse operation vech$^{-1}$. By the symmetry and centrality as per (1), every graph Laplacian L is fully determined by its upper (or lower) triangular part, which can be further vectorized into vech(L), a vector of length d = m(m − 1)/2. We construct the conditional distributions $F_{L|X}$ by assigning an independent beta distribution to each element of vech(L). Specifically, a random sample $(\beta_1, \ldots, \beta_d)^T$ is generated using beta distributions whose parameters depend on the scalar predictor X and vary under different simulation scenarios. The random response L is then generated conditional on X through an inverse half vectorization vech$^{-1}$ applied to $(-\beta_1, \ldots, -\beta_d)^T$. We included four different simulation scenarios involving different types of regression and metrics, which are summarized in Table 1. The space of graph Laplacians $\mathcal{L}_m$ is not a vector space. Instead, it is a bounded, closed, and convex subset in $\mathbb{R}^{m^2}$ of dimension m(m − 1)/2 as shown in Proposition 1. To ensure that the random response L generated in simulations resides in $\mathcal{L}_m$, the off-diagonal entries $-\beta_i$, i = 1, . . . , d, need to be nonpositive and bounded below as per (1). To this end, $\beta_i$, i = 1, . . . , d, are sampled from beta distributions, which are defined on the interval (0, 1) and whose parameters depend on the uniformly distributed predictor X. The consistency of global network regression relies on the assumption that the true regression function m(x) is equal to $m_G(x)$ as defined in (3) and (4), respectively. This assumption is satisfied if each entry of the graph Laplacian L is conditionally linear in the predictor X. For the global network regression under the square root metric $d_{F,1/2}$, an extra matrix square map $F_2$ is required to ensure that the global network regression estimate in the metric space $(\mathcal{M}_m, d_F)$ as per (11) with α = 1/2 is element-wise linear in X.

Table 1: Four different simulation scenarios, where m is the true regression function and L represents the generated random response. The parameters for the beta distributions of the random variables $\beta_j$ depend on the predictors X as indicated for simulation scenarios I-IV.

I (global network regression, $d_F$): $m(x) = \mathrm{vech}^{-1}(-x, \ldots, -x)$, $L = \mathrm{vech}^{-1}(-\beta_1, \ldots, -\beta_d)$, where $\beta_j \overset{\text{i.i.d.}}{\sim} \mathrm{Beta}(X, 1 - X)$

II (global network regression, $d_{F,1/2}$): $m(x) = P_{\mathcal{L}_m}(F_2\{P_{\mathcal{M}_m}[\mathrm{vech}^{-1}(-x, \ldots, -x)]\})$, $L = F_2[\mathrm{vech}^{-1}(-\beta_1, \ldots, -\beta_d)]$, where $\beta_j \overset{\text{i.i.d.}}{\sim} \mathrm{Beta}(X, 1 - X)$

III (local network regression, $d_F$): $m(x) = \mathrm{vech}^{-1}[-\sin(\pi x), \ldots, -\sin(\pi x)]$, $L = \mathrm{vech}^{-1}(-\beta_1, \ldots, -\beta_d)$, where $\beta_j \overset{\text{i.i.d.}}{\sim} \mathrm{Beta}[\sin(\pi X), 1 - \sin(\pi X)]$

IV (local network regression, $d_{F,1/2}$): $m(x) = P_{\mathcal{L}_m}[F_2(P_{\mathcal{M}_m}\{E[F_{1/2}(L) \mid X = x]\})]$, $L = \mathrm{vech}^{-1}(-\beta_1, \ldots, -\beta_d)$, where $\beta_j \overset{\text{i.i.d.}}{\sim} \mathrm{Beta}[\sin(\pi X), 1 - \sin(\pi X)]$
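A base R sketch of the generative mechanism for scenario I (our own helper; vech$^{-1}$ is realized by filling the upper triangle and setting the diagonal so that rows sum to zero):

```r
# Generate one (X, L) pair under simulation scenario I with m nodes.
gen_scenario_I <- function(m) {
  X <- runif(1)                          # predictor X ~ U(0, 1)
  d <- m * (m - 1) / 2
  beta <- rbeta(d, shape1 = X, shape2 = 1 - X)
  L <- matrix(0, m, m)
  L[upper.tri(L)] <- -beta               # vech^{-1}: fill upper triangle
  L <- L + t(L)                          # symmetrize
  diag(L) <- -rowSums(L)                 # rows sum to zero
  list(X = X, L = L)
}
set.seed(1)
pair <- gen_scenario_I(m = 10)
```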
We investigated sample sizes n = 50, 100, 200, 500, 1000, with Q = 1000 Monte Carlo runs for each simulation scenario. In each iteration, random samples of pairs $(X_k, L_k)$, k = 1, . . . , n, were generated by sampling $X_k \sim U(0, 1)$, setting m = 10, and following the above procedure. For the qth simulation of a particular sample size, with $\hat{m}_q(x)$ denoting the fitted regression function, the quality of the estimation was quantified by the integrated squared error

$$\mathrm{ISE}_q = \int_0^1 d^2[m(x), \hat{m}_q(x)]\, dx,$$

and the overall quality across the Q runs by the mean integrated squared error

$$\mathrm{MISE} = Q^{-1} \sum_{q=1}^Q \mathrm{ISE}_q. \tag{20}$$
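In practice the integral in (20) is approximated on an equispaced grid; a short base R sketch (our helper; m_true and m_hat are assumed to be functions returning graph Laplacians):

```r
# Approximate ISE_q = integral over [0,1] of d_F^2(m(x), m_hat(x)) dx
# by averaging over an equispaced grid.
ise <- function(m_true, m_hat, n_grid = 101) {
  xs <- seq(0, 1, length.out = n_grid)
  mean(sapply(xs, function(x) norm(m_true(x) - m_hat(x), type = "F")^2))
}
```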
With increasing sample size, ISE is seen to decrease for all scenarios, demonstrating the convergence of network regression to the target. The empirical rate of convergence under each simulation scenario was assessed by fitting a least squares regression line for log MISE versus log n. The asymptotic rates of convergence under the four simulation scenarios are O p (n −1/2 ), O p (n −1/2 ), O p (n −2/5 ), and O p (n −2/5 ), respectively, as per (8), (17), (9), and (18) in Section 4. Since MISE in (20) approximates the square distance between the true and fitted regression functions, theory predicts that the slopes of fitted least squares regression lines under the four simulation scenarios should be around −1, −1, −0.8, and −0.8, respectively, while the corresponding observed slopes were −0.99, −0.99, −0.8, and −0.8. This remarkable agreement between theory and empirical behavior supports the relevance of the theory.
Networks with Latent Block Structure
To examine the performance of global and local network regression estimates on networks with latent block structure, we generate samples of networks from a weighted stochastic block model (Aicher et al., 2015). Similar to Section 5.2, four different simulation scenarios I-IV are considered, corresponding to global and local network regression using both the Frobenius metric $d_F$ and the square root metric $d_{F,1/2}$. Specifically, consider the weighted stochastic block model with the vector of community memberships $z = (z_1, z_2)^T$, where $z_i$ is a vector of length $m_i$ with all elements equal to i and $m_1 + m_2 = m$. The existence of an edge between nodes of each block is governed by the Bernoulli distribution, parameterized by the corresponding entry in the matrix of edge probabilities

$$\theta = \begin{pmatrix} p_{11} & p_{12} \\ p_{21} & p_{22} \end{pmatrix}.$$

The weights of existing edges are assumed to follow a beta distribution with shape parameters α = X and β = 1 − X for global network regression, or α = sin(πX) and β = 1 − sin(πX) for local network regression. The associated graph Laplacian is then taken as the random response L for the proposed regression models. For global network regression under the square root metric, the random response is taken as $F_2(L)$ to ensure that the linearity assumption is satisfied.
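A minimal base R sketch of this two-block generator (our own helper names):

```r
# One graph Laplacian from the two-block weighted stochastic block model:
# edges within/between blocks exist with probabilities theta[z_i, z_j],
# and existing edges receive Beta(a, b) weights.
gen_wsbm <- function(m1, m2, theta, a, b) {
  m <- m1 + m2
  z <- rep(1:2, times = c(m1, m2))       # community memberships
  A <- matrix(0, m, m)
  for (i in 1:(m - 1)) for (j in (i + 1):m) {
    if (runif(1) < theta[z[i], z[j]]) A[i, j] <- A[j, i] <- rbeta(1, a, b)
  }
  diag(rowSums(A)) - A                   # graph Laplacian
}
theta <- matrix(c(0.5, 0.2, 0.2, 0.5), 2, 2)
X <- runif(1)
L <- gen_wsbm(5, 5, theta, a = X, b = 1 - X)   # global-scenario weights
```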
We investigated sample sizes n = 50, 100, 200, 500, 1000, with Q = 1000 Monte Carlo runs for each simulation scenario. In each iteration, random samples of pairs (X k , L k ), k = 1, . . . , n were generated by sampling X k ∼ U(0, 1), setting m 1 = m 2 = 5, p 11 = p 22 = 0.5, p 12 = p 21 = 0.2, and following the above procedure. The quality of the estimation for each simulation run was quantified by the integrated squared error as defined in Section 5.2. The bandwidths for the local network regression were chosen by leave-one-out cross-validation.
We also included comparisons with two alternative methods proposed by Severn et al. (2022) and Severn et al. (2021). The integrated squared errors (ISE) for all simulation runs and different sample sizes under different simulation scenarios using the proposed methods and the two comparison methods are summarized in the boxplots in Figure 6. The proposed network regression is seen to perform as well as the method of Severn et al. (2022) under global scenarios and to achieve better performance compared to the methods in Severn et al. (2022, 2021) under local scenarios, especially for simulation scenario III. Additional simulations for comparisons between different types of regression and metrics, and for networks generated from the Erdős-Rényi random graph model (Erdős and Rényi, 1959), are reported in Appendix C.
Data Applications
New York Yellow Taxi System After COVID-19 Outbreak
The yellow and green taxi trip records on pick-up and drop-off dates/times, pick-up and drop-off locations, trip distances, itemized fares, rate types, payment types and driver-reported passenger counts, collected by New York City Taxi and Limousine Commission (NYC TLC), are publicly available at https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page. Additionally, NYC Coronavirus Disease 2019 (COVID-19) data are available at https://github.com/nychealth/coronavirus-data, where one can find citywide and borough-specific daily counts of probable and confirmed COVID-19 cases in New York City since February 29, 2020. This is the date at which according to the Health Department the COVID-19 outbreak in NYC began. We aim to study the dependence of transport networks constructed from taxi trip records on covariates of interest, including COVID-19 new cases and a weekend indicator, as travel patterns are well known to differ between weekdays and weekends.
We focused on yellow taxi trip records in the Manhattan area, which has the highest taxi traffic, and grouped the 66 taxi zones (excluding islands) as delimited by NYC TLC into 13 regions. Details about the zones and regions are in Appendix D. Not long after the outbreak of COVID-19 in Manhattan, yellow taxi ridership, as measured by trips, began a steep decline during the week of March 15, reaching a trough around April 12. This motivated us to restrict our analysis to the time period comprising the 172 days from April 12, 2020 to September 30, 2020, during which yellow taxi ridership per day in Manhattan increased steadily. The total yellow taxi ridership per day is shown in Figure 7(a). Although certain days in this period (see Figure 7) are weekdays, they follow the same travel patterns as weekends, and consequently were classified as weekends in the following analyses.
For each day, we constructed a daily undirected network with nodes corresponding to the 13 regions and edge weights representing the number of people who traveled between the regions connected by the edge on the specified day. Since the object of interest is the connection between different regions, we removed self-loops in the networks. We thus have observations that consist of a simple undirected weighted network for each of the 172 days from April 12 to September 30. Each of these networks is uniquely associated with a 13 × 13 graph Laplacian. The covariates of interest include COVID-19 new cases for the day (see Figure 7(b)) and an indicator that is 1 for weekends and 0 otherwise. The first two multidimensional scaling (MDS) variables from a classical MDS analysis of the resulting graph Laplacians provide exploratory analysis and are shown in Figure 7(c) and 7(d). These MDS plots indicate that irrespective of the chosen metric, there is a clear separation between weekdays and weekends in MDS2 and also that the number of COVID-19 new cases plays an important role in MDS1.
Since the weekend indicator is a binary predictor, we applied global network regression, using both the Frobenius metric $d_F$ and the square root metric $d_{F,1/2}$. The estimated mean squared prediction error (MSPE) was calculated for each metric using ten-fold cross validation, averaged over 100 runs. A common baseline metric, for which we chose the Frobenius metric $d_F$, was used to calculate the MSPEs to make them comparable across metrics. The MSPE for $d_{F,1/2}$ was found to be 96.4% of that for $d_F$, validating the utility of the square root metric in real-world applications. Additionally, the Fréchet coefficient of determination $R_\oplus^2 = 1 - E\{d^2[L, m_G(X)]\}/V_\oplus$, an extension of the coefficient of determination $R^2$ for linear regression, can be similarly used to quantify the proportion of response variation "explained" by the covariates. The corresponding sample version is

$$\hat{R}_\oplus^2 = 1 - \sum_{k=1}^n d^2[L_k, \hat{m}_G(X_k)] \Big/ \sum_{k=1}^n d^2(L_k, \hat{\omega}_\oplus), \quad \text{where } \hat{\omega}_\oplus = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} \sum_{k=1}^n d^2(L_k, \omega).$$
We found that $\hat{R}_\oplus^2 = 0.433$ for $d_F$ and $\hat{R}_\oplus^2 = 0.453$ for $d_{F,1/2}$, which further lends support for the use of $d_{F,1/2}$ in this specific application. Also, we observe that the generalized tensor-response regression of Zhang et al. (2022) can be applied to these data using a log link, as the edge weights are counts. We compared this approach with the proposed global network regression for the square root metric $d_{F,1/2}$. The MSPE of the proposed network regression averaged over 100 runs was substantially smaller, by a factor 0.51, than that for tensor-response regression, clearly favoring the proposed method.
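Computing $\hat{R}_\oplus^2$ is straightforward once fitted Laplacians are available; a base R sketch (our helper; under $d_F$ the sample Fréchet mean $\hat{\omega}_\oplus$ is the plain average of the $L_k$, which stays in $\mathcal{L}_m$ by convexity):

```r
# Sample Frechet R^2 under d_F: one minus the ratio of the sum of squared
# distances to the fits over the sum of squared distances to the Frechet mean.
frechet_r2 <- function(Ls, fits) {
  omega <- Reduce(`+`, Ls) / length(Ls)      # Frechet mean under d_F
  sse <- sum(mapply(function(L, f) norm(L - f, "F")^2, Ls, fits))
  sst <- sum(sapply(Ls, function(L) norm(L - omega, "F")^2))
  1 - sse / sst
}
```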
True and fitted networks using the square root metric d F,1/2 for four selected days are shown in Figure 8. From top left to bottom right, the four days were chosen to be spaced evenly in the considered time period April 12 to September 30. For each day, on the left is the observed and on the right the fitted network as obtained from network regression. The size of the nodes indicates the volume of traffic in this region and the thickness of the edges represents their weights. The fitted network regression model is seen to capture both structure and weight information of the networks given relevant covariates, and demonstrates trends that are in line with observations.
To further investigate the effects of COVID-19 new cases and weekends, predicted networks represented as heatmaps at 50, 200, and 400 COVID-19 new cases for weekdays or weekends are shown in Figure 9. Edge weights are seen to decrease for increasing COVID-19 new cases, reflecting the negative impact of the epidemic on travel. Heatmaps are increasingly concentrated with increasing numbers of new cases of COVID-19, indicating a narrowing of movements to limited areas. Weekend taxi traffic, with lighter and less essential traffic compared to weekdays, is more severely affected by COVID-19 and as new cases approach 400 per day comes to a near stop. The regions with the heaviest traffic are areas 105, 106, 107, and 108, which are chiefly residential areas and include popular locations such as Penn Station, Grand Central Terminal, and also the Metropolitan Museum of Art.
To further illustrate the effects of COVID-19 new cases and weekends versus weekdays on network structure, Figure 10 shows the same predicted networks where now each heatmap has its own color scale. This indicates that higher case numbers lead to bigger structural changes in traffic patterns on weekends than on weekdays, likely because weekend travel tends to be optional. Traffic involving some regions is seen to decline markedly with increasing COVID-19 new cases on both weekdays and weekends; these regions are in lower Manhattan, the central borough for business (see Appendix D), which includes the Financial District and the World Trade Center. This likely reflects that more people work from home with increasing case numbers and demonstrates the flexibility of the fits obtained from the proposed network regression.
Dynamics of Networks in the Aging Brain
The increasing availability of neuroimaging data, such as functional magnetic resonance imaging (fMRI) data, has facilitated the investigation of age-related changes in human brain network organization. Resting-state fMRI (rs-fMRI), as an important modality of fMRI data acquisition, has been widely used to study normal aging, which is known to be associated with cognitive decline, even in individuals without any process of retrogressive disorder (Ferreira and Busatto, 2013; Sala-Llonch et al., 2014). FMRI measures brain activity by detecting changes in blood-oxygen-level-dependent (BOLD) signals in the brain across time. During recordings of rs-fMRI, subjects relax during the sequential acquisition of fMRI scans. Spontaneous fluctuations in brain activity during rest are reflected by low-frequency oscillations of the BOLD signal, recorded as voxel-specific time series of activation strength. Network-based analyses of brain functional connectivity at the subject level typically rely on a specific spatial parcellation of the brain into a set of regions of interest (ROIs) (Bullmore and Sporns, 2009). Temporal coherence between pairwise ROIs is usually measured by so-called temporal Pearson correlation coefficients (PCC) of the fMRI time series, forming an m × m correlation matrix when considering m distinct ROIs. This correlation matrix assumes the role of observed functional connectivity for each subject. Hard or soft thresholding (Schwarz and McGonigle, 2011) is customarily applied to produce a binary or weighted functional connectivity network.
Data used in our study were obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), where n = 404 cognitively normal elderly subjects with age ranging from 55.61 to 95.39 years participated in the study; one rs-fMRI scan is randomly selected if multiple scans are available for a subject. We used the automated anatomical labeling (AAL) atlas (Tzourio-Mazoyer et al., 2002) to parcellate the whole brain into 90 ROIs, with 45 ROIs in each hemisphere. Details about the ROIs can be found in Appendix D. Preprocessing was carried out in MATLAB using the Statistical Parametric Mapping (SPM12, www.fil.ion.ucl.ac.uk/spm) and Resting-State fMRI Data Analysis Toolkit V1.8 (REST1.8, http://restfmri.net/forum/?q=rest). Briefly, this included the removal of any artifacts from head movement, correction for differences in image acquisition time between slices, normalization to the standard space, spatial smoothing and temporal filtering (bandpass filtering of 0.01-0.1 Hz). The mean time course of the voxels within each ROI was then extracted for network construction. A PCC matrix was calculated for all time course pairs for each subject. These matrices were then converted into simple, undirected, weighted networks by setting diagonal entries to 0 and thresholding the absolute values of the remaining correlations. We used density-based thresholding (Fornito et al., 2016), where the threshold is allowed to vary from subject to subject to achieve a desired, fixed connection density. Specifically, in our analyses the 15% strongest connections were kept.
To investigate age-related changes in human brain network organization, we employed local network regression using the Frobenius metric $d_F$ with graph Laplacians corresponding to the networks constructed from PCC matrices as responses, with age (in years) as scalar-valued covariate. The bandwidth for the predictor age was chosen to minimize the prediction error using leave-one-out cross-validation, resulting in a bandwidth of h = 0.20. Prediction was performed at four different ages: 65, 70, 75, and 80 (approximately the 20%, 40%, 60%, and 80% quantiles of the age distribution of the 404 subjects). The predicted networks are demonstrated in Figure 11, where the nodes were placed using the Fruchterman-Reingold layout algorithm (Fruchterman and Reingold, 1991) for visualization. Spectral clustering (Newman, 2006) was applied to detect the community structure in each network, where different communities are distinguished by different colors. The number of communities for ages 65, 70, 75, and 80 was 10, 12, 12, and 16, respectively. The communities with no less than 10 nodes are highlighted using colored polygons. These communities are found to be associated with different anatomical regions of the brain (see Table 5 in Appendix D), where a community is identified as the anatomical region to which the majority of nodes belong. As can be seen in Figure 11, the communities associated with the central region, the parietal lobe, and the limbic lobe disintegrate into several small communities with increasing age. This finding suggests that higher age is associated with increased local interconnectivity and cliquishness. High cliquishness is known to be associated with reduced capability to rapidly combine specialized information from distributed brain regions, which may contribute to cognitive decline for healthy elderly adults (Sala-Llonch et al., 2015).
Discussion
The proposed network regression models provide a novel technology for analyzing network objects, with extensive applications in various areas including neuroimaging and social sciences. We present theoretical justifications that include rates of convergence for both global and local versions. The pointwise rates of convergence are optimal for both global and local versions in the sense that they correspond to the known optimal rates in the special case of Euclidean objects. In the proposed framework, the number of nodes m is assumed to be fixed. If m → ∞, the theoretical results will no longer hold and the asymptotics for this case would be a topic for future research.
Our framework can be easily extended to the case of directed networks. For a directed network G, as long as G is simple, i.e., there are no self-loops or multi-edges, one has a one-to-one correspondence between G and its graph Laplacian. Therefore we can still represent networks using their corresponding graph Laplacians. The only difference is that the graph Laplacian is no longer symmetric. As such, the space of graph Laplacians $\mathcal{L}_m$ is then of dimension m(m − 1), rather than m(m − 1)/2. However, $\mathcal{L}_m$ is still convex and closed, which ensures the existence and uniqueness of projections onto $\mathcal{L}_m$, and asymptotic properties can be derived by arguments that closely follow those provided in this paper.
For the case of a time series of networks, if one adopts an autoregressive model both the response and the predictor are network objects; see for example Jiang et al. (2020). This case is beyond the scope of the present paper. One potential approach is to use geodesics in the space of graph Laplacians. The relationship between response and predictor networks can then be studied by relating movements along geodesics (Zhu and Müller, 2021).

Figure 11: Topological representation using spectral community detection for predicted functional connectivity networks at different ages (in years). The communities comprising 10 or more ROIs are highlighted using colored polygons. These communities are found to be associated with different anatomical regions of the brain (see Table 5 in Appendix D).
Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). This research was supported in part by NSF grant DMS-2014626.
(B2) Let $B_\delta[m_G(x)] \subset \Omega$ be the ball of radius δ centered at $m_G(x)$ and $N\{\varepsilon, B_\delta[m_G(x)], d\}$ be its covering number using balls of size ε. Then
$$\int_0^1 \big(1 + \log N\{\delta\varepsilon, B_\delta[m_G(x)], d\}\big)^{1/2}\, d\varepsilon = O(1) \quad \text{as } \delta \to 0.$$
(B3) There exist $\eta_0 > 0$, $C_0 > 0$ and $\gamma_0 > 1$, possibly depending on x, such that
$$\inf_{d[m_G(x), \omega] < \eta_0} \{M_G(\omega, x) - M_G[m_G(x), x] - C_0\, d[m_G(x), \omega]^{\gamma_0}\} \ge 0.$$
Uniform convergence results require stronger versions of the above conditions. Let $\|\cdot\|_E$ be the Euclidean norm on $\mathbb{R}^p$ and B > 0 a given constant.
(B4) Almost surely, for all x E < B, the objects m G (x) andm G (x) exist and are unique. Additionally, for any ε > 0,
inf x E ≤B inf d[m G (x),ω]>ε {M G (ω, x) − M G [m G (x), x]} > 0
and there exists ζ = ζ(ε) > 0 such that
pr( inf x E ≤B inf d[m G (x),ω]>ε {M G (ω, x) −M G [m G (x), x]} ≥ ζ) → 1. (B5) With B δ [m G (x)] and N {ε, B δ [m G (x)], d} as in Condition (B2), 1 0 sup x E ≤B (1 + log N {δε, B δ [m G (x)], d}) 1/2 dε = O(1) as δ → 0.
(B6) There exist τ 0 > 0, D 0 > 0, and ρ 0 > 1, possibly depending on B, such that
inf x E ≤B inf d[m G (x),ω]<τ 0 {M G (ω, x) − M G [m G (x), x] − D 0 d[m G (x), ω] ρ 0 } ≥ 0.
We require the following conditions to obtain pointwise rates of convergence for $\hat{m}_{L,n}(x)$. For simplicity, we assume that the marginal density $f_X(\cdot)$ of $X$ has unbounded support, and consider $x \in \mathbb{R}$ with $f_X(x) > 0$.

(B7) The minimizers $m(x)$, $m_{L,h}(x)$ and $\hat{m}_{L,n}(x)$ exist and are unique, the last almost surely. Additionally, for any $\varepsilon > 0$,
$$\inf_{d[m(x),\omega]>\varepsilon} \{M(\omega, x) - M[m(x), x]\} > 0, \qquad \liminf_{h \to 0}\ \inf_{d[m_{L,h}(x),\omega]>\varepsilon} \{M_{L,h}(\omega, x) - M_{L,h}[m_{L,h}(x), x]\} > 0.$$

(B8) Let $B_\delta[m(x)] \subset \Omega$ be the ball of radius $\delta$ centered at $m(x)$ and $N\{\varepsilon, B_\delta[m(x)], d\}$ be its covering number using balls of size $\varepsilon$. Then
$$\int_0^1 \left(1 + \log N\{\delta\varepsilon, B_\delta[m(x)], d\}\right)^{1/2} d\varepsilon = O(1) \quad \text{as } \delta \to 0.$$

(B9) There exist $\eta_1, \eta_2 > 0$, $C_1, C_2 > 0$ and $\gamma_1, \gamma_2 > 1$ such that
$$\inf_{d[m(x),\omega]<\eta_1} \{M(\omega, x) - M[m(x), x] - C_1\, d[m(x), \omega]^{\gamma_1}\} \ge 0,$$
$$\liminf_{h \to 0}\ \inf_{d[m_{L,h}(x),\omega]<\eta_2} \{M_{L,h}(\omega, x) - M_{L,h}[m_{L,h}(x), x] - C_2\, d[m_{L,h}(x), \omega]^{\gamma_2}\} \ge 0.$$

Obtaining uniform rates of convergence for local network regression is more involved and requires stronger conditions. Suppose $T$ is a closed interval in $\mathbb{R}$. Denote the interior of $T$ by $T^o$.

(B10) For all $x \in T$, the minimizers $m(x)$, $m_{L,h}(x)$ and $\hat{m}_{L,n}(x)$ exist and are unique, the last almost surely. Additionally, for any $\varepsilon > 0$,
$$\inf_{x \in T}\ \inf_{d[m(x),\omega]>\varepsilon} \{M(\omega, x) - M[m(x), x]\} > 0, \qquad \liminf_{h \to 0}\ \inf_{x \in T}\ \inf_{d[m_{L,h}(x),\omega]>\varepsilon} \{M_{L,h}(\omega, x) - M_{L,h}[m_{L,h}(x), x]\} > 0,$$
and there exists $\zeta = \zeta(\varepsilon) > 0$ such that
$$\mathrm{pr}\Big(\inf_{x \in T}\ \inf_{d[\hat{m}_{L,n}(x),\omega]>\varepsilon} \{\hat{M}_{L,n}(\omega, x) - \hat{M}_{L,n}[\hat{m}_{L,n}(x), x]\} \ge \zeta\Big) \to 1.$$

(B11) With $B_\delta[m(x)] \subset \Omega$ and $N\{\varepsilon, B_\delta[m(x)], d\}$ as in Condition (B8),
$$\int_0^1 \sup_{x \in T} \left(1 + \log N\{\delta\varepsilon, B_\delta[m(x)], d\}\right)^{1/2} d\varepsilon = O(1) \quad \text{as } \delta \to 0.$$

(B12) There exist $\tau_1, \tau_2 > 0$, $D_1, D_2 > 0$ and $\rho_1, \rho_2 > 1$ such that
$$\inf_{x \in T}\ \inf_{d[m(x),\omega]<\tau_1} \{M(\omega, x) - M[m(x), x] - D_1\, d[m(x), \omega]^{\rho_1}\} \ge 0,$$
$$\liminf_{h \to 0}\ \inf_{x \in T}\ \inf_{d[m_{L,h}(x),\omega]<\tau_2} \{M_{L,h}(\omega, x) - M_{L,h}[m_{L,h}(x), x] - D_2\, d[m_{L,h}(x), \omega]^{\rho_2}\} \ge 0.$$
B.2 Proof of Proposition 1
Each $L \in \mathcal{L}_m$ has the following properties.

(P1) $L^T = L$.
(P2) The entries in each row sum to 0, $L 1_m = 0_m$.
(P3) The off-diagonal entries are nonpositive and bounded below, $-W \le l_{ij} \le 0$.

Properties (P1) and (P2) can be decomposed into $m(m-1)/2$ and $m$ constraints, respectively. Thus, the dimension of the space of $m \times m$ matrices with Properties (P1) and (P2) is $m^2 - m(m-1)/2 - m = m(m-1)/2$, and any matrix satisfying Properties (P1) and (P2) is fully determined by its upper (or lower) triangular submatrix. It is easy to verify that Properties (P1) and (P2) remain valid under matrix addition and scalar multiplication. Additionally, the matrix consisting of zeros satisfies Properties (P1) and (P2). Thus, the space of $m \times m$ matrices with Properties (P1) and (P2) is a subspace of $\mathbb{R}^{m^2}$ of dimension $m(m-1)/2$, and $\mathcal{L}_m$ can be bijectively mapped to the hypercube $\{(x_1, \ldots, x_{m(m-1)/2}) : -W \le x_i \le 0\}$, which is bounded, closed, and convex. This proves that $\mathcal{L}_m$ is a bounded, closed, and convex subset in $\mathbb{R}^{m^2}$ of dimension $m(m-1)/2$.

B.3 Proof of Theorem 2 and Theorem 3

Substituting for the response object Y in Appendix B.1 the $m \times m$ graph Laplacian L, which resides in $\mathcal{L}_m$, endowed with the Frobenius metric $d_F$, we show that the metric space $(\mathcal{L}_m, d_F)$ satisfies Conditions (B1)-(B12). Let $\langle\cdot,\cdot\rangle_F$ be the Frobenius inner product. Define
$$B(x) = E(L \mid X = x); \quad B_G(x) = E[s_G(x)L], \quad \hat{B}_G(x) = n^{-1}\textstyle\sum_{k=1}^n s_{kG}(x)L_k; \quad B_{L,h}(x) = E[s_L(x,h)L], \quad \hat{B}_{L,n}(x) = n^{-1}\textstyle\sum_{k=1}^n s_{kL}(x,h)L_k,$$
where the expectations and sums for graph Laplacians are element-wise. Since
$$M(\omega, x) = E[d_F^2(L, \omega) \mid X = x] = E\{d_F^2[L, B(x)] + d_F^2[B(x), \omega] + 2\langle L - B(x), B(x) - \omega\rangle_F \mid X = x\} = M[B(x), x] + d_F^2[B(x), \omega] + 2E[\langle L - B(x), B(x) - \omega\rangle_F \mid X = x]$$
and
$$E[\langle L - B(x), B(x) - \omega\rangle_F \mid X = x] = \langle E(L \mid X = x) - B(x), B(x) - \omega\rangle_F = \langle B(x) - B(x), B(x) - \omega\rangle_F = 0,$$
one has $M(\omega, x) = M[B(x), x] + d_F^2[B(x), \omega]$, whence
$$m(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} M(\omega, x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[B(x), \omega].$$
Additionally, in view of
$$E[s_G(x)] = 1, \quad \frac{1}{n}\sum_{k=1}^n s_{kG}(x) = 1, \quad E[s_L(x, h)] = 1, \quad \frac{1}{n}\sum_{k=1}^n s_{kL}(x, h) = 1,$$
one can similarly show that
$$m_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} M_G(\omega, x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[B_G(x), \omega], \qquad \hat{m}_G(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} \hat{M}_G(\omega, x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[\hat{B}_G(x), \omega],$$
$$m_{L,h}(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} M_{L,h}(\omega, x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[B_{L,h}(x), \omega], \qquad \hat{m}_{L,n}(x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} \hat{M}_{L,n}(\omega, x) = \operatorname*{argmin}_{\omega \in \mathcal{L}_m} d_F^2[\hat{B}_{L,n}(x), \omega].$$
Then by the convexity and closedness of $\mathcal{L}_m$, all the minimizers $m(x)$, $m_G(x)$, $\hat{m}_G(x)$, $m_{L,h}(x)$, and $\hat{m}_{L,n}(x)$ exist and are unique for any $x \in \mathbb{R}^p$ (Deutsch, 2012, chap. 3). Hence Conditions (B1), (B4) and (B7), (B10) are satisfied.

To prove that Conditions (B3), (B6) and (B9), (B12) hold, we note that $m(x)$, viewed as the best approximation of $B(x)$ in $\mathcal{L}_m$, is characterized by (Deutsch, 2012, chap. 4)
$$\langle B(x) - m(x), \omega - m(x)\rangle_F \le 0 \quad \text{for all } \omega \in \mathcal{L}_m.$$
It follows that
$$M(\omega, x) = E[d_F^2(L, \omega) \mid X = x] = M[m(x), x] + d_F^2[m(x), \omega] + 2E[\langle L - m(x), m(x) - \omega\rangle_F \mid X = x] = M[m(x), x] + d_F^2[m(x), \omega] + 2\langle B(x) - m(x), m(x) - \omega\rangle_F \ge M[m(x), x] + d_F^2[m(x), \omega]$$
for all $\omega \in \mathcal{L}_m$. Similarly,
$$M_G(\omega, x) \ge M_G[m_G(x), x] + d_F^2[m_G(x), \omega], \qquad M_{L,h}(\omega, x) \ge M_{L,h}[m_{L,h}(x), x] + d_F^2[m_{L,h}(x), \omega],$$
for all $\omega \in \mathcal{L}_m$. Consequently, we may select $\eta_i$ and $\tau_i$ arbitrary, $C_i = D_i = 1$ and $\gamma_i = \rho_i = 2$ for $i = 0, 1, 2$ in Conditions (B3), (B6), and (B9), (B12).

Next, we show that Condition (B5) holds, which then implies Condition (B2). Since $\mathcal{L}_m$ is a subset of $\mathbb{R}^{m^2}$, for any $\omega \in \mathcal{L}_m$ we have
$$N[\delta\varepsilon, B_\delta(\omega), d_F] = N[\varepsilon, B_1(\omega), d_F] \le (1 + 2/\varepsilon)^{m^2}.$$
Thus, the integral in Condition (B5) is bounded by
$$\int_0^1 [1 + m^2 \log(1 + 2/\varepsilon)]^{1/2}\, d\varepsilon \le 1 + m\int_0^1 [\log(1 + 2/\varepsilon)]^{1/2}\, d\varepsilon \le 1 + m\int_0^1 [\log(3/\varepsilon)]^{1/2}\, d\varepsilon = 1 + 3m\int_{\log 3}^{\infty} y^{1/2}e^{-y}\, dy < \infty,$$
using the substitution $y = \log(3/\varepsilon)$. Since this bound does not depend on $\delta$, Condition (B5) holds and thus Condition (B2) as well. Likewise we can show that Conditions (B11) and (B8) also hold.
Theorem 2 in Petersen and Müller (2019) yields rates of convergence for the global network regression estimator. For the local network regression estimator, rates of convergence can be obtained using Corollary 1 in Petersen and Müller (2019) and Theorem 1 in Chen and Müller (2022).
B.4 Proof of Proposition 4
Recall that the matrix power map $F_\alpha$ is
$$F_\alpha(S) = S^\alpha = U\Lambda^\alpha U^T : \mathcal{S}_m^+ \to \mathcal{S}_m^+,$$
where $U\Lambda U^T$ is the spectral decomposition of $S$. Specifically, denote the eigenvalues of $S$ by $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m \ge 0$. Then $F_\alpha(S) = U\,\mathrm{diag}(\lambda_1^\alpha, \lambda_2^\alpha, \ldots, \lambda_m^\alpha)U^T$. Note that the power function $f(x) = x^\alpha : [0, \infty) \to [0, \infty)$ is $(\alpha, 1)$-Hölder continuous on $[0, \infty)$ for $0 < \alpha < 1$, and is $\alpha C^{\alpha-1}$-Lipschitz continuous on $[0, C]$ for $\alpha \ge 1$. The results follow directly from Theorem 1.1 in Wihler (2009) by choosing the scalar function as the power function $f(x) = x^\alpha$ with $0 < \alpha < 1$ on $[0, \infty)$, and with $\alpha \ge 1$ on $[0, C]$, respectively.
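As a numerical illustration of this map (not part of the original proof), the following short sketch computes $F_\alpha(S)$ via the spectral decomposition and checks two basic identities; the test matrix and names are arbitrary.

```python
import numpy as np

def matrix_power_map(S, alpha):
    """F_alpha(S) = U diag(lambda_i**alpha) U^T for a symmetric
    positive semi-definite matrix S, via its spectral decomposition."""
    lam, U = np.linalg.eigh(S)
    lam = np.clip(lam, 0.0, None)     # guard against tiny negative round-off
    return (U * lam**alpha) @ U.T

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
S = A @ A.T                           # symmetric positive semi-definite
S_half = matrix_power_map(S, 0.5)
print(np.allclose(S_half @ S_half, S))               # F_{1/2} gives a square root
print(np.allclose(matrix_power_map(S_half, 2.0), S)) # F_2 inverts F_{1/2}
```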
B.5 Proof of Theorem 5 and Theorem 6
Substituting for the response object Y in Appendix B.1 an $m \times m$ bounded symmetric positive semi-definite matrix $S$, which resides in $\mathcal{M}_m$, endowed with the Frobenius metric $d_F$, one can show that the metric space $(\mathcal{M}_m, d_F)$ satisfies Conditions (B1)-(B12), since $\mathcal{M}_m$, similar to $\mathcal{L}_m$, is a bounded, closed, and convex subset in $\mathbb{R}^{m^2}$. According to Theorem 2 in Petersen and Müller (2019), for the metric space $(\mathcal{M}_m, d_F)$ it holds for $m_G^\alpha(x)$ and $\hat{m}_G^\alpha(x)$ that for a fixed $x \in \mathbb{R}^p$,
$$d_F[m_G^\alpha(x), \hat{m}_G^\alpha(x)] = O_p(n^{-1/2}),$$
and for a given $B > 0$,
$$\sup_{\|x\|_E \le B} d_F[m_G^\alpha(x), \hat{m}_G^\alpha(x)] = O_p(n^{-1/[2(1+\varepsilon)]}),$$
for any $\varepsilon > 0$. Next, we derive rates of convergence when applying the inverse matrix power map $F_{1/\alpha}$ and a projection $P_{\mathcal{L}_m}$ onto $\mathcal{L}_m$. Here we consider two cases.
1. Case $0 < \alpha \le 1$. By Proposition 2,
$$\|F_{1/\alpha}(S_1) - F_{1/\alpha}(S_2)\|_F \le \frac{1}{\alpha}(D^\alpha)^{1/\alpha - 1}\|S_1 - S_2\|_F = \frac{1}{\alpha}D^{1-\alpha}\|S_1 - S_2\|_F$$
for any $S_1, S_2 \in \mathcal{M}_m$. Hence for a fixed $x \in \mathbb{R}^p$,
$$d_F\{F_{1/\alpha}[m_G^\alpha(x)], F_{1/\alpha}[\hat{m}_G^\alpha(x)]\} = O_p(n^{-1/2}),$$
and for a given $B > 0$,
$$\sup_{\|x\|_E \le B} d_F\{F_{1/\alpha}[m_G^\alpha(x)], F_{1/\alpha}[\hat{m}_G^\alpha(x)]\} = O_p(n^{-1/[2(1+\varepsilon)]}),$$
for any $\varepsilon > 0$. As shown in the proof of Result 2 in Severn et al. (2022), the projection $P_{\mathcal{L}_m}$ does not increase the distance between two matrices. That is,
$$d_F(P_{\mathcal{L}_m}\{F_{1/\alpha}[m_G^\alpha(x)]\}, P_{\mathcal{L}_m}\{F_{1/\alpha}[\hat{m}_G^\alpha(x)]\}) \le d_F\{F_{1/\alpha}[m_G^\alpha(x)], F_{1/\alpha}[\hat{m}_G^\alpha(x)]\}.$$
Therefore, for $m_G(x)$ and $\hat{m}_G(x)$ and a fixed $x \in \mathbb{R}^p$,
$$d_F[m_G(x), \hat{m}_G(x)] = O_p(n^{-1/2}),$$
and for a given $B > 0$,
$$\sup_{\|x\|_E \le B} d_F[m_G(x), \hat{m}_G(x)] = O_p(n^{-1/[2(1+\varepsilon)]}),$$
for any $\varepsilon > 0$.
2. Case $\alpha > 1$. By Proposition 2, it holds that
$$\|F_{1/\alpha}(S_1) - F_{1/\alpha}(S_2)\|_F \le m^{(\alpha-1)/(2\alpha)}\|S_1 - S_2\|_F^{1/\alpha}$$
for any $S_1, S_2 \in \mathcal{M}_m$. Hence we have for a fixed $x \in \mathbb{R}^p$,
$$d_F\{F_{1/\alpha}[m_G^\alpha(x)], F_{1/\alpha}[\hat{m}_G^\alpha(x)]\} = O_p(n^{-1/(2\alpha)}),$$
and for a given $B > 0$,
$$\sup_{\|x\|_E \le B} d_F\{F_{1/\alpha}[m_G^\alpha(x)], F_{1/\alpha}[\hat{m}_G^\alpha(x)]\} = O_p(n^{-1/[2(1+\varepsilon)\alpha]}),$$
for any $\varepsilon > 0$. By the same argument as in the first case, at a fixed $x \in \mathbb{R}^p$,
$$d_F[m_G(x), \hat{m}_G(x)] = O_p(n^{-1/(2\alpha)}),$$
and for a given $B > 0$,
$$\sup_{\|x\|_E \le B} d_F[m_G(x), \hat{m}_G(x)] = O_p(n^{-1/[2(1+\varepsilon)\alpha]}),$$
for any $\varepsilon > 0$.

Table 3: Mean integrated squared errors for five sample sizes under simulation scenarios I and III in Section 5 of the paper using the proposed methods (GF, global network regression using the Frobenius metric; GS, global network regression using the square root metric; LF, local network regression using the Frobenius metric; LS, local network regression using the square root metric).
Similar arguments apply for the local network regression by combining Proposition 2, Corollary 1 in Petersen and Müller (2019), and Theorem 1 in Chen and Müller (2022).
Appendix C. Additional Simulations
C.1 Comparisons Between Different Types of Regression and Metrics
We replicated simulation scenarios I and III in Section 5 of the paper, corresponding to global and local settings, using both global and local network regression and both the Frobenius and square root metrics. Specifically, for each simulation scenario, the following four combinations were included: global network regression using the Frobenius metric (GF), global network regression using the square root metric (GS), local network regression using the Frobenius metric (LF) and local network regression using the square root metric (LS). The corresponding mean integrated squared errors are summarized in Table 3. Under simulation scenario I, where each entry of the graph Laplacian L is conditionally linear in the predictor X, the global network regression using the Frobenius metric is seen to perform best, validating the superiority of global approaches if the linear assumption holds. In contrast, if there is an underlying nonlinear relationship as in simulation scenario III, local network regression leads to much smaller errors, which is as expected. We also note that the square root metric achieves better performance under simulation scenario I with local linear regression. Figure 12: Boxplots of integrated square errors (ISE) for networks generated from Erdős-Rényi random graph model.
C.2 Networks Generated from Erdős-Rényi Model
Here we assess the performance of global and local network regression estimates on networks generated from the Erdős-Rényi random graph model (Erdős and Rényi, 1959). Similar to Section 5, four different simulation scenarios I-IV are considered, corresponding to global and local network regression using both the Frobenius metric d F and the square root metric d F,1/2 . Specifically, consider the Erdős-Rényi random graph model G(m, N ), where networks are sampled uniformly at random from the collection of all networks which have m nodes and N edges. The probability of the presence of each edge is thus N/M where M = m(m − 1)/2. The weights of existing edges are assumed to follow a beta distribution with shape parameters α = X and β = 1 − X for global network regression, or α = sin(πX) and β = 1 − sin(πX) for local network regression. The associated graph Laplacian is then taken as the random response L for the proposed regression models. In particular, for global network regression under the square root metric, the random response is taken as F 2 (L) to ensure that the linearity assumption is satisfied. We investigated sample sizes n = 50, 100, 200, 500, 1000, with Q = 1000 Monte Carlo runs for each simulation scenario. In each iteration, random samples of pairs (X k , L k ), k = 1, . . . , n, were generated by sampling X k ∼ U(0, 1), setting m = 10, N = 9, and following the above procedure. The quality of the estimation for each simulation run was quantified by the integrated squared error as defined in Section 5. The bandwidths for the local network regression were chosen by leave-one-out cross-validation. The integrated squared errors (ISE) for Q = 1000 simulation runs and five sample sizes under four simulation scenarios are summarized in the boxplots in Figure 12. With increasing sample size, the integrated squared errors are seen to decrease, demonstrating the validity and utility of the proposed network regression models for various network generative models.
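For concreteness, the following minimal sketch (with hypothetical function names) generates one sample from this model for the global scenario; it omits the subsequent regression fitting, bandwidth selection, and ISE computation.

```python
import numpy as np

def er_laplacian(m, N, a, b, rng):
    """One draw from the weighted Erdos-Renyi model G(m, N): N edges chosen
    uniformly among the m(m-1)/2 node pairs, with independent Beta(a, b)
    weights on the existing edges. Returns the graph Laplacian."""
    pairs = [(i, j) for i in range(m) for j in range(i + 1, m)]
    chosen = rng.choice(len(pairs), size=N, replace=False)
    W = np.zeros((m, m))
    for k in chosen:
        i, j = pairs[k]
        W[i, j] = W[j, i] = rng.beta(a, b)
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(1)
n, m, N = 50, 10, 9
X = np.clip(rng.uniform(0.0, 1.0, size=n), 1e-6, 1 - 1e-6)   # Beta parameters must be > 0
L_sample = [er_laplacian(m, N, x, 1 - x, rng) for x in X]    # weights ~ Beta(X_k, 1 - X_k)
print(L_sample[0].shape, np.allclose(L_sample[0].sum(axis=1), 0.0))
```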
Figure 2 shows the prediction networks at $X = 5$ using global and local network regression, where the Epanechnikov kernel $K(u) = \frac{3}{4}(1 - u^2)1_{[-1,1]}(u)$ is used with a bandwidth $h = 2$.

Figure 2: Prediction networks at X = 5 for the toy example as per Figure 1, using global and local network regression with the Frobenius metric d F . The Epanechnikov kernel and a bandwidth h = 2 are used in local network regression. The weight of each edge is marked beside the corresponding edge.

Figure 3: Schematic diagram for the power metric d F,α , where L m and M m are the space of graph Laplacians and the embedding space defined in ...

Run times of the proposed regression models for different numbers of nodes m, using R version 4.2.0 (2022-04-22) running under Darwin on a MacBook Pro M1, are summarized in Figure 4.

Figure 4: Run time of the proposed regression models in seconds for different numbers of nodes m. GF, global network regression using the Frobenius metric; GS, global network regression using the square root metric; LF1, local network regression using the Frobenius metric with no pre-specified bandwidth; LS1, local network regression using the square root metric with no pre-specified bandwidth; LF2, local network regression using the Frobenius metric with pre-specified bandwidth; LS2, local network regression using the square root metric with pre-specified bandwidth.

The quality of the estimation in each run was quantified by the integrated squared error $\mathrm{ISE}_q = \int d_F^2[m(x), \hat{m}_q(x)]\,dx$, where $m(x)$ is the true regression function, and the average quality of the estimation over the $Q = 1000$ Monte Carlo runs was assessed by the mean integrated squared error
$$\mathrm{MISE} = \frac{1}{Q}\sum_{q=1}^{Q}\int d_F^2[m(x), \hat{m}_q(x)]\,dx. \qquad (20)$$

Figure 5: Boxplots of integrated square errors (ISE) for five sample sizes under four simulation scenarios.

Figure 6: Boxplots of integrated square errors (ISE) for networks with latent block structure using the proposed methods (NRGL, red) and the methods of Severn et al. (2021, 2022) (blue and green).

... Figure 7(a), where we observe a pronounced difference between weekends and weekdays. Even though the three holidays Memorial Day (May 25), Independence Day (July 3), and Labor Day (September 7) ...

Figure 7: (a) Total yellow taxi ridership per day in Manhattan, New York in 2020, from April 12 to September 30. Three holidays, Memorial Day (May 25), Independence Day (July 3), and Labor Day (September 7), are highlighted. (b) COVID-19 new cases per day in Manhattan, New York in 2020, April 12 - September 30. (c) MDS plot for taxi networks based on the Frobenius metric d F . (d) MDS plot for taxi networks based on the square root metric d F,1/2 .

Figure 8: True (left) and fitted (right) networks on May 2, Jun 25, Aug 4, and Sep 7, 2020 (from top left to bottom right). The corresponding days and the number of COVID-19 new cases are in the headline of each subfigure.

Figure 9: Predicted networks represented as heatmaps at different numbers of COVID-19 new cases on weekdays or weekends. The top, middle, and bottom rows show, respectively, the predicted networks at 50, 200, and 400 COVID-19 new cases. The left and right columns depict the predicted networks on weekdays and weekends, respectively.

Figure 10: Predicted networks represented as heatmaps at different numbers of COVID-19 new cases on weekdays or weekends, where each heatmap has its own scale to enhance visualization of structural changes in connections in dependence on daily COVID-19 new cases. The top, middle, and bottom rows show, respectively, the predicted networks at 50, 200, and 400 COVID-19 new cases. The left and right columns depict the predicted networks on weekdays and weekends, respectively.
Table 2: Mean integrated squared errors (MISE) for five sample sizes under four simulation scenarios.

Blocks involving regions 101, 102, 103 have declining traffic with ...

[Figure 8 color scales (ridership): 200-1,700 for May 02, Sat, 230 new cases; 400-8,600 for Jun 25, Thu, 55 new cases; 400-8,900 for Aug 04, Tue, 44 new cases; 400-7,400 for Sep 07, Mon, 37 new cases.]
Appendix D. Additional Materials

Table 5: Anatomical regions of interest (ROIs) in each hemisphere for the AAL atlas (columns: ROI, Lobe, Label).
Central region: 1 Precentral gyrus (PRE); 2 Postcentral gyrus (POST); 3 Rolandic operculum (RO).
Frontal lobe, lateral surface: 4 Superior frontal gyrus, dorsolateral (F1); 5 Middle frontal gyrus (F2); 6 Inferior frontal gyrus, opercular part (F3OP); 7 Inferior frontal gyrus, triangular part (F3T).
Frontal lobe, medial surface: 8 Superior frontal gyrus, medial (F1M); 9 Supplementary motor area (SMA); 10 Paracentral lobule (PCL).
Frontal lobe, orbital surface: ...

Table 4: Details about 13 regions in Manhattan, New York (each region ID is followed by its constituent taxi zones).
101: Battery Park, 13 Battery Park City, 87 Financial District North, 88 Financial District South, 209 Seaport, 231 TriBeCa/Civic Center, 261 World Trade Center
102: 113 Greenwich Village North, 114 Greenwich Village South, 125 Hudson Sq, 144 Little Italy/NoLiTa, 158 Meatpacking/West Village West, 211 SoHo, 249 West Village
103: Alphabet City, 45 Chinatown, 79 East Village, 148 Lower East Side, 232 Two Bridges/Seward Park
104: 48 Clinton East, 50 Clinton West, 68 East Chelsea, 90 Flatiron, 246 West Chelsea/Hudson Yards
105: 100 Garment District, 161 Midtown Center, 163 Midtown North, 164 Midtown South, 186 Penn Station/Madison Sq West, 230 Times Sq/Theatre District, 234 Union Sq
106: Gramercy, 137 Kips Bay, 162 Midtown East, 170 Murray Hill, 224 Stuy Town/Peter Cooper Village, 229 Sutton Place/Turtle Bay North, 233 ...
109: Hamilton Heights, 152 Manhattanville, 166 Morningside Heights
110: 41 Central Harlem, 42 Central Harlem North
111: East Harlem North, 75 East Harlem South, 194 Randalls Island
112: ...
References

Christopher Aicher, Abigail Z Jacobs, and Aaron Clauset. Learning latent block structure in weighted networks. Journal of Complex Networks, 3(2):221-248, 2015.

Vincent Arsigny, Pierre Fillard, Xavier Pennec, and Nicholas Ayache. Geometric means in a novel vector space structure on symmetric positive-definite matrices. SIAM Journal on Matrix Analysis and Applications, 29(1):328-347, 2007.

Stephen Boyd, Stephen P Boyd, and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.

Ed Bullmore and Olaf Sporns. Complex brain networks: Graph theoretical analysis of structural and functional systems. Nature Reviews Neuroscience, 10(3):186-198, 2009.

Anna Calissano, Aasa Feragen, and Simone Vantini. Populations of unlabeled networks: Graph space geometry and geodesic principal components. MOX Report, 2020.

Anna Calissano, Aasa Feragen, and Simone Vantini. Graph-valued regression: Prediction of unlabelled networks in a non-Euclidean graph space. Journal of Multivariate Analysis, 190:104950, 2022.

Elynn Y Chen and Jianqing Fan. Statistical inference for high-dimensional matrix-variate factor models. Journal of the American Statistical Association, pages 1-18, 2021.

Yaqing Chen and Hans-Georg Müller. Uniform convergence of local Fréchet regression with applications to locating extrema and time warping for metric space valued trajectories. The Annals of Statistics, 50(3):1573-1592, 2022.

Emil Cornea, Hongtu Zhu, Peter Kim, Joseph G Ibrahim, and Alzheimer's Disease Neuroimaging Initiative. Regression models on Riemannian symmetric spaces. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 79(2):463-482, 2017.

Xiongtao Dai and Hans-Georg Müller. Principal component analysis for functional data on Riemannian manifolds and spheres. The Annals of Statistics, 46(6B):3334-3361, 2018.

Etienne De Klerk. Aspects of Semidefinite Programming: Interior Point Algorithms and Selected Applications, volume 65. Springer Science & Business Media, 2006.

Frank R Deutsch. Best Approximation in Inner Product Spaces. Springer Science & Business Media, 2012.

Claire Donnat and Susan Holmes. Tracking network dynamics: A survey using graph distances. The Annals of Applied Statistics, 12(2):971-1012, 2018.

Ian L Dryden, Alexey Koloydenko, and Diwei Zhou. Non-Euclidean statistics for covariance matrices, with applications to diffusion tensor imaging. The Annals of Applied Statistics, 3(3):1102-1123, 2009.

Ian L Dryden, Xavier Pennec, and Jean-Marc Peyrat. Power Euclidean metrics for covariance matrices with application to diffusion tensor imaging. arXiv preprint arXiv:1009.3045, 2010.

Paromita Dubey and Hans-Georg Müller. Modeling time-varying random objects and dynamic networks. Journal of the American Statistical Association, (just-accepted):1-33, 2021.

Daniele Durante, David B Dunson, and Joshua T Vogelstein. Nonparametric Bayes modeling of populations of networks. Journal of the American Statistical Association, 112(520):1516-1530, 2017.

Paul Erdős and Alfréd Rényi. On random graphs I. Publicationes Mathematicae, 6(1):290-297, 1959.

Luiz Kobuti Ferreira and Geraldo F Busatto. Resting-state functional connectivity in normal brain aging. Neuroscience & Biobehavioral Reviews, 37(3):384-400, 2013.

Alex Fornito, Andrew Zalesky, and Edward Bullmore. Fundamentals of Brain Network Analysis. Academic Press, 2016.

Maurice Fréchet. Les éléments aléatoires de nature quelconque dans un espace distancié. Annales de l'institut Henri Poincaré, 10:215-310, 1948.

Thomas MJ Fruchterman and Edward M Reingold. Graph drawing by force-directed placement. Software: Practice and Experience, 21(11):1129-1164, 1991.

Cedric E Ginestet, Jun Li, Prakash Balachandran, Steven Rosenberg, and Eric D Kolaczyk. Hypothesis testing for network data in functional neuroimaging. The Annals of Applied Statistics, 11(2):725-750, 2017.

Brijnesh J Jain. On the geometry of graph spaces. Discrete Applied Mathematics, 214:126-144, 2016.

Binyan Jiang, Jailing Li, and Qiwei Yao. Autoregressive networks. arXiv preprint arXiv:2006.13548, 2020.

Nathaniel Josephs, Lizhen Lin, Steven Rosenberg, and Eric D Kolaczyk. Bayesian classification, anomaly detection, and survival analysis using network inputs with application to the microbiome. arXiv preprint arXiv:2004.04765, 2020.

Bomin Kim, Kevin H Lee, Lingzhou Xue, and Xiaoyue Niu. A review of dynamic network models with latent variables. Statistics Surveys, 12:105, 2018.

Eric D Kolaczyk, Lizhen Lin, Steven Rosenberg, Jackson Walters, and Jie Xu. Averages of unlabeled networks: Geometric characterization and asymptotic behavior. The Annals of Statistics, 48(1):514-538, 2020.

Zhenhua Lin. Riemannian geometry of symmetric positive definite matrices via Cholesky decomposition. SIAM Journal on Matrix Analysis and Applications, 40(4):1353-1370, 2019.

Simón Lunagómez, Sofia C Olhede, and Patrick J Wolfe. Modeling network populations via graph distances. Journal of the American Statistical Association, 116(536):2023-2040, 2021.

Hans-Georg Müller. Peter Hall, functional data analysis and random objects. The Annals of Statistics, 44(5):1867-1887, 2016.

Mark EJ Newman. Finding community structure in networks using the eigenvectors of matrices. Physical Review E, 74(3):036104, 2006.

Keiichi Onoda and Shuhei Yamaguchi. Small-worldness and modularity of the resting-state functional brain network decrease with aging. Neuroscience Letters, 556:104-108, 2013.

Alexander Petersen and Hans-Georg Müller. Fréchet integration and adaptive metric selection for interpretable covariances of multivariate functional data. Biometrika, 103(1):103-120, 2016.

Alexander Petersen and Hans-Georg Müller. Fréchet regression for random objects with Euclidean predictors. The Annals of Statistics, 47(2):691-719, 2019.

R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2022.

Mikail Rubinov and Olaf Sporns. Complex network measures of brain connectivity: Uses and interpretations. NeuroImage, 52(3):1059-1069, 2010.

Roser Sala-Llonch, Carme Junqué, Eider M Arenaza-Urquijo, Dídac Vidal-Piñeiro, Cinta Valls-Pedret, Eva M Palacios, Sara Domènech, Antoni Salvà, Nuria Bargalló, and David Bartrés-Faz. Changes in whole-brain functional networks and memory performance in aging. Neurobiology of Aging, 35(10):2193-2202, 2014.

Roser Sala-Llonch, David Bartrés-Faz, and Carme Junqué. Reorganization of brain networks in aging: A review of functional connectivity studies. Frontiers in Psychology, 6:663, 2015.

Adam J Schwarz and John McGonigle. Negative edges and soft thresholding in complex network analysis of resting state functional connectivity data. NeuroImage, 55(3):1132-1146, 2011.

Katie E Severn, Ian L Dryden, and Simon P Preston. Non-parametric regression for networks. Stat, 10(1):e373, 2021.

Katie E Severn, Ian L Dryden, and Simon P Preston. Manifold valued data analysis of samples of networks, with applications in corpus linguistics. The Annals of Applied Statistics, 16(1):368-390, 2022.

B Stellato, G Banjac, P Goulart, A Bemporad, and S Boyd. OSQP: An operator splitting solver for quadratic programs. Mathematical Programming Computation, 12(4):637-672, 2020.

Shahin Tavakoli, Davide Pigoli, John A. D. Aston, and John S. Coleman. A spatial modeling approach for linguistic object data: Analyzing dialect sound variations across Great Britain. Journal of the American Statistical Association, 114(527):1081-1096, 2019.

Nathalie Tzourio-Mazoyer, Brigitte Landeau, Dimitri Papathanassiou, Fabrice Crivello, Olivier Etard, Nicolas Delcroix, Bernard Mazoyer, and Marc Joliot. Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15(1):273-289, 2002.

Christian Von Ferber, Taras Holovatch, Yu Holovatch, and V Palchykov. Public transport networks: Empirical analysis and modeling. The European Physical Journal B, 68(2):261-275, 2009.

Di Wang, Yao Zheng, Heng Lian, and Guodong Li. High-dimensional vector autoregressive time series modeling via tensor decomposition. Journal of the American Statistical Association, pages 1-19, 2021.

Lu Wang, Daniele Durante, Rex E Jung, and David B Dunson. Bayesian network-response regression. Bioinformatics, 33(12):1859-1866, 2017.

Duncan J Watts and Steven H Strogatz. Collective dynamics of 'small-world' networks. Nature, 393(6684):440-442, 1998.

Thomas P Wihler. On the Hölder continuity of matrix functions for normal matrices. Journal of Inequalities in Pure and Applied Mathematics, 10:1-5, 2009.

Peter Wills and François G Meyer. Metrics for graph comparison: a practitioner's guide. PloS One, 15(2):e0228728, 2020.

Jingfei Zhang, Will Wei Sun, and Lexin Li. Generalized connectivity matrix response regression with applications in brain connectivity studies. Journal of Computational and Graphical Statistics, (just-accepted):1-30, 2022.

Diwei Zhou, Ian L Dryden, Alexey A Koloydenko, Koenraad MR Audenaert, and Li Bai. Regularisation, interpolation and visualisation of diffusion tensor images using non-Euclidean statistics. Journal of Applied Statistics, 43(5):943-978, 2016.

Changbo Zhu and Hans-Georg Müller. Autoregressive optimal transport models. arXiv preprint arXiv:2105.05439, 2021.

Xuening Zhu, Rui Pan, Guodong Li, Yuewen Liu, and Hansheng Wang. Network vector autoregression. The Annals of Statistics, 45(3):1096-1123, 2017.
| [
"https://github.com/yidongzhou/Network-Regression-with-Graph-Laplacians.",
"https://github.com/nychealth/coronavirus-data,"
] |
[
"BOHR SETS IN SUMSETS II: COUNTABLE ABELIAN GROUPS",
"BOHR SETS IN SUMSETS II: COUNTABLE ABELIAN GROUPS"
] | [
"John T Griesmer ",
"ANDAnh N Le ",
"Thái Hoàng Lê "
] | [] | [] | We prove three results concerning the existence of Bohr sets in threefold sumsets. More precisely, letting G be a countable discrete abelian group and ϕ1, ϕ2, ϕ3 : G → G be commuting endomorphisms whose images have finite indices, we show that (1) If A ⊂ G has positive upper Banach density and ϕ1 + ϕ2 + ϕ3 = 0, then ϕ1(A) + ϕ2(A) + ϕ3(A) contains a Bohr set. This generalizes a theorem of Bergelson and Ruzsa in Z and a recent result of the first author. (2) For any partition G = A1 ∪ · · · ∪ Ar, there exists an i ∈ {1, . . . , r} such that ϕ1(Ai) + ϕ2(Ai) − ϕ2(Ai) contains a Bohr set. This generalizes a result of the second and third authors from Z to countable abelian groups. (3) If B, C ⊂ G have positive upper Banach density and G = A1 ∪ · · · ∪ Ar is a partition, B + C + Ai contains a Bohr set for some i ∈ {1, . . . , r}. This is a strengthening of a theorem of Bergelson, Furstenberg, and Weiss. All results are quantitative in the sense that the radius and rank of the Bohr set obtained depend only on the indices [G : ϕj(G)], the upper Banach density of A (in (1)), or the number of sets in the given partition (in (2) and (3)). | null | [
"https://export.arxiv.org/pdf/2207.04150v3.pdf"
] | 250,426,343 | 2207.04150 | 97149cb14e07d74aff27d7457c5a6d940b2cb1fe |
BOHR SETS IN SUMSETS II: COUNTABLE ABELIAN GROUPS
John T Griesmer
Anh N Le
Thái Hoàng Lê
BOHR SETS IN SUMSETS II: COUNTABLE ABELIAN GROUPS
We prove three results concerning the existence of Bohr sets in threefold sumsets. More precisely, letting G be a countable discrete abelian group and ϕ1, ϕ2, ϕ3 : G → G be commuting endomorphisms whose images have finite indices, we show that (1) If A ⊂ G has positive upper Banach density and ϕ1 + ϕ2 + ϕ3 = 0, then ϕ1(A) + ϕ2(A) + ϕ3(A) contains a Bohr set. This generalizes a theorem of Bergelson and Ruzsa in Z and a recent result of the first author.(2) For any partition G = r i=1 Ai, there exists an i ∈ {1, . . . , r} such that ϕ1(Ai) + ϕ2(Ai) − ϕ2(Ai) contains a Bohr set. This generalizes a result of the second and third authors from Z to countable abelian groups.(3) If B, C ⊂ G have positive upper Banach density and G = r i=1 Ai is a partition, B + C + Ai contains a Bohr set for some i ∈ {1, . . . , r}. This is a strengthening of a theorem of Bergelson, Furstenberg, and Weiss. All results are quantitative in the sense that the radius and rank of the Bohr set obtained depends only on the indices [G : ϕj(G)], the upper Banach density of A (in (1)), or the number of sets in the given partition (in(2)and(3)).
This paper continues the investigation set forth in [33]. Let G be an abelian topological group. If A, B ⊂ G, the sumset and difference set of A and B are A + B := {a + b : a ∈ A, b ∈ B} and A − B := {a − b : a ∈ A, b ∈ B}, respectively. For a ∈ G, the translate a + B is {a + B : b ∈ B}. If s ∈ Z, we define sA := {sa : a ∈ A}. A character of G is a continuous homomorphism from G to S 1 := {z ∈ C : |z| = 1}.
Many classical results in additive combinatorics state, roughly, that sumsets are more structured than their summands. Such results often quantify the structure found in sumsets in terms of Bohr sets, which we define here. For a finite set Λ of characters of G and a constant η > 0, the set B(Λ; η) := {x ∈ G : |γ(x) − 1| < η for all γ ∈ Λ} is called a Bohr set, a Bohr 0 -set, or a Bohr neighborhood of 0 in the literature. In this paper we use mostly the first nomenclature. The set B(Λ; η) is also called a Bohr-(k, η) set where k = |Λ|. We refer to η as the radius and k as the rank of the Bohr set. By a translate of a Bohr set, or a Bohr neighborhood, we mean a set of the form a + B(Λ; η) for some a ∈ G.
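To make the definition concrete in the simplest case G = Z, note that every character of Z has the form x ↦ e^{2πiθx} for some θ ∈ [0, 1), so membership in a Bohr set can be tested directly. The following minimal Python sketch does this; the particular frequencies and radius are arbitrary illustrative choices, not taken from the text.

```python
import cmath

def in_bohr_set(x, thetas, eta):
    """Membership test for the Bohr set B(Lambda; eta) in Z, where
    Lambda consists of the characters x -> exp(2*pi*i*theta*x)."""
    return all(abs(cmath.exp(2j * cmath.pi * theta * x) - 1) < eta
               for theta in thetas)

thetas = [0.5, 2 ** 0.5 - 1]   # two arbitrary frequencies: a Bohr-(2, eta) set
eta = 0.3
members = [x for x in range(-50, 51) if in_bohr_set(x, thetas, eta)]
print(members)   # always contains 0, since every character sends 0 to 1
```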
After summarizing previous results in Sections 1.1 and 1.2, we state our new results in Section 1.3. The study of Bohr sets in sumsets started with the following important theorem of Bogolyubov [11].
Theorem A (Bogolyubov). If A ⊂ Z has positive upper Banach density, then A − A + A − A contains a Bohr set whose rank and radius depend only on d * (A).
While it originated from the study of almost periodic functions, Bogolyubov's theorem is now a standard tool in additive combinatorics. It was used in Ruzsa's proof of Freiman's theorem [35] and in Gowers's proof of Szemerédi's theorem [22].
Følner [15] showed that the last two summands in Bogolyubov's theorem are "almost" redundant by proving that A − A already contains a set of the form B \ E, where B is a Bohr set and d * (E) = 0. The exceptional set E is unavoidable: Kriz [32] demonstrated that there exists a set A of positive upper Banach density for which A − A contains no Bohr sets. The first author [26] showed that there is a set A having d * (A) > 0 such that A − A contains no Bohr neighborhood of any integer.
Hegyvári and Ruzsa [28] generalized Bogolyubov's theorem in a different direction, showing that there exist "many" a ∈ Z for which A − A + A − a contains a Bohr set. Björklund and the first author [10, Theorem 1.1] strengthened this result by providing explicit bounds on the rank and radius of such a Bohr set, and generalized the result to all countable amenable discrete groups (and hence all countable discrete abelian groups).
Regarding more general threefold sumsets, Bergelson and Ruzsa proved the following:
Theorem B ([7, Theorem 6.1]). Let s 1 , s 2 , s 3 be non-zero integers satisfying s 1 + s 2 + s 3 = 0.
If A ⊂ Z has positive upper Banach density, then s 1 A + s 2 A + s 3 A contains a Bohr set whose rank and radius depend only on s 1 , s 2 , s 3 and d * (A).
Since any Bohr set in Z must contain 0, the condition s1 + s2 + s3 = 0 is easily seen to be necessary by taking A = MZ + 1 for some M > |s1| + |s2| + |s3|: indeed, then s1A + s2A + s3A ⊆ MZ + (s1 + s2 + s3), which misses 0 whenever 0 < |s1 + s2 + s3| < M. In particular, one cannot expect A + A − A to contain a Bohr set for every A of positive upper Banach density.

While the problem of finding Bohr sets in sumsets where the summands have positive upper Banach density has attracted much attention, the analogous question concerning partitions was little studied until recently, and the situation is less well understood. The following question, popularized by Katznelson [31] and Ruzsa [36, Chapter 5], is a well-known open problem in additive combinatorics and dynamical systems.
Question 1.1. If Z = A1 ∪ · · · ∪ Ar, must one of the difference sets Ai − Ai contain a Bohr set?
In terms of dynamical systems, Question 1.1 asks if every set of recurrence for minimal isometries (also known as a set of Bohr recurrence) is also a set of recurrence for minimal topological systems. See [20] for a detailed account of the history of Question 1.1 and many equivalent formulations. See [27] for more equivalent formulations and resolution of some special cases.
Regarding three summands, the second and third authors proved the following partition analogue of Theorem B.
Theorem C ([33, Theorem 1.4]).
(i) Let s1, s2 ∈ Z \ {0}. For any partition Z = A1 ∪ · · · ∪ Ar, there is an i such that s1Ai + s2Ai − s2Ai contains a Bohr set whose rank and radius depend only on s1, s2 and r.
(ii) For any partition Z = A1 ∪ · · · ∪ Ar, there is an i such that Ai − Ai + sAi contains a Bohr set for any s ∈ Z \ {0}.
Rado's theorem says that an equation s1x1 + · · · + skxk = 0 with coefficients sj ∈ Z \ {0} is partition regular over Z \ {0} if and only if there exists a nonempty J ⊂ {1, . . . , k} such that Σ_{j∈J} sj = 0. Combined with Theorem B, part (i) of Theorem C gives a complete characterization of tuples (s1, . . . , sk) ∈ (Z \ {0})^k that guarantee the existence of a Bohr set in s1Ai + · · · + skAi, for some i, as long as k ≥ 3: they are precisely the tuples satisfying Rado's condition.¹ This characterization is a strengthening of Rado's theorem. As the integer s in Part (ii) can be arbitrarily large, this suggests that either the answer to Question 1.1 is positive, or the construction of a counterexample must be very delicate.
1.2. Previous results in compact groups. As part of a general program, we aim to study the Bohr sets in sumsets phenomenon in more general groups. A natural setup is amenable groups, since in these groups there is a natural notion of density, and Bohr sets can also be defined.² A locally compact group G with left Haar measure m_G is said to be amenable if there exists an invariant mean on G, that is, a linear functional λ on L∞(m_G) that is nonnegative (i.e. λ(f) ≥ 0 if f ≥ 0), of norm 1 (i.e. λ(1_G) = 1) and left-invariant (i.e. λ(f_t) = λ(f), where f_t(x) = f(t⁻¹x)). If A ⊂ G is a Borel set, we can define its upper Banach density as
d*(A) = sup{λ(1_A) : λ is an invariant mean on G}.
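For instance, this definition can be evaluated directly for the even integers A = 2Z ⊂ Z: translation invariance forces λ(1_{2Z}) = λ(1_{2Z+1}) for every invariant mean λ, while λ(1_{2Z}) + λ(1_{2Z+1}) = λ(1_Z) = 1, so every invariant mean gives λ(1_{2Z}) = 1/2 and hence d*(2Z) = 1/2.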
The supremum is actually a maximum, since the set of invariant means on G is weak*-compact, by the Banach-Alaoglu theorem. It is well known that all locally compact abelian groups are amenable. Følner [15, 16] generalized Theorem A to discrete abelian groups, and the results of [10] mentioned above apply to countable discrete amenable groups which are not necessarily abelian. Against this backdrop, our objective in this program is threefold. First, we ask for analogues of Theorems B and C in (a subclass of) amenable groups. Second, in the context of general groups, we can replace the dilate sA by ϕ(A), the image of A under a homomorphism ϕ. This point of view leads to a wider range of applications: we can consider linear maps on vector spaces and multiplication by an element in a ring (see Corollary 1.6 below). This broader perspective was also adopted in recent works [2, 3] on Khintchine-type recurrence for actions of an abelian group. Third, we aim for uniformity in terms of rank and radius of the Bohr set in question, i.e., they are allowed to depend on d*(A) and other parameters, but not A itself. This is because, in some situations, the existence of Bohr sets is straightforward (for example, an interval around 0 in R/Z always contains a Bohr set), but obtaining uniformity is much harder.

1 To see that this condition is necessary, suppose s1Ai + · · · + skAi contains a Bohr set. By giving 0 its own partition class, we may assume 0 ∉ Ai. Since a Bohr set must necessarily contain 0, this implies that there are xj ∈ Ai such that s1x1 + · · · + skxk = 0, and Rado's condition applies. To see that this condition is sufficient, observe that (s + t)A ⊂ sA + tA, so the case k ≥ 3 can be reduced to the case k = 3.
2 For non-abelian groups G, Bohr sets can be defined in terms of finite-dimensional unitary irreducible representations of G, see [10].
In [33], these objectives were achieved for compact abelian groups. Note that in this case, the only invariant mean on G is given by m G (the normalized Haar measure on G) and d * (A) = m G (A). The second and third authors proved the following.
Theorem D (Le-Lê [33]). Let K be a compact abelian group with normalized Haar measure m K . Let ϕ 1 , ϕ 2 , ϕ 3 : K → K be commuting continuous endomorphisms such that [K : ϕ j (K)] < ∞ for each j.
(i) If ϕ1 + ϕ2 + ϕ3 = 0 and A ⊂ K is a Borel set with m_K(A) > 0, then ϕ1(A) + ϕ2(A) + ϕ3(A) contains a Bohr-(k, η) set, where k and η depend only on m_K(A) and [K : ϕj(K)].
(ii) If K = A1 ∪ · · · ∪ Ar is a partition of K into Borel sets, then there exists i such that ϕ1(Ai) + ϕ2(Ai) − ϕ2(Ai) contains a Bohr-(k, η) set, where k and η depend only on r and [K : ϕj(K)].

The finite index condition is necessary and also appears in [2]. On the other hand, we do not know if the assumption that the ϕj commute can be omitted.
1.3. New results in discrete groups. In this paper we extend many of the preceding results to the setting of countable discrete abelian groups. Our main results are discrete analogues of Theorem D, and as such are direct generalizations of Theorems B and C.

Theorem 1.2. Let G be a countable discrete abelian group. Let ϕ1, ϕ2, ϕ3 : G → G be commuting endomorphisms such that ϕ1 + ϕ2 + ϕ3 = 0 and [G : ϕj(G)] are finite for j ∈ {1, 2, 3}. Suppose A ⊂ G has positive upper Banach density, i.e. d*(A) > 0. Then the set
ϕ1(A) + ϕ2(A) + ϕ3(A)
contains a Bohr-(k, η) set, where k and η depend only on d*(A) and the indices [G : ϕj(G)].

Remark 1.3.
• In the special case ϕj(x) = sjx where sj ∈ Z \ {0}, Theorem 1.2 was proven by the first author [23] without the conclusion on the uniformity of k and η.
• The conclusion of Theorem 1.2 remains valid if the ϕj do not necessarily commute, but one of them is an automorphism. Indeed, assume that ϕ1 is an automorphism. We observe that
ϕ1(A) + ϕ2(A) + ϕ3(A) = ϕ1(A + ϕ1⁻¹ ∘ ϕ2(A) + ϕ1⁻¹ ∘ ϕ3(A)).
Consider the endomorphisms Id, ϕ1⁻¹ ∘ ϕ2 and ϕ1⁻¹ ∘ ϕ3. They add up to 0 since
Id + ϕ1⁻¹ ∘ ϕ2 + ϕ1⁻¹ ∘ ϕ3 = Id + ϕ1⁻¹ ∘ (ϕ2 + ϕ3) = Id + ϕ1⁻¹ ∘ (−ϕ1) = 0.
They also commute³, and have finite index images. Theorem 1.2 implies A + ϕ1⁻¹ ∘ ϕ2(A) + ϕ1⁻¹ ∘ ϕ3(A) contains a Bohr set, and the image of a Bohr set under an automorphism is easily seen to be a Bohr set of the same rank and radius (see Lemma 2.2).
• The hypothesis ϕ1 + ϕ2 + ϕ3 = 0 cannot be removed, as demonstrated in the remark after Theorem B.
• Similarly, the condition that each index [G : ϕj(G)] is finite cannot be omitted. For example, take G = Z, ϕ1(x) = x, ϕ2(x) = −x, and ϕ3(x) = 0 for x ∈ Z. Then ϕ1(A) + ϕ2(A) + ϕ3(A) = A − A, which by Kriz's result mentioned above need not contain a Bohr set.

Theorem 1.4. Let G be a discrete abelian group and let ϕ1, ϕ2 : G → G be commuting endomorphisms such that [G : ϕ1(G)] and [G : ϕ2(G)] are finite. Then for any partition G = A1 ∪ · · · ∪ Ar, there exists i ∈ {1, . . . , r} such that
ϕ1(Ai) + ϕ2(Ai) − ϕ2(Ai)
contains a Bohr-(k, η) set, where k and η depend only on r and the indices [G : ϕj(G)].

3 Whenever three endomorphisms sum to 0 and two of them commute, all three must commute. Since Id commutes with every endomorphism, these three commute.
Remark 1.5.
• In contrast to Theorem 1.2 and Theorem 1.7 below, Theorem 1.4 does not assume G is countable. The reason is that the former two theorems use Kronecker factors via Furstenberg's correspondence principle, and the theory of factors requires the group to be countable. There are two ways to think of a factor of a measure preserving G-system: as a spatial map or as a G-invariant sub-σ-algebra. The latter can be obtained trivially from the former, but the converse is not trivial, and requires the group to be countable (in addition to the σ-algebras being separable). For instance, the method of proof of Theorem 5.15 in [18] requires G to be countable.
• Since Bohr sets contain 0, Theorem 1.4 implies that the equation ϕ1(x) + ϕ2(y) − ϕ2(z) = 0 is partition regular in discrete abelian groups, that is, under any partition G = A1 ∪ · · · ∪ Ar, there exist nonzero x, y, z in the same class Ai such that ϕ1(x) + ϕ2(y) − ϕ2(z) = 0. (To see that we can take x, y, z to be nonzero, give 0 its own partition class.)
• If d*(A) > 0, then A + A − A is not guaranteed to contain a Bohr set, as remarked after Theorem B. In particular, the analogous version of Theorem 1.4 for sets of positive upper Banach density is false.
• The hypothesis that ϕ2(G) has finite index in G cannot be omitted. For example, taking ϕ2 = 0 and ϕ1(x) = x for x ∈ G, the sumset in Theorem 1.4 simplifies to Ai. The question of whether Theorem 1.4 remains true without the assumption that [G : ϕ1(G)] is finite is essentially Question 1.1: we may take ϕ1(x) = 0 and ϕ2(x) = x for all x ∈ G, and the sumset in Theorem 1.4 simplifies to Ai − Ai.
• Similar to Theorem 1.2, the hypothesis that the ϕj commute can be removed if one of them is an automorphism.
As a consequence of Theorems 1.2 and 1.4, we obtain immediately the following number field generalization of Theorems B and C. In [33], this result was proved (at least for Z[i]) using a different argument, similar to Bogolyubov and Bergelson-Ruzsa's proofs of Theorems A and B in Z.

Corollary 1.6. Let K be an algebraic number field of degree d and O_K be its ring of integers (so the additive group of O_K is isomorphic to Z^d). Let s1, s2, s3 ∈ O_K \ {0} be such that s1 + s2 + s3 = 0.
(i) If A ⊂ O K has d * (A) > 0, then s 1 A + s 2 A + s 3 A contains a Bohr set, whose rank and radius depend only on d * (A) and the norms of s 1 , s 2 , s 3 .
(ii) If O_K = A1 ∪ · · · ∪ Ar, then there exists i such that s1Ai + s2Ai − s2Ai contains a Bohr set, whose rank and radius depend only on r and the norms of s1 and s2.
Bergelson, Furstenberg, and Weiss [5,Corollary 1.3] showed that if B, C ⊂ Z have positive upper Banach density and A ⊂ Z is syndetic, then B + C + A contains a translate of a Bohr set. Here a set A ⊂ Z is syndetic if a collection of finitely many translates of A covers Z. Our next theorem not only generalizes Bergelson-Furstenberg-Weiss's result to countable abelian groups but also strengthens it by only assuming that A arises from an arbitrary partition. Moreover, we provide quantitative bounds on the radius and rank of the Bohr set, a feature not presented in [5].
Theorem 1.7. Let G be a countable discrete abelian group and let B, C ⊂ G with d*(B), d*(C) > 0. Then for any partition G = ⋃_{i=1}^r A_i, there is an i ∈ {1, . . . , r} such that B + C + A_i contains a Bohr-(k, η) set, where k and η depend only on d*(B), d*(C) and r.
We deduce Theorems 1.2, 1.4 and 1.7 from their counterparts for compact abelian groups (i.e. Theorems D and 10.1). However, the latter can be used as black boxes and the reader does not need to know their inner workings. The heavy lifting of this paper is done by correspondence principles, which state that sumsets in discrete abelian groups can be modeled by sumsets in compact abelian groups. This strategy dates back at least to Furstenberg's correspondence principle [17], used in his proof of Szemerédi's theorem. However, to accommodate the three different kinds of sumsets in our results, we need three different correspondence principles. These are Proposition 6.2, Proposition 7.1, and Proposition 9.6.
Our bounds for k and η in Theorems 1.2, 1.4 and 1.7 are transferred from and have the same quality as their compact analogues. Since the proof of Theorem D (i) relies on a regularity lemma, the bounds in Theorem 1.2 are of tower type. The proof of Theorem D(ii) relies on the Hales-Jewett theorem, so the bounds in Theorem 1.4 are extremely poor (albeit still primitive recursive). As for Theorem 1.7, we get more appealing bounds of the form η = Ω(d * (B)d * (C)r −1 ) and k = O(d * (B) −2 d * (C) −2 r 2 ), though these may not be optimal (see Question 11.2).
1.4. Main ideas of the proofs.
Here we outline the obstacles to proving Theorems 1.2, 1.4 and 1.7 and our strategies for overcoming them. We will use notation and terminology defined in Section 2.
Theorem 1.2: To prove the first theorem, we find a parameterized solution to the relation

ϕ_1(w) ∈ ϕ_1(A) + ϕ_2(A) + ϕ_3(A).    (2)

For instance, w will satisfy (2) if u + w − ϕ_2(v), u + ϕ_1(v), and u all belong to A for some u, v ∈ G.
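Indeed, since ϕ_1 + ϕ_2 + ϕ_3 = 0 and the ϕ_j commute, applying ϕ_1, ϕ_2, ϕ_3 to these three elements of A and summing gives

ϕ_1(u + w − ϕ_2(v)) + ϕ_2(u + ϕ_1(v)) + ϕ_3(u) = (ϕ_1 + ϕ_2 + ϕ_3)(u) + ϕ_1(w) − ϕ_1 ∘ ϕ_2(v) + ϕ_2 ∘ ϕ_1(v) = ϕ_1(w).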
Then Furstenberg's correspondence principle is applied to show that the set of such w contains the support of the multilinear ergodic average:
I(w) := UC−lim_{g∈G} ∫_X f · T_{ϕ_1(g)}f · T_{w−ϕ_2(g)}f dµ    (3)

where (X, µ, T) is an ergodic G-system and f : X → [0, 1] is a measurable function with ∫_X f dµ = d*(A). As shown in [2], the Kronecker factor (Z, m_Z, R) is characteristic for the average in (3), and so

I(w) = UC−lim_{g∈G} ∫_Z f̃ · R_{ϕ_1(g)}f̃ · R_{w−ϕ_2(g)}f̃ dm_Z,

where f̃ : Z → [0, 1] satisfies ∫ f̃ dm_Z = ∫ f dµ (see Section 2.2 for the definition of UC−lim).
In order to utilize the corresponding result in compact groups [33], we need to show that the homomorphisms ϕ_1, ϕ_2, ϕ_3 induce homomorphisms φ̃_j on Z satisfying φ̃_j ∘ τ = τ ∘ ϕ_j, where τ is a natural embedding of G in Z. This is straightforward under the additional assumption that the spectrum of (X, µ, T) (i.e. the group of eigenvalues) is closed under each ϕ_j. However, the spectrum of (X, µ, T) will not, in general, be closed under the ϕ_j.
To overcome this problem, we find an ergodic extension (Y, ν, S) of (X, µ, T) such that the spectrum of (Y, ν, S) contains a subgroup Γ which extends the spectrum of (X, µ, T) and is invariant under each ϕ_j. After lifting f to Y, the Kronecker factor Z of X can be viewed as a factor of Y, and is still characteristic for the averages in (3). Thus, any extension of Z in Y will also be characteristic for these averages. The group rotation factor K of Y corresponding to Γ is such an extension of Z, and this allows us to transfer the Bohr sets obtained in [33] to G. The diagram below demonstrates the relations among X, Y, Z and K, where Y → X means Y is an extension of X:

Y → X
↓        ↓
K → Z
Theorem 1.4: Similarly, a parameterized solution to the relation ϕ_2(w) ∈ ϕ_1(A) + ϕ_2(A) − ϕ_2(A) is

ϕ_2(v), u + w, u + ϕ_1(v) ∈ A.    (4)
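Indeed, if these three elements all lie in A, then, since ϕ_1 and ϕ_2 commute,

ϕ_1(ϕ_2(v)) + ϕ_2(u + w) − ϕ_2(u + ϕ_1(v)) = ϕ_2(w) + (ϕ_1 ∘ ϕ_2 − ϕ_2 ∘ ϕ_1)(v) = ϕ_2(w).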
The absence of the variable u in the first of these three expressions prohibits us from using Furstenberg's correspondence principle as we do in the proof of Theorem 1.2. Instead we use a second correspondence principle (Proposition 7.1), which models the relevant sumsets by convolutions on the Bohr compactification of G. This idea was used in [10] to express A + A − A in terms of convolutions on a compact group. Parts of this process also already appeared in Følner's works [15, 16]. Specifically, we fix an invariant mean ν on G with d*(A) = ν(1_A), and observe that the difference set A − A contains the support of the convolution 1_A *_ν 1_{−A}(t) := ν(1_A · 1_{A+t}). This convolution is easily verified to be a positive definite function on G, which can therefore be represented as a Fourier transform of a positive measure σ on Ĝ. The continuous part of σ can be ignored, allowing us to expand 1_A *_ν 1_{−A}(t) as a Fourier series and express A + A − A in terms of a convolution h_A * h_A * h_{−A} on bG, the Bohr compactification of G.
To study the more complicated expression ϕ 1 (A) + ϕ 2 (A) − ϕ 2 (A), we need to investigate the relationship between 1 A * ν 1 −A and 1 ϕ 2 (A) * ν 1 −ϕ 2 (A) . This investigation leads to the introduction of Radon-Nikodym densities ρ ν A , ρ ν ϕ 2 (A) and their relationship in Section 4. After the required relationship is established, we put all ingredients together (Proposition 7.1, Corollary 4.10) and use the compact counterpart in [33] to prove Theorem 1.4.
Theorem 1.7: The proof consists of two ingredients:
(i) a result for compact groups (Proposition 10.1), establishing the existence of Bohr sets in sumsets of the form B + C + A_i, where B, C are subsets of a compact abelian group K and K = ⋃_{i=1}^r A_i. We bound the rank and radius in terms of m_K(B), m_K(C), and r, using the pigeonhole principle and elementary estimates on Fourier coefficients.
(ii) a correspondence principle relating the expression B + C + A_i in a discrete abelian group to an analogous expression in a compact abelian group.
The two correspondence principles previously mentioned do not apply to the expression B + C + A i ; see Remark 1.8. Instead, we use a result from [25] which exhibits piecewise Bohr structure in B + C. This allows us to relate B + C + A i to a convolution h B * h C * h A i on a compact group K, where each of these functions takes values in
[0, 1], ∫ h_B dm_K ≥ d*(B), ∫ h_C dm_K ≥ d*(C), and Σ_{i=1}^r h_{A_i} ≥ 1_K.
Remark 1.8. None of the three correspondence principles outlined above subsumes the others. The sumset ϕ 1 (A) + ϕ 2 (A) + ϕ 3 (A) with ϕ 1 + ϕ 2 + ϕ 3 = 0 is translation invariant (replacing A with a translate of A does not affect this sumset) and so a straightforward application of Furstenberg's correspondence principle suffices. The second sumset
ϕ 1 (A) + ϕ 2 (A) − ϕ 2 (A)
is no longer translation invariant and hence requires a different correspondence principle. Since the last sumset B + C + A i is neither translation invariant nor has the form A + B − B, we need yet another correspondence principle. Conversely, one cannot use the third principle for the first two sums since this principle does not retain the relations among the summands which are present in the fact that ϕ 1 (A), ϕ 2 (A), ϕ 3 (A) are images of the same set A.
1.5. Outline of the article. In Section 2, we set up notation and present some basic facts about measure preserving systems, Bohr compactifications, Kronecker factors, etc. In Section 3 we describe a general construction of homomorphisms from discrete groups into compact groups with dense image. This construction is used in the proofs of all of our results. Section 4 is devoted to transferring functions on discrete groups to compact groups, an ingredient used in the proofs of Theorems 1.4 and 1.7. After these preliminaries, Theorem 1.2 is proved in Sections 5 and 6, then Theorem 1.4 is proved in Sections 7 and 8. We prove the correspondence principle needed for Theorem 1.7 in Section 9 and establish the theorem in Section 10. Lastly, we present some open questions in Section 11.

Acknowledgement. We thank the anonymous referee for carefully reading the manuscript, pointing out some oversights, and providing many suggestions which help improve the presentation of the paper. The third author is partially supported by NSF Grant DMS-2246921.

2. Background

2.1. Notation and convention. Throughout this paper, G is a countable discrete abelian group, and K is used to denote a compact Hausdorff abelian group. We use m_K to denote the unique probability Haar measure on K. The set of all continuous functions on K is denoted by C(K).
For r ∈ N, we use [r] to denote {1, 2, . . . , r}. By the support of a function f , denoted by supp f , we mean {x : f (x) ̸ = 0}.
2.2. Følner sequences and uniform Cesàro averages. A sequence F = (F_N)_{N∈N} of finite subsets of G is a Følner sequence if for all g ∈ G,

lim_{N→∞} |F_N △ (g + F_N)| / |F_N| = 0.
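For example, in G = Z the intervals F_N = {1, . . . , N} form a Følner sequence: for g ≥ 0 one has |F_N △ (g + F_N)| = 2g, so the ratio above is 2g/N → 0 (and similarly for g < 0).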
Every countable abelian group admits a Følner sequence. This is due to the fact that all discrete abelian groups are amenable, and having a Følner sequence is one of the many equivalent definitions of amenability for countable discrete groups (see [30]). If F is a Følner sequence and A ⊂ G, the upper density of A with respect to F is
d_F(A) := limsup_{N→∞} |A ∩ F_N| / |F_N|.
The upper Banach density of A is

d*(A) := sup{ d_F(A) : F is a Følner sequence }.    (5)
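To illustrate, for A = 2Z ⊂ Z one has d*(A) = 1/2: applying the Følner property with g = 1 gives |F_N ∩ 2Z| = |F_N ∩ (2Z + 1)| + o(|F_N|) for every Følner sequence (F_N), so d_F(2Z) = 1/2 for every F.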
(For a proof that the definitions (1) and (5) are equivalent, see [9, Proposition A.6].) Let u : G → C be a bounded sequence. We say (u(g))_{g∈G} has a uniform Cesàro average if for every Følner sequence (F_N)_{N∈N}, the limit

lim_{N→∞} (1/|F_N|) Σ_{g∈F_N} u(g)

exists and is independent of the choice of Følner sequence. In this case, we denote the common limit by UC−lim_{g∈G} u(g).
2.3. Measure preserving systems.
A measure preserving G-system (or G-system) is a quadruple X = (X, B, µ, T ) where (X, B, µ) is a probability space and G acts on X by transformations T g which preserve µ; that is
µ(T −1 g A) = µ(A)
for all measurable A ⊂ X and all g ∈ G. In this paper, all probability spaces underlying G-systems are assumed to be separable, that is, B is countably generated modulo null sets, or equivalently, L p (X, B, µ) is separable for all 1 ≤ p < ∞. In particular, if X is a compact metric space, B is its Borel σ-algebra and µ is any probability measure on B, then (X, B, µ) is separable. When there is no danger of confusion, we will suppress the σ-algebra B and write (X, µ, T ) for a G-system. We abbreviate G-systems with boldface letters:
X = (X, µ, T ). The G-system (X, B, µ, T ) is said to be ergodic if µ(A△T −1 g A) = 0 for all g ∈ G implies µ(A) = 0 or µ(A) = 1. If f ∈ L 2 (µ) and g ∈ G, we write T g f for f • T g . This defines an action of G on L 2 (µ) by unitary operators T g . A G-system Y = (Y, D, ν, S) together with a map π : X → Y defined for µ−almost every x ∈ X is a factor of X = (X, B, µ, T ) if π * µ = ν (i.e. µ(π −1 (A)) = ν(A) for all A ∈ D) and for all g ∈ G, π(T g x) = S g π(x) for µ-almost all x ∈ X.
The map π is called a factor map. The space L 2 (ν) can be identified with the subspace of L 2 (µ) consisting of functions of the form h • π where h ∈ L 2 (ν). We use E(·|Y ) : L 2 (µ) → L 2 (ν) to denote the corresponding orthogonal projection. Later we abuse notation and write "Y is a factor of X" instead of "(Y, π) is a factor of X."
For a Følner sequence (F N ) N ∈N in G, functions f 0 , . . . , f k ∈ L ∞ (µ), and sequences s 1 , . . . , s k : G → G, we say the factor Y is characteristic for the average
I := lim_{N→∞} (1/|F_N|) Σ_{g∈F_N} ∫_X f_0 · T_{s_1(g)}f_1 ⋯ T_{s_k(g)}f_k dµ

if

I = lim_{N→∞} (1/|F_N|) Σ_{g∈F_N} ∫_Y f̃_0 · T_{s_1(g)}f̃_1 ⋯ T_{s_k(g)}f̃_k dν,

where f̃_i = E(f_i | Y).
Let " G denote the Pontryagin dual of G, i.e. the group of characters χ : G → S 1 with the operation of pointwise multiplication. A character χ ∈ "
G called an eigenvalue of X if there exists a nonzero function f ∈ L 2 (µ) such that T g f = χ(g)f for all g ∈ G. The set of all eigenvalues for X forms a subgroup of " G, called the spectrum of X and denoted by E(X). If Y is a factor of X, then E(Y) is a subgroup of E(X). If X is ergodic, then all eigenspaces are one-dimensional and mutually orthogonal (for a proof, see [39,Theorem 3.1]). Since L 2 (µ) is separable, E(X) is at most countable.
2.4. Kronecker factors. A group rotation G-system is a G-system K = (K, m_K, R) in which
• K is a compact metrizable abelian group with Borel σ-algebra K and probability Haar measure m_K, and
• there is a homomorphism τ : G → K such that R_g(z) = z + τ(g) for all z ∈ K and g ∈ G.
The group rotation (K, m K , R) is ergodic if and only if τ (G) is dense in K. In this case, (K, m K , R) is in fact uniquely ergodic, i.e. m K is the unique R-invariant probability measure on K (for a proof, see [2,Lemma 2.4]). Consequently, the sequence (τ (g)) g∈G is well-distributed in K, i.e. for every continuous function h ∈ C(K),
UC−lim_{g∈G} h(τ(g)) = ∫_K h dm_K.    (6)
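A classical instance is G = Z, K = R/Z and τ(n) = nα mod 1 with α irrational: here τ(Z) is dense, and (6) recovers Weyl's equidistribution theorem, UC−lim_{n∈Z} h(nα) = ∫₀¹ h(x) dx for continuous h, the uniformity over all Følner sequences following from unique ergodicity as noted above.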
For an ergodic G-system X, its Kronecker factor K = (K, m K , R) is a factor of X with factor map π : X → K such that L 2 (m K ) is spanned by the eigenfunctions of X, meaning:
(i) every eigenfunction f ∈ L 2 (µ) is equal µ-a.e. tof • π for some eigenfunctionf ∈ L 2 (m K ), and (ii) the span of the eigenfunctions of K is dense in L 2 (m K ).
It can be shown that K is the largest factor of X that is isomorphic to an ergodic group rotation G-system. More concretely,
K = (K, m_K, R) where K = Ê(X), the dual of the spectrum E(X) (see Lemma 3.3 (iii)).

Let (X, µ, T) be an ergodic G-system with Kronecker factor (K, m_K, R) and f_1, f_2, f_3 ∈ L^∞(X). It is shown in [2, Theorem 3.1] that if ϕ, ψ : G → G are homomorphisms such that ϕ(G), ψ(G), and (ψ − ϕ)(G) each have finite index in G, then

UC−lim_{g∈G} ∫_X f_1 · T_{ϕ(g)}f_2 · T_{ψ(g)}f_3 dµ    (7)

exists and is equal to

UC−lim_{g∈G} ∫_K f̃_1 · R_{ϕ(g)}f̃_2 · R_{ψ(g)}f̃_3 dm_K,

where f̃_i = E(f_i | K) is the projection of f_i onto L²(m_K).
In other words, the Kronecker factor is characteristic for the average in (7).
2.5. Invariant means. If f ∈ ℓ ∞ (G) and t ∈ G, define f t ∈ ℓ ∞ (G) by f t (s) := f (s − t). An invariant mean on G is a positive linear functional ν : ℓ ∞ (G) → C such that ν(1 G ) = 1 and ν(f t ) = ν(f ) for every f ∈ ℓ ∞ (G), t ∈ G.
In the weak * topology on ℓ ∞ (G) * , the space M (G) of invariant means forms a compact convex set. An invariant mean ν is said to be extremal, or an extreme point, if it cannot be written as a convex linear combination of two other invariant means.
Bauer's maximum principle [1, 7.69] implies that if C is a compact convex subset of a locally convex Hausdorff space, then every real-valued continuous linear functional on C has a maximizer that is an extreme point. Thus if A ⊂ G, there is an extremal invariant mean ν such that d * (A) = ν(1 A ).
Let H be a countable abelian group and ϕ : G → H be a surjective homomorphism. For any invariant mean ν on G, the pushforward ϕ * ν is an invariant mean on H and is defined by
ϕ * ν(h) := ν(h • ϕ),
for all h ∈ ℓ ∞ (H). Given f ∈ ℓ ∞ (G) and an invariant mean ν, we sometimes write
G f (t) dν(t) instead of ν(f ). If g ∈ ℓ ∞ (G)
, we define the "convolution" of f and g with respect to ν by
f * ν g(t) := G f (x)g(t − x) dν(x).
In conventional notation, this could be written as f * ν g :
= ν((g ′ ) t f ), where g ′ (x) := g(−x).
The following lemma is a special case of [9, Proposition 2.1].
Lemma 2.1. If λ is an extremal invariant mean on G and f, g ∈ ℓ^∞(G), then

∫_{G²} f(t) g(t − s) dλ(t) dµ(s) = λ(f) λ(g)    (8)

for every invariant mean µ on G.
For completeness we include a proof.
Proof. It suffices to prove (8) for 0 ≤ f ≤ 1.
When λ(f ) = 0 or 1, it is straightforward to check (8). Suppose λ(f ) = α ∈ (0, 1). Define two invariant means η and η ′ by
η(g) = 1 α G 2 f (t)g(t−s) dλ(t)dµ(s) and η ′ (g) = 1 1 − α G 2 (1−f (t))g(t−s) dλ(t)dµ(s).
Then it is easy to check that λ(g) = αη(g) + (1 − α)η′(g). Since λ is extremal, we must have η = η′ = λ, and we are done. □

2.6. Bohr compactification. The Bohr compactification of G is a compact abelian group bG, together with a homomorphism τ : G → bG such that τ(G) is dense in bG and every character χ ∈ Ĝ can be written as χ = χ′ ∘ τ, where χ′ is a continuous homomorphism from bG to S¹. The homomorphism τ is universal with respect to homomorphisms into compact Hausdorff groups; that is, if K is another compact Hausdorff group and π : G → K is a homomorphism, then there is a unique continuous homomorphism π̃ : bG → K such that π = π̃ ∘ τ. The Bohr compactification also has a concrete description; it is the dual of Ĝ where Ĝ is given the discrete topology (see Section 3). See [34] for basic results on the Bohr compactification and [9] for a recent application to sumsets.
2.7. Lemmas on Bohr sets. We document two lemmas concerning Bohr sets for later use. Similar lemmas for compact abelian groups have been proved in [33]; the proofs for arbitrary abelian groups are identical and so we omit them.
The first lemma states that the preimage of a Bohr set is a Bohr set. The second lemma says that the image of a Bohr set under a homomorphism with finite index image is again a Bohr set.
Lemma 2.2. Let G and H be abelian groups and let τ : G → H be a homomorphism. If B is a Bohr-(k, η) set in H, then τ^{−1}(B) is a Bohr-(k, η) set in G.

Lemma 2.3. Let ϕ : G → G be an endomorphism with [G : ϕ(G)] < ∞. If B is a Bohr-(k, η) set in G, then ϕ(B) is a Bohr-(k′, η′) set in G where k′, η′ depend only on k, η, and [G : ϕ(G)].
2.8. Almost periodic functions and null functions. A function on G of the form g ↦ Σ_{i=1}^k c_i χ_i(g), where c_i ∈ C and χ_i ∈ Ĝ, is called a trigonometric polynomial. An f ∈ ℓ^∞(G) is called a (Bohr) almost periodic function if it is a uniform limit of a sequence of trigonometric polynomials. Alternatively, f is almost periodic if f = h ∘ τ where h is a continuous function on bG and τ : G → bG is the natural embedding. Given an almost periodic function f, a χ ∈ Ĝ, and an invariant mean ν on G, we write f̂(χ) for the Fourier coefficient ν(f χ̄); it is easy to verify that for an almost periodic f, f̂(χ) does not depend on the choice of ν.
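For example, if f = Σ_{i=1}^k c_i χ_i is itself a trigonometric polynomial, then f̂(χ_i) = c_i and f̂(χ) = 0 for χ ∉ {χ_1, . . . , χ_k}: invariance gives ν(ξ) = ξ(g)ν(ξ) for every character ξ and every g ∈ G, so ν(ξ) = 0 for every nontrivial ξ, and expanding ν(f χ̄) termwise yields the claim.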
An f ∈ ℓ ∞ (G) is called a null function if ν(|f |) = 0 for every invariant mean ν on G.
3. Dense images of discrete groups in compact groups

This section describes a general way to construct a homomorphism τ : G → K from a discrete abelian group G into a compact abelian group K. It also provides sufficient conditions for an endomorphism ϕ of G to induce an endomorphism φ̃ of K. This framework provides a concrete description of the Bohr compactification of G and of the Kronecker factor of an ergodic G-system. We start with the following.

Lemma 3.1. Let Γ be a topological abelian group and let ϕ : Γ → Γ be a continuous endomorphism. Define ϕ* : Γ̂ → Γ̂ by ϕ*(χ) := χ ∘ ϕ. Then (i) ϕ* is a continuous endomorphism of Γ̂, and (ii) identifying each γ ∈ Γ with the evaluation map e_γ(χ) := χ(γ) on Γ̂, we have (ϕ*)*(e_γ) = e_{ϕ(γ)} for all γ ∈ Γ.

Proof. (i) By definition, Γ̂ is equipped with the topology of uniform convergence on compact subsets of Γ. It therefore suffices to prove that if (χ_n)_{n∈I} is a net of elements of Γ̂ converging to χ ∈ Γ̂ uniformly on compact subsets of Γ, then (χ_n ∘ ϕ)_{n∈I} converges to χ ∘ ϕ uniformly on compact subsets of Γ. Continuity of ϕ implies ϕ(K) is compact for every compact K ⊂ Γ, so the assumption that χ_n → χ uniformly on every compact K ⊂ Γ implies χ_n → χ uniformly on ϕ(K) for every compact K ⊂ Γ. But this means (χ_n ∘ ϕ)_{n∈I} converges to χ ∘ ϕ uniformly on compact subsets of Γ, as desired.
(ii) For γ ∈ Γ, define the evaluation map e γ (χ) = χ(γ) for any χ ∈ Γ. It suffices to prove that (ϕ * ) * (e γ ) = e ϕ(γ) ,
meaning (ϕ * ) * (e γ )(χ) = χ(ϕ(γ)) for all χ ∈ Γ. To see this, note that χ → (ϕ * ) * (e γ )(χ) is defined by e γ (ϕ * (χ)) = e γ (χ • ϕ). □
We now apply Lemma 3.1 in the case where Γ is a discrete group.
Lemma 3.2. Let Λ be a subgroup of Ĝ, viewed as a discrete group, so that Λ̂ is compact. For g ∈ G, define the evaluation map e_g(χ) = χ(g) for χ ∈ Ĝ. Define a homomorphism τ : G → Λ̂ by τ(g) = e_g|_Λ. Then
(i) τ(G) is dense in Λ̂.
(ii) Suppose ϕ : G → G is an endomorphism such that χ ∘ ϕ ∈ Λ for all χ ∈ Λ. Then there is a continuous endomorphism φ̃ of Λ̂ such that φ̃ ∘ τ = τ ∘ ϕ. Furthermore, [Λ̂ : φ̃(Λ̂)] ≤ [G : ϕ(G)].

Proof. (i) Let ψ ∈ Λ̂, let F = {χ_1, . . . , χ_d} ⊂ Λ be finite, and ε > 0. We will show that there is a g ∈ G such that |ψ(χ_j) − e_g(χ_j)| < ε for all χ_j ∈ F. Consider the subgroup

H := {(χ_1(g), . . . , χ_d(g)) : g ∈ G} ⊂ (S¹)^d.
It suffices to prove that
t⃗ := (ψ(χ_1), . . . , ψ(χ_d)) ∈ H̄.    (9)

Assume, to get a contradiction, that (9) is false. Then there is a nontrivial character α of (S¹)^d which annihilates H̄ but does not annihilate t⃗. Writing α(x_1, . . . , x_d) as x_1^{n_1} ⋯ x_d^{n_d}, we have

χ_1(g)^{n_1} ⋯ χ_d(g)^{n_d} = 1 for all g ∈ G,    (10)

but ψ(χ_1)^{n_1} ⋯ ψ(χ_d)^{n_d} ≠ 1. Since ψ is a character, the latter equation means

ψ(χ_1^{n_1} ⋯ χ_d^{n_d}) ≠ 1.    (11)

But (10) means that χ_1^{n_1} ⋯ χ_d^{n_d} is trivial, contradicting (11).
(ii) Define ϕ′ : Λ → Λ by ϕ′(χ) = χ ∘ ϕ. Let φ̃ := (ϕ′)* as in Lemma 3.1, meaning that for ψ ∈ Λ̂, φ̃(ψ) = ψ ∘ ϕ′. By Lemma 3.1, φ̃ is a continuous endomorphism. To verify that φ̃ ∘ τ = τ ∘ ϕ, fix χ ∈ Λ, g ∈ G, and evaluate

φ̃(τ(g))(χ) = e_g(ϕ′(χ)) = e_g(χ ∘ ϕ) = χ ∘ ϕ(g) = e_{ϕ(g)}(χ) = τ(ϕ(g))(χ).

Thus φ̃ ∘ τ = τ ∘ ϕ. Now let k = [G : ϕ(G)] (assuming this index is finite), and let t_j + ϕ(G), j = 1, . . . , k, be coset representatives of ϕ(G). The identity φ̃ ∘ τ = τ ∘ ϕ implies φ̃(Λ̂) contains the closure of τ(ϕ(G)). The latter subgroup has index at most k, since the sets τ(t_j) + cl(τ(ϕ(G))) are closed and cover a dense subset of Λ̂. Thus φ̃(Λ̂) also has index at most k. □

It can be shown that all homomorphisms from G into compact groups with dense images arise from the construction in Lemma 3.2, though we do not need this fact. When Λ = Ĝ with the discrete topology, Λ̂ is the Bohr compactification bG of G, which is relevant in the proof of Theorem 1.4.
In the proofs of Theorems 1.2 and 1.7, we will focus on the case where Λ is at most countable. The relevance of countability is that, in this case, Λ̂ is compact and metrizable. Consequently, its Borel σ-algebra is separable (so the theory of factors applies).
The group Λ̂ being abelian, we can write its group operation additively. Equipped with its normalized Haar measure m_Λ̂, Λ̂ is naturally endowed with a group rotation via the G-action R given by R_g(z) := z + τ(g) for all z ∈ Λ̂ and g ∈ G, where τ is defined in Lemma 3.2. Since τ(G) is dense in Λ̂, this action is ergodic. We will now state some properties of these group rotations.

Lemma 3.3. Let Λ be a countable subgroup of Ĝ and let (Λ̂, m_Λ̂, R) be the associated group rotation.
(i) E((Λ̂, m_Λ̂, R)) = Λ, and the eigenspace of each λ ∈ Λ is spanned by v_λ, where v_λ(x) = x(λ) for all x ∈ Λ̂.
(ii) If Λ_1 ≤ Λ_2 are countable subgroups of Ĝ, then the group rotation associated with Λ_1 is a factor of the group rotation associated with Λ_2.
(iii) If X = (X, µ, T) is an ergodic G-system and Λ = E(X), then (Λ̂, m_Λ̂, R) is the Kronecker factor of X.
Proof. (i) For λ ∈ Λ and x ∈ Λ̂, we have v_λ(x + τ(g)) = (x + e_g)(λ) = x(λ)λ(g) = λ(g)v_λ(x). This shows that λ is an eigenvalue of (Λ̂, m_Λ̂, R) and v_λ is a corresponding eigenvector. Conversely, suppose χ ∈ Ĝ and there exists a non-zero f ∈ L²(Λ̂) such that for all g ∈ G, f(x + τ(g)) = χ(g)f(x) for almost all x; we need to show that χ ∈ Λ. Since f is not zero, there exists λ ∈ Λ such that f̂(λ) ≠ 0. Computing the Fourier coefficients of both sides, we have

χ(g) f̂(λ) = e_g(λ) f̂(λ) = λ(g) f̂(λ)

for any g ∈ G. Since f̂(λ) ≠ 0, this implies that χ(g) = λ(g) for any g ∈ G. Therefore, χ = λ ∈ Λ. Furthermore, this also shows that f has exactly one non-zero Fourier coefficient and f = f̂(λ)v_λ.

(ii) Define π : Λ̂_2 → Λ̂_1 by π(x) = x|_{Λ_1} for all x ∈ Λ̂_2. Then π is a surjective, continuous group homomorphism. By [33, Lemma 2.7], π is measure-preserving.
Recall that the homomorphisms from G to Λ 1 and Λ 2 are τ 1 (g) = e g | Λ 1 and τ 2 (g) = e g | Λ 2 . It is clear that π(x + τ 2 (g)) = π(x) + τ 1 (g), thus showing that π is a factor map. (iii) We assume (see Section 2.3) that L 2 (µ) is separable. For each λ ∈ Λ = E(X), there is an eigenvector f λ ∈ L 2 (X) such that T g f λ = λ(g)f λ for any g ∈ G. Arguing similarly to [39,Theorem 3.4], we may assume that |f λ | = 1 and f λξ = f λ f ξ for any λ, ξ ∈ Λ. Defining V (v λ ) = f λ and extending V linearly, we have an isometry V : L 2 ( Λ) → L 2 (X) satisfying V (f g) = V (f )V (g) for any f, g ∈ L 2 ( Λ). By [39,Theorem 2.4], V induces a homomorphism of measure algebras, and therefore a factor map X → Λ. Since E( Λ) = Λ, part (ii) shows that Λ is the largest group rotation that is a factor of X. □
4. Radon-Nikodym densities
In this section we make no assumption on the countability (or uncountability) of G. In particular, the lemmas here will apply when G is an arbitrary discrete abelian group.
Definition 4.1. Let K be a compact abelian group, τ : G → K a homomorphism with dense image, and ν an invariant mean on G. Given f : G → [0, 1], the Radon-Nikodym density of f (with respect to ν, K, and τ) is the function ρ^ν_f ∈ L¹(m_K) satisfying

ν((h ∘ τ) · f) = ∫_K h · ρ^ν_f dm_K    (12)

for every continuous h : K → C. It is unique up to m_K-measure 0.
Thus ρ ν f depends on the compact group K and the map τ . When f = 1 A is the characteristic function of a subset of G, we write ρ ν A in place of ρ ν 1 A to avoid nested subscripts. Given an invariant mean ν on G, and f : G → [0, 1] we will prove that there is a function ρ ν f satisfying Definition 4.1. We first observe the following.
Lemma 4.2. For all h ∈ C(K), we have ν(h • τ ) = K h dm K .(13)
Proof. We define a linear functional L on C(K) by
L(h) := ν(h • τ ).
By the Riesz representation theorem, there exists a regular Borel probability measure m on K such that L(h) = K h dm. On the other hand, for any g ∈ G, we have
L(h τ (g) ) = ν((h • τ ) g ) = ν(h • τ ) = L(h)(14)
by translation invariance of ν. Since the map x → h x from K to C(K) is continuous, and since τ (G) is dense in K, (14) implies L(h x ) = L(h) for all x ∈ K. Hence m is translation invariant. By uniqueness of the Haar measure, we have m = m K as desired. □
Given f : G → [0, 1], we define a linear functional Λ ν f : C(K) → R by Λ ν f (h) := ν((h • τ ) · f ).(15)
Clearly Λ ν f is a positive linear functional; thus by the Riesz representation theorem, there exists a regular Borel measure m on K such that
Λ^ν_f(h) = ∫_K h dm    (16)

for all h ∈ C(K).

Lemma 4.3. The measure m is absolutely continuous with respect to m_K.

Proof. First, by (13), we have

∫_K h dm = ν((h ∘ τ) · f) ≤ ν(h ∘ τ) = ∫_K h dm_K    (17)
for any h ∈ C(K). Now let B ⊂ K be a Borel set and ϵ > 0. By inner regularity of m, there is a compact set V ⊂ B such that m(B) ≤ m(V) + ϵ, and by outer regularity of m_K, there is an open set U ⊇ V such that m_K(U) ≤ m_K(V) + ϵ ≤ m_K(B) + ϵ. By Urysohn's lemma, there is a continuous h : K → [0, 1] with 1_V ≤ h ≤ 1_U. For this h,

m(B) ≤ m(V) + ϵ ≤ ∫_K h dm + ϵ ≤ ∫_K h dm_K + ϵ ≤ m_K(U) + ϵ ≤ m_K(B) + 2ϵ.
Since ϵ is arbitrary, this implies that m(B) ≤ m K (B). Therefore, m is absolutely continuous with respect to m K . □
We now prove that, for each f : G → [0, 1], there is a ρ^ν_f satisfying (12). Given such an f, we consider the measure m on K defined above. Since m is absolutely continuous with respect to m_K, we may define ρ^ν_f to be the Radon-Nikodym derivative of m with respect to m_K, meaning ρ^ν_f is the unique (up to m_K-measure 0) function in L¹(m_K) satisfying ∫ h ρ^ν_f dm_K = ∫ h dm for all h ∈ C(K). Then (12) follows from (15) and (16). The inequality 0 ≤ ρ^ν_f ≤ 1 m_K-a.e. follows from the fact that 0 ≤ m(B) ≤ m_K(B) for all Borel sets B.
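To illustrate Definition 4.1, take G = Z, K = Z/2Z, τ(g) = g mod 2 and f = 1_{2Z}. Every invariant mean ν on Z satisfies ν(1_{2Z}) = ν(1_{2Z+1}) = 1/2, so for any h : K → C we get ν((h ∘ τ) · 1_{2Z}) = h(0)/2 = ∫_K h · 1_{{0}} dm_K; hence ρ^ν_{2Z} = 1_{{0}} = 1_{τ(2Z)}, consistent with Lemma 4.4 below.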
4.2. Properties of ρ^ν_A. We will now state some properties of ρ^ν_f when f is the characteristic function of a set. Recall that we write ρ^ν_A in place of ρ^ν_{1_A}.
Lemma 4.4. Let A ⊂ G and let ν be an invariant mean on G. Then
(i) K ρ ν A dm K = ν(1 A ), (ii) ρ ν A is supported on τ (A), that is, ρ ν A = 0 m K -a.e. on K \ τ (A).
Proof. The first claim follows from the definition of ρ ν A . For the second claim, let h : K → R ≥0 be any continuous function that is supported on K \ τ (A). If g ∈ A, then τ (g) ∈ τ (A) will not be in the support of h. In other words, h • τ · 1 A (g) = 0 for all g ∈ G, and so
K h · ρ ν A dm K = ν((h • τ ) · 1 A ) = 0.(18)
Suppose for a contradiction that there exists a Borel set
V ⊂ K \ τ (A) with m K (V ) > 0 such that ρ ν A > 0 on V .
Since m K is regular, we may assume that V is closed. By Urysohn's lemma, there is a continuous function h : K → [0, 1] that is equal to 1 on V and 0 on τ (A). Then (18) implies that V ρ ν A dm K = 0, a contradiction. □ Lemma 4.5. Let G = r i=1 A i be a partition of G and let ν be an invariant mean on G. Then
Σ_{i=1}^r ρ^ν_{A_i}(x) = 1 for m_K-almost every x.

Proof. Since Σ_{i=1}^r 1_{A_i} = 1, for any h ∈ C(K),

∫_K h · Σ_{i=1}^r ρ^ν_{A_i} dm_K = Σ_{i=1}^r ν((h ∘ τ) · 1_{A_i}) = ν(h ∘ τ) = ∫_K h dm_K,
where the last equality comes from Lemma 4.2. Since C(K) is dense in L 1 (m K ), this implies that r i=1 ρ ν A i = 1 almost everywhere. □
4.3. Relation between ρ_A and ρ_{ϕ(A)}. Let G = A_1 ∪ ⋯ ∪ A_r. Our proof of Theorem 1.4 relies on a correspondence principle relating ϕ_1(A_i) + ϕ_2(A_i) − ϕ_2(A_i) to a convolution of the form 1_{φ̃_1(B_i)} * 1_{φ̃_2(B_i)} * 1_{φ̃_2(−B_i)} on a compact abelian group K.
To prove such a correspondence principle, we need Lemma 4.6 and Corollary 4.10, which specify the relationship between the Radon-Nikodym densities of 1 A and 1 ϕ(A) . In order to make the relevant issues apparent, the next lemma takes place in slightly greater generality than we need for our application.
Lemma 4.6. Let G and H be discrete abelian groups and let ϕ : G → H be a surjective homomorphism. Let K_1, K_2 be compact abelian groups and τ_1 : G → K_1, τ_2 : H → K_2 be homomorphisms with dense images. Suppose φ̃ : K_1 → K_2 is a continuous surjective homomorphism such that
(i) φ̃ ∘ τ_1 = τ_2 ∘ ϕ, and
(ii) for all χ ∈ K̂_1, if there is a ψ ∈ Ĥ such that χ ∘ τ_1 = ψ ∘ ϕ, then there is a χ′ ∈ K̂_2 such that ψ = χ′ ∘ τ_2 (see the diagram at (19)).
Then for every invariant mean ν on G and every f : H → [0, 1],

ρ^ν_{f∘ϕ} = (ρ^{ϕ*ν}_f) ∘ φ̃   m_{K_1}-almost everywhere.

[Diagram (19): τ_1 : G → K_1 and τ_2 : H → K_2 are the given embeddings, ϕ : G → H and φ̃ : K_1 → K_2 the homomorphisms, and χ : K_1 → S¹, χ′ : K_2 → S¹, ψ : H → S¹ the characters appearing in (ii).]    (19)

Remark 4.7.
• The surjectivity of ϕ is required for ϕ*ν to be an invariant mean on H, and thus for ρ^{ϕ*ν}_f to be defined on K_2.
• The assumption (ii) is satisfied by the groups we use in the proof of Theorem 1.4; namely K_1 will be the Bohr compactification of G, K_2 will be φ̃(K_1), which will coincide with the Bohr compactification bH of H, and τ_2 : H → K_2 will be the usual embedding of H into bH.
Proof. We will prove the equality of the Fourier coefficients:

ρ̂^ν_{f∘ϕ} = ((ρ^{ϕ*ν}_f) ∘ φ̃)^.    (20)
We first identify some characters of G which are orthogonal to f • ϕ.
Claim 4.8. Let ψ ∈ Ĝ. Then ν((f ∘ ϕ) · ψ) = 0 unless ψ = ψ′ ∘ ϕ for some ψ′ ∈ Ĥ. Similarly, if χ ∈ K̂_1 and h ∈ L²(m_{K_2}), then (h ∘ φ̃)^(χ) = 0 unless χ = χ′ ∘ φ̃ for some χ′ ∈ K̂_2.
To see this, assume ψ ∈ Ĝ does not have the form ψ′ ∘ ϕ for some ψ′ ∈ Ĥ. Then there is a g ∈ ker ϕ such that ψ(g) ≠ 1.⁴ We then have

ν((f ∘ ϕ) · ψ) = ν(((f ∘ ϕ) · ψ)_g) = ν((f ∘ ϕ) · ψ_g) = ψ(g) ν((f ∘ ϕ) · ψ).

So ν((f ∘ ϕ) · ψ) = ψ(g) ν((f ∘ ϕ) · ψ), which means (f ∘ ϕ)^(ψ) = 0, since ψ(g) ≠ 1.
This proves the first statement in the claim, and the second statement is proved similarly.
Claim 4.9. Let χ ∈ " K 1 . Then ' ρ ν f •ϕ (χ) = 0 unless χ = χ ′ •φ for some χ ′ ∈ " K 2 .
To prove this claim, let χ ∈ " K 1 . Then
' ρ ν f •ϕ (χ) = K 1 ρ ν f •ϕ χ dm K 1 = ν ((f • ϕ) · (χ • τ 1 )) .
By Claim 4.8, the above evaluates to 0 unless χ • τ 1 = ψ • ϕ for some ψ ∈ " H. Choosing such a ψ, we have
' ρ ν f •ϕ (χ) = ν((f • ϕ) · (ψ • ϕ)) = ϕ * ν(f ψ).
By assumption (ii), we may write ψ as χ ′ • τ 2 for some χ ′ ∈ "
K 2 . Then χ • τ 1 = (χ ′ • τ 2 ) • ϕ = χ ′ •φ • τ 1 . So χ • τ 1 = χ ′ •φ • τ 1 .
The denseness of τ 1 (G) in K 1 and continuity of χ then implies χ = χ ′ •φ. This shows that ' ρ ν f •ϕ (χ) = 0 unless χ = χ ′ •φ for some χ ′ ∈ " K 2 . We now prove equation (20).
Case 1: χ = χ′ ∘ φ̃ for some χ′ ∈ K̂_2. Then

ρ̂^ν_{f∘ϕ}(χ) = ∫_{K_1} ρ^ν_{f∘ϕ} χ dm_{K_1}
 = ν((f ∘ ϕ) · (χ ∘ τ_1))    (by definition of ρ^ν_{f∘ϕ})
 = ν((f ∘ ϕ) · (χ′ ∘ φ̃ ∘ τ_1))
 = ν((f ∘ ϕ) · (χ′ ∘ τ_2 ∘ ϕ))
 = ϕ*ν(f · (χ′ ∘ τ_2))
 = ∫_{K_2} ρ^{ϕ*ν}_f χ′ dm_{K_2}
 = ∫_{K_1} (ρ^{ϕ*ν}_f ∘ φ̃) · (χ′ ∘ φ̃) dm_{K_1}
 = ((ρ^{ϕ*ν}_f) ∘ φ̃)^(χ).

Case 2: χ ≠ χ′ ∘ φ̃ for all χ′ ∈ K̂_2.
In this case, Claim 4.8 implies ⁄ (ρ ϕ * ν f ) •φ(χ) = 0 and Claim 4.9 implies ' ρ ν f •ϕ (χ) = 0. □ Corollary 4.10. Let G be a discrete abelian group, ν an invariant mean on G and ϕ : G → G an endomorphism. Let K be a compact abelian group, τ : G → K a homomorphism with dense image, andφ : K → K an endomorphism such thatφ • τ = τ • ϕ. Assume further that for all χ ∈ " K, if there is a ψ ∈ " G such that χ • τ = ψ • ϕ, then there is a χ ′ ∈ " K such that ψ = χ ′ • τ .
Then for every A ⊂ G, ρ^ν_A ≤ ρ^{ϕ*ν}_{ϕ(A)} ∘ φ̃.

Proof. By Lemma 4.6,

ρ^ν_{1_{ϕ(A)}∘ϕ} = ρ^{ϕ*ν}_{1_{ϕ(A)}} ∘ φ̃.

Since 1_{ϕ(A)} ∘ ϕ = 1_{ϕ^{−1}(ϕ(A))} ≥ 1_A, we have ρ^ν_{1_{ϕ(A)}∘ϕ} ≥ ρ^ν_A. It follows that ρ^ν_{1_A} ≤ ρ^{ϕ*ν}_{1_{ϕ(A)}} ∘ φ̃, meaning ρ^ν_A ≤ ρ^{ϕ*ν}_{ϕ(A)} ∘ φ̃. □
5. Reducing correlation sequences to integrals in compact groups

The goal of this section is to show that certain averages for ergodic G-systems can be reduced to double integrals on a compact group. Lemma 5.1 establishes this for group rotations on a compact abelian group K, as long as the relevant endomorphisms of G extend to all of K.

Lemma 5.1. Let K be a compact abelian group and let τ : G → K be a homomorphism with dense image. Let ϕ_1, ϕ_2, ϕ_3 : G → G be endomorphisms. Suppose there are continuous endomorphisms φ̃_i : K → K such that φ̃_i ∘ τ = τ ∘ ϕ_i for 1 ≤ i ≤ 3. Then for all bounded measurable f_1, f_2, f_3 : K → C, we have
I(f⃗, ϕ⃗) := UC−lim_{g∈G} ∫_K f_1(z + τ(ϕ_1(g))) f_2(z + τ(ϕ_2(g))) f_3(z + τ(ϕ_3(g))) dm_K(z)
 = ∫_{K²} f_1(z + φ̃_1(t)) f_2(z + φ̃_2(t)) f_3(z + φ̃_3(t)) dm_K(z) dm_K(t).
Proof. Since I( ⃗ f , ⃗ ϕ) is continuous in f i (with respect to the L 2 (m K )-norm) and multilinear in f i , it suffices to prove the identity when each f i is a character χ i of K. In this case we have
I(χ 1 , χ 2 , χ 3 , ⃗ ϕ) = U C − lim g∈G K χ 1 χ 2 χ 3 (z) 3 i=1 χ i (τ (ϕ i (g))) dm K (z) = U C − lim g∈G K χ 1 χ 2 χ 3 (z) 3 i=1 χ i •φ i (τ (g)) dm K (z).
By (6), we have
I(χ 1 , χ 2 , χ 3 , ⃗ ϕ) = K 2 χ 1 χ 2 χ 3 (z) 3 i=1 χ i •φ i (t) dm K (z)dm K (t) = K 2 3 i=1 χ i (z +φ i (t)) dm K (z)dm K (t),
and this finishes our proof. □
The next proposition deals with a general ergodic G-system X. The compact group in question will be an extension K of the group Z underlying Kronecker factor of X, constructed to be invariant under the correspondingφ i , as required by Lemma 5.1.
Proposition 5.2. Given an ergodic measure preserving G-system X = (X, µ, T ) and f : X → [0, 1], define I : G → R ≥0 by
I(w) := UC−lim_{g∈G} ∫_X f · T_{ϕ_3(g)}f · T_{w−ϕ_2(g)}f dµ,
where ϕ 2 , ϕ 3 : G → G are endomorphisms such that ϕ 2 , ϕ 3 , ϕ 2 + ϕ 3 have finite index images in G.
Then there are a compact abelian group K, a homomorphism τ : G → K with dense image, endomorphisms φ̃_2, φ̃_3 : K → K and f̃ : K → [0, 1] with ∫_K f̃ dm_K = ∫_X f dµ such that for all w ∈ G,

I(w) = ∫_{K²} f̃(z) f̃(z + φ̃_3(t)) f̃(z + τ(w) − φ̃_2(t)) dm_K(z) dm_K(t).    (21)

Proof. Let ϕ_1 = −ϕ_2 − ϕ_3. We first prove the special case of the lemma where E(X) is invariant under each ϕ_i, meaning that for all eigenvalues λ ∈ E(X) and i ∈ {1, 2, 3}, we have λ ∘ ϕ_i ∈ E(X). In this case, the conclusion was also observed in [2, Remark 3.2]. By [2, Section 3], the Kronecker factor (Z, m_Z, R) of (X, µ, T) is characteristic for the average defining I(w). Let τ : G → Z be the canonical projection. We can therefore replace f with f̃ := E(f|Z) without changing I(w):

I(w) = UC−lim_{g∈G} ∫_Z f̃ · R_{ϕ_3(g)}f̃ · R_{w−ϕ_2(g)}f̃ dm_Z
 = UC−lim_{g∈G} ∫_Z f̃(z) f̃(z + τ(ϕ_3(g))) f̃(z + τ(w − ϕ_2(g))) dm_Z(z).    (22)
In view of Lemma 3.2, letφ i : Z → Z be continuous endomorphisms satisfying τ •ϕ i =φ i •τ . Applying this identity to (22), we have
I(w) = U C − lim g∈G Zf (z)f (z +φ 3 (τ (g)))f (z + τ (w) −φ 2 (τ (g))) dm Z (z).
By Lemma 5.1, we can rewrite the previous line as
I(w) = Z 2f (z)f (z +φ 3 (t))f (z + τ (w) −φ 2 (t))) dm Z (z) dm Z (t).
Taking K = Z, we prove the proposition in this special case.
For the general case, let Λ be the smallest subgroup of " G that contains E(X) and is closed under each ϕ * i . Since E(X) is countable, it is easy to see that Λ is countable. Let K = ( Λ, m Λ , R) be the group rotation on Λ described in Lemma 3.3. By part (i) of Lemma 3.3, we have E(K) = Λ. Since E(Z) = E(X) ⊂ Λ, part (ii) of Lemma 3.3 implies that Z is a factor of K.
We now fix an ergodic G-system Y = (Y, ν, S) that is a common extension of X and K. For example, we can take Y = (X × K, ν, T × R) to be an ergodic joining of X and K. (For details about joinings and the existence of ergodic joinings, see Glasner [19,Section 6] or de la Rue [14, Section 3.1].)
Writing π : Y → X for the factor map, we define f ′ : Y → [0, 1] to satisfy f ′ := f • π and
I ′ (w) := U C − lim g∈G Y f ′ · S ϕ 3 (g) f ′ · S w−ϕ 2 (g) f ′ dν.
Since f′ is a lift of f on X, it is obvious that I′ = I and the Kronecker factor Z of X is characteristic for the averages I′(w). Thus any factor of Y between Y and Z is also characteristic for I′(w). In particular, K is characteristic for I′(w). Now applying an argument similar to the first part of the proof to the factor K of Y and the function f′, we obtain the compact group K = Λ̂, the function f̃ = E(f′|K), and endomorphisms φ̃_i satisfying (21). □

Proposition 6.1. Let δ > 0, let ϕ_2, ϕ_3 : G → G be endomorphisms such that ϕ_2(G), ϕ_3(G) and (ϕ_2 + ϕ_3)(G) have finite index in G, let (X, µ, T) be an ergodic G-system, and let f : X → [0, 1] be measurable with ∫_X f dµ = δ. Define

I(w) := UC−lim_{g∈G} ∫_X f · T_{ϕ_3(g)}f · T_{w−ϕ_2(g)}f dµ.
Then supp(I) contains a Bohr-(k, η) set where k, η depend only on δ and the indices of ϕ i (G) in G.
Proof. By Proposition 5.2, there exist a compact abelian group K with Haar measure m K , a homomorphism τ : G → K with dense image, and endomorphismsφ i : K → K, and
f : K → [0, 1] with Kf dm K = X f dµ = δ such that I(w) = K 2f (z)f (z +φ 3 (t))f (z + τ (w) −φ 2 (t)) dm K (z) dm K (t). Furthermore, [K :φ i (K)] ≤ [G : ϕ i (G)] for each i. Now define I ′ : K → [0, 1] by I ′ ( w) := K 2f (z)f (z +φ 3 (t))f (z + w −φ 2 (t)) dm K (z) dm K (t).
By change of variable z → z +φ 2 (t) and using ϕ 2 + ϕ 3 = −ϕ 1 , we obtain
I ′ ( w) = K 2f (z +φ 2 (t))f (z −φ 1 (t))f (z + w) dm K (z) dm K (t).
Applying the compact counterpart of Theorem 1.2 from [33] (Theorem D (i)) to I′, we conclude that supp(I′) contains a Bohr-(k, η) set where k, η depend only on δ and the indices [K : φ̃_i(K)] ≤ [G : ϕ_i(G)]. Since I = I′ ∘ τ, Lemma 2.2 implies that supp(I) = τ^{−1}(supp I′) contains a Bohr-(k, η) set. □

Proposition 6.2. Let ϕ_1, ϕ_2, ϕ_3 : G → G be commuting endomorphisms with ϕ_1 + ϕ_2 + ϕ_3 = 0 and finite index images, and let A ⊂ G with d*(A) > 0. Then there exist an ergodic G-system (X, µ, T) and f : X → [0, 1] with ∫_X f dµ = d*(A) such that

I(w) := UC−lim_{g∈G} ∫_X f · T_{ϕ_3(g)}f · T_{w−ϕ_2(g)}f dµ

satisfies ϕ_3(supp I) ⊂ ϕ_1(A) + ϕ_2(A) + ϕ_3(A).
Proof. By Furstenberg's correspondence principle (for example, see [6, Theorem 2.8]), there exists an ergodic G-system (X, µ, T ) and a measurable set
E ⊂ X with µ(E) = d*(A) such that for all w_1, w_2 ∈ G,

µ(E ∩ T_{w_1}^{−1}E ∩ T_{w_2}^{−1}E) ≤ d*(A ∩ (A − w_1) ∩ (A − w_2)).
Letting f = 1 E , w 1 = ϕ 3 (g) and w 2 = w − ϕ 2 (g), we deduce that for all w and g ∈ G,
∫_X f · T_{ϕ_3(g)}f · T_{w−ϕ_2(g)}f dµ ≤ d*(A ∩ (A − ϕ_3(g)) ∩ (A − (w − ϕ_2(g)))).
It follows that if w ∈ supp(I), then there are h ∈ A and g ∈ G such that h, h + ϕ 3 (g), and h + w − ϕ 2 (g) all belong to A. Therefore,
ϕ_3(w) = ϕ_1(h) + ϕ_2(h + ϕ_3(g)) + ϕ_3(h + w − ϕ_2(g)) ∈ ϕ_1(A) + ϕ_2(A) + ϕ_3(A)    (23)
and this finishes our proof. Note that in (23), we use the fact that ϕ 2 • ϕ 3 = ϕ 3 • ϕ 2 .
□
We are ready to prove Theorem 1.2.
Proof of Theorem 1.2. By Proposition 6.2, there exists an ergodic G-system (X, µ, T ) and
f : X → [0, 1] with ∫_X f dµ = d*(A) such that

I(w) = UC−lim_{g∈G} ∫_X f · T_{ϕ_3(g)}f · T_{w−ϕ_2(g)}f dµ

has ϕ_3(supp(I)) ⊂ ϕ_1(A) + ϕ_2(A) + ϕ_3(A).
In view of Proposition 6.1, supp(I) contains a Bohr-(k, η) set where k, η depend only on d*(A) and the indices of ϕ_i(G) in G. Lemma 2.3 then implies that ϕ_3(supp(I)) contains a Bohr-(k′, η′) set where k′, η′ depend only on d*(A) and the indices mentioned above. □
7. Second correspondence principle
In this section we establish the second correspondence principle Proposition 7.1, which is used in the proof of Theorem 1.4. This can be thought of as a special case of Propositions 3.1 and 3.2 of [10]. Here we write bG for the Bohr compactification of G.
Proposition 7.1 (Second correspondence principle). Let K = bG and let τ : G → K be the natural embedding. Let A, B ⊂ G and let ν, λ be two invariant means on G where λ is extremal.
Then A + B − B contains τ −1 (supp(ρ ν A * ρ λ B * ρ λ −B )).
Proof. By Lemma 4.4, the Radon-Nikodym density ρ ν A is supported on τ (A). Therefore the convolution ρ ν A * ρ λ B , which is defined as
ρ ν A * ρ λ B (z) := K ρ ν A (x)ρ λ B (z − x) dm K (x), is supported on τ (A)+τ (B) = τ (A + B). Similarly ρ ν A * ρ λ B * ρ λ −B is supported on τ (A + B − B)
. This, however, is weaker than the conclusion of Proposition 7.1 and is insufficient for our purpose.
Define ϕ, θ : G → [0, 1] by
ϕ(t) := 1_B *_λ 1_{−B}(t) := ∫_G 1_B(x) 1_{−B}(t − x) dλ(x)

and

θ(t) := 1_A *_ν ϕ(t) := ∫_G 1_A(y) ϕ(t − y) dν(y).
We can see that θ is supported on
A + B − B. It remains to show that θ = (ρ ν A * ρ λ B * ρ λ −B ) • τ .
Claim 7.2. ϕ = η + ψ, where ψ is a null function and η := (ρ^λ_B * ρ^λ_{−B}) ∘ τ.
Proof of claim. One can verify that ϕ is positive definite by writing Σ_{g,h∈G} c_g c̄_h ϕ(g − h) as

∫_G (Σ_g c_g 1_B(x − g)) (Σ_h c̄_h 1_B(x − h)) dλ(x) = ∫_G |Σ_g c_g 1_B(x − g)|² dλ(x) ≥ 0

for a finite collection of coefficients c_g ∈ C. Therefore, by the Bochner-Herglotz Theorem, ϕ is the Fourier transform of a positive measure σ on Ĝ. Decomposing σ = σ_d + σ_c, where σ_d is the discrete component of σ and σ_c is the continuous part, we have

ϕ = σ̂_d + σ̂_c.    (24)
Since σ_d has only countably many atoms, σ̂_d is an almost periodic function. On the other hand, by Wiener's lemma (see [21, Théorème 16(2)]), ∫_G |σ̂_c|² dµ = 0 for all invariant means µ on G. Now we will prove that σ̂_d = η. We first show that σ̂_d and η are almost periodic functions defined by Fourier series on G with absolutely summable coefficients. To see this for σ̂_d, we write σ̂_d = Σ_{χ∈Ĝ} σ({χ})χ, where Σ_{χ∈Ĝ} σ({χ}) is a convergent sum of nonnegative values. For η, note that both ρ^λ_B and ρ^λ_{−B} are in L²(m_K). Thus, their Fourier coefficients are square-summable, and the Fourier coefficients of ρ^λ_B * ρ^λ_{−B} are absolutely summable. To prove that σ̂_d = η, it therefore suffices to prove that σ̂_d and η have the same Fourier coefficients. This is the same as showing that ϕ and η have the same Fourier coefficients, as the Fourier coefficients of σ̂_c are all 0 (since σ̂_c is a null function). So we verify that
µ(ϕχ) = µ(ηχ)

for every invariant mean µ on G and every character χ ∈ Ĝ. Fix the invariant mean µ, characters χ ∈ Ĝ, and χ′ ∈ K̂ such that χ = χ′ ∘ τ. We then have

µ(ϕχ) = ∫_{G²} 1_B(t) 1_{−B}(s − t) χ(s) dλ(t) dµ(s)
 = ∫_{G²} (1_B · χ)(t) · (1_{−B} · χ)(s − t) dλ(t) dµ(s)
 = λ(1_B · χ) λ(1_{−B} · χ)    (by Lemma 2.1)
 = ∫_K ρ^λ_B χ′ dm_K · ∫_K ρ^λ_{−B} χ′ dm_K    (by the definitions of ρ^λ_B and ρ^λ_{−B})
 = ρ̂^λ_B(χ′) · ρ̂^λ_{−B}(χ′)
 = (ρ^λ_B * ρ^λ_{−B})^(χ′)
 = ∫_K (ρ^λ_B * ρ^λ_{−B}) · χ′ dm_K
 = µ(ηχ)    (by the definition of η and Lemma 4.2). □
We are ready to prove θ = (ρ^ν_A * ρ^λ_B * ρ^λ_{−B}) ∘ τ. Indeed, by Claim 7.2, θ := 1_A *_ν ϕ = 1_A *_ν η + 1_A *_ν ψ where ψ is a null function and η = (ρ^λ_B * ρ^λ_{−B}) ∘ τ. For all t ∈ G, we have |1_A *_ν ψ(t)| ≤ ν(|(ψ′)_t|) = ν(|ψ|) = 0, where ψ′(x) := ψ(−x).
Moreover, since η is a Fourier series with absolutely summable coefficients, 1 A * ν η is as well. It follows that θ is almost periodic. Therefore, to show θ = (ρ ν A * ρ λ B * ρ λ −B ) • τ , it suffices to check that θ and (ρ ν A * ρ λ B * ρ λ −B ) • τ have the same Fourier coefficients. We omit the computations as they are nearly identical to the proof of Claim 7.2. □
8. Bohr sets in ϕ_1(A_i) + ϕ_2(A_i) − ϕ_2(A_i)
In this section we prove Theorem 1.4, which says that ϕ 1 (A i ) + ϕ 2 (A i ) − ϕ 2 (A i ) contains a Bohr set for some A i in any partition G = r i=1 A i . Since the proof is technical and uses cumbersome notation, we first sketch the main idea. Fix an invariant mean ν on G. The pushforwards ϕ 1, * ν and ϕ 2, * ν are invariant means on H 1 = ϕ 1 (G) and H 2 = ϕ 2 (G), respectively. Since H 1 , H 2 are only subgroups of G, in order to apply the correspondence principle (Proposition 7.1), we need to extend ϕ 1, * ν and ϕ 2, * ν to means ν 1 and ν 2 on G. Furthermore, ν can be chosen in such a way that ν 2 is extremal. Having found such extensions,
Proposition 7.1 implies that ϕ 1 (A i ) + ϕ 2 (A i ) − ϕ 2 (A i ) contains the preimage of the support of ρ ν 1 ϕ 1 (A i ) * ρ ν 2 ϕ 2 (A i ) * ρ ν 2 −ϕ 2 (A i )
, which in turn contains a Bohr set for some i ∈ [r] thanks to Corollary 4.10 and the corresponding partition result in compact groups (Theorem D (ii)) from [33].
The precise result we need from [33] is the following.

Proposition 8.1 ([33]). Let K be a compact abelian group and φ̃_1, φ̃_2 be commuting continuous endomorphisms on K with finite index images. Suppose ρ_1, . . . , ρ_r : K → [0, 1] are measurable functions such that Σ_{i=1}^r ρ_i ≥ 1 almost everywhere. For w ∈ K, define

R_i(w) = ∫_{K²} ρ_i(φ̃_2(v)) ρ_i(w + u) ρ_i(u + φ̃_1(v)) dm_K(u) dm_K(v).
Then there are k, η > 0 depending only on [K :φ 1 (K)], [K :φ 2 (K)] and r such that for some i ∈ [r], the support of R i contains a Bohr-(k, η) set.
We turn to the details. The following lemma helps us extend an invariant mean on H = ϕ(G) to a mean on G by thinking of ℓ ∞ (H) as embedded into ℓ ∞ (G) through the pullback map ϕ * . Lemma 8.2. Let G and H be discrete abelian groups and ϕ : G → H be a surjective homomorphism. Then for every invariant mean µ on H, there exists an invariant mean ν on G such that ϕ * ν = µ.
Proof. First we observe that if ν is a linear functional on ℓ^∞_R(G) and ν(1_G) = 1, then ν is positive if and only if ν(f) ≥ p(f) := inf_{x∈G} f(x) for all f ∈ ℓ^∞_R(G).
Clearly p is a concave function.
Let V be the vector subspace of ℓ ∞ R (G) consisting of functions of the form h • ϕ for some
h ∈ ℓ ∞ R (H). If f ∈ V , then by surjectivity of ϕ, there is a unique h ∈ ℓ ∞ R (H) such that f = h • ϕ. We have µ(h) ≥ inf y∈H h(y) (since µ is an invariant mean on H) = inf x∈G h(ϕ(x)) = p(f ) (since ϕ is surjective).
By the Hahn-Banach theorem, the linear functional f → µ(h) on V can be extended to a linear functional λ on ℓ ∞ R (G) such that λ(f ) ≥ p(f ) for any f ∈ ℓ ∞ R (G). In particular, λ is positive and λ(1 G ) = λ(1 H • ϕ) = µ(1 H ) = 1. We now show that λ can be further refined to become G-invariant. We let η be an invariant mean on G, and define
ν(f ) := G λ(f x ) dη(x) for all f ∈ ℓ ∞ R (G).
Then ν(f g ) = ν(f ) for all g ∈ G, since η is translation invariant. The positivity of ν follows from the positivity of λ and η. If f = h • ϕ ∈ V , then λ(f g ) = µ(h ϕ(g) ) = µ(h) for all g ∈ G, so ν(f ) = µ(h). The lemma now follows, since an invariant mean is completely determined by its values on real-valued functions.
□
If H happens to be a subgroup of G, then another way to extend a mean on H to a mean on G is to consider ℓ ∞ (H) as a subset of ℓ ∞ (G) consisting of functions supported on H. This is the content of the next lemma.
Lemma 8.3. Let H be a subgroup of G with [G : H] = k < ∞. For every invariant mean µ on H, there exists a unique invariant mean ν on G such that

ν(f) = µ(f)/k for every f ∈ ℓ^∞(G) supported on H.

Furthermore, if µ is extremal then ν is also extremal.

Proof. Let H − g_i for 0 ≤ i ≤ k − 1 be the cosets of H in G, with g_0 = 0.
We first show that an invariant mean ν satisfying the conclusion of the lemma must be unique. For a function f supported on H − g i , the function f g i given by x → f (x − g i ) is supported on H. Therefore, in this case, since ν is G-invariant, we must have
ν(f) = ν(f_{g_i}) = µ(f_{g_i})/k.    (25)
For an arbitrary f ∈ ℓ ∞ (G), define f i = f · 1 H−g i . Since f = k−1 i=0 f i , from the previous paragraph, we must have
ν(f) = Σ_{i=0}^{k−1} ν(f_i) = (1/k) Σ_{i=0}^{k−1} µ((f_i)_{g_i}).    (26)
This equation uniquely defines ν.
It is easy to see that ν as defined in (26) is a linear functional on ℓ ∞ (G) with ν(1 G ) = 1. To show ν is G-invariant, we consider arbitrary g ∈ G and f ∈ ℓ ∞ (G). By the linearity of ν and (25),
ν(f_g) = Σ_{i=0}^{k−1} ν((f_i)_g) = (1/k) Σ_{i=0}^{k−1} µ(((f_i)_g)_{g_{j(i)}}) = (1/k) Σ_{i=0}^{k−1} µ((f_i)_{g+g_{j(i)}}),    (27)
where j(i) ∈ {0, . . . , k − 1} is such that −g i + g + g j(i) ∈ H. For i ∈ {0, . . . , k − 1}, let h = −g i + g + g j(i) . Since µ is H-invariant,
µ((f_i)_{g+g_{j(i)}}) = µ((f_i)_{g_i+h}) = µ((f_i)_{g_i}).    (28)
Relations (26), (27), and (28) give ν(f g ) = ν(f ), and so ν is G-invariant.
Suppose µ is extremal. To show that ν is extremal, suppose ν = αν 1 +(1−α)ν 2 where ν 1 and ν 2 are means on G and 0 < α < 1. Restricting to S := {f ∈ ℓ ∞ (G) : f is supported on H}, we get
µ/k = ν| S = αν 1 | S + (1 − α)ν 2 | S .
Since µ is extremal, it must be that ν 1 | S = ν 2 | S = µ/k. Due to the uniqueness of the extension of µ from H to G, we deduce that ν 1 = ν 2 = ν. Therefore, ν is extremal. □
The next lemma shows that if H is a subgroup of G with finite index, then the Radon-Nikodym density associated with the mean µ on H and the one associated with its extension on G are the same.
Lemma 8.4. Let H be a subgroup of G with [G : H] = k < ∞, let µ be an invariant mean on H, and let ν be its extension to G given by Lemma 8.3. Let K = bG, let τ : G → K be the natural embedding, and let K_H denote the closure of τ(H) in K. Then for every B ⊂ H, ρ^ν_B = ρ^µ_B, where ρ^µ_B is the density of B with respect to µ, K_H and τ|_H, extended by 0 outside K_H.

Proof. For h ∈ C(K), Lemma 8.3 gives

ν((h ∘ τ) · 1_B) = (1/k) Σ_{i=0}^{k−1} µ(((h ∘ τ) · 1_B · 1_{H−g_i})_{g_i}).    (29)

Since 1_B is supported on H, (h ∘ τ) · 1_B · 1_{H−g_i} = 0 if i ≠ 0. Therefore, the right-hand side of (29) is equal to (1/k) µ((h ∘ τ) · 1_B), which is equal to (1/k) ∫_{K_H} h · ρ^µ_B dm_{K_H}. It follows that

∫_K h · ρ^ν_B dm_K = ν((h ∘ τ) · 1_B) = (1/k) ∫_{K_H} h · ρ^µ_B dm_{K_H}.

Since the restriction of m_K to K_H equals (1/k) m_{K_H}, we deduce that ρ^ν_B = ρ^µ_B. □
We are ready to prove Theorem 1.4. Our proof will use Corollary 4.10, applied in the case where K 1 = bG, K 2 =φ(bG) (whereφ is given by Lemma 3.2(ii)), and τ 1 = τ 2 = τ = the canonical embedding of G into bG. In order to verify that the hypotheses of Corollary 4.10 are satisfied, we want to know that every character ψ of ϕ(G) can be written in the form χ ′ • τ , where χ ′ is a character ofφ(bG). This is the case, as every ψ ∈ ' ϕ(G) can be extended to a character ψ 0 ∈ " G, and ψ 0 = χ 0 • τ for some χ 0 ∈ bG. Let χ ′ := χ 0 |φ (bG) . We claim that χ ′ • τ = ψ. To see this, note that
χ 0 • τ = ψ 0 , so (χ 0 • τ )| ϕ(G) = ψ 0 | ϕ(G) = ψ. Finally, note that τ (ϕ(G)) ⊂φ(bG), sinceφ • τ = τ • ϕ. Thus (χ 0 • τ )| ϕ(G) = χ ′ • τ .
Proof of Theorem 1.4. Let H 1 = ϕ 1 (G) and H 2 = ϕ 2 (G). Let µ be an extremal invariant mean on H 2 . By Lemma 8.2, there exists an invariant mean ν on G such that the pushforward ϕ 2, * ν is equal to µ. In view of Lemma 8.3, ϕ 1, * ν can be extended canonically from H 1 to a mean ν 1 on G such that
ν_1(f) = (ϕ_{1,*}ν)(f) / [G : H_1]
for every f ∈ ℓ ∞ (G) supported on H 1 . Likewise, extend µ = ϕ 2, * ν from H 2 to a mean ν 2 on G. Since µ is extremal, ν 2 is extremal; however, ν 1 may not be extremal. Let A ⊂ G, K = bG and τ : G → K be the natural embedding. By Proposition 7.1 and because ν 2 is extremal, the sumset
ϕ 1 (A) + ϕ 2 (A) − ϕ 2 (A) contains τ −1 (supp ρ ν 1 ϕ 1 (A) * ρ ν 2 ϕ 2 (A) * ρ ν 2 ϕ 2 (−A) ).
In light of Lemma 8.4,

ρ^{ν_j}_{ϕ_j(A)} = ρ^{ϕ_{j,*}ν}_{ϕ_j(A)},

where we identify ρ^{ϕ_{j,*}ν}_{ϕ_j(A)}, a priori defined on the closure of τ(H_j), with its extension by 0 to K. For j ∈ {1, 2}, let φ̃_j : K → K be a continuous homomorphism such that φ̃_j ∘ τ = τ ∘ ϕ_j. Then φ̃_1 ∘ φ̃_2 ∘ τ = τ ∘ ϕ_1 ∘ ϕ_2 = τ ∘ ϕ_2 ∘ ϕ_1 = φ̃_2 ∘ φ̃_1 ∘ τ, and hence φ̃_1 ∘ φ̃_2 = φ̃_2 ∘ φ̃_1 since τ(G) is dense in K. Write f := ρ^{ν_1}_{ϕ_1(A)}, g := ρ^{ν_2}_{ϕ_2(A)} and h := ρ^{ν_2}_{ϕ_2(−A)}, and for w ∈ K define

S(w) := ∫_{K²} f(φ̃_1 ∘ φ̃_2(v)) · g(w + φ̃_2(u)) · h(−φ̃_2(u) − φ̃_2 ∘ φ̃_1(v)) dm_K(u) dm_K(v).

Claim. For all w ∈ K, S(w) ≤ [K : φ̃_2(K)] · [K : φ̃_1 ∘ φ̃_2(K)] · f * g * h(w).
Proof of Claim. Note that by [33,Lemma 2.6],φ 1 •φ 2 (K) has finite index in K. We recall [33,Lemma 2.8], which says that if f is a nonnegative function on a compact abelian group K, ϕ is a continuous endomorphism on K and m = [K : ϕ(K)] < ∞, then
∫_K f(ϕ(x)) dm_K(x) ≤ m ∫_K f(x) dm_K(x).
By two applications of this fact, we have
S(w) ≤ [K : φ̃_2(K)] ∫_{K²} f(φ̃_1 ∘ φ̃_2(v)) · g(w + u) · h(−u − φ̃_2 ∘ φ̃_1(v)) dm_K(u) dm_K(v)
 ≤ [K : φ̃_2(K)] · [K : φ̃_1 ∘ φ̃_2(K)] ∫_{K²} f(v) · g(w + u) · h(−u − v) dm_K(u) dm_K(v)
 = [K : φ̃_2(K)] · [K : φ̃_1 ∘ φ̃_2(K)] · f * g * h(w),
thus proving the claim. □

By Corollary 4.10 we have

f(φ̃_1 ∘ φ̃_2(v)) ≥ ρ^ν_A(φ̃_2(v)),    (30)

g(φ̃_2(w) + φ̃_2(u)) ≥ ρ^ν_A(w + u),    (31)

and

h(−φ̃_2(u) − φ̃_2 ∘ φ̃_1(v)) ≥ ρ^ν_A(u + φ̃_1(v)).    (32)

Therefore

S(φ̃_2(w)) ≥ R_A(w)    (33)
for all w ∈ K, where
R_A(w) := ∫_{K²} ρ^ν_A(φ̃_2(v)) ρ^ν_A(w + u) ρ^ν_A(u + φ̃_1(v)) dm_K(u) dm_K(v).
Combining (30) -(33), we get that for all A ⊂ G, the sumset ϕ 1 (A) + ϕ 2 (A) − ϕ 2 (A) contains τ −1 (φ 2 (supp R A )).
As a consequence, we have for each partition G = r i=1 A i and each i ∈ [r],
ϕ 1 (A i ) + ϕ 2 (A i ) − ϕ 2 (A i ) ⊃ τ −1 (φ 2 (supp R A i )).
By Lemma 4.5, Σ_{i=1}^r ρ^ν_{A_i} = 1 almost everywhere. Therefore, in view of Proposition 8.1, for some i ∈ [r], the support of R_{A_i} contains a Bohr-(k, η) set B ⊂ K where k, η depend only on r and the indices [K : φ̃_1(K)], [K : φ̃_2(K)].
By Lemma 2.3,φ 2 (B) is a Bohr-(k ′ , η ′ ) set where k ′ , η ′ depend only on k, η and [K :φ 2 (K)]. Lemma 2.2 then implies that τ −1 (φ 2 (B)) contains a Bohr-(k ′ , η ′ ) set and our proof finishes. □
9. Third correspondence principle
In this section we derive a correspondence principle for B + C + A_i. Assuming only that the summands A, B, C have positive upper Banach density, we cannot guarantee that A + B + C is a Bohr set, a translate of a Bohr set, or even that A + B + C is syndetic.⁵ Under the stronger assumption that A and B have positive upper Banach density and that C is syndetic, [5] proves (for the ambient group Z) that A + B + C contains a translate of a Bohr set. Our Theorem 1.7 has a similar, but weaker hypothesis: partitioning G as A_1 ∪ ⋯ ∪ A_r, it is possible that none of the A_i are syndetic. Of course, one of the A_i must be piecewise syndetic ([12], [29]). Proposition 9.6 says that when A, B, C ⊂ G with d*(B), d*(C) > 0, the sumset B + C + A can be modeled by a convolution h_B * h_C * h_A on a compact group K, where ∫ h_B dm_K ≥ d*(B) and ∫ h_C dm_K ≥ d*(C). In this correspondence principle, the hypothesis d*(A) > 0 is not strong enough to guarantee that h_A is nonzero. However, assuming that G = A_1 ∪ ⋯ ∪ A_r, we will be able to conclude that Σ_{i=1}^r h_{A_i} ≥ 1 almost everywhere and this suffices to give a useful bound on h_B * h_C * h_{A_i}(0) for some i ∈ [r].

⁵ In every countably infinite abelian group, there are sets D, E with positive upper Banach density where D + E is not syndetic, and Proposition 6.2 of [4] produces sets A, B, C having positive upper Banach density where A + B + C ⊂ D + E.
Definition 9.1. Let A, B ⊂ G. We write A ≺ B if for all finite subsets A ′ ⊂ A, there exists t ∈ G such that A ′ + t ⊂ B.
In this case, we say that A is finitely embeddable in B.
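For example, in G = Z every finite set translates into any set containing arbitrarily long intervals, so A ≺ B whenever B contains arbitrarily long intervals. On the other hand, {0, 1} is not finitely embeddable in 2Z, since no translate of {0, 1} lies in 2Z.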
The following lemma is implicit in [25] and to some extent in [24]. A similar statement for amenable groups can be obtained from Propositions 1.10 and 1.11 in [8].
Lemma 9.2. Let B, C ⊂ G. There exist a compact abelian group K, a homomorphism τ :
G → K for which τ(G) is dense in K, and functions h_B, h_C : K → [0, 1] such that
(i) ∫_K h_B dm_K = d*(B) and ∫_K h_C dm_K = d*(C), and
(ii) {g ∈ G : h_B * h_C(τ(g)) > 0} ≺ B + C.
Remark 9.3. Readers familiar with Furstenberg's correspondence principle and Kronecker factors may appreciate the following additional detail: to obtain the group K, one may apply the Furstenberg correspondence principle to find ergodic measure preserving systems X B = (X B , µ B , T B ) and X C = (X C , µ C , T C ) modeling B and C, with corresponding Kronecker factors K B = (K B , m K B , R B ) and K C = (K C , m K C , R C ). The groups K B and K C are the respective duals of the eigenvalue groups E(X B ) and E(X C ) of X B and X C (as described by Lemma 3.3). The group K may be realized as the phase space of the maximal common factor of K B and K C , or, equivalently, as the dual of E(X B ) ∩ E(X C ).
Proof. By [25,Lemma 2.8], there is an ergodic measure preserving G-system (X, µ, T ), where X is a compact metric space, and a clopen set
O_C ⊂ X with µ(O_C) = d*(C) such that for all x ∈ X,

{g ∈ G : T_g x ∈ ⋃_{b∈B} T_b O_C} ≺ B + C.    (34)
By [25,Lemma 4.1], there is a group rotation factor (K, m K , R) of (X, µ, T ) with factor map π : X → K and a homomorphism τ :
G → K with dense image such that b∈B T b O C ⊃ π −1 (J) up to a set of µ-measure 0,(35)
where
J := supp(f_B * f_C) for some functions f_B, f_C : K → [0, 1] with ∫_K f_B dm_K = d*(B) and ∫_K f_C dm_K = d*(C).
Note that for µ-almost every x ∈ X, R g π(x) = π(T g x). Therefore, if R g (π(x)) ∈ J, then T g x ∈ π −1 (J). Thus, from (35), for µ-almost every x ∈ X, we have
if R_g(π(x)) ∈ J then T_g x ∈ ⋃_{b∈B} T_b O_C.

Fix such an x. Then

{g ∈ G : f_B * f_C(π(x) + τ(g)) > 0} ⊂ {g ∈ G : T_g x ∈ ⋃_{b∈B} T_b O_C}.

The relation (34) then implies {g ∈ G : f_B * f_C(π(x) + τ(g)) > 0} ≺ B + C. By defining functions h_B, h_C as h_B(t) := f_B(t + π(x)) and h_C := f_C, we obtain our conclusion. □

Lemma 9.4. Let K be a compact metrizable abelian group and τ : G → K be a homomorphism with dense image. Let h : K → [0, 1] be continuous and let A_h := {g ∈ G : h(τ(g)) > 0}. If A_h ≺ D, then there is a translate h′ of h and an invariant mean λ on G such that
1_D *_λ q ≥ (h′ ∘ τ) *_λ q for all q : G → [0, 1].

Proof. Let (F_N)_{N∈N} be a Følner sequence for G. Since F_N ∩ A_h ⊂ A_h and A_h ≺ D, we may choose, for each N ∈ N, a t_N ∈ G so that (F_N ∩ A_h) + t_N ⊂ D. Note that (F_N + t_N)_{N∈N} is also a Følner sequence. Passing to a subsequence if necessary, we assume τ(t_N) converges to a point k_0 in K. Let h′(k) = h(k − k_0) for k ∈ K, so that h(k − τ(t_N)) converges uniformly to h′(k).
Define a sequence of functions p N :
F N + t N → [0, 1] by p N (g + t N ) = h(τ (g)).
Since h(τ (g)) = 0 for each g ∈ (F N \ A h ), and F N ∩ A h + t N ⊂ D, we have 1 D (g) ≥ p N (g) for all g ∈ F N + t N .
For each N ∈ N and each q : G → [0, 1] we have
(1/|F_N|) Σ_{g∈F_N+t_N} 1_D(g) q(t − g) ≥ (1/|F_N|) Σ_{g∈F_N+t_N} p_N(g) q(t − g) = (1/|F_N|) Σ_{g∈F_N+t_N} h(τ(g) − τ(t_N)) q(t − g).    (36)
For each N, let λ_N be the linear functional on ℓ^∞(G) defined by λ_N(f) := (1/|F_N|) Σ_{g∈F_N+t_N} f(g).
Let λ be a linear functional on ℓ ∞ (G) that is a weak * limit point of the sequence λ N (meaning that for all f ∈ ℓ ∞ (G), all ε > 0, and all M ∈ N there is an N > M such that
|λ(f) − λ_N(f)| < ε). In other words, λ ∈ ⋂_{M=1}^∞ cl{λ_N : N > M}, the closures taken in the weak* topology. Since h(k − τ(t_N)) converges uniformly in N to h(k − k_0) = h′(k), (36) implies 1_D *_λ q(t) ≥ (h′ ∘ τ) *_λ q(t) for all t ∈ G. □
Lemma 9.5. Let K be a compact abelian group and τ : G → K be homomorphism with dense image. Let h : K → [0, 1] be a continuous function and λ be an invariant mean on G. Then for every A ⊂ G,
(h • τ ) * λ 1 A = (h * ρ λ A ) • τ,
where ρ λ A is defined in Definition 4.1.
Proof. Approximating h by trigonometric polynomials, it suffices to prove the statement for the special case where h is a trigonometric polynomial. By linearity, we may assume h = χ ∈ " K. For such χ, we have
(χ • τ ) * λ 1 A (g) := G χ • τ (x) · 1 A (g − x) dλ(x) = G χ • τ (g + x)1 A (−x) dλ(x) = χ • τ (g) G χ • τ (x) · 1 A (−x) dλ(x) = χ • τ (g) G χ • τ · 1 −A dλ = χ • τ (g) K χ · ρ λ −A dm K .
Computing χ * ρ λ A (t) for t ∈ K, we get
χ * ρ λ A (t) = K χ(z)ρ λ A (t − z) dm K (z) = K χ(z + t)ρ λ A (−z) dm K (z) = χ(t) K χ(z)ρ λ −A (z) dm K (z) = χ(t) K χ · ρ λ −A dm K .
Substituting τ (g) for t, we get (χ • τ ) * λ 1 A (g) = (χ * ρ λ A )(τ (g)), completing the proof. □ Combining Lemmas 9.2, 9.4 and 9.5, we have a proposition which serves as a correspondence principle for B + C + A i . Proposition 9.6 (Third correspondence principle). Let B, C ⊂ G. There exist a compact abelian group K, a homomorphism τ : G → K with dense image, measurable functions h B , h C : K → [0, 1] and an invariant mean λ on G such that
(i) K h B dm K = d * (B) and K h C dm K = d * (C), (ii) for all A ⊂ G, B + C + A ⊃ τ −1 (supp(h B * h C * ρ λ A )).
Remark 9.7. The invariant mean λ depends on B and C; it may not realize the upper Banach density of A. In particular, it is possible that λ(A) = 0 while d * (A) > 0.
Proof. In view of Lemma 9.2, there are a compact abelian group K, homomorphism τ : G → K with dense image, measurable functions h B , h C : K → [0, 1] with h B dm K = d * (B), h C dm K = d * (C) such that {g ∈ G : h B * h C (τ (g)) > 0} ≺ B + C.
We now apply Lemma 9.4 with h B * h C in place of h: there is an invariant mean λ on G such that
1 B+C * λ 1 A ≥ h ′ • τ * λ 1 A ,(37)
where h ′ is a translate of h B * h C . By Lemma 9.5,
h ′ • τ * λ 1 A = (h ′ * ρ λ A ) • τ.(38)
Note that B + C + A contains the support of 1 B+C * λ 1 A and h ′ can be written as h ′ B * h C where h ′ B is a translate of h B . Therefore, (37) and (38) imply B + C + A ⊃ {g ∈ G : h ′ B * h C * ρ λ A (τ (g)) > 0} and this proves our proposition. □
Bohr sets in B + C + A i
The next proposition establishes the existence of Bohr sets in B +C +A i in compact abelian groups.
Proposition 10.1. Let δ 1 , δ 2 > 0 and r ∈ N. There are constants η > 0 and k ∈ N such that the following holds: Let K be a compact abelian group with probability Haar measure m K and let f, g : K → [0, 1] be measurable functions such that K f dm K ≥ δ 1 and K g dm K ≥ δ 2 . For i ∈ [r], let h i : K → [0, 1] be measurable functions such that r i=1 h i = 1 m K -almost everywhere. Then for some i ∈ [r], the support of f * g * h i contains a Bohr-(k, η) set.
Proof. The proof is similar to an argument used in [33] (Part I of this series). Since r i=1 h i = 1 almost everywhere, we have
f * g * r i=1 h i (x) = f * g * 1 K (x) = K f dm K · K g dm K ≥ δ 1 δ 2
for all x ∈ K. Therefore, by the pigeonhole principle, there exists i ∈ [r] such that f * g * h i (0) ≥ δ 1 δ 2 /r. By [33, Lemma 2.12], we have
|f * g * h i (t) − f * g * h i (0)| = K 2 (g(x) − g t (x))f (y)h i (−x − y) dm K (x)dm K (y) ≤ ∥ g − " g t ∥ ∞ ∥f ∥ 2 ∥h i ∥ 2 ≤ ∥ g − " g t ∥ ∞ ,
where g t (x) = g(t + x). Hence f * g * h i (t) > δ 1 δ 2 2r whenever ∥ g − " g t ∥ ∞ < δ 1 δ 2 2r . By [33, Lemma 2.1], the set of those t contains a Bohr-(k, δ 1 δ 2 2r ) set B with k ≤ 16r 2 (δ 1 δ 2 ) 2 . □
We are ready to prove Theorem 1.7.
Proof of Theorem 1.7. By Proposition 9.6, there exist a compact abelian group K, a homomorphism τ : G → K with dense image, measurable functions h B , h C : K → [0, 1] and an invariant mean λ on G such that (i) K h B dm K = d * (B) and K h C dm K = d * (C),
(ii) for all i ∈ [r], B + C + A i ⊃ τ −1 (supp(h B * h C * ρ λ A i )). In light of Lemma 4.5, r i=1 ρ λ A i = 1 almost everywhere. Therefore, by Proposition 10.1, there exist k and η depending only on δ and r such that the support of h B * h C * ρ λ A i contains a Bohr-(k, η) set in K for some i ∈ [r]. Lemma 2.2 then implies that B + C + A i contains a Bohr-(k, η) set in G. □ Remark 10.2. The proof of Theorem 1.7 follows a general phenomenon: if D ⊂ G is a piecewise Bohr set, then for any partition G = r i=1 A i , there is an i ∈ [r] such that D + A i contains a Bohr set. However, if we did not know that D has the form B + C, it is impossible to give any quantitative bounds on the rank and radius of the Bohr set in D + A i . This necessitates the presence of triple sum B + C + A i in Theorem 1.7.
Open questions
In the proofs of Theorems 1.2 and 1.4, the assumption that ϕ 1 , ϕ 2 , ϕ 3 commute is used to provide a parameterized solution to the relation w ∈ ϕ 1 (A) + ϕ 2 (A) + ϕ 3 (A). This concern raises the question:
Question 11.1. Can the assumption that the ϕ j commute in Theorems 1.2 and 1.4 be omitted?
The Bohr sets in Proposition 10.1 and Theorem 1.7 have the same rank k and radius η. Proposition 10.1 gives k ≪ α −6 and η ≫ α 3 , where α = (δ 1 δ 2 r −1 ) 1/3 . If we are only interested in translates of Bohr sets (i.e., Bohr neighborhoods of some element), then better bounds are available. A result of Sanders [37,Theorem 2.4] implies that there exists i such that B +C +A i contains a translate of a Bohr-(k, η) set with k ≪ α −1 and η ≥ exp −cα −1 log α −1 , for some absolute constant c. We ask the following.
Question 11.2. Is it possible to improve on k and/or η in Theorem 1.7? Can we take k ≪ α −1 ?
In the spirit of Ruzsa and Hegyvári's result [28] on Bohr sets in A + A − A − a mentioned in the introduction, we ask whether the Bohr set in Theorem 1.7 can be given by a fixed element of C. More precisely: Question 11.3. If B, C ⊂ G with d * (B), d * (C) > 0 and G = r i=1 A i , must there exist c ∈ C and i ∈ [r] such that B + c + A i contains a Bohr set?
The proof of Theorem 1.7 uses the fact that D := B + C is a piecewise Bohr set to deduce the Bohr structure in D+A i . It is natural to ask besides piecewise Bohr, what other conditions on D guarantee the existence of a Bohr set in D + A i . Question 11.4. What is a sufficient condition on D ⊂ G so that for any partition G = r i=1 A i , there is i ∈ [r] such that D + A i is Bohr set (or a translate of a Bohr set)? In particular, does the assumption that D is piecewise syndetic or d * (D) > 0 suffice? What if G = Z and D = P (the set of primes) or D = {n 2 : n ∈ N}?
Our Theorem 1.2 generalizes Theorem B in two ways: replacing the ambient group Z with an arbitrary countable abelian group, and replacing the endomorphisms g → s i g with commuting endomorphisms having finite index image. The main result of [23] generalizes Theorem B in a different way: the endomorphisms still have the form g → s i g, but more summands are considered. The following conjecture is a natural joint generalization of these results.
Conjecture 11.5. Let G be a (not necessarily countable) abelian group, let d ≥ 3, let ϕ 1 , . . . , ϕ d be endomorphisms of G such that [G : ϕ j (G)] < ∞ for each j, and such that ϕ 1 + · · · + ϕ d = 0. Then for all A ⊂ G with d * (A) > 0, the sumset ϕ 1 (A) + · · · + ϕ d (A) contains a Bohr set with rank and radius depending only on d * (A) and the indices [G : ϕ j (G)].
Defining endomorphism ψ : G → G by ψ(g) := d j=3 ϕ j (g). Then ϕ 1 + ϕ 2 + ψ = 0 and Therefore, if [G : ψ(G)] is finite, then Conjecture 11.5 immediately follows from Theorem 1.2. However, it is not true in general that ψ(G) has finite index (for example, take d = 4, ϕ 3 = −ϕ 4 ), and so Conjecture 11.5 is genuinely interesting. It may be necessary to impose some additional hypotheses on the ϕ j ; see [23, Section 4] for more discussion. Along the same lines, we have the following conjecture for partition that extends Theorem 1.4. Conjecture 11.6. Let G be a (not necessarily countable) abelian group, let d ≥ 3 and let ϕ 1 , . . . , ϕ d be endomorphisms of G such that [G : ϕ j (G)] < ∞ for each j. Suppose j∈S ϕ j = 0 for some non-empty subset S ⊂ [d]. Then for every finite partition G = r i=1 A i , there exists i ∈ [r] such that d j=1 ϕ j (A i ) contains a Bohr-(k, η) set, where k and η depend only on r and the indices [G : ϕ j (G)].
1. 1 .
1Previous results in Z. If A ⊂ Z, the upper Banach density of A is d * (A) = lim sup N →∞ max M ∈Z |A ∩ {M + 1, . . . , M + N }| N .
(s 1
1, s 2 , s 3 ) = (1, 1, −2), Theorem B generalizes Theorem A, since A+A−2A ⊂ A+A−A−A.
contains a Bohr-(k, η) set, where k and η depend only on r and [G : ϕ j (G)].
contains a Bohr-(k, η) set, where k and η depend only on d * (A) and the indices [G : ϕ j (G)]. Remark 1.3.
Theorem 1 . 7 .
17Let G be a countable discrete abelian group and let B, C ⊂ G have positive upper Banach density. Then for any partition
Figure 1 .
1Relations among X, Y, Z and K Theorem 1.4: In contrast to the sumset ϕ 1 (A) + ϕ 2 (A) + ϕ 3 (A), a parametrized solution to ϕ
Theorem 1 . 7 :
17This last theorem relies on two ingredients:(i) an estimate for the rank and radius of a Bohr set in sumsets of the form B
Lemma 2.2 ([33, Lemma 2.9]). Let G, H be abelian groups and τ : G → H be a homomorphism. If B is a Bohr-(k, η) set in H, then τ −1 (B) is a Bohr-(k, η) set in G.
Let G be an abelian group and ϕ :
Lemma 3 . 1 .
31Let Γ be a locally compact abelian group and let ϕ : Γ → Γ be a continuous endomorphism. Define an endomorphism ϕ * : Γ → Γ by ϕ * (χ) = χ • ϕ. Then (i) ϕ * is continuous.(ii) Under the canonical identification of Γ with Γ, (ϕ * ) * = ϕ.
all countable subgroups Λ of " G, we have E( Λ, m Λ , R) = Λ. Furthermore, all the eigenvectors of R corresponding to the eigenvalue λ ∈ Λ are constant multiples of
4. 1 .
1Definition of Radon-Nikodym densities. Let K be a compact abelian group and τ : G → K be a homomorphism such that τ (G) is dense in K. We describe a way to transfer a function f : G → [0, 1] to a function ρ : K → [0, 1] with the aid of invariant means. This construction follows the proof of [24, Lemma 2.5] (cf. Section 4 of[10]); it will be used in the proofs of Theorems 1.4 and 1.7.
Definition 4. 1 .
1Let f : G → [0, 1] and let ν be an invariant mean on G. The Radon-Nikodym density associated with f and ν is a Borel measurable function ρ ν f : K → [0, 1] satisfying
Lemma 4. 3 .
3The measure m defined by(16) is absolutely continuous with respect to the Haar probability measure m K on K, and in fact m(B) ≤ m K (B) for all Borel sets B ⊂ K.
Let B be any Borel set in K. By regularity of m and m K , there is an open set U , a closed set V , such that V ⊂ B ⊂ U , m(U \ V ) < ϵ and m K (U \ V ) < ϵ. By Urysohn's lemma, there exists a continuous function h : K → [0, 1] such that h = 1 on V and h = 0 on U c . Applying (17), we have
Let f : H → [0, 1] and let ν be an invariant mean on G. Let ρ ν f •ϕ : K 1 → [0, 1] and ρ ϕ * ν f : K 2 → [0, 1] be the associated Radon-Nikodym densities as in Definition 4.1. Then
Figure 2 .
2Illustration of (ii) Remark 4.7.
Let H = ϕ(G), A ⊂ G, and let ρ ν A : K → [0, 1] and ρ ϕ * ν ϕ(A) :φ(K) → [0, 1] be the associated Radon-Nikodym densities. Then 0 ≤ ρ ν A ≤ ρ ϕ * ν ϕ(A) •φ m K -almost everywhere. Proof. Applying Lemma 4.6 for H = ϕ(G) and f = 1 ϕ(A) : H → [0, 1], we get
Furthermore,
[K :φ i (K)] ≤ [G : ϕ i (G)] for each i ∈ {2, 3} and [K : (φ 2 +φ 3 )(K)] ≤ [G : (ϕ 2 + ϕ 3 )(G)].
Finally, we have [K :φ i (K)] ≤ [G : ϕ i (G)] for each i ∈ {1, 2, 3} by Lemma 3.2 (ii).□ 6. First correspondence principle and Bohr sets in ϕ 1 (A) + ϕ 2 (A) + ϕ 3 (A) Proposition 6.1. Let G be a countable abelian group. Let ϕ 1 , ϕ 2 , ϕ 3 : G → G be commuting endomorphisms with finite index images such that ϕ 1 + ϕ 2 + ϕ 3 = 0. Let (X, µ, T ) be an ergodic G-system and f : X → [0, 1] with X f = δ > 0. Define the function I : G → [0, 1] by
[ 33 ,
33Proposition 4.3], it follows that supp(I ′ ) contains a Bohr-(k, η) set B in K where k, η depends only on δ and the indices [K :φ i (K)]. It is easy to see that supp(I) contains τ −1 (B). Moreover, Lemma 2.2 implies that τ −1 (B) contains a Bohr-(k, η) set in G, completing the proof. □ Proposition 6.2 (First correspondence principle). Let G be a countable abelian group and A ⊂ G with d * (A) = δ > 0. Let ϕ 1 , ϕ 2 , ϕ 3 be commuting endomorphisms of G with finite index image such that ϕ 1 + ϕ 2 + ϕ 3 = 0. Then there is an ergodic G-system X := (X, µ, T ) and a function f : X → [0, 1] with X f dµ = d * (A) such that the function I : G → [0, 1] defined by
Lemma 8 . 3 .
83Let H be a subgroup of G of index k ∈ N and let µ be an invariant mean on H. There exists a unique invariant mean ν on G such that
Lemma 8 . 4 .
84Let H be a subgroup of G of index k ∈ N. Let K be a compact abelian group and τ : G → K be a homomorphism with dense image and K H = τ (H). Let B ⊂ H and µ be an invariant mean on H. Let ν be the extension of µ to G as stated in Lemma 8.3. Suppose ρ ν B : K → [0, 1] and ρ µ B : K H → [0, 1] are the associated Radon-Nikodym densities. By identifying ρ µ B with its extension to 0 outside of K H , we have ρ ν B = ρ µ B m K -almost everywhere. Proof. As in the proof of Lemma 8.3, let H − g i for 0 ≤ i ≤ k − 1 be the cosets of H in G with g 0 = 0. Since B ⊂ H, according to Lemma 4.4, both ρ ν B and ρ µ B are supported on K H . From (26), for h ∈ C(K),
(A) with its extension to 0 outside of ϕ j (K). It follows that ϕ 1 (A) + ϕ 2 (A) − ϕ 2 (A)
It follows thatφ 1 andφ 2 commute since τ (G) is dense in K. By Lemma 3.2, [K :φ j (K)] ≤ [G : ϕ j (G)] is finite.For ease of notation, we write Note that f, g, h are nonnegative.Claim 8.5. The support of f * g * h contains the support of S : K → [0, 1] defined by
(A) ⊃ ϕ 1 (A) + ϕ 2 (A) + ψ(A).
and the Kriz example[32] shows that there exists a set A of positive upper Banach density such that A − A does not contain any Bohr set. See [23, Remark 1.6] for further discussion.Theorem 1.4. Let G be a discrete abelian group and let ϕ 1 , ϕ 2 : G → G be commuting
endomorphisms such that [G : ϕ j (G)] is finite for j ∈ {1, 2}. Then for every finite partition
G = r
i=1 A i , there exists i ∈ {1, . . . , r} such that
Supposing ψ(g) = 1 for all g ∈ ker ϕ, we define a character ψ ′ on H by ψ ′ (ϕ(g)) = ψ(g). This is well defined, since ϕ(g) = ϕ(g ′ ) implies ψ(g) = ψ(g ′ ). To check that ψ ′ (h + h ′ ) = ψ ′ (h)ψ ′ (h ′ ), choose g, g ′ so that ϕ(g) = h and ϕ(g ′ ) = h ′ , and evaluate ψ ′ (h + h ′ ) as ϕ(g + g ′ ) = ϕ(g)ϕ(g ′ ) = ψ ′ (ϕ(g))ψ ′ (ϕ(h)).
Infinite dimensional analysis. A hitchhiker's guide. C Aliprantis, K Border, SpringerBerlinThird editionC. Aliprantis, K. Border. Infinite dimensional analysis. A hitchhiker's guide, Third edition. Springer, Berlin, 2006.
Multiple recurrence and large intersections for abelian group actions. Discrete Anal. E Ackelsberg, V Bergelson, A Best, 182021E. Ackelsberg, V. Bergelson, and A. Best. Multiple recurrence and large intersections for abelian group actions. Discrete Anal. 2021:18, 91 pp, 2021.
Khintchine-type recurrence for 3-point configurations. E Ackelsberg, V Bergelson, O Shalom, Forum Math. Sigma. 102022E. Ackelsberg, V. Bergelson, and O. Shalom. Khintchine-type recurrence for 3-point configurations. Forum Math. Sigma, 10: E107, 57 pp, 2022.
Sumset phenomenon in countable amenable groups. M Beiglböck, V Bergelson, A Fish, Adv. Math. 2232M. Beiglböck, V. Bergelson, and A. Fish. Sumset phenomenon in countable amenable groups. Adv. Math., 223(2):416-432, 2010.
Piecewise-Bohr sets of integers and combinatorial number theory. V Bergelson, H Furstenberg, B Weiss, Algorithms Combin. 26V. Bergelson, H. Furstenberg, and B. Weiss. Piecewise-Bohr sets of integers and combinatorial number theory. Algorithms Combin., 26:13-37, 2006.
An ergodic correspondence principle, invariant means and applications. V Bergelson, A Moragues, Isr. J. Math. 245V. Bergelson, A. Moragues. An ergodic correspondence principle, invariant means and applications. Isr. J. Math., 245: 921-962, 2021.
Sumsets in difference sets. V Bergelson, I Ruzsa, Israel J. Math. 174V. Bergelson, I. Ruzsa. Sumsets in difference sets. Israel J. Math., 174:1-18, 2009.
Product set phenomena for countable groups. M Björklund, A Fish, Adv. Math. 275M. Björklund, A. Fish. Product set phenomena for countable groups. Adv. Math. 275:47-113, 2019.
Approximate invariance for ergodic actions of amenable groups. M Björklund, A Fish, Discrete Anal. M. Björklund, A. Fish. Approximate invariance for ergodic actions of amenable groups. Discrete Anal., 2019:6, 56p, 2019.
Bohr sets in triple products of large sets in amenable groups. M Björklund, J Griesmer, J. Fourier Anal. Appl. 253M. Björklund, J. Griesmer. Bohr sets in triple products of large sets in amenable groups. J. Fourier Anal. Appl., 25(3):923-936, 2019.
Sur quelques propriétés arithmétiques des presque-périodes. N Bogolyubov, Ann. Chaire Phys. Math. Kiev. 4N. Bogolyubov. Sur quelques propriétés arithmétiques des presque-périodes. Ann. Chaire Phys. Math. Kiev, 4:185-205, 1939.
Locally finite semigroups. T K Braun, Ukrain. Mat. Ž. 20T. K. Braun. Locally finite semigroups. Ukrain. Mat. Ž., 20:732-738, 1968.
D Choimet, H Queffélec, Twelve Landmarks of Twentieth-Century Analysis. Cambridge University PressD. Choimet, H. Queffélec. Twelve Landmarks of Twentieth-Century Analysis, Cambridge University Press, 2015.
T De La Rue, Joinings in ergodic theory. Encyclopedia of Complexity and Systems Science. T. de la Rue. Joinings in ergodic theory. Encyclopedia of Complexity and Systems Science, 2020.
Generalization of a theorem of Bogolioùboff to topological abelian groups. E Følner, Math. Scand. 2E. Følner. Generalization of a theorem of Bogolioùboff to topological abelian groups. Math. Scand., 2:5-18, 1954.
Note on a generalization of a theorem of Bogolioùboff. E Følner, Math. Scand. 2E. Følner. Note on a generalization of a theorem of Bogolioùboff. Math. Scand., 2:224-226, 1954.
Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic progressions. H Furstenberg, J. Analyse Math. 31H. Furstenberg. Ergodic behavior of diagonal measures and a theorem of Szemerédi on arithmetic pro- gressions. J. Analyse Math., 31:204-256, 1977.
H Furstenberg, Recurrence in ergodic theory and combinatorial number theory. Princeton University PressH. Furstenberg. Recurrence in ergodic theory and combinatorial number theory. Princeton University Press, 1981.
Ergodic theory via joinings. E Glasner, Mathematical Surveys and Monographs. 101384American Mathematical SocietyE. Glasner. Ergodic theory via joinings, Mathematical Surveys and Monographs, 101. American Mathe- matical Society, Providence, RI, xii+384 pp, 2003.
On Katznelson's Question for skew product systems. D Glasscock, A Koutsogiannis, F Richter, Bull. Amer. Math. Soc. 594D. Glasscock, A. Koutsogiannis, F. Richter. On Katznelson's Question for skew product systems. Bull. Amer. Math. Soc., 59(4):569-606, 2022.
Les fonctions de type positif et la théorie des groupes. R Godement, Trans. Amer. Math. Soc. 63R. Godement. Les fonctions de type positif et la théorie des groupes. Trans. Amer. Math. Soc., 63:1-84, 1948.
A new proof of Szemerédi's theorem. W T Gowers, Geom. Func. Anal. 11W. T. Gowers. A new proof of Szemerédi's theorem. Geom. Func. Anal., 11:465-588, 2001.
Bohr neighborhoods in generalized difference sets. J Griesmer, Electron. J. Combin. 291J. Griesmer. Bohr neighborhoods in generalized difference sets. Electron. J. Combin., 29(1):1-34, 2022.
Sumsets of dense sets and sparse sets. J Griesmer, Israel J. Math. 190J. Griesmer. Sumsets of dense sets and sparse sets. Israel J. Math., 190:229-252, 2012.
Small-sum pairs for upper Banach density in countable abelian groups. J Griesmer, Adv. Math. 246J. Griesmer. Small-sum pairs for upper Banach density in countable abelian groups. Adv. Math., 246:220- 264, 2013.
Separating Bohr denseness from measurable recurrence. J Griesmer, 2021J. Griesmer. Separating Bohr denseness from measurable recurrence. Discrete Anal., Paper No. 9, 20pp, 2021.
Special cases and equivalent forms of Katznelson's problem on recurrence. J Griesmer, Monatsh. Math. 2001J. Griesmer. Special cases and equivalent forms of Katznelson's problem on recurrence. Monatsh. Math., 200(1):63-79, 2023.
Additive structure of difference sets and a theorem of Følner. N Hegyvári, I Ruzsa, Australas. J. Combin. 64N. Hegyvári, I. Ruzsa. Additive structure of difference sets and a theorem of Følner. Australas. J. Combin., 64:437-443, 2016.
Strauss Algebra in the Stone-Čech compactification. Theory and applications. Second revised and extended edition. N Hindman, D , Walter de Gruyter & Co591BerlinN. Hindman, D. Strauss Algebra in the Stone-Čech compactification. Theory and applications. Second revised and extended edition. Walter de Gruyter & Co., Berlin. pp. xviii+591, 2012.
Amenable locally compact groups. J.-P Pier, Pure and Applied Mathematics. 418J.-P. Pier. Amenable locally compact groups. Pure and Applied Mathematics, New York. pp. x+418, 1984.
Chromatic numbers of Cayley graphs on Z and recurrence. Y Katznelson, Combinatorica. 212Y. Katznelson. Chromatic numbers of Cayley graphs on Z and recurrence. Combinatorica., 21(2):211-219, 2001.
Large independent sets in shift-invariant graphs. Solution of Bergelson's problem. I Kriz, Graphs Combin. 3I. Kriz. Large independent sets in shift-invariant graphs. Solution of Bergelson's problem. Graphs Combin., 3:145-158, 1987.
Bohr sets in sumsets I: compact abelian groups. A Le, T H Lê, arXiv:2112.119972021PreprintA. Le, T. H. Lê. Bohr sets in sumsets I: compact abelian groups. Preprint, 2021. arXiv:2112.11997.
Fourier Analysis on Groups. W Rudin, Dover PublicationsW. Rudin. Fourier Analysis on Groups, Dover Publications, 2017.
Generalized arithmetical progressions and sumsets. I Ruzsa, Acta Math. Hungar. 654I. Ruzsa. Generalized arithmetical progressions and sumsets. Acta Math. Hungar., 65(4):379-388, 1994.
Sumsets and structure. I Ruzsa, Combinatorial number theory and additive group theory, Advanced Courses in Mathematics. BaselBirkhäuser VerlagCRM BarcelonaI. Ruzsa. Sumsets and structure. Combinatorial number theory and additive group theory, Advanced Courses in Mathematics, 87-210. CRM Barcelona, Birkhäuser Verlag, Basel, 2009.
Additive structures in sumsets. T Sanders, Math. Proc. Cambridge Philos. Soc. 144T. Sanders. Additive structures in sumsets. Math. Proc. Cambridge Philos. Soc., 144:289-316, 2008.
A proof of Roth's theorem. T Tao, T. Tao. A proof of Roth's theorem. 2014, https://terrytao.wordpress.com/2014/04/24/a-proof-of-roths- theorem/
An Introduction to Ergodic Theory. P Walters, Springer-Verlag79New York-BerlinGraduate Texts in MathematicsP. Walters. An Introduction to Ergodic Theory. Graduate Texts in Mathematics, 79. Springer-Verlag, New York-Berlin, 1982.
| [] |
[
"Weighted fractional generalized cumulative past entropy and its properties",
"Weighted fractional generalized cumulative past entropy and its properties"
] | [
"Suchandan Kayal \nDepartment of Mathematics\nNational Institute of Technology Rourkela\nRourkela-769008OdishaIndia\n",
"N Balakrishnan \nDepartment of Mathematics and Statistics\nMcMaster University\nL8S 4K1HamiltonOntarioCanada\n"
] | [
"Department of Mathematics\nNational Institute of Technology Rourkela\nRourkela-769008OdishaIndia",
"Department of Mathematics and Statistics\nMcMaster University\nL8S 4K1HamiltonOntarioCanada"
] | [] | In this paper, we introduce weighted fractional generalized cumulative past entropy of a nonnegative absolutely continuous random variable with bounded support. Various properties of the proposed weighted fractional measure are studied. Bounds and stochastic orderings are derived. A connection between the proposed measure and the left-sided Riemann-Liouville fractional integral is established. Further, the proposed measure is studied for the proportional reversed hazard rate models. Next, a nonparametric estimator of the weighted fractional generalized cumulative past entropy is suggested based on empirical distribution function. Various examples with a real life data set are considered for the illustration purposes. Finally, large sample properties of the proposed empirical estimator are studied. | 10.1007/s11009-023-10035-0 | [
"https://arxiv.org/pdf/2106.10312v2.pdf"
] | 250,491,961 | 2106.10312 | 8ef81d19dd3a088b80701506cbcdf6c8b083f869 |
Weighted fractional generalized cumulative past entropy and its properties
13 Jul 2022 July 14, 2022
Suchandan Kayal
Department of Mathematics
National Institute of Technology Rourkela
Rourkela-769008OdishaIndia
N Balakrishnan
Department of Mathematics and Statistics
McMaster University
L8S 4K1HamiltonOntarioCanada
Weighted fractional generalized cumulative past entropy and its properties
13 Jul 2022 July 14, 2022Weighted generalized cumulative past entropyFractional calculusStochas- tic orderingReversed hazard rate modelEmpirical cumulative distribution functionCen- tral limit theorem 2020 Mathematics Subject Classifications: 94A17, 60E15, 26A33
In this paper, we introduce weighted fractional generalized cumulative past entropy of a nonnegative absolutely continuous random variable with bounded support. Various properties of the proposed weighted fractional measure are studied. Bounds and stochastic orderings are derived. A connection between the proposed measure and the left-sided Riemann-Liouville fractional integral is established. Further, the proposed measure is studied for the proportional reversed hazard rate models. Next, a nonparametric estimator of the weighted fractional generalized cumulative past entropy is suggested based on empirical distribution function. Various examples with a real life data set are considered for the illustration purposes. Finally, large sample properties of the proposed empirical estimator are studied.
Introduction
Entropy plays an important role in several areas of statistical mechanics and information theory. In statistical mechanics, the most widely applied form of entropy was proposed by Boltzmann and Gibbs, and in information theory, that was introduced by Shannon. Due to the growing applicability of the entropy measures, various generalizations were proposed and their information theoretic properties were studied. See, for instance, Rényi (1961) and Tsallis (1988). We recall that most of the generalized entropies were developed based on the concept of deformed logarithm. But, two generalized concept of entropies: fractal (see Wang (2003)) and fractional (Ubriaco (2009)) entropies were proposed based on the natural logarithm. Let P = (p 1 , . . . , p n ) be the probability mass function of a discrete random variable X. The Boltzmann-Gibbs-Shannon entropy of X can be defined through an equation involving the ordinary derivative as
H BGS (X) = − d du n i=1 p u i u=1
.
(1.1) Ubriaco (2009) proposed a new entropy measure known as the fractional entropy after replacing the ordinary derivative by the Weyl fractional derivative (see Ferrari (2018)) in (1.1). It is given by
H α (X) = n i=1 p i (− ln p i ) α , 0 ≤ α ≤ 1. (1.2)
The fractional entropy in (1.2) is positive, concave and non-additive. Further, one can recover the Shannon entropy (see Shannon (1948)) from (1.2) under α = 1. From (1.2), we notice that the measure of information is mainly a function of probabilities of occurrence of various events. However, we often face with many situations (see Guiaşu (1971)) in different fields, where the probabilities and qualitative characteristics of events need to be taken into account for better uncertainty analysis. As a result, the concept of weighted entropy was introduced by Guiaşu (1971), which is given by
H w (X) = − n i=1 w i p i ln p i ,(1.3)
where w i is a nonnegative number (known as weight) directly proportional to the importance of the ith elementary event. Note that the weights w i 's can be equal. Following the same line as in (1.3), the weighted fractional entropy can be defined as
H w α (X) = n i=1 w i p i (− ln p i ) α , 0 ≤ α ≤ 1. (1.4)
Note that for w i = 1, i = 1, . . . , n, (1.4) reduces to the fractional entropy given by (1.2). Further, (1.4) equals to the weighted entropy given by (1.3) when α = 1. Recently, motivated by the aspects of the cumulative residual entropy due to Rao et al. (2004) and the fractional entropy given by (1.2), Xiong et al. (2019) introduced a new information measure, known as the fractional cumulative residual entropy. The concept of multiscale fractional cumulative residual entropy was described by Dong and Zhang (2020). Very recently, inspired by the cumulative entropy (see Di Crescenzo and Longobardi (2009)) and (1.2), Di Crescenzo et al. (2021) proposed fractional generalized cumulative entropy of a random variable X with bounded support (0, s), which is given by
CP E γ (X) = 1 Γ(γ + 1) s 0 K(x)[− ln K(x)] γ dx, γ > 0, (1.5)
where K is the cumulative distribution function (CDF) of X. The fractional generalized cumulative entropy is a generalization of the cumulative entropy and generalized cumulative entropy proposed by Di Crescenzo and Longobardi (2009) and Kayal (2016), respectively. We remark that the cumulative entropy and generalized cumulative entropy are independent of the location. This property appears as a drawback when quantifying information of an electronics device or a neuron in different intervals having equal widths. Thus, to cope with these situations, various authors proposed length-biased (weighted) information measures. The weighted measures are also called shift-dependent measures by some researchers. Readers may refer to Di Crescenzo and Longobardi (2007), Misagh et al. (2011), Misagh (2016, Das (2017), Kayal and Moharana (2017a), Kayal and Moharana (2017b), Mirali et al. (2017), Nourbakhsh and Yari (2017), Kayal (2018) and Kayal and Moharana (2019) for some weighted versions of various information measures. The existing weighted information measures and the fractional generalized cumulative entropy in (1.5) inspire us to consider the weighted fractional generalized cumulative past entropy (WFGCPE), which has been studied in the subsequent sections of this paper. The following definitions will be useful in order to obtain some ordering results for the WFGCPE.
Definition 1.1. Let X 1 and X 2 be two nonnegative absolutely continuous random variables with probability density functions (PDFs) k 1 , k 2 and CDFs K 1 , K 2 , respectively. Then, X 1 is said to be smaller than X 2 in the sense of the (i) usual stochastic order, denoted by
X 1 ≤ st X 2 , if K 2 (x) ≤ K 1 (x), for all x ∈ R;
(ii) hazard rate order, denoted by X 1 ≤ hr X 2 , ifK 2 (x)/K 1 (x) is nondecreasing in x > 0,
whereK 1 = 1 − K 1 andK 2 = 1 − K 2 ; (ii) dispersive order, denoted by X 1 ≤ disp X 2 , if K −1 1 (u) − K −1 1 (v) ≥ K −1 2 (u) − K −1 2 (v), for all 0 < u < v < 1, where K −1 1 and K −1 2
are the right continuous inverses of K 1 and K 2 , respectively;
(iii) decreasing convex order, denoted by X 1 ≤ dcx X 2 , if and only if E(τ (X 1 )) ≤ E(τ (X 2 )) holds for all nonincreasing convex real valued functions for which the expectations are defined.
The rest of the paper is organized as follows. In Section 2, we introduce WFGCPE and study its various properties. Some ordering results are obtained. It is shown that a less dispersed distribution produces smaller uncertainty in terms of the WFGCPE. Some bounds are obtained. Further, a connection of the proposed measure with the fractional calculus is discovered. The proportional reversed hazard model is considered and the WFGCPE is studied under this set up. Section 3 deals with the estimation of the introduced measure. An empirical WFGCPE estimator is proposed based on the empirical distribution function. Further, large sample properties of the proposed estimator have been studied. Finally, Section 4 concludes the paper with some discussions.
Throughout the paper, the random variables are considered as nonnegative random variables. The terms increasing and decreasing are used in wide sense. The differentiation and integration exist whenever they are used. The notation N denotes the set of natural numbers. Further, throughout the paper, a standard argument 0 = 0. ln 0 = 0. ln ∞ is adopted. The prime ′ denotes the first order ordinary derivatve.
Weighted fractional generalized cumulative past entropy
In this section, we propose WFGCPE and study its various properties. Consider a nonnegative absolutely continuous random variable X with support (0, s) and CDF K and PDF k. Then, the WFGCPE of X with a general nonnegative weight function ψ(x) (≥ 0) is defined as
CP E ψ γ (X) = 1 Γ(γ + 1) s 0 ψ(x)K(x)[− ln K(x)] γ dx, γ > 0,
(2.1) provided the right-hand-side integral is finite, where Γ is a gamma function. From (2.1), one can easily notice that the information measure CP E ψ γ (X) is always nonnegative. It is equal to zero when X is degenerate. Note that the WFGCPE is nonadditive. We recall that an information measure H is additive if
H(A + B) = H(A) + H(B),(2.2)
for any two probabilitically independent systems A and B. If (2.2) is not satisfied, then the information measure is said to be nonadditive. Several information measures have been proposed in the literature since the introduction of the Shannon entropy. Among those, probably Shannon's entropy and Renyi's entropy (see Rényi (1961)) are additive and all other generalizations (see, for example, Tsallis (1988)) are nonadditive. For γ ∈ N and s → +∞, CP E ψ γ (X) reduces to the weighted generalized cumulative entropy proposed by Tahmasebi et al. (2020). Further, when ψ(x) = 1, we get the fractional generalized cumulative entropy due to Di Crescenzo et al. (2021). Let Ψ ′ (x) = d dx Ψ(x) = ψ(x). Then, when γ → 0 + and 0 < s < +∞, we have from (2.1),
CP E ψ γ (X) = s 0 ψ(x)dx − s 0 ψ(x)K(x)dx = Ψ(s) − Ψ(0) − s 0 ψ(x) s x k(y)dy dx = Ψ(s) − Ψ(0) − s 0 y 0 ψ(x)k(y)dxdy = Ψ(s) − E[Ψ(X)]. (2.3)
Thus,
CP E ψ γ (X) = Ψ(s) − E[Ψ(X)], if γ → 0 + and 0 < s < +∞ +∞, if γ → 0 + and s = +∞. (2.4) Moreover, in particular, for ψ(x) = x, we have CP E ψ γ (X) = 1 2 [s 2 − E(X 2 )] , if γ → 0 + and 0 < s < +∞ CP E ψ(x)=x n (X), if γ = n ∈ N and s = +∞ CP E ψ(x)=x (X), if γ = 1 and s = +∞ +∞, if γ → 0 + and s = +∞, (2.5)
where CP E ψ(x)=x n (X) and CP E ψ(x)=x (X) are the shift-dependent generalized cumulative past entropy of order n (see Eq. (1.4) of Kayal and Moharana (2019)) and weighted cumulative past entropy (see Eq. (10) of Misagh (2016)), respectively.
Next, we consider an example to show that even though the fractional generalized cumulative past entropy of two distributions are same, but the WFGCPEs are not same.
Example 2.1. Consider two random variables X 1 and X 2 with respective CDFs K 1 (x) = x − a, 0 < a < x < a + 1 and K 2 (x) = x − (a + 1), a + 1 < x < a + 2 < +∞. Then, the fractional generalized cumulative past entropy of X 1 and X 2 can be obtained respectively as
CP E γ (X 1 ) = 1 2 γ+1 = CP E γ (X 2 ), γ > 0.
That is, the fractional generalized cumulative past entropy of X 1 and X 2 are same. Indeed, it is expected since the fractional generalized cumulative past entropy is shift-independent (see Propositon 2.2 of Di Crescenzo et al. (2021)). In order to reach to the goal, let us consider ψ(x) = x. Then,
CP E ψ(x)=x γ (X 1 ) = 1 3 γ+1 + a 2 γ+1 and CP E ψ(x)=x γ (X 2 ) = 1 3 γ+1 + a + 1 2 γ+1 ,
which show that the WFGCPEs of X 1 and X 2 are not same. Here, CP E ψ γ (X 1 ) < CP E ψ γ (X 2 ). Further, let ψ(x) = x 2 . Thus, we have CP E ψ(x)=x γ (X 1 ) = 1 4 γ+1 + 2a 3 γ+1 + a 2 2 γ+1 and CP E ψ(x)=x γ (X 2 ) = 1 4 γ+1 + 2(a + 1) 3 γ+1 + (a + 1) 2 2 γ+1 , which also reveal that the WFGCPEs of X 1 and X 2 are different from each other.
From Example 2.1, we notice that when ignoring qualitative characteristic of a given data set, the fractional generalized cumulative past entropy of two distributions are same, as treated from the quantitative point of view. However, when we do not ignore it, they are not same. In Table 1, we provide closed form expressions of the WFGCPE of various distributions for two choices of ψ(x). Let K andK = 1 − K be the distribution and survival functions of a symmetrically distributed random variable with bounded support (0, s). Di Crescenzo et al. (2021) showed that for this symmetric random variable the fractional generalized cumulative residual entropy and the fractional generalized cumulative entropy are same. However, this property does not hold for the weighted versions of the fractional generalized cumulative residual entropy and fractional generalized cumulative entropy. Indeed,
CP E ψ γ (X) = 1 Γ(γ + 1) s 0 ψ(s − x)K(x)[− lnK(x)] γ dx, γ > 0. (2.6) Particularly, for a symmetric random variable X with ψ(x) = x, we have CP E ψ γ (X) = s Γ(γ + 1) s 0K (x)[− lnK(x)] γ dx − 1 Γ(γ + 1) s 0 xK(x)[− lnK(x)] γ dx = sCRE γ (X) − CRE ψ(x)=x γ (X), say, (2.7)
where CRE γ (X) and CRE ψ(x)=x γ (X) are respectively known as the fractional generalized cumulative residual entropy and weighted fractional generalized cumulative residual entropy. Di Crescenzo et al. (2021) showed that the fractional generalized cumulative entropy of a nonnegative random variable is shift-independent. Golomb (1966) proposed an information generating (IG) function for a PDF k as
G β (k) = ∞ 0 k β (x)dx, β > 0. (2.8)
The derivatives of this IG function with respect to β at β = 1 yield statistical information measures for a probability distribution. For example, the first order derivative of G β (k) with respect to β at β = 1 produces negative Shannon entropy measure. For detailed properties of the Shannon entropy, please refer to Shannon (1948). Very recently, considered the IG function and discussed some new properties that reveal its connections to some other well-known information measures. The authors have also shown that the IG measure can be expressed based on different orders of fractional Shannon entropy. studied IG function and relative IG function measures associated with maximum and minimum ranked set sampling schemes with unequal sizes. Along the similar lines, here we define a weighted cumulative past entropy generating function as
G β (K) = s 0 ψ(x)K β (x)dx, β > 0, (2.9)
where ψ(x) is a positive valued weight function. Clearly,
d dβ G β (K)| β=1 = s 0 ψ(x)K(x) ln K(x)dx = −CP E ψ γ=1 (X).
(2.10) Indeed, higher order derivatives of G β (K) yield higher order weighted cumulative past entropy measues.
In the following proposition, we establish that the WFGCPE is shift-dependent. This makes the proposed weighted fractional measure useful in context-dependent situations.
Proposition 2.1. Let Y = aX + b, where a > 0 and b ≥ 0. Then, CP E ψ γ (Y ) = a Γ(γ + 1) s 0 ψ(ax + b)K(x)[− ln K(x)] γ dx, γ > 0. (2.11) Proof. The proof follows from K Y (x) = K( x−b a ).
Thus, it is omitted.
γ > 3/c. Model Cumulative distribution function ψ(x) = x ψ(x) = x 2 Power distribution K(x) = x b c , 0 < x < b, c > 0, b 2 c(1 + 2 c ) γ+1 b 3 c(1 + 3 c ) γ+1 Frèchet distribution K(x) = e −bx −c , x > 0, b, c > 0 b 2 c Γ(γ − 2 c ) cΓ(γ + 1) b 3 c Γ(γ − 3 c ) cΓ(γ + 1)
In particular, let us consider ψ(x) = x. Then, after some simplification, form (2.11) we get
CP E ψ γ (Y ) = a 2 CP E ψ(x)=x γ (X) + ab CP E γ (X), (2.12) where CP E ψ(x)=x γ (X) = 1 Γ(γ + 1) s 0 xK(x)[− ln K(x)] γ dx (2.13)
and CP E γ (X) is given by (1.5). It is always of interest to express various information measures in terms of the expectation of a function of random variable of interest. Define
µ(t) = t 0 K(x) K(t) dx, (2.14)
which is known as the mean inactivity time of X. Di Crescenzo and Longobardi (2009) expressed cumulative entropy in terms of the expectation of the mean inactivity time of X. Recently, Di Crescenzo et al. (2021) showed that the fractional generalized cumulative entropy can be written as the expectation of a decreasing function of X. Below, we get similar findings for the case of the WFGCPE.
Proposition 2.2. Let X be nonnegative absolutely continuous random variable with distribution function K(.) and density function k(.) such that CP E ψ γ (X) < +∞. Then,
CP E ψ γ (X) = E[τ ψ γ (X)], (2.15) where τ ψ γ (u) = 1 Γ(γ + 1) s u ψ(x)[− ln K(x)] γ dx, γ > 0.
(2.16)
Proof. Noting K(x) =
x 0 k(u)du and applying Fubini's theorem, we have from (2.1) that
CP E ψ γ (X) = 1 Γ(γ + 1) s 0 ψ(x)[− ln K(x)] γ x 0 k(u)du dx = 1 Γ(γ + 1) s 0 f (u) s u ψ(x)[− ln K(x)] γ dx du = E[τ ψ γ (X)],
where τ ψ γ (.) is given by (2.16). This complets the proof. Note that (2.15) reduces to Eq. (20) of Tahmasebi et al. (2020), for γ = n ∈ N and s = +∞. For ψ(x) = 1, Proposition 2.2 turns out as Proposition 2.1 of Di Crescenzo et al. (2021).
Similar to the normalized cumulative entropy, Di Crescenzo et al. (2021) propoosed a normalized fractional generalized cumulative entropy of a random variable X with nonnegative bounded support. Here, we define a normalized WFGCPE. It is assumed that the weighted cumulative past entropy with a general nonnegative weight function is nonzero and finite, which is given by (see Suhov and Sekeh (2015))
CP E ψ (X) = − s 0 ψ(x)K(x) ln K(x)dx, ψ(x) ≥ 0.
( 2.17) The normalized WFGCPE of X can be defined as
NCP E ψ γ (X) = CP E ψ γ (X) (CP E ψ ) γ = 1 Γ(γ + 1) s 0 ψ(x)K(x)[− ln K(x)] γ dx s 0 ψ(x)K(x)[− ln K(x)]dx γ , γ > 0. (2.18) Note that lim γ→0 + NCP E ψ γ (X) = s 0 ψ(x)K(x)dx and lim γ→1 NCP E ψ γ (X) = 1.
The closed form expressions of the normalized WFGCPE of power and Frèchet distributions are presented in Table 2 for two choices of the weight functions. Model Table 2.
ψ(x) = x ψ(x) = x 2 Power distribution (c + 2) γ−1 Γ(γ + 1)b 2(γ−1) (c + 3) γ−1 Γ(γ + 1)b 3(γ−1) Frèchet distribution 1 (Γ(γ + 1)) 2 c γ−1 b 2 c (γ−1) Γ(γ − 2 c ) (Γ(1 − 2 c )) γ 1 (Γ(γ + 1)) 2 c γ−1 b 3 c (γ−1) Γ(γ − 3 c ) (Γ(1 − 3 c )) γ
Some ordering results
In this subsection, we obtain some ordering properties for the WFGCPE. It can be shown that the function τ ψ γ (u) given by (2.16) is decreasing and convex when ψ(x) is decreasing in x. Thus, for decreasing ψ, we have
X 1 ≤ dcx X 2 ⇒ E[τ ψ γ (X 1 )] ≤ E[τ ψ γ (X 2 )] ⇒ CP E ψ γ (X 1 ) ≤ CP E ψ γ (X 2 ). (2.19)
Di Crescenzo and Toomaj (2017) showed that more dispersed distributions produce larger generalized cumulative entropy. Note that the generalized cumulative entropy was introduced and studied by Kayal (2016). Recently, Di Crescenzo et al. (2021) established similar property for the fractional generalized cumulative entropy. In the following proposition, we notice that analogous result holds for the proposed measure given by (2.1). We recall that the dispersive order X 1 ≤ disp X 2 can be equivalently characterized by (see P. 160, Oja (1981)) k 1 (K −1 1 (u)) ≥ k 2 (K −1 2 (u)), u ∈ (0, 1).
(2.20)
Proposition 2.3. Consider two nonnegative absolutely continuous random variables X 1 and X 2 with CDFs K 1 and K 2 , respectively. Then,
X 1 ≤ disp X 2 ⇒ CP E ψ γ (X 1 ) ≤ CP E ψ γ (X 2 ), (2.21)
provided ψ is increasing.
Proof. Using the transformation K 1 (x) = u, we have
CP E ψ γ (X 1 ) = 1 Γ(γ + 1) 1 0 ψ(K −1 1 (u))u(− ln u) γ du k 1 (K −1 1 (u))
.
Thus
,
CP E ψ γ (X 1 ) − CP E ψ γ (X 2 ) = 1 Γ(γ + 1) 1 0 u(− ln u) γ ψ(K −1 1 (u)) k 1 (K −1 1 (u)) − ψ(K −1 2 (u)) k 2 (K −1 2 (u)) du.(2.22)
We know that X 1 ≤ disp X 2 ⇒ X 1 ≤ st X 2 , and as a result, K −1 1 (u) ≤ K −1 2 (u), u ∈ (0, 1). Moreover, ψ is increasing. Thus, ψ(K −1 1 (u)) ≤ ψ(K −1 2 (u)). Using this inequality and (2.20) in (2.22), the hypothesis in (2.21) holds. This completes the proof. Proposition 2.3 reduces to Proposition 1 of Tahmasebi (2020) when s = +∞ and γ = n ∈ N. In the following, we obtain different sufficient conditions involving the hazard rate order for the similar outcome in (2.21). We recall that X 1 has decreasing failure rate (DFR) if the hazard rate of X 1 is decreasing, equivalently,K 1 (x) = 1 − K 1 (x) is log-convex.
Proposition 2.4. For the random variables X 1 and X 2 as in Proposition 2.3, let X 1 ≤ hr X 2 hold. Further, let X 1 or X 2 be DFR. Then, for γ > 0, one has CP E ψ γ (X 1 ) ≤ CP E ψ γ (X 2 ). Proof. Making use of the result in Proposition 2.3, the proof follows from Theorem 2.1(b) of Bagai and Kochar (1986).
Next, we will study whether the usual stochastic order implies the ordering of the WFGCPE. In doing so, we consider two random variables X 1 and X 2 with respective distribution functions K 1 (x) = x c 1 and K 2 (x) = x c 2 , 0 < x < 1, c 1 , c 2 > 0. For c 1 ≤ c 2 , clearly K 1 (x) ≥ K 2 (x) implies X 1 ≤ st X 2 . Now, we plot graphs of the difference of the WFGCPEs of X 1 and X 2 in Figure 2, for some values of γ, which reveal that in general, the ordering between the WFGCPEs may not hold.
We end this subsection with a result which compares the WFGCPE measures when two random variables are ordered in the sense of the usual stochastic order. Here, prime denotes the ordinary derivative.
Proposition 2.5. Consider two nonnegative absolutely continuous random variables X 1 and X 2 with CDFs K 1 and K 2 , respectively, such that X 1 ≤ st X 2 . Further, assume that the means of X 1 and X 2 are finite but unequal. Then, for τ ψ γ (x) < +∞ and E[τ ψ γ (X)] < ∞, we have where V is nonnegative absolutely continuous random variable with density function given by
CP E ψ γ (X 1 ) = E[τ ψ γ (X 2 )] + E[τ ψ γ ′ (V )][E(X 1 ) − E(X 2 )],(2.k V (x) =K 2 (x) −K 1 (x) E(X 2 ) − E(X 1 ) , x > 0 (2.24)
Proof. We note that CP E ψ γ (X 1 ) = E[τ ψ γ (X 1 )] (see Proposition 2.2). Now, the rest of the proof follows from Theorem 4.1 of Di Crescenzo (1999).
It can be easily seen that when ψ(x) = 1, Proposition 2.5 reduces to Proposition 3.4 of Di Crescenzo et al. (2021). Further, when γ = n ∈ N and ψ(x) = 1, then Proposition 2.5 coincides with Proposition 3.8 of Di Crescenzo and Toomaj (2017). Here, τ ψ γ ′ (v) ≤ 0, for v > 0. Thus, under the assumptions made in Proposition 2.5, a lower bound of CP E ψ γ (X 1 ) can be obtained, which is given by
CP E ψ γ (X 1 ) ≥ E[τ ψ γ (X 2 )
]. In the next subsection, we discuss various bounds of the WFGCPE given by (2.1).
Bounds
Di Crescenzo and Longobardi (2009) established that the cumulative entropy of the sum of two independent nonnegative random variables is larger than the maximum of the cumulative entropies of the individual random variables. Similar result was obtained by Di Crescenzo et al. (2021) for the fractonal generalized cumulative entropy. Below, we establish analogous result for the WFGCPE under the assumption that the weight function ψ(x) is increasing in x and the PDFs of the random variables are log-concave.
Proposition 2.6. Let X 1 and X 2 be a pair of independent nonnegative absolutely continuous random variables having log-concave PDFs. Then, for all increasing function ψ, we have
CP E ψ γ (X 1 + X 2 ) ≥ max{CP E ψ γ (X 1 ), CP E ψ γ (X 2 )}, γ > 0. (2.25)
Proof. Under the assumptions made, the proof follows from Theorem 3.B.7 of Shaked and Shanthikumar (2007) and Proposition 2.3. Thus, it is omitted.
When γ = n ∈ N, s = +∞ and ψ(x) = [K(x)] n , the result in Proposition 2.6 yields Proposition 2 of Tahmasebi et al. (2020). Further, if we consider ψ(x) = 1, then one can easily obtain the result stated in Proposition 3.1 of Di Crescenzo et al. (2021). Next, we obtain a bound of the WFGCPE given by (2.1).
Proposition 2.7. For a nonnegative random variable with support (0, s) and ψ(
x) = [ξ(x)] γ , ξ(x) ≥ 0, we have CP E ψ γ (X) ≥ s 1−γ Γ(γ + 1) CP E ξ (X) γ , if γ ≥ 1 ≤ s 1−γ Γ(γ + 1) CP E ξ (X) γ , if 0 < γ ≤ 1, (2.26)
where CP E ξ (X) = − s 0 ξ(x)K(x) ln K(x)dx is known as the weighted cumulative past entropy with weight function ξ(x).
Proof. Let γ ≥ 1. Then, for 0 < x < s, K(x) ≥ [K(x)] γ . Thus, under the assumptions made, from (2.1), we obtain
CP E ψ γ (X) ≥ 1 Γ(γ + 1) s 0 α γ (β(x))dx, (2.27) where β(x) = −ξ(x)K(x) ln K(x) ≥ 0 and α γ (t) = t γ .
It can be shown that t γ is convex in t ≥ 0, for γ ≥ 1. Thus, from Jensen's integral inequality, the rest of the proof follows. The case for 0 < γ ≤ 1 can be proved similarly.
Proposition 2.8. For a nonnegative random variable with support (0, s) and γ > 0, we have
CP E ψ γ (X) ≥ ψ(s)CP E γ (X), if ψ is decreasing ≤ ψ(s)CP E γ (X), if ψ is increasing, (2.28)
where CP E γ (X) = 1 Γ(γ+1) s 0 K(x)[− ln K(x)] γ dx is known as the fractional generalized cumulative past entropy.
Proof. The proof is straightforward, and thus it is omitted. Proposition 2.9. Let X be an absolutely continuous random variable with support (0, s) with mean E(X) = µ < +∞. Then,
(i) CP E ψ γ (X) ≥ 1 Γ(γ+1) s 0 ψ(x)K(x)[1 − K(x)] γ dx; (ii) CP E ψ γ (X) ≥ D(γ)e H(X) , where D(γ) = e 1 0 ln[ψ(K −1 (u))u(− ln u) γ ]du and H(X) is the dif- ferential entropy of X; (iii) CP E ψ γ (X) ≥ τ ψ γ (µ)
, provided ψ is decreasing. Proof. The first part of this proposition follows from the relation ln u ≤ u − 1, for 0 < u < 1. To prove the second part, from the log-sum inequality, we have
s 0 k(x) ln k(x) ψ(x)K(x)[− ln K(x)] γ dx ≥ ln 1 s 0 ψ(x)K(x)[− ln K(x)] γ dx = −CP E ψ γ (X). (2.29)
Now, the rest of the proof follows using the arguments as in the proof of Theorem 2 of Xiong et al. (2019). Third part follows from Jensen's inequality.
We end this subsection with the following result, which provides bounds of the WFGCPE of X 2 , where the CDF of X 2 is given by (2.34).
Proposition 2.10. Let X 1 and X 2 be two random variables with CDFs K 1 and K 2 , respectively. Further, assume that the random variables satisfy proportional reversed hazard model described in (2.34). Then,
CP E ψ γ (X 2 ) ≤ η γ CP E ψ γ (X 1 ), for η ≥ 1 ≥ η γ CP E ψ γ (X 1 ), for 0 < η ≤ 1. Proof.
The proof is simple, and thus it is omitted.
Connection with fractional calculus
Fractional calculus and its widely application have recently been paid more and more attentions. We refer to Miller and Ross (1993,) and Gorenflo and Mainardi (2008) for more recent development on fractional calculus. Several known forms of the fractional integrals have been proposed in the literature. Among these, the Riemann-Liouville fractional integral of order γ > 0 has been studied extensively for its applications. See, for instance Dahmani et al. (2010), Romero et al. (2013) and Tunc (2013). Let γ > 0 and f ∈ L 1 (a, b), a ≥ 0. Then, the left-sided Riemann-Liouville fractional integral in the interval [a, b] is defined as follows
J γ a + f (t) = 1 Γ(γ) t a f (τ ) (t − τ ) 1−γ dτ, t ∈ [a, b], (2.30)
where f is a real-valued continuous function. We recall that the notion of the left-sided Riemann-Liouville fractional integral given by (2.30) can be elongated with respect to a strictly increasing function h(.). In addition to this strictly increasing property, we further assume that the first order derivative h ′ (.) is continuous in the interval (a, b). Then, for γ > 0, the left-sided Riemann-Liouville fractional integral of f with respect to h is given by
J γ a + ;h f (t) = 1 Γ(γ) t a h ′ (τ )f (τ ) (h(t) − h(τ )) 1−γ dτ, t ∈ [a, b].
(2.31)
One may refer to Samko et al. (1993) (Section 18.2) for the representation given in (2.31). Now, we will notice that the WFGCPE can be expressed in terms of the limits of the integral (2.31) after suitable choices of the functions f (x) and h(x), that is, the fractional nature of the proposed measure is justified. Let us take
h(x) = ln K(x) and f (x) = ψ(x)(K(x)) 2 k(x) . Then, lim a→0, t→s J γ+1 a + ;h f (t) = 1 Γ(γ) s 0 ψ(x)K(x)[− ln K(x)] γ dx = CP E ψ γ (X).
(2.32)
Proportional reversed hazards model
Let X be a nonnegative absolutely continuous random variable with distribution function K and density function k. Here, X may be treated as the lifetime of a unit. If λ(t) = d dt ln K(t) denotes the reversed hazard rate of X, then λ(t)dt represents the conditional probability the unit stopped working in an infinitesimal interval of width dt preceding t, given that the unit failed before t. In otherwords, λ(t)dt is the probability of failing in the interval (t − dt, t) given that the unit is found failed at time t. Let X 1 and X 2 be two random variables with PDFs k 1 and k 2 , CDFs K 1 and K 2 and reversed hazard rate functions λ 1 and λ 2 , respectively. It is well known that X 1 and X 2 have proportional reversed hazard rate model if
λ 2 (x) = ηλ 1 (x) = η k 1 (x) K 1 (x) ,(2.33)
where η > 0 is known as the proportionality constant. Note that (2.33) is equivalent to the model
K 2 (x) = [K 1 (x)] η , x ∈ R, η > 0, (2.34)
where K 1 is the baseline distribution function (see Gupta et al. (1998), Di Crescenzo (2000 and Gupta and Gupta (2007)). The PDF of X 2 is k 2 (x) = η(K 1 (x)) η−1 k 1 (x), x > 0, η > 0.
(2.35) Next, we evaluate the WFGCPE of X 2 . Making use of (2.35), from (2.1) and (2.34), we have after some standard calculations that
CP E ψ γ (X 2 ) = 1 Γ(γ + 1) s 0 ψ(x)(K 1 (x)) η [− ln(K 1 (x)) η ] γ dx = − 1 Γ(γ + 1) s 0 xψ(x)[− ln(K 1 (x)) η ] γ η(K 1 (x)) η−1 k 1 (x)dx −γ s 0 xψ(x)[− ln(K 1 (x)) η ] γ−1 η(K 1 (x)) η−1 k 1 (x)dx + s 0 xψ ′ (x) ηλ 1 (x) [− ln(K 1 (x)) η ] γ η(K 1 (x)) η−1 k 1 (x)dx = − 1 Γ(γ + 1) s 0 xψ(x)[− ln K 2 (x)] γ k 2 (x)dx −γ s 0 xψ(x)[− ln K 2 (x)] γ−1 k 2 (x)dx + s 0 xψ ′ (x) ηλ 1 (x) [− ln K 2 (x)] γ k 2 (x)dx . (2.36) Now, denote E η 2 (γ) = 1 Γ(γ) E X 2 ψ(X 2 )[− ln K 2 (X 2 )] γ−1 (2.37) andẼ η 2 (γ) = 1 Γ(γ) E X 2 ψ ′ (X 2 ) λ 1 (X 2 ) [− ln K 2 (X 2 )] γ−1 .
(2.38) Thus, using (2.37) and (2.38) in (2.36), the following proposition can be obtained.
Proposition 2.11. Let (2.34) hold. Then, the WFGCPE of X 2 can be expressed as
CP E ψ γ (X 2 ) = E η 2 (γ) − E η 2 (γ + 1) − η −1Ẽ η 2 (γ + 1), γ > 0, (2.39)
provided the associated expectations are finite.
We note that when ψ(x) = 1, (2.39) reduces to Eq. (19) of Di Crescenzo et al. (2021). An illustration of the result in Proposition 2.11 is provided in the following example when ψ(x) = x.
Example 2.2. Let K 1 (x) = x, 0 < x < 1 be the baseline distribution function. We will find the WFGCPE of a random variable X 2 with distribution function K 2 (x) = [K 1 (x)] c = x c , 0 < x < 1, c > 0. Under this set up, from (2.37), for ψ(x) = x, we obtain
E η 2 (γ) = c γ (2 + c) γ =Ẽ η 2 (γ).
(2.40) Now, using (2.40) in (2.39), we get
CP E ψ(x)=x γ (X 2 ) = c γ (2 + c) γ − c γ+1 (2 + c) γ+1 − c γ (2 + c) γ+1 = c γ (2 + c) γ+1 ,
which coincides with the case of the Power distribution as in Table 1 for b = 1.
The WFGCPE of X 2 can be represented in terms of the WFGCPE with different weight functions as follows
CP E ψ γ (X 2 ) = −CP E ψ 1 γ (X 2 ) − ηCP E ψ 2 γ (X 2 ) + ηγ −1 CP E ψ 2 γ+1 (X 2 ), (2.41)
where ψ is increasing, ψ 1 (x) = xψ ′ (x) and ψ 2 (x) = xψ(x)λ 1 (x). Next, we show that a recurrence relation can be constructed for the WFGCPE of X 2 . It is shown that the WFGCPE of X 2 of order (γ + 1) can be expressed in terms of that of order γ. From (2.39),
CP E ψ γ+1 (X 2 ) = E η 2 (γ + 1) − E η 2 (γ + 2) − η −1Ẽ η 2 (γ + 2) = E η 2 (γ) − E η 2 (γ + 2) − η −1 [Ẽ η 2 (γ + 1) +Ẽ η 2 (γ + 2)] − CP E ψ γ (X 2 ). (2.42)
Further, when ψ(x) = 1, (2.42) reduces to Eq. (22) of Di Crescenzo et al. (2021). We note that the recurrence relation in (2.42) can be generalized for any integer n ≥ 1, which is presented in the following proposition.
Proposition 2.12. Let n be a positive integer. Then, under the model in (2.34), for η > 0 and γ > 0, we obtain CP E ψ γ+n (X 2 ) = E η 2 (γ + n) − E η 2 (γ + n + 1) + (−1) n−1 [E η 2 (γ) − E η 2 (γ + 1)] +η −1 [(−1) nẼ η 2 (γ + 1) −Ẽ η 2 (γ + n + 1)] + (−1) n CP E ψ γ (X 2 ). (2.43)
Proof. The proof follows using similar arguments as in the proof of Proposition 2.4 of Di Crescenzo et al. (2021). Thus, it is omitted.
We note that for the weight function ψ(x) = 1, Proposition 2.12 coincides with Proposition 2.4 of Di Crescenzo et al. (2021). In this case, the termsẼ η 2 (γ + 1) andẼ η 2 (γ + n + 1) become zero.
Empirical WFGCPE
Let T = (T 1 , . . . , T n ) be a random sample of size n drawn from a population with CDF K. The order statistics of the sample T are the ordered sample values, denoted by T 1:n ≤ . . . ≤ T n:n . Denote the indicator function of the set A by I A , where
I A = 1, if A is true 0, otherwise.
The empirical CDF on the basis of the random sample T is given by
K n (x) = 1 n n i=1 I {T i ≤x} = 0, if x < T 1:n l n , if T l:n ≤ x < T l+1:n 1, if x ≥ T n:n ,(3.1)
where l = 1, . . . , n − 1. Using (3.1), for γ > 0 and ψ(x) ≥ 0, the WFGCPE given by (2.1) can be expressed as
CP E ψ γ ( K n ) = 1 Γ(γ + 1) s 0 ψ(x) K n (x)[− ln K n (x)] γ dx = 1 Γ(γ + 1) n−1 l=1 T l+1:n T l:n ψ(x) K n (x)[− ln K n (x)] γ dx = 1 Γ(γ + 1) n−1 l=1 Z l l n − ln l n γ , (3.2)
where Z l = Ψ(T l+1:n ) − Ψ(T l:n ) and Ψ(x) =
x 0 ψ(x)dx. Note that when ψ(x) = 1, we get the empirical fractional generalized cumulative entropy (see Di Crescenzo et al. (2021)) from (3.2). For ψ(x) = x and γ = 1, (3.2) coincides with the empirical weighted cumulative entropy proposed by Misagh et al. (2011). Further, let γ be a natural number. Then, for ψ(x) = x, (3.2) reduces to the empirical shift-dependent generalized cumulative entropy due to Kayal and Moharana (2019). Thus, we observe that the proposed empirical estimate in (3.2) is a generalization of several empirical estimates developed so far. In the following theorem, we show that the empirical WFGCPE converges to the WFGCPE almost surely.
Theorem 3.1. Consider a nonnegative absolutely continuous random variable X with CDF K. Then, for X ∈ L p , p > 2, we have
CP E ψ γ ( K n ) → CP E ψ γ (X),
almost surely.
Proof. We have
Γ(γ + 1) (−1) γ CP E ψ γ ( K n ) = 1 0 ψ(x) K n (x) ln K n (x) γ dx + s 1 ψ(x) K n (x) ln K n (x) γ dx = I 1 + I 2 , say. (3.3)
Now, using dominated convergence theorem and Glivenko-Cantelli theorem, the rest of the proof follows as in Theorem 14 of Tahmasebi et al. (2020).
Next, we consider a data set, which was studied by Abouammoh et al. (1994). It represents the ordered lifetimes (in days) of 43 blood cancer patients, due to one of the Ministry of Health Hospitals in Saudi Arabia. -----115, 181, 255, 418, 441, 461, 516, 739, 743, 789, 807, 865, 924, 983, 1024, 1062, 1063, 1165, 1191, 1222, 1222, 1251, 1277, 1290, 1357, 1369, 1408, 1455, 1478, 1549, 1578, 1578, 15999, 1603, 1605, 1696, 1735, 1799, 1815, 1852, 1899, 1925, 1965.
----------------------------------
---------------------------------------Based on this dats set, tet us now compute the values of the WFGCPE with weight functions ψ(x) = √ x, ψ(x) = x and ψ(x) = x 2 for various values of γ, which are presented in Table 3. Indeed, one can compute the values of the WFGCPE with any positive valued weight functions. Example 3.1. Let T = (T 1 , . . . , T n ) be a random sample drawn from a population with CDF K(x) = x 2 , 0 < x < 1. Consider ψ(x) = x. It can be shown that T 2 l , l = 1, . . . , n − 1 follow uniform distribution in the interval (0, 1). Further, the sample spacings T 2 l+1:n − T 2 l:n , l = 1, . . . , n − 1 are independently beta distributed with parameters 1 and n. For details, please refer to Pyke (1965). Thus, from (3.2), for γ > 0, we get
E[CP E ψ γ ( K n )] = 1 Γ(γ + 1) n−1 l=1 1 2(1 + n) l n − ln l n γ (3.4) and V ar[CP E ψ γ ( K n )] = 1 (Γ(γ + 1)) 2 n−1 l=1 n 4(1 + n) 2 (2 + n) l n 2 − ln l n 2γ .
(3.5)
We present the computed values of the means and variances of the empirical estimator of WFGCPE under the set up explained in Example 3.1 in Table 4. From Table 4, we observe that for fixed sample sizes, as γ increases, the mean and variance of the proposed estimator decrease. Further, for fixed γ, the mean and variance respectively increase and decrease, as the sample size increases. Table 4: Numerical values of E(CP E ψ γ ( K n )) and V ar(CP E ψ γ ( K n )) for the distribution as in Example 3.1.
γ n E(CP E ψ γ ( K n )) V ar(CP E ψ γ ( K n )) γ n E(CP E ψ γ ( K n )) V ar(CP E ψ γ ( K n )) 0. Example 3.2. Consider a random sample T from a Weibull population with CDF K(x) = 1 − e −θx 2 , x > 0, θ > 0. Using simple transformation theory, it can be established that T 2 i , i = 1, . . . , n follow exponential distribution with mean 1/θ. Further, let ψ(x) = x. Under the present set up, the sample spacings T 2 l+1:n − T 2 l:n , l = 1, . . . , n − 1 are independent and exponentialy distributed with mean 1/(θ(n − l)) (see Pyke (1965)). Thus, from (3.2), we obtain E[CP E ψ γ ( K n )] = 1 Γ(γ + 1) (3.7)
Example 3.3. Let T be a random sample from a population with absolutely continuous CDF K and PDF k. Let ψ(x) = k(x). Then, Z l = K(T l+1:n ) − K(T l:n ), l = 1, . . . , n − 1 are independent and beta distributed random variables with parameters 1 and n. We refer to Pyke (1965) (3.9) Hereafter, we provide central limit theorems for the empirical WFGCPE when the random samples are drawn from (i) a Weibull distribution with ψ(x) = x and (ii) a general CDF K(x) with ψ(x) = k(x) = d dx K(x).
Theorem 3.2. Consider a random sample T from a population with PDF k(x) = 2λxe −λx 2 , x > 0, λ > 0. Then, for γ > 0 and ψ(x) = x, CP E ψ γ ( K n ) − E(CP E ψ γ ( K n )) V ar(CP E ψ γ ( K n )) → N(0, 1) in distribution as n → ∞.
Proof. The proof is similar to that of Theorem 5.1 of Kayal and Moharana (2019). Thus, it is omitted.
Theorem 3.3. Consider a random sample T from a population with CDF K(x). Then, for γ > 0 and ψ(x) = k(x), CP E ψ γ ( K n ) − E(CP E ψ γ ( K n )) V ar(CP E ψ γ ( K n )) → N(0, 1)
in distribution as n → ∞.
Proof. The proof is similar to that of Theorem 15 of Tahmasebi et al. (2020). Thus, it is omitted.
Concluding remarks and some discussions
In this paper, we have proposed a weighted fractional generalized cumulative past entropy of a nonnegative random variable having bounded support. A number of results for the proposed weighted fractional measure have been obtained when the weight is a general nonnegative function. It is noticed that WFGCPE is shift-dependent and can be written as the expectation of a decreasing function of the random variable. Some ordering results and bounds are established. Based on the properties, it can be seen that the proposed measure is a variability measure. Further, a connection between the proposed weighted fractional measure and the fractional calculus is provded. The weighted fractional generalized cumulative past entropy measure is studied for the proportional reversed hazards model. A nonparametric estimator of the weighted fractional generalized cumulative past entropy is introduced based on the empirical cumulative distribution function. Few examples are considered to compute mean and variance of the estimator. Finally, a large sample property of the estimator is studied. The proposed measure is not appropriate when uncertainty is associated with past. Suppose a system has started working at time t = 0. At a pre-specified inspection time say t ∈ (0, s), the system is found to be down. Then, the random variable X (t) = X|X ≤ t, where t ∈ (0, s) is known as the past lifetime. The dynamic weighted fractional generalized cumulative past entropy of X (t) is defined as CP E ψ γ (X; t) = 1 Γ(γ + 1) t 0 ψ(x) K(x) K(t) − ln K(x) K(t) γ dx, γ > 0, ψ(x) ≥ 0.
(4.10)
One can prove most of the similar properties for CP E ψ γ (X; t) as established for the proposed measure given by (2.1).
Figure 1 :
1Graphs of the normalized WFGCPE with (a) ψ(x) = x and (b) ψ(x) = x 2 for the power distribution as in
Figure 2 :
2The plots of CP E ψ γ (X 1 ) − CP E ψ γ (X 2 ) for (a) γ = 0.5, (b) γ = 0.75, (c) γ = 1.5 and (d) γ = 2.5. Here, ψ(x) = x is considered.
Figure 3 :
3Graphs of the empirical WFGCPE with (a) ψ(x) = x and (b) ψ(x) = x 2 based on the ordered lifetimes (in days) of 43 blood cancer patients, for γ = 0.25, 0.5, 0.75, 1.5 and 2.75 (from above) Next, we consider examples to illustrate the proposed empirical measure.
Table 1 :
1The closed form expressions of the WFGCPE of different distributions. For the case of Frèchet distribution, we assume that
Table 2 :
2The closed form expressions of the normalized WFGCPE of different distributions asTable 1. For the case of the Frèchet distribution, c > max{3/γ, 3}, γ > 0.
Table 3 :
3Values of the WFGCPE based on the ordered lifetimes (in days) of 43 blood cancer patients.γ
ψ(x) =
√ x ψ(x) = x
ψ(x) = x 2
0.25
24004.3
881460 1.27542 × 10 9
0.5
20065.8
707724 9.59358 × 10 8
0.75
16858.4
570814 7.23578 × 10 8
1.5
10279.3
309581 3.22149 × 10 8
2.75
4489.63
114320 8.89639 × 10 7
10
20
30
40
n
200 000
400 000
600 000
800 000
Acknowledgements: The author Suchandan Kayal acknowledges the partial financial support for this work under a grant MTR/2018/000350, SERB, India.Conflict of interest statement:The authors declare that they do not have any conflict of interests.Data availability statement: NA
On partial orderings and testing of new better than renewal used classes. A Abouammoh, S Abdulghani, I Qamber, Reliability Engineering & System Safety. 431Abouammoh, A., Abdulghani, S. and Qamber, I. (1994). On partial orderings and testing of new better than renewal used classes, Reliability Engineering & System Safety. 43(1), 37- 41.
On tail-ordering and comparison of failure rates. I Bagai, S C Kochar, Communications in Statistics-Theory and Methods. 154Bagai, I. and Kochar, S. C. (1986). On tail-ordering and comparison of failure rates, Com- munications in Statistics-Theory and Methods. 15(4), 1377-1388.
New generalizations of Grüss inequality using Riemann-Liouville fractional integrals. Z Dahmani, L Tabharit, S Taf, Bulletin of Mathematical Analysis and Applications. 23Dahmani, Z., Tabharit, L. and Taf, S. (2010). New generalizations of Grüss inequality using Riemann-Liouville fractional integrals, Bulletin of Mathematical Analysis and Applica- tions. 2(3), 93-99.
On weighted generalized entropy. S Das, Communications in Statistics-Theory and Methods. 4612Das, S. (2017). On weighted generalized entropy, Communications in Statistics-Theory and Methods. 46(12), 5707-5727.
A probabilistic analogue of the mean value theorem and its applications to reliability theory. A Di Crescenzo, Journal of Applied Probability. 363Di Crescenzo, A. (1999). A probabilistic analogue of the mean value theorem and its appli- cations to reliability theory, Journal of Applied Probability. 36(3), 706-719.
Some results on the proportional reversed hazards model. A Di Crescenzo, Statistics & Probability Letters. 504Di Crescenzo, A. (2000). Some results on the proportional reversed hazards model, Statistics & Probability Letters. 50(4), 313-321.
Fractional generalized cumulative entropy and its dynamic version. A Di Crescenzo, S Kayal, A Meoli, Communications in Nonlinear Science and Numerical Simulation. 102105899Di Crescenzo, A., Kayal, S. and Meoli, A. (2021). Fractional generalized cumulative entropy and its dynamic version, Communications in Nonlinear Science and Numerical Simulation. 102, 105899.
A Di Crescenzo, M Longobardi, On weighted residual and past entropies. arXiv preprint math/0703489.Di Crescenzo, A. and Longobardi, M. (2007). On weighted residual and past entropies, arXiv preprint math/0703489. .
On cumulative entropies. A Di Crescenzo, M Longobardi, Journal of Statistical Planning and Inference. 13912Di Crescenzo, A. and Longobardi, M. (2009). On cumulative entropies, Journal of Statistical Planning and Inference. 139(12), 4072-4087.
Further results on the generalized cumulative entropy. A Di Crescenzo, A Toomaj, Kybernetika. 535Di Crescenzo, A. and Toomaj, A. (2017). Further results on the generalized cumulative entropy, Kybernetika. 53(5), 959-982.
Multiscale fractional cumulative residual entropy of higherorder moments for estimating uncertainty, Fluctuation and Noise Letters. K Dong, X Zhang, 192050038Dong, K. and Zhang, X. (2020). Multiscale fractional cumulative residual entropy of higher- order moments for estimating uncertainty, Fluctuation and Noise Letters. 19(04), 2050038.
Weyl and Marchaud derivatives: A forgotten history. F Ferrari, Mathematics. 616Ferrari, F. (2018). Weyl and Marchaud derivatives: A forgotten history, Mathematics. 6(1), 6.
The information generating function of a probability distribution. S Golomb, IEEE Transactions on Information Theory. 121Golomb, S. (1966). The information generating function of a probability distribution, IEEE Transactions on Information Theory. 12(1), 75-77.
R Gorenflo, F Mainardi, arXiv:0805.3823Fractional calculus: integral and differential equations of fractional order. arXiv preprintGorenflo, R. and Mainardi, F. (2008). Fractional calculus: integral and differential equations of fractional order, arXiv preprint arXiv:0805.3823. .
Weighted entropy. S Guiaşu, Reports on Mathematical Physics. 23Guiaşu, S. (1971). Weighted entropy, Reports on Mathematical Physics. 2(3), 165-179.
Modeling failure time data by Lehman alternatives. R C Gupta, P L Gupta, R D Gupta, Communications in Statistics-Theory and methods. 274Gupta, R. C., Gupta, P. L. and Gupta, R. D. (1998). Modeling failure time data by Lehman alternatives, Communications in Statistics-Theory and methods. 27(4), 887-904.
Proportional reversed hazard rate model and its applications. R C Gupta, R D Gupta, Journal of Statistical Planning and Inference. 13711Gupta, R. C. and Gupta, R. D. (2007). Proportional reversed hazard rate model and its applications, Journal of Statistical Planning and Inference. 137(11), 3525-3536.
On generalized cumulative entropies. S Kayal, Probability in the Engineering and Informational Sciences. 304Kayal, S. (2016). On generalized cumulative entropies, Probability in the Engineering and Informational Sciences. 30(4), 640-662.
On weighted generalized cumulative residual entropy of order n, Methodology and Computing in Applied Probability. S Kayal, 20Kayal, S. (2018). On weighted generalized cumulative residual entropy of order n, Method- ology and Computing in Applied Probability. 20(2), 487-503.
On weighted cumulative residual entropy. S Kayal, R Moharana, Journal of Statistics and Management Systems. 202Kayal, S. and Moharana, R. (2017a). On weighted cumulative residual entropy, Journal of Statistics and Management Systems. 20(2), 153-173.
On weighted measures of cumulative entropy. S Kayal, R Moharana, International Journal of Mathematics and Statistics. 183Kayal, S. and Moharana, R. (2017b). On weighted measures of cumulative entropy, Inter- national Journal of Mathematics and Statistics. 18(3), 26-46.
A shift-dependent generalized cumulative entropy of order n. S Kayal, R Moharana, Communications in Statistics-Simulation and Computation. 486Kayal, S. and Moharana, R. (2019). A shift-dependent generalized cumulative entropy of order n, Communications in Statistics-Simulation and Computation. 48(6), 1768-1783.
Jensen-information generating function and its connections to some well-known information measures. O Kharazmi, N Balakrishnan, Statistics & Probability Letters. 170108995Kharazmi, O. and Balakrishnan, N. (2021). Jensen-information generating function and its connections to some well-known information measures, Statistics & Probability Letters. 170, 108995.
Information generating function of ranked set samples. O Kharazmi, M Tamandi, N Balakrishnan, Entropy. 23111381Kharazmi, O., Tamandi, M. and Balakrishnan, N. (2021). Information generating function of ranked set samples, Entropy. 23(11), 1381.
An Introduction to the Fractional Calculus and Fractional Differential Equations. K S Miller, B Ross, Wiley and SonsNew York, NY, USAMiller, K. S. and Ross, B. (1993,). An Introduction to the Fractional Calculus and Fractional Differential Equations, Wiley and Sons, New York, NY, USA.
On weighted cumulative residual entropy. M Mirali, S Baratpour, V Fakoor, Communications in Statistics-Theory and Methods. 466Mirali, M., Baratpour, S. and Fakoor, V. (2017). On weighted cumulative residual entropy, Communications in Statistics-Theory and Methods. 46(6), 2857-2869.
On shift-dependent cumulative entropy measures. F Misagh, International Journal of Mathematics and Mathematical Sciences. Misagh, F. (2016). On shift-dependent cumulative entropy measures, International Journal of Mathematics and Mathematical Sciences. 2016.
Weighted cumulative entropy and its estimation. F Misagh, Y Panahi, G Yari, R Shahi, IEEE International Conference on Quality and Reliability. Misagh, F., Panahi, Y., Yari, G. and Shahi, R. (2011). Weighted cumulative entropy and its estimation, IEEE International Conference on Quality and Reliability. pp. 477-480.
Weighted Renyi's entropy for lifetime distributions. M Nourbakhsh, G Yari, Communications in Statistics-Theory and Methods. 4614Nourbakhsh, M. and Yari, G. (2017). Weighted Renyi's entropy for lifetime distributions, Communications in Statistics-Theory and Methods. 46(14), 7085-7098.
On location, scale, skewness and kurtosis of univariate distributions. H Oja, Scandinavian Journal of Statistics. 83Oja, H. (1981). On location, scale, skewness and kurtosis of univariate distributions, Scan- dinavian Journal of Statistics. 8(3), 154-168.
R Pyke, Spacings. 27Pyke, R. (1965). Spacings, Journal of the Royal Statistical Society: Series B (Methodologi- cal). 27(3), 395-436.
Cumulative residual entropy: a new measure of information. M Rao, Y Chen, B C Vemuri, F Wang, IEEE transactions on Information Theory. 506Rao, M., Chen, Y., Vemuri, B. C. and Wang, F. (2004). Cumulative residual entropy: a new measure of information, IEEE transactions on Information Theory. 50(6), 1220-1228.
On measures of entropy and information. A Rényi, Contributions to the Theory of Statistics. 1The Regents of the University of CaliforniaRényi, A. (1961). On measures of entropy and information, Proceedings of the Fourth Berke- ley Symposium on Mathematical Statistics and Probability, Volume 1: Contributions to the Theory of Statistics, The Regents of the University of California.
On the k-Riemann-Liouville fractional derivative. L G Romero, L L Luque, G A Dorrego, R A Cerutti, International Journal of Contemporary Mathematical Sciences. 81Romero, L. G., Luque, L. L., Dorrego, G. A. and Cerutti, R. A. (2013). On the k-Riemann- Liouville fractional derivative, International Journal of Contemporary Mathematical Sci- ences. 8(1), 41-51.
Fractional Integrals and Derivatives. S G Samko, A A Kilbas, O I Marichev, Theory and Applications. Gordon and Breach ScienceSamko, S. G., Kilbas, A. A. and Marichev, O. I. (1993). Fractional Integrals and Derivatives, Theory and Applications, Gordon and Breach Science, Yverdon, Switzerland.
Stochastic Orders. M Shaked, J G Shanthikumar, SpringerNew YorkShaked, M. and Shanthikumar, J. G. (2007). Stochastic Orders, Springer, New York.
A note on the concept of entropy. C E Shannon, Bell System Technical Journal. 273Shannon, C. E. (1948). A note on the concept of entropy, Bell System Technical Journal. 27(3), 379-423.
Weighted cumulative entropies: an extension of CRE and CE. Y Suhov, S Y Sekeh, arXiv:1507.07051arXiv preprintSuhov, Y. and Sekeh, S. Y. (2015). Weighted cumulative entropies: an extension of CRE and CE, arXiv preprint arXiv:1507.07051. .
Weighted extensions of generalized cumulative residual entropy and their applications. S Tahmasebi, Communications in Statistics-Theory and Methods. 4921Tahmasebi, S. (2020). Weighted extensions of generalized cumulative residual entropy and their applications, Communications in Statistics-Theory and Methods. 49(21), 5196-5219.
An extension of weighted generalized cumulative past measure of information. S Tahmasebi, M Longobardi, F Foroghi, F Lak, Ricerche di Matematica. 691Tahmasebi, S., Longobardi, M., Foroghi, F. and Lak, F. (2020). An extension of weighted generalized cumulative past measure of information, Ricerche di Matematica. 69(1), 53-81.
Possible generalization of Boltzmann-Gibbs statistics. C Tsallis, Journal of Statistical Physics. 521Tsallis, C. (1988). Possible generalization of Boltzmann-Gibbs statistics, Journal of Statis- tical Physics. 52(1), 479-487.
On new inequalities for h-convex functions via Riemann-Liouville fractional integration. M Tunc, Filomat. 274Tunc, M. (2013). On new inequalities for h-convex functions via Riemann-Liouville fractional integration, Filomat. 27(4), 559-565.
Entropies based on fractional calculus. M R Ubriaco, Physics Letters A. 37330Ubriaco, M. R. (2009). Entropies based on fractional calculus, Physics Letters A. 373(30), 2516-2519.
Extensive generalization of statistical mechanics based on incomplete information theory. Q A Wang, Entropy. 52Wang, Q. A. (2003). Extensive generalization of statistical mechanics based on incomplete information theory, Entropy. 5(2), 220-232.
Fractional cumulative residual entropy, Communications in Nonlinear Science and Numerical Simulation. H Xiong, P Shang, Y Zhang, 78Xiong, H., Shang, P. and Zhang, Y. (2019). Fractional cumulative residual entropy, Com- munications in Nonlinear Science and Numerical Simulation. 78, 104879.
| [] |
[
"New infinite families of near MDS codes holding t-designs",
"New infinite families of near MDS codes holding t-designs",
"New infinite families of near MDS codes holding t-designs",
"New infinite families of near MDS codes holding t-designs"
] | [
"Ziling Heng \nSchool of Science\nChang'an University\nXi'an 710064China\n",
"Xinran Wang \nSchool of Science\nChang'an University\nXi'an 710064China\n",
"Ziling Heng \nSchool of Science\nChang'an University\nXi'an 710064China\n",
"Xinran Wang \nSchool of Science\nChang'an University\nXi'an 710064China\n"
] | [
"School of Science\nChang'an University\nXi'an 710064China",
"School of Science\nChang'an University\nXi'an 710064China",
"School of Science\nChang'an University\nXi'an 710064China",
"School of Science\nChang'an University\nXi'an 710064China"
] | [] | In "Infinite families of near MDS codes holding t-designs, IEEE Trans. Inform. Theory, 2020, 66(9), pp. 5419-5428", Ding and Tang made a breakthrough in constructing the first two infinite families of NMDS codes holding 2-designs or 3-designs. Up to now, there are only a few known infinite families of NMDS codes holding t-designs in the literature. The objective of this paper is to construct new infinite families of NMDS codes holding t-designs. We determine the weight enumerators of the NMDS codes and prove that the NMDS codes hold 2-designs or 3-designs. Compared with known t-designs from NMDS codes, ours have different parameters. Besides, several infinite families of optimal locally recoverable codes are also derived via the NMDS codes. | 10.1016/j.disc.2023.113538 | [
"https://export.arxiv.org/pdf/2210.05194v2.pdf"
] | 252,815,698 | 2210.05194 | 349883b2ee814a284025077f26e73a4af518998e |
New infinite families of near MDS codes holding t-designs
17 Mar 2023
Ziling Heng
School of Science
Chang'an University
Xi'an 710064China
Xinran Wang
School of Science
Chang'an University
Xi'an 710064China
New infinite families of near MDS codes holding t-designs
17 Mar 2023arXiv:2210.05194v2 [cs.IT]Linear codeweight enumeratort-design 2000 MSC: 94B0594A05
In "Infinite families of near MDS codes holding t-designs, IEEE Trans. Inform. Theory, 2020, 66(9), pp. 5419-5428", Ding and Tang made a breakthrough in constructing the first two infinite families of NMDS codes holding 2-designs or 3-designs. Up to now, there are only a few known infinite families of NMDS codes holding t-designs in the literature. The objective of this paper is to construct new infinite families of NMDS codes holding t-designs. We determine the weight enumerators of the NMDS codes and prove that the NMDS codes hold 2-designs or 3-designs. Compared with known t-designs from NMDS codes, ours have different parameters. Besides, several infinite families of optimal locally recoverable codes are also derived via the NMDS codes.
Introduction
Let q be a power of a prime p. Denote by F q the finite field with q elements and F * q = F q \ {0}.
Linear codes
For a positive integer n, a non-empty subset C of F n q is called an [n, κ, d] linear code over F q provided that it is a κ-dimensional linear subspace of F n q , where d is its minimum distance. Define the dual of an [n, κ] linear code C over F q by C ⊥ = u ∈ F n q : u, c = 0 for all c ∈ C ,
where ·, · denotes the standard inner product. By definition, C ⊥ is an [n, n − κ] linear code over F q . For an [n, κ] linear code C over F q , let A i represent the number of codewords with weight i in C , where 0 ≤ i ≤ n. Define the weight enumerator of C as the following polynomial: 15,16,19,20,21,32,37]. The weight distribution of a linear code contains crucial information including the error detection and correction capabilities of the code and allows to compute the error probability of its error detection and correction [17]. In coding theory, modifying existing codes may yield interesting new codes. A longer code can be constructed by adding a coordinate. Let C be an [n, κ, d] linear code over F q . The extended code C of C is defined by
C = (c 1 , c 2 , . . . , c n , c n+1 ) ∈ F n+1 q : (c 1 , c 2 , . . . , c n ) ∈ C with n+1 ∑ i=1 c i = 0 .
This construction is said to be adding an overall parity check [23]. Note that C has only even-like vectors. Then C is also a linear code over F q with parameters [n + 1, κ, d], where d = d or d + 1.
For instance, the extended code of the binary [7,4,3] Hamming code has parameters [8,4,4]. The extension technique was used in [3,33] to obtain desirable codes. An [n, κ, d] linear code is said to be good if it has both large rate κ/n and large minimum distance d. However, there is a tradeoff among the parameters n, κ and d. If an [n, κ, d] linear code over F q exists, then the following Singleton bound holds:
d ≤ n − κ + 1.
Linear codes achieving the Singleton bound with parameters [n, κ, n − κ + 1] are called maximum distance separable (MDS for short) codes. Linear codes nearly achieving the Singleton bound are also interesting and have attracted the attention of many researchers. An [n, κ, n − κ] linear code is said to be almost maximum distance separable (AMDS for short). It is known that the dual of an AMDS code may not be AMDS. AMDS codes whose duals are also AMDS are said to be near maximum distance separable (NMDS for short). NMDS codes are of interest because they have many nice applications in finite geometry, combinatorial designs, locally recoverable codes and many other fields [5,20,21,28,29]. In general, constructing infinite families of NMDS codes with desirable weight distribution is challenging. In recent years, a few families of NMDS codes were constructed in [5,20,21,29,33,34] and their weight distributions were determined.
Combinatorial designs from linear codes
Let k,t, n be positive integers such that 1 ≤ t ≤ k ≤ n. Let P be a set with |P | = n ≥ 1. Denote by B a collection of k-subsets of P . For each t-subset of P , if there exist exactly λ elements of B such that they contain this t-subset, then the pair D := (P , B) is referred to as a t-(n, k, λ) design (t-design for short). The elements in P and B are called points and blocks, respectively. A t-design is said to be simple if it contains no repeated blocks. A t-design with k = t or k = n is said to be trivial. In this paper, we are interested in only simple and nontrivial t-designs with n > k > t. A t-(n, k, λ) design satisfying t ≥ 2 and λ = 1 is called a Steiner system denoted by S(t, k, n). For a t-(n, k, λ) design, the following equality holds [23]:
n t λ = k t b,
where b is the number of blocks in B. Let B c represent the set of the complements of the blocks in B. Then (P , B c ) is a t-(n, n − k, λ 0,t ) design if (P , B) is a t-(n, k, λ) design, where
λ 0,t = λ n−t k n−t k−t .(1)
The pair (P , B c ) is referred to as the complementary design of (P , B).
In past decades, the interplay between linear codes and t-designs has been a very interesting research topic. For one thing, the incidence matrix of a t-design yields a linear code. See [2] for progress in this direction. For another thing, linear codes may hold t-designs. The well-known coding-theoretic construction of t-designs is described as follows. Let C be an [n, κ] linear code over F q and the coordinates of a codeword in it be indexed by ( where {{}} denotes the multiset notation, wt(c) is the Hamming weight of c and 1 q−1 S denotes the multiset obtained by dividing the multiplicity of each element in the multiset S by q − 1 [30]. If the pair (P (C ), B w (C )) is a t-(n, w, λ) design with b blocks for 0 ≤ w ≤ n, we say that the code C
supports t-designs, where b = 1 q − 1 A w , λ = w t (q − 1) n t A w .(2)
The following Assmus-Mattson Theorem provides a sufficient condition for a linear code to hold t-designs.
Theorem 1. [3, Assmus-Mattson
Theorem] Let C be an [n, κ, d] linear code over F q whose weight distribution is denoted by (1, A 1 , A 2 , . . . , A n ). Let d ⊥ be the minimum weight of C ⊥ whose weight distribution is denoted by (
1, A ⊥ 1 , A ⊥ 2 , . . . , A ⊥ n ). Let t be an integer satisfying 1 ≤ t < min{d, d ⊥ }.
Assume that there are at most d ⊥ − t nonzero weights of C in the range {1, 2, . . ., n − t}. Then the followings hold:
1. (P (C ), B i (C )) is a simple t-design if A i = 0 with d ≤ i ≤ w,
where w is the largest integer satisfying w ≤ n and
w − w + q − 2 q − 1 < d. 2. (P (C ⊥ ), B i (C ⊥ )) is a simple t-design if A ⊥ i = 0 with d ⊥ ≤ i ≤ w ⊥ , where w ⊥ is the largest integer satsifying w ⊥ ≤ n and w ⊥ − w ⊥ + q − 2 q − 1 < d ⊥ .
The Assmus-Mattson Theorem is a powerful tool for constructing t-designs from linear codes and has been widely used in [3,6,7,8]. Another method to prove that a linear code holds t-designs is via the automorphism group of the code. Several infinite families of t-designs were constructed via this method in the literature [3,8,9,10,11,31,35,36]. The third method is directly characterizing the supports of the codewords of fixed weight. See [5,29,35] for known t-designs obtained by this method. Recently, Tang, Ding and Xiong generalized the Assmus-Mattson Theorem and derived t-designs from codes which don't satisfy the conditions in the Assmus-Mattson Theorem and don't admit t-homogeneous group as a subgroup of their automorphisms [30].
Motivations and objectives of this work
Constructing t-designs from special NMDS codes has been an interesting research topic for a long time. The first NMDS code dates back to 1949. Golay discovered the [11,6,5] ternary NMDS code which is called the ternary Golay code. This NMDS code holds 4-designs. In the past 70 years after this discovery, only sporadic NMDS codes holding t-designs were found. The question as to whether there exists an infinite family of NMDS codes holding an infinite family of t-designs for t ≥ 2 remained open during this long period. In 2020, Ding and Tang made a breakthrough in constructing the first two infinite families of NMDS codes holding 2-designs or 3-designs [5]. Up to now, there are only a few known infinite families of NMDS codes holding t-designs for t = 2, 3, 4 in the literature [5,29,34,38]. It is challenging to construct new infinite families of NMDS codes holding t-designs with t > 1.
The objective of this paper is to construct several new infinite families of NMDS codes holding t-designs. To this end, some special matrices over finite fields are used as the generator matrices of the NMDS codes. We then determine the weight enumerators of the NMDS codes and prove that the NMDS codes hold 2-deigns or 3-designs. Most of the NMDS codes in this paper don't satisfy the conditions in the Assmus-Mattson Theorem but still hold t-designs. Compared with known t-designs from NMDS codes, ours have different parameters. Besides, several infinite families of optimal locally recoverable codes are also derived via the NMDS codes.
Preliminaries
In this section, we present some preliminaries on the properties of NMDS codes and oval polynomials, and the number of zeros of some equations over finite fields.
Properties of NMDS codes
Let C be an [n, κ] linear code and C ⊥ be its dual. Denote by (1, A 1 , . . . , A n ) and (1,
A ⊥ 1 , . . . , A ⊥ n )
the weight distributions of C and C ⊥ , respectively. If C is an NMDS code, then the weight distributions of C and C ⊥ are given in the following lemma.
Lemma 2. [3] Let C be an [n, κ, n − κ] NMDS code over F q . If s ∈ {1, 2, . . ., n − κ}, then A ⊥ κ+s = n κ + s s−1 ∑ j=0 (−1) j κ + s j (q s− j − 1) + (−1) s n − κ s A ⊥ κ .
If s ∈ {1, 2, . . ., κ}, then
A n−κ+s = n κ − s s−1 ∑ j=0 (−1) j n − κ + s j (q s− j − 1) + (−1) s κ s A n−κ .
Though Lemma 2 and the relation 1 + ∑ κ s=0 A n−κ+s = q κ hold, the weight distribution of an [n, κ] NMDS code still can not be totally determined. In [20], some infinite families of NMDS codes with the same parameters but different weight distributions were constructed.
The following lemma establishes an interesting relationship between the minimum weight codewords in C and those in C ⊥ . Lemma 3. [12] Let C be an NMDS code. For any c = (c 1 , . . . , c n ) ∈ C , its support is defined by suppt(c) = {1 ≤ i ≤ n : c i = 0}. Then for any minimum weight codeword c in C , there exists, up to a multiple, a unique minimum weight codeword c ⊥ in C ⊥ satisfying suppt(c) ∩ suppt(c ⊥ ) = / 0.
Besides, the number of minimum weight codewords in C and the number of those in C ⊥ are the same.
By Lemma 3, if the minimum weight codewords of an NMDS code hold a t-design, then the minimum weight codes of its dual hold a complementary t-design.
Oval polynomials and their properties
The definition of oval polynomial is presented as follows.
Definition 4. [25] Let q = 2 m with m ≥ 2. If f ∈ F q [x] is a polynomial such that f is a permutation polynomial of F q with deg( f ) < q and f (0) = 0, f (1) = 1, and g a (x) := ( f (x + a) + f (a))
x q−2 is also a permutation polynomial of F q for each a ∈ F q , then f is called an oval polynomial.
The following gives some known oval polynomials.
Lemma 5.
[26] Let m ≥ 2 be an integer. Then the followings are oval polynomials of F q , where q = 2 m .
1. The translation polynomial f (x) = x 2 h , where gcd(h, m) = 1. 2. The Segre polynomial f (x) = x 6 , where m is odd.
By Lemma 5, it is obvious that f (x) = x 4 is an oval polynomial of F q , where q = 2 m for odd m. By Definition 4, the following also holds for oval polynomials. f (x) + f (y)
x + y = f (x) + f (z) x + z
for all pairwise distinct elements x, y, z in F q .
The number of zeros of some equations over finite fields
Let q = p m with p a prime. The following lemma is very useful for determining the greatest common divisor of some special integers.
gcd(p h + 1, q − 1) = 1,
for odd m ℓ and p = 2, 2, for odd m ℓ and odd p, p ℓ + 1, for even m ℓ .
The following lemmas present some results on the number of solutions of some equations over F q . Lemma 8. [34] Let n be a positive integer and q a prime power such that gcd(n, q − 1) = s. Then x n − 1 has s zeros in F q .
Lemma 9. [34, Proof of Lemma 4] Let h be a positive integer. Denote by N g the number of zeros
of g(x) = x p h +1 + c in F q , where c ∈ F * q . Then N g = 0 if and only if −c is not a (p h + 1)-th power in F * q . If −c is a (p h + 1)-th power in F * q , then N g = gcd(p h + 1, q − 1)
. Lemma 10. [25] The trace function from F q to F p is defined by
Tr q/p (x) = x + x p + x p 2 + · · · + x p m−1 .
Then for α ∈ F q , Tr q/p (α) = 0 if and only if α = β p − β for some β ∈ F q . Lemma 11. [27] Let F q be a finite field of characteristic 2 and let f (
x) = ax 2 + bx + c ∈ F q [x]f (x) = ax 2 h +1 + bx + c, a, b, c ∈ F q and g(x) = ax 2 h +1 + bx 2 h + c, a, b, c ∈ F q in F q . If a, b, c ∈ F * q , then f (x) can be reduced to P γ (x) = x 2 h +1 + x + γ, by the substitution x −→ ux, where u 2 h = b a and γ = c au 2 h +1 ∈ F * q . If a, b, c ∈ F * q , then g(x) can be reduced to U ℓ (x) = x 2 h +1 + x 2 h + ℓ, by the substitution x −→ vx, where v = b a and ℓ = c av 2 h +1 ∈ F * q .
Lemma 12. [18] Let N γ denote the number of zeros of P γ (x) in F q with γ ∈ F * q . If gcd(h, m) = 1, then N γ is equal to 0, 1 or 3.
(x) = ax 2 h +1 + bx + c in F * q , where (a, b, c) = (0, 0, 0), a, b, c ∈ F q . Then N f ∈ {0, 1, 3}.
Proof. It is obvious that N f is equal to 0 or 1 if a = 0 or b = c = 0. Now let a = 0. If b = 0 and c = 0, then f (x) = ax 2 h +1 + bx = ax(x 2 h + b a ). Since q is even, it is clear that f (x) has only one zero in F * q . If b = 0 and c = 0, then f (x) = ax 2 h +1 + c = a(x 2 h +1 + c a ) and N f ∈ {0, 1, 3} by Lemmas 7 and 9. If b = 0 and c = 0, by Lemma 12, N f = N γ ∈ {0, 1, 3}. Then the desired conclusion follows.
Lemma 14.
Let N ℓ denote the number of zeros of U ℓ (x) in F * q with ℓ ∈ F * q . If gcd(h, m) = 1, then N ℓ is equal to 0, 1 or 3.
Proof. Let x 0 be a zero of U ℓ (x) in F * q , then it is easy to prove that P ℓ (x 0 + 1) = U ℓ (x 0 ) = 0. Let x 0 be a zero of P γ (x) in F * q , then we have U γ (x 0 + 1) = P γ (x 0 ) = 0. Then it is easy to deduce N ℓ = N γ . The desired conclusion follows from Lemma 12.
(x) = ax 2 h +1 + bx 2 h + c in F * q , where (a, b, c) = (0, 0, 0), a, b, c ∈ F q . Then N g ∈ {0, 1, 3}.
Proof. Similarly to the proof of Lemma 13, the desired conclusion follows from Lemmas 7, 9 and 14.
Lemma 16. Let q = 2 m , where m is an odd integer with m ≥ 3. Then for two distinct elements a, b ∈ F q , the polynomial u(x) = x 2 + (a + b)x + a 2 + b 2 + ab has no root in F q . In other words, for any c ∈ F q , we have a 2 + b 2 + c 2 + ab + ac + bc = 0. Particularly, if c = 0, then a 2 + b 2 + ab = 0.
Proof. It is obvious that
Tr q/2 a 2 + b 2 + ab (a + b) 2 = Tr q/2 (1) + Tr q/2 ab a 2 + b 2 .
Let β = a a+b ∈ F q . It is clear that ab a 2 +b 2 = β 2 − β. By Lemma 10,
Tr q/2 ab a 2 + b 2 = 0.
When m is odd, Tr q/2 a 2 +b 2 +ab (a+b) 2 = Tr q/2 (1) = 1. Then by Lemma 11,u(x) has no root in F q . The proof is completed.
Generalized Vandermonde determinant
The following provides a general equation for a generalized Vandermonde determinant with one deleted row in terms of the elementary symmetric polynomial. Lemma 17. [22,Lemma 17] [24,Page 466] For each ℓ with 0 ≤ ℓ ≤ n, it holds that
1 1 · · · 1 1 u 1 u 2 · · · u n−1 u n . . . . . . . . . . . . . . . u ℓ−1 1 u ℓ−1 2 · · · u ℓ−1 n−1 u ℓ−1 n u ℓ+1 1 u ℓ+1 2 · · · u ℓ+1 n−1 u ℓ+1 n . . . . . . . . . . . . . . .
Two families of 3-dimensional near MDS codes holding 2-designs
In this section, let q = 2 m with m ≥ 3. Hereafter, let dim(C ) and d(C ) respectively denote the dimension and minimum distance of a linear code C . Let α be a generator of F * q and α i :
= α i for 1 ≤ i ≤ q − 1. Then α q−1 = 1.
Let h be a positive integer with gcd(m, h) = 1. Define
D = 1 1 · · · 1 1 α 1 α 2 · · · α q−2 α q−1 α 2 h +1 1 α 2 h +1 2 · · · α 2 h +1 q−2 α 2 h +1 q−1 .
D is a 3 by q − 1 matrix over F q . Let C D be the linear code over F q generated by D. We will show that C D is an NMDS code and both C D and its dual C ⊥ D support 2-designs.
Theorem 18. Let q = 2 m with m ≥ 3, h be a positive integer with gcd(m, h) = 1. Then C D is a [q − 1, 3, q − 4] NMDS code over F q with weight enumerator A(z) = 1 + (q − 1) 2 (q − 2) 6 z q−4 + (q − 1) 2 (q + 4) 2 z q−2 + (q − 1)(q 2 + 8) 3 z q−1 .
Moreover, the minimum weight codewords of
C D support a 2-(q −1, q −4, (q−4)(q−5) 6 ) simple design and the minimum weight codewords of C ⊥ D support a 2-(q − 1, 3, 1) simple design, i.e. a Steiner sys- tem S(2, 3, q − 1). Furthermore, the codewords of weight 4 in C ⊥ D support a 2-(q − 1, 4, (q−4)(q−7) 2 ) simple design.
Proof. We first prove that dim(C D ) = 3. Let g 1 , g 2 and g 3 respectively represent the first, second and third rows of D. Assume that there exist elements a, b, c ∈ F q with (a, b, c) = (0, 0, 0) such that
cg 1 + bg 2 + ag 3 = 0. Then aα 2 h +1 1 + bα 1 + c = 0, aα 2 h +1 2 + bα 2 + c = 0, . . . aα 2 h +1 q−1 + bα q−1 + c = 0.
This contradicts with the fact that the polynomial f (x) = ax 2 h +1 + bx + c has at most 3 zeros in F * q by Lemma 13. Hence g 1 , g 2 and g 3 are linearly independent over F q and dim(C D ) = 3.
We then prove that C ⊥
D has parameters [q − 1, q − 4, 3]. Obviously, dim(C ⊥ D ) = (q − 1) − 3 = q − 4.
It is clear that each column of D is nonzero and any two columns of D are linearly independent over F q . Then d(C ⊥ D ) > 2. Let x 1 , x 2 , x 3 be three pairwise different elements in F * q . Consider the following submatrix as
D 1 = 1 1 1 x 1 x 2 x 3 x 2 h +1 1 x 2 h +1 2 x 2 h +1 3 .
Then D has 3 columns that are linearly dependent if and only if |D 1 | = 0 for some (x 1 , x 2 , x 3 ). Besides, rank(D 1 ) = 2 if |D 1 | = 0. Now we consider the following two cases.
Case 1: Let m be even. By Lemmas 7 and 8, the polynomial x 2 h +1 − 1 has 3 zeros in F * q denoted by r 1 , r 2 and r 3 . Let (x 1 , x 2 , x 3 ) = (r 1 , r 2 , r 3 ). Then
D 1 = 1 1 1 r 1 r 2 r 3 1 1 1 . Thus |D 1 | = 0.
Case 2: Let m be odd and x 3 = α q−1 = 1. Then
D 1 = 1 1 1 x 1 x 2 1 x 2 h +1 1 x 2 h +1 2 1 .
It is easy to deduce that |D 1
| = (1 + x 1 )x 2 h +1 2 + (1 + x 2 h +1 1 )x 2 + x 1 + x 2 h +1 1 . Denote by f (x) = (1 + x 1 )x 2 h +1 + (1 + x 2 h +1 1 )x + x 1 + x 2 h +1 1 . Note that 1 + x 1 = 0 as x 1 = x 3 . By Lemma 13, f (x) has 0 or 1 or 3 zeros in F * q . It is easy to verify that f (1) = f (x 1 ) = 0.
Then exists an element r ∈ F * q which is different from 1 and x 1 such that f (r) = 0. Let x 2 = r and we have |D 1 | = 0. Summarizing the above cases yields that d(
C ⊥ D ) = 3. Therefore, C ⊥ D has parameters [q − 1, q − 4, 3].
By definition, we have
C D = {c a,b,c = (ax 2 h +1 + bx + c) x∈F * q , a, b, c ∈ F q }.
To determine the weight wt(c a,b,c ) of a codeword c a,b,c ∈ C D , it is sufficient to determine the number of zeros of the equation
ax 2 h +1 + bx + c = 0
in F * q . By Lemma 13, the above equation has 0 or 1 or 3 zeros in
F * q . Hence, wt(c a,b,c ) ∈ {q − 1, q − 2, q − 4}.
Finally, we compute the weight enumerator of C D by the first three Pless Power Moments in [23] and prove that C D is a [q − 1, 3, q − 4] NMDS code. Let A w 1 , A w 2 , A w 3 respectively represent the frequencies of the weights We finally prove that the codewords of weight 4 in C ⊥ D support a 2-(q − 1, 4, (q−4)(q−7)
w 1 = q − 4, w 2 = q − 2, w 3 = q − 1. Then we have A w 1 + A w 2 + A w 3 = q 3 − 1, w 1 A w 1 + w 2 A w 2 + w 3 A w 3 = q 2 (q − 1) 2 , w 2 1 A w 1 + w 2 2 A w 2 + w 2 3 A w 3 = q(q − 1) 2 (q 2 − 2q + 2).
Solving the above system of linear equations gives
A q−4 = (q − 1) 2 (q − 2) 6 , A q−2 = (q − 1) 2 (q + 4) 2 , A q−1 = (q − 1)(q 2 + 8) 3 . Thus C D is a [q − 1,
2 ) simple design. Thanks to a generalized version of the Assmus-Mattson Theorem (Theorem 2.2 in [30]), the codewords of weight 4 in C ⊥ D support a 2-design. We need to prove that this design is simple. Let x, y, z be three pairwise distinct elements in F * q . Define
D 2 = 1 1 1 x y z x 2 h +1 y 2 h +1 z 2 h +1 .
It is obvious that
|D 2 | = z 2 h +1 (x + y) + z(x 2 h +1 + y 2 h +1 ) + xy 2 h +1 + yx 2 h +1 . Let f (z) = z 2 h +1 (x+y)+z(x 2 h +1 +y 2 h +1 )+xy 2 h +1 +yx 2 h +1 . Note that f (x) = f (y) = 0. By Lemma 13, f (z) has 0 or 1 or 3 zeros in F * q .
Thus there exists an element r (x,y) ∈ F * q which is different from x and y such that f (r (x,y) ) = 0. Then we have rank(D 2 ) = 3 if and only if z / ∈ {x, y, r (x,y) }. Next we prove that the rank of the submatrix
D (x,y,z,w) = 1 1 1 1 x y z w x 2 h +1 y 2 h +1 z 2 h +1 w 2 h +1 equals 3 for any four pairwise distinct elements x, y, z, w ∈ F * q .
It is obvious that at least one of z, w is not equal to r (x,y) , which implies D (x,y,z,w) has a three-order non-zero minor. Thus
rank(D (x,y,z,w) ) = rank(D 2 ) = 3. Let c = (c 1 , c 2 , . . . , c q−1 ) be a codeword of weight 4 in C ⊥ D with nonzero coordinates in {i 1 , i 2 , i 3 , i 4 }, which means c i j = 0 for 1 ≤ j ≤ 4 and c v = 0 for all v ∈ {1, 2, . . ., q − 1} \ {i 1 , i 2 , i 3 , i 4 }. Since D is a parity-check matrix of C ⊥ D , there exist four pair- wise distinct elements x, y, z, w ∈ F * q such that 1 1 1 1 x y z w x 2 h +1 y 2 h +1 z 2 h +1 w 2 h +1 c i 1 c i 2 c i 3 c i 4 = 0.
Since rank(D (x,y,z,w) ) = 3, the all nonzero solutions of the above equation are
{a(c i 1 , c i 2 , c i 3 , c i 4 ) : a ∈ F * q }. Thus {ac : a ∈ F * q } is a set of all codewords of weight 4 in C ⊥ D whose nonzero coordinates are {i 1 , i 2 , i 3 , i 4 }. Hence, the codewords of weight 4 in C ⊥ D support a 2-(q − 1, 4, λ) simple design. Since C ⊥ D is an NMDS code, we have A ⊥ 4 = (q−1) 2 (q−2)(q−4)(q−7)
24 by Lemma 2. By Equation (2),
we have λ = (q−4)(q−7) 2 .
Then we have completed the proof.
Let h be a positive integer with gcd(m, h) = 1. Define
H = 1 1 · · · 1 1 α 2 h 1 α 2 h 2 · · · α 2 h q−2 α 2 h q−1 α 2 h +1 1 α 2 h +1 2 · · · α 2 h +1 q−2 α 2 h +1 q−1 .
H is a 3 by q − 1 matrix over F q . Let C H be the linear code over F q generated by H. We will show that C H is an NMDS code and both C H and its dual C ⊥ H support 2-designs.
Theorem 19. Let q = 2 m with m ≥ 3, h be a positive integer with gcd(m, h) = 1. Then C H is a [q − 1, 3, q − 4] NMDS code over F q with weight enumerator A(z) = 1 + (q − 1) 2 (q − 2) 6 z q−4 + (q − 1) 2 (q + 4) 2 z q−2 + (q − 1)(q 2 + 8) 3 z q−1 .
Moreover, the minimum weight codewords of C H support a 2-
(q −1, q −4, (q−4)(q−5) 6
) simple design and the minimum weight codewords of C ⊥ H support a 2-(q − 1, 3, 1) simple design, i.e. a Steiner sys-
tem S(2, 3, q − 1). Furthermore, the codewords of weight 4 in C ⊥ H support a 2-(q − 1, 4, (q−4)(q−7) 2 ) simple design.
Proof. Similarly to the proof of Theorem 18, we can easily derive this theorem by Equation (2), Lemmas 2 and 15.
Note that C D and C H have the same parameters and weight enumerator. It is open whether they are equivalent to each other.
Two families of 4-dimensional near MDS codes holding 2-designs
In this section, let q = 2 m with m ≥ 3. Let α be a generator of F * q and α i :
= α i for 1 ≤ i ≤ q − 1. Then α q−1 = 1. Define a 4 by q − 1 matrix over F q by G (i, j) = 1 1 · · · 1 1 α i 1 α i 2 · · · α i q−2 α i q−1 α j 1 α j 2 · · · α j q−2 α j q−1 α 4 1 α 4 2 · · · α 4 q−2 α 4 q−1 ,
where (i, j) = (1, 3) or (2, 3). Let C (i, j) be the linear code over F q generated by G (i, j) . We will prove that C (i, j) is an NMDS code and the minimum weight codewords of C (i, j) and its dual C ⊥ (i, j) support 2-designs.
When
(i, j) = (1, 3)
The following lemma plays an important role in the proof of our main result.
Lemma 20. Let m be an odd integer with m
> 3, q = 2 m . Let x 1 , x 2 , x 3 , x 4 be four pairwise distinct elements in F * q and we define the matrix M (1,3) = 1 1 1 1 x 1 x 2 x 3 x 4 x 3 1 x 3 2 x 3 3 x 3 4 x 4 1 x 4 2 x 4 3 x 4 4 .(3)
Then for any two different and fixed elements x 1 , x 2 , the total number of different choices of x 3 , x 4 such that |M (1,3) | = 0 is equal to q−8 2 (regardless of the ordering of x 3 , x 4 ).
Proof. By Lemma 17,
|M (1,3) | = 0 if and only if x 1 x 2 + x 1 x 3 + x 2 x 3 + (x 1 + x 2 + x 3 )x 4 = 0.
Then we first need to consider whether x 1 + x 2 + x 3 equals 0 or not in the following cases.
Case 1: Let x 1 + x 2 + x 3 = 0. Then x 3 = x 1 + x 2 and |M (1,3) | = ∏ 1 i< j 4 (x j − x i )(x 2 1 + x 2 2 + x 1 x 2 ) = 0 by Lemma 16. So there is no (x 3 , x 4 ) such that |M (1,3) | = 0 in this case. Case 2: Let x 1 + x 2 + x 3 = 0. Then x 3 = x 1 + x 2 and |M (1,3) | = 0 ⇐⇒ x 1 x 2 + x 1 x 3 + x 2 x 3 + (x 1 + x 2 + x 3 )x 4 = 0 ⇔ x 4 = x 1 x 2 + x 1 x 3 + x 2 x 3 x 1 + x 2 + x 3 . Since x 1 , x 2 , x 3 , x 4 are four pairwise distinct elements in F * q , we have x 4 / ∈ {0, x 1 , x 2 , x 3 }.
Note that
x 4 = 0 ⇐⇒ x 1 x 2 + x 1 x 3 + x 2 x 3 x 1 + x 2 + x 3 = 0 ⇔ x 1 x 2 + x 1 x 3 + x 2 x 3 = 0 ⇔ x 3 = x 1 x 2 x 1 + x 2 .
Similarly,
x 4 / ∈ {x 1 , x 2 , x 3 } if and only if x 3 / ∈ { x 2 2 x 1 , x 2 1 x 2 , a},where a 2 = x 1 x 2 . We conclude that |M (1,3) | = 0 if and only if x 3 / ∈ 0, x 1 , x 2 , x 1 + x 2 , x 1 x 2 x 1 + x 2 , x 2 2 x 1 , x 2 1 x 2 , a ,
where a 2 = x 1 x 2 , and
x 4 = x 1 x 2 + x 1 x 3 + x 2 x 3 x 1 + x 2 + x 3 .
By Lemmas 8 and 16, it is easy to prove that the elements in
0, x 1 , x 2 , x 1 + x 2 , x 1 x 2 x 1 + x 2 , x 2 2 x 1 , x 2 1 x 2 , a are pairwise distinct. It is obvious that if (x 3 , x 4 ) is a choice, so is (x 4 , x 3 )
. Hence the total number of different choices of x 3 , x 4 ∈ F * q such that |M (1,3) | = 0 is equal to q−8 2 for any two different fixed elements
x 1 , x 2 .
The proof is completed.
Theorem 21. Let m be an odd integer with m > 3, q = 2 m . Then C (1,3) is a [q − 1, 4, q − 5] NMDS code over F q with weight enumerator A(z) = 1 + (q − 1) 2 (q − 2)(q − 8) 24 z q−5 + 5(q − 1) 2 (q − 2) 6 z q−4 + q(q − 1) 2 (q − 2) 4 z q−3 + (q − 1) 2 (2q 2 + 7q + 20) 6 z q−2 + (q − 1)(9q 3 + 13q 2 − 6q + 80) 24 z q−1 .
Moreover, the minimum weight codewords of C (1,3) support a 2-(q−1, q−5, (q−5)(q−6)(q−8)
24
) simple design and the minimum weight codewords of C ⊥ Proof. We first prove that dim(C (1,3) ) = 4. Let g 1 , g 2 , g 3 and g 4 respectively represent the first, second, third and fourth rows of G (1,3) . Assume that there exist elements a, b, c, d ∈ F q with (a, b, c, d) = (0, 0, 0, 0) such that ag 1 + bg 2 + cg 3 + dg 4 = 0. Then
a + bα 1 + cα 3 1 + dα 4 1 = 0, a + bα 2 + cα 3 2 + dα 4 2 = 0, . . . a + bα q−1 + cα 3 q−1 + dα 4 q−1 = 0.
Obviously, the polynomial f (x) = a + bx + cx 3 + dx 4 has at most 4 zeros in F * q , which leads to a contradiction. Hence g 1 , g 2 , g 3 and g 4 are linearly independent over F q . Thus dim(C (1,3) ) = 4.
We then prove that C ⊥
(1,3) has parameters [q −1, q −5, 4]. Obviously, dim(C ⊥ (1,3) ) = (q −1) −4 = q − 5. We need to prove d(C ⊥ (1,3) ) = 4.
It is sufficient to prove that any 3 columns of G (1,3) are linearly independent and there exist 4 columns of G (1,3) that are linearly dependent. Choosing any three columns from G (1,3) yields the submatrix
M 1.1 = 1 1 1 x 1 x 2 x 3 x 3 1 x 3 2 x 3 3 x 4 1 x 4 2 x 4 3 , where x 1 , x 2 , x 3 are pairwise distinct elements in F * q .
Consider the 3 by 3 submatrix of M 1.1 as
M 1.2 = 1 1 1 x 1 x 2 x 3 x 4 1 x 4 2 x 4 3 .
Denote by f (x) = x 4 . By Lemma 6, we have |M 1. 2 ) design. Then by Equation (2),
2 | = (x 2 + x 1 )( f (x 3 ) + f (x 1 )) + (x 3 + x 1 )( f (x 2 ) + f (x 1 )) = 0c i j = r i j ∈ F * q , 1 ≤ j ≤ 4, and c v = 0 for all v ∈ {1, 2, . . ., q − 1} \ {i 1 , i 2 , i 3 , i 4 }, i.e., suppt(c) = {i 1 , i 2 , i 3 , i 4 }. Set x j = α i j . By definition, 1 1 1 1 x 1 x 2 x 3 x 4 x 3 1 x 3 2 x 3 3 x 3 4 x 4 1 x 4 2 x 4 3 x 4 4 r i 1 r i 2 r i 3 r i 4 = 0.A ⊥ 4 = (q − 1) q−1 2 4 2 q − 8 2 = (q − 1) 2 (q − 2)(q − 8) 24 .
We next prove d(C (1,3) ) = q − 5. By definition, we have
C (1,3) = {c a,b,c,d = (a + bx + cx 3 + dx 4 ) x∈F * q , a, b, c, d ∈ F q }.
To determine the weight wt (c a,b,c,d ) NMDS code. By Lemma 3,
A q−5 = A ⊥ 4 = (q − 1) 2 (q − 2)(q − 8) 24 .
Then by Lemma 3 and Equation (2), the minimum weight codewords of
C (1,3) support a 2-(q − 1, q − 5, (q−5)(q−6)(q−8) 24
) simple design. Finally, the weight enumerator of C (1,3) follows from Lemma 2.
When
(i, j) = (2, 3)
In this subsection, we consider the case for (i, j) = (2, 3). We will need the following lemma in the proof of our main result.
Lemma 22. Let m be an integer with m ≥ 3, q = 2 m . Let x 1 , x 2 , x 3 , x 4 be four pairwise distinct elements in F * q . Define the matrix
M (2,3) = 1 1 1 1 x 2 1 x 2 2 x 2 3 x 2 4 x 3 1 x 3 2 x 3 3 x 3 4 x 4 1 x 4 2 x 4 3 x 4 4 .
Then for any different fixed elements x 1 , x 2 , the total number of different choices of x 3 , x 4 such that |M
A(z) = 1 + (q − 1) 2 (q − 2)(q − 4) 24 z q−5 + (q − 1) 2 (q − 2) 6 z q−4 + (q − 1) 2 (q − 2)(q + 4) 4 z q−3 + (q − 1) 2 (2q 2 + 3q + 28) 6 z q−2 + (q − 1)(9q 3 + 17q 2 − 18q + 88) 24 z q−1 .
Moreover, the minimum weight codewords of C (2,3) support a 2-(q−1, q−5, (q−4)(q−5)(q−6)
24
) simple design and the minimum weight codewords of C ⊥ (2,3) support a 2-(q − 1, 4, q−4 2 ) simple design.
Proof. Similarly to the proof of Theorem 21, we can prove this theorem by Lemma 22. Note that Theorem 23 works for any m ≥ 3 as Lemma 22 dose not rely on Lemma 16.
Five families of 5-dimensional near MDS codes holding 2-designs or 3-designs
In this section, let q = 2 m with m > 3. Let α be a generator of F * q and α i := α i for 1 ≤ i ≤ q − 1. Then α q−1 = 1. Define
G (i, j,k) = 1 1 · · · 1 1 α i 1 α i 2 · · · α i q−2 α i q−1 α j 1 α j 2 · · · α j q−2 α j q−1 α k 1 α k 2 · · · α k q−2 α k q−1 α 5 1 α 5 2 · · · α 5 q−2 α 5 q−1 ,
where (i, j, k) = (2, 3, 4), (1, 2, 3), (1, 2, 4) or (1, 3, 4). Let C (i, j,k) be the linear code over F q generated by G (i, j,k) . We will show that C (i, j,k) is an NMDS code and the minimum weight codewords of C (i, j,k) and its dual C ⊥ (i, j,k) support 2-designs. Besides, we denote by C (1,2,4) the extended code of C (1,2,4) . We will also prove that the minimum weight codewords of C (1,2,4) and its dual C (1,2,4) ⊥ support 3-designs.
When (i, j, k) = (2, 3, 4)
Let (i, j, k) = (2, 3, 4). We study the linear code C (2,3,4) in this subsection.
The following lemma plays an important role in the proof of our result.
Lemma 24. Let m be a positive integer with m > 3 and q = 2 m . Let x 1 , x 2 , x 3 , x 4 , x 5 be five pairwise distinct elements in F * q . Define the matrix
M (2,3,4) = 1 1 1 1 1 x 2 1 x 2 2 x 2 3 x 2 4 x 2 5 x 3 1 x 3 2 x 3 3 x 3 4 x 3 5 x 4 1 x 4 2 x 4 3 x 4 4 x 4 5 x 5 1 x 5 2 x 5 3 x 5 4 x 5 5 .
Then for any two different and fixed elements x 1 , x 2 , the total number of different choices of (x 3 , x 4 , x 5 ) such that |M (2,3,4) | = 0 is equal to (q−4)(q−8)
6
(regardless of the ordering of x 3 , x 4 , x 5 ).
Proof. By Lemma 17, |M (2,3,4) | = 0 if and only if
x 1 x 2 x 3 x 4 + (x 1 x 2 x 3 + (x 1 x 2 + x 1 x 3 + x 2 x 3 )x 4 ) x 5 = 0.
Let x 1 , x 2 be two different and fixed elements in F * q . Consider the following cases.
Case 1: Let x 3 = x 1 x 2 x 1 +x 2 , then x 1 x 2 + x 1 x 3 + x 2 x 3 = 0 and x 1 x 2 x 3 + (x 1 x 2 + x 1 x 3 + x 2 x 3 )x 4 = x 1 x 2 x 3 = 0. Then |M (2,3,4) | = ∏ 1 i< j 5 (x j − x i )(x 1 x 2 x 3 (x 4 + x 5 )) = 0.
So there is no (x 3 , x 4 , x 5 ) such that |M (2,3,4) | = 0 in this case.
Case 2: Let
x 3 = x 1 x 2 x 1 +x 2 and x 1 x 2 x 3 +(x 1 x 2 +x 1 x 3 +x 2 x 3 )x 4 = 0 which implies x 4 = x 1 x 2 x 3 x 1 x 2 +x 1 x 3 +x 2 x 3 . Then |M (2,3,4) | = ∏ 1 i< j 5 (x j − x i )(x 1 x 2 x 3 x 4 ) = 0. So there is no (x 3 , x 4 , x 5 ) such that |M (2,3,4) | = 0 in this case. Case 3: Let x 3 = x 1 x 2 x 1 +x 2 and x 1 x 2 x 3 +(x 1 x 2 +x 1 x 3 +x 2 x 3 )x 4 = 0 which implies x 4 = x 1 x 2 x 3 x 1 x 2 +x 1 x 3 +x 2 x 3 . Then |M (2,3,4) | = 0 ⇔ x 5 = x 1 x 2 x 3 x 4 x 1 x 2 x 3 + x 1 x 2 x 4 + x 1 x 3 x 4 + x 2 x 3 x 4 . Since x 1 , x 2 , x 3 , x 4 , x 5 are five pairwise distinct elements in F * q , then x 5 / ∈ {0, x 1 , x 2 , x 3 , x 4 }.
Similarly to the proof in Lemma 20, we can derive that
x 4 / ∈ { x 2 x 3 x 2 +x 3 , x 1 x 3 x 1 +x 3 , x 1 x 2 x 1 +x 2 }.
We then conclude that |M (2,3,4) | = 0 if and only
x 3 / ∈ 0, x 1 , x 2 , x 1 x 2 x 1 + x 2 , x 4 / ∈ 0, x 1 , x 2 , x 3 , x 2 x 3 x 2 + x 3 , x 1 x 3 x 1 + x 3 , x 1 x 2 x 1 + x 2 , x 1 x 2 x 3 x 1 x 2 + x 1 x 3 + x 2 x 3 and x 5 = x 1 x 2 x 3 x 4 x 1 x 2 x 3 + x 1 x 2 x 4 + x 1 x 3 x 4 + x 2 x 3 x 4 .
It is easy to prove that the elements in
0, x 1 , x 2 , x 1 x 2 x 1 + x 2 are pairwise distinct, so is 0, x 1 , x 2 , x 3 , x 2 x 3 x 2 + x 3 , x 1 x 3 x 1 + x 3 , x 1 x 2 x 1 + x 2 , x 1 x 2 x 3 x 1 x 2 + x 1 x 3 + x 2 x 3 .
Then the total number of different choices of (x 3 , x 4 , x 5 ) such that |M (2,3,4) | = 0 is equal to NMDS code over F q with weight enumerator
A(z) = 1 + (q − 1) 2 (q − 2)(q − 4)(q − 8) 120 z q−6 + 5(q − 1) 2 (q − 2)(q − 4) 24 z q−5 + (q − 1) 2 (q − 2)(q 2 − 2q + 2) 12 z q−4 + (q − 1) 2 (q − 2)(2q 2 + 9q + 28) 12 z q−3 + (q − 1) 2 (9q 3 + 22q 2 + 12q + 176) 24 z q−2 + (q − 1)(44q 4 + 65q 3 + 125q 2 − 170q + 536) 120 z q−1 .
Moreover, the minimum weight codewords of C (2,3,4) support a 2-(q − 1, q − 6, (q−4)(q−6)(q−7)(q−8)
120
) simple design and the minimum weight codewords of C ⊥ (2,3,4) support a 2-(q − 1, 5,
(q−4)(q−8) 6
) simple design.
Proof. Similarly to the proof of Theorem 21, we can derive this Theorem by Lemmas 2,3,24 and Equation (2). The details are omitted here. 1, 2, 3). We study the linear code C (1,2,3) in this subsection.
When
(i, j, k) = (1, 2, 3) Let (i, j, k) = (
The following lemma plays an important role in the proof of our next main result.
Lemma 26. Let m be a positive integer with m > 3 and q
= 2 m . Let x 1 , x 2 , x 3 , x 4 , x 5 be five pairwise distinct elements in F * q . Define a matrix M (1,2,3) = 1 1 1 1 1 x 1 x 2 x 3 x 4 x 5 x 2 1 x 2 2 x 2 3 x 2 4 x 2 5 x 3 1 x 3 2 x 3 3 x 3 4 x 3 5 x 5 1 x 5 2 x 5 3 x 5 4 x 5 5 .
Then for any two different and fixed elements x 1 , x 2 , the total number of different choices of (x 3 , x 4 , x 5 ) such that |M (
|M (1,2,3) | = 0 ⇔ x 5 = x 1 + x 2 + x 3 + x 4 . Since x 5 / ∈ {0, x 1 , x 2 , x 3 , x 4 }, we deduce that x 4 / ∈ {x 2 + x 3 , x 1 + x 3 , x 1 + x 2 , x 1 + x 2 + x 3 } and x 3 = x 1 + x 2 .
We then conclude that |M (1,2,3) | = 0 if and only if
x 3 / ∈ {0, x 1 , x 2 , x 1 + x 2 }, x 4 / ∈ {0, x 1 , x 2 , x 3 , x 2 + x 3 , x 1 + x 3 , x 1 + x 2 , x 1 + x 2 + x 3 } and x 5 = x 1 + x 2 + x 3 + x 4 .
Obviously, the elements in {0,
x 1 , x 2 , x 1 + x 2 } and {0, x 1 , x 2 , x 3 , x 2 + x 3 , x 1 + x 3 , x 1 + x 2 , x 1 + x 2 + x 3 } are pairwise distinct,
NMDS code over F q with weight enumerator
A(z) = 1 + (q − 1) 2 (q − 2)(q − 4)(q − 8) 120 z q−6 + 5(q − 1) 2 (q − 2)(q − 4) 24 z q−5 + (q − 1) 2 (q − 2)(q 2 − 2q + 2) 12 z q−4 + (q − 1) 2 (q − 2)(2q 2 + 9q + 28) 12 z q−3 + (q − 1) 2 (9q 3 + 22q 2 + 12q + 176) 24 z q−2 + (q − 1)(44q 4 + 65q 3 + 125q 2 − 170q + 536) 120 z q−1 .
Moreover, the minimum weight codewords of C ( ) simple design.
Proof. With a similar proof as that of Theorem 21, we can easily prove this theorem by Lemma 26.
When (i, j, k) = (1, 2, 4)
In this subsection, let (i, j, k) = (1, 2, 4) and we study the linear code C (1,2,4) .
The following lemma is essential for the proof of our result.
Lemma 28. Let m be an odd integer with m > 3 and q
= 2 m . Let x 1 , x 2 , x 3 , x 4 , x 5 be five pairwise distinct elements in F * q . Define the matrix M (1,2,4) = 1 1 1 1 1 x 1 x 2 x 3 x 4 x 5 x 2 1 x 2 2 x 2 3 x 2 4 x 2 5 x 4 1 x 4 2 x 4 3 x 4 4 x 4 5 x 5 1 x 5 2 x 5 3 x 5 4 x 5 5 .
Then for any two different and fixed elements $x_1, x_2$, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,2,4)}| = 0$ is equal to $\frac{(q-5)(q-8)}{6}$ (regardless of the ordering of $x_3, x_4, x_5$).
Proof. By Lemma 17, $|M_{(1,2,4)}| = 0$ if and only if
$$x_1x_2 + x_1x_3 + x_1x_4 + x_2x_3 + x_2x_4 + x_3x_4 + (x_1 + x_2 + x_3 + x_4)x_5 = 0.$$
Let x 1 , x 2 be two different and fixed elements in F * q . Consider the following cases.
Case 1: Let $x_3 = x_1 + x_2$. Then $|M_{(1,2,4)}| = \prod_{1 \le i < j \le 5}(x_j - x_i)\,(x_1^2 + x_2^2 + x_1x_2 + x_4x_5) = 0 \Leftrightarrow x_5 = \frac{x_1^2 + x_2^2 + x_1x_2}{x_4}$.
Then $x_5 \notin \{0, x_1, x_2, x_3, x_4\}$ implies $x_4 \notin \{x_1 + x_2 + \frac{x_2^2}{x_1},\ x_1 + x_2 + \frac{x_1^2}{x_2},\ \frac{x_1^2 + x_1x_2 + x_2^2}{x_1 + x_2},\ a\}$, where $a^2 = x_1^2 + x_1x_2 + x_2^2$.
In this case, we conclude that $|M_{(1,2,4)}| = 0$ if and only if
$$x_4 \notin \left\{0, x_1, x_2, x_1 + x_2,\ x_1 + x_2 + \frac{x_2^2}{x_1},\ x_1 + x_2 + \frac{x_1^2}{x_2},\ \frac{x_1^2 + x_1x_2 + x_2^2}{x_1 + x_2},\ a\right\},$$
where $a^2 = x_1^2 + x_1x_2 + x_2^2$, and $x_5 = \frac{x_1^2 + x_1x_2 + x_2^2}{x_4}$.
By Lemmas 8 and 16, we can easily prove that the elements of this set are pairwise distinct. In this case, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,2,4)}| = 0$ is equal to $\frac{q-8}{6}$ regardless of the ordering of $x_3, x_4, x_5$.

Case 2: Let $x_3 \neq x_1 + x_2$ and $x_1 + x_2 + x_3 + x_4 = 0$. Then $x_4 = x_1 + x_2 + x_3$. By Lemma 16,
$|M_{(1,2,4)}| = \prod_{1 \le i < j \le 5}(x_j - x_i)\,(x_1^2 + x_2^2 + x_3^2 + x_1x_2 + x_1x_3 + x_2x_3) \neq 0$. So there is no $(x_3, x_4, x_5)$ such that $|M_{(1,2,4)}| = 0$ in this case.

Case 3: Let $x_3 \neq x_1 + x_2$ and $x_1 + x_2 + x_3 + x_4 \neq 0$. Then $x_4 \neq x_1 + x_2 + x_3$ and
$$|M_{(1,2,4)}| = 0 \Leftrightarrow x_5 = \frac{x_1x_2 + x_1x_3 + x_1x_4 + x_2x_3 + x_2x_4 + x_3x_4}{x_1 + x_2 + x_3 + x_4}.$$
It is easy to deduce that $x_5 \notin \{0, x_1, x_2, x_3, x_4\}$ implies
$$x_4 \notin \left\{\frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3},\ \frac{x_1^2 + x_2x_3}{x_2 + x_3},\ \frac{x_2^2 + x_1x_3}{x_1 + x_3},\ \frac{x_3^2 + x_1x_2}{x_1 + x_2},\ b\right\},$$
where $b^2 = x_1x_2 + x_1x_3 + x_2x_3$.
In this case, we conclude that $|M_{(1,2,4)}| = 0$ if and only if $x_3 \notin \{0, x_1, x_2, x_1 + x_2\}$,
$$x_4 \notin L := \left\{0, x_1, x_2, x_3, x_1 + x_2 + x_3,\ \frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3},\ \frac{x_1^2 + x_2x_3}{x_2 + x_3},\ \frac{x_2^2 + x_1x_3}{x_1 + x_3},\ \frac{x_3^2 + x_1x_2}{x_1 + x_2},\ b\right\},$$
where $b^2 = x_1x_2 + x_1x_3 + x_2x_3$, and
$$x_5 = \frac{x_1x_2 + x_1x_3 + x_1x_4 + x_2x_3 + x_2x_4 + x_3x_4}{x_1 + x_2 + x_3 + x_4}.$$
Consider the following subcases of L.
Subcase 3.1: If $x_3 = \frac{x_1^2}{x_2}$, then $\frac{x_1^2 + x_2x_3}{x_2 + x_3} = 0$, $\frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3} = x_1$, and the other elements in $L$ are pairwise distinct, which implies $|L| = 8$. If $x_3 = \frac{x_2^2}{x_1}$, then by the symmetry of $x_1$ and $x_2$ we also have $|L| = 8$.

Subcase 3.2: If $x_3 = c$, where $c^2 = x_1x_2$, then $\frac{x_3^2 + x_1x_2}{x_1 + x_2} = 0$, $\frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3} = x_3$, and the other elements in $L$ are pairwise distinct, which implies $|L| = 8$.

Subcase 3.3: If $x_3 = \frac{x_1x_2}{x_1 + x_2}$, then $\frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3} = b = 0$ and the other elements in $L$ are pairwise distinct, which implies $|L| = 8$.
Subcase 3.4: Let $x_3 \notin S := \{0, x_1, x_2, x_1 + x_2, \frac{x_1^2}{x_2}, \frac{x_2^2}{x_1}, \frac{x_1x_2}{x_1 + x_2}, c\}$, where $c^2 = x_1x_2$. By Lemmas 8 and 16, it is easy to prove that the elements in $S$ are pairwise distinct and $|S| = 8$. By Lemmas 8 and 11, the elements in $L$ are pairwise distinct, which implies $|L| = 10$.
In this case, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,2,4)}| = 0$ is equal to
$$\frac{4(q-8)}{3!} + \frac{(q-8)(q-10)}{3!} = \frac{(q-6)(q-8)}{6}$$
regardless of the ordering of $x_3, x_4, x_5$.
Thanks to the above cases, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,2,4)}| = 0$ is equal to
$$\frac{q-8}{6} + \frac{(q-6)(q-8)}{6} = \frac{(q-5)(q-8)}{6}$$
regardless of the ordering of $x_3, x_4, x_5$.
Then we have completed the proof.
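The count of Lemma 28 can likewise be checked by brute force; since the lemma requires $m$ odd, the sketch below (our own addition, with all identifiers hypothetical) uses GF(2^5) built from the primitive polynomial $x^5 + x^2 + 1$, where $(q-5)(q-8)/6 = 108$.

```python
# Brute-force check of the count in Lemma 28 over GF(2^5), m = 5 odd.
from itertools import combinations, permutations

M_DEG, POLY = 5, 0b100101         # GF(2^5) via x^5 + x^2 + 1 (assumed primitive)
Q = 1 << M_DEG
EXP, LOG = [0] * (Q - 1), [0] * Q
g = 1
for i in range(Q - 1):
    EXP[i], LOG[g] = g, i
    g <<= 1
    if g & Q:
        g ^= POLY

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % (Q - 1)]

def gpow(a, e):
    return 1 if e == 0 else (0 if a == 0 else EXP[LOG[a] * e % (Q - 1)])

def gdet(m):
    d = 0
    for p in permutations(range(len(m))):
        t = 1
        for i, j in enumerate(p):
            t = gmul(t, m[i][j])
        d ^= t
    return d

x1, x2 = 1, 2
rest = [x for x in range(1, Q) if x not in (x1, x2)]
count = sum(
    gdet([[gpow(c, e) for c in (x1, x2, x3, x4, x5)] for e in (0, 1, 2, 4, 5)]) == 0
    for x3, x4, x5 in combinations(rest, 3)
)
print(count, (Q - 5) * (Q - 8) // 6)   # both should print 108 for q = 32
```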
Theorem 29. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Then $C_{(1,2,4)}$ is a $[q-1, 5, q-6]$ NMDS code over $\mathbb{F}_q$ with weight enumerator
$$A(z) = 1 + \frac{(q-1)^2(q-2)(q-5)(q-8)}{120} z^{q-6} + \frac{(q-1)^2(q-2)(3q-14)}{12} z^{q-5} + \frac{(q-1)^2(q-2)(q^2-3q+10)}{12} z^{q-4} + \frac{(q-1)^2(q-2)(q^2+5q+10)}{6} z^{q-3} + \frac{(q-1)^2(9q^3+21q^2+22q+160)}{24} z^{q-2} + \frac{(q-1)(22q^4+33q^3+57q^2-72q+260)}{60} z^{q-1}.$$
Moreover, the minimum weight codewords of $C_{(1,2,4)}$ support a 2-$(q-1, q-6, \frac{(q-5)(q-6)(q-7)(q-8)}{120})$ simple design and the minimum weight codewords of $C_{(1,2,4)}^{\perp}$ support a 2-$(q-1, 5, \frac{(q-5)(q-8)}{6})$ simple design.
Proof. By Lemma 28, we can prove this theorem with a proof similar to that of Theorem 21. The details are omitted here.

The extended code $\overline{C}_{(1,2,4)}$ of $C_{(1,2,4)}$

It is obvious that the extended code $\overline{C}_{(1,2,4)}$ of $C_{(1,2,4)}$ is generated by the following matrix:
$$\overline{G}_{(1,2,4)} = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{q-1} & 0 \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{q-1}^2 & 0 \\ \alpha_1^4 & \alpha_2^4 & \cdots & \alpha_{q-1}^4 & 0 \\ \alpha_1^5 & \alpha_2^5 & \cdots & \alpha_{q-1}^5 & 0 \end{pmatrix}.$$
We need the following lemma to give our main result in this subsection.
Lemma 30. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Let $x_1, x_2, x_3, x_4, x_5$ be five pairwise distinct elements in $\mathbb{F}_q$. Define the matrix
$$M_{(1,2,4)} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ x_1 & x_2 & x_3 & x_4 & x_5 \\ x_1^2 & x_2^2 & x_3^2 & x_4^2 & x_5^2 \\ x_1^4 & x_2^4 & x_3^4 & x_4^4 & x_5^4 \\ x_1^5 & x_2^5 & x_3^5 & x_4^5 & x_5^5 \end{pmatrix}.$$
Then for any pairwise different and fixed elements $x_1, x_2, x_3$, the total number of different choices of $(x_4, x_5)$ such that $|M_{(1,2,4)}| = 0$ is equal to $\frac{q-8}{2}$ (regardless of the ordering of $x_4, x_5$).
Proof. By Lemmas 11, 16 and 17, we can prove this lemma with a proof similar to that of Lemma 20.
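Lemma 30 differs from Lemma 28 in that the elements range over all of $\mathbb{F}_q$ and three of them are fixed; the following sketch (again our own illustration over GF(2^5), not part of the proof) checks the stated count $(q-8)/2 = 12$, fixing $x_1 = 0$ to exercise the zero element.

```python
# Brute-force check of the count in Lemma 30 over GF(2^5); 0 is allowed here.
from itertools import combinations, permutations

M_DEG, POLY = 5, 0b100101
Q = 1 << M_DEG
EXP, LOG = [0] * (Q - 1), [0] * Q
g = 1
for i in range(Q - 1):
    EXP[i], LOG[g] = g, i
    g <<= 1
    if g & Q:
        g ^= POLY

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % (Q - 1)]

def gpow(a, e):
    return 1 if e == 0 else (0 if a == 0 else EXP[LOG[a] * e % (Q - 1)])

def gdet(m):
    d = 0
    for p in permutations(range(len(m))):
        t = 1
        for i, j in enumerate(p):
            t = gmul(t, m[i][j])
        d ^= t
    return d

x1, x2, x3 = 0, 1, 2              # three fixed pairwise distinct elements of F_q
rest = [x for x in range(Q) if x not in (x1, x2, x3)]
count = sum(
    gdet([[gpow(c, e) for c in (x1, x2, x3, x4, x5)] for e in (0, 1, 2, 4, 5)]) == 0
    for x4, x5 in combinations(rest, 2)
)
print(count, (Q - 8) // 2)        # both should print 12 for q = 32
```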
Theorem 31. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Then $\overline{C}_{(1,2,4)}$ is a $[q, 5, q-5]$ NMDS code over $\mathbb{F}_q$ with weight enumerator
$$A(z) = 1 + \frac{q(q-1)^2(q-2)(q-8)}{120} z^{q-5} + \frac{5q(q-1)^2(q-2)}{24} z^{q-4} + \frac{q^2(q-1)^2(q-2)}{12} z^{q-3} + \frac{q(q-1)^2(2q^2+7q+20)}{12} z^{q-2} + \frac{q(q-1)(9q^3+13q^2-6q+80)}{24} z^{q-1} + \frac{(q-1)(44q^4+21q^3+49q^2-114q+120)}{120} z^{q}.$$
Moreover, the minimum weight codewords of $\overline{C}_{(1,2,4)}$ support a 3-$(q, q-5, \frac{(q-5)(q-6)(q-7)(q-8)}{120})$ simple design and the minimum weight codewords of $\overline{C}_{(1,2,4)}^{\perp}$ support a 3-$(q, 5, \frac{q-8}{2})$ simple design.
Proof. By Lemma 30, we can derive this theorem with a proof similar to that of Theorem 21. The details are omitted here.
When (i, j, k) = (1, 3, 4)
Let $(i, j, k) = (1, 3, 4)$. We study the linear code $C_{(1,3,4)}$ in this subsection.
The following lemma will be used to prove the main result in this subsection.
Lemma 32. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Let $x_1, x_2, x_3, x_4, x_5$ be five pairwise distinct elements in $\mathbb{F}_q^*$. Define the matrix
$$M_{(1,3,4)} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ x_1 & x_2 & x_3 & x_4 & x_5 \\ x_1^3 & x_2^3 & x_3^3 & x_4^3 & x_5^3 \\ x_1^4 & x_2^4 & x_3^4 & x_4^4 & x_5^4 \\ x_1^5 & x_2^5 & x_3^5 & x_4^5 & x_5^5 \end{pmatrix}.$$
Then for any two different and fixed elements $x_1, x_2$, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to $\frac{(q-5)(q-8)}{6}$ (regardless of the ordering of $x_3, x_4, x_5$).
Proof. Let $x_1, x_2$ be two different and fixed elements. By Lemma 17, $|M_{(1,3,4)}| = 0$ if and only if
$$x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4 + (x_1x_2 + x_1x_3 + x_2x_3 + x_1x_4 + x_2x_4 + x_3x_4)x_5 = 0.$$
Consider the following cases.

Case 1: Let $x_3 = x_1 + x_2$. Then $x_1x_2 + x_1x_3 + x_2x_3 + (x_1 + x_2 + x_3)x_4 = x_1^2 + x_1x_2 + x_2^2 \neq 0$ and
$$|M_{(1,3,4)}| = 0 \Leftrightarrow x_5 = \frac{(x_1 + x_2)x_1x_2}{x_1^2 + x_1x_2 + x_2^2} + x_4.$$
It is easy to deduce that $x_5 \notin \{0, x_1, x_2, x_3, x_4\}$ if and only if
$$x_4 \notin \left\{\frac{(x_1 + x_2)x_1x_2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1^3}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_2^3}{x_1^2 + x_1x_2 + x_2^2},\ \frac{(x_1 + x_2)^3}{x_1^2 + x_1x_2 + x_2^2}\right\}.$$
In this case, we conclude that $|M_{(1,3,4)}| = 0$ if and only if
$$x_4 \notin \left\{0, x_1, x_2, x_1 + x_2,\ \frac{(x_1 + x_2)x_1x_2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1^3}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_2^3}{x_1^2 + x_1x_2 + x_2^2},\ \frac{(x_1 + x_2)^3}{x_1^2 + x_1x_2 + x_2^2}\right\}$$
and $x_5 = \frac{(x_1 + x_2)x_1x_2}{x_1^2 + x_1x_2 + x_2^2} + x_4$.
By Lemmas 8 and 16, we can easily prove that the elements of this set are pairwise distinct. Then the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to $\frac{q-8}{6}$ regardless of the ordering of $x_3, x_4, x_5$.

Case 2: Let $x_3 \neq x_1 + x_2$ and $x_1x_2 + x_1x_3 + x_2x_3 + (x_1 + x_2 + x_3)x_4 = 0$, which implies $x_4 = \frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3}$
. By Lemma 11, it is easy to prove that
$$|M_{(1,3,4)}| = \prod_{1 \le i < j \le 5}(x_j - x_i)\,\frac{x_1^2(x_2^2 + x_2x_3 + x_3^2) + x_1(x_2^2x_3 + x_2x_3^2) + x_2^2x_3^2}{x_1 + x_2 + x_3} \neq 0.$$
Hence there is no $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ in this case.

Case 3: Let $x_3 \neq x_1 + x_2$ and $x_1x_2 + x_1x_3 + x_2x_3 + (x_1 + x_2 + x_3)x_4 \neq 0$, which implies $x_4 \neq \frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3}$.
Then
$$|M_{(1,3,4)}| = 0 \Leftrightarrow x_5 = \frac{(x_1x_2 + x_1x_3 + x_2x_3)x_4 + x_1x_2x_3}{x_1x_2 + x_1x_3 + x_2x_3 + (x_1 + x_2 + x_3)x_4}. \tag{4}$$
Let
$$x_5 = \frac{(x_1x_2 + x_1x_3 + x_2x_3)x_4 + x_1x_2x_3}{x_1x_2 + x_1x_3 + x_2x_3 + (x_1 + x_2 + x_3)x_4}, \qquad a^2 = \frac{x_1x_2x_3}{x_1 + x_2 + x_3}.$$
Consider the following subcases.
Subcase 3.1: Let $x_3 = \frac{x_1x_2}{x_1 + x_2}$. Then $x_5 = \frac{x_1^2x_2^2}{(x_1^2 + x_1x_2 + x_2^2)x_4}$ and $\frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3} = 0$. It is easy to deduce that $x_5 \notin \{0, x_1, x_2, x_3, x_4\}$ implies $x_4 \notin \{\frac{x_1x_2^2}{x_1^2 + x_1x_2 + x_2^2}, \frac{x_1^2x_2}{x_1^2 + x_1x_2 + x_2^2}, \frac{x_1x_2(x_1 + x_2)}{x_1^2 + x_1x_2 + x_2^2}, a\}$. In this subcase, we conclude that $|M_{(1,3,4)}| = 0$ if and only if
$$x_4 \notin L_1 := \left\{0, x_1, x_2, \frac{x_1x_2}{x_1 + x_2},\ \frac{x_1x_2^2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1^2x_2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1x_2(x_1 + x_2)}{x_1^2 + x_1x_2 + x_2^2},\ a\right\}$$
and $x_5 = \frac{x_1^2x_2^2}{(x_1^2 + x_1x_2 + x_2^2)x_4}$.
It is obvious that the elements in $L_1$ are pairwise distinct. Then the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to $\frac{q-8}{6}$ regardless of the ordering of $x_3, x_4, x_5$.
Subcase 3.2: Let $x_3 = \frac{x_1^2}{x_2}$. Then $x_5 = \frac{x_1(x_4(x_1 + x_2)^2 + x_1x_2(x_1 + x_4))}{x_4(x_1 + x_2)^2 + x_1x_2(x_1 + x_4) + x_1(x_1 + x_2)^2}$ and $\frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3} = x_1$. It is easy to deduce that $x_5 \notin \{0, x_1, x_2, x_3, x_4\}$ if and only if $x_4 \notin \{\frac{x_1x_2^2}{x_1^2 + x_1x_2 + x_2^2}, \frac{x_1^3}{x_1^2 + x_1x_2 + x_2^2}, \frac{x_1^2x_2}{x_1^2 + x_1x_2 + x_2^2}, a\}$. We conclude that $|M_{(1,3,4)}| = 0$ if and only if
$$x_4 \notin L_2 := \left\{0, x_1, x_2, \frac{x_1^2}{x_2},\ \frac{x_1x_2^2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1^3}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1^2x_2}{x_1^2 + x_1x_2 + x_2^2},\ a\right\}$$
and $x_5 = \frac{x_1(x_4(x_1 + x_2)^2 + x_1x_2(x_1 + x_4))}{x_4(x_1 + x_2)^2 + x_1x_2(x_1 + x_4) + x_1(x_1 + x_2)^2}$.
It is obvious that the elements in $L_2$ are pairwise distinct. In this subcase, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to $\frac{q-8}{6}$ regardless of the ordering of $x_3, x_4, x_5$. If $x_3 = \frac{x_2^2}{x_1}$, then by the symmetry of $x_1$ and $x_2$ we draw the same conclusion: the total number of choices is again $\frac{q-8}{6}$ regardless of the ordering of $x_3, x_4, x_5$.

Subcase 3.3: Let $x_3 = b$, where $b^2 = x_1x_2$. Then $x_5^2 = \frac{x_1x_2(x_4^2(x_1^2 + x_1x_2 + x_2^2) + x_1^2x_2^2)}{x_4^2(x_1^2 + x_1x_2 + x_2^2) + x_1^2x_2^2 + x_1x_2(x_1 + x_2)^2}$. It is easy to deduce that $x_5 \notin \{0, x_1, x_2, x_3, x_4\}$ implies
$$x_4^2 \notin \left\{\frac{x_1^3x_2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_2^3x_1}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1^2x_2^2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1x_2b}{x_1 + x_2 + b}\right\}.$$
We conclude that $|M_{(1,3,4)}| = 0$ if and only if
$$x_4^2 \notin L_4 := \left\{0, x_1^2, x_2^2, x_1x_2,\ \frac{x_1^3x_2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_2^3x_1}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1^2x_2^2}{x_1^2 + x_1x_2 + x_2^2},\ \frac{x_1x_2b}{x_1 + x_2 + b}\right\}$$
and $x_5^2 = \frac{x_1x_2(x_4^2(x_1^2 + x_1x_2 + x_2^2) + x_1^2x_2^2)}{x_4^2(x_1^2 + x_1x_2 + x_2^2) + x_1^2x_2^2 + x_1x_2(x_1 + x_2)^2}$.
By Lemma 11, we can verify that the elements in $L_4$ are pairwise distinct. In this subcase, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to $\frac{q-8}{6}$ regardless of the ordering of $x_3, x_4, x_5$.
Subcase 3.4: Let $x_3 \notin L_5 := \{0, x_1, x_2, x_1 + x_2, \frac{x_1x_2}{x_1 + x_2}, \frac{x_1^2}{x_2}, \frac{x_2^2}{x_1}, b\}$, where $b^2 = x_1x_2$.
By Lemmas 8 and 16, the elements in $L_5$ are pairwise distinct. It is easy to prove that $x_5 \notin \{0, x_1, x_2, x_3, x_4\}$ implies $x_4 \notin \{\frac{x_1x_2x_3}{x_1x_2 + x_1x_3 + x_2x_3}, \frac{x_1^2(x_2 + x_3)}{x_1^2 + x_2x_3}, \frac{x_2^2(x_1 + x_3)}{x_2^2 + x_1x_3}, \frac{x_3^2(x_1 + x_2)}{x_3^2 + x_1x_2}, a\}$. We conclude that $|M_{(1,3,4)}| = 0$ if and only if $x_3 \notin L_5$ and $x_4 \notin L_6$, where
$$L_6 := \left\{0, x_1, x_2, x_3,\ \frac{x_1x_2 + x_1x_3 + x_2x_3}{x_1 + x_2 + x_3},\ \frac{x_1x_2x_3}{x_1x_2 + x_1x_3 + x_2x_3},\ \frac{x_1^2(x_2 + x_3)}{x_1^2 + x_2x_3},\ \frac{x_2^2(x_1 + x_3)}{x_2^2 + x_1x_3},\ \frac{x_3^2(x_1 + x_2)}{x_3^2 + x_1x_2},\ a\right\},$$
and
$$x_5 = \frac{(x_1x_2 + x_1x_3 + x_2x_3)x_4 + x_1x_2x_3}{x_1x_2 + x_1x_3 + x_2x_3 + (x_1 + x_2 + x_3)x_4}.$$
By Lemma 11, we can verify that the elements in $L_6$ are pairwise distinct. In this subcase, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to $\frac{(q-8)(q-10)}{6}$ regardless of the ordering of $x_3, x_4, x_5$.
In Case 3, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to
$$\frac{4(q-8)}{6} + \frac{(q-8)(q-10)}{6} = \frac{(q-6)(q-8)}{6}$$
regardless of the ordering of $x_3, x_4, x_5$. Thanks to the above cases, the total number of different choices of $(x_3, x_4, x_5)$ such that $|M_{(1,3,4)}| = 0$ is equal to
$$\frac{q-8}{6} + \frac{(q-6)(q-8)}{6} = \frac{(q-5)(q-8)}{6}$$
regardless of the ordering of $x_3, x_4, x_5$. Then we complete the proof.
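The long case analysis above can be cross-checked numerically; the sketch below (our own addition over GF(2^5), where $m = 5$ is odd as the lemma requires) brute-forces the count of Lemma 32 and compares it with $(q-5)(q-8)/6 = 108$.

```python
# Brute-force check of the count in Lemma 32 over GF(2^5).
from itertools import combinations, permutations

M_DEG, POLY = 5, 0b100101
Q = 1 << M_DEG
EXP, LOG = [0] * (Q - 1), [0] * Q
g = 1
for i in range(Q - 1):
    EXP[i], LOG[g] = g, i
    g <<= 1
    if g & Q:
        g ^= POLY

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % (Q - 1)]

def gpow(a, e):
    return 1 if e == 0 else (0 if a == 0 else EXP[LOG[a] * e % (Q - 1)])

def gdet(m):
    d = 0
    for p in permutations(range(len(m))):
        t = 1
        for i, j in enumerate(p):
            t = gmul(t, m[i][j])
        d ^= t
    return d

x1, x2 = 1, 2
rest = [x for x in range(1, Q) if x not in (x1, x2)]
count = sum(
    gdet([[gpow(c, e) for c in (x1, x2, x3, x4, x5)] for e in (0, 1, 3, 4, 5)]) == 0
    for x3, x4, x5 in combinations(rest, 3)
)
print(count, (Q - 5) * (Q - 8) // 6)   # both should print 108 for q = 32
```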
Theorem 33. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Then $C_{(1,3,4)}$ is a $[q-1, 5, q-6]$ NMDS code over $\mathbb{F}_q$ with weight enumerator
$$A(z) = 1 + \frac{(q-1)^2(q-2)(q-5)(q-8)}{120} z^{q-6} + \frac{(q-1)^2(q-2)(3q-14)}{12} z^{q-5} + \frac{(q-1)^2(q-2)(q^2-3q+10)}{12} z^{q-4} + \frac{(q-1)^2(q-2)(q^2+5q+10)}{6} z^{q-3} + \frac{(q-1)^2(9q^3+21q^2+22q+160)}{24} z^{q-2} + \frac{(q-1)(22q^4+33q^3+57q^2-72q+260)}{60} z^{q-1}.$$
Moreover, the minimum weight codewords of $C_{(1,3,4)}$ support a 2-$(q-1, q-6, \frac{(q-5)(q-6)(q-7)(q-8)}{120})$ simple design and the minimum weight codewords of $C_{(1,3,4)}^{\perp}$ support a 2-$(q-1, 5, \frac{(q-5)(q-8)}{6})$ simple design.

Proof. By Lemma 32, the desired conclusions can be derived with a proof similar to that of Theorem 21.

Constructions of 6-dimensional near MDS codes holding 2-designs or 3-designs
In this section, let $q = 2^m$ with $m > 3$. Let $\alpha$ be a generator of $\mathbb{F}_q^*$ and $\alpha_t = \alpha^t$ for $1 \le t \le q-1$. Then $\alpha_{q-1} = 1$. Define
$$H_{(i,j,k,l)} = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1^i & \alpha_2^i & \cdots & \alpha_{q-2}^i & \alpha_{q-1}^i \\ \alpha_1^j & \alpha_2^j & \cdots & \alpha_{q-2}^j & \alpha_{q-1}^j \\ \alpha_1^k & \alpha_2^k & \cdots & \alpha_{q-2}^k & \alpha_{q-1}^k \\ \alpha_1^l & \alpha_2^l & \cdots & \alpha_{q-2}^l & \alpha_{q-1}^l \\ \alpha_1^6 & \alpha_2^6 & \cdots & \alpha_{q-2}^6 & \alpha_{q-1}^6 \end{pmatrix},$$
where $(i, j, k, l) \in \{(1,2,3,4), (2,3,4,5), (1,2,4,5)\}$. Then $H_{(i,j,k,l)}$ is a $6 \times (q-1)$ matrix over $\mathbb{F}_q$.
Let $C_{(i,j,k,l)}$ be the linear code over $\mathbb{F}_q$ generated by $H_{(i,j,k,l)}$. We will prove that $C_{(i,j,k,l)}$ is an NMDS code and that the minimum weight codewords of both $C_{(i,j,k,l)}$ and its dual $C_{(i,j,k,l)}^{\perp}$ support 2-designs. Let $\overline{C}_{(i,j,k,l)}$ be the extended code of $C_{(i,j,k,l)}$. We will also prove that $\overline{C}_{(i,j,k,l)}$ is an NMDS code and that the minimum weight codewords of both $\overline{C}_{(i,j,k,l)}$ and its dual $\overline{C}_{(i,j,k,l)}^{\perp}$ support 3-designs for $(i,j,k,l) \in \{(1,2,3,4), (2,3,4,5)\}$.

6.1. When $(i, j, k, l) = (1, 2, 3, 4)$ or $(2, 3, 4, 5)$

The following lemma plays an important role in the proof of our main result in this subsection.

Lemma 34. Let $m$ be an integer with $m > 3$ and $q = 2^m$. Let $x_1, x_2, x_3, x_4, x_5, x_6$ be six pairwise distinct elements in $\mathbb{F}_q^*$. Define the matrix
$$M_{(i,j,k,l)} = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ x_1^i & x_2^i & \cdots & x_6^i \\ x_1^j & x_2^j & \cdots & x_6^j \\ x_1^k & x_2^k & \cdots & x_6^k \\ x_1^l & x_2^l & \cdots & x_6^l \\ x_1^6 & x_2^6 & \cdots & x_6^6 \end{pmatrix},$$
where $(i,j,k,l) = (1,2,3,4)$ or $(2,3,4,5)$. Then for any two different and fixed elements $x_1, x_2$ and any $(i,j,k,l) \in \{(1,2,3,4), (2,3,4,5)\}$, the total number of different choices of $(x_3, x_4, x_5, x_6)$ such that $|M_{(i,j,k,l)}| = 0$ is equal to $\frac{(q-4)(q-6)(q-8)}{24}$ (regardless of the ordering of $x_3, x_4, x_5, x_6$).

Proof. In the following, we only give the proof for $(i,j,k,l) = (1,2,3,4)$, as the proof for $(i,j,k,l) = (2,3,4,5)$ can be given similarly. Let $x_1, x_2$ be any two different and fixed elements in $\mathbb{F}_q^*$. By Lemma 17, the vanishing of $|M_{(1,2,3,4)}|$ can be characterized by an exclusion set $L_1$ for $x_3$ and an exclusion set $L_2$ for $x_4$. Consider the following cases.

Case 1: Let $x_3 = x_1 + x_2$. Then $x_1 + x_2 + x_3 = 0$ and $x_1 + x_2 + x_3 + x_4 = x_4$. We have $|L_1| = 4$ and $|L_2| = 8$. In this case, the total number of different choices of $(x_3, x_4, x_5, x_6)$ such that $|M_{(1,2,3,4)}| = 0$ is equal to $\frac{(q-4)(q-8)}{24}$ regardless of the ordering of $x_3, x_4, x_5, x_6$.

Case 2: Let $x_3 \neq x_1 + x_2$. It is obvious that the elements in $L_1$ are pairwise distinct. Consider the following subcases of $L_2$.
Subcase 2.1: Let $x_4 = x_1 + x_2$. Then $x_1 + x_2 + x_3 + x_4 = x_3$, $x_1 + x_2 + x_4 = 0$, and the elements in $L_2$ are pairwise distinct. Thus $|L_2| = 8$. If $x_4 = x_1 + x_3$ or $x_4 = x_2 + x_3$, then by the symmetry of $x_1$, $x_2$ and $x_3$, we also have $|L_2| = 8$.

Subcase 2.2: Let $x_4 \notin L_3 := \{0, x_1, x_2, x_3, x_1 + x_2 + x_3, x_1 + x_2, x_1 + x_3, x_2 + x_3\}$. Then the elements in $L_2$ and $L_3$ are pairwise distinct. Thus $|L_2| = 10$.
By summarizing the four subcases in Case 2, the total number of different choices of $(x_3, x_4, x_5, x_6)$ such that $|M_{(1,2,3,4)}| = 0$ is equal to
$$\frac{3(q-4)(q-8)}{4!} + \frac{(q-4)(q-8)(q-10)}{4!} = \frac{(q-4)(q-7)(q-8)}{24}$$
regardless of the ordering of $x_3, x_4, x_5, x_6$.

By the above cases, the total number of different choices of $(x_3, x_4, x_5, x_6)$ such that $|M_{(1,2,3,4)}| = 0$ is equal to
$$\frac{(q-4)(q-8)}{24} + \frac{(q-4)(q-7)(q-8)}{24} = \frac{(q-4)(q-6)(q-8)}{24}$$
regardless of the ordering of $x_3, x_4, x_5, x_6$. Then the proof is completed.
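The counting of Lemma 34 extends the same brute-force idea to $6 \times 6$ matrices. The sketch below (our own illustration, checking only the tuple $(i,j,k,l) = (1,2,3,4)$ over GF(2^4)) compares the result with $(q-4)(q-6)(q-8)/24 = 40$; it takes a few seconds since every determinant is expanded over all 720 permutations.

```python
# Brute-force check of the count in Lemma 34 over GF(2^4), tuple (1,2,3,4).
from itertools import combinations, permutations

M_DEG, POLY = 4, 0b10011
Q = 1 << M_DEG
EXP, LOG = [0] * (Q - 1), [0] * Q
g = 1
for i in range(Q - 1):
    EXP[i], LOG[g] = g, i
    g <<= 1
    if g & Q:
        g ^= POLY

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % (Q - 1)]

def gpow(a, e):
    return 1 if e == 0 else (0 if a == 0 else EXP[LOG[a] * e % (Q - 1)])

def gdet(m):
    d = 0
    for p in permutations(range(len(m))):
        t = 1
        for i, j in enumerate(p):
            t = gmul(t, m[i][j])
        d ^= t
    return d

x1, x2 = 1, 2
rest = [x for x in range(1, Q) if x not in (x1, x2)]
count = sum(
    gdet([[gpow(c, e) for c in (x1, x2, x3, x4, x5, x6)]
          for e in (0, 1, 2, 3, 4, 6)]) == 0
    for x3, x4, x5, x6 in combinations(rest, 4)
)
print(count, (Q - 4) * (Q - 6) * (Q - 8) // 24)   # both should print 40 for q = 16
```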
Theorem 35. Let $m$ be an integer with $m > 3$ and $q = 2^m$. Then $C_{(1,2,3,4)}$ is a $[q-1, 6, q-7]$ NMDS code over $\mathbb{F}_q$, and the minimum weight codewords of $C_{(1,2,3,4)}$ and of its dual $C_{(1,2,3,4)}^{\perp}$ support 2-designs.

Proof. By Lemma 34, we can prove this theorem with a proof similar to that of Theorem 21.

By Theorems 27 and 35, we have the following conjecture.
Conjecture 36. Let $q = 2^m$ with $m \ge k_1 \ge 3$, where $k_1$ is some proper positive integer. Let $\alpha_1, \alpha_2, \cdots, \alpha_{q-1}$ be all elements of $\mathbb{F}_q^*$. Define
$$M_k = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{q-2} & \alpha_{q-1} \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{q-2}^2 & \alpha_{q-1}^2 \\ \vdots & \vdots & & \vdots & \vdots \\ \alpha_1^{k-2} & \alpha_2^{k-2} & \cdots & \alpha_{q-2}^{k-2} & \alpha_{q-1}^{k-2} \\ \alpha_1^k & \alpha_2^k & \cdots & \alpha_{q-2}^k & \alpha_{q-1}^k \end{pmatrix},$$
where $2 < k < q-1$. Let $C_k$ be the linear code over $\mathbb{F}_q$ generated by $M_k$. Then $C_k$ is a $[q-1, k, q-1-k]$ NMDS code and the minimum weight codewords of both $C_k$ and its dual $C_k^{\perp}$ support 2-designs.

By Magma, Conjecture 36 has been verified to be correct for $(m, k_1, k) \in \{(4,4,4), (4,4,5), \ldots, (4,4,12)\}$ and $(m, k_1, k) \in \{(5,4,5), (5,4,6), (5,4,7), (5,4,8)\}$.
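One half of the NMDS property claimed in Conjecture 36 — that the dual distance of $C_k$ equals $k$ — can also be tested directly: $d(C_k^{\perp}) = k$ means that every $k-1$ columns of $M_k$ are linearly independent while some $k$ columns are dependent. The sketch below is our own partial check for $(m, k) = (4, 5)$; it verifies neither $d(C_k) = q-1-k$ nor the design property, and all identifiers are hypothetical.

```python
# Partial check of Conjecture 36 over GF(2^4): the dual distance of C_5 is 5.
from itertools import combinations, permutations

M_DEG, POLY = 4, 0b10011
Q = 1 << M_DEG
EXP, LOG = [0] * (Q - 1), [0] * Q
g = 1
for i in range(Q - 1):
    EXP[i], LOG[g] = g, i
    g <<= 1
    if g & Q:
        g ^= POLY

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % (Q - 1)]

def gpow(a, e):
    return 1 if e == 0 else (0 if a == 0 else EXP[LOG[a] * e % (Q - 1)])

def gdet(m):
    d = 0
    for p in permutations(range(len(m))):
        t = 1
        for i, j in enumerate(p):
            t = gmul(t, m[i][j])
        d ^= t
    return d

K = 5
EXPS = tuple(range(K - 1)) + (K,)                 # row exponents 0,...,k-2,k
cols = [[gpow(a, e) for e in EXPS] for a in range(1, Q)]

def dependent(sub):
    """True iff the chosen columns are linearly dependent over GF(2^m)."""
    n = len(sub)
    if n == K:                                    # square case: determinant test
        return gdet([[col[i] for col in sub] for i in range(K)]) == 0
    # n < K: dependent iff every n x n minor vanishes
    return all(
        gdet([[col[i] for col in sub] for i in rows]) == 0
        for rows in combinations(range(K), n)
    )

no_four = not any(dependent(s) for s in combinations(cols, K - 1))
some_five = any(dependent(s) for s in combinations(cols, K))
print(no_four, some_five)                         # True True  <=>  d(C_k^perp) = k
```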
6.2. The extended code $\overline{C}_{(1,2,3,4)}$ of $C_{(1,2,3,4)}$

In this subsection, we study the extended code $\overline{C}_{(1,2,3,4)}$ of $C_{(1,2,3,4)}$. It is obvious that the linear code $\overline{C}_{(1,2,3,4)}$ is generated by the following matrix:
$$\overline{H}_{(1,2,3,4)} = \begin{pmatrix} 1 & 1 & \cdots & 1 & 1 \\ \alpha_1 & \alpha_2 & \cdots & \alpha_{q-1} & 0 \\ \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_{q-1}^2 & 0 \\ \alpha_1^3 & \alpha_2^3 & \cdots & \alpha_{q-1}^3 & 0 \\ \alpha_1^4 & \alpha_2^4 & \cdots & \alpha_{q-1}^4 & 0 \\ \alpha_1^6 & \alpha_2^6 & \cdots & \alpha_{q-1}^6 & 0 \end{pmatrix}.$$
The following lemma will be used to give our main result in this subsection.

Lemma 37. Let $m$ be an integer with $m > 3$ and $q = 2^m$. Let $x_1, x_2, x_3, x_4, x_5, x_6$ be six pairwise distinct elements in $\mathbb{F}_q$. Define the matrix $M_{(1,2,3,4)}$ whose $t$-th column is $(1, x_t, x_t^2, x_t^3, x_t^4, x_t^6)^T$ for $1 \le t \le 6$. Then for any pairwise different and fixed elements $x_1, x_2, x_3$, the total number of different choices of $(x_4, x_5, x_6)$ such that $|M_{(1,2,3,4)}| = 0$ is equal to $\frac{(q-4)(q-8)}{6}$ (regardless of the ordering of $x_4, x_5, x_6$).

Theorem 38. Let $m$ be an integer with $m > 3$ and $q = 2^m$. Then $\overline{C}_{(1,2,3,4)}$ is a $[q, 6, q-6]$ NMDS code over $\mathbb{F}_q$, and the minimum weight codewords of $\overline{C}_{(1,2,3,4)}$ and of its dual $\overline{C}_{(1,2,3,4)}^{\perp}$ support 3-designs.

Proof. By Lemma 37, we can derive the desired conclusion with a proof similar to that of Theorem 21. The details are omitted.

6.3. When $(i, j, k, l) = (1, 2, 4, 5)$

In this subsection, we consider the case $(i, j, k, l) = (1, 2, 4, 5)$. The following lemma plays an important role in the proof of our next main result.

Lemma 39. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Let $x_1, x_2, x_3, x_4, x_5, x_6$ be six pairwise distinct elements in $\mathbb{F}_q^*$. Define the matrix $M_{(1,2,4,5)}$ whose $t$-th column is $(1, x_t, x_t^2, x_t^4, x_t^5, x_t^6)^T$ for $1 \le t \le 6$. Then for any two different and fixed elements $x_1, x_2$, the total number of different choices of $(x_3, x_4, x_5, x_6)$ such that $|M_{(1,2,4,5)}| = 0$ is equal to $\frac{(q-5)(q-8)}{6}$ (regardless of the ordering of $x_3, x_4, x_5, x_6$).

Proof. Similarly to the proof of Lemma 32, we can easily derive this lemma by Lemma 17.

Theorem 40. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Then $C_{(1,2,4,5)}$ is a $[q-1, 6, q-7]$ NMDS code over $\mathbb{F}_q$, and the minimum weight codewords of $C_{(1,2,4,5)}$ and of its dual $C_{(1,2,4,5)}^{\perp}$ support 2-designs.

Proof. Similarly to the proof of Theorem 21, we can prove this theorem by Lemma 39.

6.4. The extended code $\overline{C}_{(1,2,4,5)}$

It is obvious that the extended code $\overline{C}_{(1,2,4,5)}$ of $C_{(1,2,4,5)}$ is generated by the matrix obtained from $H_{(1,2,4,5)}$ by appending the column $(1, 0, 0, 0, 0, 0)^T$. We need the following lemma to give our main result in this subsection.

Lemma 41. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Let $x_1, x_2, x_3, x_4, x_5, x_6$ be six pairwise distinct elements in $\mathbb{F}_q$, and define the matrix $M_{(1,2,4,5)}$ as above. Then for any pairwise different and fixed elements $x_1, x_2, x_3$, the total number of different choices of $(x_4, x_5, x_6)$ such that $|M_{(1,2,4,5)}| = 0$ is equal to $\frac{(q-5)(q-8)}{6}$ (regardless of the ordering of $x_4, x_5, x_6$).

Proof. Similarly to the proof of Lemma 32, we can easily derive this lemma by Lemma 17.

Theorem 42. Let $m$ be an odd integer with $m > 3$ and $q = 2^m$. Then $\overline{C}_{(1,2,4,5)}$ is a $[q, 6, q-6]$ NMDS code over $\mathbb{F}_q$ with weight enumerator
$$A(z) = 1 + \frac{q(q-1)^2(q-2)(q-5)(q-8)}{720} z^{q-6} + \frac{q(q-1)^2(q-2)(3q-14)}{60} z^{q-5} + \frac{q(q-1)^2(q-2)(q^2-3q+10)}{48} z^{q-4} + \frac{q(q-1)^2(q-2)(q^2+5q+10)}{18} z^{q-3} + \frac{q(q-1)^2(9q^3+21q^2+22q+160)}{48} z^{q-2} + \frac{q(q-1)(22q^4+33q^3+57q^2-72q+260)}{60} z^{q-1} + \frac{(q-1)(265q^5+134q^4+21q^3+424q^2-844q+720)}{720} z^{q}.$$
Moreover, the minimum weight codewords of $\overline{C}_{(1,2,4,5)}$ support a 3-$(q, q-6, \frac{(q-5)(q-6)(q-7)(q-8)^2}{720})$ simple design and the minimum weight codewords of $\overline{C}_{(1,2,4,5)}^{\perp}$ support a 3-$(q, 6, \frac{(q-5)(q-8)}{6})$ simple design.
Proof. Similarly to the proof of Theorem 21, we can prove this theorem by Lemma 41.
Optimal locally recoverable codes
In this section, we study the minimum locality of the codes constructed in this paper. Locally recoverable codes (LRCs for short) are applied in distributed storage and cloud storage. LRCs were proposed for the recovery of data by Gopalan, Huang, Simitci and Yekhanin [14]. Let $[n] := \{1, 2, \cdots, n\}$ for a positive integer $n$. Let $C$ be an $[n, k, d]$ linear code over $\mathbb{F}_q$. For every $i \in [n]$, if there exist a subset $R_i \subseteq [n] \setminus \{i\}$ of size $r$ ($r < k$) and a function $f_i(x_1, x_2, \cdots, x_r)$ on $\mathbb{F}_q^r$ such that $c_i = f_i(c_{R_i})$ for each $c = (c_1, c_2, \cdots, c_n)$ in $C$, then $C$ is called an $(n, k, d, q; r)$-LRC, where $c_{R_i}$ is the projection of $c$ at $R_i$. The set $R_i$ is said to be the recovering set of $c_i$. The minimum $r$ such that a linear code $C$ is an $(n, k, d, q; r)$-LRC is called the minimum locality of this code. An LRC is said to be distance-optimal ($d$-optimal for short) if it achieves the Singleton-like bound, and dimension-optimal ($k$-optimal for short) if it achieves the Cadambe-Mazumdar bound. These two bounds are the following.

Lemma 43. [14, Singleton-like bound] For any $(n, k, d, q; r)$-LRC, we have
$$d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2.$$

Lemma 44. [1, Cadambe-Mazumdar bound] For any $(n, k, d, q; r)$-LRC,
$$k \le \min_{t \in \mathbb{Z}^+} \left[ rt + k_{opt}^{(q)}(n - t(r+1), d) \right],$$
where $\mathbb{Z}^+$ denotes the set of all positive integers and $k_{opt}^{(q)}(n, d)$ is the largest possible dimension of a linear code with alphabet size $q$, length $n$, and minimum distance $d$.
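The Singleton-like bound of Lemma 43 is easy to evaluate. As an illustration (our own, not from the paper), the sketch below checks that the $[q-1, 5, q-6]$ NMDS codes of the previous sections, whose minimum locality is $d(C^{\perp}) - 1 = 4$ by Lemma 45 below, meet the bound with equality and hence are $d$-optimal.

```python
# Evaluating the Singleton-like bound (Lemma 43) for the [q-1, 5, q-6] codes.
from math import ceil

def singleton_like_bound(n, k, r):
    """Upper bound on d from Lemma 43: d <= n - k - ceil(k/r) + 2."""
    return n - k - ceil(k / r) + 2

# With minimum locality r = d(C_perp) - 1 = 4, the bound equals
# (q-1) - 5 - 2 + 2 = q - 6, which is exactly the minimum distance.
for m in (4, 5, 6, 7):
    q = 2 ** m
    n, k, d, r = q - 1, 5, q - 6, 4
    print(m, d == singleton_like_bound(n, k, r))   # prints True for every m
```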
Let $B_i(C)$ denote the set of supports of all codewords with Hamming weight $i$ in $C$. The following lemma is useful for determining the minimum locality for nontrivial linear codes whose minimum distance is larger than 1.
Lemma 45. [28] Let $C$ be a nontrivial linear code of length $n$ and put $d^{\perp} = d(C^{\perp})$. If $(\mathcal{P}, B_{d^{\perp}}(C^{\perp}))$ is a 1-$(n, d^{\perp}, \lambda_1^{\perp})$ design with $\lambda_1^{\perp} \ge 1$, then $C$ has minimum locality $d^{\perp} - 1$.

Table 1: Optimal locally repairable codes

Theorem 46. All the locally repairable codes listed in Table 1 are both $d$-optimal and $k$-optimal.
Proof. Note that all NMDS codes in Sections 3-6 hold 2-designs or 3-designs. Then the desired conclusion follows from Lemmas 43, 44 and 45.
In [13, 20, 21, 28], optimal locally repairable codes of distances 3 and 4 were constructed. We remark that the optimal locally repairable codes of distances 3 and 4 in this paper have different parameters from those in these papers.
Summary and concluding remarks
In this paper, we presented several infinite families of near MDS codes holding $t$-designs over $\mathbb{F}_{2^m}$, together with their weight enumerators and the parameters of the supported designs.
Constructing NMDS codes holding $t$-designs for $t \ge 2$ is very challenging. All known infinite families of NMDS codes holding $t$-designs have small dimensions. It remains open to construct NMDS codes holding $t$-designs with general dimensions. If Conjecture 36 in this paper is true, then this question can be tackled. The reader is invited to prove our conjecture.
We remark that only the first two families of NMDS codes satisfy the Assmus-Mattson Theorem. Although the other families of NMDS codes do not satisfy the Assmus-Mattson Theorem, they still hold $t$-designs. Besides, all NMDS codes constructed in this paper are both $d$-optimal and $k$-optimal LRCs.
References

[1] V. Cadambe, A. Mazumdar, An upper bound on the size of locally recoverable codes, IEEE Int. Symp. Network Coding (2013) 1-5.
[2] C. Ding, Codes from Difference Sets, World Scientific, Singapore, 2015.
[3] C. Ding, C. Tang, Designs from Linear Codes, World Scientific, Singapore, 2022.
[4] C. Ding, Linear codes from some 2-designs, IEEE Trans. Inform. Theory 61 (6) (2015) 3265-3275.
[5] C. Ding, C. Tang, Infinite families of near MDS codes holding t-designs, IEEE Trans. Inform. Theory 66 (9) (2020) 5419-5428.
[6] C. Ding, C. Li, Infinite families of 2-designs and 3-designs from linear codes, Discrete Math. 340 (10) (2017) 2415-2431.
[7] C. Ding, Infinite families of 3-designs from a type of five-weight code, Des. Codes Cryptogr. 86 (3) (2018) 703-719.
[8] C. Ding, C. Tang, Combinatorial t-designs from special functions, Cryptogr. Commun. 12 (5) (2020) 1011-1033.
[9] X. Du, R. Wang, C. Fan, Infinite families of 2-designs from a class of cyclic codes, J. Comb. Des. 28 (3) (2020) 157-170.
[10] X. Du, R. Wang, C. Tang, Q. Wang, Infinite families of 2-designs from linear codes, Appl. Algebra Engrg. Comm. Comput. 33 (2022) 193-211.
[11] X. Du, R. Wang, C. Tang, Q. Wang, Infinite families of 2-designs from two classes of binary cyclic codes with three nonzeros, Adv. Math. Commun. 16 (2022) 157-168.
[12] A. Faldum, W. Willems, Codes of small defect, Des. Codes Cryptogr. 10 (1997) 341-350.
[13] Q. Fu, R. Li, L. Guo, G. Chen, Singleton-type optimal LRCs with minimum distance 3 and 4 from projective code, IEICE Trans. Fund. Electron. 104 (1) (2021) 319-323.
[14] P. Gopalan, C. Huang, H. Simitci, S. Yekhanin, On the locality of codeword symbols, IEEE Trans. Inform. Theory 58 (11) (2012) 6925-6934.
[15] Z. Heng, C. Ding, Z. Zhou, Minimal linear codes over finite fields, Finite Fields Appl. 54 (2018) 176-196.
[16] Z. Heng, D. Li, J. Du, F. Chen, A family of projective two-weight linear codes, Des. Codes Cryptogr. 89 (8) (2021) 1993-2007.
[17] T. Kløve, Codes for Error Detection, World Scientific, Singapore, 2007.
[18] K. H. Kim, S. Mesnager, Solving $x^{2^k+1} + x + a = 0$ in $\mathbb{F}_{2^n}$ with $\gcd(n, k) = 1$, Finite Fields Appl. 63 (2020) 101630.
[19] C. Li, Q. Yue, F. Li, Weight distributions of cyclic codes with respect to pairwise coprime order elements, Finite Fields Appl. 28 (2014) 94-114.
[20] X. Li, Z. Heng, Constructions of near MDS codes which are optimal locally recoverable codes, Finite Fields Appl. 88 (2023) 102184.
[21] X. Li, Z. Heng, A construction of optimal locally recoverable codes, Cryptogr. Commun. (2022), https://doi.org/10.1007/s12095-022-00619-x.
[22] Q. Liu, C. Ding, S. Mesnager, C. Tang, V. D. Tonchev, On infinite families of narrow-sense antiprimitive BCH codes admitting 3-transitive automorphism groups and their consequences, IEEE Trans. Inform. Theory 68 (2022) 3096-3107.
[23] W. C. Huffman, V. Pless, Fundamentals of Error-Correcting Codes, Cambridge Univ. Press, Cambridge, U.K., 2003.
[24] E. R. Heineman, Generalized Vandermonde determinants, Trans. Amer. Math. Soc. 31 (3) (1929) 464-476.
[25] R. Lidl, H. Niederreiter, Finite Fields, Cambridge University Press, Cambridge, 1997.
[26] S. Mesnager, Bent vectorial functions and linear codes from o-polynomials, Des. Codes Cryptogr. 77 (2015) 99-116.
[27] K. Pommerening, Quadratic equations in finite fields of characteristic 2, online at https://www.staff.unimainz.de/pommeren/MathMisc/QuGlChar2.pdf.
[28] P. Tan, C. Fan, C. Ding, C. Tang, Z. Zhou, The minimum locality of linear codes, Des. Codes Cryptogr. (2022), DOI: 10.1007/s10623-022-01099-z.
[29] C. Tang, C. Ding, An infinite family of linear codes supporting 4-designs, IEEE Trans. Inform. Theory 67 (1) (2020) 244-254.
[30] C. Tang, C. Ding, M. Xiong, Codes, differentially $\delta$-uniform functions, and t-designs, IEEE Trans. Inform. Theory 66 (2020) 3691-3703.
[31] C. Tang, C. Ding, M. Xiong, Steiner systems $S(2, 4, \frac{3^m-1}{2})$ and 2-designs from ternary linear codes of length $\frac{3^m-1}{2}$, Des. Codes Cryptogr. 87 (2019) 2793-2811.
[32] C. Tang, C. Xiang, K. Feng, Linear codes with few weights from inhomogeneous quadratic functions, Des. Codes Cryptogr. 83 (3) (2017) 691-714.
[33] Q. Wang, Z. Heng, Near MDS codes from oval polynomials, Discrete Math. 344 (4) (2021) 112277.
[34] G. Xu, X. Cao, L. Qu, Infinite families of 3-designs and 2-designs from almost MDS codes, IEEE Trans. Inform. Theory 68 (7) (2022) 4344-4353.
[35] C. Xiang, C. Tang, Q. Liu, An infinite family of antiprimitive cyclic codes supporting Steiner systems $S(3, 8, 7^m+1)$, Des. Codes Cryptogr. 90 (3) (2022) 1319-1333.
[36] C. Xiang, Some t-designs from BCH codes, Cryptogr. Commun. 14 (3) (2022) 641-652.
[37] C. Xiang, X. Wang, C. Tang, F. Fu, Two classes of linear codes and their weight distributions, Appl. Algebra Engrg. Comm. Comput. 29 (3) (2018) 209-225.
[38] Q. Yan, J. Zhou, Infinite families of linear codes supporting more t-designs, IEEE Trans. Inform. Theory 68 (7) (2022) 4365-4377.